1. Preisig BC, Riecke L, Hervais-Adelman A. Speech sound categorization: The contribution of non-auditory and auditory cortical regions. Neuroimage 2022;258:119375. [PMID: 35700949; DOI: 10.1016/j.neuroimage.2022.119375]
Abstract
Which processes in the human brain lead to the categorical perception of speech sounds? Investigation of this question is hampered by the fact that categorical speech perception is normally confounded by acoustic differences in the stimulus. By using ambiguous sounds, however, it is possible to dissociate acoustic from perceptual stimulus representations. Twenty-seven normal-hearing individuals took part in an fMRI study in which they were presented with an ambiguous syllable (intermediate between /da/ and /ga/) in one ear and with a disambiguating acoustic feature (the third formant, F3) in the other ear. Multi-voxel pattern searchlight analysis was used to identify brain areas that consistently differentiated between response patterns associated with different syllable reports. By comparing responses to different stimuli with identical syllable reports and identical stimuli with different syllable reports, we disambiguated whether these regions primarily differentiated the acoustics of the stimuli or the syllable report. We found that BOLD activity patterns in left perisylvian regions (STG, SMG), left inferior frontal regions (vMC, IFG, AI), left supplementary motor cortex (SMA/pre-SMA), and right motor and somatosensory regions (M1/S1) represent listeners' syllable report irrespective of stimulus acoustics. Most of these regions lie outside of what is traditionally regarded as auditory or phonological processing areas. Our results indicate that the process of speech sound categorization implicates decision-making mechanisms and auditory-motor transformations.
Affiliation(s)
- Basil C Preisig: Donders Institute for Brain, Cognition, and Behaviour, Radboud University, 6500 HB Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands; Department of Psychology, Neurolinguistics, University of Zurich, 8050 Zurich, Switzerland; Department of Comparative Language Science, Evolutionary Neuroscience of Language, University of Zurich, 8050 Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and Eidgenössische Technische Hochschule Zurich, 8057 Zurich, Switzerland
- Lars Riecke: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 ER Maastricht, The Netherlands
- Alexis Hervais-Adelman: Department of Psychology, Neurolinguistics, University of Zurich, 8050 Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and Eidgenössische Technische Hochschule Zurich, 8057 Zurich, Switzerland
2. Oesch N. Music and Language in Social Interaction: Synchrony, Antiphony, and Functional Origins. Front Psychol 2019;10:1514. [PMID: 31312163; PMCID: PMC6614337; DOI: 10.3389/fpsyg.2019.01514]
Abstract
Music and language are universal human abilities with many apparent similarities relating to their acoustics, structure, and frequent use in social situations. We might therefore expect them to be understood and processed similarly, and indeed an emerging body of research suggests that this is the case. But the focus has historically been on the individual, looking at the passive listener or the isolated speaker or performer, even though social interaction is the primary site of use for both domains. Nonetheless, an important goal of emerging research is to compare music and language in terms of acoustics and structure, social interaction, and functional origins to develop parallel accounts across the two domains. Indeed, a central aim of both evolutionary musicology and language evolution research is to understand the adaptive significance or functional origin of human music and language. An influential proposal to emerge in recent years has been referred to as the social bonding hypothesis. Here, within a comparative approach to animal communication systems, I review empirical studies in support of the social bonding hypothesis in humans, non-human primates, songbirds, and various other mammals. In support of this hypothesis, I review six research fields: (i) the functional origins of music; (ii) the functional origins of language; (iii) mechanisms of social synchrony for human social bonding; (iv) language and social bonding in humans; (v) music and social bonding in humans; and (vi) pitch, tone, and emotional expression in human speech and music. I conclude that the comparative study of complex vocalizations and behaviors in various extant species can provide important insights into the adaptive function(s) of these traits in these species, as well as offer evidence-based speculations for the existence of "musilanguage" in our primate ancestors, and thus inform our understanding of the biology and evolution of human music and language.
Affiliation(s)
- Nathan Oesch: Music and Neuroscience Lab, Department of Psychology, The Brain and Mind Institute, Western University, London, ON, Canada; Cognitive Neuroscience of Communication and Hearing (CoNCH) Lab, Department of Psychology, The Brain and Mind Institute, Western University, London, ON, Canada
3. Hernández M, Ventura-Campos N, Costa A, Miró-Padilla A, Ávila C. Brain networks involved in accented speech processing. Brain Lang 2019;194:12-22. [PMID: 30959385; DOI: 10.1016/j.bandl.2019.03.003]
Abstract
We investigated the neural correlates of accented speech processing (ASP) with an fMRI study that overcame prior limitations in this line of research: we preserved intelligibility by using two regional accents that differ in prosody but only mildly in phonetics (Latin American and Castilian Spanish), and we used independent component analysis to identify brain networks as opposed to isolated regions. ASP engaged a speech perception network composed primarily of structures related to the processing of prosody (cerebellum, putamen, and thalamus). This network also included anterior fronto-temporal areas associated with lexical-semantic processing and a portion of the inferior frontal gyrus linked to executive control. ASP also recruited domain-general executive control networks related to cognitive demands (dorsal attentional and default mode networks) and the processing of salient events (salience network). Finally, the reward network showed a preference for the native accent, presumably revealing people's sense of social belonging.
Affiliation(s)
- Mireia Hernández: Section of Cognitive Processes, Department of Cognition, Development, and Educational Psychology, Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain
- Noelia Ventura-Campos: Neuropsychology and Functional Imaging Group, Department of Basic Psychology, Clinical Psychology, and Psychobiology, Universitat Jaume I, Castellón, Spain; Department of Education and Specific Didactics, Universitat Jaume I, Castellón, Spain
- Albert Costa: Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Anna Miró-Padilla: Neuropsychology and Functional Imaging Group, Department of Basic Psychology, Clinical Psychology, and Psychobiology, Universitat Jaume I, Castellón, Spain
- César Ávila: Neuropsychology and Functional Imaging Group, Department of Basic Psychology, Clinical Psychology, and Psychobiology, Universitat Jaume I, Castellón, Spain
4. Xie X, Myers E. Left Inferior Frontal Gyrus Sensitivity to Phonetic Competition in Receptive Language Processing: A Comparison of Clear and Conversational Speech. J Cogn Neurosci 2017;30:267-280. [PMID: 29160743; DOI: 10.1162/jocn_a_01208]
Abstract
The speech signal is rife with variations in phonetic ambiguity. For instance, when talkers speak in a conversational register, they demonstrate less articulatory precision, leading to greater potential for confusability at the phonetic level compared with a clear speech register. Current psycholinguistic models assume that ambiguous speech sounds activate more than one phonological category and that competition at prelexical levels cascades to lexical levels of processing. Imaging studies have shown that the left inferior frontal gyrus (LIFG) is modulated by phonetic competition between simultaneously activated categories, with increases in activation for more ambiguous tokens. Yet, these studies have often used artificially manipulated speech and/or metalinguistic tasks, which arguably may recruit neural regions that are not critical for natural speech recognition. Indeed, a prominent model of speech processing, the dual-stream model, posits that the LIFG is not involved in prelexical processing in receptive language processing. In the current study, we exploited natural variation in phonetic competition in the speech signal to investigate the neural systems sensitive to phonetic competition as listeners engage in a receptive language task. Participants heard nonsense sentences spoken in either a clear or conversational register as neural activity was monitored using fMRI. Conversational sentences contained greater phonetic competition, as estimated by measures of vowel confusability, and these sentences also elicited greater activation in a region in the LIFG. Sentence-level phonetic competition metrics uniquely correlated with LIFG activity as well. This finding is consistent with the hypothesis that the LIFG responds to competition at multiple levels of language processing and that recruitment of this region does not require an explicit phonological judgment.
5. Carey D, Miquel ME, Evans BG, Adank P, McGettigan C. Functional brain outcomes of L2 speech learning emerge during sensorimotor transformation. Neuroimage 2017;159:18-31. [PMID: 28669904; DOI: 10.1016/j.neuroimage.2017.06.053]
Abstract
Sensorimotor transformation (ST) may be a critical process in mapping perceived speech input onto non-native (L2) phonemes, in support of subsequent speech production. Yet, little is known concerning the role of ST with respect to L2 speech, particularly where learned L2 phones (e.g., vowels) must be produced in more complex lexical contexts (e.g., multi-syllabic words). Here, we charted the behavioral and neural outcomes of producing trained L2 vowels at word level, using a speech imitation paradigm and functional MRI. We asked whether participants would be able to faithfully imitate trained L2 vowels when they occurred in non-words of varying complexity (one or three syllables). Moreover, we related individual differences in imitation success during training to BOLD activation during ST (i.e., pre-imitation listening), and during later imitation. We predicted that superior temporal and peri-Sylvian speech regions would show increased activation as a function of item complexity and non-nativeness of vowels, during ST. We further anticipated that pre-scan acoustic learning performance would predict BOLD activation for non-native (vs. native) speech during ST and imitation. We found individual differences in imitation success for training on the non-native vowel tokens in isolation; these were preserved in a subsequent task, during imitation of mono- and trisyllabic words containing those vowels. fMRI data revealed a widespread network involved in ST, modulated by both vowel nativeness and utterance complexity: superior temporal activation increased monotonically with complexity, showing greater activation for non-native than native vowels when presented in isolation and in trisyllables, but not in monosyllables. Individual differences analyses showed that learning versus lack of improvement on the non-native vowel during pre-scan training predicted increased ST activation for non-native compared with native items, at insular cortex, pre-SMA/SMA, and cerebellum. 
Our results underscore the importance of ST as a process underlying successful imitation of non-native speech.
Affiliation(s)
- Daniel Carey: Department of Psychology, Royal Holloway, University of London, TW20 0EX, UK; Combined Universities Brain Imaging Centre, Royal Holloway, University of London, TW20 0EX, UK; The Irish Longitudinal Study on Ageing (TILDA), Dept. Medical Gerontology, TCD, Dublin, Ireland
- Marc E Miquel: William Harvey Research Institute, Queen Mary, University of London, EC1M 6BQ, UK; Clinical Physics, Barts Health NHS Trust, London, EC1A 7BE, UK
- Bronwen G Evans: Department of Speech, Hearing & Phonetic Sciences, University College London, WC1E 6BT, UK
- Patti Adank: Department of Speech, Hearing & Phonetic Sciences, University College London, WC1E 6BT, UK
- Carolyn McGettigan: Department of Psychology, Royal Holloway, University of London, TW20 0EX, UK; Combined Universities Brain Imaging Centre, Royal Holloway, University of London, TW20 0EX, UK; Institute of Cognitive Neuroscience, University College London, WC1N 3AR, UK
6. Borrie SA, Schäfer MCM. Effects of Lexical and Somatosensory Feedback on Long-Term Improvements in Intelligibility of Dysarthric Speech. J Speech Lang Hear Res 2017;60:2151-2158. [PMID: 28687828; DOI: 10.1044/2017_jslhr-s-16-0411]
Abstract
PURPOSE Intelligibility improvements immediately following perceptual training with dysarthric speech using lexical feedback are comparable to those observed when training uses somatosensory feedback (Borrie & Schäfer, 2015). In this study, we investigated whether these lexical- and somatosensory-guided improvements in listener intelligibility of dysarthric speech remain comparable and stable over the course of 1 month. METHOD Following an intelligibility pretest, 60 participants were trained with dysarthric speech stimuli under one of three conditions: lexical feedback, somatosensory feedback, or no training (control). Participants then completed a series of intelligibility posttests immediately (immediate posttest), 1 week (1-week posttest), and 1 month (1-month posttest) following training. RESULTS As in our previous study, intelligibility improvements at immediate posttest were equivalent between the lexical and somatosensory feedback conditions. Condition differences, however, emerged over time: improvements guided by lexical feedback deteriorated over the month, whereas those guided by somatosensory feedback remained robust. CONCLUSIONS Somatosensory feedback, internally generated by vocal imitation, may be required to effect long-term perceptual gains in processing dysarthric speech. Findings are discussed in relation to underlying learning mechanisms and offer insight into how externally and internally generated feedback may differentially affect perceptual learning of disordered speech.
Affiliation(s)
- Stephanie A Borrie: Department of Communicative Disorders and Deaf Education, Utah State University, Logan
- Martina C M Schäfer: New Zealand Institute of Language, Brain and Behaviour, University of Canterbury, Christchurch
7. Schuerman WL, Nagarajan S, McQueen JM, Houde J. Sensorimotor adaptation affects perceptual compensation for coarticulation. J Acoust Soc Am 2017;141:2693. [PMID: 28464681; PMCID: PMC5848838; DOI: 10.1121/1.4979791]
Abstract
A given speech sound will be realized differently depending on the context in which it is produced. Listeners have been found to compensate perceptually for these coarticulatory effects, yet it is unclear to what extent this compensation depends on actual production experience. This study investigated whether changes in motor-to-sound mappings induced by adaptation to altered auditory feedback can affect perceptual compensation for coarticulation. Specifically, it tested whether altering how the vowel [i] is produced can affect the categorization of a stimulus continuum between an alveolar and a palatal fricative whose interpretation depends on vocalic context. Participants could be sorted into three groups based on whether they tended to oppose the direction of the shifted auditory feedback, to follow it, or a mixture of the two; these articulatory responses, not the shifted feedback the participants heard, correlated with changes in perception. These results indicate that sensorimotor adaptation to altered feedback can affect the perception of unaltered yet coarticulatorily dependent speech sounds, suggesting a modulatory role of sensorimotor experience in speech perception.
Affiliation(s)
- Srikantan Nagarajan: Department of Radiology, University of California-San Francisco School of Medicine, San Francisco, California 94143, USA
- John Houde: Department of Otolaryngology Head and Neck Surgery, University of California-San Francisco School of Medicine, San Francisco, California 94143, USA
8. Schomers MR, Pulvermüller F. Is the Sensorimotor Cortex Relevant for Speech Perception and Understanding? An Integrative Review. Front Hum Neurosci 2016;10:435. [PMID: 27708566; PMCID: PMC5030253; DOI: 10.3389/fnhum.2016.00435]
Abstract
In the neuroscience of language, phonemes are frequently described as multimodal units whose neuronal representations are distributed across perisylvian cortical regions, including auditory and sensorimotor areas. A different position views phonemes primarily as acoustic entities with posterior temporal localization, which are functionally independent from frontoparietal articulatory programs. To address this current controversy, we here discuss experimental results from functional magnetic resonance imaging (fMRI) as well as transcranial magnetic stimulation (TMS) studies. At first glance, a mixed picture emerges, with earlier research documenting neurofunctional distinctions between phonemes in both temporal and frontoparietal sensorimotor systems, but some recent work seemingly failing to replicate the latter. Detailed analysis of methodological differences between studies reveals that the way experiments are set up explains whether sensorimotor cortex maps phonological information during speech perception or not. In particular, acoustic noise during the experiment and 'motor noise' caused by button press tasks work against the frontoparietal manifestation of phonemes. We highlight recent studies using sparse imaging and passive speech perception tasks along with multivariate pattern analysis (MVPA) and especially representational similarity analysis (RSA), which succeeded in separating acoustic-phonological from general-acoustic processes and in mapping specific phonological information on temporal and frontoparietal regions. The question of a causal role of sensorimotor cortex in speech perception and understanding is addressed by reviewing recent TMS studies. We conclude that frontoparietal cortices, including ventral motor and somatosensory areas, reflect phonological information during speech perception and exert a causal influence on language understanding.
Affiliation(s)
- Malte R Schomers: Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Friedemann Pulvermüller: Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
9. The Left, The Better: White-Matter Brain Integrity Predicts Foreign Language Imitation Ability. Cereb Cortex 2016;27:3906-3917. [DOI: 10.1093/cercor/bhw199]
10. Lima CF, Krishnan S, Scott SK. Roles of Supplementary Motor Areas in Auditory Processing and Auditory Imagery. Trends Neurosci 2016;39:527-542. [PMID: 27381836; PMCID: PMC5441995; DOI: 10.1016/j.tins.2016.06.003]
Abstract
Although the supplementary and pre-supplementary motor areas have been intensely investigated in relation to their motor functions, they are also consistently reported in studies of auditory processing and auditory imagery. This involvement is commonly overlooked, in contrast to lateral premotor and inferior prefrontal areas. We argue here for the engagement of supplementary motor areas across a variety of sound categories, including speech, vocalizations, and music, and we discuss how our understanding of auditory processes in these regions relates to findings and hypotheses from the motor literature. We suggest that supplementary and pre-supplementary motor areas play a role in facilitating spontaneous motor responses to sound, and in supporting a flexible engagement of sensorimotor processes to enable imagery and to guide auditory perception.
Highlights:
- Hearing and imagining sounds, including speech, vocalizations, and music, can recruit SMA and pre-SMA, which are normally discussed in relation to their motor functions.
- Emerging research indicates that individual differences in the structure and function of SMA and pre-SMA can predict performance in auditory perception and auditory imagery tasks.
- Responses during auditory processing primarily peak in pre-SMA and in the boundary area between pre-SMA and SMA. This boundary area is crucially involved in the control of speech and vocal production, suggesting that sounds engage this region in an effector-specific manner.
- Activating sound-related motor representations in SMA and pre-SMA might facilitate behavioral responses to sounds. This might also support a flexible generation of sensory predictions based on previous experience to enable imagery and guide perception.
Affiliation(s)
- César F Lima: Institute of Cognitive Neuroscience, University College London, London, UK
- Saloni Krishnan: Department of Experimental Psychology, University of Oxford, Oxford, UK
- Sophie K Scott: Institute of Cognitive Neuroscience, University College London, London, UK
11. Borrie SA, Schäfer MCM. The Role of Somatosensory Information in Speech Perception: Imitation Improves Recognition of Disordered Speech. J Speech Lang Hear Res 2015;58:1708-1716. [PMID: 26536172; DOI: 10.1044/2015_jslhr-s-15-0163]
Abstract
PURPOSE Perceptual learning paradigms involving written feedback appear to be a viable clinical tool to reduce the intelligibility burden of dysarthria. The underlying theoretical assumption is that pairing the degraded acoustics with the intended lexical targets facilitates a remapping of existing mental representations in the lexicon. This study investigated whether ties to mental representations can be strengthened by way of a somatosensory motor trace. METHOD Following an intelligibility pretest, 100 participants were assigned to 1 of 5 experimental groups. The control group received no training, but the other 4 groups received training with dysarthric speech under conditions involving a unique combination of auditory targets, written feedback, and/or a vocal imitation task. All participants then completed an intelligibility posttest. RESULTS Training improved intelligibility of dysarthric speech, with the largest improvements observed when the auditory targets were accompanied by both written feedback and an imitation task. Further, a significant relationship between intelligibility improvement and imitation accuracy was identified. CONCLUSIONS This study suggests that somatosensory information can strengthen the activation of speech sound maps of dysarthric speech. The findings, therefore, implicate a bidirectional relationship between speech perception and speech production as well as advance our understanding of the mechanisms that underlie perceptual learning of degraded speech.
12. Adank P, Nuttall HE, Banks B, Kennedy-Higgins D. Neural bases of accented speech perception. Front Hum Neurosci 2015;9:558. [PMID: 26500526; PMCID: PMC4594029; DOI: 10.3389/fnhum.2015.00558]
Abstract
The recognition of unfamiliar regional and foreign accents represents a challenging task for the speech perception system (Floccia et al., 2006; Adank et al., 2009). Despite the frequency with which we encounter such accents, the neural mechanisms supporting successful perception of accented speech are poorly understood. Nonetheless, candidate neural substrates involved in processing speech in challenging listening conditions, including accented speech, are beginning to be identified. This review will outline neural bases associated with perception of accented speech in the light of current models of speech perception, and compare these data to brain areas associated with processing other speech distortions. We will subsequently evaluate competing models of speech processing with regard to neural processing of accented speech. See Cristia et al. (2012) for an in-depth overview of behavioral aspects of accent processing.
Affiliation(s)
- Patti Adank: Division of Psychology and Language Sciences, Department of Speech, Hearing, and Phonetic Sciences, University College London, London, UK; School of Psychological Sciences, University of Manchester, Manchester, UK
- Helen E Nuttall: Division of Psychology and Language Sciences, Department of Speech, Hearing, and Phonetic Sciences, University College London, London, UK
- Briony Banks: School of Psychological Sciences, University of Manchester, Manchester, UK
- Daniel Kennedy-Higgins: Division of Psychology and Language Sciences, Department of Speech, Hearing, and Phonetic Sciences, University College London, London, UK
13. Callan D, Callan A, Jones JA. Speech motor brain regions are differentially recruited during perception of native and foreign-accented phonemes for first and second language listeners. Front Neurosci 2014;8:275. [PMID: 25232302; PMCID: PMC4153045; DOI: 10.3389/fnins.2014.00275]
Abstract
Brain imaging studies indicate that speech motor areas are recruited for auditory speech perception, especially when intelligibility is low due to environmental noise or when speech is accented. The purpose of the present study was to determine the relative contribution of brain regions to the processing of speech containing phonetic categories from one's own language, speech with accented samples of one's native phonetic categories, and speech with unfamiliar phonetic categories. To that end, native English and Japanese speakers identified the speech sounds /r/ and /l/ that were produced by native English speakers (unaccented) and Japanese speakers (foreign-accented) while functional magnetic resonance imaging measured their brain activity. For native English speakers, the Japanese accented speech was more difficult to categorize than the unaccented English speech. In contrast, Japanese speakers have difficulty distinguishing between /r/ and /l/, so both the Japanese accented and English unaccented speech were difficult to categorize. Brain regions involved with listening to foreign-accented productions of a first language included primarily the right cerebellum, the left ventral inferior premotor cortex (PMvi), and Broca's area. Brain regions most involved with listening to a second-language phonetic contrast (foreign-accented and unaccented productions) also included the left PMvi and the right cerebellum. Additionally, increased activity was observed in the right PMvi, the left and right ventral superior premotor cortex (PMvs), and the left cerebellum. These results support a role for speech motor regions during the perception of foreign-accented native speech and for perception of difficult second-language phonetic contrasts.
Affiliation(s)
- Daniel Callan: Center for Information and Neural Networks, National Institute of Information and Communications Technology, Osaka University, Osaka, Japan; Multisensory Cognition and Computation Laboratory, Universal Communication Research Institute, National Institute of Information and Communications Technology, Kyoto, Japan
- Akiko Callan: Center for Information and Neural Networks, National Institute of Information and Communications Technology, Osaka University, Osaka, Japan; Multisensory Cognition and Computation Laboratory, Universal Communication Research Institute, National Institute of Information and Communications Technology, Kyoto, Japan
- Jeffery A Jones: Laurier Centre for Cognitive Neuroscience and Department of Psychology, Wilfrid Laurier University, Waterloo, ON, Canada
14. Clark JP, Adams SG, Dykstra AD, Moodie S, Jog M. Loudness perception and speech intensity control in Parkinson's disease. J Commun Disord 2014;51:1-12. [PMID: 25194745; DOI: 10.1016/j.jcomdis.2014.08.001]
Abstract
The aim of this study was to examine loudness perception in individuals with hypophonia and Parkinson's disease. The participants included 17 individuals with hypophonia related to Parkinson's disease (PD) and 25 age-equivalent controls. The three loudness perception tasks included a magnitude estimation procedure involving a sentence spoken at 60, 65, 70, 75 and 80 dB SPL, an imitation task involving a sentence spoken at 60, 65, 70, 75 and 80 dB SPL, and a magnitude production procedure involving the production of a sentence at five different loudness levels (habitual, two and four times louder, and two and four times quieter). The participants with PD produced a significantly different pattern and used a more restricted range than the controls in their perception of speech loudness, imitation of speech intensity, and self-generated estimates of speech loudness. The results support a speech loudness perception deficit in PD involving an abnormal perception of externally generated and self-generated speech intensity. LEARNING OUTCOMES Readers will recognize that individuals with hypophonia related to Parkinson's disease may demonstrate a speech loudness perception deficit involving the abnormal perception of externally generated and self-generated speech intensity.
Affiliation(s)
- Jenna P Clark: School of Communication Sciences and Disorders, Western University, London, Ontario, Canada N6G 1H1; Health and Rehabilitation Sciences Program, Western University, London, Ontario, Canada N6G 1H1
- Scott G Adams: School of Communication Sciences and Disorders, Western University, London, Ontario, Canada N6G 1H1; Health and Rehabilitation Sciences Program, Western University, London, Ontario, Canada N6G 1H1; Department of Clinical Neuroscience, Western University, London, Ontario, Canada N6G 1H1
- Allyson D Dykstra: School of Communication Sciences and Disorders, Western University, London, Ontario, Canada N6G 1H1; Health and Rehabilitation Sciences Program, Western University, London, Ontario, Canada N6G 1H1
- Shane Moodie: School of Communication Sciences and Disorders, Western University, London, Ontario, Canada N6G 1H1
- Mandar Jog: Department of Clinical Neuroscience, Western University, London, Ontario, Canada N6G 1H1