1
Orekhova EV, Fadeev KA, Goiaeva DE, Obukhova TS, Ovsiannikova TM, Prokofyev AO, Stroganova TA. Different hemispheric lateralization for periodicity and formant structure of vowels in the auditory cortex and its changes between childhood and adulthood. Cortex 2024;171:287-307. [PMID: 38061210] [DOI: 10.1016/j.cortex.2023.10.020]
Abstract
The spectral formant structure and periodicity pitch are the major features that determine the identity of vowels and the characteristics of the speaker. However, very little is known about how the processing of these features in the auditory cortex changes during development. To address this question, we independently manipulated the periodicity and formant structure of vowels while measuring auditory cortex responses using magnetoencephalography (MEG) in children aged 7-12 years and adults. We analyzed the sustained negative shift of source current associated with these vowel properties, which was present in the auditory cortex in both age groups despite differences in the transient components of the auditory response. In adults, the sustained activation associated with formant structure was lateralized to the left hemisphere early in the auditory processing stream requiring neither attention nor semantic mapping. This lateralization was not yet established in children, in whom the right hemisphere contribution to formant processing was strong and decreased during or after puberty. In contrast to the formant structure, periodicity was associated with a greater response in the right hemisphere in both children and adults. These findings suggest that left-lateralization for the automatic processing of vowel formant structure emerges relatively late in ontogenesis and pose a serious challenge to current theories of hemispheric specialization for speech processing.
Affiliation(s)
- Elena V Orekhova: Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education, Moscow, Russian Federation.
- Kirill A Fadeev: Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education, Moscow, Russian Federation.
- Dzerassa E Goiaeva: Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education, Moscow, Russian Federation.
- Tatiana S Obukhova: Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education, Moscow, Russian Federation.
- Tatiana M Ovsiannikova: Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education, Moscow, Russian Federation.
- Andrey O Prokofyev: Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education, Moscow, Russian Federation.
- Tatiana A Stroganova: Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education, Moscow, Russian Federation.
2
Liu W, Wang Y, Liang C. Formant and Voice Source Characteristics of Vowels in Chinese National Singing and Bel Canto: A Pilot Study. J Voice 2023:S0892-1997(23)00323-5. [PMID: 37940420] [DOI: 10.1016/j.jvoice.2023.10.016]
Abstract
BACKGROUND There have been numerous reports on the acoustic characteristics of singers' vowel articulation and phonation, covering phonetic dimensions such as fundamental frequency (F0), intensity, formant frequency, and voice quality. METHOD Taking three representative vowels (/a/, /i/, /u/) in Chinese National Singing and Bel Canto as the research object, the present study investigates the differences and associations in vowel articulation and phonation between the two genres using acoustic measures such as F0, formant frequencies, and the long-term average spectrum (LTAS). RESULTS The relationship between F0 and the formants indicates that F1 rises with F0, with females showing significant variation in the vowel /a/. The formant structure of the female singing voice differs from the speaking voice more markedly than the male's does. Regarding the relationship between intensity and formants, LTAS shows that the Chinese National Singing tenor and the Bel Canto baritone produce the singer's formant cluster when singing vowels, while the two sopranos do not. CONCLUSIONS Systematic changes of formant frequencies with the voice source were observed. (i) F1 of the female vowel /a/ undergoes a significant tuning change in the register transition, reflecting the characteristics of the singing genres. (ii) Female singers utilize the intrinsic pitch of vowels when adopting the register-transition strategy. This finding can be assumed to facilitate understanding of the theory of intrinsic vowel pitch and to revise Sundberg's hypothesis that F1 rises with F0: a non-linear relationship exists between F1 and F0, which adds to the non-linear interaction of the formants and the voice source. (iii) The singer's formant is affected by voice classification, gender, and singing genre.
Affiliation(s)
- Wen Liu: Center for Language Sciences, School of Literature, Shandong University, Jinan, China.
- Yue Wang: School of Literature, Shandong University, Jinan, China.
- Changwei Liang: Laboratory of Language Sciences, Peking University, Beijing, China.
3
Gunjawate DR, Ravi R, Tauro JP, Philip R. Spectral and Temporal Characteristics of Vowels in Konkani. Indian J Otolaryngol Head Neck Surg 2022;74:4870-4879. [PMID: 36742666] [PMCID: PMC9895148] [DOI: 10.1007/s12070-020-02315-9]
Abstract
The present study investigated the acoustic characteristics of vowels, using spectrographic analysis, in the Mangalorean Catholic dialect of Konkani spoken in Mangalore, Karnataka, India. Recordings of CVC words were made from 11 males and 19 females aged 18-55 years. The CVC words consisted of combinations of the vowels (/i, i:, e, ɵ, ə, u, o, ɐ, ӓ, ɔ/) and the consonants (/m, k, w, s, ʅ, h, l, r, p, ʤ, g, n, Ɵ, ṭ, ḷ, b, dh/). Recordings were made in a sound-treated room and analyzed spectrographically using PRAAT software, measuring spectral and temporal characteristics such as fundamental frequency (F0), formants (F1, F2, F3), and vowel duration. The results showed higher fundamental frequency values for short, high, and back vowels. Higher F1 values were noted for open vowels, and F2 was higher for front vowels. Long vowels were longer in duration than short vowels, and females had longer vowel durations than males. Such spectral and temporal acoustic cues support a better understanding of the production and perception of languages and dialects.
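The abstract above reports the classic formant pattern: higher F1 for open vowels, higher F2 for front vowels. A minimal sketch of how such F1/F2 measurements separate vowel categories is a nearest-neighbour lookup in formant space; the reference values below are rough, hypothetical averages for illustration, not the Konkani measurements from the study.

```python
# Toy illustration of vowel separation in F1/F2 space:
# open vowels have higher F1, front vowels higher F2.
# Reference values are hypothetical, not study data.

REFERENCE_FORMANTS = {   # vowel: (F1 in Hz, F2 in Hz)
    "i": (280, 2250),    # close front
    "u": (310, 870),     # close back
    "a": (730, 1300),    # open central
}

def classify_vowel(f1, f2):
    """Return the reference vowel nearest to (f1, f2) in formant space."""
    def distance(vowel):
        rf1, rf2 = REFERENCE_FORMANTS[vowel]
        return ((f1 - rf1) ** 2 + (f2 - rf2) ** 2) ** 0.5
    return min(REFERENCE_FORMANTS, key=distance)

print(classify_vowel(300, 2100))  # -> i  (low F1, high F2: close front)
print(classify_vowel(700, 1250))  # -> a  (high F1: open)
```

Real studies fit such reference values per speaker group, since formant ranges shift with vocal-tract length, which is one reason the abstract reports sex differences.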
Affiliation(s)
- Dhanshree R. Gunjawate: Department of Audiology and Speech Language Pathology, Kasturba Medical College, Mangalore, Manipal Academy of Higher Education, Manipal, India.
- Rohit Ravi: Department of Audiology and Speech Language Pathology, Kasturba Medical College, Mangalore, Manipal Academy of Higher Education, Manipal, India.
- Jovita Priya Tauro: Department of Audiology and Speech Language Pathology, Kasturba Medical College, Mangalore, Manipal Academy of Higher Education, Manipal, India.
- Rhea Philip: Department of Audiology and Speech Language Pathology, Kasturba Medical College, Mangalore, Manipal Academy of Higher Education, Manipal, India.
4
Rai A, Dawadee P, Chaithra C. Acoustic Characteristics of Short Vowels by Normal Nepali Young Adults. Indian J Otolaryngol Head Neck Surg 2022;74:5012-5015. [PMID: 36742742] [PMCID: PMC9895227] [DOI: 10.1007/s12070-021-02604-x]
Abstract
Introduction Speech production is a uniquely human task. The speech signal consists of strings of vowels and consonants, and vowels are differentiated on the basis of their acoustic characteristics. Methodology A total of 50 Nepali students, 25 males and 25 females, aged 18 to 25 years, with no history of voice disorders, flu, neurological disorders, speech or language impairment, or respiratory dysfunction were included in the study. Sustained phonation of the five short vowels /a/, /i/, /o/, /u/, and /e/ was used to measure the acoustic variables. PRAAT software was used to extract the acoustic parameters of voice: mean pitch, jitter, RAP, PPQ5, shimmer, and APQ11. Result Means and SDs were calculated using SPSS. A Mann-Whitney test revealed a highly significant difference between males and females in all parameters taken for the study: females had greater F0, jitter, RAP, PPQ5, shimmer, and APQ11 than males. Conclusion Before implementing these norms in a clinical setup, it must be considered that the values were developed for adults whose L1 is Nepali and that the software used to establish the norms was Praat.
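Two of the perturbation measures named in this abstract have simple standard definitions: local jitter is the mean absolute difference between consecutive glottal periods divided by the mean period, and local shimmer is the analogous ratio over consecutive peak amplitudes. A minimal sketch, with invented input sequences rather than study data:

```python
# Local jitter: cycle-to-cycle period variation, normalized by mean period.
# Local shimmer: cycle-to-cycle amplitude variation, normalized by mean amplitude.
# Inputs are hypothetical period (s) and amplitude sequences, not study data.

def local_jitter(periods):
    """Mean absolute difference of consecutive periods / mean period."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def local_shimmer(amplitudes):
    """Mean absolute difference of consecutive amplitudes / mean amplitude."""
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))

# A perfectly periodic signal has zero jitter:
print(local_jitter([0.005, 0.005, 0.005]))  # 0.0
# Alternating 4.9 ms / 5.1 ms periods: 0.2 ms mean diff over 5 ms mean = 4%:
print(round(local_jitter([0.0049, 0.0051, 0.0049, 0.0051]) * 100, 1))  # 4.0
```

RAP, PPQ5, and APQ11 are smoothed variants of the same idea, averaging each period or amplitude against a window of its neighbours before taking the difference.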
Affiliation(s)
- Acksha Rai: Speech-Language Pathologist, Mangalore University, Mangalore, India.
- Prabha Dawadee: Speech-Language Pathologist, Institute of Medicine (IOM), Maharajgunj Medical Campus (MMC), Kathmandu, Nepal.
- C. Chaithra: Speech and Swallow Pathologist, Hassan Institute of Medical Science, Hassan, India.
5
Weyers I, Männel C, Mueller JL. Constraints on infants' ability to extract non-adjacent dependencies from vowels and consonants. Dev Cogn Neurosci 2022;57:101149. [PMID: 36084447] [PMCID: PMC9465114] [DOI: 10.1016/j.dcn.2022.101149]
Abstract
Language acquisition requires infants to track dependencies between distant speech elements. Infants as young as 3 months have been shown to identify such non-adjacent dependencies between syllables, and this ability has been related to the maturity of infants' pitch processing. The present study tested whether 8- to 10-month-old infants (N = 68) can also learn dependencies at smaller, segmental levels and whether the relation between dependency and pitch processing extends to other auditory features. Infants heard syllable sequences encoding an item-specific dependency either between non-adjacent vowels or between non-adjacent consonants. These frequent standard sequences were interspersed with infrequent intensity deviants and dependency deviants, which violated the non-adjacent relationship. Both the vowel and the consonant groups showed electrophysiological evidence of detecting the intensity manipulation. However, evidence of dependency learning was found only for infants hearing the dependencies across vowels, not consonants, and only in a subgroup of infants with an above-average language score on a behavioral test. A correlation analysis found no relation between intensity and dependency processing. We conclude that item-specific, segment-based non-adjacent dependencies are not easily learned by infants and that, when they are, vowels are more accessible to the task, but only for infants who display advanced language skills.
Affiliation(s)
- Ivonne Weyers: Department of Linguistics, University of Vienna, Sensengasse 3a, 1090 Vienna, Austria; Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück, Germany.
- Claudia Männel: Department of Audiology and Phoniatrics, Charité - Universitätsmedizin Berlin, Augustenburger Platz 1, 13353 Berlin, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1A, 04103 Leipzig, Germany.
- Jutta L Mueller: Department of Linguistics, University of Vienna, Sensengasse 3a, 1090 Vienna, Austria; Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück, Germany.
6
Roepke E, Brosseau-Lapré F. Vowel errors produced by preschool-age children on a single-word test of articulation. Clin Linguist Phon 2021;35:1161-1183. [PMID: 33459085] [PMCID: PMC8285462] [DOI: 10.1080/02699206.2020.1869834]
Abstract
Eighty-four children, aged 4-5 years, with and without speech sound disorder (SSD) completed a battery of standardized speech and language tests, including the Goldman-Fristoe Test of Articulation, Third Edition (GFTA-3). Children with SSD produced more vowel errors than children with typical speech abilities. Percentage of vowels correct and consonant error variability were highly correlated, suggesting that poorly specified phonological representations affect both consonants and vowels within a child's phonological system. However, the GFTA-3 did not contain enough target words to determine a full vowel inventory. Using words from the GFTA-3, we present a case study of a child with vowel errors along with a sample analysis of these errors, primarily in terms of consonant-vowel feature interactions. Children who exhibit vowel errors on standardized single-word tests of speech accuracy may benefit from further vowel probes to determine how vowel and consonant errors interact in their phonological systems, enabling more targeted therapy.
Affiliation(s)
- Elizabeth Roepke: Department of Speech, Language and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA.
- Françoise Brosseau-Lapré: Department of Speech, Language and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA.
7
Zhang K, Peng G. The time course of normalizing speech variability in vowels. Brain Lang 2021;222:105028. [PMID: 34597904] [DOI: 10.1016/j.bandl.2021.105028]
Abstract
To achieve perceptual constancy, listeners utilize contextual cues to normalize speech variability across speakers. The present study tested the time course of this cognitive process in an event-related potential (ERP) experiment. The first neurophysiological evidence of speech normalization was observed in the P2 (130-250 ms), which is functionally related to phonetic and phonological processes. Furthermore, the normalization process was found to ease lexical retrieval, as indexed by a smaller N400 (350-470 ms) following a larger P2. A cross-language vowel perception task was carried out to further specify whether normalization occurs in the phonetic and/or phonological stage(s). Both phonetic and phonological cues in the speech context were found to contribute to vowel normalization. The results suggest that vowel normalization in the speech context can be observed in the P2 time window and largely overlaps with phonetic and phonological processing.
Affiliation(s)
- Kaile Zhang: Research Centre for Language, Cognition, and Neuroscience, Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong Special Administrative Region.
- Gang Peng: Research Centre for Language, Cognition, and Neuroscience, Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong Special Administrative Region; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Boulevard, Shenzhen 518055, China.
8
Zhang K, Sjerps MJ, Peng G. Integral perception, but separate processing: The perceptual normalization of lexical tones and vowels. Neuropsychologia 2021;156:107839. [PMID: 33798490] [DOI: 10.1016/j.neuropsychologia.2021.107839]
Abstract
In tonal languages, speech variability arises both in lexical tone (i.e., suprasegmentally) and in vowel quality (i.e., segmentally). Listeners can use the surrounding speech context to overcome variability in both speech cues, a process known as extrinsic normalization. Although vowels are the main carriers of tones, it is still unknown whether the combined percept (lexical tone and vowel quality) is normalized integrally or in partly separate processes. Here we used electroencephalography (EEG) to investigate the time course of lexical tone normalization and vowel normalization to answer this question. Cantonese adults listened to synthesized three-syllable stimuli in which the identity of a target syllable - ambiguous between a high vs. mid tone (Tone condition) or between /o/ vs. /u/ (Vowel condition) - depended on either the tone range (Tone condition) or the formant range (Vowel condition) of the first two syllables. The ambiguous tone was more often interpreted as a high-level tone when the context had a relatively low pitch than when it had a high pitch (Tone condition). Similarly, the ambiguous vowel was more often interpreted as /o/ when the context had a relatively low formant range than when it had a relatively high formant range (Vowel condition). These findings show the typical pattern of extrinsic tone and vowel normalization. Importantly, the EEG results of participants showing the contrastive normalization effect demonstrated that the effects of vowel normalization could already be observed within the N2 time window (190-350 ms), while the first reliable effect of lexical tone normalization on cortical processing was observable only from the P3 time window (220-500 ms) onwards. The ERP patterns demonstrate that the contrastive perceptual normalization of lexical tones and that of vowels occur in at least partially separate time windows, suggesting that extrinsic normalization can operate at the level of phonemes and tonemes separately instead of operating on the whole syllable at once.
Affiliation(s)
- Kaile Zhang: Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong Special Administrative Region.
- Matthias J Sjerps: Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Kapittelweg 29, Nijmegen, 6525 EN, the Netherlands.
- Gang Peng: Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong Special Administrative Region; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Boulevard, Shenzhen, 518055, China.
9
Giroud N, Baum SR, Gilbert AC, Phillips NA, Gracco V. Earlier age of second language learning induces more robust speech encoding in the auditory brainstem in adults, independent of amount of language exposure during early childhood. Brain Lang 2020;207:104815. [PMID: 32535187] [DOI: 10.1016/j.bandl.2020.104815]
Abstract
Learning a second language (L2) at a young age is a driving factor of functional neuroplasticity in the auditory brainstem. To date, it remains unclear whether these effects remain stable into adulthood and to what degree the amount of exposure to the L2 in early childhood might affect their outcome. We compared three groups of adult English-French bilinguals in their ability to categorize English vowels in relation to their frequency-following responses (FFR) evoked by the same vowels. At the time of testing, cognitive abilities as well as fluency in both languages were matched between (1) simultaneous bilinguals (SIM, N = 18); (2) sequential bilinguals with L1 English (N = 14); and (3) sequential bilinguals with L1 French (N = 11). Our results show that the L1-English group had sharper category boundaries in identification of the vowels than the L1-French group. The same pattern was reflected in the FFRs (i.e., larger FFR responses for L1-English > SIM > L1-French), although again only the difference between the L1-English and L1-French groups was statistically significant; nonetheless, there was a trend toward larger FFRs in SIM than in L1-French. Our data extend previous literature showing that exposure to a language during the first years of life induces functional neuroplasticity in the auditory brainstem that remains stable until at least young adulthood. Furthermore, the findings suggest that the amount of exposure to that language (i.e., 100% vs. 50%) does not differentially shape the robustness of perceptual abilities or auditory brainstem encoding of the language's phonetic categories. Statement of significance: Previous studies have indicated that an early age of L2 acquisition induces functional neuroplasticity in the auditory brainstem during processing of the L2. This study compared three groups of adult bilinguals who differed in their age of L2 acquisition as well as the amount of exposure to the L2 during early childhood. We demonstrate for the first time that the neuroplastic effect in the brainstem remains stable until young adulthood and that the amount of L2 exposure does not influence behavioral or brainstem plasticity. Our study provides novel insights into low-level auditory plasticity as a function of varying bilingual experience.
Affiliation(s)
- Nathalie Giroud: Department of Psychology, Centre for Research in Human Development (CRDH), Concordia University, Montréal, Canada; Centre for Research on Brain, Language, and Music (CRBLM), McGill University, Montréal, Canada.
- Shari R Baum: Centre for Research on Brain, Language, and Music (CRBLM), McGill University, Montréal, Canada; School of Communication Sciences and Disorders, McGill University, Montréal, Canada.
- Annie C Gilbert: Centre for Research on Brain, Language, and Music (CRBLM), McGill University, Montréal, Canada; School of Communication Sciences and Disorders, McGill University, Montréal, Canada.
- Natalie A Phillips: Department of Psychology, Centre for Research in Human Development (CRDH), Concordia University, Montréal, Canada; Centre for Research on Brain, Language, and Music (CRBLM), McGill University, Montréal, Canada; Lady Davis Institute for Medical Research, Jewish General Hospital, Montréal, Canada.
- Vincent Gracco: Centre for Research on Brain, Language, and Music (CRBLM), McGill University, Montréal, Canada; School of Communication Sciences and Disorders, McGill University, Montréal, Canada; Haskins Laboratories, Yale University, New Haven, United States.
10
Fisher JM, Dick FK, Levy DF, Wilson SM. Neural representation of vowel formants in tonotopic auditory cortex. Neuroimage 2018;178:574-582. [PMID: 29860083] [DOI: 10.1016/j.neuroimage.2018.05.072]
Abstract
Speech sounds are encoded by distributed patterns of activity in bilateral superior temporal cortex. However, it is unclear whether speech sounds are topographically represented in cortex, or which acoustic or phonetic dimensions might be spatially mapped. Here, using functional MRI, we investigated the potential spatial representation of vowels, which are largely distinguished from one another by the frequencies of their first and second formants, i.e. peaks in their frequency spectra. This allowed us to generate clear hypotheses about the representation of specific vowels in tonotopic regions of auditory cortex. We scanned participants as they listened to multiple natural tokens of the vowels [ɑ] and [i], which we selected because their first and second formants overlap minimally. Formant-based regions of interest were defined for each vowel based on spectral analysis of the vowel stimuli and independently acquired tonotopic maps for each participant. We found that perception of [ɑ] and [i] yielded differential activation of tonotopic regions corresponding to formants of [ɑ] and [i], such that each vowel was associated with increased signal in tonotopic regions corresponding to its own formants. This pattern was observed in Heschl's gyrus and the superior temporal gyrus, in both hemispheres, and for both the first and second formants. Using linear discriminant analysis of mean signal change in formant-based regions of interest, the identity of untrained vowels was predicted with ∼73% accuracy. Our findings show that cortical encoding of vowels is scaffolded on tonotopy, a fundamental organizing principle of auditory cortex that is not language-specific.
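The decoding step in the abstract above (linear discriminant analysis over mean signal change in formant-based regions of interest) can be sketched in a few lines. The following is a hypothetical toy version with invented two-dimensional "ROI response" features, using a two-class LDA with a shared diagonal covariance for simplicity; it is not the study's analysis pipeline.

```python
# Toy two-class LDA: classify vowel identity ([a] vs [i]) from mean
# signal change in two hypothetical formant-based ROIs.
# All numbers are invented for illustration.

def lda_train(class0, class1):
    """Return (weights, bias) of a two-class LDA with diagonal shared covariance."""
    def mean(rows):
        return [sum(r[j] for r in rows) / len(rows) for j in range(len(rows[0]))]
    def var(rows, mu):
        return [sum((r[j] - mu[j]) ** 2 for r in rows) / len(rows)
                for j in range(len(mu))]
    m0, m1 = mean(class0), mean(class1)
    # pooled per-dimension variance (shared diagonal covariance estimate)
    v = [(a * len(class0) + b * len(class1)) / (len(class0) + len(class1))
         for a, b in zip(var(class0, m0), var(class1, m1))]
    w = [(b - a) / max(s, 1e-12) for a, b, s in zip(m0, m1, v)]
    midpoint = [(a + b) / 2 for a, b in zip(m0, m1)]
    bias = -sum(wi * mi for wi, mi in zip(w, midpoint))
    return w, bias

def lda_predict(w, bias, x):
    """1 if x falls on the class-1 side of the decision boundary, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + bias > 0 else 0

# Invented mean signal changes: [low-frequency ROI, high-frequency ROI].
a_trials = [[1.2, 0.3], [1.0, 0.4], [1.1, 0.2]]  # [a]: strong low-freq ROI
i_trials = [[0.3, 1.1], [0.4, 1.3], [0.2, 1.0]]  # [i]: strong high-freq ROI
w, b = lda_train(a_trials, i_trials)
print(lda_predict(w, b, [1.15, 0.25]))  # 0 -> classified as [a]
print(lda_predict(w, b, [0.35, 1.20]))  # 1 -> classified as [i]
```

In practice such decoders are evaluated on held-out ("untrained") items, as in the abstract's ~73% accuracy figure, to avoid overfitting to the training trials.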
Affiliation(s)
- Julia M Fisher: Department of Linguistics, University of Arizona, Tucson, AZ, USA; Statistics Consulting Laboratory, BIO5 Institute, University of Arizona, Tucson, AZ, USA.
- Frederic K Dick: Department of Psychological Sciences, Birkbeck College, University of London, UK; Birkbeck-UCL Center for Neuroimaging, London, UK; Department of Experimental Psychology, University College London, UK.
- Deborah F Levy: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA.
- Stephen M Wilson: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA.
11
Carminati M, Fiori-Duharcourt N, Isel F. Neurophysiological differentiation between preattentive and attentive processing of emotional expressions on French vowels. Biol Psychol 2017;132:55-63. [PMID: 29102707] [DOI: 10.1016/j.biopsycho.2017.10.013]
Abstract
The present electrophysiological study investigated the processing of emotional prosody by minimizing as much as possible the effect of emotional information conveyed by the lexical-semantic context. Emotionally colored French vowels (i.e., happiness, sadness, fear, and neutral) were presented in a mismatch negativity (MMN) oddball paradigm. Both the MMN, i.e., an event-related potential (ERP) component thought to reflect preattentive change detection, and the P3a, i.e., an ERP marker of involuntary orientation of attention toward deviant stimuli, were significantly modulated by the emotional deviants compared to the neutral ones. Critically, the largest amplitude (MMN, P3a) and the shortest peak latency (MMN) were observed for fear deviants, all other things being equal. Taken together, the present findings lend support to a sequential neurocognitive model of emotion processing (Scherer, 2001) which postulates, among other checks, a first stage of automatic emotion detection (MMN) followed by a second stage of subjective evaluation of the stimulus or event (P3a). Consistently with previous studies, our data suggest that among the six universal emotions, fear could have a special status probably because of its adaptive role in the evolution of the human species.
Affiliation(s)
- Mathilde Carminati: Laboratory Vision Action Cognition - EA 7326, Institute of Psychology, Paris Descartes University - Sorbonne Paris Cité, France.
- Nicole Fiori-Duharcourt: Laboratory Vision Action Cognition - EA 7326, Institute of Psychology, Paris Descartes University - Sorbonne Paris Cité, France.
- Frédéric Isel: University Paris Nanterre - Paris Lumières, CNRS, UMR 7114 Models, Dynamics, Corpora, France.
12
Masapollo M, Polka L, Ménard L. A universal bias in adult vowel perception - By ear or by eye. Cognition 2017;166:358-370. [PMID: 28601721] [DOI: 10.1016/j.cognition.2017.06.001]
Abstract
Speech perceivers are universally biased toward "focal" vowels (i.e., vowels whose adjacent formants are close in frequency, which concentrates acoustic energy into a narrower spectral region). This bias is demonstrated in phonetic discrimination tasks as a directional asymmetry: a change from a relatively less to a relatively more focal vowel results in significantly better performance than a change in the reverse direction. We investigated whether the critical information for this directional effect is limited to the auditory modality, or whether visible articulatory information provided by the speaker's face also plays a role. Unimodal auditory and visual as well as bimodal (auditory-visual) vowel stimuli were created from video recordings of a speaker producing variants of /u/, differing in both their degree of focalization and visible lip rounding (i.e., lip compression and protrusion). In Experiment 1, we confirmed that subjects showed an asymmetry while discriminating the auditory vowel stimuli. We then found, in Experiment 2, a similar asymmetry when subjects lip-read those same vowels. In Experiment 3, we found asymmetries, comparable to those found for unimodal vowels, for bimodal vowels when the audio and visual channels were phonetically-congruent. In contrast, when the audio and visual channels were phonetically-incongruent (as in the "McGurk effect"), this asymmetry was disrupted. These findings collectively suggest that the perceptual processes underlying the "focal" vowel bias are sensitive to articulatory information available across sensory modalities, and raise foundational issues concerning the extent to which vowel perception derives from general-auditory or speech-gesture-specific processes.
Affiliation(s)
- Matthew Masapollo
- School of Communication Sciences and Disorders, McGill University, 2001 McGill College, 8th Floor, Montreal, QC H3A 1G1, Canada; Centre for Research on Brain, Language, and Music, McGill University, 3640 de la Montagne, Montreal, Quebec H3G 2A8, Canada.
- Linda Polka
- School of Communication Sciences and Disorders, McGill University, 2001 McGill College, 8th Floor, Montreal, QC H3A 1G1, Canada; Centre for Research on Brain, Language, and Music, McGill University, 3640 de la Montagne, Montreal, Quebec H3G 2A8, Canada
- Lucie Ménard
- Département de Linguistique, Université du Québec à Montréal, Pavillon J.-A. De Sève, DS-4425, 320, Sainte-Catherine Est, Montréal, QC H2X 1L7, Canada; Centre for Research on Brain, Language, and Music, McGill University, 3640 de la Montagne, Montreal, Quebec H3G 2A8, Canada
13
Monte-Ordoño J, Toro JM. Different ERP profiles for learning rules over consonants and vowels. Neuropsychologia 2017; 97:104-111. [PMID: 28232218 DOI: 10.1016/j.neuropsychologia.2017.02.014]
Abstract
The Consonant-Vowel hypothesis suggests that consonants and vowels tend to be used differently during language processing. In this study we explored whether these functional differences trigger different neural responses in a rule learning task. We recorded ERPs while nonsense words were presented in an oddball paradigm. An ABB rule was implemented either over the consonants (Consonant condition) or over the vowels (Vowel condition) composing standard words. Deviant stimuli were composed of novel phonemes. Deviants could either implement the same ABB rule as standards (Phoneme deviants) or implement a different ABA rule (Rule deviants). We observed shared early components (P1 and MMN) for both types of deviants across both conditions. We also observed differences across conditions around 400 ms. In the Consonant condition, Phoneme deviants triggered a posterior negativity. In the Vowel condition, Rule deviants triggered an anterior negativity. Such responses demonstrate distinct neural responses to the violation of abstract rules over different phonetic categories.
Affiliation(s)
- Juan M Toro
- Universitat Pompeu Fabra, C. Roc Boronat, 138, 08018 Barcelona, Spain; ICREA, Pg. Lluís Companys, 23, 08010 Barcelona, Spain
14
You RS, Serniclaes W, Rider D, Chabane N. On the nature of the speech perception deficits in children with autism spectrum disorders. Res Dev Disabil 2017; 61:158-171. [PMID: 28082004 DOI: 10.1016/j.ridd.2016.12.009]
Abstract
Previous studies have claimed to show deficits in the perception of speech sounds in autism spectrum disorders (ASD). The aim of the current study was to clarify the nature of such deficits. Children with ASD might only exhibit a lesser amount of precision in the perception of phoneme categories (CPR deficit). However, these children might further present an allophonic mode of speech perception, similar to the one evidenced in dyslexia, characterised by enhanced discrimination of acoustic differences within phoneme categories. Allophonic perception usually gives rise to a categorical perception (CP) deficit, characterised by a weaker coherence between discrimination and identification of speech sounds. The perceptual performance of ASD children was compared to that of control children of the same chronological age. Identification and discrimination data were collected for continua of natural vowels, synthetic vowels, and synthetic consonants. Results confirmed that children with ASD exhibit a CPR deficit for the three stimulus continua. These children further exhibited a trend toward allophonic perception that was, however, not accompanied by the usual CP deficit. These findings confirm that the commonly found CPR deficit is also present in ASD. Whether children with ASD also present allophonic perception requires further investigations.
Affiliation(s)
- R S You
- Dipartimento di Neuroscienze, Università di Parma, Italy; Laboratoire de Psychologie de la Perception, Université Paris Descartes, France.
- W Serniclaes
- Laboratoire de Psychologie de la Perception, Université Paris Descartes, France; Centre National de la Recherche Scientifique (CNRS), France; UNESCOG, Université Libre de Bruxelles, Belgium
- D Rider
- Laboratoire de Psychologie de la Perception, Université Paris Descartes, France
- N Chabane
- Centre Cantonal de l'Autisme, Université de Lausanne, Switzerland
15
Sayles M, Stasiak A, Winter IM. Neural Segregation of Concurrent Speech: Effects of Background Noise and Reverberation on Auditory Scene Analysis in the Ventral Cochlear Nucleus. Adv Exp Med Biol 2016; 894:389-97. [PMID: 27080680 DOI: 10.1007/978-3-319-25474-6_41]
Abstract
Concurrent complex sounds (e.g., two voices speaking at once) are perceptually disentangled into separate "auditory objects". This neural processing often occurs in the presence of acoustic-signal distortions from noise and reverberation (e.g., in a busy restaurant). A difference in periodicity between sounds is a strong segregation cue under quiet, anechoic conditions. However, noise and reverberation exert differential effects on speech intelligibility under "cocktail-party" listening conditions. Previous neurophysiological studies have concentrated on understanding auditory scene analysis under ideal listening conditions. Here, we examine the effects of noise and reverberation on periodicity-based neural segregation of concurrent vowels /a/ and /i/, in the responses of single units in the guinea-pig ventral cochlear nucleus (VCN): the first processing station of the auditory brain stem. In line with human psychoacoustic data, we find reverberation significantly impairs segregation when vowels have an intonated pitch contour, but not when they are spoken on a monotone. In contrast, noise impairs segregation independent of intonation pattern. These results are informative for models of speech processing under ecologically valid listening conditions, where noise and reverberation abound.
16
Dodderi T, Narra M, Varghese SM, Deepak DT. Spectral Analysis of Hypernasality in Cleft Palate Children: A Pre-Post Surgery Comparison. J Clin Diagn Res 2016; 10:MC01-3. [PMID: 26894098 DOI: 10.7860/jcdr/2016/15389.7055]
Abstract
INTRODUCTION A change in resonance is the most commonly experienced speech problem in children diagnosed with cleft lip and palate. The degree of nasality during normal speech production is maintained by changes in the velopharyngeal port. These variations in the speech signal are reported to be successfully captured using acoustic tools such as spectral analysis. AIM The present study investigated voice low tone to high tone ratio (VLHR) values for phonation samples of individuals with cleft palate before and after surgery. MATERIALS AND METHODS Thirty children with congenital cleft palate, aged 8 to 15 years, participated in the study. Three trials of sustained vowels (/a/, /i/ and /u/) were recorded at a comfortable pitch and loudness level in a noise-free room using a hand-held dynamic microphone. Praat software, which utilizes the Hillenbrand algorithm, was used to extract VLHR values for samples recorded before and after recovery from surgery. RESULTS Statistical analysis revealed a significant decrease in VLHR values after surgery compared with before surgery. Analysis of variance revealed a statistically significant difference at the 95% confidence level. CONCLUSION It is concluded that the VLHR parameter could be used as an index of nasality and can be included in the routine assessment protocol.
Affiliation(s)
- Thejaswi Dodderi
- Lecturer, Department of Audiology and Speech Language Pathology, Nitte Institute of Speech & Hearing, Mangaluru, India
- Manjunath Narra
- PhD Scholar, Department of Cognitive Science, Macquarie University, Sydney, Australia
- Sneha Mareen Varghese
- Lecturer, Department of Audiology and Speech Language Pathology, Dr. S.R. Chandrashekar Institute of Speech and Hearing, Bengaluru, India
- Dessai Teja Deepak
- Lecturer, Department of Audiology and Speech Language Pathology, Nitte Institute of Speech and Hearing, India
17
Abstract
In response to voiced speech sounds, auditory-nerve (AN) fibres phase-lock to harmonics near best frequency (BF) and to the fundamental frequency (F0) of voiced sounds. Due to nonlinearities in the healthy ear, phase-locking in each frequency channel is dominated either by a single harmonic, for channels tuned near formants, or by F0, for channels between formants. The alternating dominance of these factors sets up a robust pattern of F0-synchronized rate across best frequency (BF). This profile of a temporally coded measure is transformed into a mean rate profile in the midbrain (inferior colliculus, IC), where neurons are sensitive to low-frequency fluctuations. In the impaired ear, the F0-synchronized rate profile is affected by several factors: Reduced synchrony capture decreases the dominance of a single harmonic near BF on the response. Elevated thresholds also reduce the effect of rate saturation, resulting in increased F0-synchrony. Wider peripheral tuning results in a wider-band envelope with reduced F0 amplitude. In general, sensorineural hearing loss reduces the contrast in AN F0-synchronized rates across BF. Computational models for AN and IC neurons illustrate how hearing loss would affect the F0-synchronized rate profiles set up in response to voiced speech sounds.
Affiliation(s)
- Laurel H Carney
- Departments of Biomedical Engineering, Neurobiology & Anatomy, Electrical & Computer Engineering, University of Rochester, Rochester, NY, USA.
- Duck O Kim
- Department of Neuroscience, University of Connecticut Health Center, Farmington, CT, USA
- Shigeyuki Kuwada
- Department of Neuroscience, University of Connecticut Health Center, Farmington, CT, USA
18
Carson C, Ryalls J, Hardin-Hollingsworth K, Le Normand MT, Ruddy B. Acoustic Analyses of Prolonged Vowels in Young Adults With Friedreich Ataxia. J Voice 2015; 30:272-80. [PMID: 26454768 DOI: 10.1016/j.jvoice.2015.05.008]
Abstract
OBJECTIVES Finding measures that track disease progression and determine treatment efficacy is vital for appropriate management in Friedreich ataxia (FA). The purpose of this study was to determine which cepstral- and spectral-based measures extracted from prolonged vowels using the Analysis of Dysphonia in Speech and Voice (ADSV) program discriminate between those who have FA and normal voice (NV) peers. STUDY DESIGN This is a descriptive, prospective study. METHODS The initial 2 seconds of prolonged /a/, /i/, and /o/ were analyzed through ADSV from 20 individuals diagnosed with FA and 20 NV individuals. ADSV measures used were cepstral peak prominence (CPP), cepstral peak prominence standard deviation (CPP SD), low/high spectral ratio (L/H ratio), low/high spectral ratio standard deviation (L/H ratio SD), and the Cepstral/Spectral Index of Dysphonia (CSID). RESULTS L/H ratio SD was the only measure for which significant differences were found between groups across all vowels. Comparing measures per vowel, the vowel /o/ was significantly different between groups on four of five measures. Discriminant analysis revealed that 100% of those in the FA group were classified correctly (sensitivity), whereas 95% of NV members were correctly identified (specificity), when all ADSV measures except the L/H ratio were entered. CONCLUSIONS Unstable periods of phonation, such as initiations of voice production in vowels, may yield robust acoustic cues in the FA population. ADSV provides measures that, when considered together, have excellent sensitivity and very good specificity. Vowels yielded differing results on ADSV measures; analysis of different vowel types is recommended.
Affiliation(s)
- Cecyle Carson
- Department of Communication Sciences & Disorders, Health and Public Affairs I, University of Central Florida, Orlando, Florida 32816.
- Jack Ryalls
- Department of Communication Sciences & Disorders, Health and Public Affairs I, University of Central Florida, Orlando, Florida 32816
- Kaylea Hardin-Hollingsworth
- Department of Communication Sciences & Disorders, Health and Public Affairs I, University of Central Florida, Orlando, Florida 32816
- Bari Ruddy
- Department of Communication Sciences & Disorders, Health and Public Affairs I, University of Central Florida, Orlando, Florida 32816
19
Iuzzini-Seigel J, Hogan TP, Guarino AJ, Green JR. Reliance on auditory feedback in children with childhood apraxia of speech. J Commun Disord 2015; 54:32-42. [PMID: 25662298 DOI: 10.1016/j.jcomdis.2015.01.002]
Abstract
UNLABELLED Children with childhood apraxia of speech (CAS) have been hypothesized to continuously monitor their speech through auditory feedback to minimize speech errors. We used an auditory masking paradigm to determine the effect of attenuating auditory feedback on speech in 30 children: 9 with CAS, 10 with speech delay, and 11 with typical development. The masking only affected the speech of children with CAS as measured by voice onset time and vowel space area. These findings provide preliminary support for greater reliance on auditory feedback among children with CAS. LEARNING OUTCOMES Readers of this article should be able to (i) describe the motivation for investigating the role of auditory feedback in children with CAS; (ii) report the effects of feedback attenuation on speech production in children with CAS, speech delay, and typical development, and (iii) understand how the current findings may support a feedforward program deficit in children with CAS.
20
Viegas F, Viegas D, Baeck HE. Frequency measurement of vowel formants produced by Brazilian children aged between 4 and 8 years. J Voice 2014; 29:292-8. [PMID: 25510161 DOI: 10.1016/j.jvoice.2014.08.001]
Abstract
OBJECTIVE To investigate the frequencies of the first three formants of the seven oral Brazilian Portuguese vowels in healthy children aged 4-8 years. METHODS Two hundred seven children of both genders were selected by oral expression screening and perceptive-auditory analysis. They were separated into four age groups (G1, G2, G3, and G4) and by gender. Voice signals were obtained from key-sentence utterances, and segments of the seven Brazilian Portuguese oral vowels in tonic position were used to estimate formant frequency measurements. The Praat software was used to process the recordings. RESULTS Findings were presented as mean values of each of the investigated parameters. Statistically significant differences between genders were found in 61.90% of comparisons, and when analyzing the age groups and genders, we observed that 65 of the 84 items studied (seven vowels × three formant frequencies × four groups) showed higher formant frequencies for girls. The frequency values of the first three formants decreased with age. The results recommended grouping G1 and G2, and they showed a clear difference between this newly formed group and G4. In the age ranges of 5 years to 6 years 11 months (G2 and G3) and 6 years to 7 years 11 months (G3 and G4), there were statistically significant changes that were random across parameters and vowels. CONCLUSION Formant frequencies showed a tendency to differentiate genders, and their absolute values were in general higher in girls. Formant frequencies decreased with increasing age. Tests for statistical differences led to grouping of G1 and G2 and showed a clear difference between this newly formed group and G4. The comparisons between G2 and G3 and between G3 and G4 showed random changes. The changes during this age period (5 years to 7 years 11 months) were attributed to a transition stage of acoustic measurements in children.
As formant frequencies vary according to structural and postural aspects of the vocal tract and speech organs, their study in healthy children contributes to the understanding of the development of the pediatric phonation system, in addition to offering a reference data set for future studies of children with vocal disorders that can potentially impact the resonance system.
Affiliation(s)
- Flávia Viegas
- Graduate Course of Speech and Hearing Pathology, Department of Specific Training in Speech and Hearing Pathology, Universidade Federal Fluminense, UFF, Rio de Janeiro, RJ, Brazil.
- Danieli Viegas
- Prefeitura da Cidade do Rio de Janeiro, PCRJ, Department of Speech and Hearing Pathology, Rio de Janeiro, RJ, Brazil
- Heidi Elisabeth Baeck
- Graduate Course of Speech and Hearing Pathology, Department of Specific Training in Speech and Hearing Pathology, Universidade Federal Fluminense, UFF, Rio de Janeiro, RJ, Brazil
21
Abstract
The TRACE model of speech perception (McClelland & Elman, 1986) is used to simulate results from the infant word recognition literature, to provide a unified, theoretical framework for interpreting these findings. In a first set of simulations, we demonstrate how TRACE can reconcile apparently conflicting findings suggesting, on the one hand, that consonants play a pre-eminent role in lexical acquisition (Nespor, Peña & Mehler, 2003; Nazzi, 2005), and on the other, that there is a symmetry in infant sensitivity to vowel and consonant mispronunciations of familiar words (Mani & Plunkett, 2007). In a second series of simulations, we use TRACE to simulate infants' graded sensitivity to mispronunciations of familiar words as reported by White and Morgan (2008). An unexpected outcome is that TRACE fails to demonstrate graded sensitivity for White and Morgan's stimuli unless the inhibitory parameters in TRACE are substantially reduced. We explore the ramifications of this finding for theories of lexical development. Finally, TRACE mimics the impact of phonological neighbourhoods on early word learning reported by Swingley and Aslin (2007). TRACE offers an alternative explanation of these findings in terms of mispronunciations of lexical items rather than imputing word learning to infants. Together these simulations provide an evaluation of Developmental (Jusczyk, 1993) and Familiarity (Metsala, 1999) accounts of word recognition by infants and young children. The findings point to a role for both theoretical approaches whereby vocabulary structure and content constrain infant word recognition in an experience-dependent fashion, and highlight the continuity in the processes and representations involved in lexical development during the second year of life.
Affiliation(s)
- Kim Plunkett
- Department of Experimental Psychology, University of Oxford, United Kingdom
22
McMurray B, Kovack-Lesh KA, Goodwin D, McEchron W. Infant directed speech and the development of speech perception: enhancing development or an unintended consequence? Cognition 2013; 129:362-78. [PMID: 23973465 PMCID: PMC3874452 DOI: 10.1016/j.cognition.2013.07.015]
Abstract
Infant directed speech (IDS) is a speech register characterized by simpler sentences, a slower rate, and more variable prosody. Recent work has implicated it in more subtle aspects of language development. Kuhl et al. (1997) demonstrated that segmental cues for vowels are affected by IDS in a way that may enhance development: the average locations of the extreme "point" vowels (/a/, /i/ and /u/) are further apart in acoustic space. If infants learn speech categories, in part, from the statistical distributions of such cues, these changes may specifically enhance speech category learning. We revisited this by asking (1) if these findings extend to a new cue (Voice Onset Time, a cue for voicing); (2) whether they extend to the interior vowels which are much harder to learn and/or discriminate; and (3) whether these changes may be an unintended phonetic consequence of factors like speaking rate or prosodic changes associated with IDS. Eighteen caregivers were recorded reading a picture book including minimal pairs for voicing (e.g., beach/peach) and a variety of vowels to either an adult or their infant. Acoustic measurements suggested that VOT was different in IDS, but not in a way that necessarily supports better development, and that these changes are almost entirely due to slower rate of speech of IDS. Measurements of the vowel suggested that in addition to changes in the mean, there was also an increase in variance, and statistical modeling suggests that this may counteract the benefit of any expansion of the vowel space. As a whole this suggests that changes in segmental cues associated with IDS may be an unintended by-product of the slower rate of speech and different prosodic structure, and do not necessarily derive from a motivation to enhance development.
Affiliation(s)
- Bob McMurray
- Dept. of Psychology, University of Iowa, United States; Dept. of Communication Sciences and Disorders, University of Iowa, United States; Dept. of Linguistics, University of Iowa, United States; The Delta Center, University of Iowa, United States.
23
Al-Magaleh WR, Swelem AA, Shohdi SS, Mawsouf NM. Setting up of teeth in the neutral zone and its effect on speech. Saudi Dent J 2012; 24:43-8. [PMID: 23960527 DOI: 10.1016/j.sdentj.2011.11.004]
Abstract
Rational goals for denture construction are directed primarily at the restoration of esthetics and masticatory function and the healthy preservation of the remaining natural tissues. Little attention has been given to perfecting and optimizing the phonetic quality of speech in denture users. However, insertion of prosthodontic restorations may lead to speech defects. Most such defects are mild but can nevertheless be a source of concern to the patient. For the dental practitioner, there are few guidelines for designing a prosthetic restoration with maximum phonetic success. One of these guidelines involves setting up the teeth within the neutral zone. The aim of this study was to evaluate, subjectively and objectively, the effect on speech of setting up teeth in the neutral zone. Three groups were examined: group I (control) included 10 completely dentulous subjects, group II included 10 completely edentulous patients with conventional dentures, and group III included the same 10 edentulous patients with neutral zone dentures. Subjective assessment included patient satisfaction. Objective assessment included the duration taken for recitation of Al-Fateha and acoustic analysis. Subjectively, patients were more satisfied with their neutral zone dentures. Objectively, speech produced with the neutral zone dentures was closer to normal than speech with conventional dentures.