1. Nip ISB. Articulatory and Vocal Fold Movement Patterns During Loud Speech in Children With Cerebral Palsy. Journal of Speech, Language, and Hearing Research 2024;67:477-493. [PMID: 38227476] [PMCID: PMC11000802] [DOI: 10.1044/2023_jslhr-23-00411]
Abstract
PURPOSE Speech motor control changes underlying louder speech are poorly understood in children with cerebral palsy (CP). The current study evaluates changes in the oral articulatory and laryngeal subsystems in children with CP and their typically developing (TD) peers during louder speech. METHOD Nine children with CP and nine age- and sex-matched TD peers produced sentence repetitions in two conditions: (a) with their habitual rate and loudness and (b) with louder speech. Lip and jaw movements were recorded with optical motion capture. Acoustic recordings were obtained to evaluate vocal fold articulation. RESULTS Children with CP had smaller jaw movements, larger lower lip movements, slower jaw speeds, faster lip speeds, reduced interarticulator coordination, reduced low-frequency spectral tilt, and lower cepstral peak prominences (CPP) in comparison to their TD peers. Both groups produced louder speech with larger lip and jaw movements, faster lip and jaw speeds, increased temporal coordination, reduced movement variability, reduced spectral tilt, and increased CPP. CONCLUSIONS Children with CP differ from their TD peers in the speech motor control of both the oral articulatory and laryngeal subsystems. Both groups alter oral articulatory and vocal fold movements when cued to speak loudly, which may contribute to the increased intelligibility associated with louder speech. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.24970302.
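For readers unfamiliar with the cepstral measure reported above: CPP is the height of the dominant cepstral peak above a regression line fitted to the cepstrum over the pitch quefrency range. A minimal sketch of the general measure (not the authors' analysis pipeline; the window, frame length, and search range here are assumptions):

```python
import numpy as np

def cepstral_peak_prominence(frame, fs, f0_min=60.0, f0_max=400.0):
    """CPP (dB): height of the dominant cepstral peak above a straight
    line fitted to the cepstrum in the pitch quefrency range (sketch)."""
    windowed = np.asarray(frame, dtype=float) * np.hanning(len(frame))
    log_mag = 20.0 * np.log10(np.abs(np.fft.fft(windowed)) + 1e-12)
    cepstrum = np.fft.ifft(log_mag).real
    q = np.arange(len(cepstrum)) / fs            # quefrency in seconds
    lo, hi = int(fs / f0_max), int(fs / f0_min)  # plausible pitch peaks
    peak = lo + int(np.argmax(cepstrum[lo:hi]))
    # Linear trend of the cepstrum over the searched range.
    slope, intercept = np.polyfit(q[lo:hi], cepstrum[lo:hi], 1)
    return cepstrum[peak] - (slope * q[peak] + intercept)
```

Voiced (periodic) frames yield a pronounced cepstral peak and hence a higher CPP than aperiodic frames, which is why CPP tracks voice quality.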
2. Hansen M, Huttenlauch C, de Beer C, Wartenburger I, Hanne S. Individual Differences in Early Disambiguation of Prosodic Grouping. Language and Speech 2023;66:706-733. [PMID: 36250333] [DOI: 10.1177/00238309221127374]
Abstract
Prosodic cues help to disambiguate incoming information in spoken language perception. In structurally ambiguous coordinate utterances, such as three-name sequences, the intended grouping is marked by three prosodic cues: F0 range, final lengthening, and pause. To indicate that the first two names are grouped together, speakers typically weaken the durational and tonal cues on the first name while strengthening them on the second name, compared with a structure without internal grouping. The current study uses a gating paradigm to test whether listeners can decide on the internal grouping of a coordinate structure by exploiting prosodic information on the first name alone. One hundred ninety-two stimuli were cut into seven parts (gates) and presented to naive participants (n = 45) successively (gate by gate), with increasing length of the utterance and amount of prosodic information. In a two-alternative forced-choice decision task, accuracy was above chance level after the second name. However, more than half of the participants could already reliably detect grouping patterns after the first name. These interindividual differences point toward the existence of different subgroups with diverging prosodic parsing strategies. Furthermore, listeners were sensitive to speaker-specific prosodic patterns. Depending on speaker-specific characteristics and individual parsing capacities, it seems possible, at least for a subgroup of listeners, to make predictions about the underlying grouping structure of coordinated name sequences based on early prosodic cues.
Affiliation(s)
- Marie Hansen
- Cognitive Sciences, Department of Linguistics, University of Potsdam, Germany
- Clara Huttenlauch
- Cognitive Sciences, Department of Linguistics, University of Potsdam, Germany
- Carola de Beer
- Cognitive Sciences, Department of Linguistics, University of Potsdam, Germany
- Sandra Hanne
- Cognitive Sciences, Department of Linguistics, University of Potsdam, Germany
3. Wynn CJ, Barrett TS, Berisha V, Liss JM, Borrie SA. Speech Entrainment in Adolescent Conversations: A Developmental Perspective. Journal of Speech, Language, and Hearing Research 2023;66:3132-3150. [PMID: 37071795] [PMCID: PMC10569405] [DOI: 10.1044/2023_jslhr-22-00263]
Abstract
PURPOSE Defined as the similarity of speech behaviors between interlocutors, speech entrainment plays an important role in successful adult conversations. According to theoretical models of entrainment and research on motoric, cognitive, and social developmental milestones, the ability to entrain should develop throughout adolescence. However, little is known about the specific developmental trajectory or the role of speech entrainment in conversational outcomes of this age group. The purpose of this study is to characterize speech entrainment patterns in the conversations of neurotypical early adolescents. METHOD This study utilized a corpus of 96 task-based conversations between adolescents aged 9 to 14 years and a comparison corpus of 32 task-based conversations between adults. For each conversational turn, two speech entrainment scores were calculated for 429 acoustic features across rhythmic, articulatory, and phonatory dimensions. Predictive modeling was used to evaluate the degree of entrainment and the relationship between entrainment and two metrics of conversational success. RESULTS Speech entrainment increased throughout early adolescence but did not reach the level exhibited in conversations between adults. Additionally, speech entrainment was predictive of both conversational quality and conversational efficiency. Furthermore, models that included all acoustic features and both entrainment types performed better than models that only included individual acoustic feature sets or one type of entrainment. CONCLUSIONS Our findings show that speech entrainment skills are largely developed during early adolescence, with continued development possibly occurring across later adolescence. Additionally, the results highlight the role of speech entrainment in successful conversation in this population, suggesting the importance of continued exploration of this phenomenon in both neurotypical and neurodivergent adolescents. We also provide evidence of the value of using holistic measures that capture the multidimensionality of speech entrainment and provide a validated methodology for investigating entrainment across multiple acoustic features and entrainment types.
Affiliation(s)
- Camille J. Wynn
- Department of Communication Sciences and Disorders, University of Houston, TX
- Tyson S. Barrett
- Department of Communicative Disorders and Deaf Education, Utah State University, Logan
- Visar Berisha
- Department of Speech and Hearing Science, Arizona State University, Tempe
- School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe
- Julie M. Liss
- Department of Speech and Hearing Science, Arizona State University, Tempe
- Stephanie A. Borrie
- Department of Communicative Disorders and Deaf Education, Utah State University, Logan
4. Gampe A, Zahner-Ritter K, Müller JJ, Schmid S. How children speak with their voice assistant Sila depends on what they think about her. Computers in Human Behavior 2023. [DOI: 10.1016/j.chb.2023.107693]
5. Tuomainen O, Taschenberger L, Rosen S, Hazan V. Speech modifications in interactive speech: effects of age, sex and noise type. Philosophical Transactions of the Royal Society B: Biological Sciences 2022;377:20200398. [PMID: 34775827] [PMCID: PMC8591383] [DOI: 10.1098/rstb.2020.0398]
Abstract
When attempting to maintain conversations in noisy communicative settings, talkers typically modify their speech to make themselves understood by the listener. In this study, we investigated the impact of background interference type and talker age on speech adaptations, vocal effort and communicative success. We measured speech acoustics (articulation rate, mid-frequency energy, fundamental frequency), vocal effort (the correlation between mid-frequency energy and fundamental frequency) and task completion time in 114 participants aged 8-80 years carrying out an interactive problem-solving task in good and noisy listening conditions (quiet, non-speech noise, background speech). We found greater changes in fundamental frequency and mid-frequency energy in non-speech noise than in background speech, and similar reductions in articulation rate in both. However, older participants (50+ years) increased vocal effort in both background interference types, whereas younger children (less than 13 years) increased vocal effort only in background speech. The presence of background interference did not lead to longer task completion times. These results suggest that when the background interference involves a higher cognitive load, as in the case of the speech of other talkers, children and older talkers need to exert more vocal effort to ensure successful communication. We discuss these findings within the communication effort framework. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part II)'.
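The vocal-effort index described above, the correlation between mid-frequency energy and fundamental frequency, reduces to a Pearson correlation over matched per-utterance measurements. A minimal sketch, assuming both measures have already been extracted per utterance (the study's exact computation may differ):

```python
import numpy as np

def vocal_effort_index(mid_freq_energy_db, f0_hz):
    """Pearson correlation between per-utterance mid-frequency energy
    and fundamental frequency, used as a proxy for vocal effort
    (illustrative sketch, not the study's published procedure)."""
    e = np.asarray(mid_freq_energy_db, dtype=float)
    f = np.asarray(f0_hz, dtype=float)
    return np.corrcoef(e, f)[0, 1]
```

A strongly positive index indicates that utterances with raised pitch also carry raised mid-frequency energy, the pattern associated with increased vocal effort.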
Affiliation(s)
- Outi Tuomainen
- Speech Hearing and Phonetic Sciences, University College London, 2 Wakefield Street, London WC1N 1PF, UK
- Department of Linguistics, University of Potsdam, Haus 14, Karl-Liebknecht-Straße 24-25, 14476 Potsdam, Germany
- Linda Taschenberger
- Speech Hearing and Phonetic Sciences, University College London, 2 Wakefield Street, London WC1N 1PF, UK
- Stuart Rosen
- Speech Hearing and Phonetic Sciences, University College London, 2 Wakefield Street, London WC1N 1PF, UK
- Valerie Hazan
- Speech Hearing and Phonetic Sciences, University College London, 2 Wakefield Street, London WC1N 1PF, UK
6. Hunter EJ, Cantor-Cutiva LC, van Leer E, van Mersbergen M, Nanjundeswaran CD, Bottalico P, Sandage MJ, Whitling S. Toward a Consensus Description of Vocal Effort, Vocal Load, Vocal Loading, and Vocal Fatigue. Journal of Speech, Language, and Hearing Research 2020;63:509-532. [PMID: 32078404] [PMCID: PMC7210446] [DOI: 10.1044/2019_jslhr-19-00057]
Abstract
Purpose The purpose of this document is threefold: (a) review the uses of the terms "vocal fatigue," "vocal effort," "vocal load," and "vocal loading" (as found in the literature) in order to track the occurrence and the related evolution of research; (b) present a "linguistically modeled" definition of the same from the review of literature on the terms; and (c) propose conceptualized definitions of the concepts. Method A comprehensive literature search was conducted using PubMed/MEDLINE, Embase, Cochrane Central Register of Controlled Trials, and Scientific Electronic Library Online. Four terms ("vocal fatigue," "vocal effort," "vocal load," and "vocal loading"), as well as possible variants, were included in the search, and their usages were compiled into conceptual definitions. Finally, a focus group of eight experts in the field (current authors) worked together to make conceptual connections and proposed consensus definitions. Results The occurrence and frequency of "vocal load," "vocal loading," "vocal effort," and "vocal fatigue" in the literature are presented, and summary definitions are developed. The results indicate that these terms appear to be often interchanged with blurred distinctions. Therefore, the focus group proposes the use of two new terms, "vocal demand" and "vocal demand response," in place of the terms "vocal load" and "vocal loading." We also propose standardized definitions for all four concepts. Conclusion Through a comprehensive literature search, the terms "vocal fatigue," "vocal effort," "vocal load," and "vocal loading" were explored, new terms were proposed, and standardized definitions were presented. Future work should refine these proposed definitions as research continues to address vocal health concerns.
Affiliation(s)
- Eric J. Hunter
- Department of Communicative Sciences and Disorders, Michigan State University, East Lansing
- Lady Catherine Cantor-Cutiva
- Department of Collective Health, Universidad Nacional de Colombia, Bogotá
- Department of Speech and Language Pathology, Universidad Manuela Beltrán, Bogotá, Colombia
- Eva van Leer
- Department of Communication Sciences and Disorders, Georgia State University, Atlanta
- Chaya Devie Nanjundeswaran
- Department of Audiology and Speech-Language Pathology, East Tennessee State University, Johnson City, TN
- Pasquale Bottalico
- Department of Speech and Hearing Science, University of Illinois at Urbana–Champaign
- Mary J. Sandage
- Department of Communication Disorders, Auburn University, AL
- Susanna Whitling
- Department of Logopedics, Phoniatrics and Audiology, Lund University, Sweden
7. Yi H, Smiljanic R, Chandrasekaran B. The Effect of Talker and Listener Depressive Symptoms on Speech Intelligibility. Journal of Speech, Language, and Hearing Research 2019;62:4269-4281. [PMID: 31738862] [PMCID: PMC7201326] [DOI: 10.1044/2019_jslhr-s-19-0112]
Abstract
Purpose This study examined the effect of depressive symptoms on the production and perception of conversational and clear speech (CS) sentences. Method Five talkers each with high-depressive (HD) and low-depressive (LD) symptoms read sentences in conversational and clear speaking styles. Acoustic measures of speaking rate, mean fundamental frequency (F0; Hz), F0 range (Hz), and energy in the 1-3 kHz range (dB) were obtained. Thirty-two young adult participants (15 HD, 16 LD) heard these conversational and clear sentences mixed with energetic masking (speech-shaped noise) at a -5 dB signal-to-noise ratio. Another group of 39 young adult participants (18 HD, 19 LD) heard the same sentences mixed with informational masking (one-talker competing speech) at a -12 dB signal-to-noise ratio. Keyword-correct scores were obtained. Results CS was characterized by a decreased speaking rate, increased F0 mean and range, and increased energy in the 1-3 kHz range. Talkers with HD symptoms produced these modifications significantly less than talkers with LD symptoms. When listening to speech in energetic masking (speech-shaped noise), listeners with both HD and LD symptoms benefited less from the CS produced by HD talkers. Listeners with HD symptoms performed significantly worse than listeners with LD symptoms when listening to speech in informational masking (one-talker competing speech). Conclusions The results provide evidence that depressive symptoms impact speech intelligibility; these findings have the potential to aid clinical decision making for individuals with depression.
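Mixing sentences with a masker at a fixed signal-to-noise ratio, as in the listening conditions above, amounts to scaling the masker relative to the speech power. A rough sketch (stimulus-preparation details such as level calibration are assumptions, not taken from the study):

```python
import numpy as np

def mix_at_snr(speech, masker, snr_db):
    """Scale the masker so the speech-to-masker power ratio equals
    snr_db, then add the two signals (illustrative sketch)."""
    speech = np.asarray(speech, dtype=float)
    masker = np.asarray(masker, dtype=float)[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_masker = np.mean(masker ** 2)
    # Gain that places the masker exactly snr_db below the speech level.
    gain = np.sqrt(p_speech / (p_masker * 10 ** (snr_db / 10)))
    return speech + gain * masker
```

A negative `snr_db` (e.g. -5) makes the masker louder than the speech, which is what makes these listening conditions adverse.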
Affiliation(s)
- Hoyoung Yi
- Department of Speech-Language-Hearing Sciences, Texas Tech University Health Sciences Center, Lubbock
- Rajka Smiljanic
- Department of Linguistics, The University of Texas at Austin
- Bharath Chandrasekaran
- Department of Communication Science and Disorders and Center for the Neural Basis of Cognition, University of Pittsburgh, PA
8. Hazan V, Tuomainen O, Kim J, Davis C, Sheffield B, Brungart D. Clear speech adaptations in spontaneous speech produced by young and older adults. The Journal of the Acoustical Society of America 2018;144:1331. [PMID: 30424655] [DOI: 10.1121/1.5053218]
Abstract
The study investigated the speech adaptations made by older adults (OA) with and without age-related hearing loss to communicate effectively in challenging communicative conditions. Acoustic analyses were carried out on spontaneous speech produced during a problem-solving task (diapix) carried out by talker pairs in different listening conditions. There were 83 talkers of Southern British English: 57 were older adults aged 65-84, comprising 30 with normal hearing (OANH) and 27 with hearing loss (OAHL) [mean pure tone average (PTA) 0.250-4 kHz: 27.7 dB HL], and 26 were younger adults (YA) aged 18-26 with normal hearing. Participants were recorded while completing the diapix task with a conversational partner (a YA of the same sex) when (a) both talkers heard normally (NORM), (b) the partner had a simulated hearing loss, and (c) both talkers heard babble noise. Irrespective of hearing status, there were age-related differences in some acoustic characteristics of YA and OA speech produced in NORM, most likely linked to physiological factors. In challenging conditions, while OANH talkers typically patterned with YA talkers, OAHL talkers made adaptations more consistent with an increase in vocal effort. The study suggests that even mild presbycusis in healthy OAs can affect the speech adaptations made to maintain effective communication.
Affiliation(s)
- Valerie Hazan
- Department of Speech Hearing and Phonetic Sciences, University College London, 2 Wakefield Street, London WC1N 1PF, United Kingdom
- Outi Tuomainen
- Department of Speech Hearing and Phonetic Sciences, University College London, 2 Wakefield Street, London WC1N 1PF, United Kingdom
- Jeesun Kim
- The MARCS Institute, Western Sydney University, Locked Bag 1797, Penrith, New South Wales 2751, Australia
- Christopher Davis
- The MARCS Institute, Western Sydney University, Locked Bag 1797, Penrith, New South Wales 2751, Australia
- Benjamin Sheffield
- Audiology and Speech-Pathology Center, Walter Reed National Military Medical Center, 4494 North Palmer Road, Bethesda, Maryland 20889, USA
- Douglas Brungart
- Audiology and Speech-Pathology Center, Walter Reed National Military Medical Center, 4494 North Palmer Road, Bethesda, Maryland 20889, USA
9. Hazan V, Tuomainen O, Tu L, Kim J, Davis C, Brungart D, Sheffield B. How do aging and age-related hearing loss affect the ability to communicate effectively in challenging communicative conditions? Hearing Research 2018;369:33-41. [PMID: 29941310] [DOI: 10.1016/j.heares.2018.06.009]
Abstract
This study investigated the relation between the intelligibility of conversational and clear speech produced by older and younger adults and (a) the acoustic profile of their speech and (b) communication effectiveness. Speech samples from 30 talkers from the elderLUCID corpus were used: 10 young adults (YA), 10 older adults with normal hearing (OANH), and 10 older adults with presbycusis (OAHL). Samples were extracted from recordings made while participants completed a problem-solving cooperative task (diapix) with a conversational partner who could either hear them easily (NORM) or via a simulated hearing loss (HLS), which led talkers to naturally adopt a clear speaking style. In speech-in-noise listening experiments involving 21 young adult listeners, speech samples by OANH and OAHL talkers were rated as less intelligible than those of YA talkers. HLS samples were more intelligible than NORM samples, with greater improvements in intelligibility across conditions seen for OA speech. The presence of presbycusis affected (a) the clear speech strategies adopted by OAHL talkers and (b) task effectiveness: OAHL talkers showed some adaptations consistent with an increase in vocal effort, and it took them significantly longer than the YA group to complete the diapix task. The relative energy in the 1-3 kHz frequency region of the long-term average spectrum was the feature that best predicted (a) the intelligibility of the speech samples and (b) task transaction time in the HLS condition. Overall, our study suggests that spontaneous speech produced by older adults is less intelligible in babble noise, probably due to less energy present in the 1-3 kHz frequency range, which is rich in acoustic cues. Even mild presbycusis in 'healthy aged' adults can affect the dynamic adaptations in speech that are beneficial for effective communication.
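The predictor highlighted above, relative energy in the 1-3 kHz region of the long-term average spectrum, can be approximated as a band-to-total energy ratio over the signal's power spectrum. A minimal sketch (the study's exact LTAS computation, e.g. frame averaging and weighting, may differ):

```python
import numpy as np

def relative_mid_freq_energy(signal, fs, band=(1000.0, 3000.0)):
    """Fraction of total spectral energy falling in the given band
    (defaults to 1-3 kHz); illustrative sketch of the measure."""
    spec = np.abs(np.fft.rfft(np.asarray(signal, dtype=float))) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spec[in_band].sum() / spec.sum()
```

Values near 1 mean the energy is concentrated in the 1-3 kHz region rich in consonant and formant cues; values near 0 mean it lies elsewhere.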
Affiliation(s)
- Valerie Hazan
- Department of Speech Hearing and Phonetic Sciences, Chandler House, UCL, 2 Wakefield Street, London WC1N 1PF, UK.
- Outi Tuomainen
- Department of Speech Hearing and Phonetic Sciences, Chandler House, UCL, 2 Wakefield Street, London WC1N 1PF, UK.
- Lilian Tu
- Department of Speech Hearing and Phonetic Sciences, Chandler House, UCL, 2 Wakefield Street, London WC1N 1PF, UK.
- Jeesun Kim
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Locked Bag 1797, Penrith NSW 2751, Australia.
- Chris Davis
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Locked Bag 1797, Penrith NSW 2751, Australia.
- Douglas Brungart
- Audiology and Speech-Pathology Center, Walter Reed National Military Medical Center, 4494 North Palmer Road, Bethesda, MD 20889, USA.
- Benjamin Sheffield
- Audiology and Speech-Pathology Center, Walter Reed National Military Medical Center, 4494 North Palmer Road, Bethesda, MD 20889, USA.
10. Granlund S, Hazan V, Mahon M. Children's Acoustic and Linguistic Adaptations to Peers With Hearing Impairment. Journal of Speech, Language, and Hearing Research 2018;61:1055-1069. [PMID: 29710271] [DOI: 10.1044/2017_jslhr-s-16-0456]
Abstract
PURPOSE This study aims to examine the clear speaking strategies used by older children when interacting with a peer with hearing loss, focusing on both acoustic and linguistic adaptations in speech. METHOD The Grid task, a problem-solving task developed to elicit spontaneous interactive speech, was used to obtain a range of global acoustic and linguistic measures. Eighteen 9- to 14-year-old children with normal hearing (NH) performed the task in pairs, once with a friend with NH and once with a friend with a hearing impairment (HI). RESULTS In HI-directed speech, children increased their fundamental frequency range and midfrequency intensity, decreased the number of words per phrase, and expanded their vowel space area by increasing F1 and F2 range, relative to NH-directed speech. However, participants did not appear to make changes to their articulation rate, the lexical frequency of content words, or lexical diversity when talking to their friend with HI compared with their friend with NH. CONCLUSIONS Older children show evidence of listener-oriented adaptations to their speech production; although their speech production systems are still developing, they are able to make speech adaptations to benefit the needs of a peer with HI, even without being given a specific instruction to do so. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.6118817.
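The vowel space area expansion reported above is conventionally quantified as the area of the polygon spanned by corner-vowel (F1, F2) means, computed with the shoelace formula. A minimal sketch of that conventional measure (not necessarily this study's exact procedure):

```python
import numpy as np

def vowel_space_area(formants):
    """Polygon area of corner-vowel (F1, F2) means via the shoelace
    formula; vertices must be supplied in polygon order (sketch)."""
    pts = np.asarray(formants, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Shoelace: half the absolute cross-sum of consecutive vertices.
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
```

With F1/F2 in Hz the area comes out in Hz²; a larger area indicates more peripheral, and typically clearer, vowel articulation.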
Affiliation(s)
- Sonia Granlund
- Speech, Hearing & Phonetic Sciences, University College London, United Kingdom
- Valerie Hazan
- Speech, Hearing & Phonetic Sciences, University College London, United Kingdom
- Merle Mahon
- Language & Cognition, University College London, United Kingdom
11. Smiljanic R, Gilbert RC. Acoustics of Clear and Noise-Adapted Speech in Children, Young, and Older Adults. Journal of Speech, Language, and Hearing Research 2017;60:3081-3096. [PMID: 29075775] [DOI: 10.1044/2017_jslhr-s-16-0130]
Abstract
PURPOSE This study investigated acoustic-phonetic modifications produced in noise-adapted speech (NAS) and clear speech (CS) by children, young adults, and older adults. METHOD Ten children (11-13 years of age), 10 young adults (18-29 years of age), and 10 older adults (60-84 years of age) read sentences in conversational and clear speaking style in quiet and in noise. A number of acoustic measurements were obtained. RESULTS NAS and CS were characterized by a decrease in speaking rate and an increase in 1-3 kHz energy, sound pressure level (SPL), vowel space area (VSA), and harmonics-to-noise ratio. NAS increased fundamental frequency (F0) mean and decreased jitter and shimmer. CS increased frequency and duration of pauses. Older adults produced the slowest speaking rate, longest pauses, and smallest increase in F0 mean, 1-3 kHz energy, and SPL when speaking clearly. They produced the smallest increases in VSA in NAS and CS. Children slowed down less, increased the VSA least, increased harmonics-to-noise ratio, and decreased jitter and shimmer most in CS. Children increased mean F0 and F1 most in noise. CONCLUSIONS Findings have implications for a model of speech production in healthy speakers as well as the potential to aid in clinical decision making for individuals with speech disorders, particularly dysarthria.
12. Smiljanic R, Gilbert RC. Intelligibility of Noise-Adapted and Clear Speech in Child, Young Adult, and Older Adult Talkers. Journal of Speech, Language, and Hearing Research 2017;60:3069-3080. [PMID: 29075748] [DOI: 10.1044/2017_jslhr-s-16-0165]
Abstract
PURPOSE This study examined intelligibility of conversational and clear speech sentences produced in quiet and in noise by children, young adults, and older adults. Relative talker intelligibility was assessed across speaking styles. METHOD Sixty-one young adult participants listened to sentences mixed with speech-shaped noise at -5 dB signal-to-noise ratio. The analyses examined percent correct scores across conversational, clear, and noise-adapted conditions and the three talker groups. Correlation analyses examined whether talker intelligibility is consistent across speaking style adaptations. RESULTS Noise-adapted and clear speech significantly enhanced intelligibility for young adult listeners. The intelligibility improvement varied across the three talker groups. Notably, intelligibility benefit was smallest for children's speaking style modifications. Listeners also perceived speech produced in noise by older adults to be less intelligible compared to the younger talkers. Talker intelligibility was correlated strongly between conversational and clear speech in quiet, but not for conversational speech produced in quiet and in noise. CONCLUSIONS Results provide evidence that intelligibility variation related to age and communicative barrier has the potential to aid clinical decision making for individuals with speech disorders, particularly dysarthria.
13. Fuchs S, Lancia L. Seeing Speech Production Through the Window of Complex Interactions: Introduction to the Supplement of Select Papers From the 10th International Seminar on Speech Production (ISSP) in Cologne. Journal of Speech, Language, and Hearing Research 2016;59:S1555-S1557. [PMID: 28002835] [DOI: 10.1044/2016_jslhr-s-16-0387]
Abstract
As the famous linguist and anthropologist C. Hockett noted about 30 years ago, "What one sees of language, as of anything, depends on the angle of view, and different explorers approach from different directions. Unfortunately, sometimes they become so enamored of their particular approach that they incline to scoff at any other, so that instead of everybody being the richer for the variety, everybody loses. … It is obviously impossible to see all of anything from a single vantage point. So it is never inappropriate to seek new perspectives" (Hockett, 1987, p. 1). This supplement takes such a broad perspective and contains a selection of peer-reviewed papers seeing speech production through the window of complex interactions between physical, linguistic, social, and communicative factors. Papers were presented at the 10th International Seminar on Speech Production in Cologne. We hope to encourage the reader to continue working in this exciting direction.
Affiliation(s)
- Susanne Fuchs
- Zentrum für Allgemeine Sprachwissenschaft, Berlin, Germany
- Leonardo Lancia
- Laboratoire de Phonétique et Phonologie, CNRS/Université Sorbonne Nouvelle-Paris, France