1
Namasivayam AK, Cheung K, Atputhajeyam B, Petrosov J, Branham M, Grover V, van Lieshout P. Effectiveness of the Kaufman Speech to Language Protocol for Children With Childhood Apraxia of Speech and Comorbidities When Delivered in a Dyadic and Group Format. American Journal of Speech-Language Pathology 2024:1-17. [PMID: 39353057] [DOI: 10.1044/2024_ajslp-24-00098]
Abstract
PURPOSE The current study is a Phase I clinical study with the goal of determining the feasibility and effectiveness of the Kaufman Speech to Language Protocol (K-SLP) for children with childhood apraxia of speech (CAS) and comorbidities. We hypothesized that K-SLP intervention would result in improved outcomes and maintenance of the treatment effect at 3-4 months postintervention. METHOD A single-subject experimental design with multiple baselines across behaviors was replicated across a group of six children. Five of the six participants completed the study. The K-SLP intervention was administered in dyads four times a week for three consecutive weeks. Outcomes included assessment of word/syllable shapes, articulation accuracy, speech intelligibility, and functional communication. Treatment progress was measured through (a) the administration of custom probe word lists and (b) assessments carried out at pretreatment, immediately following intervention, and approximately 3-4 months after the study period. RESULTS Four of five participants demonstrated significant improvements on words targeted in treatment, and three of five generalized these gains to untreated words. Furthermore, three of five participants showed immediate and clinically significant posttreatment improvements in speech intelligibility and functional outcomes, and this increased to four of five participants at the 3-4 months follow-up. CONCLUSIONS The study provides preliminary support for the effectiveness of the K-SLP program when delivered in dyads to children with CAS and comorbidities. The study replicates earlier findings and reaffirms the positive outcomes of K-SLP for children with CAS.
Affiliation(s)
- Aravind K Namasivayam
- Oral Dynamics Laboratory, Department of Speech-Language Pathology, University of Toronto, Ontario, Canada
- Speech Research Centre Inc., Toronto, Ontario, Canada
- Karina Cheung
- Oral Dynamics Laboratory, Department of Speech-Language Pathology, University of Toronto, Ontario, Canada
- Speech Research Centre Inc., Toronto, Ontario, Canada
- Bavika Atputhajeyam
- Oral Dynamics Laboratory, Department of Speech-Language Pathology, University of Toronto, Ontario, Canada
- Speech Research Centre Inc., Toronto, Ontario, Canada
- Julia Petrosov
- Oral Dynamics Laboratory, Department of Speech-Language Pathology, University of Toronto, Ontario, Canada
- Speech Research Centre Inc., Toronto, Ontario, Canada
- Pascal van Lieshout
- Oral Dynamics Laboratory, Department of Speech-Language Pathology, University of Toronto, Ontario, Canada
2
Namasivayam AK, Coleman D, O’Dwyer A, van Lieshout P. Speech Sound Disorders in Children: An Articulatory Phonology Perspective. Front Psychol 2020; 10:2998. [PMID: 32047453] [PMCID: PMC6997346] [DOI: 10.3389/fpsyg.2019.02998]
Abstract
Speech Sound Disorders (SSDs) is a generic term used to describe a range of difficulties producing speech sounds in children (McLeod and Baker, 2017). The foundations of clinical assessment, classification and intervention for children with SSD have been heavily influenced by psycholinguistic theory and procedures, which largely posit a firm boundary between phonological processes and phonetics/articulation (Shriberg, 2010). Thus, in many current SSD classification systems, the complex relationships between the etiology (distal), processing deficits (proximal) and the behavioral levels (speech symptoms) are under-specified (Terband et al., 2019a). It is critical to understand the complex interactions between these levels as they have implications for differential diagnosis and treatment planning (Terband et al., 2019a). There have been some theoretical attempts made towards understanding these interactions (e.g., McAllister Byun and Tessier, 2016), and characterizing speech patterns in children either solely as the product of speech motor performance limitations or purely as a consequence of phonological/grammatical competence has been challenged (Inkelas and Rose, 2007; McAllister Byun, 2012). In the present paper, we intend to reconcile the phonetics-phonology dichotomy and discuss the interconnectedness between these levels and the nature of SSDs using an alternative perspective based on the notion of an articulatory "gesture" within the broader concepts of the Articulatory Phonology model (AP; Browman and Goldstein, 1992). The articulatory "gesture" serves as a unit of phonological contrast and characterization of the resulting articulatory movements (Browman and Goldstein, 1992; van Lieshout and Goldstein, 2008). We present evidence supporting the notion of articulatory gestures at the level of speech production and as reflected in control processes in the brain, and discuss how an articulatory "gesture"-based approach can account for articulatory behaviors in typical and disordered speech production (van Lieshout, 2004; Pouplier and van Lieshout, 2016). Specifically, we discuss how the AP model can provide an explanatory framework for understanding SSDs in children. Although other theories may be able to provide alternate explanations for some of the issues we will discuss, the AP framework in our view generates a unique scope that covers linguistic (phonology) and motor processes in a unified manner.
Affiliation(s)
- Aravind Kumar Namasivayam
- Oral Dynamics Laboratory, Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Deirdre Coleman
- Oral Dynamics Laboratory, Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Independent Researcher, Surrey, BC, Canada
- Aisling O’Dwyer
- Oral Dynamics Laboratory, Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- St. James’s Hospital, Dublin, Ireland
- Pascal van Lieshout
- Oral Dynamics Laboratory, Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Rehabilitation Sciences Institute, University of Toronto, Toronto, ON, Canada
3
Tiede M, Mooshammer C, Goldstein L. Noggin Nodding: Head Movement Correlates With Increased Effort in Accelerating Speech Production Tasks. Front Psychol 2019; 10:2459. [PMID: 31827451] [PMCID: PMC6890824] [DOI: 10.3389/fpsyg.2019.02459]
Abstract
Movements of the head and speech articulators have been observed in tandem during an alternating word pair production task driven by an accelerating rate metronome. Word pairs contrasted either onset or coda dissimilarity with same word controls. Results show that as production effort increased, so did speaker head nodding, and that nodding increased abruptly following errors. More errors occurred under faster production rates, and in coda rather than onset alternations. The greatest entrainment between head and articulators was observed at the fastest rate under coda alternation. Neither jaw coupling nor imposed prosodic stress was observed to be a primary driver of head movement. In alternating pairs, nodding frequency tracked the slower alternation rate rather than the syllable rate, interpreted as recruitment of additional degrees of freedom to stabilize the alternation pattern under increasing production rate pressure.
Affiliation(s)
- Mark Tiede
- Haskins Laboratories, New Haven, CT, United States
- Christine Mooshammer
- Haskins Laboratories, New Haven, CT, United States
- Institut für Deutsche Sprache und Linguistik, Humboldt-Universität zu Berlin, Berlin, Germany
- Louis Goldstein
- Haskins Laboratories, New Haven, CT, United States
- Department of Linguistics, University of Southern California, Los Angeles, CA, United States
4
Goldrick M, McClain R, Cibelli E, Adi Y, Gustafson E, Moers C, Keshet J. The influence of lexical selection disruptions on articulation. J Exp Psychol Learn Mem Cogn 2019; 45:1107-1141. [PMID: 30024252] [PMCID: PMC6339616] [DOI: 10.1037/xlm0000633]
Abstract
Interactive models of language production predict that it should be possible to observe long-distance interactions: effects that arise at one level of processing influence multiple subsequent stages of representation and processing. We examine the hypothesis that disruptions arising at non-form-based levels of planning, specifically lexical selection, should modulate articulatory processing. A novel automatic phonetic analysis method was used to examine productions in a paradigm yielding both general disruptions to formulation processes and, more specifically, overt errors during lexical selection. This analysis method allowed us to examine articulatory disruptions at multiple levels of analysis, from whole words to individual segments. Baseline performance by young adults was contrasted with young speakers' performance under time pressure (which previous work has argued increases interaction between planning and articulation) and performance by older adults (who may have difficulties inhibiting nontarget representations, leading to heightened interactive effects). The results revealed the presence of interactive effects. Our new analysis techniques revealed that these effects were strongest in initial portions of responses, suggesting that speech is initiated as soon as the first segment has been planned. Interactive effects did not increase under response pressure, suggesting that the interaction between planning and articulation is relatively fixed. Unexpectedly, lexical selection disruptions appeared to yield some degree of facilitation in articulatory processing (possibly reflecting semantic facilitation of target retrieval), and older adults showed weaker, not stronger, interactive effects (possibly reflecting weakened connections between lexical and form-level representations).
5
Mooshammer C, Tiede M, Shattuck-Hufnagel S, Goldstein L. Towards the Quantification of Peggy Babcock: Speech Errors and Their Position within the Word. Phonetica 2018; 76:363-396. [PMID: 30481752] [DOI: 10.1159/000494140]
Abstract
Sequences of similar (i.e., partially identical) words can be hard to say, as indicated by error frequencies and longer reaction and execution times. This study investigates the role of the location of this partial identity and the accompanying differences, i.e., whether errors are more frequent with mismatches in word onsets (top cop), codas (top tock), or both (pop tot). Number of syllables (tippy ticky) and empty positions (top ta) were also varied. Since the gradient nature of errors can be difficult to determine acoustically, articulatory data were investigated. Articulator movements were recorded using electromagnetic articulography for up to 9 speakers of American English repeatedly producing 2-word sequences to an accelerating metronome. Most word pairs showed more intrusions and greater variability in coda than in onset position, in contrast to the predominance of onset-position errors in corpora based on perceptual observation.
Affiliation(s)
- Christine Mooshammer
- Institut für deutsche Sprache und Linguistik, Humboldt-Universität zu Berlin, Berlin, Germany
- Haskins Laboratories, New Haven, Connecticut, USA
- Mark Tiede
- Haskins Laboratories, New Haven, Connecticut, USA
- Louis Goldstein
- Haskins Laboratories, New Haven, Connecticut, USA
- Department of Linguistics, University of Southern California, Los Angeles, California, USA
6
Hoole P, Pouplier M. Öhman returns: New horizons in the collection and analysis of imaging data in speech production research. Comput Speech Lang 2017. [DOI: 10.1016/j.csl.2017.03.002]
7
Kember H, Connaghan K, Patel R. Inducing speech errors in dysarthria using tongue twisters. International Journal of Language & Communication Disorders 2017; 52:469-478. [PMID: 27891744] [DOI: 10.1111/1460-6984.12285]
Abstract
Although tongue twisters have been widely used to study speech production in healthy speakers, few studies have employed this methodology with individuals with speech impairment. The present study compared tongue twister errors produced by adults with dysarthria and age-matched healthy controls. Eight speakers (four female, four male; mean age = 54.5 years) with spastic (mixed-spastic) dysarthria of varying aetiology (cerebral palsy, multiple sclerosis, multiple system atrophy) and eight controls (four female, four male; mean age = 56.9 years) were audio-recorded producing tongue twisters. One word in each tongue twister was marked for prominence. Speakers with dysarthria produced significantly more errors and spoke more slowly than healthy controls. The effect of prominence was significant for both groups: words spoken with prosodic prominence were significantly less error prone than words without prominence. While both groups produced most errors on words in the third position (of four-word utterances), speakers with dysarthria also produced high rates of errors on the first and fourth words. This preliminary investigation demonstrates the promise of applying the tongue twister paradigm to speakers with dysarthria and contributes to the evidence base for the implementation of prosodic strategies in speech intervention.
Affiliation(s)
- Heather Kember
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Australia
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA
- Kathryn Connaghan
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA
- Rupal Patel
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA
- College of Computer and Information Science, Northeastern University, Boston, MA, USA
8
Hagedorn C, Proctor M, Goldstein L, Wilson SM, Miller B, Gorno-Tempini ML, Narayanan SS. Characterizing Articulation in Apraxic Speech Using Real-Time Magnetic Resonance Imaging. Journal of Speech, Language, and Hearing Research 2017; 60:877-891. [PMID: 28314241] [PMCID: PMC5548083] [DOI: 10.1044/2016_jslhr-s-15-0112]
Abstract
PURPOSE Real-time magnetic resonance imaging (MRI) and accompanying analytical methods are shown to capture and quantify salient aspects of apraxic speech, substantiating and expanding upon evidence provided by clinical observation and by acoustic and kinematic data. An analysis of apraxic speech errors within a dynamic systems framework is provided, and the nature of the pathomechanisms of apraxic speech is discussed. METHOD One adult male speaker with apraxia of speech was imaged using real-time MRI while producing spontaneous speech, repeated naming tasks, and self-paced repetition of word pairs designed to elicit speech errors. Articulatory data were analyzed, and speech errors were detected using time series reflecting articulatory activity in regions of interest. RESULTS Real-time MRI captured two types of apraxic gestural intrusion errors in a word pair repetition task. Gestural intrusion errors in nonrepetitive speech, multiple silent initiation gestures at the onset of speech, and covert (unphonated) articulation of entire monosyllabic words were also captured. CONCLUSION Real-time MRI and the accompanying analytical methods capture and quantify many features of apraxic speech that have been previously observed using other modalities, while offering high spatial resolution. This patient's apraxia of speech affected the ability to select only the appropriate vocal tract gestures for a target utterance, suppressing others, and to coordinate them in time.
Affiliation(s)
- Michael Proctor
- Macquarie University, North Ryde, New South Wales, Australia
9
Slis A, van Lieshout P. The Effect of Auditory Information on Patterns of Intrusions and Reductions. Journal of Speech, Language, and Hearing Research 2016; 59:430-445. [PMID: 27232422] [DOI: 10.1044/2015_jslhr-s-14-0258]
Abstract
PURPOSE The study investigates whether auditory information affects the nature of intrusion and reduction errors in reiterated speech. These errors are hypothesized to arise as a consequence of autonomous mechanisms to stabilize movement coordination. The specific question addressed is whether this process is affected by auditory information so that it will influence the occurrence of intrusions and reductions. METHODS Fifteen speakers produced word pairs with alternating onset consonants and identical rhymes repetitively at a normal and fast speaking rate, in masked and unmasked speech. Movement ranges of the tongue tip, tongue dorsum, and lower lip during onset consonants were retrieved from kinematic data collected with electromagnetic articulography. Reductions and intrusions were defined as statistical outliers from movement range distributions of target and nontarget articulators, respectively. RESULTS Regardless of masking condition, the number of intrusions and reductions increased during the course of a trial, suggesting movement stabilization. However, compared with unmasked speech, speakers made fewer intrusions in masked speech. The number of reductions was not significantly affected. CONCLUSIONS Masking of auditory information resulted in fewer intrusions, suggesting that speakers were able to pay closer attention to their articulatory movements. This highlights a possible stabilizing role for proprioceptive information in speech movement coordination.
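The study's operational definition of errors — reductions as downward outliers in the target articulator's movement-range distribution, intrusions as upward outliers in a nontarget articulator's distribution — can be sketched in a few lines. The mean ± k·SD criterion, the function names, and the data below are illustrative assumptions, not the authors' actual analysis code.

```python
import statistics

def outlier_bounds(baseline, k=2.0):
    """Mean +/- k*SD bounds from a baseline movement-range distribution
    (hypothetical criterion; the study's exact cutoff may differ)."""
    mu = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return mu - k * sd, mu + k * sd

def count_errors(target_ranges, nontarget_ranges,
                 target_baseline, nontarget_baseline, k=2.0):
    """Reductions: target-articulator ranges falling below baseline bounds.
    Intrusions: nontarget-articulator ranges rising above baseline bounds."""
    lo_target, _ = outlier_bounds(target_baseline, k)
    _, hi_nontarget = outlier_bounds(nontarget_baseline, k)
    reductions = sum(1 for r in target_ranges if r < lo_target)
    intrusions = sum(1 for r in nontarget_ranges if r > hi_nontarget)
    return intrusions, reductions
```

Counting outliers per trial position then lets one ask, as the study does, whether intrusion and reduction counts climb over the course of a trial.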
10
Automatic analysis of slips of the tongue: Insights into the cognitive architecture of speech production. Cognition 2016; 149:31-39. [PMID: 26779665] [DOI: 10.1016/j.cognition.2016.01.002]
Abstract
Traces of the cognitive mechanisms underlying speaking can be found within subtle variations in how we pronounce sounds. While speech errors have traditionally been seen as categorical substitutions of one sound for another, acoustic/articulatory analyses show they partially reflect the intended sound. When "pig" is mispronounced as "big," the resulting /b/ sound differs from correct productions of "big," moving towards the intended "pig" — revealing the role of graded sound representations in speech production. Investigating the origins of such phenomena requires detailed estimation of speech sound distributions; this has been hampered by reliance on subjective, labor-intensive manual annotation. Computational methods can address these issues by providing objective, automatic measurements. We develop a novel high-precision computational approach, based on a set of machine learning algorithms, for measurement of elicited speech. The algorithms are trained on existing manually labeled data to detect and locate linguistically relevant acoustic properties with high accuracy. Our approach is robust, is designed to handle mis-productions, and overall matches the performance of expert coders. It allows us to analyze a very large dataset of speech errors (containing far more errors than the total in the existing literature), illuminating properties of speech sound distributions previously impossible to reliably observe. We argue that this provides novel evidence that two sources both contribute to deviations in speech errors: planning processes specifying the targets of articulation and articulatory processes specifying the motor movements that execute this plan. These findings illustrate how a much richer picture of speech provides an opportunity to gain novel insights into language processing.
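The gradedness claim above — an errorful /b/ whose acoustics drift toward the intended /p/ — is often quantified along a single cue such as voice onset time (VOT). The sketch below expresses that idea as the fraction of the distance between category norms that error tokens cover; the measure, function name, and VOT values (in ms) are illustrative assumptions, not the paper's actual algorithm.

```python
import statistics

def trace_toward_intended(error_vots, produced_norm_vots, intended_norm_vots):
    """Fraction of the VOT gap between the produced category's norm and the
    intended category's norm covered by error tokens: 0 means a canonical
    token of the produced sound, 1 a canonical token of the intended sound."""
    err = statistics.mean(error_vots)
    produced = statistics.mean(produced_norm_vots)
    intended = statistics.mean(intended_norm_vots)
    return (err - produced) / (intended - produced)

# "pig" -> "big" errors: /b/ tokens with VOT pulled toward the /p/ norm.
shift = trace_toward_intended([20, 22, 24],   # error /b/ tokens (hypothetical)
                              [10, 12, 11],   # correct /b/ productions
                              [60, 58, 62])   # correct /p/ productions
```

A value strictly between 0 and 1 is the signature of a graded, rather than categorical, substitution.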
11
Kember H, Croot K, Patrick E. Phonological Encoding in Mandarin Chinese: Evidence from Tongue Twisters. Language and Speech 2015; 58:417-440. [PMID: 27483738] [DOI: 10.1177/0023830914562654]
Abstract
Models of connected speech production in Mandarin Chinese must specify how lexical tone, speech segments, and phrase-level prosody are integrated in speech production. This study used tongue twisters to test predictions of two different models of word form encoding. Tongue twisters were constructed from 5 sets of characters that rotated pairs of initial segments or pairs of tones, or both, across format (ABAB, ABBA), and across position of the characters in four-character tongue twister strings. Fifty-two native Mandarin Chinese speakers read aloud 120 tongue twisters, repeating each one six times in a row. They made a total of 3503 (2.34%) segment errors and 1372 (0.92%) tone errors. Segment errors occurred on the onsets of the first and third characters in the ABBA but not ABAB segment-alternating tongue twisters, and on the onsets of the second and fourth characters of the tone-alternating tongue twisters. Tone errors were highest on the third and fourth characters in the tone-alternating tongue twisters. The pattern of tone errors is consistent with the claim that tone is associated to a metrical frame prior to segment encoding, while the format-by-position interaction found for the segment-alternating tongue twisters suggests that articulatory gestures oscillate in segment production, as proposed by gestural phonology.
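The reported error percentages appear consistent with a denominator of all scored character productions (52 speakers × 120 twisters × 6 repetitions × 4 characters). The arithmetic below is an inference from the numbers in the abstract, not a description of the authors' scoring procedure.

```python
# Inferred denominator: every character token produced in the experiment.
speakers, twisters, repetitions, characters = 52, 120, 6, 4
total_tokens = speakers * twisters * repetitions * characters  # 149,760

segment_error_rate = 100 * 3503 / total_tokens  # matches the reported 2.34%
tone_error_rate = 100 * 1372 / total_tokens     # matches the reported 0.92%
```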
12

13
Ziegler W, Aichert I. How much is a word? Predicting ease of articulation planning from apraxic speech error patterns. Cortex 2015; 69:24-39. [PMID: 25967085] [DOI: 10.1016/j.cortex.2015.04.001]
Abstract
BACKGROUND According to intuitive concepts, 'ease of articulation' is influenced by factors like word length or the presence of consonant clusters in an utterance. Imaging studies of speech motor control use these factors to systematically tax the speech motor system. Evidence from apraxia of speech, a disorder supposed to result from speech motor planning impairment after lesions to speech motor centers in the left hemisphere, supports the relevance of these and other factors in disordered speech planning and the genesis of apraxic speech errors. Yet, there is no unified account of the structural properties rendering a word easy or difficult to pronounce. AIM To model the motor planning demands of word articulation by a nonlinear regression model trained to predict the likelihood of accurate word production in apraxia of speech. METHOD We used a tree-structure model in which vocal tract gestures are embedded in hierarchically nested prosodic domains to derive a recursive set of terms for the computation of the likelihood of accurate word production. The model was trained with accuracy data from a set of 136 words averaged over 66 samples from apraxic speakers. In a second step, the model coefficients were used to predict a test dataset of accuracy values for 96 new words, averaged over 120 samples produced by a different group of apraxic speakers. RESULTS Accurate modeling of the first dataset was achieved in the training study (R²adj = .71). In the cross-validation, the test dataset was predicted with a high accuracy as well (R²adj = .67). The model shape, as reflected by the coefficient estimates, was consistent with current phonetic theories and with clinical evidence. In accordance with phonetic and psycholinguistic work, a strong influence of word stress on articulation errors was found. CONCLUSIONS The proposed model provides a unified and transparent account of the motor planning requirements of word articulation.
Affiliation(s)
- Wolfram Ziegler
- EKN - Clinical Neuropsychology Research Group, Clinic for Neuropsychology, City Hospital, Munich, Germany
- Ingrid Aichert
- EKN - Clinical Neuropsychology Research Group, Clinic for Neuropsychology, City Hospital, Munich, Germany
14
Barberena LDS, Brasil BDC, Melo RM, Mezzomo CL, Mota HB, Keske-Soares M. Ultrasound applicability in Speech Language Pathology and Audiology. Codas 2014; 26:520-530. [DOI: 10.1590/2317-1782/20142013086]
Abstract
PURPOSE: To present recent studies that used ultrasound in the fields of Speech Language Pathology and Audiology, which evidence possibilities of the applicability of this technique in different subareas. RESEARCH STRATEGY: A bibliographic search was carried out in the PubMed database, using the keywords "ultrasonic," "speech," "phonetics," "Speech, Language and Hearing Sciences," "voice," "deglutition," and "myofunctional therapy," comprising some areas of Speech Language Pathology and Audiology Sciences. The keywords "ultrasound," "ultrasonography," "swallow," "orofacial myofunctional therapy," and "orofacial myology" were also used in the search. SELECTION CRITERIA: Studies in humans from the past 5 years were selected. In the preselection, duplicated studies, articles not fully available, and those that did not present a direct relation between ultrasound and Speech Language Pathology and Audiology Sciences were discarded. DATA ANALYSIS: The data were analyzed descriptively and classified into subareas of Speech Language Pathology and Audiology Sciences. The following items were considered: purposes, participants, procedures, and results. RESULTS: We selected 12 articles for the ultrasound versus speech/phonetics subarea, 5 for ultrasound versus voice, 1 for ultrasound versus muscles of mastication, and 10 for ultrasound versus swallow. Studies relating "ultrasound" and "Speech Language Pathology and Audiology Sciences" in the past 5 years were not found. CONCLUSION: Different studies on the use of ultrasound in Speech Language Pathology and Audiology Sciences were found. Each of them, according to its purpose, confirms new possibilities for the use of this instrument in several subareas, aiming at a more accurate diagnosis and new evaluative and therapeutic possibilities.
Affiliation(s)
- Brunah de Castro Brasil
- Universidade Federal de Santa Maria - UFSM, Brazil; Universidade Federal do Rio Grande do Sul - UFRGS, Brazil
- Carolina Lisbôa Mezzomo
- Universidade Federal de Santa Maria - UFSM, Brazil
- Helena Bolli Mota
- Universidade Federal de Santa Maria - UFSM, Brazil
- Márcia Keske-Soares
- Universidade Federal de Santa Maria - UFSM, Brazil
15
Goldrick M, Baker HR, Murphy A, Baese-Berk M. Interaction and representational integration: evidence from speech errors. Cognition 2011; 121:58-72. [PMID: 21669409] [DOI: 10.1016/j.cognition.2011.05.006]
Abstract
We examine the mechanisms that support interaction between lexical, phonological and phonetic processes during language production. Studies of the phonetics of speech errors have provided evidence that partially activated lexical and phonological representations influence phonetic processing. We examine how these interactive effects are modulated by lexical frequency. Previous research has demonstrated that during lexical access, the processing of high frequency words is facilitated; in contrast, during phonetic encoding, the properties of low frequency words are enhanced. These contrasting effects provide the opportunity to distinguish two theoretical perspectives on how interaction between processing levels can be increased. A theory in which cascading activation is used to increase interaction predicts that the facilitation of high frequency words will enhance their influence on the phonetic properties of speech errors. Alternatively, if interaction is increased by integrating levels of representation, the phonetics of speech errors will reflect the retrieval of enhanced phonetic properties for low frequency words. Utilizing a novel statistical analysis method, we show that, in experimentally induced speech errors, low-frequency targets and outcomes exhibit enhanced phonetic processing. We sketch an interactive model of lexical, phonological and phonetic processing that accounts for the conflicting effects of lexical frequency on lexical access and phonetic processing.
Affiliation(s)
- Matthew Goldrick
- Department of Linguistics, Northwestern University, 2016 Sheridan Rd., Evanston, IL 60208, USA.
16
McMillan CT, Corley M. Cascading influences on the production of speech: evidence from articulation. Cognition 2010; 117:243-260. [PMID: 20947071] [DOI: 10.1016/j.cognition.2010.08.019]
Abstract
Recent investigations have supported the suggestion that phonological speech errors may reflect the simultaneous activation of more than one phonemic representation. This presents a challenge for speech error evidence that is based on the assumption of well-formedness, because we may continue to perceive well-formed errors, even when they are not produced. To address this issue, we present two tongue-twister experiments in which the articulation of onset consonants is quantified and compared to baseline measures from cases where there is no phonemic competition. We report three measures of articulatory variability: changes in tongue-to-palate contact using electropalatography (EPG, Experiment 1), changes in the midsagittal spline of the tongue using ultrasound (Experiment 2), and acoustic changes manifested as voice-onset time (VOT). These three sources provide converging evidence that articulatory variability increases when competing onsets differ by one phonological feature, but the increase is attenuated when onsets differ by two features. This finding provides clear evidence, based solely on production, that the articulation of phonemes is influenced by cascading activation from the speech plan.
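The cascading account summarized in this abstract can be caricatured in a few lines: if an articulatory parameter such as VOT is an activation-weighted blend of the target's and competitor's canonical values, and competitors that share more features with the target cascade more strongly, then one-feature competition yields more variable output than two-feature competition. A minimal sketch, assuming hypothetical canonical VOTs and cascade strengths (none of these numbers come from the study itself):

```python
import random
import statistics

# Rough canonical VOTs (ms) for English word-initial stops; illustrative only.
CANONICAL_VOT = {"t": 65.0, "d": 20.0, "g": 25.0}

def produced_vot(target, competitor, mean_cascade, rng):
    """One trial: blend target and competitor VOTs by a noisy cascade weight."""
    # Trial-to-trial fluctuation in how strongly the competitor's plan
    # cascades into articulation, clipped to [0, 1].
    a = min(1.0, max(0.0, rng.gauss(mean_cascade, mean_cascade / 2)))
    blend = (1 - a) * CANONICAL_VOT[target] + a * CANONICAL_VOT[competitor]
    return blend + rng.gauss(0.0, 2.0)  # baseline motor noise

rng = random.Random(0)
# /t/ vs. /d/ differ by one feature (voicing): strong cascade (hypothetical 0.3).
one_feature = [produced_vot("t", "d", 0.3, rng) for _ in range(5000)]
# /t/ vs. /g/ differ by two features (voicing and place): attenuated cascade (0.1).
two_feature = [produced_vot("t", "g", 0.1, rng) for _ in range(5000)]

sd_one = statistics.stdev(one_feature)
sd_two = statistics.stdev(two_feature)
# Variability is larger under one-feature competition, mirroring the
# pattern the EPG, ultrasound, and VOT measures converge on.
```

The design choice here is that similarity modulates the *mean* cascade weight while its trial-to-trial noise scales with that mean, so stronger competition surfaces directly as greater articulatory variability.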
Affiliation(s)
- Corey T McMillan
- Department of Neurology, University of Pennsylvania Medical Center, Philadelphia, PA 19104, USA.
17
Pouplier M, Goldstein L. Intention in Articulation: Articulatory Timing in Alternating Consonant Sequences and Its Implications for Models of Speech Production. Language and Cognitive Processes 2010; 25:616-649. [PMID: 25009365] [PMCID: PMC4085136] [DOI: 10.1080/01690960903395380]
Abstract
Several studies have reported that during the production of phrases with alternating consonants (e.g., top cop), the constriction gestures for these consonants can come to be produced in the same prevocalic position. Since these coproductions occur in contexts that also elicit segmental substitution errors, the question arises whether they result from monitoring and repair, or whether they arise from the architecture of the phonological and phonetic planning process. This paper examines the articulatory timing of the coproduced gestures in order to shed light on the underlying process that gives rise to them. Results show that the gestures are mostly synchronous at movement onset, but that it is the intended consonant that is released last. Overall, the data support the view that the activation of two gestures is inherent to the speech production process itself rather than being due to a monitoring process. We argue that the interactions between planning and articulatory dynamics apparent in our data require a more comprehensive approach to speech production than is provided by current models.
Affiliation(s)
- Marianne Pouplier
- Institute of Phonetics and Speech Processing, Ludwig-Maximilians-University Munich, Munich, Germany; Haskins Laboratories, New Haven, CT, USA.
- Louis Goldstein
- Linguistics Department, University of Southern California, Los Angeles, CA, USA; Haskins Laboratories, New Haven, CT, USA.
18
Marin S, Pouplier M, Harrington J. Acoustic consequences of articulatory variability during productions of /t/ and /k/ and its implications for speech error research. The Journal of the Acoustical Society of America 2010; 127:445-461. [PMID: 20058990] [PMCID: PMC2821172] [DOI: 10.1121/1.3268600]
Abstract
An increasing number of studies have linked certain types of articulatory or acoustic variability with speech errors, but no study has yet examined the relationship between such articulatory variability and acoustics. The present study aims to evaluate the acoustic properties of articulatorily errorful /k/ and /t/ stimuli to determine whether these errors are consistently reflected in the acoustics. The most frequent error observed in the articulatory data is the production of /k/ and /t/ with simultaneous tongue tip and tongue dorsum constrictions. Spectral analysis of these stimuli's bursts shows that /k/ and /t/ are affected differently by such co-production errors: co-production of tongue tip and tongue dorsum during intended /k/ results in typical /k/ spectra (and hence in tokens robustly classified as /k/), while co-productions during intended /t/ result in spectra with roughly equal prominence in both the mid-frequency (/k/-like) and high-frequency (/t/-like) ranges (and hence in tokens ambiguous between /k/ and /t/). This outcome is not due to an articulatory timing difference, but to a tongue dorsum constriction having an overall greater effect on the acoustics than a tongue tip constriction when the two are co-produced.
Affiliation(s)
- Stefania Marin
- Institute of Phonetics and Speech Processing, Ludwig-Maximilians-University Munich, 80799 Munich, Germany.
19
Goldrick M, Daland R. Linking speech errors and phonological grammars: Insights from Harmonic Grammar networks. Phonology 2009; 26:147-185. [PMID: 20046856] [PMCID: PMC2789494] [DOI: 10.1017/s0952675709001742]
Abstract
Phonological grammars characterize distinctions between relatively well-formed (unmarked) and relatively ill-formed (marked) phonological structures. We review evidence that markedness influences speech error probabilities. Specifically, although errors result in both unmarked and marked structures, there is a markedness asymmetry: errors are more likely to produce unmarked outcomes. We show that stochastic disruption to the computational mechanisms realizing a Harmonic Grammar (HG) can account for the broad empirical patterns of speech errors. We demonstrate that our proposal can account for the general markedness asymmetry. We also develop methods for linking particular HG proposals to speech error distributions, and illustrate these methods using a simple HG and a set of initial consonant errors in English.
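The stochastic-disruption mechanism described above can be illustrated with a toy noisy Harmonic Grammar over initial stops; the constraint weights and two-candidate set below are hypothetical, not the grammar used in the paper. Gaussian noise on harmony scores occasionally lets an unfaithful candidate win, and because a marked outcome violates both faithfulness and markedness, errors land on the unmarked candidate more often:

```python
import random

# Hypothetical constraint weights (not from the paper).
WEIGHTS = {"Faith": 4.0, "*VoicedStop": 1.0}

def harmony(candidate, target):
    """Harmony = -(weighted violations); higher is better."""
    violations = {
        "Faith": 0 if candidate == target else 1,     # be faithful to the target
        "*VoicedStop": 1 if candidate == "g" else 0,  # penalize the marked voiced stop
    }
    return -sum(WEIGHTS[c] * n for c, n in violations.items())

def produce(target, rng, noise_sd=2.0):
    """Pick the candidate whose noise-perturbed harmony is highest."""
    scored = [(harmony(c, target) + rng.gauss(0.0, noise_sd), c)
              for c in ("k", "g")]
    return max(scored)[1]

rng = random.Random(1)
trials = 20000
# Error rates in both directions: marked target /g/ slipping to unmarked [k],
# and unmarked target /k/ slipping to marked [g].
to_unmarked = sum(produce("g", rng) == "k" for _ in range(trials)) / trials
to_marked = sum(produce("k", rng) == "g" for _ in range(trials)) / trials
# Markedness asymmetry: errors toward the unmarked outcome are more frequent,
# since the marked error candidate starts from a lower (doubly penalized) harmony.
```

The asymmetry falls out of the additive harmony arithmetic: the noise must overcome a gap of Faith minus *VoicedStop in one direction, but Faith plus *VoicedStop in the other.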