1
Franken MC, Oonk LC, Bast BJEG, Bouwen J, De Nil L. Erasmus clinical model of the onset and development of stuttering 2.0. J Fluency Disord 2024; 80:106040. [PMID: 38493582] [DOI: 10.1016/j.jfludis.2024.106040]
Abstract
A clinical, evidence-based model to inform clients and their parents about the nature of stuttering is indispensable for the field. In this paper, we propose the Erasmus Clinical Model of Stuttering 2.0 for children who stutter and their parents, and for adult clients. It provides an up-to-date clinical summary of current insights into the genetic, neurological, motoric, linguistic, sensory, temperamental, psychological and social factors (be they causal, eliciting, or maintaining) related to stuttering. First, a review is presented of current insights into these factors and of six scientific theories or models that have inspired the development of our current clinical model. We then propose the model itself, which has proven useful in clinical practice. The proposed Erasmus Clinical Model of Stuttering visualizes the onset and course of stuttering, and includes scales for stuttering severity and impact, to be completed by the person who stutters or their parent. The pathway of the model towards stuttering onset is based on predisposing and mediating factors. In most children, stuttering is transient after onset, but if stuttering continues, its severity and impact vary widely. The model includes the circle of Engel (1977), which visualizes the unique interactions of relevant biological, psychological, and social factors that determine the speaker's experience of stuttering severity and its impact. Discussing these factors and their interactions with an individual client can feed into therapeutic targets. The model is supplemented by a lifeline case illustration.
Affiliation(s)
- Marie-Christine Franken
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus University Medical Center, Rotterdam, the Netherlands.
- Leonoor C Oonk
- StotterFonds, Nijkerk, the Netherlands; University of Applied Sciences, Department of Speech-Language Therapy, Utrecht, the Netherlands
- Jan Bouwen
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus University Medical Center, Rotterdam, the Netherlands.
- Luc De Nil
- Department of Speech-Language Pathology, University of Toronto, Canada; Rehabilitation Sciences Institute, University of Toronto, Canada.
2
Anastasopoulou I, Cheyne DO, van Lieshout P, Johnson BW. Decoding kinematic information from beta-band motor rhythms of speech motor cortex: a methodological/analytic approach using concurrent speech movement tracking and magnetoencephalography. Front Hum Neurosci 2024; 18:1305058. [PMID: 38646159] [PMCID: PMC11027130] [DOI: 10.3389/fnhum.2024.1305058]
Abstract
Introduction: Articulography and functional neuroimaging are two major tools for studying the neurobiology of speech production. Until now, however, it has generally not been feasible to use both in the same experimental setup because of technical incompatibilities between the two methodologies. Methods: Here we describe results from a novel articulography system dubbed Magneto-articulography for the Assessment of Speech Kinematics (MASK), which is technically compatible with magnetoencephalography (MEG) brain scanning systems. In the present paper we describe our methodological and analytic approach for extracting brain motor activities related to key kinematic and coordination event parameters derived from time-registered MASK tracking measurements. Data were collected from 10 healthy adults with tracking coils on the tongue, lips, and jaw. Analyses targeted the gestural landmarks of reiterated utterances /ipa/ and /api/, produced at normal and faster rates. Results: The results show that (1) speech sensorimotor cortex can be reliably located in peri-rolandic regions of the left hemisphere; (2) mu (8-12 Hz) and beta-band (13-30 Hz) neuromotor oscillations are present in the speech signals and contain information structures that are independent of those present in higher-frequency bands; and (3) hypotheses concerning the information content of speech motor rhythms can be systematically evaluated with multivariate pattern-analytic techniques. Discussion: These results show that MASK provides the capability for deriving subject-specific articulatory parameters, based on well-established and robust motor control parameters, in the same experimental setup as the MEG brain recordings and in temporal and spatial co-registration with the brain data. The analytic approach described here provides new capabilities for testing hypotheses concerning the types of kinematic information that are encoded and processed within specific components of the speech neuromotor system.
Affiliation(s)
- Douglas Owen Cheyne
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Hospital for Sick Children Research Institute, Toronto, ON, Canada
- Pascal van Lieshout
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
3
Gutz SE, Maffei MF, Green JR. Feedback From Automatic Speech Recognition to Elicit Clear Speech in Healthy Speakers. Am J Speech Lang Pathol 2023; 32:2940-2959. [PMID: 37824377] [PMCID: PMC10721250] [DOI: 10.1044/2023_ajslp-23-00030]
Abstract
PURPOSE: This study assessed the effectiveness of feedback generated by automatic speech recognition (ASR) for eliciting clear speech from young, healthy individuals. As a preliminary step toward exploring a novel method for eliciting clear speech in patients with dysarthria, we investigated the effects of ASR feedback in healthy controls. If successful, ASR feedback has the potential to facilitate independent, at-home clear speech practice. METHOD: Twenty-three healthy control speakers (ages 23-40 years) read sentences aloud in three speaking modes: Habitual, Clear (over-enunciated), and in response to ASR feedback (ASR). In the ASR condition, we used Mozilla DeepSpeech to transcribe speech samples and provide participants with a value indicating the accuracy of the ASR's transcription. For speakers who achieved sufficiently high ASR accuracy, noise was added to their speech at a participant-specific signal-to-noise ratio to ensure that each participant had to over-enunciate to achieve high ASR accuracy. RESULTS: Compared to habitual speech, speech produced in the ASR and Clear conditions was clearer, as rated by speech-language pathologists, and more intelligible, per speech-language pathologist transcriptions. Speech in the Clear and ASR conditions aligned on several acoustic measures, particularly those associated with increased vowel distinctiveness and decreased speaking rate. However, ASR accuracy, intelligibility, and clarity were each correlated with different speech features, which may have implications for how people change their speech in response to ASR feedback. CONCLUSIONS: ASR successfully elicited outcomes similar to clear speech in healthy speakers. Future work should investigate its efficacy in eliciting clear speech in people with dysarthria.
Affiliation(s)
- Sarah E. Gutz
- Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA
- Marc F. Maffei
- Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA
- Jordan R. Green
- Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA
4
Gutz SE, Rowe HP, Tilton-Bolowsky VE, Green JR. Speaking with a KN95 face mask: a within-subjects study on speaker adaptation and strategies to improve intelligibility. Cogn Res Princ Implic 2022; 7:73. [PMID: 35907167] [PMCID: PMC9339031] [DOI: 10.1186/s41235-022-00423-4]
Abstract
Mask-wearing during the COVID-19 pandemic has prompted a growing interest in the functional impact of masks on speech and communication. Prior work has shown that masks dampen sound, impede visual communication cues, and reduce intelligibility. However, more work is needed to understand how speakers change their speech while wearing a mask and to identify strategies to overcome the impact of wearing a mask. Data were collected from 19 healthy adults during a single in-person session. We investigated the effects of wearing a KN95 mask on speech intelligibility, as judged by two speech-language pathologists, examined speech kinematics and acoustics associated with mask-wearing, and explored KN95 acoustic filtering. We then considered the efficacy of three speaking strategies to improve speech intelligibility: Loud, Clear, and Slow speech. To inform speaker strategy recommendations, we related findings to self-reported speaker effort. Results indicated that healthy speakers could compensate for the presence of a mask and achieve normal speech intelligibility. Additionally, we showed that speaking loudly or clearly (and, to a lesser extent, slowly) improved speech intelligibility. However, using these strategies may require increased physical and cognitive effort and should be used only when necessary. These results can inform recommendations for speakers wearing masks, particularly those with communication disorders (e.g., dysarthria) who may struggle to adapt to a mask but can respond to explicit instructions. Such recommendations may further help non-native speakers and those communicating in a noisy environment or with listeners with hearing loss.
Affiliation(s)
- Sarah E. Gutz
- Program in Speech and Hearing Bioscience and Technology, Harvard Medical School, Boston, MA USA
- Hannah P. Rowe
- Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Building 79/96, 2nd floor, 13th Street, Boston, MA 02129 USA
- Victoria E. Tilton-Bolowsky
- Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Building 79/96, 2nd floor, 13th Street, Boston, MA 02129 USA
- Jordan R. Green
- Program in Speech and Hearing Bioscience and Technology, Harvard Medical School, Boston, MA USA
- Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Building 79/96, 2nd floor, 13th Street, Boston, MA 02129 USA
5
Anastasopoulou I, van Lieshout P, Cheyne DO, Johnson BW. Speech Kinematics and Coordination Measured With an MEG-Compatible Speech Tracking System. Front Neurol 2022; 13:828237. [PMID: 35837226] [PMCID: PMC9273948] [DOI: 10.3389/fneur.2022.828237]
Abstract
Articulography and functional neuroimaging are two major tools for studying the neurobiology of speech production. Until recently, however, it has generally not been possible to use both in the same experimental setup because of technical incompatibilities between the two methodologies. Here we describe results from a novel articulography system dubbed Magneto-articulography for the Assessment of Speech Kinematics (MASK), which we used to derive kinematic profiles of oro-facial movements during speech. MASK was used to characterize speech kinematics in two healthy adults, and the results were compared to measurements from a separate participant with a conventional Electromagnetic Articulography (EMA) system. Analyses targeted the gestural landmarks of reiterated utterances /ipa/, /api/ and /pataka/. The results demonstrate that MASK reliably characterizes key kinematic and movement coordination parameters of speech motor control. Since these parameters are intrinsically registered in time with concurrent magnetoencephalographic (MEG) measurements of neuromotor brain activity, this methodology paves the way for innovative cross-disciplinary studies of the neuromotor control of human speech production, speech development, and speech motor disorders.
Affiliation(s)
- Ioanna Anastasopoulou
- School of Psychological Sciences, Macquarie University, Sydney, NSW, Australia
- Pascal van Lieshout
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Douglas O. Cheyne
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Hospital for Sick Children Research Institute, Toronto, ON, Canada
- Blake W. Johnson
- School of Psychological Sciences, Macquarie University, Sydney, NSW, Australia
6
Pfordresher PQ, Greenspon EB, Friedman AL, Palmer C. Spontaneous Production Rates in Music and Speech. Front Psychol 2021; 12:611867. [PMID: 34135799] [PMCID: PMC8200629] [DOI: 10.3389/fpsyg.2021.611867]
Abstract
Individuals typically produce auditory sequences, such as speech or music, at a consistent spontaneous rate or tempo. We addressed whether spontaneous rates would show patterns of convergence across the domains of music and language production when the same participants spoke sentences and performed melodic phrases on a piano. Although timing plays a critical role in both domains, different communicative and motor constraints apply in each case, and so it is not clear whether music and speech would display similar timing mechanisms. We report the results of two experiments in which adult participants produced sequences from memory at a comfortable spontaneous (uncued) rate. In Experiment 1, monolingual pianists in Buffalo, New York engaged in three production tasks: speaking sentences from memory, performing short melodies from memory, and tapping isochronously. In Experiment 2, English-French bilingual pianists in Montréal, Canada produced melodies on a piano as in Experiment 1, and spoke short rhythmically structured phrases repeatedly. Both experiments led to the same pattern of results. Participants exhibited consistent spontaneous rates within each task: people who produced one spoken phrase rapidly were likely to produce another spoken phrase rapidly, and this consistency across stimuli was also found for performance of different musical melodies. In general, spontaneous rates across speech and music tasks were not correlated, whereas rates of tapping and music were correlated. Speech rates (for syllables) were faster than music rates (for tones), and speech showed a smaller range of spontaneous rates across individuals than did music or tapping rates. Taken together, these results suggest that spontaneous rate reflects cumulative influences of endogenous rhythms (in consistent self-generated rates within domain), peripheral motor constraints (in finger movements across tapping and music), and communicative goals based on the cultural transmission of auditory information (slower rates for to-be-synchronized music than for speech).
Affiliation(s)
- Peter Q. Pfordresher
- Department of Psychology, University at Buffalo, State University of New York, Buffalo, NY, United States
- Department of Psychology, McGill University, Montreal, QC, Canada
- Emma B. Greenspon
- Department of Psychology, University at Buffalo, State University of New York, Buffalo, NY, United States
- Department of Psychology, Monmouth University, West Long Branch, NJ, United States
- Amy L. Friedman
- Department of Psychology, McGill University, Montreal, QC, Canada
- Caroline Palmer
- Department of Psychology, McGill University, Montreal, QC, Canada
7
Verdurand M, Rossato S, Zmarich C. Coarticulatory Aspects of the Fluent Speech of French and Italian People Who Stutter Under Altered Auditory Feedback. Front Psychol 2020; 11:1745. [PMID: 32793069] [PMCID: PMC7390966] [DOI: 10.3389/fpsyg.2020.01745]
Abstract
A number of studies have shown that phonetic peculiarities, especially at the coarticulation level, exist in the disfluent as well as in the perceptibly fluent speech of people who stutter (PWS). However, results from fluent speech are very disparate and not easily interpretable. Are the coarticulatory features observed in the fluent speech of PWS a manifestation of the disorder, or rather a compensation for it? The purpose of the present study is to investigate coarticulatory behavior in the fluent speech of PWS in an attempt to answer the question of its symptomatic or adaptive nature. To achieve this, we studied the speech of 21 adult PWS (10 French and 11 Italian) compared to that of 20 fluent adults (10 French and 10 Italian). The participants had to repeat simple CV syllables in short carrier sentences, where C = /b, d, g/ and V = /a, i, u/. Crucially, this repetition task was performed in order to compare the fluent-speech coarticulation of PWS to that of people who do not stutter (PWNS), and to compare the coarticulation of PWS under a condition with normal auditory feedback (NAF) and under a fluency-enhancing condition with altered auditory feedback (AAF). This is the first study, to our knowledge, to investigate coarticulatory behavior under AAF. The degree of coarticulation was measured by means of Locus Equations (LE). The coarticulation degree observed in fluent PWS speech is lower than that of PWNS, and, more importantly, in the AAF condition, PWS coarticulation appears even weaker than in the NAF condition. These results allow us to interpret the lower degree of coarticulation found in the fluent speech of PWS under the NAF condition as a compensation for the disorder, based on the fact that PWS coarticulation weakens in fluency-enhancing conditions, moving further away from the degree of coarticulation observed in PWNS. Since a lower degree of coarticulation is associated with a greater separation between the places of articulation of the consonant and the vowel, these results are compatible with the hypothesis that larger articulatory movements could be responsible for stabilizing the PWS speech motor system by increasing kinesthetic feedback from the effector system. This interpretation shares with a number of relatively recent proposals the idea that stuttering derives from an impaired feedforward (open-loop) control system, which makes PWS rely more heavily on a feedback-based (closed-loop) motor control strategy.
Affiliation(s)
- Marine Verdurand
- Speech Therapy Study, Cabestany, France; Université Grenoble Alpes, CNRS, Grenoble INP, LIG, Grenoble, France
- Solange Rossato
- Université Grenoble Alpes, CNRS, Grenoble INP, LIG, Grenoble, France
- Claudio Zmarich
- Institute of Cognitive Sciences and Technologies, National Research Council, Padua, Italy
8
Namasivayam AK, Coleman D, O’Dwyer A, van Lieshout P. Speech Sound Disorders in Children: An Articulatory Phonology Perspective. Front Psychol 2020; 10:2998. [PMID: 32047453] [PMCID: PMC6997346] [DOI: 10.3389/fpsyg.2019.02998]
Abstract
Speech Sound Disorders (SSDs) is a generic term used to describe a range of difficulties producing speech sounds in children (McLeod and Baker, 2017). The foundations of clinical assessment, classification and intervention for children with SSD have been heavily influenced by psycholinguistic theory and procedures, which largely posit a firm boundary between phonological processes and phonetics/articulation (Shriberg, 2010). Thus, in many current SSD classification systems, the complex relationships between the etiology (distal), processing deficits (proximal) and the behavioral levels (speech symptoms) are under-specified (Terband et al., 2019a). It is critical to understand the complex interactions between these levels, as they have implications for differential diagnosis and treatment planning (Terband et al., 2019a). Some theoretical attempts have been made toward understanding these interactions (e.g., McAllister Byun and Tessier, 2016), and characterizing speech patterns in children either solely as the product of speech motor performance limitations or purely as a consequence of phonological/grammatical competence has been challenged (Inkelas and Rose, 2007; McAllister Byun, 2012). In the present paper, we intend to reconcile the phonetics-phonology dichotomy and discuss the interconnectedness between these levels and the nature of SSDs using an alternative perspective based on the notion of an articulatory "gesture" within the broader concepts of the Articulatory Phonology model (AP; Browman and Goldstein, 1992). The articulatory "gesture" serves as a unit of phonological contrast and a characterization of the resulting articulatory movements (Browman and Goldstein, 1992; van Lieshout and Goldstein, 2008). We present evidence supporting the notion of articulatory gestures at the level of speech production and as reflected in control processes in the brain, and discuss how an articulatory "gesture"-based approach can account for articulatory behaviors in typical and disordered speech production (van Lieshout, 2004; Pouplier and van Lieshout, 2016). Specifically, we discuss how the AP model can provide an explanatory framework for understanding SSDs in children. Although other theories may be able to provide alternate explanations for some of the issues we will discuss, the AP framework in our view generates a unique scope that covers linguistic (phonology) and motor processes in a unified manner.
Affiliation(s)
- Aravind Kumar Namasivayam
- Oral Dynamics Laboratory, Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Deirdre Coleman
- Oral Dynamics Laboratory, Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Independent Researcher, Surrey, BC, Canada
- Aisling O’Dwyer
- Oral Dynamics Laboratory, Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- St. James’s Hospital, Dublin, Ireland
- Pascal van Lieshout
- Oral Dynamics Laboratory, Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Rehabilitation Sciences Institute, University of Toronto, Toronto, ON, Canada
9
Neumann K, Foundas AL. From locations to networks: Can brain imaging inform treatment of stuttering? J Fluency Disord 2018; 55:1-5. [PMID: 29054456] [DOI: 10.1016/j.jfludis.2017.08.001]
Affiliation(s)
- Katrin Neumann
- Department of Phoniatrics and Pediatric Audiology, Clinic of Otorhinolaryngology, Head and Neck Surgery, St. Elisabeth-Hospital, Ruhr University Bochum, Bleichstr. 16, 44787 Bochum, Germany.
- Anne L Foundas
- Brain Institute of Louisiana, Department of Communication Sciences and Disorders, 74 Hatcher Hall, Louisiana State University, Baton Rouge, LA 70803, United States