1
Wulfert S, Auer P, Hanulíková A. Speech Errors in the Production of Initial Consonant Clusters: The Roles of Frequency and Sonority. J Speech Lang Hear Res. 2022;65:3709-3729. [PMID: 36198060] [DOI: 10.1044/2022_jslhr-22-00148]
Abstract
PURPOSE One of the central questions in speech production research is to what degree certain structures are inherently difficult and to what degree repeated encounter and practice make them easier to process. The goal of this article was to determine the extent to which the frequency and sonority distance of consonant clusters predict production difficulties. METHOD We used a tongue twister paradigm to elicit speech errors on syllable-initial German consonant clusters and investigated the relative influences of cluster frequency and of the sonority distance between the consonants of a cluster on production accuracy. Native speakers of German produced pairs of monosyllabic pseudowords beginning with consonant clusters at a high speech rate. RESULTS Error rates decreased with increasing frequency of the consonant clusters. A high sonority distance, on the other hand, did not facilitate a cluster's production, although speech errors led to an optimized sonority structure for a subgroup of clusters. In addition, the combination of consonant clusters within a stimulus pair had a strong effect on production accuracy. CONCLUSION These results suggest that frequency of use, sonority distance, and syntagmatic competition between adjacent sound sequences jointly codetermine production ease.
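The sonority-distance predictor used in this study can be illustrated with a minimal sketch. The five-level sonority scale below (stops < fricatives < nasals < liquids < glides) is a common textbook assumption, not necessarily the scale the authors used, and the ASCII symbol "S" stands in for German /ʃ/:

```python
# Coarse sonority scale (an illustrative assumption):
# stops=1 < fricatives=2 < nasals=3 < liquids=4 < glides=5
SONORITY = {
    **dict.fromkeys("pbtdkg", 1),  # stops
    **dict.fromkeys("fvszS", 2),   # fricatives ("S" = /ʃ/)
    **dict.fromkeys("mn", 3),      # nasals
    **dict.fromkeys("lr", 4),      # liquids
    **dict.fromkeys("jw", 5),      # glides
}

def sonority_distance(cluster: str) -> int:
    """Sonority distance of a two-consonant onset: sonority of C2 minus C1.
    Positive values mean a sonority rise toward the vowel."""
    c1, c2 = cluster
    return SONORITY[c2] - SONORITY[c1]

# German-style onsets: /pl/ and /tr/ rise steeply, /Sm/ barely rises
for onset in ["pl", "tr", "kn", "Sm"]:
    print(onset, sonority_distance(onset))
```

On this scale, "pl" scores a distance of 3 while "Sm" scores only 1, which is the sense in which a cluster can have a higher or lower sonority distance.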
Affiliation(s)
- Sophia Wulfert
- Department of German Studies, University of Freiburg, Germany
- Peter Auer
- Department of German Studies, University of Freiburg, Germany
- Adriana Hanulíková
- Department of German Studies, University of Freiburg, Germany
- Freiburg Institute for Advanced Studies, University of Freiburg, Germany
2
Braun EJ, Kiran S. Stimulus- and Person-Level Variables Influence Word Production and Response to Anomia Treatment for Individuals With Chronic Poststroke Aphasia. J Speech Lang Hear Res. 2022;65:3854-3872. [PMID: 36201169] [PMCID: PMC9927625] [DOI: 10.1044/2022_jslhr-21-00527]
Abstract
PURPOSE The impact of stimulus-level psycholinguistic variables and person-level semantic and phonological processing skills on treatment outcomes in individuals with aphasia requires further examination to inform clinical decision making in treatment prescription and stimulus selection. This study investigated the influence of stimulus-level psycholinguistic properties and person-level semantic and phonological processing skills on word production accuracy and treatment response. METHOD This retrospective analysis included 35 individuals with chronic poststroke aphasia, 30 of whom completed typicality-based semantic feature treatment. Mixed-effects logistic regression models were used to predict binary naming accuracy (a) at baseline and (b) over the course of treatment, using stimulus-level psycholinguistic word properties and person-level semantic and phonological processing skills as predictors. RESULTS At baseline, words with less complex lexical-semantic and phonological properties showed greater predicted naming accuracy, and stimulus-level lexical-semantic properties interacted with person-level semantic processing skills in predicting accuracy. With treatment, words that were more complex from a lexical-semantic standpoint and less complex from a phonological standpoint improved more. Individuals with greater baseline semantic and phonological processing skills showed a greater treatment response. CONCLUSIONS This study suggests that future clinical research and practice should consider the semantic and phonological properties of words when selecting stimuli for semantically based treatment, and should continue to evaluate baseline individual semantic and phonological profiles as predictors of response to semantically based treatment. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21256341.
Affiliation(s)
- Emily J. Braun
- Aphasia Research Laboratory, Department of Speech, Language & Hearing Sciences, Boston University College of Health & Rehabilitation Sciences: Sargent College, MA
- Swathi Kiran
- Aphasia Research Laboratory, Department of Speech, Language & Hearing Sciences, Boston University College of Health & Rehabilitation Sciences: Sargent College, MA
3
Abstract
OBJECTIVES Sonority is the relative perceptual prominence/loudness of speech sounds of the same length, stress, and pitch. Children with cochlear implants (CIs), with restored audibility and relatively intact temporal processing, are expected to benefit from the perceptual prominence cues of highly sonorous sounds. Sonority also influences lexical access through the sonority sequencing principle (SSP), a grammatical phonotactic rule that facilitates the recognition and segmentation of syllables within speech. The less sonorous the onset of a syllable, the larger the sonority rise to the nucleus and the more optimal the SSP. Children with CIs may experience hindered or delayed development of the language-learning rule SSP as a result of their deprived/degraded auditory experience. The purpose of the study was to explore sonority's role in the speech perception and lexical access of prelingually deafened children with CIs. DESIGN A case-control study with 15 children with CIs, 25 normal-hearing children (NHC), and 50 normal-hearing adults was conducted, using a lexical identification task of novel, nonreal CV-CV words taught via fast mapping. The CV-CV words were constructed according to four sonority conditions, combining syllables with sonorous onsets/less optimal SSP (SS) and nonsonorous onsets/optimal SSP (NS) in all combinations: SS-SS, SS-NS, NS-SS, and NS-NS. Outcome measures were accuracy and reaction times (RTs). A subgroup analysis of 12 children with CIs pair-matched to 12 NHC on hearing age examined the effect of the oral-language exposure period on sonority-related performance. RESULTS The two groups of children showed similar accuracy, overall and across all sonority conditions. However, within-group comparisons showed that the children with CIs scored more accurately on the SS-SS condition than on the NS-NS and NS-SS conditions, while the NHC performed equally well across all conditions. Additionally, the children with CIs achieved adult-comparable accuracy only on the SS-SS condition, as opposed to the NS-SS, SS-NS, and SS-SS conditions for the NHC. Accuracy analysis of the subgroups of children matched on hearing age showed similar results. The children with CIs produced overall longer RTs on the sonority-treated lexical task, specifically on the SS-SS condition, compared with age-matched controls. In the subgroup analysis, however, the two groups of children did not differ on RTs. CONCLUSIONS Children with CIs performed better on lexical tasks relying on sonority's perceptual prominence cues, as in the SS-SS condition, than on conditions relying on the SSP at syllable onset, such as NS-NS and NS-SS. Template-driven word learning, an early word-learning strategy, appears to play a role in the lexical access of children with CIs whether matched on hearing age or not, with the SS-SS condition acting as a preferred word template. The longer RTs in the highly accurate SS-SS condition in children with CIs possibly reflect more effortful listening. The lack of an RT difference between the groups of children when matched on hearing age points to the oral-language exposure period as a key factor in developing auditory processing skills.
4
Ziegler W. Complexity of articulation planning in apraxia of speech: The limits of phoneme-based approaches. Cogn Neuropsychol. 2019;34:482-487. [PMID: 29457554] [DOI: 10.1080/02643294.2017.1421148]
Abstract
This report presents evidence suggesting that the phoneme-based approach taken by Romani, Galuzzi, Guariglia, and Goslin (Comparing phoneme frequency, age of acquisition, and loss in aphasia: Implications for phonological universals. Cognitive Neuropsychology, this issue) falls short of capturing the complexity of articulation planning in patients with apraxia of speech. Empirical and modelling data are reported to demonstrate that the apraxic pathomechanism resides in the hierarchical architecture of phonological words rather than in the context-independent properties of phonemes. Because the factors determining complexity of articulation planning are interlaced between gestural, syllabic, and metrical levels, they cannot be captured by markedness rankings limited to any of these levels.
Affiliation(s)
- Wolfram Ziegler
- EKN-Clinical Neuropsychology Research Group, Institute of Phonetics and Speech Processing, Ludwig-Maximilians-University, Munich, Germany
5
Affiliation(s)
- Adam Buchwald
- Department of Communicative Sciences and Disorders, New York University, New York, NY, USA
6
Buchwald A, Gagnon B, Miozzo M. Identification and Remediation of Phonological and Motor Errors in Acquired Sound Production Impairment. J Speech Lang Hear Res. 2017;60:1726-1738. [PMID: 28655044] [PMCID: PMC5544403] [DOI: 10.1044/2017_jslhr-s-16-0240]
Abstract
Purpose This study aimed to test whether an approach to distinguishing errors arising in phonological processing from those arising in motor planning also predicts the extent to which repetition-based training can lead to improved production of difficult sound sequences. Method Four individuals with acquired speech production impairment who produced consonant cluster errors involving deletion were examined using a repetition task. We compared the acoustic details of productions containing cluster deletion errors with those of the corresponding singleton consonants. Changes in accuracy over the course of the study were also compared. Results Two individuals produced deletion errors consistent with a phonological locus, and 2 individuals produced errors consistent with a motoric locus. The 2 individuals who made phonologically driven errors showed no change in performance on a repetition training task, whereas the 2 individuals with motoric errors improved in their production of both trained and untrained items. Conclusions The results extend previous findings about a metric for identifying the source of sound production errors in individuals with both apraxia of speech and aphasia. In particular, this work may provide a tool for identifying predominant error types in individuals with complex deficits.
Affiliation(s)
- Michele Miozzo
- The New School for Social Research, New York
- Johns Hopkins University, Baltimore, MD
7
Abstract
We report on an English-speaking, aphasic individual (TB) who showed a striking dissociation in speaking with the different forms (allomorphs) that an inflection can take. Although very accurate in producing the consonantal inflections (-/s/, -/z/, -/d/, -/t/), TB consistently omitted syllabic inflections (-/əz/, -/əd/), therefore correctly saying "dogs" or "walked," but "bench" for benches or "skate" for skated. Results from control tests ruled out that TB's selective difficulties stemmed from problems in selecting the correct inflection for the syntactic context or problems related to phonological or articulatory mechanisms. TB's selective difficulties appeared instead to concern morpho-phonological mechanisms responsible for adapting morphological elements to word phonology. These mechanisms determine whether the plural inflection surfaces in the noun bench as voiced (-/z/), unvoiced (-/s/) or syllabic (-/əz/). Our results have implications for understanding how morphological elements are encoded in the lexicon and the nature of morpho-phonological mechanisms involved in speech production.
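The morpho-phonological rule that TB appears to have selectively lost can be sketched as a small decision procedure. The segment classes below are illustrative assumptions (using ASCII stand-ins for IPA: "S" = /ʃ/, "Z" = /ʒ/, "tS" = /tʃ/, "dZ" = /dʒ/, "T" = /θ/, "@" = /ə/); English selects the syllabic plural allomorph /-əz/ after sibilants, /-s/ after other voiceless consonants, and /-z/ elsewhere:

```python
# ASCII stand-ins for IPA: S=/ʃ/, Z=/ʒ/, tS=/tʃ/, dZ=/dʒ/, T=/θ/, @=/ə/
SIBILANTS = {"s", "z", "S", "Z", "tS", "dZ"}
VOICELESS_NONSIBILANT = {"p", "t", "k", "f", "T"}

def plural_allomorph(final_segment: str) -> str:
    """Select the English plural allomorph from the stem's final segment."""
    if final_segment in SIBILANTS:
        return "@z"   # syllabic /-əz/: "benches", "roses"
    if final_segment in VOICELESS_NONSIBILANT:
        return "s"    # /-s/: "cats", "cliffs"
    return "z"        # /-z/: "dogs", "bees"

for word, seg in [("bench", "tS"), ("cat", "t"), ("dog", "g")]:
    print(word, "->", plural_allomorph(seg))
```

TB's pattern corresponds to the first branch failing selectively: the consonantal outputs /-s/ and /-z/ survive while the syllabic /-əz/ (and, analogously, past-tense /-əd/ after /t, d/) is omitted.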
Affiliation(s)
- Michele Miozzo
- Department of Psychology, The New School, New York, NY, USA
8
Berent I. Commentary: "An Evaluation of Universal Grammar and the Phonological Mind"-UG Is Still a Viable Hypothesis. Front Psychol. 2016;7:1029. [PMID: 27471480] [PMCID: PMC4943953] [DOI: 10.3389/fpsyg.2016.01029]
Abstract
Everett (2016b) criticizes The Phonological Mind thesis (Berent, 2013a,b) on logical, methodological and empirical grounds. Most of Everett’s concerns are directed toward the hypothesis that the phonological grammar is constrained by universal grammatical (UG) principles. Contrary to Everett’s logical challenges, here I show that the UG hypothesis is readily falsifiable, that universality is not inconsistent with innateness (Everett’s arguments to the contrary are rooted in a basic confusion of the UG phenotype and the genotype), and that its empirical evaluation does not require a full evolutionary account of language. A detailed analysis of one case study, the syllable hierarchy, presents a specific demonstration that people have knowledge of putatively universal principles that are unattested in their language and these principles are most likely linguistic in nature. Whether Universal Grammar exists remains unknown, but Everett’s arguments hardly undermine the viability of this hypothesis.
Affiliation(s)
- Iris Berent
- Phonology and Reading Laboratory, Department of Psychology, Northeastern University, Boston, MA, USA
9
Deschamps I, Baum SR, Gracco VL. Phonological processing in speech perception: What do sonority differences tell us? Brain Lang. 2015;149:77-83. [PMID: 26186232] [DOI: 10.1016/j.bandl.2015.06.008]
Abstract
Previous research has associated the inferior frontal and posterior temporal brain regions with a number of phonological processes. In order to identify how these specific brain regions contribute to phonological processing, we manipulated subsyllabic phonological complexity and stimulus modality during speech perception using fMRI. Subjects passively attended to visual or auditory pseudowords. Similar to previous studies, a bilateral network of cortical regions was recruited during the presentation of visual and auditory stimuli. Moreover, pseudowords recruited a similar network of regions as words and letters. Few regions in the whole-brain results revealed neural processing differences associated with phonological complexity independent of modality of presentation. In an ROI analysis, the only region sensitive to phonological complexity was the posterior part of the inferior frontal gyrus (IFGpo), with the complexity effect only present for print. In sum, the sensitivity of phonological brain areas depends on the modality of stimulus presentation and task demands.
Affiliation(s)
- Isabelle Deschamps
- Centre for Research on Brain, Language and Music, Rabinovitch House, McGill University, 3640 rue de la Montagne, Montreal, Quebec H3G 2A8, Canada; Rehabilitation Department, Laval University, Quebec, QC, Canada; Centre de Recherche de l'Institut Universitaire en santé mentale de Québec, Quebec, QC, Canada.
- Shari R Baum
- McGill University, Faculty of Medicine, School of Communication Sciences and Disorders, 1266 Avenue des Pins, Montreal, Quebec H3G 1A8, Canada; Centre for Research on Brain, Language and Music, Rabinovitch House, McGill University, 3640 rue de la Montagne, Montreal, Quebec H3G 2A8, Canada
- Vincent L Gracco
- McGill University, Faculty of Medicine, School of Communication Sciences and Disorders, 1266 Avenue des Pins, Montreal, Quebec H3G 1A8, Canada; Centre for Research on Brain, Language and Music, Rabinovitch House, McGill University, 3640 rue de la Montagne, Montreal, Quebec H3G 2A8, Canada; Haskins Laboratories, 300 George St., Suite 900, New Haven, CT 06511, USA
10
Abstract
Drawing on phonology research within the generative linguistics tradition, stochastic methods, and notions from complex systems, we develop a modelling paradigm linking phonological structure, expressed in terms of syllables, to speech movement data acquired with 3D electromagnetic articulography and X-ray microbeam methods. The essential variable in the models is syllable structure. When mapped to discrete coordination topologies, syllabic organization imposes systematic patterns of variability on the temporal dynamics of speech articulation. We simulated these dynamics under different syllabic parses and evaluated the simulations against experimental data from Arabic and English, two languages claimed to parse similar strings of segments into different syllabic structures. Model simulations replicated several key experimental results, including the fallibility of past phonetic heuristics for syllable structure, and exposed the range of conditions under which such heuristics remain valid. More importantly, the modelling approach consistently diagnosed syllable structure, proving resilient to multiple sources of variability in the experimental data, including measurement variability, speaker variability, and contextual variability. Prospects for extending our modelling paradigm to acoustic data are also discussed.
Affiliation(s)
- Jason A. Shaw
- MARCS Institute, University of Western Sydney, Penrith, New South Wales, Australia
- School of Humanities and Communication Arts, University of Western Sydney, Penrith, New South Wales, Australia
- Adamantios I. Gafos
- Faculty of Human Sciences, University of Potsdam, Potsdam, Germany
- Haskins Laboratories, New Haven, Connecticut, United States of America
11
Abstract
All spoken languages express words by sound patterns, and certain patterns (e.g., blog) are systematically preferred to others (e.g., lbog). What principles account for such preferences: does the language system encode abstract rules banning syllables like lbog, or does their dislike reflect the increased motor demands associated with speech production? More generally, we ask whether linguistic knowledge is fully embodied or whether some linguistic principles could potentially be abstract. To address this question, here we gauge the sensitivity of English speakers to the putative universal syllable hierarchy (e.g., blif ≻ bnif ≻ bdif ≻ lbif) while undergoing transcranial magnetic stimulation (TMS) over the cortical motor representation of the left orbicularis oris muscle. If syllable preferences reflect motor simulation, then worse-formed syllables (e.g., lbif) should (i) elicit more errors; (ii) engage motor brain areas more strongly; and (iii) elicit stronger effects of TMS on these motor regions. In line with the motor account, we found that repetitive TMS pulses impaired participants' global sensitivity to the number of syllables, and functional MRI confirmed that the cortical stimulation site was sensitive to the syllable hierarchy. Contrary to the motor account, however, ill-formed syllables were least likely to engage the lip sensorimotor area and were least impaired by TMS. Results suggest that speech perception automatically triggers motor action, but this effect is not causally linked to the computation of linguistic structure. We conclude that the language and motor systems are intimately linked, yet distinct. Language is designed to optimize motor action, but its knowledge includes principles that are disembodied and potentially abstract.
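The syllable hierarchy probed here (blif ≻ bnif ≻ bdif ≻ lbif) tracks the sonority rise across the onset. A minimal sketch, assuming a coarse four-level sonority scale (stops < fricatives < nasals < liquids) that is illustrative rather than the authors' exact formulation, reproduces the ranking:

```python
# Illustrative sonority scale: stops=1 < fricatives=2 < nasals=3 < liquids=4
SONORITY = {
    **dict.fromkeys("pbtdkg", 1),
    **dict.fromkeys("fvsz", 2),
    **dict.fromkeys("mn", 3),
    **dict.fromkeys("lr", 4),
}

def onset_rise(syllable: str) -> int:
    """Sonority rise across a two-consonant onset (C2 minus C1)."""
    return SONORITY[syllable[1]] - SONORITY[syllable[0]]

# Better-formed onsets show larger rises: bl(+3) > bn(+2) > bd(0) > lb(-3)
ranked = sorted(["lbif", "bdif", "bnif", "blif"], key=onset_rise, reverse=True)
print(ranked)
```

Sorting the four pseudowords by this rise yields blif, bnif, bdif, lbif, matching the putative well-formedness hierarchy from large rise through plateau to sonority fall.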
12
Riley EA, Thompson CK. Training Pseudoword Reading in Acquired Dyslexia: A Phonological Complexity Approach. Aphasiology. 2015;29:129-150. [PMID: 26085708] [PMCID: PMC4467909] [DOI: 10.1080/02687038.2014.955389]
Abstract
BACKGROUND Individuals with acquired phonological dyslexia experience difficulty associating written letters with corresponding sounds, especially in pseudowords. Previous studies have shown that reading can be improved in these individuals by training letter-sound correspondence, practicing phonological skills, or using combined approaches. However, generalization to untrained items is typically limited. AIMS We investigated whether principles of phonological complexity can be applied to training letter-sound correspondence reading in acquired phonological dyslexia to improve generalization to untrained words. Based on previous work in other linguistic domains, we hypothesized that training phonologically "more complex" material (i.e., consonant clusters with small sonority differences) would result in generalization to phonologically "less complex" material (i.e., consonant clusters with larger sonority differences), but this generalization pattern would not be demonstrated when training the "less complex" material. METHODS & PROCEDURES We used a single-participant, multiple baseline design across participants and behaviors to examine phonological complexity as a training variable in five individuals. Based on participants' error data from a previous experiment, a "more complex" onset and a "less complex" onset were selected for training for each participant. Training order assignment was pseudo-randomized and counterbalanced across participants. Three participants were trained in the "more complex" condition and two in the "less complex" condition while tracking oral reading accuracy of both onsets. OUTCOMES & RESULTS As predicted, participants trained in the "more complex" condition demonstrated improved pseudoword reading of the trained cluster and generalization to pseudowords with the untrained, "simple" onset, but not vice versa. 
CONCLUSIONS These findings suggest phonological complexity can be used to improve generalization to untrained phonologically related words in acquired phonological dyslexia. These findings also provide preliminary support for using phonological complexity theory as a tool for designing more effective and efficient reading treatments for acquired dyslexia.
Affiliation(s)
- Ellyn A Riley
- Aphasia and Neurolinguistics Research Laboratory, Department of Communication Sciences & Disorders, Northwestern University, Evanston, IL
- Cynthia K Thompson
- Aphasia and Neurolinguistics Research Laboratory, Department of Communication Sciences & Disorders, Northwestern University, Evanston, IL; Cognitive Neurology and Alzheimer's Disease Center, Northwestern University, Evanston, IL