1. Wulfert S, Auer P, Hanulíková A. Speech Errors in the Production of Initial Consonant Clusters: The Roles of Frequency and Sonority. J Speech Lang Hear Res 2022; 65:3709-3729. PMID: 36198060. DOI: 10.1044/2022_jslhr-22-00148.
Abstract
PURPOSE One of the central questions in speech production research is to what degree certain structures have an inherent difficulty and to what degree repeated encounter and practice make them easier to process. The goal of this study was to determine the extent to which the frequency and sonority distance of consonant clusters predict production difficulties. METHOD We used a tongue twister paradigm to elicit speech errors on syllable-initial German consonant clusters and investigated the relative influences of cluster frequency and of the sonority distance between the consonants of a cluster on production accuracy. Native speakers of German produced pairs of monosyllabic pseudowords beginning with consonant clusters at a high speech rate. RESULTS Error rates decreased with increasing frequency of the consonant clusters. A high sonority distance, on the other hand, did not facilitate a cluster's production, although speech errors led to an optimized sonority structure for a subgroup of clusters. In addition, the combination of consonant clusters within a stimulus pair had a strong effect on production accuracy. CONCLUSION These results suggest that frequency of use, sonority distance, and syntagmatic competition between adjacent sound sequences jointly determine production ease.
Affiliation(s)
- Sophia Wulfert
- Department of German Studies, University of Freiburg, Germany
- Peter Auer
- Department of German Studies, University of Freiburg, Germany
- Adriana Hanulíková
- Department of German Studies, University of Freiburg, Germany
- Freiburg Institute for Advanced Studies, University of Freiburg, Germany
2. Namasivayam AK, Coleman D, O’Dwyer A, van Lieshout P. Speech Sound Disorders in Children: An Articulatory Phonology Perspective. Front Psychol 2020; 10:2998. PMID: 32047453. PMCID: PMC6997346. DOI: 10.3389/fpsyg.2019.02998.
Abstract
Speech Sound Disorders (SSDs) is a generic term used to describe a range of difficulties producing speech sounds in children (McLeod and Baker, 2017). The foundations of clinical assessment, classification and intervention for children with SSD have been heavily influenced by psycholinguistic theory and procedures, which largely posit a firm boundary between phonological processes and phonetics/articulation (Shriberg, 2010). Thus, in many current SSD classification systems the complex relationships between the etiology (distal), processing deficits (proximal) and the behavioral levels (speech symptoms) are under-specified (Terband et al., 2019a). It is critical to understand the complex interactions between these levels, as they have implications for differential diagnosis and treatment planning (Terband et al., 2019a). Some theoretical attempts have been made toward understanding these interactions (e.g., McAllister Byun and Tessier, 2016), and characterizing speech patterns in children either solely as the product of speech motor performance limitations or purely as a consequence of phonological/grammatical competence has been challenged (Inkelas and Rose, 2007; McAllister Byun, 2012). In the present paper, we aim to reconcile the phonetics-phonology dichotomy and discuss the interconnectedness between these levels and the nature of SSDs using an alternative perspective based on the notion of an articulatory "gesture" within the broader concepts of the Articulatory Phonology model (AP; Browman and Goldstein, 1992). The articulatory "gesture" serves both as a unit of phonological contrast and as a characterization of the resulting articulatory movements (Browman and Goldstein, 1992; van Lieshout and Goldstein, 2008).
We present evidence supporting the notion of articulatory gestures at the level of speech production and as reflected in control processes in the brain and discuss how an articulatory "gesture"-based approach can account for articulatory behaviors in typical and disordered speech production (van Lieshout, 2004; Pouplier and van Lieshout, 2016). Specifically, we discuss how the AP model can provide an explanatory framework for understanding SSDs in children. Although other theories may be able to provide alternate explanations for some of the issues we will discuss, the AP framework in our view generates a unique scope that covers linguistic (phonology) and motor processes in a unified manner.
Affiliation(s)
- Aravind Kumar Namasivayam
- Oral Dynamics Laboratory, Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Deirdre Coleman
- Oral Dynamics Laboratory, Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Independent Researcher, Surrey, BC, Canada
- Aisling O’Dwyer
- Oral Dynamics Laboratory, Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- St. James’s Hospital, Dublin, Ireland
- Pascal van Lieshout
- Oral Dynamics Laboratory, Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Rehabilitation Sciences Institute, University of Toronto, Toronto, ON, Canada
3. Shaw JA, Kawahara S. Effects of Surprisal and Entropy on Vowel Duration in Japanese. Lang Speech 2019; 62:80-114. PMID: 29105604. DOI: 10.1177/0023830917737331.
Abstract
Research on English and other languages has shown that syllables and words that contain more information tend to be produced with longer duration. This research is evolving into a general thesis that speakers articulate linguistic units with more information more robustly. While this hypothesis seems plausible from the perspective of communicative efficiency, previous support for it has come mainly from English and some other Indo-European languages. Moreover, most previous studies focus on global effects, such as the interaction of word duration and sentential/semantic predictability. The current study focuses on the level of phonotactics, exploring the effects of local predictability on vowel duration in Japanese, using the Corpus of Spontaneous Japanese. To examine gradient consonant-vowel phonotactics within a consonant-vowel mora, consonant-conditioned Surprisal and Shannon Entropy were calculated, and their effects on vowel duration were examined, together with other linguistic factors known from previous research to affect vowel duration. Results show significant effects of both Surprisal and Entropy, as well as notable interactions with vowel length and vowel quality. The effect of Entropy is stronger on peripheral vowels than on central vowels. Surprisal has a stronger positive effect on short vowels than on long vowels. We interpret the main patterns and the interactions by conceptualizing Surprisal as an index of motor fluency and Entropy as an index of competition in vowel selection.
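The two predictors named in this abstract can be made concrete with a minimal sketch. The counts below are hypothetical (the study estimated such distributions from the Corpus of Spontaneous Japanese); consonant-conditioned surprisal and Shannon entropy are then standard information-theoretic quantities:

```python
import math

# Hypothetical counts of vowels following one consonant in a toy corpus.
counts_after_c = {"a": 500, "i": 300, "u": 150, "e": 30, "o": 20}

total = sum(counts_after_c.values())
prob = {v: n / total for v, n in counts_after_c.items()}

# Surprisal of a vowel given the preceding consonant: -log2 P(V | C).
# Rare continuations carry high surprisal.
surprisal = {v: -math.log2(p) for v, p in prob.items()}

# Shannon entropy of the vowel distribution after this consonant:
# H = -sum_v P(v) * log2 P(v). High entropy = many competing vowel choices.
entropy = -sum(p * math.log2(p) for p in prob.values())
```

Under the thesis described above, vowels with higher surprisal, and vowels after consonants with higher entropy, are predicted to be produced with longer duration.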
4. Mugler EM, Tate MC, Livescu K, Templer JW, Goldrick MA, Slutzky MW. Differential Representation of Articulatory Gestures and Phonemes in Precentral and Inferior Frontal Gyri. J Neurosci 2018; 38:9803-9813. PMID: 30257858. PMCID: PMC6234299. DOI: 10.1523/jneurosci.1206-18.2018.
Abstract
Speech is a critical form of human communication and is central to our daily lives. Yet, despite decades of study, an understanding of the fundamental neural control of speech production remains incomplete. Current theories model speech production as a hierarchy from sentences and phrases down to words, syllables, speech sounds (phonemes), and the actions of vocal tract articulators used to produce speech sounds (articulatory gestures). Here, we investigate the cortical representation of articulatory gestures and phonemes in ventral precentral and inferior frontal gyri in men and women. Our results indicate that ventral precentral cortex represents gestures to a greater extent than phonemes, while inferior frontal cortex represents both gestures and phonemes. These findings suggest that speech production shares a common cortical representation with that of other types of movement, such as arm and hand movements. This has important implications both for our understanding of speech production and for the design of brain-machine interfaces to restore communication to people who cannot speak.

SIGNIFICANCE STATEMENT: Despite being studied for decades, the production of speech by the brain is not fully understood. In particular, the most elemental parts of speech, speech sounds (phonemes) and the movements of vocal tract articulators used to produce these sounds (articulatory gestures), have both been hypothesized to be encoded in motor cortex. Using direct cortical recordings, we found evidence that primary motor and premotor cortices represent gestures to a greater extent than phonemes. Inferior frontal cortex (part of Broca's area) appears to represent both gestures and phonemes. These findings suggest that speech production shares a similar cortical organizational structure with the movement of other body parts.
Affiliation(s)
- Karen Livescu
- Toyota Technological Institute at Chicago, Chicago, Illinois 60637
- Marc W Slutzky
- Departments of Neurology, Physiology, and Physical Medicine & Rehabilitation, Northwestern University, Chicago, Illinois 60611
5. Beirne MB, Croot K. The prosodic domain of phonological encoding: Evidence from speech errors. Cognition 2018; 177:1-7. PMID: 29614350. DOI: 10.1016/j.cognition.2018.03.004.
Abstract
Phonological encoding of segments is thought to occur within a prosodically-defined frame, but it is not clear which of the constituents within the prosodic hierarchy (syllables, phonological words, intonational phrases, and utterances) serve as the domain of phonological encoding. This experiment investigated whether segmental speech errors elicited in tongue twisters were influenced by position within prosodic constituents above the level of the phonological word. Forty-four participants produced six repetitions each of 40 two-intonational-phrase tongue twisters with error-prone word-initial "target" segments in phrase-initial and phrase-final words. We hypothesised that if the domain of phonological encoding is the intonational phrase, segments within a current intonational phrase would interact in more errors than would segments across intonational phrase boundaries. Participants made more anticipatory than perseveratory errors on target segments in phrase-initial words, as predicted. They also made more perseveratory than anticipatory errors on targets in phrase-final words, but only in utterance-final phrases. These results suggest that the intonational phrase is one domain of phonological encoding, and that segments for upcoming phrases are activated while current phrases are being articulated.
Affiliation(s)
- Karen Croot
- School of Psychology, University of Sydney, Australia.
6. Kember H, Connaghan K, Patel R. Inducing speech errors in dysarthria using tongue twisters. Int J Lang Commun Disord 2017; 52:469-478. PMID: 27891744. DOI: 10.1111/1460-6984.12285.
Abstract
Although tongue twisters have been widely used to study speech production in healthy speakers, few studies have employed this methodology with individuals with speech impairment. The present study compared tongue twister errors produced by adults with dysarthria and age-matched healthy controls. Eight speakers (four female, four male; mean age = 54.5 years) with spastic (mixed-spastic) dysarthria of varying aetiology (cerebral palsy, multiple sclerosis, multiple system atrophy) and eight controls (four female, four male; mean age = 56.9 years) were audio-recorded producing tongue twisters. One word in each tongue twister was marked for prominence. Speakers with dysarthria produced significantly more errors and spoke more slowly than healthy controls. The effect of prominence was significant for both groups: words spoken with prosodic prominence were significantly less error-prone than words without prominence. While both groups produced most errors on words in the third position (of four-word utterances), speakers with dysarthria also produced high rates of errors on the first and fourth words. This preliminary investigation demonstrated the promise of applying the tongue twister paradigm to speakers with dysarthria and contributes to the evidence base for the implementation of prosodic strategies in speech intervention.
Affiliation(s)
- Heather Kember
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Australia
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA
- Kathryn Connaghan
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA
- Rupal Patel
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA
- College of Computer and Information Science, Northeastern University, Boston, MA, USA
7. Hagedorn C, Proctor M, Goldstein L, Wilson SM, Miller B, Gorno-Tempini ML, Narayanan SS. Characterizing Articulation in Apraxic Speech Using Real-Time Magnetic Resonance Imaging. J Speech Lang Hear Res 2017; 60:877-891. PMID: 28314241. PMCID: PMC5548083. DOI: 10.1044/2016_jslhr-s-15-0112.
Abstract
Purpose Real-time magnetic resonance imaging (MRI) and accompanying analytical methods are shown to capture and quantify salient aspects of apraxic speech, substantiating and expanding upon evidence provided by clinical observation and by acoustic and kinematic data. An analysis of apraxic speech errors within a dynamic systems framework is provided, and the nature of the pathomechanisms of apraxic speech is discussed. Method One adult male speaker with apraxia of speech was imaged using real-time MRI while producing spontaneous speech, repeated naming tasks, and self-paced repetition of word pairs designed to elicit speech errors. Articulatory data were analyzed, and speech errors were detected using time series reflecting articulatory activity in regions of interest. Results Real-time MRI captured two types of apraxic gestural intrusion errors in a word pair repetition task. Gestural intrusion errors in nonrepetitive speech, multiple silent initiation gestures at the onset of speech, and covert (unphonated) articulation of entire monosyllabic words were also captured. Conclusion Real-time MRI and accompanying analytical methods capture and quantify many features of apraxic speech that have been previously observed using other modalities, while offering high spatial resolution. This patient's apraxia of speech affected the ability to select only the appropriate vocal tract gestures for a target utterance (suppressing all others) and to coordinate them in time.
Affiliation(s)
- Michael Proctor
- Macquarie University, North Ryde, New South Wales, Australia
8. Buz E, Tanenhaus MK, Jaeger TF. Dynamically adapted context-specific hyper-articulation: Feedback from interlocutors affects speakers' subsequent pronunciations. J Mem Lang 2016; 89:68-86. PMID: 27375344. PMCID: PMC4927008. DOI: 10.1016/j.jml.2015.12.009.
Abstract
We ask whether speakers can adapt their productions when feedback from their interlocutors suggests that previous productions were perceptually confusable. To address this question, we use a novel web-based task-oriented paradigm for speech recording, in which participants produce instructions for a (simulated) partner who responds with naturalistic response times. We manipulate (1) whether a target word with a voiceless plosive (e.g., pill) occurs in the presence of a voiced competitor (bill) or an unrelated word (food) and (2) whether or not the simulated partner occasionally misunderstands the target word. Speakers hyper-articulated the target word when a voiced competitor was present. Moreover, the size of the hyper-articulation effect nearly doubled when partners occasionally misunderstood the instruction. A novel type of distributional analysis further suggests that hyper-articulation did not change the target of production, but rather reduced the probability of perceptually ambiguous or confusable productions. These results were obtained in the absence of explicit clarification requests, and they persisted across words and over trials. Our findings suggest that speakers adapt their pronunciations based on the perceived communicative success of their previous productions in the current environment. We discuss why speakers make adaptive changes to their speech and what mechanisms might underlie speakers' ability to do so.
Affiliation(s)
- Esteban Buz
- Department of Brain and Cognitive Sciences, University of Rochester, United States
- Michael K. Tanenhaus
- Department of Brain and Cognitive Sciences, University of Rochester, United States
- Department of Linguistics, University of Rochester, United States
- T. Florian Jaeger
- Department of Brain and Cognitive Sciences, University of Rochester, United States
- Department of Linguistics, University of Rochester, United States
- Department of Computer Science, University of Rochester, United States
9. Slis A, van Lieshout P. The Effect of Auditory Information on Patterns of Intrusions and Reductions. J Speech Lang Hear Res 2016; 59:430-445. PMID: 27232422. DOI: 10.1044/2015_jslhr-s-14-0258.
Abstract
PURPOSE The study investigates whether auditory information affects the nature of intrusion and reduction errors in reiterated speech. These errors are hypothesized to arise as a consequence of autonomous mechanisms that stabilize movement coordination. The specific question addressed is whether auditory information modulates this process and thereby influences the occurrence of intrusions and reductions. METHODS Fifteen speakers repetitively produced word pairs with alternating onset consonants and identical rhymes, at a normal and a fast speaking rate, in masked and unmasked speech. Movement ranges of the tongue tip, tongue dorsum, and lower lip during onset consonants were retrieved from kinematic data collected with electromagnetic articulography. Reductions and intrusions were defined as statistical outliers from the movement range distributions of target and nontarget articulators, respectively. RESULTS Regardless of masking condition, the number of intrusions and reductions increased during the course of a trial, suggesting movement stabilization. However, compared with unmasked speech, speakers made fewer intrusions in masked speech. The number of reductions was not significantly affected. CONCLUSIONS Masking of auditory information resulted in fewer intrusions, suggesting that speakers were able to pay closer attention to their articulatory movements. This highlights a possible stabilizing role for proprioceptive information in speech movement coordination.
10. Automatic analysis of slips of the tongue: Insights into the cognitive architecture of speech production. Cognition 2016; 149:31-39. PMID: 26779665. DOI: 10.1016/j.cognition.2016.01.002.
Abstract
Traces of the cognitive mechanisms underlying speaking can be found within subtle variations in how we pronounce sounds. While speech errors have traditionally been seen as categorical substitutions of one sound for another, acoustic/articulatory analyses show that they partially reflect the intended sound. When "pig" is mispronounced as "big," the resulting /b/ sound differs from correct productions of "big," moving towards the intended "pig," revealing the role of graded sound representations in speech production. Investigating the origins of such phenomena requires detailed estimation of speech sound distributions; this has been hampered by reliance on subjective, labor-intensive manual annotation. Computational methods can address these issues by providing objective, automatic measurements. We develop a novel high-precision computational approach, based on a set of machine learning algorithms, for measurement of elicited speech. The algorithms are trained on existing manually labeled data to detect and locate linguistically relevant acoustic properties with high accuracy. Our approach is robust, is designed to handle mis-productions, and overall matches the performance of expert coders. It allows us to analyze a very large dataset of speech errors (containing far more errors than the existing literature combined), illuminating properties of speech sound distributions that were previously impossible to observe reliably. We argue that this provides novel evidence that two sources both contribute to deviations in speech errors: planning processes specifying the targets of articulation and articulatory processes specifying the motor movements that execute this plan. These findings illustrate how a much richer picture of speech provides an opportunity to gain novel insights into language processing.
11. Kember H, Croot K, Patrick E. Phonological Encoding in Mandarin Chinese: Evidence from Tongue Twisters. Lang Speech 2015; 58:417-440. PMID: 27483738. DOI: 10.1177/0023830914562654.
Abstract
Models of connected speech production in Mandarin Chinese must specify how lexical tone, speech segments, and phrase-level prosody are integrated in speech production. This study used tongue twisters to test predictions of two different models of word-form encoding. Tongue twisters were constructed from 5 sets of characters that rotated pairs of initial segments or pairs of tones, or both, across format (ABAB, ABBA) and across position of the characters in four-character tongue twister strings. Fifty-two native Mandarin Chinese speakers read aloud 120 tongue twisters, repeating each one six times in a row. They made a total of 3503 (2.34%) segment errors and 1372 (0.92%) tone errors. Segment errors occurred on the onsets of the first and third characters in the ABBA but not ABAB segment-alternating tongue twisters, and on the onsets of the second and fourth characters of the tone-alternating tongue twisters. Tone errors were highest on the third and fourth characters in the tone-alternating tongue twisters. The pattern of tone errors is consistent with the claim that tone is associated to a metrical frame prior to segment encoding, while the format-by-position interaction found for the segment-alternating tongue twisters suggests that articulatory gestures oscillate in segment production, as proposed by gestural phonology.
12. Galluzzi C, Bureca I, Guariglia C, Romani C. Phonological simplifications, apraxia of speech and the interaction between phonological and phonetic processing. Neuropsychologia 2015; 71:64-83. DOI: 10.1016/j.neuropsychologia.2015.03.007.
13. Pouplier M, Marin S, Waltl S. Voice onset time in consonant cluster errors: Can phonetic accommodation differentiate cognitive from motor errors? J Speech Lang Hear Res 2014; 57:1577-1588. PMID: 24686567. DOI: 10.1044/2014_jslhr-s-12-0412.
Abstract
PURPOSE Phonetic accommodation in speech errors has traditionally been used to identify the processing level at which an error has occurred. Recent studies have challenged the previously assumed view that noncanonical productions are solely due to phonetic, not phonological, processing irregularities. The authors of the present study investigated the relationship between phonological and phonetic planning processes on the basis of voice onset time (VOT) behavior in consonant cluster errors. METHOD Acoustic data from 22 German speakers were recorded while eliciting errors on sibilant-stop clusters. Analyses consider VOT duration as well as intensity and spectral properties of the sibilant. RESULTS Of all incorrect responses, 28% failed to show accommodation. Sibilant intensity and spectral properties differed from those of correct responses irrespective of whether VOT was accommodated. CONCLUSIONS Overall, the data do not support using (a lack of) accommodation as a diagnostic of the processing level at which an error has occurred. The data support speech production models that allow for an integrated view of phonological and phonetic processing.
14. Smolensky P, Goldrick M, Mathis D. Optimization and Quantization in Gradient Symbol Systems: A Framework for Integrating the Continuous and the Discrete in Cognition. Cogn Sci 2013; 38:1102-1138. DOI: 10.1111/cogs.12047.
Affiliation(s)
- Paul Smolensky
- Department of Cognitive Science, Johns Hopkins University
- Donald Mathis
- Department of Cognitive Science, Johns Hopkins University
15. Buchwald A, Miozzo M. Phonological and motor errors in individuals with acquired sound production impairment. J Speech Lang Hear Res 2012; 55:S1573-S1586. PMID: 23033450. DOI: 10.1044/1092-4388(2012/11-0200).
Abstract
PURPOSE This study aimed to compare sound production errors arising due to phonological processing impairment with errors arising due to motor speech impairment. METHOD Two speakers with similar clinical profiles who produced similar consonant cluster simplification errors were examined using a repetition task. We compared both overall accuracy and acoustic details of hundreds of productions with target consonant clusters to tokens with singletons. Changes in accuracy over the course of the study were also compared. RESULTS In target words with consonant cluster simplification, the individual whose errors reflected phonological impairment produced articulatory timing consistent with singleton onsets. These productions improved when resyllabification was possible, but error rates were not affected by exposure. In contrast, the individual with motoric-based errors produced simplifications that contained the articulatory timing associated with clusters. Accuracy was not affected by the ability to resyllabify, but it did significantly improve following repeated production. CONCLUSIONS Our findings reveal clear differences between errors arising in phonological processing and in motor planning that reflect the underlying systems. The changes over the course of the study suggest that error types with different sources are responsive to different intervention strategies.
16. Parrell B. Dynamical account of how /b, d, g/ differ from /p, t, k/ in Spanish: Evidence from labials. Lab Phonol 2011; 2:423-449. PMID: 23843928. PMCID: PMC3703669. DOI: 10.1515/labphon.2011.016.
Abstract
This study examines articulatory lenition of intervocalic stops in Spanish and tests the theories that (1) /b, d, g/ have an intended target for closure equal to that of /p, t, k/ and (2) spirantization of /b, d, g/ is caused by undershoot due to their short duration phrase medially. Consistent with past acoustic studies, subjects produce /b/ with incomplete closure phrase medially and complete closure phrase initially. Additionally, /b/ is shorter than /p/ phrase medially, though not initially. For /b/, though not for /p/, there is a correlation between constriction degree and duration, consistent with the theory of dynamical undershoot. The results from the study are accurately modeled with a virtual target for /b/ slightly beyond the point of articulator contact. Such a target results in full closure at long durations (such as those found phrase initially) and incomplete closure at shorter durations. Based on this evidence, it is proposed that /b, d, g/ differ from /p, t, k/ in three ways: they are shorter, lack a devoicing gesture, and have a target closer to, but still beyond, the point of articulator contact.
17. Goldrick M, Baker HR, Murphy A, Baese-Berk M. Interaction and representational integration: Evidence from speech errors. Cognition 2011; 121:58-72. PMID: 21669409. DOI: 10.1016/j.cognition.2011.05.006.
Abstract
We examine the mechanisms that support interaction between lexical, phonological, and phonetic processes during language production. Studies of the phonetics of speech errors have provided evidence that partially activated lexical and phonological representations influence phonetic processing. We further ask how these interactive effects are modulated by lexical frequency. Previous research has demonstrated that during lexical access, the processing of high-frequency words is facilitated; in contrast, during phonetic encoding, the properties of low-frequency words are enhanced. These contrasting effects provide the opportunity to distinguish two theoretical perspectives on how interaction between processing levels can be increased. A theory in which cascading activation is used to increase interaction predicts that the facilitation of high-frequency words will enhance their influence on the phonetic properties of speech errors. Alternatively, if interaction is increased by integrating levels of representation, the phonetics of speech errors will reflect the retrieval of enhanced phonetic properties for low-frequency words. Using a novel statistical analysis method, we show that, in experimentally induced speech errors, targets and outcomes of low lexical frequency exhibit enhanced phonetic processing. We sketch an interactive model of lexical, phonological, and phonetic processing that accounts for the conflicting effects of lexical frequency on lexical access and phonetic processing.
Affiliation(s)
- Matthew Goldrick
- Department of Linguistics, Northwestern University, 2016 Sheridan Rd., Evanston, IL 60208, USA.
|
18
|
Marin S, Pouplier M, Harrington J. Acoustic consequences of articulatory variability during productions of /t/ and /k/ and its implications for speech error research. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2010; 127:445-461. [PMID: 20058990 PMCID: PMC2821172 DOI: 10.1121/1.3268600] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/30/2009] [Revised: 08/13/2009] [Accepted: 10/27/2009] [Indexed: 05/28/2023]
Abstract
An increasing number of studies have linked certain types of articulatory or acoustic variability with speech errors, but no study has yet examined the relationship between such articulatory variability and the resulting acoustics. The present study evaluates the acoustic properties of articulatorily errorful /k/ and /t/ stimuli to determine whether these errors are consistently reflected in the acoustics. The most frequent error observed in the articulatory data is the production of /k/ and /t/ with simultaneous tongue tip and tongue dorsum constrictions. Spectral analysis of the bursts of these stimuli shows that /k/ and /t/ are affected differently by such co-production errors: co-production of tongue tip and tongue dorsum during intended /k/ results in typical /k/ spectra (and hence in tokens robustly classified as /k/), whereas co-productions during intended /t/ result in spectra with roughly equal prominence in both the mid-frequency (/k/-like) and high-frequency (/t/-like) ranges (and hence in tokens ambiguous between /k/ and /t/). This outcome is due not to an articulatory timing difference but to the tongue dorsum constriction having an overall greater effect on the acoustics than the tongue tip constriction when the two are co-produced.
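The burst-spectrum comparison described here can be sketched as a simple band-energy measure. In this sketch, the band edges (roughly 1-4 kHz for the mid, /k/-like range and 4-8 kHz for the high, /t/-like range) and the ambiguity threshold are illustrative assumptions, not the analysis parameters actually used in the paper.

```python
import numpy as np

def band_energies(burst, sr=16000, mid=(1000, 4000), high=(4000, 8000)):
    """Windowed power spectrum of a stop burst, summed over a
    mid-frequency (/k/-like) and a high-frequency (/t/-like) band."""
    spec = np.abs(np.fft.rfft(burst * np.hanning(len(burst)))) ** 2
    freqs = np.fft.rfftfreq(len(burst), 1.0 / sr)
    def band(lo, hi):
        return spec[(freqs >= lo) & (freqs < hi)].sum()
    return band(*mid), band(*high)

def classify_burst(burst, sr=16000, ambiguity=2.0):
    """Label a burst by the ratio of mid- to high-band energy.
    Roughly equal prominence in both bands counts as ambiguous,
    as with the co-produced intended-/t/ tokens in the study."""
    mid_e, high_e = band_energies(burst, sr)
    ratio = mid_e / (high_e + 1e-12)
    if ratio > ambiguity:
        return "k-like"
    if ratio < 1.0 / ambiguity:
        return "t-like"
    return "ambiguous"
```

For example, a synthetic burst with energy concentrated near 2 kHz classifies as "k-like", one near 6 kHz as "t-like", and a mixture with comparable energy in both bands as "ambiguous".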
Affiliation(s)
- Stefania Marin
- Institute of Phonetics and Speech Processing, Ludwig-Maximilians-University Munich, 80799 Munich, Germany.
|