1. Rong P, Heidrick L, Pattee GL. A multimodal approach to automated hierarchical assessment of bulbar involvement in amyotrophic lateral sclerosis. Front Neurol 2024; 15:1396002. PMID: 38836001; PMCID: PMC11148322; DOI: 10.3389/fneur.2024.1396002.
Abstract
Introduction: As a hallmark feature of amyotrophic lateral sclerosis (ALS), bulbar involvement leads to progressive declines in speech and swallowing functions, significantly impacting social, emotional, and physical health, and quality of life. Standard clinical tools for bulbar assessment focus primarily on clinical symptoms and functional outcomes. However, ALS is known to have a long, clinically silent prodromal stage characterized by complex subclinical changes at various levels of the bulbar motor system. These changes accumulate over time and eventually culminate in clinical symptoms and functional declines. Detecting these subclinical changes is critical, both for a mechanistic understanding of bulbar neuromuscular pathology and for optimal clinical management of bulbar dysfunction in ALS. To this end, we developed a novel multimodal measurement tool based on two readily available, noninvasive clinical instruments (facial surface electromyography [sEMG] and acoustic techniques) to hierarchically assess seven constructs of bulbar/speech motor control at the neuromuscular and acoustic levels. These constructs (prosody, pause, functional connectivity, amplitude, rhythm, complexity, and regularity) are both mechanistically and clinically relevant to bulbar involvement.
Methods: Using a custom-developed, fully automated data-analytic algorithm, a variety of features were extracted from the sEMG and acoustic recordings of a speech task performed by 13 individuals with ALS and 10 neurologically healthy controls. These features were then factorized into 10 composite outcome measures using confirmatory factor analysis. Statistical and machine learning techniques were applied to these composite outcome measures to evaluate their reliability (internal consistency), validity (concurrent and construct), and efficacy for early detection and progress monitoring of bulbar involvement in ALS.
Results: The composite outcome measures were shown to (1) be internally consistent and structurally valid in measuring the targeted constructs; (2) hold concurrent validity with existing clinical and functional criteria for bulbar assessment; and (3) outperform the outcome measures obtained from each constituent modality in differentiating individuals with ALS from healthy controls. Moreover, the composite outcome measures combined demonstrated high efficacy for detecting subclinical changes in the targeted constructs, both during the prodromal stage and during the transition from the prodromal to the symptomatic stage.
Discussion: The findings provide compelling initial evidence for the utility of the multimodal measurement tool in improving early detection and progress monitoring of bulbar involvement in ALS, which has important implications for facilitating timely access to and delivery of optimal clinical care for bulbar dysfunction.
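The reliability evaluation above hinges on internal consistency, conventionally indexed by Cronbach's alpha. As an illustrative sketch only (the function name and matrix layout are assumptions; the paper's actual pipeline used confirmatory factor analysis and is not reproduced here):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_observations, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```

Values near 1 indicate that the constituent features move together and can reasonably be collapsed into a single composite outcome measure.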
Affiliation(s)
- Panying Rong
- Department of Speech-Language-Hearing: Sciences and Disorders, University of Kansas, Lawrence, KS, United States
- Lindsey Heidrick
- Department of Hearing and Speech, University of Kansas Medical Center, Kansas City, KS, United States
- Gary L Pattee
- Neurology Associate P.C., Lincoln, NE, United States
2. Rong P, Heidrick L. Hierarchical Temporal Structuring of Speech: A Multiscale, Multimodal Framework to Inform the Assessment and Management of Neuromotor Speech Disorder. J Speech Lang Hear Res 2024; 67:92-115. PMID: 38099851; DOI: 10.1044/2023_jslhr-23-00219.
Abstract
Purpose: Hierarchical temporal structuring of speech is key to multiscale linguistic information transfer toward effective communication. This study investigated and linked the hierarchical temporal cues of the kinematic and acoustic modalities of natural, unscripted speech in neurologically healthy and impaired speakers.
Method: Thirteen individuals with amyotrophic lateral sclerosis (ALS) and 10 age-matched healthy controls performed a storytelling task. The hierarchical temporal structure of the speech stimulus was measured by (a) 26 articulatory-kinematic features characterizing the depth, phase synchronization, and coherence of temporal modulation of the tongue tip, tongue body, lower lip, and jaw at three hierarchically nested timescales corresponding to prosodic stress, syllables, and onset-rime/phonemes, and (b) 25 acoustic features characterizing the parallel aspects of temporal modulation of five critical-spectral-band envelopes. All features were compared between groups. For each aspect of temporal modulation, the contributions of all articulatory features to the parallel acoustic features were evaluated by group.
Results: Generally consistent disease impacts were identified on the articulatory and acoustic features, manifested by reduced modulation depths of most articulators and critical-spectral-band envelopes, primarily at the timescales of syllables and onset-rime/phonemes. For healthy speakers, the strongest articulatory-acoustic relationships were found for (a) the jaw and lip, in modulating stress timing, and (b) the tongue tip, in modulating the timing relation between onset-rime/phonemes and syllables. For speakers with ALS, the tongue body, tongue tip, and jaw all showed the greatest contributions to modulating syllable timing.
Conclusions: The observed disease impacts likely reflect reduced entrainment of speech motor activities to finer-grained linguistic events, presumably due to the dynamic constraints of the neuromuscular system. To accommodate these constraints, speakers with ALS appear to use their residual articulatory motor capacities to accentuate and convey the perceptually most salient temporal cues underpinned by the syllable-centric parsing mechanism. This adaptive strategy has potential implications for managing neuromotor speech disorders.
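Of the modulation features named in the Method, depth is the most readily reproduced. A minimal sketch of one plausible formulation (bandpass the amplitude envelope within a modulation-frequency band and normalize its strength by the mean envelope level; the band edges and filter order are illustrative assumptions, not the authors' settings):

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def modulation_depth(x: np.ndarray, fs: float, band=(2.0, 8.0)) -> float:
    """Normalized depth of envelope modulation within a modulation band.

    band ~ (2, 8) Hz roughly brackets syllable-rate fluctuations.
    """
    envelope = np.abs(hilbert(x))                 # amplitude envelope
    nyq = fs / 2.0
    b, a = butter(2, [band[0] / nyq, band[1] / nyq], btype="band")
    fluctuation = filtfilt(b, a, envelope)        # in-band envelope fluctuation
    return np.sqrt(np.mean(fluctuation ** 2)) / np.mean(envelope)
```

Computed per spectral-band envelope and per timescale, such a value shrinks when articulation flattens the envelope, which is the direction of the group difference reported above.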
Affiliation(s)
- Panying Rong
- Department of Speech-Language-Hearing: Sciences & Disorders, The University of Kansas, Lawrence
- Lindsey Heidrick
- Department of Hearing and Speech, The University of Kansas Medical Center, Kansas City
3. Kim JA, Jang H, Choi Y, Min YG, Hong YH, Sung JJ, Choi SJ. Subclinical articulatory changes of vowel parameters in Korean amyotrophic lateral sclerosis patients with perceptually normal voices. PLoS One 2023; 18:e0292460. PMID: 37831677; PMCID: PMC10575489; DOI: 10.1371/journal.pone.0292460.
Abstract
The available quantitative methods for evaluating bulbar dysfunction in patients with amyotrophic lateral sclerosis (ALS) are limited. We aimed to characterize vowel properties in Korean ALS patients, investigate associations between vowel parameters and clinical features of ALS, and analyze subclinical articulatory changes of vowel parameters in those with perceptually normal voices. Forty-three patients with ALS (27 with dysarthria and 16 without dysarthria) and 20 healthy controls were prospectively enrolled in the study. Dysarthria was assessed using the ALS Functional Rating Scale-Revised (ALSFRS-R) speech subscore, with any loss from the full score of 4 points indicating the presence of dysarthria. Structured speech samples were recorded and analyzed using Praat software. For the three corner vowels (/a/, /i/, and /u/), vowel duration, fundamental frequency, frequencies of the first two formants (F1 and F2), harmonics-to-noise ratio, vowel space area (VSA), and vowel articulation index (VAI) were extracted from the speech samples. Corner vowel durations were significantly longer in ALS patients with dysarthria than in healthy controls. The F1 frequency of /a/, the F2 frequencies of /i/ and /u/, the VSA, and the VAI differed significantly between ALS patients with dysarthria and healthy controls, together differentiating the two groups with an area under the curve (AUC) of 0.912. The F1 frequency of /a/ and the VSA were the major determinants for differentiating ALS patients who had not yet developed apparent dysarthria from healthy controls (AUC 0.887). In linear regression analyses, as the ALSFRS-R speech subscore decreased, both the VSA and the VAI were reduced; in contrast, vowel durations were prolonged. The analyses of vowel parameters thus provide a useful metric, correlated with disease severity, for detecting subclinical bulbar dysfunction in ALS patients.
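The two summary metrics named here have standard closed forms over the corner-vowel formants: the VSA is the area of the /i/-/a/-/u/ triangle in the F1-F2 plane, and the VAI is the ratio of the formants that expand the vowel space to those that rise under centralization. A minimal sketch (not the study's code; formant pairs are (F1, F2) in Hz):

```python
def vowel_space_area(i, a, u):
    """Triangular VSA (Hz^2) from (F1, F2) pairs for the corner vowels /i/, /a/, /u/."""
    (f1i, f2i), (f1a, f2a), (f1u, f2u) = i, a, u
    return abs(f1i * (f2a - f2u) + f1a * (f2u - f2i) + f1u * (f2i - f2a)) / 2.0

def vowel_articulation_index(i, a, u):
    """VAI = (F2i + F1a) / (F1i + F1u + F2a + F2u); falls as the space centralizes."""
    (f1i, f2i), (f1a, f2a), (f1u, f2u) = i, a, u
    return (f2i + f1a) / (f1i + f1u + f2a + f2u)
```

A shrinking VSA and VAI alongside falling ALSFRS-R speech subscores is exactly the centralization pattern the regression analyses describe.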
Affiliation(s)
- Jin-Ah Kim
- Department of Neurology, Seoul National University Hospital, Seoul, Republic of Korea
- Department of Translational Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
- Genomic Medicine Institute, Medical Research Center, Seoul National University, Seoul, Republic of Korea
- Hayeun Jang
- Division of English, Busan University of Foreign Studies, Busan, Republic of Korea
- Yoonji Choi
- Department of Korean Language and Literature, Seoul National University, Seoul, Republic of Korea
- Young Gi Min
- Department of Neurology, Seoul National University Hospital, Seoul, Republic of Korea
- Department of Translational Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
- Yoon-Ho Hong
- Department of Neurology, Seoul Metropolitan Government-Seoul National University Boramae Medical Center, Seoul, Republic of Korea
- Jung-Joon Sung
- Department of Neurology, Seoul National University Hospital, Seoul, Republic of Korea
- Neuroscience Research Institute, Seoul National University College of Medicine, Seoul, Republic of Korea
- Seok-Jin Choi
- Department of Neurology, Seoul National University Hospital, Seoul, Republic of Korea
- Center for Hospital Medicine, Seoul National University Hospital, Seoul, Republic of Korea
4. Malaia EA, Borneman SC, Borneman JD, Krebs J, Wilbur RB. Prediction underlying comprehension of human motion: an analysis of Deaf signer and non-signer EEG in response to visual stimuli. Front Neurosci 2023; 17:1218510. PMID: 37901437; PMCID: PMC10602904; DOI: 10.3389/fnins.2023.1218510.
Abstract
Introduction: Sensory inference and top-down predictive processing, reflected in human neural activity, play a critical role in higher-order cognitive processes such as language comprehension. However, the neurobiological bases of predictive processing in higher-order cognitive processes are not well understood.
Methods: This study used electroencephalography (EEG) to track participants' cortical dynamics in response to Austrian Sign Language and reversed sign language videos, measuring neural coherence to optical flow in the visual signal. We then used machine learning to assess the entropy-based relevance of specific frequencies and regions of interest to brain-state classification accuracy.
Results: EEG features highly relevant for classification were distributed across language-processing regions in Deaf signers (frontal cortex and left hemisphere), while in non-signers such features were concentrated in visual and spatial processing regions.
Discussion: The results highlight the functional significance of predictive-processing time windows for sign language comprehension and biological motion processing, and the role of long-term experience (learning) in minimizing prediction error.
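The core measurement here, coherence between a stimulus time series and EEG, can be illustrated with Welch-averaged magnitude-squared coherence. A toy sketch under assumed parameters (a synthetic 4 Hz "optical flow" fluctuation and one synthetic channel; nothing below reflects the authors' montage, frequencies, or preprocessing):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 250.0                                 # assumed sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)
stimulus = np.sin(2 * np.pi * 4.0 * t)     # hypothetical 4 Hz optical-flow fluctuation
eeg = 0.5 * stimulus + rng.standard_normal(t.size)  # channel partially tracking it

# Magnitude-squared coherence, Welch-averaged over 512-sample segments
f, cxy = coherence(stimulus, eeg, fs=fs, nperseg=512)
peak_hz = f[np.argmax(cxy)]                # coherence should peak near 4 Hz
```

Per-frequency coherence values like `cxy` are the kind of features that could then feed a brain-state classifier, as in the study's machine learning step.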
Affiliation(s)
- Evie A. Malaia
- Department of Communicative Disorders, University of Alabama, Tuscaloosa, AL, United States
- Sean C. Borneman
- Department of Communicative Disorders, University of Alabama, Tuscaloosa, AL, United States
- Joshua D. Borneman
- Department of Linguistics, Purdue University, West Lafayette, IN, United States
- Julia Krebs
- Linguistics Department, University of Salzburg, Salzburg, Austria
- Centre for Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Ronnie B. Wilbur
- Department of Linguistics, Purdue University, West Lafayette, IN, United States
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, United States
5. Rong P, Taylor A. A Vowel-Centric View Toward Characterizing Temporal Organization of Motor Speech Activities in Neurologically Impaired and Healthy Speakers. J Speech Lang Hear Res 2023; 66:3697-3720. PMID: 37607386; DOI: 10.1044/2023_jslhr-23-00129.
Abstract
Purpose: This study tested the hypotheses that (a) motor speech activities are temporally organized around the nuclei into vowel-centric units that hold both stability and flexibility and (b) such temporal organization is impacted by motor speech impairment.
Method: Thirteen individuals with amyotrophic lateral sclerosis (ALS) and 10 healthy controls read a sentence three times at each of the following rates: habitual, fast, and slow. Articulatory gestures and phonatory events were assessed in two vowel-centric units, operationally defined within and across the boundaries of two target words (cat and must) to accommodate common coda omission and coarticulation. Twelve absolute and relative timing measures centering on the nucleus were derived to characterize the temporal organization of each unit. These measures were evaluated in terms of (a) their relations with global duration across rate conditions and (b) between-groups differences in the habitual rate condition.
Results: Both vowel-centric units remained stable in the relative timing between the articulatory gestures approaching and moving away from the nucleus across rate conditions. Relative timing between the articulatory gestures and the phonatory event at smaller temporal granularities varied with global duration, but in different ways for neurologically impaired and healthy speakers. Disease impacts on relative timing were only detected across word boundaries. All absolute timing measures revealed consistent temporal scaling effects and disease-related prolongations.
Conclusions: The findings provide preliminary support for vowel-centric temporal organization of motor speech activities. Such temporal organization holds some extent of both stability and flexibility, which may facilitate the parsing of syllabic events during auditory processing while accommodating task-specific suprasegmental variations. The timing impairments in ALS are likely attributable to disease-imposed dynamic constraints that reduce the entrainment of the related motor speech activities to the underlying linguistic elements. These findings have potential implications for guiding the assessment and management of temporal speech deficits in ALS.
Affiliation(s)
- Panying Rong
- Department of Speech-Language-Hearing: Sciences & Disorders, University of Kansas, Lawrence
- Ava Taylor
- Department of Speech-Language-Hearing: Sciences & Disorders, University of Kansas, Lawrence
6. Sidiras C, Iliadou VV, Nimatoudis I, Bamiou DE. Absence of Rhythm Benefit on Speech in Noise Recognition in Children Diagnosed With Auditory Processing Disorder. Front Neurosci 2020; 14:418. PMID: 32477048; PMCID: PMC7232546; DOI: 10.3389/fnins.2020.00418.
Abstract
Auditory processing disorder (APD) is a specific deficit in the processing of auditory information along the central auditory nervous system, characterized mainly by deficits in speech-in-noise recognition. Children with APD may also present with deficits in the processing of auditory rhythm. Rhythmic neural entrainment is commonly present in the perception of both speech and music, and auditory rhythmic priming of speech in noise is known to enhance recognition in typically developing children. Here, we tested the hypothesis that the effect of rhythmic priming is compromised in children with APD, and further assessed its correlations with verbal and non-verbal auditory processing and cognition. Forty children with APD and 33 neurotypical children were assessed through (a) the WRRC, a test measuring the effect of rhythmic priming on speech-in-noise recognition; (b) a battery of auditory processing tests commonly used in APD diagnosis; and (c) two cognitive tests assessing working memory and auditory attention, respectively. Findings revealed that the effect of rhythmic priming on speech-in-noise recognition (a) is absent in children with APD, (b) is linked to non-verbal auditory processing, and (c) is only weakly dependent on cognition. We discuss these findings in light of Dynamic Attending Theory, neural entrainment, and neural oscillations, and suggest that these functions may be compromised in children with APD. Further research is needed to explore (a) the mechanics of rhythmic priming of speech-in-noise perception and why the effect is absent in children with APD, (b) which other mechanisms related to both rhythm and language are affected in this population, and (c) whether music/rhythm training can restore the missing rhythm benefit.
Affiliation(s)
- Christos Sidiras
- Clinical Psychoacoustics Lab, 3rd Department of Psychiatry, Neuroscience Sector, Medical School, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Vasiliki Vivian Iliadou
- Clinical Psychoacoustics Lab, 3rd Department of Psychiatry, Neuroscience Sector, Medical School, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Ioannis Nimatoudis
- Clinical Psychoacoustics Lab, 3rd Department of Psychiatry, Neuroscience Sector, Medical School, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Doris-Eva Bamiou
- Faculty of Brain Sciences, UCL Ear Institute, University College London, London, United Kingdom
- Hearing & Deafness Biomedical Research Centre, National Institute for Health Research, London, United Kingdom