1. Tseng HC, Hsieh IH. Effects of absolute pitch on brain activation and functional connectivity during hearing-in-noise perception. Cortex 2024; 174:1-18. PMID: 38484435. DOI: 10.1016/j.cortex.2024.02.011.
Abstract
Hearing-in-noise (HIN) ability is crucial in speech and music communication. Recent evidence suggests that absolute pitch (AP), the ability to identify isolated musical notes, is associated with HIN benefits. A theoretical account postulates a link between AP ability and neural network indices of segregation. However, how AP ability modulates the brain activation and functional connectivity underlying HIN perception remains unclear. Here we used functional magnetic resonance imaging to contrast brain responses among a sample (n = 45) comprising 15 AP musicians, 15 non-AP musicians, and 15 non-musicians in perceiving Mandarin speech and melody targets under varying signal-to-noise ratios (SNRs: No-Noise, 0, -9 dB). Results reveal that AP musicians exhibited increased activation in auditory and superior frontal regions across both HIN domains (music and speech), irrespective of noise levels. Notably, substantially higher sensorimotor activation was found in AP musicians when the target was music compared to speech. Furthermore, we examined AP effects on neural connectivity using psychophysiological interaction analysis with the auditory cortex as the seed region. AP musicians showed decreased functional connectivity with the sensorimotor and middle frontal gyrus compared to non-AP musicians. Crucially, AP differentially affected connectivity with parietal and frontal brain regions depending on the HIN domain being music or speech. These findings suggest that AP plays a critical role in HIN perception, manifested by increased activation and functional independence between auditory and sensorimotor regions for perceiving music and speech streams.
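The SNR manipulation here (No-Noise, 0, -9 dB) follows the standard power-ratio definition of SNR. The sketch below is a generic illustration of how a masker is scaled to hit an exact target SNR; the function name and the tone/noise stand-ins are our assumptions, not the authors' stimulus code.

```python
import numpy as np

def mix_at_snr(target, noise, snr_db):
    """Scale `noise` so that target + scaled noise has the requested SNR in dB."""
    p_t = np.mean(target ** 2)                      # target power
    p_n = np.mean(noise ** 2)                       # unscaled noise power
    gain = np.sqrt(p_t / (p_n * 10 ** (snr_db / 10)))
    return target + gain * noise

rng = np.random.default_rng(0)
fs = 16000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 220 * t)           # 1-s tone as a stand-in target
babble_like = rng.standard_normal(fs)               # white noise as a stand-in masker

mixes = {snr: mix_at_snr(speech_like, babble_like, snr) for snr in (0, -9)}
```

At 0 dB the masker carries as much power as the target; at -9 dB it carries roughly eight times more (10^(9/10) ≈ 7.9), which is why the -9 dB condition is the harder one.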
Affiliation(s)
- Hung-Chen Tseng
- Institute of Cognitive Neuroscience, National Central University, Taoyuan City, Taiwan
- I-Hui Hsieh
- Institute of Cognitive Neuroscience, National Central University, Taoyuan City, Taiwan; Cognitive Intelligence and Precision Healthcare Center, National Central University, Taoyuan City, Taiwan.
2. Bidelman G, Sisson A, Rizzi R, MacLean J, Baer K. Myogenic artifacts masquerade as neuroplasticity in the auditory frequency-following response (FFR). bioRxiv 2024:2023.10.27.564446. PMID: 37961324. PMCID: PMC10634913. DOI: 10.1101/2023.10.27.564446.
Abstract
The frequency-following response (FFR) is an evoked potential that provides a "neural fingerprint" of complex sound encoding in the brain. FFRs have been widely used to characterize speech and music processing, experience-dependent neuroplasticity (e.g., learning, musicianship), and biomarkers for hearing and language-based disorders that distort receptive communication abilities. It is widely assumed that FFRs stem from a mixture of phase-locked neurogenic activity from brainstem and cortical structures along the hearing neuraxis. Here, we challenge this prevailing view by demonstrating that upwards of ~50% of the FFR can originate from a non-neural source: contamination from the postauricular muscle (PAM) vestigial startle reflex. We first establish that PAM artifact is present in all ears, varies with electrode proximity to the muscle, and can be experimentally manipulated by directing listeners' eye gaze toward the ear of sound stimulation. We then show this muscular noise easily confounds auditory FFRs, spuriously amplifying responses three- to fourfold with tandem PAM contraction and even explaining putative FFR enhancements observed in highly skilled musicians. Our findings expose a new and unrecognized myogenic source to the FFR that drives its large inter-subject variability and cast doubt on whether changes in the response typically attributed to neuroplasticity/pathology are solely of brain origin.
3. Bidelman GM, Bernard F, Skubic K. Hearing in categories aids speech streaming at the "cocktail party". bioRxiv 2024:2024.04.03.587795. PMID: 38617284. PMCID: PMC11014555. DOI: 10.1101/2024.04.03.587795.
Abstract
Our perceptual system bins elements of the speech signal into categories to make speech perception manageable. Here, we aimed to test whether hearing speech in categories (as opposed to a continuous/gradient fashion) affords yet another benefit to speech recognition: parsing noisy speech at the "cocktail party." We measured speech recognition in a simulated 3D cocktail party environment. We manipulated task difficulty by varying the number of additional maskers presented at other spatial locations in the horizontal soundfield (1-4 talkers) and via forward vs. time-reversed maskers, promoting more and less informational masking (IM), respectively. In separate tasks, we measured isolated phoneme categorization using two-alternative forced choice (2AFC) and visual analog scaling (VAS) tasks designed to promote more/less categorical hearing and thus test putative links between categorization and real-world speech-in-noise skills. We first show that listeners can only monitor up to ~3 talkers despite up to 5 in the soundscape and streaming is not related to extended high-frequency hearing thresholds (though QuickSIN scores are). We then confirm speech streaming accuracy and speed decline with additional competing talkers and amidst forward compared to reverse maskers with added IM. Dividing listeners into "discrete" vs. "continuous" categorizers based on their VAS labeling (i.e., whether responses were binary or continuous judgments), we then show the degree of IM experienced at the cocktail party is predicted by their degree of categoricity in phoneme labeling; more discrete listeners are less susceptible to IM than their gradient responding peers. Our results establish a link between speech categorization skills and cocktail party processing, with a categorical (rather than gradient) listening strategy benefiting degraded speech perception. 
These findings imply figure-ground deficits common in many disorders might arise through a surprisingly simple mechanism: a failure to properly bin sounds into categories.
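The split into "discrete" vs. "continuous" categorizers hinges on whether a listener's VAS ratings cluster at the scale endpoints or spread across it. The study's actual metric is not given here, so the index below is purely a hypothetical sketch: it scores each listener by how close their 0-1 ratings sit to either endpoint.

```python
import numpy as np

def categoricity(vas_ratings):
    """0-1 index: 1 = fully binary (endpoint) responding, 0 = all-midpoint responding."""
    v = np.asarray(vas_ratings, dtype=float)
    # distance to the nearest endpoint, rescaled so endpoints score 1 and 0.5 scores 0
    return float(np.mean(1 - 2 * np.minimum(v, 1 - v)))

discrete_listener = [0.02, 0.97, 0.01, 0.99, 0.05]  # near-binary labeling
gradient_listener = [0.35, 0.55, 0.48, 0.62, 0.41]  # graded labeling
```

A median split on such an index over all listeners would then yield the discrete vs. gradient grouping used in analyses like the one above.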
Affiliation(s)
- Gavin M. Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
- Cognitive Science Program, Indiana University, Bloomington, IN, USA
- Fallon Bernard
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
- Kimberly Skubic
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
4. Lyu S, Põldver N, Kask L, Wang L, Kreegipuu K. Effect of musical expertise on the perception of duration and pitch in language: A cross-linguistic study. Acta Psychol (Amst) 2024; 244:104195. PMID: 38412710. DOI: 10.1016/j.actpsy.2024.104195.
Abstract
This study adopts a cross-linguistic perspective to investigate how musical expertise affects the perception of duration and pitch in language. Native speakers of Chinese (N = 44) and Estonian (N = 46), each group subdivided into musicians and non-musicians, participated in a mismatch negativity (MMN) experiment in which they passively listened to both Chinese and Estonian stimuli, followed by a behavioral experiment in which they attentively discriminated the stimuli in the non-native language (i.e., Chinese for Estonian participants and Estonian for Chinese participants). In both experiments, stimuli with a duration change, a pitch change, and a combined duration-plus-pitch change were discriminated. First, Chinese musicians showed higher behavioral sensitivity than non-musicians in perceiving the duration change in Estonian, and Estonian musicians showed higher behavioral sensitivity than non-musicians in perceiving all types of changes in Chinese; no corresponding effect emerged in the MMN results, suggesting that musical expertise affects foreign-language processing more saliently when attention is required. Second, Chinese musicians did not outperform non-musicians in attentively discriminating the pitch-related stimuli in Estonian, suggesting that musical expertise can be overridden by tonal-language experience when perceiving foreign linguistic pitch, especially in an attentive discrimination task. Third, Chinese and Estonian musicians showed larger MMNs than their non-musician counterparts in perceiving the largest deviant (i.e., duration plus pitch) in their native language. Taken together, our results demonstrate a positive effect of musical expertise on language processing.
Affiliation(s)
- Siqi Lyu
- Institute of Psychology, University of Tartu, Tartu, Estonia
- Nele Põldver
- Institute of Psychology, University of Tartu, Tartu, Estonia
- Liis Kask
- Institute of Psychology, University of Tartu, Tartu, Estonia
- Luming Wang
- College of Foreign Languages, Zhejiang University of Technology, Hangzhou, China
- Kairi Kreegipuu
- Institute of Psychology, University of Tartu, Tartu, Estonia
5. MacLean J, Stirn J, Sisson A, Bidelman GM. Short- and long-term neuroplasticity interact during the perceptual learning of concurrent speech. Cereb Cortex 2024; 34:bhad543. PMID: 38212291. PMCID: PMC10839853. DOI: 10.1093/cercor/bhad543.
Abstract
Plasticity from auditory experience shapes the brain's encoding and perception of sound. However, whether such long-term plasticity alters the trajectory of short-term plasticity during speech processing has yet to be investigated. Here, we explored the neural mechanisms and interplay between short- and long-term neuroplasticity for rapid auditory perceptual learning of concurrent speech sounds in young, normal-hearing musicians and nonmusicians. Participants learned to identify double-vowel mixtures during ~45-min training sessions recorded simultaneously with high-density electroencephalography (EEG). We analyzed frequency-following responses (FFRs) and event-related potentials (ERPs) to investigate neural correlates of learning at subcortical and cortical levels, respectively. Although both groups showed rapid perceptual learning, musicians showed faster behavioral decisions than nonmusicians overall. Learning-related changes were not apparent in brainstem FFRs. However, plasticity was highly evident in cortex, where ERPs revealed unique hemispheric asymmetries between groups suggestive of different neural strategies (musicians: right hemisphere bias; nonmusicians: left hemisphere). Source reconstruction and the early (150-200 ms) time course of these effects localized learning-induced cortical plasticity to auditory-sensory brain areas. Our findings reinforce the domain-general benefits of musicianship but reveal that successful speech sound learning is driven by a critical interplay between long- and short-term mechanisms of auditory plasticity, which first emerge at a cortical level.
Affiliation(s)
- Jessica MacLean
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
- Jack Stirn
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Alexandria Sisson
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Gavin M Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
- Cognitive Science Program, Indiana University, Bloomington, IN, USA
6. Vonthron F, Yuen A, Pellerin H, Cohen D, Grossard C. A Serious Game to Train Rhythmic Abilities in Children With Dyslexia: Feasibility and Usability Study. JMIR Serious Games 2024; 12:e42733. PMID: 37830510. PMCID: PMC10811594. DOI: 10.2196/42733.
Abstract
BACKGROUND Rhythm perception and production are related to phonological awareness and reading performance, and rhythmic deficits have been reported in dyslexia. In addition, rhythm-based interventions can improve cognitive function, and there is consistent evidence that they are an efficient tool for training reading skills in dyslexia. OBJECTIVE This paper describes a rhythmic training protocol for children with dyslexia delivered through a serious game (SG) called Mila-Learn and the methodology used to test its usability. METHODS We developed Mila-Learn using Unity (Unity Technologies); the SG makes training remotely accessible and consistently reproducible and follows an educative agenda. The SG's development was informed by 2 studies conducted during the French COVID-19 lockdowns. Study 1 was a feasibility study evaluating the autonomous use of Mila-Learn by 2500 children with reading deficits. Data were analyzed from a subsample of 525 children who spontaneously played at least 15 (median 42) games. Study 2, in the same real-life setting as study 1, evaluated the usability of an enhanced version of Mila-Learn over 6 months in a sample of 3337 children. The analysis was carried out in 98 children with available diagnoses. RESULTS Benefiting from study 1 feedback, we improved Mila-Learn to enhance motivation and learning by adding specific features, including customization, storylines, humor, and increasing difficulty. Linear mixed models showed that performance improved over time. Scores were better for older children (P<.001), children with attention-deficit/hyperactivity disorder (P<.001), and children with dyslexia (P<.001). Performance improved significantly faster in children with attention-deficit/hyperactivity disorder (β=.06; t3754=3.91; P<.001) and more slowly in children with dyslexia (β=-.06; t3816=-5.08; P<.001).
CONCLUSIONS Given these encouraging results, future work will focus on the clinical evaluation of Mila-Learn through a large double-blind randomized controlled trial comparing Mila-Learn and a placebo game.
Affiliation(s)
- Hugues Pellerin
- Service de Psychiatrie de l'Enfant et de l'Adolescent, Groupe Hospitalier Pitié-Salpêtrière, Assistance Publique-Hôpitaux de Paris, Paris, France
- David Cohen
- Service de Psychiatrie de l'Enfant et de l'Adolescent, Groupe Hospitalier Pitié-Salpêtrière, Assistance Publique-Hôpitaux de Paris, Paris, France
- Institut des Systèmes Intelligents et Robotiques (ISIR, CNRS UMR7222), Sorbonne Université, Paris, France
- Charline Grossard
- Service de Psychiatrie de l'Enfant et de l'Adolescent, Groupe Hospitalier Pitié-Salpêtrière, Assistance Publique-Hôpitaux de Paris, Paris, France
- Institut des Systèmes Intelligents et Robotiques (ISIR, CNRS UMR7222), Sorbonne Université, Paris, France
7. Zhao G, Zhan Y, Zha J, Cao Y, Zhou F, He L. Abnormal intrinsic brain functional network dynamics in patients with cervical spondylotic myelopathy. Cogn Neurodyn 2023; 17:1201-1211. PMID: 37786665. PMCID: PMC10542087. DOI: 10.1007/s11571-022-09807-0.
Abstract
The specific topological changes in dynamic functional networks and their role in the functional reorganization of the brain in cervical spondylotic myelopathy (CSM) remain unclear. This study investigated the dynamic functional connectivity (dFC) of patients with CSM, focusing on the temporal characteristics of connectivity state patterns and the variability of network topological organization. Eighty-eight patients with CSM and 77 healthy controls (HCs) were recruited for resting-state functional magnetic resonance imaging. We applied sliding-time-window analysis and K-means clustering to capture the dFC variability patterns of the two groups, and used a graph-theoretical approach to investigate variance in the topological organization of whole-brain functional networks. All participants showed four types of dFC states. The mean dwell time in state 2 differed significantly between the two groups, being longer in the CSM group than in the HCs. Across the four states, switching between brain networks mainly involved the executive control network (ECN), salience network (SN), default mode network (DMN), language network (LN), visual network (VN), auditory network (AN), precuneus network (PN), and sensorimotor network (SMN). Additionally, the topological properties of the dynamic network were variable in patients with CSM. Dynamic connectivity states may offer new insights into intrinsic functional activity in CSM brain networks, and the variability of topological organization may suggest instability of brain networks in patients with CSM.
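The pipeline described above (sliding-window correlations, K-means clustering into recurring states, dwell-time statistics) can be sketched in a few lines. This is a toy illustration on random data, not the authors' code; the window length, step, and k = 4 are assumptions that merely mirror the text.

```python
import numpy as np

rng = np.random.default_rng(1)
n_time, n_rois = 200, 6
ts = rng.standard_normal((n_time, n_rois))      # toy ROI time series

# 1) Sliding-window correlation matrices, vectorized by their upper triangles
win, step = 30, 5
iu = np.triu_indices(n_rois, k=1)
X = np.array([np.corrcoef(ts[i:i + win].T)[iu]
              for i in range(0, n_time - win + 1, step)])

# 2) Minimal K-means (NumPy only) clustering windows into recurring dFC "states"
def kmeans_labels(X, k, n_iter=50, seed=0):
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        dists = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

states = kmeans_labels(X, k=4)

# 3) Mean dwell time: average length of consecutive runs spent in one state
def mean_dwell(labels, state):
    runs, n = [], 0
    for lab in labels:
        if lab == state:
            n += 1
        elif n:
            runs.append(n)
            n = 0
    if n:
        runs.append(n)
    return float(np.mean(runs)) if runs else 0.0
```

Group comparisons like the one reported (longer state-2 dwell time in CSM) would then be run on `mean_dwell` values computed per participant.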
Affiliation(s)
- Guoshu Zhao
- Department of Radiology, the First Affiliated Hospital of Nanchang University, No. 17 Yongwaizheng Street, Nanchang, Jiangxi 330006 People’s Republic of China
- Neuroimaging Lab, Jiangxi Province Medical Imaging Research Institute, Nanchang, 330006 People’s Republic of China
- Yaru Zhan
- Department of Radiology, the First Affiliated Hospital of Nanchang University, No. 17 Yongwaizheng Street, Nanchang, Jiangxi 330006, People’s Republic of China
- Neuroimaging Lab, Jiangxi Province Medical Imaging Research Institute, Nanchang, 330006, People’s Republic of China
- Jing Zha
- The 908th Hospital of Chinese People’s Liberation Army Joint Logistic Support Force, Fuzhou, 330006, People’s Republic of China
- Yuan Cao
- Department of Nuclear Medicine, West China Hospital of Sichuan University, Chengdu, 610041, People’s Republic of China
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, 610041, People’s Republic of China
- Neuroimaging Lab, Jiangxi Province Medical Imaging Research Institute, Nanchang, 330006, People’s Republic of China
- Fuqing Zhou
- Department of Radiology, the First Affiliated Hospital of Nanchang University, No. 17 Yongwaizheng Street, Nanchang, Jiangxi 330006, People’s Republic of China
- Neuroimaging Lab, Jiangxi Province Medical Imaging Research Institute, Nanchang, 330006, People’s Republic of China
- Laichang He
- Department of Radiology, the First Affiliated Hospital of Nanchang University, No. 17 Yongwaizheng Street, Nanchang, Jiangxi 330006, People’s Republic of China
- Neuroimaging Lab, Jiangxi Province Medical Imaging Research Institute, Nanchang, 330006, People’s Republic of China
8. MacLean J, Stirn J, Sisson A, Bidelman GM. Short- and long-term experience-dependent neuroplasticity interact during the perceptual learning of concurrent speech. bioRxiv 2023:2023.09.26.559640. PMID: 37808665. PMCID: PMC10557636. DOI: 10.1101/2023.09.26.559640.
Abstract
Plasticity from auditory experiences shapes brain encoding and perception of sound. However, whether such long-term plasticity alters the trajectory of short-term plasticity during speech processing has yet to be investigated. Here, we explored the neural mechanisms and interplay between short- and long-term neuroplasticity for rapid auditory perceptual learning of concurrent speech sounds in young, normal-hearing musicians and nonmusicians. Participants learned to identify double-vowel mixtures during ~45-min training sessions recorded simultaneously with high-density EEG. We analyzed frequency-following responses (FFRs) and event-related potentials (ERPs) to investigate neural correlates of learning at subcortical and cortical levels, respectively. While both groups showed rapid perceptual learning, musicians showed faster behavioral decisions than nonmusicians overall. Learning-related changes were not apparent in brainstem FFRs. However, plasticity was highly evident in cortex, where ERPs revealed unique hemispheric asymmetries between groups suggestive of different neural strategies (musicians: right hemisphere bias; nonmusicians: left hemisphere). Source reconstruction and the early (150-200 ms) time course of these effects localized learning-induced cortical plasticity to auditory-sensory brain areas. Our findings confirm domain-general benefits for musicianship but reveal successful speech sound learning is driven by a critical interplay between long- and short-term mechanisms of auditory plasticity that first emerge at a cortical level.
9. Kostanian D, Kleeva D, Soghoyan G, Rebreikina A, Sysoeva O. Opposite effects of rapid auditory stimulation on tetanized and non-tetanized tone of adjacent frequency: Mismatch negativity study. PLoS One 2023; 18:e0289964. PMID: 37566611. PMCID: PMC10420357. DOI: 10.1371/journal.pone.0289964.
Abstract
Our study describes the effects of sensory tetanization on neurophysiological and behavioral measures in humans, linking cellular studies of long-term potentiation with high-level brain processes. Rapid (every 75 ms) presentation of a pure tone (1020 Hz, 50 ms) for 2 minutes was preceded and followed by oddball blocks that contained the same stimulus presented as a deviant (probability of 5-10%) interspersed with standard (80-90%) and deviant tones (5-10%) of adjacent frequencies (1000 and 980 Hz, respectively). The mismatch negativity (MMN) in response to the tetanized tone (1020 Hz), which was similar to the MMN for the non-tetanized tones before tetanization, became larger than the latter after tetanization, pointing to increased cortical differentiation of these tones. However, this differentiation was partly due to a post-tetanization decrease of the MMN for tones adjacent to the tetanized frequency, suggesting a contribution of lateral inhibition to this effect. Although the MMN correlated with tone discriminability in a psychophysical task, behavioral improvement after tetanization was not statistically detectable. To conclude, short-term auditory tetanization affects cortical representations of tones beyond the tetanized stimulus itself.
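The oddball design described (a frequent standard plus two rare deviants at adjacent frequencies) can be sketched as a probabilistic trial sequence. The exact probabilities below are our assumption, chosen within the reported ranges (standard 80-90%, each deviant 5-10%); this is not the authors' stimulus script.

```python
import numpy as np

rng = np.random.default_rng(7)

# Tone roles and frequencies (Hz) from the paradigm; probabilities assumed
tones = {"standard": 1000, "tetanized_deviant": 1020, "low_deviant": 980}
probs = {"standard": 0.85, "tetanized_deviant": 0.075, "low_deviant": 0.075}

names = list(tones)
sequence = rng.choice(names, size=400, p=[probs[n] for n in names])
freqs_hz = np.array([tones[n] for n in sequence])
```

Each entry would then be rendered as a 50-ms tone at the listed frequency, with MMN computed as the deviant-minus-standard difference wave per tone role.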
Affiliation(s)
- Daria Kostanian
- Center for Cognitive Sciences, Sirius University of Science and Technology, Sochi, Russia
- Daria Kleeva
- Center for Bioelectric Interfaces, National Research University “Higher School of Economics”, Moscow, Russia
- V. Zelman Center for Neurobiology and Brain Restoration, Skolkovo Institute of Science and Technology, Moscow, Russia
- Gurgen Soghoyan
- V. Zelman Center for Neurobiology and Brain Restoration, Skolkovo Institute of Science and Technology, Moscow, Russia
- Anna Rebreikina
- Center for Cognitive Sciences, Sirius University of Science and Technology, Sochi, Russia
- Laboratory of Human Higher Nervous Activity, Institute of Higher Nervous Activity and Neurophysiology of RAS, Moscow, Russia
- Olga Sysoeva
- Center for Cognitive Sciences, Sirius University of Science and Technology, Sochi, Russia
- Laboratory of Human Higher Nervous Activity, Institute of Higher Nervous Activity and Neurophysiology of RAS, Moscow, Russia
10. Carter JA, Bidelman GM. Perceptual warping exposes categorical representations for speech in human brainstem responses. Neuroimage 2023; 269:119899. PMID: 36720437. PMCID: PMC9992300. DOI: 10.1016/j.neuroimage.2023.119899.
Abstract
The brain transforms continuous acoustic events into discrete category representations to downsample the speech signal for our perceptual-cognitive systems. Such phonetic categories are highly malleable, and their percepts can change depending on surrounding stimulus context. Previous work suggests this acoustic-phonetic mapping and perceptual warping of speech emerge in the brain no earlier than auditory cortex. Here, we examined whether these auditory-category phenomena inherent to speech perception occur even earlier in the human brain, at the level of the auditory brainstem. We recorded speech-evoked frequency-following responses (FFRs) during a task designed to induce more/less warping of listeners' perceptual categories depending on the stimulus presentation order of a speech continuum (random, forward, backward directions). We used a novel clustered stimulus paradigm to rapidly record the high trial counts needed for FFRs concurrent with active behavioral tasks. We found serial stimulus order caused perceptual shifts (hysteresis) near listeners' category boundary, confirming that identical speech tokens are perceived differently depending on stimulus context. Critically, we further show neural FFRs during active (but not passive) listening are enhanced for prototypical vs. category-ambiguous tokens and are biased in the direction of listeners' phonetic label even for acoustically identical speech stimuli. These findings were not observed in the stimulus acoustics or in model FFRs generated via a computational model of cochlear and auditory-nerve transduction, confirming a central origin for the effects. Our data reveal FFRs carry category-level information and suggest top-down processing actively shapes the neural encoding and categorization of speech at subcortical levels. These findings suggest the acoustic-phonetic mapping and perceptual warping in speech perception occur surprisingly early along the auditory neuraxis, which might aid understanding by reducing the ambiguity inherent to the speech signal.
Affiliation(s)
- Jared A Carter
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Hearing Sciences - Scottish Section, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Glasgow, Scotland, UK
- Gavin M Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA; Program in Neuroscience, Indiana University, Bloomington, IN, USA.
11. Bidelman GM, Carter JA. Continuous dynamics in behavior reveal interactions between perceptual warping in categorization and speech-in-noise perception. Front Neurosci 2023; 17:1032369. PMID: 36937676. PMCID: PMC10014819. DOI: 10.3389/fnins.2023.1032369.
Abstract
Introduction Spoken language comprehension requires listeners to map continuous features of the speech signal onto discrete category labels. Categories are, however, malleable to surrounding context and stimulus precedence; listeners' percepts can shift dynamically depending on the sequencing of adjacent stimuli, resulting in a warping of the heard phonetic category. Here, we investigated whether such perceptual warping, which amplifies categorical hearing, might alter speech processing in noise-degraded listening scenarios. Methods We measured continuous dynamics in perception and category judgments of an acoustic-phonetic vowel gradient via mouse tracking. Tokens were presented in serial vs. random orders to induce more/less perceptual warping while listeners categorized continua in clean and noise conditions. Results Listeners' responses were faster, and their mouse trajectories closer to the ultimate behavioral selection (marked visually on the screen), in serial vs. random order, suggesting increased perceptual attraction to category exemplars. Interestingly, order effects emerged earlier and persisted later in the trial time course when categorizing speech in noise. Discussion These data describe interactions between perceptual warping in categorization and speech-in-noise perception: warping strengthens the behavioral attraction to relevant speech categories, making listeners more decisive (though not necessarily more accurate) in their decisions about both clean and noise-degraded speech.
Affiliation(s)
- Gavin M. Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, United States
- Program in Neuroscience, Indiana University, Bloomington, IN, United States
- Jared A. Carter
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States
- Hearing Sciences – Scottish Section, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Glasgow, United Kingdom
12. Cohn M, Barreda S, Zellou G. Differences in a Musician's Advantage for Speech-in-Speech Perception Based on Age and Task. J Speech Lang Hear Res 2023; 66:545-564. PMID: 36729698. DOI: 10.1044/2022_jslhr-22-00259.
Abstract
PURPOSE This study investigates the debated claim that musicians have an advantage in speech-in-noise perception from years of targeted auditory training. We also consider the effect of age on any such advantage, comparing musicians and nonmusicians (age range: 18-66 years), all of whom had normal hearing. We manipulate the degree of fundamental frequency (f0) separation between the competing talkers, as well as use different tasks, to probe attentional differences that might shape a musician's advantage across ages. METHOD Participants included 29 musicians and 26 nonmusicians. They completed two tasks varying in attentional demands: (a) a selective attention task where listeners identify the target sentence presented with a one-talker interferer (Experiment 1), and (b) a divided attention task where listeners hear two vowels played simultaneously and identify both competing vowels (Experiment 2). In both paradigms, f0 separation between the two voices was manipulated (Δf0 = 0, 0.156, 0.306, 1, 2, 3 semitones). RESULTS Increasing f0 separation led to higher accuracy on both tasks. Additionally, we find evidence for a musician's advantage across the two studies. In the sentence identification task, younger adult musicians show higher accuracy overall, as well as a stronger reliance on f0 separation; yet this advantage declines with musicians' age. In the double vowel identification task, musicians of all ages show an across-the-board advantage in detecting two vowels, and use f0 separation more to aid in stream separation, but show no consistent difference in double vowel identification. CONCLUSIONS Overall, we find support for a hybrid auditory encoding-attention account of music-to-speech transfer. The musician's advantage includes f0, but the benefit also depends on the attentional demands of the task and listeners' age. Taken together, this study suggests a complex relationship between age, musical experience, and speech-in-speech paradigm in shaping a musician's advantage. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21956777.
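The Δf0 manipulations above are specified in semitones. Assuming standard equal-tempered (12-TET) spacing, a separation of n semitones corresponds to a frequency ratio of 2^(n/12); a minimal sketch (the 220 Hz base f0 is an arbitrary illustration, not a value from the study):

```python
def semitones_to_hz(f_base, delta_st):
    """Frequency delta_st semitones above f_base (equal-tempered spacing)."""
    return f_base * 2 ** (delta_st / 12)

# The study's f0-separation conditions, applied to a hypothetical 220 Hz talker
separations = [0, 0.156, 0.306, 1, 2, 3]
competing_f0 = [semitones_to_hz(220.0, st) for st in separations]
```

At a 220 Hz base, one semitone is roughly a 13 Hz shift, so the smallest nonzero condition (0.156 semitones) corresponds to only about a 2 Hz separation between voices.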
Affiliation(s)
- Michelle Cohn, Phonetics Lab, Department of Linguistics, University of California, Davis
- Santiago Barreda, Phonetics Lab, Department of Linguistics, University of California, Davis
- Georgia Zellou, Phonetics Lab, Department of Linguistics, University of California, Davis
13
Martins I, Lima CF, Pinheiro AP. Enhanced salience of musical sounds in singers and instrumentalists. Cogn Affect Behav Neurosci 2022; 22:1044-1062. [PMID: 35501427] [DOI: 10.3758/s13415-022-01007-x]
Abstract
Music training has been linked to facilitated processing of emotional sounds. However, most studies have focused on speech, and less is known about musicians' brain responses to other emotional sounds and in relation to instrument-specific experience. The current study combined behavioral and EEG methods to address two novel questions related to the perception of auditory emotional cues: whether and how long-term music training relates to a distinct emotional processing of nonverbal vocalizations and music; and whether distinct training profiles (vocal vs. instrumental) modulate brain responses to emotional sounds from early to late processing stages. Fifty-eight participants completed an EEG implicit emotional processing task, in which musical and vocal sounds differing in valence were presented as nontarget stimuli. After this task, participants explicitly evaluated the same sounds regarding the emotion being expressed, their valence, and arousal. Compared with nonmusicians, musicians displayed enhanced salience detection (P2), attention orienting (P3), and elaborative processing (Late Positive Potential) of musical (vs. vocal) sounds in event-related potential (ERP) data. The explicit evaluation of musical sounds was also distinct in musicians: accuracy in the emotional recognition of musical sounds was similar across valence types in musicians, who also judged musical sounds to be more pleasant and more arousing than nonmusicians did. Specific profiles of music training (singers vs. instrumentalists) did not relate to differences in the processing of vocal vs. musical sounds. Together, these findings reveal that music has a privileged status in the auditory system of long-term musically trained listeners, irrespective of their instrument-specific experience.
Affiliation(s)
- Inês Martins, CICPSI, Faculdade de Psicologia, Universidade de Lisboa, 1649-013 Lisbon, Portugal
- César F Lima, Instituto Universitário de Lisboa (ISCTE-IUL), Lisbon, Portugal
- Ana P Pinheiro, CICPSI, Faculdade de Psicologia, Universidade de Lisboa, 1649-013 Lisbon, Portugal
14
Price CN, Bidelman GM. Musical experience partially counteracts temporal speech processing deficits in putative mild cognitive impairment. Ann N Y Acad Sci 2022; 1516:114-122. [PMID: 35762658] [PMCID: PMC9588638] [DOI: 10.1111/nyas.14853]
Abstract
Mild cognitive impairment (MCI) commonly results in more rapid cognitive and behavioral declines than typical aging. Individuals with MCI can exhibit impaired receptive speech abilities that may reflect neurophysiological changes in auditory-sensory processing prior to usual cognitive deficits. Benefits from current interventions targeting communication difficulties in MCI are limited. Yet, neuroplasticity associated with musical experience has been implicated in improving neural representations of speech and offsetting age-related declines in perception. Here, we asked whether these experience-dependent effects of musical experience might extend to aberrant aging and offer some degree of cognitive protection against MCI. During a vowel categorization task, we recorded single-channel electroencephalograms (EEGs) in older adults with putative MCI to evaluate speech encoding across subcortical and cortical levels of the auditory system. Critically, listeners varied in their duration of formal musical experience (0-21 years). Musical experience sharpened temporal precision in auditory cortical responses, suggesting that musical experience produces more efficient processing of acoustic features by counteracting age-related neural delays. Additionally, robustness of brainstem responses predicted the severity of cognitive decline, suggesting that early speech representations are sensitive to preclinical stages of cognitive impairment. Our results extend prior studies by demonstrating positive benefits of musical experience in older adults with emergent cognitive impairments.
Affiliation(s)
- Caitlin N. Price, Department of Audiology & Speech Pathology, University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA
- Gavin M. Bidelman, Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, Indiana, USA
15
Parker A, Skoe E, Tecoulesco L, Naigles L. A Home-Based Approach to Auditory Brainstem Response Measurement: Proof-of-Concept and Practical Guidelines. Semin Hear 2022; 43:177-196. [PMID: 36313050] [PMCID: PMC9605808] [DOI: 10.1055/s-0042-1756163]
Abstract
Broad-scale neuroscientific investigations of diverse human populations are difficult to implement. This is because the primary neuroimaging methods (magnetic resonance imaging, electroencephalography [EEG]) historically have not been portable, and participants may be unable or unwilling to travel to test sites. Miniaturization of EEG technologies has now opened the door to neuroscientific fieldwork, allowing for easier access to under-represented populations. Recent efforts to conduct auditory neuroscience outside a laboratory setting are reviewed, and a technique for recording auditory brainstem responses (ABRs) and frequency-following responses (FFRs) in a home setting is then introduced. As a proof of concept, we have conducted two in-home electrophysiological studies: one in 27 children aged 6 to 16 years (13 with autism spectrum disorder) and another in 12 young adults aged 18 to 27 years, using portable electrophysiological equipment to record ABRs and FFRs to click and speech stimuli, spanning rural and urban settings, multiple homes, and multiple testers. We validate our fieldwork approach by presenting waveforms and data on latencies and signal-to-noise ratio. Our findings demonstrate the feasibility and utility of home-based ABR/FFR techniques, paving the way for larger fieldwork investigations of populations that are difficult to test or recruit. We conclude this tutorial with practical tips and guidelines for recording ABRs and FFRs in the field and discuss possible clinical and research applications of this approach.
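The summary above does not specify how signal-to-noise ratio was computed; one common convention for FFRs is a spectral SNR: power at the stimulus frequency relative to the mean power of neighboring frequency bins. A sketch on synthetic data (all values, including the 100 Hz stimulus frequency and the noise-band widths, are illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 4000.0                      # sampling rate (Hz)
t = np.arange(0, 0.2, 1 / fs)    # 200 ms response window
f0 = 100.0                       # hypothetical stimulus frequency

# Synthetic "FFR": weak phase-locked component buried in EEG-like noise
ffr = 0.1 * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 0.5, t.size)

def spectral_snr(x, fs, f_target, noise_halfwidth=50.0, notch=10.0):
    """SNR (dB): power at f_target vs. mean power of neighboring bins
    within +/- noise_halfwidth Hz, excluding +/- notch Hz around the target."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    sig = power[np.argmin(np.abs(freqs - f_target))]
    neighbors = (np.abs(freqs - f_target) <= noise_halfwidth) & \
                (np.abs(freqs - f_target) > notch)
    return 10 * np.log10(sig / power[neighbors].mean())
```

In practice the same function could be applied to pre-stimulus baseline segments to establish a noise-floor reference for deciding whether a field-recorded response is present.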
Affiliation(s)
- Ashley Parker, Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, Connecticut; Connecticut Institute for Brain and Cognitive Sciences, University of Connecticut, Storrs, Connecticut; Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania
- Erika Skoe, Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, Connecticut; Connecticut Institute for Brain and Cognitive Sciences, University of Connecticut, Storrs, Connecticut; Cognitive Sciences Program, University of Connecticut, Storrs, Connecticut
- Lee Tecoulesco, Cognitive Sciences Program, University of Connecticut, Storrs, Connecticut; Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut
- Letitia Naigles, Connecticut Institute for Brain and Cognitive Sciences, University of Connecticut, Storrs, Connecticut; Cognitive Sciences Program, University of Connecticut, Storrs, Connecticut; Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut
16
Mankel K, Shrestha U, Tipirneni-Sajja A, Bidelman GM. Functional Plasticity Coupled With Structural Predispositions in Auditory Cortex Shape Successful Music Category Learning. Front Neurosci 2022; 16:897239. [PMID: 35837119] [PMCID: PMC9274125] [DOI: 10.3389/fnins.2022.897239]
Abstract
Categorizing sounds into meaningful groups helps listeners more efficiently process the auditory scene and is a foundational skill for speech perception and language development. Yet, how auditory categories develop in the brain through learning, particularly for non-speech sounds (e.g., music), is not well understood. Here, we asked musically naïve listeners to complete a brief (∼20 min) training session where they learned to identify sounds from a musical interval continuum (minor-major 3rds). We used multichannel EEG to track behaviorally relevant neuroplastic changes in the auditory event-related potentials (ERPs) pre- to post-training. To rule out mere exposure-induced changes, neural effects were evaluated against a control group of 14 non-musicians who did not undergo training. We also compared individual categorization performance with structural volumetrics of bilateral Heschl's gyrus (HG) from MRI to evaluate neuroanatomical substrates of learning. Behavioral performance revealed steeper (i.e., more categorical) identification functions in the posttest that correlated with better training accuracy. At the neural level, improvement in learners' behavioral identification was characterized by smaller P2 amplitudes at posttest, particularly over the right hemisphere. Critically, learning-related changes in the ERPs were not observed in control listeners, ruling out mere exposure effects. Learners also showed smaller and thinner HG bilaterally, indicating that superior categorization was associated with structural differences in primary auditory brain regions. Collectively, our data suggest successful auditory categorical learning of music sounds is characterized by short-term functional changes (i.e., greater post-training efficiency) in sensory coding processes superimposed on preexisting structural differences in bilateral auditory cortex.
Affiliation(s)
- Kelsey Mankel, School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States; Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States; Center for Mind and Brain, University of California, Davis, Davis, CA, United States
- Utsav Shrestha, Department of Biomedical Engineering, University of Memphis, Memphis, TN, United States
- Gavin M. Bidelman, School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States; Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States; Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, United States
17
de la Chapelle A, Savard MA, Restani R, Ghaemmaghami P, Thillou N, Zardoui K, Chandrasekaran B, Coffey EBJ. Sleep affects higher-level categorization of speech sounds, but not frequency encoding. Cortex 2022; 154:27-45. [PMID: 35732089] [DOI: 10.1016/j.cortex.2022.04.018]
Abstract
Sleep can increase consolidation of new knowledge and skills. It is less clear whether sleep plays a role in other aspects of experience-dependent neuroplasticity, which underlie important human capabilities such as spoken language processing. Theories of sensory learning differ in their predictions; some imply rapid learning at early sensory levels, while others propose a slow, progressive timecourse in which higher-level categorical representations guide immediate, novice learning, while lower-level sensory changes do not emerge until later stages. In this study, we investigated the role of sleep across both behavioural and physiological indices of auditory neuroplasticity. Forty healthy young human adults (23 female) who did not speak a tonal language participated in the study. They learned to categorize non-native Mandarin lexical tones using a sound-to-category training paradigm, and were then randomly assigned to a Nap or Wake condition. Polysomnographic data were recorded to quantify sleep during a 3 h afternoon nap opportunity, or an equivalent period of quiet wakeful activity. Behavioural accuracy measures revealed a significant difference in sound-to-category learning between the Nap and Wake groups. Conversely, a neural index of fine-grained speech sound encoding, the frequency-following response (FFR), showed no change attributable to sleep, and Bayesian statistics supported the null model. Together, these results support theories that propose a slow, progressive, and hierarchical timecourse for sensory learning. Sleep's effects may be greatest for higher-level learning, although contributions to more protracted plasticity processes extending beyond the study duration cannot be ruled out.
Affiliation(s)
- Aurélien de la Chapelle, Lyon Neuroscience Research Centre, Lyon, France; Department of Psychology, Concordia University, Montreal, QC, Canada
- Reyan Restani, Department of Psychology, Concordia University, Montreal, QC, Canada; Université Paris Nanterre, Paris, France
- Noam Thillou, Department of Psychology, Concordia University, Montreal, QC, Canada
- Khashayar Zardoui, Department of Psychology, Concordia University, Montreal, QC, Canada
- Bharath Chandrasekaran, Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, USA
- Emily B J Coffey, Department of Psychology, Concordia University, Montreal, QC, Canada
18
Carter JA, Buder EH, Bidelman GM. Nonlinear dynamics in auditory cortical activity reveal the neural basis of perceptual warping in speech categorization. JASA Express Lett 2022; 2:045201. [PMID: 35434716] [PMCID: PMC8984957] [DOI: 10.1121/10.0009896]
Abstract
Surrounding context influences speech listening, resulting in dynamic shifts to category percepts. To examine its neural basis, event-related potentials (ERPs) were recorded during vowel identification with continua presented in random, forward, and backward orders to induce perceptual warping. Behaviorally, sequential order shifted individual listeners' categorical boundary relative to random delivery, revealing perceptual warping (biasing) of the heard phonetic category dependent on recent stimulus history. ERPs revealed later (∼300 ms) activity localized to superior temporal and middle/inferior frontal gyri that predicted listeners' hysteresis/enhanced contrast magnitudes. Findings demonstrate that interactions between frontotemporal brain regions govern top-down, stimulus history effects on speech categorization.
Affiliation(s)
- Jared A Carter, Institute for Intelligent Systems, University of Memphis, Memphis, Tennessee 38152, USA
- Eugene H Buder, School of Communication Sciences and Disorders, University of Memphis, Memphis, Tennessee 38152, USA
- Gavin M Bidelman, Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, Indiana 47408, USA
19
Bianco V, Berchicci M, Gigante E, Perri RL, Quinzi F, Mussini E, Di Russo F. Brain Plasticity Induced by Musical Expertise on Proactive and Reactive Cognitive Functions. Neuroscience 2021; 483:1-12. [PMID: 34973386] [DOI: 10.1016/j.neuroscience.2021.12.032]
Abstract
Proactive and reactive brain activities usually refer to processes occurring in anticipation of or in response to perceptual and/or cognitive events. Previous studies found that, in auditory tasks, musical expertise improves performance mainly at the reactive stage of processing. In the present work, we aimed to characterize the effects of musical practice on proactive brain activities, reflecting neuroplasticity at the level of anticipatory motor/cognitive functions. Accordingly, performance and electroencephalographic recordings were compared between professional musicians and non-musicians during an auditory go/no-go task. Both proactive (pre-stimulus) and reactive (post-stimulus) event-related potentials (ERPs) were analyzed. Behavioral findings showed improved accuracy in musicians compared to non-musicians. Regarding the electrophysiological results, different ERP patterns of activity both before and after the presentation of the auditory stimulus emerged between groups. Specifically, musicians showed increased proactive cognitive activity in prefrontal scalp areas, previously localized in the prefrontal cortex, and reduced anticipatory excitability in frontal scalp areas, previously localized in the associative auditory cortices (reflected by the pN and aP components, respectively). In the reactive stage of processing (i.e., following stimulus presentation), musicians showed enhanced early (N1) and late (P3) components, in line with the long-standing literature on enhanced auditory processing in this group. Crucially, we also found a significant correlation between the N1 component and years of musical practice. We interpreted these findings in terms of neural plasticity processes resulting from musical training, which lead musicians to high efficiency in auditory sensory anticipation and more intense cognitive control and sound analysis.
Affiliation(s)
- Valentina Bianco, Dept. of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy; Laboratory of Cognitive Neuroscience, Dept. of Languages and Literatures, Communication, Education and Society, University of Udine, Udine, Italy
- Marika Berchicci, Dept. of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy
- Elena Gigante, International Association for Analytical Psychology, Zurich, Switzerland
- Federico Quinzi, Dept. of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy
- Elena Mussini, Dept. of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy
- Francesco Di Russo, Dept. of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy; Santa Lucia Foundation IRCCS, Rome, Italy
20
Shukla B, Bidelman GM. Enhanced brainstem phase-locking in low-level noise reveals stochastic resonance in the frequency-following response (FFR). Brain Res 2021; 1771:147643. [PMID: 34473999] [PMCID: PMC8490316] [DOI: 10.1016/j.brainres.2021.147643]
Abstract
In nonlinear systems, the inclusion of low-level noise can paradoxically improve signal detection, a phenomenon known as stochastic resonance (SR). SR has been observed in human hearing, whereby sensory thresholds (e.g., signal detection and discrimination) are enhanced in the presence of noise. Here, we asked whether subcortical auditory processing (neural phase locking) shows evidence of SR. We recorded brainstem frequency-following responses (FFRs) in young, normal-hearing listeners to near-electrophysiological-threshold (40 dB SPL) complex tones composed of 10 iso-amplitude harmonics of a 150 Hz fundamental frequency (F0), presented concurrent with low-level noise (+20 to -20 dB SNRs). Though effects were variable and weak across ears, some listeners showed improvement in auditory detection thresholds with subthreshold noise, confirming SR psychophysically. At the neural level, low-level FFRs were initially eradicated by noise (the expected masking effect) but were surprisingly reinvigorated at select masker levels (local maximum near ∼35 dB SPL). These data suggest brainstem phase-locking to near-threshold periodic stimuli is enhanced at optimal levels of noise, the hallmark of SR. Our findings provide novel evidence for stochastic resonance in the human auditory brainstem and suggest that under some circumstances, noise can actually benefit both the behavioral and neural encoding of complex sounds.
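The SR effect described above can be illustrated with a toy threshold detector: a subthreshold tone crosses the detector's threshold only with the help of noise, so an intermediate noise level lets the output track the tone better than either negligible or excessive noise. A minimal sketch (the threshold, tone level, and noise levels are arbitrary illustrations, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0, dur = 1000.0, 5.0, 20.0
t = np.arange(0.0, dur, 1.0 / fs)
signal = 0.8 * np.sin(2 * np.pi * f0 * t)   # subthreshold: peak 0.8 < threshold 1.0

def detector(noise_sd):
    """Hard-threshold detector: outputs 1 whenever signal + noise exceeds 1.0."""
    return (signal + rng.normal(0.0, noise_sd, t.size) > 1.0).astype(float)

def tracking(y):
    """How strongly the detector output tracks the input tone (correlation)."""
    return 0.0 if y.std() == 0 else float(np.corrcoef(y, signal)[0, 1])

# Near-zero, moderate, and excessive noise: tracking peaks at the middle level
scores = {sd: tracking(detector(sd)) for sd in (0.01, 0.3, 3.0)}
```

With negligible noise the subthreshold tone never fires the detector at all; with excessive noise firing is dominated by the noise itself; an intermediate level yields the classic inverted-U SR curve.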
Affiliation(s)
- Bhanu Shukla, School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA
- Gavin M Bidelman, School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA
21
Rathcke T, Lin CY. Towards a Comprehensive Account of Rhythm Processing Issues in Developmental Dyslexia. Brain Sci 2021; 11:1303. [PMID: 34679368] [PMCID: PMC8533826] [DOI: 10.3390/brainsci11101303]
Abstract
Developmental dyslexia is typically defined as a difficulty with an individual's command of written language, arising from deficits in phonological awareness. However, motor entrainment difficulties in non-linguistic synchronization and time-keeping tasks have also been reported. Such findings gave rise to proposals of an underlying rhythm processing deficit in dyslexia, even though to date, evidence for impaired motor entrainment with the rhythm of natural speech is rather scarce, and the role of speech rhythm in phonological awareness is unclear. The present study aimed to fill these gaps. Dyslexic adults and age-matched control participants with variable levels of previous music training completed a series of experimental tasks assessing phoneme processing, rhythm perception, and motor entrainment abilities. In a rhythm entrainment task, participants tapped along to the perceived beat of natural spoken sentences. In a phoneme processing task, participants monitored for sonorant and obstruent phonemes embedded in nonsense strings. Individual sensorimotor skills were assessed using a number of screening tests. The results lacked evidence for a motor impairment or a general motor entrainment difficulty in dyslexia, at least among adult participants of the study. Instead, the results showed that the participants' performance in the phonemic task was predictive of their performance in the rhythmic task, but not vice versa, suggesting that atypical rhythm processing in dyslexia may be the consequence, but not the cause, of dyslexic difficulties with phoneme-level encoding. No evidence for a deficit in the entrainment to the syllable rate in dyslexic adults was found. Rather, metrically weak syllables were significantly less often at the center of rhythmic attention in dyslexic adults as compared to neurotypical controls, with an increased tendency in musically trained participants. 
This finding could not be explained by an auditory deficit in the processing of acoustic-prosodic cues to the rhythm structure, but it is likely to be related to the well-documented auditory short-term memory issue in dyslexia.
Affiliation(s)
- Tamara Rathcke, Department of Linguistics, Faculty of Humanities, University of Konstanz, 78464 Konstanz, Germany; Modern Languages and Linguistics, School of Cultures and Languages, University of Kent, Canterbury CT2 7NR, UK
- Chia-Yuan Lin, Modern Languages and Linguistics, School of Cultures and Languages, University of Kent, Canterbury CT2 7NR, UK; Department of Psychology, School of Humanities and Health Sciences, University of Huddersfield, Huddersfield HD1 3DH, UK
22
Hennessy S, Wood A, Wilcox R, Habibi A. Neurophysiological improvements in speech-in-noise task after short-term choir training in older adults. Aging (Albany NY) 2021; 13:9468-9495. [PMID: 33824226] [PMCID: PMC8064162] [DOI: 10.18632/aging.202931]
Abstract
Perceiving speech in noise (SIN) is important for health and well-being and declines with age. Musicians show improved speech-in-noise abilities and reduced age-related auditory decline, yet it is unclear whether short-term music engagement has similar effects. In this randomized controlled trial, we used a pre-post design to investigate whether a 12-week music intervention in adults aged 50-65 without prior music training and with subjective hearing loss improves well-being, speech-in-noise abilities, auditory encoding, and voluntary attention, as indexed by auditory evoked potentials (AEPs) in a syllable-in-noise task and by later AEP components in an oddball task. Age- and gender-matched adults were randomized to a choir or control group. Choir participants sang in a 2-hr ensemble with 1-hr home vocal training weekly; controls listened to a 3-hr playlist weekly, attended concerts, and socialized online with fellow participants. From pre- to post-intervention, no differences between groups were observed on quantitative measures of well-being or behavioral speech-in-noise abilities. In the choir group, but not the control group, changes in the N1 component were observed for the syllable-in-noise task, with increased N1 amplitude in the passive condition and decreased N1 latency in the active condition. During the oddball task, larger N1 amplitudes to the frequent standard stimuli were also observed in the choir but not the control group from pre- to post-intervention. Findings have implications for the potential role of music training in improving sound encoding in individuals who are in the vulnerable age range and at risk of auditory decline.
Affiliation(s)
- Sarah Hennessy, Brain and Creativity Institute, University of Southern California, Los Angeles, CA 90089, USA
- Alison Wood, Brain and Creativity Institute, University of Southern California, Los Angeles, CA 90089, USA
- Rand Wilcox, Department of Psychology, University of Southern California, Los Angeles, CA 90089, USA
- Assal Habibi, Brain and Creativity Institute, University of Southern California, Los Angeles, CA 90089, USA
23
MEG Intersubject Phase Locking of Stimulus-Driven Activity during Naturalistic Speech Listening Correlates with Musical Training. J Neurosci 2021; 41:2713-2722. [PMID: 33536196] [DOI: 10.1523/jneurosci.0932-20.2020]
Abstract
Musical training is associated with increased structural and functional connectivity between auditory sensory areas and higher-order brain networks involved in speech and motor processing. Whether such changed connectivity patterns facilitate the cortical propagation of speech information in musicians remains poorly understood. We here used magnetoencephalography (MEG) source imaging and a novel seed-based intersubject phase-locking approach to investigate the effects of musical training on the interregional synchronization of stimulus-driven neural responses during listening to naturalistic continuous speech presented in silence. MEG data were obtained from 20 young human subjects (both sexes) with different degrees of musical training. Our data show robust bilateral patterns of stimulus-driven interregional phase synchronization between auditory cortex and frontotemporal brain regions previously associated with speech processing. Stimulus-driven phase locking was maximal in the delta band but was also observed in the theta and alpha bands. The individual duration of musical training was positively associated with the magnitude of stimulus-driven alpha-band phase locking between auditory cortex and parts of the dorsal and ventral auditory processing streams. These findings provide evidence for a positive relationship between musical training and the propagation of speech-related information between auditory sensory areas and higher-order processing networks, even when speech is presented in silence. We suggest that the increased synchronization of higher-order cortical regions to auditory cortex may contribute to the previously described musician advantage in processing speech in background noise.
SIGNIFICANCE STATEMENT Musical training has been associated with widespread structural and functional brain plasticity. It has been suggested that these changes benefit the production and perception of music but can also translate to other domains of auditory processing, such as speech. We developed a new magnetoencephalography intersubject analysis approach to study the cortical synchronization of stimulus-driven neural responses during the perception of continuous natural speech and its relationship to individual musical training. Our results provide evidence that musical training is associated with higher synchronization of stimulus-driven activity between brain regions involved in early auditory sensory and higher-order processing. We suggest that the increased synchronized propagation of speech information may contribute to the previously described musician advantage in processing speech in background noise.
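The intersubject phase-locking idea can be illustrated with the standard phase-locking value (PLV): extract instantaneous phases via the analytic signal and average the phase-difference unit vectors. The sketch below is a toy two-"subject" illustration of the metric on synthetic data, not the paper's seed-based MEG pipeline; the 4 Hz shared component and noise levels are invented:

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT (same construction as scipy.signal.hilbert)."""
    X = np.fft.fft(x)
    h = np.zeros(x.size)
    h[0] = 1
    if x.size % 2 == 0:
        h[x.size // 2] = 1
        h[1:x.size // 2] = 2
    else:
        h[1:(x.size + 1) // 2] = 2
    return np.fft.ifft(X * h)

def plv(x, y):
    """Phase-locking value: ~1 for synchronized phases, ~0 for unrelated signals."""
    dphi = np.angle(analytic(x)) - np.angle(analytic(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / 500)                  # 2 s at 500 Hz
drive = np.sin(2 * np.pi * 4 * t)             # shared "stimulus-driven" component
a = drive + 0.3 * rng.normal(size=t.size)     # "subject 1" response
b = drive + 0.3 * rng.normal(size=t.size)     # "subject 2" response
```

Because both responses contain the same stimulus-driven component, `plv(a, b)` is high, whereas two unrelated noise recordings yield a value near zero; the paper's approach applies this logic across subjects per source region.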
24
Mahmud MS, Yeasin M, Bidelman GM. Data-driven machine learning models for decoding speech categorization from evoked brain responses. J Neural Eng 2021; 18. [PMID: 33690177] [DOI: 10.1101/2020.08.03.234997]
Abstract
Objective. Categorical perception (CP) of audio is critical to understanding how the human brain perceives speech sounds despite widespread variability in acoustic properties. Here, we investigated the spatiotemporal characteristics of auditory neural activity that reflects CP for speech (i.e. differentiates phonetic prototypes from ambiguous speech sounds). Approach. We recorded 64-channel electroencephalograms as listeners rapidly classified vowel sounds along an acoustic-phonetic continuum. We used support vector machine classifiers and stability selection to determine when and where in the brain CP was best decoded across space and time via source-level analysis of the event-related potentials. Main results. We found that early (120 ms) whole-brain data decoded speech categories (i.e. prototypical vs. ambiguous tokens) with 95.16% accuracy (area under the curve 95.14%; F1-score 95.00%). Separate analyses on left hemisphere (LH) and right hemisphere (RH) responses showed that LH decoding was more accurate and earlier than RH (89.03% vs. 86.45% accuracy; 140 ms vs. 200 ms). Stability (feature) selection identified 13 regions of interest (ROIs) out of 68 brain regions [including auditory cortex, supramarginal gyrus, and inferior frontal gyrus (IFG)] that showed categorical representation during stimulus encoding (0-260 ms). In contrast, 15 ROIs (including fronto-parietal regions, IFG, motor cortex) were necessary to describe later decision stages (300-800 ms) of categorization, and these areas were highly associated with the strength of listeners' categorical hearing (i.e. slope of behavioral identification functions). Significance. Our data-driven multivariate models demonstrate that abstract categories emerge surprisingly early (∼120 ms) in the time course of speech processing and are dominated by engagement of a relatively compact fronto-temporal-parietal brain network.
Affiliation(s)
- Md Sultan Mahmud
- Department of Electrical and Computer Engineering, University of Memphis, 3815 Central Avenue, Memphis, TN 38152, United States of America
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States of America
- Mohammed Yeasin
- Department of Electrical and Computer Engineering, University of Memphis, 3815 Central Avenue, Memphis, TN 38152, United States of America
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States of America
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States of America
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States of America
- University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, United States of America
|
26
|
Defining the Role of Attention in Hierarchical Auditory Processing. Audiol Res 2021; 11:112-128. [PMID: 33805600 PMCID: PMC8006147 DOI: 10.3390/audiolres11010012] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Revised: 03/07/2021] [Accepted: 03/10/2021] [Indexed: 01/09/2023] Open
Abstract
Communication in noise is a complex process requiring efficient neural encoding throughout the entire auditory pathway as well as contributions from higher-order cognitive processes (i.e., attention) to extract speech cues for perception. Thus, identifying effective clinical interventions for individuals with speech-in-noise deficits relies on the disentanglement of bottom-up (sensory) and top-down (cognitive) factors to appropriately determine the area of deficit; yet, how attention may interact with early encoding of sensory inputs remains unclear. For decades, attentional theorists have attempted to address this question with cleverly designed behavioral studies, but the neural processes and interactions underlying attention's role in speech perception remain unresolved. While anatomical and electrophysiological studies have investigated the neurological structures contributing to attentional processes and revealed relevant brain-behavior relationships, recent electrophysiological techniques (i.e., simultaneous recording of brainstem and cortical responses) may provide novel insight regarding the relationship between early sensory processing and top-down attentional influences. In this article, we review relevant theories that guide our present understanding of attentional processes, discuss current electrophysiological evidence of attentional involvement in auditory processing across subcortical and cortical levels, and propose areas for future study that will inform the development of more targeted and effective clinical interventions for individuals with speech-in-noise deficits.
|
27
|
Auditory categorical processing for speech is modulated by inherent musical listening skills. Neuroreport 2021; 31:162-166. [PMID: 31834142 DOI: 10.1097/wnr.0000000000001369] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
During successful auditory perception, the human brain classifies diverse acoustic information into meaningful groupings, a process known as categorical perception (CP). Intense auditory experiences (e.g., musical training and language expertise) shape categorical representations necessary for speech identification and novel sound-to-meaning learning, but little is known concerning the role of innate auditory function in CP. Here, we tested whether listeners vary in their intrinsic abilities to categorize complex sounds and individual differences in the underlying auditory brain mechanisms. To this end, we recorded EEGs in individuals without formal music training but who differed in their inherent auditory perceptual abilities (i.e., musicality) as they rapidly categorized sounds along a speech vowel continuum. Behaviorally, individuals with naturally more adept listening skills ("musical sleepers") showed enhanced speech categorization in the form of faster identification. At the neural level, inverse modeling parsed EEG data into different sources to evaluate the contribution of region-specific activity [i.e., auditory cortex (AC)] to categorical neural coding. We found stronger categorical processing in musical sleepers around the timeframe of P2 (~180 ms) in the right AC compared to those with poorer musical listening abilities. Our data show that listeners with naturally more adept auditory skills map sound to meaning more efficiently than their peers, which may aid novel sound learning related to language and music acquisition.
|
28
|
Dittinger E, Korka B, Besson M. Evidence for Enhanced Long-term Memory in Professional Musicians and Its Contribution to Novel Word Learning. J Cogn Neurosci 2020; 33:662-682. [PMID: 33378241 DOI: 10.1162/jocn_a_01670] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
Previous studies evidenced transfer effects from professional music training to novel word learning. However, it is unclear whether such an advantage is driven by cascading, bottom-up effects from better auditory perception to semantic processing or by top-down influences from cognitive functions on perception. Moreover, the long-term effects of novel word learning remain an open issue. To address these questions, we used a word learning design, with four different sets of novel words, and we neutralized the potential perceptive and associative learning advantages in musicians. Under such conditions, we did not observe any advantage in musicians on the day of learning (Day 1 [D1]), at neither a behavioral nor an electrophysiological level; this suggests that the previously reported advantages in musicians are likely to be related to bottom-up processes. Nevertheless, 1 month later (Day 30 [D30]) and for all types of novel words, the error increase from D1 to D30 was lower in musicians compared to nonmusicians. In addition, for the set of words that were perceptually difficult to discriminate, only musicians showed typical N400 effects over parietal sites on D30. These results demonstrate that music training improved long-term memory and that transfer effects from music training to word learning (i.e., semantic levels of speech processing) benefit from reinforced (long-term) memory functions. Finally, these findings highlight the positive impact of music training on the acquisition of foreign languages.
Affiliation(s)
- Eva Dittinger
- Université Publique de France, CNRS & Aix-Marseille University, Laboratoire de Neurosciences Cognitives (LNC); Université Publique de France, CNRS & Aix-Marseille University, Laboratoire Parole et Langage (LPL); Institute for Language and Communication in the Brain, Aix-en-Provence, France
- Betina Korka
- Cognitive and Biological Psychology, Institute of Psychology - Wilhelm Wundt, Leipzig University, Germany
- Mireille Besson
- Université Publique de France, CNRS & Aix-Marseille University, Laboratoire de Neurosciences Cognitives (LNC); Institute for Language and Communication in the Brain, Aix-en-Provence, France
|
29
|
Parker AN, Wallis GM, Obergrussberger R, Siebeck UE. Categorical face perception in fish: How a fish brain warps reality to dissociate "same" from "different". J Comp Neurol 2020; 528:2919-2928. [PMID: 32406088 DOI: 10.1002/cne.24947] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2019] [Revised: 04/14/2020] [Accepted: 04/28/2020] [Indexed: 11/07/2022]
Abstract
Categorical perception (CP) is the phenomenon by which a smoothly varying stimulus property undergoes a nonlinear transformation during processing in the brain. Consequently, the stimuli are perceived as belonging to distinct categories separated by a sharp boundary. CP was originally thought to be largely innate, but its discovery in tasks such as novel image discrimination has piqued the interest of cognitive scientists because it provides compelling evidence that learning can shape a category's perceptual boundaries. CP has been particularly closely studied in human face perception. In nonprimates, there is evidence for CP for sound and color discrimination, but not for image or face discrimination. Here, we investigate the potential for learned CP in a lower vertebrate, the damselfish Pomacentrus amboinensis. Specifically, we tested whether the ability of these fish to discriminate complex facial patterns tracked categorical rather than metric differences in the stimuli. We first trained the fish to discriminate sets of two facial patterns. Next, we morphed between these patterns and determined the just noticeable difference (JND) between a morph and the original image. Finally, we tested for CP by analyzing the discrimination ability of the fish for pairs of JND stimuli along the spectrum of morphs between two original images. Discrimination performance was significant for the image pair straddling the boundary between categories, and at chance for equivalent stimulus pairs on either side, thus producing the classic "category boundary" effect. Our results reveal how perception can be influenced in a top-down manner even in the absence of a visual cortex.
Affiliation(s)
- Amira N Parker
- School of Biomedical Sciences, The University of Queensland, Brisbane, Queensland, Australia
- Guy M Wallis
- School of Human Movement and Nutrition Sciences, The University of Queensland, Brisbane, Queensland, Australia
- Rainer Obergrussberger
- School of Biomedical Sciences, The University of Queensland, Brisbane, Queensland, Australia
- Ulrike E Siebeck
- School of Biomedical Sciences, The University of Queensland, Brisbane, Queensland, Australia
|
30
|
Kessler DM, Ananthakrishnan S, Smith SB, D'Onofrio K, Gifford RH. Frequency Following Response and Speech Recognition Benefit for Combining a Cochlear Implant and Contralateral Hearing Aid. Trends Hear 2020; 24:2331216520902001. [PMID: 32003296 PMCID: PMC7257083 DOI: 10.1177/2331216520902001] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
Abstract
Multiple studies have shown significant speech recognition benefit when acoustic hearing is combined with a cochlear implant (CI) for a bimodal hearing configuration. However, this benefit varies greatly between individuals. There are few clinical measures correlated with bimodal benefit and those correlations are driven by extreme values prohibiting data-driven, clinical counseling. This study evaluated the relationship between neural representation of fundamental frequency (F0) and temporal fine structure via the frequency following response (FFR) in the nonimplanted ear as well as spectral and temporal resolution of the nonimplanted ear and bimodal benefit for speech recognition in quiet and noise. Participants included 14 unilateral CI users who wore a hearing aid (HA) in the nonimplanted ear. Testing included speech recognition in quiet and in noise with the HA-alone, CI-alone, and in the bimodal condition (i.e., CI + HA), measures of spectral and temporal resolution in the nonimplanted ear, and FFR recording for a 170-ms /da/ stimulus in the nonimplanted ear. Even after controlling for four-frequency pure-tone average, there was a significant correlation (r = .83) between FFR F0 amplitude in the nonimplanted ear and bimodal benefit. Other measures of auditory function of the nonimplanted ear were not significantly correlated with bimodal benefit. The FFR holds potential as an objective tool that may allow data-driven counseling regarding expected benefit from the nonimplanted ear. It is possible that this information may eventually be used for clinical decision-making, particularly in difficult-to-test populations such as young children, regarding effectiveness of bimodal hearing versus bilateral CI candidacy.
Affiliation(s)
- David M Kessler
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Spencer B Smith
- Department of Communication Sciences and Disorders, The University of Texas at Austin, TX, USA
- Kristen D'Onofrio
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- René H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, USA
|
31
|
Barbaroux M, Norena A, Rasamimanana M, Castet E, Besson M. From Psychoacoustics to Brain Waves: A Longitudinal Approach to Novel Word Learning. J Cogn Neurosci 2020; 33:8-27. [PMID: 32985943 DOI: 10.1162/jocn_a_01629] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Musical expertise has been shown to positively influence high-level speech abilities such as novel word learning. This study addresses the question of whether enhanced low-level perceptual skills causally drive successful novel word learning. We used a longitudinal approach with psychoacoustic procedures to train 2 groups of nonmusicians either on pitch discrimination or on intensity discrimination, using harmonic complex sounds. After short (approximately 3 hr) psychoacoustic training, discrimination thresholds were lower on the specific feature (pitch or intensity) that was trained. Moreover, compared to the intensity group, participants trained on pitch were faster to categorize words varying in pitch. Finally, although the N400 components in both the word learning phase and in the semantic task were larger in the pitch group than in the intensity group, no between-group differences were found at the behavioral level in the semantic task. Thus, these results provide mixed evidence that enhanced perception of relevant features through a few hours of acoustic training with harmonic sounds causally impacts the categorization of speech sounds as well as novel word learning. These results are discussed within the framework of near and far transfer effects from music training to speech processing.
|
32
|
Tecoulesco L, Skoe E, Naigles LR. Phonetic discrimination mediates the relationship between auditory brainstem response stability and syntactic performance. Brain Lang 2020; 208:104810. [PMID: 32683226 DOI: 10.1016/j.bandl.2020.104810] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/08/2019] [Revised: 02/03/2020] [Accepted: 04/27/2020] [Indexed: 06/11/2023]
Abstract
Syntactic, lexical, and phonological/phonetic knowledge are vital aspects of macro level language ability. Prior research has predominantly focused on environmental or cortical sources of individual differences in these areas; however, a growing literature suggests an auditory brainstem contribution to language performance in both typically developing (TD) populations and children with autism spectrum disorder (ASD). This study investigates whether one aspect of auditory brainstem responses (ABRs), neural response stability, which is a metric reflecting trial-by-trial consistency in the neural encoding of sound, can predict syntactic, lexical, and phonetic performance in TD and ASD school-aged children. Pooling across children with ASD and TD, results showed that higher neural stability in response to the syllable /da/ was associated with better phonetic discrimination, and with better syntactic performance on a standardized measure. Furthermore, phonetic discrimination was a successful mediator of the relationship between neural stability and syntactic performance. This study supports the growing body of literature that stable subcortical neural encoding of sound is important for successful language performance.
Affiliation(s)
- Lisa Tecoulesco
- University of Connecticut Psychological Sciences, United States.
- Erika Skoe
- University of Connecticut, Speech Language and Hearing Sciences, United States
|
33
|
Lee J, Han JH, Lee HJ. Long-Term Musical Training Alters Auditory Cortical Activity to the Frequency Change. Front Hum Neurosci 2020; 14:329. [PMID: 32973478 PMCID: PMC7471721 DOI: 10.3389/fnhum.2020.00329] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2020] [Accepted: 07/24/2020] [Indexed: 11/13/2022] Open
Abstract
Objective: The ability to detect frequency variation is a fundamental skill necessary for speech perception. It is known that musical expertise is associated with a range of auditory perceptual skills, including discriminating frequency change, which suggests the neural encoding of spectral features can be enhanced by musical training. In this study, we measured auditory cortical responses to frequency change in musicians to examine the relationships between N1/P2 responses and behavioral performance/musical training. Methods: Behavioral and electrophysiological data were obtained from professional musicians and age-matched non-musician participants. Behavioral data included frequency discrimination detection thresholds for no threshold-equalizing noise (TEN), +5, 0, and -5 signal-to-noise ratio settings. Auditory-evoked responses were measured using a 64-channel electroencephalogram (EEG) system in response to frequency changes in ongoing pure tones consisting of 250 and 4,000 Hz, and the magnitudes of frequency change were 10%, 25% or 50% from the base frequencies. N1 and P2 amplitudes and latencies as well as dipole source activation in the left and right hemispheres were measured for each condition. Results: Compared to the non-musician group, behavioral thresholds in the musician group were lower for frequency discrimination in quiet conditions only. The scalp-recorded N1 amplitudes were modulated as a function of frequency change. P2 amplitudes in the musician group were larger than in the non-musician group. Dipole source analysis showed that P2 dipole activity to frequency changes was lateralized to the right hemisphere, with greater activity in the musician group regardless of the hemisphere side. Additionally, N1 amplitudes to frequency changes were positively related to behavioral thresholds for frequency discrimination while enhanced P2 amplitudes were associated with a longer duration of musical training. 
Conclusions: Our results demonstrate that auditory cortical potentials evoked by frequency change are related to behavioral thresholds for frequency discrimination in musicians. Larger P2 amplitudes in musicians compared to non-musicians reflects musical training-induced neural plasticity.
Affiliation(s)
- Jihyun Lee
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, South Korea
- Ji-Hye Han
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, South Korea
- Hyo-Jeong Lee
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, South Korea; Department of Otorhinolaryngology, College of Medicine, Hallym University, Anyang, South Korea
|
34
|
Bidelman GM, Yoo J. Musicians Show Improved Speech Segregation in Competitive, Multi-Talker Cocktail Party Scenarios. Front Psychol 2020; 11:1927. [PMID: 32973610 PMCID: PMC7461890 DOI: 10.3389/fpsyg.2020.01927] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2020] [Accepted: 07/13/2020] [Indexed: 12/05/2022] Open
Abstract
Studies suggest that long-term music experience enhances the brain’s ability to segregate speech from noise. Musicians’ “speech-in-noise (SIN) benefit” is based largely on perception from simple figure-ground tasks rather than competitive, multi-talker scenarios that offer realistic spatial cues for segregation and engage binaural processing. We aimed to investigate whether musicians show perceptual advantages in cocktail party speech segregation in a competitive, multi-talker environment. We used the coordinate response measure (CRM) paradigm to measure speech recognition and localization performance in musicians vs. non-musicians in a simulated 3D cocktail party environment conducted in an anechoic chamber. Speech was delivered through a 16-channel speaker array distributed around the horizontal soundfield surrounding the listener. Participants recalled the color, number, and perceived location of target callsign sentences. We manipulated task difficulty by varying the number of additional maskers presented at other spatial locations in the horizontal soundfield (0–1–2–3–4–6–8 multi-talkers). Musicians obtained faster and better speech recognition amidst up to around eight simultaneous talkers and showed less noise-related decline in performance with increasing interferers than their non-musician peers. Correlations revealed associations between listeners’ years of musical training and CRM recognition and working memory. However, better working memory correlated with better speech streaming. Basic (QuickSIN) but not more complex (speech streaming) SIN processing was still predicted by music training after controlling for working memory. Our findings confirm a relationship between musicianship and naturalistic cocktail party speech streaming but also suggest that cognitive factors at least partially drive musicians’ SIN advantage.
Affiliation(s)
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States; Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, United States
- Jessica Yoo
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States
|
35
|
Giroud N, Baum SR, Gilbert AC, Phillips NA, Gracco V. Earlier age of second language learning induces more robust speech encoding in the auditory brainstem in adults, independent of amount of language exposure during early childhood. Brain Lang 2020; 207:104815. [PMID: 32535187 DOI: 10.1016/j.bandl.2020.104815] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/16/2019] [Revised: 05/20/2020] [Accepted: 05/27/2020] [Indexed: 06/11/2023]
Abstract
Learning a second language (L2) at a young age is a driving factor of functional neuroplasticity in the auditory brainstem. To date, it remains unclear whether these effects remain stable until adulthood and to what degree the amount of exposure to the L2 in early childhood might affect their outcome. We compared three groups of adult English-French bilinguals in their ability to categorize English vowels in relation to their frequency following responses (FFR) evoked by the same vowels. At the time of testing, cognitive abilities as well as fluency in both languages were matched between the (1) simultaneous bilinguals (SIM, N = 18); (2) sequential bilinguals with L1-English (N = 14); and (3) sequential bilinguals with L1-French (N = 11). Our results show that the L1-English group shows sharper category boundaries in identification of the vowels compared to the L1-French group. Furthermore, the same pattern was reflected in the FFRs (i.e., larger FFR responses in L1-English > SIM > L1-French), while again only the difference between the L1-English and the L1-French group was statistically significant; nonetheless, there was a trend towards larger FFR in SIM compared to L1-French. Our data extend previous literature showing that exposure to a language during the first years of life induces functional neuroplasticity in the auditory brainstem that remains stable until at least young adulthood. Furthermore, the findings suggest that amount of exposure (i.e., 100% vs. 50%) to that language does not differentially shape the robustness of the perceptual abilities or the auditory brainstem encoding of phonetic categories of the language. Statement of significance: Previous studies have indicated that early age of L2 acquisition induces functional neuroplasticity in the auditory brainstem during processing of the L2.
This study compared three groups of adult bilinguals who differed in their age of L2 acquisition as well as the amount of exposure to the L2 during early childhood. We demonstrate for the first time that the neuroplastic effect in the brainstem remains stable until young adulthood and that the amount of L2 exposure does not influence behavioral or brainstem plasticity. Our study provides novel insights into low-level auditory plasticity as a function of varying bilingual experience.
Affiliation(s)
- Nathalie Giroud
- Department of Psychology, Centre for Research in Human Development (CRDH), Concordia University, Montréal, Canada; Centre for Research on Brain, Language, and Music (CRBLM), McGill University, Montréal, Canada.
- Shari R Baum
- Centre for Research on Brain, Language, and Music (CRBLM), McGill University, Montréal, Canada; School of Communication Sciences and Disorders, McGill University, Montréal, Canada
- Annie C Gilbert
- Centre for Research on Brain, Language, and Music (CRBLM), McGill University, Montréal, Canada; School of Communication Sciences and Disorders, McGill University, Montréal, Canada.
- Natalie A Phillips
- Department of Psychology, Centre for Research in Human Development (CRDH), Concordia University, Montréal, Canada; Centre for Research on Brain, Language, and Music (CRBLM), McGill University, Montréal, Canada; Lady Davis Institute for Medical Research, Jewish General Hospital, Montréal, Canada.
| | - Vincent Gracco
- Centre for Research on Brain, Language, and Music (CRBLM), McGill University, Montréal, Canada; School of Communication Sciences and Disorders, McGill University, Montréal, Canada; Haskins Laboratories, Yale University, New Haven, United States
| |
Collapse
36. Puschmann S, Baillet S, Zatorre RJ. Musicians at the Cocktail Party: Neural Substrates of Musical Training During Selective Listening in Multispeaker Situations. Cereb Cortex 2020; 29:3253-3265. [PMID: 30137239] [DOI: 10.1093/cercor/bhy193] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0]
Abstract
Musical training has been demonstrated to benefit speech-in-noise perception. It is however unknown whether this effect translates to selective listening in cocktail party situations, and if so what its neural basis might be. We investigated this question using magnetoencephalography-based speech envelope reconstruction and a sustained selective listening task, in which participants with varying amounts of musical training attended to 1 of 2 speech streams while detecting rare target words. Cortical frequency-following responses (FFR) and auditory working memory were additionally measured to dissociate musical training-related effects on low-level auditory processing versus higher cognitive function. Results show that the duration of musical training is associated with a reduced distracting effect of competing speech on target detection accuracy. Remarkably, more musical training was related to a robust neural tracking of both the to-be-attended and the to-be-ignored speech stream, up until late cortical processing stages. Musical training-related increases in FFR power were associated with a robust speech tracking in auditory sensory areas, whereas training-related differences in auditory working memory were linked to an increased representation of the to-be-ignored stream beyond auditory cortex. Our findings suggest that musically trained persons can use additional information about the distracting stream to limit interference by competing speech.
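The speech-envelope reconstruction used in studies like this one is, at its core, a regularized linear "backward model" mapping time-lagged neural channels back to the acoustic envelope; attended and ignored streams are then compared by how well each envelope can be reconstructed. A minimal numpy sketch under that assumption (function names, lag count, and regularization are illustrative, not the authors' pipeline):

```python
import numpy as np

def lagged_design(neural, max_lag):
    """Stack time-lagged copies of each channel (backward decoding model)."""
    t, ch = neural.shape
    x = np.zeros((t, ch * (max_lag + 1)))
    for lag in range(max_lag + 1):
        x[lag:, lag * ch:(lag + 1) * ch] = neural[:t - lag]
    return x

def train_reconstructor(neural, envelope, max_lag=10, lam=1.0):
    """Ridge regression from lagged neural data to the speech envelope."""
    x = lagged_design(neural, max_lag)
    # Closed-form ridge solution: (X'X + lam*I)^-1 X'y
    w = np.linalg.solve(x.T @ x + lam * np.eye(x.shape[1]), x.T @ envelope)
    return w

def tracking_score(neural, envelope, w, max_lag=10):
    """Pearson correlation between reconstructed and actual envelope."""
    recon = lagged_design(neural, max_lag) @ w
    return np.corrcoef(recon, envelope)[0, 1]
```

In an attention paradigm, a decoder trained on the attended envelope would typically yield a higher `tracking_score` for the attended than the ignored stream.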
Affiliation(s)
- Sebastian Puschmann: Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada; Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada
- Sylvain Baillet: Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada; Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada
- Robert J Zatorre: Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada; Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada; International Laboratory for Brain, Music and Sound Research, Montreal, Quebec, Canada
37. Yashaswini L, Maruthy S. Effect of Music Training on Categorical Perception of Speech and Music. J Audiol Otol 2020; 24:140-148. [PMID: 32575954] [PMCID: PMC7364187] [DOI: 10.7874/jao.2019.00500] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Background and Objectives The aim of this study was to evaluate the effect of music training on the characteristics of auditory perception of speech and music. The perception of speech and music stimuli was assessed across their respective stimulus continua, and the resultant plots were compared between musicians and non-musicians. Subjects and Methods Thirty musicians with formal music training and twenty-seven non-musicians participated in the study (age: 20 to 30 years). They were assessed for identification of consonant-vowel syllables (/da/ to /ga/), vowels (/u/ to /a/), a vocal music note (/ri/ to /ga/), and an instrumental music note (/ri/ to /ga/) across their respective stimulus continua. Each continuum contained 15 tokens with equal step size between adjacent tokens. The resultant identification scores were plotted against each token and analyzed for the presence of a categorical boundary. If a categorical boundary was found, the plots were analyzed for six parameters of categorical perception: the point of 50% crossover, the lower edge of the categorical boundary, the upper edge of the categorical boundary, the phoneme boundary width, the slope, and the intercepts. Results Overall, the results showed that both speech and music are perceived differently by musicians and non-musicians. In musicians, both speech and music are categorically perceived, while in non-musicians, only speech is perceived categorically. Conclusions The findings of the present study indicate that music is perceived categorically by musicians, even when the stimulus is devoid of vocal tract features. The findings support the view that categorical perception is strongly influenced by training, and the results are discussed in light of the motor theory of speech perception.
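Boundary measures of this kind are conventionally recovered by fitting a logistic function to the identification scores along the continuum: the 50% crossover is the fitted midpoint, and the boundary edges and width follow from the fitted slope. A hedged sketch covering a subset of the reported parameters (the 25%/75% criteria for the boundary edges are an assumption, not necessarily this study's definition):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Two-parameter logistic psychometric function."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

def cp_parameters(tokens, p_ident):
    """Fit an identification curve and derive categorical-boundary measures."""
    (x0, k), _ = curve_fit(logistic, tokens, p_ident,
                           p0=[np.median(tokens), 1.0])
    # Invert the logistic at 25% and 75% for the boundary edges.
    lower = x0 + np.log(0.25 / 0.75) / k
    upper = x0 + np.log(0.75 / 0.25) / k
    return {"crossover": x0, "lower_edge": lower, "upper_edge": upper,
            "width": upper - lower, "slope": k}
```

A sharp (categorical) listener yields a large `slope` and a narrow `width`; a gradual listener yields the opposite.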
38. Couth S, Prendergast G, Guest H, Munro KJ, Moore DR, Plack CJ, Ginsborg J, Dawes P. Investigating the effects of noise exposure on self-report, behavioral and electrophysiological indices of hearing damage in musicians with normal audiometric thresholds. Hear Res 2020; 395:108021. [PMID: 32631495] [DOI: 10.1016/j.heares.2020.108021] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8]
Abstract
Musicians are at risk of hearing loss due to prolonged noise exposure, but they may also be at risk of early sub-clinical hearing damage, such as cochlear synaptopathy. In the current study, we investigated the effects of noise exposure on electrophysiological, behavioral and self-report correlates of hearing damage in young adult (age range = 18-27 years) musicians and non-musicians with normal audiometric thresholds. Early-career musicians (n = 76) and non-musicians (n = 47) completed a test battery including the Noise Exposure Structured Interview, pure-tone audiometry (PTA; 0.25-8 kHz), extended high-frequency (EHF; 12 and 16 kHz) thresholds, otoacoustic emissions (OAEs), auditory brainstem responses (ABRs), speech perception in noise (SPiN), and self-reported tinnitus, hyperacusis and hearing in noise difficulties. Total lifetime noise exposure was similar between musicians and non-musicians, the majority of which could be accounted for by recreational activities. Musicians showed significantly greater ABR wave I/V ratios than non-musicians and were also more likely to report experience of - and/or more severe - tinnitus, hyperacusis and hearing in noise difficulties, irrespective of noise exposure. A secondary analysis revealed that individuals with the highest levels of noise exposure had reduced outer hair cell function compared to individuals with the lowest levels of noise exposure, as measured by OAEs. OAE level was also related to PTA and EHF thresholds. High levels of noise exposure were also associated with a significant increase in ABR wave V latency, but only for males, and a higher prevalence and severity of hyperacusis. These findings suggest that there may be sub-clinical effects of noise exposure on various hearing metrics even at a relatively young age, but do not support a link between lifetime noise exposure and proxy measures of cochlear synaptopathy such as ABR wave amplitudes and SPiN. 
Closely monitoring OAEs, PTA and EHF thresholds when conventional PTA is within the clinically 'normal' range could provide a useful early metric of noise-induced hearing damage. This may be particularly relevant to early-career musicians as they progress through a period of intensive musical training, and thus interventions to protect hearing longevity may be vital.
Affiliation(s)
- Samuel Couth: Manchester Centre for Audiology and Deafness, University of Manchester, UK
- Hannah Guest: Manchester Centre for Audiology and Deafness, University of Manchester, UK
- Kevin J Munro: Manchester Centre for Audiology and Deafness, University of Manchester, UK; Manchester Academic Health Science Centre, Manchester University Hospitals NHS Foundation Trust, UK
- David R Moore: Manchester Centre for Audiology and Deafness, University of Manchester, UK; Communication Sciences Research Center, Cincinnati Children's Hospital Medical Centre, OH, USA
- Christopher J Plack: Manchester Centre for Audiology and Deafness, University of Manchester, UK; Department of Psychology, Lancaster University, UK
- Piers Dawes: Manchester Centre for Audiology and Deafness, University of Manchester, UK; Department of Linguistics, Macquarie University, Sydney, Australia
39. Sorati M, Behne DM. Audiovisual Modulation in Music Perception for Musicians and Non-musicians. Front Psychol 2020; 11:1094. [PMID: 32547458] [PMCID: PMC7273518] [DOI: 10.3389/fpsyg.2020.01094] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5]
Abstract
In audiovisual music perception, visual information from a musical instrument being played is available prior to the onset of the corresponding musical sound and consequently allows a perceiver to form a prediction about the upcoming audio music. This prediction in audiovisual music perception, compared to auditory music perception, leads to lower N1 and P2 amplitudes and latencies. Although previous research suggests that audiovisual experience, such as previous musical experience, may enhance this prediction, a remaining question is to what extent musical experience modifies N1 and P2 amplitudes and latencies. Furthermore, corresponding event-related phase modulations quantified as inter-trial phase coherence (ITPC) have not previously been reported for audiovisual music perception. In the current study, audio-video recordings of a keyboard key being played were presented to musicians and non-musicians in audio only (AO), video only (VO), and audiovisual (AV) conditions. With predictive movements from playing the keyboard isolated from AV music perception (AV-VO), the current findings demonstrated that, compared to the AO condition, both groups had a similar decrease in N1 amplitude and latency, and P2 amplitude, along with correspondingly lower ITPC values in the delta, theta, and alpha frequency bands. However, while musicians showed lower ITPC values in the beta band in AV-VO compared to AO, non-musicians did not show this pattern. Findings indicate that AV perception may be broadly correlated with auditory perception, and differences between musicians and non-musicians further indicate musical experience to be a specific factor influencing AV perception. Predicting an upcoming sound in AV music perception may involve visual predictive processes, as well as beta-band oscillations, which may be influenced by years of musical training. This study highlights possible interconnectivity in AV perception as well as potential modulation with experience.
Affiliation(s)
- Marzieh Sorati: Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
- Dawn Marie Behne: Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
40. Richard C, Neel ML, Jeanvoine A, Connell SM, Gehred A, Maitre NL. Characteristics of the Frequency-Following Response to Speech in Neonates and Potential Applicability in Clinical Practice: A Systematic Review. J Speech Lang Hear Res 2020; 63:1618-1635. [PMID: 32407639] [DOI: 10.1044/2020_jslhr-19-00322] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3]
Abstract
Purpose We sought to critically analyze and evaluate published evidence regarding the feasibility and clinical potential of the frequency-following response (FFR) to speech recordings in neonates (birth to 28 days) for predicting neurodevelopmental outcomes. Method A systematic search of MeSH terms in the Cumulative Index to Nursing and Allied Health Literature, Embase, Google Scholar, Ovid MEDLINE(R) and E-Pub Ahead of Print, In-Process & Other Non-Indexed Citations and Daily, Web of Science, SCOPUS, Cochrane Library, and ClinicalTrials.gov was performed. Manual review of all items identified in the search was performed by two independent reviewers. Articles were evaluated based on the level of methodological quality and evidence according to the RTI item bank. Results Seven articles met inclusion criteria. None of the included studies reported neurodevelopmental outcomes past 3 months of age. Quality of the evidence ranged from moderate to high. Protocol variations were frequent. Conclusions Based on this systematic review, the FFR to speech can capture both temporal and spectral acoustic features in neonates. It can be recorded accurately, quickly, and easily at the infant's bedside. However, at this time, further studies are needed to identify and validate which FFR features could be incorporated into the standard evaluation of infant sound processing in subcortico-cortical networks. This review identifies the need for further research focused on identifying specific features of the neonatal FFR, particularly those with predictive value for early childhood outcomes, to help guide targeted early speech and hearing interventions.
Affiliation(s)
- Céline Richard: Center for Perinatal Research and Department of Pediatrics, Nationwide Children's Hospital, Columbus, OH; Laboratory for Investigative Neurophysiology, Department of Radiology and Department of Clinical Neurosciences, University Hospital Center and University of Lausanne, Switzerland
- Mary Lauren Neel: Center for Perinatal Research and Department of Pediatrics, Nationwide Children's Hospital, Columbus, OH
- Arnaud Jeanvoine: Center for Perinatal Research and Department of Pediatrics, Nationwide Children's Hospital, Columbus, OH
- Sharon Mc Connell: Center for Perinatal Research and Department of Pediatrics, Nationwide Children's Hospital, Columbus, OH
- Alison Gehred: Medical Library Division, Nationwide Children's Hospital, Columbus, OH
- Nathalie L Maitre: Center for Perinatal Research and Department of Pediatrics, Nationwide Children's Hospital, Columbus, OH; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
41. Bidelman GM, Bush LC, Boudreaux AM. Effects of Noise on the Behavioral and Neural Categorization of Speech. Front Neurosci 2020; 14:153. [PMID: 32180700] [PMCID: PMC7057933] [DOI: 10.3389/fnins.2020.00153] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8]
Abstract
We investigated whether the categorical perception (CP) of speech might also provide a mechanism that aids its perception in noise. We varied signal-to-noise ratio (SNR) [clear, 0 dB, -5 dB] while listeners classified an acoustic-phonetic continuum (/u/ to /a/). Noise-related changes in behavioral categorization were only observed at the lowest SNR. Event-related brain potentials (ERPs) differentiated category vs. category-ambiguous speech by the P2 wave (~180-320 ms). Paralleling behavior, neural responses to speech with clear phonetic status (i.e., continuum endpoints) were robust to noise down to -5 dB SNR, whereas responses to ambiguous tokens declined with decreasing SNR. Results demonstrate that phonetic speech representations are more resistant to degradation than corresponding acoustic representations. Findings suggest the mere process of binning speech sounds into categories provides a robust mechanism to aid figure-ground speech perception by fortifying abstract categories from the acoustic signal and making the speech code more resistant to external interferences.
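The SNR manipulation described above (clear, 0 dB, -5 dB) amounts to scaling the noise so that the RMS ratio of signal to noise hits the target value before mixing. A minimal sketch of that standard operation (not the authors' stimulus-generation code):

```python
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    """Scale the noise so the mixture has the requested SNR (RMS, in dB)."""
    rms_signal = np.sqrt(np.mean(signal ** 2))
    rms_noise = np.sqrt(np.mean(noise ** 2))
    # Target noise RMS: rms_signal / 10^(SNR/20)
    target_rms_noise = rms_signal / (10 ** (snr_db / 20))
    return signal + noise * (target_rms_noise / rms_noise)
```

At -5 dB SNR the scaled noise carries about 1.78 times the signal's RMS, which is why category-ambiguous tokens degrade there while endpoint tokens remain identifiable.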
Affiliation(s)
- Gavin M Bidelman: Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States; Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, United States
- Lauren C Bush: School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States
- Alex M Boudreaux: School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States
42. Al-Fahad R, Yeasin M, Bidelman GM. Decoding of single-trial EEG reveals unique states of functional brain connectivity that drive rapid speech categorization decisions. J Neural Eng 2020; 17:016045. [PMID: 31822643] [PMCID: PMC7004853] [DOI: 10.1088/1741-2552/ab6040] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8]
Abstract
OBJECTIVE Categorical perception (CP) is an inherent property of speech perception. The response time (RT) of listeners' perceptual speech identification is highly sensitive to individual differences. While the neural correlates of CP have been well studied in terms of the regional contributions of the brain to behavior, the functional connectivity patterns that signify individual differences in listeners' speed (RT) for speech categorization are less clear. In this study, we introduce a novel approach to address these questions. APPROACH We applied several computational approaches to the EEG, including graph mining, machine learning (i.e., support vector machine), and stability selection, to investigate the unique brain states (functional neural connectivity) that predict the speed of listeners' behavioral decisions. MAIN RESULTS We infer that (i) listeners' perceptual speed is directly related to dynamic variations in their brain connectomics, (ii) global network assortativity and efficiency distinguished fast, medium, and slow RTs, (iii) the functional network underlying speeded decisions increases in negative assortativity (i.e., became disassortative) for slower RTs, (iv) slower categorical speech decisions cause excessive use of neural resources and more aberrant information flow within the CP circuitry, and (v) slower responders tended to utilize functional brain networks excessively (or inappropriately), whereas fast responders (with lower global efficiency) utilized the same neural pathways but with more restricted organization. SIGNIFICANCE Findings show that neural classifiers (SVM) coupled with stability selection correctly classify behavioral RTs from functional connectivity alone with over 92% accuracy (AUC = 0.9). Our results corroborate previous studies by supporting the engagement of similar temporal (STG), parietal, motor, and prefrontal regions in CP using an entirely data-driven approach.
Affiliation(s)
- Rakib Al-Fahad: Department of Electrical and Computer Engineering, University of Memphis, Memphis, TN 38152, USA
- Mohammed Yeasin: Department of Electrical and Computer Engineering, University of Memphis, Memphis, TN 38152, USA; Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA
- Gavin M. Bidelman: Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, USA
43. Lewis GA, Bidelman GM. Autonomic Nervous System Correlates of Speech Categorization Revealed Through Pupillometry. Front Neurosci 2020; 13:1418. [PMID: 31998068] [PMCID: PMC6967406] [DOI: 10.3389/fnins.2019.01418] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0]
Abstract
Human perception requires the many-to-one mapping between continuous sensory elements and discrete categorical representations. This grouping operation underlies the phenomenon of categorical perception (CP)-the experience of perceiving discrete categories rather than gradual variations in signal input. Speech perception requires CP because acoustic cues do not share constant relations with perceptual-phonetic representations. Beyond facilitating perception of unmasked speech, we reasoned CP might also aid the extraction of target speech percepts from interfering sound sources (i.e., noise) by generating additional perceptual constancy and reducing listening effort. Specifically, we investigated how noise interference impacts cognitive load and perceptual identification of unambiguous (i.e., categorical) vs. ambiguous stimuli. Listeners classified a speech vowel continuum (/u/-/a/) at various signal-to-noise ratios (SNRs [unmasked, 0 and -5 dB]). Continuous recordings of pupil dilation measured processing effort, with larger, later dilations reflecting increased listening demand. Critical comparisons were between time-locked changes in eye data in response to unambiguous (i.e., continuum endpoints) tokens vs. ambiguous tokens (i.e., continuum midpoint). Unmasked speech elicited faster responses and sharper psychometric functions, which steadily declined in noise. Noise increased pupil dilation across stimulus conditions, but not straightforwardly. Noise-masked speech modulated peak pupil size (i.e., [0 and -5 dB] > unmasked). In contrast, peak dilation latency varied with both token and SNR. Interestingly, categorical tokens elicited earlier pupil dilation relative to ambiguous tokens. Our pupillary data suggest CP reconstructs auditory percepts under challenging listening conditions through interactions between stimulus salience and listeners' internalized effort and/or arousal.
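The two pupillometric measures compared above, peak pupil size and peak dilation latency, reduce computationally to baseline-correcting each time-locked trace and locating its post-stimulus maximum. A minimal sketch (the baseline window and variable names are illustrative assumptions, not the authors' analysis parameters):

```python
import numpy as np

def pupil_peak(trace, times, baseline=(-0.5, 0.0)):
    """Baseline-correct a pupil trace; return peak dilation and its latency.

    trace: pupil diameter samples; times: seconds relative to stimulus onset.
    """
    # Subtract the mean pre-stimulus diameter.
    base = trace[(times >= baseline[0]) & (times < baseline[1])].mean()
    corrected = trace - base
    # Peak search restricted to the post-stimulus interval.
    post = times >= 0
    i = np.argmax(corrected[post])
    return corrected[post][i], times[post][i]
```

Under this scheme, "larger, later dilations" correspond to a bigger returned amplitude and a longer returned latency for ambiguous tokens than for categorical ones.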
Affiliation(s)
- Gwyneth A Lewis: Institute for Intelligent Systems, The University of Memphis, Memphis, TN, United States; School of Communication Sciences and Disorders, The University of Memphis, Memphis, TN, United States
- Gavin M Bidelman: Institute for Intelligent Systems, The University of Memphis, Memphis, TN, United States; School of Communication Sciences and Disorders, The University of Memphis, Memphis, TN, United States; Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, United States
44. MacCutcheon D, Füllgrabe C, Eccles R, van der Linde J, Panebianco C, Ljung R. Investigating the Effect of One Year of Learning to Play a Musical Instrument on Speech-in-Noise Perception and Phonological Short-Term Memory in 5-to-7-Year-Old Children. Front Psychol 2020; 10:2865. [PMID: 31998174] [PMCID: PMC6970197] [DOI: 10.3389/fpsyg.2019.02865] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5]
Abstract
The benefits in speech-in-noise perception, language, and cognition brought about by extensive musical training in adults and children have been demonstrated in a number of cross-sectional studies. Therefore, this study aimed to investigate whether one year of school-delivered musical training, consisting of individual and group instrumental classes, was capable of producing advantages for speech-in-noise perception and phonological short-term memory in children tested in a simulated classroom environment. Forty-one children, aged 5-7 years at the first measurement point, participated in the study and attended either a music-focused or a sport-focused private school with an otherwise equivalent school curriculum. The children's ability to detect number and color words in noise was measured under a number of conditions, including different masker types (speech-shaped noise, single-talker background) and varying spatial combinations of target and masker (spatially collocated, spatially separated). Additionally, a cognitive factor essential to speech perception, namely phonological short-term memory, was assessed. Findings were unable to confirm that musical training of the frequency and duration administered was associated with a musician advantage for either speech in noise, under any of the masker or spatial conditions tested, or phonological short-term memory.
Affiliation(s)
- Douglas MacCutcheon: Department of Building, Energy and Environmental Engineering, Högskolan i Gävle, Gävle, Sweden; Department of Music, University of Pretoria, Pretoria, South Africa
- Christian Füllgrabe: School of Sport, Exercise and Health Sciences, Loughborough University, Loughborough, United Kingdom
- Renata Eccles: Department of Music, University of Pretoria, Pretoria, South Africa; Department of Speech-Language Pathology and Audiology, University of Pretoria, Pretoria, South Africa
- Jeannie van der Linde: Department of Music, University of Pretoria, Pretoria, South Africa; Department of Speech-Language Pathology and Audiology, University of Pretoria, Pretoria, South Africa
- Robert Ljung: Department of Building, Energy and Environmental Engineering, Högskolan i Gävle, Gävle, Sweden
45. Sorati M, Behne DM. Musical Expertise Affects Audiovisual Speech Perception: Findings From Event-Related Potentials and Inter-trial Phase Coherence. Front Psychol 2019; 10:2562. [PMID: 31803107] [PMCID: PMC6874039] [DOI: 10.3389/fpsyg.2019.02562] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4]
Abstract
In audiovisual speech perception, visual information from a talker's face during mouth articulation is available before the onset of the corresponding audio speech, and thereby allows the perceiver to use visual information to predict the upcoming audio. This prediction from phonetically congruent visual information modulates audiovisual speech perception and leads to a decrease in N1 and P2 amplitudes and latencies compared to the perception of audio speech alone. Whether audiovisual experience, such as musical training, influences this prediction is unclear, but if so, may explain some of the variation observed in previous research. The current study addresses whether audiovisual speech perception is affected by musical training, first assessing N1 and P2 event-related potentials (ERPs) and, in addition, inter-trial phase coherence (ITPC). Musicians and non-musicians were presented the syllable /ba/ in audio only (AO), video only (VO), and audiovisual (AV) conditions. With the predictive effect of mouth movement isolated from AV speech (AV-VO), results showed that, compared to audio speech, both groups had lower N1 latency and P2 amplitude and latency. Moreover, both groups also showed lower ITPCs in the delta, theta, and beta bands in audiovisual speech perception. However, musicians showed significant suppression of N1 amplitude and desynchronization in the alpha band in audiovisual speech, not present for non-musicians. Collectively, the current findings indicate that early sensory processing can be modified by musical experience, which in turn can explain some of the variation in previous AV speech perception research.
Affiliation(s)
- Marzieh Sorati: Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
46. Ogg M, Carlson TA, Slevc LR. The Rapid Emergence of Auditory Object Representations in Cortex Reflect Central Acoustic Attributes. J Cogn Neurosci 2019; 32:111-123. [PMID: 31560265] [DOI: 10.1162/jocn_a_01472] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8]
Abstract
Human listeners are bombarded by acoustic information that the brain rapidly organizes into coherent percepts of objects and events in the environment, which aids speech and music perception. The efficiency of auditory object recognition belies the critical constraint that acoustic stimuli necessarily require time to unfold. Using magnetoencephalography, we studied the time course of the neural processes that transform dynamic acoustic information into auditory object representations. Participants listened to a diverse set of 36 tokens comprising everyday sounds from a typical human environment. Multivariate pattern analysis was used to decode the sound tokens from the magnetoencephalographic recordings. We show that sound tokens can be decoded from brain activity beginning 90 msec after stimulus onset with peak decoding performance occurring at 155 msec poststimulus onset. Decoding performance was primarily driven by differences between category representations (e.g., environmental vs. instrument sounds), although within-category decoding was better than chance. Representational similarity analysis revealed that these emerging neural representations were related to harmonic and spectrotemporal differences among the stimuli, which correspond to canonical acoustic features processed by the auditory pathway. Our findings begin to link the processing of physical sound properties with the perception of auditory objects and events in cortex.
47. Hudak EM, Bugos J, Andel R, Lister JJ, Ji M, Edwards JD. Keys to staying sharp: A randomized clinical trial of piano training among older adults with and without mild cognitive impairment. Contemp Clin Trials 2019; 84:105789. [PMID: 31226405] [PMCID: PMC6945489] [DOI: 10.1016/j.cct.2019.06.003] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0]
Abstract
BACKGROUND The prevalence of dementia, the most expensive medical condition (Kirschstein, 2000 and Hurd et al., 2013 [1,2]), and of its precursor, mild cognitive impairment (MCI), is increasing [3]. Finding effective intervention strategies to prevent or delay dementia is imperative to public health. Prior research provides compelling evidence that central auditory processing (CAP) deficits are a risk factor for dementia [4-6]. Grounded in the information degradation theory [7,8], we hypothesize that improving brain function at early perceptual levels (i.e., CAP) may be optimal to attenuate cognitive and functional decline and potentially curb dementia prevalence. Piano training is one avenue to enhance cognition [9-13] by facilitating CAP at initial perceptual stages [14-18]. OBJECTIVES The Keys To Staying Sharp study is a two-arm, randomized clinical trial examining the efficacy of piano training relative to music listening instruction to improve CAP, cognition, and everyday function among older adults. In addition, the moderating effects of MCI status on piano training efficacy will be examined and potential mediators of intervention effects will be explored. HYPOTHESES We hypothesize that piano training will improve CAP and cognitive performance, leading to functional improvements. We expect that enhanced CAP will mediate cognitive gains. We further hypothesize that cognitive gains will mediate functional improvements. METHOD We plan to enroll 360 adults aged 60 years and older, who will be randomized to piano training or an active control condition of music listening instruction and will complete pre- and immediate post-intervention assessments of CAP, cognition, and everyday function.
Affiliation(s)
- Elizabeth M Hudak
- Department of Psychiatry and Behavioral Neurosciences, University of South Florida
- Ross Andel
- School of Aging Studies, University of South Florida; Department of Neurology, Charles University and Motol University Hospital, Prague, Czech Republic
- Jennifer J Lister
- Department of Communication Sciences and Disorders, University of South Florida
- Ming Ji
- College of Nursing, University of South Florida
- Jerri D Edwards
- Department of Psychiatry and Behavioral Neurosciences, University of South Florida; Department of Communication Sciences and Disorders, University of South Florida
48
Bidelman GM, Price CN, Shen D, Arnott SR, Alain C. Afferent-efferent connectivity between auditory brainstem and cortex accounts for poorer speech-in-noise comprehension in older adults. Hear Res 2019; 382:107795. [PMID: 31479953 DOI: 10.1016/j.heares.2019.107795] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/08/2019] [Revised: 08/14/2019] [Accepted: 08/22/2019] [Indexed: 12/19/2022]
Abstract
Speech-in-noise (SIN) comprehension deficits in older adults have been linked to changes in both subcortical and cortical auditory evoked responses. However, older adults' difficulty understanding SIN may also be related to an imbalance in signal transmission (i.e., functional connectivity) between the brainstem and auditory cortices. By modeling high-density scalp recordings of speech-evoked responses with sources in the brainstem (BS) and bilateral primary auditory cortices (PAC), we show that, beyond attenuating neural activity, hearing loss in older adults compromises the transmission of speech information between subcortical and early cortical hubs of the speech network. We found that the strength of afferent BS→PAC neural signaling (but not the reverse efferent flow, PAC→BS) varied with mild declines in hearing acuity, and this "bottom-up" functional connectivity robustly predicted older adults' performance in a SIN identification task. Connectivity was also a better predictor of SIN processing than unitary subcortical or cortical responses alone. Our neuroimaging findings suggest that, in older adults, (i) mild hearing loss differentially reduces neural output at several stages of auditory processing (PAC > BS), (ii) bottom-up (subcortical-cortical) connectivity is more sensitive to peripheral hearing loss than top-down (cortical-subcortical) control, and (iii) reduced functional connectivity in afferent auditory pathways plays a significant role in SIN comprehension problems.
Affiliation(s)
- Gavin M Bidelman
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA
- Caitlin N Price
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
- Dawei Shen
- Rotman Research Institute-Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
- Stephen R Arnott
- Rotman Research Institute-Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
- Claude Alain
- Rotman Research Institute-Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada; University of Toronto, Department of Psychology, Toronto, Ontario, Canada; University of Toronto, Institute of Medical Sciences, Toronto, Ontario, Canada
49
Bidelman GM, Walker B. Plasticity in auditory categorization is supported by differential engagement of the auditory-linguistic network. Neuroimage 2019; 201:116022. [PMID: 31310863 DOI: 10.1016/j.neuroimage.2019.116022] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2019] [Revised: 06/30/2019] [Accepted: 07/12/2019] [Indexed: 12/21/2022] Open
Abstract
To construct our perceptual world, the brain categorizes variable sensory cues into behaviorally relevant groupings. Categorical representations are apparent within a distributed fronto-temporo-parietal brain network, but how this neural circuitry is shaped by experience remains undefined. Here, we asked whether speech and music categories might be formed within different auditory-linguistic brain regions depending on listeners' auditory expertise. We recorded EEG in highly skilled (musicians) vs. less experienced (nonmusicians) perceivers as they rapidly categorized speech and musical sounds. Musicians showed perceptual enhancements across domains, yet source EEG data revealed a double dissociation between groups in the neurobiological mechanisms supporting categorization. Whereas musicians coded categories in primary auditory cortex (PAC), nonmusicians recruited non-auditory regions (e.g., inferior frontal gyrus, IFG) to generate category-level information. Functional connectivity confirmed that nonmusicians' increased left IFG involvement reflects stronger routing of signal from PAC to IFG, presumably because sensory coding alone is insufficient to construct categories in less experienced listeners. Our findings establish that auditory experience modulates regional engagement and inter-regional communication in the auditory-linguistic network supporting categorical perception. Whereas early canonical PAC representations are sufficient to generate categories in highly trained ears, less experienced perceivers broadcast information downstream to higher-order linguistic brain areas (IFG) to construct abstract sound labels.
Affiliation(s)
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA
- Breya Walker
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; Department of Psychology, University of Memphis, Memphis, TN, USA; Department of Mathematical Sciences, University of Memphis, Memphis, TN, USA
50
Backer KC, Kessler AS, Lawyer LA, Corina DP, Miller LM. A novel EEG paradigm to simultaneously and rapidly assess the functioning of auditory and visual pathways. J Neurophysiol 2019; 122:1312-1329. [PMID: 31268796 DOI: 10.1152/jn.00868.2018] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/11/2023] Open
Abstract
Objective assessment of the sensory pathways is crucial for understanding their development across the life span and how they may be affected by neurodevelopmental disorders (e.g., autism spectrum) and neurological pathologies (e.g., stroke, multiple sclerosis). Quick and passive measurements, for example, using electroencephalography (EEG), are especially important when working with infants and young children and with patient populations having communication deficits (e.g., aphasia). However, many EEG paradigms are limited to measuring activity from one sensory domain at a time, may be time consuming, and target only a subset of possible responses from that particular sensory domain (e.g., only auditory brainstem responses or only auditory P1-N1-P2 evoked potentials). Thus, we developed a new multisensory paradigm that enables simultaneous, robust, and rapid (6-12 min) measurements of both auditory and visual EEG activity, including auditory brainstem responses, auditory and visual evoked potentials, as well as auditory and visual steady-state responses. This novel method allows us to examine neural activity at various stations along the auditory and visual hierarchies with an ecologically valid continuous speech stimulus while an unrelated video is playing. Both the speech stimulus and the video can be customized for any population of interest. Furthermore, by using two simultaneous visual steady-state stimulation rates, we demonstrate the ability of this paradigm to track both parafoveal and peripheral visual processing concurrently. We report results from 25 healthy young adults that validate this new paradigm. NEW & NOTEWORTHY A novel electroencephalography paradigm enables the rapid, reliable, and noninvasive assessment of neural activity along both auditory and visual pathways concurrently. The paradigm uses an ecologically valid continuous speech stimulus for auditory evaluation and can simultaneously track visual activity in both parafoveal and peripheral visual space. This new methodology may be particularly appealing to researchers and clinicians working with infants and young children and with patient populations with limited communication abilities.
Collapse
Affiliation(s)
- Kristina C Backer
- Center for Mind and Brain, University of California, Davis, California; Department of Cognitive and Information Sciences, University of California, Merced, California
- Andrew S Kessler
- Center for Mind and Brain, University of California, Davis, California
- Laurel A Lawyer
- Center for Mind and Brain, University of California, Davis, California
- David P Corina
- Center for Mind and Brain, University of California, Davis, California; Department of Linguistics, University of California, Davis, California
- Lee M Miller
- Center for Mind and Brain, University of California, Davis, California; Department of Neurobiology, Physiology, and Behavior, University of California, Davis, California