1. Lasnick OHM, Hoeft F. Sensory temporal sampling in time: an integrated model of the TSF and neural noise hypothesis as an etiological pathway for dyslexia. Front Hum Neurosci 2024; 17:1294941. PMID: 38234592; PMCID: PMC10792016; DOI: 10.3389/fnhum.2023.1294941. Open access.
Abstract
Much progress has been made in research on the causal mechanisms of developmental dyslexia. In recent years, the "temporal sampling" account of dyslexia has evolved considerably, with contributions from neurogenetics and novel imaging methods resulting in a much more complex etiological view of the disorder. The original temporal sampling framework implicates disrupted neural entrainment to speech as a causal factor for atypical phonological representations. Yet, empirical findings have not provided clear evidence of a low-level etiology for this endophenotype. In contrast, the neural noise hypothesis presents a theoretical view of the manifestation of dyslexia from the level of genes to behavior. However, its relative novelty (published in 2017) means that empirical research focused on specific predictions is sparse. The current paper reviews dyslexia research using a dual framework from the temporal sampling and neural noise hypotheses and discusses the complementary nature of these two views of dyslexia. We present an argument for an integrated model of sensory temporal sampling as an etiological pathway for dyslexia. Finally, we conclude with a brief discussion of outstanding questions.
Affiliation(s)
- Oliver H. M. Lasnick
- brainLENS Laboratory, Department of Psychological Sciences, University of Connecticut, Storrs, CT, United States
2. Flinker A, Doyle WK, Mehta AD, Devinsky O, Poeppel D. Spectrotemporal modulation provides a unifying framework for auditory cortical asymmetries. Nat Hum Behav 2019; 3:393-405. PMID: 30971792; PMCID: PMC6650286; DOI: 10.1038/s41562-019-0548-z.
Abstract
The principles underlying functional asymmetries in cortex remain debated. For example, it is accepted that speech is processed bilaterally in auditory cortex, but a left hemisphere dominance emerges when the input is interpreted linguistically. The mechanisms, however, are contested: what sound features or processing principles underlie laterality? Recent findings across species (humans, canines, bats) provide converging evidence that spectrotemporal sound features drive asymmetrical responses. Typically, accounts invoke models wherein the hemispheres differ in time-frequency resolution or integration window size. We develop a framework that builds on and unifies prevailing models, using spectrotemporal modulation space. Using signal processing techniques motivated by neural responses, we test this approach employing behavioral and neurophysiological measures. We show how psychophysical judgments align with spectrotemporal modulations and then characterize the neural sensitivities to temporal and spectral modulations. We demonstrate differential contributions from both hemispheres, with a left lateralization for temporal modulations and a weaker right lateralization for spectral modulations. We argue that representations in the modulation domain provide a more mechanistic basis to account for lateralization in auditory cortex.
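The modulation-domain representation invoked in this abstract can be illustrated generically: take the 2-D Fourier transform of a spectrogram, so that the axes become temporal modulation (Hz) and spectral modulation (cycles per frequency channel). A minimal sketch on a synthetic spectrogram follows; this is a textbook construction, not the authors' pipeline, and every parameter value is illustrative.

```python
import numpy as np

# Sketch: the spectrotemporal modulation spectrum of a (toy) spectrogram,
# computed as a 2-D FFT. Axes: temporal modulation in Hz, spectral
# modulation in cycles per frequency channel. Illustrative values only.

n_chan, n_frames = 64, 200        # frequency channels x time frames
frame_rate = 100.0                # spectrogram frames per second

# Toy spectrogram: a 4 Hz temporal modulation, flat across channels.
t = np.arange(n_frames) / frame_rate
spec = 1.0 + 0.5 * np.cos(2 * np.pi * 4.0 * t)[None, :] * np.ones((n_chan, 1))

mod = np.abs(np.fft.fft2(spec - spec.mean()))
temp_mod_axis = np.fft.fftfreq(n_frames, d=1 / frame_rate)  # Hz
spec_mod_axis = np.fft.fftfreq(n_chan)                      # cycles/channel

# The dominant component sits at 4 Hz temporal, 0 cycles/channel spectral.
row, col = np.unravel_index(np.argmax(mod), mod.shape)
print(abs(temp_mod_axis[col]), abs(spec_mod_axis[row]))
```

In this space, "temporal modulations" and "spectral modulations" are simply the two axes of the transformed spectrogram, which is what lets the framework subsume both time-frequency-resolution and integration-window accounts.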
Affiliation(s)
- Adeen Flinker
- Department of Psychology, New York University, New York, NY, USA; Department of Neurology, New York University School of Medicine, New York, NY, USA
- Werner K Doyle
- Department of Neurosurgery, New York University School of Medicine, New York, NY, USA
- Ashesh D Mehta
- Department of Neurosurgery, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Manhasset, NY, USA
- Orrin Devinsky
- Department of Neurology, New York University School of Medicine, New York, NY, USA
- David Poeppel
- Department of Psychology, New York University, New York, NY, USA; Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
3. Amplitude modulation rate dependent topographic organization of the auditory steady-state response in human auditory cortex. Hear Res 2017; 354:102-108. PMID: 28917446; DOI: 10.1016/j.heares.2017.09.003.
Abstract
Periodic modulation of an acoustic feature, such as amplitude, over a certain frequency range leads to phase locking of neural responses to the envelope of the modulation. Measured electrophysiologically, this neural activity pattern, also called the auditory steady-state response (aSSR), appears after frequency transformation of the evoked response as a clear spectral peak at the modulation frequency. Although several studies employing the aSSR have shown, for example, the strongest responses at ∼40 Hz and an overall right-hemispheric dominance, it has not yet been investigated whether different modulation frequencies elicit aSSRs from a homogeneous source within auditory cortex or whether the localization of the aSSR is topographically organized in a systematic manner. The latter would be suggested by previous neuroimaging work in monkeys and humans showing a periodotopic organization within and across distinct auditory fields; however, the sluggishness of the signals in that work prohibits inferences about the fine temporal features of the neural response. In the present study, we employed amplitude-modulated (AM) sounds at rates between 4 and 85 Hz to elicit aSSRs while recording brain activity via magnetoencephalography (MEG). Using beamforming and a spatially fine-grained grid restricted to auditory cortical processing regions, our study revealed a topographic representation of the aSSR that depends on AM rate, in particular along the medial-lateral (bilaterally) and posterior-anterior (right auditory cortex) directions. In summary, our findings confirm previous studies showing that different AM rates elicit maximal responses in distinct neural populations. They extend those findings, however, by showing that the respective neural ensembles in auditory cortex actually phase-lock their activity over a wide range of modulation frequencies.
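The core measurement in this abstract, a spectral peak at the modulation frequency after frequency transformation of the evoked response, can be sketched on simulated data. The signal model and all parameters below are illustrative, not the study's MEG pipeline.

```python
import numpy as np

# Sketch: detect an auditory steady-state response (aSSR) as a spectral
# peak at the stimulus modulation rate. We simulate a response
# phase-locked to a 40 Hz amplitude-modulation envelope plus noise,
# then find the dominant peak in the amplitude spectrum.

fs = 600.0                      # sampling rate (Hz), illustrative
dur = 2.0                       # seconds of simulated data
am_rate = 40.0                  # stimulus modulation rate (Hz)
t = np.arange(int(fs * dur)) / fs

rng = np.random.default_rng(0)
response = np.sin(2 * np.pi * am_rate * t) + 0.5 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(response))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

peak_hz = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(peak_hz)
```

Because the phase-locked component concentrates its energy in a single frequency bin, the peak survives a noise level that would swamp the response in the time domain.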
4. Brown EC, Muzik O, Rothermel R, Juhász C, Shah AK, Fuerst D, Mittal S, Sood S, Asano E. Evaluating signal-correlated noise as a control task with language-related gamma activity on electrocorticography. Clin Neurophysiol 2013; 125:1312-23. PMID: 24412331; DOI: 10.1016/j.clinph.2013.11.026.
Abstract
OBJECTIVE: Our recent electrocorticography (ECoG) study suggested that reverse speech, a widely used control task, is a poor control for non-language-related auditory activity. We hypothesized that this may be due to its retained perception as a human voice. We report a follow-up ECoG study contrasting forward and reverse speech with a signal-correlated noise (SCN) control task that cannot be perceived as a human voice.
METHODS: Ten patients were presented with 90 audible stimuli, comprising 30 trials each of corresponding forward speech, reverse speech, and SCN, during ECoG recording, with evaluation of gamma activity between 50 and 150 Hz.
RESULTS: Sites on the lateral temporal gyri that were activated by all speech stimuli were generally less activated by SCN, while some temporal sites appeared to process both human and non-human sounds. Reverse speech trials were associated with activity across the temporal lobe similar to that associated with forward speech.
CONCLUSIONS: These findings externally validate functional neuroimaging studies that use SCN as a control for non-language-specific auditory function, and are consistent with the notion that stimuli perceived as originating from a human voice are poor controls for non-language auditory function.
SIGNIFICANCE: Our findings have implications for functional neuroimaging research as well as for improved clinical mapping of auditory functions.
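Signal-correlated noise, as used for the control task here, is conventionally built by imposing a source signal's amplitude envelope on broadband noise, preserving the temporal envelope while destroying the spectral detail that makes a stimulus sound like a voice. A minimal sketch of that generic construction follows; the paper's exact stimulus generation may differ, and the "speech-like" source below is synthetic.

```python
import numpy as np

# Sketch: signal-correlated noise (SCN) -- broadband noise sharing the
# amplitude envelope of a source signal. Generic construction with a
# synthetic source; not the study's stimulus-generation code.

fs = 16000                               # sample rate (Hz)
rng = np.random.default_rng(3)
t = np.arange(fs) / fs                   # 1 s of signal
# Synthetic "speech-like" source: noise under a slow amplitude envelope.
source = np.sin(2 * np.pi * 3.0 * t) ** 2 * rng.standard_normal(fs)

# Smoothed amplitude envelope: moving average of the rectified signal.
win = np.ones(160) / 160                 # 10 ms rectangular window
envelope = np.convolve(np.abs(source), win, mode="same")

# SCN: a fresh noise carrier scaled by the source's envelope.
scn = envelope * rng.standard_normal(fs)
```

The resulting stimulus retains the source's slow amplitude fluctuations (hence "signal-correlated") but none of its harmonic or formant structure.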
Affiliation(s)
- Erik C Brown
- MD-PhD Program, School of Medicine, Wayne State University, Detroit, MI 48201, USA; Department of Psychiatry and Behavioral Neurosciences, School of Medicine, Wayne State University, Detroit, MI 48201, USA
- Otto Muzik
- Department of Pediatrics, Wayne State University, Detroit Medical Center, Detroit, MI 48201, USA; Department of Neurology, Wayne State University, Detroit Medical Center, Detroit, MI 48201, USA
- Robert Rothermel
- Department of Psychiatry, Wayne State University, Detroit Medical Center, Detroit, MI 48201, USA
- Csaba Juhász
- Department of Pediatrics, Wayne State University, Detroit Medical Center, Detroit, MI 48201, USA; Department of Neurology, Wayne State University, Detroit Medical Center, Detroit, MI 48201, USA
- Aashit K Shah
- Department of Neurology, Wayne State University, Detroit Medical Center, Detroit, MI 48201, USA
- Darren Fuerst
- Department of Neurology, Wayne State University, Detroit Medical Center, Detroit, MI 48201, USA
- Sandeep Mittal
- Department of Neurosurgery, Wayne State University, Detroit Medical Center, Detroit, MI 48201, USA
- Sandeep Sood
- Department of Neurosurgery, Wayne State University, Detroit Medical Center, Detroit, MI 48201, USA
- Eishi Asano
- Department of Pediatrics, Wayne State University, Detroit Medical Center, Detroit, MI 48201, USA; Department of Neurology, Wayne State University, Detroit Medical Center, Detroit, MI 48201, USA
5. Scott SK, McGettigan C. Do temporal processes underlie left hemisphere dominance in speech perception? Brain Lang 2013; 127:36-45. PMID: 24125574; PMCID: PMC4083253; DOI: 10.1016/j.bandl.2013.07.006.
Abstract
It is not unusual to find it stated as fact that the left hemisphere is specialized for the processing of rapid, temporal aspects of sound, and that the dominance of the left hemisphere in speech perception is a consequence of this specialization. In this review we explore the history of this claim and assess the weight of evidence behind it. We demonstrate that, rather than the left temporal lobe showing a particular sensitivity to the acoustic properties of speech, it is the right temporal lobe that shows a marked preference for certain properties of sounds, such as longer durations or variations in pitch. We finish by outlining some alternative factors that contribute to the left lateralization of speech perception.
Affiliation(s)
- Sophie K Scott
- Institute for Cognitive Neuroscience, 17 Queen Square, London WC1N 3AR, UK.
6. Horton C, D'Zmura M, Srinivasan R. Suppression of competing speech through entrainment of cortical oscillations. J Neurophysiol 2013; 109:3082-93. PMID: 23515789; DOI: 10.1152/jn.01026.2012. Open access.
Abstract
People are highly skilled at attending to one speaker in the presence of competitors, but the neural mechanisms supporting this remain unclear. Recent studies have argued that the auditory system enhances the gain of a speech stream relative to competitors by entraining (or "phase-locking") to the rhythmic structure in its acoustic envelope, thus ensuring that syllables arrive during periods of high neuronal excitability. We hypothesized that such a mechanism could also suppress a competing speech stream by ensuring that syllables arrive during periods of low neuronal excitability. To test this, we analyzed high-density EEG recorded from human adults while they attended to one of two competing, naturalistic speech streams. By calculating the cross-correlation between the EEG channels and the speech envelopes, we found evidence of entrainment to the attended speech's acoustic envelope as well as weaker yet significant entrainment to the unattended speech's envelope. An independent component analysis (ICA) decomposition of the data revealed sources in the posterior temporal cortices that displayed robust correlations to both the attended and unattended envelopes. Critically, in these components the signs of the correlations when attended were opposite those when unattended, consistent with the hypothesized entrainment-based suppressive mechanism.
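The lagged cross-correlation analysis described above can be sketched on simulated data: correlate a response channel with a stimulus envelope over a range of lags and find the peak. Everything below is illustrative (circular shifts are used for brevity), not the study's EEG pipeline.

```python
import numpy as np

# Sketch: cross-correlation between a simulated "EEG" channel and a
# speech envelope across a range of lags. Circular shifts keep the
# code short; all parameter values are illustrative.

fs = 100                         # Hz
n = fs * 30                      # 30 s of data
rng = np.random.default_rng(1)
envelope = rng.random(n)         # stand-in for a speech amplitude envelope

lag_true = 10                    # "EEG" follows the envelope by 100 ms
eeg = np.roll(envelope, lag_true) + 0.3 * rng.standard_normal(n)

max_lag = 30
lags = np.arange(-max_lag, max_lag + 1)
xcorr = np.array([
    np.corrcoef(np.roll(envelope, lag), eeg)[0, 1] for lag in lags
])

best_lag = lags[np.argmax(np.abs(xcorr))]   # lag of strongest correlation
print(best_lag)
```

In the entrainment account tested by the study, an attended and a suppressed stream would show cross-correlation peaks of opposite sign in the same cortical sources, which is exactly the pattern the ICA components revealed.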
Affiliation(s)
- Cort Horton
- Department of Cognitive Sciences, University of California, Irvine, CA, USA.
7. Markman TM, Quittner AL, Eisenberg LS, Tobey EA, Thal D, Niparko JK, Wang NY. Language development after cochlear implantation: an epigenetic model. J Neurodev Disord 2011; 3:388-404. PMID: 22101809; PMCID: PMC3230757; DOI: 10.1007/s11689-011-9098-z. Open access.
Abstract
Growing evidence supports the notion that dynamic gene expression, subject to epigenetic control, organizes multiple influences to enable a child to learn to listen and to talk. Here, we review neurobiological and genetic influences on spoken language development in the context of results of a longitudinal trial of cochlear implantation of young children with severe to profound sensorineural hearing loss in the Childhood Development after Cochlear Implantation study. We specifically examine the results of cochlear implantation in participants who were congenitally deaf (N = 116). Prior to intervention, these participants were subject to naturally imposed constraints in sensory (acoustic-phonologic) inputs during critical phases of development when spoken language skills are typically achieved rapidly. Their candidacy for a cochlear implant was prompted by delays (n = 20) or an essential absence of spoken language acquisition (n = 96). Observations thus present an opportunity to evaluate the impact of factors that influence the emergence of spoken language, particularly in the context of hearing restoration in sensitive periods for language acquisition. Outcomes demonstrate considerable variation in spoken language learning, although significant advantages exist for the congenitally deaf children implanted prior to 18 months of age. While age at implantation carries high predictive value in forecasting performance on measures of spoken language, several factors show significant association, particularly those related to parent-child interactions. Importantly, the significance of environmental variables in their predictive value for language development varies with age at implantation. These observations are considered in the context of an epigenetic model in which dynamic genomic expression can modulate aspects of auditory learning, offering insights into factors that can influence a child's acquisition of spoken language after cochlear implantation. 
Increased understanding of these interactions could lead to targeted interventions that interact with the epigenome to influence language outcomes with intervention, particularly in periods in which development is subject to time-sensitive experience.
Affiliation(s)
- Donna Thal
- San Diego State University, San Diego, CA, USA; Center for Research on Language, University of California, San Diego, CA, USA
- John K. Niparko
- Department of Otolaryngology-Head and Neck Surgery, The Johns Hopkins School of Medicine, Baltimore, MD, USA
- Nae-Yuh Wang
- Department of Medicine, Johns Hopkins School of Medicine, Baltimore, MD, USA; Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
- The CDaCI Investigative Team
- Johns Hopkins School of Medicine, Baltimore, MD, USA; University of Miami, Miami, FL, USA; House Ear Institute, Los Angeles, CA, USA; University of Texas at Dallas, Dallas, TX, USA; San Diego State University, San Diego, CA, USA; Center for Research on Language, University of California, San Diego, CA, USA; Department of Otolaryngology-Head and Neck Surgery, The Johns Hopkins School of Medicine, Baltimore, MD, USA; Department of Medicine, Johns Hopkins School of Medicine, Baltimore, MD, USA; Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
8. Ding N, Simon JZ. Neural coding of continuous speech in auditory cortex during monaural and dichotic listening. J Neurophysiol 2011; 107:78-89. PMID: 21975452; DOI: 10.1152/jn.00297.2011. Open access.
Abstract
The cortical representation of the acoustic features of continuous speech is the foundation of speech perception. In this study, noninvasive magnetoencephalography (MEG) recordings are obtained from human subjects actively listening to spoken narratives, in both simple and cocktail party-like auditory scenes. By modeling how acoustic features of speech are encoded in ongoing MEG activity as a spectrotemporal response function, we demonstrate that the slow temporal modulations of speech in a broad spectral region are represented bilaterally in auditory cortex by a phase-locked temporal code. For speech presented monaurally to either ear, this phase-locked response is always more faithful in the right hemisphere, but with a shorter latency in the hemisphere contralateral to the stimulated ear. When different spoken narratives are presented to each ear simultaneously (dichotic listening), the resulting cortical neural activity precisely encodes the acoustic features of both of the spoken narratives, but slightly weakened and delayed compared with the monaural response. Critically, the early sensory response to the attended speech is considerably stronger than that to the unattended speech, demonstrating top-down attentional gain control. This attentional gain is substantial even during the subjects' very first exposure to the speech mixture and therefore largely independent of knowledge of the speech content. Together, these findings characterize how the spectrotemporal features of speech are encoded in human auditory cortex and establish a single-trial-based paradigm to study the neural basis underlying the cocktail party phenomenon.
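The response-function modeling described here, mapping stimulus features to neural activity through a set of time lags, is commonly estimated with regularized regression. Below is a minimal single-feature (temporal response function) sketch; the spectrotemporal model in the abstract also has a frequency dimension, and every size and regularization value here is made up for illustration.

```python
import numpy as np

# Sketch: estimate a temporal response function (TRF) by ridge
# regression from a time-lagged stimulus matrix to a simulated
# response. Illustrative sizes; not the study's MEG pipeline.

rng = np.random.default_rng(2)
n, n_lags = 5000, 20
stim = rng.standard_normal(n)            # stimulus envelope (one feature)

# Lagged design matrix: column j holds the stimulus delayed by j samples.
X = np.zeros((n, n_lags))
for j in range(n_lags):
    X[j:, j] = stim[:n - j]

true_trf = np.exp(-np.arange(n_lags) / 5.0)   # a decaying "response"
resp = X @ true_trf + 0.5 * rng.standard_normal(n)

lam = 1.0                                      # ridge regularization
trf_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ resp)

print(np.corrcoef(trf_hat, true_trf)[0, 1])    # recovery quality
```

The estimated lag weights play the role of the phase-locked temporal code in the abstract: the latency and sign of their peaks are what support comparisons such as the hemispheric and attentional effects reported.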
Affiliation(s)
- Nai Ding
- Univ. of Maryland, College Park, MD 20742, USA