1
Rançon U, Masquelier T, Cottereau BR. A general model unifying the adaptive, transient and sustained properties of ON and OFF auditory neural responses. PLoS Comput Biol 2024; 20:e1012288. PMID: 39093852; PMCID: PMC11324186; DOI: 10.1371/journal.pcbi.1012288.
Abstract
Sounds are temporal stimuli decomposed into numerous elementary components by the auditory nervous system. For instance, a temporal to spectro-temporal transformation modelling the frequency decomposition performed by the cochlea is a widely adopted first processing step in today's computational models of auditory neural responses. Similarly, increments and decrements in sound intensity (i.e., of the raw waveform itself or of its spectral bands) constitute critical features of the neural code, with high behavioural significance. However, despite the scientific community's growing attention to auditory OFF responses, their relationship with transient ON responses, sustained responses and adaptation remains unclear. In this context, we propose a new general model, based on a pair of linear filters and named AdapTrans, that captures both sustained and transient ON and OFF responses within a unifying and easily extensible framework. We demonstrate that filtering audio cochleagrams with AdapTrans makes it possible to accurately render known properties of neural responses measured in different mammalian species, such as the dependence of OFF responses on the stimulus fall time and on the preceding sound duration. Furthermore, by integrating our framework into gold-standard and state-of-the-art machine learning models that predict neural responses from audio stimuli, following supervised training on a large compilation of electrophysiology datasets (ready-to-deploy PyTorch models and pre-processed datasets shared publicly), we show that AdapTrans systematically improves the prediction accuracy of estimated responses within different cortical areas of the rat and ferret auditory brain. Together, these results motivate the use of our framework by computational and systems neuroscientists seeking to improve the plausibility and performance of their models of audition.
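The general idea behind this kind of framework (an adapted level plus rectified transient ON/OFF channels) can be illustrated with a toy decomposition. This is not the authors' AdapTrans filters: the function name, the first-order leaky integrator, and all parameter values here are illustrative assumptions.

```python
import numpy as np

def on_off_decompose(x, fs, tau=0.1, sustained_gain=0.2):
    """Toy ON/OFF decomposition of one cochleagram channel.

    A leaky integrator (first-order low-pass, time constant `tau` in
    seconds) tracks the adapted level; the residual above/below it is
    rectified into transient ON (increments) and OFF (decrements)
    channels, and a scaled copy of the adapted level forms a sustained
    channel.
    """
    alpha = 1.0 / (1.0 + tau * fs)          # per-sample smoothing factor
    adapted = np.empty_like(x)
    state = x[0]
    for i, v in enumerate(x):
        state += alpha * (v - state)        # leaky integration
        adapted[i] = state
    residual = x - adapted
    on = np.maximum(residual, 0.0)          # responds to intensity increments
    off = np.maximum(-residual, 0.0)        # responds to intensity decrements
    sustained = sustained_gain * adapted
    return on, off, sustained

# A 1-s tone burst (envelope only): ON peaks at sound onset, OFF at offset.
fs = 1000
env = np.zeros(2 * fs)
env[fs // 2 : fs // 2 + fs] = 1.0
on, off, sus = on_off_decompose(env, fs)
print(int(np.argmax(on)), int(np.argmax(off)))  # → 500 1500
```

The transient channels respond only at the envelope edges, which is the qualitative behaviour (ON at sound onset, OFF at sound offset) that the abstract describes.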
Affiliation(s)
- Ulysse Rançon
- CerCo UMR 5549, CNRS – Université Toulouse III, Toulouse, France
- Benoit R. Cottereau
- CerCo UMR 5549, CNRS – Université Toulouse III, Toulouse, France
- IPAL, CNRS IRL62955, Singapore, Singapore
2
Bogdanov C, Mulders WH, Goulios H, Távora-Vieira D. The Impact of Patient Factors on Objective Cochlear Implant Verification Using Acoustic Cortical Auditory-Evoked Potentials. Audiol Neurootol 2023; 29:96-106. PMID: 37690449; PMCID: PMC10994594; DOI: 10.1159/000533273.
Abstract
INTRODUCTION Hearing loss is a major global public health issue that negatively impacts quality of life, communication, cognition, social participation, and mental health. The cochlear implant (CI) is the most efficacious treatment for severe-to-profound sensorineural hearing loss. However, variability in outcomes remains high among CI users. Our previous research demonstrated that the existing subjective methodology of CI programming does not consistently produce optimal stimulation for speech perception, thereby limiting CI users' ability to derive maximum benefit from their device. We demonstrated the benefit of utilising the objective method of measuring auditory-evoked cortical responses to speech stimuli as a reliable tool to guide and verify CI programming and, in turn, significantly improve speech perception performance. The present study was designed to investigate the impact of patient- and device-specific factors on the application of acoustically evoked cortical auditory-evoked potential (aCAEP) measures as an objective clinical tool to verify CI mapping in adult CI users with bilateral deafness (BD). METHODS aCAEP responses were elicited using binaural peripheral auditory stimulation with four speech tokens (/m/, /g/, /t/, and /s/) and recorded by HEARLab™ software in adult BD CI users. Participants were classified into groups according to whether subjective or objective CI mapping procedures elicited present aCAEP responses to all four speech tokens. The impact of patient- and device-specific factors on the presence of aCAEP responses and on speech perception was investigated between participant groups. RESULTS Participants were categorised based on the presence or absence of the P1-N1-P2 aCAEP response to speech tokens.
Out of the total cohort of adult CI users (n = 132), 63 participants demonstrated present responses pre-optimisation, 37 participants exhibited present responses post-optimisation, and the remaining 32 participants either showed an absent response for at least one speech token post-optimisation or did not accept the optimised CI map adjustments. Overall, no significant correlation was shown between patient and device-specific factors and the presence of aCAEP responses or speech perception scores. CONCLUSION This study reinforces that aCAEP measures offer an objective, non-invasive approach to verify CI mapping, irrespective of patient or device factors. These findings further our understanding of the importance of personalised CI rehabilitation through CI mapping to minimise the degree of speech perception variation post-CI and allow all CI users to achieve maximum device benefit.
Affiliation(s)
- Caris Bogdanov
- School of Human Sciences, The University of Western Australia, Perth, WA, Australia
- Department of Audiology, Fiona Stanley Fremantle Hospitals Group, Perth, WA, Australia
- Helen Goulios
- School of Human Sciences, The University of Western Australia, Perth, WA, Australia
- Dayse Távora-Vieira
- Department of Audiology, Fiona Stanley Fremantle Hospitals Group, Perth, WA, Australia
- Division of Surgery, Medical School, The University of Western Australia, Perth, WA, Australia
3
Choi MH, Li N, Popelka G, Butts Pauly K. Development and validation of a computational method to predict unintended auditory brainstem response during transcranial ultrasound neuromodulation in mice. Brain Stimul 2023; 16:1362-1370. PMID: 37690602; DOI: 10.1016/j.brs.2023.09.004.
Abstract
BACKGROUND Transcranial ultrasound stimulation (TUS) is a promising noninvasive neuromodulation modality. The inadvertent and unpredictable activation of the auditory system in response to TUS obfuscates the interpretation of non-auditory neuromodulatory responses. OBJECTIVE The objective was to develop and validate a computational metric to quantify the susceptibility to unintended auditory brainstem response (ABR) in mice premised on time frequency analyses of TUS signals and auditory sensitivity. METHODS Ultrasound pulses with varying amplitudes, pulse repetition frequencies (PRFs), envelope smoothing profiles, and sinusoidal modulation frequencies were selected. Each pulse's time-varying frequency spectrum was differentiated across time, weighted by the mouse hearing sensitivity, then summed across frequencies. The resulting time-varying function, computationally predicting the ABR, was validated against experimental ABR in mice during TUS with the corresponding pulse. RESULTS There was a significant correlation between experimental ABRs and the computational predictions for 19 TUS signals (R2 = 0.97). CONCLUSIONS To reduce ABR in mice during in vivo TUS studies, 1) reduce the amplitude of a rectangular continuous wave envelope, 2) increase the rise/fall times of a smoothed continuous wave envelope, and/or 3) change the PRF and/or duty cycle of a rectangular or sinusoidal pulsed wave to reduce the gap between pulses and increase the rise/fall time of the overall envelope. This metric can aid researchers performing in vivo mouse studies in selecting TUS signal parameters that minimize unintended ABR. The methods for developing this metric can be adapted to other animal models.
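The pipeline described in the Methods (differentiate each pulse's time-varying spectrum across time, weight by hearing sensitivity, sum across frequencies) can be sketched roughly as follows. This is a simplified illustration, not the authors' validated metric: the function names, the frame sizes, and the flat band-limited weighting standing in for a mouse audiogram are all assumptions.

```python
import numpy as np

def abr_susceptibility(stimulus, fs, weight_fn, win=256, hop=128):
    """Toy version of the described metric: short-time spectra of the
    stimulus are differentiated across time, weighted by a
    hearing-sensitivity curve, and summed across frequency, yielding a
    per-frame estimate of auditory drive."""
    window = np.hanning(win)
    frames = [stimulus[i:i + win] * window
              for i in range(0, len(stimulus) - win, hop)]
    spec = np.abs(np.fft.rfft(np.asarray(frames), axis=1))   # time x freq
    freqs = np.fft.rfftfreq(win, 1.0 / fs)
    dspec = np.abs(np.diff(spec, axis=0))                    # spectral change over time
    weighted = dspec * weight_fn(freqs)                      # audibility weighting
    return weighted.sum(axis=1)                              # per-frame ABR drive

# Compare a rectangular envelope (abrupt on/off) with a smoothed one,
# each embedded in silence; a flat weighting over a nominal audible band
# is an assumed stand-in for the mouse hearing-sensitivity curve.
fs = 100_000
n_sil = int(0.010 * fs)                                      # 10 ms silence each side
n_on = int(0.020 * fs)                                       # 20 ms stimulus
rect = np.concatenate([np.zeros(n_sil), np.ones(n_on), np.zeros(n_sil)])
ramp = np.sin(np.pi * np.arange(n_on) / n_on) ** 2           # slow rise/fall
smooth = np.concatenate([np.zeros(n_sil), ramp, np.zeros(n_sil)])
band = lambda f: ((f > 1_000) & (f < 40_000)).astype(float)  # crude audibility proxy
r = abr_susceptibility(rect, fs, band).max()
s = abr_susceptibility(smooth, fs, band).max()
print(r > s)   # abrupt envelope drives a larger predicted response → True
```

Consistent with the paper's conclusions, the abrupt envelope produces much larger weighted spectral change at its edges than the smoothed envelope, which is why increasing rise/fall times is recommended to reduce unintended ABR.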
Affiliation(s)
- Mi Hyun Choi
- Department of Bioengineering, Stanford University, Stanford, CA, 94305, USA.
- Ningrui Li
- Department of Electrical Engineering, Stanford University, Stanford, CA, 94305, USA
- Gerald Popelka
- Department of Otolaryngology, Stanford School of Medicine, Stanford, CA, 94305, USA; Department of Radiology, Stanford School of Medicine, Stanford, CA, 94305, USA
- Kim Butts Pauly
- Department of Bioengineering, Stanford University, Stanford, CA, 94305, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, 94305, USA; Department of Radiology, Stanford School of Medicine, Stanford, CA, 94305, USA.
4
Morse K, Vander Werff KR. Onset-offset cortical auditory evoked potential amplitude differences indicate auditory cortical hyperactivity and reduced inhibition in people with tinnitus. Clin Neurophysiol 2023; 149:223-233. PMID: 36963993; DOI: 10.1016/j.clinph.2023.02.164.
Abstract
OBJECTIVE The current study investigates evidence of hypothesized reduced central inhibition and/or increased excitation in individuals with tinnitus by evaluating cortical auditory onset versus offset responses. METHODS Cortical auditory evoked potentials (CAEPs) were recorded to the onset and offset of 3-second white noise stimuli in tinnitus and control groups matched in pairs by age, hearing, and sex (n = 26 total). Independent t-tests and 2-way mixed model ANOVA were used to evaluate onset-offset differences in amplitude, area, and latency of CAEP components by group. The predictive influence of tinnitus presence and associated participant characteristics on CAEP outcomes was assessed by multiple regression proportional reduction in error. RESULTS The tinnitus group had significantly larger onset minus offset P2 amplitudes (ΔP2 amplitudes) than control group participants. No other component variables differed significantly. ΔP2 amplitude was best predicted by tinnitus status and not significantly influenced by other variables such as hearing loss or age. CONCLUSIONS Hypothesized reduced central inhibition and/or increased excitation in tinnitus participants was partially supported by a group difference in ΔP2 amplitude. SIGNIFICANCE This was the first study to evaluate CAEP onset minus offset differences to investigate changes in central excitation/inhibition in individuals with tinnitus versus controls in matched groups.
Affiliation(s)
- Kenneth Morse
- West Virginia University, Division of Communication Sciences and Disorders, USA.
5
Coughler C, Quinn de Launay KL, Purcell DW, Oram Cardy J, Beal DS. Pediatric Responses to Fundamental and Formant Frequency Altered Auditory Feedback: A Scoping Review. Front Hum Neurosci 2022; 16:858863. PMID: 35664350; PMCID: PMC9157279; DOI: 10.3389/fnhum.2022.858863.
Abstract
Purpose The ability to hear ourselves speak has been shown to play an important role in the development and maintenance of fluent and coherent speech. Despite this, little is known about the developing speech motor control system throughout childhood, in particular if and how vocal and articulatory control may differ throughout development. A scoping review was undertaken to identify and describe the full range of studies investigating responses to frequency-altered auditory feedback in pediatric populations and their contributions to our understanding of the development of auditory feedback control and sensorimotor learning in childhood and adolescence. Method Relevant studies were identified through a comprehensive search strategy of six academic databases for studies that included (a) real-time perturbation of frequency in auditory input, (b) an analysis of immediate effects on speech, and (c) participants aged 18 years or younger. Results Twenty-three articles met inclusion criteria. Across studies, there was a wide variety of designs, outcomes and measures used. Manipulations included fundamental frequency (9 studies), formant frequency (12), frequency centroid of fricatives (1), and both fundamental and formant frequencies (1). Study designs included contrasts across childhood, between children and adults, and between typical, pediatric clinical and adult populations. Measures primarily explored acoustic properties of speech responses (latency, magnitude, and variability). Some studies additionally examined the association of these acoustic responses with clinical measures (e.g., stuttering severity and reading ability), and neural measures using electrophysiology and magnetic resonance imaging. Conclusion Findings indicated that children above 4 years generally compensated in the opposite direction of the manipulation, although in several cases not as effectively as adults. Overall, results varied greatly due to the broad range of manipulations and designs used, making generalization challenging. Differences found between age groups in the features of the compensatory vocal responses, latency of responses, vocal variability and perceptual abilities suggest that maturational changes may be occurring in the speech motor control system, affecting the extent to which auditory feedback is used to modify internal sensorimotor representations. Varied findings suggest that vocal control develops before articulatory control. Future studies with multiple outcome measures, manipulations, and more expansive age ranges are needed to elucidate these findings.
Affiliation(s)
- Caitlin Coughler
- Graduate Program in Health and Rehabilitation Sciences, Faculty of Health Sciences, The University of Western Ontario, London, ON, Canada
- *Correspondence: Caitlin Coughler,
- Keelia L. Quinn de Launay
- Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
- Rehabilitation Sciences Institute, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- David W. Purcell
- School of Communication Sciences and Disorders, Faculty of Health Sciences, The University of Western Ontario, London, ON, Canada
- National Centre for Audiology, Faculty of Health Sciences, The University of Western Ontario, London, ON, Canada
- Janis Oram Cardy
- School of Communication Sciences and Disorders, Faculty of Health Sciences, The University of Western Ontario, London, ON, Canada
- National Centre for Audiology, Faculty of Health Sciences, The University of Western Ontario, London, ON, Canada
- Deryk S. Beal
- Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
- Rehabilitation Sciences Institute, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Department of Speech-Language Pathology, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
6
Somervail R, Bufacchi RJ, Salvatori C, Neary-Zajiczek L, Guo Y, Novembre G, Iannetti GD. Brain Responses to Surprising Stimulus Offsets: Phenomenology and Functional Significance. Cereb Cortex 2022; 32:2231-2244. PMID: 34668519; PMCID: PMC9113248; DOI: 10.1093/cercor/bhab352.
Abstract
Abrupt increases of sensory input (onsets) likely reflect the occurrence of novel events or objects in the environment, potentially requiring immediate behavioral responses. Accordingly, onsets elicit a transient and widespread modulation of ongoing electrocortical activity: the Vertex Potential (VP), which is likely related to the optimization of rapid behavioral responses. In contrast, the functional significance of the brain response elicited by abrupt decreases of sensory input (offsets) is more elusive, and a detailed comparison of onset and offset VPs is lacking. In four experiments conducted on 44 humans, we observed that onset and offset VPs share several phenomenological and functional properties: they (1) have highly similar scalp topographies across time, (2) are both largely comprised of supramodal neural activity, (3) are both highly sensitive to surprise, and (4) co-occur with similar modulations of ongoing motor output. These results demonstrate that the onset and offset VPs largely reflect the activity of a common supramodal brain network, likely consequent to the activation of the extralemniscal sensory system, which runs in parallel with core sensory pathways. The transient activation of this system has clear implications for optimizing behavioral responses to surprising environmental changes.
Affiliation(s)
- R Somervail
- Neuroscience and Behaviour Laboratory, Istituto Italiano di Tecnologia, 00161, Rome, Italy
- Department of Neuroscience, Physiology and Pharmacology, University College London (UCL), WC1E 6BT, London, UK
- R J Bufacchi
- Neuroscience and Behaviour Laboratory, Istituto Italiano di Tecnologia, 00161, Rome, Italy
- C Salvatori
- Neuroscience and Behaviour Laboratory, Istituto Italiano di Tecnologia, 00161, Rome, Italy
- L Neary-Zajiczek
- Department of Computer Science, University College London (UCL), WC1E 6BT, London, UK
- Y Guo
- Neuroscience and Behaviour Laboratory, Istituto Italiano di Tecnologia, 00161, Rome, Italy
- G Novembre
- Neuroscience and Behaviour Laboratory, Istituto Italiano di Tecnologia, 00161, Rome, Italy
- G D Iannetti
- Neuroscience and Behaviour Laboratory, Istituto Italiano di Tecnologia, 00161, Rome, Italy
- Department of Neuroscience, Physiology and Pharmacology, University College London (UCL), WC1E 6BT, London, UK
7
Lertpoompunya A, Ozmeral EJ, Higgins NC, Eddins AC, Eddins DA. Large group differences in binaural sensitivity are represented in preattentive responses from auditory cortex. J Neurophysiol 2022; 127:660-672. PMID: 35108112; PMCID: PMC8896993; DOI: 10.1152/jn.00360.2021.
Abstract
Correlated sounds presented to the two ears are perceived as compact and centrally lateralized, whereas decorrelation between ears leads to intracranial image widening. Though most listeners have fine resolution for perceptual changes in interaural correlation (IAC), some investigators have reported large variability in IAC thresholds, and some normal-hearing listeners even exhibit seemingly debilitating IAC thresholds. It is unknown whether this variability across individuals, and these outlier manifestations, are a product of task difficulty, poor training, or a neural deficit in the binaural auditory system. The purpose of this study was first to identify listeners with normal and abnormal IAC resolution, second to evaluate the neural responses elicited by IAC changes, and third to use a well-established model of binaural processing to determine a potential explanation for the observed individual variability. Nineteen subjects were enrolled in the study, eight of whom were identified as poor performers in the IAC-threshold task. Global scalp responses (N1 and P2 amplitudes of an auditory change complex) in the individuals with poor IAC behavioral thresholds were significantly smaller than in listeners with better IAC resolution. Source-localized evoked responses confirmed this group effect in multiple subdivisions of the auditory cortex, including Heschl's gyrus, the planum temporale, and the temporal sulcus. In combination with binaural modeling results, this study provides objective electrophysiological evidence of a binaural processing deficit, linked to internal noise, that corresponds to very poor IAC thresholds in listeners who otherwise have normal audiometric profiles and lack spatial hearing complaints. NEW & NOTEWORTHY Group differences in the perception of interaural correlation (IAC) were observed in human adults with normal audiometric sensitivity. These differences were reflected in cortical evoked activity measured via electroencephalography (EEG). For some participants, weak representation of the binaural cue in preattentive N1-P2 cortical responses may indicate a processing deficit. Such a deficit may be related to a poorly understood condition known as hidden hearing loss.
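For readers unfamiliar with the stimulus dimension involved, interaural correlation is simply the normalized correlation between the two ear signals, and decorrelated stimuli can be made by mixing independent noise into one ear. A minimal sketch (illustrative only, not the study's binaural model; function names and the mixing rule are assumptions):

```python
import numpy as np

def interaural_correlation(left, right):
    """Normalized (Pearson) correlation between the two ear signals."""
    l = left - left.mean()
    r = right - right.mean()
    return float(np.dot(l, r) / np.sqrt(np.dot(l, l) * np.dot(r, r)))

def decorrelate(signal, rho, rng):
    """Mix in independent unit-variance noise so that the second ear
    correlates with the first at roughly `rho`."""
    noise = rng.standard_normal(signal.shape)
    return rho * signal + np.sqrt(1.0 - rho ** 2) * noise

rng = np.random.default_rng(0)
left = rng.standard_normal(100_000)          # broadband noise to one ear
right = decorrelate(left, 0.95, rng)         # slightly decorrelated copy
print(round(interaural_correlation(left, right), 2))  # → 0.95
```

A small step from IAC = 1.0 to, say, 0.95 is the kind of change whose detectability varied so widely across the listeners in this study.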
Affiliation(s)
- Angkana Lertpoompunya
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
- Department of Communication Sciences and Disorders, Mahidol University, Bangkok, Thailand
- Erol J Ozmeral
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
- Nathan C Higgins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
- Ann C Eddins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
- Department of Communication Sciences and Disorders, Mahidol University, Bangkok, Thailand
- David A Eddins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
- Department of Communication Sciences and Disorders, Mahidol University, Bangkok, Thailand
8
Carretié L, Fernández-Folgueiras U, Álvarez F, Cipriani GA, Tapia M, Kessel D. Fast Unconscious Processing of Emotional Stimuli in Early Stages of the Visual Cortex. Cereb Cortex 2022; 32:4331-4344. DOI: 10.1093/cercor/bhab486.
Abstract
Several cortical and subcortical brain areas have been reported to be sensitive to the emotional content of subliminal stimuli. However, the timing of these activations remains unclear. Our aim was to detect the earliest cortical traces of unconscious emotional processing of visual stimuli by recording event-related potentials (ERPs) from 43 participants. Subliminal spiders (emotional) and wheels (neutral), sharing similar low-level visual parameters, were presented at two different locations (fixation and periphery). The differential (peak-to-peak) amplitude from CP1 (77 ms from stimulus onset) to C2 (100 ms), two early visual ERP components originating in V1/V2 according to source localization analyses, was analyzed via Bayesian and traditional frequentist analyses. Spiders elicited greater CP1–C2 amplitudes than wheels when presented at fixation. This fast effect of subliminal stimulation (not reported previously, to the best of our knowledge) has implications for several debates: 1) the amygdala cannot be mediating these effects; 2) the latency of other evaluative structures recently proposed, such as the visual thalamus, is compatible with these results; 3) the absence of effects for peripheral stimuli points to a relevant role of the parvocellular visual system in unconscious processing.
9
Lim SJ, Carter YD, Njoroge JM, Shinn-Cunningham BG, Perrachione TK. Talker discontinuity disrupts attention to speech: Evidence from EEG and pupillometry. Brain Lang 2021; 221:104996. PMID: 34358924; PMCID: PMC8515637; DOI: 10.1016/j.bandl.2021.104996.
Abstract
Speech is processed less efficiently from discontinuous, mixed talkers than from one consistent talker, but little is known about the neural mechanisms for processing talker variability. Here, we measured psychophysiological responses to talker variability using electroencephalography (EEG) and pupillometry while listeners performed a delayed digit-span recall task. Listeners heard and recalled seven-digit sequences with both talker (single- vs. mixed-talker digits) and temporal (0- vs. 500-ms inter-digit intervals) discontinuities. Talker discontinuity reduced serial recall accuracy. Both talker and temporal discontinuities elicited P3a-like neural evoked responses, while rapid processing of mixed-talkers' speech led to increased phasic pupil dilation. Furthermore, mixed-talkers' speech produced less alpha oscillatory power during working memory maintenance, but not during speech encoding. Overall, these results are consistent with an auditory attention and streaming framework in which talker discontinuity leads to involuntary, stimulus-driven attentional reorientation to novel speech sources, resulting in the processing interference classically associated with talker variability.
Affiliation(s)
- Sung-Joo Lim
- Department of Speech, Language, and Hearing Sciences, Boston University, United States.
- Yaminah D Carter
- Department of Speech, Language, and Hearing Sciences, Boston University, United States
- J Michelle Njoroge
- Department of Speech, Language, and Hearing Sciences, Boston University, United States
- Tyler K Perrachione
- Department of Speech, Language, and Hearing Sciences, Boston University, United States.
10
Schalles MD, Houser DS, Finneran JJ, Tyack P, Shinn-Cunningham B, Mulsow J. Measuring auditory cortical responses in Tursiops truncatus. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2021; 207:629-640. PMID: 34327551; PMCID: PMC8408064; DOI: 10.1007/s00359-021-01502-5.
Abstract
Auditory neuroscience in dolphins has largely focused on auditory brainstem responses; however, such measures reveal little about the cognitive processes dolphins employ during echolocation and acoustic communication. The few previous studies of mid- and long-latency auditory-evoked potentials (AEPs) in dolphins report different latencies, polarities, and magnitudes. These inconsistencies may be due to any number of differences in methodology, but these studies do not make it clear which methodological differences may account for the disparities. The present study evaluates how electrode placement and pre-processing methods affect mid- and long-latency AEPs in dolphins (Tursiops truncatus). AEPs were measured with reference electrodes placed on the skin surface over the forehead, the external auditory meatus, or the dorsal surface anterior to the dorsal fin. Data were pre-processed with or without a digital 50-Hz low-pass filter, and with or without independent component analysis to isolate signal components related to neural processes from other signals. Results suggest that a meatus reference electrode provides the highest quality AEP signals for analyses in sensor space, whereas a dorsal reference yielded nominal improvements in component space. These results provide guidance for measuring cortical AEPs in dolphins, supporting future studies of their cognitive auditory processing.
Affiliation(s)
- Matt D Schalles
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, 15213, USA; Biomedical Engineering, Boston University, Boston, MA, 02215, USA.
- Dorian S Houser
- National Marine Mammal Foundation, San Diego, CA, 92106, USA
- James J Finneran
- US Navy Marine Mammal Program, Naval Information Warfare Center Pacific, San Diego, CA, 92152, USA
- Peter Tyack
- School of Biology, University of St Andrews, St Andrews, UK
- Barbara Shinn-Cunningham
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, 15213, USA; Biomedical Engineering, Boston University, Boston, MA, 02215, USA
- Jason Mulsow
- National Marine Mammal Foundation, San Diego, CA, 92106, USA
11
Gonzalez JE, Musiek FE. The Onset-Offset N1-P2 Auditory Evoked Response in Individuals With High-Frequency Sensorineural Hearing Loss: Responses to Broadband Noise. Am J Audiol 2021; 30:423-432. PMID: 34057857; DOI: 10.1044/2021_aja-20-00113.
Abstract
Purpose Clinical use of electrophysiologic measures has been limited to the use of brief stimuli to evoke responses. While brief stimuli elicit onset responses in individuals with normal hearing and normal central auditory nervous system (CANS) function, such responses represent the integrity of only a fraction of the mainly excitatory central auditory neurons. Longer stimuli could provide information regarding excitatory and inhibitory CANS function. Our goal was to measure the onset-offset N1-P2 auditory evoked response in subjects with normal hearing and subjects with moderate high-frequency sensorineural hearing loss (HFSNHL) to determine whether the response can be measured in individuals with moderate HFSNHL and, if so, whether waveform components differ between participant groups. Method Waveforms were obtained from 10 participants with normal hearing and seven participants with HFSNHL aged 40-67 years using 2,000-ms broadband noise stimuli with 40-ms rise-fall times presented at 50 dB SL referenced to stimulus threshold. Amplitudes and latencies were analyzed via repeated-measures analysis of variance (ANOVA). N1 and P2 onset latencies were compared to offset counterparts via repeated-measures ANOVA after subtracting 2,000 ms from the offset latencies to account for stimulus duration. Offset-to-onset trough-to-peak amplitude ratios between groups were compared using a one-way ANOVA. Results Responses were evoked from all participants. There were no differences between participant groups for the waveform components measured. Response × Participant Group interactions were not significant. Offset N1-P2 latencies were significantly shorter than onset counterparts after adjusting for stimulus duration (normal hearing: 43 ms shorter; HFSNHL: 47 ms shorter). Conclusions Onset-offset N1-P2 responses were resistant to moderate HFSNHL.
It is likely that the onset was elicited by the presentation of a sound in silence and the offset by the change in stimulus envelope from plateau to fall, suggesting an excitatory onset response and an inhibitory-influenced offset response. Results indicated this protocol can be used to investigate CANS function in individuals with moderate HFSNHL. Supplemental Material https://doi.org/10.23641/asha.14669007.
Affiliation(s)
- Jennifer E. Gonzalez
- Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe
- Frank E. Musiek
- Department of Speech, Language, and Hearing Sciences, The University of Arizona, Tucson
12
Reeves A, Seluakumaran K, Scharf B. Contralateral proximal interference. J Acoust Soc Am 2021; 149:3352. [PMID: 34241123] [DOI: 10.1121/10.0004786]
Abstract
A contralateral "cue" tone presented in continuous broadband noise both lowers the threshold of a signal tone by guiding attention to it and raises its threshold by interference. Here, signal tones were fixed in duration (40 ms, 52 ms with ramps), frequency (1500 Hz), timing, and level, so attention did not need guidance. Interference by contralateral cues was studied in relation to cue-signal proximity, cue-signal temporal overlap, and cue-signal order (cue after: backward interference, BI; or cue first: forward interference, FI). Cues, also ramped, were 12 dB above the signal level. Long cues (300 or 600 ms) raised thresholds by 5.3 dB when the signal and cue overlapped and by 5.1 dB in FI and 3.2 dB in BI when cues and signals were separated by 40 ms. Short cues (40 ms) raised thresholds by 4.5 dB in FI and 4.0 dB in BI for separations of 7 to 40 ms, but by ∼13 dB when simultaneous and in phase. FI and BI are comparable in magnitude and hardly increase when the signal is close in time to abrupt cue transients. These results do not support the notion that masking of the signal is due to the contralateral cue onset/offset transient response. Instead, sluggish attention or temporal integration may explain contralateral proximal interference.
Affiliation(s)
- Adam Reeves
- Department of Psychology, Northeastern University, Boston, Massachusetts 02115, USA
- Kumar Seluakumaran
- Faculty of Medicine, Department of Physiology, University of Malaya, 50603 Kuala Lumpur, Malaysia
- Bertram Scharf
- Department of Psychology, Northeastern University, Boston, Massachusetts 02115, USA
13
O'Reilly JA, Conway BA. Classical and controlled auditory mismatch responses to multiple physical deviances in anaesthetised and conscious mice. Eur J Neurosci 2020; 53:1839-1854. [DOI: 10.1111/ejn.15072]
Affiliation(s)
- Jamie A. O'Reilly
- College of Biomedical Engineering, Rangsit University, Pathum Thani, Thailand
- Bernard A. Conway
- Department of Biomedical Engineering, University of Strathclyde, Glasgow, UK
14
Kumar P, Sanju HK, Hussain RO, Kaverappa Ganapathy M, Singh NK. Utility of Acoustic Change Complex as an Objective Tool to Evaluate Difference Limen for Intensity in Cochlear Hearing Loss and Auditory Neuropathy Spectrum Disorder. Am J Audiol 2020; 29:375-383. [PMID: 32628503] [DOI: 10.1044/2020_aja-19-00084]
Abstract
Purpose This study aimed to investigate the usefulness of the acoustic change complex (ACC) as an objective measure of the difference limen for intensity (DLI) in auditory neuropathy spectrum disorder (ANSD) and cochlear hearing loss (CHL). Method The study used a multiple static group comparison research design. Twenty normal-hearing (NH) individuals, 19 individuals with ANSD, and 23 individuals with CHL underwent DLI measurement using behavioral (psychoacoustic) techniques and the ACC. For eliciting the ACC, a 500-ms, 1,000-Hz pure tone was presented at 80 dB SPL. Additionally, six variants of this stimulus with intensity increments of 1, 3, 4, 5, 10, and 20 dB starting 250 ms after stimulus onset were used to elicit the ACC. Results The lowest intensity change that produced a replicable and clearly identifiable ACC was referred to as the objective DLI. In comparison to the NH and CHL groups, both the behavioral and the objective DLI were significantly larger (poorer) in the ANSD group (p < .05). A strong, significant positive correlation existed between the DLI obtained using behavioral and objective measures (p < .05). Conclusions The ACC could be a useful objective tool to measure the DLI in the clinical population, provided the individuals fulfill the prerequisite of the presence of auditory long latency responses. Supplemental Material https://doi.org/10.23641/asha.12560132.
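The "objective DLI" logic described in this abstract (the smallest intensity increment eliciting a clearly identifiable ACC) can be sketched as below; the amplitudes and the 3-SD noise criterion are hypothetical stand-ins, not the study's actual scoring procedure:

```python
import numpy as np

# The six intensity increments follow the study design; the ACC amplitudes
# and noise statistics are invented for illustration.
increments_db = np.array([1, 3, 4, 5, 10, 20])
acc_amplitude_uv = np.array([0.2, 0.4, 1.1, 1.8, 2.9, 4.0])  # hypothetical
noise_floor_uv, noise_sd_uv = 0.3, 0.2

# A response counts as "identifiable" if it clears the noise by 3 SDs
# (an assumed replicability rule, not the authors' criterion).
criterion_uv = noise_floor_uv + 3 * noise_sd_uv
identifiable = acc_amplitude_uv >= criterion_uv
objective_dli_db = int(increments_db[identifiable][0]) if identifiable.any() else None
```

With these invented amplitudes the first increment to clear the criterion is 4 dB, so that becomes the objective DLI.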
Affiliation(s)
- Prawin Kumar
- Department of Audiology, All India Institute of Speech and Hearing, Manasagangotri, Mysore, Karnataka, India
- Himanshu Kumar Sanju
- Department of Audiology, All India Institute of Speech and Hearing, Manasagangotri, Mysore, Karnataka, India
- Reesha Oovattil Hussain
- Department of Audiology, All India Institute of Speech and Hearing, Manasagangotri, Mysore, Karnataka, India
- Niraj Kumar Singh
- Department of Audiology, All India Institute of Speech and Hearing, Manasagangotri, Mysore, Karnataka, India
15
Liu J, Whiteway MR, Sheikhattar A, Butts DA, Babadi B, Kanold PO. Parallel Processing of Sound Dynamics across Mouse Auditory Cortex via Spatially Patterned Thalamic Inputs and Distinct Areal Intracortical Circuits. Cell Rep 2019; 27:872-885.e7. [PMID: 30995483] [DOI: 10.1016/j.celrep.2019.03.069]
Abstract
Natural sounds have rich spectrotemporal dynamics. Spectral information is spatially represented in the auditory cortex (ACX) via large-scale maps. However, the representation of temporal information, e.g., sound offset, is unclear. We perform multiscale imaging of neuronal and thalamic activity evoked by sound onset and offset in awake mouse ACX. ACX areas differed in onset responses (On-Rs) and offset responses (Off-Rs). Most excitatory L2/3 neurons show either On-Rs or Off-Rs, and ACX areas are characterized by differing fractions of On and Off-R neurons. Somatostatin and parvalbumin interneurons show distinct temporal dynamics, potentially amplifying Off-Rs. Functional network analysis shows that ACX areas contain distinct parallel onset and offset networks. Thalamic (MGB) terminals show either On-Rs or Off-Rs, indicating a thalamic origin of On and Off-R pathways. Thus, ACX areas spatially represent temporal features, and this representation is created by spatial convergence and co-activation of distinct MGB inputs and is refined by specific intracortical connectivity.
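A minimal sketch of the On-R/Off-R labelling idea described above, assuming a simple baseline-and-threshold rule (the frame indices, threshold, and traces are invented for illustration, not the study's imaging pipeline):

```python
import numpy as np

# Window definitions (imaging frames) for a hypothetical trial layout.
PRE = slice(0, 20)            # pre-stimulus baseline frames
ONSET_WIN = slice(20, 70)     # frames following sound onset
OFFSET_WIN = slice(120, 170)  # frames following sound offset

def classify(trace, thresh=0.5):
    """Label an activity trace as 'On', 'Off', 'On+Off', or 'none'."""
    base = trace[PRE].mean()
    on = (trace[ONSET_WIN].max() - base) > thresh
    off = (trace[OFFSET_WIN].max() - base) > thresh
    if on and not off:
        return "On"
    if off and not on:
        return "Off"
    if on and off:
        return "On+Off"
    return "none"

# Toy traces: one cell responds only after onset, one only after offset.
on_trace = np.zeros(170); on_trace[25:45] = 1.0
off_trace = np.zeros(170); off_trace[125:145] = 1.0
```

Applying `classify` across a population would yield the per-area fractions of On-R and Off-R neurons that the abstract describes.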
Affiliation(s)
- Ji Liu
- Department of Biology, University of Maryland, College Park, MD 20742, USA
- Matthew R Whiteway
- Applied Mathematics and Statistics and Scientific Computation Program, University of Maryland, College Park, MD 20742, USA
- Alireza Sheikhattar
- Department of Electrical & Computer Engineering, University of Maryland, College Park, MD 20742, USA
- Daniel A Butts
- Department of Biology, University of Maryland, College Park, MD 20742, USA; Applied Mathematics and Statistics and Scientific Computation Program, University of Maryland, College Park, MD 20742, USA; Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD 20742, USA
- Behtash Babadi
- Department of Electrical & Computer Engineering, University of Maryland, College Park, MD 20742, USA
- Patrick O Kanold
- Department of Biology, University of Maryland, College Park, MD 20742, USA; Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD 20742, USA
16
Volosin M, Horváth J. Task difficulty modulates voluntary attention allocation, but not distraction in an auditory distraction paradigm. Brain Res 2020; 1727:146565. [PMID: 31765629] [DOI: 10.1016/j.brainres.2019.146565]
Abstract
Keeping task-relevant sensory events in the focus of attention while ignoring irrelevant ones is crucial for optimizing task behavior. This attention-distraction balance might change with the perceptual demands of the ongoing task: while easy tasks might be performed with low attentional effort, difficult ones require enhanced attention. The goal of the present study was to investigate how task difficulty affected the allocation of attention and distractibility in an auditory distraction paradigm. Participants performed a tone duration discrimination task in which tones were occasionally presented at a rare pitch (distracters), and task difficulty was manipulated through the duration difference between short and long tones. Short tones were consistently 200 ms long, while long tone duration was 400 ms in the easy and 260 ms in the difficult condition. Behavioral results and deviant-minus-standard event-related potential (ERP) waveforms suggested similar magnitudes of distraction in both conditions. ERPs without such a subtraction showed that tone onsets were preceded by a negative-going trend, suggesting that participants prepared for tone onsets. In the difficult condition, N1 amplitudes to tone onsets were enhanced, indicating that participants invested more attentional resources. Increased difficulty also slowed down tone offset processing, as reflected by significantly delayed offset-related P1 and N1/N2 waveforms. These results suggest that although task difficulty compels participants to attend to the tones more strongly, this does not have a significant impact on distraction-related processing.
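The deviant-minus-standard subtraction used in such paradigms to isolate deviance-related activity can be sketched as follows, with synthetic stand-in "ERPs" rather than real EEG (sampling rate, trial counts, and waveforms are invented):

```python
import numpy as np

# Synthetic stand-in ERPs: a 5-Hz "standard" waveform, and a "deviant"
# with an extra Gaussian bump around 300 ms.
fs = 500                                  # Hz, hypothetical sampling rate
t = np.arange(0, 0.6, 1 / fs)             # 600-ms epoch
standard = np.sin(2 * np.pi * 5 * t)
deviant = standard + 0.5 * np.exp(-((t - 0.3) ** 2) / 0.002)

std_epochs = np.tile(standard, (100, 1))  # 100 standard trials
dev_epochs = np.tile(deviant, (20, 1))    # 20 deviant trials

# Average each condition, then subtract: activity common to both conditions
# cancels, leaving only the deviance-related component.
difference_wave = dev_epochs.mean(axis=0) - std_epochs.mean(axis=0)
peak_latency_s = t[np.abs(difference_wave).argmax()]
```

Here the subtraction recovers the injected bump, peaking at 0.3 s; with real data, peak amplitudes and latencies of such difference waves are what get compared across conditions.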
Affiliation(s)
- Márta Volosin
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Magyar Tudósok körútja 2, H-1117 Budapest, Hungary; Institute of Psychology, University of Szeged, Egyetem utca 2, H-6722 Szeged, Hungary.
- János Horváth
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Magyar Tudósok körútja 2, H-1117 Budapest, Hungary; Institute of Psychology, Károli Gáspár University of the Reformed Church in Hungary, Bécsi út 324, H-1037 Budapest, Hungary.
17
Double-epoch subtraction reveals long-latency mismatch response in urethane-anaesthetized mice. J Neurosci Methods 2019; 326:108375. [DOI: 10.1016/j.jneumeth.2019.108375]
18
Itoh K, Nejime M, Konoike N, Nakamura K, Nakada T. Evolutionary Elongation of the Time Window of Integration in Auditory Cortex: Macaque vs. Human Comparison of the Effects of Sound Duration on Auditory Evoked Potentials. Front Neurosci 2019; 13:630. [PMID: 31293370] [PMCID: PMC6601703] [DOI: 10.3389/fnins.2019.00630]
Abstract
The auditory cortex integrates auditory information over time to obtain neural representations of sound events, the time scale of which critically affects perception. This work investigated the species differences in the time scale of integration by comparing humans and monkeys regarding how their scalp-recorded cortical auditory evoked potentials (CAEPs) decrease in amplitude as stimulus duration is shortened from 100 ms (or longer) to 2 ms. Cortical circuits tuned to processing sounds at short time scales would continue to produce large CAEPs to brief sounds whereas those tuned to longer time scales would produce diminished responses. Four peaks were identified in the CAEPs and labeled P1, N1, P2, and N2 in humans and mP1, mN1, mP2, and mN2 in monkeys. In humans, the N1 diminished in amplitude as sound duration was decreased, consistent with the previously described temporal integration window of N1 (>50 ms). In macaques, by contrast, the mN1 was unaffected by sound duration, and it was clearly elicited by even the briefest sounds. Brief sounds also elicited significant mN2 in the macaque, but not the human N2. Regarding earlier latencies, both P1 (humans) and mP1 (macaques) were elicited at their full amplitudes even by the briefest sounds. These findings suggest an elongation of the time scale of late stages of human auditory cortical processing, as reflected by N1/mN1 and later CAEP components. Longer time scales of integration would allow neural representations of complex auditory features that characterize speech and music.
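As a toy illustration (not the authors' model) of how the length of a temporal integration window shapes response amplitude for brief sounds: a leaky integrator with time constant τ driven by a sound of duration d saturates as 1 − exp(−d/τ). A short window reaches near-full amplitude even for brief sounds (as with mP1/mN1), while a long window strongly attenuates them (as with the human N1):

```python
import numpy as np

def integrated_amplitude(duration_ms, tau_ms):
    """Peak output of a leaky integrator (time constant tau) after duration_ms of input."""
    return 1.0 - np.exp(-np.asarray(duration_ms) / tau_ms)

durations_ms = np.array([2.0, 10.0, 100.0])                      # brief to long sounds
short_window = integrated_amplitude(durations_ms, tau_ms=5.0)    # short integration window
long_window = integrated_amplitude(durations_ms, tau_ms=50.0)    # long integration window
```

The time constants are illustrative, but the qualitative pattern matches the abstract: the short-window response stays comparatively large for a 2-ms sound, while the long-window response grows substantially only as duration approaches its time constant.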
Affiliation(s)
- Kosuke Itoh
- Center for Integrated Human Brain Science, Brain Research Institute, Niigata University, Niigata, Japan
- Masafumi Nejime
- Cognitive Neuroscience Section, Primate Research Institute, Kyoto University, Kyoto, Japan
- Naho Konoike
- Cognitive Neuroscience Section, Primate Research Institute, Kyoto University, Kyoto, Japan
- Katsuki Nakamura
- Cognitive Neuroscience Section, Primate Research Institute, Kyoto University, Kyoto, Japan
- Tsutomu Nakada
- Center for Integrated Human Brain Science, Brain Research Institute, Niigata University, Niigata, Japan
19
Gansonre C, Højlund A, Leminen A, Bailey C, Shtyrov Y. Task-free auditory EEG paradigm for probing multiple levels of speech processing in the brain. Psychophysiology 2018; 55:e13216. [PMID: 30101984] [DOI: 10.1111/psyp.13216]
Abstract
While previous studies on language processing highlighted several ERP components in relation to specific stages of sound and speech processing, no study has yet combined them to obtain a comprehensive picture of language abilities in a single session. Here, we propose a novel task-free paradigm aimed at assessing multiple levels of speech processing by combining various speech and nonspeech sounds in an adaptation of a multifeature passive oddball design. We recorded EEG in healthy adult participants, who were presented with these sounds in the absence of sound-directed attention while being engaged in a primary visual task. This produced a range of responses indexing various levels of sound processing and language comprehension: (a) P1-N1 complex, indexing obligatory auditory processing; (b) P3-like dynamics associated with involuntary attention allocation for unusual sounds; (c) enhanced responses for native speech (as opposed to nonnative phonemes) from ∼50 ms from phoneme onset, indicating phonological processing; (d) amplitude advantage for familiar real words as opposed to meaningless pseudowords, indexing automatic lexical access; (e) topographic distribution differences in the cortical activation of action verbs versus concrete nouns, likely linked with the processing of lexical semantics. These multiple indices of speech-sound processing were acquired in a single attention-free setup that does not require any task or subject cooperation; subject to future research, the present protocol may potentially be developed into a useful tool for assessing the status of auditory and linguistic functions in uncooperative or unresponsive participants, including a range of clinical or developmental populations.
Affiliation(s)
- Christelle Gansonre
- Center of Functionally Integrative Neuroscience (CFIN), Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Andreas Højlund
- Center of Functionally Integrative Neuroscience (CFIN), Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Alina Leminen
- Center of Functionally Integrative Neuroscience (CFIN), Department of Clinical Medicine, Aarhus University, Aarhus, Denmark; Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Christopher Bailey
- Center of Functionally Integrative Neuroscience (CFIN), Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Yury Shtyrov
- Center of Functionally Integrative Neuroscience (CFIN), Department of Clinical Medicine, Aarhus University, Aarhus, Denmark; Laboratory of Behavioural Neurodynamics, St. Petersburg State University, St. Petersburg, Russia
20
Sound frequency affects the auditory motion-onset response in humans. Exp Brain Res 2018; 236:2713-2726. [PMID: 29998350] [DOI: 10.1007/s00221-018-5329-9]
Abstract
The current study examines the modulation of the motion-onset response by the frequency range of sound stimuli. Delayed motion-onset and stationary stimuli were presented in a free field by sequentially activating loudspeakers on an azimuthal plane, keeping the natural percept of externalized sound presentation. The sounds were presented in low- or high-frequency ranges and had different motion directions within each hemifield. Difference waves were calculated by contrasting the moving and stationary sounds to isolate the motion-onset responses. Analyses carried out on the peak amplitudes and latencies of the difference waves showed that the early part of the motion response (cN1) was modulated by the frequency range of the sounds, with stronger amplitudes elicited by stimuli in the high-frequency range. Subsequent post hoc analysis of the normalized amplitude of the motion response confirmed this finding by excluding the possibility that the frequency range had an overall effect on the waveform, showing that the effect was instead limited to the motion response. These results support the idea of a modular organization of the motion-onset response, with the processing of primary sound-motion characteristics being reflected in the early part of the response. The article also highlights the importance of specificity in auditory stimulus design.
21
Motomura E, Inui K, Nishihara M, Tanahashi M, Kakigi R, Okada M. Prepulse Inhibition of the Auditory Off-Response: A Magnetoencephalographic Study. Clin EEG Neurosci 2018; 49:152-158. [PMID: 28490194] [DOI: 10.1177/1550059417708914]
Abstract
A weak preceding sound stimulus attenuates the startle response evoked by an intense sound stimulus. Like startle reflexes, change-related auditory responses are suppressed by a weak leading stimulus (ie, a prepulse). We aimed to examine whether a prepulse inhibits cerebral responses to sound offset and how prepulse magnitude affects the degree of prepulse inhibition (PPI). Using magnetoencephalography, we recorded the Off-P50m elicited by the offset of a train of 100-Hz clicks in 12 healthy subjects. A single click slightly louder (+1.5, +3, or +5 dB) than the 80-dB background sound was inserted 50 ms before the sound offset as a prepulse. We performed a dipole source analysis of the Off-P50m and measured its latency and amplitude using the source strength waveforms. The origin of the Off-P50m was estimated to be the auditory cortex in both hemispheres. The Off-P50m was clearly attenuated by the prepulses, and the degree of PPI was greater with a louder prepulse. The Off-P50m is considered to be a simple change-related response that does not overlap with the processing of incoming sounds. Thus, the Off-P50m and its PPI comprise a valuable tool for investigating the neural inhibitory system.
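The degree of PPI in such designs is commonly quantified as the percent reduction of response amplitude with a prepulse relative to the no-prepulse condition; a sketch with invented amplitudes (only the +1.5/+3/+5 dB prepulse conditions follow the study design; the source-strength values are not the study's):

```python
import numpy as np

no_prepulse_amplitude = 20.0                       # hypothetical Off-P50m strength
prepulse_db = np.array([1.5, 3.0, 5.0])            # prepulse level above background
prepulse_amplitude = np.array([16.0, 12.0, 8.0])   # hypothetical attenuated responses

# Percent inhibition: 0% = no attenuation, 100% = response fully abolished.
ppi_percent = 100.0 * (1.0 - prepulse_amplitude / no_prepulse_amplitude)
```

With these invented values PPI grows monotonically with prepulse level, the same louder-prepulse/greater-inhibition pattern the abstract reports.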
Affiliation(s)
- Eishi Motomura
- Department of Neuropsychiatry, Mie University Graduate School of Medicine, Tsu, Japan
- Koji Inui
- Institute for Developmental Research, Aichi Human Service Center, Kasugai, Japan; Department of Integrative Physiology, National Institute for Physiological Sciences, Okazaki, Japan
- Makoto Nishihara
- Multidisciplinary Pain Center, Aichi Medical University, Nagakute, Japan
- Megumi Tanahashi
- Department of Neuropsychiatry, Mie University Graduate School of Medicine, Tsu, Japan
- Ryusuke Kakigi
- Department of Integrative Physiology, National Institute for Physiological Sciences, Okazaki, Japan
- Motohiro Okada
- Department of Neuropsychiatry, Mie University Graduate School of Medicine, Tsu, Japan
22
Denham SL, Winkler I. Predictive coding in auditory perception: challenges and unresolved questions. Eur J Neurosci 2018; 51:1151-1160. [PMID: 29250827] [DOI: 10.1111/ejn.13802]
Abstract
Predictive coding is arguably the currently dominant theoretical framework for the study of perception. It has been employed to explain important auditory perceptual phenomena, and it has inspired theoretical, experimental and computational modelling efforts aimed at describing how the auditory system parses the complex sound input into meaningful units (auditory scene analysis). These efforts have uncovered some vital questions, addressing which could help to further specify predictive coding and clarify some of its basic assumptions. The goal of the current review is to motivate these questions and show how unresolved issues in explaining some auditory phenomena lead to general questions of the theoretical framework. We focus on experimental and computational modelling issues related to sequential grouping in auditory scene analysis (auditory pattern detection and bistable perception), as we believe that this is the research topic where predictive coding has the highest potential for advancing our understanding. In addition to specific questions, our analysis led us to identify three more general questions that require further clarification: (1) What exactly is meant by prediction in predictive coding? (2) What governs which generative models make the predictions? and (3) What (if it exists) is the correlate of perceptual experience within the predictive coding framework?
Affiliation(s)
- Susan L Denham
- School of Psychology, University of Plymouth, Drake Circus, Plymouth, PL4 8AA, UK
- István Winkler
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary
23
Horváth J, Gaál ZA, Volosin M. Sound offset-related brain potentials show retained sensory processing, but increased cognitive control activity in older adults. Neurobiol Aging 2017; 57:232-246. [DOI: 10.1016/j.neurobiolaging.2017.05.026]
24
Papagiannopoulou EA, Lagopoulos J. P300 event-related potentials in children with dyslexia. Ann Dyslexia 2017; 67:99-108. [PMID: 27761877] [DOI: 10.1007/s11881-016-0122-6]
Abstract
To elucidate the timing and the nature of neural disturbances in dyslexia and to further understand the topographical distribution of these, we examined entire brain regions employing the non-invasive auditory oddball P300 paradigm in children with dyslexia and neurotypical controls. Our findings revealed abnormalities for the dyslexia group in (i) P300 latency, globally, but greatest in frontal brain regions and (ii) decreased P300 amplitude confined to the central brain regions (Fig. 1). These findings reflect abnormalities associated with a diminished capacity to process mental workload as well as delayed processing of this information in children with dyslexia. Furthermore, the topographical distribution of these findings suggests a distinct spatial distribution for the observed P300 abnormalities. This information may be useful in future therapeutic or brain stimulation intervention trials.
Affiliation(s)
- Eleni A Papagiannopoulou
- The Brain and Mind Research Institute, The University of Sydney, 94 Mallett Street, Camperdown, NSW, 2050, Australia.
- Jim Lagopoulos
- The Brain and Mind Research Institute, The University of Sydney, 94 Mallett Street, Camperdown, NSW, 2050, Australia
25
Perceptual Temporal Asymmetry Associated with Distinct ON and OFF Responses to Time-Varying Sounds with Rising versus Falling Intensity: A Magnetoencephalography Study. Brain Sci 2016; 6:brainsci6030027. [PMID: 27527227] [PMCID: PMC5039456] [DOI: 10.3390/brainsci6030027]
Abstract
This magnetoencephalography (MEG) study investigated evoked ON and OFF responses to ramped and damped sounds in normal-hearing human adults. Two pairs of stimuli that differed in spectral complexity were used in a passive listening task; each pair contained identical acoustical properties except for the intensity envelope. Behavioral duration judgment was conducted in separate sessions, which replicated the perceptual bias in favour of the ramped sounds and the effect of spectral complexity on perceived duration asymmetry. MEG results showed similar cortical sites for the ON and OFF responses. There was a dominant ON response with stronger phase-locking factor (PLF) in the alpha (8–14 Hz) and theta (4–8 Hz) bands for the damped sounds. In contrast, the OFF response for sounds with rising intensity was associated with stronger PLF in the gamma band (30–70 Hz). Exploratory correlation analysis showed that the OFF response in the left auditory cortex was a good predictor of the perceived temporal asymmetry for the spectrally simpler pair. The results indicate distinct asymmetry in ON and OFF responses and neural oscillation patterns associated with the dynamic intensity changes, which provides important preliminary data for future studies to examine how the auditory system develops such an asymmetry as a function of age and learning experience and whether the absence of asymmetry or abnormal ON and OFF responses can be taken as a biomarker for certain neurological conditions associated with auditory processing deficits.
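The phase-locking factor (PLF) used above is the magnitude of the across-trial mean of unit phase vectors at a given time-frequency point: 1 indicates perfect phase locking, values near 0 indicate random phase. A minimal sketch with synthetic phases (real analyses would first extract band-limited instantaneous phase per trial, e.g. via a wavelet or Hilbert transform):

```python
import numpy as np

def plf(phases):
    """Phase-locking factor: |mean over trials of exp(i*phase)|."""
    return np.abs(np.mean(np.exp(1j * phases)))

locked_phases = np.full(50, 0.3)                    # identical phase on every trial
rng = np.random.default_rng(1)
random_phases = rng.uniform(-np.pi, np.pi, 50_000)  # uniformly random phases

locked_plf = plf(locked_phases)   # perfectly locked -> 1.0
random_plf = plf(random_phases)   # random -> near 0
```

Because the unit vectors cancel for random phases, `random_plf` shrinks roughly as 1/√N with the number of trials, which is why PLF contrasts (e.g. theta/alpha vs. gamma bands here) are computed across many trials.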
26
Glushko A, Steinhauer K, DePriest J, Koelsch S. Neurophysiological Correlates of Musical and Prosodic Phrasing: Shared Processing Mechanisms and Effects of Musical Expertise. PLoS One 2016; 11:e0155300. [PMID: 27192560] [PMCID: PMC4871576] [DOI: 10.1371/journal.pone.0155300]
Abstract
The processing of prosodic phrase boundaries in language is immediately reflected by a specific event-related potential component called the Closure Positive Shift (CPS). A component somewhat reminiscent of the CPS in language has also been reported for musical phrases (i.e., the so-called 'music CPS'). However, in previous studies the quantification of the music-CPS as well as its morphology and timing differed substantially from the characteristics of the language-CPS. Therefore, the degree of correspondence between cognitive mechanisms of phrasing in music and in language has remained questionable. Here, we probed the shared nature of mechanisms underlying musical and prosodic phrasing by (1) investigating whether the music-CPS is present at phrase boundary positions where the language-CPS has been originally reported (i.e., at the onset of the pause between phrases), and (2) comparing the CPS in music and in language in non-musicians and professional musicians. For the first time, we report a positive shift at the onset of musical phrase boundaries that strongly resembles the language-CPS and argue that the post-boundary 'music-CPS' of previous studies may be an entirely distinct ERP component. Moreover, the language-CPS in musicians was found to be less prominent than in non-musicians, suggesting more efficient processing of prosodic phrases in language as a result of higher musical expertise.
Affiliation(s)
- Anastasia Glushko
- Freie Universität Berlin, Berlin, Germany
- Integrated Program in Neuroscience, McGill University, Montreal, Quebec, Canada
- The Centre for Research on Brain, Language and Music (CRBLM), Montreal, Quebec, Canada
- Karsten Steinhauer
- Integrated Program in Neuroscience, McGill University, Montreal, Quebec, Canada
- The Centre for Research on Brain, Language and Music (CRBLM), Montreal, Quebec, Canada
- School of Communication Sciences and Disorders, McGill University, Montreal, Quebec, Canada
- John DePriest
- Program in Linguistics, Tulane University, New Orleans, Louisiana, United States of America
- Stefan Koelsch
- Freie Universität Berlin, Berlin, Germany
- Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway
27
Gabriel D, Wong TC, Nicolier M, Giustiniani J, Mignot C, Noiret N, Monnin J, Magnin E, Pazart L, Moulin T, Haffen E, Vandel P. Don't forget the lyrics! Spatiotemporal dynamics of neural mechanisms spontaneously evoked by gaps of silence in familiar and newly learned songs. Neurobiol Learn Mem 2016; 132:18-28. [PMID: 27131744] [DOI: 10.1016/j.nlm.2016.04.011]
Abstract
The vast majority of people experience musical imagery, the sensation of reliving a song in absence of any external stimulation. Internal perception of a song can be deliberate and effortful, but also may occur involuntarily and spontaneously. Moreover, musical imagery is also involuntarily used for automatically completing missing parts of music or lyrics from a familiar song. The aim of our study was to explore the onset of musical imagery dynamics that leads to the automatic completion of missing lyrics. High-density electroencephalography was used to record the cerebral activity of twenty healthy volunteers while they were passively listening to unfamiliar songs, very familiar songs, and songs previously listened to for two weeks. Silent gaps inserted into these songs elicited a series of neural activations encompassing perceptual, attentional and cognitive mechanisms (range 100-500ms). Familiarity and learning effects emerged as early as 100ms and lasted 400ms after silence occurred. Although participants reported more easily mentally imagining lyrics in familiar rather than passively learnt songs, the onset of neural mechanisms and the power spectrum underlying musical imagery were similar for both types of songs. This study offers new insights into the musical imagery dynamics evoked by gaps of silence and on the role of familiarity and learning processes in the generation of these dynamics. The automatic and effortless method presented here is a potentially useful tool to understand failure in the familiarity and learning processes of pathological populations.
Affiliation(s)
- Damien Gabriel
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France; Neurosciences intégratives et cliniques EA 481, Univ. Franche-Comté, Univ. Bourgogne Franche-Comté, F-25000 Besançon, France.
- Thian Chiew Wong
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France
- Magali Nicolier
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France; Neurosciences intégratives et cliniques EA 481, Univ. Franche-Comté, Univ. Bourgogne Franche-Comté, F-25000 Besançon, France; Service de psychiatrie de l'adulte, CHRU Besançon, F-25000 Besançon, France
- Julie Giustiniani
- Service de psychiatrie de l'adulte, CHRU Besançon, F-25000 Besançon, France
- Coralie Mignot
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France
- Nicolas Noiret
- Centre Mémoire de Ressource et de Recherche de Franche-Comté, CHRU Besançon, F-25000 Besançon, France; Laboratoire de psychologie EA 3188, Université de Franche-Comté, Besançon, France
- Julie Monnin
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France; Neurosciences intégratives et cliniques EA 481, Univ. Franche-Comté, Univ. Bourgogne Franche-Comté, F-25000 Besançon, France; Service de psychiatrie de l'adulte, CHRU Besançon, F-25000 Besançon, France
- Eloi Magnin
- Centre Mémoire de Ressource et de Recherche de Franche-Comté, CHRU Besançon, F-25000 Besançon, France; Service de neurologie, CHRU Besançon, F-25000 Besançon, France
- Lionel Pazart
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France; Neurosciences intégratives et cliniques EA 481, Univ. Franche-Comté, Univ. Bourgogne Franche-Comté, F-25000 Besançon, France
- Thierry Moulin
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France; Neurosciences intégratives et cliniques EA 481, Univ. Franche-Comté, Univ. Bourgogne Franche-Comté, F-25000 Besançon, France; Service de neurologie, CHRU Besançon, F-25000 Besançon, France
- Emmanuel Haffen
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France; Neurosciences intégratives et cliniques EA 481, Univ. Franche-Comté, Univ. Bourgogne Franche-Comté, F-25000 Besançon, France; Service de psychiatrie de l'adulte, CHRU Besançon, F-25000 Besançon, France
- Pierre Vandel
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France; Neurosciences intégratives et cliniques EA 481, Univ. Franche-Comté, Univ. Bourgogne Franche-Comté, F-25000 Besançon, France; Service de psychiatrie de l'adulte, CHRU Besançon, F-25000 Besançon, France; Centre Mémoire de Ressource et de Recherche de Franche-Comté, CHRU Besançon, F-25000 Besançon, France
|
28
|
Horváth J. Attention-dependent sound offset-related brain potentials. Psychophysiology 2016; 53:663-77. [DOI: 10.1111/psyp.12607] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2015] [Accepted: 12/11/2015] [Indexed: 11/30/2022]
Affiliation(s)
- János Horváth
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences; Budapest Hungary
|
29
|
Takeshita Y, Yokosawa K. Acoustic pressure reduction at rhythm deviants causes magnetoencephalographic response. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2016; 2015:6650-3. [PMID: 26737818 DOI: 10.1109/embc.2015.7319918] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Rhythm is an element of music and is important for determining the impression of the music. To investigate the mechanism by which musical rhythmic changes are perceived, magnetoencephalographic responses to rhythm deviants were recorded from 11 healthy volunteers. Auditory stimuli consisting of physically controlled tones were adapted from a song. The auditory stimuli had a steady rhythm, but "early" and "late" deviants were inserted. Only the "early" deviant, which was a tone with a short duration, caused N100m-like prominent transient responses at around the offset of the deviant tone. The latency of the prominent response depended on the descending sound pressure of the deviant tone and was 65 ms after 50% descent. The results suggest that unexpected shortening of tone in a continuous rhythm evokes a transient response and that the response is caused by descending sound pressure of the shortened tone itself, not by the following tones.
|
30
|
Kim JR. Acoustic Change Complex: Clinical Implications. J Audiol Otol 2015; 19:120-4. [PMID: 26771009 PMCID: PMC4704548 DOI: 10.7874/jao.2015.19.3.120] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2015] [Revised: 11/16/2015] [Accepted: 11/18/2015] [Indexed: 11/22/2022] Open
Abstract
The acoustic change complex (ACC) is a cortical auditory evoked potential elicited in response to a change in an ongoing sound. The characteristics and potential clinical implications of the ACC are reviewed in this article. The P1-N1-P2 recorded from the auditory cortex following presentation of an acoustic stimulus is believed to reflect the neural encoding of a sound signal, but this provides no information regarding sound discrimination. However, the neural processing underlying behavioral discrimination capacity can be measured by modifying the traditional methodology for recording the P1-N1-P2. When obtained in response to an acoustic change within an ongoing sound, the resulting waveform is referred to as the ACC. When elicited, the ACC indicates that the brain has detected changes within a sound and the patient has the neural capacity to discriminate the sounds. In fact, results of several studies have shown that the ACC amplitude increases with increasing magnitude of acoustic changes in intensity, spectrum, and gap duration. In addition, the ACC can be reliably recorded with good test-retest reliability not only from listeners with normal hearing but also from individuals with hearing loss, hearing aids, and cochlear implants. The ACC can be obtained even in the absence of attention, and requires relatively few stimulus presentations to record a response with a good signal-to-noise ratio. Most importantly, the ACC shows reasonable agreement with behavioral measures. Therefore, these findings suggest that the ACC might represent a promising tool for the objective clinical evaluation of auditory discrimination and/or speech perception capacity.
Collapse
Affiliation(s)
- Jae-Ryong Kim
- Department of Otolaryngology-Head and Neck Surgery, Busan Paik Hospital, Inje University College of Medicine, Busan, Korea
|
31
|
Leung AWS, Jolicoeur P, Alain C. Attentional Capacity Limits Gap Detection during Concurrent Sound Segregation. J Cogn Neurosci 2015. [PMID: 26226073 DOI: 10.1162/jocn_a_00849] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Detecting a brief silent interval (i.e., a gap) is more difficult when listeners perceive two concurrent sounds rather than one in a sound containing a mistuned harmonic in otherwise in-tune harmonics. This impairment in gap detection may reflect the interaction of low-level encoding or the division of attention between two sound objects, both of which could interfere with signal detection. To distinguish between these two alternatives, we compared ERPs during active and passive listening with complex harmonic tones that could include a gap, a mistuned harmonic, both features, or neither. During active listening, participants indicated whether they heard a gap irrespective of mistuning. During passive listening, participants watched a subtitled muted movie of their choice while the same sounds were presented. Gap detection was impaired when the complex sounds included a mistuned harmonic that popped out as a separate object. The ERP analysis revealed an early gap-related activity that was little affected by mistuning during the active or passive listening condition. However, during active listening, there was a marked decrease in the late positive wave that was thought to index attention and response-related processes. These results suggest that the limitation in detecting the gap is related to attentional processing, possibly divided attention induced by the concurrent sound objects, rather than deficits in preattentional sensory encoding.
Affiliation(s)
- Ada W S Leung
- University of Alberta; Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Canada
- Pierre Jolicoeur
- Université de Montréal; Centre de Recherche en Neuropsychologie et Cognition (CERNEC), Montréal, Canada; BRAMS (International Laboratory for Brain, Music, and Sound Research), Montréal, Canada; Centre de Recherche de l'Institut Universitaire de Gériatrie de Montréal (CRIUGM)
- Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Canada; University of Toronto
|
32
|
Dissociating the effects of semantic grouping and rehearsal strategies on event-related brain potentials. Int J Psychophysiol 2014; 94:319-28. [PMID: 25242500 DOI: 10.1016/j.ijpsycho.2014.09.007] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2014] [Revised: 08/25/2014] [Accepted: 09/11/2014] [Indexed: 11/20/2022]
Abstract
The application of elaborative encoding strategies during learning, such as grouping items into similar semantic categories, increases the likelihood of later recall. Previous studies have suggested that stimuli that encourage semantic grouping strategies have modulating effects on specific ERP components. However, these studies did not differentiate between ERP activation patterns evoked by elaborative working memory strategies, like semantic grouping, and simpler strategies, like rote rehearsal. Identifying the neurocognitive correlates underlying the successful use of elaborative strategies is important for better understanding why certain populations, such as children or elderly people, have problems applying such strategies. To compare ERP activation during the application of elaborative versus simpler strategies, subjects had to encode either four semantically related or unrelated pictures by applying, respectively, a semantic category grouping or a simple rehearsal strategy. Another goal was to investigate whether maintenance of semantically grouped vs. ungrouped pictures modulated ERP slow waves differently. At the behavioral level, there was only a semantic grouping benefit in terms of faster responding on correct rejections (i.e., when the memory probe stimulus was not part of the memory set). At the neural level, during encoding, semantic grouping had only a modest specific modulatory effect on a fronto-central Late Positive Component (LPC) emerging around 650 ms. Other ERP components (i.e., P200, N400 and a second Late Positive Component) that had earlier been related to semantic grouping encoding processes now showed stronger modulation by rehearsal than by semantic grouping. During maintenance, semantic grouping had specific modulatory effects on left and right frontal slow wave activity. These results stress the importance of carefully controlling strategy use when investigating the neural correlates of elaborative encoding.
|
33
|
Bardy F, McMahon CM, Yau SH, Johnson BW. Deconvolution of magnetic acoustic change complex (mACC). Clin Neurophysiol 2014; 125:2220-2231. [PMID: 24704142 DOI: 10.1016/j.clinph.2014.03.003] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2013] [Revised: 03/03/2014] [Accepted: 03/04/2014] [Indexed: 11/19/2022]
Abstract
OBJECTIVE The aim of this study was to design a novel experimental approach to investigate the morphological characteristics of auditory cortical responses elicited by rapidly changing synthesized speech sounds. METHODS Six sound-evoked magnetoencephalographic (MEG) responses were measured to a synthesized train of speech sounds using the vowels /e/ and /u/ in 17 normal-hearing young adults. Responses were measured to: (i) the onset of the speech train; (ii) an F0 increment; (iii) an F0 decrement; (iv) an F2 decrement; (v) an F2 increment; and (vi) the offset of the speech train, using short (jittered around 135 ms) and long (1500 ms) stimulus onset asynchronies (SOAs). The least squares (LS) deconvolution technique was used to disentangle the overlapping MEG responses in the short SOA condition only. RESULTS Comparison of the morphology of the recovered cortical responses in the short and long SOA conditions showed high similarity, suggesting that the LS deconvolution technique was successful in disentangling the MEG waveforms. Waveform latencies and amplitudes differed between the two SOA conditions and were influenced by the spectro-temporal properties of the sound sequence. The magnetic acoustic change complex (mACC) for the short SOA condition showed significantly lower amplitudes and shorter latencies compared to the long SOA condition. The F0 transition showed a larger reduction in amplitude from long to short SOA compared to the F2 transition. Lateralization of the cortical responses was observed under some stimulus conditions and appeared to be associated with the spectro-temporal properties of the acoustic stimulus. CONCLUSIONS The LS deconvolution technique provides a new tool to study the properties of the auditory cortical response to rapidly changing sound stimuli.
The presence of cortical auditory evoked responses to rapid transitions of synthesized speech stimuli suggests that the temporal code is preserved at the level of the auditory cortex. Further, the reduced amplitudes and shorter latencies might reflect intrinsic properties of cortical neurons in response to rapidly presented sounds. SIGNIFICANCE This is the first demonstration of the separation of overlapping cortical responses to rapidly changing speech sounds, and it offers a potential new biomarker of the discrimination of rapid sound transitions.
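The LS deconvolution idea used in this study treats the continuous recording as a superposition of unknown event-locked waveforms and recovers one waveform per event type by regressing the signal on lagged event indicators. The following is a minimal NumPy sketch of that general technique, not the authors' implementation; the function name and parameters are illustrative.

```python
import numpy as np

def ls_deconvolve(y, onsets_by_type, kernel_len):
    """Recover per-event-type response waveforms from a continuous
    recording with overlapping responses, via least squares.

    y              : 1-D recorded signal (n_samples,)
    onsets_by_type : dict mapping event type -> array of onset samples
    kernel_len     : assumed response duration in samples
    Returns a dict mapping event type -> estimated waveform (kernel_len,).
    """
    n = len(y)
    types = list(onsets_by_type)
    # Design matrix: one indicator column per (event type, lag) pair.
    X = np.zeros((n, len(types) * kernel_len))
    for t_i, t in enumerate(types):
        onsets = np.asarray(onsets_by_type[t])
        for lag in range(kernel_len):
            col = X[:, t_i * kernel_len + lag]  # view into X
            idx = onsets + lag
            col[idx[idx < n]] = 1.0
    # Solve y = X @ beta in the least-squares sense.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return {t: beta[i * kernel_len:(i + 1) * kernel_len]
            for i, t in enumerate(types)}
```

With noiseless data and jittered event timings the design matrix is full rank and the true waveforms are recovered exactly; with real MEG data the same fit yields the least-squares estimate of the overlapping responses.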
Affiliation(s)
- Fabrice Bardy
- HEARing Co-operative Research Centre, VIC, Australia; Department of Linguistics, Macquarie University, NSW, Australia; National Acoustic Laboratories, NSW, Australia; Department of Cognitive Science, Macquarie University, NSW, Australia; ARC Centre of Excellence in Cognition and its Disorders, Australia.
- Catherine M McMahon
- HEARing Co-operative Research Centre, VIC, Australia; Department of Linguistics, Macquarie University, NSW, Australia; ARC Centre of Excellence in Cognition and its Disorders, Australia
- Shu Hui Yau
- Department of Cognitive Science, Macquarie University, NSW, Australia; ARC Centre of Excellence in Cognition and its Disorders, Australia
- Blake W Johnson
- Department of Cognitive Science, Macquarie University, NSW, Australia; ARC Centre of Excellence in Cognition and its Disorders, Australia
|
34
|
Okamoto H, Kakigi R. Neural adaptation to silence in the human auditory cortex: a magnetoencephalographic study. Brain Behav 2014; 4:858-66. [PMID: 25365810 PMCID: PMC4212114 DOI: 10.1002/brb3.290] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/23/2014] [Revised: 07/25/2014] [Accepted: 09/05/2014] [Indexed: 12/02/2022] Open
Abstract
INTRODUCTION Previous studies demonstrated that a decrement in the N1m response, a major deflection in the auditory evoked response, with sound repetition was mainly caused by bottom-up driven neural refractory periods following brain activation due to sound stimulations. However, it currently remains unknown whether this decrement occurs with a repetition of silences, which do not induce refractoriness. METHODS In the present study, we investigated decrements in N1m responses elicited by five repetitive silences in a continuous pure tone and by five repetitive pure tones in silence using magnetoencephalography. RESULTS Repetitive sound stimulation differentially affected the N1m decrement in a sound type-dependent manner; while the N1m amplitude decreased from the 1st to the 2nd pure tone and remained constant from the 2nd to the 5th pure tone in silence, a gradual decrement was observed in the N1m amplitude from the 1st to the 5th silence embedded in a continuous pure tone. CONCLUSIONS Our results suggest that neural refractoriness may mainly cause decrements in N1m responses elicited by trains of pure tones in silence, while habituation, which is a form of the implicit learning process, may play an important role in the N1m source strength decrements elicited by successive silences in a continuous pure tone.
Affiliation(s)
- Hidehiko Okamoto
- Department of Integrative Physiology, National Institute for Physiological Sciences, Okazaki, Japan; Department of Physiological Sciences, The Graduate University for Advanced Studies, Hayama, Japan
- Ryusuke Kakigi
- Department of Integrative Physiology, National Institute for Physiological Sciences, Okazaki, Japan; Department of Physiological Sciences, The Graduate University for Advanced Studies, Hayama, Japan
|
35
|
Abstract
Offset neurons, which respond to the termination of sound stimulation, may play important roles in auditory temporal information processing, sound signal recognition, and the discrimination of complex sounds. Two possible underlying mechanisms were reviewed: neural inhibition and the intrinsic conductance properties of offset neuron membranes. The offset response was postulated to originate in the superior paraolivary nucleus of mice. The biological significance of offset neurons was discussed as well.
|
36
|
McMullan AR, Hambrook DA, Tata MS. Brain dynamics encode the spectrotemporal boundaries of auditory objects. Hear Res 2013; 304:77-90. [DOI: 10.1016/j.heares.2013.06.009] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/11/2013] [Revised: 06/14/2013] [Accepted: 06/24/2013] [Indexed: 10/26/2022]
|
37
|
Sensitivity of offset and onset cortical auditory evoked potentials to signals in noise. Clin Neurophysiol 2013; 125:370-80. [PMID: 24007688 DOI: 10.1016/j.clinph.2013.08.003] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2013] [Revised: 07/03/2013] [Accepted: 08/05/2013] [Indexed: 11/21/2022]
Abstract
OBJECTIVE The purpose of this study was to determine the effects of SNR and signal level on the offset response of the cortical auditory evoked potential (CAEP). Successful listening often depends on how well the auditory system can extract target signals from competing background noise. Both signal onsets and offsets are encoded neurally and contribute to successful listening in noise. Neural onset responses to signals in noise demonstrate a strong sensitivity to signal-to-noise ratio (SNR) rather than signal level; however, the sensitivity of neural offset responses to these cues is not known. METHODS We analyzed the offset response from two previously published datasets for which only the onset response was reported. For both datasets, CAEPs were recorded from young normal-hearing adults in response to a 1000-Hz tone. For the first dataset, tones were presented at seven different signal levels without background noise, while the second dataset varied both signal level and SNR. RESULTS Offset responses demonstrated sensitivity to absolute signal level in quiet, SNR, and to absolute signal level in noise. CONCLUSIONS Offset sensitivity to signal level when presented in noise contrasts with previously published onset results. SIGNIFICANCE This sensitivity suggests a potential clinical measure of cortical encoding of signal level in noise.
|
38
|
Richter N, Schröger E, Rübsamen R. Differences in evoked potentials during the active processing of sound location and motion. Neuropsychologia 2013; 51:1204-14. [PMID: 23499852 DOI: 10.1016/j.neuropsychologia.2013.03.001] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2012] [Revised: 02/25/2013] [Accepted: 03/04/2013] [Indexed: 10/27/2022]
Abstract
Differences in the processing of motion and static sounds in the human cortex were studied by electroencephalography, with subjects performing an active discrimination task. Sound bursts were presented in the acoustic free-field between 47° to the left and 47° to the right under three different stimulus conditions: (i) static, (ii) leftward motion, and (iii) rightward motion. In an active oddball design, subjects were asked to detect target stimuli which were randomly embedded within a stream of frequently occurring non-target events (i.e. 'standards') and rare non-target stimuli (i.e. 'deviants'). The respective acoustic stimuli were presented in blocks, with each stimulus type presented in either of three stimulus conditions: as target, as non-target, or as standard. The analysis focussed on the event-related potentials evoked by the different stimulus types under the respective standard condition. As in previous studies, all three acoustic stimuli elicited the obligatory P1/N1/P2 complex in the range of 50-200 ms. However, comparisons of ERPs elicited by static stimuli and both kinds of motion stimuli yielded differences as early as ~100 ms after stimulus onset, i.e. at the level of the exogenous N1 and P2 components. Differences in signal amplitudes were also found in a time window of 300-400 ms (the 'd300-400 ms' component in the 'motion-minus-static' difference wave). For motion stimuli, the N1 amplitudes were larger over the hemisphere contralateral to the origin of motion, while for static stimuli N1 amplitudes over both hemispheres were in the same range. Contrary to the N1 component, the ERP in the 'd300-400 ms' time period showed stronger responses over the hemisphere contralateral to motion termination, with the static stimuli again yielding equal bilateral amplitudes. For the P2 component, a motion-specific effect with larger signal amplitudes over the left hemisphere was found compared to static stimuli.
The presently documented N1 components comply with the results of previous studies on auditory space processing and suggest a contralateral dominance during the cortical integration of spatial acoustic information. Additionally, the cortical activity in the 'd300-400 ms' time period indicates that, in addition to the motion origin (as reflected by the N1), the direction of motion (leftward/rightward) or rather motion termination is also cortically encoded. These electrophysiological results are in accordance with the 'snapshot' hypothesis, which assumes that auditory motion processing is not based on a genuine motion-sensitive system, but rather on a comparison of the spatial positions of motion origin (onset) and motion termination (offset). Still, the specificities of the present P2 component provide evidence for additional motion-specific processes, possibly associated with the evaluation of motion-specific attributes, i.e. motion direction and/or velocity, which is preponderant in the left hemisphere.
Affiliation(s)
- Nicole Richter
- University of Leipzig, Institute for Biology, Talstr 33, 04103 Leipzig, Germany.
|
39
|
Ganapathy MK, Narne VK, Kalaiah MK, Manjula P. Effect of pre-transition stimulus duration on acoustic change complex. Int J Audiol 2013; 52:350-9. [PMID: 23343242 DOI: 10.3109/14992027.2012.760850] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
|
40
|
Sensory thresholds obtained from MEG data: Cortical psychometric functions. Neuroimage 2012; 63:1249-56. [DOI: 10.1016/j.neuroimage.2012.08.013] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2012] [Revised: 07/10/2012] [Accepted: 08/05/2012] [Indexed: 11/19/2022] Open
|
41
|
A Pilot Study on Cortical Auditory Evoked Potentials in Children: Aided CAEPs Reflect Improved High-Frequency Audibility with Frequency Compression Hearing Aid Technology. Int J Otolaryngol 2012. [PMID: 23197983 PMCID: PMC3501956 DOI: 10.1155/2012/982894] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023] Open
Abstract
Background. This study investigated whether cortical auditory evoked potentials (CAEPs) could reliably be recorded and interpreted using clinical testing equipment, to assess the effects of hearing aid technology on the CAEP.
Methods. Fifteen normal hearing (NH) and five hearing impaired (HI) children were included in the study. NH children were tested unaided; HI children were tested while wearing hearing aids. CAEPs were evoked with tone bursts presented at a suprathreshold level. Presence/absence of CAEPs was established based on agreement between two independent raters.
Results. Present waveforms were interpreted for most NH listeners and all HI listeners, when stimuli were measured to be at an audible level. The younger NH children were found to have significantly different waveform morphology, compared to the older children, with grand averaged waveforms differing in the later part of the time window (the N2 response). Results suggest that in some children, frequency compression hearing aid processing improved audibility of specific frequencies, leading to increased rates of detectable cortical responses in HI children. Conclusions. These findings provide support for the use of CAEPs in measuring hearing aid benefit. Further research is needed to validate aided results across a larger group of HI participants and with speech-based stimuli.
|
42
|
Agrawal D, Timm L, Viola FC, Debener S, Büchner A, Dengler R, Wittfoth M. ERP evidence for the recognition of emotional prosody through simulated cochlear implant strategies. BMC Neurosci 2012; 13:113. [PMID: 22994867 PMCID: PMC3479061 DOI: 10.1186/1471-2202-13-113] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2012] [Accepted: 07/10/2012] [Indexed: 11/26/2022] Open
Abstract
Background Emotionally salient information in spoken language can be provided by variations in speech melody (prosody) or by emotional semantics. Emotional prosody is essential to convey feelings through speech. In sensorineural hearing loss, impaired speech perception can be improved by cochlear implants (CIs). The aim of this study was to investigate the performance of normal-hearing (NH) participants on the perception of emotional prosody with vocoded stimuli. Semantically neutral sentences with emotional (happy, angry and neutral) prosody were used. Sentences were manipulated to simulate two CI speech-coding strategies: the Advanced Combination Encoder (ACE) and the newly developed Psychoacoustic Advanced Combination Encoder (PACE). Twenty NH adults were asked to recognize emotional prosody from ACE and PACE simulations. Performance was assessed using behavioral tests and event-related potentials (ERPs). Results Behavioral data revealed superior performance with original stimuli compared to the simulations. For simulations, better recognition was observed for happy and angry prosody than for neutral prosody. Irrespective of simulated or unsimulated stimulus type, a significantly larger P200 event-related potential was observed after sentence onset for happy prosody than for the other two emotions. Further, the P200 amplitude was significantly more positive for the PACE strategy than for the ACE strategy. Conclusions Results suggested the P200 peak as an indicator of active differentiation and recognition of emotional prosody. The larger P200 peak amplitude for happy prosody indicated the importance of fundamental frequency (F0) cues in prosody processing. The advantage of PACE over ACE highlighted a privileged role of the psychoacoustic masking model in improving prosody perception. Taken together, the study emphasizes the importance of vocoded simulation for better understanding the prosodic cues which CI users may be utilizing.
Affiliation(s)
- Deepashri Agrawal
- Department of Neurology, Hannover Medical School, Hannover, Germany.
|
43
|
Shahin AJ, Kerlin JR, Bhat J, Miller LM. Neural restoration of degraded audiovisual speech. Neuroimage 2011; 60:530-8. [PMID: 22178454 DOI: 10.1016/j.neuroimage.2011.11.097] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2011] [Revised: 11/25/2011] [Accepted: 11/26/2011] [Indexed: 11/25/2022] Open
Abstract
When speech is interrupted by noise, listeners often perceptually "fill-in" the degraded signal, giving an illusion of continuity and improving intelligibility. This phenomenon involves a neural process in which the auditory cortex (AC) response to onsets and offsets of acoustic interruptions is suppressed. Since meaningful visual cues behaviorally enhance this illusory filling-in, we hypothesized that during the illusion, lip movements congruent with acoustic speech should elicit a weaker AC response to interruptions relative to static (no movements) or incongruent visual speech. AC response to interruptions was measured as the power and inter-trial phase consistency of the auditory evoked theta band (4-8 Hz) activity of the electroencephalogram (EEG) and the N1 and P2 auditory evoked potentials (AEPs). A reduction in the N1 and P2 amplitudes and in theta phase-consistency reflected the perceptual illusion at the onset and/or offset of interruptions regardless of visual condition. These results suggest that the brain engages filling-in mechanisms throughout the interruption, which repairs degraded speech lasting up to ~250 ms following the onset of the degradation. Behaviorally, participants perceived speech continuity over longer interruptions for congruent compared to incongruent or static audiovisual streams. However, this specific behavioral profile was not mirrored in the neural markers of interest. We conclude that lip-reading enhances illusory perception of degraded speech not by altering the quality of the AC response, but by delaying it during degradations so that longer interruptions can be tolerated.
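The inter-trial phase consistency measure used here for theta-band (4-8 Hz) EEG activity is, in its standard definition, the magnitude of the mean unit phase vector across trials: 0 for random phases, 1 for perfectly repeatable phase. A minimal NumPy sketch of that standard computation (illustrative, not this study's pipeline): band-pass the trials, take the analytic signal, and average the unit phasors.

```python
import numpy as np

def bandpass_fft(trials, fs, lo, hi):
    """Zero-phase band-pass by masking the FFT spectrum.
    trials: array (n_trials, n_samples), fs in Hz."""
    n = trials.shape[1]
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec = np.fft.rfft(trials, axis=1)
    spec[:, (freqs < lo) | (freqs > hi)] = 0  # keep only [lo, hi]
    return np.fft.irfft(spec, n, axis=1)

def analytic_signal(x):
    """Analytic signal via the FFT (Hilbert) method, last axis = time."""
    n = x.shape[-1]
    X = np.fft.fft(x, axis=-1)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h, axis=-1)

def theta_itpc(trials, fs, band=(4.0, 8.0)):
    """Inter-trial phase consistency per time point in the given band."""
    filt = bandpass_fft(np.asarray(trials, float), fs, *band)
    phase = np.angle(analytic_signal(filt))
    # Mean resultant length of the unit phase vectors across trials.
    return np.abs(np.mean(np.exp(1j * phase), axis=0))
```

Phase-locked trials yield ITPC near 1; trials with random phase at each repetition yield values near 1/sqrt(n_trials).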
Affiliation(s)
- Antoine J Shahin
- Department of Otolaryngology-Head and Neck Surgery, The Ohio State University, Columbus, OH 43212, USA.
|
44
|
When and where of auditory spatial processing in cortex: a novel approach using electrotomography. PLoS One 2011; 6:e25146. [PMID: 21949873 PMCID: PMC3176323 DOI: 10.1371/journal.pone.0025146] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2011] [Accepted: 08/29/2011] [Indexed: 11/19/2022] Open
Abstract
The modulation of brain activity as a function of auditory location was investigated using electro-encephalography in combination with standardized low-resolution brain electromagnetic tomography. Auditory stimuli were presented at various positions under anechoic conditions in free-field space, thus providing the complete set of natural spatial cues. Variation of electrical activity in cortical areas depending on sound location was analyzed by contrasts between sound locations at the time of the N1 and P2 responses of the auditory evoked potential. A clear-cut double dissociation with respect to the cortical locations and the points in time was found, indicating spatial processing (1) in the primary auditory cortex and posterodorsal auditory cortical pathway at the time of the N1, and (2) in the anteroventral pathway regions about 100 ms later at the time of the P2. Thus, it seems as if both auditory pathways are involved in spatial analysis but at different points in time. It is possible that the late processing in the anteroventral auditory network reflected the sharing of this region by analysis of object-feature information and spectral localization cues or even the integration of spatial and non-spatial sound features.
|
45
|
Nourski KV, Brugge JF. Representation of temporal sound features in the human auditory cortex. Rev Neurosci 2011; 22:187-203. [PMID: 21476940 DOI: 10.1515/rns.2011.016] [Citation(s) in RCA: 55] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Temporal information in acoustic signals is important for the perception of environmental sounds, including speech. This review focuses on several aspects of temporal processing within human auditory cortex and its relevance for the processing of speech sounds. Periodic non-speech sounds, such as trains of acoustic clicks and bursts of amplitude-modulated noise or tones, can elicit different percepts depending on the pulse repetition rate or modulation frequency. Such sounds provide convenient methodological tools to study representation of timing information in the auditory system. At low repetition rates of up to 8-10 Hz, each individual stimulus (a single click or a sinusoidal amplitude modulation cycle) within the sequence is perceived as a separate event. As repetition rates increase up to and above approximately 40 Hz, these events blend together, giving rise first to the percept of flutter and then to pitch. The extent to which neural responses of human auditory cortex encode temporal features of acoustic stimuli is discussed within the context of these perceptual classes of periodic stimuli and their relationship to speech sounds. Evidence for neural coding of temporal information at the level of the core auditory cortex in humans suggests possible physiological counterparts to perceptual categorical boundaries for periodic acoustic stimuli. Temporal coding is less evident in auditory cortical fields beyond the core. Finally, data suggest hemispheric asymmetry in temporal cortical processing.
Affiliation(s)
- Kirill V Nourski
- Human Brain Research Laboratory, Department of Neurosurgery, The University of Iowa, 200 Hawkins Dr., Iowa City, IA 52242, USA.
|
46
|
Yamashiro K, Inui K, Otsuru N, Kakigi R. Change-related responses in the human auditory cortex: An MEG study. Psychophysiology 2010; 48:23-30. [DOI: 10.1111/j.1469-8986.2010.01038.x] [Citation(s) in RCA: 50] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
48
|
Woodman GF. A brief introduction to the use of event-related potentials in studies of perception and attention. Atten Percept Psychophys 2010; 72:2031-46. [PMID: 21097848 PMCID: PMC3816929 DOI: 10.3758/app.72.8.2031] [Citation(s) in RCA: 140] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Because of the precise temporal resolution of electrophysiological recordings, the event-related potential (ERP) technique has proven particularly valuable for testing theories of perception and attention. Here, I provide a brief tutorial on the ERP technique for consumers of such research and those considering the use of human electrophysiology in their own work. My discussion begins with the basics regarding what brain activity ERPs measure and why they are well suited to reveal critical aspects of perceptual processing, attentional selection, and cognition, which are unobservable with behavioral methods alone. I then review a number of important methodological issues and often-forgotten facts that should be considered when evaluating or planning ERP experiments.
Affiliation(s)
- Geoffrey F Woodman
- Department of Psychology, Vanderbilt University, PMB 407817, 2301 Vanderbilt Place, Nashville, TN 37240-7817, USA.
|
49
|
Martin BA, Boothroyd A, Ali D, Leach-Berth T. Stimulus presentation strategies for eliciting the acoustic change complex: increasing efficiency. Ear Hear 2010; 31:356-66. [PMID: 20440114 PMCID: PMC2864929 DOI: 10.1097/aud.0b013e3181ce6355] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVE The purpose of this study was to compare four strategies for stimulus presentation in terms of their efficiency when generating a speech-evoked cortical acoustic change complex (ACC) in adults and children. DESIGN Ten normally hearing adults (aged 22 to 31 yrs) and nine normally hearing children (aged 6 to 9 yrs) served as participants. The ACC was elicited using a 75-dB SPL synthetic vowel containing 1000 Hz changes of second formant frequency, creating a change of perceived vowel between /u/ and /i/. The ACC was recorded from Cz using four stimulus formats. ACC magnitude was expressed as the standard deviation of the voltage waveform within a window believed to span the ACC. Noise magnitude was estimated from the variances at each sampling point in the same window. Efficiency was expressed in terms of the ACC to noise magnitude ratio divided by testing time. RESULTS ACC magnitude was not significantly different for the two directions of second formant change. Reducing inter-onset interval from 2 to 1 sec increased efficiency by a factor close to two. Combining data from the two directions of change increased efficiency further, by a factor approximating the square root of 2. CONCLUSION Continuous alternating stimulus presentation is more efficient than interrupted stimulus presentation in eliciting the ACC. The benefits of eliminating silent periods and doubling the number of acoustic changes presented in a given time period are not seriously offset by a reduction in root mean square response amplitude, at least in young adults and in children as young as 6 yrs.
Affiliation(s)
- Brett A Martin
- Program in Speech-Language-Hearing Sciences, Graduate Center of the City University of New York, New York, New York 10016, USA.
|
50
|
Contribution of Spectrotemporal Features on Auditory Event-Related Potentials Elicited by Consonant-Vowel Syllables. Ear Hear 2009; 30:704-12. [DOI: 10.1097/aud.0b013e3181b1d42d] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|