1
Dinçer D'Alessandro H, Nicastri M, Portanova G, Giallini I, Russo FY, Magliulo G, Greco A, Mancini P. Low-frequency pitch coding: relationships with speech-in-noise and music perception by pediatric populations with typical hearing and cochlear implants. Eur Arch Otorhinolaryngol 2024; 281:3475-3482. PMID: 38194096; PMCID: PMC11211119; DOI: 10.1007/s00405-023-08445-4.
Abstract
PURPOSE This study aimed to investigate the effects of low-frequency (LF) pitch perception on speech-in-noise and music perception performance by children with cochlear implants (CIC) and typical hearing (THC). Moreover, the relationships between speech-in-noise and music perception, as well as the effects of demographic and audiological factors on the present outcomes, were studied. METHODS The sample consisted of 22 CIC and 20 THC (7-10 years). Harmonic intonation (HI) and disharmonic intonation (DI) tests were used to assess LF pitch perception. Speech perception in quiet (WRSq) and in noise (WRSn + 10) was tested with the Italian bisyllabic words for pediatric populations. The Gordon test was used to evaluate music perception (rhythm, melody, harmony, and overall). RESULTS CIC/THC performance comparisons for LF pitch, speech-in-noise, and all music measures except harmony revealed statistically significant differences with large effect sizes. For the CI group, HI showed statistically significant correlations with melody discrimination. Melody/total Gordon scores were significantly correlated with WRSn + 10. For the overall group, HI/DI showed significant correlations with all music perception measures and WRSn + 10. Hearing thresholds showed significant effects on HI/DI scores. Hearing thresholds and WRSn + 10 scores were significantly correlated; both revealed significant effects on all music perception scores. CI age had significant effects on WRSn + 10, harmony, and total Gordon scores (p < 0.05). CONCLUSION These findings confirm the significant effects of LF pitch perception on complex listening performance. The significant correlations between speech-in-noise and music perception are promising in light of recent studies reporting significant positive effects of music training on speech-in-noise recognition in CIC.
Affiliation(s)
- Hilal Dinçer D'Alessandro
- Department of Audiology, Faculty of Health Sciences, Istanbul University-Cerrahpaşa, Istanbul, Turkey
- Maria Nicastri
- Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Ginevra Portanova
- Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Ilaria Giallini
- Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Giuseppe Magliulo
- Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Antonio Greco
- Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Patrizia Mancini
- Department of Sense Organs, Sapienza University of Rome, Rome, Italy
2
Brainstem encoding of frequency-modulated sweeps is relevant to Mandarin concurrent-vowels identification for normal-hearing and hearing-impaired listeners. Hear Res 2019; 380:123-136. DOI: 10.1016/j.heares.2019.06.005.
3
Cataldo A, Ferrè ER, di Pellegrino G, Haggard P. Why the whole is more than the sum of its parts: Salience-driven overestimation in aggregated tactile sensations. Q J Exp Psychol (Hove) 2019; 72:2509-2526. PMID: 30971159; DOI: 10.1177/1747021819847131.
Abstract
Experimental psychology often studies perception analytically, reducing its focus to minimal sensory units, such as thresholds or just noticeable differences in a single stimulus. Here, in contrast, we examine a synthetic aspect: how multiple inputs to a sensory system are aggregated into an overall percept. Participants in three experiments judged the total stimulus intensity for simultaneous electrical shocks to two digits. We tested whether the integration of component somatosensory stimuli into a total percept occurs automatically, or rather depends on the ability to consciously perceive discrepancy among components (Experiment 1), whether the discrepancy among these components influences sensitivity and/or perceptual bias in judging totals (Experiment 2), and whether the salience of each individual component stimulus affects perception of total intensity (Experiment 3). Perceptual aggregation of two simultaneous component events occurred both when participants could perceptually discriminate the two intensities and when they could not. Further, the actual discrepancy between the stimuli modulated both participants' sensitivity and perceptual bias: increasing discrepancies produced a systematic and progressive overestimation of total intensity. The degree of this bias depended primarily on the salience of the stronger stimulus in the pair. Overall, our results suggest that important nonlinear mechanisms contribute to sensory aggregation. The mind aggregates component inputs into a coherent and synthetic perceptual experience in a salience-weighted fashion that is not based on simple summation of inputs.
Affiliation(s)
- Antonio Cataldo
- Institute of Cognitive Neuroscience, University College London, London, UK; Centre for Studies and Research in Cognitive Neuroscience, Alma Mater Studiorum - University of Bologna, Cesena, Italy; Institute of Philosophy, School of Advanced Study, University of London, London, UK
- Giuseppe di Pellegrino
- Centre for Studies and Research in Cognitive Neuroscience, Alma Mater Studiorum - University of Bologna, Cesena, Italy
- Patrick Haggard
- Institute of Cognitive Neuroscience, University College London, London, UK; Institute of Philosophy, School of Advanced Study, University of London, London, UK
4
Coffey EBJ, Colagrosso EMG, Lehmann A, Schönwiesner M, Zatorre RJ. Individual Differences in the Frequency-Following Response: Relation to Pitch Perception. PLoS One 2016; 11:e0152374. PMID: 27015271; PMCID: PMC4807774; DOI: 10.1371/journal.pone.0152374.
Abstract
The scalp-recorded frequency-following response (FFR) is a measure of the auditory nervous system's representation of periodic sound, and may serve as a marker of training-related enhancements, behavioural deficits, and clinical conditions. However, FFRs of healthy normal subjects show considerable variability that remains unexplained. We investigated whether the FFR representation of the frequency content of a complex tone is related to the perception of the pitch of the fundamental frequency. The strength of the fundamental frequency in the FFR of 39 people with normal hearing was assessed when they listened to complex tones that either included or lacked energy at the fundamental frequency. We found that the strength of the fundamental representation of the missing fundamental tone complex correlated significantly with people's general tendency to perceive the pitch of the tone as either matching the frequency of the spectral components that were present, or that of the missing fundamental. Although at a group level the fundamental representation in the FFR did not appear to be affected by the presence or absence of energy at the same frequency in the stimulus, the two conditions were statistically distinguishable for some subjects individually, indicating that the neural representation is not linearly dependent on the stimulus content. In a second experiment using a within-subjects paradigm, we showed that subjects can learn to reversibly select between either fundamental or spectral perception, and that this is accompanied both by changes to the fundamental representation in the FFR and to cortical-based gamma activity. These results suggest that both fundamental and spectral representations coexist, and are available for later auditory processing stages, the requirements of which may also influence their relative strength and thus modulate FFR variability. The data also highlight voluntary mode perception as a new paradigm with which to study top-down vs. bottom-up mechanisms that support the emerging view of the FFR as the outcome of integrated processing in the entire auditory system.
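Editor's note: the missing-fundamental manipulation central to this study is easy to illustrate numerically. A complex of harmonics 2-4 contains no spectral energy at F0, yet a static nonlinearity (a crude stand-in for nonlinear transduction in the auditory system, not the authors' analysis pipeline) reintroduces a component at F0. A minimal pure-Python sketch with illustrative parameter values:

```python
import cmath, math

FS = 8000  # sampling rate (Hz); all values here are illustrative, not the study's
F0 = 200   # fundamental of the complex tone
N = 400    # 50 ms -> an integer number of cycles, so single-bin DFTs are exact

# Complex tone containing harmonics 2-4 only: no energy at F0 itself
x = [sum(math.sin(2 * math.pi * h * F0 * i / FS) for h in (2, 3, 4))
     for i in range(N)]

# Crude static nonlinearity (half-wave rectification), standing in for the
# nonlinear processing that can recreate energy at the missing fundamental
rect = [max(v, 0.0) for v in x]

def dft_mag(sig, freq):
    """Normalized single-bin DFT magnitude at `freq`, in amplitude units."""
    acc = sum(v * cmath.exp(-2j * math.pi * freq * i / FS)
              for i, v in enumerate(sig))
    return 2.0 * abs(acc) / len(sig)

mag_stim_f0 = dft_mag(x, F0)     # ~0: the stimulus lacks energy at F0
mag_rect_f0 = dft_mag(rect, F0)  # substantial: the nonlinearity reintroduces F0
```

The contrast between the two magnitudes is the point: a strong F0 component can appear in a neural response even when the stimulus spectrum contains none.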
Affiliation(s)
- Emily B. J. Coffey
- Montreal Neurological Institute, McGill University, Montreal, Canada
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada
- Alexandre Lehmann
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada
- Department of Psychology, University of Montreal, Montreal, Canada
- Department of Otolaryngology Head & Neck Surgery, McGill University, Montreal, Canada
- Marc Schönwiesner
- Montreal Neurological Institute, McGill University, Montreal, Canada
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada
- Department of Psychology, University of Montreal, Montreal, Canada
- Robert J. Zatorre
- Montreal Neurological Institute, McGill University, Montreal, Canada
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada
5
Gockel HE, Krugliak A, Plack CJ, Carlyon RP. Specificity of the Human Frequency Following Response for Carrier and Modulation Frequency Assessed Using Adaptation. J Assoc Res Otolaryngol 2015; 16:747-62. PMID: 26162415; PMCID: PMC4636589; DOI: 10.1007/s10162-015-0533-9.
Abstract
The frequency following response (FFR) is a scalp-recorded measure of phase-locked brainstem activity to stimulus-related periodicities. Three experiments investigated the specificity of the FFR for carrier and modulation frequency using adaptation. FFR waveforms evoked by alternating-polarity stimuli were averaged for each polarity and added, to enhance envelope, or subtracted, to enhance temporal fine structure information. The first experiment investigated peristimulus adaptation of the FFR for pure and complex tones as a function of stimulus frequency and fundamental frequency (F0). It showed more adaptation of the FFR in response to sounds with higher frequencies or F0s than to sounds with lower frequencies or F0s. The second experiment investigated tuning to modulation rate in the FFR. The FFR to a complex tone with a modulation rate of 213 Hz was not reduced more by an adaptor that had the same modulation rate than by an adaptor with a different modulation rate (90 or 504 Hz), thus providing no evidence that the FFR originates mainly from neurons that respond selectively to the modulation rate of the stimulus. The third experiment investigated tuning to audio frequency in the FFR using pure tones. An adaptor that had the same frequency as the target (213 or 504 Hz) did not generally reduce the FFR to the target more than an adaptor that differed in frequency (by 1.24 octaves). Thus, there was no evidence that the FFR originated mainly from neurons tuned to the frequency of the target. Instead, the results are consistent with the suggestion that the FFR for low-frequency pure tones at medium to high levels mainly originates from neurons tuned to higher frequencies. Implications for the use and interpretation of the FFR are discussed.
Affiliation(s)
- Hedwig E Gockel
- MRC Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge CB2 7EF, UK
- Alexandra Krugliak
- MRC Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge CB2 7EF, UK
- Christopher J Plack
- School of Psychological Sciences, University of Manchester, Manchester Academic Health Science Centre, Manchester M13 9PL, UK
- Robert P Carlyon
- MRC Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge CB2 7EF, UK
6
Bidelman GM, Alain C. Hierarchical neurocomputations underlying concurrent sound segregation: Connecting periphery to percept. Neuropsychologia 2015; 68:38-50. DOI: 10.1016/j.neuropsychologia.2014.12.020.
7
Xu Q, Ye D. Evaluation of a posteriori Wiener filtering applied to frequency-following response extraction in the auditory brainstem. Biomed Signal Process Control 2014. DOI: 10.1016/j.bspc.2014.08.003.
8
Rapid acquisition of auditory subcortical steady state responses using multichannel recordings. Clin Neurophysiol 2014; 125:1878-88. PMID: 24525091; DOI: 10.1016/j.clinph.2014.01.011.
Abstract
OBJECTIVE Auditory subcortical steady state responses (SSSRs), also known as frequency following responses (FFRs), provide a non-invasive measure of phase-locked neural responses to acoustic and cochlear-induced periodicities. SSSRs have been used both clinically and in basic neurophysiological investigation of auditory function. SSSR data acquisition typically involves thousands of presentations of each stimulus type, sometimes in two polarities, with acquisition times often exceeding an hour per subject. Here, we present a novel approach to reduce the data acquisition times significantly. METHODS Because the sources of the SSSR are deep compared to the primary noise sources, namely background spontaneous cortical activity, the SSSR varies more smoothly over the scalp than the noise. We exploit this property and extract SSSRs efficiently, using multichannel recordings and an eigendecomposition of the complex cross-channel spectral density matrix. RESULTS Our proposed method yields SNR improvement exceeding a factor of 3 compared to traditional single-channel methods. CONCLUSIONS It is possible to reduce data acquisition times for SSSRs significantly with our approach. SIGNIFICANCE The proposed method allows SSSRs to be recorded for several stimulus conditions within a single session and also makes it possible to acquire both SSSRs and cortical EEG responses without increasing the session length.
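Editor's note: the multichannel idea in this abstract can be sketched in a few lines. Per-trial complex spectral estimates from each channel form a cross-channel spectral density matrix; its principal eigenvalue captures the phase-locked component shared across the scalp, and the matching eigenvector gives channel-combination weights. The toy two-channel sketch below uses simulated data with hypothetical gains, noise level, and trial counts, and is not the authors' implementation:

```python
import cmath, math, random

random.seed(1)
FS, F, N, N_TRIALS = 1000, 100.0, 200, 100  # 200-ms epochs; illustrative values

def bin_dft(sig, freq):
    """Complex single-bin DFT at `freq`."""
    return sum(v * cmath.exp(-2j * math.pi * freq * i / FS)
               for i, v in enumerate(sig))

# The phase-locked response projects onto BOTH channels with fixed gains (it
# varies smoothly over the scalp); the noise is independent per channel.
gains = (1.0, 0.6)
sig = [math.sin(2 * math.pi * F * i / FS) for i in range(N)]
X = []  # per-trial complex spectral estimates: one (ch1, ch2) pair per trial
for _ in range(N_TRIALS):
    X.append([bin_dft([g * s + random.gauss(0.0, 1.0) for s in sig], F) / N
              for g in gains])

# Cross-channel spectral density matrix C = E[x x^H] (2x2, Hermitian, PSD)
C = [[sum(t[i] * t[j].conjugate() for t in X) / N_TRIALS for j in range(2)]
     for i in range(2)]

# Principal eigenvalue of a 2x2 Hermitian matrix, in closed form
a, c, b = C[0][0].real, C[1][1].real, C[0][1]
lam_max = (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + abs(b) ** 2)
```

Because the matrix is Hermitian and positive semi-definite, the principal eigenvalue always dominates either single channel's power on the diagonal, which is the sense in which the eigendecomposition cannot do worse than the best single channel.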
9
Nozaradan S, Zerouali Y, Peretz I, Mouraux A. Capturing with EEG the neural entrainment and coupling underlying sensorimotor synchronization to the beat. Cereb Cortex 2013; 25:736-47. PMID: 24108804; DOI: 10.1093/cercor/bht261.
Abstract
Synchronizing movements with rhythmic inputs requires tight coupling of sensory and motor neural processes. Here, using a novel approach based on the recording of steady-state-evoked potentials (SS-EPs), we examine how distant brain areas supporting these processes coordinate their dynamics. The electroencephalogram was recorded while subjects listened to a 2.4-Hz auditory beat and tapped their hand on every second beat. When subjects tapped to the beat, the EEG was characterized by a 2.4-Hz SS-EP compatible with beat-related entrainment and a 1.2-Hz SS-EP compatible with movement-related entrainment, based on the results of source analysis. Most importantly, when compared with passive listening of the beat, we found evidence suggesting an interaction between sensory- and motor-related activities when subjects tapped to the beat, in the form of (1) an additional SS-EP appearing at 3.6 Hz, compatible with a nonlinear product of sensorimotor integration; (2) phase coupling of beat- and movement-related activities; and (3) selective enhancement of beat-related activities over the hemisphere contralateral to the tapping, suggesting a top-down effect of movement-related activities on auditory beat processing. Taken together, our results are compatible with the view that rhythmic sensorimotor synchronization is supported by a dynamic coupling of sensory- and motor-related activities.
Affiliation(s)
- Sylvie Nozaradan
- Institute of Neuroscience (IONS), Université catholique de Louvain (UCL), Belgium; International Laboratory for Brain, Music and Sound Research (BRAMS), Université de Montréal, Canada
- Younes Zerouali
- Ecole de Technologie Supérieure, Université de Montréal, Canada
- Isabelle Peretz
- International Laboratory for Brain, Music and Sound Research (BRAMS), Université de Montréal, Canada
- André Mouraux
- Institute of Neuroscience (IONS), Université catholique de Louvain (UCL), Belgium
10
Lerud KD, Almonte FV, Kim JC, Large EW. Mode-locking neurodynamics predict human auditory brainstem responses to musical intervals. Hear Res 2013; 308:41-9. PMID: 24091182; DOI: 10.1016/j.heares.2013.09.010.
Abstract
The auditory nervous system is highly nonlinear. Some nonlinear responses arise through active processes in the cochlea, while others may arise in neural populations of the cochlear nucleus, inferior colliculus and higher auditory areas. In humans, auditory brainstem recordings reveal nonlinear population responses to combinations of pure tones, and to musical intervals composed of complex tones. Yet the biophysical origin of central auditory nonlinearities, their signal processing properties, and their relationship to auditory perception remain largely unknown. Both stimulus components and nonlinear resonances are well represented in auditory brainstem nuclei due to neural phase-locking. Recently mode-locking, a generalization of phase-locking that implies an intrinsically nonlinear processing of sound, has been observed in mammalian auditory brainstem nuclei. Here we show that a canonical model of mode-locked neural oscillation predicts the complex nonlinear population responses to musical intervals that have been observed in the human brainstem. The model makes predictions about auditory signal processing and perception that are different from traditional delay-based models, and may provide insight into the nature of auditory population responses. We anticipate that the application of dynamical systems analysis will provide the starting point for generic models of auditory population dynamics, and lead to a deeper understanding of nonlinear auditory signal processing possibly arising in excitatory-inhibitory networks of the central auditory nervous system. This approach has the potential to link neural dynamics with the perception of pitch, music, and speech, and lead to dynamical models of auditory system development.
Affiliation(s)
- Karl D Lerud
- University of Connecticut, Department of Psychology, 406 Babbidge Road, Storrs, CT 06269-1020, USA
- Felix V Almonte
- University of Connecticut, Department of Psychology, 406 Babbidge Road, Storrs, CT 06269-1020, USA
- Ji Chul Kim
- University of Connecticut, Department of Psychology, 406 Babbidge Road, Storrs, CT 06269-1020, USA
- Edward W Large
- University of Connecticut, Department of Psychology, 406 Babbidge Road, Storrs, CT 06269-1020, USA
11
Plack CJ, Barker D, Hall DA. Pitch coding and pitch processing in the human brain. Hear Res 2013; 307:53-64. PMID: 23938209; DOI: 10.1016/j.heares.2013.07.020.
Abstract
Neuroimaging studies have provided important information regarding how and where pitch is coded and processed in the human brain. Recordings of the frequency-following response (FFR), an electrophysiological measure of neural temporal coding in the brainstem, have shown that the precision of temporal pitch information is dependent on linguistic and musical experience, and can even be modified by short-term training. However, the FFR does not seem to represent the output of a pitch extraction process, and this raises questions regarding how the peripheral neural signal is processed to produce a unified sensation. Since stimuli with a wide variety of spectral and binaural characteristics can produce the same pitch, it has been suggested that there is a place in the ascending auditory pathway at which the representations converge. There is evidence from many different human neuroimaging studies that certain areas of auditory cortex are specifically sensitive to pitch, although the location is still a matter of debate. Taken together, the results suggest that the initial temporal pitch code in the auditory periphery is converted to a code based on neural firing rate in the brainstem. In the upper brainstem or auditory cortex, the information from the individual harmonics of complex tones is combined to form a general representation of pitch. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Affiliation(s)
- Christopher J Plack
- School of Psychological Sciences, The University of Manchester, Manchester M13 9PL, UK
12
Zhu L, Bharadwaj H, Xia J, Shinn-Cunningham B. A comparison of spectral magnitude and phase-locking value analyses of the frequency-following response to complex tones. J Acoust Soc Am 2013; 134:384-395. PMID: 23862815; PMCID: PMC3724813; DOI: 10.1121/1.4807498.
Abstract
Two experiments, both presenting diotic, harmonic tone complexes (100 Hz fundamental), were conducted to explore the envelope-related component of the frequency-following response (FFRENV), a measure of synchronous, subcortical neural activity evoked by a periodic acoustic input. Experiment 1 directly compared two common analysis methods, computing the magnitude spectrum and the phase-locking value (PLV). Bootstrapping identified which FFRENV frequency components were statistically above the noise floor for each metric and quantified the statistical power of the approaches. Across listeners and conditions, the two methods produced highly correlated results. However, PLV analysis required fewer processing stages to produce readily interpretable results. Moreover, at the fundamental frequency of the input, PLVs were farther above the metric's noise floor than spectral magnitudes. Having established the advantages of PLV analysis, the efficacy of the approach was further demonstrated by investigating how different acoustic frequencies contribute to FFRENV, analyzing responses to complex tones composed of different acoustic harmonics of 100 Hz (Experiment 2). Results show that the FFRENV response is dominated by peripheral auditory channels responding to unresolved harmonics, although low-frequency channels driven by resolved harmonics also contribute. These results demonstrate the utility of the PLV for quantifying the strength of FFRENV across conditions.
Affiliation(s)
- Li Zhu
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, People's Republic of China
13
Gockel HE, Farooq R, Muhammed L, Plack CJ, Carlyon RP. Differences between psychoacoustic and frequency following response measures of distortion tone level and masking. J Acoust Soc Am 2012; 132:2524-2535. PMID: 23039446; PMCID: PMC5777604; DOI: 10.1121/1.4751541.
Abstract
The scalp-recorded frequency following response (FFR) in humans was measured for a 244-Hz pure tone at a range of input levels and for complex tones containing harmonics 2-4 of a 300-Hz fundamental, but shifted by ±56 Hz. The effective magnitude of the cubic difference tone (CDT) and the quadratic difference tone (QDT, at F2-F1) in the FFR for the complex was estimated by comparing the magnitude spectrum of the FFR at the distortion product (DP) frequency with that for the pure tone. The effective DP levels in the FFR were higher than those commonly estimated in psychophysical experiments, indicating contributions to the DP in the FFR in addition to the audible propagated component. A low-frequency narrowband noise masker reduced the magnitude of FFR responses to the CDT but also to primary components over a wide range of frequencies. The results indicate that audible DPs may contribute very little to the DPs observed in the FFR and that using a narrowband noise for the purpose of masking audible DPs can have undesired effects on the FFR over a wide frequency range. The results are consistent with the notion that broadly tuned mechanisms central to the auditory nerve strongly influence the FFR.
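Editor's note: the distortion-product frequencies in this stimulus follow from simple arithmetic. Taking the lowest two components of the downward-shifted complex as the primaries F1 and F2 (an illustrative reading of the design, not a claim about the authors' exact analysis):

```python
# Components: harmonics 2-4 of a 300-Hz fundamental, shifted down by 56 Hz
f0, shift = 300, -56
components = [h * f0 + shift for h in (2, 3, 4)]  # 544, 844, 1144 Hz

# Treating the lowest two components as the primaries F1 and F2:
f1, f2 = components[0], components[1]
qdt = f2 - f1      # quadratic difference tone: 844 - 544 = 300 Hz (= F0)
cdt = 2 * f1 - f2  # cubic difference tone: 1088 - 844 = 244 Hz
```

With the -56 Hz shift, the CDT lands at 244 Hz, the frequency of the pure tone used in the study, while the QDT falls at the 300-Hz fundamental.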
Affiliation(s)
- Hedwig E Gockel
- MRC Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge CB2 7EF, United Kingdom
14
The frequency following response (FFR) may reflect pitch-bearing information but is not a direct representation of pitch. J Assoc Res Otolaryngol 2011; 12:767-82. PMID: 21826534; PMCID: PMC3214239; DOI: 10.1007/s10162-011-0284-1.
Abstract
The frequency following response (FFR), a scalp-recorded measure of phase-locked brainstem activity, is often assumed to reflect the pitch of sounds as perceived by humans. In two experiments, we investigated the characteristics of the FFR evoked by complex tones. FFR waveforms to alternating-polarity stimuli were averaged for each polarity and added, to enhance envelope, or subtracted, to enhance temporal fine structure information. In experiment 1, frequency-shifted complex tones, with all harmonics shifted by the same amount in Hertz, were presented diotically. Only the autocorrelation functions (ACFs) of the subtraction-FFR waveforms showed a peak at a delay shifted in the direction of the expected pitch shifts. This expected pitch shift was also present in the ACFs of the output of an auditory nerve model. In experiment 2, the components of a harmonic complex with harmonic numbers 2, 3, and 4 were presented either to the same ear ("mono") or the third harmonic was presented contralaterally to the ear receiving the even harmonics ("dichotic"). In the latter case, a pitch corresponding to the missing fundamental was still perceived. Monaural control conditions presenting only the even harmonics ("2 + 4") or only the third harmonic ("3") were also tested. Both the subtraction and the addition waveforms showed that (1) the FFR magnitude spectra for "dichotic" were similar to the sum of the spectra for the two monaural control conditions and lacked peaks at the fundamental frequency and other distortion products visible for "mono" and (2) ACFs for "dichotic" were similar to those for "2 + 4" and dissimilar to those for "mono." The results indicate that the neural responses reflected in the FFR preserve monaural temporal information that may be important for pitch, but provide no evidence for any additional processing over and above that already present in the auditory periphery, and do not directly represent the pitch of dichotic stimuli.
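Editor's note: the addition/subtraction logic used in this study (and in related FFR work above) can be sketched directly. Envelope-following activity is polarity-invariant and survives addition of the two polarity averages, while fine-structure activity inverts with polarity and survives subtraction; an ACF peak on the subtraction waveform then estimates the periodicity. A toy sketch with illustrative frequencies, not the study's stimuli or recordings:

```python
import math

FS = 10000  # illustrative sampling rate (Hz)
N = 1000    # 100-ms epoch
t = [i / FS for i in range(N)]

# Toy single-polarity "responses": an envelope-following component that is
# polarity-invariant (here 100 Hz) plus a fine-structure component that
# inverts with stimulus polarity (here 300 Hz)
env = [math.sin(2 * math.pi * 100 * x) for x in t]
tfs = [math.sin(2 * math.pi * 300 * x) for x in t]
pos = [e + f for e, f in zip(env, tfs)]     # response to one polarity
neg = [e - f for e, f in zip(env, tfs)]     # response to the inverted polarity

added = [p + n for p, n in zip(pos, neg)]   # addition enhances the envelope
subbed = [p - n for p, n in zip(pos, neg)]  # subtraction enhances fine structure

def acf(sig, lag):
    """Unnormalized autocorrelation at an integer-sample lag."""
    return sum(sig[i] * sig[i + lag] for i in range(len(sig) - lag))

# First ACF peak of the subtraction waveform recovers the fine-structure period
best_lag = max(range(10, 200), key=lambda lag: acf(subbed, lag))
tfs_freq = FS / best_lag                    # close to 300 Hz
```

In this idealized case the algebra is exact: the added waveform is twice the envelope component and the subtracted waveform twice the fine-structure component; in real recordings the separation is only approximate.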
15
Abstract
This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brain stem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical auditory function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, on-line auditory processing), helps shape sensory perception. Thus, as an objective and noninvasive means of examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, and persons with hearing loss, auditory processing disorders, and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical or research programs.
16
Ushakov YV, Dubkov AA, Spagnolo B. Spike train statistics for consonant and dissonant musical accords in a simple auditory sensory model. Phys Rev E 2010; 81:041911. PMID: 20481757; DOI: 10.1103/physreve.81.041911.
Abstract
The phenomena of dissonance and consonance in a simple auditory sensory model composed of three neurons are considered. Two of them, so-called sensory neurons, are driven by noise and subthreshold periodic signals with different frequency ratios, and their outputs, plus noise, are applied synaptically to a third neuron, the so-called interneuron. We present a theoretical analysis, with a probabilistic approach, to investigate the interspike interval statistics of the spike train generated by the interneuron. We find that tones with frequency ratios that are considered consonant by musicians produce, at the third neuron, inter-firing interval densities that are very distinct from densities obtained using tones with ratios that are known to be dissonant. In other words, at the output of the interneuron, inharmonious signals give rise to blurry spike trains, while harmonious signals produce more regular, less noisy spike trains. Theoretical results are compared with numerical simulations.
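Editor's note: the three-neuron architecture described above can be caricatured with leaky integrate-and-fire units. The sketch below is a loose illustration with hypothetical parameters (threshold, time constant, synaptic kick size), not the authors' calibrated model or its analytical treatment:

```python
import math, random

random.seed(42)

DT = 1e-4                # integration step (s)
TAU, V_TH = 0.005, 1.0   # membrane time constant and firing threshold
F1, F2 = 400.0, 600.0    # tones in a 2:3 ratio (a consonant fifth)

def lif_spikes(drive, noise_sd, t_end=1.0):
    """Leaky integrate-and-fire neuron: spike times for a time-varying drive."""
    v, spikes = 0.0, []
    for step in range(int(t_end / DT)):
        t = step * DT
        v += DT / TAU * (drive(t) - v) \
             + noise_sd * math.sqrt(DT) * random.gauss(0.0, 1.0)
        if v >= V_TH:
            spikes.append(t)
            v = 0.0      # reset (no refractory period in this caricature)
    return spikes

# Two noisy "sensory" neurons, each driven by a subthreshold periodic signal
s1 = lif_spikes(lambda t: 0.95 * (1.0 + math.sin(2 * math.pi * F1 * t)), 3.0)
s2 = lif_spikes(lambda t: 0.95 * (1.0 + math.sin(2 * math.pi * F2 * t)), 3.0)

# "Interneuron": each sensory spike delivers a brief depolarizing kick; the
# membrane leaks between kicks, so only sufficiently clustered inputs fire it
kicks = sorted(s1 + s2)
v, last, isis = 0.0, None, []
for i, t in enumerate(kicks):
    if i:
        v *= math.exp(-(t - kicks[i - 1]) / TAU)  # leak since previous input
    v += 0.6             # synaptic kick size (illustrative)
    if v >= V_TH:
        if last is not None:
            isis.append(t - last)   # interspike interval of the interneuron
        last, v = t, 0.0

mean_isi = sum(isis) / len(isis) if isis else None
```

The paper's result concerns the shape of the interneuron's interspike-interval density under consonant versus dissonant frequency ratios; this sketch only sets up the architecture on which such a comparison could be run.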
Affiliation(s)
- Yuriy V Ushakov
- Radiophysics Department, N.I. Lobachevsky State University, 23 Gagarin Avenue, 603950 Nizhniy Novgorod, Russia