1. Schneider P, Groß C, Bernhofs V, Christiner M, Benner J, Turker S, Zeidler BM, Seither-Preisler A. Short-term plasticity of neuro-auditory processing induced by musical active listening training. Ann N Y Acad Sci 2022; 1517:176-190. [PMID: 36114664; PMCID: PMC9826140; DOI: 10.1111/nyas.14899]
Abstract
Although there is strong evidence for the positive effects of musical training on auditory perception, processing, and training-induced neuroplasticity, there is still little knowledge of the auditory and neurophysiological short-term plasticity induced by listening training. In a sample of 37 adolescents (20 musicians and 17 nonmusicians), compared to a control group matched for age, gender, and musical experience, we conducted a 2-week active listening training (AULOS: Active IndividUalized Listening OptimizationS). Using magnetoencephalography and psychoacoustic tests, the short-term plasticity of auditory evoked fields and auditory skills was examined in a pre-post design adapted to the individual neuro-auditory profiles. We found bilateral plastic changes that were more pronounced in the right auditory cortex. Moreover, we observed synchronization of the auditory evoked P1, N1, and P2 responses and threefold larger amplitudes of the late P2 response, similar to the reported effects of musical long-term training. Auditory skills and thresholds benefited largely from the AULOS training. Remarkably, after training, the mean thresholds improved by 12 dB for bone conduction and by 3-4 dB for air conduction. Thus, our findings indicate a strong positive influence of active listening training on neural auditory processing and perception in adolescence, when the auditory system is still developing.
Affiliation(s)
- Peter Schneider
- Division of Neuroradiology, University of Heidelberg Medical School, Heidelberg, Germany; Department of Neurology, Section of Biomagnetism, University of Heidelberg Medical School, Heidelberg, Germany; Jazeps Vitols Latvian Academy of Music, Riga, Latvia; Centre for Systematic Musicology, University of Graz, Graz, Austria
- Christine Groß
- Division of Neuroradiology, University of Heidelberg Medical School, Heidelberg, Germany; Jazeps Vitols Latvian Academy of Music, Riga, Latvia
- Markus Christiner
- Jazeps Vitols Latvian Academy of Music, Riga, Latvia; Centre for Systematic Musicology, University of Graz, Graz, Austria
- Jan Benner
- Division of Neuroradiology, University of Heidelberg Medical School, Heidelberg, Germany; Department of Neurology, Section of Biomagnetism, University of Heidelberg Medical School, Heidelberg, Germany
- Sabrina Turker
- Lise Meitner Research Group "Cognition and Plasticity", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
2. Renvall H, Seol J, Tuominen R, Sorger B, Riecke L, Salmelin R. Selective auditory attention within naturalistic scenes modulates reactivity to speech sounds. Eur J Neurosci 2021; 54:7626-7641. [PMID: 34697833; PMCID: PMC9298413; DOI: 10.1111/ejn.15504]
Abstract
Rapid recognition and categorization of sounds are essential for humans and animals alike, both for understanding and reacting to our surroundings and for daily communication and social interaction. For humans, perception of speech sounds is of crucial importance. In real life, this task is complicated by the presence of a multitude of meaningful non-speech sounds. The present behavioural, magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) study set out to address how attention to speech versus attention to natural non-speech sounds within complex auditory scenes influences cortical processing. The stimuli were superimpositions of spoken words and environmental sounds, with parametric variation of the speech-to-environmental sound intensity ratio. The participants' task was to detect a repetition in either the speech or the environmental sound. We found that, specifically when participants attended to speech within the superimposed stimuli, higher speech-to-environmental sound ratios resulted in shorter sustained MEG responses, stronger BOLD fMRI signals (especially in the left supratemporal auditory cortex), and improved behavioural performance. No such effects of speech-to-environmental sound ratio were observed when participants attended to the environmental sound part within the exact same stimuli. These findings suggest stronger saliency of speech compared with other meaningful sounds during processing of natural auditory scenes, likely linked to speech-specific top-down and bottom-up mechanisms activated during speech perception that are needed for tracking speech in real-life-like auditory environments.
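The parametric speech-to-environmental intensity manipulation described above can be illustrated with a short mixing sketch. This is a minimal illustration under assumed conventions (an RMS-based level definition, illustrative dB steps, and placeholder noise signals standing in for the recordings), not the authors' stimulus code.

```python
import numpy as np

def rms(x):
    """Root-mean-square level of a signal."""
    return np.sqrt(np.mean(x ** 2))

def mix_at_ratio(speech, env, ratio_db):
    """Superimpose a speech and an environmental sound at a given
    speech-to-environmental intensity ratio (in dB), then rescale the
    mixture back to the RMS level of the speech signal."""
    n = min(len(speech), len(env))
    speech, env = speech[:n], env[:n]
    # Scale the environmental sound so that 20*log10(rms(speech)/rms(scaled env)) = ratio_db
    gain = rms(speech) / (rms(env) * 10 ** (ratio_db / 20))
    mixture = speech + gain * env
    return mixture * (rms(speech) / rms(mixture))

# Placeholder signals standing in for a spoken word and an environmental sound
fs = 44100
rng = np.random.default_rng(0)
speech = rng.standard_normal(fs)   # stand-in for a recorded word
env = rng.standard_normal(fs)      # stand-in for an environmental sound
stimuli = {db: mix_at_ratio(speech, env, db) for db in (-6, 0, 6, 12)}  # illustrative ratios
```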
Affiliation(s)
- Hanna Renvall
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Aalto NeuroImaging, Aalto University, Espoo, Finland; BioMag Laboratory, HUS Diagnostic Center, Helsinki University Hospital, University of Helsinki and Aalto University School of Science, Helsinki, Finland
- Jaeho Seol
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Aalto NeuroImaging, Aalto University, Espoo, Finland
- Riku Tuominen
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Aalto NeuroImaging, Aalto University, Espoo, Finland
- Bettina Sorger
- Department of Cognitive Neuroscience, Maastricht University, Maastricht, The Netherlands
- Lars Riecke
- Department of Cognitive Neuroscience, Maastricht University, Maastricht, The Netherlands
- Riitta Salmelin
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Aalto NeuroImaging, Aalto University, Espoo, Finland
3. Weise A, Schröger E, Horváth J. The detection of higher-order acoustic transitions is reflected in the N1 ERP. Psychophysiology 2018; 55:e13063. [DOI: 10.1111/psyp.13063]
Affiliation(s)
- Annekathrin Weise
- Institut für Psychologie, Universität Leipzig, Leipzig, Germany; Division of Physiological Psychology, Paris-Lodron Universität Salzburg, Salzburg, Austria
- Erich Schröger
- Institut für Psychologie, Universität Leipzig, Leipzig, Germany
- János Horváth
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary
4. Neuromagnetic correlates of voice pitch, vowel type, and speaker size in auditory cortex. Neuroimage 2017; 158:79-89. [PMID: 28669914; DOI: 10.1016/j.neuroimage.2017.06.065]
Abstract
Vowel recognition is largely immune to differences in speaker size despite the waveform differences associated with variation in speaker size. This has led to the suggestion that voice pitch and mean formant frequency (MFF) are extracted early in the hierarchy of hearing/speech processing and used to normalize the internal representation of vowel sounds. This paper presents a magnetoencephalographic (MEG) experiment designed to locate and compare neuromagnetic activity associated with voice pitch, MFF and vowel type in human auditory cortex. Sequences of six sustained vowels were used to contrast changes in the three components of vowel perception, and MEG responses to the changes were recorded from 25 participants. A staged procedure was employed to fit the MEG data with a source model having one bilateral pair of dipoles for each component of vowel perception. This dipole model showed that the activity associated with the three perceptual changes was functionally separable; the pitch source was located in Heschl's gyrus (bilaterally), while the vowel-type and formant-frequency sources were located (bilaterally) just behind Heschl's gyrus in planum temporale. The results confirm that vowel normalization begins in auditory cortex at an early point in the hierarchy of speech processing.
5. Krishnan A, Suresh CH, Gandour JT. Changes in pitch height elicit both language-universal and language-dependent changes in neural representation of pitch in the brainstem and auditory cortex. Neuroscience 2017; 346:52-63. [PMID: 28108254; DOI: 10.1016/j.neuroscience.2017.01.013]
Abstract
Language experience shapes encoding of pitch-relevant information at both brainstem and cortical levels of processing. Pitch height is a salient dimension that orders pitch from low to high. Herein we investigate the effects of language experience (Chinese, English) in the brainstem and cortex on (i) neural responses to variations in pitch height, (ii) the presence of asymmetry in cortical pitch representation, and (iii) patterns of relative changes in the magnitude of pitch height between these two levels of brain structure. Stimuli were three nonspeech homologs of Mandarin Tone 2 varying in pitch height only. The frequency-following response (FFR) and the cortical pitch-specific response (CPR) were recorded concurrently. At the Fz-linked T7/T8 site, peak latency of Na, Pb, and Nb decreased with increasing pitch height for both groups. Peak-to-peak amplitude of Na-Pb and Pb-Nb increased with increasing pitch height across groups. A language-dependent effect was restricted to Na-Pb: the Chinese group had larger amplitude than the English group. At temporal sites (T7/T8), the Chinese group had larger amplitude than the English group across stimuli, although this effect was limited to the Na-Pb component and the right temporal site. In the brainstem, F0 magnitude decreased with increasing pitch height, and the Chinese group had larger magnitude across stimuli. A comparison of CPR and FFR responses revealed distinct patterns of relative changes in magnitude common to both groups: CPR amplitude increased and FFR amplitude decreased with increasing pitch height. Experience-dependent effects on CPR components vary as a function of neural sensitivity to pitch height within a particular temporal window (Na-Pb). Differences between the auditory brainstem and cortex imply distinct neural mechanisms for pitch extraction at the two levels of brain structure.
Affiliation(s)
- Ananthanarayan Krishnan
- Purdue University, Department of Speech Language Hearing Sciences, Lyles-Porter Hall, 715 Clinic Drive, West Lafayette, IN 47907-2122, USA
- Chandan H Suresh
- Purdue University, Department of Speech Language Hearing Sciences, Lyles-Porter Hall, 715 Clinic Drive, West Lafayette, IN 47907-2122, USA
- Jackson T Gandour
- Purdue University, Department of Speech Language Hearing Sciences, Lyles-Porter Hall, 715 Clinic Drive, West Lafayette, IN 47907-2122, USA
6. Tabas A, Siebert A, Supek S, Pressnitzer D, Balaguer-Ballester E, Rupp A. Insights on the neuromagnetic representation of temporal asymmetry in human auditory cortex. PLoS One 2016; 11:e0153947. [PMID: 27096960; PMCID: PMC4838253; DOI: 10.1371/journal.pone.0153947]
Abstract
Communication sounds are typically asymmetric in time, and human listeners are highly sensitive to this short-term temporal asymmetry. Nevertheless, causal neurophysiological correlates of auditory perceptual asymmetry remain largely elusive to our current analyses and models. Auditory modelling and animal electrophysiological recordings suggest that perceptual asymmetry results from the presence of multiple time scales of temporal integration, central to the auditory periphery. To test this hypothesis we recorded auditory evoked fields (AEF) elicited by asymmetric sounds in humans. We found a strong correlation between the perceived tonal salience of ramped and damped sinusoids and the AEFs, as quantified by the amplitude of the N100m dynamics. The N100m amplitude increased with stimulus half-life time, showing a maximum difference between the ramped and damped stimulus for a modulation half-life time of 4 ms that is greatly reduced at 0.5 ms and 32 ms. This behaviour of the N100m closely parallels psychophysical data, in that (i) longer half-life times are associated with a stronger tonal percept, and (ii) perceptual differences between damped and ramped sounds are maximal at a 4 ms half-life time. Interestingly, differences in evoked fields were significantly stronger in the right hemisphere, indicating some degree of hemispheric specialisation. Furthermore, the N100m magnitude was successfully explained by a pitch perception model using multiple scales of temporal integration of auditory nerve activity patterns. This striking correlation between AEFs, perception, and model predictions suggests that the physiological mechanisms involved in the processing of pitch evoked by temporally asymmetric sounds are reflected in the N100m.
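As a concrete illustration of the ramped/damped stimulus manipulation, the sketch below builds a damped sinusoid from an exponential envelope with a given half-life time and time-reverses it to obtain the ramped counterpart. The carrier frequency and segment duration are assumed placeholder values, not the study's exact parameters.

```python
import numpy as np

def damped_sinusoid(half_life_ms, carrier_hz=1000.0, period_ms=50.0, fs=44100):
    """One segment of a damped sinusoid: a carrier multiplied by an
    exponential envelope that halves every half_life_ms."""
    t = np.arange(int(fs * period_ms / 1000)) / fs
    envelope = 0.5 ** (t * 1000.0 / half_life_ms)   # exponential decay
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

def ramped_sinusoid(half_life_ms, **kwargs):
    """Time-reversed (ramped) counterpart of the damped sinusoid."""
    return damped_sinusoid(half_life_ms, **kwargs)[::-1]

# Example: the three half-life times contrasted in the abstract (0.5, 4, 32 ms)
stimuli = {hl: (damped_sinusoid(hl), ramped_sinusoid(hl)) for hl in (0.5, 4.0, 32.0)}
```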
Affiliation(s)
- Alejandro Tabas
- Faculty of Science and Technology, Bournemouth University, Bournemouth, England, United Kingdom
- Anita Siebert
- Institute of Pharmacology and Toxicology, University of Zurich, Zürich, Switzerland
- Selma Supek
- Department of Physics, Faculty of Science, University of Zagreb, Zagreb, Croatia
- Daniel Pressnitzer
- Département d’Études Cognitives, École Normale Supérieure, Paris, France
- Emili Balaguer-Ballester
- Faculty of Science and Technology, Bournemouth University, Bournemouth, England, United Kingdom; Bernstein Center for Computational Neuroscience Heidelberg-Mannheim, Mannheim, Baden-Württemberg, Germany
- André Rupp
- Department of Neurology, Heidelberg University, Heidelberg, Baden-Württemberg, Germany
7. Plack CJ, Barker D, Hall DA. Pitch coding and pitch processing in the human brain. Hear Res 2013; 307:53-64. [PMID: 23938209; DOI: 10.1016/j.heares.2013.07.020]
Abstract
Neuroimaging studies have provided important information regarding how and where pitch is coded and processed in the human brain. Recordings of the frequency-following response (FFR), an electrophysiological measure of neural temporal coding in the brainstem, have shown that the precision of temporal pitch information is dependent on linguistic and musical experience, and can even be modified by short-term training. However, the FFR does not seem to represent the output of a pitch extraction process, and this raises questions regarding how the peripheral neural signal is processed to produce a unified sensation. Since stimuli with a wide variety of spectral and binaural characteristics can produce the same pitch, it has been suggested that there is a place in the ascending auditory pathway at which the representations converge. There is evidence from many different human neuroimaging studies that certain areas of auditory cortex are specifically sensitive to pitch, although the location is still a matter of debate. Taken together, the results suggest that the initial temporal pitch code in the auditory periphery is converted to a code based on neural firing rate in the brainstem. In the upper brainstem or auditory cortex, the information from the individual harmonics of complex tones is combined to form a general representation of pitch. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Affiliation(s)
- Christopher J Plack
- School of Psychological Sciences, The University of Manchester, Manchester M13 9PL, UK
8. Renvall H, Staeren N, Siep N, Esposito F, Jensen O, Formisano E. Of cats and women: temporal dynamics in the right temporoparietal cortex reflect auditory categorical processing of vocalizations. Neuroimage 2012; 62:1877-83. [DOI: 10.1016/j.neuroimage.2012.06.010]
9. Auditory event-related potentials reflect dedicated change detection activity for higher-order acoustic transitions. Biol Psychol 2012; 91:142-9. [DOI: 10.1016/j.biopsycho.2012.06.001]
10. Soeta Y, Nakagawa S. Auditory evoked responses in human auditory cortex to the variation of sound intensity in an ongoing tone. Hear Res 2012; 287:67-75. [DOI: 10.1016/j.heares.2012.03.006]
11. Miettinen I, Alku P, Yrttiaho S, May PJ, Tiitinen H. Cortical processing of degraded speech sounds: effects of distortion type and continuity. Neuroimage 2012; 60:1036-45. [DOI: 10.1016/j.neuroimage.2012.01.085]
12. Andermann M, van Dinther R, Patterson RD, Rupp A. Neuromagnetic representation of musical register information in human auditory cortex. Neuroimage 2011; 57:1499-506. [DOI: 10.1016/j.neuroimage.2011.05.049]
13. Renvall H, Formisano E, Parviainen T, Bonte M, Vihla M, Salmelin R. Parametric merging of MEG and fMRI reveals spatiotemporal differences in cortical processing of spoken words and environmental sounds in background noise. Cereb Cortex 2011; 22:132-43. [DOI: 10.1093/cercor/bhr095]
14. Park JY, Park H, Kim JI, Park HJ. Consonant chords stimulate higher EEG gamma activity than dissonant chords. Neurosci Lett 2011; 488:101-5. [DOI: 10.1016/j.neulet.2010.11.011]
15. Miettinen I, Alku P, Salminen N, May PJ, Tiitinen H. Responsiveness of the human auditory cortex to degraded speech sounds: reduction of amplitude resolution vs. additive noise. Brain Res 2011; 1367:298-309. [DOI: 10.1016/j.brainres.2010.10.037]
16. The analysis of simple and complex auditory signals in human auditory cortex: magnetoencephalographic evidence from M100 modulation. Ear Hear 2010; 31:515-26. [PMID: 20445455; DOI: 10.1097/aud.0b013e3181d99a75]
Abstract
OBJECTIVE: Ecologically valid signals (e.g., vowels) have multiple components of substantially different frequencies and amplitudes that may not be equally cortically represented. In this study, we investigate a relatively simple signal at an intermediate level of complexity, the two-frequency composite tone, a stimulus lying between simple sinusoids and ecologically valid signals such as speech. We aim to characterize the cortical response properties to better understand how complex signals may be represented in auditory cortex. DESIGN: Using magnetoencephalography, we assessed the sensitivity of the M100/N100m auditory-evoked component to manipulations of the power ratio of the individual frequency components of the two-frequency complexes. Fourteen right-handed subjects with normal hearing were scanned while passively listening to 10 complex and 12 simple signals. The complex signals were composed of one higher-frequency and one lower-frequency sinusoid; the lower-frequency sinusoidal component was at one of five loudness levels relative to the higher-frequency one: -20, -10, 0, +10, or +20 dB. The simple signals comprised all the complex signal components presented in isolation. RESULTS: The data replicate and extend several previous findings: (1) the systematic dependence of the M100 latency on signal intensity and (2) the dependence of the M100 latency on signal frequency, with lower-frequency signals (~100 Hz) exhibiting longer latencies than higher-frequency signals (~1000 Hz) even at matched loudness levels. (3) Importantly, we observe that, relative to simple signals, complex signals show increased response amplitude, as one might predict, but decreased M100 latencies. CONCLUSION: The data suggest that by the time the M100 is generated in auditory cortex (~70 to 80 ms after stimulus onset), integrative processing across frequency channels has taken place, which is observable in the M100 modulation. In light of these data, models that attribute more time and processing resources to a complex stimulus merit reevaluation, in that our data show that acoustically more complex signals are associated with robust temporal facilitation across frequencies and signal amplitude levels.
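The two-frequency composite tones described in the design can be sketched as follows. The component frequencies, duration, and sampling rate are placeholders, and the relative level is applied as a simple amplitude scaling in dB (the study set the levels in loudness terms, which this sketch does not model).

```python
import numpy as np

def composite_tone(f_low, f_high, rel_level_db, dur_s=0.4, fs=44100):
    """Two-frequency composite: a higher-frequency sinusoid plus a
    lower-frequency sinusoid scaled by rel_level_db."""
    t = np.arange(int(dur_s * fs)) / fs
    low = 10 ** (rel_level_db / 20) * np.sin(2 * np.pi * f_low * t)
    high = np.sin(2 * np.pi * f_high * t)
    return low + high

# The five relative levels reported for the lower component
variants = [composite_tone(100, 1000, db) for db in (-20, -10, 0, 10, 20)]
```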
17. Miettinen I, Tiitinen H, Alku P, May PJC. Sensitivity of the human auditory cortex to acoustic degradation of speech and non-speech sounds. BMC Neurosci 2010; 11:24. [PMID: 20175890; PMCID: PMC2837048; DOI: 10.1186/1471-2202-11-24]
Abstract
Background: Recent studies have shown that the human right-hemispheric auditory cortex is particularly sensitive to reduction in sound quality, with an increase in distortion resulting in an amplification of the auditory N1m response measured with magnetoencephalography (MEG). Here, we examined whether this sensitivity is specific to the processing of acoustic properties of speech or whether it can also be observed in the processing of sounds with a simple spectral structure. We degraded speech stimuli (the vowel /a/), complex non-speech stimuli (a composite of five sinusoids), and sinusoidal tones by decreasing the amplitude resolution of the signal waveform. The amplitude resolution was impoverished by reducing the number of bits used to represent the signal samples. Auditory evoked magnetic fields (AEFs) were measured in the left and right hemispheres of sixteen healthy subjects. Results: We found that the AEF amplitudes increased significantly with stimulus distortion for all stimulus types, which indicates that the right-hemispheric N1m sensitivity is not related exclusively to the degradation of acoustic properties of speech. In addition, the P1m and P2m responses were amplified with increasing distortion similarly in both hemispheres. The AEF latencies were not systematically affected by the distortion. Conclusions: We propose that the increased activity of AEFs reflects cortical processing of acoustic properties common to both speech and non-speech stimuli. More specifically, the enhancement is most likely caused by spectral changes brought about by the decrease of amplitude resolution, in particular the introduction of periodic, signal-dependent distortion to the original sound. Converging evidence suggests that the observed AEF amplification could reflect cortical sensitivity to periodic sounds.
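The degradation procedure, reducing the number of bits used to represent the signal samples, amounts to a uniform re-quantization of the waveform. The sketch below is a generic version under assumed scaling conventions, not the authors' exact implementation.

```python
import numpy as np

def reduce_bit_depth(signal, n_bits):
    """Re-quantize a floating-point signal (range [-1, 1]) to n_bits of
    amplitude resolution, then return it as floating point again."""
    levels = 2 ** (n_bits - 1)                      # quantization steps per polarity
    quantized = np.round(signal * (levels - 1)) / (levels - 1)
    return np.clip(quantized, -1.0, 1.0)

# Example: progressively coarser amplitude resolution of a 1-kHz tone
fs = 44100
t = np.arange(fs) / fs
tone = 0.9 * np.sin(2 * np.pi * 1000 * t)
degraded = {bits: reduce_bit_depth(tone, bits) for bits in (8, 4, 2)}
```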
Affiliation(s)
- Ismo Miettinen
- Department of Biomedical Engineering and Computational Science, Aalto University School of Science and Technology, Espoo, Finland
18.
Abstract
The aim of this paper was to determine whether the latency and/or amplitude of the N1m deflection of the auditory-evoked magnetic fields are influenced by the delay and number of iterations of iterated rippled noise, which are related to pitch and pitch strength, respectively. The results indicate that the N1m amplitude decreased sharply for delays between 16 and 32 ms, suggesting that the N1m amplitude reflects the lower limit of the audible pitch range. The N1m latency increases with increasing delay of up to 8-16 ms and then decreases again for delays longer than 16 ms. The behavior of the latency may reflect the balance between the pitch-related component of the N1m and a specific pitch-unrelated component.
Affiliation(s)
- Yoshiharu Soeta
- Institute for Human Science and Biomedical Engineering, National Institute of Advanced Industrial Science and Technology, Midorigaoka, Ikeda, Osaka, Japan
19. Shahin AJ, Roberts LE, Chau W, Trainor LJ, Miller LM. Music training leads to the development of timbre-specific gamma band activity. Neuroimage 2008; 41:113-22. [PMID: 18375147; DOI: 10.1016/j.neuroimage.2008.01.067]
Abstract
Oscillatory gamma band activity (GBA, 30-100 Hz) has been shown to correlate with perceptual and cognitive phenomena including feature binding, template matching, and learning and memory formation. We hypothesized that if GBA reflects highly learned perceptual template matching, we should observe its development in musicians specific to the timbre of their instrument of practice. EEG was recorded in adult professional violinists and amateur pianists as well as in 4- and 5-year-old children studying piano in the Suzuki method before they commenced music lessons and 1 year later. The adult musicians showed robust enhancement of induced (non-time-locked) GBA, specifically to their instrument of practice, with the strongest effect in professional violinists. Consistent with this result, the children receiving piano lessons exhibited increased power of induced GBA for piano tones with 1 year of training, while children not taking lessons showed no effect. In comparison to induced GBA, evoked (time-locked) gamma band activity (30-90 Hz, approximately 80 ms latency) was present only in adult groups. Evoked GBA was more pronounced in musicians than non-musicians, with synchronization equally exhibited for violin and piano tones but enhanced for these tones compared to pure tones. Evoked gamma activity may index the physical properties of a sound and is modulated by acoustical training, while induced GBA may reflect higher perceptual learning and is shaped by specific auditory experiences.
Affiliation(s)
- Antoine J Shahin
- Center for Mind and Brain, University of California, Davis, 267 Cousteau Place, Davis, CA 95618, USA
20. Shahin AJ, Roberts LE, Miller LM, McDonald KL, Alain C. Sensitivity of EEG and MEG to the N1 and P2 auditory evoked responses modulated by spectral complexity of sounds. Brain Topogr 2007; 20:55-61. [PMID: 17899352; PMCID: PMC4373076; DOI: 10.1007/s10548-007-0031-4]
Abstract
Acoustic complexity of a stimulus has been shown to modulate the electromagnetic N1 (latency approximately 110 ms) and P2 (latency 190 ms) auditory evoked responses. We compared the relative sensitivity of electroencephalography (EEG) and magnetoencephalography (MEG) to these neural correlates of sensation. Simultaneous EEG and MEG were recorded while participants listened to three variants of a piano tone. The piano stimuli differed in their number of harmonics: the fundamental frequency (f0) only, or f0 plus the first two or eight harmonics. The root mean square (RMS) of the amplitude of P2, but not N1, increased with spectral complexity of the piano tones in EEG and MEG. The RMS increase for P2 was more prominent in EEG than MEG, suggesting important radial sources contributing to the P2 only in EEG. Source analysis revealing contributions from radial and tangential sources was conducted to test this hypothesis. Source waveforms revealed a significant increase in the P2 radial source amplitude in EEG with increased spectral complexity of piano tones. The P2 of the tangential source waveforms also increased in amplitude with increased spectral complexity in EEG and MEG. The P2 auditory evoked response is thus represented by both tangential (gyri) and radial (sulci) activities. The radial contribution is expressed preferentially in EEG, highlighting the importance of combining EEG with MEG where complex source configurations are suspected.
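The RMS measure referred to here, the root mean square of response amplitude taken over sensors within a latency window, can be written compactly as below; the array layout, channel count, and P2 window are assumptions for illustration.

```python
import numpy as np

def rms_amplitude(evoked, times, t_min, t_max):
    """RMS over channels and time points within a latency window.
    evoked: array of shape (n_channels, n_times); times: vector in seconds."""
    window = (times >= t_min) & (times <= t_max)
    return np.sqrt(np.mean(evoked[:, window] ** 2))

# Example: P2 window of roughly 150-250 ms on a hypothetical evoked response
times = np.linspace(-0.1, 0.5, 601)
evoked = np.random.default_rng(0).standard_normal((64, times.size)) * 1e-6  # placeholder data
p2_rms = rms_amplitude(evoked, times, 0.15, 0.25)
```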
Affiliation(s)
- Antoine J Shahin
- UC Davis Center for Mind and Brain, University of California, Davis, 267 Cousteau Place, Davis, CA 95618, USA
21. Shahin AJ, Roberts LE, Pantev C, Aziz M, Picton TW. Enhanced anterior-temporal processing for complex tones in musicians. Clin Neurophysiol 2007; 118:209-20. [PMID: 17095291; DOI: 10.1016/j.clinph.2006.09.019]
Abstract
OBJECTIVE: To examine how auditory brain responses change with increased spectral complexity of sounds in musicians and non-musicians. METHODS: Event-related potentials (ERPs) and fields (ERFs) to binaural piano tones were measured in musicians and non-musicians. The stimuli were C4 piano tones and a pure sine tone at the C4 fundamental frequency (f0). The first piano tone contained f0 and the first eight harmonics, the second piano tone consisted of f0 and the first two harmonics, and the third piano tone consisted of f0 alone. RESULTS: Subtraction of the ERPs of the piano tone with only the fundamental from the ERPs of the harmonically rich piano tones yielded positive difference waves peaking at 130 ms (DP130) and 300 ms (DP300). The DP130 was larger in musicians than non-musicians, and both waves were maximally recorded over the right anterior scalp. ERP source analysis indicated anterior temporal sources with greater strength in the right hemisphere for both waves. Arbitrarily using these anterior sources to analyze the MEG signals showed a DP130m in musicians but not in non-musicians. CONCLUSIONS: Auditory responses in the anterior temporal cortex to complex musical tones are larger in musicians than non-musicians. SIGNIFICANCE: Neural networks in the anterior temporal cortex are activated during the processing of complex sounds. Their greater activation in musicians may index either underlying cortical differences related to musical aptitude or cortical modification by acoustical training.
22. Soeta Y, Nakagawa S. Complex tone processing and critical band in the human auditory cortex. Hear Res 2006; 222:125-32. [PMID: 17081712; DOI: 10.1016/j.heares.2006.09.005]
Abstract
Psychophysical experiments in humans have indicated that the auditory system has a well-defined bandwidth for resolution of complex stimuli. This bandwidth is known as the critical bandwidth (CBW). Physiological correlates of the CBW were examined in the human auditory cortex. Two- and three-tone complexes were used as the sound stimuli with all signals presented at 55 dB sound pressure level (SPL). The duration of stimulation was 500 ms, with rise and fall ramps of 10 ms. Ten normal-hearing subjects took part in the study. Auditory-evoked fields were recorded using a 122-channel whole-head magnetometer in a magnetically shielded room. The latencies, source strengths, and coordinates of the N1m waves, which were found above the left and right temporal lobes approximately 100 ms after the onset of stimulation, were analyzed. The results indicated that N1m amplitudes were approximately constant when the frequency separation of a two-tone complex or the total bandwidth of a three-tone complex was less than the CBW; however, the N1m amplitudes increased with increasing frequency separation or total bandwidth when these were greater than the CBW. These findings indicate critical band-like behavior in the human auditory cortex. The N1m amplitudes in the right hemisphere were significantly greater than those in the left hemisphere, which may reflect a right-hemispheric dominance in the processing of tonal stimuli.
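The stimulus logic, tone complexes whose frequency separation or total bandwidth is varied around a center frequency relative to the critical bandwidth, can be sketched as follows; the center frequency, separations, and nominal critical bandwidth in the example are illustrative values rather than the study's exact parameters.

```python
import numpy as np

def tone_complex(center_hz, total_bw_hz, n_tones, dur_s=0.5, fs=44100):
    """n_tones equal-amplitude sinusoids spaced evenly across total_bw_hz,
    centred on center_hz (two- or three-tone complexes in this study)."""
    t = np.arange(int(dur_s * fs)) / fs
    freqs = np.linspace(center_hz - total_bw_hz / 2,
                        center_hz + total_bw_hz / 2, n_tones)
    return sum(np.sin(2 * np.pi * f * t) for f in freqs) / n_tones

# Example: two-tone complexes around 1 kHz with separations below and above
# a nominal critical bandwidth of roughly 160 Hz
narrow = tone_complex(1000, 80, 2)
wide = tone_complex(1000, 320, 2)
```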
Affiliation(s)
- Yoshiharu Soeta
- Institute for Human Science and Biomedical Engineering, National Institute of Advanced Industrial Science and Technology (AIST), 1-8-31 Midorigaoka, Ikeda, Osaka 563-8577, Japan
23. Meyer M, Baumann S, Jancke L. Electrical brain imaging reveals spatio-temporal dynamics of timbre perception in humans. Neuroimage 2006; 32:1510-23. [PMID: 16798014; DOI: 10.1016/j.neuroimage.2006.04.193]
Abstract
Timbre is a major attribute of sound perception and a key feature for the identification of sound quality. Here, we present event-related brain potentials (ERPs) obtained from sixteen healthy individuals while they discriminated complex instrumental tones (piano, trumpet, and violin) or simple sine wave tones that lack the principal features of timbre. Data analysis yielded enhanced N1 and P2 responses to instrumental tones relative to sine wave tones. Furthermore, we applied an electrical brain imaging approach using low-resolution electromagnetic tomography (LORETA) to estimate the neural sources of N1/P2 responses. Separate significance tests of instrumental vs. sine wave tones for N1 and P2 revealed distinct regions as principally governing timbre perception. In an initial stage (N1), timbre perception recruits left and right (peri-)auditory fields with an activity maximum over the right posterior Sylvian fissure (SF) and the posterior cingulate (PCC) territory. In the subsequent stage (P2), we uncovered enhanced activity in the vicinity of the entire cingulate gyrus. The involvement of extra-auditory areas in timbre perception may imply the presence of a highly associative processing level which might be generally related to musical sensations and integrates widespread medial areas of the human cortex. In summary, our results demonstrate spatio-temporally distinct stages in timbre perception which not only involve bilateral parts of the peri-auditory cortex but also medially situated regions of the human brain associated with emotional and auditory imagery functions.
Affiliation(s)
- Martin Meyer
- Department of Neuropsychology, University of Zurich, Treichlerstrasse 10, CH-8032 Zurich, Switzerland
24. Soeta Y, Nakagawa S, Matsuoka K. The effect of center frequency and bandwidth on the auditory evoked magnetic field. Hear Res 2006; 218:64-71. [PMID: 16797895; DOI: 10.1016/j.heares.2006.04.002]
Abstract
Auditory evoked magnetic fields in relation to the center frequency of sound with a certain bandwidth were examined by magnetoencephalography (MEG). Octave band, 1/3 octave band, and 130 Hz bandwidth noises were used as the sound stimuli. All signals were presented at 60 dB SPL. The stimulus duration was 500 ms, with rise and fall ramps of 10 ms. Ten normal-hearing subjects took part in the study. Auditory evoked fields were recorded using a 122 channel whole-head magnetometer in a magnetically shielded room. The latencies, source strengths and coordinates of the N1m wave, which was found above the left and right temporal lobes around 100 ms after the stimulus onset, were analyzed. The results demonstrated that the middle frequency range had shorter N1m latencies and larger N1m amplitudes, and that the lower and higher frequency stimuli had relatively delayed N1m latencies and decreased N1m amplitudes. The N1m amplitudes correlated well to the loudness values in the frequency ranges between 250 and 2000 Hz. The source locations of N1m did not reveal any systematic changes related to the center frequency and bandwidth.
Affiliation(s)
- Yoshiharu Soeta
- Institute for Human Science and Biomedical Engineering, National Institute of Advanced Industrial Science and Technology, 1-8-31 Midorigaoka, Ikeda, Osaka 563-8577, Japan
25. Seither-Preisler A, Patterson RD, Krumbholz K, Seither S, Lütkenhöner B. From noise to pitch: transient and sustained responses of the auditory evoked field. Hear Res 2006; 218:50-63. [PMID: 16814971; DOI: 10.1016/j.heares.2006.04.005]
Abstract
In recent magnetoencephalographic studies, we established a novel component of the auditory evoked field, which is elicited by a transition from noise to pitch in the absence of a change in energy. It is referred to as the 'pitch onset response'. To extend our understanding of pitch-related neural activity, we compared transient and sustained auditory evoked fields in response to a 2000-ms segment of noise and a subsequent 1000-ms segment of regular interval sound (RIS). RIS provokes the same long-term spectral representation in the auditory system as noise, but is distinguished by a definite pitch, the salience of which depends on the degree of temporal regularity. The stimuli were presented at three steps of increasing regularity and two spectral bandwidths. The auditory evoked fields were recorded from both cerebral hemispheres of twelve subjects with a 37-channel magnetoencephalographic system. Both the transient and the sustained components evoked by noise and RIS were sensitive to spectral bandwidth. Moreover, the pitch salience of the RIS systematically affected the pitch onset response, the sustained field, and the off-response. This indicates that the underlying neural generators reflect the emergence, persistence and offset of perceptual attributes derived from the temporal regularity of a sound.
Affiliation(s)
- A Seither-Preisler
- Department of Experimental Audiology, ENT Clinic, Münster University Hospital, Kardinal-von-Galen-Ring 10, D-48149 Münster, Germany
26. Lütkenhöner B, Seither-Preisler A, Seither S. Piano tones evoke stronger magnetic fields than pure tones or noise, both in musicians and non-musicians. Neuroimage 2006; 30:927-37. [PMID: 16337814; DOI: 10.1016/j.neuroimage.2005.10.034]
Abstract
Regarding the net firing rate of the auditory nerve, the strongest response is to be expected when the input energy is spread as evenly as possible over the cochlea rather than being concentrated at a particular location. In some respects, this effect seems to be preserved up to the auditory cortex, but conflicting results have been reported as well. Here, we compared the auditory evoked fields (AEF) elicited by a pure tone and two sounds causing a more wide-spread cochlear activation: a piano tone as a representative of a complex tone, and bandpass noise. The stimuli had the same intensity (60 dB above threshold), and the center frequency of the noise corresponded to the fundamental frequency of the tones (1047 Hz, two octaves above middle C). Among the 26 subjects were 11 musicians and 11 persons who never played an instrument. At a latency of about 50 ms (wave P50m), the piano tone and the noise yielded stronger responses than the pure tone, in accordance with the concepts about the auditory periphery. By contrast, around 100 ms (wave N100m), the noise clearly elicited the smallest response, whereas the strongest response was elicited again by the piano tone. Musicians and non-musicians did not significantly differ concerning the responses to pure tones and piano tones. Thus, previous claims that an enhanced response to piano tones indicates use-dependent reorganization in musicians are not supported by the present data.
Affiliation(s)
- Bernd Lütkenhöner
- Department of Experimental Audiology, ENT Clinic, Kardinal-von-Galen-Ring 10, 48129 Münster, Germany
27. Seither-Preisler A, Patterson R, Krumbholz K, Seither S, Lütkenhöner B. Evidence of pitch processing in the N100m component of the auditory evoked field. Hear Res 2006; 213:88-98. [PMID: 16464550; DOI: 10.1016/j.heares.2006.01.003]
Abstract
The latency of the N100m component of the auditory evoked field (AEF) is sensitive to the period and spectrum of a sound. However, little attention has so far been paid to the wave shape at stimulus onset, which might have biased previous results. This problem was addressed in the present study by aligning the first major peaks in the acoustic waveforms. The stimuli were harmonic tones (spectral range: 800-5000 Hz) with periods corresponding to 100, 200, 400, and 800 Hz. The frequency components were in sine, alternating, or random phase. Simulations with a computational model suggest that the auditory-nerve activity is strongly affected by both the period and the relative phase of the stimulus, whereas the output of the more central pitch processor depends only on the period. Our AEF data, recorded from the right hemisphere of seven subjects, are consistent with the latter prediction: the latency of the N100m depends on the period, but not on the relative phase, of the stimulus components. This suggests that the N100m reflects temporal pitch extraction, although this does not necessarily imply that the underlying generators are directly involved in this analysis.
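The phase manipulation described here can be illustrated with a short harmonic-complex generator: components of a given period within the 800-5000 Hz range are summed in sine, alternating (sine/cosine), or random starting phase. Parameters beyond those stated in the abstract (duration, sampling rate, normalization) are placeholders.

```python
import numpy as np

def harmonic_complex(f0, phase_mode="sine", f_lo=800, f_hi=5000, dur_s=0.5, fs=44100):
    """Sum of harmonics of f0 within [f_lo, f_hi] with the requested phase relation."""
    t = np.arange(int(dur_s * fs)) / fs
    harmonics = [n * f0 for n in range(1, int(f_hi / f0) + 1) if f_lo <= n * f0 <= f_hi]
    signal = np.zeros_like(t)
    rng = np.random.default_rng(0)
    for i, f in enumerate(harmonics):
        if phase_mode == "sine":
            phase = 0.0
        elif phase_mode == "alternating":
            phase = (np.pi / 2) * (i % 2)      # alternate sine and cosine phase
        else:                                   # "random"
            phase = rng.uniform(0, 2 * np.pi)
        signal += np.sin(2 * np.pi * f * t + phase)
    return signal / len(harmonics)

# The four periods used in the study correspond to 100, 200, 400, and 800 Hz
stimuli = {f0: harmonic_complex(f0, "alternating") for f0 in (100, 200, 400, 800)}
```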
Affiliation(s)
- Annemarie Seither-Preisler
- Department of Experimental Audiology, ENT Clinic, Münster University Hospital, Kardinal-von-Galen-Ring 10, D-48129 Münster, Germany
28. Shahin A, Roberts LE, Pantev C, Trainor LJ, Ross B. Modulation of P2 auditory-evoked responses by the spectral complexity of musical sounds. Neuroreport 2005; 16:1781-5. [PMID: 16237326; DOI: 10.1097/01.wnr.0000185017.29316.63]
Abstract
We investigated whether the N1 and P2 auditory-evoked responses are modulated by the spectral complexity of musical sounds in pianists and non-musicians. Study participants were presented with three variants of a C4 piano tone equated for temporal envelope but differing in the number of harmonics contained in the stimulus. A fourth tone was a pure tone matched to the fundamental frequency of the piano tones. A simultaneous electroencephalographic/magnetoencephalographic recording was made. P2 amplitude was larger in musicians and increased with spectral complexity preferentially in this group, whereas N1 amplitude did not. The results suggest that P2 reflects the specific features of acoustic stimuli experienced during musical practice and point to functional differences between P2 and N1 that relate to their underlying mechanisms.
Affiliation(s)
- Antoine Shahin
- Department of Medical Physics and Applied Radiation Sciences, McMaster University, Hamilton, Ontario, Canada
29. Soeta Y, Nakagawa S, Tonoike M. Auditory evoked magnetic fields in relation to bandwidth variations of bandpass noise. Hear Res 2005; 202:47-54. [PMID: 15811698; DOI: 10.1016/j.heares.2004.09.012]
Abstract
Auditory evoked magnetic fields in relation to the bandwidth of bandpass noise were examined by magnetoencephalography (MEG). Pure tone and bandpass noises with center frequencies of 500, 1000 or 2000 Hz were used as the auditory signals. All source signals had the sound pressure level set at 74 dB. The stimulus duration was 0.5 s, with rise and fall ramps of 10 ms. Eight volunteers with normal hearing took part in the study. Auditory evoked fields were recorded using a neuromagnetometer in a magnetically-shielded room. The results showed that the peak amplitude of N1m, which was found above the left and right temporal lobes around 100 ms after the stimulus onset, decreased with increasing bandwidth of the bandpass noise. The latency and estimated equivalent current dipole (ECD) locations of N1m did not show any systematic variation as a function of the bandwidth for any of the center frequencies.
Affiliation(s)
- Yoshiharu Soeta
- Institute for Human Science and Biomedical Engineering, National Institute of Advanced Industrial Science and Technology (AIST), Midorigaoka, Ikeda, Osaka, Japan
30. Soeta Y, Nakagawa S, Tonoike M. Auditory evoked magnetic fields in relation to iterated rippled noise. Hear Res 2005; 205:256-61. [PMID: 15953534; DOI: 10.1016/j.heares.2005.03.026]
Abstract
Auditory evoked magnetic fields in relation to iterated rippled noise (IRN) were examined by magnetoencephalography (MEG). IRN was used as the sound stimulus to control the peak amplitude of the autocorrelation function of the sound. The IRN was produced by a delay-and-add algorithm applied to bandpass noise that was filtered using fourth-order Butterworth filters between 400 and 2200 Hz. All sound signals had the same sound pressure level. The stimulus duration was 0.5 s, with rise and fall ramps of 10 ms. Ten normal-hearing subjects took part in the study. Auditory evoked fields were recorded using a 122-channel whole-head magnetometer in a magnetically shielded room. The results showed that the peak amplitude of N1m, which was found above the left and right temporal lobes around 100 ms after stimulus onset, increased with the number of iterations of the IRN. The latency and estimated equivalent current dipole (ECD) locations of N1m did not show any systematic variation as a function of the number of iterations.
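The delay-and-add construction of iterated rippled noise can be sketched directly from this description: Gaussian noise is bandpass filtered with a fourth-order Butterworth filter between 400 and 2200 Hz and then repeatedly delayed and added to itself. The delay value, gain, sampling rate, and normalization are assumptions for illustration, not the study's exact settings.

```python
import numpy as np
from scipy.signal import butter, lfilter

def iterated_rippled_noise(n_iterations, delay_ms=8.0, gain=1.0,
                           dur_s=0.5, fs=44100, band=(400, 2200)):
    """Generate IRN: bandpass Gaussian noise passed n_iterations times
    through a delay-and-add loop (delay of delay_ms, additive gain)."""
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(int(dur_s * fs))
    # Fourth-order Butterworth bandpass between 400 and 2200 Hz
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    x = lfilter(b, a, noise)
    d = int(round(delay_ms * fs / 1000))
    for _ in range(n_iterations):
        delayed = np.concatenate([np.zeros(d), x[:-d]])
        x = x + gain * delayed
    return x / np.max(np.abs(x))

# Example: temporal regularity (and pitch strength) grows with the iteration count
stimuli = {n: iterated_rippled_noise(n) for n in (0, 2, 4, 8, 16, 32)}
```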
Affiliation(s)
- Yoshiharu Soeta
- Institute for Human Science and Biomedical Engineering, National Institute of Advanced Industrial Science and Technology, Ikeda, Osaka 563-8577, Japan
31. Seither-Preisler A, Krumbholz K, Patterson R, Seither S, Lütkenhöner B. Interaction between the neuromagnetic responses to sound energy onset and pitch onset suggests common generators. Eur J Neurosci 2004; 19:3073-80. [PMID: 15182315; DOI: 10.1111/j.0953-816x.2004.03423.x]
Abstract
The pitch-onset response (POR) is a negative component of the auditory evoked field which is elicited when the temporal fine structure of a continuous noise is regularized to produce a pitch percept without altering the gross spectral characteristics of the sound. Previously, we showed that the latency of the POR is inversely related to the pitch value and its amplitude is correlated with the salience of the pitch, suggesting that the underlying generators are part of a pitch-processing network [Krumbholz, K., Patterson, R.D., Seither-Preisler, A., Lammertmann, C. & Lütkenhöner, B. (2003) Cereb. Cortex, 13, 765-772]. The source of the POR was located near the medial part of Heschl's gyrus. The present study was designed to determine whether the POR originates from the same generators as the energy-onset response (EOR) represented by the N100m/P200m complex. The EOR to the onset of a noise, and the POR to a subsequent transition from noise to pitch, were recorded as the time interval between the noise onset and the transition varied from 500 to 4000 ms. The mean amplitude of the POR increased by approximately 5.9 nAm with each doubling of the time between noise onset and transition. This suggests an interaction between the POR and the EOR, which may be based on common neural generators.
Affiliation(s)
- A Seither-Preisler
- Department of Experimental Audiology, ENT Clinic, Münster University Hospital, Kardinal-von-Galen-Ring 10, D-48149 Münster, Germany