1. İlhan B, Kurt S, Ungan P. Auditory cortical responses to abrupt lateralization shifts do not reflect the activity of hemifield-specific units involved in opponent coding of auditory space. Neuropsychologia 2023;188:108629. PMID: 37356539. DOI: 10.1016/j.neuropsychologia.2023.108629.
Abstract
Recent studies show that the classical model based on axonal delay lines may not explain interaural time difference (ITD)-based spatial coding in humans. Instead, a population-code model called the "opponent channels model" (OCM) has been suggested. This model comprises two competing channels, one for each auditory hemifield, each with a sigmoidal tuning curve. Some studies have used event-related potentials (ERPs) to ITD changes to test the predictions of this model, treating the sounds before and after the change as adaptor and probe stimuli, respectively. These studies assume that the former stimulus adapts the neurons selective to its side, and that the ERP N1-P2 response to the ITD change is the specific response of the neurons selective to the side of the probe sound. However, these ERP components are known to form a global, non-specific acoustic change complex of cortical origin evoked by any change in the auditory environment. It probably does not genuinely reflect the activity of stimulus-specific neuronal units that have escaped the refractory effect of the preceding adaptor, which violates the crucial assumption of an adaptor-probe paradigm. To assess this viewpoint, we conducted two experiments. In the first, we recorded ERPs to abrupt lateralization shifts of click trains with various pre- and post-shift ITDs within the physiological range of -600 μs to +600 μs. Magnitudes of the ERP components P1, N1, and P2 to these ITD shifts did not comply with the additive behavior of partial probe responses presumed in an adaptor-probe paradigm, casting doubt on the validity of testing sensory coding models with ERPs to abrupt lateralization changes. Findings of the second experiment, involving ERPs to conjoint outward/transverse shift stimuli, also supported this conclusion.
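The opponent-channel scheme described above can be illustrated numerically: two hemifield channels with mirror-image sigmoidal ITD tuning, whose difference encodes laterality. All parameter values below (slope, midpoint) are illustrative assumptions, not estimates from this study:

```python
import math

def channel_response(itd_us, preferred_side, slope=0.005, midpoint=0.0):
    """Sigmoidal ITD tuning curve for one hemifield channel.
    preferred_side: +1 for a right-tuned channel, -1 for left-tuned.
    slope/midpoint are illustrative, not fitted parameters."""
    return 1.0 / (1.0 + math.exp(-slope * preferred_side * (itd_us - midpoint)))

def decode_laterality(itd_us):
    """Opponent code: laterality is read out as the difference between
    the right- and left-tuned channel activities."""
    return channel_response(itd_us, +1) - channel_response(itd_us, -1)
```

With these toy parameters the decoded laterality is zero at the midline and grows monotonically toward either side of the physiological ITD range.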
Affiliation(s)
- Barkın İlhan
- Department of Biophysics, Necmettin Erbakan University Meram Medical Faculty, Konya, Türkiye.
- Saliha Kurt
- Department of Audiometry, Selçuk University Vocational School of Health Services, Konya, Türkiye.
2. Bruzzone SEP, Haumann NT, Kliuchko M, Vuust P, Brattico E. Applying Spike-density component analysis for high-accuracy auditory event-related potentials in children. Clin Neurophysiol 2021;132:1887-1896. PMID: 34157633. DOI: 10.1016/j.clinph.2021.05.007.
Abstract
OBJECTIVE Overlapping neurophysiological signals are the main obstacle preventing the use of cortical auditory event-related potentials (AEPs) in clinical settings. Children's AEPs are particularly affected by this problem, as their cerebral cortex is still maturing. To overcome this problem, we applied a new version of Spike-density Component Analysis (SCA), a recently developed analysis method, to isolate with high accuracy the neural components of auditory responses of 8-year-old children. METHODS Electroencephalography was used with 33 children to record AEPs to auditory stimuli varying in spectrotemporal features. Three analysis approaches were adopted: the standard AEP analysis procedure, SCA with template match (SCA-TM), and SCA with half-split average consistency (SCA-HSAC). RESULTS SCA-HSAC most successfully allowed the extraction of AEPs for each child, revealing that the most consistent components were P1 and N2. An immature N1 component was also detected. CONCLUSION Superior accuracy in isolating neural components at the individual level was demonstrated for SCA-HSAC over the other SCA approaches, even for children's AEPs. SIGNIFICANCE Reliable methods for extracting neurophysiological signals at the individual level are crucial for applying cortical AEPs in routine diagnostic exams in clinical settings, in children as well as adults.
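Half-split average consistency, in its generic form, splits the single trials into two halves and correlates the half-averages; below is a sketch of that general idea only (the paper's SCA-HSAC criterion is more elaborate than this):

```python
import math

def split_half_consistency(trials):
    """Pearson correlation between the averages of odd- and even-numbered
    single trials (each trial is a list of samples). A generic split-half
    consistency sketch, not the study's SCA-HSAC implementation."""
    half_a = trials[0::2]
    half_b = trials[1::2]

    def average(rows):
        return [sum(col) / len(rows) for col in zip(*rows)]

    a, b = average(half_a), average(half_b)
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)
```

Identical half-averages give a consistency of 1, anti-correlated half-averages give -1.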
Affiliation(s)
- S E P Bruzzone
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and Royal Academy of Music, Aarhus/Aalborg, Universitetsbyen 3, 8000 Aarhus C, Denmark.
- N T Haumann
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and Royal Academy of Music, Aarhus/Aalborg, Universitetsbyen 3, 8000 Aarhus C, Denmark.
- M Kliuchko
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and Royal Academy of Music, Aarhus/Aalborg, Universitetsbyen 3, 8000 Aarhus C, Denmark; Hearing Systems Section, Department of Health Technology, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark.
- P Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and Royal Academy of Music, Aarhus/Aalborg, Universitetsbyen 3, 8000 Aarhus C, Denmark.
- E Brattico
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and Royal Academy of Music, Aarhus/Aalborg, Universitetsbyen 3, 8000 Aarhus C, Denmark; Department of Education, Psychology, Communication, University of Bari Aldo Moro, Italy.
3. Watanabe T, Motomura E, Kawano Y, Fujii S, Hakumoto Y, Morimoto M, Nakatani K, Okada M, Inui K. Electrical field distribution of Change-N1 and its prepulse inhibition. Neurosci Lett 2021;751:135804. PMID: 33705935. DOI: 10.1016/j.neulet.2021.135804.
Abstract
An abrupt change in a sound feature (Test) within a continuous sound elicits an auditory evoked potential peaking at approximately 100-180 ms after change onset (Change-N1). Change-N1 is attenuated by a preceding weak change stimulus (Prepulse), a phenomenon known as prepulse inhibition (PPI). In this electroencephalographic study, we compared these two indices across scalp electrodes. Change-N1 was elicited by an abrupt 10-dB increase in sound pressure within a train of 70-dB clicks at 100 Hz and was recorded with 22 electrodes in 31 healthy subjects. The prepulse was a 10-dB decrease in three consecutive clicks at 30, 40, and 50 ms before Test onset. Four stimuli (Test alone, Test with Prepulse, Prepulse alone, and background alone) were presented randomly through headphones with equal probability. The results demonstrated that: (1) electrodes at the frontal/central midline were reconfirmed as suitable for recording Change-N1; (2) Change-N1 showed right-hemisphere predominance; (3) there was no difference in %PPI among regions (prefrontal/frontal/central) or hemispheres (midline/left/right); and (4) Change-N1 amplitude and its PPI at prefrontal electrodes were positively correlated with those at frontal electrodes. These results support the use of Change-N1 and its PPI as tools to evaluate change-detection sensitivity and inhibitory function in individuals. Prefrontal electrodes can be an option for a screening test.
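%PPI in such paradigms is conventionally the percentage reduction of the test response caused by the prepulse; here is a minimal sketch using that common convention (the paper's exact normalization may differ):

```python
def percent_ppi(test_alone_amp, test_with_prepulse_amp):
    """Percent prepulse inhibition: relative reduction of the Test
    response amplitude when the Test is preceded by a prepulse.
    A common convention, not necessarily this study's exact formula."""
    if test_alone_amp == 0:
        raise ValueError("test-alone amplitude must be nonzero")
    return 100.0 * (test_alone_amp - test_with_prepulse_amp) / test_alone_amp
```

For example, a Change-N1 of 4.0 µV reduced to 3.0 µV by the prepulse corresponds to 25% PPI.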
Affiliation(s)
- Takayasu Watanabe
- Department of Central Laboratories, Mie University Hospital, Tsu, 514-8507, Japan.
- Eishi Motomura
- Department of Neuropsychiatry, Mie University Graduate School of Medicine, Tsu, 514-8507, Japan.
- Yasuhiro Kawano
- Department of Neuropsychiatry, Mie University Graduate School of Medicine, Tsu, 514-8507, Japan.
- Shinobu Fujii
- Department of Central Laboratories, Mie University Hospital, Tsu, 514-8507, Japan.
- Yuhei Hakumoto
- Department of Central Laboratories, Mie University Hospital, Tsu, 514-8507, Japan.
- Makoto Morimoto
- Department of Central Laboratories, Mie University Hospital, Tsu, 514-8507, Japan.
- Kaname Nakatani
- Department of Central Laboratories, Mie University Hospital, Tsu, 514-8507, Japan.
- Motohiro Okada
- Department of Neuropsychiatry, Mie University Graduate School of Medicine, Tsu, 514-8507, Japan.
- Koji Inui
- Department of Functioning and Disability, Institute for Developmental Research, Aichi Developmental Disability Center, Kasugai, 480-0392, Japan.
4. Yaralı M. Varying effect of noise on sound onset and acoustic change evoked auditory cortical N1 responses evoked by a vowel-vowel stimulus. Int J Psychophysiol 2020;152:36-43. PMID: 32302643. DOI: 10.1016/j.ijpsycho.2020.04.010.
Abstract
INTRODUCTION According to previous studies, noise prolongs the latencies and reduces the amplitudes of acoustic-change-evoked cortical responses. For a consonant-vowel stimulus in particular, speech-shaped noise affects the onset-evoked response more than the acoustic-change-evoked response. Reasoning that this may be related to the spectral characteristics of the stimuli and the noise, the current study presented a vowel-vowel stimulus (/ui/) in white noise during cortical response recordings. The hypothesis was that noise would affect the acoustic-change N1 more than the onset N1, owing to masking of the formant transitions. METHODS Onset and acoustic-change evoked auditory cortical N1-P2 responses were obtained from 21 young adults with normal hearing while 1000-ms /ui/ stimuli were presented in quiet and in white noise at +10 dB and 0 dB signal-to-noise ratio (SNR). RESULTS In the quiet and +10 dB SNR conditions, N1-P2 responses to both onset and change were present. In the +10 dB SNR condition, acoustic-change N1-P2 peak-to-peak amplitudes were reduced and N1 latencies prolonged relative to quiet, whereas onset N1 latencies and N1-P2 peak-to-peak amplitudes did not change significantly. In the 0 dB SNR condition, change responses were not observed, while onset N1-P2 peak-to-peak amplitudes were significantly lower and onset N1 latencies significantly longer than in the quiet and +10 dB SNR conditions. Onset and change responses were also compared with each other in each condition: in quiet, their N1 latencies and N1-P2 peak-to-peak amplitudes did not differ significantly, whereas at +10 dB SNR, acoustic-change N1 latencies were longer and N1-P2 amplitudes lower than those for onsets. DISCUSSION/CONCLUSIONS The effect of noise was greater on the acoustic-change-evoked N1 than on the onset N1. This may relate to the spectral characteristics of the noise and the stimuli, to differences in the acoustic features of sound onsets and acoustic changes, or to differences in the mechanisms for detecting them. To investigate the reasons for the more pronounced effect of noise on acoustic changes, future work with different vowel-vowel transitions in different noise types is suggested.
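Presenting a stimulus at a given SNR amounts to scaling the noise so that the RMS ratio of signal to noise matches the target value in dB; the following is a generic sketch of that mixing step, not the study's calibration procedure:

```python
import math

def rms(x):
    """Root-mean-square of a sample sequence."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def mix_at_snr(signal, noise, snr_db):
    """Scale `noise` so that rms(signal) / rms(scaled noise) equals the
    target SNR in dB, then add it to the signal sample by sample.
    Illustrative only; real stimulus calibration also involves the
    presentation level of the playback chain."""
    gain = rms(signal) / (rms(noise) * 10 ** (snr_db / 20.0))
    return [s + gain * n for s, n in zip(signal, noise)]
```

At +20 dB SNR the noise RMS ends up 10 times smaller than the signal RMS; at 0 dB SNR the two are equal.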
Affiliation(s)
- Mehmet Yaralı
- Department of Audiology, Hacettepe University, Ankara, Turkey.
5. Kinukawa T, Takeuchi N, Sugiyama S, Nishihara M, Nishiwaki K, Inui K. Properties of echoic memory revealed by auditory-evoked magnetic fields. Sci Rep 2019;9:12260. PMID: 31439871. PMCID: PMC6706430. DOI: 10.1038/s41598-019-48796-9.
Abstract
We used auditory-evoked magnetic fields to investigate the properties of echoic memory. The sound stimulus was a 1-ms click repeated at 100 Hz for 500 ms, presented every 800 ms. The phase of the sound was shifted by inserting an interaural time delay of 0.49 ms to one side or the other, yielding two sounds lateralized to the left and right. Depending on the preceding sound, each sound was labeled D (preceded by a different sound) or S (preceded by the same sound). The D sounds were further grouped into 1D, 2D, and 3D according to the number of preceding different sounds; the S sounds were similarly grouped into 1S and 2S. The results showed that the preceding event significantly affected the amplitude of the cortical response: although there was no difference between 1S and 2S, the amplitudes for D sounds were greater than those for S sounds. Most importantly, there was a significant amplitude difference between 1S and 1D. These results suggest that sensory memory is formed by a single sound and is immediately replaced by new information. This constantly updating nature of sensory memory may enable it to act as a real-time monitor for new information.
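The labeling scheme above (1D/2D/3D, 1S/2S) amounts to counting the run of immediately preceding sounds that are the same as, or different from, the current one. A minimal Python sketch under that reading of the abstract (not code from the study):

```python
def label_sounds(seq):
    """Label each sound after the first as nS or nD: 'S' if the
    immediately preceding sound was the same, 'D' if different; n is
    the length of the run of same/different sounds just before it.
    This run-counting rule is an interpretation of the abstract's
    description of the 1D/2D/3D and 1S/2S grouping."""
    labels = []
    for i in range(1, len(seq)):
        same = seq[i - 1] == seq[i]
        n, j = 0, i - 1
        # walk backwards while the same/different relation holds
        while j >= 0 and (seq[j] == seq[i]) == same:
            n += 1
            j -= 1
        labels.append(f"{n}{'S' if same else 'D'}")
    return labels
```

For example, in the sequence L R R L, the second R is a 1S sound and the final L is a 2D sound (preceded by two different sounds).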
Affiliation(s)
- Tomoaki Kinukawa
- Department of Anesthesiology, Nagoya University Graduate School of Medicine, Nagoya, 466-8550, Japan.
- Nobuyuki Takeuchi
- Neuropsychiatric Department, Aichi Medical University, Nagakute, 480-1195, Japan.
- Shunsuke Sugiyama
- Department of Psychiatry and Psychotherapy, Gifu University, Gifu, 501-1193, Japan.
- Makoto Nishihara
- Multidisciplinary Pain Center, Aichi Medical University, Nagakute, 480-1195, Japan.
- Kimitoshi Nishiwaki
- Department of Anesthesiology, Nagoya University Graduate School of Medicine, Nagoya, 466-8550, Japan.
- Koji Inui
- Department of Functioning and Disability, Institute for Developmental Research, Kasugai, 480-0392, Japan; Department of Integrative Physiology, National Institute for Physiological Sciences, Okazaki, 444-8585, Japan.
6. Motomura E, Inui K, Kawano Y, Nishihara M, Okada M. Effects of Sound-Pressure Change on the 40 Hz Auditory Steady-State Response and Change-Related Cerebral Response. Brain Sci 2019;9:203. PMID: 31426410. PMCID: PMC6721352. DOI: 10.3390/brainsci9080203.
Abstract
The auditory steady-state response (ASSR) elicited by a periodic sound stimulus is a neural oscillation, recorded here by magnetoencephalography (MEG), that is phase-locked to the repeated sound stimuli. The ASSR phase deviates after an abrupt change in a feature of a periodic sound stimulus and then returns to its steady-state value. An abrupt change also elicits an MEG component peaking at approximately 100-180 ms (called "Change-N1m"). We investigated whether both the ASSR phase deviation and Change-N1m were affected by the magnitude of a change in sound pressure. The ASSR and Change-N1m to 40 Hz click trains (1000 ms duration, 70 dB), with and without an abrupt change (±5, ±10, or ±15 dB), were recorded in ten healthy subjects. We used the source-strength waveforms obtained with a two-dipole model to measure the ASSR phase deviation and the Change-N1m values (peak amplitude and latency). As the magnitude of change increased, Change-N1m increased in amplitude and decreased in latency. Similarly, the ASSR phase deviation depended on the magnitude of the sound-pressure change. We therefore suggest that both Change-N1m and the ASSR phase deviation reflect the sensitivity of the brain's neural change-detection system.
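The phase of a steady-state response at the stimulation rate can in principle be estimated by projecting the recorded signal onto a complex exponential at that frequency (a single-bin DFT). This is only a generic sketch; the study measured phase on source-strength waveforms from a two-dipole model:

```python
import cmath
import math

def phase_at(signal, fs, freq):
    """Phase (radians) of the `freq` Hz component of `signal` sampled
    at `fs` Hz, via projection onto a complex exponential. Equivalent
    to evaluating a single DFT bin; a generic sketch, not the paper's
    dipole-based analysis pipeline."""
    n = len(signal)
    acc = sum(signal[k] * cmath.exp(-2j * math.pi * freq * k / fs)
              for k in range(n))
    return cmath.phase(acc)
```

A phase deviation after a stimulus change could then be quantified as the difference between the phases estimated in pre- and post-change analysis windows.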
Affiliation(s)
- Eishi Motomura
- Department of Neuropsychiatry, Mie University Graduate School of Medicine, Tsu 514-8507, Japan.
- Koji Inui
- Department of Functioning and Disability, Institute for Developmental Research, Aichi Human Service Center, Kasugai 480-0392, Japan.
- Yasuhiro Kawano
- Department of Neuropsychiatry, Mie University Graduate School of Medicine, Tsu 514-8507, Japan.
- Makoto Nishihara
- Multidisciplinary Pain Center, Aichi Medical University, Nagakute 480-1195, Japan.
- Motohiro Okada
- Department of Neuropsychiatry, Mie University Graduate School of Medicine, Tsu 514-8507, Japan.
7. Fiveash A, McArthur G, Thompson WF. Syntactic and non-syntactic sources of interference by music on language processing. Sci Rep 2018;8:17918. PMID: 30559400. PMCID: PMC6297162. DOI: 10.1038/s41598-018-36076-x.
Abstract
Music and language are complex hierarchical systems in which individual elements are systematically combined to form larger, syntactic structures. Suggestions that music and language share syntactic processing resources have relied on evidence that syntactic violations in music interfere with syntactic processing in language. However, syntactic violations may affect auditory processing in non-syntactic ways, accounting for reported interference effects. To investigate the factors contributing to interference effects, we assessed recall of visually presented sentences and word-lists when accompanied by background auditory stimuli differing in syntactic structure and auditory distraction: melodies without violations, scrambled melodies, melodies that alternate in timbre, and environmental sounds. In Experiment 1, one-timbre melodies interfered with sentence recall, and increasing both syntactic complexity and distraction by scrambling melodies increased this interference. In contrast, three-timbre melodies reduced interference on sentence recall, presumably because alternating instruments interrupted auditory streaming, reducing pressure on long-distance syntactic structure building. Experiment 2 confirmed that participants were better at discriminating syntactically coherent one-timbre melodies than three-timbre melodies. Together, these results illustrate that syntactic processing and auditory streaming interact to influence sentence recall, providing implications for theories of shared syntactic processing and auditory distraction.
Affiliation(s)
- Anna Fiveash
- Department of Psychology, Macquarie University, Sydney, Australia.
- Lyon Neuroscience Research Centre, Auditory Cognition and Psychoacoustics Team and Dynamique Du Langage Laboratory, INSERM, U1028, CNRS, UMR5292, Lyon, France.
- ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, Sydney, Australia.
- Genevieve McArthur
- ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, Sydney, Australia.
- Department of Cognitive Science, Macquarie University, Sydney, Australia.
- William Forde Thompson
- Department of Psychology, Macquarie University, Sydney, Australia.
- ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, Sydney, Australia.
8. Neural Mechanisms Underlying Cross-Modal Phonetic Encoding. J Neurosci 2017;38:1835-1849. PMID: 29263241. DOI: 10.1523/jneurosci.1566-17.2017.
Abstract
Audiovisual (AV) integration is essential for speech comprehension, especially in adverse listening situations. Divergent, but not mutually exclusive, theories have been proposed to explain the neural mechanisms underlying AV integration. One theory advocates that this process occurs via interactions between the auditory and visual cortices, as opposed to fusion of AV percepts in a multisensory integrator. Building upon this idea, we proposed that AV integration in spoken language reflects visually induced weighting of phonetic representations at the auditory cortex. EEG was recorded while male and female human subjects watched and listened to videos of a speaker uttering consonant vowel (CV) syllables /ba/ and /fa/, presented in Auditory-only, AV congruent or incongruent contexts. Subjects reported whether they heard /ba/ or /fa/. We hypothesized that vision alters phonetic encoding by dynamically weighting which phonetic representation in the auditory cortex is strengthened or weakened. That is, when subjects are presented with visual /fa/ and acoustic /ba/ and hear /fa/ (illusion-fa), the visual input strengthens the weighting of the phone /f/ representation. When subjects are presented with visual /ba/ and acoustic /fa/ and hear /ba/ (illusion-ba), the visual input weakens the weighting of the phone /f/ representation. Indeed, we found an enlarged N1 auditory evoked potential when subjects perceived illusion-ba, and a reduced N1 when they perceived illusion-fa, mirroring the N1 behavior for /ba/ and /fa/ in Auditory-only settings. These effects were especially pronounced in individuals with more robust illusory perception. These findings provide evidence that visual speech modifies phonetic encoding at the auditory cortex.
SIGNIFICANCE STATEMENT The current study presents evidence that audiovisual integration in spoken language occurs when one modality (vision) acts on representations of a second modality (audition). Using the McGurk illusion, we show that visual context primes phonetic representations at the auditory cortex, altering the auditory percept, evidenced by changes in the N1 auditory evoked potential. This finding reinforces the theory that audiovisual integration occurs via visual networks influencing phonetic representations in the auditory cortex. We believe that this will lead to the generation of new hypotheses regarding cross-modal mapping, particularly whether it occurs via direct or indirect routes (e.g., via a multisensory mediator).
9. Lightfoot G. Summary of the N1-P2 Cortical Auditory Evoked Potential to Estimate the Auditory Threshold in Adults. Semin Hear 2016;37:1-8. PMID: 27587918. DOI: 10.1055/s-0035-1570334.
Abstract
This article introduces the cortical auditory evoked potential (CAEP) and describes the use of the N1-P2 response complex as an objective predictor of hearing threshold in adults and older children. The generators of the CAEP are discussed together with issues of maturation, subject factors, and stimuli and recording parameters for use in the clinic. The basic methods for response identification are outlined and suggestions are made for determining the CAEP threshold. Clinical applications are introduced and the accuracy of the CAEP as an estimator of hearing threshold is given. Finally, a case study provides an example of the technique in the context of medicolegal assessment.
Affiliation(s)
- Guy Lightfoot
- ERA Training & Consultancy Ltd., West Kirby, England
10. Jeong E, Ryu H. Nonverbal auditory working memory: Can music indicate the capacity? Brain Cogn 2016;105:9-21. PMID: 27031677. DOI: 10.1016/j.bandc.2016.03.003.
Abstract
Different working memory (WM) mechanisms underlying words, tones, and timbres have been proposed in previous studies. Accordingly, the present study developed a WM test with nonverbal sounds and compared it to the conventional verbal WM test. Twenty-five right-handed, non-music-major college students were presented with four different types of sounds (words, syllables, pitches, timbres) that varied from two to eight digits in length. Both accuracy and oxygenated hemoglobin (oxyHb) were measured. The results showed significant effects of the number of targets on accuracy and of sound type on oxyHb. A further analysis showed prefrontal asymmetry, with pitch being processed by the right hemisphere (RH) and timbre by the left hemisphere (LH). These findings suggest a potential for employing musical sounds (i.e., pitch and timbre) as complementary stimuli to conventional nonverbal WM tests, which could additionally examine asymmetrical roles in the prefrontal regions.
Affiliation(s)
- Eunju Jeong
- Department of Arts & Technology, Hanyang University, Republic of Korea.
- Hokyoung Ryu
- Department of Arts & Technology, Hanyang University, Republic of Korea.
11. Auditory change-related cerebral responses and personality traits. Neurosci Res 2015;103:34-9. PMID: 26360233. DOI: 10.1016/j.neures.2015.08.005.
Abstract
The rapid detection of changes in sensory information is an essential process for survival. Individual humans are thought to have their own intrinsic preattentive responsiveness to sensory changes. Here we sought to determine the relationship between auditory change-related responses and personality traits, using event-related potentials. A change-related response peaking at approximately 120 ms (Change-N1) was elicited by an abrupt decrease in sound pressure (10 dB) from the baseline (60 dB) of a continuous sound. Sixty-three healthy volunteers (14 females and 49 males) were recruited and were assessed by the Temperament and Character Inventory (TCI) for personality traits. We investigated the relationship between Change-N1 values (amplitude and latency) and each TCI dimension. The Change-N1 amplitude was positively correlated with harm avoidance scores and negatively correlated with the self-directedness scores, but not with other TCI dimensions. Since these two TCI dimensions are associated with anxiety disorders and depression, it is possible that the change-related response is affected by personality traits, particularly anxiety- or depression-related traits.
12. Han JH, Dimitrijevic A. Acoustic change responses to amplitude modulation: a method to quantify cortical temporal processing and hemispheric asymmetry. Front Neurosci 2015;9:38. PMID: 25717291. PMCID: PMC4324071. DOI: 10.3389/fnins.2015.00038.
Abstract
Objective: Sound modulation is a critical temporal cue for the perception of speech and environmental sounds. To examine auditory cortical responses to sound modulation, we developed an acoustic change stimulus involving amplitude modulation (AM) of ongoing noise. The AM transitions in this stimulus evoked an acoustic change complex (ACC) that was examined parametrically in terms of rate and depth of modulation and hemispheric symmetry. Methods: Auditory cortical potentials were recorded from 64 scalp electrodes during passive listening in two conditions: (1) ACC from white noise to 4, 40, 300 Hz AM, with varying AM depths of 100, 50, 25% lasting 1 s and (2) 1 s AM noise bursts at the same modulation rate. Behavioral measures included AM detection from an attend ACC condition and AM depth thresholds (i.e., a temporal modulation transfer function, TMTF). Results: The N1 response of the ACC was large to 4 and 40 Hz and small to the 300 Hz AM. In contrast, the opposite pattern was observed with bursts of AM showing larger responses with increases in AM rate. Brain source modeling showed significant hemispheric asymmetry such that 4 and 40 Hz ACC responses were dominated by right and left hemispheres respectively. Conclusion: N1 responses to the ACC resembled a low pass filter shape similar to a behavioral TMTF. In the ACC paradigm, the only stimulus parameter that changes is AM and therefore the N1 response provides an index for this AM change. In contrast, an AM burst stimulus contains both AM and level changes and is likely dominated by the rise time of the stimulus. The hemispheric differences are consistent with the asymmetric sampling in time hypothesis suggesting that the different hemispheres preferentially sample acoustic time across different time windows. Significance: The ACC provides a novel approach to studying temporal processing at the level of cortex and provides further evidence of hemispheric specialization for fast and slow stimuli.
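The AM noise stimuli described above can be approximated by multiplying noise samples with a sinusoidal envelope at the modulation rate and depth; below is a hedged sketch (the normalization by 1 + depth is an illustrative choice to bound the peak envelope, not the study's stimulus-generation code):

```python
import math
import random

def am_noise(duration_s, fs, am_rate_hz, depth):
    """Gaussian white noise amplitude-modulated at `am_rate_hz` with
    modulation depth `depth` in [0, 1]:
        x[k] = noise[k] * (1 + depth * sin(2*pi*f*k/fs)) / (1 + depth)
    Parameters are illustrative; the study used 4, 40, and 300 Hz AM
    at 100%, 50%, and 25% depth."""
    n = int(duration_s * fs)
    return [random.gauss(0.0, 1.0)
            * (1.0 + depth * math.sin(2.0 * math.pi * am_rate_hz * k / fs))
            / (1.0 + depth)
            for k in range(n)]
```

With depth = 0 the envelope is flat and the output is plain white noise; with depth = 1 the envelope periodically pinches the noise to zero at the modulation rate.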
Affiliation(s)
- Ji Hye Han
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA.
- Andrew Dimitrijevic
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA.
13. Effects of acute nicotine on prepulse inhibition of auditory change-related cortical responses. Behav Brain Res 2013;256:27-35. DOI: 10.1016/j.bbr.2013.07.045.
14. He S, Grose JH, Buchman CA. Auditory discrimination: the relationship between psychophysical and electrophysiological measures. Int J Audiol 2013;51:771-82. PMID: 22998415. DOI: 10.3109/14992027.2012.699198.
Abstract
OBJECTIVES This study aimed to (1) investigate the relationship between the acoustic change complex (ACC) and perceptual measures of frequency and intensity discrimination and gap detection; and (2) examine the effects of acoustic change on the amplitudes and latencies of the ACC. DESIGN Psychophysical thresholds for frequency and intensity discrimination and gap detection, as well as ACCs elicited by stimuli containing increments in frequency or intensity, or gaps, were recorded from the same group of subjects. The magnitude of the acoustic change was systematically varied for the ACC recording. STUDY SAMPLE Twenty-six adults with normal hearing, aged 19 to 39 years. RESULTS Electrophysiological and psychophysical measures of frequency and intensity discrimination were significantly correlated. Electrophysiological thresholds were comparable to psychophysical thresholds for intensity discrimination but were higher than psychophysical thresholds for gap detection and frequency discrimination. Increasing the magnitude of acoustic change increased the ACC amplitude but did not show consistent effects across acoustic dimensions for ACC latency. CONCLUSIONS The ACC can be used as an objective index of auditory discrimination in frequency and intensity. The ACC amplitude is a better indicator of auditory processing than the ACC latency.
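Amplitudes of responses like the ACC (or N1-P2) are commonly quantified peak-to-peak: the most negative point within an N1 latency window minus nothing, subtracted from the most positive point within a P2 window. A sketch under assumed window bounds (the bounds below are illustrative, not this study's analysis parameters):

```python
def n1_p2_peak_to_peak(waveform, fs, n1_window=(0.08, 0.15),
                       p2_window=(0.15, 0.25)):
    """Peak-to-peak N1-P2 amplitude: N1 is taken as the minimum within
    its latency window and P2 as the maximum within its window.
    Window bounds (in seconds after stimulus/change onset) are assumed
    defaults for illustration only."""
    def window(lo, hi):
        return waveform[int(lo * fs):int(hi * fs)]
    n1 = min(window(*n1_window))
    p2 = max(window(*p2_window))
    return p2 - n1
```

For a waveform with a -3 µV trough at 100 ms and a +4 µV peak at 200 ms, the peak-to-peak amplitude is 7 µV.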
Affiliation(s)
- Shuman He
- Department of Otolaryngology-Head and Neck Surgery, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-7070, USA.
|
15
|
Ganapathy MK, Narne VK, Kalaiah MK, Manjula P. Effect of pre-transition stimulus duration on acoustic change complex. Int J Audiol 2013; 52:350-9. [PMID: 23343242 DOI: 10.3109/14992027.2012.760850] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
|
16
|
Sensory thresholds obtained from MEG data: Cortical psychometric functions. Neuroimage 2012; 63:1249-56. [DOI: 10.1016/j.neuroimage.2012.08.013] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2012] [Revised: 07/10/2012] [Accepted: 08/05/2012] [Indexed: 11/19/2022] Open
|
17
|
Inui K, Tsuruhara A, Kodaira M, Motomura E, Tanii H, Nishihara M, Keceli S, Kakigi R. Prepulse inhibition of auditory change-related cortical responses. BMC Neurosci 2012; 13:135. [PMID: 23113968 PMCID: PMC3502566 DOI: 10.1186/1471-2202-13-135] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2012] [Accepted: 10/25/2012] [Indexed: 12/03/2022] Open
Abstract
Background Prepulse inhibition (PPI) of the startle response is an important tool for investigating the biology of schizophrenia. PPI is usually observed by means of a startle reflex, such as blinking following an intense sound. A similar phenomenon has not been reported for cortical responses. Results In 12 healthy subjects, change-related cortical activity in response to an abrupt increase of sound pressure by 5 dB above a background of 65 dB SPL (test stimulus) was measured using magnetoencephalography. The test stimulus evoked a clear cortical response peaking at around 130 ms (Change-N1m). In Experiment 1, the effects of the intensity of a prepulse (0.5–5 dB) on the test response were examined using a paired stimulation paradigm. In Experiment 2, the effects of the interval between the prepulse and the test stimulus were examined using interstimulus intervals (ISIs) of 50–350 ms. When the test stimulus was preceded by the prepulse, the Change-N1m was more strongly inhibited by a stronger prepulse (Experiment 1) and by a shorter-ISI prepulse (Experiment 2). In addition, the amplitude of the test Change-N1m correlated positively with both the amplitude of the prepulse-evoked response and the degree of inhibition, suggesting that subjects who are more sensitive to the auditory change are more strongly inhibited by the prepulse. Conclusions Since the Change-N1m is easy to measure and control, it would be a valuable tool for investigating mechanisms of sensory gating and the biology of mental diseases such as schizophrenia.
Affiliation(s)
- Koji Inui
- Department of Integrative Physiology, National Institute for Physiological Sciences, Okazaki, 444-8585, Japan.
|
18
|
Itoh K, Okumiya-Kanke Y, Nakayama Y, Kwee IL, Nakada T. Effects of musical training on the early auditory cortical representation of pitch transitions as indexed by change-N1. Eur J Neurosci 2012; 36:3580-92. [DOI: 10.1111/j.1460-9568.2012.08278.x] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2012] [Revised: 08/03/2012] [Accepted: 08/06/2012] [Indexed: 11/27/2022]
Affiliation(s)
- Kosuke Itoh
- Center for Integrated Human Brain Science; Brain Research Institute; University of Niigata; 1-757 Asahimachi; Niigata; 951-8585; Japan
- Yoh Nakayama
- Yamaha Music Foundation; Music Research Institute; Tokyo; Japan
- Ingrid L. Kwee
- Department of Neurology; University of California; Davis; CA; USA
|
19
|
Nishihara M, Inui K, Motomura E, Otsuru N, Ushida T, Kakigi R. Auditory N1 as a change-related automatic response. Neurosci Res 2011; 71:145-8. [DOI: 10.1016/j.neures.2011.07.004] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2011] [Revised: 06/27/2011] [Accepted: 06/27/2011] [Indexed: 10/17/2022]
|
20
|
Akiyama LF, Yamashiro K, Inui K, Kakigi R. Automatic cortical responses to sound movement: A magnetoencephalography study. Neurosci Lett 2011; 488:183-7. [DOI: 10.1016/j.neulet.2010.11.025] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2010] [Revised: 11/07/2010] [Accepted: 11/09/2010] [Indexed: 10/18/2022]
|
21
|
Inui K, Urakawa T, Yamashiro K, Otsuru N, Takeshima Y, Nishihara M, Motomura E, Kida T, Kakigi R. Echoic memory of a single pure tone indexed by change-related brain activity. BMC Neurosci 2010; 11:135. [PMID: 20961454 PMCID: PMC2978218 DOI: 10.1186/1471-2202-11-135] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2010] [Accepted: 10/20/2010] [Indexed: 11/29/2022] Open
Abstract
Background The rapid detection of sensory change is important to survival. The process should relate closely to memory, since it requires that the brain separate a new stimulus from an ongoing background or past event. Given that sensory memory monitors current sensory status and works to pick up changes in real time, any change detected by this system should evoke a change-related cortical response. To test this hypothesis, we examined whether a single presentation of a sound is enough to elicit a change-related cortical response, and therefore to shape a memory trace sufficient to distinguish a subsequent stimulus. Results Under a paradigm in which two pure tones, 300 ms in duration and 800 or 840 Hz in frequency, were presented in a specific order at even probability, cortical responses to each sound were measured with magnetoencephalography. Sounds were grouped into five events regardless of their frequency: 1D, 2D, and 3D (a sound preceded by one, two, or three different sounds), and 1S and 2S (a sound preceded by one or two identical sounds). Whereas activation in the planum temporale did not differ among events, activation in the superior temporal gyrus (STG) was clearly greater for the different events (1D, 2D, 3D) than for the same events (1S and 2S). Conclusions One presentation of a sound is enough to shape a memory trace for comparison with a subsequent, physically different sound and elicits change-related cortical responses in the STG. The STG works as a real-time sensory gate open to new events.
Affiliation(s)
- Koji Inui
- Department of Integrative Physiology, National Institute for Physiological Sciences, Okazaki 444-8585, Japan.
|
22
|
Inui K, Urakawa T, Yamashiro K, Otsuru N, Nishihara M, Takeshima Y, Keceli S, Kakigi R. Non-linear laws of echoic memory and auditory change detection in humans. BMC Neurosci 2010; 11:80. [PMID: 20598152 PMCID: PMC2904354 DOI: 10.1186/1471-2202-11-80] [Citation(s) in RCA: 41] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2009] [Accepted: 07/03/2010] [Indexed: 11/29/2022] Open
Abstract
BACKGROUND The detection of any abrupt change in the environment is important to survival. Since memory of preceding sensory conditions is necessary for detecting changes, such a change-detection system relates closely to the memory system. Here we used an auditory change-related N1 subcomponent (change-N1) of event-related brain potentials to investigate cortical mechanisms underlying change detection and echoic memory. RESULTS Change-N1 was elicited by a simple paradigm with two tones, a standard followed by a deviant, while subjects watched a silent movie. The amplitude of change-N1 elicited by a fixed sound-pressure deviance (70 dB vs. 75 dB) was negatively correlated with the logarithm of the interval between the standard and deviant sounds (1, 10, 100, or 1000 ms), and positively correlated with the logarithm of the duration of the standard sound (25, 100, 500, or 1000 ms). The amplitude of change-N1 elicited by a deviance in sound pressure, sound frequency, or sound location was correlated with the logarithm of the magnitude of the physical difference between the standard and deviant sounds. CONCLUSIONS The present findings suggest that the temporal representation of echoic memory is non-linear and that the Weber-Fechner law holds for the automatic cortical response to sound changes within a suprathreshold range. Since the present results show that the behavior of echoic memory can be understood through change-N1, change-N1 would be a useful tool for investigating memory systems.
Affiliation(s)
- Koji Inui
- Department of Integrative Physiology, National Institute for Physiological Sciences, Myodaiji, Okazaki 444-8585, Japan
- Tomokazu Urakawa
- Department of Integrative Physiology, National Institute for Physiological Sciences, Myodaiji, Okazaki 444-8585, Japan
- Koya Yamashiro
- Department of Integrative Physiology, National Institute for Physiological Sciences, Myodaiji, Okazaki 444-8585, Japan
- Naofumi Otsuru
- Department of Integrative Physiology, National Institute for Physiological Sciences, Myodaiji, Okazaki 444-8585, Japan
- Makoto Nishihara
- Multidisciplinary Pain Center, Aichi Medical University, Aichi 480-1195, Japan
- Yasuyuki Takeshima
- Department of Integrative Physiology, National Institute for Physiological Sciences, Myodaiji, Okazaki 444-8585, Japan
- Sumru Keceli
- Department of Integrative Physiology, National Institute for Physiological Sciences, Myodaiji, Okazaki 444-8585, Japan
- Ryusuke Kakigi
- Department of Integrative Physiology, National Institute for Physiological Sciences, Myodaiji, Okazaki 444-8585, Japan
|
23
|
Pratt H, Starr A, Michalewski HJ, Dimitrijevic A, Bleich N, Mittelman N. Cortical evoked potentials to an auditory illusion: binaural beats. Clin Neurophysiol 2009; 120:1514-24. [PMID: 19616993 DOI: 10.1016/j.clinph.2009.06.014] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2009] [Revised: 05/31/2009] [Accepted: 06/18/2009] [Indexed: 11/30/2022]
Abstract
OBJECTIVE To define brain activity corresponding to an auditory illusion of 3 and 6Hz binaural beats in 250Hz or 1000Hz base frequencies, and to compare it to the sound-onset response. METHODS Event-Related Potentials (ERPs) were recorded in response to unmodulated tones of 250 or 1000Hz in one ear and tones 3 or 6Hz higher in the other, creating an illusion of amplitude modulations (beats) of 3Hz and 6Hz in base frequencies of 250Hz and 1000Hz. Tones were 2000ms in duration and presented at approximately 1s intervals. Latency, amplitude, and source current density estimates of ERP components to tone onset and to the subsequent beats-evoked oscillations were determined and compared across beat frequencies for both base frequencies. RESULTS All stimuli evoked tone-onset P(50), N(100), and P(200) components followed by oscillations corresponding to the beat frequency and a subsequent tone-offset complex. Beats-evoked oscillations were higher in amplitude with the low base frequency and with the low beat frequency. In all stimulus conditions, sources of the beats-evoked oscillations were located mostly in left lateral and inferior temporal lobe areas. Onset-evoked components did not differ across stimulus conditions; P(50) had significantly different sources than the beats-evoked oscillations, while N(100) and P(200) sources were located in the same temporal lobe regions as the beats-evoked oscillations but were bilateral and also included frontal and parietal contributions. CONCLUSIONS Neural activity with slightly different volley frequencies from the left and right ears converges and interacts in the central auditory brainstem pathways to generate beats of neural activity that modulate activity in the left temporal lobe, giving rise to the illusion of binaural beats. Cortical potentials recorded to binaural beats are distinct from onset responses. SIGNIFICANCE Brain activity corresponding to an auditory illusion of low-frequency beats can be recorded from the scalp.
Affiliation(s)
- Hillel Pratt
- Evoked Potentials Laboratory, Behavioral Biology, Technion - Israel Institute of Technology, Haifa, Israel.
|
24
|
Pratt H, Starr A, Michalewski HJ, Dimitrijevic A, Bleich N, Mittelman N. Auditory-evoked potentials to frequency increase and decrease of high- and low-frequency tones. Clin Neurophysiol 2009; 120:360-73. [DOI: 10.1016/j.clinph.2008.10.158] [Citation(s) in RCA: 29] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2008] [Revised: 10/15/2008] [Accepted: 10/24/2008] [Indexed: 11/16/2022]
|
25
|
Laufer I, Negishi M, Constable RT. Comparator and non-comparator mechanisms of change detection in the context of speech--an ERP study. Neuroimage 2009; 44:546-62. [PMID: 18938250 PMCID: PMC2643129 DOI: 10.1016/j.neuroimage.2008.09.010] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2008] [Revised: 08/11/2008] [Accepted: 09/09/2008] [Indexed: 11/25/2022] Open
Abstract
Automatic change detection reflects a cognitive, memory-based comparator mechanism as well as a sensorial non-comparator mechanism based on differential states of refractoriness. The purpose of this study was to examine whether the comparator mechanism of the mismatch negativity (MMN) is differentially affected by the lexical status of the deviant. Event-related potential (ERP) data were collected during an "oddball" paradigm designed to elicit the MMN from 15 healthy subjects who performed a counting task. Topographic pattern analysis and source estimation were used to examine the deviance (deviants vs. standards), cognitive (deviants vs. control counterparts), and refractoriness (standards vs. control counterparts) effects elicited by standard-deviant pairs ("deh-day"; "day-deh"; "teh-tay") embedded within "oddball" blocks. Our results showed that when the change was salient regardless of lexical status (i.e., the /e:/ to /eI/ transition), the response tapped the comparator-based mechanism of the MMN, which was located in the cuneus/posterior cingulate, reflected sensitivity to the novelty of the auditory object, appeared in the P2 latency range, and mainly involved topography modulations. In contrast, when novelty was low (i.e., the /eI/ to /e:/ transition), an acoustic change complex was elicited, which involved strength modulations over the P1/N1 range and implicated the middle temporal gyrus. This result pattern also resembled that displayed by the non-comparator mechanism. These findings suggest spatially and temporally distinct brain activities of comparator and non-comparator mechanisms of change detection in the context of speech.
Affiliation(s)
- Ilan Laufer
- Department of Diagnostic Radiology, Yale University School of Medicine, The Anlyan Center, New Haven, CT 06520-8043, USA.
|
26
|
Dimitrijevic A, Michalewski HJ, Zeng FG, Pratt H, Starr A. Frequency changes in a continuous tone: auditory cortical potentials. Clin Neurophysiol 2008; 119:2111-24. [PMID: 18635394 DOI: 10.1016/j.clinph.2008.06.002] [Citation(s) in RCA: 56] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2008] [Revised: 05/19/2008] [Accepted: 06/06/2008] [Indexed: 11/30/2022]
Abstract
OBJECTIVE We examined auditory cortical potentials in normal hearing subjects to spectral changes in continuous low and high frequency pure tones. METHODS Cortical potentials were recorded to increments of frequency from continuous 250 or 4000Hz tones. The magnitude of change was random and varied from 0% to 50% above the base frequency. RESULTS Potentials consisted of N100, P200 and a slow negative wave (SN). N100 amplitude, latency and dipole magnitude with frequency increments were significantly greater for low compared to high frequencies. Dipole amplitudes were greater in the right than left hemisphere for both base frequencies. The SN amplitude to frequency changes between 4% and 50% was not significantly related to the magnitude of spectral change. CONCLUSIONS Modulation of N100 amplitude and latency elicited by spectral change is more pronounced with low compared to high frequencies. SIGNIFICANCE These data provide electrophysiological evidence that central processing of spectral changes in the cortex differs for low and high frequencies. Some of these differences may be related to both temporal- and spectral-based coding at the auditory periphery. Central representation of frequency change may be related to the different temporal windows of integration across frequencies.
Affiliation(s)
- Andrew Dimitrijevic
- Department of Neurology, University of California, 150 Med Surge 1, Irvine, CA 92697, USA.
|
27
|
|
28
|
Abstract
To investigate the temporal aspect of timbre processing, we recorded auditory-evoked neuromagnetic responses to periodic complex sounds, which were matched in all acoustic parameters except for two fundamental frequencies (F0s) and 12 spectral envelopes of vocal and nonvocal categories. Only for nonvocal sounds, a significant difference in N1m latency for F0 was detected in both hemispheres. A significant difference among stimuli was detected in both hemispheres for vocal and linear sounds, whereas only in the right hemisphere for instrumental sounds. Moreover, the results of paired comparison among F0s revealed that not only the vocal sounds but also some of the nonvocal sounds were F0-independent. This latency independence may be attributed to the relatively high power in the higher frequency spectrum.
|
29
|
Ritter S, Dosch HG, Specht HJ, Schneider P, Rupp A. Latency effect of the pitch response due to variations of frequency and spectral envelope. Clin Neurophysiol 2007; 118:2276-81. [PMID: 17709289 DOI: 10.1016/j.clinph.2007.06.017] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2006] [Revised: 06/15/2007] [Accepted: 06/25/2007] [Indexed: 10/22/2022]
Abstract
OBJECTIVE A clear definition of pitch and timbre is still an open debate, and the two terms are often mixed up in investigations of tone height. However, fundamental frequency (f(0)) and the spectral envelope of a sound play a major role in the perception of tone height. Recent electrophysiological experiments showed that one subcomponent of the complex N100 signal is highly correlated with perceived tone height. METHODS Tone height was independently varied by changing both f(0) and the spectral envelope in order to disentangle the influence of the two parameters. Relative tone height was determined psychoacoustically. Neuromagnetic responses were evaluated using source analysis. RESULTS Perceived tone height increases with increasing f(0) or spectral envelope. The latency of the pitch change response (PCR) reacts oppositely for the two modes of tone height change. For increasing f(0) with a fixed bandpass condition, tone height increases and the latency of the PCR decreases. In contrast, for an increasing center frequency of the bandpass with fixed f(0), tone height increases but the latency of the PCR increases. CONCLUSIONS The neuromagnetic pitch response is influenced by both f(0) and the spectral envelope. SIGNIFICANCE Further investigations of the influence of pitch and timbre on neurophysiological pitch responses have to take into account that both f(0) and the spectral envelope affect tone height and the latency of the PCR.
Affiliation(s)
- Steffen Ritter
- Section of Biomagnetism, Department of Neurology, University of Heidelberg, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany.
|
30
|
Micheyl C, Carlyon RP, Gutschalk A, Melcher JR, Oxenham AJ, Rauschecker JP, Tian B, Courtenay Wilson E. The role of auditory cortex in the formation of auditory streams. Hear Res 2007; 229:116-31. [PMID: 17307315 PMCID: PMC2040076 DOI: 10.1016/j.heares.2007.01.007] [Citation(s) in RCA: 131] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/15/2006] [Revised: 12/04/2006] [Accepted: 01/03/2007] [Indexed: 11/22/2022]
Abstract
Auditory streaming refers to the perceptual parsing of acoustic sequences into "streams", which makes it possible for a listener to follow the sounds from a given source amidst other sounds. Streaming is currently regarded as an important function of the auditory system in both humans and animals, crucial for survival in environments that typically contain multiple sound sources. This article reviews recent findings concerning the possible neural mechanisms behind this perceptual phenomenon at the level of the auditory cortex. The first part is devoted to intra-cortical recordings, which provide insight into the neural "micromechanisms" of auditory streaming in the primary auditory cortex (A1). In the second part, recent results obtained using functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) in humans, which suggest a contribution from cortical areas other than A1, are presented. Overall, the findings concur to demonstrate that many important features of sequential streaming can be explained relatively simply based on neural responses in the auditory cortex.
|
31
|
Jones SJ. Cortical processing of quasi-periodic versus random noise sounds. Hear Res 2006; 221:65-72. [PMID: 16963209 DOI: 10.1016/j.heares.2006.06.019] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/16/2005] [Revised: 06/21/2006] [Accepted: 06/30/2006] [Indexed: 11/27/2022]
Abstract
The first objective was to confirm, using auditory evoked potentials (AEPs), the findings of magnetoencephalographic studies that quasi-periodic iterated rippled noise (IRN) elicits a population response in the human auditory cortex which is topographically distinct from that elicited by random noise with a similar overall frequency spectrum. AEPs were recorded at the onset of random noise from silence, at the transition from random noise to IRN with a period of 5 ms, and in the two complementary conditions, IRN onset from silence and the transition from IRN to random noise. An N1/P2 complex was recorded to all four stimuli, with the response to the transition to IRN being significantly the most anteriorly distributed on the scalp. The second objective was to determine whether the response to the transition to IRN was due to detection of its quasi-periodicity rather than its spectral "ripples". Virtually no effect was found of applying a 2 kHz low- or high-pass filter, above which it is unlikely that the spectral ripples at intervals of 200 Hz would have been resolved on the cochlear partition. It is concluded that a substantial neuronal population in the auditory cortex is influenced by temporal regularity in sounds, and that this population is equally responsive to spectral frequencies below and above 2 kHz.
Affiliation(s)
- S J Jones
- Department of Clinical Neurophysiology, The National Hospital for Neurology and Neurosurgery, Queen Square, London WC1N 3BG, UK.
|
32
|
Meyer M, Baumann S, Jancke L. Electrical brain imaging reveals spatio-temporal dynamics of timbre perception in humans. Neuroimage 2006; 32:1510-23. [PMID: 16798014 DOI: 10.1016/j.neuroimage.2006.04.193] [Citation(s) in RCA: 56] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2005] [Revised: 03/29/2006] [Accepted: 04/10/2006] [Indexed: 11/27/2022] Open
Abstract
Timbre is a major attribute of sound perception and a key feature for the identification of sound quality. Here, we present event-related brain potentials (ERPs) obtained from sixteen healthy individuals while they discriminated complex instrumental tones (piano, trumpet, and violin) or simple sine wave tones that lack the principal features of timbre. Data analysis yielded enhanced N1 and P2 responses to instrumental tones relative to sine wave tones. Furthermore, we applied an electrical brain imaging approach using low-resolution electromagnetic tomography (LORETA) to estimate the neural sources of N1/P2 responses. Separate significance tests of instrumental vs. sine wave tones for N1 and P2 revealed distinct regions as principally governing timbre perception. In an initial stage (N1), timbre perception recruits left and right (peri-)auditory fields with an activity maximum over the right posterior Sylvian fissure (SF) and the posterior cingulate (PCC) territory. In the subsequent stage (P2), we uncovered enhanced activity in the vicinity of the entire cingulate gyrus. The involvement of extra-auditory areas in timbre perception may imply the presence of a highly associative processing level which might be generally related to musical sensations and integrates widespread medial areas of the human cortex. In summary, our results demonstrate spatio-temporally distinct stages in timbre perception which not only involve bilateral parts of the peri-auditory cortex but also medially situated regions of the human brain associated with emotional and auditory imagery functions.
Affiliation(s)
- Martin Meyer
- Department of Neuropsychology, University of Zurich, Treichlerstrasse 10, CH-8032 Zurich, Switzerland.
|
33
|
Snyder JS, Alain C, Picton TW. Effects of attention on neuroelectric correlates of auditory stream segregation. J Cogn Neurosci 2006; 18:1-13. [PMID: 16417678 DOI: 10.1162/089892906775250021] [Citation(s) in RCA: 202] [Impact Index Per Article: 11.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
A general assumption underlying auditory scene analysis is that the initial grouping of acoustic elements is independent of attention. The effects of attention on auditory stream segregation were investigated by recording event-related potentials (ERPs) while participants either attended to sound stimuli and indicated whether they heard one or two streams, or watched a muted movie. The stimuli were pure-tone ABA- patterns that repeated for 10.8 sec with a stimulus onset asynchrony between A and B tones of 100 msec, in which the A tone was fixed at 500 Hz, the B tone could be 500, 625, 750, or 1000 Hz, and "-" was a silence. In both listening conditions, an enhancement of the auditory-evoked response (P1-N1-P2 and N1c) to the B tone varied with Δf and correlated with the perception of streaming. The ERP from 150 to 250 msec after the beginning of the repeating ABA- patterns became more positive during the course of the trial and was diminished when participants ignored the tones, consistent with behavioral studies indicating that streaming takes several seconds to build up. The N1c enhancement and the buildup over time were larger at right than at left temporal electrodes, suggesting a right-hemisphere dominance for stream segregation. Sources in Heschl's gyrus accounted for the ERP modulations related to Δf-based segregation and buildup. These findings provide evidence for two cortical mechanisms of streaming: automatic segregation of sounds, and an attention-dependent buildup process that integrates successive tones within streams over several seconds.
Affiliation(s)
- Joel S Snyder
- Baycrest Centre for Geriatric Care, University of Toronto, Canada.
|
34
|
Micheyl C, Tian B, Carlyon RP, Rauschecker JP. Perceptual organization of tone sequences in the auditory cortex of awake macaques. Neuron 2006; 48:139-48. [PMID: 16202714 DOI: 10.1016/j.neuron.2005.08.039] [Citation(s) in RCA: 189] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2005] [Revised: 07/15/2005] [Accepted: 08/24/2005] [Indexed: 11/25/2022]
Abstract
Acoustic sequences such as speech and music are generally perceived as coherent auditory "streams," which can be individually attended to and followed over time. Although the psychophysical stimulus parameters governing this "auditory streaming" are well established, the brain mechanisms underlying the formation of auditory streams remain largely unknown. In particular, an essential feature of the phenomenon, which corresponds to the fact that the segregation of sounds into streams typically takes several seconds to build up, remains unexplained. Here, we show that this and other major features of auditory-stream formation measured in humans using alternating-tone sequences can be quantitatively accounted for based on single-unit responses recorded in the primary auditory cortex (A1) of awake rhesus monkeys listening to the same sound sequences.
Affiliation(s)
- Christophe Micheyl
- Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA.
|
35
|
Jones SJ. Two ways of hearing--dissociation between spectral and temporal processes in the auditory cortex. SUPPLEMENTS TO CLINICAL NEUROPHYSIOLOGY 2006; 59:89-95. [PMID: 16893098 DOI: 10.1016/s1567-424x(09)70017-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Affiliation(s)
- S J Jones
- Department of Clinical Neurophysiology, The National Hospital for Neurology and Neurosurgery, Queen Square, London, UK.
|
36
|
Laufer I, Pratt H. The ‘F-complex’ and MMN tap different aspects of deviance. Clin Neurophysiol 2005; 116:336-52. [PMID: 15661112 DOI: 10.1016/j.clinph.2004.08.007] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/19/2004] [Indexed: 10/26/2022]
Abstract
OBJECTIVE To compare the 'F(fusion)-complex' with the mismatch negativity (MMN), both components associated with automatic detection of changes in the acoustic stimulus flow. METHODS Ten right-handed adult native Hebrew speakers discriminated vowel-consonant-vowel (V-C-V) sequences /ada/ (deviant) and /aga/ (standard) in an active auditory 'oddball' task, and the brain potentials associated with performance of the task were recorded from 21 electrodes. Stimuli were generated by fusing the acoustic elements of the V-C-V sequences as follows: the base was always presented in front of the subject, and formant transitions were presented to the front, left, or right in a virtual-reality room. An illusion of a lateralized echo (duplex sensation) accompanied fusion of the base with the lateralized formant locations. Source current density estimates were derived for the net response to the fusion of the speech elements (F-complex) and for the MMN, using low-resolution electromagnetic tomography (LORETA). Statistical non-parametric mapping was used to estimate the current density differences between the brain sources of the F-complex and the MMN. RESULTS Occipito-parietal and prefrontal regions were associated with the F-complex in all formant locations, whereas the vicinity of the supratemporal plane was bilaterally associated with the MMN, but only in the case of front fusion (no duplex effect). CONCLUSIONS The MMN is sensitive to the novelty of the auditory object in relation to other stimuli in a sequence, whereas the F-complex is sensitive to the acoustic features of the auditory object and reflects a process of matching them with target categories. SIGNIFICANCE The F-complex and MMN reflect different aspects of auditory processing in a stimulus-rich and changing environment: content analysis of the stimulus and novelty detection, respectively.
Affiliation(s)
- Ilan Laufer
- Evoked Potentials Laboratory, Technion-Israel Institute of Technology, Gutwirth Building, 3200 Haifa, Israel
37.
Hertrich I, Mathiak K, Lutzenberger W, Ackermann H. Time course and hemispheric lateralization effects of complex pitch processing: evoked magnetic fields in response to rippled noise stimuli. Neuropsychologia 2004; 42:1814-26. [PMID: 15351630] [DOI: 10.1016/j.neuropsychologia.2004.04.022]
Abstract
To delineate the time course and processing stages of pitch encoding at the level of the supratemporal plane, the present study recorded evoked magnetic fields in response to rippled noise (RN) stimuli. RN largely masks simple tonotopic representations and addresses pitch processing within the temporal domain (periodicity encoding). Four dichotic stimulus types (111 or 133 Hz RN at one ear, white noise at the other) were applied in randomized order during either visual distraction or selective auditory attention. Strictly periodic signals, noise-like events, and mixtures of both signals served as control conditions. (1) Attention-dependent ear x hemisphere interactions were observed within the time domain of the M50 field, indicating early streaming of auditory information. (2) M100 responses to strictly periodic stimuli were found lateralized to the right hemisphere. Furthermore, the higher-pitched stimuli yielded enhanced activation as compared to the lower-pitched signals (pitch scaling), conceivably reflecting sensory memory operations. (3) Besides right-hemisphere pitch scaling, the relatively late M100 component in association with the RN condition (latency = 136 ms) showed significantly stronger field strengths over the left hemisphere. Control experiments revealed this lateralization effect to be related to noise rather than pitch processing. Furthermore, subtle noise variations interacted with signal periodicity. Thus, complex task demands such as RN encoding evidently give rise to functional segregation of auditory processing across the two hemispheres (left hemisphere: noise representation; right hemisphere: periodicity representation). The observed noise/periodicity interactions, furthermore, might reflect pitch-synchronous spectral evaluation at the level of the left supratemporal plane, triggered by right-hemisphere representation of signal periodicity.
Affiliation(s)
- Ingo Hertrich
- Department of Neurology, University of Tübingen, Hoppe-Seyler-Str. 3, D-72076 Tübingen, Germany.
38.
Abstract
Objects are the building blocks of experience, but what do we mean by an object? Increasingly, neuroscientists refer to 'auditory objects', yet it is not clear what properties these should possess, how they might be represented in the brain, or how they might relate to the more familiar objects of vision. The concept of an auditory object challenges our understanding of object perception. Here, we offer a critical perspective on the concept and its basis in the brain.
Affiliation(s)
- Timothy D Griffiths
- Auditory Group, University of Newcastle Medical School, Newcastle-upon-Tyne NE2 4HH, UK.
39.
Halpern AR, Zatorre RJ, Bouffard M, Johnson JA. Behavioral and neural correlates of perceived and imagined musical timbre. Neuropsychologia 2004; 42:1281-92. [PMID: 15178179] [DOI: 10.1016/j.neuropsychologia.2003.12.017]
Abstract
The generality of findings implicating secondary auditory areas in auditory imagery was tested by using a timbre imagery task with fMRI. Another aim was to test whether activity in supplementary motor area (SMA) seen in prior studies might have been related to subvocalization. Participants with moderate musical background were scanned while making similarity judgments about the timbre of heard or imagined musical instrument sounds. The critical control condition was a visual imagery task. The pattern of judgments in perceived and imagined conditions was similar, suggesting that perception and imagery access similar cognitive representations of timbre. As expected, judgments of heard timbres, relative to the visual imagery control, activated primary and secondary auditory areas with some right-sided asymmetry. Timbre imagery also activated secondary auditory areas relative to the visual imagery control, although less strongly, in accord with previous data. Significant overlap was observed in these regions between perceptual and imagery conditions. Because the visual control task resulted in deactivation of auditory areas relative to a silent baseline, we interpret the timbre imagery effect as a reversal of that deactivation. Despite the lack of an obvious subvocalization component to timbre imagery, some activity in SMA was observed, suggesting that SMA may have a more general role in imagery beyond any motor component.
Affiliation(s)
- Andrea R Halpern
- Psychology Department, Bucknell University, Lewisburg, PA 17837, USA.
40.
Gutschalk A, Patterson RD, Scherg M, Uppenkamp S, Rupp A. Temporal dynamics of pitch in human auditory cortex. Neuroimage 2004; 22:755-66. [PMID: 15193604] [DOI: 10.1016/j.neuroimage.2004.01.025]
Abstract
Recent functional imaging studies have shown that sounds with temporal pitch produce selective activation in anterolateral Heschl's gyrus. This paper reports a magnetoencephalographic (MEG) study of the temporal dynamics of this activation. The cortical response specific to pitch was isolated from the intensity-related response in Planum temporale using a 'continuous stimulation' paradigm in which regular and irregular click trains alternate without interruption. The mean interclick interval (ICI) was 6, 12, 24, or 48 ms; the train length was 720 ms. The auditory sustained field serves as a level-dependent baseline that enhances the signal-to-noise ratio over previous techniques. The onset of pitch was accompanied by a prominent transient field, followed by a strong sustained field, both of which were associated with sources in lateral Heschl's gyrus. The sustained field rose from baseline about 70 ms after the onset of temporal regularity, asymptoted at about 450 ms, and commenced its return to baseline about 70 ms after pitch offset. The peak of the transient field occurred between 130 and 190 ms after regularity onset depending on the ICI. The latencies of the cortical pitch response are substantially longer than might be anticipated from temporal models of pitch perception. This finding suggests that the temporal integration associated with periodicity processing occurs in a subcortical structure, and that the cortical responses reflect subsequent processes involving the measurement of pitch values and changes in pitch.
Affiliation(s)
- Alexander Gutschalk
- Department of Neurology, University of Heidelberg, Heidelberg 69120, Germany.
41.
Pratt H, Mittelman N, Bleich N, Laufer I. Auditory middle-latency components to fusion of speech elements forming an auditory object. Clin Neurophysiol 2004; 115:1083-9. [PMID: 15066534] [DOI: 10.1016/j.clinph.2003.12.004]
Abstract
OBJECTIVE The purpose of this study was to define early brain activity associated with fusion of speech elements to form an auditory object in the middle-latency range preceding the F-complex. METHODS Stimuli were a binaural formant transition and base, presented separately or fused to form the vowel-consonant-vowel sequence /ada/. Eleven right-handed, adult, native Hebrew speakers listened to 2/s presentations, and the brain potentials from Cz during the 250 msec following transition onset (in the responses to the transition and to the fused word) or following the time it would have been presented (in the response to the base alone) were recorded. The net-fusion response was extracted by subtracting the sum of the potentials to the base and the formant transition from the potentials to the fused sound. RESULTS Auditory middle-latency components comprising 9 peaks and troughs were recorded in response to the base, to the formant transition and to the fused /ada/. In general, the responses to the fused object were significantly smaller in peak amplitude and in total activity (area under the curve), resulting in a difference waveform (the net-fusion response) that also included 9 peaks, but with opposite polarities. CONCLUSIONS The early middle-latency components to fusion indicate that the fusion of speech elements into a word involves inhibition, occlusion or both. The results are in line with the uniqueness of speech perception and the early role of the auditory cortex in speech analysis.
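The net-fusion extraction described in this abstract is simple waveform arithmetic. A minimal sketch with synthetic epochs is shown below; the sampling rate, the 10 Hz component, and all amplitudes are illustrative assumptions, not values from the study:

```python
import numpy as np

# Synthetic 250 ms epochs at an assumed 1 kHz sampling rate.
fs = 1000
t = np.arange(250) / fs

base = 1.0 * np.sin(2 * np.pi * 10 * t)        # response to base alone
transition = 0.8 * np.sin(2 * np.pi * 10 * t)  # response to transition alone
fused = 1.2 * np.sin(2 * np.pi * 10 * t)       # response to fused /ada/

# Net-fusion response: fused response minus the sum of the element responses.
linear_sum = base + transition
net_fusion = fused - linear_sum

# The reported finding (fused responses smaller than the element sum) shows
# up here as a net-fusion waveform of opposite polarity to the elements.
print(np.ptp(fused) < np.ptp(linear_sum))
```

With these synthetic amplitudes the fused response is smaller than the element sum, so the subtraction yields a mirror-polarity difference waveform, matching the sign convention used in the abstract.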
Affiliation(s)
- Hillel Pratt
- Evoked Potentials Laboratory, Behavioral Biology, Gutwirth Building, Technion-Israel Institute of Technology, Haifa 32000, Israel.
42.
Abstract
OBJECTIVE The purpose of this study was to examine the processing of the auditory movement sensation accompanying duplex perception in binaural hearing. METHODS Stimuli were formant transitions (presented to the front, left or right of the subject) and a base (presented to the front), which fused to yield the vowel-consonant-vowel (V-C-V) sequences /aga/ and /ada/. An illusion of auditory movement (duplex sensation) accompanied the fusion of these V-C-V sequences when the spatial locations of the formant transitions and base were different. Ten right-handed, adult, native Hebrew speakers discriminated each fused stimulus, and the brain potentials associated with performance of the task were recorded from 21 electrodes. The processing of auditory movement was studied with a factorial design (ANOVA) and statistical non-parametric mapping (SnPM) of low resolution electromagnetic tomography (LORETA) images of the net-fusion response. Brain regions implicated in auditory movement processing were expected to be associated with the lateralized formant location, which gave rise to duplex perception. In addition, the time-course of significant activation in brain areas that differentiated between fusion conditions was determined. RESULTS The posterior parietal, anterior cingulate and premotor cortices were found to be implicated in duplex processing. Auditory cortex involvement was also evident and, together with the latter two brain regions, was affected by a right-ear advantage. CONCLUSIONS Duplex perception resulting from the fusion of spatially separate sounds forming an auditory object activates a network of brain regions, reflecting enhanced allocation of attention and the effect of language processing.
Affiliation(s)
- Ilan Laufer
- Evoked Potentials Laboratory, Technion--Israel Institute of Technology, Gutwirth Building, 32000 Haifa, Israel
43.
Jones SJ. Sensitivity of human auditory evoked potentials to the harmonicity of complex tones: evidence for dissociated cortical processes of spectral and periodicity analysis. Exp Brain Res 2003; 150:506-14. [PMID: 12700880] [DOI: 10.1007/s00221-003-1482-9]
Abstract
A strong subjective tendency exists for simultaneous sound frequencies forming an harmonic series (integer multiples of the fundamental) to "group" together into a unified auditory percept whose pitch is similar to that of the fundamental. The aim of the study was to determine whether cortical auditory evoked potentials (AEPs) to complex tones differ according to whether the component frequencies of the stimuli are harmonically related or not. AEPs were recorded to continuous complex tones comprising four or more sinusoids. The vertex-maximal "change-potentials" (CP1, CN1, CP2), recorded to a stimulus cycle comprising one harmonic and five inharmonic complexes changing every second, showed no sensitivity to harmonicity, although an additional mismatch negativity was possibly present to the harmonic complex. In a second study the CP2 was significantly attenuated when an harmonic complex changed to a new one in the presence of an unchanging sinusoidal background tone, harmonically related to the first complex but not to the second, and thus becoming perceptually distinct. This, however, might be caused by lateral inhibitory effects not related to harmonicity. In a third experiment, when four concurrent sinusoidal tones came to rest on steady frequencies after a 5-s period of 16/s pseudo-random frequency changes, fronto-centrally maximal "mismatch-potentials" (MN1, MP2) were recorded. Both the MN1 and the MP2 were significantly shorter in latency when the steady frequencies formed an harmonic complex. Since the harmonic complex had a short overall periodicity, equal to that of the fundamental, while that of the inharmonic complex was much longer, the effect might be explained if the latencies of the mismatch-potentials are related to periodicity. The perceptual grouping of harmonically related frequencies appears not to be a function of spectral domain analysis, reflected in the change-potentials, but of periodicity analysis, reflected in the mismatch-potentials.
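The periodicity argument in the final sentences of this abstract can be sketched numerically: a complex built from integer multiples of a fundamental repeats exactly at the fundamental period, while a mistuned complex does not. All frequencies below are illustrative assumptions, not the study's stimuli:

```python
import numpy as np

fs = 48000                     # assumed sampling rate
t = np.arange(0, 0.1, 1 / fs)  # 100 ms of signal

def complex_tone(freqs):
    """Sum of unit-amplitude sinusoids at the given frequencies (Hz)."""
    return sum(np.sin(2 * np.pi * f * t) for f in freqs)

harmonic = complex_tone([200, 400, 600, 800])    # harmonics of 200 Hz
inharmonic = complex_tone([200, 410, 630, 790])  # mistuned partials

def shift_error(x, lag):
    """RMS difference between the signal and a copy delayed by `lag` samples."""
    return np.sqrt(np.mean((x[lag:] - x[:-lag]) ** 2))

lag = int(0.005 * fs)  # 5 ms, the period of the 200 Hz fundamental
# The harmonic complex repeats at the fundamental period; the inharmonic
# complex has no such short overall periodicity.
print(shift_error(harmonic, lag), shift_error(inharmonic, lag))
```

The first value is essentially zero (the waveform is invariant under a 5 ms shift), while the second is clearly non-zero, which is the sense in which the inharmonic complex's overall periodicity is "much longer" than the fundamental period.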
Affiliation(s)
- S J Jones
- Department of Clinical Neurophysiology, National Hospital for Neurology and Neurosurgery, Queen Square, WC1 N 3BG, London, UK.
44.
Laufer I, Pratt H. The electrophysiological net response ('F-complex') to spatial fusion of speech elements forming an auditory object. Clin Neurophysiol 2003; 114:818-34. [PMID: 12738428] [DOI: 10.1016/s1388-2457(03)00029-4]
Abstract
OBJECTIVE The purpose of this study was to define and analyze the brain activity associated with fusion of speech elements to form an auditory object and to study the effects of presenting the elements at different spatial locations (duplex stimulus). METHODS Stimuli were formant transitions (presented to the front, left or right of the subject) and a base (presented to the front), which fused to yield the V-C-V sequences /aga/ and /ada/. Ten right-handed, adult, native Hebrew speakers discriminated each fused stimulus, and the brain potentials associated with performance of the task were recorded from 21 electrodes. The net-fusion response, the 'F(fusion)-complex', was extracted by subtracting the sum of the potentials to the base and formant transitions from the potentials to the fused sound. Low resolution electromagnetic tomography (LORETA) analysis was performed to assess the timing and brain location of the fusion process. RESULTS The 'F-complex', comprising the difference N1, P2, and N2b components (FN1, FP2, FN2b), could be identified for each of the stimuli and reflected a process indicating inhibition, occlusion or both, with a right ear advantage in fusion. LORETA analyses indicated sequential processing of speech fusion in the temporal lobes, beginning with right prominence in FN1 and FP2 and shifting to a more symmetrical pattern in FN2b. CONCLUSIONS The electrophysiological correlates of speech fusion highlight the uniqueness of speech perception and the brain areas involved in its analysis.
Affiliation(s)
- Ilan Laufer
- Evoked Potentials Laboratory, Gutwirth Building, Technion - Israel Institute of Technology, Haifa 32000, Israel
45.
Tonnquist-Uhlen I, Ponton CW, Eggermont JJ, Kwong B, Don M. Maturation of human central auditory system activity: the T-complex. Clin Neurophysiol 2003; 114:685-701. [PMID: 12686277] [DOI: 10.1016/s1388-2457(03)00005-1]
Abstract
OBJECTIVE The purpose of this study was to evaluate and describe the maturation of a set of auditory evoked potentials (AEPs) known as the T-complex in a large group of children, adolescents, and young adults ranging in age from 5 to 20 years. METHODS The AEPs evoked by brief trains of clicks presented to the left ear were measured at 30 scalp-electrode locations. Analyses focused on age-related latency and amplitude changes in the T-complex recorded at the temporal electrode sites T3 and T5 over the left hemisphere and T4 and T6 over the right hemisphere. The maturation of the T-complex components Na, Ta, and Tb was contrasted with that of the obligatory AEPs P1, N1b, and P2 measured at electrodes C3 and C4. RESULTS T-complex activity was present in the grand average AEPs of all 14 age groups spanning ages 5-20 years. T-complex components recorded at electrodes T3 and T4 differed in both morphology and maturation rate from those recorded at T5 and T6. In contrast to the prolonged maturation of AEP latency measured at electrodes T5 and T6, the T-complex components measured at electrodes T3 and T4 did not show a significant overall change in peak latency as a function of age. Consistent amplitude and latency correlations were found between the obligatory AEP components P1, N1b and P2 recorded at C3 and C4 and the T-complex components measured at T5 and T6, but not T3 and T4. CONCLUSIONS Distinct patterns of AEP maturation were measured at electrode sites commonly used to record the T-complex. At scalp electrodes located over more posterior temporal areas (T5 and T6), the AEPs were characterized by a prolonged pattern of maturation very similar to that measured at the central electrodes C3 and C4. These findings and others reported in this paper provide strong evidence that the AEPs recorded at electrodes T5 and T6 are not T-complex peaks.
In contrast, the AEPs measured at electrodes T3 and T4 over more anterior temporal scalp areas appear largely independent of activity measured at the central electrode locations. The T-complex peaks Ta and Tb measured at these scalp locations mature early, with no overall significant age-related changes in peak latencies. SIGNIFICANCE The T-complex recorded from the temporal electrodes T3 and T4 represents activity of secondary auditory cortex better than, and independently of, the midline potentials. Its robust presence in 5-8 year olds supports its potential usefulness in assessing language impairment.
46.
Jones SJ, Sprague L, Vaz Pato M. Electrophysiological evidence for a defect in the processing of temporal sound patterns in multiple sclerosis. J Neurol Neurosurg Psychiatry 2002; 73:561-7. [PMID: 12397152] [PMCID: PMC1738106] [DOI: 10.1136/jnnp.73.5.561]
Abstract
OBJECTIVES To assess the processing of spectrotemporal sound patterns in multiple sclerosis by using auditory evoked potentials (AEPs) to complex harmonic tones. METHODS 22 patients with definite multiple sclerosis but mild disability and no auditory complaints were compared with 15 normal controls. Short latency AEPs were recorded using standard methods. Long latency AEPs were recorded to synthesised musical instrument tones, at onset every two seconds, at abrupt frequency changes every two seconds, and at the end of a two second period of 16/s frequency changes. The subjects were inattentive but awake, reading irrelevant material. RESULTS Short latency AEPs were abnormal in only 4 of 22 patients, whereas long latency AEPs were abnormal to one or more stimuli in 17 of 22. No significant latency prolongation was seen in the responses to onset and infrequent frequency changes (P1, N1, P2), but the potentials at the end of 16/s frequency modulations, particularly the P2 peaking approximately 200 ms after the next expected change, were significantly delayed. CONCLUSION The delayed responses appear to reflect a mild disorder in the processing of change in temporal sound patterns. The delay may be conceived of as extra time taken to compare the incoming sound with the contents of a temporally ordered sensory memory store (the long auditory store or echoic memory), which generates a response when the next expected frequency change fails to occur. The defect cannot be ascribed to lesions of the afferent pathways and so may be due to disseminated brain lesions, visible or invisible on magnetic resonance imaging.
Affiliation(s)
- S J Jones
- The National Hospital for Neurology and Neurosurgery, London, UK.
47.
48.
Jones SJ. The internal auditory clock: what can evoked potentials reveal about the analysis of temporal sound patterns, and abnormal states of consciousness? Neurophysiol Clin 2002; 32:241-53. [PMID: 12448181] [DOI: 10.1016/s0987-7053(02)00309-x]
Abstract
Whereas in vision a large amount of information may in theory be extracted from instantaneous images, sound exists only in its temporal extent, and most of its information is contained in the pattern of changes over time. The "echoic memory" is a pre-attentive auditory sensory store in which sounds are apparently retained in full temporal detail for a period of a few seconds. From the long-latency auditory evoked potentials to spectro-temporal modulation of complex harmonic tones, at least two automatic sound analysis processes can be identified whose time constants suggest participation of the echoic memory. When a steady tone changes its pitch or timbre, "change-type" CP1, CN1 and CP2 potentials are maximally recorded near the vertex. These potentials appear to reflect a process concerned with the distribution of sound energy across the frequency spectrum. When, on the other hand, changes occur in the temporal pattern of tones (in which individual pitch changes are occurring at a rate sufficiently rapid for the C-potentials to be refractory), a large mismatch negativity (or MN1) and following positivity (MP2) are generated. The amplitude of these potentials is influenced by the degree of regularity of the pattern, larger responses being generated to a "deviant" tone when the pitch and time of occurrence of the "standards" are fully specified by the preceding pattern. At the sudden cessation of changes, on resumption of a steady pitch, a mismatch response is generated whose latency is determined with high precision (in the order of a few milliseconds) by the anticipated time of the next change, which did not in fact occur. The mismatch process, therefore, functions as a spectro-temporal auditory pattern analyser whose consequences are manifested each time the pattern changes.
Since calibration of the passage of time is essential for all conscious and subconscious behaviour, is it possible that some states of unconsciousness may be directly due to disruption of internal "clocks"? Abnormal mismatch potentials may provide a manifestation of a disordered auditory time-sense, sometimes being abolished in comatose patients while the C-potentials and similar responses to the onset of tones are preserved. Both C- and M-potentials were usually found to be preserved, however, in patients who had emerged from coma and were capable of discriminating sounds. Substantially intact responses were also recorded from three patients who were functionally in a "vegetative" state. The C- and M-potentials were once again dissociated in a group of patients with multiple sclerosis, only the mismatch potentials being found to be significantly delayed. This subclinical impairment of a memory-based process responsible for the detection of change in temporal sound patterns may be related to defects in other memory domains such as working memory.
Affiliation(s)
- S J Jones
- Department of Clinical Neurophysiology, National Hospital for Neurology and Neurosurgery, Queen Square, London WC1N 3BG, UK.
49.
Pratt H, Sinai A, Laufer I, Horev N. Time course of auditory cortex activation during speech processing. J Basic Clin Physiol Pharmacol 2002; 13:135-49. [PMID: 16411427] [DOI: 10.1515/jbcpp.2002.13.2.135]
Abstract
The purpose of the studies summarized in this report was to determine the time course of auditory cortex involvement in speech and language processing in the context of auditory object formation. Forty-one subjects took part in the three studies. In all three studies, subjects performed a choice-reaction task that required pressing an appropriate button in response to auditory stimuli (speech/non-speech, good/worse fused phonemes, first/second language words) presented through earphones. Event-related potentials (ERPs) were recorded during performance of the task from 21 scalp electrodes, in addition to peri-ocular electrodes for monitoring eye movements. Current densities within the gray matter of the brain were estimated using the LORETA (low resolution electromagnetic tomography) method. In general, except for some periods, processing phonetic and linguistic information was associated with elevated activity in the left auditory cortex. Peaks in auditory cortex activation corresponded in time to scalp-recorded peaks at the latencies of P1 and up to as late as P3. The adjacent posterior temporal areas showed a similar temporal pattern of activation, but tended to be less lateralized to the left, or even biased toward right hemisphere predominance, depending on the stimulus, particularly in the later time frames. The results indicate that the auditory cortex is engaged in auditory processing from its early stages and for as long as a few hundred msec, even after cessation of the stimulus, defining sounds as distinct auditory objects and differentiating speech from non-speech material, relying on acoustic cues. Hemispheric dominance fluctuates to include activity in the 'non-dominant' hemisphere depending on stimulus type and stage of processing.
Affiliation(s)
- Hillel Pratt
- Evoked Potentials Laboratory, Behavioral Biology, Gutwirth Bldg., Technion-Israel Institute of Technology, Haifa 32000, Israel.
50.
Abstract
OBJECTIVE To examine the hypothesis that auditory evoked potentials (AEPs) to pitch and timbre change of complex harmonic tones reflect a process of spectral envelope analysis. METHODS AEPs were recorded to: (1) continuous tones of 'clarinet' timbre whose pitch abruptly rose or fell by 1 or 7 semitones every 0.5 or 1.5 s; (2) a cycle of 6 pitches changing every 0.5 s; (3) tones of constant pitch whose timbre (spectral envelope shape) changed periodically; (4) pitch change of high- and low-pass filtered 'clarinet' tones. RESULTS The amplitudes of the 'change-N1' (CN1) potential peaking at ca. 90 ms and the following CP2 were influenced to a far greater degree by the time interval between changes, than by the magnitude of the change or by the time interval between occurrences of the same pitch. Amplitudes were also strongly dependent on the number of partials present, irrespective of whether they were increasing or decreasing in energy. The algebraic sum of the responses to pitch change of high- and low-pass filtered tones closely approximated the response to the unfiltered tone. CONCLUSION The rate-sensitivity of the responses cannot be explained by the refractoriness of frequency-specific 'feature detector' neurones, but rather of a process (termed 'C-process') which analyzes amplitude modulations across the spectral envelope, the contribution of different frequency bands combining linearly in the scalp-recorded activity. On-going computation of the spectral envelope shape may be an important factor in maintaining the perceptual constancy of timbre.
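The additivity reported in this abstract (the algebraic sum of the responses to high- and low-pass filtered tones approximating the response to the unfiltered tone) amounts to a linearity check that is easy to state in code. The waveforms below are random synthetic stand-ins, not AEP data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # samples in a hypothetical averaged epoch

low_resp = rng.standard_normal(n)   # stand-in: response to low-pass tone
high_resp = rng.standard_normal(n)  # stand-in: response to high-pass tone

# If the frequency bands combine linearly in the scalp-recorded activity,
# the unfiltered response is the element sum plus residual noise.
unfiltered = low_resp + high_resp + 0.05 * rng.standard_normal(n)

# Correlate the algebraic sum with the unfiltered response.
r = np.corrcoef(low_resp + high_resp, unfiltered)[0, 1]
print(r > 0.95)
```

Under this linear model the correlation is close to 1; a substantially lower value would indicate the kind of band-specific nonlinearity that the abstract's conclusion argues against.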
Affiliation(s)
- S J Jones
- The National Hospital for Neurology and Neurosurgery, Queen Square, WC1N 3BG, London, UK.