76. Heinz MG, Colburn HS, Carney LH. Evaluating auditory performance limits: I. One-parameter discrimination using a computational model for the auditory nerve. Neural Comput 2001;13:2273-316. [PMID: 11570999] [DOI: 10.1162/089976601750541804] [Citation(s) in RCA: 122]
Abstract
A method for calculating psychophysical performance limits based on stochastic neural responses is introduced and compared to previous analytical methods for evaluating auditory discrimination of tone frequency and level. The method uses signal detection theory and a computational model for a population of auditory nerve (AN) fiber responses. The use of computational models allows predictions to be made over a wider parameter range and with more complete descriptions of AN responses than in analytical models. Performance based on AN discharge times (all-information) is compared to performance based only on discharge counts (rate-place). After the method is verified over the range of parameters for which previous analytical models are applicable, the parameter space is then extended. For example, a computational model of AN activity that extends to high frequencies is used to explore the common belief that rate-place information is responsible for frequency encoding at high frequencies due to the rolloff in AN phase locking above 2 kHz. This rolloff is thought to eliminate temporal information at high frequencies. Contrary to this belief, results of this analysis show that rate-place predictions for frequency discrimination are inconsistent with human performance in the dependence on frequency for high frequencies and that there is significant temporal information in the AN up to at least 10 kHz. In fact, the all-information predictions match the functional dependence of human performance on frequency, although optimal performance is much better than human performance. The use of computational AN models in this study provides new constraints on hypotheses of neural encoding of frequency in the auditory system; however, the method is limited to simple tasks with deterministic stimuli. A companion article in this issue ("Evaluating Auditory Performance Limits: II") describes an extension of this approach to more complex tasks that include random variation of one parameter, for example, random-level variation, which is often used in psychophysics to test neural encoding hypotheses.
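For readers who want to see the rate-place idea concretely, the following Python sketch computes the Cramér-Rao bound on frequency discrimination from independent Poisson spike counts in a toy auditory-nerve population, printing the optimal just-noticeable difference at a few probe frequencies. The Gaussian log-frequency tuning, fiber count, rates, and duration are illustrative assumptions, not the authors' computational AN model, and the sketch covers only the rate-place (spike-count) bound, not the all-information (spike-timing) analysis.

import numpy as np

def rate_place_fisher_info(f_probe, cfs, bw_oct=0.25, r_max=150.0, r_spont=1.0, dur=0.5):
    # Fisher information about tone frequency carried by independent Poisson spike
    # counts, one count per fiber, collected over 'dur' seconds (toy model, not the AN model).
    x = (np.log2(f_probe) - np.log2(cfs)) / bw_oct              # distance from CF in tuning widths
    rate = r_spont + r_max * np.exp(-0.5 * x ** 2)              # mean firing rate of each fiber (spikes/s)
    drate = (rate - r_spont) * (-x) / (bw_oct * f_probe * np.log(2))  # d(rate)/d(frequency)
    return np.sum(dur * drate ** 2 / rate)

# Hypothetical population: 3000 fibers with CFs log-spaced from 0.1 to 20 kHz.
cfs = np.logspace(np.log10(100.0), np.log10(20000.0), 3000)
for f in (500.0, 2000.0, 10000.0):
    jnd = 1.0 / np.sqrt(rate_place_fisher_info(f, cfs))         # Cramer-Rao bound at d' = 1
    print(f"{f:7.0f} Hz: optimal rate-place JND ~ {jnd:6.2f} Hz (Weber fraction {jnd / f:.2e})")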
77. [Citation(s) in RCA: 122]
Abstract
Evidence from fMRI, ERPs and intracranial recordings suggests the existence of face-specific mechanisms in the primate occipitotemporal cortex. The present study used a 64-channel MEG system to monitor neural activity while normal subjects viewed a sequence of grayscale photographs of a variety of unfamiliar faces and non-face stimuli. In 14 of 15 subjects, face stimuli evoked a larger response than non-face stimuli at a latency of 160 ms after stimulus onset at bilateral occipitotemporal sensors. Inverted face stimuli elicited responses that were no different in amplitude but 13 ms later in latency than upright faces. The profile of this M170 response across stimulus conditions is largely consistent with prior results using scalp and subdural ERPs.
78. [Citation(s) in RCA: 122]
Abstract
This review discusses hearing performance in primates and selective pressures that may influence it. The hearing sensitivity and sound-localization abilities of primates, as indicated by behavioral tests, are reviewed and compared to hearing and sound localization among mammals in general. Primates fit the mammalian pattern, with small species hearing higher frequencies than larger species in order to use spectral/intensity cues for sound localization. In this broader comparative context, the restricted high-frequency hearing of humans is not unusual. All of the primates tested so far are able to hear frequencies below 125 Hz, placing them among the majority of mammals. Sound-localization acuity has been determined for only three primates, and here also they have relatively good localization acuity (with a minimum audible angle roughly similar to other mammals such as cats, pigs, and opossums). This is in keeping with the pattern among mammals in general, in which species with narrow fields of best vision, such as a fovea, are better localizers than those with broad fields of best vision. Multiple lines of evidence support the view that sound localization is the selective pressure for high-frequency hearing in smaller primates and in other mammals with short interaural distances.
79. Sussman E, Winkler I, Huotilainen M, Ritter W, Näätänen R. Top-down effects can modify the initially stimulus-driven auditory organization. Brain Res Cogn Brain Res 2002;13:393-405. [PMID: 11919003] [DOI: 10.1016/s0926-6410(01)00131-8] [Citation(s) in RCA: 120]
Abstract
We recorded event-related potentials (ERPs) and magnetic fields (ERFs) of the human brain to determine whether top-down control could modulate the initial organization of sound representations in the auditory cortex. We presented identical sound stimulation and manipulated top-down processes by instructing participants to either ignore the sounds (Ignore condition), to detect pitch changes (Attend-pitch condition), or to detect violations of a repeating tone pattern (Attend-pattern condition). The ERP results obtained in the Attend-pattern condition dramatically differed from those obtained with the other two task instructions. The magnetoencephalogram (MEG) findings were fully compatible, showing that the neural populations involved in detecting pattern violations differed from those involved in detecting pitch changes. The results demonstrate a top-down effect on the sound representation maintained in auditory cortex.
80. Vickers DA, Moore BC, Baer T. Effects of low-pass filtering on the intelligibility of speech in quiet for people with and without dead regions at high frequencies. J Acoust Soc Am 2001;110:1164-75. [PMID: 11519583] [DOI: 10.1121/1.1381534] [Citation(s) in RCA: 120]
Abstract
A dead region is a region of the cochlea where there are no functioning inner hair cells (IHCs) and/or neurons; it can be characterized in terms of the characteristic frequencies of the IHCs bordering that region. We examined the effect of high-frequency amplification on speech perception for subjects with high-frequency hearing loss with and without dead regions. The limits of any dead regions were defined by measuring psychophysical tuning curves and were confirmed using the TEN test described in Moore et al. [Br. J. Audiol. 34, 205-224 (2000)]. The speech stimuli were vowel-consonant-vowel (VCV) nonsense syllables, using one of three vowels (/i/, /a/, and /u/) and 21 different consonants. In a baseline condition, subjects were tested using broadband stimuli with a nominal input level of 65 dB SPL. Prior to presentation via Sennheiser HD580 earphones, the stimuli were subjected to the frequency-gain characteristic prescribed by the "Cambridge" formula, which is intended to give speech at 65 dB SPL the same overall loudness as for a normal listener, and to make the average loudness of the speech the same for each critical band over the frequency range important for speech intelligibility (in a listener without a dead region). The stimuli for all other conditions were initially subjected to this same frequency-gain characteristic. Then, the speech was low-pass filtered with various cutoff frequencies. For subjects without dead regions, performance generally improved progressively with increasing cutoff frequency. This indicates that they benefited from high-frequency information. For subjects with dead regions, two patterns of performance were observed. For most subjects, performance improved with increasing cutoff frequency until the cutoff frequency was somewhat above the estimated edge frequency of the dead region, but hardly changed with further increases. For a few subjects, performance initially improved with increasing cutoff frequency and then worsened with further increases, although the worsening was significant only for one subject. The results have important implications for the fitting of hearing aids.
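As a rough illustration of the cutoff-frequency manipulation (not the "Cambridge" frequency-gain prescription itself), the Python sketch below low-pass filters a waveform at several cutoff frequencies using a zero-phase Butterworth design from scipy. The filter order, the specific cutoffs, and the synthetic token standing in for the VCV recordings are all assumptions, not the study's actual processing chain.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass(signal, fs, cutoff_hz, order=8):
    # Zero-phase low-pass filtering at the given cutoff frequency (Butterworth design
    # is an assumption; the abstract does not specify the filters used in the study).
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

fs = 16000                                    # assumed sample rate
t = np.arange(0, 0.5, 1.0 / fs)
# Synthetic multi-component token standing in for a VCV recording (illustrative only).
token = sum(np.sin(2 * np.pi * f * t) for f in (300, 1200, 2500, 4000))
filtered_versions = {fc: lowpass(token, fs, fc) for fc in (1000, 2000, 3000, 5000, 7500)}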
81. Grothe B. Interaction of excitation and inhibition in processing of pure tone and amplitude-modulated stimuli in the medial superior olive of the mustached bat. J Neurophysiol 1994;71:706-21. [PMID: 8176433] [DOI: 10.1152/jn.1994.71.2.706] [Citation(s) in RCA: 119]
Abstract
1. In mammals with good low-frequency hearing, the medial superior olive (MSO) processes interaural time or phase differences that are important cues for sound localization. Its cells receive excitatory projections from both cochlear nuclei and are thought to function as coincidence detectors. The response patterns of MSO neurons in most mammals are predominantly sustained. In contrast, the MSO in the mustached bat is a monaural nucleus containing neurons with phasic discharge patterns. These neurons receive projections from the contralateral anteroventral cochlear nucleus (AVCN) and the ipsilateral medial nucleus of the trapezoid body (MNTB). 2. To further investigate the role of the MSO in the bat, the responses of 252 single units in the MSO to pure tones and sinusoidal amplitude-modulated (SAM) stimuli were recorded. The results confirmed that the MSO in the mustached bat is tonotopically organized, with low frequencies in the dorsal part and high frequencies in the ventral part. The 61-kHz region is overrepresented. Most neurons tested (88%) were monaural and discharged only in response to contralateral stimuli. Their response could not be influenced by stimulation of the ipsilateral ear. 3. Only 11% of all MSO neurons were spontaneously active. In these neurons the spontaneous discharge rate was suppressed during the stimulus presentation. 4. The majority of cells (85%) responded with a phasic discharge pattern. About one-half (51%) responded with a level-independent phasic ON response. Other phasic response patterns included phasic OFF or phasic ON-OFF, depending on the stimulus frequency. Neurons with ON-OFF discharge patterns were most common in the 61-kHz region and absent in the high-frequency region. 5. Double tone experiments showed that at short intertone intervals the ON response to the second stimulus or the OFF response to the first stimulus was inhibited. 6. In neuropharmacological experiments, glycine applied to MSO neurons (n = 71) inhibited any tone-evoked response. In the presence of the glycine antagonist strychnine the response patterns changed from phasic to sustained (n = 35) and the neurons responded to both tones presented in double tone experiments independent of the intertone interval (n = 5). The effects of strychnine were reversible. 7. Twenty of 21 neurons tested with sinusoidally amplitude-modulated (SAM) signals exhibited low-pass or band-pass filter characteristics. Tests with SAM signals also revealed a weak temporal summation of inhibition in 13 of the 21 cells tested.(ABSTRACT TRUNCATED AT 400 WORDS)
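A sinusoidally amplitude-modulated (SAM) stimulus of the kind used here is simply a carrier multiplied by a raised sinusoid. The Python sketch below generates one; the carrier near 61 kHz, the modulation rate, depth, duration, and sample rate are placeholders chosen to match the frequency range described in the abstract, not the actual stimulus parameters of the study.

import numpy as np

def sam_tone(carrier_hz, mod_hz, mod_depth=1.0, dur=0.5, fs=192000):
    # Carrier multiplied by a raised sinusoid: s(t) = [1 + m*sin(2*pi*fm*t)] * sin(2*pi*fc*t)
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + mod_depth * np.sin(2 * np.pi * mod_hz * t)
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

# e.g. a 61-kHz carrier (the over-represented CF region) modulated at 100 Hz
stimulus = sam_tone(carrier_hz=61000.0, mod_hz=100.0)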
82. Oades RD, Dittmann-Balcar A, Schepker R, Eggers C, Zerbin D. Auditory event-related potentials (ERPs) and mismatch negativity (MMN) in healthy children and those with attention-deficit or Tourette/tic symptoms. Biol Psychol 1996;43:163-85. [PMID: 8805970] [DOI: 10.1016/0301-0511(96)05189-7] [Citation(s) in RCA: 118]
Abstract
The study compares 5 auditory event-related potential (ERP) components (P1 to P3) after 3 tones differing in pitch and rarity, and contrasts the mismatch negativity (MMN) between them in 12 children with attention-deficit hyperactivity disorder (ADHD; mean 10.2 years of age), 12 healthy controls pairwise matched for age (controls), and 10 with Chronic Tic or Tourette Syndrome (TS). Topographic recordings were derived from 19 scalp electrodes. Four major effects are reported. (a) Shorter latencies in ADHD patients were evident as early as 100 ms. (b) Both ADHD and TS groups showed very large P2 components where the maxima were shifted anteriorly. The differences in the later potentials were of a topographical nature. (c) Frontal MMN was non-significantly larger in the ADHD group but normalized data showed a left rather than a right frontal bias as in control subjects. Maxima for TS were usually posterior. (d) ADHD patients did not show the usual right-biased P3 asymmetry nor the frontal versus parietal P3 latency difference. From these results it is suggested that ADHD patients process perceptual information faster from an early stage (N1). Further, along with the TS group, ADHD patients showed an unusually marked inhibitory phase in processing (P2), interpreted as a reduction of the normal controls on further processing. Later indices of stimulus processing (N2-P3) showed a frontal impairment in TS and a right hemisphere impairment in ADHD patients. These are interpreted in terms of the difficulties in sustaining attention experienced by both ADHD and TS patients.
83. Benasich AA, Choudhury N, Friedman JT, Realpe-Bonilla T, Chojnowska C, Gou Z. The infant as a prelinguistic model for language learning impairments: predicting from event-related potentials to behavior. Neuropsychologia 2005;44:396-411. [PMID: 16054661] [DOI: 10.1016/j.neuropsychologia.2005.06.004] [Citation(s) in RCA: 117]
Abstract
Associations between efficient processing of brief, rapidly presented, successive stimuli and language learning impairments (LLI) in older children and adults have been well documented. In this paper we examine the role that impaired rapid auditory processing (RAP) might play during early language acquisition. Using behavioral measures we have demonstrated that RAP abilities in infancy are critically linked to later language abilities for both non-speech and speech stimuli. Variance in infant RAP thresholds reliably predicts language outcome at 3 years of age for infants at risk for LLI and control infants. We present data here describing patterns of electrocortical (EEG/ERP) activation at 6 months of age to the same non-verbal stimuli used in our behavioral studies. Well-defined differences were seen between infants from families with a history of LLI (FH+) and FH- controls in the amplitude of the mismatch response (MMR), as well as in the latency of the N250 component, in the 70 ms ISI condition only. Smaller mismatch responses and delayed onsets of the N250 component were seen in the FH+ group. The latency differences in the N250 component, but not the MMR amplitude variation, were significantly related to 24-month language outcome. Such converging tasks provide the opportunity to examine early precursors of LLI and allow for earlier identification and intervention.
84. [Citation(s) in RCA: 116]
Abstract
We investigated the emergence of discriminative responses to pitch by recording 2-, 3-, and 4-month-old infants' electro-encephalogram responses to infrequent pitch changes in piano tones. In all age groups, infants' responses to deviant tones were significantly different from responses to standard tones. However, two types of mismatch responses were observed simultaneously in the difference waves. An increase in the left-lateralized positive slow wave was prominent in 2-month-olds, present in 3-month-olds, but insignificant in 4-month-olds. A faster adultlike mismatch negativity (MMN), lateralized to the right hemisphere, emerged at 2 months of age and became earlier and stronger as age increased. The coexistence and dissociation of two types of mismatch responses suggests different underlying neuromechanisms for the two responses. Furthermore, the earlier emergence of the MMN-like component to changes in pitch compared to other sound features implies that neural circuits involved in generating MMN-like responses have different maturational timetables for different sound features.
85. Kidd GR, Watson CS, Gygi B. Individual differences in auditory abilities. J Acoust Soc Am 2007;122:418-35. [PMID: 17614500] [DOI: 10.1121/1.2743154] [Citation(s) in RCA: 116]
Abstract
Performance on 19 auditory discrimination and identification tasks was measured for 340 listeners with normal hearing. Test stimuli included single tones, sequences of tones, amplitude-modulated and rippled noise, temporal gaps, speech, and environmental sounds. Principal components analysis and structural equation modeling of the data support the existence of a general auditory ability and four specific auditory abilities. The specific abilities are (1) loudness and duration (overall energy) discrimination; (2) sensitivity to temporal envelope variation; (3) identification of highly familiar sounds (speech and nonspeech); and (4) discrimination of unfamiliar simple and complex spectral and temporal patterns. Examination of Scholastic Aptitude Test (SAT) scores for a large subset of the population revealed little or no association between general or specific auditory abilities and general intellectual ability. The findings provide a basis for research to further specify the nature of the auditory abilities. Of particular interest are results suggestive of a familiar sound recognition (FSR) ability, apparently specialized for sound recognition on the basis of limited or distorted information. This FSR ability is independent of normal variation in both spectral-temporal acuity and general intellectual ability.
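The general-plus-specific-abilities conclusion rests on principal components analysis (followed by structural equation modeling, which is not shown here). The Python sketch below runs PCA on a simulated listener-by-task score matrix of the same shape as the study's data set; the simulated scores, the single shared factor, and the five-component cut are purely illustrative assumptions.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulated scores: 340 listeners x 19 tasks (a stand-in for the real data set).
# One shared 'general ability' factor plus task-specific noise, for illustration only.
general = rng.normal(size=(340, 1))
scores = general @ rng.uniform(0.3, 0.8, size=(1, 19)) + rng.normal(scale=0.7, size=(340, 19))

# Standardize each task, then extract components: the first plays the role of a
# general auditory ability, the later ones the candidate specific abilities.
z = StandardScaler().fit_transform(scores)
pca = PCA(n_components=5).fit(z)
print("variance explained:", np.round(pca.explained_variance_ratio_, 3))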
86. [Citation(s) in RCA: 116]
Abstract
A study was performed to examine the utility of an ERP-based irrelevant probe technique for the assessment of variations in mental workload. Ten highly trained Navy radar operators performed a simulated radar-monitoring task which varied in the density and type of targets to be detected and identified. This task was performed in the presence of a series of irrelevant auditory probes which the radar operators were instructed to ignore. Prior to performing the radar-monitoring task the subjects performed a block of auditory detection trials in which they were asked to respond to the occurrence of one of two low probability tones and ignore the other low probability tone along with a higher probability standard tone. ERPs were recorded in response to the tones in the baseline condition and in the low- and high-workload radar-monitoring conditions. The amplitude of the N100, N200, and early and late mismatch negativity (MMN) components decreased from the baseline to the low load radar-monitoring task and again with an increase in the difficulty of the radar-monitoring task. P300 amplitude was sensitive only to the introduction of the radar-monitoring task. These results are interpreted with respect to the phenomenon of attentional capture and suggest that the ERP-based irrelevant-probe technique might prove an effective method for the nonintrusive evaluation of increases in mental workload in complex tasks.
87. [Citation(s) in RCA: 115]
Abstract
The ability to discriminate between acoustic signals of different frequencies is fundamental to the interpretation of auditory information and the development of language perception and production. The fact that the human fetus responds to sounds of different frequencies raises the question of whether the fetus is able to discriminate between them. To investigate this, the present study used a habituation paradigm to examine whether the fetus could discriminate between two pure tone acoustic stimuli, 250 Hz and 500 Hz, or two speech sounds, [ba] and [bi], at 27 and 35 weeks of gestational age. The results indicated that the fetus is capable of discriminating between the different sounds (250 Hz vs. 500 Hz, and [ba] vs. [bi]) at 35 weeks of gestational age but less able to do so at 27 weeks of gestational age. The implications of this for the development of the auditory system are discussed.
88. Tervaniemi M, Saarinen J, Paavilainen P, Danilova N, Näätänen R. Temporal integration of auditory information in sensory memory as reflected by the mismatch negativity. Biol Psychol 1994;38:157-67. [PMID: 7873700] [DOI: 10.1016/0301-0511(94)90036-1] [Citation(s) in RCA: 115]
Abstract
Event-related potentials (ERPs) to different types of infrequent change in a tone pair composed of two closely spaced tones of different frequencies were recorded. The mismatch negativity (MMN), a change-specific component of the ERP, was elicited by reversing the order of the two tones, by repeating the first tone, by replacing the first tone with the second tone, or by omitting the second tone. The omission of the second tone, however, elicited the MMN only when the interval between the two tones was very short (offset to onset 40 or 140 ms) but not when this interval was somewhat longer (240 or 340 ms). The pattern of the present results suggests that sensory-memory traces, as reflected by the MMN, integrate information about two closely spaced stimuli into a unitary sensory event.
89. Baharloo S, Service SK, Risch N, Gitschier J, Freimer NB. Familial aggregation of absolute pitch. Am J Hum Genet 2000;67:755-8. [PMID: 10924408] [DOI: 10.1086/303057] [Citation(s) in RCA: 114]
Abstract
Absolute pitch (AP) is a behavioral trait that is defined as the ability to identify the pitch of tones in the absence of a reference pitch. AP is an ideal phenotype for investigation of gene and environment interactions in the development of complex human behaviors. Individuals who score exceptionally well on formalized auditory tests of pitch perception are designated as "AP-1." As described in this report, auditory testing of siblings of AP-1 probands and of a control sample indicates that AP-1 aggregates in families. The implications of this finding for the mapping of loci for AP-1 predisposition are discussed.
90. Wu GK, Li P, Tao HW, Zhang LI. Nonmonotonic synaptic excitation and imbalanced inhibition underlying cortical intensity tuning. Neuron 2007;52:705-15. [PMID: 17114053] [DOI: 10.1016/j.neuron.2006.10.009] [Citation(s) in RCA: 114]
Abstract
Intensity-tuned neurons, characterized by their nonmonotonic response-level function, may play important roles in the encoding of sound intensity-related information. The synaptic mechanisms underlying intensity tuning remain unclear. Here, in vivo whole-cell recordings in rat auditory cortex revealed that intensity-tuned neurons, mostly clustered in a posterior zone, receive imbalanced tone-evoked excitatory and inhibitory synaptic inputs. Excitatory inputs exhibit nonmonotonic intensity tuning, whereas with tone intensity increments, the temporally delayed inhibitory inputs increase monotonically in strength. In addition, this delay reduces with the increase of intensity, resulting in an enhanced suppression of excitation at high intensities and a significant sharpening of intensity tuning. In contrast, non-intensity-tuned neurons exhibit covaried excitatory and inhibitory inputs, and the relative time interval between them is stable with intensity increments, resulting in monotonic response-level function. Thus, cortical intensity tuning is primarily determined by excitatory inputs and shaped by cortical inhibition through a dynamic control of excitatory and inhibitory timing.
91. Poeppel D, Guillemin A, Thompson J, Fritz J, Bavelier D, Braun AR. Auditory lexical decision, categorical perception, and FM direction discrimination differentially engage left and right auditory cortex. Neuropsychologia 2004;42:183-200. [PMID: 14644105] [DOI: 10.1016/j.neuropsychologia.2003.07.010] [Citation(s) in RCA: 114]
Abstract
Recent neuroimaging and neuropsychological data suggest that speech perception is supported bilaterally in auditory areas. We evaluate this issue building on well-known behavioral effects. While undergoing positron emission tomography (PET), subjects performed standard auditory tasks: direction discrimination of frequency-modulated (FM) tones, categorical perception (CP) of consonant-vowel (CV) syllables, and word/non-word judgments (lexical decision, LD). Compared to rest, the three conditions led to bilateral activation of the auditory cortices. However, lateralization patterns differed as a function of stimulus type: the LD task generated stronger responses in the left hemisphere, the FM task in the right. Contrasts between either words or syllables versus FM were associated with significantly greater activity bilaterally in superior temporal gyrus (STG) ventro-lateral to Heschl's gyrus. These activations extended into the superior temporal sulcus (STS) and the middle temporal gyrus (MTG) and were greater in the left. The same areas were more active in the LD than in the CP task. In contrast, the FM task was associated with significantly greater activity in the right lateral-posterior STG and lateral MTG. The findings argue that speech perception is mediated bilaterally in the auditory cortices and that the well-documented lateralization is likely associated with processes subsequent to the auditory analysis of speech.
92. Evans EF. Place and time coding of frequency in the peripheral auditory system: some physiological pros and cons. Audiology 1978;17:369-420. [PMID: 697652] [DOI: 10.3109/00206097809072605] [Citation(s) in RCA: 112]
93. Squires NK, Donchin E, Squires KC, Grossberg S. Bisensory stimulation: inferring decision-related processes from the P300 component. J Exp Psychol Hum Percept Perform 1977;3:299-315. [PMID: 864401] [DOI: 10.1037/0096-1523.3.2.299] [Citation(s) in RCA: 111]
Abstract
Three experiments were conducted to evaluate the P300 component of the human evoked response as an index of bisensory information processing. On different blocks of trials, subjects were presented with auditory stimuli alone, visual stimuli alone, or with audiovisual compounds. In each series there were two possible stimuli, one of which was presented less frequently than the other; the subjects' task was to count the infrequent stimuli. In the first two experiments the information in the two modalities was redundant, whereas in the third the modalities provided nonredundant information. With redundant information, the P300 latency indicated bisensory facilitation when the unimodal P300 latencies were similar; when the unimodal latencies were dissimilar, the bisensory P300 occurred at the latency of the earlier unimodal P300. Reaction times paralleled P300 latency. When the information in the two modalities was nonredundant, both P300 amplitude and reaction-time data indicated interference between the two modalities, regardless of which modality was task relevant. P300 latency and reaction time did not covary in this situation. These data suggest that P300 latency and amplitude do reflect bisensory interactions and that the P300 promises to be a valuable tool for assessing brain processes during complex decision making.
94. Lattner S, Meyer ME, Friederici AD. Voice perception: sex, pitch, and the right hemisphere. Hum Brain Mapp 2005;24:11-20. [PMID: 15593269] [DOI: 10.1002/hbm.20065] [Citation(s) in RCA: 111]
Abstract
The present functional magnetic resonance imaging (fMRI) study examined the neurophysiological processing of voice information. The impact of the major acoustic parameters as well as the role of the listener's and the speaker's gender were investigated. Male and female, natural, and manipulated voices were presented to 16 young adults who were asked to judge the naturalness of each voice. The hemodynamic responses were acquired by a 3T Bruker scanner utilizing an event-related design. The activation was generally stronger in response to female voices as well as to manipulated voice signals, and there was no interaction with the listener's gender. Most importantly, the results suggest a functional segregation of the right superior temporal cortex for the processing of different voice parameters, whereby (1) voice pitch is processed in regions close and anterior to Heschl's Gyrus, (2) voice spectral information is processed in posterior parts of the superior temporal gyrus (STG) and areas surrounding the planum parietale (PP) bilaterally, and (3) information about prototypicality is predominately processed in anterior parts of the right STG. Generally, by identifying distinct functional regions in the right STG, our study supports the notion of a fundamental role of the right hemisphere in spoken language comprehension.
95. Musiek FE, Baran JA, Pinheiro ML. Duration pattern recognition in normal subjects and patients with cerebral and cochlear lesions. Audiology 1990;29:304-13. [PMID: 2275645] [DOI: 10.3109/00206099009072861] [Citation(s) in RCA: 111]
Abstract
Three groups of subjects were tested on a duration pattern recognition task. The groups included normal subjects, subjects with cochlear hearing loss, and subjects with lesions involving but not limited to the auditory areas of the cerebrum. Results indicated no significant difference in pattern recognition between the normal subjects and subjects with cochlear hearing loss. However, the subjects with cerebral lesions performed significantly more poorly than either the normal subjects or those with cochlear hearing loss. In comparing pattern recognition performance for the ears ipsilateral and contralateral to the lesioned hemispheres, no differences were noted. Rather, when a central lesion was present, both ears generally yielded abnormal scores.
96. Hansen JC, Hillyard SA. Selective attention to multidimensional auditory stimuli. J Exp Psychol Hum Percept Perform 1983;9:1-19. [PMID: 6220115] [DOI: 10.1037/0096-1523.9.1.1] [Citation(s) in RCA: 110]
Abstract
Event-related brain potentials (ERPs) elicited by multidimensional auditory stimuli were recorded from the scalp in a selective-attention task. Subjects listened to tone pips varying orthogonally between two levels each of pitch, location, and duration and responded to longer duration target stimuli having specific values of pitch and location. The discriminability of the pitch and location attributes was varied between groups. By examining the ERPs to tones that shared pitch and/or locational cues with the designated target, we inferred interrelationships among the processing of these attributes. In all groups, stimuli that failed to match the target tone in an easily discriminable cue elicited only transitory ERP signs of selective processing. Tones sharing the "easy" but not the "hard" cue with the target elicited ERPs that indicated more extensive processing, but not as extensive as stimuli sharing both cues. In addition, reaction times and ERP latencies to the designated targets were not influenced by variations in the discriminability of pitch and location. This pattern of results is consistent with parallel, self-terminating models and holistic models of processing and contradicts models specifying either serial or exhaustive parallel processing of these dimensions. Both the parallel, self-terminating models and the holistic models provide a generalized mechanism for hierarchical stimulus selections that achieve an economy of processing, an underlying goal of classic, multiple-stage theories of selective attention.
97. Florentine M, Buus S, Scharf B, Zwicker E. Frequency selectivity in normally-hearing and hearing-impaired observers. J Speech Hear Res 1980;23:646-69. [PMID: 7421164] [DOI: 10.1044/jshr.2303.646] [Citation(s) in RCA: 107]
Abstract
This study compares frequency selectivity--as measured by four different methods--in observers with normal hearing and in observers with conductive (nonotosclerotic), otosclerotic, noise-induced, or degenerative hearing losses. Each category of loss was represented by a group of 7 to 10 observers, who were tested at center frequencies of 500 Hz and 4000 Hz. For each group, the following four measurements were made: psychoacoustical tuning curves, narrow-band masking, two-tone masking, and loudness summation. Results showed that (a) frequency selectivity was reduced at frequencies where a cochlear hearing loss was present, (b) frequency selectivity was reduced regardless of the test level at which normally-hearing observers and observers with cochlear impairment were compared, (c) all four measures of frequency selectivity were significantly correlated and (d) reduced frequency selectivity was positively correlated with the amount of cochlear hearing loss.
98. Sidtis JJ. On the nature of the cortical function underlying right hemisphere auditory perception. Neuropsychologia 1980;18:321-30. [PMID: 7413065] [DOI: 10.1016/0028-3932(80)90127-x] [Citation(s) in RCA: 107]
99. Jäncke L, Gaab N, Wüstenberg T, Scheich H, Heinze HJ. Short-term functional plasticity in the human auditory cortex: an fMRI study. Brain Res Cogn Brain Res 2001;12:479-85. [PMID: 11689309] [DOI: 10.1016/s0926-6410(01)00092-1] [Citation(s) in RCA: 105]
Abstract
Applying functional magnetic resonance imaging (fMRI) techniques, hemodynamic responses elicited by sequences of standard pure tones of 950 Hz and deviant tones of 952, 954, and 958 Hz were measured before and 1 week after subjects had been trained at frequency discrimination for five sessions (over 1 week) using an oddball procedure. The task of the subject was to detect deviants differing from the standard stimulus. Frequency discrimination improved during the training sessions for three subjects (performance gain: T+) but not for three other subjects (no performance gain: T-). Hemodynamic responses in the auditory cortex comprising the planum temporale, planum polare, and superior temporal sulcus significantly decreased during training only for the T+ group. These activation changes were strongest for those stimuli accompanied by the strongest performance gain (958 and 954 Hz). There was no difference in the hemodynamic responses in the auditory cortex between the T- group and the control group (CO), who did not receive any pitch discrimination training. The results suggest a plastic reorganization of the cortical representation of the trained frequencies, which can best be explained on the basis of 'fast learning' theories.
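The training used a frequency oddball procedure: mostly 950-Hz standards with occasional 952-, 954-, or 958-Hz deviants. The Python sketch below generates such a trial sequence; the trial count, deviant probability, and randomization scheme are assumptions, and only the frequencies come from the abstract.

import numpy as np

def oddball_sequence(n_trials=400, standard_hz=950.0, deviant_hz=(952.0, 954.0, 958.0),
                     p_deviant=0.15, seed=0):
    # Sequence of tone frequencies: standards with occasional, randomly placed deviants.
    # n_trials and p_deviant are assumed values, not taken from the study.
    rng = np.random.default_rng(seed)
    freqs = np.full(n_trials, standard_hz)
    is_deviant = rng.random(n_trials) < p_deviant
    freqs[is_deviant] = rng.choice(deviant_hz, size=int(is_deviant.sum()))
    return freqs

trial_frequencies = oddball_sequence()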
100. [Citation(s) in RCA: 105]
Abstract
The P300 component was obtained from ten pairs of identical twins and matched control subjects with an auditory discrimination task. The event-related brain potentials (ERPs) from the identical twins were strikingly similar in amplitude and latency compared to control subject pairs. The data suggest that individual variations in ERP waveform morphology are determined by the structure of the neurophysiological mechanisms responsible for P300 generation.