26. Behrens D, Klump GM. Comparison of mouse minimum audible angle determined in prepulse inhibition and operant conditioning procedures. Hear Res 2016; 333:167-178. [DOI: 10.1016/j.heares.2016.01.011]
27. Tolnai S, Dolležal LV, Klump GM. Binaural cues provide for a release from informational masking. Behav Neurosci 2015; 129:589-98. [DOI: 10.1037/bne0000091]
28. Feinkohl A, Borzeszkowski KM, Klump GM. Azimuthal sound localization in the European starling (Sturnus vulgaris): III. Comparison of sound localization measures. Hear Res 2015; 332:238-248. [PMID: 25870127] [DOI: 10.1016/j.heares.2015.04.001]
Abstract
Sound localization studies have typically employed two types of tasks: absolute tasks that measure the angular location of a single sound, and relative tasks that measure the angular location of a sound relative to that of another sound from a different source (e.g., in the Minimum Audible Angle task). The present study investigates the localization of single sounds in the European starling (Sturnus vulgaris) with a left/right discrimination paradigm. Localization thresholds of 8-12° determined in starlings using this paradigm were much lower than the minimum audible angle thresholds determined in a previous study with the same individuals. The traditional concept of sound localization classifies the present experiment as an absolute localization task. We propose, however, that the experiment presenting single sounds measured the angular location of the sound relative to a non-acoustic spatial frame of reference. We discuss how the properties of the setup determine whether presentation of single sounds in a left/right discrimination paradigm constitutes an absolute localization task rather than a localization task relative to a non-acoustic reference. Furthermore, different analysis methods may lead to quite different threshold estimates for the same data, especially in the case of a response bias in left/right discrimination. We propose using an analysis method that precludes effects of response bias on the threshold estimate.
29. Pohl NU, Klump GM, Langemann U. Effects of signal features and background noise on distance cue discrimination by a songbird. J Exp Biol 2015; 218:1006-15. [PMID: 25657204] [DOI: 10.1242/jeb.113639]
Abstract
During the transmission of acoustic signals, the spectral and temporal properties of the original signal are degraded, and with increasing distance more and more echo patterns are imposed. It is well known that these physical alterations provide useful cues for assessing the distance of a sound source. Previous studies have shown that birds employ the degree of degradation of a signal to estimate the distance of another singing male (referred to as ranging). Little is known about how acoustic masking by background noise interferes with ranging, and whether the number of song elements and stimulus familiarity affect the ability to discriminate between degraded and undegraded signals. In this study we trained great tits (Parus major L.) to discriminate between signal variants against two background types, a silent condition and a natural dawn chorus. We manipulated great tit song types to simulate patterns of reverberation and degradation equivalent to transmission distances of between 5 and 160 m. The birds' responses were significantly affected by the differences between the signal variants and by background type. In contrast, stimulus familiarity and element number had no significant effect on signal discrimination. Although background type was a significant main effect with respect to response latencies, the great tits' overall performance in the noisy dawn chorus was similar to their performance in silence.
30. Behrens D, Klump GM. Comparison of the sensitivity of prepulse inhibition of the startle reflex and operant conditioning in an auditory intensity difference limen paradigm. Hear Res 2015; 321:35-44. [PMID: 25580004] [DOI: 10.1016/j.heares.2014.12.010]
Abstract
Reward-based operant conditioning (OC) procedures and reflex-based prepulse inhibition (PPI) procedures are both used in mouse psychoacoustics. It is therefore important to know whether the two procedures provide comparable results for perceptual measurements. Here we evaluate the sensitivity of the C57BL/6N mouse in both procedures by testing the same individuals in the same intensity difference limen (IDL) task. Level increments of a 10 kHz tone were presented in a train of 10 kHz reference tones. An objective analysis based on signal-detection theory was applied to compare the results of the OC and PPI procedures. In both procedures, sensitivity increased with the size of the level increment. In agreement with the near miss to Weber's law, sensitivity also increased with the sound level of the reference stimuli. The sensitivity observed in the OC procedure was considerably higher than that in the PPI procedure. Applying a sensitivity of 1.0 as the threshold criterion, mean IDLs in the OC procedure were 5.0, 4.0 and 3.5 dB at reference levels of 30, 50 and 75 dB SPL, respectively. In the PPI procedure, mean IDLs of 18.9 and 17.0 dB were observed at reference levels of 50 and 75 dB SPL, respectively. Due to the low sensitivity, IDLs could not be determined in the PPI procedure at a reference level of 30 dB SPL. Possible causes for the low sensitivity in the PPI procedure are discussed. These results challenge the idea that the two procedures can be used as simple substitutes for one another; the experimenter must be aware of the limitations of the respective procedure.
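The signal-detection-theory sensitivity analysis described in this abstract, with a sensitivity of 1.0 as the threshold criterion, can be sketched in a few lines. This is an illustration only, not the study's actual analysis code, and the example hit and false-alarm rates are invented:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# A level increment yielding ~69% hits against ~31% false alarms
# sits close to the d' = 1.0 threshold criterion.
print(d_prime(0.69, 0.31))
```

In practice the hit and false-alarm rates would come from the OC or PPI trial responses, and the IDL is read off as the level increment at which d' crosses 1.0.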
31. Beutelmann R, Laumen G, Tollin D, Klump GM. Amplitude and phase equalization of stimuli for click evoked auditory brainstem responses. J Acoust Soc Am 2015; 137:EL71-EL77. [PMID: 25618102] [PMCID: PMC5404818] [DOI: 10.1121/1.4903921]
Abstract
Although auditory brainstem responses (ABRs), the sound-evoked brain activity elicited by transient sounds, are routinely measured in humans and animals, there are often differences in ABR waveform morphology across studies. One possible reason may be the method of stimulus calibration. To explore this hypothesis, click-evoked ABRs were measured from seven ears in four Mongolian gerbils (Meriones unguiculatus) using three common spectrum calibration strategies: a minimum-phase filter, a linear-phase filter, and no filter. The results show significantly higher ABR amplitude and signal-to-noise ratio, and better waveform resolution, with the minimum-phase filtered click than with the other strategies.
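As a rough illustration of how a minimum-phase calibration filter can be built from a measured magnitude response, here is a sketch using the standard real-cepstrum (homomorphic) construction. The "speaker" magnitude values below are invented placeholders, not measurements from the study:

```python
import numpy as np

def min_phase_from_magnitude(mag, n_fft):
    """Minimum-phase impulse response whose magnitude spectrum equals mag.

    mag must hold n_fft // 2 + 1 points (rfft bins). Folds the cepstrum of
    log|H| onto positive quefrencies, then exponentiates the spectrum.
    """
    log_mag = np.log(np.maximum(mag, 1e-12))
    cep = np.fft.irfft(log_mag, n_fft)
    fold = np.zeros_like(cep)
    fold[0] = cep[0]
    fold[1:n_fft // 2] = 2.0 * cep[1:n_fft // 2]
    fold[n_fft // 2] = cep[n_fft // 2]
    return np.fft.irfft(np.exp(np.fft.rfft(fold, n_fft)), n_fft)

n_fft = 512
# Hypothetical smooth speaker magnitude response; the equalizer inverts it
# so the filtered click reaches the ear with a flat spectrum.
speaker = 0.5 + np.linspace(0.0, 1.0, n_fft // 2 + 1)
h_eq = min_phase_from_magnitude(1.0 / speaker, n_fft)
```

Convolving the click with h_eq flattens the playback spectrum while keeping the filter's energy concentrated at the start of the impulse response, which is the property that distinguishes the minimum-phase strategy from a linear-phase equalizer with the same magnitude.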
32. Dolležal LV, Brechmann A, Klump GM, Deike S. Evaluating auditory stream segregation of SAM tone sequences by subjective and objective psychoacoustical tasks, and brain activity. Front Neurosci 2014; 8:119. [PMID: 24936170] [PMCID: PMC4047832] [DOI: 10.3389/fnins.2014.00119]
Abstract
Auditory stream segregation refers to a segregated percept of signal streams with different acoustic features. Different approaches have been pursued in studies of stream segregation. In psychoacoustics, stream segregation has mostly been investigated with a subjective task asking the subjects to report their percept. Few studies have applied an objective task in which stream segregation is evaluated indirectly by determining thresholds for a percept that depends on whether auditory streams are segregated or not. Furthermore, both perceptual measures and physiological measures of brain activity have been employed, but little is known about their relation. How the results from different tasks and measures are related is evaluated in the present study using examples relying on the ABA- stimulation paradigm that apply the same stimuli. We presented A and B signals that were sinusoidally amplitude modulated (SAM) tones providing purely temporal, spectral or both types of cues to evaluate perceptual stream segregation and its physiological correlate. Which types of cues are most prominent was determined by the choice of carrier and modulation frequencies (fmod) of the signals. In the subjective task subjects reported their percept, and in the objective task we measured their sensitivity for detecting time-shifts of B signals in an ABA- sequence. As a further measure of processes underlying stream segregation we employed functional magnetic resonance imaging (fMRI). SAM tone parameters were chosen to evoke an integrated (1-stream), a segregated (2-stream), or an ambiguous percept by adjusting the fmod difference between A and B tones (Δfmod). The results of both psychoacoustical tasks are significantly correlated. BOLD responses in fMRI depend on Δfmod between A and B SAM tones. The effect of Δfmod, however, differs between auditory cortex and frontal regions, suggesting differences in representation related to the degree of perceptual ambiguity of the sequences.
33. van den Heuvel IM, Cherry MI, Klump GM. Crimson-breasted Shrike females with extra pair offspring contributed more to duets. Behav Ecol Sociobiol 2014. [DOI: 10.1007/s00265-014-1735-6]
34. van den Heuvel IM, Cherry MI, Klump GM. Land or lover? Territorial defence and mutual mate guarding in the crimson-breasted shrike. Behav Ecol Sociobiol 2013. [DOI: 10.1007/s00265-013-1651-1]
35. Feinkohl A, Borzeszkowski KM, Klump GM. Effect of head turns on the localization accuracy of sounds in the European starling (Sturnus vulgaris). Behav Brain Res 2013; 256:669-76. [PMID: 24035879] [DOI: 10.1016/j.bbr.2013.08.038]
Abstract
Long signal durations that represent closed-loop conditions permit responses based on the sensory feedback during the presentation of the stimulus, while short stimulus durations that represent open-loop conditions do not allow for directed head turns during signal presentation. A previous study showed that for broadband noise stimuli, the minimum audible angle (MAA) of the European starling (Sturnus vulgaris) is smaller under closed-loop compared to open-loop conditions (Feinkohl & Klump, 2013). Head turns represent a possible strategy to improve sound localization cues under closed-loop conditions. In this study, we analyze the influence of head turns on the starling MAA for broadband noise and 2 kHz tones under closed-loop and open-loop conditions. The starlings made more head turns under closed-loop conditions compared to open-loop conditions. Under closed-loop conditions, their sensitivity for discriminating sound source positions was best if they turned their head once or more per stimulus presentation. We discuss potential cues generated from head turns under closed-loop conditions.
36. Klinge-Strahl A, Parnitzke T, Beutelmann R, Klump GM. Phase discrimination ability in Mongolian gerbils provides evidence for possible processing mechanism of mistuning detection. Adv Exp Med Biol 2013; 787:399-407. [PMID: 23716246] [DOI: 10.1007/978-1-4614-1590-9_44]
Abstract
Compared to humans, Mongolian gerbils (Meriones unguiculatus) are much more sensitive at detecting mistuning of frequency components of a harmonic complex (Klinge and Klump, J Acoust Soc Am 128:280-290, 2010). One processing mechanism suggested to produce this high sensitivity involves evaluating the phase shift that gradually develops between the mistuned component and the remaining components in the same or separate auditory filters. To investigate whether this processing mechanism may explain the observed sensitivity, we determined the gerbils' thresholds for detecting a constant phase shift in a component of a harmonic complex that is introduced without a frequency shift. The gerbils' detection thresholds for constant phase shifts were considerably lower for a high-frequency component (6,400 Hz) than for a low-frequency component (400 Hz) of a 200-Hz harmonic complex and increased with decreasing stimulus duration. Compared to the phase shifts calculated from the mistuning detection thresholds, the detection thresholds for constant phase shifts were similar to those for gradual phase shifts for the low-frequency harmonic but considerably lower for the high-frequency harmonic. A simulation of the processing of harmonic complexes by the gerbil's peripheral auditory filters when components are phase shifted shows waveform changes comparable to those assessed for mistuning detection (Klinge and Klump, J Acoust Soc Am 128:280-290, 2010) and provides evidence that detection of the gradual phase shifts may underlie mistuning detection.
37. Pohl NU, Slabbekoorn H, Neubauer H, Heil P, Klump GM, Langemann U. Why longer song elements are easier to detect: threshold level-duration functions in the Great Tit and comparison with human data. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2013; 199:239-52. [PMID: 23338560] [DOI: 10.1007/s00359-012-0789-z]
Abstract
Our study estimates detection thresholds for tones of different durations and frequencies in Great Tits (Parus major) with operant procedures. We employ signals covering the duration and frequency range of communication signals of this species (40-1,010 ms; 2, 4 and 6.3 kHz), and we measure threshold level-duration (TLD) functions (relating threshold level to signal duration) in silence as well as under behaviorally relevant environmental noise conditions (urban noise, woodland noise). Detection thresholds decreased with increasing signal duration. Thresholds at any given duration were a function of signal frequency and were elevated in background noise, but the shape of the Great Tit TLD functions was independent of signal frequency and background condition. To enable comparisons of our Great Tit data with those from other species, TLD functions were first fitted with a traditional leaky-integrator model. We then applied a probabilistic model to interpret the trade-off between signal amplitude and duration at threshold. Great Tit TLD functions exhibit features that are similar across species. The current results, however, cannot explain why Great Tits in noisy urban environments produce shorter song elements or faster songs than those in quieter woodland environments, as detection thresholds are lower for longer elements also under noisy conditions.
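The leaky-integrator model mentioned in this abstract predicts how threshold level trades against signal duration. A minimal sketch follows; the time constant and asymptotic threshold are arbitrary placeholders, not the fitted Great Tit values:

```python
import math

def tld_threshold(duration_s: float, l_inf_db: float, tau_s: float = 0.2) -> float:
    """Leaky-integrator threshold level-duration function:
    L(T) = L_inf - 10 * log10(1 - exp(-T / tau)).
    Thresholds fall toward the asymptote L_inf as duration grows."""
    return l_inf_db - 10.0 * math.log10(1.0 - math.exp(-duration_s / tau_s))

# Thresholds across the range of signal durations used in the study (40-1010 ms):
for t_ms in (40, 100, 400, 1010):
    print(t_ms, round(tld_threshold(t_ms / 1000.0, l_inf_db=20.0), 1))
```

The model captures the key qualitative finding: short signals need a higher level at threshold, and the function's shape is set by the integrator time constant alone, independent of the asymptote.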
38. Dolležal LV, Itatani N, Günther S, Klump GM. Auditory streaming by phase relations between components of harmonic complexes: a comparative study of human subjects and bird forebrain neurons. Behav Neurosci 2012; 126:797-808. [PMID: 23067380] [DOI: 10.1037/a0030249]
Abstract
Auditory streaming describes a percept in which a sequential series of sounds either is segregated into different streams or is integrated into one stream based on differences in their spectral or temporal characteristics. This phenomenon has been analyzed in human subjects (psychophysics) and European starlings (neurophysiology), presenting harmonic complex (HC) stimuli with different phase relations between their frequency components. Such stimuli allow evaluating streaming by temporal cues, as they vary only in the temporal waveform but have identical amplitude spectra. The present study applied the commonly used ABA- paradigm (van Noorden, 1975) and matched stimulus sets in psychophysics and neurophysiology to evaluate the effects of fundamental frequency (f₀), frequency range (fLowCutoff), tone duration (TD), and tone repetition time (TRT) on streaming by phase relations of the HC stimuli. By comparing the percept of humans with rate or temporal responses of avian forebrain neurons, a neuronal correlate of perceptual streaming of HC stimuli is described. The differences in the pattern of the neurons' spike rate responses provide a better explanation for the percept observed in humans than the differences in the temporal responses (i.e., the representation of the periodicity in the timing of the action potentials). Especially for HC stimuli with a short 40-ms duration, the differences in the pattern of the neurons' temporal responses failed to represent the patterns of human perception, whereas the neurons' rate responses showed a good match. These results suggest that differential rate responses are a better predictor for auditory streaming by phase relations than temporal responses.
39. Dolležal LV, Beutelmann R, Klump GM. Stream segregation in the perception of sinusoidally amplitude-modulated tones. PLoS One 2012; 7:e43615. [PMID: 22984436] [PMCID: PMC3440405] [DOI: 10.1371/journal.pone.0043615]
Abstract
Amplitude modulation can serve as a cue for segregating streams of sounds from different sources. Here we evaluate stream segregation in humans using ABA- sequences of sinusoidally amplitude modulated (SAM) tones. A and B represent SAM tones with the same carrier frequency (1000 or 4000 Hz) and modulation depth (30 or 100%). The modulation frequency of the A signals (fmodA) was 30, 100 or 300 Hz. The modulation frequency of the B signals was up to four octaves higher (Δfmod). Three different ABA- tone patterns varying in tone duration and stimulus onset asynchrony were presented to evaluate the effect of forward suppression. Subjects indicated their 1- or 2-stream percept on a touch screen at the end of each ABA- sequence (presentation time 5 or 15 s). Tone pattern, fmodA, Δfmod, carrier frequency, modulation depth and presentation time significantly affected the percentage of 2-stream percepts. The human psychophysical results are compared to responses of avian forebrain neurons evoked by ABA- SAM tone conditions [1] that broadly overlapped those of the present study. The neurons also showed significant effects of tone pattern and Δfmod that were comparable to the effects observed in the present psychophysical study. Depending on the carrier frequency, modulation frequency, modulation depth and the width of the auditory filters, SAM tones may provide mainly temporal cues (sidebands fall within the range of the filter), spectral cues (sidebands fall outside the range of the filter) or possibly both. A computational model based on excitation pattern differences was used to predict the 50% threshold of 2-stream responses. In conditions for which the model predicts a considerably larger 50% threshold of 2-stream responses (i.e., a larger Δfmod at threshold) than was observed, it is unlikely that spectral cues can explain stream segregation by SAM.
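A SAM tone is simply a carrier multiplied by a (1 + m·sin) envelope, and the ABA- sequences described above string three such tones together with a silent gap. A sketch follows; the parameter defaults are illustrative, not the study's exact settings:

```python
import numpy as np

def sam_tone(fc, fmod, depth, dur, fs=44100):
    """SAM tone: carrier fc (Hz) modulated at fmod (Hz) with depth 0..1."""
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + depth * np.sin(2.0 * np.pi * fmod * t)
    return envelope * np.sin(2.0 * np.pi * fc * t)

def aba_triplet(fmod_a, fmod_b, fc=1000.0, depth=1.0, dur=0.125, fs=44100):
    """One ABA- triplet: A and B SAM tones differing only in fmod,
    followed by a silent gap of one tone duration."""
    gap = np.zeros(int(dur * fs))
    a = sam_tone(fc, fmod_a, depth, dur, fs)
    b = sam_tone(fc, fmod_b, depth, dur, fs)
    return np.concatenate([a, b, a, gap])

seq = aba_triplet(fmod_a=100.0, fmod_b=400.0)  # Δfmod of two octaves
```

Because A and B share the same carrier and depth, the only difference between them is the envelope rate, which is exactly the manipulation the study uses to probe temporal versus spectral streaming cues.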
40. Maier JK, Hehrmann P, Harper NS, Klump GM, Pressnitzer D, McAlpine D. Adaptive coding is constrained to midline locations in a spatial listening task. J Neurophysiol 2012; 108:1856-68. [PMID: 22773777] [DOI: 10.1152/jn.00652.2011]
Abstract
Many neurons adapt their spike output to accommodate the prevailing sensory environment. Although such adaptation is thought to improve coding of relevant stimulus features, the relationship between adaptation at the neural and behavioral levels remains to be established. Here we describe improved discrimination performance for an auditory spatial cue (interaural time differences, ITDs) following adaptation to stimulus statistics. Physiological recordings in the midbrain of anesthetized guinea pigs and measurement of discrimination performance in humans both demonstrate improved coding of the most prevalent ITDs in a distribution, but with highest accuracy maintained for ITDs corresponding to frontal locations, suggesting the existence of a fovea for auditory space. A biologically plausible model accounting for the physiological data suggests that neural tuning is stabilized by inhibition to maintain high discriminability for frontal locations. The data support the notion that adaptive coding in the midbrain is a key element of behaviorally efficient sound localization in dynamic acoustic environments.
41. van den Heuvel IM, Cherry MI, Klump GM. Individual identity, song repertoire and duet function in the Crimson-breasted Shrike (Laniarius atrococcineus). Bioacoustics 2012. [DOI: 10.1080/09524622.2012.701041]
42. Pohl NU, Leadbeater E, Slabbekoorn H, Klump GM, Langemann U. Great tits in urban noise benefit from high frequencies in song detection and discrimination. Anim Behav 2012. [DOI: 10.1016/j.anbehav.2011.12.019]
43. Klinge A, Beutelmann R, Klump GM. Effect of harmonicity on the detection of a signal in a complex masker and on spatial release from masking. PLoS One 2011; 6:e26124. [PMID: 22028814] [PMCID: PMC3196535] [DOI: 10.1371/journal.pone.0026124]
Abstract
The amount of masking of sounds from one source (signals) by sounds from a competing source (maskers) heavily depends on the sound characteristics of the masker and the signal and on their relative spatial location. Numerous studies investigated the ability to detect a signal in a speech or a noise masker or the effect of spatial separation of signal and masker on the amount of masking, but there is a lack of studies investigating the combined effects of many cues on the masking as is typical for natural listening situations. The current study using free-field listening systematically evaluates the combined effects of harmonicity and inharmonicity cues in multi-tone maskers and cues resulting from spatial separation of target signal and masker on the detection of a pure tone in a multi-tone or a noise masker. A linear binaural processing model was implemented to predict the masked thresholds in order to estimate whether the observed thresholds can be accounted for by energetic masking in the auditory periphery or whether other effects are involved. Thresholds were determined for combinations of two target frequencies (1 and 8 kHz), two spatial configurations (masker and target either co-located or spatially separated by 90 degrees azimuth), and five different masker types (four complex multi-tone stimuli, one noise masker). A spatial separation of target and masker resulted in a release from masking for all masker types. The amount of masking significantly depended on the masker type and frequency range. The various harmonic and inharmonic relations between target and masker or between components of the masker resulted in a complex pattern of increased or decreased masked thresholds in comparison to the predicted energetic masking. The results indicate that harmonicity cues affect the detectability of a tonal target in a complex masker.
44. Itatani N, Klump GM. Neural correlates of auditory streaming of harmonic complex sounds with different phase relations in the songbird forebrain. J Neurophysiol 2011; 105:188-99. [DOI: 10.1152/jn.00496.2010]
Abstract
It has been suggested that successively presented sounds that are perceived as separate auditory streams are represented by separate populations of neurons. Mostly, spectral separation in different peripheral filters has been identified as the cue for segregation. However, stream segregation based on temporal cues is also possible without spectral separation. Here we present sequences of ABA- triplet stimuli providing only temporal cues to neurons in the European starling auditory forebrain. A and B sounds (125 ms duration) were harmonic complexes (fundamentals 100, 200, or 400 Hz; center frequency and bandwidth chosen to fit the neurons' tuning characteristic) with identical amplitude spectra but different phase relations between components (cosine, alternating, or random phase) and were presented at different rates. Differences in both rate responses and temporal response patterns of the neurons when stimulated with harmonic complexes with different phase relations provide first evidence for a mechanism allowing a separate neural representation of such stimuli. Recording sites responding to frequencies above 1 kHz showed enhanced rate and temporal differences compared with those responding at lower frequencies. These results demonstrate a neural correlate of streaming by temporal cues due to the variation of phase that shows striking parallels to observations in previous psychophysical studies.
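The stimulus manipulation described above, identical amplitude spectra but different component phases, can be sketched as follows (a minimal illustration; the parameter values are not the study's):

```python
import numpy as np

def harmonic_complex(f0, n_harmonics, phase_mode, dur=0.125, fs=40000, seed=0):
    """Equal-amplitude harmonics of f0 with cosine, alternating, or random
    starting phases. All modes share the same amplitude spectrum; only the
    temporal waveform differs."""
    t = np.arange(int(dur * fs)) / fs
    rng = np.random.default_rng(seed)
    x = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        if phase_mode == "cosine":
            phi = 0.0
        elif phase_mode == "alternating":  # alternate cosine and sine phase
            phi = 0.0 if k % 2 else np.pi / 2.0
        elif phase_mode == "random":
            phi = rng.uniform(0.0, 2.0 * np.pi)
        else:
            raise ValueError(phase_mode)
        x += np.cos(2.0 * np.pi * k * f0 * t + phi)
    return x

cos_hc = harmonic_complex(200.0, 10, "cosine")
alt_hc = harmonic_complex(200.0, 10, "alternating")
```

With f0 = 200 Hz and a 125-ms duration the complexes contain an integer number of cycles, so the magnitude spectra of the two waveforms match bin for bin while their temporal envelopes (and hence any peripheral temporal cues) differ: the cosine-phase complex is maximally peaky.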
45. Bee MA, Micheyl C, Oxenham AJ, Klump GM. Neural adaptation to tone sequences in the songbird forebrain: patterns, determinants, and relation to the build-up of auditory streaming. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2010; 196:543-57. [PMID: 20563587] [DOI: 10.1007/s00359-010-0542-4]
Abstract
Neural responses to tones in the mammalian primary auditory cortex (A1) exhibit adaptation over the course of several seconds. Important questions remain about the taxonomic distribution of multi-second adaptation and its possible roles in hearing. It has been hypothesized that neural adaptation could explain the gradual "build-up" of auditory stream segregation. We investigated the influence of several stimulus-related factors on neural adaptation in the avian homologue of mammalian A1 (field L2) in starlings (Sturnus vulgaris). We presented awake birds with sequences of repeated triplets of two interleaved tones (ABA-ABA-...) in which we varied the frequency separation between the A and B tones (ΔF), the stimulus onset asynchrony (time from tone onset to onset within a triplet), and tone duration. We found that stimulus onset asynchrony generally had larger effects on adaptation compared with ΔF and tone duration over the parameter range tested. Using a simple model, we show how time-dependent changes in neural responses can be transformed into neurometric functions that make testable predictions about the dependence of the build-up of stream segregation on various spectral and temporal stimulus properties.
46. Klink KB, Dierker H, Beutelmann R, Klump GM. Comodulation masking release determined in the mouse (Mus musculus) using a flanking-band paradigm. J Assoc Res Otolaryngol 2010; 11:79-88. [PMID: 19763691] [PMCID: PMC2820211] [DOI: 10.1007/s10162-009-0186-7]
Abstract
Comodulation masking release (CMR) has been attributed to auditory processing within one auditory channel (within-channel cues) and/or across several auditory channels (across-channel cues). The present flanking-band (FB) experiment, using a 25-Hz-wide on-frequency noise masker (OFM) centered at the signal frequency of 10 kHz and a single 25-Hz-wide noise FB, was designed to separate the amount of CMR due to within- and across-channel cues and to investigate the role of temporal cues in the size of within-channel CMR. The results demonstrated within-channel CMR in the Naval Medical Research Institute mouse, while no unambiguous evidence could be found for CMR occurring due to across-channel processing (i.e., "true CMR"). The amount of within-channel CMR depended on the frequency separation between the FB and the OFM: CMR increased from 4-6 dB at a frequency separation of 1 kHz to 18 dB at a frequency separation of 100 Hz. The large increase at a frequency separation of 100 Hz is likely due to the exploitation of changes in the temporal pattern of the stimulus upon the addition of the signal. Temporal interaction between the two masker bands results in modulations with a large depth at a modulation frequency equal to the beating rate. Adding a signal to the maskers reduces the depth of the modulation. The auditory system of mice might be able to use the change in modulation depth at a beating frequency of 100 Hz as a cue for signal detection, while being unable to detect changes in modulation depth at high modulation frequencies. These results are consistent with other experiments and model predictions for CMR in humans, which suggested that the main contribution to the CMR effect stems from processing of within-channel cues.
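The beating cue described above is easy to demonstrate with a toy two-tone version of the masker (pure tones stand in for the study's 25-Hz-wide noise bands; all values here are illustrative):

```python
import numpy as np
from scipy.signal import hilbert

fs = 48000
t = np.arange(int(0.5 * fs)) / fs

# Two masker "bands" 100 Hz apart beat at 100 Hz with full modulation depth.
masker = np.sin(2.0 * np.pi * 10000.0 * t) + np.sin(2.0 * np.pi * 10100.0 * t)
# Adding a tonal signal at the OFM frequency shallows the envelope beats.
signal = 2.0 * np.sin(2.0 * np.pi * 10000.0 * t)

def mod_depth(x, fs=48000, trim_s=0.01):
    """Envelope modulation depth (max-min)/(max+min) from the Hilbert
    envelope, trimming the edges to avoid transform edge artifacts."""
    n = int(trim_s * fs)
    env = np.abs(hilbert(x))[n:-n]
    return (env.max() - env.min()) / (env.max() + env.min())
```

Calling mod_depth(masker) gives a depth near 1, while mod_depth(masker + signal) drops toward 1/3: the added signal fills in the envelope minima, and this reduction in beat depth is the within-channel detection cue discussed in the abstract.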
|
47
|
Maier JK, McAlpine D, Klump GM, Pressnitzer D. Context effects in the discriminability of spatial cues. J Assoc Res Otolaryngol 2009; 11:319-28. [PMID: 20033247] [DOI: 10.1007/s10162-009-0200-0]
Abstract
In order to investigate whether performance in an auditory spatial discrimination task depends on the prevailing listening conditions, we tested the ability of human listeners to discriminate target sounds with and without a preceding sound. Target sounds were lateralized either by means of interaural time differences (ITDs) of +400, 0, or -400 µs or by interaural level differences (ILDs) producing the same subjective intracranial locations. The preceding sound was always lateralized by means of ITD, which allowed us to test whether its effects were location- or cue-specific. Preceding and target sounds were randomly paired across trials, and listeners had to report whether they perceived the target sounds as coming from the same or different intracranial locations. Stimuli were selected so that, without any preceding sound, ITD and ILD cues were equally discriminable at all target lateralizations. Stimuli were 800-Hz-wide, 400-ms bands of noise centered at 500 Hz, presented over headphones. The duration of the preceding sound was drawn from a uniform distribution spanning 1 s to 2 s. Results show that the discriminability of both binaural cues improved for midline target positions when preceding sound and targets were co-located, whereas it was impaired when they came from different positions. No effect of the preceding sound was found for left or right target positions. These results are compatible with a purely bottom-up mechanism based on adaptive coding of ITD around the midline, which may combine with top-down mechanisms to increase localization accuracy in realistic listening conditions.
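The ITD stimuli of this study can be sketched as follows. This is a minimal illustration, not the study's stimulus-generation code; the sampling rate and the frequency-domain noise synthesis are assumptions. A single band-limited noise token is generated, and the right-ear copy is delayed by the desired ITD via a frequency-domain phase shift.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44_100                       # sampling rate in Hz (assumed)
n = int(fs * 0.4)                 # 400-ms target
itd = 400e-6                      # +400 µs interaural time difference

# Band-limited Gaussian noise, 800 Hz wide, centered at 500 Hz (100-900 Hz),
# synthesized by zeroing out-of-band components in the frequency domain.
spec = np.fft.rfft(rng.standard_normal(n))
freqs = np.fft.rfftfreq(n, 1 / fs)
spec[(freqs < 100) | (freqs > 900)] = 0
left = np.fft.irfft(spec, n)

# Apply the ITD as a linear phase shift: the right ear receives the
# identical noise delayed by 400 µs.
right = np.fft.irfft(spec * np.exp(-2j * np.pi * freqs * itd), n)
```

Because both ears receive the identical noise token, the only interaural difference is the 400-µs delay; an ILD version would instead scale one channel's amplitude.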
|
48
|
Pohl NU, Slabbekoorn H, Klump GM, Langemann U. Effects of signal features and environmental noise on signal detection in the great tit, Parus major. Anim Behav 2009. [DOI: 10.1016/j.anbehav.2009.09.005]
|
49
|
Seeba F, Klump GM. Stimulus familiarity affects perceptual restoration in the European starling (Sturnus vulgaris). PLoS One 2009; 4:e5974. [PMID: 19551146] [PMCID: PMC2696095] [DOI: 10.1371/journal.pone.0005974]
Abstract
BACKGROUND: Humans can easily restore a speech signal that is temporally masked by an interfering sound (e.g., a cough masking parts of a word in a conversation), and listeners have the illusion that the speech continues through the interfering sound. This perceptual restoration of human speech is affected by prior experience. Here we provide evidence for perceptual restoration in the complex vocalizations of a songbird, which are acquired by vocal learning in a similar way as humans learn their language.
METHODOLOGY/PRINCIPAL FINDINGS: European starlings were trained in a same/different paradigm to report salient differences between successive sounds. The birds' response latency for discriminating between a stimulus pair is an indicator of the salience of the difference, and these latencies can be used to evaluate perceptual distances using multidimensional scaling. For familiar motifs, the birds showed a large perceptual distance when discriminating between song motifs that were muted for brief periods and complete motifs. If the muted periods were filled with noise, the perceptual distance was reduced. For unfamiliar motifs, no such difference was observed.
CONCLUSIONS/SIGNIFICANCE: The results suggest that starlings are able to perceptually restore partly masked sounds and, like humans, rely on prior experience. They may be a suitable model for studying the mechanisms underlying experience-dependent perceptual restoration.
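The latency-to-distance analysis can be sketched with classical (Torgerson) multidimensional scaling. This is one standard MDS variant, not necessarily the one used in the paper, and the latency matrix and the latency-to-dissimilarity mapping (longer latency = smaller perceptual difference) below are made-up illustrations.

```python
import numpy as np

def classical_mds(d, k=2):
    """Torgerson's classical MDS: double-center the squared dissimilarities
    and embed the items via the top-k eigenvectors."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                  # double-centered Gram matrix
    w, v = np.linalg.eigh(b)                     # eigenvalues in ascending order
    top = np.argsort(w)[::-1][:k]                # pick the k largest
    return v[:, top] * np.sqrt(np.clip(w[top], 0, None))

# Hypothetical response latencies (ms) for pairs of 4 motifs; values invented.
latency = np.array([
    [900.0, 450.0, 500.0, 700.0],
    [450.0, 900.0, 650.0, 480.0],
    [500.0, 650.0, 900.0, 520.0],
    [700.0, 480.0, 520.0, 900.0],
])
dissim = (latency.max() - latency) / 100.0       # invert: long latency = small distance
np.fill_diagonal(dissim, 0.0)
coords = classical_mds(dissim, k=2)              # 2-D perceptual map of the motifs
```

With exactly Euclidean input, classical MDS recovers the configuration perfectly; with real latency-derived dissimilarities, the low-dimensional map is only an approximation.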
|
50
|
Itatani N, Klump GM. Auditory streaming of amplitude-modulated sounds in the songbird forebrain. J Neurophysiol 2009; 101:3212-25. [PMID: 19357341] [DOI: 10.1152/jn.91333.2008]
Abstract
Streaming in auditory scene analysis refers to the perceptual grouping of multiple interleaved sounds with similar characteristics, while sounds with different characteristics are segregated. In human perception, auditory streaming occurs on the basis of temporal features of sounds such as the rate of amplitude modulation. We present results from multiunit recordings in the auditory forebrain of awake European starlings (Sturnus vulgaris) on the representation of sinusoidally amplitude-modulated (SAM) tones, investigating the effect of temporal envelope structure on neural stream segregation. Different types of rate modulation transfer functions in response to SAM tones were observed, with the strongest responses found for modulation frequencies (fmod) below 160 Hz. The streaming stimulus consisted of sequences of alternating SAM tones with the same carrier frequency but differing in fmod (ABA-ABA-ABA-...). A signals had a modulation frequency evoking a large excitation, whereas the fmod of B signals was up to 4 octaves higher. Synchrony of B-signal responses to the modulation decreased as fmod increased, and the spike rate in response to B signals dropped as fmod increased. Faster signal repetition resulted in fewer spikes, suggesting a contribution of forward suppression to the response that may be due to both signals having similar spectral energy and that is not related to the temporal pattern of modulation. These two effects are additive and may provide the basis for a more separated representation of A and B signals by two populations of neurons, which can be viewed as a neuronal correlate of segregated streams.
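The ABA- stimulus construction can be sketched numerically. This is an illustration, not the study's stimulus code, and every parameter value (sampling rate, carrier frequency, modulation rates, timing) is an assumption.

```python
import numpy as np

fs = 48_000                     # sampling rate in Hz (assumed)

def sam_tone(f_carrier, f_mod, dur, depth=1.0):
    """Sinusoidally amplitude-modulated (SAM) tone."""
    t = np.arange(int(fs * dur)) / fs
    return (1 + depth * np.sin(2 * np.pi * f_mod * t)) * np.sin(2 * np.pi * f_carrier * t)

carrier = 4_000.0               # shared carrier frequency (illustrative)
f_mod_a = 30.0                  # A: modulation rate evoking strong excitation
f_mod_b = f_mod_a * 2 ** 2      # B: 2 octaves higher here (the study used up to 4)
dur, gap = 0.1, 0.05            # tone duration and inter-tone gap in seconds

silence = np.zeros(int(fs * gap))
triplet = np.concatenate([
    sam_tone(carrier, f_mod_a, dur), silence,   # A
    sam_tone(carrier, f_mod_b, dur), silence,   # B
    sam_tone(carrier, f_mod_a, dur), silence,   # A
    np.zeros(int(fs * (dur + gap))),            # '-' : silent slot
])
sequence = np.tile(triplet, 5)                  # ABA-ABA-ABA-...
```

Shrinking the octave separation between f_mod_a and f_mod_b reduces the response differences between A and B signals that the abstract links to stream segregation.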
|