1
Heil P, Friedrich B. How to define thresholds for level and interaural-level-difference discrimination: Insights from scedasticities and distributions. Hear Res 2023; 436:108837. PMID: 37413706. DOI: 10.1016/j.heares.2023.108837.
Abstract
Sensitivity to changes in the stimulus level at one or at both ears and to changes in the interaural level difference (ILD) between the two ears has been studied widely. Several different definitions of threshold and, for one of them, two different ways of averaging single-listener thresholds have been used (i.e., arithmetically and geometrically), but it is unclear which definition and which way of averaging are most suitable. Here, we addressed this issue by examining which of the differently defined thresholds yielded the highest degree of homoscedasticity (homogeneity of the variance). We also examined how closely the differently defined thresholds followed the normal distribution. We measured thresholds from a large number of human listeners as a function of stimulus duration in six experimental conditions, using an adaptive two-alternative forced-choice paradigm. Thresholds defined as the logarithm of the ratio of the intensities or amplitudes of the target and the reference stimulus (i.e., as the difference in their levels or ILDs; the most commonly used definition) were clearly heteroscedastic. Log-transformation of these thresholds, as sometimes performed, did not result in homoscedasticity. Thresholds defined as the logarithm of the Weber fraction for stimulus intensity and thresholds defined as the logarithm of the Weber fraction for stimulus amplitude (the most rarely used definition) were consistent with homoscedasticity, but the latter were closer to the ideal case. Thresholds defined as the logarithm of the Weber fraction for stimulus amplitude also followed the normal distribution most closely. The discrimination thresholds should therefore be expressed as the logarithm of the Weber fraction for stimulus amplitude and be averaged arithmetically across listeners. Other implications are discussed, and the obtained differences between the thresholds in different conditions are compared to the literature.
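The competing threshold definitions in this abstract can be made concrete with a small numeric sketch. This is a minimal illustration with made-up amplitudes; the function and variable names are ours, not the authors': for a target with amplitude A_t and a reference with amplitude A_r, the level difference is 20·log10(A_t/A_r) dB, the Weber fraction for intensity is (A_t² − A_r²)/A_r², and the Weber fraction for amplitude is (A_t − A_r)/A_r, with the abstract favoring the logarithm of the last quantity.

```python
import math

def threshold_measures(a_target, a_ref):
    """Compute the differently defined discrimination thresholds for one
    hypothetical target/reference amplitude pair (names are ours)."""
    level_diff_db = 20.0 * math.log10(a_target / a_ref)    # level or ILD difference (dB)
    weber_intensity = (a_target**2 - a_ref**2) / a_ref**2  # Weber fraction for intensity
    weber_amplitude = (a_target - a_ref) / a_ref           # Weber fraction for amplitude
    return {
        "level_diff_dB": level_diff_db,
        "log_weber_intensity": math.log10(weber_intensity),
        "log_weber_amplitude": math.log10(weber_amplitude),  # the preferred measure
    }

# A just-detectable 12% amplitude increment (value chosen arbitrarily):
m = threshold_measures(a_target=1.12, a_ref=1.0)
```

Note that the three measures are monotonically related for a single listener; the abstract's point is that only the log Weber fraction for amplitude yields homoscedastic, approximately normal threshold distributions across listeners.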
Affiliation(s)
- Peter Heil
- Department of Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany.
- Björn Friedrich
- Department of Experimental Audiology, Otto von Guericke University, Magdeburg, Germany
2
Temporal and Directional Cue Effects on the Cocktail Party Problem for Patients With Listening Difficulties Without Clinical Hearing Loss. Ear Hear 2022; 43:1740-1751. DOI: 10.1097/aud.0000000000001247.
3
Herbst SK, Stefanics G, Obleser J. Endogenous modulation of delta phase by expectation–A replication of Stefanics et al., 2010. Cortex 2022; 149:226-245. DOI: 10.1016/j.cortex.2022.02.001.
4
Heil P, Mohamed ESI, Matysiak A. Towards a unifying basis of auditory thresholds: Thresholds for multicomponent stimuli. Hear Res 2021; 410:108349. PMID: 34530356. DOI: 10.1016/j.heares.2021.108349.
Abstract
Sounds consisting of multiple simultaneous or consecutive components can be detected by listeners when the stimulus levels of the components are lower than those needed to detect the individual components alone. The mechanisms underlying such spectral, spectrotemporal, temporal, or across-ear integration are not completely understood. Here, we report threshold measurements from human subjects for multicomponent stimuli (tone complexes, tone sequences, diotic or dichotic tones) and for their individual sinusoidal components in quiet. We examine whether the data are compatible with the detection model developed by Heil, Matysiak, and Neubauer (HMN model) to account for temporal integration (Heil et al. 2017), and we compare its performance to that of the statistical summation model (Green 1958), the model commonly used to account for spectral and spectrotemporal integration. In addition, we compare the performance of both models with respect to previously published thresholds for sequences of identical tones and for diotic tones. The HMN model is similar to the statistical summation model but is based on the assumption that the decision variable is a number of sensory events generated by the components via independent Poisson point processes. The rate of events is low without stimulation and increases with stimulation. The increase is proportional to the time-varying amplitude envelope of the bandpass-filtered component(s) raised to an exponent of 3. For an ideal observer, the decision variable is the sum of the events from all channels carrying information, for as long as they carry information. We find that the HMN model provides a better account of the thresholds for multicomponent stimuli than the statistical summation model, and it offers a unifying account of spectral, spectrotemporal, temporal, and across-ear integration at threshold.
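The core assumption of the HMN model as summarized above (sensory events generated by independent Poisson processes whose rate grows with the cube of the stimulus amplitude, summed across informative channels by an ideal observer) can be sketched in a toy Monte Carlo simulation. The constants `spont_rate` and `gain`, the amplitudes, and all function names below are arbitrary choices of ours, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_events(amplitude, duration_s, spont_rate=5.0, gain=50.0):
    # One channel's expected Poisson event count: a low spontaneous rate
    # plus a stimulus-driven rate proportional to amplitude**3.
    return (spont_rate + gain * amplitude**3) * duration_s

def decision_variable(amplitudes, duration_s, n_trials=10_000):
    # Ideal-observer decision variable: the summed event count over all
    # channels carrying information (a sum of Poissons is Poisson).
    lam = sum(expected_events(a, duration_s) for a in amplitudes)
    return rng.poisson(lam, size=n_trials)

# Two simultaneous components at 2**(-1/3) (about 79%) of a single
# component's amplitude drive the same total stimulus-evoked event rate,
# illustrating why multicomponent stimuli can be detected at lower
# per-component levels under a cube-law transformation.
single = decision_variable([1.0], duration_s=0.1)
double = decision_variable([2 ** (-1 / 3)] * 2, duration_s=0.1)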
Affiliation(s)
- Peter Heil
- Department of Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg 39118, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany.
- Esraa S I Mohamed
- Department of Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg 39118, Germany
- Artur Matysiak
- Research Group Comparative Neuroscience, Leibniz Institute for Neurobiology, Magdeburg, Germany
5
Clayton KK, Asokan MM, Watanabe Y, Hancock KE, Polley DB. Behavioral Approaches to Study Top-Down Influences on Active Listening. Front Neurosci 2021; 15:666627. PMID: 34305516. PMCID: PMC8299106. DOI: 10.3389/fnins.2021.666627.
Abstract
The massive network of descending corticofugal projections has long been recognized by anatomists, but its functional contributions to sound processing and auditory-guided behaviors remain a mystery. Most efforts to characterize the auditory corticofugal system have been inductive, with function inferred from a few studies employing a wide range of methods to manipulate varying limbs of the descending system in a variety of species and preparations. An alternative approach, which we focus on here, is to first establish auditory-guided behaviors that reflect the contribution of top-down influences on auditory perception. To this end, we postulate that auditory corticofugal systems may contribute to active listening behaviors in which the timing of bottom-up sound cues can be predicted from top-down signals arising from cross-modal cues, temporal integration, or self-initiated movements. Here, we describe a behavioral framework for investigating how auditory perceptual performance is enhanced when subjects can anticipate the timing of upcoming target sounds. Our first paradigm, studied both in human subjects and mice, reports species-specific differences in visually cued expectation of sound onset in a signal-in-noise detection task. A second paradigm performed in mice reveals the benefits of temporal regularity as a perceptual grouping cue when detecting repeating target tones in complex background noise. A final behavioral approach demonstrates significant improvements in frequency discrimination threshold and perceptual sensitivity when auditory targets are presented at a predictable temporal interval following motor self-initiation of the trial.
Collectively, these three behavioral approaches identify paradigms to study top-down influences on sound perception that are amenable to head-fixed preparations in genetically tractable animals, where it is possible to monitor and manipulate particular nodes of the descending auditory pathway with unparalleled precision.
Affiliation(s)
- Kameron K. Clayton
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, United States
- Meenakshi M. Asokan
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, United States
- Yurika Watanabe
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, United States
- Kenneth E. Hancock
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, United States
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA, United States
- Daniel B. Polley
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, United States
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA, United States
6
Wright BA, Dai H. Humans attend to signal duration but not temporal structure for sound detection: Steady-state versus pulse-train signals. J Acoust Soc Am 2021; 149:4543. PMID: 34241429. DOI: 10.1121/10.0005283.
Abstract
Most sounds fluctuate in amplitude, but do listeners attend to the temporal structure of those fluctuations when trying to detect the mere presence of those sounds? This question was addressed by leading listeners to expect a faint sound with a fixed temporal structure (pulse train or steady-state tone) and total duration (300 ms) and measuring their ability to detect equally faint sounds of unexpected temporal structure (pulse train when expecting steady state) and/or total duration (<300 ms). Detection was poorer for sounds with unexpected than with expected total durations, replicating previous outcomes, but was uninfluenced by the temporal structure of the expected sound. The results disagree with computational predictions of the multiple-look model, which posits that listeners attend to both the total duration and temporal structure of the signal, but agree with predictions of the matched-window energy-detector model, which posits that listeners attend to the total duration but not the temporal structure of the signal. Moreover, the matched-window energy-detector model could also account for previous results, including some that were originally interpreted as supporting the multiple-look model. Taken together, at least when detecting faint sounds, listeners appear to attend to the total duration of expected sounds but to ignore their detailed temporal structure.
Affiliation(s)
- Beverly A Wright
- Department of Communication Sciences and Disorders, 2240 Campus Drive, Northwestern University, Evanston, Illinois 60208, USA
- Huanping Dai
- Department of Speech, Language, and Hearing Sciences, College of Science, 1131 East Second Street, University of Arizona, Tucson, Arizona 85721, USA
7
Reeves A, Seluakumaran K, Scharf B. Contralateral proximal interference. J Acoust Soc Am 2021; 149:3352. PMID: 34241123. DOI: 10.1121/10.0004786.
Abstract
A contralateral "cue" tone presented in continuous broadband noise both lowers the threshold of a signal tone by guiding attention to it and raises its threshold by interference. Here, signal tones were fixed in duration (40 ms, 52 ms with ramps), frequency (1500 Hz), timing, and level, so attention did not need guidance. Interference by contralateral cues was studied in relation to cue-signal proximity, cue-signal temporal overlap, and cue-signal order (cue after: backward interference, BI; or cue first: forward interference, FI). Cues, also ramped, were 12 dB above the signal level. Long cues (300 or 600 ms) raised thresholds by 5.3 dB when the signal and cue overlapped and by 5.1 dB in FI and 3.2 dB in BI when cues and signals were separated by 40 ms. Short cues (40 ms) raised thresholds by 4.5 dB in FI and 4.0 dB in BI for separations of 7 to 40 ms, but by ∼13 dB when simultaneous and in phase. FI and BI are comparable in magnitude and hardly increase when the signal is close in time to abrupt cue transients. These results do not support the notion that masking of the signal is due to the contralateral cue onset/offset transient response. Instead, sluggish attention or temporal integration may explain contralateral proximal interference.
Affiliation(s)
- Adam Reeves
- Department of Psychology, Northeastern University, Boston, Massachusetts 02115, USA
- Kumar Seluakumaran
- Faculty of Medicine, Department of Physiology, University of Malaya, 50603 Kuala Lumpur, Malaysia
- Bertram Scharf
- Department of Psychology, Northeastern University, Boston, Massachusetts 02115, USA
8
Heil P. Comparing and modeling absolute auditory thresholds in an alternative-forced-choice and a yes-no procedure. Hear Res 2021; 403:108164. PMID: 33453643. DOI: 10.1016/j.heares.2020.108164.
Abstract
Detecting sounds in quiet is arguably the simplest task performed by an auditory system, but the underlying mechanisms are still a matter of debate. Threshold stimulus levels depend not only on the physical properties of the sounds to be detected but also on the experimental procedure used to measure them. Here, thresholds of human subjects were measured for sounds consisting of different numbers of bursts using both an alternative-forced-choice and a yes-no procedure in the same experimental sessions. Thresholds measured with the yes-no procedure were typically higher than thresholds measured with the alternative-forced choice procedure. The difference between the two thresholds decreased as stimulus duration increased. It also varied between subjects and varied with the probability of false alarms in the yes-no procedure. It is shown that a previously proposed model of detection (Heil et al., Hear Res 2017) can account for these findings better than other models. It can also account for the shapes of the psychometric functions. The model is consistent with basic concepts of signal detection theory but is based on a decision variable that follows Poisson statistics. It also differs from other models of detection with respect to the transformation of the stimulus into the decision variable. The findings in this study further support the model.
Affiliation(s)
- Peter Heil
- Department of Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany.
9
Schutz M, Gillard J. On the generalization of tones: A detailed exploration of non-speech auditory perception stimuli. Sci Rep 2020; 10:9520. PMID: 32533008. PMCID: PMC7293323. DOI: 10.1038/s41598-020-63132-2.
Abstract
The dynamic changes in natural sounds’ temporal structures convey important event-relevant information. However, prominent researchers have previously expressed concern that non-speech auditory perception research disproportionately uses simplistic stimuli lacking the temporal variation found in natural sounds. A growing body of work now demonstrates that some conclusions and models derived from experiments using simplistic tones fail to generalize, raising important questions about the types of stimuli used to assess the auditory system. To explore the issue empirically, we conducted a novel, large-scale survey of non-speech auditory perception research from four prominent journals. A detailed analysis of 1017 experiments from 443 articles reveals that 89% of stimuli employ amplitude envelopes lacking the dynamic variations characteristic of non-speech sounds heard outside the laboratory. Given differences in task outcomes and even the underlying perceptual strategies evoked by dynamic vs. invariant amplitude envelopes, this raises important questions of broad relevance to psychologists and neuroscientists alike. This lack of exploration of a property increasingly recognized as playing a crucial role in perception suggests future research using stimuli with time-varying amplitude envelopes holds significant potential for furthering our understanding of the auditory system’s basic processing capabilities.
Affiliation(s)
- Michael Schutz
- School of the Arts, McMaster University, Hamilton, Canada; Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Canada.
- Jessica Gillard
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Canada
10
Norman LJ, Thaler L. Stimulus uncertainty affects perception in human echolocation: Timing, level, and spectrum. J Exp Psychol Gen 2020; 149:2314-2331. PMID: 32324025. PMCID: PMC7727089. DOI: 10.1037/xge0000775.
Abstract
The human brain may use recent sensory experience to create sensory templates that are then compared to incoming sensory input, that is, "knowing what to listen for." This can lead to greater perceptual sensitivity, as long as the relevant properties of the target stimulus can be reliably estimated from past sensory experiences. Echolocation is an auditory skill probably best understood in bats, but humans can also echolocate. Here we investigated for the first time whether echolocation in humans involves the use of sensory templates derived from recent sensory experiences. Our results showed that when there was certainty in the acoustic properties of the echo relative to the emission, either in temporal onset, spectral content or level, people detected the echo more accurately than when there was uncertainty. In addition, we found that people were more accurate when the emission's spectral content was certain but, surprisingly, not when either its level or temporal onset was certain. Importantly, the lack of an effect of temporal onset of the emission is counter to that found previously for tasks using nonecholocation sounds, suggesting that the underlying mechanisms might be different for echolocation and nonecholocation sounds. Importantly, the effects of stimulus certainty were no different for people with and without experience in echolocation, suggesting that stimulus-specific sensory templates can be used in a skill that people have never used before. From an applied perspective our results suggest that echolocation instruction should encourage users to make clicks that are similar to one another in their spectral content.
11
Herbst SK, Obleser J. Implicit temporal predictability enhances pitch discrimination sensitivity and biases the phase of delta oscillations in auditory cortex. Neuroimage 2019; 203:116198. PMID: 31539590. DOI: 10.1016/j.neuroimage.2019.116198.
Abstract
Can human listeners use implicit temporal contingencies in auditory input to form temporal predictions, and if so, how are these predictions represented endogenously? To assess this question, we implicitly manipulated temporal predictability in an auditory pitch discrimination task: unbeknownst to participants, the pitch of the standard tone could either be deterministically predictive of the temporal onset of the target tone, or convey no predictive information. Predictive and non-predictive conditions were presented interleaved in one stream, and separated by variable inter-stimulus intervals such that there was no dominant stimulus rhythm throughout. Even though participants were unaware of the implicit temporal contingencies, pitch discrimination sensitivity (the slope of the psychometric function) increased when the onset of the target tone was predictable in time (N = 49, 28 female, 21 male). Concurrently recorded EEG data (N = 24) revealed that standard tones that conveyed temporal predictions evoked a more negative N1 component than non-predictive standards. We observed no significant differences in oscillatory power or phase coherence between conditions during the foreperiod. Importantly, the phase angle of delta oscillations (1-3 Hz) in auditory areas in the post-standard and pre-target time windows predicted behavioral pitch discrimination sensitivity. This suggests that temporal predictions are encoded in delta oscillatory phase during the foreperiod interval. In sum, we show that auditory perception benefits from implicit temporal contingencies, and provide evidence for a role of slow neural oscillations in the endogenous representation of temporal predictions, in absence of exogenously driven entrainment to rhythmic input.
Affiliation(s)
- Sophie K Herbst
- Department of Psychology, University of Lübeck, Ratzeburger Allee 160, 23552, Lübeck, Germany; NeuroSpin, CEA, DRF/Joliot; INSERM Cognitive Neuroimaging Unit; Université Paris-Sud, Université Paris-Saclay; Bât 145Gif s/ Yvette, 91190 France.
- Jonas Obleser
- Department of Psychology, University of Lübeck, Ratzeburger Allee 160, 23552, Lübeck, Germany
12
Perceptual-learning evidence for inter-onset-interval- and frequency-specific processing of fast rhythms. Atten Percept Psychophys 2019; 81:533-542. DOI: 10.3758/s13414-018-1631-7.
Abstract
Rhythm is fundamental to music and speech, yet little is known about how even simple rhythmic patterns are processed. Here we investigated the processing of isochronous rhythms in the short inter-onset-interval (IOI) range (IOIs < 250-400 ms) using a perceptual-learning paradigm. Trained listeners (n=8) practiced anisochrony detection with a 100-ms IOI marked by 1-kHz tones, 720 trials per day for 7 days. Between pre- and post-training tests, trained listeners improved significantly more than controls (no training; n=8) on the anisochrony-detection condition that the trained listeners practiced. However, the learning on anisochrony detection did not generalize to temporal-interval discrimination with the trained IOI (100 ms) and marker frequency (1 kHz) or to anisochrony detection with an untrained marker frequency (4 kHz or variable frequency vs. 1 kHz), and generalized negatively to anisochrony detection with an untrained IOI (200 ms vs. 100 ms). Further, pre-training thresholds were correlated among nearly all of the conditions with the same IOI (100-ms IOIs), but not between conditions with different IOIs (100-ms vs. 200-ms IOIs). Thus, it appears that some task-, IOI-, and frequency-specific processes are involved in fast-rhythm processing. These outcomes are most consistent with a holistic rhythm-processing model in which a holistic "image" of the stimulus is compared to a stimulus-specific template.
13
Francis NA, Zhao W, Guinan JJ Jr. Auditory Attention Reduced Ear-Canal Noise in Humans by Reducing Subject Motion, Not by Medial Olivocochlear Efferent Inhibition: Implications for Measuring Otoacoustic Emissions During a Behavioral Task. Front Syst Neurosci 2018; 12:42. PMID: 30271329. PMCID: PMC6146202. DOI: 10.3389/fnsys.2018.00042.
Abstract
Otoacoustic emissions (OAEs) are often measured to non-invasively determine activation of medial olivocochlear (MOC) efferents in humans. Usually these experiments assume that ear-canal noise remains constant. However, changes in ear-canal noise have been reported in some behavioral experiments. We studied the variability of ear-canal noise in eight subjects who performed a two-interval-forced-choice (2IFC) sound-level-discrimination task on monaural tone pips in masker noise. Ear-canal noise was recorded directly from the unstimulated ear opposite the task ear. Recordings were also made with similar sounds presented, but no task done. In task trials, ear-canal noise was reduced at the time the subject did the discrimination, relative to the ear-canal noise level earlier in the trial. In two subjects, there was a decrease in ear-canal noise, primarily at 1-2 kHz, with a time course similar to that expected from inhibition by MOC activity elicited by the task-ear masker noise. These were the only subjects with spontaneous OAEs (SOAEs). We hypothesize that the SOAEs were inhibited by MOC activity elicited by the task-ear masker. Based on the standard rationale in OAE experiments that large bursts of ear-canal noise are artifacts due to subject movement, ear-canal noise bursts above a sound-level criterion were removed. As the criterion was lowered and more high- and moderate-level ear-canal noise bursts were removed, the reduction in ear-canal noise level at the time of the 2IFC discrimination decreased to almost zero, for the six subjects without SOAEs. This pattern is opposite that expected from MOC-induced inhibition (which is greater on lower-level sounds), but can be explained by the hypothesis that subjects move less and create fewer bursts of ear-canal noise when they concentrate on doing the task. In no-task trials for these six subjects, the ear-canal noise level was little changed throughout the trial. 
Our results show that measurements of MOC effects on OAEs must measure and account for changes in ear-canal noise, especially in behavioral experiments. The results also provide a novel way of showing the time course of the buildup of attention via the time course of the reduction in ear-canal noise.
Affiliation(s)
- Nikolas A. Francis
- Speech and Hearing Bioscience and Technology, Harvard-Massachusetts Institute of Technology (MIT) Division of Health Sciences and Technology, Cambridge, MA, United States
- Eaton Peabody Laboratories, Department of Otolaryngology, Massachusetts Eye and Ear, Boston, MA, United States
- Wei Zhao
- Eaton Peabody Laboratories, Department of Otolaryngology, Massachusetts Eye and Ear, Boston, MA, United States
- Department of Otolaryngology, Harvard Medical School, Harvard University, Boston, MA, United States
- John J. Guinan Jr.
- Speech and Hearing Bioscience and Technology, Harvard-Massachusetts Institute of Technology (MIT) Division of Health Sciences and Technology, Cambridge, MA, United States
- Eaton Peabody Laboratories, Department of Otolaryngology, Massachusetts Eye and Ear, Boston, MA, United States
- Department of Otolaryngology, Harvard Medical School, Harvard University, Boston, MA, United States
14
The Magnitude, But Not the Sign, of MT Single-Trial Spike-Time Correlations Predicts Motion Detection Performance. J Neurosci 2018; 38:4399-4417. DOI: 10.1523/jneurosci.1182-17.2018.
Abstract
Spike-time correlations capture the short timescale covariance between the activity of neurons on a single trial. These correlations can significantly vary in magnitude and sign from trial to trial, and have been proposed to contribute to information encoding in visual cortex. While monkeys performed a motion-pulse detection task, we examined the behavioral impact of both the magnitude and sign of single-trial spike-time correlations between two nonoverlapping pools of middle temporal (MT) neurons. We applied three single-trial measures of spike-time correlation between our multiunit MT spike trains (Pearson's, absolute value of Pearson's, and mutual information), and examined the degree to which they predicted a subject's performance on a trial-by-trial basis. We found that on each trial, positive and negative spike-time correlations were almost equally likely, and, once the correlational sign was accounted for, all three measures were similarly predictive of behavior. Importantly, just before the behaviorally relevant motion pulse occurred, single-trial spike-time correlations were as predictive of the performance of the animal as single-trial firing rates. While firing rates were positively associated with behavioral outcomes, the presence of either strong positive or negative correlations had a detrimental effect on behavior. These correlations occurred on short timescales, and the strongest positive and negative correlations modulated behavioral performance by ∼9%, compared with trials with no correlations. We suggest a model where spike-time correlations are associated with a common noise source for the two MT pools, which in turn decreases the signal-to-noise ratio of the integrated signals that drive motion detection.
Significance Statement: Previous work has shown that spike-time correlations occurring on short timescales can affect the encoding of visual inputs.
Although spike-time correlations significantly vary in both magnitude and sign across trials, their impact on trial-by-trial behavior is not fully understood. Using neural recordings from area MT (middle temporal) in monkeys performing a motion-detection task using a brief stimulus, we found that both positive and negative spike-time correlations predicted behavioral responses as well as firing rate on a trial-by-trial basis. We propose that strong positive and negative spike-time correlations decreased behavioral performance by reducing the signal-to-noise ratio of integrated MT neural signals.
|
15
|
Wang M, Kong L, Zhang C, Wu X, Li L. Speaking rhythmically improves speech recognition under "cocktail-party" conditions. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2018; 143:EL255. [PMID: 29716270 DOI: 10.1121/1.5030518] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
This study examines whether speech rhythm affects speech recognition under "cocktail-party" conditions. Against a two-talker masker, but not a speech-spectrum noise masker, recognition of the last (third) keyword in a normal rhythmic sentence was significantly better than that of the first keyword. However, this word-position-related speech-recognition improvement disappeared for rhythmically hybrid target sentences that were constructed by grouping parts from different sentences with different artificially modulated rhythms (rates) (fast, normal, or slow). Thus, the normal rhythm with a constant rate plays a role in improving speech recognition against informational speech masking, probably through a build-up of temporal prediction for target words.
Affiliation(s)
- Mengyuan Wang
- Beijing Key Lab of Applied Experimental Psychology, School of Psychology, Beijing Normal University, Beijing 100875, China
- Lingzhi Kong
- Allied Health School, Beijing Language and Culture University, Beijing 100083, China
- Changxin Zhang
- Faculty of Education, East China Normal University, Shanghai 200062, China
- Xihong Wu
- Department of Machine Intelligence, Peking University, Beijing 100871, China
- Liang Li
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing 100080, China
|
16
|
Shinn-Cunningham B. Cortical and Sensory Causes of Individual Differences in Selective Attention Ability Among Listeners With Normal Hearing Thresholds. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2017; 60:2976-2988. [PMID: 29049598 PMCID: PMC5945067 DOI: 10.1044/2017_jslhr-h-17-0080] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/27/2017] [Revised: 06/23/2017] [Accepted: 07/05/2017] [Indexed: 05/28/2023]
Abstract
PURPOSE This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. METHOD The results from neuroscience and psychoacoustics are reviewed. RESULTS In noisy settings, listeners focus their attention by engaging cortical brain networks to suppress unimportant sounds; they then can analyze and understand an important sound, such as speech, amidst competing sounds. Differences in the efficacy of top-down control of attention can affect communication abilities. In addition, subclinical deficits in sensory fidelity can disrupt the ability to perceptually segregate sound sources, interfering with selective attention, even in listeners with NHTs. Studies of variability in control of attention and in sensory coding fidelity may help to isolate and identify some of the causes of communication disorders in individuals presenting at the clinic with "normal hearing." CONCLUSIONS How well an individual with NHTs can understand speech amidst competing sounds depends not only on the sound being audible but also on the integrity of cortical control networks and the fidelity of the representation of suprathreshold sound. Understanding the root cause of difficulties experienced by listeners with NHTs ultimately can lead to new, targeted interventions that address specific deficits affecting communication in noise. PRESENTATION VIDEO http://cred.pubs.asha.org/article.aspx?articleid=2601617.
Affiliation(s)
- Barbara Shinn-Cunningham
- Center for Research in Sensory Communication and Emerging Neural Technology, Boston University, MA
|
17
|
Wright BA, Fitzgerald MB. Detection of tones of unexpected frequency in amplitude-modulated noise. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2017; 142:2043. [PMID: 29092596 DOI: 10.1121/1.5007718] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Detection of a tonal signal in amplitude-modulated noise can improve with increases in noise bandwidth if the pattern of amplitude fluctuations is uniform across frequency, a phenomenon termed comodulation masking release (CMR). Most explanations for CMR rely on an assumption that listeners monitor frequency channels both at and remote from the signal frequency in conditions that yield the effect. To test this assumption, detectability was assessed for signals presented at expected and unexpected frequencies in wideband amplitude-modulated noise. Detection performance was high even for signals of unexpected frequency, suggesting that listeners were monitoring multiple frequency channels, as has been assumed.
Affiliation(s)
- Beverly A Wright
- Department of Communication Sciences and Disorders and Knowles Hearing Center, Northwestern University, 2240 Campus Drive, Evanston, Illinois 60208, USA
- Matthew B Fitzgerald
- Department of Otolaryngology/Head and Neck Surgery, Stanford University, Stanford Ear Institute, 2452 Watson Court, Palo Alto, California 94303, USA
|
18
|
A probabilistic Poisson-based model accounts for an extensive set of absolute auditory threshold measurements. Hear Res 2017; 353:135-161. [DOI: 10.1016/j.heares.2017.06.011] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/09/2016] [Revised: 06/19/2017] [Accepted: 06/25/2017] [Indexed: 01/11/2023]
|
19
|
Guo W, Clause AR, Barth-Maron A, Polley DB. A Corticothalamic Circuit for Dynamic Switching between Feature Detection and Discrimination. Neuron 2017; 95:180-194.e5. [PMID: 28625486 PMCID: PMC5568886 DOI: 10.1016/j.neuron.2017.05.019] [Citation(s) in RCA: 112] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2016] [Revised: 03/03/2017] [Accepted: 05/09/2017] [Indexed: 01/05/2023]
Abstract
Sensory processing must be sensitive enough to encode faint signals near the noise floor but selective enough to differentiate between similar stimuli. Here we describe a layer 6 corticothalamic (L6 CT) circuit in the mouse auditory forebrain that alternately biases sound processing toward hypersensitivity and improved behavioral sound detection or dampened excitability and enhanced sound discrimination. Optogenetic activation of L6 CT neurons could increase or decrease the gain and tuning precision in the thalamus and all layers of the cortical column, depending on the timing between L6 CT activation and sensory stimulation. The direction of neural and perceptual modulation - enhanced detection at the expense of discrimination or vice versa - arose from the interaction of L6 CT neurons and subnetworks of fast-spiking inhibitory neurons that reset the phase of low-frequency cortical rhythms. These findings suggest that L6 CT neurons contribute to the resolution of the competing demands of detection and discrimination.
Affiliation(s)
- Wei Guo
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA 02114, USA; Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA 02215, USA
- Amanda R Clause
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA 02114, USA
- Asa Barth-Maron
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA 02114, USA
- Daniel B Polley
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA 02114, USA; Department of Otolaryngology, Harvard Medical School, Boston, MA 02114, USA.
|
20
|
Implicit variations of temporal predictability: Shaping the neural oscillatory and behavioural response. Neuropsychologia 2017; 101:141-152. [DOI: 10.1016/j.neuropsychologia.2017.05.019] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2017] [Revised: 04/21/2017] [Accepted: 05/14/2017] [Indexed: 11/20/2022]
|
21
|
Wu C, Zheng Y, Li J, Zhang B, Li R, Wu H, She S, Liu S, Peng H, Ning Y, Li L. Activation and Functional Connectivity of the Left Inferior Temporal Gyrus during Visual Speech Priming in Healthy Listeners and Listeners with Schizophrenia. Front Neurosci 2017; 11:107. [PMID: 28360829 PMCID: PMC5350153 DOI: 10.3389/fnins.2017.00107] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2016] [Accepted: 02/20/2017] [Indexed: 11/13/2022] Open
Abstract
Under a "cocktail-party" listening condition with multiple people talking, people with schizophrenia benefit less than healthy people from the use of visual-speech (lipreading) priming (VSP) cues to improve speech recognition. The neural mechanisms underlying the unmasking effect of VSP remain unknown. This study investigated the brain substrates underlying the unmasking effect of VSP in healthy listeners and the schizophrenia-induced changes in those substrates. Using functional magnetic resonance imaging, brain activation and functional connectivity for the contrast of the VSP listening condition vs. the visual non-speech priming (VNSP) condition were examined in 16 healthy listeners (27.4 ± 8.6 years old, 9 females and 7 males) and 22 listeners with schizophrenia (29.0 ± 8.1 years old, 8 females and 14 males). The results showed that in healthy listeners, but not listeners with schizophrenia, the VSP-induced activation (against the VNSP condition) of the left posterior inferior temporal gyrus (pITG) was significantly correlated with the VSP-induced improvement in target-speech recognition against speech masking. Compared with healthy listeners, listeners with schizophrenia showed significantly lower VSP-induced activation of the left pITG and reduced functional connectivity of the left pITG with the bilateral Rolandic operculum, bilateral superior temporal gyrus (STG), and left insula. Thus, the left pITG and its functional connectivity may be the brain substrates of the unmasking effect of VSP, presumably through enhancing both the processing of target visual-speech signals and the inhibition of masking-speech signals. In people with schizophrenia, the reduced unmasking effect of VSP on speech recognition may be associated with a schizophrenia-related reduction in VSP-induced activation and functional connectivity of the left pITG.
Affiliation(s)
- Chao Wu
- Beijing Key Laboratory of Behavior and Mental Health, Key Laboratory on Machine Perception, Ministry of Education, School of Psychological and Cognitive Sciences, Peking University, Beijing, China; School of Life Sciences, Peking University, Beijing, China; School of Psychology, Beijing Normal University, Beijing, China
- Yingjun Zheng
- The Affiliated Brain Hospital of Guangzhou Medical University (Guangzhou Huiai Hospital), Guangzhou, China
- Juanhua Li
- The Affiliated Brain Hospital of Guangzhou Medical University (Guangzhou Huiai Hospital), Guangzhou, China
- Bei Zhang
- The Affiliated Brain Hospital of Guangzhou Medical University (Guangzhou Huiai Hospital), Guangzhou, China
- Ruikeng Li
- The Affiliated Brain Hospital of Guangzhou Medical University (Guangzhou Huiai Hospital), Guangzhou, China
- Haibo Wu
- The Affiliated Brain Hospital of Guangzhou Medical University (Guangzhou Huiai Hospital), Guangzhou, China
- Shenglin She
- The Affiliated Brain Hospital of Guangzhou Medical University (Guangzhou Huiai Hospital), Guangzhou, China
- Sha Liu
- The Affiliated Brain Hospital of Guangzhou Medical University (Guangzhou Huiai Hospital), Guangzhou, China
- Hongjun Peng
- The Affiliated Brain Hospital of Guangzhou Medical University (Guangzhou Huiai Hospital), Guangzhou, China
- Yuping Ning
- The Affiliated Brain Hospital of Guangzhou Medical University (Guangzhou Huiai Hospital), Guangzhou, China
- Liang Li
- Beijing Key Laboratory of Behavior and Mental Health, Key Laboratory on Machine Perception, Ministry of Education, School of Psychological and Cognitive Sciences, Peking University, Beijing, China; The Affiliated Brain Hospital of Guangzhou Medical University (Guangzhou Huiai Hospital), Guangzhou, China; Beijing Institute for Brain Disorder, Capital Medical University, Beijing, China
|
22
|
Shinn-Cunningham B, Best V, Lee AKC. Auditory Object Formation and Selection. SPRINGER HANDBOOK OF AUDITORY RESEARCH 2017. [DOI: 10.1007/978-3-319-51662-2_2] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
|
23
|
ten Oever S, van Atteveldt N, Sack AT. Increased Stimulus Expectancy Triggers Low-frequency Phase Reset during Restricted Vigilance. J Cogn Neurosci 2015; 27:1811-22. [DOI: 10.1162/jocn_a_00820] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Temporal cues can be used to selectively attend to relevant information during abundant sensory stimulation. However, such cues differ vastly in the accuracy of their temporal estimates, ranging from very predictable to very unpredictable. When cues are strongly predictable, attention may facilitate selective processing by aligning relevant incoming information to high neuronal excitability phases of ongoing low-frequency oscillations. However, top–down effects on ongoing oscillations when temporal cues have some predictability, but also contain temporal uncertainties, are unknown. Here, we experimentally created such a situation of mixed predictability and uncertainty: A target could occur within a limited time window after cue but was always unpredictable in exact timing. Crucially, to assess top–down effects in such a mixed situation, we manipulated target probability. High target likelihood, compared with low likelihood, enhanced delta oscillations more strongly as measured by evoked power and intertrial coherence. Moreover, delta phase modulated detection rates for probable targets. Half a period in the delta frequency range matches the target occurrence window, suggesting that low-frequency phase reset is engaged to produce a long window of high excitability when event timing is uncertain within a restricted temporal window.
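Intertrial coherence, one of the measures reported above, is commonly computed as the length of the mean resultant vector of per-trial phase angles at a given frequency: 1 means identical phase on every trial, 0 means no phase alignment. A minimal sketch on synthetic phases (illustrative data, not the study's EEG pipeline):

```python
import numpy as np

def intertrial_coherence(phases):
    """ITC: length of the mean resultant vector of per-trial phases, in [0, 1]."""
    return float(np.abs(np.exp(1j * np.asarray(phases)).mean()))

rng = np.random.default_rng(1)
aligned = rng.normal(0.0, 0.3, size=100)          # phases clustered near 0 rad
uniform = rng.uniform(-np.pi, np.pi, size=1000)   # no phase alignment across trials
print(intertrial_coherence(aligned), intertrial_coherence(uniform))
```

A phase reset by a predictable event pushes trials toward the "aligned" case, which is why higher target likelihood shows up as higher ITC.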
|
24
|
Effect of echolocation behavior-related constant frequency-frequency modulation sound on the frequency tuning of inferior collicular neurons in Hipposideros armiger. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2015; 201:783-94. [PMID: 26026915 DOI: 10.1007/s00359-015-1018-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2014] [Revised: 05/11/2015] [Accepted: 05/19/2015] [Indexed: 12/19/2022]
Abstract
In constant frequency-frequency modulation (CF-FM) bats, the echolocation signals include both CF and FM components, yet the role of such complex acoustic signals in frequency resolution remains unknown. Using CF and CF-FM echolocation signals as acoustic stimuli, responses of inferior collicular (IC) neurons of Hipposideros armiger were obtained by extracellular recording. We tested the effect of preceding CF or CF-FM sounds on the shape of the frequency tuning curves (FTCs) of IC neurons. Both CF-FM and CF sounds reduced the number of IC neurons whose FTCs had a tailed lower-frequency side, but more neurons underwent this conversion after the addition of CF-FM sound than after CF sound. The Q20 value of the FTCs showed the largest increase with the addition of CF-FM sound. Moreover, only CF-FM sound increased the slope of the FTCs, and this increase occurred mainly at the lower-frequency edge. These results suggest that, more than CF sound, CF-FM sound can increase the accuracy of echo frequency analysis and cut off low-frequency elements arising from the bats' habitat.
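The Q20 value reported here is a standard sharpness index: best frequency divided by the FTC bandwidth measured 20 dB above the minimum threshold, so a higher Q means narrower tuning. A minimal sketch on a synthetic V-shaped tuning curve (the frequencies and curve shape are invented for illustration):

```python
import numpy as np

def q_value(freqs_khz, thresholds_db, db_above=20.0):
    """Qn sharpness: best frequency / FTC bandwidth n dB above minimum threshold."""
    i_bf = int(np.argmin(thresholds_db))          # best frequency = tip of the FTC
    cut = thresholds_db[i_bf] + db_above
    inside = np.where(thresholds_db <= cut)[0]    # frequencies below the cut level
    bandwidth = freqs_khz[inside[-1]] - freqs_khz[inside[0]]
    return freqs_khz[i_bf] / bandwidth

freqs = np.linspace(50.0, 80.0, 61)               # kHz, around a hypothetical 65-kHz BF
thr = 20.0 + 0.5 * (freqs - 65.0) ** 2            # V-shaped threshold curve (dB SPL)
print(q_value(freqs, thr))                        # higher value -> sharper tuning
```

An increase in Q20 after CF-FM sound, as in the abstract, would correspond to the 20-dB bandwidth shrinking around an unchanged best frequency.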
|
25
|
Behaviorally gated reduction of spontaneous discharge can improve detection thresholds in auditory cortex. J Neurosci 2014; 34:4076-81. [PMID: 24623785 DOI: 10.1523/jneurosci.4825-13.2014] [Citation(s) in RCA: 47] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Animals often listen selectively for particular sounds, a strategy that could alter neural encoding mechanisms to maximize the ability to detect the target. Here, we recorded auditory cortex neuron responses in well trained, freely moving gerbils as they performed a tone detection task. Each trial was initiated by the animal, providing a predictable time window during which to listen. No sound was presented on nogo trials, permitting us to assess spontaneous activity on trials in which a signal could have been expected, but was not delivered. Immediately after animals initiated a trial, auditory cortex neurons displayed a 26% reduction in spontaneous activity. Moreover, when stimulus-driven discharge rate was referenced to this reduced baseline, a larger fraction of auditory cortex neurons displayed a detection threshold within 10 dB of the behavioral threshold. These findings suggest that auditory cortex spontaneous discharge rate can be modulated transiently during task performance, thereby increasing the signal-to-noise ratio and enhancing signal detection.
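The effect described above, where referencing driven rates to a lowered spontaneous baseline yields lower detection thresholds, can be illustrated with a toy criterion rule. The levels, rates, and criterion factor below are assumptions for demonstration, not the study's analysis:

```python
import numpy as np

def detection_threshold(levels_db, driven_rates, baseline_rate, criterion=1.2):
    """Lowest level whose driven rate exceeds baseline by a fixed criterion factor.
    Returns None if no level is detected."""
    for lvl, rate in zip(levels_db, driven_rates):
        if rate > criterion * baseline_rate:
            return lvl
    return None

levels = np.arange(0, 60, 10)                        # stimulus levels (dB SPL)
rates = np.array([4.0, 4.6, 6.5, 9.0, 14.0, 22.0])   # spikes/s at each level
resting = 5.0                                        # spontaneous rate outside the task
engaged = resting * (1 - 0.26)                       # 26% reduction after trial onset
print(detection_threshold(levels, rates, resting),
      detection_threshold(levels, rates, engaged))
```

With the same driven rates, the lowered baseline lets a weaker stimulus clear the criterion, moving the neural threshold closer to the behavioral one.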
|
26
|
The time course of temporal attention effects on nonconscious prime processing. Atten Percept Psychophys 2014; 75:1667-86. [PMID: 23943498 DOI: 10.3758/s13414-013-0515-0] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
We presented a masked prime at various prime-target intervals (PTIs) before a target that required a speeded motor response and investigated the impact of temporal attention on nonconscious prime processing. The allocation of temporal attention to the target was manipulated by presenting an accessory tone and comparing that condition with a no-tone condition. The results showed that, independently of the visibility of the prime, temporal attention led to an enhanced effect of prime-target congruency on reaction times, and that the amount of the enhancement increased with increasing PTIs. This pattern is consistent with the assumption that temporal attention and increasing PTI each strengthen nonconscious prime processing; it argues against the hypothesis that temporal attention narrows the time period in which the prime may affect target processing. An accumulator model is proposed in which target-related temporal attention increases the accumulation rate for masked primes and thus enhances the impact of the prime on the speed of choice decisions.
|
27
|
Dhamani I, Leung J, Carlile S, Sharma M. Switch attention to listen. Sci Rep 2013; 3:1297. [PMID: 23416613 PMCID: PMC3575018 DOI: 10.1038/srep01297] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2012] [Accepted: 02/01/2013] [Indexed: 11/09/2022] Open
Abstract
The aim of this research was to evaluate the ability to switch attention and selectively attend to relevant information in children (10-15 years) with persistent listening difficulties in noisy environments. A wide battery of clinical tests indicated that children with complaints of listening difficulties had otherwise normal hearing sensitivity and auditory processing skills. Here we show that these children are markedly slower to switch their attention compared to their age-matched peers. The results suggest poor attention switching, lack of response inhibition and/or poor listening effort consistent with a predominantly top-down (central) information processing deficit. A deficit in the ability to switch attention across talkers would provide the basis for this otherwise hidden listening disability, especially in noisy environments involving multiple talkers such as classrooms.
Affiliation(s)
- Imran Dhamani
- Audiology Section, Macquarie University and The Hearing CRC.
|
28
|
Wu C, Cao S, Wu X, Li L. Temporally pre-presented lipreading cues release speech from informational masking. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2013; 133:EL281-EL285. [PMID: 23556692 DOI: 10.1121/1.4794933] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Listeners can use temporally pre-presented content cues and concurrently presented lipreading cues to improve speech recognition under masking conditions. This study investigated whether temporally pre-presented lipreading cues also unmask speech. In a test trial, before the target sentence was co-presented with the masker, either target-matched (priming) lipreading video or static face (priming-control) video was presented in quiet. Participants' target-recognition performance was improved by a shift from the priming-control condition to the priming condition when the masker was speech but not noise. This release from informational masking suggests a combined effect of working memory and cross-modal integration on selective attention to target speech.
Affiliation(s)
- Chao Wu
- Department of Psychology, Department of Machine Intelligence, Speech and Hearing Research Center, Key Laboratory on Machine Perception, Ministry of Education, Peking University, Beijing 100871, China.
|
29
|
Korzyukov O, Sattler L, Behroozmand R, Larson CR. Neuronal mechanisms of voice control are affected by implicit expectancy of externally triggered perturbations in auditory feedback. PLoS One 2012; 7:e41216. [PMID: 22815974 PMCID: PMC3398890 DOI: 10.1371/journal.pone.0041216] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2012] [Accepted: 06/18/2012] [Indexed: 11/18/2022] Open
Abstract
Accurate vocal production relies on several factors including sensory feedback and the ability to predict future challenges to the control processes. Repetitive patterns of perturbations in sensory feedback by themselves elicit implicit expectations in the vocal control system regarding the timing, quality and direction of perturbations. In the present study, the predictability of voice pitch-shifted auditory feedback was experimentally manipulated. A block of trials where all pitch-shift stimuli were upward, and therefore predictable was contrasted against an unpredictable block of trials in which the stimulus direction was randomized between upward and downward pitch-shifts. It was found that predictable perturbations in voice auditory feedback led to a reduction in the proportion of compensatory vocal responses, which might be indicative of a reduction in vocal control. The predictable perturbations also led to a reduction in the magnitude of the N1 component of cortical Event Related Potentials (ERP) that was associated with the reflexive compensations to the perturbations. We hypothesize that formation of expectancy in our study is accompanied by involuntary allocation of attentional resources occurring as a result of habituation or learning, that in turn trigger limited and controlled exploration-related motor variability in the vocal control system.
Affiliation(s)
- Oleg Korzyukov
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois, United States of America.
|
30
|
Sanes DH, Woolley SMN. A behavioral framework to guide research on central auditory development and plasticity. Neuron 2011; 72:912-29. [PMID: 22196328 PMCID: PMC3244881 DOI: 10.1016/j.neuron.2011.12.005] [Citation(s) in RCA: 55] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/06/2011] [Indexed: 01/14/2023]
Abstract
The auditory CNS is influenced profoundly by sounds heard during development. Auditory deprivation and augmented sound exposure can each perturb the maturation of neural computations as well as their underlying synaptic properties. However, we have learned little about the emergence of perceptual skills in these same model systems, and especially how perception is influenced by early acoustic experience. Here, we argue that developmental studies must take greater advantage of behavioral benchmarks. We discuss quantitative measures of perceptual development and suggest how they can play a much larger role in guiding experimental design. Most importantly, including behavioral measures will allow us to establish empirical connections among environment, neural development, and perception.
Affiliation(s)
- Dan H Sanes
- Center for Neural Science, 4 Washington Place, New York University, New York, NY 10003, USA.
|
31
|
He S, Buss E, Hall JW. Monaural temporal integration and temporally selective listening in children and adults. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2010; 127:3643-3653. [PMID: 20550263 PMCID: PMC2896408 DOI: 10.1121/1.3397464] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/28/2009] [Revised: 03/12/2010] [Accepted: 03/23/2010] [Indexed: 05/29/2023]
Abstract
This study used two paradigms to investigate the development of temporal integration and temporally selective listening. Experiment 1 measured detection as a function of duration for a pure tone at 1625 or 6500 Hz. At both frequencies, thresholds of children younger than 7 years old were higher than those of older children and adults. The pattern of temporal integration was similar across groups for the 6500-Hz signal, but younger children showed relatively more temporal integration for the 1625-Hz signal due to high thresholds for the briefest 1625-Hz signal. Experiment 2 measured detection thresholds for one or for three brief tone pips presented in a noise masker. In one set of conditions, the noise masker consisted of 100-ms steady bursts interleaved with 10-ms temporal gaps. In other conditions, the level of the central 50 ms of the 100-ms masking noise bursts was adjusted by either +6 or -6 dB. Children showed higher thresholds but similar temporal integration compared with adults. Overall, these data suggest that children are less efficient than adults in weighting the output of the monaural temporal window at 1625 Hz but not at 6500 Hz. Children are efficient in combining energy from brief temporal epochs that are separated by noise.
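Temporal integration results like these are often compared against the prediction of perfect energy integration, under which a tenfold increase in signal duration lowers the detection threshold by 10 dB. A minimal sketch of that benchmark (the reference duration is an arbitrary assumption, not a value from the study):

```python
import numpy as np

def integration_threshold_shift(durations_ms, ref_ms=300.0):
    """Threshold shift (dB) relative to a reference duration under perfect
    energy integration: threshold falls 10 dB per tenfold duration increase."""
    d = np.asarray(durations_ms, dtype=float)
    return 10.0 * np.log10(ref_ms / d)

# Shorter tones need higher levels; the shift is relative to the 300-ms reference.
print(integration_threshold_shift([10, 30, 100, 300]))
```

Deviations from this line, such as the children's disproportionately high thresholds for the briefest 1625-Hz signal, are what indicate inefficient weighting of the temporal-window output rather than a different window per se.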
Affiliation(s)
- Shuman He
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina School of Medicine, Chapel Hill, North Carolina 27599, USA.
|
32
|
Echo amplitude selectivity of the bat is better for expected than for unexpected echo duration. Neuroreport 2009; 20:1183-7. [DOI: 10.1097/wnr.0b013e32832f0805] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
33
|
Moore DR. Auditory processing disorder (APD): Definition, diagnosis, neural basis, and intervention. ACTA ACUST UNITED AC 2009. [DOI: 10.1080/16513860600568573] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
34
|
Stohl JS, Throckmorton CS, Collins LM. Investigating the effects of stimulus duration and context on pitch perception by cochlear implant users. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2009; 126:318-326. [PMID: 19603888 PMCID: PMC2723905 DOI: 10.1121/1.3133246] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/28/2008] [Revised: 04/21/2009] [Accepted: 04/22/2009] [Indexed: 05/28/2023]
Abstract
Cochlear implant sound processing strategies that use time-varying pulse rates to transmit fine structure information are one proposed method for improving the spectral representation of a sound with the eventual goal of improving speech recognition in noisy conditions, speech recognition in tonal languages, and music identification and appreciation. However, many of the perceptual phenomena associated with time-varying rates are not well understood. In this study, the effects of stimulus duration on both the place and rate-pitch percepts were investigated via psychophysical experiments. Four Nucleus CI24 cochlear implant users participated in these experiments, which included a short-duration pitch ranking task and three adaptive pulse rate discrimination tasks. When duration was fixed from trial-to-trial and rate was varied adaptively, results suggested that both the place-pitch and rate-pitch percepts may be independent of duration for durations above 10 and 20 ms, respectively. When duration was varied and pulse rates were fixed, performance was highly variable within and across subjects. Implications for multi-rate sound processing strategies are discussed.
Affiliation(s)
- Joshua S Stohl
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina 27708-0291, USA
|
35
|
|
36
|
Wu CH, Jen PHS. Echo frequency selectivity of duration-tuned inferior collicular neurons of the big brown bat, Eptesicus fuscus, determined with pulse-echo pairs. Neuroscience 2008; 156:1028-38. [PMID: 18804149 DOI: 10.1016/j.neuroscience.2008.08.039] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2008] [Revised: 08/15/2008] [Accepted: 08/20/2008] [Indexed: 11/30/2022]
Abstract
During hunting, insectivorous bats such as Eptesicus fuscus progressively vary the repetition rate, duration, frequency, and amplitude of emitted pulses, such that a bat's analysis of one echo parameter is inevitably affected by other co-varying echo parameters. The present study determined how the echo frequency selectivity of duration-tuned inferior collicular (IC) neurons varies during different phases of hunting, using pulse-echo (P-E) pairs as stimuli. All collicular neurons discharge maximally to a tone at a particular frequency, defined as the best frequency (BF), and most also discharge maximally to a BF pulse at a particular duration, defined as the best duration (BD). A family of echo iso-level frequency tuning curves (iso-level FTCs) of these duration-tuned neurons was measured from the number of impulses in response to the echo pulse at selected frequencies, with the P-E pairs presented at varied P-E durations and gaps. Our data show that duration-tuned collicular neurons have narrower echo iso-level FTCs when measured with BD than with non-BD echo pulses, and that IC neurons with low BF and short BD have narrower echo iso-level FTCs than those with high BF and long BD. The bandwidth of the echo iso-level FTC decreases significantly with shortening of P-E duration and P-E gap. These data suggest that duration-tuned collicular neurons can not only facilitate the bat's echo recognition but also enhance echo frequency selectivity for prey-feature analysis throughout a target-approach sequence during hunting. They also support previous behavioral studies showing that, after pulse emission, bats prepare their auditory system to analyze expected returning echoes within a time window to extract target features.
Affiliation(s)
- C H Wu
- Division of Biological Sciences, Interdisciplinary Neuroscience Program, University of Missouri, Columbia, MO 65211, USA
37
Tan MN, Robertson D, Hammond GR. Separate contributions of enhanced and suppressed sensitivity to the auditory attentional filter. Hear Res 2008; 241:18-25. [PMID: 18524512] [DOI: 10.1016/j.heares.2008.04.003]
Abstract
Three experiments used a probe-signal method to determine the extent to which exposure-related changes in sensitivity result from an immediate effect of stimulation and from a cumulative effect of repeated stimulation. In the first experiment, a fixed-frequency cue was followed by a same-frequency target (on 75% of trials) or a different-frequency probe (on 25% of trials). In the second experiment, a cue frequency selected randomly from a set of five was followed by a same-frequency target or by one of four different-frequency probes. Targets and probes were selected randomly and independently of the cue frequency, and all were equiprobable (20%). Target detection showed an average 3.4 dB advantage over probe detection. In the third experiment, tones with a randomly selected frequency were detected better when cued by a tone of the same frequency than when presented without a prior cue. The cued tones showed an average 2.6 dB advantage over the uncued tones. Together, these results suggest that two mechanisms contribute to changes in sensitivity following auditory stimulation: first, an immediate enhancement of target detection produced by an auditory cue and, second, a suppression of non-target frequencies caused by the expectation of a target.
Affiliation(s)
- Michael N Tan
- The Auditory Laboratory, Physiology, School of Biomedical, Biomolecular and Chemical Sciences, The University of Western Australia, WA 6009, Australia.
38
Astheimer LB, Sanders LD. Listeners modulate temporally selective attention during natural speech processing. Biol Psychol 2008; 80:23-34. [PMID: 18395316] [DOI: 10.1016/j.biopsycho.2008.01.015]
Abstract
Spatially selective attention allows for the preferential processing of relevant stimuli when more information than can be processed in detail is presented simultaneously at distinct locations. Temporally selective attention may serve a similar function during speech perception by allowing listeners to allocate attentional resources to time windows that contain highly relevant acoustic information. To test this hypothesis, event-related potentials were compared in response to attention probes presented in six conditions during a narrative: concurrently with word onsets, beginning 50 and 100 ms before and after word onsets, and at random control intervals. Times for probe presentation were selected such that the acoustic environments of the narrative were matched for all conditions. Linguistic attention probes presented at and immediately following word onsets elicited larger amplitude N1s than control probes over medial and anterior regions. These results indicate that native speakers selectively process sounds presented at specific times during normal speech perception.
Affiliation(s)
- Lori B Astheimer
- Department of Psychology and Neuroscience and Behavior Program, University of Massachusetts, Tobin Hall, Amherst, MA 01003, USA.
39
Wu CH, Jen PHS. Auditory frequency selectivity is better for expected than for unexpected sound duration. Neuroreport 2008; 19:127-31. [DOI: 10.1097/wnr.0b013e3282f3b11c]
40
Kauramäki J, Jääskeläinen IP, Sams M. Selective attention increases both gain and feature selectivity of the human auditory cortex. PLoS One 2007; 2:e909. [PMID: 17878944] [PMCID: PMC1975472] [DOI: 10.1371/journal.pone.0000909]
Abstract
BACKGROUND An experienced car mechanic can often deduce what is wrong with a car by carefully listening to the sound of the ailing engine, despite the presence of multiple sources of noise. Indeed, the ability to select task-relevant sounds for awareness while ignoring irrelevant ones constitutes one of the most fundamental human faculties, but the underlying neural mechanisms have remained elusive. While most of the literature explains the neural basis of selective attention by means of an increase in neural gain, a number of papers propose enhancement of neural selectivity as an alternative or complementary mechanism. METHODOLOGY/PRINCIPAL FINDINGS Here, to address whether a pure gain increase alone can explain auditory selective attention in humans, we quantified auditory-cortex frequency selectivity in 20 healthy subjects by masking 1000-Hz tones with a continuous noise masker containing parametrically varied frequency notches around the tone frequency (i.e., a notched-noise masker). The subjects' task, in different conditions, was to attend selectively to occasionally occurring slight increments in tone frequency (to 1020 Hz) or to tones of slightly longer duration, or to ignore the sounds. In line with previous studies, in the ignore condition the global field power (GFP) of event-related brain responses to the 1000-Hz tones at 100 ms after stimulus onset was suppressed as the notch width narrowed. During the selective-attention conditions, the suppressive effect of the notch width on GFP was decreased, but as a function significantly different from the multiplicative one expected on the basis of a simple gain model of selective attention. CONCLUSIONS/SIGNIFICANCE Our results suggest that auditory selective attention in humans cannot be explained by a gain model in which only the level of neural activity is increased; rather, selective attention additionally enhances auditory-cortex frequency selectivity.
Affiliation(s)
- Jaakko Kauramäki
- Laboratory of Computational Engineering, Helsinki University of Technology, Espoo, Finland.
41
Fritz JB, Elhilali M, David SV, Shamma SA. Does attention play a role in dynamic receptive field adaptation to changing acoustic salience in A1? Hear Res 2007; 229:186-203. [PMID: 17329048] [PMCID: PMC2077083] [DOI: 10.1016/j.heares.2007.01.009]
Abstract
Acoustic filter properties of A1 neurons can dynamically adapt to stimulus statistics, classical conditioning, instrumental learning and a changing auditory attentional focus. We have recently developed an experimental paradigm that allows us to view cortical receptive-field plasticity on-line as the animal meets different behavioral challenges by attending to salient acoustic cues and changing its cortical filters to enhance performance. We propose that attention is the key trigger that initiates a cascade of events leading to the dynamic receptive-field changes we observe. In our paradigm, ferrets were initially trained, using conditioned avoidance training techniques, to discriminate between background noise stimuli (temporally orthogonal ripple combinations) and foreground tonal target stimuli. They learned to generalize the task to a wide variety of distinct background and foreground target stimuli. We recorded cortical activity in the awake, behaving animal and computed on-line spectrotemporal receptive fields (STRFs) of single neurons in A1. We observed clear, predictable task-related changes in STRF shape while the animal performed spectral tasks (including single-tone and multi-tone detection, and two-tone discrimination) with different tonal targets. A different set of task-related changes occurred when the animal performed temporal tasks (including gap detection and click-rate discrimination). Distinctive cortical STRF changes may constitute a "task-specific signature". These spectral and temporal changes in cortical filters occur quite rapidly, within 2 min of task onset, and fade just as quickly after task completion or, in some cases, persist for hours. The same cell could multiplex by differentially changing its receptive field in different task conditions. On-line dynamic task-related changes, as well as persistent plastic changes, were observed at the single-unit, multi-unit and population levels.
Auditory attention is likely to be pivotal in mediating these task-related changes since the magnitude of STRF changes correlated with behavioral performance on tasks with novel targets. Overall, these results suggest the presence of an attention-triggered plasticity algorithm in A1 that can swiftly change STRF shape by transforming receptive fields to enhance figure/ground separation, by using a contrast matched filter to filter out the background, while simultaneously enhancing the salient acoustic target in the foreground. These results favor the view of a nimble, dynamic, attentive and adaptive brain that can quickly reshape its sensory filter properties and sensori-motor links on a moment-to-moment basis, depending upon the current challenges the animal faces. In this review, we summarize our results in the context of a broader survey of the field of auditory attention, and then consider neuronal networks that could give rise to this phenomenon of attention-driven receptive field plasticity in A1.
Affiliation(s)
- Jonathan B Fritz
- Centre for Auditory and Acoustic Research, University of Maryland, College Park, MD 20742, USA.
42
Best V, Ozmeral EJ, Shinn-Cunningham BG. Visually-guided attention enhances target identification in a complex auditory scene. J Assoc Res Otolaryngol 2007; 8:294-304. [PMID: 17453308] [PMCID: PMC2538357] [DOI: 10.1007/s10162-007-0073-z]
Abstract
In auditory scenes containing many similar sound sources, sorting of acoustic information into streams becomes difficult, which can lead to disruptions in the identification of behaviorally relevant targets. This study investigated the benefit of providing simple visual cues for when and/or where a target would occur in a complex acoustic mixture. Importantly, the visual cues provided no information about the target content. In separate experiments, human subjects either identified learned birdsongs in the presence of a chorus of unlearned songs or recalled strings of spoken digits in the presence of speech maskers. A visual cue indicating which loudspeaker (from an array of five) would contain the target improved accuracy for both kinds of stimuli. A cue indicating which time segment (out of a possible five) would contain the target also improved accuracy, but much more for birdsong than for speech. These results suggest that in real world situations, information about where a target of interest is located can enhance its identification, while information about when to listen can also be helpful when targets are unfamiliar or extremely similar to their competitors.
Affiliation(s)
- Virginia Best
- Hearing Research Center, Boston University, Boston, MA 02215, USA.
43
Abstract
We examined the pitch and temporal acuity of auditory expectations/images formed under attentional-cuing and imagery task conditions, in order to address whether auditory expectations and auditory images are functionally equivalent. Across three experiments, we observed that pitch acuity was comparable between the two task conditions, whereas temporal acuity deteriorated in the imagery task. A fourth experiment indicated that the observed pitch acuity could not be attributed to implicit influences of the primed context alone. Across the experiments, image acuity in both pitch and time was better in listeners with more musical training. The results support a view that auditory images are multifaceted and that their acuity along any given dimension depends partially on the context in which they are formed.
Affiliation(s)
- Petr Janata
- Center for Mind and Brain, University of California, One Shields Avenue, Davis, CA 95616, USA.
44
Jones MR, Johnston HM, Puente J. Effects of auditory pattern structure on anticipatory and reactive attending. Cogn Psychol 2006; 53:59-96. [PMID: 16563367] [DOI: 10.1016/j.cogpsych.2006.01.003]
Abstract
In three experiments, participants listened for a target's pitch change within recurrent nine-tone patterns having largely isochronous rhythms. Patterns differed in pitch structure of initial (context) and final (target distance) pattern segments. Also varied were: probe timing (Experiments 2 and 3) and instructions about probe timing (Experiments 2 and 3). In all experiments, identification of a recurrent target was poorer in patterns with wider context pitch intervals (in semitones) than in others. Effects of probe timing also occurred, with better performance for temporally expected than unexpected probes. However, when listeners were explicitly told to focus upon a target's pitch and not its timing (Experiment 3), they performed selectively better in patterns with smaller target/probe pitch distances, especially for rhythmically expected probes. Five theoretical approaches to the respective roles of pitch and/or time structure were assessed. Although no single approach accounted for all results, a modification of one theory (a Pitch/Time Entrainment model) provided a reasonable description of findings.
Affiliation(s)
- Mari Riess Jones
- Department of Psychology, The Ohio State University, Columbus, OH 43210, USA.
45
Wright BA. Combined representations for frequency and duration in detection templates for expected signals. J Acoust Soc Am 2005; 117:1299-1304. [PMID: 15807018] [DOI: 10.1121/1.1855771]
Abstract
When trying to detect a tonal signal in a continuous broadband noise, listeners attend selectively to both the frequency and the duration of the expected signal. However, it is not known whether they monitor separate or combined representations of these two attributes. To investigate this question, a probe-signal method was used to measure the detectability of signals of expected and unexpected durations at two expected frequencies. The four listeners expected only one of two signals to be presented at random: a brief tone at one frequency or a long tone at another frequency. For each signal frequency, the detectability of the signals of unexpected duration decreased to near chance as the difference between the expected and unexpected duration, at that frequency, increased. The frequency specificity of this duration tuning indicates that both the frequency and the duration of an expected stimulus are represented in a single template.
Affiliation(s)
- Beverly A Wright
- Department of Communication Sciences and Disorders and Northwestern University Institute for Neuroscience, Northwestern University, Evanston, Illinois 60208-3550, USA.