1. Fischer C, Nolting C, Schneider F, Bledowski C, Kaiser J. Auditory objects in working memory include task-irrelevant features. Sci Rep 2024; 14:21216. [PMID: 39261536] [PMCID: PMC11390711] [DOI: 10.1038/s41598-024-72177-6]
Abstract
Object-based attention operates both in perception and in visual working memory. While the efficient perception of auditory stimuli also requires the formation of auditory objects, little is known about their role in auditory working memory (AWM). To investigate whether attention to one object feature in AWM leads to the involuntary maintenance of another, task-irrelevant feature, we conducted four experiments. Stimuli were abstract sounds that differed along the dimensions of frequency and location, only one of which was task-relevant in each experiment. The first two experiments required a match-nonmatch decision about a probe sound whose irrelevant feature value could either be identical to or differ from that of the memorized stimulus. Matches on the relevant dimension were detected more accurately when the irrelevant feature matched as well, whereas for nonmatches on the relevant dimension, performance was better when the irrelevant feature did not match either. Signal-detection analysis showed that changes of the irrelevant frequency reduced the sensitivity for sound location. Two further experiments used continuous report tasks. When location was the target feature, changes of the irrelevant sound frequency affected both recall error and adjustment time. Irrelevant location changes affected adjustment time only. In summary, object-based attention led to the concurrent maintenance of task-irrelevant sound features in AWM.
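The signal-detection analysis mentioned above can be illustrated with a standard d′ computation; a minimal sketch, where the trial counts are invented for illustration and do not come from the study:

```python
from statistics import NormalDist

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (+0.5 per cell) avoids infinite z-scores
    when an observed rate is exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical cell counts: location matches judged when the irrelevant
# frequency also matched vs. when it changed.
d_same = d_prime(hits=45, misses=5, false_alarms=10, correct_rejections=40)
d_changed = d_prime(hits=35, misses=15, false_alarms=12, correct_rejections=38)
print(d_same > d_changed)  # lower sensitivity after an irrelevant-feature change
```

With these made-up counts, sensitivity for the relevant (location) dimension drops when the task-irrelevant feature changes, mirroring the pattern the abstract reports.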
Affiliation(s)
- Cora Fischer
- Institute of Medical Psychology, Faculty of Medicine, Goethe University Frankfurt am Main, Heinrich-Hoffmann-Str. 10, 60528, Frankfurt am Main, Germany
- Carina Nolting
- Institute of Medical Psychology, Faculty of Medicine, Goethe University Frankfurt am Main, Heinrich-Hoffmann-Str. 10, 60528, Frankfurt am Main, Germany
- Flavia Schneider
- Institute of Medical Psychology, Faculty of Medicine, Goethe University Frankfurt am Main, Heinrich-Hoffmann-Str. 10, 60528, Frankfurt am Main, Germany
- Christoph Bledowski
- Institute of Medical Psychology, Faculty of Medicine, Goethe University Frankfurt am Main, Heinrich-Hoffmann-Str. 10, 60528, Frankfurt am Main, Germany
- Jochen Kaiser
- Institute of Medical Psychology, Faculty of Medicine, Goethe University Frankfurt am Main, Heinrich-Hoffmann-Str. 10, 60528, Frankfurt am Main, Germany
2. Bellur A, Thakkar K, Elhilali M. Explicit-memory multiresolution adaptive framework for speech and music separation. EURASIP J Audio Speech Music Process 2023; 2023:20. [PMID: 37181589] [PMCID: PMC10169896] [DOI: 10.1186/s13636-023-00286-7]
Abstract
The human auditory system employs a number of principles to facilitate the selection of perceptually separated streams from a complex sound mixture. The brain leverages multi-scale redundant representations of the input and uses memory (or priors) to guide the selection of a target sound from the input mixture. Moreover, feedback mechanisms refine the memory constructs resulting in further improvement of selectivity of a particular sound object amidst dynamic backgrounds. The present study proposes a unified end-to-end computational framework that mimics these principles for sound source separation applied to both speech and music mixtures. While the problems of speech enhancement and music separation have often been tackled separately due to constraints and specificities of each signal domain, the current work posits that common principles for sound source separation are domain-agnostic. In the proposed scheme, parallel and hierarchical convolutional paths map input mixtures onto redundant but distributed higher-dimensional subspaces and utilize the concept of temporal coherence to gate the selection of embeddings belonging to a target stream abstracted in memory. These explicit memories are further refined through self-feedback from incoming observations in order to improve the system's selectivity when faced with unknown backgrounds. The model yields stable outcomes of source separation for both speech and music mixtures and demonstrates benefits of explicit memory as a powerful representation of priors that guide information selection from complex inputs.
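The temporal-coherence idea, selecting embedding channels whose activity co-varies over time with a memory template of the target, can be sketched with a toy correlation gate. This is illustrative only; the paper's actual model uses learned convolutional embeddings and refined memories, and all values below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

T, C = 200, 8                      # time frames, embedding channels
target = rng.standard_normal(T)    # memory template of the target stream

# Toy embeddings: channels 0-3 follow the target, channels 4-7 are noise.
emb = rng.standard_normal((C, T)) * 0.3
emb[:4] += target

def coherence_gate(embeddings: np.ndarray, template: np.ndarray,
                   threshold: float = 0.5) -> np.ndarray:
    """Keep only channels whose temporal correlation with the template is high."""
    # Pearson correlation of each channel with the template over time
    e = embeddings - embeddings.mean(axis=1, keepdims=True)
    t = template - template.mean()
    corr = (e @ t) / (np.linalg.norm(e, axis=1) * np.linalg.norm(t))
    gate = corr > threshold        # binary selection mask per channel
    return embeddings * gate[:, None]

out = coherence_gate(emb, target)
print(np.flatnonzero(out.any(axis=1)))  # only the target-coherent channels survive
```

The gate passes exactly the channels that track the template over time, which is the core selection principle the framework builds on; the real system replaces the fixed template with an explicit memory updated by self-feedback.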
Affiliation(s)
- Ashwin Bellur
- Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA
- Karan Thakkar
- Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA
- Mounya Elhilali
- Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA
3. Skerritt-Davis B, Elhilali M. Neural Encoding of Auditory Statistics. J Neurosci 2021; 41:6726-6739. [PMID: 34193552] [PMCID: PMC8336711] [DOI: 10.1523/jneurosci.1887-20.2021]
Abstract
The human brain extracts statistical regularities embedded in real-world scenes to sift through the complexity stemming from changing dynamics and entwined uncertainty along multiple perceptual dimensions (e.g., pitch, timbre, location). While there is evidence that sensory dynamics along different auditory dimensions are tracked independently by separate cortical networks, how these statistics are integrated to give rise to unified objects remains unknown, particularly in dynamic scenes that lack conspicuous coupling between features. Using tone sequences with stochastic regularities along spectral and spatial dimensions, this study examines behavioral and electrophysiological responses from human listeners (male and female) to changing statistics in auditory sequences and uses a computational model of predictive Bayesian inference to formulate multiple hypotheses for statistical integration across features. Neural responses reveal multiplexed brain responses reflecting both local statistics along individual features in frontocentral networks, together with global (object-level) processing in centroparietal networks. Independent tracking of local surprisal along each acoustic feature reveals linear modulation of neural responses, while global melody-level statistics follow a nonlinear integration of statistical beliefs across features to guide perception. Near identical results are obtained in separate experiments along spectral and spatial acoustic dimensions, suggesting a common mechanism for statistical inference in the brain. 
Potential variations in statistical integration strategies and memory deployment shed light on individual variability between listeners in terms of behavioral efficacy and fidelity of neural encoding of stochastic change in acoustic sequences.

SIGNIFICANCE STATEMENT The world around us is complex and ever changing: in everyday listening, sound sources evolve along multiple dimensions, such as pitch, timbre, and spatial location, and they exhibit emergent statistical properties that change over time. In the face of this complexity, the brain builds an internal representation of the external world by collecting statistics from the sensory input along multiple dimensions. Using a Bayesian predictive inference model, this work considers alternative hypotheses for how statistics are combined across sensory dimensions. Behavioral and neural responses from human listeners show the brain multiplexes two representations, where local statistics along each feature linearly affect neural responses, and global statistics nonlinearly combine statistical beliefs across dimensions to shape perception of stochastic auditory sequences.
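The surprisal logic of predictive Bayesian inference can be sketched with a toy observer, far simpler than the paper's full model (which tracks multiple regularities and their uncertainty): each tone is scored by its negative log probability under a running Gaussian belief, with the prior counted as one pseudo-observation. The sequence below is invented for illustration:

```python
import math

def surprisal_trace(sequence, var0=1.0):
    """-log p(x | past) for each tone under a running Gaussian belief."""
    mu, n = 0.0, 1          # prior mean, counted as one pseudo-observation
    out = []
    for x in sequence:
        pred_var = var0 * (1 + 1 / n)              # predictive variance
        out.append(0.5 * math.log(2 * math.pi * pred_var)
                   + (x - mu) ** 2 / (2 * pred_var))
        n += 1
        mu += (x - mu) / n                         # posterior-mean update
    return out

# A stable sequence followed by a jump in its statistics:
seq = [0.1, -0.2, 0.0, 0.15, -0.1, 5.0, 5.2, 4.9]
s = surprisal_trace(seq)
print(s.index(max(s)))  # the first tone after the change is the most surprising
```

Surprisal spikes at the change point and then decays as the belief adapts, which is the local, per-feature quantity the study relates to linear modulation of neural responses.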
4. Möller M, Mayr S, Buchner A. The time-course of distractor processing in auditory spatial negative priming. Psychol Res 2015; 80:744-756. [PMID: 26233234] [DOI: 10.1007/s00426-015-0685-6]
Abstract
The spatial negative priming effect denotes slowed-down and sometimes more error-prone responding to a location that previously contained a distractor as compared with a previously unoccupied location. In vision, this effect has been attributed to the inhibition of irrelevant locations, and recently, of their task-assigned responses. Interestingly, auditory versions of the task did not yield evidence for inhibitory processing of task-irrelevant events which might suggest modality-specific distractor processing in vision and audition. Alternatively, the inhibitory processes may differ in how they develop over time. If this were the case, the absence of inhibitory after-effects might be due to an inappropriate timing of successive presentations in previous auditory spatial negative priming tasks. Specifically, the distractor may not yet have been inhibited or inhibition may already have dissipated at the time performance is assessed. The present study was conducted to test these alternatives. Participants indicated the location of a target sound in the presence of a concurrent distractor sound. Performance was assessed between two successive prime-probe presentations. The time between the prime response and the probe sounds (response-stimulus interval, RSI) was systematically varied between three groups (600, 1250, 1900 ms). For all RSI groups, the results showed no evidence for inhibitory distractor processing but conformed to the predictions of the feature mismatching hypothesis. The results support the assumption that auditory distractor processing does not recruit an inhibitory mechanism but involves the integration of spatial and sound identity features into common representations.
Affiliation(s)
- Malte Möller
- Institute of Experimental Psychology, Heinrich-Heine-University, Düsseldorf, Germany
- Susanne Mayr
- Chair of Psychology and Human-Machine Interaction, University of Passau, Passau, Germany
- Axel Buchner
- Institute of Experimental Psychology, Heinrich-Heine-University, Düsseldorf, Germany
5. Leung AWS, Jolicoeur P, Alain C. Attentional Capacity Limits Gap Detection during Concurrent Sound Segregation. J Cogn Neurosci 2015. [PMID: 26226073] [DOI: 10.1162/jocn_a_00849]
Abstract
Detecting a brief silent interval (i.e., a gap) is more difficult when listeners perceive two concurrent sounds rather than one, as when a sound contains a mistuned harmonic among otherwise in-tune harmonics. This impairment in gap detection may reflect low-level encoding interactions or the division of attention between two sound objects, both of which could interfere with signal detection. To distinguish between these two alternatives, we compared ERPs during active and passive listening with complex harmonic tones that could include a gap, a mistuned harmonic, both features, or neither. During active listening, participants indicated whether they heard a gap, irrespective of mistuning. During passive listening, participants watched a subtitled muted movie of their choice while the same sounds were presented. Gap detection was impaired when the complex sounds included a mistuned harmonic that popped out as a separate object. The ERP analysis revealed an early gap-related activity that was little affected by mistuning during either the active or the passive listening condition. However, during active listening, there was a marked decrease in the late positive wave thought to index attention and response-related processes. These results suggest that the limitation in detecting the gap is related to attentional processing, possibly attention divided between the concurrent sound objects, rather than to deficits in preattentive sensory encoding.
Affiliation(s)
- Ada W S Leung
- University of Alberta; Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Canada
- Pierre Jolicoeur
- Université de Montréal; Centre de Recherche en Neuropsychologie et Cognition (CERNEC), Montréal, Canada; BRAMS (International Laboratory for Brain, Music, and Sound Research), Montréal, Canada; Centre de Recherche de l'Institut Universitaire de Gériatrie de Montréal (CRIUGM)
- Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Canada; University of Toronto
6. Neural dynamics underlying attentional orienting to auditory representations in short-term memory. J Neurosci 2015; 35:1307-1318. [PMID: 25609643] [DOI: 10.1523/jneurosci.1487-14.2015]
Abstract
Sounds are ephemeral. Thus, coherent auditory perception depends on "hearing" back in time: retrospectively attending that which was lost externally but preserved in short-term memory (STM). Current theories of auditory attention assume that sound features are integrated into a perceptual object, that multiple objects can coexist in STM, and that attention can be deployed to an object in STM. Recording electroencephalography from humans, we tested these assumptions, elucidating feature-general and feature-specific neural correlates of auditory attention to STM. Alpha/beta oscillations and frontal and posterior event-related potentials indexed feature-general top-down attentional control to one of several coexisting auditory representations in STM. Particularly, task performance during attentional orienting was correlated with alpha/low-beta desynchronization (i.e., power suppression). However, attention to one feature could occur without simultaneous processing of the second feature of the representation. Therefore, auditory attention to memory relies on both feature-specific and feature-general neural dynamics.
7. The effects of association strength and cross-modal correspondence on the development of multimodal stimuli. Atten Percept Psychophys 2014; 77:560-570. [PMID: 25391886] [DOI: 10.3758/s13414-014-0794-0]
Abstract
In addition to temporal and spatial contributions, multimodal binding is also influenced by association strength and the congruency between stimulus elements. A paradigm was established in which an audio-visual stimulus consisting of four attributes (two visual, two auditory) was presented, followed by questions regarding the specific nature of two of those attributes. We wanted to know how association strength and congruency would modulate the basic effect that responding to same-modality information (two visual or two auditory) would be easier than retrieving different-modality information (one visual and one auditory). In Experiment 1, association strengths were compared across three conditions: baseline, intramodal (100 % association within modalities, thereby benefiting same-modality retrieval), and intermodal (100 % association between modalities, thereby benefiting different-modality retrieval). Association strength was shown to damage responses to same-modality information during intermodal conditions. In Experiment 2, association strength was manipulated identically, but was combined with cross-modally corresponding stimuli (further benefiting different-modality retrieval). The locus of the effect was again on responses to same-modality information, damaging responding during intermodal conditions but helping responding during intramodal conditions. The potential contributions of association strength and cross-modal congruency in promoting learning between vision and audition are discussed in relation to a potential default within-modality binding mechanism.
8. Effects of spatial response coding on distractor processing: evidence from auditory spatial negative priming tasks with keypress, joystick, and head movement responses. Atten Percept Psychophys 2014; 77:293-310. [PMID: 25214304] [DOI: 10.3758/s13414-014-0760-x]
Abstract
Prior studies of spatial negative priming indicate that distractor-assigned keypress responses are inhibited as part of visual, but not auditory, processing. However, recent evidence suggests that static keypress responses are not directly activated by spatially presented sounds and, therefore, might not call for an inhibitory process. In order to investigate the role of response inhibition in auditory processing, we used spatially directed responses that have been shown to result in direct response activation to irrelevant sounds. Participants localized a target sound by performing manual joystick responses (Experiment 1) or head movements (Experiment 2B) while ignoring a concurrent distractor sound. Relations between prime distractor and probe target were systematically manipulated (repeated vs. changed) with respect to identity and location. Experiment 2A investigated the influence of distractor sounds on spatial parameters of head movements toward target locations and showed that distractor-assigned responses are immediately inhibited to prevent false responding in the ongoing trial. Interestingly, performance in Experiments 1 and 2B was not generally impaired when the probe target appeared at the location of the former prime distractor and required a previously withheld and presumably inhibited response. Instead, performance was impaired only when prime distractor and probe target mismatched in terms of location or identity, which fully conforms to the feature-mismatching hypothesis. Together, the results suggest that response inhibition operates in auditory processing when response activation is provided but is presumably too short-lived to affect responding on the subsequent trial.
9. Attention to memory: orienting attention to sound object representations. Psychol Res 2013; 78:439-452. [PMID: 24352689] [DOI: 10.1007/s00426-013-0531-7]
Abstract
Despite a growing acceptance that attention and memory interact, and that attention can be focused on an active internal mental representation (i.e., reflective attention), there has been a paucity of work focusing on reflective attention to 'sound objects' (i.e., mental representations of actual sound sources in the environment). Further research on the dynamic interactions between auditory attention and memory, as well as its degree of neuroplasticity, is important for understanding how sound objects are represented, maintained, and accessed in the brain. This knowledge can then guide the development of training programs to help individuals with attention and memory problems. This review article focuses on attention to memory with an emphasis on behavioral and neuroimaging studies that have begun to explore the mechanisms that mediate reflective attentional orienting in vision and more recently, in audition. Reflective attention refers to situations in which attention is oriented toward internal representations rather than focused on external stimuli. We propose four general principles underlying attention to short-term memory. Furthermore, we suggest that mechanisms involved in orienting attention to visual object representations may also apply for orienting attention to sound object representations.
10. Oberfeld D, Stahn P. Sequential grouping modulates the effect of non-simultaneous masking on auditory intensity resolution. PLoS One 2012; 7:e48054. [PMID: 23110174] [PMCID: PMC3480468] [DOI: 10.1371/journal.pone.0048054]
Abstract
The presence of non-simultaneous maskers can result in strong impairment in auditory intensity resolution relative to a condition without maskers, and causes a complex pattern of effects that is difficult to explain on the basis of peripheral processing. We suggest that the failure of selective attention to the target tones is a useful framework for understanding these effects. Two experiments tested the hypothesis that the sequential grouping of the targets and the maskers into separate auditory objects facilitates selective attention and therefore reduces the masker-induced impairment in intensity resolution. In Experiment 1, a condition favoring the processing of the maskers and the targets as two separate auditory objects due to grouping by temporal proximity was contrasted with the usual forward masking setting where the masker and the target presented within each observation interval of the two-interval task can be expected to be grouped together. As expected, the former condition resulted in a significantly smaller masker-induced elevation of the intensity difference limens (DLs). In Experiment 2, embedding the targets in an isochronous sequence of maskers led to a significantly smaller DL-elevation than control conditions not favoring the perception of the maskers as a separate auditory stream. The observed effects of grouping are compatible with the assumption that a precise representation of target intensity is available at the decision stage, but that this information is used only in a suboptimal fashion due to limitations of selective attention. The data can be explained within a framework of object-based attention. The results impose constraints on physiological models of intensity discrimination. We discuss candidate structures for physiological correlates of the psychophysical data.
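The intensity difference limen (DL) measured in such two-interval tasks can be linked to percent correct through standard signal detection theory, where d′ in a 2I-2AFC task is √2·z(PC) and the DL is the level difference reaching a criterion d′. This is a generic psychophysics sketch, not the authors' procedure, and the psychometric data below are hypothetical:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf

def dprime_2afc(prop_correct: float) -> float:
    """d' for a two-interval forced-choice task: d' = sqrt(2) * z(PC)."""
    return 2 ** 0.5 * z(prop_correct)

def dl_from_psychometric(deltas, prop_correct, criterion=1.0):
    """Linearly interpolate the level difference where d' crosses `criterion`."""
    ds = [dprime_2afc(p) for p in prop_correct]
    for (d0, d1), (x0, x1) in zip(zip(ds, ds[1:]), zip(deltas, deltas[1:])):
        if d0 <= criterion <= d1:
            return x0 + (criterion - d0) * (x1 - x0) / (d1 - d0)
    raise ValueError("criterion not bracketed by the measured points")

# Hypothetical psychometric data for two conditions:
deltas = [0.5, 1.0, 2.0, 4.0]            # level differences in dB
pc_grouped = [0.60, 0.75, 0.92, 0.99]    # maskers heard as a separate stream
pc_masked = [0.54, 0.62, 0.78, 0.93]     # masker grouped with the target
print(dl_from_psychometric(deltas, pc_grouped) <
      dl_from_psychometric(deltas, pc_masked))  # True: grouping lowers the DL
```

A smaller interpolated DL in the "grouped" condition corresponds to the smaller masker-induced DL elevation the abstract reports when targets and maskers form separate auditory objects.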
Affiliation(s)
- Daniel Oberfeld
- Department of Psychology, Section Experimental Psychology, Johannes Gutenberg-Universität Mainz, Mainz, Germany
11. Target localization among concurrent sound sources: no evidence for the inhibition of previous distractor responses. Atten Percept Psychophys 2012; 75:132-144. [PMID: 23077027] [DOI: 10.3758/s13414-012-0380-2]
Abstract
The visuospatial negative priming effect, that is, the slowed-down responding to a previously ignored location, is partly due to response inhibition associated with the previously ignored location (Buckolz, Goldfarb, & Khan, Perception & Psychophysics 66:837-845, 2004). We tested whether response inhibition underlies spatial negative priming in the auditory modality as well. Eighty participants localized a target sound while ignoring a simultaneous distractor sound at another location. Eight possible sound locations were arranged in a semicircle around the participant. Pairs of adjacent locations were associated with the same response. On ignored repetition trials, the probe target sound was played from the same location as the previously ignored prime sound. On response control trials, prime distractor and probe target were played from different locations but were associated with the same response. On control trials, prime distractor and probe target shared neither location nor response. A response inhibition account predicts slowed-down responding when the response associated with the prime distractor has to be executed in the probe. There was no evidence of response inhibition in audition. Instead, the negative priming effect depended on whether the sound at the repeatedly occupied location changed identity between prime and probe. The latter result replicates earlier findings and supports the feature mismatching hypothesis, while the former is compatible with the assumption that response inhibition is irrelevant in auditory spatial attention.
12.
Abstract
An area of research that has experienced recent growth is the study of memory during perception of simple and complex auditory scenes. These studies have provided important information about how well auditory objects are encoded in memory and how well listeners can notice changes in auditory scenes. These are significant developments because they present an opportunity to better understand how we hear in realistic situations, how higher-level aspects of hearing such as semantics and prior exposure affect perception, and the similarities and differences between auditory perception and perception in other modalities, such as vision and touch. The research also poses exciting challenges for behavioral and neural models of how auditory perception and memory work.
13. Spatial and identity negative priming in audition: evidence of feature binding in auditory spatial memory. Atten Percept Psychophys 2011; 73:1710-1732. [PMID: 21590513] [DOI: 10.3758/s13414-011-0138-2]
Abstract
Two experiments are reported with identical auditory stimulation in three-dimensional space but with different instructions. Participants localized a cued sound (Experiment 1) or identified a sound at a cued location (Experiment 2). A distractor sound at another location had to be ignored. The prime distractor and the probe target sound were manipulated with respect to sound identity (repeated vs. changed) and location (repeated vs. changed). The localization task revealed a symmetric pattern of partial repetition costs: participants were impaired on trials with identity-location mismatches between the prime distractor and the probe target, that is, when either the sound was repeated but not the location, or vice versa. The identification task revealed an asymmetric pattern of partial repetition costs: responding was slowed down when the prime distractor sound was repeated as the probe target but at another location, whereas identity changes at the same location did not impair responding. Additionally, there was evidence of retrieval of incompatible prime responses in the identification task. It is concluded that feature binding of auditory prime distractor information takes place regardless of whether the task is to identify or to locate a sound. Instructions determine the kind of identity-location mismatch that is detected. Identity information predominates over location information in auditory memory.
14. Townsend SW, Allen C, Manser MB. A simple test of vocal individual recognition in wild meerkats. Biol Lett 2011; 8:179-182. [PMID: 21992821] [DOI: 10.1098/rsbl.2011.0844]
Abstract
Individual recognition is thought to be a crucial ability facilitating the evolution of animal societies. Given its central importance, much research has addressed the extent of this capacity across the animal kingdom. Recognition of individuals vocally has received particular attention due, in part, to the insights it provides regarding the cognitive processes that underlie this skill. While much work has focused on vocal individual recognition in primates, there are currently very few data showing comparable skills in non-primate mammals under natural conditions. This may be because non-primate mammal societies do not provide obvious contexts in which vocal individual recognition can be rigorously tested. We addressed this gap in understanding by designing an experimental paradigm to test for individual recognition in meerkats (Suricata suricatta) without having to rely on naturally occurring social contexts. Results suggest that when confronted with a physically impossible scenario (the presence of the same conspecific meerkat in two different places), subjects responded more strongly than during the control, a physically possible setup. We argue that this provides the first clear evidence for vocal individual recognition in wild non-primate mammals, and we hope that this novel experimental design will allow more systematic cross-species comparisons of individual recognition under natural settings.
Affiliation(s)
- Simon W Townsend
- Animal Behaviour, Institute of Evolutionary Biology and Environmental Studies, University of Zurich, Zurich, Switzerland
15. Bang SJ, Brown TH. Perirhinal cortex supports acquired fear of auditory objects. Neurobiol Learn Mem 2009; 92:53-62. [PMID: 19185613] [DOI: 10.1016/j.nlm.2009.01.002]
Abstract
Damage to rat perirhinal cortex (PR) profoundly impairs fear conditioning to 22kHz ultrasonic vocalizations (USVs), but has no effect on fear conditioning to continuous tones. The most obvious difference between these two sounds is that continuous tones have no internal temporal structure, whereas USVs consist of strings of discrete calls separated by temporal discontinuities. PR was hypothesized to support the fusion or integration of discontinuous auditory segments into unitary representations or "auditory objects". This transform was suggested to be necessary for normal fear conditioning to occur. These ideas naturally assume that the effect of PR damage on auditory fear conditioning is not peculiar to 22kHz USVs. The present study directly tested these ideas by using a different set of continuous and discontinuous auditory cues. Control and PR-damaged rats were fear conditioned to a 53kHz USV, a 53kHz continuous tone, or a 53kHz discontinuous tone. The continuous and discontinuous tones matched the 53kHz USV in terms of duration, loudness, and principal frequency. The on/off pattern of the discontinuous tone matched the pattern of the individual calls of the 53kHz USV. The on/off pattern of the 53kHz USV was very different from the patterns in the 22kHz USVs that have been comparably examined. Rats with PR damage were profoundly impaired in fear conditioning to both discontinuous cues, but they were unimpaired in conditioning to the continuous cue. The implications of this temporal discontinuity effect are explored in terms of contemporary ideas about PR function.
Affiliation(s)
- Sun Jung Bang
- Department of Psychology, Yale University, New Haven, CT 06520, USA