1
Griffiths CS, Lebert JM, Sollini J, Bizley JK. Gradient boosted decision trees reveal nuances of auditory discrimination behavior. PLoS Comput Biol 2024; 20:e1011985. PMID: 38626220; PMCID: PMC11051626; DOI: 10.1371/journal.pcbi.1011985.
Abstract
Animal psychophysics can generate rich behavioral datasets, often comprising many thousands of trials for an individual subject. Gradient-boosted models are a promising machine learning approach for analyzing such data, partly due to the tools that allow users to gain insight into how the model makes predictions. We trained ferrets to report a target word's presence, timing, and lateralization within a stream of consecutively presented non-target words. To assess the animals' ability to generalize across pitch, we manipulated the fundamental frequency (F0) of the speech stimuli across trials, and to assess the contribution of pitch to streaming, we roved the F0 from word token to token. We then implemented gradient-boosted regression and decision trees on the trial outcome and reaction time data to understand the behavioral factors behind the ferrets' decision-making. We visualized model contributions using SHAP feature importance and partial dependence plots. While ferrets could accurately perform the task across all pitch-shifted conditions, our models reveal subtle effects of shifting F0 on performance, with within-trial pitch shifting elevating false alarms and extending reaction times. Our models identified a subset of non-target words that animals commonly false alarmed to. Follow-up analysis demonstrated that the spectrotemporal similarity of target and non-target words, rather than similarity in duration or amplitude waveform, was the strongest predictor of the likelihood of false alarming. Finally, we compared the results with those obtained with traditional mixed effects models, revealing equivalent or better performance for the gradient-boosted models over these approaches.
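A minimal, dependency-free sketch may help make the core modelling idea concrete: gradient boosting fits a sequence of weak learners (here, single-split decision stumps) to the residuals of the running prediction. All feature names and data values below are hypothetical illustrations, not the study's data; the actual analyses used full gradient-boosted tree libraries together with SHAP.

```python
# Toy gradient-boosted regression with decision stumps (pure Python).
# Hypothetical trial features: [F0_shift_semitones, target_position].

def fit_stump(X, residuals):
    """Find the single-feature threshold split minimising squared error."""
    best = None
    for j in range(len(X[0])):
        for t in sorted(set(row[j] for row in X)):
            left = [r for row, r in zip(X, residuals) if row[j] <= t]
            right = [r for row, r in zip(X, residuals) if row[j] > t]
            if not left or not right:
                continue
            lmean = sum(left) / len(left)
            rmean = sum(right) / len(right)
            err = (sum((r - lmean) ** 2 for r in left)
                   + sum((r - rmean) ** 2 for r in right))
            if best is None or err < best[0]:
                best = (err, j, t, lmean, rmean)
    _, j, t, lmean, rmean = best
    return lambda row: lmean if row[j] <= t else rmean

def gradient_boost(X, y, n_rounds=50, lr=0.3):
    """Start from the mean; each round fits a stump to current residuals."""
    base = sum(y) / len(y)
    stumps = []
    preds = [base] * len(y)
    for _ in range(n_rounds):
        residuals = [yi - pi for yi, pi in zip(y, preds)]
        stump = fit_stump(X, residuals)
        stumps.append(stump)
        preds = [p + lr * stump(row) for p, row in zip(preds, X)]
    return lambda row: base + sum(lr * s(row) for s in stumps)

# Invented data: reaction time grows with F0 shift and later target position.
X = [[0, 1], [0, 3], [6, 1], [6, 3], [12, 1], [12, 3]]
y = [0.2, 0.3, 0.5, 0.6, 0.8, 0.9]
model = gradient_boost(X, y)  # predictions approach y after enough rounds
```

In practice a library such as XGBoost or LightGBM would replace this loop, and SHAP values would then decompose each prediction into per-feature contributions, which is what enables the feature-importance and partial dependence visualisations the abstract describes.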
Affiliation(s)
- Jules M. Lebert
- Ear Institute, University College London, London, United Kingdom
- Joseph Sollini
- Ear Institute, University College London, London, United Kingdom
- Hearing Sciences, University of Nottingham, Nottingham, United Kingdom
2
Peng F, Bizley JK, Schnupp JW, Auksztulewicz R. Dissociable neural correlates of multisensory coherence and selective attention. J Neurosci 2023:JNEUROSCI.1310-22.2023. PMID: 37221094; DOI: 10.1523/jneurosci.1310-22.2023.
Abstract
Previous work has demonstrated that performance in an auditory selective attention task can be enhanced or impaired, depending on whether a task-irrelevant visual stimulus is temporally coherent with a target auditory stream or with a competing distractor. However, it remains unclear how audiovisual (AV) temporal coherence and auditory selective attention interact at the neurophysiological level. Here, we measured neural activity using electroencephalography (EEG) while human participants (men and women) performed an auditory selective attention task, detecting deviants in a target audio stream. The amplitude envelope of the two competing auditory streams changed independently, while the radius of a visual disc was manipulated to control the audiovisual coherence. Analysis of the neural responses to the sound envelope demonstrated that auditory responses were enhanced independently of the attentional condition: both target and masker stream responses were enhanced when temporally coherent with the visual stimulus. In contrast, attention enhanced the event-related response (ERP) evoked by the transient deviants, independently of AV coherence. Finally, in an exploratory analysis, we identified a spatiotemporal component of ERP, in which temporal coherence enhanced the deviant-evoked responses only in the unattended stream. These results provide evidence for dissociable neural signatures of bottom-up (coherence) and top-down (attention) effects in AV object formation. Significance Statement: Temporal coherence between auditory stimuli and task-irrelevant visual stimuli can enhance behavioral performance in auditory selective attention tasks. However, how audiovisual temporal coherence and attention interact at the neural level has not been established. Here, we measured EEG during a behavioral task designed to independently manipulate AV coherence and auditory selective attention.
While some auditory features (sound envelope) could be coherent with visual stimuli, other features (timbre) were independent of visual stimuli. We find that audiovisual integration can be observed independently of attention for sound envelopes temporally coherent with visual stimuli, while the neural responses to unexpected timbre changes are most strongly modulated by attention. Our results provide evidence for dissociable neural mechanisms of bottom-up (coherence) and top-down (attention) effects on AV object formation.
Affiliation(s)
- Fei Peng
- Department of Neuroscience, City University of Hong Kong, Hong Kong, China
- Jan W Schnupp
- Department of Neuroscience, City University of Hong Kong, Hong Kong, China
- Ryszard Auksztulewicz
- Department of Neuroscience, City University of Hong Kong, Hong Kong, China
- Center for Cognitive Neuroscience Berlin, Department of Education and Psychology, Free University Berlin, Germany
3
Dancer AMM, Díez-León M, Bizley JK, Burn CC. Pet Owner Perception of Ferret Boredom and Consequences for Housing, Husbandry, and Environmental Enrichment. Animals (Basel) 2022; 12:ani12233262. PMID: 36496783; PMCID: PMC9740969; DOI: 10.3390/ani12233262.
Abstract
Boredom is a potentially chronic but overlooked animal welfare problem. Because it is caused by monotony, sub-optimal stimulation, and restrictive housing, boredom can affect companion animals, particularly those traditionally caged, such as ferrets. We surveyed owners' (n = 621) perceptions of ferrets' capacity to experience boredom, behaviours they associate with it, and whether their perception of their ferrets' capacity for boredom influenced training techniques, housing, and environmental enrichment (EE). Most (93.0%) owners believed that ferrets could experience boredom, but owners who doubted that ferrets experience boredom (7.0%) provided slightly but significantly fewer EE types to their ferrets. Heat map and classification tree analysis showed that owners identified scratching at enclosure walls (n = 420) and excessive sleeping (n = 312) as distinctive behavioural indicators of ferret boredom. Repetitive pacing (n = 381), yawning (n = 191), and resting with eyes open (n = 171) were also suggested to indicate ferret boredom, but these overlapped with other states. Finally, ferret owners suggested social housing, tactile interaction with humans, and exploration as most important for preventing boredom. These results suggest that pet ferrets are at risk of reduced welfare from owners who doubt they can experience boredom, highlighting an opportunity to improve welfare through information dissemination. We recommend further investigation into ferret boredom capacity, behavioural indicators, and mitigation strategies.
Affiliation(s)
- Alice M. M. Dancer
- Department of Pathobiology and Population Sciences, The Royal Veterinary College, Hawkshead Lane, North Mymms, Hertfordshire, Hatfield AL9 7TA, UK
- María Díez-León
- Department of Pathobiology and Population Sciences, The Royal Veterinary College, Hawkshead Lane, North Mymms, Hertfordshire, Hatfield AL9 7TA, UK
- Jennifer K. Bizley
- Ear Institute, University College London, 332 Gray’s Inn Road, London WC1X 8EE, UK
- Charlotte C. Burn
- Department of Pathobiology and Population Sciences, The Royal Veterinary College, Hawkshead Lane, North Mymms, Hertfordshire, Hatfield AL9 7TA, UK
4
Parmar BJ, Rajasingam SL, Bizley JK, Vickers DA. Factors Affecting the Use of Speech Testing in Adult Audiology. Am J Audiol 2022; 31:528-540. PMID: 35737980; PMCID: PMC7613483; DOI: 10.1044/2022_aja-21-00233.
Abstract
OBJECTIVE The aim of this study was to evaluate hearing health care professionals' (HHPs) speech testing practices in routine adult audiology services and better understand the facilitators and barriers to speech testing provision. DESIGN A cross-sectional questionnaire study was conducted. STUDY SAMPLE A sample (N = 306) of HHPs from the public (64%) and private (36%) sectors in the United Kingdom completed the survey. RESULTS In the United Kingdom, speech testing practice varied significantly between health sectors. Speech testing was carried out during the audiology assessment by 73.4% of private sector HHPs and 20.4% of those from the public sector. During the hearing aid intervention stage, speech testing was carried out by 56.5% and 26.5% of HHPs from the private and public sectors, respectively. Recognized benefits of speech testing included (a) providing patients with relatable assessment information, (b) guiding hearing aid fitting, and (c) supporting a diagnostic test battery. A lack of clinical time was a key barrier to uptake. CONCLUSIONS Use of speech testing varies in adult audiology. Results from this study found that the percentage of U.K. HHPs making use of speech tests was low compared to that of other countries. HHPs recognized different benefits of speech testing in audiology practice, but the barriers limiting uptake were often driven by factors derived from decision makers rather than clinical rationale. Privately funded HHPs used speech tests more frequently than those working in the public sector where time and resources are under greater pressure and governed by guidance that does not include a recommendation for speech testing. Therefore, the inclusion of speech testing in national clinical guidelines could increase the consistency of use and facilitate the comparison of practice trends across centers. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.20044457.
Affiliation(s)
- Bhavisha J. Parmar
- UCL Ear Institute, University College London, United Kingdom
- Sound Lab, Cambridge Hearing Group, Department of Clinical Neurosciences, University of Cambridge, United Kingdom
- Deborah A. Vickers
- Sound Lab, Cambridge Hearing Group, Department of Clinical Neurosciences, University of Cambridge, United Kingdom
5
Sollini J, Poole KC, Blauth-Muszkowski D, Bizley JK. The role of temporal coherence and temporal predictability in the build-up of auditory grouping. Sci Rep 2022; 12:14493. PMID: 36008519; PMCID: PMC9411505; DOI: 10.1038/s41598-022-18583-0.
Abstract
The cochlea decomposes sounds into separate frequency channels, from which the auditory brain must reconstruct the auditory scene. To do this, the auditory system must make decisions about which frequency information should be grouped together, and which should remain distinct. Two key cues for grouping are temporal coherence, resulting from coherent changes in power across frequency, and temporal predictability, resulting from regular or predictable changes over time. To test how these cues contribute to the construction of a sound scene we present listeners with a range of precursor sounds, which act to prime the auditory system by providing information about each sound's structure, followed by a fixed masker in which participants were required to detect the presence of an embedded tone. By manipulating temporal coherence and/or temporal predictability in the precursor we assess how prior sound exposure influences subsequent auditory grouping. In Experiment 1, we measure the contribution of temporal predictability by presenting temporally regular or jittered precursors, and temporal coherence by using either narrow or broadband sounds, demonstrating that both independently contribute to masking/unmasking. In Experiment 2, we measure the relative impact of temporal coherence and temporal predictability and ask whether the influence of each in the precursor signifies an enhancement or interference of unmasking. We observed that interfering precursors produced the largest changes to thresholds.
Affiliation(s)
- Joseph Sollini
- Hearing Sciences, Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, England, UK
- The Ear Institute, University College London, London, England, UK
- Katarina C Poole
- The Ear Institute, University College London, London, England, UK
6
Abstract
The location of sounds can be described in multiple coordinate systems that are defined relative to ourselves, or the world around us. Evidence from neural recordings in animals point toward the existence of both head-centered and world-centered representations of sound location in the brain; however, it is unclear whether such neural representations have perceptual correlates in the sound localization abilities of nonhuman listeners. Here, we establish novel behavioral tests to determine the coordinate systems in which ferrets can localize sounds. We found that ferrets could learn to discriminate between sound locations that were fixed in either world-centered or head-centered space, across wide variations in sound location in the alternative coordinate system. Using probe sounds to assess broader generalization of spatial hearing, we demonstrated that in both head and world-centered tasks, animals used continuous maps of auditory space to guide behavior. Single trial responses of individual animals were sufficiently informative that we could then model sound localization using speaker position in specific coordinate systems and accurately predict ferrets' actions in held-out data. Our results demonstrate that ferrets, an animal model in which neurons are known to be tuned to sound location in egocentric and allocentric reference frames, can also localize sounds in multiple head and world-centered spaces. Significance Statement: Humans can describe the location of sounds either relative to themselves, or in the world, independent of their momentary position. These different spaces are also represented in the activity of neurons in animals, but it is not clear whether nonhuman listeners also perceive both head and world-centered sound location. Here, we designed behavioral tasks in which ferrets discriminated between sounds using their position in the world, or relative to the head.
Subjects learnt to solve both problems and generalized sound location in each space when presented with infrequent probe sounds. These findings reveal a perceptual correlate of neural sensitivity previously observed in the ferret brain and establish that, like humans, ferrets can access an auditory map of their local environment.
Affiliation(s)
- Stephen M Town
- Ear Institute, University College London, London WC1X 8EE, United Kingdom
- Jennifer K Bizley
- Ear Institute, University College London, London WC1X 8EE, United Kingdom
7
Parmar BJ, Mehta K, Vickers DA, Bizley JK. Experienced hearing aid users' perspectives of assessment and communication within audiology: a qualitative study using digital methods. Int J Audiol 2021; 61:956-964. PMID: 34821527; DOI: 10.1080/14992027.2021.1998839.
Abstract
OBJECTIVE To explore experienced hearing aid users' perspectives of audiological assessments and the patient-audiologist communication dynamic during clinical interactions. DESIGN A qualitative study was implemented incorporating both an online focus group and online semi-structured interviews. Sessions were audio-recorded and transcribed verbatim. Iterative-inductive thematic analysis was carried out to identify themes related to assessment and communication within audiology practice. STUDY SAMPLES Seven experienced hearing aid users took part in an online focus group and 14 participated in online semi-structured interviews (age range: 22-86 years; 9 males, 11 females). RESULTS Themes related to assessment included the unaided and aided testing procedure and relating tests to real world hearing difficulties. Themes related to communication included the importance of deaf aware communication strategies, explanation of test results and patient-centred care in audiology. CONCLUSION To ensure hearing aid services meet the needs of the service users, we should explore user perspectives and proactively adapt service delivery. This approach should be ongoing, in response to advances in hearing aid technology. Within audiology, experienced hearing aid users value (1) comprehensive, relatable hearing assessment, (2) deaf aware patient-audiologist communication, (3) accessible services and (4) a personalised approach to recommend suitable technology and address patient specific aspects of hearing loss.
Affiliation(s)
- Kinjal Mehta
- St Ann's Hospital, Whittington Health NHS Trust, London, UK
- Deborah A Vickers
- Sound Lab, Cambridge Hearing Group, University of Cambridge, Cambridge, UK
8
Slonina ZA, Poole KC, Bizley JK. What can we learn from inactivation studies? Lessons from auditory cortex. Trends Neurosci 2021; 45:64-77. PMID: 34799134; PMCID: PMC8897194; DOI: 10.1016/j.tins.2021.10.005.
Abstract
Inactivation experiments in auditory cortex (AC) produce widely varying results that complicate interpretations regarding the precise role of AC in auditory perception and ensuing behaviour. The advent of optogenetic methods in neuroscience offers previously unachievable insight into the mechanisms transforming brain activity into behaviour. With a view to aiding the design and interpretation of future studies in and outside AC, here we discuss the methodological challenges faced in manipulating neural activity. While considering AC’s role in auditory behaviour through the prism of inactivation experiments, we consider the factors that confound the interpretation of the effects of inactivation on behaviour, including the species, the type of inactivation, the behavioural task employed, and the exact location of the inactivation.
Highlights:
- Wide variation in the outcome of auditory cortex inactivation has been an impediment to clear conclusions regarding the roles of the auditory cortex in behaviour.
- Inactivation methods differ in their efficacy and specificity.
- The likelihood of observing a behavioural deficit is additionally influenced by factors such as the species being used, task design and reward.
- A synthesis of previous results suggests that auditory cortex involvement is critical for tasks that require integrating across multiple stimulus features, and less likely to be critical for simple feature discriminations.
- New methods of neural silencing provide opportunities for spatially and temporally precise manipulation of activity, allowing perturbation of individual subfields and specific circuits.
9
Khandhadia AP, Murphy AP, Romanski LM, Bizley JK, Leopold DA. Audiovisual integration in macaque face patch neurons. Curr Biol 2021; 31:1826-1835.e3. PMID: 33636119; PMCID: PMC8521527; DOI: 10.1016/j.cub.2021.01.102.
Abstract
Primate social communication depends on the perceptual integration of visual and auditory cues, reflected in the multimodal mixing of sensory signals in certain cortical areas. The macaque cortical face patch network, identified through visual, face-selective responses measured with fMRI, is assumed to contribute to visual social interactions. However, whether face patch neurons are also influenced by acoustic information, such as the auditory component of a natural vocalization, remains unknown. Here, we recorded single-unit activity in the anterior fundus (AF) face patch, in the superior temporal sulcus, and anterior medial (AM) face patch, on the undersurface of the temporal lobe, in macaques presented with audiovisual, visual-only, and auditory-only renditions of natural movies of macaques vocalizing. The results revealed that 76% of neurons in face patch AF were significantly influenced by the auditory component of the movie, most often through enhancement of visual responses but sometimes in response to the auditory stimulus alone. By contrast, few neurons in face patch AM exhibited significant auditory responses or modulation. Control experiments in AF used an animated macaque avatar to demonstrate, first, that the structural elements of the face were often essential for audiovisual modulation and, second, that the temporal modulation of the acoustic stimulus was more important than its frequency spectrum. Together, these results identify a striking contrast between two face patches and specifically identify AF as playing a potential role in the integration of audiovisual cues during natural modes of social communication.
Affiliation(s)
- Amit P Khandhadia
- Laboratory of Neuropsychology, National Institute of Mental Health, NIH, Bethesda, MD 20892, USA; Ear Institute, University College London, 332 Gray's Inn Road, London WC1X 8EE, UK
- Aidan P Murphy
- Laboratory of Neuropsychology, National Institute of Mental Health, NIH, Bethesda, MD 20892, USA; Neurophysiology Imaging Facility, National Institute of Mental Health, National Institute of Neurological Disorders and Stroke, National Eye Institute, NIH, Bethesda, MD 20892, USA
- Lizabeth M Romanski
- Department of Neuroscience, University of Rochester School of Medicine, Rochester, NY 14642, USA
- Jennifer K Bizley
- Ear Institute, University College London, 332 Gray's Inn Road, London WC1X 8EE, UK
- David A Leopold
- Laboratory of Neuropsychology, National Institute of Mental Health, NIH, Bethesda, MD 20892, USA; Neurophysiology Imaging Facility, National Institute of Mental Health, National Institute of Neurological Disorders and Stroke, National Eye Institute, NIH, Bethesda, MD 20892, USA
10
Atilgan H, Bizley JK. Training enhances the ability of listeners to exploit visual information for auditory scene analysis. Cognition 2020; 208:104529. PMID: 33373937; PMCID: PMC7868888; DOI: 10.1016/j.cognition.2020.104529.
Abstract
The ability to use temporal relationships between cross-modal cues facilitates perception and behavior. Previously we observed that temporally correlated changes in the size of a visual stimulus and the intensity in an auditory stimulus influenced the ability of listeners to perform an auditory selective attention task (Maddox, Atilgan, Bizley, & Lee, 2015). Participants detected timbral changes in a target sound while ignoring those in a simultaneously presented masker. When the visual stimulus was temporally coherent with the target sound, performance was significantly better than when the visual stimulus was temporally coherent with the masker, despite the visual stimulus conveying no task-relevant information. Here, we trained observers to detect audiovisual temporal coherence and asked whether this changed the way in which they were able to exploit visual information in the auditory selective attention task. We observed that after training, participants were able to benefit from temporal coherence between the visual stimulus and both the target and masker streams, relative to the condition in which the visual stimulus was coherent with neither sound. However, we did not observe such changes in a second group that were trained to discriminate modulation rate differences between temporally coherent audiovisual streams, although they did show an improvement in their overall performance. A control group did not change their performance between pretest and post-test and did not change how they exploited visual information. These results provide insights into how crossmodal experience may optimize multisensory integration.
11
Abstract
Ferrets (Mustela putorius furo) are a valuable animal model used in biomedical research. Like many animals, ferrets undergo significant seasonal variation in body weight, affected by photoperiod, and these variations complicate the use of weight as an indicator of health status. Overcoming this requires a better understanding of these seasonal weight changes. We provide a normative weight data set for the female ferret accounting for seasonal changes, and also investigate the effect of fluid regulation on weight change. Female ferrets (n = 39) underwent behavioural testing from May 2017 to August 2019 and were weighed daily, while housed in an animal care facility with controlled light exposure. In the winter (October to March), animals experienced 10 hours of light and 14 hours of dark, while in summer (March to October), this contingency was reversed. Individual animals varied in their body weight from approximately 700 to 1200 g. However, weights fluctuated with light cycle, with animals losing weight in summer, and gaining weight in winter such that they fluctuated between approximately 80% and 120% of their long-term average. Ferrets were weighed as part of their health assessment while experiencing water regulation for behavioural training. Water regulation superimposed additional weight changes on these seasonal fluctuations, with weight loss during the 5-day water regulation period being greater in summer than winter. Analysing the data with a Generalised Linear Model confirmed that the percentage decrease in weight per week was relatively constant throughout the summer months, while the percentage increase in body weight per week in winter decreased through the season. Finally, we noted that the timing of oestrus was reliably triggered by the increase in day length in spring. These data establish a normative benchmark for seasonal weight variation in female ferrets that can be incorporated into the health assessment of an animal's condition.
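As a toy illustration of the kind of trend the analysis describes, the week-on-week percentage weight gain in winter can be summarised with a simple least-squares line. The numbers below are invented for illustration only and are not the study's data; the study itself fitted a Generalised Linear Model to the recorded weights.

```python
# Closed-form simple linear regression (ordinary least squares),
# sketching how a declining weekly weight-gain trend could be quantified.

def ols_fit(x, y):
    """Return (slope, intercept) minimising squared error."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical winter data: % weight gain per week shrinks as the
# season progresses (week 1 through week 20).
weeks = [1, 5, 10, 15, 20]
gain_pct = [2.0, 1.6, 1.1, 0.6, 0.1]
slope, intercept = ols_fit(weeks, gain_pct)  # slope is negative here
```

A GLM generalises this idea to non-Gaussian outcomes and multiple predictors (season, water regulation status), but the fitted trend it reports is interpreted the same way as the slope above.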
Affiliation(s)
- Eleanor J. Jones
- The Ear Institute, University College London, London, England, United Kingdom
- Katarina C. Poole
- The Ear Institute, University College London, London, England, United Kingdom
- Joseph Sollini
- The Ear Institute, University College London, London, England, United Kingdom
- Stephen M. Town
- The Ear Institute, University College London, London, England, United Kingdom
- Jennifer K. Bizley
- The Ear Institute, University College London, London, England, United Kingdom
12
Bizley JK. Auditory Neuroscience: Unravelling How the Brain Gives Sound Meaning. Curr Biol 2020; 30:R400-R402. DOI: 10.1016/j.cub.2020.03.041.
13
Abstract
Much environmental enrichment for laboratory animals is intended to enhance animal welfare and normalcy by providing stimulation to reduce 'boredom'. Behavioural manifestations of boredom include restless sensation-seeking behaviours combined with indicators of sub-optimal arousal. Here we explored whether these signs could be reduced by extra daily play opportunity in laboratory ferrets. Specifically, we hypothesised that playtime would reduce restlessness, aggression, sensation-seeking and awake drowsiness, even 24 h later in the homecage. Female ferrets (n = 14) were group housed in enriched multi-level cages. Playtime involved exploring a room containing a ball pool, paper bags, balls containing bells, and a familiar interactive human for 1 h. This was repeated on three consecutive mornings, and on the fourth morning, homecage behaviour was compared between ferrets who had experienced the playtime treatment versus control cagemates who had not. Their investigation of stimuli (positive = mouse odour or ball; ambiguous = empty bottle or tea-strainer; and negative = peppermint or bitter apple odour) was also recorded. We then swapped treatments, creating a paired experimental design. Ferrets under control conditions lay awake with their eyes open and screeched significantly more, but slept and sat/stood less, than following playtime. They also contacted negative and ambiguous stimuli significantly more under control conditions than they did following playtime; contact with positive stimuli showed no effects. Attempts to blind the observer to treatments were unsuccessful, so replication is required, but the findings suggest that playtime may have reduced both sub-optimal arousal and restless sensation seeking behaviour, consistent with reducing boredom.
Affiliation(s)
- Charlotte C Burn
- Animal Welfare Science and Ethics, The Royal Veterinary College, Hertfordshire, UK
- Jade Raffle
- Animal Welfare Science and Ethics, The Royal Veterinary College, Hertfordshire, UK
14
Freeman LCA, Wood KC, Bizley JK. Multisensory stimuli improve relative localisation judgments compared to unisensory auditory or visual stimuli. J Acoust Soc Am 2018; 143:EL516. PMID: 29960438; PMCID: PMC6018061; DOI: 10.1121/1.5042759.
Abstract
Observers performed a relative localisation task in which they reported whether the second of two sequentially presented signals occurred to the left or right of the first. Stimuli were detectability-matched auditory, visual, or auditory-visual signals and the goal was to compare changes in performance with eccentricity across modalities. Visual performance was superior to auditory at the midline, but inferior in the periphery, while auditory-visual performance exceeded both at all locations. No such advantage was seen when performance for auditory-only trials was contrasted with trials in which the first stimulus was auditory-visual and the second auditory only.
Affiliation(s)
- Laura C A Freeman
- Ear Institute, University College London, 332 Gray's Inn Road, London, WC1X 8EE, United Kingdom
- Katherine C Wood
- Ear Institute, University College London, 332 Gray's Inn Road, London, WC1X 8EE, United Kingdom
- Jennifer K Bizley
- Ear Institute, University College London, 332 Gray's Inn Road, London, WC1X 8EE, United Kingdom

15
Atilgan H, Town SM, Wood KC, Jones GP, Maddox RK, Lee AKC, Bizley JK. Integration of Visual Information in Auditory Cortex Promotes Auditory Scene Analysis through Multisensory Binding. Neuron 2018; 97:640-655.e4. [PMID: 29395914 PMCID: PMC5814679 DOI: 10.1016/j.neuron.2017.12.034] [Citation(s) in RCA: 79] [Impact Index Per Article: 13.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2017] [Revised: 10/28/2017] [Accepted: 12/22/2017] [Indexed: 12/29/2022]
Abstract
How and where in the brain audio-visual signals are bound to create multimodal objects remains unknown. One hypothesis is that temporal coherence between dynamic multisensory signals provides a mechanism for binding stimulus features across sensory modalities. Here, we report that when the luminance of a visual stimulus is temporally coherent with the amplitude fluctuations of one sound in a mixture, the representation of that sound is enhanced in auditory cortex. Critically, this enhancement extends to include both binding and non-binding features of the sound. We demonstrate that visual information conveyed from visual cortex via the phase of the local field potential is combined with auditory information within auditory cortex. These data provide evidence that early cross-sensory binding provides a bottom-up mechanism for the formation of cross-sensory objects and that one role for multisensory binding in auditory cortex is to support auditory scene analysis.
- Visual stimuli can shape how auditory cortical neurons respond to sound mixtures
- Temporal coherence between senses enhances sound features of a bound multisensory object
- Visual stimuli elicit changes in the phase of the local field potential in auditory cortex
- Vision-induced phase effects are lost when visual cortex is reversibly silenced
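The temporal-coherence manipulation described in this abstract can be sketched numerically: when a visual luminance signal tracks one sound's amplitude envelope, the two timecourses are strongly correlated, while an independent sound's envelope is not. This is an illustrative sketch only, not the authors' code; the envelope model (a smoothed random walk), the mixing weights, and the sample count are all invented.

```python
import random

random.seed(0)

def smooth_envelope(n, alpha=0.9):
    """A slowly varying amplitude envelope, modeled as a smoothed random walk."""
    env, x = [], 0.5
    for _ in range(n):
        x = alpha * x + (1 - alpha) * random.random()
        env.append(x)
    return env

def pearson_r(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Two sounds in a mixture, each with an independent amplitude envelope
sound_a = smooth_envelope(500)
sound_b = smooth_envelope(500)

# Visual luminance tracks sound A's envelope (temporally coherent), plus noise
luminance = [0.8 * x + 0.05 * random.random() for x in sound_a]

r_coherent = pearson_r(luminance, sound_a)
r_incoherent = pearson_r(luminance, sound_b)
print(round(r_coherent, 2), round(r_incoherent, 2))
```

The study's claim is about what this coherence does in auditory cortex; the sketch only shows what "temporally coherent" means at the stimulus level.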
Affiliation(s)
- Huriye Atilgan
- The Ear Institute, University College London, London, UK
- Stephen M Town
- The Ear Institute, University College London, London, UK
- Gareth P Jones
- The Ear Institute, University College London, London, UK
- Ross K Maddox
- Department of Biomedical Engineering and Department of Neuroscience, Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA; Institute for Learning and Brain Sciences and Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- Adrian K C Lee
- Institute for Learning and Brain Sciences and Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA

16
Abstract
A key function of the brain is to provide a stable representation of an object's location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. In addition, we also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position.
Affiliation(s)
- Stephen M. Town
- Ear Institute, University College London, London, United Kingdom
- W. Owen Brimijoin
- MRC/CSO Institute of Hearing Research – Scottish Section, Glasgow, United Kingdom

17
Bizley JK, Jones GP, Town SM. Where are multisensory signals combined for perceptual decision-making? Curr Opin Neurobiol 2016; 40:31-37. [DOI: 10.1016/j.conb.2016.06.003] [Citation(s) in RCA: 46] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2016] [Revised: 05/26/2016] [Accepted: 06/02/2016] [Indexed: 12/21/2022]
18
Wood KC, Bizley JK. Erratum: Relative sound localisation abilities in human listeners [J. Acoust. Soc. Am. 138, 674-686 (2015)]. J Acoust Soc Am 2016; 139:3043. [PMID: 27369125 PMCID: PMC6910014 DOI: 10.1121/1.4952412] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/01/2016] [Accepted: 05/11/2016] [Indexed: 06/06/2023]
Affiliation(s)
- Katherine C Wood
- University College London Ear Institute, 332 Grays Inn Road, London, WC1X 8EE, United Kingdom
- Jennifer K Bizley
- University College London Ear Institute, 332 Grays Inn Road, London, WC1X 8EE, United Kingdom

19
Bizley JK, Maddox RK, Lee AKC. Defining Auditory-Visual Objects: Behavioral Tests and Physiological Mechanisms. Trends Neurosci 2016; 39:74-85. [PMID: 26775728 PMCID: PMC4738154 DOI: 10.1016/j.tins.2015.12.007] [Citation(s) in RCA: 47] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2015] [Revised: 12/03/2015] [Accepted: 12/11/2015] [Indexed: 11/30/2022]
Abstract
Crossmodal integration is a term applicable to many phenomena in which one sensory modality influences task performance or perception in another sensory modality. We distinguish the term binding as one that should be reserved specifically for the process that underpins perceptual object formation. To unambiguously differentiate binding from other types of integration, behavioral and neural studies must investigate perception of a feature orthogonal to the features that link the auditory and visual stimuli. We argue that supporting true perceptual binding (as opposed to other processes such as decision-making) is one role for cross-sensory influences in early sensory cortex. These early multisensory interactions may therefore form a physiological substrate for the bottom-up grouping of auditory and visual stimuli into auditory-visual (AV) objects.
- Crossmodal integration and binding have been treated as synonymous in the literature, with no clear delineation between perceptual changes and other interactions such as decision-making.
- Crossmodal binding is proposed as a distinct form of integration leading to multisensory object formation.
- Multisensory stimuli are most beneficial in noisy situations, but few studies use stimulus competition to investigate the processes underpinning multisensory integration.
- Evidence suggests that both visual and auditory attention is object-based: all features within an object are enhanced and there is a cost to attending features across versus within objects.
- Multisensory interactions can be observed throughout the brain, including early sensory cortex.
- The role of early sensory cortex in multisensory integration is unknown, but may underlie crossmodal binding.
Affiliation(s)
- Jennifer K Bizley
- University College London (UCL) Ear Institute, 332 Gray's Inn Road, London, WC1X 8EE, UK.
- Ross K Maddox
- Institute for Learning and Brain Sciences, University of Washington, 1715 NE Columbia Road, Portage Bay Building, Box 357988, Seattle, WA 98195, USA
- Adrian K C Lee
- Institute for Learning and Brain Sciences, University of Washington, 1715 NE Columbia Road, Portage Bay Building, Box 357988, Seattle, WA 98195, USA; Department of Speech and Hearing Sciences, University of Washington, 1417 NE 42nd Street, Eagleson Hall, Box 354875, Seattle, WA 98105, USA.

20
Bizley JK, Elliott N, Wood KC, Vickers DA. Simultaneous Assessment of Speech Identification and Spatial Discrimination: A Potential Testing Approach for Bilateral Cochlear Implant Users? Trends Hear 2015; 19:2331216515619573. [PMID: 26721927 PMCID: PMC4771039 DOI: 10.1177/2331216515619573] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022] Open
Abstract
With increasing numbers of children and adults receiving bilateral cochlear implants, there is an urgent need for assessment tools that enable testing of binaural hearing abilities. Current test batteries are either limited in scope or are of an impractical duration for routine testing. Here, we report a behavioral test that enables combined testing of speech identification and spatial discrimination in noise. In this task, multitalker babble was presented from all speakers, and pairs of speech tokens were sequentially presented from two adjacent speakers. Listeners were required to identify both words from a closed set of four possibilities and to determine whether the second token was presented to the left or right of the first. In Experiment 1, normal-hearing adult listeners were tested at 15° intervals throughout the frontal hemifield. Listeners showed highest spatial discrimination performance in and around the frontal midline, with a decline at more eccentric locations. In contrast, speech identification abilities were least accurate near the midline and showed an improvement in performance at more lateral locations. In Experiment 2, normal-hearing listeners were assessed using a restricted range of speaker locations designed to match those found in clinical testing environments. Here, speakers were separated by 15° around the midline and 30° at more lateral locations. This resulted in a similar pattern of behavioral results as in Experiment 1. We conclude that this test offers the potential to assess both spatial discrimination and the ability to use spatial information for unmasking in clinical populations.
21
Wood KC, Bizley JK. Relative sound localisation abilities in human listeners. J Acoust Soc Am 2015; 138:674-686. [PMID: 26328685 PMCID: PMC4610194 DOI: 10.1121/1.4923452] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/12/2015] [Revised: 06/16/2015] [Accepted: 06/22/2015] [Indexed: 06/05/2023]
Abstract
Spatial acuity varies with sound-source azimuth, signal-to-noise ratio, and the spectral characteristics of the sound source. Here, the spatial localisation abilities of listeners were assessed using a relative localisation task. This task tested localisation ability at fixed angular separations throughout space using a two-alternative forced-choice design across a variety of listening conditions. Subjects were required to determine whether a target sound originated to the left or right of a preceding reference in the presence of a multi-source noise background. Experiment 1 demonstrated that subjects' ability to determine the relative location of two sources declined with less favourable signal-to-noise ratios and at peripheral locations. Experiment 2 assessed performance with both broadband and spectrally restricted stimuli designed to limit localisation cues to predominantly interaural level differences or interaural timing differences (ITDs). Predictions generated from topographic, modified topographic, and two-channel models of sound localisation suggest that for low-pass stimuli, where ITD cues were dominant, the two-channel model provides an adequate description of the experimental data, whereas for broadband and high frequency bandpass stimuli none of the models was able to fully account for performance. Experiment 3 demonstrated that relative localisation performance was uninfluenced by shifts in gaze direction.
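Left/right judgments in a two-alternative forced-choice design like this are typically summarised by fitting a psychometric function relating the proportion of "right" responses to the angular separation of target and reference. The sketch below fits a cumulative Gaussian by maximum likelihood; it is not the paper's analysis, and the separations, trial counts, and grid-search ranges are all invented for illustration.

```python
import math

def cum_gauss(x, mu, sigma):
    """P("right" response) as a cumulative Gaussian of target-minus-reference angle."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def neg_log_likelihood(mu, sigma, data):
    """Binomial negative log-likelihood over (separation, n_right, n_trials) triples."""
    nll = 0.0
    for sep, n_right, n_trials in data:
        p = min(max(cum_gauss(sep, mu, sigma), 1e-9), 1 - 1e-9)
        nll -= n_right * math.log(p) + (n_trials - n_right) * math.log(1 - p)
    return nll

# Hypothetical responses: (separation in degrees, "right" responses, trials)
data = [(-30, 2, 20), (-15, 5, 20), (0, 10, 20), (15, 16, 20), (30, 19, 20)]

# Crude integer grid search for the bias (mu) and slope (sigma) parameters
best = min(((mu, sigma) for mu in range(-10, 11) for sigma in range(1, 40)),
           key=lambda p: neg_log_likelihood(p[0], p[1], data))
print("bias =", best[0], "deg; sigma =", best[1], "deg")
```

Here sigma indexes spatial acuity (larger sigma means a shallower psychometric function), which is the quantity that declines at unfavourable signal-to-noise ratios and peripheral locations in the abstract above.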
Affiliation(s)
- Katherine C Wood
- University College London Ear Institute, 332 Grays Inn Road, London, WC1X 8EE, United Kingdom
- Jennifer K Bizley
- University College London Ear Institute, 332 Grays Inn Road, London, WC1X 8EE, United Kingdom

22
Bizley JK, Bajo VM, Nodal FR, King AJ. Cortico-Cortical Connectivity Within Ferret Auditory Cortex. J Comp Neurol 2015; 523:2187-210. [PMID: 25845831 PMCID: PMC4737260 DOI: 10.1002/cne.23784] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2014] [Revised: 03/29/2015] [Accepted: 04/01/2015] [Indexed: 12/29/2022]
Abstract
Despite numerous studies of auditory cortical processing in the ferret (Mustela putorius), very little is known about the connections between the different regions of the auditory cortex that have been characterized cytoarchitectonically and physiologically. We examined the distribution of retrograde and anterograde labeling after injecting tracers into one or more regions of ferret auditory cortex. Injections of different tracers at frequency-matched locations in the core areas, the primary auditory cortex (A1) and anterior auditory field (AAF), of the same animal revealed the presence of reciprocal connections with overlapping projections to and from discrete regions within the posterior pseudosylvian and suprasylvian fields (PPF and PSF), suggesting that these connections are frequency specific. In contrast, projections from the primary areas to the anterior dorsal field (ADF) on the anterior ectosylvian gyrus were scattered and non-overlapping, consistent with the non-tonotopic organization of this field. The relative strength of the projections originating in each of the primary fields differed, with A1 predominantly targeting the posterior bank fields PPF and PSF, which in turn project to the ventral posterior field, whereas AAF projects more heavily to the ADF, which then projects to the anteroventral field and the pseudosylvian sulcal cortex. These findings suggest that parallel anterior and posterior processing networks may exist, although the connections between different areas often overlap and interactions were present at all levels.
Affiliation(s)
- Jennifer K Bizley
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, OX1 3PT, United Kingdom; Ear Institute, University College London, London, WC1X 8EE, United Kingdom
- Victoria M Bajo
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, OX1 3PT, United Kingdom
- Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, OX1 3PT, United Kingdom

23
Abstract
Timbre distinguishes sounds of equal loudness, pitch, and duration; however, little is known about the neural mechanisms underlying timbre perception. Such understanding requires animal models such as the ferret in which neuronal and behavioral observation can be combined. The current study asked what spectral cues ferrets use to discriminate between synthetic vowels. Ferrets were trained to discriminate vowels differing in the position of the first (F1) and second formants (F2), inter-formant distance, and spectral centroid. In experiment 1, ferrets responded to probe trials containing novel vowels in which the spectral cues of trained vowels were mismatched. Regression models fitted to behavioral responses determined that F2 and spectral centroid were stronger predictors of ferrets' behavior than either F1 or inter-formant distance. Experiment 2 examined responses to single formant vowels and found that individual spectral peaks failed to account for multi-formant vowel perception. Experiment 3 measured responses to unvoiced vowels and showed that ferrets could generalize vowel identity across voicing conditions. Experiment 4 employed the same design as experiment 1 but with human participants. Their responses were also predicted by F2 and spectral centroid. Together these findings further support the ferret as a model for studying the neural processes underlying timbre perception.
Affiliation(s)
- Stephen M Town
- Ear Institute, University College London, 332 Gray's Inn Road, London WC1X 8EE, United Kingdom
- Huriye Atilgan
- Ear Institute, University College London, 332 Gray's Inn Road, London WC1X 8EE, United Kingdom
- Katherine C Wood
- Ear Institute, University College London, 332 Gray's Inn Road, London WC1X 8EE, United Kingdom
- Jennifer K Bizley
- Ear Institute, University College London, 332 Gray's Inn Road, London WC1X 8EE, United Kingdom

24
Maddox RK, Atilgan H, Bizley JK, Lee AKC. Auditory selective attention is enhanced by a task-irrelevant temporally coherent visual stimulus in human listeners. eLife 2015; 4:e04995. [PMID: 25654748 PMCID: PMC4337603 DOI: 10.7554/elife.04995] [Citation(s) in RCA: 41] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2014] [Accepted: 12/27/2014] [Indexed: 11/22/2022] Open
Abstract
In noisy settings, listening is aided by correlated dynamic visual cues gleaned from a talker's face, an improvement often attributed to visually reinforced linguistic information. In this study, we aimed to test the effect of audio-visual temporal coherence alone on selective listening, free of linguistic confounds. We presented listeners with competing auditory streams whose amplitude varied independently and a visual stimulus with varying radius, while manipulating the cross-modal temporal relationships. Performance improved when the auditory target's timecourse matched that of the visual stimulus. The fact that the coherence was between task-irrelevant stimulus features suggests that the observed improvement stemmed from the integration of auditory and visual streams into cross-modal objects, enabling listeners to better attend the target. These findings suggest that in everyday conditions, where listeners can often see the source of a sound, temporal cues provided by vision can help listeners to select one sound source from a mixture.
Affiliation(s)
- Ross K Maddox
- Institute for Learning and Brain Sciences, University of Washington, Seattle, United States
- Huriye Atilgan
- Ear Institute, University College London, London, United Kingdom
- Adrian KC Lee
- Institute for Learning and Brain Sciences, University of Washington, Seattle, United States
- Department of Speech and Hearing Sciences, University of Washington, Seattle, United States

25
Kumpik DP, Roberts HE, King AJ, Bizley JK. Visual sensitivity is a stronger determinant of illusory processes than auditory cue parameters in the sound-induced flash illusion. J Vis 2014; 14(7):12. [PMID: 24961249 DOI: 10.1167/14.7.12] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
The sound-induced flash illusion (SIFI) is a multisensory perceptual phenomenon in which the number of brief visual stimuli perceived by an observer is influenced by the number of concurrently presented sounds. While the strength of this illusion has been shown to be modulated by the temporal congruence of the stimuli from each modality, there is conflicting evidence regarding its dependence upon their spatial congruence. We addressed this question by examining SIFIs under conditions in which the spatial reliability of the visual stimuli was degraded and different sound localization cues were presented using either free-field or closed-field stimulation. The likelihood of reporting a SIFI varied with the spatial cue composition of the auditory stimulus and was highest when binaural cues were presented over headphones. SIFIs were more common for small flashes than for large flashes, and for small flashes at peripheral locations, subjects experienced a greater number of illusory fusion events than fission events. However, the SIFI was not dependent on the spatial proximity of the audiovisual stimuli, but was instead determined primarily by differences in subjects' underlying sensitivity across the visual field to the number of flashes presented. Our findings indicate that the influence of auditory stimulation on visual numerosity judgments can occur independently of the spatial relationship between the stimuli.
Affiliation(s)
- Daniel P Kumpik
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK
- Helen E Roberts
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK
- Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK
- Jennifer K Bizley
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK; UCL Ear Institute, London, UK

26
Abstract
Timbre is the attribute that distinguishes sounds of equal pitch, loudness and duration. It contributes to our perception and discrimination of different vowels and consonants in speech, instruments in music and environmental sounds. Here we begin by reviewing human timbre perception and the spectral and temporal acoustic features that give rise to timbre in speech, musical and environmental sounds. We also consider the perception of timbre by animals, both in the case of human vowels and non-human vocalizations. We then explore the neural representation of timbre, first within the peripheral auditory system and later at the level of the auditory cortex. We examine the neural networks that are implicated in timbre perception and the computations that may be performed in auditory cortex to enable listeners to extract information about timbre. We consider whether single neurons in auditory cortex are capable of representing spectral timbre independently of changes in other perceptual attributes and the mechanisms that may shape neural sensitivity to timbre. Finally, we conclude by outlining some of the questions that remain about the role of neural mechanisms in behavior and consider some potentially fruitful avenues for future research.
27
Bizley JK, Walker KMM, King AJ, Schnupp JWH. Spectral timbre perception in ferrets: discrimination of artificial vowels under different listening conditions. J Acoust Soc Am 2013; 133:365-376. [PMID: 23297909 PMCID: PMC3783993 DOI: 10.1121/1.4768798] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
Spectral timbre is an acoustic feature that enables human listeners to determine the identity of a spoken vowel. Despite its importance to sound perception, little is known about the neural representation of sound timbre and few psychophysical studies have investigated timbre discrimination in non-human species. In this study, ferrets were positively conditioned to discriminate artificial vowel sounds in a two-alternative-forced-choice paradigm. Animals quickly learned to discriminate the vowel sound /u/ from /ε/ and were immediately able to generalize across a range of voice pitches. They were further tested in a series of experiments designed to assess how well they could discriminate these vowel sounds under different listening conditions. First, a series of morphed vowels was created by systematically shifting the location of the first and second formant frequencies. Second, the ferrets were tested with single formant stimuli designed to assess which spectral cues they could be using to make their decisions. Finally, vowel discrimination thresholds were derived in the presence of noise maskers presented from either the same or a different spatial location. These data indicate that ferrets show robust vowel discrimination behavior across a range of listening conditions and that this ability shares many similarities with human listeners.
Affiliation(s)
- Jennifer K Bizley
- Department of Physiology, Anatomy and Genetics, University of Oxford, Parks Road, Oxford OX1 3PT, United Kingdom.

28
Abstract
We are able to rapidly recognize and localize the many sounds in our environment. We can describe any of these sounds in terms of various independent "features" such as their loudness, pitch, or position in space. However, we still know surprisingly little about how neurons in the auditory brain, specifically the auditory cortex, might form representations of these perceptual characteristics from the information that the ear provides about sound acoustics. In this article, the authors examine evidence that the auditory cortex is necessary for processing the pitch, timbre, and location of sounds, and document how neurons across multiple auditory cortical fields might represent these as trains of action potentials. They conclude by asking whether neurons in different regions of the auditory cortex might not be simply sensitive to each of these three sound features but whether they might be selective for one of them. The few studies that have examined neural sensitivity to multiple sound attributes provide only limited support for neural selectivity within auditory cortex. Providing an explanation of the neural basis of feature invariance is thus one of the major challenges to sensory neuroscience obtaining the ultimate goal of understanding how neural firing patterns in the brain give rise to perception.
Affiliation(s)
- Jennifer K Bizley
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom.

29
Bajo VM, Nodal FR, Bizley JK, King AJ. The non-lemniscal auditory cortex in ferrets: convergence of corticotectal inputs in the superior colliculus. Front Neuroanat 2010; 4:18. [PMID: 20640247 PMCID: PMC2904598 DOI: 10.3389/fnana.2010.00018] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2010] [Accepted: 04/23/2010] [Indexed: 11/19/2022] Open
Abstract
Descending cortical inputs to the superior colliculus (SC) contribute to the unisensory response properties of the neurons found there and are critical for multisensory integration. However, little is known about the relative contribution of different auditory cortical areas to this projection or the distribution of their terminals in the SC. We characterized this projection in the ferret by injecting tracers in the SC and auditory cortex. Large pyramidal neurons were labeled in layer V of different parts of the ectosylvian gyrus after tracer injections in the SC. Those cells were most numerous in the anterior ectosylvian gyrus (AEG), and particularly in the anterior ventral field, which receives both auditory and visual inputs. Labeling was also found in the posterior ectosylvian gyrus (PEG), predominantly in the tonotopically organized posterior suprasylvian field. Profuse anterograde labeling was present in the SC following tracer injections at the site of acoustically responsive neurons in the AEG or PEG, with terminal fields being both more prominent and clustered for inputs originating from the AEG. Terminals from both cortical areas were located throughout the intermediate and deep layers, but were most concentrated in the posterior half of the SC, where peripheral stimulus locations are represented. No inputs were identified from primary auditory cortical areas, although some labeling was found in the surrounding sulci. Our findings suggest that higher level auditory cortical areas, including those involved in multisensory processing, may modulate SC function via their projections into its deeper layers.
Affiliation(s)
- Victoria M Bajo
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK

30
Schnupp JWH, Bizley JK. On pitch, the ear and the brain of the beholder. Focus on "Neural coding of periodicity in marmoset auditory cortex". J Neurophysiol 2010; 103:1708-11. [PMID: 20164385 DOI: 10.1152/jn.00182.2010] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
31
Nodal FR, Kacelnik O, Bajo VM, Bizley JK, Moore DR, King AJ. Lesions of the auditory cortex impair azimuthal sound localization and its recalibration in ferrets. J Neurophysiol 2009; 103:1209-25. [PMID: 20032231 DOI: 10.1152/jn.00991.2009] [Citation(s) in RCA: 50] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The role of auditory cortex in sound localization and its recalibration by experience was explored by measuring the accuracy with which ferrets turned toward and approached the source of broadband sounds in the horizontal plane. In one group, large bilateral lesions were made of the middle ectosylvian gyrus, where the primary auditory cortical fields are located, and part of the anterior and/or posterior ectosylvian gyrus, which contain higher-level fields. In the second group, the lesions were intended to be confined to primary auditory cortex (A1). The ability of the animals to localize noise bursts of different duration and level was measured before and after the lesions were made. A1 lesions produced a modest disruption of approach-to-target responses to short-duration stimuli (<500 ms) on both sides of space, whereas head orienting accuracy was unaffected. More extensive lesions produced much greater auditory localization deficits, again primarily for shorter sounds. In these ferrets, the accuracy of both the approach-to-target behavior and the orienting responses was impaired, and they could do little more than correctly lateralize the stimuli. Although both groups of ferrets were still able to localize long-duration sounds accurately, they were, in contrast to ferrets with an intact auditory cortex, unable to relearn to localize these stimuli after altering the spatial cues available by reversibly plugging one ear. These results indicate that both primary and nonprimary cortical areas are necessary for normal sound localization, although only higher auditory areas seem to contribute to accurate head orienting behavior. They also show that the auditory cortex, and A1 in particular, plays an essential role in training-induced plasticity in adult ferrets, and that this is the case for both head orienting responses and approach-to-target behavior.
Affiliation(s)
- Fernando R Nodal
- Dept. of Physiology, Anatomy and Genetics, Sherrington Bldg., Univ. of Oxford, Parks Rd., Oxford OX1 3PT, UK.
32
Walker KMM, Schnupp JWH, Hart-Schnupp SMB, King AJ, Bizley JK. Pitch discrimination by ferrets for simple and complex sounds. J Acoust Soc Am 2009; 126:1321-1335. [PMID: 19739746] [PMCID: PMC2784999] [DOI: 10.1121/1.3179676]
Abstract
Although many studies have examined the performance of animals in detecting a frequency change in a sequence of tones, few have measured animals' discrimination of the fundamental frequency (F0) of complex, naturalistic stimuli. Additionally, it is not yet clear if animals perceive the pitch of complex sounds along a continuous, low-to-high scale. Here, four ferrets (Mustela putorius) were trained on a two-alternative forced choice task to discriminate sounds that were higher or lower in F0 than a reference sound using pure tones and artificial vowels as stimuli. Average Weber fractions for ferrets on this task varied from approximately 20% to 80% across references (200-1200 Hz), and these fractions were similar for pure tones and vowels. These thresholds are approximately ten times higher than those typically reported for other mammals on frequency change detection tasks that use go/no-go designs. Naive human listeners outperformed ferrets on the present task, but they showed similar effects of stimulus type and reference F0. These results suggest that while non-human animals can be trained to label complex sounds as high or low in pitch, this task may be much more difficult for animals than simply detecting a frequency change.
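The Weber fractions reported here are simply the just-discriminable change in F0 expressed as a proportion of the reference F0. A minimal sketch of the calculation (the function name and numbers are illustrative, not taken from the study):

```python
def weber_fraction(reference_f0: float, threshold_f0: float) -> float:
    """Weber fraction: just-noticeable F0 change as a proportion of the reference."""
    return abs(threshold_f0 - reference_f0) / reference_f0

# If a 200 Hz reference is only reliably discriminated once the comparison
# reaches 260 Hz, the Weber fraction is 0.30 (30%), within the ~20-80% range
# reported for the ferrets across 200-1200 Hz references.
print(weber_fraction(200.0, 260.0))  # 0.3
```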
Affiliation(s)
- Kerry M M Walker
- Department of Physiology, Anatomy and Genetics, Sherrington Building, Parks Road, University of Oxford, Oxfordshire, United Kingdom.
33
Bizley JK, King AJ. Visual-auditory spatial processing in auditory cortical neurons. Brain Res 2008; 1242:24-36. [PMID: 18407249] [DOI: 10.1016/j.brainres.2008.02.087]
Abstract
Neurons responsive to visual stimulation have now been described in the auditory cortex of various species, but their functions are largely unknown. Here we investigate the auditory and visual spatial sensitivity of neurons recorded in 5 different primary and non-primary auditory cortical areas of the ferret. We quantified the spatial tuning of neurons by measuring the responses to stimuli presented across a range of azimuthal positions and calculating the mutual information (MI) between the neural responses and the location of the stimuli that elicited them. MI estimates of spatial tuning were calculated for unisensory visual, unisensory auditory and for spatially and temporally coincident auditory-visual stimulation. The majority of visually responsive units conveyed significant information about light-source location, whereas, over a corresponding region of space, acoustically responsive units generally transmitted less information about sound-source location. Spatial sensitivity for visual, auditory and bisensory stimulation was highest in the anterior dorsal field, the auditory area previously shown to be innervated by a region of extrastriate visual cortex thought to be concerned primarily with spatial processing, whereas the posterior pseudosylvian field and posterior suprasylvian field, whose principal visual input arises from cortical areas that appear to be part of the 'what' processing stream, conveyed less information about stimulus location. In some neurons, pairing visual and auditory stimuli led to an increase in the spatial information available relative to the most effective unisensory stimulus, whereas, in a smaller subpopulation, combined stimulation decreased the spatial MI. These data suggest that visual inputs to auditory cortex can enhance spatial processing in the presence of multisensory cues and could therefore potentially underlie visual influences on auditory localization.
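The mutual information (MI) values described above quantify how well stimulus location can be read out from a neuron's responses. A minimal plug-in estimator over discretized responses is sketched below; the names are illustrative, and real analyses typically add bias corrections that this naive estimator (which is upward-biased for small trial counts) omits:

```python
import numpy as np

def mutual_information(stim, resp):
    """Plug-in (naive) MI estimate, in bits, from paired discrete samples."""
    stim, resp = np.asarray(stim), np.asarray(resp)
    s_vals, s_idx = np.unique(stim, return_inverse=True)
    r_vals, r_idx = np.unique(resp, return_inverse=True)
    # Joint probability table over (stimulus, response) categories.
    joint = np.zeros((len(s_vals), len(r_vals)))
    for i, j in zip(s_idx, r_idx):
        joint[i, j] += 1
    joint /= joint.sum()
    ps = joint.sum(axis=1, keepdims=True)   # marginal over stimuli
    pr = joint.sum(axis=0, keepdims=True)   # marginal over responses
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])).sum())

# A perfectly informative code over two equiprobable locations carries 1 bit.
locations = [0, 0, 1, 1]
spike_counts = [2, 2, 7, 7]
print(mutual_information(locations, spike_counts))  # 1.0
```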
Affiliation(s)
- Jennifer K Bizley
- Department of Physiology, Anatomy and Genetics, University of Oxford, Parks Road, Oxford, OX1 3PT, UK.
34
Nelken I, Bizley JK, Nodal FR, Ahmed B, King AJ, Schnupp JWH. Responses of auditory cortex to complex stimuli: functional organization revealed using intrinsic optical signals. J Neurophysiol 2008; 99:1928-41. [PMID: 18272880] [DOI: 10.1152/jn.00469.2007]
Abstract
We used optical imaging of intrinsic signals to study the large-scale organization of ferret auditory cortex in response to complex sounds. Cortical responses were collected during continuous stimulation by sequences of sounds with varying frequency, period, or interaural level differences. We used a set of stimuli that differ in spectral structure, but have the same periodicity and therefore evoke the same pitch percept (click trains, sinusoidally amplitude modulated tones, and iterated ripple noise). These stimuli failed to reveal a consistent periodotopic map across the auditory fields imaged. Rather, gradients of period sensitivity differed for the different types of periodic stimuli. Binaural interactions were studied both with single contralateral, ipsilateral, and diotic broadband noise bursts and with sequences of broadband noise bursts with varying level presented contralaterally, ipsilaterally, or in opposite phase to both ears. Contralateral responses were generally largest and ipsilateral responses were smallest when using single noise bursts, but the extent of the activated area was large and comparable in all three aural configurations. Modulating the amplitude in counter phase to the two ears generally produced weaker modulation of the optical signals than the modulation produced by the monaural stimuli. These results suggest that binaural interactions seen in cortex are most likely predominantly due to subcortical processing. Thus our optical imaging data do not support the theory that the primary or nonprimary cortical fields imaged are topographically organized to form consistent maps of systematically varying sensitivity either to stimulus pitch or to simple binaural properties of the acoustic stimuli.
Affiliation(s)
- Israel Nelken
- Department of Neurobiology, Interdisciplinary Center for Neural Computation, The Hebrew University, Jerusalem, Israel.
35
Abstract
Although the auditory cortex is known to be essential for normal sound localization in the horizontal plane, its contribution to vertical localization has not so far been examined. In this study, we measured the acuity with which ferrets could discriminate between two speakers in the midsagittal plane before and after silencing activity bilaterally in the primary auditory cortex (A1). This was achieved either by subdural placement of Elvax implants containing the GABA(A) receptor agonist muscimol or by making aspiration lesions after determining the approximate location of A1 electrophysiologically. Psychometric functions and minimum audible angles were measured in the upper hemifield for 500-, 200-, and 40-ms noise bursts. Muscimol-Elvax inactivation of A1 produced a small but significant deficit in the animals' ability to localize brief (40-ms) sounds, which was reversed after removal of the Elvax implants. A similar deficit in vertical localization was observed after bilateral aspiration lesions of A1, whereas performance at longer sound durations was unaffected. Another group of ferrets received larger lesions, encompassing both primary and nonprimary auditory cortical areas, and showed a greater deficit with performance being impaired for long- and short-duration (500- and 40-ms, respectively) stimuli. These data suggest that the integrity of the auditory cortex is required to successfully utilize spectral localization cues, which are thought to provide the basis for vertical localization, and that multiple cortical fields, including A1, contribute to this task.
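A minimum audible angle of this kind is conventionally the speaker separation at which the psychometric function crosses a criterion level of performance (often 75% correct in two-alternative designs). A minimal sketch, with made-up numbers and linear interpolation standing in for a fitted psychometric curve:

```python
import numpy as np

def minimum_audible_angle(separations_deg, prop_correct, criterion=0.75):
    """Separation at which proportion correct first reaches `criterion`.

    Assumes prop_correct increases monotonically with separation, as
    np.interp requires monotonic x-coordinates.
    """
    seps = np.asarray(separations_deg, dtype=float)
    pc = np.asarray(prop_correct, dtype=float)
    return float(np.interp(criterion, pc, seps))

# Hypothetical data: separations tested (degrees) and fraction of correct
# two-speaker discriminations at each separation.
seps = [5, 10, 20, 40]
pc = [0.55, 0.70, 0.85, 0.95]
print(minimum_audible_angle(seps, pc))  # ~13.3 degrees
```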
Affiliation(s)
- Jennifer K Bizley
- Department of Physiology, Anatomy and Genetics, Sherrington Building, University of Oxford, Parks Road, Oxford OX1 3PT, UK.
36
King AJ, Bajo VM, Bizley JK, Campbell RAA, Nodal FR, Schulz AL, Schnupp JWH. Physiological and behavioral studies of spatial coding in the auditory cortex. Hear Res 2007; 229:106-15. [PMID: 17314017] [PMCID: PMC7116512] [DOI: 10.1016/j.heares.2007.01.001]
Abstract
Despite extensive subcortical processing, the auditory cortex is believed to be essential for normal sound localization. However, we still have a poor understanding of how auditory spatial information is encoded in the cortex and of the relative contribution of different cortical areas to spatial hearing. We investigated the behavioral consequences of inactivating ferret primary auditory cortex (A1) on auditory localization by implanting a sustained release polymer containing the GABA(A) agonist muscimol bilaterally over A1. Silencing A1 led to a reversible deficit in the localization of brief noise bursts in both the horizontal and vertical planes. In other ferrets, large bilateral lesions of the auditory cortex, which extended beyond A1, produced more severe and persistent localization deficits. To investigate the processing of spatial information by high-frequency A1 neurons, we measured their binaural-level functions and used individualized virtual acoustic space stimuli to record their spatial receptive fields (SRFs) in anesthetized ferrets. We observed the existence of a continuum of response properties, with most neurons preferring contralateral sound locations. In many cases, the SRFs could be explained by a simple linear interaction between the acoustical properties of the head and external ears and the binaural frequency tuning of the neurons. Azimuth response profiles recorded in awake ferrets were very similar, and further analysis suggested that the slopes of these functions and location-dependent variations in spike timing are the main information-bearing parameters. Studies of sensory plasticity can also provide valuable insights into the role of different brain areas and the way in which information is represented within them. For example, stimulus-specific training allows accurate auditory localization by adult ferrets to be relearned after manipulating binaural cues by occluding one ear. Reversible inactivation of A1 resulted in slower and less complete adaptation than in normal controls, whereas selective lesions of the descending corticocollicular pathway prevented any improvement in performance. These results reveal a role for auditory cortex in training-induced plasticity of auditory localization, which could be mediated by descending cortical pathways.
Affiliation(s)
- Andrew J King
- Department of Physiology, Anatomy and Genetics, Sherrington Building, University of Oxford, Oxford, UK.
37
Abstract
Recent studies, conducted almost exclusively in primates, have shown that several cortical areas usually associated with modality-specific sensory processing are subject to influences from other senses. Here we demonstrate using single-unit recordings and estimates of mutual information that visual stimuli can influence the activity of units in the auditory cortex of anesthetized ferrets. In many cases, these units were also acoustically responsive and frequently transmitted more information in their spike discharge patterns in response to paired visual-auditory stimulation than when either modality was presented by itself. For each stimulus, this information was conveyed by a combination of spike count and spike timing. Even in primary auditory areas (primary auditory cortex [A1] and anterior auditory field [AAF]), approximately 15% of recorded units were found to have nonauditory input. This proportion increased in the higher level fields that lie ventral to A1/AAF and was highest in the anterior ventral field, where nearly 50% of the units were found to be responsive to visual stimuli only and a further quarter to both visual and auditory stimuli. Within each field, the pure-tone response properties of neurons sensitive to visual stimuli did not differ in any systematic way from those of visually unresponsive neurons. Neural tracer injections revealed direct inputs from visual cortex into auditory cortex, indicating a potential source of origin for the visual responses. Primary visual cortex projects sparsely to A1, whereas higher visual areas innervate auditory areas in a field-specific manner. These data indicate that multisensory convergence and integration are features common to all auditory cortical areas but are especially prevalent in higher areas.
Affiliation(s)
- Jennifer K Bizley
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, UK.
38
Abstract
Descending corticofugal projections are thought to play a critical role in shaping the responses of subcortical neurons. Here, we examine the origins and targets of ferret auditory corticocollicular projections. We show that the ectosylvian gyrus (EG), where the auditory cortex is located, can be subdivided into middle, anterior, and posterior regions according to the pattern of cytochrome oxidase staining and immunoreactivity for the neurofilament antibody SMI32. Injection of retrograde tracers in the inferior colliculus (IC) labeled large layer V pyramidal cells throughout the EG and adjacent sulci. Each region of the EG has a different pattern of descending projections. Neurons in the primary auditory fields in the middle EG project to the lateral nucleus (LN) of the ipsilateral IC and bilaterally to the dorsal cortex and dorsal part of the central nucleus (CN). The projection to these dorsomedial regions of the IC is predominantly ipsilateral and topographically organized. The secondary cortical fields in the posterior EG target the same midbrain areas but exclude the CN of the IC. A smaller projection to the ipsilateral LN also arises from the anterior EG, which is the only region of auditory cortex to target tegmental areas surrounding the IC, including the superior colliculus, periaqueductal gray, intercollicular tegmentum, and cuneiform nucleus. This pattern of corticocollicular connectivity is consistent with regional differences in physiological properties and provides another basis for subdividing ferret auditory cortex into functionally distinct areas.
Affiliation(s)
- Victoria M Bajo
- Department of Physiology, Anatomy and Genetics, Sherrington Building, University of Oxford, Parks Road, Oxford OX1 3PT, UK.
39
Abstract
We characterized the functional organization of different fields within the auditory cortex of anaesthetized ferrets. As previously reported, the primary auditory cortex, A1, and the anterior auditory field, AAF, are located on the middle ectosylvian gyrus. These areas exhibited a similar tonotopic organization, with high frequencies represented at the dorsal tip of the gyrus and low frequencies more ventrally, but differed in that AAF neurons had shorter response latencies than those in A1. On the basis of differences in frequency selectivity, temporal response properties and thresholds, we identified four more, previously undescribed fields. Two of these are located on the posterior ectosylvian gyrus and were tonotopically organized. Neurons in these areas responded robustly to tones, but had longer latencies, more sustained responses and a higher incidence of non-monotonic rate-level functions than those in the primary fields. Two further auditory fields, which were not tonotopically organized, were found on the anterior ectosylvian gyrus. Neurons in the more dorsal anterior area gave short-latency, transient responses to tones and were generally broadly tuned with a preference for high (>8 kHz) frequencies. Neurons in the other anterior area were frequently unresponsive to tones, but often responded vigorously to broadband noise. The presence of both tonotopic and non-tonotopic auditory cortical fields indicates that the organization of ferret auditory cortex is comparable to that seen in other mammals.
40
Smith AL, Parsons CH, Lanyon RG, Bizley JK, Akerman CJ, Baker GE, Dempster AC, Thompson ID, King AJ. An investigation of the role of auditory cortex in sound localization using muscimol-releasing Elvax. Eur J Neurosci 2004; 19:3059-72. [PMID: 15182314] [DOI: 10.1111/j.0953-816x.2004.03379.x]
Abstract
Lesion studies suggest that primary auditory cortex (A1) is required for accurate sound localization by carnivores and primates. In order to elucidate further its role in spatial hearing, we examined the behavioural consequences of reversibly inactivating ferret A1 over long periods, using Elvax implants releasing the GABA(A) receptor agonist muscimol. Subdural polymer placements were shown to deliver relatively constant levels of muscimol to underlying cortex for >5 months. The measured diffusion of muscimol beneath and around the implant was limited to 1 mm. Cortical silencing was assessed electrophysiologically in both auditory and visual cortices. This exhibited rapid onset and was reversed within a few hours of implant removal. Inactivation of cortical neurons extended to all layers for implants lasting up to 6 weeks and throughout at least layers I-IV for longer placements, whereas thalamic activity in layer IV appeared to be unaffected. Blockade of cortical neurons in the deeper layers was restricted to ≤500 μm from the edge of the implant, but was usually more widespread in the superficial layers. In contrast, drug-free Elvax implants had little discernible effect on the responses of the underlying cortical neurons. Bilateral implants of muscimol-Elvax over A1 produced significant deficits in the localization of brief sounds in horizontal space and particularly a reduced ability to discriminate between anterior and posterior sound sources. The performance of these ferrets gradually improved over the period in which the Elvax was in place and attained that of control animals following its removal. Although similar in nature, these deficits were less pronounced than those caused by cortical lesions and suggest a specific role for A1 in resolving the spatial ambiguities inherent in auditory localization cues.
Affiliation(s)
- Adam L Smith
- Nature Reviews Drug Discovery, 4 Crinan Street, London N1 9XW, UK
41
Nelken I, Bizley JK, Nodal FR, Ahmed B, Schnupp JWH, King AJ. Large-scale organization of ferret auditory cortex revealed using continuous acquisition of intrinsic optical signals. J Neurophysiol 2004; 92:2574-88. [PMID: 15152018] [DOI: 10.1152/jn.00276.2004]
Abstract
We have adapted a new approach for intrinsic optical imaging, in which images were acquired continuously while stimuli were delivered in a series of continually repeated sequences, to provide the first demonstration of the large-scale tonotopic organization of both primary and nonprimary areas of the ferret auditory cortex. Optical responses were collected during continuous stimulation by repeated sequences of sounds with varying frequency. The optical signal was averaged as a function of time during the sequence, to produce reflectance modulation functions (RMFs). We examined the stability and properties of the RMFs and show that their zero-crossing points provide the best temporal reference points for quantifying the relationship between the stimulus parameter values and optical responses. Sequences of different duration and direction of frequency change gave rise to comparable results, although in some cases discrepancies were observed, mostly between upward- and downward-frequency sequences. We demonstrated frequency maps, consistent with previous data, in primary auditory cortex and in the anterior auditory field, which were verified with electrophysiological recordings. In addition to these tonotopic gradients, we demonstrated at least 2 new acoustically responsive areas on the anterior and posterior ectosylvian gyri, which have not previously been described. Although responsive to pure tones, these areas exhibit less tonotopic order than the primary fields.
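The reflectance modulation function (RMF) described here is the continuously acquired optical signal folded and averaged over repeats of the stimulus sequence, with its zero-crossings used as temporal reference points. A toy sketch under simplifying assumptions (a clean sinusoidal modulation, a known sequence length in samples; the function names are illustrative):

```python
import numpy as np

def reflectance_modulation_function(signal, sequence_len):
    """Fold a continuously acquired signal into one averaged cycle (the RMF)."""
    sig = np.asarray(signal, dtype=float)
    n_cycles = len(sig) // sequence_len
    return sig[: n_cycles * sequence_len].reshape(n_cycles, sequence_len).mean(axis=0)

def zero_crossings(rmf):
    """Indices i where the mean-subtracted RMF changes sign between i and i+1."""
    centered = np.asarray(rmf, dtype=float)
    centered = centered - centered.mean()
    sign = np.signbit(centered).astype(int)
    return np.nonzero(np.diff(sign))[0]

# Ten repeats of a 100-sample sinusoidal modulation (half-sample phase offset
# avoids samples landing exactly on zero):
t = np.arange(1000)
raw = np.sin(2 * np.pi * (t + 0.5) / 100)
rmf = reflectance_modulation_function(raw, 100)
print(zero_crossings(rmf))  # [49]
```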
Affiliation(s)
- Israel Nelken
- Dept. of Neurobiology, The Alexander Silberman Institute for Life Sciences, Hebrew University, Jerusalem 91904, Israel.