1. Veugen LCE, van Opstal AJ, van Wanrooij MM. Reaction Time Sensitivity to Spectrotemporal Modulations of Sound. Trends Hear 2022; 26:23312165221127589. [PMID: 36172759] [PMCID: PMC9523861] [DOI: 10.1177/23312165221127589]
Abstract
We tested whether sensitivity to acoustic spectrotemporal modulations can be observed from reaction times under normal-hearing and impaired-hearing conditions. In a manual reaction-time task, normal-hearing listeners had to detect the onset of a ripple (with a density between 0 and 8 cycles/octave and a fixed modulation depth of 50%) that moved up or down the log-frequency axis at constant velocity (between 0 and 64 Hz) in an otherwise unmodulated broadband white noise. Sensitivity to spectral and temporal modulations showed band-pass characteristics, with the fastest detection rates around 1 cycle/octave and 32 Hz under normal-hearing conditions. These results closely resemble data from other studies that typically used the modulation-depth threshold as a sensitivity criterion. To simulate hearing impairment, stimuli were processed with a six-channel cochlear-implant vocoder, and with a hearing-aid simulation that introduced separate spectral smearing and low-pass filtering. Reaction times were always much slower than for normal hearing, especially at the highest spectral densities. Binaural performance was predicted well by the benchmark race model of binaural independence, which models statistical facilitation of independent monaural channels. For the impaired-hearing simulations, this implied a "best-of-both-worlds" principle: listeners relied on the hearing-aid ear to detect spectral modulations and on the cochlear-implant ear for temporal-modulation detection. Although singular-value decomposition indicated that the joint spectrotemporal sensitivity matrix could be largely reconstructed from independent temporal and spectral sensitivity functions, in line with time-spectrum separability, a substantial inseparable spectral-temporal interaction was present in all hearing conditions. These results suggest that the reaction-time task yields a valid and effective objective measure of acoustic spectrotemporal-modulation sensitivity.
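The race model of binaural independence invoked here is easy to state computationally: on each trial the response is triggered by whichever monaural channel finishes first, so the binaural mean is faster than either monaural mean (statistical facilitation). A minimal sketch; the shifted-exponential reaction-time distributions and all parameters are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # simulated trials

# Hypothetical monaural reaction-time distributions (ms); the shapes
# and parameters are illustrative assumptions.
rt_left = 300 + rng.exponential(120, n)
rt_right = 350 + rng.exponential(150, n)

# Race model of binaural independence: whichever ear's channel
# finishes first triggers the response.
rt_binaural = np.minimum(rt_left, rt_right)

# Statistical facilitation: the binaural mean beats both monaural means.
print(rt_left.mean(), rt_right.mean(), rt_binaural.mean())
```

The same race applies to full distributions: for independent channels the predicted cumulative distribution is P(min <= t) = P_L(t) + P_R(t) - P_L(t) * P_R(t).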
Affiliation(s)
- Lidwien C. E. Veugen, Department of Biophysics, Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands
- A. John van Opstal, Department of Biophysics, Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands
- Marc M. van Wanrooij, Department of Biophysics, Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands
2. Adaptive Response Behavior in the Pursuit of Unpredictably Moving Sounds. eNeuro 2021; 8:ENEURO.0556-20.2021. [PMID: 33875456] [PMCID: PMC8116108] [DOI: 10.1523/eneuro.0556-20.2021]
Abstract
Although moving sound sources abound in natural auditory scenes, it is not clear how the human brain processes auditory motion. Previous studies have indicated that, although ocular localization responses to stationary sounds are quite accurate, ocular smooth pursuit of moving sounds is very poor. We here demonstrate that human subjects faithfully track a sound's unpredictable movements in the horizontal plane with smooth-pursuit responses of the head. Our analysis revealed that the stimulus-response relation was well described by an under-damped, passive, second-order low-pass filter in series with an idiosyncratic, fixed, pure delay. The model contained only two free parameters: the system's damping coefficient and its central (resonance) frequency. We found that the latter remained constant at ~0.6 Hz throughout the experiment for all subjects. Interestingly, the damping coefficient systematically increased with trial number, suggesting the presence of an adaptive mechanism in the auditory pursuit system (APS). This mechanism functions even for unpredictable sound-motion trajectories endowed with fixed, but covert, frequency characteristics in open-loop tracking conditions. We conjecture that the APS optimizes a trade-off between response speed and effort. Taken together, our data support the existence of a pursuit system for auditory head-tracking, which would suggest the presence of a neural representation of a spatial auditory fovea (AF).
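The fitted model is a standard second-order low-pass filter with resonance frequency f0 and damping coefficient zeta, in series with a pure delay tau that contributes only a linear phase lag: |H(f)| = 1 / sqrt((1 - (f/f0)^2)^2 + (2*zeta*f/f0)^2). A sketch with f0 = 0.6 Hz from the text; the damping coefficient and delay values are illustrative (the study reports that the damping changed across trials):

```python
import numpy as np

f0 = 0.6    # central (resonance) frequency in Hz, as reported in the study
zeta = 0.4  # damping coefficient; illustrative under-damped value
tau = 0.25  # idiosyncratic pure delay in s; illustrative value

f = np.linspace(0.01, 3.0, 1000)  # probe frequencies (Hz)
u = f / f0

# Second-order low-pass magnitude and phase; the pure delay adds
# a linear phase lag but leaves the gain untouched.
gain = 1.0 / np.sqrt((1.0 - u**2) ** 2 + (2.0 * zeta * u) ** 2)
phase = -np.arctan2(2.0 * zeta * u, 1.0 - u**2) - 2.0 * np.pi * f * tau

# An under-damped filter (zeta < 1/sqrt(2)) resonates just below f0.
f_peak = f[np.argmax(gain)]
print(f_peak)
```

For zeta < 1/sqrt(2) the gain peaks at f0 * sqrt(1 - 2*zeta^2), which is why an under-damped tracker overshoots near its resonance frequency.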
3. Evoked Response Strength in Primary Auditory Cortex Predicts Performance in a Spectro-Spatial Discrimination Task in Rats. J Neurosci 2019; 39:6108-6121. [PMID: 31175214] [DOI: 10.1523/jneurosci.0041-18.2019]
Abstract
The extent to which the primary auditory cortex (A1) participates in instructing animal behavior remains debated. Although multiple studies have shown A1 activity to correlate with animals' perceptual judgments (Jaramillo and Zador, 2011; Bizley et al., 2013; Rodgers and DeWeese, 2014), others have found no relationship between A1 responses and reported auditory percepts (Lemus et al., 2009; Dong et al., 2011). To address this ambiguity, we performed chronic recordings of evoked local field potentials (eLFPs) in A1 of head-fixed female rats performing a two-alternative forced-choice auditory discrimination task. Rats were presented with two interleaved sequences of pure tones from opposite sides and had to indicate the side from which the higher-frequency target stimulus was played. Animal performance closely correlated (r_rm = 0.68) with the difference between the target and distractor eLFP responses: the more the target response exceeded the distractor response, the better the animals were at identifying the side of the target frequency. Reducing the evoked response of either frequency through stimulus-specific adaptation affected performance in the expected way: target localization accuracy was degraded when the target frequency was adapted and improved when the distractor frequency was adapted. Target-frequency eLFPs were stronger on hit trials than on error trials. Our results suggest that the degree to which one stimulus stands out over others within A1 activity may determine its perceptual saliency for the animals and accordingly bias their behavioral choices.

SIGNIFICANCE STATEMENT: The brain must continuously calibrate the saliency of sensory percepts against their relevance to the current behavioral goal. The inability to ignore irrelevant distractors characterizes a spectrum of human attentional disorders. Meanwhile, the connection between the neural underpinnings of stimulus saliency and sensory decisions remains elusive. Here, we record local field potentials in the primary auditory cortex of rats engaged in auditory discrimination to investigate how the cortical representation of target and distractor stimuli impacts behavior. We find that the amplitude difference between target- and distractor-evoked activity predicts discrimination performance (r_rm = 0.68). Specific adaptation of the target or the distractor frequency shifts performance below or above chance, respectively. It appears that recent auditory history profoundly influences stimulus saliency, biasing animals toward diametrically opposed decisions.
4. Riecke L, Peters JC, Valente G, Poser BA, Kemper VG, Formisano E, Sorger B. Frequency-specific attentional modulation in human primary auditory cortex and midbrain. Neuroimage 2018; 174:274-287. [DOI: 10.1016/j.neuroimage.2018.03.038]
5. Bremen P, Massoudi R, Van Wanrooij MM, Van Opstal AJ. Audio-Visual Integration in a Redundant Target Paradigm: A Comparison between Rhesus Macaque and Man. Front Syst Neurosci 2017; 11:89. [PMID: 29238295] [PMCID: PMC5712580] [DOI: 10.3389/fnsys.2017.00089]
Abstract
The mechanisms underlying multisensory interactions are still poorly understood, despite considerable progress made since the first neurophysiological recordings of multisensory neurons. While most single-cell neurophysiology has been performed in anesthetized or passive-awake laboratory animals, the vast majority of behavioral data stems from studies with human subjects. Interpretation of neurophysiological data implicitly assumes that laboratory animals exhibit perceptual phenomena comparable or identical to those observed in human subjects. To test this underlying assumption explicitly, we here characterized how two rhesus macaques and four humans detect changes in intensity of auditory, visual, and audio-visual stimuli. These intensity changes consisted of a gradual envelope modulation for the sound, and a luminance step for the LED. Subjects had to detect any perceived intensity change as fast as possible. By comparing the monkeys' results with those obtained from the human subjects we found that (1) unimodal reaction times differed across modality, acoustic modulation frequency, and species, (2) the largest facilitation of reaction times with the audio-visual stimuli was observed when stimulus onset asynchronies were such that the unimodal reactions would occur at the same time (response, rather than physical, synchrony), and (3) the largest audio-visual reaction-time facilitation was observed when unimodal auditory stimuli were difficult to detect, i.e., at slow unimodal reaction times. We conclude that despite marked unimodal heterogeneity, similar multisensory rules applied to both species. Single-cell neurophysiology in the rhesus macaque may therefore yield valuable insights into the mechanisms governing audio-visual integration that may be informative of the processes taking place in the human brain.
Affiliation(s)
- Peter Bremen, Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands; Department of Neuroscience, Erasmus Medical Center, Rotterdam, Netherlands
- Rooholla Massoudi, Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands; Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, United Kingdom
- Marc M. Van Wanrooij, Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- A. J. Van Opstal, Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
6. van de Rijt LPH, van Opstal AJ, Mylanus EAM, Straatman LV, Hu HY, Snik AFM, van Wanrooij MM. Temporal Cortex Activation to Audiovisual Speech in Normal-Hearing and Cochlear Implant Users Measured with Functional Near-Infrared Spectroscopy. Front Hum Neurosci 2016; 10:48. [PMID: 26903848] [PMCID: PMC4750083] [DOI: 10.3389/fnhum.2016.00048]
Abstract
BACKGROUND: Speech understanding may rely not only on auditory, but also on visual information. Non-invasive functional neuroimaging techniques can expose the neural processes underlying the integration of multisensory processes required for speech understanding in humans. Nevertheless, acoustic noise from functional MRI (fMRI) limits its usefulness in auditory experiments, and electromagnetic artifacts caused by electronic implants worn by subjects can severely distort EEG and fMRI recordings. Therefore, we assessed audiovisual activation of temporal cortex with a silent, optical neuroimaging technique: functional near-infrared spectroscopy (fNIRS).

METHODS: We studied temporal cortical activation, as represented by concentration changes of oxy- and deoxy-hemoglobin, in four easy-to-apply fNIRS optical channels of 33 normal-hearing adult subjects and five post-lingually deaf cochlear implant (CI) users in response to supra-threshold unisensory auditory and visual stimuli, as well as to congruent audiovisual speech stimuli.

RESULTS: Activation effects were not visible from single fNIRS channels. However, by discounting physiological noise through reference channel subtraction (RCS), auditory, visual, and audiovisual (AV) speech stimuli evoked concentration changes for all sensory modalities in both cohorts (p < 0.001). Auditory stimulation evoked larger concentration changes than visual stimulation (p < 0.001). A saturation effect was observed for the AV condition.

CONCLUSIONS: Physiological, systemic noise can be removed from fNIRS signals by RCS. The observed multisensory enhancement of an auditory cortical channel can be plausibly described by a simple addition of the auditory and visual signals with saturation.
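Reference channel subtraction can be sketched as a per-channel regression: a shallow reference channel that samples mainly systemic physiology is scaled by a least-squares coefficient and subtracted from the deep signal channel. The synthetic traces below are illustrative assumptions, not the paper's data or its exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 60.0, 600)  # 60 s at 10 Hz

# Hypothetical fNIRS traces: a cortical (hemodynamic) response plus
# systemic physiological noise shared with a shallow reference channel.
evoked = np.exp(-((t - 30.0) ** 2) / 20.0)       # task-evoked activation
systemic = 0.8 * np.sin(2.0 * np.pi * 0.1 * t)   # slow systemic oscillation
signal_ch = evoked + systemic + 0.05 * rng.standard_normal(t.size)
reference_ch = 1.1 * systemic + 0.05 * rng.standard_normal(t.size)

# Reference channel subtraction (RCS): regress the reference out of the
# signal channel with a least-squares scale factor.
beta = np.dot(signal_ch, reference_ch) / np.dot(reference_ch, reference_ch)
cleaned = signal_ch - beta * reference_ch

# The cleaned trace correlates better with the true evoked response.
r_raw = np.corrcoef(signal_ch, evoked)[0, 1]
r_cln = np.corrcoef(cleaned, evoked)[0, 1]
print(r_raw, r_cln)
```

Because the systemic component dominates the raw channel, removing its scaled copy leaves the evoked response standing out, which is the effect the abstract describes for single channels versus RCS-corrected ones.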
Affiliation(s)
- Luuk P. H. van de Rijt, Department of Otorhinolaryngology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen Medical Centre, Nijmegen, Netherlands; Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- A. John van Opstal, Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Emmanuel A. M. Mylanus, Department of Otorhinolaryngology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen Medical Centre, Nijmegen, Netherlands
- Louise V. Straatman, Department of Otorhinolaryngology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen Medical Centre, Nijmegen, Netherlands
- Hai Yin Hu, Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Ad F. M. Snik, Department of Otorhinolaryngology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen Medical Centre, Nijmegen, Netherlands
- Marc M. van Wanrooij, Department of Otorhinolaryngology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen Medical Centre, Nijmegen, Netherlands; Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
7. Osmanski MS, Wang X. Behavioral dependence of auditory cortical responses. Brain Topogr 2015; 28:365-378. [PMID: 25690831] [PMCID: PMC4409507] [DOI: 10.1007/s10548-015-0428-4]
Abstract
Neural responses in the auditory cortex have historically been measured from either anesthetized or awake but non-behaving animals. A growing body of work has begun to focus instead on recording from the auditory cortex of animals actively engaged in behavioral tasks. These studies have shown that auditory cortical responses depend upon the behavioral state of the animal. The longer ascending subcortical pathway of the auditory system and the unique characteristics of auditory processing suggest that such dependencies may have a more profound influence on cortical processing in the auditory system than in other sensory systems. It is important to understand the nature of these dependencies and their functional implications. In this article, we review the literature on this topic pertaining to cortical processing of sounds.
Affiliation(s)
- Michael S. Osmanski, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, 720 Rutland Ave., Traylor 410, Baltimore, MD, 21025, USA
8. Spectrotemporal response properties of core auditory cortex neurons in awake monkey. PLoS One 2015; 10:e0116118. [PMID: 25680187] [PMCID: PMC4332665] [DOI: 10.1371/journal.pone.0116118]
Abstract
So far, most studies of core auditory cortex (AC) have characterized the spectral and temporal tuning properties of cells in anesthetized, non-behaving preparations. As experiments in awake animals are scarce, we here used dynamic spectral-temporal broadband ripples to study the properties of the spectrotemporal receptive fields (STRFs) of AC cells in awake monkeys. We show that AC neurons were typically most sensitive to low ripple densities (spectral) and low velocities (temporal), and that most cells were not selective for a particular spectrotemporal sweep direction. A substantial proportion of neurons preferred amplitude-modulated sounds (at zero ripple density) to dynamic ripples (at non-zero densities). The vast majority (>93%) of modulation transfer functions were separable with respect to spectral and temporal modulations, indicating that time and spectrum are processed independently in AC neurons. We also analyzed the linear predictability of AC responses to natural vocalizations on the basis of the STRF. We discuss our findings in the light of results obtained from the monkey midbrain inferior colliculus by comparing the spectrotemporal tuning properties and linear predictability of these two important auditory stages.
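Separability of a modulation transfer function (MTF) is commonly quantified from its singular values: a fully time-spectrum separable MTF is a rank-1 outer product of a spectral and a temporal tuning curve, so the first singular value carries essentially all of the variance. A sketch with a hypothetical MTF; the tuning-curve shapes and noise level are illustrative assumptions, not recorded data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical modulation transfer function on a (ripple density x velocity)
# grid, built as a separable outer product plus a little measurement noise.
spectral = np.exp(-np.linspace(0.0, 8.0, 20))                            # cycles/oct axis
temporal = np.exp(-((np.linspace(0.0, 64.0, 16) - 16.0) ** 2) / 400.0)   # Hz axis
mtf = np.outer(spectral, temporal) + 0.01 * rng.standard_normal((20, 16))

# Separability index: variance captured by the best rank-1 approximation.
s = np.linalg.svd(mtf, compute_uv=False)
alpha = s[0] ** 2 / np.sum(s**2)
print(alpha)  # close to 1.0 => time and spectrum processed independently
```

A strong inseparable spectral-temporal interaction would spread variance onto the second and later singular values, pulling the index well below 1.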
9. Bezgin G, Rybacki K, van Opstal AJ, Bakker R, Shen K, Vakorin VA, McIntosh AR, Kötter R. Auditory-prefrontal axonal connectivity in the macaque cortex: quantitative assessment of processing streams. Brain Lang 2014; 135:73-84. [PMID: 24980416] [DOI: 10.1016/j.bandl.2014.05.006]
Abstract
Primate sensory systems subserve complex neurocomputational functions. Consequently, these systems are organised anatomically in a distributed fashion, commonly linking areas to form specialised processing streams. Each stream is related to a specific function, as evidenced from studies of the visual cortex, which features rather prominent segregation into spatial and non-spatial domains. It has been hypothesised that other sensory systems, including the auditory system, are organised in a similar way at the cortical level. Recent studies offer rich qualitative evidence for the dual-stream hypothesis. Here we provide a new paradigm to quantitatively uncover these patterns in the auditory system, based on an analysis of multiple anatomical studies using multivariate techniques. As a test case, we also apply our assessment techniques to the more extensively explored visual system. Importantly, the introduced framework opens the possibility for these techniques to be applied to other neural systems featuring a dichotomised organisation, such as language or music perception.
Affiliation(s)
- Gleb Bezgin, Rotman Research Institute of Baycrest Centre, University of Toronto, Toronto, Ontario M6A 2E1, Canada; Department of Neuroinformatics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6525 AJ Nijmegen, The Netherlands; C. & O. Vogt Brain Research Institute, Heinrich Heine University, D-40225 Düsseldorf, Germany; Institute of Computer Science, Heinrich Heine University, D-40225 Düsseldorf, Germany
- Konrad Rybacki, C. & O. Vogt Brain Research Institute, Heinrich Heine University, D-40225 Düsseldorf, Germany; Department of Diagnostic and Interventional Neuroradiology, HELIOS Medical Center Wuppertal, University Hospital Witten/Herdecke, Wuppertal, Germany
- A. John van Opstal, Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6525 AJ Nijmegen, The Netherlands
- Rembrandt Bakker, Department of Neuroinformatics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6525 AJ Nijmegen, The Netherlands; Institute of Neuroscience and Medicine (INM-6), Research Center Jülich, Germany; Department of Biology II, Ludwig-Maximilians-Universität München, Germany
- Kelly Shen, Rotman Research Institute of Baycrest Centre, University of Toronto, Toronto, Ontario M6A 2E1, Canada
- Vasily A. Vakorin, Rotman Research Institute of Baycrest Centre, University of Toronto, Toronto, Ontario M6A 2E1, Canada; The Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada
- Anthony R. McIntosh, Rotman Research Institute of Baycrest Centre, University of Toronto, Toronto, Ontario M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, Ontario M5S 3G3, Canada
- Rolf Kötter, Department of Neuroinformatics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6525 AJ Nijmegen, The Netherlands; C. & O. Vogt Brain Research Institute, Heinrich Heine University, D-40225 Düsseldorf, Germany
10. Massoudi R, Van Wanrooij MM, Van Wetter SMCI, Versnel H, Van Opstal AJ. Task-related preparatory modulations multiply with acoustic processing in monkey auditory cortex. Eur J Neurosci 2014; 39:1538-1550. [PMID: 24649904] [DOI: 10.1111/ejn.12532]
Abstract
We characterised task-related top-down signals in monkey auditory cortex cells by comparing single-unit activity during passive sound exposure with neuronal activity during a predictable and an unpredictable reaction-time task for a variety of spectral-temporally modulated broadband sounds. Although the animals were not trained to attend to particular spectral or temporal sound modulations, their reaction times demonstrated clear acoustic spectral-temporal sensitivity for unpredictable modulation onsets. Interestingly, this sensitivity was absent for predictable trials with fast manual responses, but re-emerged for the slower reactions in these trials. Our analysis of neural activity patterns revealed a task-related dynamic modulation of auditory cortex neurons that was locked to the animal's reaction time, but invariant to the spectral and temporal acoustic modulations. This finding suggests a dissociation between acoustic and behavioral signals at the single-unit level. We further demonstrated that single-unit activity during task execution can be described by a multiplicative gain modulation of acoustic-evoked activity and a task-related top-down signal, rather than by a linear summation of these signals.
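The multiplicative-versus-additive distinction can be made concrete with a small model comparison: generate a firing rate as a reaction-time-locked gain times an acoustic-evoked component, then compare least-squares fits of a multiplicative and an additive model. All signals and parameters below are illustrative assumptions, not the recorded data:

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 200)  # trial time (s)

# Hypothetical components: stimulus-locked acoustic drive and a
# task-related top-down gain ramping toward the reaction time.
acoustic = 10.0 + 5.0 * np.sin(2.0 * np.pi * 4.0 * t)
gain = 1.0 + 1.5 * t
rate = gain * acoustic + 0.5 * rng.standard_normal(t.size)

# Multiplicative model: rate ~ c * (gain * acoustic).
X_mul = (gain * acoustic)[:, None]
# Additive model: rate ~ a * acoustic + b * gain + c.
X_add = np.column_stack([acoustic, gain, np.ones_like(t)])

res_mul = rate - X_mul @ np.linalg.lstsq(X_mul, rate, rcond=None)[0]
res_add = rate - X_add @ np.linalg.lstsq(X_add, rate, rcond=None)[0]

# The additive model cannot absorb the gain-times-acoustic interaction,
# so its residual sum of squares stays much larger.
print(np.sum(res_mul**2), np.sum(res_add**2))
```

The tell-tale signature is the interaction term: multiplying a ramping gain into a modulated drive produces a component proportional to t * sin(2*pi*4*t) that no weighted sum of the two components alone can reproduce.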
Affiliation(s)
- Roohollah Massoudi, Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Heyendaalseweg 135, 6525 AJ Nijmegen, The Netherlands
11.
Abstract
After hearing a tone, the human auditory system becomes more sensitive to similar tones than to other tones. Current auditory models explain this phenomenon by a simple bandpass attention filter. Here, we demonstrate that auditory attention involves multiple pass-bands around octave-related frequencies above and below the cued tone. Intriguingly, this "octave effect" not only occurs for physically presented tones, but even persists for the missing fundamental in complex tones, and for imagined tones. Our results suggest neural interactions combining octave-related frequencies, likely located in nonprimary cortical regions. We speculate that this connectivity scheme evolved from exposure to natural vibrations containing octave-related spectral peaks, e.g., as produced by vocal cords.