1
Undurraga JA, Luke R, Van Yper L, Monaghan JJM, McAlpine D. The neural representation of an auditory spatial cue in the primate cortex. Curr Biol 2024; 34:2162-2174.e5. PMID: 38718798. DOI: 10.1016/j.cub.2024.04.034.
Abstract
Humans make use of small differences in the timing of sounds at the two ears, interaural time differences (ITDs), to locate their sources. Despite extensive investigation, however, the neural representation of ITDs in the human brain is contentious, particularly the range of ITDs explicitly represented by dedicated neural detectors. Here, using magneto- and electro-encephalography (MEG and EEG), we demonstrate evidence of a sparse neural representation of ITDs in the human cortex. The magnitude of cortical activity to sounds presented via insert earphones oscillated as a function of increasing ITD, within and beyond auditory cortical regions, and listeners rated the perceptual quality of these sounds according to the same oscillating pattern. This pattern was accurately described by a population of model neurons with preferred ITDs constrained to the narrow, sound-frequency-dependent range evident in other mammalian species. When scaled for head size, the distribution of ITD detectors in the human cortex is remarkably like that recorded in vivo from the cortex of rhesus monkeys, another large primate that uses ITDs for source localization. The data solve a long-standing issue concerning the neural representation of ITDs in humans and suggest a representation that scales for head size and sound frequency in an optimal manner.
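As an illustration of the population scheme described above, the brief Python sketch below (not the authors' code; the unit count, tuning width, and the pi/4 best-phase value are illustrative assumptions) sums the responses of cosine-tuned ITD detectors whose best interaural phases are confined to a narrow, frequency-dependent band. The summed response oscillates as the stimulus ITD increases, mirroring the pattern reported for the MEG/EEG data.

import numpy as np

def population_response(itd_s, freq_hz, n_units=200, best_ipd=np.pi / 4, spread=0.1):
    """Summed rate of cosine-tuned ITD units whose best IPDs cluster near +/- best_ipd."""
    rng = np.random.default_rng(0)
    best_ipds = np.concatenate([
        rng.normal(+best_ipd, spread, n_units // 2),   # right-leading channel
        rng.normal(-best_ipd, spread, n_units // 2),   # left-leading channel
    ])
    stim_ipd = 2 * np.pi * freq_hz * np.asarray(itd_s)  # stimulus interaural phase (rad)
    # half-wave-rectified cosine tuning to the mismatch between stimulus and best phase
    rates = np.maximum(np.cos(stim_ipd - best_ipds[:, None]), 0.0)
    return rates.sum(axis=0)

itds = np.linspace(-3e-3, 3e-3, 601)             # ITDs from -3 ms to +3 ms
resp = population_response(itds, freq_hz=500.0)  # oscillates with ITD at roughly a 1/freq period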
Affiliation(s)
- Jaime A Undurraga
- Department of Linguistics, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia; Interacoustics Research Unit, Technical University of Denmark, Ørsteds Plads, Building 352, 2800 Kgs. Lyngby, Denmark.
- Robert Luke
- Department of Linguistics, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia; The Bionics Institute, 384-388 Albert St., East Melbourne, VIC 3002, Australia
- Lindsey Van Yper
- Department of Linguistics, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia; Institute of Clinical Research, University of Southern Denmark, 5230 Odense, Denmark; Research Unit for ORL, Head & Neck Surgery and Audiology, Odense University Hospital & University of Southern Denmark, 5230 Odense, Denmark
- Jessica J M Monaghan
- Department of Linguistics, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia; National Acoustic Laboratories, Australian Hearing Hub, 16 University Avenue, Sydney, NSW 2109, Australia
- David McAlpine
- Department of Linguistics, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia; Macquarie University Hearing and the Australian Hearing Hub, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia.
2
Luke R, Innes-Brown H, Undurraga JA, McAlpine D. Human cortical processing of interaural coherence. iScience 2022; 25:104181. PMID: 35494228. PMCID: PMC9051632. DOI: 10.1016/j.isci.2022.104181.
Abstract
Sounds reach the ears as a mixture of energy generated by different sources. Listeners extract cues that distinguish sources from one another, including how similar the sounds arriving at the two ears are, the interaural coherence (IAC). Here, we find that listeners cannot reliably distinguish two completely interaurally coherent sounds from a single sound with reduced IAC. Pairs of sounds heard toward the front were readily confused with single sounds of high IAC, whereas those heard to the sides were confused with single sounds of low IAC. Sounds carrying supra-ethological spatial cues are perceived as more diffuse than their IAC alone predicts, a finding captured by a computational model comprising a restricted, sound-frequency-dependent distribution of auditory-spatial detectors. We also observed elevated cortical hemodynamic responses for sounds with low IAC, suggesting that the ambiguity elicited by sounds with low interaural similarity imposes an elevated cortical load.
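For reference, IAC is conventionally quantified as the maximum of the normalised interaural cross-correlation; the sketch below illustrates that generic definition only (the +/-1 ms lag range and the circular shift are simplifying assumptions, not the paper's implementation).

import numpy as np

def interaural_coherence(left, right, fs, max_lag_s=1e-3):
    """Maximum of the normalised interaural cross-correlation over +/- max_lag_s."""
    left = left - left.mean()
    right = right - right.mean()
    denom = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    max_lag = int(round(max_lag_s * fs))
    # circular shift is used for brevity; edge effects are negligible for long signals
    xcorr = [np.sum(left * np.roll(right, lag)) / denom for lag in range(-max_lag, max_lag + 1)]
    return max(xcorr)

fs = 44100
rng = np.random.default_rng(1)
noise = rng.standard_normal(fs)                # 1 s of noise presented identically to both ears
print(interaural_coherence(noise, noise, fs))  # close to 1.0 for a fully coherent input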
Affiliation(s)
- Robert Luke
- Macquarie University, Sydney, NSW, Australia
- The Bionics Institute, Melbourne, VIC, Australia
3
Wang L, Noordanus E, van Opstal AJ. Estimating multiple latencies in the auditory system from auditory steady-state responses on a single EEG channel. Sci Rep 2021; 11:2150. PMID: 33495484. PMCID: PMC7835249. DOI: 10.1038/s41598-021-81232-5.
Abstract
The latency of the auditory steady-state response (ASSR) may provide valuable information regarding the integrity of the auditory system, as it may reveal the presence of multiple intracerebral sources. To estimate multiple latencies from high-order ASSRs, we propose a novel two-stage procedure that consists of a nonparametric estimation method, called apparent latency from phase coherence (ALPC), followed by a heuristic sequential forward selection algorithm (SFS). Compared with existing methods, ALPC-SFS requires few prior assumptions and is straightforward to implement for higher-order nonlinear responses to multi-cosine sound complexes with their initial phases set to zero. It systematically evaluates the nonlinear components of the ASSRs by estimating multiple latencies, automatically identifies the ASSR components involved, and reports a latency consistency index. To verify the proposed method, we performed simulations for several scenarios: two nonlinear subsystems with different or overlapping outputs. We compared the results from our method with predictions from existing, parametric methods. We also recorded the EEG from ten normal-hearing adults by bilaterally presenting superimposed tones with four frequencies that evoke a unique set of ASSRs. From these ASSRs, two major latencies were found to be stable across subjects on repeated measurement days. The two latencies are dominated by low-frequency (LF) (near 40 Hz, at around 41-52 ms) and high-frequency (HF) (> 80 Hz, at around 21-27 ms) ASSR components. The frontal-central brain region showed longer latencies for LF components, but shorter latencies for HF components, when compared with temporal-lobe regions. In conclusion, the proposed nonparametric ALPC-SFS method, applied to zero-phase, multi-cosine sound complexes, is more suitable for evaluating embedded nonlinear systems underlying ASSRs than existing methods. It may therefore be a promising objective measure for hearing performance and auditory cortex (dys)function.
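The apparent-latency idea that ALPC builds on can be illustrated with a short sketch: regress the unwrapped ASSR phase on component frequency and take latency = -slope / (2*pi). This is the textbook approximation only, not the ALPC-SFS implementation, and the frequencies and 41-ms latency below are illustrative values.

import numpy as np

def apparent_latency(freqs_hz, phases_rad):
    """Latency from the slope of unwrapped response phase versus frequency."""
    phases = np.unwrap(phases_rad)
    slope, _ = np.polyfit(freqs_hz, phases, 1)
    return -slope / (2 * np.pi)

true_latency = 0.041                                  # 41 ms, typical of 40-Hz ASSR components
freqs = np.array([38.0, 40.0, 42.0, 44.0])            # component frequencies (Hz)
phases = -2 * np.pi * freqs * true_latency + 0.3      # linear phase plus a constant offset
print(apparent_latency(freqs, phases))                # recovers ~0.041 s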
Affiliation(s)
- Lei Wang
- Department of Biophysics, Radboud University, Nijmegen, 6525 AJ, The Netherlands.
- Donders Centre for Neuroscience, Radboud University, Nijmegen, 6525 AJ, The Netherlands.
- Elisabeth Noordanus
- Department of Biophysics, Radboud University, Nijmegen, 6525 AJ, The Netherlands
- Donders Centre for Neuroscience, Radboud University, Nijmegen, 6525 AJ, The Netherlands
- A John van Opstal
- Department of Biophysics, Radboud University, Nijmegen, 6525 AJ, The Netherlands
- Donders Centre for Neuroscience, Radboud University, Nijmegen, 6525 AJ, The Netherlands
4
Stern RM, Colburn HS, Bernstein LR, Trahiotis C. The fMRI data of Thompson et al. (2006) do not constrain how the human midbrain represents interaural time delay. J Assoc Res Otolaryngol 2019; 20:305-311. PMID: 31089846. DOI: 10.1007/s10162-019-00715-5.
Abstract
This commentary provides an alternate interpretation of the fMRI data that were presented in a communication to the journal Nature Neuroscience (Thompson et al., Nat. Neurosci. 9: 1096-1098, 2006). The authors argued that their observations demonstrated that traditional models of binaural hearing that incorporate "internal delays," such as the coincidence-counting mechanism proposed by Jeffress and quantified by Colburn, are invalid, and that a new model of human interaural time delay processing must be developed. We argue that the fMRI data presented do not strongly favor either the refutation or the retention of the traditional models, although they may be useful in constraining the physiological sites of various processing stages. The conclusions of Thompson et al. are based on the locations of maximal activity in the midbrain in response to selected binaural signals. These locations are inconsistent with well-known perceptual attributes of the stimuli under consideration, as the authors note, which suggests that further processing is involved in forming the percept of subjective lateral position.
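For context, the Jeffress-style mechanism at issue can be sketched as interaural cross-correlation evaluated over an array of internal delays, with the peak predicting lateral position. The sketch below is an illustrative toy, not the authors' analysis; the sampling rate, delay range, and 0.25-ms ITD are arbitrary choices.

import numpy as np

def internal_delay_display(left, right, fs, max_delay_s=1e-3):
    """Coincidence activity across internal delays applied to the left-ear input."""
    delays = np.arange(-int(max_delay_s * fs), int(max_delay_s * fs) + 1)
    activity = np.array([np.sum(np.roll(left, d) * right) for d in delays])
    return delays / fs, activity

fs = 48000
rng = np.random.default_rng(2)
src = rng.standard_normal(fs // 10)       # 100 ms of noise
left, right = src, np.roll(src, 12)       # right ear lags by 12 samples (0.25 ms)
delays, activity = internal_delay_display(left, right, fs)
print(delays[np.argmax(activity)])        # peak near +0.25e-3 s, the internal delay matching the ITD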
Affiliation(s)
- Richard M Stern
- Department of Electrical and Computer Engineering, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA, 15213, USA.
- H Steven Colburn
- Department of Biomedical Engineering, Boston University, One Silber Way, Boston, MA, 02215, USA
- Leslie R Bernstein
- Departments of Neuroscience and Surgery (Otolaryngology), University of Connecticut Health Center, Farmington, CT, 06030, USA
- Constantine Trahiotis
- Departments of Neuroscience and Surgery (Otolaryngology), University of Connecticut Health Center, Farmington, CT, 06030, USA
5
Moncada-Torres A, Joshi SN, Prokopiou A, Wouters J, Epp B, Francart T. A framework for computational modelling of interaural time difference discrimination of normal and hearing-impaired listeners. J Acoust Soc Am 2018; 144:940. PMID: 30180705. DOI: 10.1121/1.5051322.
Abstract
Different computational models have been developed to study interaural time difference (ITD) perception. However, only a few have used a physiologically inspired architecture to study ITD discrimination, and they do not include aspects of hearing impairment. In this work, a framework was developed to predict ITD thresholds in listeners with normal and impaired hearing. It combines the physiologically inspired model of the auditory periphery proposed by Zilany, Bruce, Nelson, and Carney [(2009). J. Acoust. Soc. Am. 126(5), 2390-2412] as a front end with a coincidence-detection stage and a neurometric decision device as a back end. It was validated by comparing its predictions against behavioral data for narrowband stimuli from the literature. The framework is able to model ITD discrimination of normal-hearing and hearing-impaired listeners at a group level. Additionally, it was used to explore the effect of different proportions of outer- and inner-hair-cell impairment on ITD discrimination.
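The back end of such a framework can be caricatured as a coincidence stage feeding a neurometric d' decision. The sketch below substitutes Poisson counts with cosine ITD tuning for the Zilany et al. periphery model, so the gain, tuning, and trial count are illustrative assumptions rather than the published framework.

import numpy as np

def coincidence_counts(itd_s, best_itd_s, freq_hz, n_trials=500, gain=100.0):
    """Poisson coincidence counts of one unit with cosine ITD tuning."""
    rng = np.random.default_rng(3)
    mean = gain * (1.0 + np.cos(2 * np.pi * freq_hz * (itd_s - best_itd_s)))
    return rng.poisson(mean, n_trials)

def neurometric_dprime(counts_ref, counts_probe):
    """d' between the count distributions for a reference and a probe ITD."""
    pooled_sd = np.sqrt(0.5 * (counts_ref.var() + counts_probe.var()))
    return (counts_probe.mean() - counts_ref.mean()) / pooled_sd

ref = coincidence_counts(0.0, best_itd_s=250e-6, freq_hz=500.0)       # reference: 0-us ITD
probe = coincidence_counts(100e-6, best_itd_s=250e-6, freq_hz=500.0)  # probe: 100-us ITD
print(neurometric_dprime(ref, probe))     # larger d' means the ITD step is easier to discriminate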
Affiliation(s)
- Arturo Moncada-Torres
- KU Leuven - University of Leuven, Department of Neurosciences, ExpORL, Herestraat 49, Bus 721, 3000 Leuven, Belgium
- Suyash N Joshi
- Department of Electrical Engineering, Hearing Systems, Technical University of Denmark, Ørsteds Plads, Building 352, DK-2800 Kongens Lyngby, Denmark
- Andreas Prokopiou
- KU Leuven - University of Leuven, Department of Neurosciences, ExpORL, Herestraat 49, Bus 721, 3000 Leuven, Belgium
- Jan Wouters
- KU Leuven - University of Leuven, Department of Neurosciences, ExpORL, Herestraat 49, Bus 721, 3000 Leuven, Belgium
- Bastian Epp
- Department of Electrical Engineering, Hearing Systems, Technical University of Denmark, Ørsteds Plads, Building 352, DK-2800 Kongens Lyngby, Denmark
- Tom Francart
- KU Leuven - University of Leuven, Department of Neurosciences, ExpORL, Herestraat 49, Bus 721, 3000 Leuven, Belgium