1
Bartlett EL, Han EX, Parthasarathy A. Neurometric amplitude modulation detection in the inferior colliculus of Young and Aged rats. Hear Res 2024; 447:109028. PMID: 38733711. PMCID: PMC11129790. DOI: 10.1016/j.heares.2024.109028.
Abstract
Amplitude modulation is an important acoustic cue for sound discrimination, and humans and animals can behaviorally detect small modulation depths. In the inferior colliculus (IC), both firing rate and phase-locking may be used to detect amplitude modulation. How the neural representations that detect modulation change with age is poorly understood, including the extent to which age-related changes may be attributed to the inherited properties of ascending inputs to IC neurons. Here, simultaneous measures of local field potentials (LFPs) and single-unit responses were made in the inferior colliculus of Young and Aged rats in response to sinusoidally amplitude-modulated sounds of varying depths, using both noise and tone carriers. We found that Young units had higher firing rates than Aged units for noise carriers, whereas Aged units had higher phase-locking (vector strength), especially for tone carriers. Sustained LFPs were larger in Young animals for modulation frequencies of 8-16 Hz and comparable at higher modulation frequencies. Onset LFP amplitudes were much larger in Young animals and were correlated with the evoked firing rates, while LFP onset latencies were shorter in Aged animals. Unit neurometric thresholds based on synchrony or firing rate did not differ significantly across age and were comparable to behavioral thresholds in previous studies, whereas LFP thresholds were lower than behavioral thresholds.
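The vector-strength metric used here (and in entry 11 below) has a simple closed form: VS = |Σ_j exp(i·2π·f_m·t_j)| / N over spike times t_j, ranging from 0 (no phase-locking) to 1 (all spikes at one modulation phase). A minimal pure-Python sketch, for illustration only (not the authors' analysis code):

```python
import math

def vector_strength(spike_times, mod_freq):
    """Vector strength of spike phase-locking to a modulation frequency.

    VS = |sum_j exp(i * 2*pi * mod_freq * t_j)| / N.
    """
    if not spike_times:
        return 0.0
    phases = [2.0 * math.pi * mod_freq * t for t in spike_times]
    c = sum(math.cos(p) for p in phases)  # real part of the phase sum
    s = sum(math.sin(p) for p in phases)  # imaginary part
    return math.hypot(c, s) / len(spike_times)

# Perfectly locked spikes: one spike at the same phase of each 10 Hz cycle
locked = [i * 0.1 for i in range(50)]
print(round(vector_strength(locked, 10.0), 3))  # 1.0
```

Spikes spread uniformly across the modulation cycle drive VS toward 0, which is why low depths (shallow envelopes) yield low vector strength.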
Affiliation(s)
- Edward L Bartlett
- Department of Biological Sciences and the Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN 47907, United States; Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN 47907, United States
- Emily X Han
- Department of Biological Sciences and the Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN 47907, United States
- Aravindakshan Parthasarathy
- Department of Biological Sciences and the Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN 47907, United States
2
Chang A, Teng X, Assaneo MF, Poeppel D. The human auditory system uses amplitude modulation to distinguish music from speech. PLoS Biol 2024; 22:e3002631. PMID: 38805517. PMCID: PMC11132470. DOI: 10.1371/journal.pbio.3002631.
Abstract
Music and speech are complex and distinct auditory signals that are both foundational to the human experience. The mechanisms underpinning each domain are widely investigated. However, what perceptual mechanism transforms a sound into music or speech, and what basic acoustic information is required to distinguish between them, remain open questions. Here, we hypothesized that a sound's amplitude modulation (AM), an essential temporal acoustic feature driving the auditory system across processing levels, is critical for distinguishing music from speech. Specifically, in contrast to paradigms using naturalistic acoustic signals (which can be challenging to interpret), we used a noise-probing approach to untangle the auditory mechanism: if AM rate and regularity are critical for perceptually distinguishing music and speech, then judgments of artificially noise-synthesized, ambiguous audio signals should align with their AM parameters. Across 4 experiments (N = 335), signals with a higher peak AM frequency tended to be judged as speech, and those with a lower peak AM frequency as music. Interestingly, this principle is consistently used by all listeners for speech judgments, but only by musically sophisticated listeners for music judgments. In addition, signals with more regular AM are judged as music over speech, and this feature is more critical for music judgment, regardless of musical sophistication. The data suggest that the auditory system can rely on a low-level acoustic property as basic as AM to distinguish music from speech, a simple principle that invites both neurophysiological and evolutionary experiments and speculations.
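To make the stimulus manipulation concrete: a sinusoidally amplitude-modulated noise of the kind these studies describe is just a noise carrier multiplied by a periodic envelope. A sketch under my own assumptions (function name, sample rate, and defaults are illustrative, not the paper's synthesis pipeline):

```python
import math
import random

def am_noise(mod_freq, mod_depth, dur=1.0, fs=16000, seed=0):
    """White-noise carrier with a sinusoidal amplitude envelope.

    envelope(t) = 1 + mod_depth * sin(2*pi*mod_freq*t),
    with mod_depth in [0, 1] setting modulation depth.
    """
    rng = random.Random(seed)
    samples = []
    for n in range(int(dur * fs)):
        t = n / fs
        env = 1.0 + mod_depth * math.sin(2.0 * math.pi * mod_freq * t)
        samples.append(env * rng.gauss(0.0, 1.0))
    return samples

# a 1 s noise burst with a 4 Hz, 50 %-depth envelope (speech-like AM rate)
sig = am_noise(mod_freq=4.0, mod_depth=0.5)
```

Varying `mod_freq` (the peak AM rate) and the regularity of the envelope are exactly the two parameters the listeners' music/speech judgments tracked.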
Affiliation(s)
- Andrew Chang
- Department of Psychology, New York University, New York, New York, United States of America
- Xiangbin Teng
- Department of Psychology, Chinese University of Hong Kong, Hong Kong SAR, China
- M. Florencia Assaneo
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Juriquilla, Querétaro, México
- David Poeppel
- Department of Psychology, New York University, New York, New York, United States of America
- Ernst Struengmann Institute for Neuroscience, Frankfurt am Main, Germany
- Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, United States of America
- Music and Audio Research Lab (MARL), New York University, New York, New York, United States of America
3
Morandell K, Yin A, Triana Del Rio R, Schneider DM. Movement-Related Modulation in Mouse Auditory Cortex Is Widespread Yet Locally Diverse. J Neurosci 2024; 44:e1227232024. PMID: 38286628. PMCID: PMC10941236. DOI: 10.1523/jneurosci.1227-23.2024.
Abstract
Neurons in the mouse auditory cortex are strongly influenced by behavior, including both suppression and enhancement of sound-evoked responses during movement. The mouse auditory cortex comprises multiple fields with different roles in sound processing and distinct connectivity to movement-related centers of the brain. Here, we asked whether movement-related modulation in male mice might differ across auditory cortical fields, thereby contributing to the heterogeneity of movement-related modulation at the single-cell level. We used wide-field calcium imaging to identify distinct cortical fields and cellular-resolution two-photon calcium imaging to visualize the activity of layer 2/3 excitatory neurons within each field. We measured each neuron's responses to three sound categories (pure tones, chirps, and amplitude-modulated white noise) as mice rested and ran on a non-motorized treadmill. We found that individual neurons in each cortical field typically respond to just one sound category. Some neurons are only active during rest and others during locomotion, and those that are responsive across conditions retain their sound-category tuning. The effects of locomotion on sound-evoked responses vary at the single-cell level, with both suppression and enhancement of neural responses, and the net modulatory effect of locomotion is largely conserved across cortical fields. Movement-related modulation in auditory cortex also reflects more complex behavioral patterns, including instantaneous running speed and nonlocomotor movements such as grooming and postural adjustments, with similar patterns seen across all auditory cortical fields. Our findings underscore the complexity of movement-related modulation throughout the mouse auditory cortex and indicate that movement-related modulation is a widespread phenomenon.
Affiliation(s)
- Karin Morandell
- Center for Neural Science, New York University, New York, New York 10012
- Audrey Yin
- Center for Neural Science, New York University, New York, New York 10012
- David M Schneider
- Center for Neural Science, New York University, New York, New York 10012
4
Nocon JC, Witter J, Gritton H, Han X, Houghton C, Sen K. A robust and compact population code for competing sounds in auditory cortex. J Neurophysiol 2023; 130:775-787. PMID: 37646080. PMCID: PMC10642980. DOI: 10.1152/jn.00148.2023.
Abstract
Cortical circuits encoding sensory information consist of populations of neurons, yet how information aggregates via pooling individual cells remains poorly understood. Such pooling may be particularly important in noisy settings where single-neuron encoding is degraded. One example is the cocktail party problem, with competing sounds from multiple spatial locations. How populations of neurons in auditory cortex code competing sounds has not been previously investigated. Here, we apply a novel information-theoretic approach to estimate the information that populations of neurons in mouse auditory cortex carry about competing sounds from multiple spatial locations, considering both summed population (SP) and labeled line (LL) codes. We find that a small subset of neurons is sufficient to nearly maximize mutual information over different spatial configurations, with the labeled line code outperforming the summed population code and approaching information levels attained in the absence of competing stimuli. Finally, information in the labeled line code increases with spatial separation between target and masker, in correspondence with behavioral results on spatial release from masking in humans and animals. Taken together, our results reveal that a compact population of neurons in auditory cortex provides a robust code for competing sounds from different spatial locations.

NEW & NOTEWORTHY Little is known about how populations of neurons within cortical circuits encode sensory stimuli in the presence of competing stimuli at other spatial locations. Here, we investigate this problem in auditory cortex using a recently proposed information-theoretic approach. We find that a small subset of neurons nearly maximizes information about target sounds in the presence of competing maskers, approaching information levels for isolated stimuli, and provides a noise-robust code for sounds in a complex auditory scene.
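The two pooling schemes compared above differ only in whether neuron identity survives the pooling. A minimal sketch of that distinction (illustrative only; the paper's estimator additionally computes mutual information over these representations, which is not shown here):

```python
def summed_population(responses):
    """SP code: sum spike counts across neurons in each time bin,
    discarding which neuron fired (one pooled vector per trial)."""
    n_bins = len(responses[0])
    return [sum(neuron[b] for neuron in responses) for b in range(n_bins)]

def labeled_line(responses):
    """LL code: keep each neuron's response separate, so a decoder
    still sees neuron identity (concatenated per-neuron vectors)."""
    return [count for neuron in responses for count in neuron]

resp = [[0, 2, 1],   # neuron A: spike counts in 3 time bins
        [1, 0, 3]]   # neuron B
print(summed_population(resp))  # [1, 2, 4]
print(labeled_line(resp))       # [0, 2, 1, 1, 0, 3]
```

The LL representation can only be more informative than SP (identity can be ignored but not recovered), which is consistent with the reported ordering of the two codes.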
Affiliation(s)
- Jian Carlo Nocon
- Neurophotonics Center, Boston University, Boston, Massachusetts, United States
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts, United States
- Hearing Research Center, Boston University, Boston, Massachusetts, United States
- Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States
- Jake Witter
- Department of Computer Science, University of Bristol, Bristol, United Kingdom
- Howard Gritton
- Department of Comparative Biosciences, University of Illinois, Urbana, Illinois, United States
- Department of Bioengineering, University of Illinois, Urbana, Illinois, United States
- Xue Han
- Neurophotonics Center, Boston University, Boston, Massachusetts, United States
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts, United States
- Hearing Research Center, Boston University, Boston, Massachusetts, United States
- Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States
- Conor Houghton
- Department of Computer Science, University of Bristol, Bristol, United Kingdom
- Kamal Sen
- Neurophotonics Center, Boston University, Boston, Massachusetts, United States
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts, United States
- Hearing Research Center, Boston University, Boston, Massachusetts, United States
- Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States
5
Nocon JC, Gritton HJ, James NM, Mount RA, Qu Z, Han X, Sen K. Parvalbumin neurons enhance temporal coding and reduce cortical noise in complex auditory scenes. Commun Biol 2023; 6:751. PMID: 37468561. PMCID: PMC10356822. DOI: 10.1038/s42003-023-05126-0.
Abstract
Cortical representations supporting many cognitive abilities emerge from underlying circuits comprising several different cell types. However, cell-type-specific contributions to rate- and timing-based cortical coding are not well understood. Here, we investigated the role of parvalbumin neurons in cortical complex scene analysis. Many complex scenes contain sensory stimuli that are highly dynamic in time and compete with stimuli at other spatial locations. Parvalbumin neurons play a fundamental role in balancing excitation and inhibition in cortex and in sculpting cortical temporal dynamics; yet their specific role in encoding complex scenes via timing-based coding, and the robustness of temporal representations to spatial competition, has not been investigated. Here, we address these questions in auditory cortex of mice using a cocktail-party-like paradigm, integrating electrophysiology, optogenetic manipulations, and a family of spike-distance metrics to dissect parvalbumin neurons' contributions to rate- and timing-based coding. We find that suppressing parvalbumin neurons degrades cortical discrimination of dynamic sounds in a cocktail-party-like setting via changes in rapid temporal modulations in rate and spike timing over a wide range of timescales. Our findings suggest that parvalbumin neurons play a critical role in enhancing cortical temporal coding and reducing cortical noise, thereby improving representations of dynamic stimuli in complex scenes.
Affiliation(s)
- Jian Carlo Nocon
- Neurophotonics Center, Boston University, Boston, MA 02215, USA
- Center for Systems Neuroscience, Boston University, Boston, MA 02215, USA
- Hearing Research Center, Boston University, Boston, MA 02215, USA
- Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
- Howard J Gritton
- Department of Comparative Biosciences, University of Illinois, Urbana, IL 61820, USA
- Department of Bioengineering, University of Illinois, Urbana, IL 61820, USA
- Nicholas M James
- Neurophotonics Center, Boston University, Boston, MA 02215, USA
- Center for Systems Neuroscience, Boston University, Boston, MA 02215, USA
- Hearing Research Center, Boston University, Boston, MA 02215, USA
- Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
- Rebecca A Mount
- Neurophotonics Center, Boston University, Boston, MA 02215, USA
- Center for Systems Neuroscience, Boston University, Boston, MA 02215, USA
- Hearing Research Center, Boston University, Boston, MA 02215, USA
- Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
- Zhili Qu
- Department of Comparative Biosciences, University of Illinois, Urbana, IL 61820, USA
- Department of Bioengineering, University of Illinois, Urbana, IL 61820, USA
- Xue Han
- Neurophotonics Center, Boston University, Boston, MA 02215, USA
- Center for Systems Neuroscience, Boston University, Boston, MA 02215, USA
- Hearing Research Center, Boston University, Boston, MA 02215, USA
- Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
- Kamal Sen
- Neurophotonics Center, Boston University, Boston, MA 02215, USA
- Center for Systems Neuroscience, Boston University, Boston, MA 02215, USA
- Hearing Research Center, Boston University, Boston, MA 02215, USA
- Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
6
A Redundant Cortical Code for Speech Envelope. J Neurosci 2023; 43:93-112. PMID: 36379706. PMCID: PMC9838705. DOI: 10.1523/jneurosci.1616-21.2022.
Abstract
Animal communication sounds exhibit complex temporal structure because of the amplitude fluctuations that comprise the sound envelope. In human speech, envelope modulations drive synchronized activity in auditory cortex (AC), which correlates strongly with comprehension (Giraud and Poeppel, 2012; Peelle and Davis, 2012; Haegens and Zion Golumbic, 2018). Studies of envelope coding in single neurons, performed in nonhuman animals, have focused on periodic amplitude modulation (AM) stimuli and use response metrics that are not easy to juxtapose with data from humans. In this study, we sought to bridge these fields. Specifically, we looked directly at the temporal relationship between stimulus envelope and spiking, and we assessed whether the apparent diversity across neurons' AM responses contributes to the population representation of speech-like sound envelopes. We gathered responses from single neurons to vocoded speech stimuli and compared them to sinusoidal AM responses in AC of alert, freely moving Mongolian gerbils of both sexes. While AC neurons displayed heterogeneous tuning to AM rate, their temporal dynamics were stereotyped. Preferred response phases accumulated near the onsets of sinusoidal AM periods for slower rates (<8 Hz), and an over-representation of amplitude edges was apparent in population responses to both sinusoidal AM and vocoded speech envelopes. Crucially, this encoding bias imparted a decoding benefit: a classifier could discriminate vocoded speech stimuli using summed population activity, while higher-frequency modulations required a more sophisticated decoder that tracked spiking responses from individual cells. Together, our results imply that the envelope structure relevant to parsing an acoustic stream could be read out from a distributed, redundant population code.

SIGNIFICANCE STATEMENT Animal communication sounds have rich temporal structure and are often produced in extended sequences, including the syllabic structure of human speech. Although the auditory cortex (AC) is known to play a crucial role in representing speech syllables, the contribution of individual neurons remains uncertain. Here, we characterized the representations of both simple, amplitude-modulated sounds and complex, speech-like stimuli within a broad population of cortical neurons, and we found an overrepresentation of amplitude edges. Thus, a phasic, redundant code in auditory cortex can provide a mechanistic explanation for segmenting acoustic streams like human speech.
7
Morrill RJ, Bigelow J, DeKloe J, Hasenstaub AR. Audiovisual task switching rapidly modulates sound encoding in mouse auditory cortex. eLife 2022; 11:e75839. PMID: 35980027. PMCID: PMC9427107. DOI: 10.7554/elife.75839.
Abstract
In everyday behavior, sensory systems are in constant competition for attentional resources, but the cellular and circuit-level mechanisms of modality-selective attention remain largely uninvestigated. We conducted translaminar recordings in mouse auditory cortex (AC) during an audiovisual (AV) attention shifting task. Attending to sound elements in an AV stream reduced both pre-stimulus and stimulus-evoked spiking activity, primarily in deep-layer neurons and neurons without spectrotemporal tuning. Despite reduced spiking, stimulus decoder accuracy was preserved, suggesting improved sound encoding efficiency. Similarly, task-irrelevant mapping stimuli during inter-trial intervals evoked fewer spikes without impairing stimulus encoding, indicating that attentional modulation generalized beyond training stimuli. Importantly, spiking reductions predicted trial-to-trial behavioral accuracy during auditory attention, but not visual attention. Together, these findings suggest auditory attention facilitates sound discrimination by filtering sound-irrelevant background activity in AC, and that the deepest cortical layers serve as a hub for integrating extramodal contextual information.
Affiliation(s)
- Ryan J Morrill
- Coleman Memorial Laboratory, University of California, San Francisco, San Francisco, United States
- Neuroscience Graduate Program, University of California, San Francisco, San Francisco, United States
- Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, San Francisco, United States
- James Bigelow
- Coleman Memorial Laboratory, University of California, San Francisco, San Francisco, United States
- Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, San Francisco, United States
- Jefferson DeKloe
- Coleman Memorial Laboratory, University of California, San Francisco, San Francisco, United States
- Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, San Francisco, United States
- Andrea R Hasenstaub
- Coleman Memorial Laboratory, University of California, San Francisco, San Francisco, United States
- Neuroscience Graduate Program, University of California, San Francisco, San Francisco, United States
- Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, San Francisco, United States
8
Ruthig P, Schönwiesner M. Common principles in the lateralisation of auditory cortex structure and function for vocal communication in primates and rodents. Eur J Neurosci 2022; 55:827-845. PMID: 34984748. DOI: 10.1111/ejn.15590.
Abstract
This review summarises recent findings on the lateralisation of communicative sound processing in the auditory cortex (AC) of humans, non-human primates, and rodents. Functional imaging in humans has demonstrated a left hemispheric preference for some acoustic features of speech, but it is unclear to what degree this is caused by bottom-up acoustic feature selectivity or top-down modulation from language areas. Although non-human primates show a less pronounced functional lateralisation in AC, the properties of AC fields and behavioural asymmetries are qualitatively similar. Rodent studies demonstrate microstructural circuits that might underlie bottom-up acoustic feature selectivity in both hemispheres. Functionally, the left AC in the mouse appears to be specifically tuned to communication calls, whereas the right AC may have a more 'generalist' role. Rodents also show anatomical AC lateralisation, such as differences in size and connectivity. Several of these functional and anatomical characteristics are also lateralised in human AC. Thus, complex vocal communication processing shares common features among rodents and primates. We argue that a synthesis of results from humans, non-human primates, and rodents is necessary to identify the neural circuitry of vocal communication processing. However, data from different species and methods are often difficult to compare. Recent advances may enable better integration of methods across species. Efforts to standardise data formats and analysis tools would benefit comparative research and enable synergies between psychological and biological research in the area of vocal communication processing.
Affiliation(s)
- Philip Ruthig
- Faculty of Life Sciences, Leipzig University, Leipzig, Sachsen
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig
9
Downer JD, Verhein JR, Rapone BC, O'Connor KN, Sutter ML. An Emergent Population Code in Primary Auditory Cortex Supports Selective Attention to Spectral and Temporal Sound Features. J Neurosci 2021; 41:7561-7577. PMID: 34210783. PMCID: PMC8425978. DOI: 10.1523/jneurosci.0693-20.2021.
Abstract
Textbook descriptions of primary sensory cortex (PSC) revolve around single neurons' representation of low-dimensional sensory features, such as visual object orientation in primary visual cortex (V1), location of somatic touch in primary somatosensory cortex (S1), and sound frequency in primary auditory cortex (A1). Typically, studies of PSC measure neurons' responses along only a few (one or two) stimulus and/or behavioral dimensions. However, real-world stimuli usually vary along many feature dimensions, and behavioral demands change constantly. To illuminate how A1 supports flexible perception in rich acoustic environments, we recorded from A1 neurons while rhesus macaques (one male, one female) performed a feature-selective attention task. We presented sounds that varied along spectral and temporal feature dimensions (carrier bandwidth and temporal envelope, respectively). Within a block, subjects attended to one feature of the sound in a selective change detection task. We found that single neurons tend to be high-dimensional, in that they exhibit substantial mixed selectivity for both sound features as well as task context. We found no overall enhancement of single-neuron coding of the attended feature, as attention could either diminish or enhance this coding. However, a population-level analysis reveals that ensembles of neurons exhibit enhanced encoding of attended sound features, and this population code tracks subjects' performance. Importantly, surrogate neural populations with intact single-neuron tuning but shuffled higher-order correlations among neurons fail to yield the attention-related effects observed in the intact data. These results suggest that an emergent population code not measurable at the single-neuron level might constitute the functional unit of sensory representation in PSC.

SIGNIFICANCE STATEMENT The ability to adapt to a dynamic sensory environment promotes a range of important natural behaviors. We recorded from single neurons in monkey primary auditory cortex (A1) while subjects attended to either the spectral or temporal features of complex sounds. Surprisingly, we found no average increase in responsiveness to, or encoding of, the attended feature across single neurons. However, when we pooled the activity of the sampled neurons via targeted dimensionality reduction (TDR), we found enhanced population-level representation of the attended feature and suppression of the distractor feature. This dissociation of the effects of attention at the level of single neurons versus the population highlights the synergistic nature of cortical sound encoding and enriches our understanding of sensory cortical function.
Affiliation(s)
- Joshua D Downer
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- Department of Otolaryngology, Head and Neck Surgery, University of California, San Francisco, California 94143
- Jessica R Verhein
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- School of Medicine, Stanford University, Stanford, California 94305
- Brittany C Rapone
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- School of Social Sciences, Oxford Brookes University, Oxford, OX4 0BP, United Kingdom
- Kevin N O'Connor
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- Department of Neurobiology, Physiology and Behavior, University of California, Davis, Davis, California 95618
- Mitchell L Sutter
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- Department of Neurobiology, Physiology and Behavior, University of California, Davis, Davis, California 95618
10
Downer JD, Bigelow J, Runfeldt MJ, Malone BJ. Temporally precise population coding of dynamic sounds by auditory cortex. J Neurophysiol 2021; 126:148-169. PMID: 34077273. DOI: 10.1152/jn.00709.2020.
Abstract
Fluctuations in the amplitude envelope of complex sounds provide critical cues for hearing, particularly for speech and animal vocalizations. Responses to amplitude modulation (AM) in the ascending auditory pathway have chiefly been described for single neurons. How neural populations might collectively encode and represent information about AM remains poorly characterized, even in primary auditory cortex (A1). We modeled population responses to AM based on data recorded from A1 neurons in awake squirrel monkeys and evaluated how accurately single-trial responses to modulation frequencies from 4 to 512 Hz could be decoded as functions of population size, composition, and correlation structure. We found that a population-based decoding model that simulated convergent, equally weighted inputs was highly accurate and remarkably robust to the inclusion of neurons that were individually poor decoders. By contrast, average rate codes based on convergence performed poorly; effective decoding using average rates was only possible when the responses of individual neurons were segregated, as in classical population decoding models using labeled lines. The relative effectiveness of dynamic rate coding in auditory cortex was explained by shared modulation phase preferences among cortical neurons, despite heterogeneity in rate-based modulation frequency tuning. Our results indicate significant population-based synchrony in primary auditory cortex and suggest that robust population coding of the sound envelope information present in animal vocalizations and speech can be reliably achieved even with indiscriminate pooling of cortical responses. These findings highlight the importance of firing rate dynamics in population-based sensory coding.

NEW & NOTEWORTHY Fundamental questions remain about population coding in primary auditory cortex (A1). In particular, issues of spike timing in models of neural populations have been largely ignored. We find that spike timing in response to sound-envelope fluctuations is highly similar across neuron populations in A1. This property of shared envelope phase preference allows a simple population model, involving unweighted convergence of neuronal responses, to classify amplitude modulation frequencies with high accuracy.
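The unweighted-convergence idea can be sketched in a toy form: sum binned spike counts across neurons without weights, then pick the candidate modulation frequency whose periodic component dominates the pooled response. This is my own simplification for illustration; the paper's decoding model and parameters differ.

```python
import math

def pooled_psth(spike_trains):
    """Unweighted convergence: sum binned spike counts across neurons."""
    n_bins = len(spike_trains[0])
    return [sum(tr[b] for tr in spike_trains) for b in range(n_bins)]

def decode_mod_freq(pooled, candidate_freqs, bin_s):
    """Pick the candidate frequency with the largest Fourier component
    in the pooled response (a crude periodicity-matching classifier)."""
    def power(f):
        c = sum(r * math.cos(2 * math.pi * f * b * bin_s)
                for b, r in enumerate(pooled))
        s = sum(r * math.sin(2 * math.pi * f * b * bin_s)
                for b, r in enumerate(pooled))
        return c * c + s * s
    return max(candidate_freqs, key=power)
```

Because neurons share modulation phase preferences, their phase-locked responses add constructively under this indiscriminate pooling rather than canceling, which is what makes the convergent (SP-style) decoder work.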
Collapse
Affiliation(s)
- Joshua D Downer
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
- James Bigelow
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
- Melissa J Runfeldt
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
- Brian J Malone
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California; Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, California
11
Johnson JS, Niwa M, O'Connor KN, Sutter ML. Amplitude modulation encoding in the auditory cortex: comparisons between the primary and middle lateral belt regions. J Neurophysiol 2020; 124:1706-1726. [PMID: 33026929] [DOI: 10.1152/jn.00171.2020]
Abstract
In macaques, the middle lateral auditory cortex (ML) is a belt region adjacent to the primary auditory cortex (A1) and believed to lie at a hierarchically higher level. Although ML single-unit responses have been studied for several auditory stimuli, the ability of ML cells to encode amplitude modulation (AM), an ability that has been widely studied in A1, has not yet been characterized. Here, we compared the responses of A1 and ML neurons to amplitude-modulated (AM) noise in awake macaques. Although several of the basic properties of A1 and ML responses to AM noise were similar, we found several key differences. ML neurons were less likely to phase lock, did not phase lock as strongly, and were more likely to respond in a nonsynchronized fashion than A1 cells, consistent with a temporal-to-rate transformation as information ascends the auditory hierarchy. ML neurons tended to have lower temporally based (phase-locking) best modulation frequencies than A1 neurons. Neurons that decreased their firing rate in response to AM noise, relative to their firing rate in response to unmodulated noise, were more common in ML than in A1. In both A1 and ML, we found a prevalent class of neurons with rate responses that were typically enhanced relative to unmodulated noise at lower modulation frequencies and suppressed relative to unmodulated noise at middle modulation frequencies.
NEW & NOTEWORTHY ML neurons synchronized less than A1 neurons, consistent with a hierarchical temporal-to-rate transformation. Both A1 and ML had a class of modulation transfer functions previously unreported in the cortex, with a low-modulation-frequency (MF) peak, a middle-MF trough, and responses similar to unmodulated-noise responses at high MFs. The results support a hierarchical shift toward a two-pool opponent code, in which subtraction of neural activity between two populations of oppositely tuned neurons encodes AM.
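The phase-locking comparisons in this abstract rest on the standard vector-strength statistic. A minimal sketch of that computation, with synthetic spike times rather than recorded data:

```python
import numpy as np

def vector_strength(spike_times, fm):
    """Goldberg-Brown vector strength: project each spike onto the unit
    circle at its phase within the fm (Hz) modulation cycle. The length
    of the mean resultant vector is 1 for perfect phase locking and
    falls toward 0 when spikes show no phase preference."""
    phases = 2 * np.pi * fm * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

# spikes at a fixed phase of a 10 Hz envelope lock perfectly (VS ~ 1)
locked = 0.025 + np.arange(20) * 0.1
vs_locked = vector_strength(locked, 10.0)

# spikes at random times show no phase preference (VS near 0)
rng = np.random.default_rng(1)
vs_random = vector_strength(rng.random(1000), 10.0)
```

A temporal-to-rate transformation of the kind described would show up as lower vector strength in ML than in A1 at the same modulation frequency.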
Affiliation(s)
- Jeffrey S Johnson
- Center for Neuroscience, University of California, Davis, California
- Mamiko Niwa
- Center for Neuroscience, University of California, Davis, California
- Kevin N O'Connor
- Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
- Mitchell L Sutter
- Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
12
Bigelow J, Malone B. Extracellular voltage thresholds for maximizing information extraction in primate auditory cortex: implications for a brain computer interface. J Neural Eng 2020; 18. [PMID: 32126540] [DOI: 10.1088/1741-2552/ab7c19]
Abstract
OBJECTIVE: Research by Oby et al (2016) demonstrated that the optimal threshold for extracting information from visual and motor cortices may differ from the optimal threshold for identifying single neurons via spike sorting methods. The optimal threshold for extracting information from auditory cortex has yet to be identified, nor has the optimal temporal scale for representing auditory cortical activity. Here, we describe a procedure to jointly optimize the extracellular threshold and bin size with respect to the decoding accuracy achieved by a linear classifier for a diverse set of auditory stimuli.
APPROACH: We used linear multichannel arrays to record extracellular neural activity from the auditory cortex of awake squirrel monkeys passively listening to both simple and complex sounds. We executed a grid search of the coordinate space defined by the voltage threshold (in units of standard deviation) and the bin size (in units of milliseconds) and computed decoding accuracy at each point.
MAIN RESULTS: The optimal threshold for information extraction was consistently near two standard deviations below the voltage trace mean, which falls significantly below the range of three to five standard deviations typically used as inputs to spike sorting algorithms in basic research and in brain-computer interface (BCI) applications. The optimal binwidth was minimized at the optimal voltage threshold, particularly for acoustic stimuli dominated by temporally dynamic features, indicating that permissive thresholding permits readout of cortical responses with temporal precision on the order of a few milliseconds.
SIGNIFICANCE: The improvements in decoding accuracy we observed for optimal readout parameters suggest that standard thresholding methods substantially underestimate the information present in auditory cortical spiking patterns. The fact that optimal thresholds were relatively low indicates that local populations of cortical neurons exhibit high temporal coherence that could be leveraged in service of future auditory BCI applications.
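The joint optimization the abstract describes can be illustrated with a toy grid search. Everything below is an assumption for illustration (simulated voltage traces, two stimulus classes that differ only in event timing, and a nearest-centroid classifier), not the authors' recording pipeline: threshold crossings are binned, and each (threshold, binwidth) pair is scored by decoding accuracy.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 1000, 500                        # 1 kHz trace, 500 ms trials

def trial(stim):
    """Toy extracellular trace: unit-variance noise plus negative-going
    deflections whose timing (not rate) depends on the stimulus class."""
    v = rng.normal(0, 1, n)
    phase = 0.0 if stim == 0 else np.pi  # classes differ only in phase
    p = (40 / fs) * (1 + np.sin(2 * np.pi * 8 * np.arange(n) / fs + phase))
    v[rng.random(n) < p] -= 4.0          # events about 4 SD below the mean
    return v

def features(v, thresh_sd, bin_size):
    # binary threshold crossings, binned at bin_size samples
    x = (v < v.mean() - thresh_sd * v.std()).astype(float)
    return x[: (n // bin_size) * bin_size].reshape(-1, bin_size).sum(1)

def accuracy(thresh_sd, bin_size, n_train=20, n_test=20):
    cent = {s: np.mean([features(trial(s), thresh_sd, bin_size)
                        for _ in range(n_train)], axis=0) for s in (0, 1)}
    hits = 0
    for s in (0, 1):
        for _ in range(n_test):
            f = features(trial(s), thresh_sd, bin_size)
            hits += min((0, 1), key=lambda c: np.sum((f - cent[c]) ** 2)) == s
    return hits / (2 * n_test)

# grid over (threshold in SDs, binwidth in ms): a permissive threshold
# with fine bins preserves the timing information separating the classes
grid = {(th, b): accuracy(th, b) for th in (2.0, 3.5) for b in (10, 100)}
```

Because the two classes share a mean event rate, coarse bins discard most of the usable information, mirroring the paper's point that the optimal binwidth shrinks for temporally dynamic stimuli.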
Affiliation(s)
- James Bigelow
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, California, USA
- Brian Malone
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, 675 Nelson Rising Lane (Room 535), San Francisco, California 94158, USA
13
Miller DM, Joshi A, Kambouroglos ET, Engstrom IC, Bielanin JP, Wittman SR, McCall AA, Barman SM, Yates BJ. Responses of neurons in the rostral ventrolateral medulla of conscious cats to anticipated and passive movements. Am J Physiol Regul Integr Comp Physiol 2020; 318:R481-R492. [PMID: 31940234] [PMCID: PMC7099461] [DOI: 10.1152/ajpregu.00205.2019]
Abstract
The vestibular system contributes to regulating sympathetic nerve activity and blood pressure. Initial studies in decerebrate animals showed that neurons in the rostral ventrolateral medulla (RVLM) respond to small-amplitude (<10°) rotations of the body, as in other brain areas that process vestibular signals, although such movements do not affect blood distribution in the body. However, a subsequent experiment in conscious animals showed that few RVLM neurons respond to small-amplitude movements. This study tested the hypothesis that RVLM neurons in conscious animals respond to signals from the vestibular otolith organs elicited by large-amplitude static tilts. Approximately one-third of RVLM neurons whose firing rate was related to the cardiac cycle, and thus likely received baroreceptor inputs, were modulated by vestibular inputs elicited by 40° head-up tilts in conscious cats. They were not modulated during 10° sinusoidal rotations in the pitch plane, even though such rotations affected the activity of neurons in brain regions providing inputs to the RVLM. These data suggest the existence of brain circuitry that suppresses vestibular influences on the activity of RVLM neurons and the sympathetic nervous system unless these inputs are physiologically warranted. We also determined that RVLM neurons failed to respond to a light cue signaling the movement, suggesting that feedforward cardiovascular responses do not occur before passive movements that require cardiovascular adjustments.
Affiliation(s)
- Derek M Miller
- Department of Otolaryngology, University of Pittsburgh, Pittsburgh, Pennsylvania
- Asmita Joshi
- Department of Otolaryngology, University of Pittsburgh, Pittsburgh, Pennsylvania
- Isaiah C Engstrom
- Department of Otolaryngology, University of Pittsburgh, Pittsburgh, Pennsylvania
- John P Bielanin
- Department of Otolaryngology, University of Pittsburgh, Pittsburgh, Pennsylvania
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania
- Samuel R Wittman
- Department of Otolaryngology, University of Pittsburgh, Pittsburgh, Pennsylvania
- Andrew A McCall
- Department of Otolaryngology, University of Pittsburgh, Pittsburgh, Pennsylvania
- Susan M Barman
- Department of Pharmacology and Toxicology, Michigan State University, East Lansing, Michigan
- Bill J Yates
- Department of Otolaryngology, University of Pittsburgh, Pittsburgh, Pennsylvania
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania
14
Kim KX, Atencio CA, Schreiner CE. Stimulus dependent transformations between synaptic and spiking receptive fields in auditory cortex. Nat Commun 2020; 11:1102. [PMID: 32107370] [PMCID: PMC7046699] [DOI: 10.1038/s41467-020-14835-7]
Abstract
Auditory cortex neurons nonlinearly integrate synaptic inputs from the thalamus and cortex, and generate spiking outputs for simple and complex sounds. Directly comparing synaptic and spiking activity can determine whether this input-output transformation is stimulus-dependent. We employ in vivo whole-cell recordings in the mouse primary auditory cortex, using pure tones and broadband dynamic moving ripple stimuli, to examine properties of functional integration in tonal (TRFs) and spectrotemporal (STRFs) receptive fields. Spectral tuning in STRFs derived from synaptic, subthreshold and spiking responses proves to be substantially more selective than for TRFs. We describe diverse spectral and temporal modulation preferences and distinct nonlinearities, and their modifications between the input and output stages of neural processing. These results characterize specific processing differences at the level of synaptic convergence, integration and spike generation resulting in stimulus-dependent transformation patterns in the primary auditory cortex.
The authors compare receptive fields and nonlinearities of synaptic inputs, membrane potentials, and spiking activity in the auditory cortex for broadband stimuli revealing distinct differences, which lead to an increase in feature selectivity from neuron input to output. Frequency selectivity is distinctly higher for spectrotemporal receptive fields (STRFs) than for tonal receptive fields (TRFs).
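The input-output comparison above rests on receptive-field estimates. As a hedged illustration (a toy linear-nonlinear neuron driven by white noise, not the authors' whole-cell data or ripple stimulus), a spectrotemporal receptive field can be recovered from a broadband stimulus by spike-triggered averaging:

```python
import numpy as np

rng = np.random.default_rng(2)
n_f, n_t, lags = 16, 20000, 10           # frequency channels, time bins, STRF depth

stim = rng.normal(0, 1, (n_f, n_t))      # white-noise stand-in for a dynamic ripple
true_strf = np.zeros((n_f, lags))
true_strf[8, 3] = 1.0                    # single excitatory point, for clarity

# toy linear-nonlinear neuron: filter the stimulus, threshold, then spike
drive = np.array([(stim[:, s:s + lags] * true_strf).sum()
                  for s in range(n_t - lags)])
spikes = rng.random(drive.size) < 0.25 * (drive > 1.0)

# the spike-triggered average of the preceding stimulus recovers the filter
sta = np.mean([stim[:, i:i + lags] for i in np.flatnonzero(spikes)], axis=0)
peak = np.unravel_index(np.argmax(sta), sta.shape)
```

With enough spikes, `peak` lands on the excitatory point of `true_strf`; applied separately to subthreshold and spiking responses, the same estimator exposes the sharpening from input to output that the study reports.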
Affiliation(s)
- Kyunghee X Kim
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco, San Francisco, USA
- Craig A Atencio
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco, San Francisco, USA
- Christoph E Schreiner
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco, San Francisco, USA; Center for Integrative Neuroscience, University of California San Francisco, San Francisco, USA
15
Cai H, Dent ML. Best sensitivity of temporal modulation transfer functions in laboratory mice matches the amplitude modulation embedded in vocalizations. J Acoust Soc Am 2020; 147:337. [PMID: 32006990] [PMCID: PMC7043865] [DOI: 10.1121/10.0000583]
Abstract
The perception of spectrotemporal changes is crucial for distinguishing between acoustic signals, including vocalizations. Temporal modulation transfer functions (TMTFs) have been measured in many species and reveal that the discrimination of amplitude modulation suffers at rapid modulation frequencies. TMTFs were measured in six CBA/CaJ mice in an operant conditioning procedure in which mice were trained to discriminate an 800 ms amplitude-modulated white noise target from a continuous noise background. TMTFs of mice show a bandpass characteristic, with an upper cutoff frequency of around 567 Hz. Within the measured modulation frequencies, ranging from 5 Hz to 1280 Hz, the mice show best sensitivity for amplitude modulation at around 160 Hz. To look for a possible parallel evolution of sound perception and production, we also analyzed the amplitude modulation components embedded in natural ultrasonic vocalizations (USVs) emitted by this strain. We found that the cutoff frequency of amplitude modulation in most individual USVs falls near the mice's most sensitive range obtained from the psychoacoustic experiments. Further analyses of the duration and modulation frequency ranges of USVs indicated that the broader the frequency range of amplitude modulation in a natural USV, the shorter its duration.
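The target stimulus used in this paradigm, sinusoidally amplitude-modulated white noise, is straightforward to synthesize; the sample rate and seed below are arbitrary choices for illustration, with the 800 ms duration and 160 Hz modulation frequency taken from the abstract.

```python
import numpy as np

fs = 44100                                # arbitrary sample rate (Hz)
dur, fm, depth = 0.8, 160.0, 1.0          # 800 ms target at the best-sensitivity fm
t = np.arange(int(fs * dur)) / fs

rng = np.random.default_rng(3)
carrier = rng.normal(0, 1, t.size)        # white-noise carrier
envelope = 1 + depth * np.sin(2 * np.pi * fm * t)
target = envelope * carrier               # 100%-depth SAM noise
```

Sweeping `fm` over 5-1280 Hz with this construction reproduces the stimulus axis of the TMTF; `depth` is what threshold tracking would vary.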
Affiliation(s)
- Huaizhen Cai
- Department of Psychology, University at Buffalo-SUNY, Buffalo, New York 14260, USA
- Micheal L Dent
- Department of Psychology, University at Buffalo-SUNY, Buffalo, New York 14260, USA
16
Movement and VIP Interneuron Activation Differentially Modulate Encoding in Mouse Auditory Cortex. eNeuro 2019; 6:ENEURO.0164-19.2019. [PMID: 31481397] [PMCID: PMC6751373] [DOI: 10.1523/eneuro.0164-19.2019]
Abstract
Information processing in sensory cortex is highly sensitive to nonsensory variables such as anesthetic state, arousal, and task engagement. Recent work in mouse visual cortex suggests that evoked firing rates, stimulus–response mutual information, and encoding efficiency increase when animals are engaged in movement. A disinhibitory circuit appears central to this change: inhibitory neurons expressing vasoactive intestinal peptide (VIP) are activated during movement and disinhibit pyramidal cells by suppressing other inhibitory interneurons. Paradoxically, although movement activates a similar disinhibitory circuit in auditory cortex (ACtx), most ACtx studies report reduced spiking during movement. It is unclear whether the resulting changes in spike rates result in corresponding changes in stimulus–response mutual information. We examined ACtx responses evoked by tone cloud stimuli, in awake mice of both sexes, during spontaneous movement and still conditions. VIP+ cells were optogenetically activated on half of trials, permitting independent analysis of the consequences of movement and VIP activation, as well as their intersection. Movement decreased stimulus-related spike rates as well as mutual information and encoding efficiency. VIP interneuron activation tended to increase stimulus-evoked spike rates but not stimulus–response mutual information, thus reducing encoding efficiency. The intersection of movement and VIP activation was largely consistent with a linear combination of these main effects: VIP activation recovered movement-induced reduction in spike rates, but not information transfer.
17
Xu N, Luo L, Wang Q, Li L. Binaural unmasking of the accuracy of envelope-signal representation in rat auditory cortex but not auditory midbrain. Hear Res 2019; 377:224-233. [PMID: 30991272] [DOI: 10.1016/j.heares.2019.04.003]
Abstract
Accurate neural representations of acoustic signals under noisy conditions are critical for animals' survival. Detecting a signal against background noise can be improved by binaural hearing, particularly when an interaural-time-difference (ITD) disparity is introduced between the signal and the noise, a phenomenon known as binaural unmasking. Previous studies have mainly focused on the binaural unmasking effect on response magnitudes, and it is not clear whether binaural unmasking affects the accuracy of central representations of target acoustic signals or the relative contributions of different central auditory structures to this accuracy. Frequency following responses (FFRs), which are sustained phase-locked neural activities, can be used to measure the accuracy of signal representation. Using intracranial recordings of local field potentials, this study assessed whether binaural unmasking effects include an improvement in the accuracy of neural representations of sound-envelope signals in the rat inferior colliculus (IC) and/or auditory cortex (AC). The results showed that (1) when a narrow-band noise was presented binaurally, the stimulus-response (S-R) coherence of the FFRs to the envelope (FFRenvelope) of the narrow-band noise recorded in the IC was higher than that recorded in the AC; (2) presenting a broad-band masking noise caused a larger reduction of the S-R coherence for FFRenvelope in the IC than in the AC; (3) introducing an ITD disparity between the narrow-band signal noise and the broad-band masking noise did not affect the IC S-R coherence, but enhanced both the AC S-R coherence and the coherence between the IC FFRenvelope and AC FFRenvelope. Thus, although the accuracy of envelope-signal representation in the AC is lower than in the IC, it can be binaurally unmasked, indicating a binaural-unmasking mechanism formed during signal transmission from the IC to the AC.
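Stimulus-response coherence of the kind used here to score FFRenvelope accuracy can be sketched with synthetic signals; the 40 Hz envelope, delay, and noise level below are illustrative assumptions, not values from the recordings.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(4)
fs = 1000
t = np.arange(10 * fs) / fs                   # 10 s of "recording"
envelope = np.sin(2 * np.pi * 40 * t)         # sound-envelope signal at 40 Hz

# FFR-like response: attenuated, delayed copy of the envelope plus noise
response = 0.5 * np.sin(2 * np.pi * 40 * t - 0.8) + rng.normal(0, 1, t.size)

# magnitude-squared coherence is near 1 where the response reliably
# tracks the stimulus and near 0 elsewhere
f, cxy = coherence(envelope, response, fs=fs, nperseg=1024)
c_40 = cxy[np.argmin(np.abs(f - 40))]
c_150 = cxy[np.argmin(np.abs(f - 150))]
```

Masking noise in the study would lower the coherence peak at the envelope frequency; binaural unmasking would restore it in the AC but not the IC.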
Affiliation(s)
- Na Xu
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China
- Lu Luo
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China
- Qian Wang
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China; Beijing Key Laboratory of Epilepsy, Epilepsy Center, Department of Functional Neurosurgery, Sanbo Brain Hospital, Capital Medical University, Beijing, 100093, China
- Liang Li
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China; Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, 100871, China; Beijing Institute for Brain Disorders, Beijing, 100096, China
18
Hörpel SG, Firzlaff U. Processing of fast amplitude modulations in bat auditory cortex matches communication call-specific sound features. J Neurophysiol 2019; 121:1501-1512. [PMID: 30785811] [DOI: 10.1152/jn.00748.2018]
Abstract
Bats use a large repertoire of calls for social communication. In the bat Phyllostomus discolor, social communication calls are often characterized by sinusoidal amplitude and frequency modulations with modulation frequencies in the range of 100-130 Hz. However, peaks in mammalian auditory cortical modulation transfer functions are typically limited to modulation frequencies below 100 Hz. We investigated the coding of sinusoidally amplitude-modulated sounds in auditory cortical neurons in P. discolor by constructing rate and temporal modulation transfer functions. Neuronal responses to playbacks of various communication calls were additionally recorded and compared with the neurons' responses to sinusoidally amplitude-modulated sounds. Cortical neurons in the posterior dorsal field of the auditory cortex were tuned to unusually high modulation frequencies: rate modulation transfer functions often peaked around 130 Hz (median: 87 Hz), and the median of the highest modulation frequency that evoked significant phase-locking was also 130 Hz. Both values are much higher than reported from the auditory cortex of other mammals, with more than 51% of the units preferring modulation frequencies exceeding 100 Hz. Conspicuously, the fast modulations preferred by the neurons match the fast amplitude and frequency modulations of prosocial, and mostly of aggressive, communication calls in P. discolor. We suggest that the preference for fast amplitude modulations in the P. discolor dorsal auditory cortex serves to reliably encode the fast modulations seen in their communication calls.
NEW & NOTEWORTHY Neural processing of temporal sound features is crucial for the analysis of communication calls. In bats, these calls are often characterized by fast temporal envelope modulations. Because auditory cortex neurons typically encode only low modulation frequencies, it is unclear how species-specific vocalizations are cortically processed. We show that auditory cortex neurons in the bat Phyllostomus discolor encode fast temporal envelope modulations. This property improves response specificity to communication calls and thus might support species-specific communication.
Affiliation(s)
- Stephen Gareth Hörpel
- Chair of Zoology, Department of Animal Sciences, Technical University of Munich, Freising, Germany
- Uwe Firzlaff
- Chair of Zoology, Department of Animal Sciences, Technical University of Munich, Freising, Germany