1
Sun JV, Jing Z, Rankin J, Rinzel J. Perceptual tri-stability, measured and fitted as emergent from a model for bistable alternations. Hear Res 2024; 453:109123. [PMID: 39437585] [DOI: 10.1016/j.heares.2024.109123]
Abstract
In attempting to decipher ambiguous sounds, the human auditory system appears to resort to perceptual exploration, as evidenced by multi-stable perceptual alternations. This phenomenon has been widely investigated via the auditory streaming paradigm, employing ABA_ triplet sequences, with much research focused on perceptual bi-stability in which the alternating percepts are either a single integrated stream or two simultaneous distinct streams. We extend this inquiry with experiments and modeling to include tri-stable perception, in which the segregated percepts may also involve a foreground/background distinction. We collected empirical data from participants engaged in a tri-stable auditory task and used this dataset to refine a neural mechanistic model that had previously reproduced multiple features of auditory bi-stability. Remarkably, the model emulated the basic statistical characteristics of tri-stability without substantial modification. The model also offers a parsimonious account of individual variability, obtained by adjusting a single parameter: either the noise level or the neural adaptation strength.
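The fitted model itself is specified in the paper; as a generic illustration of the model class it belongs to (mutually inhibiting percept units with slow adaptation and noise), here is a minimal Python sketch of a three-unit competition network. All parameter values, the sigmoidal nonlinearity and the argmax read-out are assumptions chosen for illustration, not the authors' fitted settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # sigmoidal firing-rate nonlinearity (gain and threshold chosen for illustration)
    return 1.0 / (1.0 + np.exp(-x / 0.1))

# three percept units, e.g. integrated, segregated with A in front, segregated with B in front
dt, T = 1e-3, 600.0                  # time step and total simulated time (s)
steps = int(T / dt)
tau_r, tau_a = 0.01, 1.5             # fast rate dynamics vs slow adaptation (s)
I = np.array([1.0, 0.95, 0.95])      # inputs, with a slight bias toward "integrated" (assumed)
g, beta, sigma = 2.0, 1.5, 0.1       # mutual inhibition, adaptation strength, noise level

r = np.array([0.6, 0.2, 0.2])        # firing rates
a = np.zeros(3)                      # adaptation variables
dominant = np.empty(steps, dtype=np.int8)

for t in range(steps):
    inhibition = g * (r.sum() - r)                       # inhibition from the other two units
    drive = I - inhibition - a + sigma * rng.standard_normal(3)
    r += dt / tau_r * (-r + f(drive))
    a += dt / tau_a * (-a + beta * r)                    # adaptation builds up under the winner
    dominant[t] = np.argmax(r)

# summarise dominance durations for each percept
switch = np.flatnonzero(np.diff(dominant)) + 1
edges = np.concatenate(([0], switch, [steps]))
durations, labels = np.diff(edges) * dt, dominant[edges[:-1]]
for k in range(3):
    dk = durations[labels == k]
    mean = dk.mean() if dk.size else float("nan")
    print(f"percept {k}: {dk.size} dominance periods, mean duration {mean:.2f} s")
```

In a sketch like this, raising sigma or beta shortens and irregularises the dominance periods, which is the kind of single-parameter adjustment the abstract invokes to account for individual variability.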
Affiliation(s)
- Jiaqiu Vince Sun
- Center for Neural Science, New York University, 4 Washington Place, New York, NY 10003, USA; New York University Shanghai, 567 West Yangsi Rd, Pudong New District, Shanghai, 200124, PR China
- Zeyu Jing
- Courant Institute of Mathematical Sciences, New York University, 251 Mercer St, New York, NY 10012, USA; Current Affiliation: Computation & Neural Systems, Biology and Biological Engineering, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125, USA
- James Rankin
- Department of Mathematics, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Harrison Building, North Park Rd, Exeter EX4 4QF, UK
- John Rinzel
- Center for Neural Science, New York University, 4 Washington Place, New York, NY 10003, USA; Courant Institute of Mathematical Sciences, New York University, 251 Mercer St, New York, NY 10012, USA.
2
Shmakov S, Littlewood PB. Coalescence of limit cycles in the presence of noise. Phys Rev E 2024; 109:024220. [PMID: 38491679] [DOI: 10.1103/physreve.109.024220]
Abstract
Complex dynamical systems may exhibit multiple steady states, including time-periodic limit cycles, where the final trajectory depends on initial conditions. With tuning of parameters, limit cycles can proliferate or merge at an exceptional point. Here we ask how dynamics in the vicinity of such a bifurcation are influenced by noise. A pitchfork bifurcation can be used to induce bifurcation behavior. We model a limit cycle with the normal form of the Hopf oscillator, couple it to the pitchfork, and investigate the resulting dynamical system in the presence of noise. We show that the generating functional for the averages of the dynamical variables factorizes between the pitchfork and the oscillator. The statistical properties of the pitchfork in the presence of noise in its various regimes are investigated and a scaling theory is developed for the correlation and response functions, including a possible symmetry-breaking field. The analysis is done by perturbative calculations as well as numerical means. Finally, observables illustrating the coupling of a system with a limit cycle to a pitchfork are discussed and the phase-phase correlations are shown to exhibit nondiffusive behavior with universal scaling.
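The generating-functional and scaling analysis summarised above is not reproduced here; the sketch below only sets up the underlying stochastic system (a Hopf normal form whose growth rate is modulated by a pitchfork variable) and integrates it with an Euler-Maruyama scheme. The specific coupling term and all parameter values are assumptions for illustration, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

dt, T = 1e-3, 200.0
steps = int(T / dt)
mu, lam, omega = 0.2, 0.5, 2 * np.pi     # pitchfork and Hopf parameters (illustrative)
c, D_y, D_z = 0.3, 0.05, 0.05            # coupling strength and noise intensities (assumed)

y = 0.0                                  # pitchfork variable
z = 0.1 + 0.0j                           # Hopf oscillator, complex amplitude
ys = np.empty(steps)
zs = np.empty(steps, dtype=complex)

for t in range(steps):
    # pitchfork normal form with additive noise: dy = (mu*y - y^3) dt + sqrt(2 D_y) dW
    y += dt * (mu * y - y**3) + np.sqrt(2 * D_y * dt) * rng.standard_normal()
    # Hopf normal form whose linear growth rate is shifted by the pitchfork state (assumed coupling)
    eta = np.sqrt(2 * D_z * dt) * (rng.standard_normal() + 1j * rng.standard_normal())
    z += dt * ((lam + c * y + 1j * omega) * z - abs(z) ** 2 * z) + eta
    ys[t], zs[t] = y, z

print(f"pitchfork state: mean |y| = {np.mean(np.abs(ys)):.3f} (noise-free fixed points at +/-{np.sqrt(mu):.3f})")
print(f"limit cycle: mean radius |z| = {np.mean(np.abs(zs)):.3f}")
```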
Affiliation(s)
- Sergei Shmakov
- James Franck Institute and Department of Physics, The University of Chicago, Chicago, Illinois 60637, USA
- Peter B Littlewood
- James Franck Institute and Department of Physics, The University of Chicago, Chicago, Illinois 60637, USA and School of Physics and Astronomy, University of St Andrews, St Andrews KY16 9AJ, United Kingdom
3
Alain C, Göke K, Shen D, Bidelman GM, Bernstein LJ, Snyder JS. Neural alpha oscillations index context-driven perception of ambiguous vowel sequences. iScience 2023; 26:108457. [PMID: 38058304] [PMCID: PMC10696458] [DOI: 10.1016/j.isci.2023.108457]
Abstract
Perception of bistable stimuli is influenced by prior context. In some cases, the interpretation matches with how the preceding stimulus was perceived; in others, it tends to be the opposite of the previous stimulus percept. We measured high-density electroencephalography (EEG) while participants were presented with a sequence of vowels that varied in formant transition, promoting the perception of one or two auditory streams followed by an ambiguous bistable sequence. For the bistable sequence, participants were more likely to report hearing the opposite percept of the one heard immediately before. This auditory contrast effect coincided with changes in alpha power localized in the left angular gyrus and left sensorimotor and right sensorimotor/supramarginal areas. The latter correlated with participants' perception. These results suggest that the contrast effect for a bistable sequence of vowels may be related to neural adaptation in posterior auditory areas, which influences participants' perceptual construal level of ambiguous stimuli.
Affiliation(s)
- Claude Alain
- Rotman Research Institute, Toronto, ON M6A 2E1, Canada
- Department of Psychology, University of Toronto, Toronto, ON M5S 3G3, Canada
- Dawei Shen
- Rotman Research Institute, Toronto, ON M6A 2E1, Canada
- Gavin M. Bidelman
- Department of Speech, Language and Hearing Sciences and Program in Neuroscience, Indiana University, Bloomington, IN 47408, USA
- Lori J. Bernstein
- Department of Psychiatry, University of Toronto and University Health Network, Toronto, ON M5G 2C4, Canada
- Joel S. Snyder
- Department of Psychology, University of Nevada, Las Vegas, NV 89154, USA
4
Fernandez Pujol C, Blundon EG, Dykstra AR. Laminar specificity of the auditory perceptual awareness negativity: A biophysical modeling study. PLoS Comput Biol 2023; 19:e1011003. [PMID: 37384802] [PMCID: PMC10337981] [DOI: 10.1371/journal.pcbi.1011003]
Abstract
How perception of sensory stimuli emerges from brain activity is a fundamental question of neuroscience. To date, two disparate lines of research have examined this question. On one hand, human neuroimaging studies have helped us understand the large-scale brain dynamics of perception. On the other hand, work in animal models (mice, typically) has led to fundamental insight into the micro-scale neural circuits underlying perception. However, translating such fundamental insight from animal models to humans has been challenging. Here, using biophysical modeling, we show that the auditory awareness negativity (AAN), an evoked response associated with perception of target sounds in noise, can be accounted for by synaptic input to the supragranular layers of auditory cortex (AC) that is present when target sounds are heard but absent when they are missed. This additional input likely arises from cortico-cortical feedback and/or non-lemniscal thalamic projections and targets the apical dendrites of layer-5 (L5) pyramidal neurons. In turn, this leads to increased local field potential activity, increased spiking activity in L5 pyramidal neurons, and the AAN. The results are consistent with current cellular models of conscious processing and help bridge the gap between the macro and micro levels of perception-related brain activity.
Affiliation(s)
- Carolina Fernandez Pujol
- Department of Biomedical Engineering, University of Miami, Coral Gables, Florida, United States of America
- Elizabeth G. Blundon
- Department of Biomedical Engineering, University of Miami, Coral Gables, Florida, United States of America
- Andrew R. Dykstra
- Department of Biomedical Engineering, University of Miami, Coral Gables, Florida, United States of America
5
Melland P, Curtu R. Attractor-Like Dynamics Extracted from Human Electrocorticographic Recordings Underlie Computational Principles of Auditory Bistable Perception. J Neurosci 2023; 43:3294-3311. [PMID: 36977581] [PMCID: PMC10162465] [DOI: 10.1523/jneurosci.1531-22.2023]
Abstract
In bistable perception, observers experience alternations between two interpretations of an unchanging stimulus. Neurophysiological studies of bistable perception typically partition neural measurements into stimulus-based epochs and assess neuronal differences between epochs based on subjects' perceptual reports. Computational studies replicate statistical properties of percept durations with modeling principles like competitive attractors or Bayesian inference. However, bridging neuro-behavioral findings with modeling theory requires the analysis of single-trial dynamic data. Here, we propose an algorithm for extracting nonstationary timeseries features from single-trial electrocorticography (ECoG) data. We applied the proposed algorithm to 5-min ECoG recordings from human primary auditory cortex obtained during perceptual alternations in an auditory triplet streaming task (six subjects: four male, two female). We report two ensembles of emergent neuronal features in all trial blocks. One ensemble consists of periodic functions that encode a stereotypical response to the stimulus. The other comprises more transient features and encodes dynamics associated with bistable perception at multiple time scales: minutes (within-trial alternations), seconds (duration of individual percepts), and milliseconds (switches between percepts). Within the second ensemble, we identified a slowly drifting rhythm that correlates with the perceptual states and several oscillators with phase shifts near perceptual switches. Projections of single-trial ECoG data onto these features establish low-dimensional attractor-like geometric structures invariant across subjects and stimulus types. These findings provide supporting neural evidence for computational models with oscillatory-driven attractor-based principles. The feature extraction techniques described here generalize across recording modality and are appropriate when hypothesized low-dimensional dynamics characterize an underlying neural system. SIGNIFICANCE STATEMENT: Irrespective of the sensory modality, neurophysiological studies of multistable perception have typically investigated events time-locked to the perceptual switching rather than the time course of the perceptual states per se. Here, we propose an algorithm that extracts neuronal features of bistable auditory perception from large-scale single-trial data while remaining agnostic to the subject's perceptual reports. The algorithm captures the dynamics of perception at multiple timescales: minutes (within-trial alternations), seconds (durations of individual percepts), and milliseconds (timing of switches), and distinguishes attributes of neural encoding of the stimulus from those encoding the perceptual states. Finally, our analysis identifies a set of latent variables that exhibit alternating dynamics along a low-dimensional manifold, similar to trajectories in attractor-based models for perceptual bistability.
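The authors' feature-extraction algorithm is not reproduced here. Purely as a generic point of reference, the sketch below shows one standard way of obtaining slow, low-dimensional candidate features from multichannel recordings: band-limited power envelopes, temporal smoothing, and a principal-component projection via the SVD. The synthetic data, band edges, smoothing window and number of components are all assumptions and do not correspond to the paper's method.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.ndimage import uniform_filter1d

rng = np.random.default_rng(3)
fs, dur, n_ch = 500.0, 120.0, 16            # sampling rate (Hz), duration (s), channels (synthetic)
t = np.arange(int(fs * dur)) / fs

# synthetic "recording": a slow latent drift modulating a high-frequency carrier on every channel
latent = np.sin(2 * np.pi * 0.05 * t)        # slow, alternation-like state variable
carrier = np.sin(2 * np.pi * 110.0 * t)
mixing = 0.5 + rng.random(n_ch)
data = np.outer(mixing, (1.0 + 0.8 * latent) * carrier) + 0.5 * rng.standard_normal((n_ch, t.size))

# band-limited power envelope per channel (band edges are illustrative)
b, a = butter(4, [70.0 / (fs / 2), 150.0 / (fs / 2)], btype="band")
env = np.abs(hilbert(filtfilt(b, a, data, axis=1), axis=1))

# smooth the envelopes, centre them, and project onto leading principal components via the SVD
smooth = uniform_filter1d(env, size=int(fs), axis=1)     # 1-s moving average
centred = smooth - smooth.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
features = Vt[:3]                                        # three candidate slow features over time

explained = S[:3] ** 2 / np.sum(S ** 2)
print("variance explained by first three components:", np.round(explained, 3))
print("correlation of first component with the latent drift:",
      np.round(np.corrcoef(features[0], latent)[0, 1], 3))
```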
Affiliation(s)
- Pake Melland
- Department of Mathematics, Southern Methodist University, Dallas, Texas 75275
- Applied Mathematical & Computational Sciences, The University of Iowa, Iowa City, Iowa 52242
- Rodica Curtu
- Department of Mathematics, The University of Iowa, Iowa City, Iowa 52242
- The Iowa Neuroscience Institute, The University of Iowa, Iowa City, Iowa 52242
6
Higgins NC, Scurry AN, Jiang F, Little DF, Alain C, Elhilali M, Snyder JS. Adaptation in the sensory cortex drives bistable switching during auditory stream segregation. Neurosci Conscious 2023; 2023:niac019. [PMID: 36751309] [PMCID: PMC9899071] [DOI: 10.1093/nc/niac019]
Abstract
Current theories of perception emphasize the role of neural adaptation, inhibitory competition, and noise as key components that lead to switches in perception. Supporting evidence comes from neurophysiological findings of specific neural signatures in modality-specific and supramodal brain areas that appear to be critical to switches in perception. We used functional magnetic resonance imaging to study brain activity around the time of switches in perception while participants listened to a bistable auditory stream segregation stimulus, which can be heard as one integrated stream of tones or two segregated streams of tones. The auditory thalamus showed more activity around the time of a switch from segregated to integrated compared to time periods of stable perception of integrated; in contrast, the rostral anterior cingulate cortex and the inferior parietal lobule showed more activity around the time of a switch from integrated to segregated compared to time periods of stable perception of segregated streams, consistent with prior findings of asymmetries in brain activity depending on the switch direction. In sound-responsive areas in the auditory cortex, neural activity increased in strength preceding switches in perception and declined in strength over time following switches in perception. Such dynamics in the auditory cortex are consistent with the role of adaptation proposed by computational models of visual and auditory bistable switching, whereby the strength of neural activity decreases following a switch in perception, which eventually destabilizes the current percept enough to lead to a switch to an alternative percept.
Affiliation(s)
- Nathan C Higgins
- Department of Communication Sciences and Disorders, University of South Florida, 4202 E. Fowler Avenue, PCD1017, Tampa, FL 33620, USA
- Alexandra N Scurry
- Department of Psychology, University of Nevada, 1664 N. Virginia Street Mail Stop 0296, Reno, NV 89557, USA
- Fang Jiang
- Department of Psychology, University of Nevada, 1664 N. Virginia Street Mail Stop 0296, Reno, NV 89557, USA
- David F Little
- Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218, USA
- Claude Alain
- Rotman Research Institute, Baycrest Health Sciences, 3560 Bathurst Street, Toronto, ON M6A 2E1, Canada
- Mounya Elhilali
- Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218, USA
- Joel S Snyder
- Department of Psychology, University of Nevada, 4505 Maryland Parkway Mail Stop 5030, Las Vegas, NV 89154, USA
7
Thomassen S, Hartung K, Einhäuser W, Bendixen A. Low-high-low or high-low-high? Pattern effects on sequential auditory scene analysis. J Acoust Soc Am 2022; 152:2758. [PMID: 36456271] [DOI: 10.1121/10.0015054]
Abstract
Sequential auditory scene analysis (ASA) is often studied using sequences of two alternating tones, such as ABAB or ABA_, with "_" denoting a silent gap, and "A" and "B" sine tones differing in frequency (nominally low and high). Many studies implicitly assume that the specific arrangement (ABAB vs ABA_, as well as low-high-low vs high-low-high within ABA_) plays a negligible role, such that decisions about the tone pattern can be governed by other considerations. To explicitly test this assumption, a systematic comparison of different tone patterns for two-tone sequences was performed in three different experiments. Participants were asked to report whether they perceived the sequences as originating from a single sound source (integrated) or from two interleaved sources (segregated). Results indicate that core findings of sequential ASA, such as an effect of frequency separation on the proportion of integrated and segregated percepts, are similar across the different patterns during prolonged listening. However, at sequence onset, the integrated percept was more likely to be reported by the participants in ABA_low-high-low than in ABA_high-low-high sequences. This asymmetry is important for models of sequential ASA, since the formation of percepts at onset is an integral part of understanding how auditory interpretations build up.
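For concreteness, the sketch below generates the two ABA_ arrangements compared in the study (low-high-low and high-low-high) as pure-tone sequences; the sample rate, tone duration, gap, base frequency and semitone separation are example values, not the exact stimulus parameters used in the experiments.

```python
import numpy as np

fs = 44100                         # sample rate (Hz)
tone_dur, gap = 0.125, 0.125       # example tone duration and silent gap (s)
f_low = 440.0                      # example low-tone frequency (Hz)
df_semitones = 6                   # example frequency separation
f_high = f_low * 2 ** (df_semitones / 12)

def tone(freq, dur, ramp=0.01):
    """Pure tone with raised-cosine onset/offset ramps to avoid clicks."""
    t = np.arange(int(fs * dur)) / fs
    y = np.sin(2 * np.pi * freq * t)
    n = int(fs * ramp)
    win = 0.5 * (1 - np.cos(np.pi * np.arange(n) / n))
    y[:n] *= win
    y[-n:] *= win[::-1]
    return y

silence = np.zeros(int(fs * gap))

def triplet(f_a, f_b):
    """One ABA_ triplet: A tone, B tone, A tone, silent gap."""
    return np.concatenate([tone(f_a, tone_dur), tone(f_b, tone_dur), tone(f_a, tone_dur), silence])

low_high_low = np.tile(triplet(f_low, f_high), 20)    # ABA_ with A as the low tone
high_low_high = np.tile(triplet(f_high, f_low), 20)   # ABA_ with A as the high tone

print(f"each sequence lasts {low_high_low.size / fs:.1f} s "
      f"({f_low:.0f} Hz and {f_high:.0f} Hz tones, {df_semitones} semitones apart)")
```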
Affiliation(s)
- Sabine Thomassen
- Cognitive Systems Lab, Faculty of Natural Sciences, Chemnitz University of Technology, 09107 Chemnitz, Germany
- Kevin Hartung
- Cognitive Systems Lab, Faculty of Natural Sciences, Chemnitz University of Technology, 09107 Chemnitz, Germany
- Wolfgang Einhäuser
- Physics of Cognition Group, Faculty of Natural Sciences, Chemnitz University of Technology, 09107 Chemnitz, Germany
- Alexandra Bendixen
- Cognitive Systems Lab, Faculty of Natural Sciences, Chemnitz University of Technology, 09107 Chemnitz, Germany
8
Darki F, Ferrario A, Rankin J. Hierarchical processing underpins competition in tactile perceptual bistability. J Comput Neurosci 2022; 51:343-360. [PMID: 37204542] [PMCID: PMC10404575] [DOI: 10.1007/s10827-023-00852-0]
Abstract
Ambiguous sensory information can lead to spontaneous alternations between perceptual states, recently shown to extend to tactile perception. The authors recently proposed a simplified form of tactile rivalry which evokes two competing percepts for a fixed difference in input amplitudes across antiphase, pulsatile stimulation of the left and right fingers. This study addresses the need for a tactile rivalry model that captures the dynamics of perceptual alternations and that incorporates the structure of the somatosensory system. The model features hierarchical processing with two stages. The first and second stages of the model could be located in the secondary somatosensory cortex (area S2), or in higher areas driven by S2. The model captures dynamical features specific to the tactile rivalry percepts and produces general characteristics of perceptual rivalry: input strength dependence of dominance times (Levelt's proposition II), short-tailed skewness of dominance time distributions and the ratio of distribution moments. The presented modelling work leads to experimentally testable predictions. The same hierarchical model could generalise to account for percept formation, competition and alternations for bistable stimuli that involve pulsatile inputs from the visual and auditory domains.
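The distributional characteristics mentioned above (mean dominance times, short-tailed skewness and moment ratios, commonly summarised by a gamma fit) can be computed directly from a sequence of dominance durations. The sketch below does this for synthetic gamma-distributed durations, so the printed numbers are purely illustrative rather than values from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# synthetic dominance durations in seconds; real values would come from participants' reports
durations = rng.gamma(shape=3.5, scale=1.2, size=400)

mean = durations.mean()
cv = durations.std(ddof=1) / mean                        # coefficient of variation (a moment ratio)
skewness = stats.skew(durations)
shape, loc, scale = stats.gamma.fit(durations, floc=0)   # gamma fit with location fixed at zero

print(f"mean = {mean:.2f} s, CV = {cv:.2f}, skewness = {skewness:.2f}")
print(f"gamma fit: shape k = {shape:.2f}, scale = {scale:.2f}")
print(f"skewness implied by the gamma fit (2/sqrt(k)) = {2 / np.sqrt(shape):.2f}")
```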
Affiliation(s)
- Farzaneh Darki
- Department of Mathematics, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter, UK
- Andrea Ferrario
- Biorobotics Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- James Rankin
- Department of Mathematics, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter, UK
9
Attentional control via synaptic gain mechanisms in auditory streaming. Brain Res 2021; 1778:147720. [PMID: 34785256] [DOI: 10.1016/j.brainres.2021.147720]
Abstract
Attention is a crucial component in sound source segregation allowing auditory objects of interest to be both singled out and held in focus. Our study utilizes a fundamental paradigm for sound source segregation: a sequence of interleaved tones, A and B, of different frequencies that can be heard as a single integrated stream or segregated into two streams (auditory streaming paradigm). We focus on the irregular alternations between integrated and segregated that occur for long presentations, so-called auditory bistability. Psychoacoustic experiments demonstrate how attentional control, a listener's intention to experience integrated or segregated, biases perception in favour of different perceptual interpretations. Our data show that this is achieved by prolonging the dominance times of the attended percept and, to a lesser extent, by curtailing the dominance times of the unattended percept, an effect that remains consistent across a range of values for the difference in frequency between A and B. An existing neuromechanistic model describes the neural dynamics of perceptual competition downstream of primary auditory cortex (A1). The model allows us to propose plausible neural mechanisms for attentional control, as linked to different attentional strategies, in a direct comparison with behavioural data. A mechanism based on a percept-specific input gain best accounts for the effects of attentional control.
10
Heggli OA, Konvalinka I, Kringelbach ML, Vuust P. A metastable attractor model of self-other integration (MEAMSO) in rhythmic synchronization. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200332. [PMID: 34420393] [DOI: 10.1098/rstb.2020.0332]
Abstract
Human interaction is often accompanied by synchronized bodily rhythms. Such synchronization may emerge spontaneously as when a crowd's applause turns into a steady beat, be encouraged as in nursery rhymes, or be intentional as in the case of playing music together. The latter has been extensively studied using joint finger-tapping paradigms as a simplified version of rhythmic interpersonal synchronization. A key finding is that synchronization in such cases is multifaceted, with synchronized behaviour resting upon different synchronization strategies such as mutual adaptation, leading-following and leading-leading. However, there are multiple open questions regarding the mechanism behind these strategies and how they develop dynamically over time. Here, we propose a metastable attractor model of self-other integration (MEAMSO). This model conceptualizes dyadic rhythmic interpersonal synchronization as a process of integrating and segregating signals of self and other. Perceived sounds are continuously evaluated as either being attributed to self-produced or other-produced actions. The model entails a metastable system with two particular attractor states: one where an individual maintains two separate predictive models for self- and other-produced actions, and the other where these two predictive models integrate into one. The MEAMSO explains the three known synchronization strategies and makes testable predictions about the dynamics of interpersonal synchronization both in behaviour and the brain. This article is part of the theme issue 'Synchrony and rhythm interaction: from the brain to behavioural ecology'.
Affiliation(s)
- Ole Adrian Heggli
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and the Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Ivana Konvalinka
- SINe Lab, Section for Cognitive Systems, DTU Compute, Technical University of Denmark, Kongens Lyngby, Denmark
- Morten L Kringelbach
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and the Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark; Centre for Eudaimonia and Human Flourishing, Department of Psychiatry, University of Oxford, Oxford, UK
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and the Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
11
Higgins NC, Monjaras AG, Yerkes BD, Little DF, Nave-Blodgett JE, Elhilali M, Snyder JS. Resetting of Auditory and Visual Segregation Occurs After Transient Stimuli of the Same Modality. Front Psychol 2021; 12:720131. [PMID: 34621219] [PMCID: PMC8490814] [DOI: 10.3389/fpsyg.2021.720131]
Abstract
In the presence of a continually changing sensory environment, maintaining stable but flexible awareness is paramount, and requires continual organization of information. Determining which stimulus features belong together, and which are separate is therefore one of the primary tasks of the sensory systems. Unknown is whether there is a global or sensory-specific mechanism that regulates the final perceptual outcome of this streaming process. To test the extent of modality independence in perceptual control, an auditory streaming experiment, and a visual moving-plaid experiment were performed. Both were designed to evoke alternating perception of an integrated or segregated percept. In both experiments, transient auditory and visual distractor stimuli were presented in separate blocks, such that the distractors did not overlap in frequency or space with the streaming or plaid stimuli, respectively, thus preventing peripheral interference. When a distractor was presented in the opposite modality as the bistable stimulus (visual distractors during auditory streaming or auditory distractors during visual streaming), the probability of percept switching was not significantly different than when no distractor was presented. Conversely, significant differences in switch probability were observed following within-modality distractors, but only when the pre-distractor percept was segregated. Due to the modality-specificity of the distractor-induced resetting, the results suggest that conscious perception is at least partially controlled by modality-specific processing. The fact that the distractors did not have peripheral overlap with the bistable stimuli indicates that the perceptual reset is due to interference at a locus in which stimuli of different frequencies and spatial locations are integrated.
Affiliation(s)
- Nathan C Higgins
- Department of Psychology, University of Nevada Las Vegas, Las Vegas, NV, United States
- Ambar G Monjaras
- Department of Psychology, University of Nevada Las Vegas, Las Vegas, NV, United States
- Breanne D Yerkes
- Department of Psychology, University of Nevada Las Vegas, Las Vegas, NV, United States
- David F Little
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, United States
- Mounya Elhilali
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, United States
- Joel S Snyder
- Department of Psychology, University of Nevada Las Vegas, Las Vegas, NV, United States
12
Ksander J, Katz DB, Miller P. A model of naturalistic decision making in preference tests. PLoS Comput Biol 2021; 17:e1009012. [PMID: 34555012] [PMCID: PMC8491944] [DOI: 10.1371/journal.pcbi.1009012]
Abstract
Decisions as to whether to continue with an ongoing activity or to switch to an alternative are a constant in an animal’s natural world, and in particular underlie foraging behavior and performance in food preference tests. Stimuli experienced by the animal both impact the choice and are themselves impacted by the choice, in a dynamic back and forth. Here, we present model neural circuits, based on spiking neurons, in which the choice to switch away from ongoing behavior instantiates this back and forth, arising as a state transition in neural activity. We analyze two classes of circuit, which differ in whether state transitions result from a loss of hedonic input from the stimulus (an “entice to stay” model) or from aversive stimulus-input (a “repel to leave” model). In both classes of model, we find that the mean time spent sampling a stimulus decreases with increasing value of the alternative stimulus, a fact that we linked to the inclusion of depressing synapses in our model. The competitive interaction is much greater in “entice to stay” model networks, which has qualitative features of the marginal value theorem, and thereby provides a framework for optimal foraging behavior. We offer suggestions as to how our models could be discriminatively tested through the analysis of electrophysiological and behavioral data. Many decisions are of the ilk of whether to continue sampling a stimulus or to switch to an alternative, a key feature of foraging behavior. We produce two classes of model for such stay-switch decisions, which differ in how decisions to switch stimuli can arise. In an “entice-to-stay” model, a reduction in the necessary positive stimulus input causes switching decisions. In a “repel-to-leave” model, a rise in aversive stimulus input produces a switch decision. We find that in tasks where the sampling of one stimulus follows another, adaptive biological processes arising from a highly hedonic stimulus can reduce the time spent at the following stimulus, by up to ten-fold in the “entice-to-stay” models. Along with potentially observable behavioral differences that could distinguish the classes of networks, we also found signatures in neural activity, such as oscillation of neural firing rates and a rapid change in rates preceding the time of choice to leave a stimulus. In summary, our model findings lead to testable predictions and suggest a neural circuit-based framework for explaining foraging choices.
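The abstract attributes the shortened sampling times to depressing synapses. A common way to describe short-term synaptic depression is a resource model in the style of Tsodyks and Markram; the sketch below simulates such a synapse driven by a Poisson spike train. The presynaptic rate, release fraction and recovery time constant are assumed values, not those used in the paper's spiking networks.

```python
import numpy as np

rng = np.random.default_rng(5)

dt, T = 1e-4, 2.0                  # time step and duration (s)
steps = int(T / dt)
rate = 40.0                        # presynaptic Poisson rate (Hz), illustrative
U, tau_rec = 0.4, 0.5              # release fraction and resource recovery time constant (assumed)

x = 1.0                            # fraction of available synaptic resources
efficacy = np.zeros(steps)         # transmitted amount at each spike time

for t in range(steps):
    if rng.random() < rate * dt:   # Poisson presynaptic spike in this bin
        efficacy[t] = U * x        # transmission scales with the remaining resources
        x -= U * x                 # a fraction of the resources is consumed
    x += dt * (1.0 - x) / tau_rec  # resources recover toward 1 between spikes

spikes = np.flatnonzero(efficacy)
print(f"{spikes.size} presynaptic spikes")
print(f"mean efficacy of the first 5 spikes: {efficacy[spikes[:5]].mean():.3f}")
print(f"mean efficacy of the last 5 spikes:  {efficacy[spikes[-5:]].mean():.3f}")
```

The decline from the first to the last efficacies illustrates how sustained sampling of one stimulus weakens its own synaptic drive, the ingredient the abstract links to shortened sampling times at the following stimulus.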
Affiliation(s)
- John Ksander
- Volen National Center for Complex Systems, Brandeis University, Waltham, Massachusetts, United States of America
- Department of Psychology, Brandeis University, Waltham, Massachusetts, United States of America
- Donald B. Katz
- Volen National Center for Complex Systems, Brandeis University, Waltham, Massachusetts, United States of America
- Department of Psychology, Brandeis University, Waltham, Massachusetts, United States of America
- Paul Miller
- Volen National Center for Complex Systems, Brandeis University, Waltham, Massachusetts, United States of America
- Department of Biology, Brandeis University, Waltham, Massachusetts, United States of America
13
Brace KM, Sussman ES. The role of attention and explicit knowledge in perceiving bistable auditory input. Psychophysiology 2021; 58:e13875. [PMID: 34110020] [DOI: 10.1111/psyp.13875]
Abstract
The auditory system frequently encounters ambiguous sound input that can be perceived in multiple ways. The current study investigated the role of explicit knowledge in modulating how sounds are represented in auditory memory for a bistable sound sequence that could be perceived equally as integrated or segregated. We hypothesized that the dominant percept of the bistable sequence would suppress representation of the alternative perceptual organization as a function of how much top-down knowledge the listener had about the structure of the sequence. Performance measures and event-related brain potentials were compared when participants had explicit knowledge about one perceptual organization in the first half of the experiment to when they had explicit knowledge of both in the second half. We hypothesized that knowledge would modify the brain response to the alternative percept of the bistable sequence. However, that did not occur. When participants were performing one task, with no explicit knowledge of the bistable structure of the sequence, both integrated and segregated percepts were represented in auditory working memory. This demonstrates that explicit knowledge about the sounds is not a necessary factor for deriving and maintaining representations of multiple sound organizations within a complex sound environment. Passive attention operates in parallel with active or selective attention to maintain consistent representations of the environment, representations that may or may not be useful for task performance. It suggests a highly adaptive system useful in everyday listening situations where the listener has no prior knowledge about how the sound environment is structured.
Affiliation(s)
- Kelin M Brace
- Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Elyse S Sussman
- Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA; Department of Otorhinolaryngology-Head & Neck Surgery, Albert Einstein College of Medicine, Bronx, NY, USA
14
Cao Q, Parks N, Goldwyn JH. Dynamics of the Auditory Continuity Illusion. Front Comput Neurosci 2021; 15:676637. [PMID: 34168547] [PMCID: PMC8217826] [DOI: 10.3389/fncom.2021.676637]
Abstract
Illusions give intriguing insights into perceptual and neural dynamics. In the auditory continuity illusion, two brief tones separated by a silent gap may be heard as one continuous tone if a noise burst with appropriate characteristics fills the gap. This illusion probes the conditions under which listeners link related sounds across time and maintain perceptual continuity in the face of sudden changes in sound mixtures. Conceptual explanations of this illusion have been proposed, but its neural basis is still being investigated. In this work we provide a dynamical systems framework, grounded in principles of neural dynamics, to explain the continuity illusion. We construct an idealized firing rate model of a neural population and analyze the conditions under which firing rate responses persist during the interruption between the two tones. First, we show that sustained inputs and hysteresis dynamics (a mismatch between tone levels needed to activate and inactivate the population) can produce continuous responses. Second, we show that transient inputs and bistable dynamics (coexistence of two stable firing rate levels) can also produce continuous responses. Finally, we combine these input types together to obtain neural dynamics consistent with two requirements for the continuity illusion as articulated in a well-known theory of auditory scene analysis: responses persist through the noise-filled gap if noise provides sufficient evidence that the tone continues and if there is no evidence of discontinuities between the tones and noise. By grounding these notions in a quantitative model that incorporates elements of neural circuits (recurrent excitation, and mutual inhibition, specifically), we identify plausible mechanisms for the continuity illusion. Our findings can help guide future studies of neural correlates of this illusion and inform development of more biophysically-based models of the auditory continuity illusion.
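The two ingredients named above, hysteresis with sustained inputs and bistability supporting persistence for transient inputs, both arise in a single recurrently excited population with a sigmoidal rate function. The sketch below sweeps a "tone level" input up and then down to expose the hysteresis loop; the nonlinearity and all parameters are generic illustrations, not the paper's model.

```python
import numpy as np

def f(x):
    # sigmoidal population rate function (threshold and gain chosen for illustration)
    return 1.0 / (1.0 + np.exp(-(x - 1.0) / 0.1))

tau, w, dt = 0.02, 1.0, 1e-3       # time constant (s), recurrent excitation, integration step

def settle(I, r0, T=2.0):
    """Relax tau*dr/dt = -r + f(w*r + I) from r0 with a fixed input I."""
    r = r0
    for _ in range(int(T / dt)):
        r += dt / tau * (-r + f(w * r + I))
    return r

inputs = np.linspace(0.0, 1.2, 25)
r, up = 0.0, []
for I in inputs:                   # slowly increasing "tone level"
    r = settle(I, r)
    up.append(r)
down = []
for I in inputs[::-1]:             # then slowly decreasing
    r = settle(I, r)
    down.append(r)
down = down[::-1]

for I, ru, rd in zip(inputs, up, down):
    note = "  <- bistable: response depends on history" if abs(ru - rd) > 0.2 else ""
    print(f"I = {I:.2f}: up-sweep r = {ru:.2f}, down-sweep r = {rd:.2f}{note}")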
Affiliation(s)
- Qianyi Cao
- Department of Mathematics and Statistics, Swarthmore College, Swarthmore, PA, United States
- Noah Parks
- Department of Mathematics and Statistics, Swarthmore College, Swarthmore, PA, United States
- Joshua H Goldwyn
- Department of Mathematics and Statistics, Swarthmore College, Swarthmore, PA, United States
15
Grenzebach J, Wegner TGG, Einhäuser W, Bendixen A. Pupillometry in auditory multistability. PLoS One 2021; 16:e0252370. [PMID: 34086770] [PMCID: PMC8177413] [DOI: 10.1371/journal.pone.0252370]
Abstract
In multistability, a constant stimulus induces alternating perceptual interpretations. For many forms of visual multistability, the transition from one interpretation to another ("perceptual switch") is accompanied by a dilation of the pupil. Here we ask whether the same holds for auditory multistability, specifically auditory streaming. Two tones were played in alternation, yielding four distinct interpretations: the tones can be perceived as one integrated percept (single sound source), or as segregated with either tone or both tones in the foreground. We found that the pupil dilates significantly around the time a perceptual switch is reported ("multistable condition"). When participants instead responded to actual stimulus changes that closely mimicked the multistable perceptual experience ("replay condition"), the pupil dilated more around such responses than in multistability. This still held when data were corrected for the pupil response to the stimulus change as such. Hence, active responses to an exogenous stimulus change trigger a stronger or temporally more confined pupil dilation than responses to an endogenous perceptual switch. In another condition, participants randomly pressed the buttons used for reporting multistability. In Study 1, this "random condition" failed to sufficiently mimic the temporal pattern of multistability. By adapting the instructions, in Study 2 we obtained a response pattern more similar to the multistable condition. In this case, the pupil dilated significantly around the random button presses. Albeit numerically smaller, this pupil response was not significantly different from the multistable condition. While there are several possible explanations (related, e.g., to the decision to respond), this underlines the difficulty of isolating a purely perceptual effect in multistability. Our data extend previous findings from visual to auditory multistability. They highlight methodological challenges in interpreting such data and suggest possible approaches to meet them, including a novel stimulus to simulate the experience of perceptual switches in auditory streaming.
Affiliation(s)
- Jan Grenzebach
- Cognitive Systems Lab, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
- Physics of Cognition Group, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
- Thomas G. G. Wegner
- Cognitive Systems Lab, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
- Physics of Cognition Group, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
- Wolfgang Einhäuser
- Physics of Cognition Group, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
- Alexandra Bendixen
- Cognitive Systems Lab, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
16
Ferrario A, Rankin J. Auditory streaming emerges from fast excitation and slow delayed inhibition. J Math Neurosci 2021; 11:8. [PMID: 33939042] [PMCID: PMC8093365] [DOI: 10.1186/s13408-021-00106-2]
Abstract
In the auditory streaming paradigm, alternating sequences of pure tones can be perceived as a single galloping rhythm (integration) or as two sequences with separated low and high tones (segregation). Although studied for decades, the neural mechanisms underlying this perceptual grouping of sound remain a mystery. With the aim of identifying a plausible minimal neural circuit that captures this phenomenon, we propose a firing rate model with two periodically forced neural populations coupled by fast direct excitation and slow delayed inhibition. By analyzing the model in a non-smooth, slow-fast regime we analytically prove the existence of a rich repertoire of dynamical states and of their parameter-dependent transitions. We impose plausible parameter restrictions and link all states with perceptual interpretations. Regions of stimulus parameters occupied by states linked with each percept match those found in behavioural experiments. Our model suggests that slow inhibition masks the perception of subsequent tones during segregation (forward masking), whereas fast excitation enables integration for large pitch differences between the two tones.
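The paper's non-smooth slow-fast analysis is not reproduced here; the sketch below only simulates the general kind of circuit described, two periodically forced rate units coupled by fast direct cross-excitation and slow, delayed cross-inhibition, with the delay handled by indexing into the stored rate history. The functional forms, delay and parameter values are illustrative assumptions, not the authors' model.

```python
import numpy as np

def f(x):
    return np.clip(x, 0.0, None)            # threshold-linear rate function (illustrative)

dt, T = 1e-4, 4.0                           # time step and simulated time (s)
steps = int(T / dt)
tau_r, tau_s = 0.005, 0.08                  # fast rate dynamics vs slow inhibitory variable (assumed)
delay = 0.06                                # inhibition delay in seconds (assumed)
d = int(delay / dt)
w_ex, w_in = 0.4, 1.2                       # fast cross-excitation and slow delayed cross-inhibition

tone_dur = 0.125                            # each slot of the ABA_ triplet lasts one tone duration
period = 4 * tone_dur                       # A B A _ -> four slots per triplet

def inputs(time):
    """Unit 0 is driven by the A tones, unit 1 by the B tone of each ABA_ triplet."""
    phase = time % period
    i_a = 1.0 if (phase < tone_dur or 2 * tone_dur <= phase < 3 * tone_dur) else 0.0
    i_b = 1.0 if tone_dur <= phase < 2 * tone_dur else 0.0
    return np.array([i_a, i_b])

r = np.zeros((steps, 2))                    # rate history is kept so delayed values can be read back
s = np.zeros(2)                             # slow inhibitory variables, one per unit

for t in range(1, steps):
    r_delayed = r[t - 1 - d] if t - 1 - d >= 0 else np.zeros(2)
    s += dt / tau_s * (-s + r_delayed)                     # slow inhibition driven by delayed rates
    drive = inputs(t * dt) + w_ex * r[t - 1, ::-1] - w_in * s[::-1]   # [::-1] swaps the two units
    r[t] = r[t - 1] + dt / tau_r * (-r[t - 1] + f(drive))

mean_rates = r[-int(1.0 / dt):].mean(axis=0)
print(f"mean rate of the A-driven unit over the last second: {mean_rates[0]:.3f}")
print(f"mean rate of the B-driven unit over the last second: {mean_rates[1]:.3f}")
```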
Affiliation(s)
- Andrea Ferrario
- Department of Mathematics, College of Engineering, Mathematics & Physical Sciences, University of Exeter, Exeter, UK.
- James Rankin
- Department of Mathematics, College of Engineering, Mathematics & Physical Sciences, University of Exeter, Exeter, UK
17
Abstract
In perceptual rivalry, ambiguous sensory information leads to dynamic changes in the perceptual interpretation of fixed stimuli. This phenomenon occurs when participants receive sensory stimuli that support two or more distinct interpretations; this results in spontaneous alternations between possible perceptual interpretations. Perceptual rivalry has been widely studied across different sensory modalities including vision, audition, and to a limited extent, in the tactile domain. Common features of perceptual rivalry across various ambiguous visual and auditory paradigms characterize the randomness of switching times and their dependence on input strength manipulations (Levelt's propositions). It is still unclear whether the general characteristics of perceptual rivalry are preserved with tactile stimuli. This study aims to introduce a simple tactile stimulus capable of generating perceptual rivalry and explores whether general features of perceptual rivalry from other modalities extend to the tactile domain. Our results confirm that Levelt's proposition II extends to tactile bistability, and that the stochastic characteristics of irregular perceptual alternations agree with non-tactile modalities. An analysis of correlations between subsequent perceptual phases reveals a significant positive correlation at lag 1 (as found in visual bistability), and a negative correlation for lag 2 (in contrast with visual bistability).
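The lag-1 and lag-2 statistics referred to above are ordinary Pearson correlations between each perceptual phase duration and the duration one or two phases later. The sketch below computes them for a synthetic (independent) duration sequence, so the values come out near zero; empirical sequences would be expected to show the positive lag-1 and negative lag-2 pattern described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(6)

# synthetic sequence of perceptual phase durations (s); real data would come from button presses
durations = rng.gamma(shape=3.0, scale=1.5, size=300)

def lag_correlation(x, lag):
    """Pearson correlation between x[t] and x[t + lag]."""
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

for lag in (1, 2):
    print(f"lag-{lag} correlation of successive durations: {lag_correlation(durations, lag):+.3f}")
```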
18
Boscain U, Prandi D, Sacchelli L, Turco G. A bio-inspired geometric model for sound reconstruction. J Math Neurosci 2021; 11:2. [PMID: 33394219] [PMCID: PMC7782772] [DOI: 10.1186/s13408-020-00099-4]
Abstract
The mechanisms used by the human auditory system during sound reconstruction are still a matter of debate. The purpose of this study is to propose a mathematical model of sound reconstruction based on the functional architecture of the auditory cortex (A1). The model is inspired by the geometrical modelling of vision, which has undergone great development in the last ten years. There are, however, fundamental dissimilarities, due to the different role played by time and the different group of symmetries. The algorithm transforms the degraded sound into an 'image' in the time-frequency domain via a short-time Fourier transform. Such an image is then lifted to the Heisenberg group and is reconstructed via a Wilson-Cowan integro-differential equation. Preliminary numerical experiments are provided, showing the good reconstruction properties of the algorithm on synthetic sounds concentrated around two frequencies.
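Only the first stage of the pipeline, mapping a sound to a time-frequency "image" with a short-time Fourier transform, is sketched below for a synthetic signal concentrated around two frequencies with a silenced gap; the Heisenberg-group lift and the Wilson-Cowan reconstruction stages are not reproduced, and the signal parameters are assumptions.

```python
import numpy as np
from scipy.signal import stft

fs, dur = 16000, 1.0                          # sample rate (Hz) and duration (s), illustrative
t = np.arange(int(fs * dur)) / fs

# synthetic sound concentrated around two frequencies, with a silenced (degraded) segment
sound = np.sin(2 * np.pi * 440.0 * t) + 0.8 * np.sin(2 * np.pi * 660.0 * t)
sound[int(0.45 * fs):int(0.55 * fs)] = 0.0    # 100-ms gap that a reconstruction stage would fill

freqs, frames, Z = stft(sound, fs=fs, nperseg=512, noverlap=384)
power = np.abs(Z) ** 2                        # the time-frequency "image"

top_bins = np.argsort(power.mean(axis=1))[-2:]
print("dominant frequency bins (Hz):", np.sort(freqs[top_bins]))
print("time-frequency image shape (frequency bins x time frames):", power.shape)
```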
Affiliation(s)
- Ugo Boscain
- CNRS, LJLL, Sorbonne Université, Université de Paris, Inria, Paris, France
- Dario Prandi
- Université Paris-Saclay, CNRS, CentraleSupélec, Laboratoire des signaux et systèmes, 91190 Gif-sur-Yvette, France
- Ludovic Sacchelli
- Université Lyon, Université Claude Bernard Lyon 1, CNRS, LAGEPP UMR 5007, 43 bd du 11 novembre 1918, F-69100 Villeurbanne, France
- Giuseppina Turco
- CNRS, Laboratoire de Linguistique Formelle, UMR 7110, Université de Paris, Paris, France
19
Nguyen QA, Rinzel J, Curtu R. Buildup and bistability in auditory streaming as an evidence accumulation process with saturation. PLoS Comput Biol 2020; 16:e1008152. [PMID: 32853256] [PMCID: PMC7480857] [DOI: 10.1371/journal.pcbi.1008152]
Abstract
A repeating triplet-sequence ABA- of non-overlapping brief tones, A and B, is a valued paradigm for studying auditory stream formation and the cocktail party problem. The stimulus is "heard" either as a galloping pattern (integration) or as two interleaved streams (segregation); the initial percept is typically integration then followed by spontaneous alternations between segregation and integration, each being dominant for a few seconds. The probability of segregation grows over seconds, from near-zero to a steady value, defining the buildup function, BUF. Its stationary level increases with the difference in tone frequencies, DF, and the BUF rises faster. Percept durations have DF-dependent means and are gamma-like distributed. Behavioral and computational studies usually characterize triplet streaming either during alternations or during buildup. Here, our experimental design and modeling encompass both. We propose a pseudo-neuromechanistic model that incorporates spiking activity in primary auditory cortex, A1, as input and resolves perception along two network-layers downstream of A1. Our model is straightforward and intuitive. It describes the noisy accumulation of evidence against the current percept which generates switches when reaching a threshold. Accumulation can saturate either above or below threshold; if below, the switching dynamics resemble noise-induced transitions from an attractor state. Our model accounts quantitatively for three key features of data: the BUFs, mean durations, and normalized dominance duration distributions, at various DF values. It describes perceptual alternations without competition per se, and underscores that treating triplets in the sequence independently and averaging across trials, as implemented in earlier widely cited studies, is inadequate.
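The model in the paper is resolved across two network layers downstream of A1; the sketch below is only a single-variable caricature of its central idea, noisy accumulation of evidence against the current percept with saturation and a switching threshold, used to show how a buildup function (probability of segregation over time) is computed across trials. All parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

dt, T, n_trials = 0.01, 15.0, 200               # step (s), trial length (s), number of trials
steps = int(T / dt)
tau, sat, thresh, sigma = 1.5, 1.1, 1.0, 0.25   # illustrative: accumulation time scale,
                                                # saturation level, switch threshold, noise

segregated = np.zeros((n_trials, steps), dtype=bool)

for trial in range(n_trials):
    e = 0.0                                  # evidence against the currently heard percept
    state = 0                                # 0 = integrated (typical initial percept), 1 = segregated
    for k in range(steps):
        # noisy accumulation that saturates at `sat`; with `sat` below `thresh` the switches
        # would instead be purely noise-driven, resembling transitions out of an attractor state
        e += dt / tau * (sat - e) + sigma * np.sqrt(dt) * rng.standard_normal()
        e = max(e, 0.0)
        if e >= thresh:
            state, e = 1 - state, 0.0        # switch percept and reset the accumulator
        segregated[trial, k] = (state == 1)

buildup = segregated.mean(axis=0)            # buildup function: P(segregated) as a function of time
for sec in (1, 2, 4, 8, 15):
    print(f"P(segregated) at t = {sec:2d} s: {buildup[int(sec / dt) - 1]:.2f}")
```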
Affiliation(s)
- Quynh-Anh Nguyen
- Department of Mathematics, The University of Iowa, Iowa City, Iowa, United States of America
- John Rinzel
- Center for Neural Science, New York University, New York, New York, United States of America
- Courant Institute of Mathematical Sciences, New York University, New York, New York, United States of America
- Rodica Curtu
- Department of Mathematics, The University of Iowa, Iowa City, Iowa, United States of America
- Iowa Neuroscience Institute, Human Brain Research Laboratory, Iowa City, Iowa, United States of America
20
Gustafson SJ, Grose J, Buss E. Perceptual organization and stability of auditory streaming for pure tones and /ba/ stimuli. J Acoust Soc Am 2020; 148:EL159. [PMID: 32873027] [PMCID: PMC7438158] [DOI: 10.1121/10.0001744]
Abstract
The dynamics of auditory stream segregation were evaluated using repeating triplets composed of pure tones or the syllable /ba/. Stimuli differed in frequency (tones) or fundamental frequency (speech) by 4, 6, 8, or 10 semitones, and the standard frequency was either 250 Hz (tones and speech) or 400 Hz (tones). Twenty normal-hearing adults participated. For both tones and speech, a two-stream percept became more likely as frequency separation increased. Perceptual organization for speech tended to be more integrated and less stable compared to tones. Results suggest that prior data patterns observed with tones in this paradigm may generalize to speech stimuli.
Affiliation(s)
- Samantha J Gustafson
- Department of Communication Sciences and Disorders, University of Utah, 390 South 1530 East, Salt Lake City, Utah 84112, USA
- John Grose
- Department of Otolaryngology-Head and Neck Surgery, University of North Carolina, 170 Manning Drive, Chapel Hill, North Carolina 27599, USA
- Emily Buss
- Department of Otolaryngology-Head and Neck Surgery, University of North Carolina, 170 Manning Drive, Chapel Hill, North Carolina 27599, USA
21
Park Y, Geffen MN. A circuit model of auditory cortex. PLoS Comput Biol 2020; 16:e1008016. [PMID: 32716912] [PMCID: PMC7410340] [DOI: 10.1371/journal.pcbi.1008016]
Abstract
The mammalian sensory cortex is composed of multiple types of inhibitory and excitatory neurons, which form sophisticated microcircuits for processing and transmitting sensory information. Despite rapid progress in understanding the function of distinct neuronal populations, the parameters of connectivity that are required for the function of these microcircuits remain unknown. Recent studies found that the two most common inhibitory interneuron classes, parvalbumin-positive (PV) and somatostatin-positive (SST) interneurons, control sound-evoked responses, temporal adaptation and network dynamics in the auditory cortex (AC). These studies can inform our understanding of parameters for the connectivity of excitatory-inhibitory cortical circuits. Specifically, we asked whether a common microcircuit can account for the disparate effects found in studies by different groups. Starting with a cortical rate model, we find that a simple current-compensating mechanism accounts for the experimental findings from multiple groups. The key mechanisms are twofold. First, PVs compensate for reduced SST activity when thalamic inputs are strong, with less compensation when thalamic inputs are weak. Second, SSTs are generally disinhibited by reduced PV activity regardless of thalamic input strength. These roles are augmented by plastic synapses. Together, these roles reproduce the differential effects of PVs and SSTs in stimulus-specific adaptation, forward suppression and tuning-curve adaptation, as well as the influence of PVs on feedforward functional connectivity in the circuit. This circuit exhibits a balance of inhibitory and excitatory currents that persists under stimulation. This approach brings together multiple findings from different laboratories and identifies a circuit that can be used in future studies of upstream and downstream sensory processing.
Affiliation(s)
- Youngmin Park
- Department of Otorhinolaryngology: HNS, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Maria N. Geffen
- Department of Otorhinolaryngology: HNS, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Department of Neuroscience, Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
22
Darki F, Rankin J. Methods to assess binocular rivalry with periodic stimuli. J Math Neurosci 2020; 10:10. [PMID: 32542516] [PMCID: PMC7295892] [DOI: 10.1186/s13408-020-00087-8]
Abstract
Binocular rivalry occurs when the two eyes are presented with incompatible stimuli and perception alternates between these two stimuli. This phenomenon has been investigated in two types of experiments: (1) traditional experiments where the stimulus is fixed, and (2) eye-swap experiments in which the stimulus periodically swaps between eyes many times per second (Logothetis et al. in Nature 380(6575):621-624, 1996). In spite of the rapid swapping between eyes, perception can be stable for many seconds with specific stimulus parameter configurations. Wilson introduced a two-stage, hierarchical model to explain both types of experiments (Wilson in Proc. Natl. Acad. Sci. 100(24):14499-14503, 2003). Wilson's model and other rivalry models have only been studied with bifurcation analysis for fixed inputs, and the different types of dynamical behavior that can occur with periodically forced inputs have not been investigated. Here we report (1) a more complete description of the complex dynamics in the unforced Wilson model, and (2) a bifurcation analysis with periodic forcing. Previously, bifurcation analysis of the Wilson model with fixed inputs has revealed three main types of dynamical behaviors: winner-takes-all (WTA), rivalry oscillations (RIV), and simultaneous activity (SIM). Our results reveal richer dynamics including mixed-mode oscillations (MMOs) and a period-doubling cascade, which corresponds to low-amplitude WTA (LAWTA) oscillations. On the other hand, studying rivalry models with numerical continuation shows that high-frequency periodic forcing (e.g. 18 Hz, known as flicker) modulates, at the forcing frequency, the three main types of behavior that occur with fixed inputs (WTA-Mod, RIV-Mod, SIM-Mod). However, dynamical behavior is different with low-frequency periodic forcing (around 1.5 Hz, the so-called swap condition). In addition to WTA-Mod and SIM-Mod, cycle skipping, multi-cycle skipping and chaotic dynamics are found. This research provides a framework either for assessing binocular rivalry models against empirical results, or for better understanding the neural dynamics and mechanisms necessary to implement a minimal binocular rivalry model.
Collapse
Affiliation(s)
- Farzaneh Darki
- Department of Mathematics, College of Engineering, Mathematics & Physical Sciences, University of Exeter, Exeter, UK.
| | - James Rankin
- Department of Mathematics, College of Engineering, Mathematics & Physical Sciences, University of Exeter, Exeter, UK
| |
Collapse
|
23
|
Pietras B, Devalle F, Roxin A, Daffertshofer A, Montbrió E. Exact firing rate model reveals the differential effects of chemical versus electrical synapses in spiking networks. Phys Rev E 2020; 100:042412. [PMID: 31771022 DOI: 10.1103/physreve.100.042412] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2019] [Indexed: 01/09/2023]
Abstract
Chemical and electrical synapses shape the dynamics of neuronal networks. Numerous theoretical studies have investigated how each of these types of synapses contributes to the generation of neuronal oscillations, but their combined effect is less understood. This limitation is further magnified by the inability of traditional neuronal mean-field models (also known as firing rate models or firing rate equations) to account for electrical synapses. Here, we introduce a firing rate model that exactly describes the mean-field dynamics of heterogeneous populations of quadratic integrate-and-fire (QIF) neurons with both chemical and electrical synapses. The mathematical analysis of the firing rate model reveals a well-established bifurcation scenario for networks with chemical synapses, characterized by a codimension-2 cusp point and persistent states for strong recurrent excitatory coupling. The inclusion of electrical coupling generally implies neuronal synchrony by virtue of a supercritical Hopf bifurcation. This transforms the cusp scenario into a bifurcation scenario characterized by three codimension-2 points (cusp, Takens-Bogdanov, and saddle-node separatrix loop), which greatly reduces the possibility for persistent states. This is generic for heterogeneous QIF networks with both chemical and electrical couplings. Our results agree with several numerical studies on the dynamics of large networks of heterogeneous spiking neurons with electrical and chemical couplings.
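For readers who want a numerical handle on firing rate equations of this kind, the sketch below integrates a single-population Lorentzian-ansatz mean-field of QIF neurons with chemical coupling J and an electrical-coupling term g entering the rate equation. The form follows the standard derivation, but the exact equations and parameter regimes of the paper should be checked against the original; the values below are illustrative.

```python
import numpy as np

# Mean-field (firing rate) equations for a heterogeneous QIF population with
# chemical coupling J and electrical coupling g, written from the standard
# Lorentzian-ansatz derivation. Parameter values are illustrative.
tau, delta, eta_bar = 1.0, 1.0, -5.0   # membrane time constant, heterogeneity, mean excitability

def rhs(r, v, J, g, I=0.0):
    dr = (delta / (np.pi * tau) + r * (2.0 * v - g)) / tau
    dv = (v**2 + eta_bar + J * tau * r + I - (np.pi * tau * r)**2) / tau
    return dr, dv

def run(J, g, T=80.0, dt=1e-3):
    r, v = 0.01, -2.0
    for k in range(int(T / dt)):
        t = k * dt
        I = 10.0 if t < 10.0 else 0.0          # transient stimulus, then no input
        dr, dv = rhs(r, v, J, g, I)
        r, v = max(r + dt * dr, 0.0), v + dt * dv
    return r, v

# With strong recurrent excitation and g = 0, the high-activity state can
# persist after the stimulus; increasing the electrical coupling g tends to
# favour synchrony and shrink the region where such persistent states exist.
for g in (0.0, 1.0, 3.0):
    r, v = run(J=15.0, g=g)
    print(f"g = {g:3.1f}: rate after stimulus offset r = {r:6.3f}, v = {v:6.3f}")
```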
Collapse
Affiliation(s)
- Bastian Pietras
- Faculty of Behavioural and Movement Sciences, Amsterdam Movement Sciences & Institute of Brain and Behavior Amsterdam, Vrije Universiteit Amsterdam, van der Boechorststraat 9, Amsterdam 1081 BT, The Netherlands; Department of Physics, Lancaster University, Lancaster LA1 4YB, United Kingdom; Institute of Mathematics, Technical University Berlin, 10623 Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
| | - Federico Devalle
- Department of Physics, Lancaster University, Lancaster LA1 4YB, United Kingdom; Department of Information and Communication Technologies, Universitat Pompeu Fabra, 08003 Barcelona, Spain
| | - Alex Roxin
- Centre de Recerca Matemàtica, Campus de Bellaterra, Edifici C, 08193 Bellaterra (Barcelona), Spain; Barcelona Graduate School of Mathematics, 08193 Barcelona, Spain
| | - Andreas Daffertshofer
- Faculty of Behavioural and Movement Sciences, Amsterdam Movement Sciences & Institute of Brain and Behavior Amsterdam, Vrije Universiteit Amsterdam, van der Boechorststraat 9, Amsterdam 1081 BT, The Netherlands
| | - Ernest Montbrió
- Department of Information and Communication Technologies, Universitat Pompeu Fabra, 08003 Barcelona, Spain
| |
Collapse
|
24
|
Little DF, Snyder JS, Elhilali M. Ensemble modeling of auditory streaming reveals potential sources of bistability across the perceptual hierarchy. PLoS Comput Biol 2020; 16:e1007746. [PMID: 32275706 PMCID: PMC7185718 DOI: 10.1371/journal.pcbi.1007746] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2019] [Revised: 04/27/2020] [Accepted: 02/25/2020] [Indexed: 11/19/2022] Open
Abstract
Perceptual bistability (the spontaneous, irregular fluctuation of perception between two interpretations of a stimulus) occurs when observing a large variety of ambiguous stimulus configurations. This phenomenon has the potential to serve as a tool for, among other things, understanding how function varies across individuals due to the large individual differences that manifest during perceptual bistability. Yet it remains difficult to interpret the functional processes at work without knowing where bistability arises during perception. In this study we explore the hypothesis that bistability originates from multiple sources distributed across the perceptual hierarchy. We develop a hierarchical model of auditory processing comprising three distinct levels: a Peripheral, tonotopic analysis; a Central analysis computing features found more centrally in the auditory system; and an Object analysis, where sounds are segmented into different streams. We model bistable perception within this system by applying adaptation, inhibition and noise to one or all of the three levels of the hierarchy. We evaluate a large ensemble of variations of this hierarchical model, where each model has a different configuration of adaptation, inhibition and noise. This approach avoids the assumption that a single configuration must be invoked to explain the data. Each model is evaluated based on its ability to replicate two hallmarks of bistability during auditory streaming: the selectivity of bistability to specific stimulus configurations, and the characteristic log-normal pattern of perceptual switches. Consistent with a distributed origin, a broad range of model parameters across this hierarchy leads to a plausible form of perceptual bistability.
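The competition ingredient shared across the ensemble can be illustrated with a two-unit mutual-inhibition model with slow adaptation and additive noise, integrated by Euler-Maruyama; the parameter values below are illustrative assumptions rather than the settings of any specific model in the ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic competition motif: two units ("integrated" vs "segregated") with
# mutual inhibition, slow adaptation and additive noise (Euler-Maruyama).
beta, phi, tau, tau_a, sigma, I = 1.0, 0.5, 0.01, 2.0, 0.05, 0.5

def f(x):
    return 1.0 / (1.0 + np.exp(-(x - 0.2) / 0.1))

def dominance_durations(T=300.0, dt=1e-3):
    u = np.array([0.4, 0.1]); a = np.zeros(2)
    dom, t_last, durations = 0, 0.0, []
    for k in range(int(T / dt)):
        noise = sigma * np.sqrt(dt) * rng.standard_normal(2)
        u = u + dt / tau * (-u + f(I - beta * u[::-1] - phi * a)) + noise
        a = a + dt / tau_a * (-a + u)
        d = int(u[1] > u[0])
        if d != dom:
            durations.append(k * dt - t_last)
            dom, t_last = d, k * dt
    return np.array(durations)

dur = dominance_durations()
dur = dur[dur > 0.25]               # discard brief noise-driven double crossings
print(f"{dur.size} percept durations, mean {dur.mean():.2f} s, CV {dur.std()/dur.mean():.2f}")
# Dominance durations from such models are unimodal and right-skewed
# (log-normal/gamma-like), one of the benchmarks used to score each model.
```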
Collapse
Affiliation(s)
- David F. Little
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland, United States of America
| | - Joel S. Snyder
- Department of Psychology, University of Nevada, Las Vegas; Las Vegas, Nevada, United States of America
| | - Mounya Elhilali
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland, United States of America
| |
Collapse
|
25
|
Neural correlates of perceptual switching while listening to bistable auditory streaming stimuli. Neuroimage 2020; 204:116220. [DOI: 10.1016/j.neuroimage.2019.116220] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2019] [Revised: 08/19/2019] [Accepted: 09/19/2019] [Indexed: 11/15/2022] Open
|
26
|
Auditory streaming and bistability paradigm extended to a dynamic environment. Hear Res 2019; 383:107807. [PMID: 31622836 DOI: 10.1016/j.heares.2019.107807] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/25/2019] [Revised: 09/19/2019] [Accepted: 10/01/2019] [Indexed: 11/23/2022]
Abstract
We explore stream segregation with temporally modulated acoustic features using behavioral experiments and modelling. The auditory streaming paradigm, in which alternating high-frequency A tones and low-frequency B tones appear in a repeating ABA_ pattern, has been shown to be perceptually bistable for extended presentations (on the order of minutes). For a fixed, repeating stimulus, perception spontaneously changes (switches) at random times, every 2-15 s, between an integrated interpretation with a galloping rhythm and segregated streams. Streaming in a natural auditory environment requires segregation of auditory objects with features that evolve over time. With the relatively idealized ABA_ triplet paradigm, we explore perceptual switching in a non-static environment by considering slowly and periodically varying stimulus features. Our previously published model captures the dynamics of auditory bistability and predicts here how perceptual switches are entrained, tightly locked to the rising and falling phase of the modulation. In psychoacoustic experiments we find that entrainment depends on both the period of modulation and the intrinsic switch characteristics of individual listeners. The extended auditory streaming paradigm with slowly modulated stimulus features presented here will be of significant interest for future imaging and neurophysiology experiments by reducing the need for subjective perceptual reports of ongoing perception.
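One simple way to quantify the entrainment of switches to a slow stimulus modulation is the vector strength of switch phases relative to the modulation period; the sketch below uses synthetic switch times and is not the analysis pipeline of the paper.

```python
import numpy as np

# Given perceptual switch times (s) and the period (s) of a slow stimulus
# modulation, quantify entrainment as the vector strength of switch phases.
# The switch times below are synthetic placeholders for a listener's report.
def vector_strength(switch_times, period):
    phases = 2.0 * np.pi * (np.asarray(switch_times) % period) / period
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(1)
period = 30.0
locked = np.arange(1, 21) * period + rng.normal(0.0, 1.5, 20)   # switches near one phase
unlocked = np.sort(rng.uniform(0.0, 20 * period, 20))           # switches at random times
print("locked  :", round(vector_strength(locked, period), 2))
print("unlocked:", round(vector_strength(unlocked, period), 2))
```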
Collapse
|
27
|
Neural Signatures of Auditory Perceptual Bistability Revealed by Large-Scale Human Intracranial Recordings. J Neurosci 2019; 39:6482-6497. [PMID: 31189576 PMCID: PMC6697394 DOI: 10.1523/jneurosci.0655-18.2019] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2018] [Revised: 05/26/2019] [Accepted: 05/28/2019] [Indexed: 11/25/2022] Open
Abstract
A key challenge in neuroscience is understanding how sensory stimuli give rise to perception, especially when the process is supported by neural activity from an extended network of brain areas. Perception is inherently subjective, so interrogating its neural signatures requires, ideally, a combination of three factors: (1) behavioral tasks that separate stimulus-driven activity from perception per se; (2) human subjects who self-report their percepts while performing those tasks; and (3) concurrent neural recordings acquired at high spatial and temporal resolution. In this study, we analyzed human electrocorticographic recordings obtained during an auditory task that supported mutually exclusive perceptual interpretations. Eight neurosurgical patients (5 male; 3 female) listened to sequences of repeated triplets where tones were separated in frequency by several semitones. Subjects reported spontaneous alternations between two auditory perceptual states, 1-stream and 2-stream, by pressing a button. We compared averaged auditory evoked potentials (AEPs) associated with 1-stream and 2-stream percepts and identified significant differences between them in primary and nonprimary auditory cortex, surrounding auditory-related temporoparietal cortex, and frontal areas. We developed classifiers to identify spatial maps of percept-related differences in the AEP, corroborating findings from statistical analysis. We used one-dimensional embedding spaces to perform the group-level analysis. Our data illustrate exemplar high-temporal-resolution AEP waveforms in the auditory core region; explain inconsistencies in perceptual effects within auditory cortex reported across noninvasive studies of streaming of triplets; show percept-related changes in frontoparietal areas previously highlighted by studies that focused on perceptual transitions; and demonstrate that auditory cortex encodes maintenance of percepts and switches between them. SIGNIFICANCE STATEMENT The human brain has the remarkable ability to discern complex and ambiguous stimuli from the external world by parsing mixed inputs into interpretable segments. However, one's perception can deviate from objective reality. But how do perceptual discrepancies occur? What are their anatomical substrates? To address these questions, we performed intracranial recordings in neurosurgical patients as they reported their perception of sounds associated with two mutually exclusive interpretations. We identified signatures of subjective percepts as distinct from sound-driven brain activity in core and non-core auditory cortex and frontoparietal cortex. These findings were compared with previous studies of auditory bistable perception and suggested that perceptual transitions and maintenance of perceptual states were supported by common neural substrates.
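As a generic illustration of percept decoding (not the paper's analysis pipeline), the sketch below cross-validates a logistic-regression classifier on synthetic single-trial AEP feature vectors labelled by the reported percept.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Each trial is a vector of AEP amplitudes across recording contacts, labelled
# by the reported percept (0 = 1-stream, 1 = 2-stream). Synthetic placeholders.
rng = np.random.default_rng(0)
n_trials, n_contacts = 120, 40
labels = rng.integers(0, 2, n_trials)
effect = np.zeros(n_contacts); effect[:5] = 0.8          # a few "percept-sensitive" contacts
X = rng.standard_normal((n_trials, n_contacts)) + np.outer(labels, effect)

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, labels, cv=5)
print(f"cross-validated decoding accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
# The fitted coefficients (clf.fit(X, labels).coef_) give a spatial map of
# which contacts carry percept-related differences.
```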
Collapse
|
28
|
Pérez-Cervera A, Ashwin P, Huguet G, M Seara T, Rankin J. The uncoupling limit of identical Hopf bifurcations with an application to perceptual bistability. JOURNAL OF MATHEMATICAL NEUROSCIENCE 2019; 9:7. [PMID: 31385150 PMCID: PMC6682846 DOI: 10.1186/s13408-019-0075-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/22/2019] [Accepted: 06/19/2019] [Indexed: 06/01/2023]
Abstract
We study the dynamics arising when two identical oscillators are coupled near a Hopf bifurcation, where we assume a parameter ϵ uncouples the system at ϵ = 0. Using a normal form for two identical systems undergoing a Hopf bifurcation, we explore the dynamical properties. Matching the normal form coefficients to a coupled Wilson-Cowan oscillator network gives an understanding of different types of behaviour that arise in a model of perceptual bistability. Notably, we find bistability between in-phase and anti-phase solutions that demonstrates the feasibility for synchronisation to act as the mechanism by which periodic inputs can be segregated (rather than via strong inhibitory coupling, as in the existing models). Using numerical continuation we confirm our theoretical analysis for small coupling strength and explore the bifurcation diagrams for large coupling strength, where the normal form approximation breaks down.
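A minimal numerical companion to the normal-form analysis is a pair of weakly coupled Stuart-Landau oscillators; the coefficients below are illustrative and are not those matched to the Wilson-Cowan network in the paper.

```python
import numpy as np

# Two identical Stuart-Landau oscillators (Hopf normal form) with weak coupling
# that vanishes at eps = 0. Coefficients are illustrative assumptions.
lam, omega, shear = 0.5, 2.0 * np.pi, 1.0
eps, c = 0.05, 1.0 + 0.5j                    # coupling strength, complex coupling coefficient

def simulate(z0, T=200.0, dt=1e-3):
    z = np.array(z0, dtype=complex)
    for _ in range(int(T / dt)):
        intrinsic = (lam + 1j * omega - (1.0 + 1j * shear) * np.abs(z)**2) * z
        coupling = eps * c * (z[::-1] - z)   # weak diffusive coupling within the pair
        z = z + dt * (intrinsic + coupling)
    return z

cases = [([0.1 + 0j, 0.1 * np.exp(0.3j)], "started near in-phase"),
         ([0.1 + 0j, 0.1 * np.exp(1j * (np.pi - 0.3))], "started near anti-phase")]
for z0, label in cases:
    z = simulate(z0)
    dphi = np.angle(z[1] * np.conj(z[0]))
    print(f"{label:24s}: asymptotic phase difference = {dphi:+.2f} rad")
# Depending on the normal-form coefficients, in-phase and anti-phase solutions
# can coexist stably; this is the bistability linked to segregation in the paper.
```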
Collapse
Affiliation(s)
- Alberto Pérez-Cervera
- Departament de Matemàtiques - BGSMATH, Universitat Politècnica de Catalunya, Barcelona, Spain.
| | - Peter Ashwin
- Department of Mathematics, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter, UK
- EPSRC Centre for Predictive Modelling in Healthcare, University of Exeter, Exeter, UK
| | - Gemma Huguet
- Departament de Matemàtiques - BGSMATH, Universitat Politècnica de Catalunya, Barcelona, Spain
| | - Tere M Seara
- Departament de Matemàtiques - BGSMATH, Universitat Politècnica de Catalunya, Barcelona, Spain
| | - James Rankin
- Department of Mathematics, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter, UK
- EPSRC Centre for Predictive Modelling in Healthcare, University of Exeter, Exeter, UK
| |
Collapse
|
29
|
Rankin J, Rinzel J. Computational models of auditory perception from feature extraction to stream segregation and behavior. Curr Opin Neurobiol 2019; 58:46-53. [PMID: 31326723 DOI: 10.1016/j.conb.2019.06.009] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2019] [Accepted: 06/22/2019] [Indexed: 10/26/2022]
Abstract
Audition is by nature dynamic, from brainstem processing on sub-millisecond time scales, to segregating and tracking sound sources with changing features, to the pleasure of listening to music and the satisfaction of getting the beat. We review recent advances from computational models of sound localization, of auditory stream segregation and of beat perception/generation. A wealth of behavioral, electrophysiological and imaging studies shed light on these processes, typically with synthesized sounds having regular temporal structure. Computational models integrate knowledge from different experimental fields and at different levels of description. We advocate a neuromechanistic modeling approach that incorporates knowledge of the auditory system from various fields, that utilizes plausible neural mechanisms, and that bridges our understanding across disciplines.
Collapse
Affiliation(s)
- James Rankin
- College of Engineering, Mathematics and Physical Sciences, University of Exeter, Harrison Building, North Park Rd, Exeter EX4 4QF, UK.
| | - John Rinzel
- Center for Neural Science, New York University, 4 Washington Place, 10003 New York, NY, United States; Courant Institute of Mathematical Sciences, New York University, 251 Mercer St, 10012 New York, NY, United States
| |
Collapse
|
30
|
Paredes-Gallardo A, Dau T, Marozeau J. Auditory Stream Segregation Can Be Modeled by Neural Competition in Cochlear Implant Listeners. Front Comput Neurosci 2019; 13:42. [PMID: 31333438 PMCID: PMC6616076 DOI: 10.3389/fncom.2019.00042] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2019] [Accepted: 06/17/2019] [Indexed: 11/13/2022] Open
Abstract
Auditory stream segregation is a perceptual process by which the human auditory system groups sounds from different sources into perceptually meaningful elements (e.g., a voice or a melody). The perceptual segregation of sounds is important, for example, for the understanding of speech in noisy scenarios, a particularly challenging task for listeners with a cochlear implant (CI). It has been suggested that some aspects of stream segregation may be explained by relatively basic neural mechanisms at a cortical level. During the past decades, a variety of models have been proposed to account for the data from stream segregation experiments in normal-hearing (NH) listeners. However, little attention has been given to corresponding findings in CI listeners. The present study investigated whether a neural model of sequential stream segregation, proposed to describe the behavioral effects observed in NH listeners, can account for behavioral data from CI listeners. The model operates on the stimulus features at the cortical level and includes a competition stage between the neuronal units encoding the different percepts. The competition arises from a combination of mutual inhibition, adaptation, and additive noise. The model was found to capture the main trends in the behavioral data from CI listeners, such as the larger probability of a segregated percept with increasing feature difference between the sounds, as well as the build-up effect. Importantly, this was achieved without any modification to the model's competition stage, suggesting that stream segregation could be mediated by a similar mechanism in both groups of listeners.
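The build-up effect mentioned above is often summarised by fitting the proportion of segregated responses with a saturating function of time since sequence onset; the sketch below does this for synthetic data points and is purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# Build-up of stream segregation: the proportion of "segregated" reports grows
# with time since onset. Data points are synthetic placeholders, and the
# saturating-exponential form is a common description, not the model's output.
def buildup(t, p_max, tau):
    return p_max * (1.0 - np.exp(-t / tau))

t = np.arange(1.0, 11.0)                              # seconds since onset
p_seg = np.array([0.10, 0.22, 0.35, 0.42, 0.50, 0.55, 0.58, 0.61, 0.62, 0.64])
(p_max, tau), _ = curve_fit(buildup, t, p_seg, p0=[0.7, 3.0])
print(f"fitted asymptote = {p_max:.2f}, build-up time constant = {tau:.1f} s")
```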
Collapse
Affiliation(s)
- Andreu Paredes-Gallardo
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
| | - Torsten Dau
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
| | - Jeremy Marozeau
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
| |
Collapse
|
31
|
Bielczyk NZ, Piskała K, Płomecka M, Radziński P, Todorova L, Foryś U. Time-delay model of perceptual decision making in cortical networks. PLoS One 2019; 14:e0211885. [PMID: 30768608 PMCID: PMC6377186 DOI: 10.1371/journal.pone.0211885] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2018] [Accepted: 01/23/2019] [Indexed: 11/18/2022] Open
Abstract
It is known that cortical networks operate on the edge of instability, in which oscillations can appear. However, the influence of this dynamic regime on performance in decision making is not well understood. In this work, we propose a population model of decision making based on a winner-take-all mechanism. Using this model, we demonstrate that local slow inhibition within the competing neuronal populations can lead to a Hopf bifurcation. At the edge of instability, the system exhibits ambiguity in the decision making, which can account for the perceptual switches observed in human experiments. We further validate this model with fMRI datasets from an experiment on semantic priming in perception of ambivalent (male versus female) faces. We demonstrate that the model can correctly predict the drop in the variance of the BOLD signal within the Superior Parietal Area and Inferior Parietal Area while participants watch ambiguous visual stimuli.
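A caricature of a delayed winner-take-all circuit can be integrated with a fixed-step history buffer, as sketched below; the equations and parameters are illustrative stand-ins, not the exact time-delay model of the paper.

```python
import numpy as np

# Two competing populations with mutual inhibition and *delayed* local (slow)
# inhibition, integrated with a fixed-step history buffer. Illustrative only.
tau, delay = 0.01, 0.05          # membrane time constant and inhibitory delay (s)
w_self, w_cross, w_delay = 1.4, 1.6, 0.9
dt = 1e-3
d_steps = int(delay / dt)

def f(x):
    return 1.0 / (1.0 + np.exp(-(x - 0.5) / 0.1))

def simulate(I, T=5.0, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    u = np.zeros((n + d_steps, 2)) + 0.1     # first d_steps rows serve as initial history
    for k in range(d_steps, n + d_steps):
        delayed = u[k - d_steps]             # each unit's own activity 'delay' seconds ago
        inp = I + w_self * u[k - 1] - w_cross * u[k - 1][::-1] - w_delay * delayed
        noise = 0.02 * np.sqrt(dt) * rng.standard_normal(2)
        u[k] = u[k - 1] + dt / tau * (-u[k - 1] + f(inp)) + noise
    return u[d_steps:]

u = simulate(I=np.array([0.52, 0.48]))       # slightly biased evidence
winner = int(u[-1, 1] > u[-1, 0])
print("chosen alternative:", winner, " final rates:", np.round(u[-1], 2))
# Increasing w_delay (or the delay itself) pushes the circuit toward the
# oscillatory regime associated with a Hopf bifurcation, where the decision
# becomes ambiguous and can switch.
```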
Collapse
Affiliation(s)
| | | | - Martyna Płomecka
- Methods of Plasticity Research, Department of Psychology, University of Zürich, Zürich, Switzerland
- Laboratory of Brain Imaging, Neurobiology Center, Nencki Institute of Experimental Biology of Polish Academy of Sciences, Warsaw, Poland
| | - Piotr Radziński
- Faculty of Mathematics, University of Warsaw, Warsaw, Poland
| | - Lara Todorova
- Faculty of Social Sciences, Radboud University Nijmegen, Nijmegen, The Netherlands
- Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behavior, Nijmegen, the Netherlands
| | - Urszula Foryś
- Faculty of Mathematics, University of Warsaw, Warsaw, Poland
| |
Collapse
|
32
|
Kondo HM, Pressnitzer D, Shimada Y, Kochiyama T, Kashino M. Inhibition-excitation balance in the parietal cortex modulates volitional control for auditory and visual multistability. Sci Rep 2018; 8:14548. [PMID: 30267021 PMCID: PMC6162284 DOI: 10.1038/s41598-018-32892-3] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2018] [Accepted: 09/18/2018] [Indexed: 11/25/2022] Open
Abstract
Perceptual organisation must select one interpretation from several alternatives to guide behaviour. Computational models suggest that this could be achieved through an interplay between inhibition and excitation across competing neural populations coding for each interpretation. Here, to test such models, we used magnetic resonance spectroscopy to measure non-invasively the concentrations of inhibitory γ-aminobutyric acid (GABA) and excitatory glutamate-glutamine (Glx) in several brain regions. Human participants first performed auditory and visual multistability tasks that produced spontaneous switching between percepts. Then, we observed that longer percept durations during behaviour were associated with higher GABA/Glx ratios in the sensory area coding for each modality. When participants were asked to voluntarily modulate their perception, a common factor across modalities emerged: the GABA/Glx ratio in the posterior parietal cortex tended to be positively correlated with the amount of effective volitional control. Our results provide direct evidence implicating the balance between neural inhibition and excitation within sensory regions in the resolution of perceptual competition. This powerful computational principle appears to be leveraged by both audition and vision, implemented independently across modalities, but modulated by an integrated control process.
Collapse
Affiliation(s)
- Hirohito M Kondo
- School of Psychology, Chukyo University, Nagoya, Aichi, Japan.
- Human Information Science Laboratory, NTT Communication Science Laboratories, NTT Corporation, Atsugi, Kanagawa, Japan.
| | - Daniel Pressnitzer
- Laboratoire des Systèmes Perceptifs, CNRS UMR 8248, Paris, France
- Département d'Études Cognitive, École Normale Supérieure, Paris, France
| | - Yasuhiro Shimada
- Brain Activity Imaging Center, ATR-Promotions, Seika-cho, Kyoto, Japan
| | - Takanori Kochiyama
- Brain Activity Imaging Center, ATR-Promotions, Seika-cho, Kyoto, Japan
- Department of Cognitive Neuroscience, Advanced Telecommunications Research Institute International, Seika-cho, Kyoto, Japan
| | - Makio Kashino
- Sports Brain Science Project, NTT Communication Science Laboratories, NTT Corporation, Atsugi, Kanagawa, Japan
- School of Engineering, Tokyo Institute of Technology, Yokohama, Kanagawa, Japan
| |
Collapse
|
33
|
Schmidt H, Avitabile D, Montbrió E, Roxin A. Network mechanisms underlying the role of oscillations in cognitive tasks. PLoS Comput Biol 2018; 14:e1006430. [PMID: 30188889 PMCID: PMC6143269 DOI: 10.1371/journal.pcbi.1006430] [Citation(s) in RCA: 41] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2018] [Revised: 09/18/2018] [Accepted: 08/13/2018] [Indexed: 11/18/2022] Open
Abstract
Oscillatory activity robustly correlates with task demands during many cognitive tasks. However, not only are the network mechanisms underlying the generation of these rhythms poorly understood, but it is also still unknown to what extent they may play a functional role, as opposed to being a mere epiphenomenon. Here we study the mechanisms underlying the influence of oscillatory drive on network dynamics related to cognitive processing in simple working memory (WM), and memory recall tasks. Specifically, we investigate how the frequency of oscillatory input interacts with the intrinsic dynamics in networks of recurrently coupled spiking neurons to cause changes of state: the neuronal correlates of the corresponding cognitive process. We find that slow oscillations, in the delta and theta band, are effective in activating network states associated with memory recall. On the other hand, faster oscillations, in the beta range, can serve to clear memory states by resonantly driving transient bouts of spike synchrony which destabilize the activity. We leverage a recently derived set of exact mean-field equations for networks of quadratic integrate-and-fire neurons to systematically study the bifurcation structure in the periodically forced spiking network. Interestingly, we find that the oscillatory signals which are most effective in allowing flexible switching between network states are not smooth, pure sinusoids, but rather burst-like, with a sharp onset. We show that such periodic bursts themselves readily arise spontaneously in networks of excitatory and inhibitory neurons, and that the burst frequency can be tuned via changes in tonic drive. Finally, we show that oscillations in the gamma range can actually stabilize WM states which otherwise would not persist.
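To illustrate the basic idea of frequency-dependent switching, the sketch below applies burst-like periodic forcing to a single bistable QIF mean-field population (the same type of equations as in the sketch after entry 23); amplitudes and frequencies are illustrative, not the regimes mapped in the paper.

```python
import numpy as np

# Periodic, half-wave-rectified (burst-like) forcing of a bistable QIF
# mean-field population. Whether the drive kicks the population from the low
# state into the persistent high-activity ("memory") state depends on its
# frequency and amplitude. Illustrative parameters only.
tau, delta, eta_bar, J = 1.0, 1.0, -5.0, 15.0

def run(freq, amp=3.0, T=150.0, t_off=75.0, dt=1e-3):
    r, v = 0.01, -2.0
    for k in range(int(T / dt)):
        t = k * dt
        I = amp * max(np.sin(2.0 * np.pi * freq * t), 0.0) if t < t_off else 0.0
        dr = (delta / (np.pi * tau) + 2.0 * r * v) / tau
        dv = (v**2 + eta_bar + J * tau * r + I - (np.pi * tau * r)**2) / tau
        r, v = max(r + dt * dr, 0.0), v + dt * dv
    return r                                  # rate well after the drive is off

for freq in (0.02, 0.2, 2.0):                 # slow to fast forcing, in units of 1/tau
    print(f"forcing frequency {freq:4.2f}/tau: post-drive rate = {run(freq):.3f}")
```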
Collapse
Affiliation(s)
- Helmut Schmidt
- Centre de Recerca Matemàtica, Campus de Bellaterra Edifici C, 08193 Bellaterra, Barcelona, Spain; Barcelona Graduate School of Mathematics, Campus de Bellaterra Edifici C, 08193 Bellaterra, Barcelona, Spain
| | - Daniele Avitabile
- School of Mathematical Sciences, University of Nottingham, University Park, Nottingham NG7 2QL, United Kingdom; Inria Sophia Antipolis Méditerranée Research Centre, MathNeuro Team, 2004 route des Lucioles - Boîte Postale 93, 06902 Sophia Antipolis Cedex, France
| | - Ernest Montbrió
- Center for Brain and Cognition, Department of Information and Communication Technologies, Universitat Pompeu Fabra, C. Ramon Trias Fargas 25 - 27, 08005 Barcelona, Spain
| | - Alex Roxin
- Centre de Recerca Matemàtica, Campus de Bellaterra Edifici C, 08193 Bellaterra, Barcelona, Spain; Barcelona Graduate School of Mathematics, Campus de Bellaterra Edifici C, 08193 Bellaterra, Barcelona, Spain
| |
Collapse
|
34
|
Knyazeva S, Selezneva E, Gorkin A, Aggelopoulos NC, Brosch M. Neuronal Correlates of Auditory Streaming in Monkey Auditory Cortex for Tone Sequences without Spectral Differences. Front Integr Neurosci 2018; 12:4. [PMID: 29440999 PMCID: PMC5797536 DOI: 10.3389/fnint.2018.00004] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2017] [Accepted: 01/16/2018] [Indexed: 11/13/2022] Open
Abstract
This study finds a neuronal correlate of auditory perceptual streaming in the primary auditory cortex for sequences of tone complexes that have the same amplitude spectrum but a different phase spectrum. Our finding is based on microelectrode recordings of multiunit activity from 270 cortical sites in three awake macaque monkeys. The monkeys were presented with repeated sequences of a tone triplet that consisted of an A tone, a B tone, another A tone and then a pause. The A and B tones were composed of unresolved harmonics formed by adding the harmonics in cosine phase, in alternating phase, or in random phase. A previous psychophysical study on humans revealed that when the A and B tones are similar, humans integrate them into a single auditory stream; when the A and B tones are dissimilar, humans segregate them into separate auditory streams. We found that the similarity of neuronal rate responses to the triplets was highest when all A and B tones had cosine phase. Similarity was intermediate when the A tones had cosine phase and the B tones had alternating phase. Similarity was lowest when the A tones had cosine phase and the B tones had random phase. The present study corroborates and extends previous reports, showing similar correspondences between neuronal activity in the primary auditory cortex and auditory streaming of sound sequences. It also is consistent with Fishman’s population separation model of auditory streaming.
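The stimulus manipulation is straightforward to reproduce: harmonic complexes with identical amplitude spectra but cosine, alternating or random phase differ markedly in waveform peakiness. The fundamental frequency, harmonic range and duration below are illustrative choices, not the exact stimulus parameters of the study.

```python
import numpy as np

# Build tone complexes with identical amplitude spectra but different phase
# spectra: harmonics in cosine phase, alternating (cosine/sine) phase, or
# random phase. Stimulus parameters are illustrative.
fs, dur, f0 = 48000, 0.1, 100.0
t = np.arange(int(fs * dur)) / fs
harmonics = np.arange(10, 21)            # unresolved harmonics 10..20

def complex_tone(phase_rule, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros_like(t)
    for i, h in enumerate(harmonics):
        if phase_rule == "cosine":
            phi = 0.0
        elif phase_rule == "alternating":
            phi = 0.0 if i % 2 == 0 else np.pi / 2.0
        else:                             # "random"
            phi = rng.uniform(0.0, 2.0 * np.pi)
        x += np.cos(2.0 * np.pi * h * f0 * t + phi)
    return x / np.max(np.abs(x))

# Same amplitude spectrum, different temporal fine structure: the cosine-phase
# complex is the peakiest (highest peak factor), random phase the least.
for rule in ("cosine", "alternating", "random"):
    x = complex_tone(rule)
    print(f"{rule:11s}: peak factor = {np.max(np.abs(x)) / np.sqrt(np.mean(x**2)):.2f}")
```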
Collapse
Affiliation(s)
- Stanislava Knyazeva
- Speziallabor Primatenneurobiologie, Leibniz-Institute für Neurobiologie, Magdeburg, Germany
| | - Elena Selezneva
- Speziallabor Primatenneurobiologie, Leibniz-Institute für Neurobiologie, Magdeburg, Germany
| | - Alexander Gorkin
- Speziallabor Primatenneurobiologie, Leibniz-Institute für Neurobiologie, Magdeburg, Germany; Laboratory of Psychophysiology, Institute of Psychology, Moscow, Russia
| | | | - Michael Brosch
- Speziallabor Primatenneurobiologie, Leibniz-Institute für Neurobiologie, Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke-University, Magdeburg, Germany
| |
Collapse
|
35
|
Neural Decoding of Bistable Sounds Reveals an Effect of Intention on Perceptual Organization. J Neurosci 2018; 38:2844-2853. [PMID: 29440556 PMCID: PMC5852662 DOI: 10.1523/jneurosci.3022-17.2018] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2017] [Revised: 01/21/2018] [Accepted: 02/06/2018] [Indexed: 12/05/2022] Open
Abstract
Auditory signals arrive at the ear as a mixture that the brain must decompose into distinct sources based to a large extent on acoustic properties of the sounds. An important question concerns whether listeners have voluntary control over how many sources they perceive. This has been studied using pure high (H) and low (L) tones presented in the repeating pattern HLH-HLH-, which can form a bistable percept heard either as an integrated whole (HLH-) or as segregated into high (H-H-) and low (-L-) sequences. Although instructing listeners to try to integrate or segregate sounds affects reports of what they hear, this could reflect a response bias rather than a perceptual effect. We had human listeners (15 males, 12 females) continuously report their perception of such sequences and recorded neural activity using MEG. During neutral listening, a classifier trained on patterns of neural activity distinguished between periods of integrated and segregated perception. In other conditions, participants tried to influence their perception by allocating attention either to the whole sequence or to a subset of the sounds. They reported hearing the desired percept for a greater proportion of time than when listening neutrally. Critically, neural activity supported these reports; stimulus-locked brain responses in auditory cortex were more likely to resemble the signature of segregation when participants tried to hear segregation than when attempting to perceive integration. These results indicate that listeners can influence how many sound sources they perceive, as reflected in neural responses that track both the input and its perceptual organization. SIGNIFICANCE STATEMENT Can we consciously influence our perception of the external world? We address this question using sound sequences that can be heard either as coming from a single source or as two distinct auditory streams. Listeners reported spontaneous changes in their perception between these two interpretations while we recorded neural activity to identify signatures of such integration and segregation. They also indicated that they could, to some extent, choose between these alternatives. This claim was supported by corresponding changes in responses in auditory cortex. By linking neural and behavioral correlates of perception, we demonstrate that the number of objects that we perceive can depend not only on the physical attributes of our environment, but also on how we intend to experience it.
Collapse
|
36
|
Albert S, Schmack K, Sterzer P, Schneider G. A hierarchical stochastic model for bistable perception. PLoS Comput Biol 2017; 13:e1005856. [PMID: 29155808 PMCID: PMC5714404 DOI: 10.1371/journal.pcbi.1005856] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2017] [Revised: 12/04/2017] [Accepted: 10/29/2017] [Indexed: 01/29/2023] Open
Abstract
Viewing of ambiguous stimuli can lead to bistable perception alternating between the possible percepts. During continuous presentation of ambiguous stimuli, percept changes occur as single events, whereas during intermittent presentation of ambiguous stimuli, percept changes occur at more or less regular intervals either as single events or bursts. Response patterns can be highly variable and have been reported to show systematic differences between patients with schizophrenia and healthy controls. Existing models of bistable perception often use detailed assumptions and large parameter sets, which make parameter estimation challenging. Here we propose a parsimonious stochastic model that provides a link between empirical data analysis of the observed response patterns and detailed models of underlying neuronal processes. Firstly, we use a Hidden Markov Model (HMM) for the times between percept changes, which assumes one single state in continuous presentation and a stable and an unstable state in intermittent presentation. The HMM captures the observed differences between patients with schizophrenia and healthy controls, but remains descriptive. Therefore, we secondly propose a hierarchical Brownian model (HBM), which produces similar response patterns but also provides a relation to potential underlying mechanisms. The main idea is that neuronal activity is described as an activity difference between two competing neuronal populations reflected in Brownian motions with drift. This differential activity generates switching between the two conflicting percepts and between stable and unstable states with similar mechanisms on different neuronal levels. With only a small number of parameters, the HBM can be fitted closely to a wide variety of response patterns and captures group differences between healthy controls and patients with schizophrenia. At the same time, it provides a link to mechanistic models of bistable perception, linking the group differences to potential underlying mechanisms. Patients suffering from schizophrenia show specific cognitive deficits. One way to study these cognitive phenomena is through the presentation of ambiguous stimuli. During viewing of an ambiguous stimulus, perception alternates spontaneously between different percepts. Percept changes occur as single events during continuous presentation, whereas during intermittent presentation, percept changes occur at regular intervals either as single events or bursts. Here we investigate perceptual responses to continuous and intermittent stimulation in healthy control subjects and patients with schizophrenia. Interestingly, the response patterns can be highly variable but show systematic group differences. We propose a model that connects these perceptual responses to underlying neuronal processes. The model mainly describes the activity difference between competing neuronal populations on different neuronal levels. In a hierarchical manner, the differential activity generates switching between the conflicting percepts as well as between states of higher and lower perceptual stability. By fitting the model directly to empirical responses, a wide variety of patterns can be reproduced, and group differences between healthy controls and patients with schizophrenia can be captured. This helps to link the observed group differences to potential neuronal mechanisms, suggesting that patients with schizophrenia tend to spend more time in neuronal states of lower perceptual stability.
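The hierarchical idea can be sketched as two nested drift-diffusion (Brownian-motion-with-drift) processes: a slow one toggling between a stable and an unstable state, and a fast differential-activity process whose boundary crossings are percept changes. All parameters below are illustrative, not fitted values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fast level: a differential-activity process (drift + noise) triggers a
# percept change whenever it reaches a bound. Slow level: a second process of
# the same type toggles between a "stable" and an "unstable" state that sets
# the fast drift. Illustrative parameters only.
dt, T = 0.01, 600.0
bound_fast, bound_slow = 1.0, 1.0
drift_fast = {"stable": 0.05, "unstable": 0.6}   # weak vs strong tendency to switch
sigma_fast, sigma_slow, drift_slow = 0.3, 0.15, 0.02

x_fast, x_slow, state = 0.0, 0.0, "stable"
switch_times, t = [], 0.0
while t < T:
    x_slow += drift_slow * dt + sigma_slow * np.sqrt(dt) * rng.standard_normal()
    if abs(x_slow) >= bound_slow:                 # slow level toggles stability
        state = "unstable" if state == "stable" else "stable"
        x_slow = 0.0
    x_fast += drift_fast[state] * dt + sigma_fast * np.sqrt(dt) * rng.standard_normal()
    if abs(x_fast) >= bound_fast:                 # fast level: percept change
        switch_times.append(t)
        x_fast = 0.0
    t += dt

iti = np.diff(switch_times)
print(f"{len(switch_times)} percept changes; median interval = {np.median(iti):.1f} s, "
      f"CV = {iti.std()/iti.mean():.2f}")
# Bursts of changes occur while the slow level is in the "unstable" state,
# interleaved with quieter stretches in the "stable" state.
```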
Collapse
Affiliation(s)
- Stefan Albert
- Institute of Mathematics, Goethe University, Frankfurt (Main), Germany
| | - Katharina Schmack
- Department of Psychiatry and Psychotherapy, Charité Universitätsmedizin Berlin, Germany
| | - Philipp Sterzer
- Department of Psychiatry and Psychotherapy, Charité Universitätsmedizin Berlin, Germany
| | - Gaby Schneider
- Institute of Mathematics, Goethe University, Frankfurt (Main), Germany
| |
Collapse
|
37
|
A Crucial Test of the Population Separation Model of Auditory Stream Segregation in Macaque Primary Auditory Cortex. J Neurosci 2017; 37:10645-10655. [PMID: 28954867 DOI: 10.1523/jneurosci.0792-17.2017] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2017] [Revised: 08/29/2017] [Accepted: 09/05/2017] [Indexed: 11/21/2022] Open
Abstract
An important aspect of auditory scene analysis is auditory stream segregation-the organization of sound sequences into perceptual streams reflecting different sound sources in the environment. Several models have been proposed to account for stream segregation. According to the "population separation" (PS) model, alternating ABAB tone sequences are perceived as a single stream or as two separate streams when "A" and "B" tones activate the same or distinct frequency-tuned neuronal populations in primary auditory cortex (A1), respectively. A crucial test of the PS model is whether it can account for the observation that A and B tones are generally perceived as a single stream when presented synchronously, rather than in an alternating pattern, even if they are widely separated in frequency. Here, we tested the PS model by recording neural responses to alternating (ALT) and synchronous (SYNC) tone sequences in A1 of male macaques. Consistent with predictions of the PS model, a greater effective tonotopic separation of A and B tone responses was observed under ALT than under SYNC conditions, thus paralleling the perceptual organization of the sequences. While other models of stream segregation, such as temporal coherence, are not excluded by the present findings, we conclude that PS is sufficient to account for the perceptual organization of ALT and SYNC sequences and thus remains a viable model of auditory stream segregation.SIGNIFICANCE STATEMENT According to the population separation (PS) model of auditory stream segregation, sounds that activate the same or separate neural populations in primary auditory cortex (A1) are perceived as one or two streams, respectively. It is unclear, however, whether the PS model can account for the perception of sounds as a single stream when they are presented synchronously. Here, we tested the PS model by recording neural responses to alternating (ALT) and synchronous (SYNC) tone sequences in macaque A1. A greater effective separation of tonotopic activity patterns was observed under ALT than under SYNC conditions, thus paralleling the perceptual organization of the sequences. Based on these findings, we conclude that PS remains a plausible neurophysiological model of auditory stream segregation.
Collapse
|
38
|
Snyder JS, Elhilali M. Recent advances in exploring the neural underpinnings of auditory scene perception. Ann N Y Acad Sci 2017; 1396:39-55. [PMID: 28199022 PMCID: PMC5446279 DOI: 10.1111/nyas.13317] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2016] [Revised: 12/21/2016] [Accepted: 01/08/2017] [Indexed: 11/29/2022]
Abstract
Studies of auditory scene analysis have traditionally relied on paradigms using artificial sounds-and conventional behavioral techniques-to elucidate how we perceptually segregate auditory objects or streams from each other. In the past few decades, however, there has been growing interest in uncovering the neural underpinnings of auditory segregation using human and animal neuroscience techniques, as well as computational modeling. This largely reflects the growth in the fields of cognitive neuroscience and computational neuroscience and has led to new theories of how the auditory system segregates sounds in complex arrays. The current review focuses on neural and computational studies of auditory scene perception published in the last few years. Following the progress that has been made in these studies, we describe (1) theoretical advances in our understanding of the most well-studied aspects of auditory scene perception, namely segregation of sequential patterns of sounds and concurrently presented sounds; (2) the diversification of topics and paradigms that have been investigated; and (3) how new neuroscience techniques (including invasive neurophysiology in awake humans, genotyping, and brain stimulation) have been used in this field.
Collapse
Affiliation(s)
- Joel S. Snyder
- Department of Psychology, University of Nevada, Las Vegas, Las Vegas, Nevada
| | - Mounya Elhilali
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, Maryland
| |
Collapse
|
39
|
Rankin J, Osborn Popp PJ, Rinzel J. Stimulus Pauses and Perturbations Differentially Delay or Promote the Segregation of Auditory Objects: Psychoacoustics and Modeling. Front Neurosci 2017; 11:198. [PMID: 28473747 PMCID: PMC5397483 DOI: 10.3389/fnins.2017.00198] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2016] [Accepted: 03/23/2017] [Indexed: 11/21/2022] Open
Abstract
Segregating distinct sound sources is fundamental for auditory perception, as in the cocktail party problem. In a process called the build-up of stream segregation, distinct sound sources that are perceptually integrated initially can be segregated into separate streams after several seconds. Previous research concluded that abrupt changes in the incoming sounds during build-up—for example, a step change in location, loudness or timing—reset the percept to integrated. Following this reset, the multisecond build-up process begins again. Neurophysiological recordings in auditory cortex (A1) show fast (subsecond) adaptation, but unified mechanistic explanations for the bias toward integration, multisecond build-up and resets remain elusive. Combining psychoacoustics and modeling, we show that initial unadapted A1 responses bias integration, that the slowness of build-up arises naturally from competition downstream, and that recovery of adaptation can explain resets. An early bias toward integrated perceptual interpretations arising from primary cortical stages that encode low-level features and feed into competition downstream could also explain similar phenomena in vision. Further, we report a previously overlooked class of perturbations that promote segregation rather than integration. Our results challenge current understanding for perturbation effects on the emergence of sound source segregation, leading to a new hypothesis for differential processing downstream of A1. Transient perturbations can momentarily redirect A1 responses as input to downstream competition units that favor segregation.
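The reset mechanism proposed here can be illustrated with a single fast-adapting response variable that is depressed by each triplet and recovers during silences, including a long mid-sequence pause; the time constants and depression per triplet below are illustrative assumptions.

```python
import numpy as np

# Fast A1-like adaptation with recovery: an adaptation variable a (1 = fully
# unadapted) is depressed by each ABA_ triplet and recovers exponentially in
# the silences. The silence after triplet 19 is a 2 s pause, so triplet 20
# follows the pause. Unadapted responses at onset, and again right after the
# pause, are the proposed source of the bias toward integration (the "reset").
tau_rec = 1.5            # recovery time constant (s)
depress = 0.55           # fraction of a remaining after each triplet
gap = 0.2                # silence between triplets (s)
pause_dur, pause_after = 2.0, 20

a, t = 1.0, 0.0
trace = []
for k in range(40):
    trace.append((k, round(t, 1), round(a, 2)))        # response to triplet k scales with a
    a *= depress                                       # fast depression from the triplet
    silence = pause_dur if k == pause_after - 1 else gap
    a = 1.0 - (1.0 - a) * np.exp(-silence / tau_rec)   # recovery toward 1 during silence
    t += 0.4 + silence                                 # 0.4 s triplet + silence

for k in (0, 1, pause_after - 1, pause_after, pause_after + 1):
    print("triplet %2d  t = %5.1f s  relative response %.2f" % trace[k])
```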
Collapse
Affiliation(s)
- James Rankin
- Department of Mathematics, University of Exeter, Exeter, UK; Center for Neural Science, New York University, New York, NY, USA
| | | | - John Rinzel
- Center for Neural Science, New York University, New York, NY, USA; Courant Institute of Mathematical Sciences, New York, NY, USA
| |
Collapse
|
40
|
Dykstra AR, Cariani PA, Gutschalk A. A roadmap for the study of conscious audition and its neural basis. Philos Trans R Soc Lond B Biol Sci 2017; 372:20160103. [PMID: 28044014 PMCID: PMC5206271 DOI: 10.1098/rstb.2016.0103] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/03/2016] [Indexed: 12/16/2022] Open
Abstract
How and which aspects of neural activity give rise to subjective perceptual experience-i.e. conscious perception-is a fundamental question of neuroscience. To date, the vast majority of work concerning this question has come from vision, raising the issue of generalizability of prominent resulting theories. However, recent work has begun to shed light on the neural processes subserving conscious perception in other modalities, particularly audition. Here, we outline a roadmap for the future study of conscious auditory perception and its neural basis, paying particular attention to how conscious perception emerges (and of which elements or groups of elements) in complex auditory scenes. We begin by discussing the functional role of the auditory system, particularly as it pertains to conscious perception. Next, we ask: what are the phenomena that need to be explained by a theory of conscious auditory perception? After surveying the available literature for candidate neural correlates, we end by considering the implications that such results have for a general theory of conscious perception as well as prominent outstanding questions and what approaches/techniques can best be used to address them.This article is part of the themed issue 'Auditory and visual scene analysis'.
Collapse
Affiliation(s)
- Andrew R Dykstra
- Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
| | | | - Alexander Gutschalk
- Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
| |
Collapse
|
41
|
Szabó BT, Denham SL, Winkler I. Computational Models of Auditory Scene Analysis: A Review. Front Neurosci 2016; 10:524. [PMID: 27895552 PMCID: PMC5108797 DOI: 10.3389/fnins.2016.00524] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2016] [Accepted: 10/28/2016] [Indexed: 12/02/2022] Open
Abstract
Auditory scene analysis (ASA) refers to the process(es) of parsing the complex acoustic input into auditory perceptual objects representing either physical sources or temporal sound patterns, such as melodies, which contributed to the sound waves reaching the ears. A number of new computational models accounting for some of the perceptual phenomena of ASA have been published recently. Here we provide a theoretically motivated review of these computational models, aiming to relate their guiding principles to the central issues of the theoretical framework of ASA. Specifically, we ask how they achieve the grouping and separation of sound elements and whether they implement some form of competition between alternative interpretations of the sound input. We consider the extent to which they include predictive processes, as important current theories suggest that perception is inherently predictive, and also how they have been evaluated. We conclude that current computational models of ASA are fragmentary in the sense that rather than providing general competing interpretations of ASA, they focus on assessing the utility of specific processes (or algorithms) for finding the causes of the complex acoustic signal. This leaves open the possibility for integrating complementary aspects of the models into a more comprehensive theory of ASA.
Collapse
Affiliation(s)
- Beáta T Szabó
- Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, Budapest, Hungary; Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary
| | - Susan L Denham
- School of Psychology, University of Plymouth, Plymouth, UK
| | - István Winkler
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary
| |
Collapse
|