1
Nitta J, Kondoh S, Okanoya K, Tachibana RO. Spectral consistency in sound sequence affects perceptual accuracy in discriminating subdivided rhythmic patterns. PLoS One 2024; 19:e0303347. [PMID: 38805449; PMCID: PMC11132482; DOI: 10.1371/journal.pone.0303347]
Abstract
Musical compositions are distinguished by their unique rhythmic patterns, determined by subtle differences in how regular beats are subdivided. Precise perception of these subdivisions is essential for discerning nuances in rhythmic patterns. While musical rhythm typically comprises sound elements with a variety of timbres or spectral cues, the impact of such spectral variations on the perception of rhythmic patterns remains unclear. Here, we show that consistency in spectral cues affects perceptual accuracy in discriminating subdivided rhythmic patterns. We conducted online experiments using rhythmic sound sequences consisting of band-passed noise bursts to measure discrimination accuracy. Participants were asked to discriminate between a swing-like rhythm sequence, characterized by a 2:1 interval ratio, and its more or less exaggerated version. This task was also performed under two additional rhythm conditions: inversed-swing rhythm (1:2 ratio) and regular subdivision (1:1 ratio). The center frequency of the band noises was either held constant or alternated between two values. Our results revealed a significant decrease in discrimination accuracy when the center frequency was alternated, irrespective of the rhythm ratio condition. This suggests that rhythm perception is shaped by temporal structure and affected by spectral properties.
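For concreteness, a minimal stimulus sketch in the spirit of this paradigm (Python/NumPy/SciPy). The burst duration, bandwidth, beat period, and the two center frequencies below are illustrative assumptions, not values from the paper; `ratio=(2, 1)` gives the swing-like 2:1 subdivision, `(1, 1)` the regular one, and `alternate` toggles between constant and alternating center frequency:

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44100  # sample rate (Hz); all parameter values below are illustrative

def band_noise_burst(fc, dur=0.05, bw_oct=0.5):
    """Band-passed noise burst centered at fc (Hz), ~bw_oct octaves wide."""
    n = int(dur * FS)
    sos = butter(4, [fc * 2 ** (-bw_oct / 2), fc * 2 ** (bw_oct / 2)],
                 btype="bandpass", fs=FS, output="sos")
    burst = sosfilt(sos, np.random.randn(n))
    ramp = np.hanning(2 * int(0.005 * FS))  # 5-ms on/off ramps against clicks
    k = len(ramp) // 2
    burst[:k] *= ramp[:k]
    burst[-k:] *= ramp[k:]
    return burst / np.max(np.abs(burst))

def rhythm_sequence(ratio=(2, 1), n_beats=8, beat=0.3, fcs=(1000, 2000), alternate=True):
    """Two onsets per beat, subdividing it at the given interval ratio.
    The burst center frequency is fixed (fcs[0]) or alternates between fcs."""
    sig = np.zeros(int(n_beats * beat * FS) + FS // 10)
    k = 0
    for b in range(n_beats):
        # e.g. ratio 2:1 -> onsets at 0 and 2/3 of the beat
        for frac in (0.0, ratio[0] / sum(ratio)):
            t0 = int((b + frac) * beat * FS)
            burst = band_noise_burst(fcs[k % 2] if alternate else fcs[0])
            sig[t0:t0 + len(burst)] += burst
            k += 1
    return sig
```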
Affiliation(s)
- Jun Nitta
- Graduate School of Arts and Sciences, the University of Tokyo, Tokyo, Japan
- Sotaro Kondoh
- Graduate School of Arts and Sciences, the University of Tokyo, Tokyo, Japan
- Advanced Comprehensive Research Organization, Teikyo University, Tokyo, Japan
- Graduate School of Media and Governance, Keio University, Kanagawa, Japan
- Kazuo Okanoya
- Graduate School of Arts and Sciences, the University of Tokyo, Tokyo, Japan
- Advanced Comprehensive Research Organization, Teikyo University, Tokyo, Japan
- Ryosuke O. Tachibana
- Graduate School of Arts and Sciences, the University of Tokyo, Tokyo, Japan
- Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan
2
Elliott MA, Porter G, Nakajima Y. Measures of music-like experience emergent in a sonic Ganzfeld: An example of perceptual structuring on the edge of silence. Prog Brain Res 2023; 277:141-155. [PMID: 37301567; DOI: 10.1016/bs.pbr.2023.04.002]
Abstract
We conducted an experiment in which participants listened to a semi-stochastic stream of acoustic data, during which they reported regular variations in melody, pitch and rhythm that were not physically present in the stimulus. In addition, the occurrence of particular forms (melodies and rhythms) and pitches appeared to be associated with the occurrence of others. This indicates that a complex taxonomy of subjective auditory experience can be evoked in observers given small variations in the quality of noise along the auditory spectrum. It also strongly indicates that when experiencing "noise," our automatic response is to restructure it such that it becomes "perceptually" meaningful. In an environment where there is no sound, neural systems will reduce their engagement and respond semi-stochastically. Taken alongside our data, this suggests that one consequence of "silence" might be a tendency to spontaneously hallucinate complex and well-structured auditory experience based solely upon the stochastic neural response to the absence of sound. This paper describes the type of experience one might have on the "edge of silence" and discusses some of the associated implications.
Affiliation(s)
- Mark A Elliott
- School of Psychology, National University of Ireland, Galway, Galway, Republic of Ireland.
- Graeme Porter
- School of Psychology, National University of Ireland, Galway, Galway, Republic of Ireland
- Yoshitaka Nakajima
- Department of Acoustic Design, Faculty of Design, Kyushu University, Fukuoka, Japan
- Sound Corporation, Fukuoka, Japan
3
Neubert CR, Förstel AP, Debener S, Bendixen A. Predictability-Based Source Segregation and Sensory Deviance Detection in Auditory Aging. Front Hum Neurosci 2021; 15:734231. [PMID: 34776906; PMCID: PMC8586071; DOI: 10.3389/fnhum.2021.734231]
Abstract
When multiple sound sources are present at the same time, auditory perception is often challenged with disentangling the resulting mixture and focusing attention on the target source. It has been repeatedly demonstrated that background (distractor) sound sources are easier to ignore when their spectrotemporal signature is predictable. Prior evidence suggests that this ability to exploit predictability for foreground-background segregation degrades with age. On a theoretical level, this has been related to an impairment in elderly adults’ capabilities to detect certain types of sensory deviance in unattended sound sequences. Yet the link between those two capacities, deviance detection and predictability-based sound source segregation, has not been empirically demonstrated. Here we report on a combined behavioral-EEG study investigating the ability of elderly listeners (60–75 years of age) to use predictability as a cue for sound source segregation, as well as their sensory deviance detection capacities. Listeners performed a detection task on a target stream that could only be solved when a concurrent distractor stream was successfully ignored. We contrast two conditions whose distractor streams differ in their predictability. The ability to benefit from predictability was operationalized as the performance difference between the two conditions. Results show that elderly listeners can use predictability for sound source segregation at the group level, yet with a high degree of inter-individual variation in this ability. In a further, passive-listening control condition, we measured correlates of deviance detection in the event-related brain potential (ERP) elicited by occasional deviations from the same spectrotemporal pattern as used for the predictable distractor sequence during the behavioral task. ERP results confirmed neural signatures of deviance detection in terms of the mismatch negativity (MMN) at the group level. Correlation analyses at the single-subject level provide no evidence for the hypothesis that deviance detection ability (measured by MMN amplitude) is related to the ability to benefit from predictability for sound source segregation. These results are discussed in the frameworks of sensory deviance detection and predictive coding.
Affiliation(s)
- Christiane R Neubert
- Cognitive Systems Lab, Faculty of Natural Sciences, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
- Alexander P Förstel
- Neuropsychology Lab, Department of Psychology, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Stefan Debener
- Neuropsychology Lab, Department of Psychology, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Alexandra Bendixen
- Cognitive Systems Lab, Faculty of Natural Sciences, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
4
Rezaeizadeh M, Shamma S. Binding the Acoustic Features of an Auditory Source through Temporal Coherence. Cereb Cortex Commun 2021; 2:tgab060. [PMID: 34746791; PMCID: PMC8567849; DOI: 10.1093/texcom/tgab060]
Abstract
Numerous studies have suggested that the perception of a target sound stream (or source) can only be segregated from a complex acoustic background mixture if the acoustic features underlying its perceptual attributes (e.g., pitch, location, and timbre) induce temporally modulated responses that are mutually correlated (or coherent), and that are uncorrelated (incoherent) from those of other sources in the mixture. This "temporal coherence" hypothesis asserts that attentive listening to one acoustic feature of a target enhances brain responses to that feature but would also concomitantly (1) induce mutually excitatory influences with other coherently responding neurons, thus enhancing (or binding) them all as they respond to the attended source; by contrast, (2) suppressive interactions are hypothesized to build up among neurons driven by temporally incoherent sound features, thus relatively reducing their activity. In this study, we report on EEG measurements in human subjects engaged in various sound segregation tasks that demonstrate rapid binding among the temporally coherent features of the attended source regardless of their identity (pure tone components, tone complexes, or noise), harmonic relationship, or frequency separation, thus confirming the key role temporal coherence plays in the analysis and organization of auditory scenes.
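As a toy illustration of the temporal coherence principle described above (not the EEG analysis itself), one can correlate the slow envelopes of feature channels: channels whose envelopes co-modulate would bind into one source, while incoherent or anti-correlated channels would segregate. All signals here are invented for the demonstration:

```python
import numpy as np

def temporal_coherence(envelopes):
    """Pairwise correlation of channel envelopes (one row per channel).
    Under the temporal coherence hypothesis, strongly correlated channels
    group into one perceived source; incoherent ones are kept apart."""
    return np.corrcoef(envelopes)

t = np.linspace(0, 2, 2000)
m = (np.sin(2 * np.pi * 4 * t) > 0).astype(float)  # 4-Hz on/off modulation
# two features share the modulation (one source); a third alternates with it
envs = np.vstack([m, m, 1 - m])
print(np.round(temporal_coherence(envs), 2))
# -> channels 0 and 1 correlate at +1 (bind); channel 2 at -1 (segregates)
```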
Affiliation(s)
- Mohsen Rezaeizadeh
- Department of Electrical and Computer Engineering and Institute for Systems Research, University of Maryland, College Park 20742, USA
- Shihab Shamma
- Department of Electrical and Computer Engineering and Institute for Systems Research, University of Maryland, College Park 20742, USA
- Laboratoire des systèmes perceptifs, Département d’études cognitives, École Normale Supérieure, PSL, 75005 Paris, France
5
Gurariy G, Randall R, Greenberg AS. Manipulation of low-level features modulates grouping strength of auditory objects. Psychol Res 2020; 85:2256-2270. [PMID: 32691138; DOI: 10.1007/s00426-020-01391-4]
Abstract
A central challenge of auditory processing involves the segregation, analysis, and integration of acoustic information into auditory perceptual objects for processing by higher order cognitive operations. This study explores the influence of low-level features on auditory object perception. Participants provided perceived musicality ratings in response to randomly generated pure tone sequences. Previous work has shown that music perception relies on the integration of discrete sounds into a holistic structure. Hence, high (versus low) ratings were viewed as indicative of strong (versus weak) object formation. Additionally, participants rated sequences in which random subsets of tones were manipulated along one of three low-level dimensions (timbre, amplitude, or fade-in) at one of three strengths (low, medium, or high). Our primary findings demonstrate how low-level acoustic features modulate the perception of auditory objects, as measured by changes in musicality ratings for manipulated sequences. Secondarily, we used principal component analysis to categorize participants into subgroups based on differential sensitivities to low-level auditory dimensions, thereby highlighting the importance of individual differences in auditory perception. Finally, we report asymmetries regarding the effects of low-level dimensions; specifically, the perceptual significance of timbre. Together, these data contribute to our understanding of how low-level auditory features modulate auditory object perception.
Affiliation(s)
- Gennadiy Gurariy
- Department of Biomedical Engineering, Medical College of Wisconsin & Marquette University, Milwaukee, USA
- Richard Randall
- School of Music and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, USA.
- Adam S Greenberg
- Department of Biomedical Engineering, Medical College of Wisconsin & Marquette University, Milwaukee, USA
6
Li Y, Wang F, Chen Y, Cichocki A, Sejnowski T. The Effects of Audiovisual Inputs on Solving the Cocktail Party Problem in the Human Brain: An fMRI Study. Cereb Cortex 2018; 28:3623-3637. [PMID: 29029039; DOI: 10.1093/cercor/bhx235]
Abstract
At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem.
Affiliation(s)
- Yuanqing Li
- Center for Brain Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou, China
- Guangzhou Key Laboratory of Brain Computer Interaction and Applications, Guangzhou, China
- Fangyi Wang
- Center for Brain Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou, China
- Guangzhou Key Laboratory of Brain Computer Interaction and Applications, Guangzhou, China
- Yongbin Chen
- Center for Brain Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou, China
- Guangzhou Key Laboratory of Brain Computer Interaction and Applications, Guangzhou, China
- Andrzej Cichocki
- RIKEN Brain Science Institute, Wako, Japan
- Skolkovo Institute of Science and Technology (Skoltech), Moscow, Russia
- Terrence Sejnowski
- Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA, USA
7
Mamo SK, Grose JH, Buss E. Perceptual sensitivity to, and electrophysiological encoding of, a complex periodic signal: effects of age. Int J Audiol 2019; 58:441-449. [PMID: 31056966; DOI: 10.1080/14992027.2019.1587179]
Abstract
Objective: The purpose of this study was to investigate perceptual and electrophysiological encoding of complex periodic signals as a function of age. Design: Two groups of adults completed three listening tasks: a behavioural task of detection of a mistuned harmonic component in a complex tone, an electrophysiological measure of the speech-evoked auditory brainstem response (sABR), and a speech-in-noise measure. Between-group comparisons were undertaken for each task, as well as pairwise correlation analyses for all tasks. Study sample: One group of younger adults (n = 20) and one group of older adults (n = 20) participated. All listeners had relatively normal audiometric thresholds (≤20 dB HL) from 250-4000 Hz. Results: Younger adults had better results than the older adults on all three tasks: sensitivity for detecting a mistuned harmonic, spectral encoding for the sABR, and release from masking for the speech-in-noise test. There were no significant correlations between measures when evaluating the older adults in isolation. Conclusions: The results are consistent with the body of literature that demonstrates reduced temporal processing abilities for older adults. The combined-method approach undertaken in this investigation did not result in correlations between the perceptual and electrophysiological measures of temporal processing.
Affiliation(s)
- Sara K Mamo
- Department of Communication Disorders, University of Massachusetts, Amherst, MA, USA
- John H Grose
- Department of Otolaryngology, University of North Carolina, Chapel Hill, NC, USA
- Division of Speech and Hearing Sciences, Department of Allied Health Sciences, University of North Carolina, Chapel Hill, NC, USA
- Emily Buss
- Department of Otolaryngology, University of North Carolina, Chapel Hill, NC, USA
8
Thomassen S, Bendixen A. Assessing the background decomposition of a complex auditory scene with event-related brain potentials. Hear Res 2018; 370:120-129. [PMID: 30368055; DOI: 10.1016/j.heares.2018.09.008]
Abstract
A listener who focusses on a sound source of interest must continuously integrate the sounds emitted by the attended source and ignore the sounds emitted by the remaining sources in the auditory scene. Little is known about how the ignored sound sources in the background are mentally represented after the source of interest has formed the perceptual foreground. This is due to a key methodological challenge: the background representation is by definition not overtly reportable. Here we developed a paradigm based on event-related brain potentials (ERPs) to assess the mental representation of background sounds. Participants listened to sequences of three repeatedly presented tones arranged in an ascending order (low, middle, high frequency). They were instructed to detect intensity deviants in one of the tones, creating the perceptual foreground. The remaining two background tones contained timing and location deviants. Those deviants were set up such that mismatch negativity (MMN) components would be elicited in distinct ways if the background was decomposed into two separate sound streams (background segregation) or if it was not further decomposed (background integration). Results provide MMN-based evidence for background segregation and integration in parallel. This suggests that mental representations of background integration and segregation can be concurrently available, and that collecting empirical evidence for only one of these background organization alternatives might lead to erroneous conclusions.
Affiliation(s)
- Sabine Thomassen
- Institute of Physics, School of Natural Sciences, Chemnitz University of Technology, Reichenhainer Str. 70, D-09126 Chemnitz, Germany
- Auditory Psychophysiology Lab, Department of Psychology, Carl von Ossietzky University of Oldenburg, Ammerländer Heerstr. 114-118, D-26129 Oldenburg, Germany
- Alexandra Bendixen
- Institute of Physics, School of Natural Sciences, Chemnitz University of Technology, Reichenhainer Str. 70, D-09126 Chemnitz, Germany
- Institute of Psychology, University of Leipzig, Neumarkt 9-19, D-04109 Leipzig, Germany
9
Neural Decoding of Bistable Sounds Reveals an Effect of Intention on Perceptual Organization. J Neurosci 2018; 38:2844-2853. [PMID: 29440556; PMCID: PMC5852662; DOI: 10.1523/jneurosci.3022-17.2018]
Abstract
Auditory signals arrive at the ear as a mixture that the brain must decompose into distinct sources based to a large extent on acoustic properties of the sounds. An important question concerns whether listeners have voluntary control over how many sources they perceive. This has been studied using pure high (H) and low (L) tones presented in the repeating pattern HLH-HLH-, which can form a bistable percept heard either as an integrated whole (HLH-) or as segregated into high (H-H-) and low (-L-) sequences. Although instructing listeners to try to integrate or segregate sounds affects reports of what they hear, this could reflect a response bias rather than a perceptual effect. We had human listeners (15 males, 12 females) continuously report their perception of such sequences and recorded neural activity using MEG. During neutral listening, a classifier trained on patterns of neural activity distinguished between periods of integrated and segregated perception. In other conditions, participants tried to influence their perception by allocating attention either to the whole sequence or to a subset of the sounds. They reported hearing the desired percept for a greater proportion of time than when listening neutrally. Critically, neural activity supported these reports; stimulus-locked brain responses in auditory cortex were more likely to resemble the signature of segregation when participants tried to hear segregation than when attempting to perceive integration. These results indicate that listeners can influence how many sound sources they perceive, as reflected in neural responses that track both the input and its perceptual organization. SIGNIFICANCE STATEMENT Can we consciously influence our perception of the external world? We address this question using sound sequences that can be heard either as coming from a single source or as two distinct auditory streams. Listeners reported spontaneous changes in their perception between these two interpretations while we recorded neural activity to identify signatures of such integration and segregation. They also indicated that they could, to some extent, choose between these alternatives. This claim was supported by corresponding changes in responses in auditory cortex. By linking neural and behavioral correlates of perception, we demonstrate that the number of objects that we perceive can depend not only on the physical attributes of our environment, but also on how we intend to experience it.
10
Shearer DE, Molis MR, Bennett KO, Leek MR. Auditory stream segregation of iterated rippled noises by normal-hearing and hearing-impaired listeners. J Acoust Soc Am 2018; 143:378. [PMID: 29390743; PMCID: PMC5785299; DOI: 10.1121/1.5021333]
Abstract
Individuals with hearing loss are thought to be less sensitive to the often subtle variations of acoustic information that support auditory stream segregation. Perceptual segregation can be influenced by differences in both the spectral and temporal characteristics of interleaved stimuli. The purpose of this study was to determine what stimulus characteristics support sequential stream segregation by normal-hearing and hearing-impaired listeners. Iterated rippled noises (IRNs) were used to assess the effects of tonality, spectral resolvability, and hearing loss on the perception of auditory streams in two pitch regions, corresponding to 250 and 1000 Hz. Overall, listeners with hearing loss were significantly less likely to segregate alternating IRNs into two auditory streams than were normally hearing listeners. Low pitched IRNs were generally less likely to segregate into two streams than were higher pitched IRNs. High-pass filtering was a strong contributor to reduced segregation for both groups. The tonality, or pitch strength, of the IRNs had a significant effect on streaming, but the effect was similar for both groups of subjects. These data demonstrate that stream segregation is influenced by many factors including pitch differences, pitch region, spectral resolution, and degree of stimulus tonality, in addition to the loss of auditory sensitivity.
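Iterated rippled noise can be sketched as a delay-and-add loop, which is the standard construction (the parameter values below are illustrative; the study's two pitch regions correspond to delays of 1/250 and 1/1000 s):

```python
import numpy as np

def iterated_rippled_noise(f0=250.0, n_iter=16, gain=1.0, dur=1.0, fs=44100):
    """IRN by repeated delay-and-add: each pass adds a copy of the current
    signal delayed by d = 1/f0 samples, producing a pitch near f0. More
    iterations (and gain closer to 1) yield stronger tonality (pitch strength)."""
    d = int(round(fs / f0))  # delay in samples
    y = np.random.randn(int(dur * fs))
    for _ in range(n_iter):
        y[d:] = y[d:] + gain * y[:-d]  # add the delayed copy
    return y / np.max(np.abs(y))

low_pitch = iterated_rippled_noise(f0=250)    # low-pitch region from the study
high_pitch = iterated_rippled_noise(f0=1000)  # higher-pitch region
```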
Affiliation(s)
- Daniel E Shearer
- National Center for Rehabilitative Auditory Research, Portland VA Healthcare System, Portland, Oregon 97239, USA
- Michelle R Molis
- National Center for Rehabilitative Auditory Research, Portland VA Healthcare System, Portland, Oregon 97239, USA
- Keri O Bennett
- National Center for Rehabilitative Auditory Research, Portland VA Healthcare System, Portland, Oregon 97239, USA
- Marjorie R Leek
- National Center for Rehabilitative Auditory Research, Portland VA Healthcare System, Portland, Oregon 97239, USA
11
Paredes-Gallardo A, Madsen SMK, Dau T, Marozeau J. The Role of Temporal Cues in Voluntary Stream Segregation for Cochlear Implant Users. Trends Hear 2018; 22:2331216518773226. [PMID: 29766759; PMCID: PMC5974563; DOI: 10.1177/2331216518773226]
Abstract
The role of temporal cues in sequential stream segregation was investigated in cochlear implant (CI) listeners using a delay detection task composed of a sequence of bursts of pulses (B) on a single electrode interleaved with a second sequence (A) presented on the same electrode with a different pulse rate. In half of the trials, a delay was added to the last burst of the otherwise regular B sequence and the listeners were asked to detect this delay. As a jitter was added to the period between consecutive A bursts, time judgments between the A and B sequences provided an unreliable cue to perform the task. Thus, the segregation of the A and B sequences should improve performance. The pulse rate difference and the duration of the sequences were varied between trials. The performance in the detection task improved by increasing both the pulse rate differences and the sequence duration. This suggests that CI listeners can use pulse rate differences to segregate sequential sounds and that a segregated percept builds up over time. In addition, the contribution of place versus temporal cues for voluntary stream segregation was assessed by combining the results from this study with those from our previous study, where the same paradigm was used to determine the role of place cues on stream segregation. Pitch height differences between the A and the B sounds accounted for the results from both studies, suggesting that stream segregation is related to the salience of the perceptual difference between the sounds.
Affiliation(s)
- Andreu Paredes-Gallardo
- Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, Kongens Lyngby, Denmark
- Sara M. K. Madsen
- Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, Kongens Lyngby, Denmark
- Torsten Dau
- Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, Kongens Lyngby, Denmark
- Jeremy Marozeau
- Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, Kongens Lyngby, Denmark
12
Paredes-Gallardo A, Madsen SMK, Dau T, Marozeau J. The Role of Place Cues in Voluntary Stream Segregation for Cochlear Implant Users. Trends Hear 2018; 22:2331216517750262. [PMID: 29347886; PMCID: PMC5777547; DOI: 10.1177/2331216517750262]
Abstract
Sequential stream segregation by cochlear implant (CI) listeners was investigated using a temporal delay detection task composed of a sequence of regularly presented bursts of pulses on a single electrode (B) interleaved with an irregular sequence (A) presented on a different electrode. In half of the trials, a delay was added to the last burst of the regular B sequence, and the listeners were asked to detect this delay. As a jitter was added to the period between consecutive A bursts, time judgments between the A and B sequences provided an unreliable cue to perform the task. Thus, the segregation of the A and B sequences should improve performance. In Experiment 1, the electrode separation and the sequence duration were varied to clarify whether place cues help CI listeners to voluntarily segregate sounds and whether a two-stream percept needs time to build up. Results suggested that place cues can facilitate the segregation of sequential sounds if enough time is provided to build up a two-stream percept. In Experiment 2, the duration of the sequence was fixed, and only the electrode separation was varied to estimate the fission boundary. Most listeners were able to segregate the sounds for separations of three or more electrodes, and some listeners could segregate sounds coming from adjacent electrodes.
Affiliation(s)
- Andreu Paredes-Gallardo
- Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, Denmark
- Sara M. K. Madsen
- Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, Denmark
- Torsten Dau
- Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, Denmark
- Jeremy Marozeau
- Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, Denmark
13
Auditory sequential accumulation of spectral information. Hear Res 2017; 356:118-126. [PMID: 29042121 DOI: 10.1016/j.heares.2017.10.001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/29/2017] [Revised: 10/03/2017] [Accepted: 10/04/2017] [Indexed: 11/22/2022]
Abstract
In many listening situations, information about the spectral content of a target sound may be distributed over time, and estimating the target spectrum requires efficient sequential processing. Listeners' ability to estimate the spectrum of a random-frequency, six-tone complex was investigated; the spectral content of the complex was revealed over a sequence of bursts. Whether each of the six tones was presented within each burst was determined at random according to a presentation probability. In separate conditions, the presentation probabilities (p) ranged from 0.2 to 1, the total number of bursts varied from 1 to 16, and the inter-burst interval was either 0 or 200 ms. To evaluate the information acquired by the listener, the burst sequence was followed, after a 500-ms silent interval, by the six-tone complex acting as an informational masker, and the listener was required to detect a pure-tone target presented simultaneously with the masker. Better performance in this task indicates more accurate estimation of the spectrum of the complex by the listener. Evidence for integration of information across bursts was observed, and the integration process did not significantly depend on inter-burst interval.
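A compact sketch of the stimulus logic (the frequency range and all values here are assumptions for illustration, not the paper's specification): each of six random tones is present in each burst with probability p, so a listener must accumulate evidence across bursts to estimate the spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)

def burst_sequence(p=0.5, n_bursts=8, n_tones=6, f_lo=200.0, f_hi=5000.0):
    """Six-tone complex revealed over a burst sequence: each tone is
    independently present in each burst with probability p. Returns the
    tone frequencies and an (n_bursts x n_tones) presence mask."""
    freqs = np.exp(rng.uniform(np.log(f_lo), np.log(f_hi), n_tones))
    present = rng.random((n_bursts, n_tones)) < p
    return freqs, present

freqs, present = burst_sequence()
# an ideal integrator estimates the spectrum from per-tone presence rates
print(np.round(present.mean(axis=0), 2))
```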
14
Itatani N, Klump GM. Interaction of spatial and non-spatial cues in auditory stream segregation in the European starling. Eur J Neurosci 2017; 51:1191-1200. [PMID: 28922512; DOI: 10.1111/ejn.13716]
Abstract
Integrating sounds from the same source and segregating sounds from different sources in an acoustic scene are essential functions of the auditory system. Naturally, the auditory system simultaneously makes use of multiple cues. Here, we investigate the interaction between spatial cues and frequency cues in stream segregation of European starlings (Sturnus vulgaris) using an objective measure of perception. Neural responses to streaming sounds were recorded while the bird was performing a behavioural task that results in a higher sensitivity during a one-stream than a two-stream percept. Birds were trained to detect an onset time shift of a B tone in an ABA- triplet sequence in which A and B could differ in frequency and/or spatial location. If the frequency difference, the spatial separation between the signal sources, or both were increased, the behavioural time-shift detection performance deteriorated. Spatial separation had a smaller effect on performance than the frequency difference, and the two cues affected performance additively. Neural responses in the primary auditory forebrain were affected by both the frequency and the spatial cues. However, frequency and spatial cue differences that were sufficiently large to elicit behavioural effects did not produce correlated differences in the neural responses. The difference between the neuronal response pattern and the behavioural response is discussed in relation to the task given to the bird. Perceptual effects of combining different cues in auditory scene analysis indicate that these cues are analysed independently and given different weights, suggesting that the streaming percept arises after initial cue analysis.
Affiliation(s)
- Naoya Itatani
- Animal Physiology and Behavior Group, Department for Neuroscience, School for Medicine and Health Sciences, Carl von Ossietzky University Oldenburg, 26111 Oldenburg, Germany
- Cluster of Excellence Hearing4all, Carl von Ossietzky University Oldenburg, Oldenburg, Germany
- Georg M Klump
- Animal Physiology and Behavior Group, Department for Neuroscience, School for Medicine and Health Sciences, Carl von Ossietzky University Oldenburg, 26111 Oldenburg, Germany
- Cluster of Excellence Hearing4all, Carl von Ossietzky University Oldenburg, Oldenburg, Germany
15
David M, Lavandier M, Grimault N, Oxenham AJ. Discrimination and streaming of speech sounds based on differences in interaural and spectral cues. J Acoust Soc Am 2017; 142:1674. [PMID: 28964066; PMCID: PMC5617732; DOI: 10.1121/1.5003809]
Abstract
Differences in spatial cues, including interaural time differences (ITDs), interaural level differences (ILDs) and spectral cues, can lead to stream segregation of alternating noise bursts. It is unknown how effective such cues are for streaming sounds with realistic spectro-temporal variations. In particular, it is not known whether the high-frequency spectral cues associated with elevation remain sufficiently robust under such conditions. To answer these questions, sequences of consonant-vowel tokens were generated and filtered by non-individualized head-related transfer functions to simulate the cues associated with different positions in the horizontal and median planes. A discrimination task showed that listeners could discriminate changes in interaural cues both when the stimulus remained constant and when it varied between presentations. However, discrimination of changes in spectral cues was much poorer in the presence of stimulus variability. A streaming task, based on the detection of repeated syllables in the presence of interfering syllables, revealed that listeners can use both interaural and spectral cues to segregate alternating syllable sequences, despite the large spectro-temporal differences between stimuli. However, only the full complement of spatial cues (ILDs, ITDs, and spectral cues) resulted in obligatory streaming in a task that encouraged listeners to integrate the tokens into a single stream.
Affiliation(s)
- Marion David
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455, USA
- Mathieu Lavandier
- Univ Lyon, ENTPE, Laboratoire Génie Civil et Bâtiment, Rue Maurice Audin, 69518 Vaulx-en-Velin Cedex, France
- Nicolas Grimault
- Centre de Recherche en Neurosciences de Lyon, Université Lyon 1, Cognition Auditive et Psychoacoustique, Avenue Tony Garnier, 69366 Lyon Cedex 07, France
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455, USA
16
Temporal coherence structure rapidly shapes neuronal interactions. Nat Commun 2017; 8:13900. [PMID: 28054545; PMCID: PMC5228385; DOI: 10.1038/ncomms13900]
Abstract
Perception of segregated sources is essential in navigating cluttered acoustic environments. A basic mechanism to implement this process is the temporal coherence principle. It postulates that a signal is perceived as emitted from a single source only when all of its features are temporally modulated coherently, causing them to bind perceptually. Here we report on neural correlates of this process as rapidly reshaped interactions in primary auditory cortex, measured in three different ways: as changes in response rates, as adaptations of spectrotemporal receptive fields following stimulation by temporally coherent and incoherent tone sequences, and as changes in spiking correlations during the tone sequences. Responses, sensitivity and presumed connectivity were rapidly enhanced by synchronous stimuli, and suppressed by alternating (asynchronous) sounds, but only when the animals engaged in task performance and were attentive to the stimuli. Temporal coherence and attention are therefore both important factors in auditory scene analysis. One can easily identify whether multiple sounds originate from a single source, yet the neural mechanisms underlying this process are unknown. Here the authors show that temporally coherent sounds elicit changes in the receptive-field dynamics of auditory cortical neurons in ferrets, but only when the animals are paying attention.
17
Itatani N, Klump GM. Animal models for auditory streaming. Philos Trans R Soc Lond B Biol Sci 2017; 372:20160112. [PMID: 28044022; DOI: 10.1098/rstb.2016.0112]
Abstract
Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons' response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'.
Affiliation(s)
- Naoya Itatani
- Cluster of Excellence Hearing4all, Animal Physiology and Behaviour Group, Department of Neuroscience, School of Medicine and Health Sciences, Carl von Ossietzky University Oldenburg, 26111 Oldenburg, Germany
- Georg M Klump
- Cluster of Excellence Hearing4all, Animal Physiology and Behaviour Group, Department of Neuroscience, School of Medicine and Health Sciences, Carl von Ossietzky University Oldenburg, 26111 Oldenburg, Germany
18
Thomassen S, Bendixen A. Subjective perceptual organization of a complex auditory scene. J Acoust Soc Am 2017; 141:265. [PMID: 28147594; DOI: 10.1121/1.4973806]
Abstract
Empirical research on the sequential decomposition of an auditory scene primarily relies on interleaved sound mixtures of only two tone sequences (e.g., ABAB…). This oversimplifies the sound decomposition problem by limiting the number of putative perceptual organizations. The current study used a sound mixture composed of three different tones (ABCABC…) that could be perceptually organized in many different ways. Participants listened to these sequences and reported their subjective perception by continuously choosing one out of 12 visually presented perceptual organization alternatives. Different levels of frequency and spatial separation were implemented to check whether participants' perceptual reports would be systematic and plausible. As hypothesized, while perception switched back and forth in each condition between various perceptual alternatives (multistability), spatial as well as frequency separation generally raised the proportion of segregated and reduced the proportion of integrated alternatives. During segregated percepts, in contrast to the hypothesis, many participants had a tendency to perceive two streams in the foreground, rather than reporting alternatives with a clear foreground-background differentiation. Finally, participants perceived the organization with intermediate feature values (e.g., middle tones of the pattern) segregated in the foreground slightly less often than similar alternatives with outer feature values (e.g., higher tones).
Affiliation(s)
- Sabine Thomassen
- Auditory Psychophysiology Lab, Department of Psychology, Carl von Ossietzky University of Oldenburg, Ammerländer Heerstrasse 114-118, D-26129 Oldenburg, Germany
- Alexandra Bendixen
- Auditory Psychophysiology Lab, Department of Psychology, Carl von Ossietzky University of Oldenburg, Ammerländer Heerstrasse 114-118, D-26129 Oldenburg, Germany
19
David M, Lavandier M, Grimault N, Oxenham AJ. Sequential stream segregation of voiced and unvoiced speech sounds based on fundamental frequency. Hear Res 2016; 344:235-243. [PMID: 27923739; DOI: 10.1016/j.heares.2016.11.016]
Abstract
Differences in fundamental frequency (F0) between voiced sounds are known to be a strong cue for stream segregation. However, speech consists of both voiced and unvoiced sounds, and less is known about whether and how the unvoiced portions are segregated. This study measured listeners' ability to integrate or segregate sequences of consonant-vowel tokens, comprising a voiceless fricative and a vowel, as a function of the F0 difference between interleaved sequences of tokens. A performance-based measure was used, in which listeners detected the presence of a repeated token either within one sequence or between the two sequences (measures of voluntary and obligatory streaming, respectively). The results showed a systematic increase of voluntary stream segregation as the F0 difference between the two interleaved sequences increased from 0 to 13 semitones, suggesting that F0 differences allowed listeners to segregate speech sounds, including the unvoiced portions. In contrast to the consistent effects of voluntary streaming, the trend towards obligatory stream segregation at large F0 differences failed to reach significance. Listeners were no longer able to perform the voluntary-streaming task reliably when the unvoiced portions were removed from the stimuli, suggesting that the unvoiced portions were used and correctly segregated in the original task. The results demonstrate that streaming based on F0 differences occurs for natural speech sounds, and that the unvoiced portions are correctly assigned to the corresponding voiced portions.
Affiliation(s)
- Marion David
- Department of Psychology, University of Minnesota, Minneapolis, MN, 55455, USA.
- Mathieu Lavandier
- Univ Lyon, ENTPE, Laboratoire Génie Civil et Bâtiment, Rue M. Audin, F-69518 Vaulx-en-Velin Cedex, France
- Nicolas Grimault
- Cognition Auditive et Psychoacoustique, Centre de Recherche en Neurosciences de Lyon, Université Lyon 1, UMR CNRS 5292, Avenue Tony Garnier, 69366 Lyon Cedex 07, France
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, MN, 55455, USA
20
Shen Y. The effect of frequency cueing on the perceptual segregation of simultaneous tones: Bottom-up and top-down contributions. J Acoust Soc Am 2016; 140:3496. [PMID: 27908095; PMCID: PMC5848834; DOI: 10.1121/1.4965969]
Abstract
Listeners were presented with two simultaneous tones of different frequencies (more than one octave apart) and asked to identify the tone that was amplitude-modulated while a tonal precursor was presented to cue the frequency of the lower frequency tone. Performance thresholds were estimated based on the duration of the tone-pair. In Exp. I the duration of the precursor varied from 100 to 400 ms and the inter-stimulus interval (ISI) between the precursor and the tone-pair varied from 0 to 1 s. The presence of the precursor facilitated segregation. As the ISI increased, the facilitation effect of the precursor increased for the precursor durations of 100 and 200 ms, but not for the 400-ms precursor duration. When the precursor was presented to the contralateral ear relative to the tone-pair in Exp. II, no significant change to the precursor effect was observed. These observations contradict the predictions of the model based solely on bottom-up processing, suggesting the likely involvement of top-down processes.
Affiliation(s)
- Yi Shen
- Department of Speech and Hearing Sciences, Indiana University Bloomington, Bloomington, Indiana 47405, USA
21
Mehta AH, Yasin I, Oxenham AJ, Shamma S. Neural correlates of attention and streaming in a perceptually multistable auditory illusion. J Acoust Soc Am 2016; 140:2225. [PMID: 27794350; PMCID: PMC5849028; DOI: 10.1121/1.4963902]
Abstract
In a complex acoustic environment, acoustic cues and attention interact in the formation of streams within the auditory scene. In this study, a variant of the "octave illusion" [Deutsch (1974). Nature 251, 307-309] was used to investigate the neural correlates of auditory streaming, and to elucidate the effects of attention on the interaction between sequential and concurrent sound segregation in humans. By directing subjects' attention to different frequencies and ears, it was possible to elicit several different illusory percepts with the identical stimulus. The first experiment tested the hypothesis that the illusion depends on the ability of listeners to perceptually stream the target tones from within the alternating sound sequences. In the second experiment, concurrent psychophysical measures and electroencephalography recordings provided neural correlates of the various percepts elicited by the multistable stimulus. The results show that the perception and neural correlates of the auditory illusion can be manipulated robustly by attentional focus and that the illusion is constrained in much the same way as auditory stream segregation, suggesting common underlying mechanisms.
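The study used a variant of the octave illusion; for orientation, here is a sketch of the classic Deutsch (1974) stimulus, in which both ears receive the same alternating octave pair in opposite phase (the 400/800-Hz frequencies follow the classic description; tone duration and count are illustrative assumptions):

```python
import numpy as np

def octave_illusion(f_lo=400.0, f_hi=800.0, tone_dur=0.25, n_tones=20, fs=44100):
    """Classic octave-illusion stimulus: whenever one ear receives the high
    tone, the other receives the low tone, swapping on every tone.
    Returns a (samples, 2) stereo array (left, right)."""
    t = np.arange(int(tone_dur * fs)) / fs
    lo, hi = np.sin(2 * np.pi * f_lo * t), np.sin(2 * np.pi * f_hi * t)
    left = np.concatenate([hi if i % 2 == 0 else lo for i in range(n_tones)])
    right = np.concatenate([lo if i % 2 == 0 else hi for i in range(n_tones)])
    return np.column_stack([left, right])
```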
Affiliation(s)
- Anahita H Mehta
- Ear Institute, University College London, 332 Gray's Inn Road, London WC1X 8EE, United Kingdom
- Ifat Yasin
- Department of Computer Science, University College London, 66-72 Gower Street, London WC1E 6BT, United Kingdom
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Shihab Shamma
- Institute for Systems Research, 2203 A.V. Williams Building, University of Maryland, College Park, Maryland 20742, USA
22
Chang AC, Lutfi R, Lee J, Heo I. A Detection-Theoretic Analysis of Auditory Streaming and Its Relation to Auditory Masking. Trends Hear 2016; 20:2331216516664343. [PMID: 27641681; PMCID: PMC5029798; DOI: 10.1177/2331216516664343]
Abstract
Research on hearing has long been challenged with understanding our exceptional ability to hear out individual sounds in a mixture (the so-called cocktail party problem). Two general approaches to the problem have been taken using sequences of tones as stimuli. The first has focused on our tendency to hear sequences, sufficiently separated in frequency, split into separate cohesive streams (auditory streaming). The second has focused on our ability to detect a change in one sequence, ignoring all others (auditory masking). The two phenomena are clearly related, but that relation has never been evaluated analytically. This article offers a detection-theoretic analysis of the relation between multitone streaming and masking that underscores the expected similarities and differences between these phenomena and the predicted outcome of experiments in each case. The key to establishing this relation is the function linking performance to the information divergence of the tone sequences, D_KL (a measure of the statistical separation of their parameters). A strong prediction is that streaming and masking of tones will be a common function of D_KL provided that the statistical properties of sequences are symmetric. Results of experiments are reported supporting this prediction.
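For intuition about the linking quantity: if the relevant parameter of each tone sequence (e.g., log frequency) is modeled as Gaussian, D_KL has the standard closed form below (a textbook identity, not necessarily the exact parameterization used in the paper):

```latex
D_{\mathrm{KL}}\big(\mathcal{N}(\mu_A,\sigma_A^2)\,\|\,\mathcal{N}(\mu_B,\sigma_B^2)\big)
  = \ln\frac{\sigma_B}{\sigma_A}
  + \frac{\sigma_A^2 + (\mu_A - \mu_B)^2}{2\sigma_B^2}
  - \frac{1}{2}
```

The divergence grows with both the mean separation and the variance mismatch of the two sequences, which is why it can serve as a common axis for streaming and masking performance.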
Affiliation(s)
- An-Chieh Chang
- Department of Communication Sciences and Disorders, University of Wisconsin-Madison, WI, USA
- Robert Lutfi
- Department of Communication Sciences and Disorders, University of Wisconsin-Madison, WI, USA
- Jungmee Lee
- Department of Communication Sciences and Disorders, University of Wisconsin-Madison, WI, USA
- Inseok Heo
- Department of Electrical and Computer Engineering, University of Wisconsin-Madison, WI, USA
23
Abstract
Most people are able to recognise familiar tunes even when played in a different key. It is assumed that this depends on a general capacity for relative pitch perception; the ability to recognise the pattern of inter-note intervals that characterises the tune. However, when healthy adults are required to detect rare deviant melodic patterns in a sequence of randomly transposed standard patterns they perform close to chance. Musically experienced participants perform better than naïve participants, but even they find the task difficult, despite the fact that musical education includes training in interval recognition. To understand the source of this difficulty we designed an experiment to explore the relative influence of the size of within-pattern intervals and between-pattern transpositions on detecting deviant melodic patterns. We found that task difficulty increases when patterns contain large intervals (5-7 semitones) rather than small intervals (1-3 semitones). While task difficulty increases substantially when transpositions are introduced, the effect of transposition size (large vs small) is weaker. Increasing the range of permissible intervals to be used also makes the task more difficult. Furthermore, providing an initial exact repetition followed by subsequent transpositions does not improve performance. Although musical training correlates with task performance, we find no evidence that violations to musical intervals important in Western music (i.e. the perfect fifth or fourth) are more easily detected. In summary, relative pitch perception does not appear to be conducive to simple explanations based exclusively on invariant physical ratios.
24
Bizley JK, Maddox RK, Lee AKC. Defining Auditory-Visual Objects: Behavioral Tests and Physiological Mechanisms. Trends Neurosci 2016; 39:74-85. [PMID: 26775728; PMCID: PMC4738154; DOI: 10.1016/j.tins.2015.12.007]
Abstract
Crossmodal integration is a term applicable to many phenomena in which one sensory modality influences task performance or perception in another sensory modality. We distinguish the term binding as one that should be reserved specifically for the process that underpins perceptual object formation. To unambiguously differentiate binding from other types of integration, behavioral and neural studies must investigate perception of a feature orthogonal to the features that link the auditory and visual stimuli. We argue that supporting true perceptual binding (as opposed to other processes such as decision-making) is one role for cross-sensory influences in early sensory cortex. These early multisensory interactions may therefore form a physiological substrate for the bottom-up grouping of auditory and visual stimuli into auditory-visual (AV) objects. Crossmodal integration and binding have been treated as synonymous in the literature, with no clear delineation between perceptual changes and other interactions such as decision-making. Crossmodal binding is proposed as a distinct form of integration leading to multisensory object formation. Multisensory stimuli are most beneficial in noisy situations, but few studies use stimulus competition to investigate the processes underpinning multisensory integration. Evidence suggests that both visual and auditory attention is object-based: all features within an object are enhanced, and there is a cost to attending features across versus within objects. Multisensory interactions can be observed throughout the brain, including early sensory cortex. The role of early sensory cortex in multisensory integration is unknown, but may underlie crossmodal binding.
Collapse
Affiliation(s)
- Jennifer K Bizley
- University College London (UCL) Ear Institute, 332 Gray's Inn Road, London, WC1X 8EE, UK.
| | - Ross K Maddox
- Institute for Learning and Brain Sciences, University of Washington, 1715 NE Columbia Road, Portage Bay Building, Box 357988, Seattle, WA 98195, USA
| | - Adrian K C Lee
- Institute for Learning and Brain Sciences, University of Washington, 1715 NE Columbia Road, Portage Bay Building, Box 357988, Seattle, WA 98195, USA; Department of Speech and Hearing Sciences, University of Washington, 1417 NE 42nd Street, Eagleson Hall, Box 354875, Seattle, WA 98105, USA.
| |
Collapse
|
25
|
David M, Lavandier M, Grimault N. Sequential streaming, binaural cues and lateralization. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2015; 138:3500-3512. [PMID: 26723307 DOI: 10.1121/1.4936902] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Interaural time differences (ITDs) and interaural level differences (ILDs) associated with monaural spectral differences (coloration) enable the localization of sound sources. The influence of these spatial cues as well as their relative importance on obligatory stream segregation were assessed in experiment 1. A temporal discrimination task favored by integration was used to measure obligatory stream segregation for sequences of speech-shaped noises. Binaural and monaural differences associated with different spatial positions increased discrimination thresholds, indicating that spatial cues can induce stream segregation. The results also demonstrated that ITDs and coloration were relatively more important cues compared to ILDs. Experiment 2 questioned whether sound segregation takes place at the level of acoustic cue extraction (ITD per se) or at the level of object formation (perceived azimuth). A difference in ITDs between stimuli was introduced either consistently or inconsistently across frequencies, leading to clearly lateralized sounds or blurred lateralization, respectively. Conditions with ITDs and clearly perceived azimuths induced significantly more segregation than the condition with ITDs but reduced lateralization. The results suggested that segregation was mainly based on a difference in lateralization, although the extraction of ITDs might have also helped segregation up to a ceiling magnitude.
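For readers unfamiliar with how ITDs and ILDs are imposed on a signal, the sketch below lateralizes a mono noise burst by delaying one channel and attenuating it. This is a minimal mock-up under assumed parameters (sample rate, burst duration, cue sizes); it approximates the ITD as a whole-sample delay and does not reproduce the study's stimulus generation.

```python
import numpy as np

FS = 44100  # sample rate in Hz (assumed)

def lateralize(mono, itd_us=500.0, ild_db=4.0):
    # Left ear leads by itd_us microseconds and is louder by ild_db dB,
    # so the source is heard toward the left.
    delay = int(round(itd_us * 1e-6 * FS))
    right = np.concatenate([np.zeros(delay), mono])
    left = np.concatenate([mono, np.zeros(delay)]) * 10 ** (ild_db / 20)
    return np.column_stack([left, right])

burst = np.random.randn(int(0.06 * FS))  # 60-ms noise burst (illustrative)
stereo = lateralize(burst)
```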
Collapse
Affiliation(s)
- Marion David
- Université de Lyon, ENTPE, Laboratoire Génie Civil et Bâtiment, Rue M. Audin, F-69518 Vaulx-en-Velin Cedex, France
| | - Mathieu Lavandier
- Université de Lyon, ENTPE, Laboratoire Génie Civil et Bâtiment, Rue M. Audin, F-69518 Vaulx-en-Velin Cedex, France
| | - Nicolas Grimault
- Cognition Auditive et Psychoacoustique, Centre de Recherche en Neurosciences de Lyon, Université Lyon 1, UMR CNRS 5292, Avenue Tony Garnier, 69366 Lyon Cedex 07, France
| |
Collapse
|
26
|
Billig AJ, Carlyon RP. Automaticity and primacy of auditory streaming: Concurrent subjective and objective measures. J Exp Psychol Hum Percept Perform 2015; 42:339-353. [PMID: 26414168 PMCID: PMC4763253 DOI: 10.1037/xhp0000146] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
Two experiments used subjective and objective measures to study the automaticity and primacy of auditory streaming. Listeners heard sequences of “ABA–” triplets, where “A” and “B” were tones of different frequencies and “–” was a silent gap. Segregation was more frequently reported, and rhythmically deviant triplets less well detected, for a greater between-tone frequency separation and later in the sequence. In Experiment 1, performing a competing auditory task for the first part of the sequence led to a reduction in subsequent streaming compared to when the tones were attended throughout. This is consistent with focused attention promoting streaming, and/or with attention switches resetting it. However, the proportion of segregated reports increased more rapidly following a switch than at the start of a sequence, indicating that some streaming occurred automatically. Modeling ruled out a simple “covert attention” account of this finding. Experiment 2 required listeners to perform subjective and objective tasks concurrently. It revealed superior performance during integrated compared to segregated reports, beyond that explained by the codependence of the two measures on stimulus parameters. We argue that listeners have limited access to low-level stimulus representations once perceptual organization has occurred, and that subjective and objective streaming measures partly index the same processes.
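The ABA- paradigm itself is easy to state precisely: A and B are brief tones a fixed number of semitones apart, and "-" is a silent gap one tone long. A minimal sketch follows; the tone duration, ramp length, and base frequency are our choices, not the authors'.

```python
import numpy as np

FS = 44100  # sample rate in Hz (assumed)

def tone(freq_hz, dur_s=0.05):
    # Pure tone with 5-ms linear onset/offset ramps to avoid clicks.
    t = np.arange(int(dur_s * FS)) / FS
    ramp = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.005)
    return np.sin(2 * np.pi * freq_hz * t) * ramp

def aba_sequence(a_hz=500.0, df_semitones=6, n_triplets=20, dur_s=0.05):
    # "ABA-" triplets; "-" is a silent gap equal to one tone duration.
    b_hz = a_hz * 2 ** (df_semitones / 12)
    gap = np.zeros(int(dur_s * FS))
    triplet = np.concatenate([tone(a_hz, dur_s), tone(b_hz, dur_s),
                              tone(a_hz, dur_s), gap])
    return np.tile(triplet, n_triplets)

seq = aba_sequence(df_semitones=6)  # larger df -> more "two streams" reports
```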
Collapse
|
27
|
Nie Y, Nelson PB. Auditory stream segregation using amplitude modulated bandpass noise. Front Psychol 2015; 6:1151. [PMID: 26300831 PMCID: PMC4528102 DOI: 10.3389/fpsyg.2015.01151] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2015] [Accepted: 07/23/2015] [Indexed: 12/23/2022] Open
Abstract
The purpose of this study was to investigate the roles of spectral overlap and amplitude modulation (AM) rate in stream segregation for noise signals, as well as to test the build-up effect based on these two cues. Segregation ability was evaluated using an objective paradigm with listeners' attention focused on stream segregation. Stimulus sequences consisted of two interleaved sets of bandpass noise bursts (A and B bursts). The A and B bursts differed in spectrum, AM rate, or both, and the size of the difference between the two sets of noise bursts was varied. Long and short sequences were studied to investigate the build-up effect for segregation based on spectral and AM-rate differences. Results showed the following: (1) stream segregation ability increased with greater spectral separation; (2) larger AM-rate separations were associated with stronger segregation abilities; (3) spectral separation elicited the build-up effect for the range of spectral differences assessed in the current study; (4) AM-rate separation interacted with spectral separation, suggesting an additive effect of spectral separation and AM-rate separation on segregation build-up. The findings suggest that, when normal-hearing listeners direct their attention towards segregation, they are able to segregate auditory streams based on reduced spectral contrast cues that vary with the amount of spectral overlap. Further, regardless of the spectral separation, they are able to use AM-rate difference as a secondary, weaker cue. Based on spectral differences, listeners can segregate auditory streams better as the listening duration is prolonged; i.e., sparse spectral cues elicit build-up of segregation. AM-rate differences, however, only appear to elicit build-up in combination with spectral difference cues.
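A rough sketch of how such bursts can be generated, assuming FFT-domain band-limiting and a sinusoidal modulator (the study's exact filters and modulators are not reproduced here; all parameter values are illustrative):

```python
import numpy as np

FS = 44100  # sample rate in Hz (assumed)

def am_bandpass_burst(lo_hz, hi_hz, am_hz, dur_s=0.1, depth=1.0):
    # Gaussian noise band-limited by zeroing FFT bins outside [lo_hz, hi_hz],
    # then amplitude modulated at am_hz with the given modulation depth.
    n = int(dur_s * FS)
    spec = np.fft.rfft(np.random.randn(n))
    freqs = np.fft.rfftfreq(n, 1 / FS)
    spec[(freqs < lo_hz) | (freqs > hi_hz)] = 0
    noise = np.fft.irfft(spec, n)
    t = np.arange(n) / FS
    return noise * (1 + depth * np.sin(2 * np.pi * am_hz * t))

# A and B bursts differing in both spectrum and AM rate (illustrative values)
a_burst = am_bandpass_burst(500, 1000, am_hz=100)
b_burst = am_bandpass_burst(2000, 4000, am_hz=300)
```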
Collapse
Affiliation(s)
- Yingjiu Nie
- Department of Communication Sciences and Disorders, James Madison University, Harrisonburg, VA, USA
| | - Peggy B Nelson
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN, USA
| |
Collapse
|
28
|
Liu AS, Tsunada J, Gold JI, Cohen YE. Temporal Integration of Auditory Information Is Invariant to Temporal Grouping Cues. eNeuro 2015; 2:ENEURO.0077-14.2015. [PMID: 26464975 PMCID: PMC4596088 DOI: 10.1523/eneuro.0077-14.2015] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2014] [Revised: 03/01/2015] [Accepted: 03/30/2015] [Indexed: 11/29/2022] Open
Abstract
Auditory perception depends on the temporal structure of incoming acoustic stimuli. Here, we examined whether a temporal manipulation that affects the perceptual grouping also affects the time dependence of decisions regarding those stimuli. We designed a novel discrimination task that required human listeners to decide whether a sequence of tone bursts was increasing or decreasing in frequency. We manipulated temporal perceptual-grouping cues by changing the time interval between the tone bursts, which led to listeners hearing the sequences as a single sound for short intervals or discrete sounds for longer intervals. Despite these strong perceptual differences, this manipulation did not affect the efficiency of how auditory information was integrated over time to form a decision. Instead, the grouping manipulation affected subjects' speed-accuracy trade-offs. These results indicate that the temporal dynamics of evidence accumulation for auditory perceptual decisions can be invariant to manipulations that affect the perceptual grouping of the evidence.
Collapse
Affiliation(s)
| | - Joji Tsunada
- Department of Otorhinolaryngology, Perelman School of Medicine
| | - Joshua I. Gold
- Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104
| | - Yale E. Cohen
- Department of Otorhinolaryngology, Perelman School of Medicine
- Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104
| |
Collapse
|
29
|
Akram S, Englitz B, Elhilali M, Simon JZ, Shamma SA. Investigating the neural correlates of a streaming percept in an informational-masking paradigm. PLoS One 2014; 9:e114427. [PMID: 25490720 PMCID: PMC4260833 DOI: 10.1371/journal.pone.0114427] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2014] [Accepted: 11/10/2014] [Indexed: 11/19/2022] Open
Abstract
Humans routinely segregate a complex acoustic scene into different auditory streams, through the extraction of bottom-up perceptual cues and the use of top-down selective attention. To determine the neural mechanisms underlying this process, neural responses obtained through magnetoencephalography (MEG) were correlated with behavioral performance in the context of an informational masking paradigm. In half the trials, subjects were asked to detect frequency deviants in a target stream, consisting of a rhythmic tone sequence, embedded in a separate masker stream composed of a random cloud of tones. In the other half of the trials, subjects were exposed to identical stimuli but asked to perform a different task—to detect tone-length changes in the random cloud of tones. In order to verify that the normalized neural response to the target sequence served as an indicator of streaming, we correlated neural responses with behavioral performance under a variety of stimulus parameters (target tone rate, target tone frequency, and the “protection zone”, that is, the spectral area with no tones around the target frequency) and attentional states (changing task objective while maintaining the same stimuli). In all conditions that facilitated target/masker streaming behaviorally, MEG normalized neural responses also changed in a manner consistent with the behavior. Thus, attending to the target stream caused a significant increase in power and phase coherence of the responses in recording channels correlated with an increase in the behavioral performance of the listeners. Normalized neural target responses also increased as the protection zone widened and as the frequency of the target tones increased. Finally, when the target sequence rate increased, the buildup of the normalized neural responses was significantly faster, mirroring the accelerated buildup of the streaming percepts. Our data thus support close links between the perceptual and neural consequences of the auditory stream segregation.
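The "protection zone" manipulation is straightforward to mock up: masker tone frequencies are drawn at random but excluded from a band around the target frequency. A sketch under assumed parameters (tone count, frequency range, and log-uniform sampling are our choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def masker_cloud(n_tones=40, lo_hz=200.0, hi_hz=5000.0,
                 target_hz=1000.0, protect_semitones=6):
    # Draw masker frequencies log-uniformly, rejecting any that fall
    # within +/- protect_semitones of the target frequency.
    lo_edge = target_hz * 2 ** (-protect_semitones / 12)
    hi_edge = target_hz * 2 ** (protect_semitones / 12)
    freqs = []
    while len(freqs) < n_tones:
        f = np.exp(rng.uniform(np.log(lo_hz), np.log(hi_hz)))
        if not (lo_edge <= f <= hi_edge):
            freqs.append(f)
    return freqs

cloud = masker_cloud()  # widening the zone makes the target easier to stream
```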
Collapse
Affiliation(s)
- Sahar Akram
- The Institute for Systems Research, University of Maryland, College Park, Maryland, United States of America
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, United States of America
- * E-mail:
| | - Bernhard Englitz
- The Institute for Systems Research, University of Maryland, College Park, Maryland, United States of America
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, United States of America
- Département d'Etudes Cognitives, Ecole normale supérieure, Paris, France
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
| | - Mounya Elhilali
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland, United States of America
| | - Jonathan Z. Simon
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, United States of America
- Department of Biology, University of Maryland, College Park, Maryland, United States of America
| | - Shihab A. Shamma
- The Institute for Systems Research, University of Maryland, College Park, Maryland, United States of America
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, United States of America
- Département d'Etudes Cognitives, Ecole normale supérieure, Paris, France
| |
Collapse
|
30
|
Smith NA, Joshi S. Neural correlates of auditory stream segregation: an analysis of onset- and change-related responses. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2014; 136:EL295-EL301. [PMID: 25324113 PMCID: PMC4223979 DOI: 10.1121/1.4896414] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/28/2014] [Revised: 08/18/2014] [Accepted: 09/12/2014] [Indexed: 06/04/2023]
Abstract
The temporal order discrimination of target tone pairs is hindered by the presence of flanker tones but is improved when the flanker tones are captured by a separate stream of tones that match the flankers in frequency [Bregman and Rudnicky (1975). J. Exp. Psychol. 1, 263-267]. In an event-related potential (ERP) study with these stimuli, listeners' mismatch negativity (MMN) responses were temporally linked to the position of the changing target tones, irrespective of streaming. In contrast, N1 response latency varied as a function of the perceived grouping of flanker tones established by previous behavioral studies, providing a neurophysiological index of auditory stream segregation.
Collapse
Affiliation(s)
- Nicholas A Smith
- Perceptual Development Laboratory, Boys Town National Research Hospital, 555 North 30th Street, Omaha, Nebraska 68131
| | - Suyash Joshi
- Perceptual Development Laboratory, Boys Town National Research Hospital, 555 North 30th Street, Omaha, Nebraska 68131
| |
Collapse
|
31
|
Nie Y, Zhang Y, Nelson PB. Auditory stream segregation using bandpass noises: evidence from event-related potentials. Front Neurosci 2014; 8:277. [PMID: 25309306 PMCID: PMC4162371 DOI: 10.3389/fnins.2014.00277] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2014] [Accepted: 08/18/2014] [Indexed: 11/13/2022] Open
Abstract
The current study measured neural responses to investigate auditory stream segregation of noise stimuli with or without clear spectral contrast. Sequences of alternating A and B noise bursts were presented to elicit stream segregation in normal-hearing listeners. The successive B bursts in each sequence maintained an equal amount of temporal separation, with a manipulation introduced on the last stimulus: the last B burst was delayed for 50% of the sequences and not delayed for the other 50%. The A bursts were jittered in between every two adjacent B bursts. To study the effects of spectral separation on streaming, the A and B bursts were further manipulated by using either bandpass-filtered noises widely spaced in center frequency or broadband noises. Event-related potentials (ERPs) to the last B bursts were analyzed to compare the neural responses to the delay vs. no-delay trials in both passive and attentive listening conditions. In the passive listening condition, a trend for a possible late mismatch negativity (MMN) or late discriminative negativity (LDN) response was observed only when the A and B bursts were spectrally separate, suggesting that spectral separation in the A and B burst sequences could be conducive to stream segregation at the pre-attentive level. In the attentive condition, a P300 response was consistently elicited regardless of whether there was spectral separation between the A and B bursts, indicating the facilitative role of voluntary attention in stream segregation. The results suggest that reliable ERP measures can be used as indirect indicators of auditory stream segregation in conditions of weak spectral contrast. These findings have important implications for cochlear implant (CI) studies: because spectral information available through a CI device or simulation is substantially degraded, CI listeners may need more attention to achieve stream segregation.
Collapse
Affiliation(s)
- Yingjiu Nie
- Department of Communication Sciences and Disorders, James Madison University, Harrisonburg, VA, USA
| | - Yang Zhang
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities, MN, USA
- Center for Neurobehavioral Development, University of Minnesota, Twin Cities, MN, USA
| | - Peggy B. Nelson
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities, MN, USA
| |
Collapse
|
32
|
Neural correlates of auditory streaming in an objective behavioral task. Proc Natl Acad Sci U S A 2014; 111:10738-43. [PMID: 25002519 DOI: 10.1073/pnas.1321487111] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Segregating streams of sounds from sources in complex acoustic scenes is crucial for perception in real-world situations. We analyzed an objective psychophysical measure of stream segregation obtained while simultaneously recording forebrain neurons in European starlings to investigate neural correlates of segregating a stream of A tones from a stream of B tones presented at one-half the rate. The objective measure, sensitivity for time-shift detection of the B tone, was higher when the A and B tones were of the same frequency (one stream) compared with when there was a 6- or 12-semitone difference between them (two streams). The sensitivity for representing time shifts in spiking patterns was correlated with the behavioral sensitivity. The spiking patterns reflected the stimulus characteristics but not the behavioral response, indicating that the birds' primary cortical field represents the segregated streams, but not the decision process.
Collapse
|
33
|
David M, Lavandier M, Grimault N. Room and head coloration can induce obligatory stream segregation. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2014; 136:5-8. [PMID: 24993189 DOI: 10.1121/1.4883387] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Multiple sound reflections from room materials and a listener's head induce slight spectral modifications of sounds. This coloration depends on the listener and source positions, and on the room itself. This study investigated whether coloration could help segregate competing sources. Obligatory streaming was evaluated for diotic speech-shaped noises using a rhythmic discrimination task. Thresholds for detecting anisochrony were always significantly higher when stimuli differed in spectrum. The tested differences corresponded to three spatial configurations involving different levels of head and room coloration. These results suggest that, despite the generally deleterious effects of reverberation on speech intelligibility, coloration could favor source segregation.
Collapse
Affiliation(s)
- Marion David
- Université de Lyon, École Nationale des Travaux Publics de l'État, Laboratoire Génie Civil et Bâtiment, Rue M. Audin, 69518 Vaulx-en-Velin Cedex, France
| | - Mathieu Lavandier
- Université de Lyon, École Nationale des Travaux Publics de l'État, Laboratoire Génie Civil et Bâtiment, Rue M. Audin, 69518 Vaulx-en-Velin Cedex, France
| | - Nicolas Grimault
- Unité Mixte de Recherche au Centre National de la Recherche Scientifique 5292, Centre de Recherche en Neurosciences de Lyon, Université Lyon 1, Cognition Auditive et Psychoacoustique, Avenue Tony Garnier, 69366 Lyon Cedex 07, France
| |
Collapse
|
34
|
Dolležal LV, Brechmann A, Klump GM, Deike S. Evaluating auditory stream segregation of SAM tone sequences by subjective and objective psychoacoustical tasks, and brain activity. Front Neurosci 2014; 8:119. [PMID: 24936170 PMCID: PMC4047832 DOI: 10.3389/fnins.2014.00119] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2013] [Accepted: 05/03/2014] [Indexed: 11/13/2022] Open
Abstract
Auditory stream segregation refers to a segregated percept of signal streams with different acoustic features. Different approaches have been pursued in studies of stream segregation. In psychoacoustics, stream segregation has mostly been investigated with a subjective task asking the subjects to report their percept. Few studies have applied an objective task in which stream segregation is evaluated indirectly by determining thresholds for a percept that depends on whether auditory streams are segregated or not. Furthermore, both perceptual measures and physiological measures of brain activity have been employed, but only little is known about their relation. How the results from different tasks and measures are related is evaluated in the present study using examples relying on the ABA- stimulation paradigm that apply the same stimuli. We presented A and B signals that were sinusoidally amplitude-modulated (SAM) tones providing purely temporal, spectral, or both types of cues to evaluate perceptual stream segregation and its physiological correlate. Which types of cues are most prominent was determined by the choice of carrier and modulation frequencies (f_mod) of the signals. In the subjective task subjects reported their percept, and in the objective task we measured their sensitivity for detecting time-shifts of B signals in an ABA- sequence. As a further measure of processes underlying stream segregation we employed functional magnetic resonance imaging (fMRI). SAM tone parameters were chosen to evoke an integrated (1-stream), a segregated (2-stream), or an ambiguous percept by adjusting the f_mod difference between A and B tones (Δf_mod). The results of both psychoacoustical tasks are significantly correlated. BOLD responses in fMRI depend on Δf_mod between A and B SAM tones. The effect of Δf_mod, however, differs between auditory cortex and frontal regions, suggesting differences in representation related to the degree of perceptual ambiguity of the sequences.
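A SAM tone has a closed-form definition, y(t) = (1 + m sin(2π f_mod t)) sin(2π f_c t), with carrier f_c and modulation depth m. A minimal sketch (parameter values are our assumptions):

```python
import numpy as np

FS = 44100  # sample rate in Hz (assumed)

def sam_tone(fc_hz, fmod_hz, dur_s=0.1, m=1.0):
    # y(t) = (1 + m*sin(2*pi*fmod*t)) * sin(2*pi*fc*t)
    t = np.arange(int(dur_s * FS)) / FS
    return (1 + m * np.sin(2 * np.pi * fmod_hz * t)) * np.sin(2 * np.pi * fc_hz * t)

# A and B signals offering a purely temporal cue: same carrier, different fmod
a = sam_tone(fc_hz=4000, fmod_hz=100)
b = sam_tone(fc_hz=4000, fmod_hz=200)  # delta-fmod of one octave
```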
Collapse
Affiliation(s)
- Lena-Vanessa Dolležal
- Animal Physiology and Behavior Group, Department for Neuroscience, School for Medicine and Health Sciences, Center of Excellence "Hearing4all," Carl von Ossietzky University Oldenburg, Oldenburg, Germany
| | - André Brechmann
- Special Lab Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Magdeburg, Germany
| | - Georg M Klump
- Animal Physiology and Behavior Group, Department for Neuroscience, School for Medicine and Health Sciences, Center of Excellence "Hearing4all," Carl von Ossietzky University Oldenburg, Oldenburg, Germany
| | - Susann Deike
- Special Lab Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Magdeburg, Germany
| |
Collapse
|
35
|
Christiansen SK, Oxenham AJ. Assessing the effects of temporal coherence on auditory stream formation through comodulation masking release. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2014; 135:3520-3529. [PMID: 24907815 PMCID: PMC4048442 DOI: 10.1121/1.4872300] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/13/2013] [Revised: 03/29/2014] [Accepted: 04/07/2014] [Indexed: 05/29/2023]
Abstract
Recent studies of auditory streaming have suggested that repeated synchronous onsets and offsets over time, referred to as "temporal coherence," provide a strong grouping cue between acoustic components, even when they are spectrally remote. This study uses a measure of auditory stream formation, based on comodulation masking release (CMR), to assess the conditions under which a loss of temporal coherence across frequency can lead to auditory stream segregation. The measure relies on the assumption that the CMR, produced by flanking bands remote from the masker and target frequency, only occurs if the masking and flanking bands form part of the same perceptual stream. The masking and flanking bands consisted of sequences of narrowband noise bursts, and the temporal coherence between the masking and flanking bursts was manipulated in two ways: (a) by introducing a fixed temporal offset between the flanking and masking bands that varied from zero to 60 ms and (b) by presenting the flanking and masking bursts at different temporal rates, so that the asynchronies varied from burst to burst. The results showed reduced CMR in all conditions where the flanking and masking bands were temporally incoherent, in line with expectations of the temporal coherence hypothesis.
Collapse
Affiliation(s)
| | - Andrew J Oxenham
- Departments of Psychology and Otolaryngology, University of Minnesota, Minneapolis, Minnesota 55455
| |
Collapse
|
36
|
Bendixen A. Predictability effects in auditory scene analysis: a review. Front Neurosci 2014; 8:60. [PMID: 24744695 PMCID: PMC3978260 DOI: 10.3389/fnins.2014.00060] [Citation(s) in RCA: 73] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2014] [Accepted: 03/14/2014] [Indexed: 12/02/2022] Open
Abstract
Many sound sources emit signals in a predictable manner. The idea that predictability can be exploited to support the segregation of one source's signal emissions from the overlapping signals of other sources has been expressed for a long time. Yet experimental evidence for a strong role of predictability within auditory scene analysis (ASA) has been scarce. Recently, there has been an upsurge in experimental and theoretical work on this topic resulting from fundamental changes in our perspective on how the brain extracts predictability from series of sensory events. Based on effortless predictive processing in the auditory system, it becomes more plausible that predictability would be available as a cue for sound source decomposition. In the present contribution, empirical evidence for such a role of predictability in ASA will be reviewed. It will be shown that predictability affects ASA both when it is present in the sound source of interest (perceptual foreground) and when it is present in other sound sources that the listener wishes to ignore (perceptual background). First evidence pointing toward age-related impairments in the latter capacity will be addressed. Moreover, it will be illustrated how effects of predictability can be shown by means of objective listening tests as well as by subjective report procedures, with the latter approach typically exploiting the multi-stable nature of auditory perception. Critical aspects of study design will be delineated to ensure that predictability effects can be unambiguously interpreted. Possible mechanisms for a functional role of predictability within ASA will be discussed, and an analogy with the old-plus-new heuristic for grouping simultaneous acoustic signals will be suggested.
Collapse
Affiliation(s)
- Alexandra Bendixen
- Auditory Psychophysiology Lab, Department of Psychology, Cluster of Excellence "Hearing4all," European Medical School, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
| |
Collapse
|
37
|
Attention effects on auditory scene analysis: insights from event-related brain potentials. PSYCHOLOGICAL RESEARCH 2014; 78:361-78. [PMID: 24553776 DOI: 10.1007/s00426-014-0547-7] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2013] [Accepted: 02/06/2014] [Indexed: 10/25/2022]
Abstract
Sounds emitted by different sources arrive at our ears as a mixture that must be disentangled before meaningful information can be retrieved. It is still a matter of debate whether this decomposition happens automatically or requires the listener's attention. These opposite positions partly stem from different methodological approaches to the problem. We propose an integrative approach that combines the logic of previous measurements targeting either auditory stream segregation (interpreting a mixture as coming from two separate sources) or integration (interpreting a mixture as originating from only one source). By means of combined behavioral and event-related potential (ERP) measures, our paradigm has the potential to measure stream segregation and integration at the same time, providing the opportunity to obtain positive evidence of either one. This reduces the reliance on zero findings (i.e., the occurrence of stream integration in a given condition can be demonstrated directly, rather than indirectly based on the absence of empirical evidence for stream segregation, and vice versa). With this two-way approach, we systematically manipulate attention devoted to the auditory stimuli (by varying their task relevance) and to their underlying structure (by delivering perceptual tasks that require segregated or integrated percepts). ERP results based on the mismatch negativity (MMN) show no evidence for a modulation of stream integration by attention, while stream segregation results were less clear due to overlapping attention-related components in the MMN latency range. We suggest future studies combining the proposed two-way approach with some improvements in the ERP measurement of sequential stream segregation.
Collapse
|
38
|
An objective measure of auditory stream segregation based on molecular psychophysics. Atten Percept Psychophys 2014; 76:829-51. [DOI: 10.3758/s13414-013-0613-z] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
39
|
Rajendran VG, Harper NS, Willmore BD, Hartmann WM, Schnupp JWH. Temporal predictability as a grouping cue in the perception of auditory streams. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2013; 134:EL98-104. [PMID: 23862914 PMCID: PMC4491984 DOI: 10.1121/1.4811161] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
This study reports a role of temporal regularity in the perception of auditory streams. Listeners were presented with two-tone sequences in an A-B-A-B rhythm that was either regular or had a controlled amount of temporal jitter added independently to each of the B tones. Subjects were asked to report whether they perceived one or two streams. The percentage of trials in which two streams were reported substantially and significantly increased with increasing amounts of temporal jitter. This suggests that temporal predictability may serve as a binding cue during auditory scene analysis.
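The jitter manipulation amounts to displacing each nominal B onset by an independent random offset while the A tones stay strictly periodic. A sketch with assumed values for period, jitter range, and sequence length:

```python
import random

def b_tone_onsets(period_s=0.2, n_pairs=30, jitter_s=0.0):
    # Nominal B onsets fall midway between successive A onsets; each is
    # displaced by an independent uniform jitter in [-jitter_s, +jitter_s].
    return [k * period_s + period_s / 2 + random.uniform(-jitter_s, jitter_s)
            for k in range(n_pairs)]

regular = b_tone_onsets(jitter_s=0.0)
jittered = b_tone_onsets(jitter_s=0.04)  # more jitter -> more "two streams"
```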
Collapse
Affiliation(s)
- Vani G Rajendran
- Department of Physiology, Anatomy and Genetics, University of Oxford, Sherrington Building, Parks Road, Oxford OX1 3PT, United Kingdom.
| | | | | | | | | |
Collapse
|
40
|
Hutka SA, Alain C, Binns MA, Bidelman GM. Age-related differences in the sequential organization of speech sounds. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2013; 133:4177-4187. [PMID: 23742369 DOI: 10.1121/1.4802745] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
This study investigated the effects of age on listeners' tendency to group speech tokens into one or two auditory streams. Younger and older adults were presented with sequences of four vowel sounds, which were arranged according to the proximity of first-formant frequencies between adjacent vowels. In Experiment 1, participants were less accurate in identifying the order of the four vowels and more likely to report hearing two streams when the first formant alternated between low and high frequency and the overall difference between adjacent vowels was large. This effect of first-formant continuity on temporal order judgments and probability of hearing two streams was larger in younger than in older adults. In Experiment 2, participants indicated whether there was rhythm irregularity in an otherwise isochronous sequence of four vowels. Young adults' thresholds were lower when successive first formants ascended or descended monotonically (condition promoting integration) than when they alternated discontinuously (condition promoting streaming). This effect was not observed in older adults, whose thresholds were comparable for both types of vowel sequences. These two experiments provide converging evidence for an age-related deficit in exploiting first-formant information between consecutive vowels, which appears to impede older adults' ability to sequentially group speech sounds over time.
Collapse
Affiliation(s)
- Stefanie A Hutka
- Rotman Research Institute, Baycrest Center, 3560 Bathurst Street, Toronto, Ontario M6A 2E1, Canada
| | | | | | | |
Collapse
|
41
|
Middlebrooks JC, Onsan ZA. Stream segregation with high spatial acuity. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2012; 132:3896-3911. [PMID: 23231120 PMCID: PMC3528685 DOI: 10.1121/1.4764879] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/26/2012] [Revised: 09/25/2012] [Accepted: 10/12/2012] [Indexed: 06/01/2023]
Abstract
Spatial hearing is widely regarded as helpful in recognizing a sound amid other competing sounds. It is a matter of debate, however, whether spatial cues contribute to "stream segregation," which refers to the specific task of assigning multiple interleaved sequences of sounds to their respective sources. The present study employed "rhythmic masking release" as a measure of the spatial acuity of stream segregation. Listeners discriminated between rhythms of noise-burst sequences presented from free-field targets in the presence of interleaved maskers that varied in location. For broadband sounds in the horizontal plane, target-masker separations of ≥8° permitted rhythm discrimination with d' ≥ 1; in some cases, such thresholds approached listeners' minimum audible angles. Thresholds were the same for low-frequency sounds but were substantially wider for high-frequency sounds, suggesting that interaural delays provided higher spatial acuity in this task than did interaural level differences. In the vertical midline, performance varied dramatically as a function of noise-burst duration with median thresholds ranging from >30° for 10-ms bursts to 7.1° for 40-ms bursts. A marked dissociation between minimum audible angles and masking release thresholds across the various pass-band and burst-duration conditions suggests that location discrimination and spatial stream segregation are mediated by distinct auditory mechanisms.
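The d' values reported here follow standard signal detection theory: d' = z(H) - z(F), the difference between the inverse-normal-transformed hit and false-alarm rates. A minimal implementation (the clipping of extreme rates is our assumption, one common convention):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, floor=0.01, ceil=0.99):
    # Clip rates away from 0 and 1 so the inverse normal stays finite.
    z = NormalDist().inv_cdf
    h = min(max(hit_rate, floor), ceil)
    f = min(max(fa_rate, floor), ceil)
    return z(h) - z(f)

print(d_prime(0.69, 0.31))  # ~0.99, i.e., performance near the d' = 1 criterion
```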
Collapse
Affiliation(s)
- John C Middlebrooks
- Department of Otolaryngology, University of California at Irvine, Irvine, California 92697-5310, USA.
| | | |
Collapse
|
42
|
Oberfeld D, Stahn P. Sequential grouping modulates the effect of non-simultaneous masking on auditory intensity resolution. PLoS One 2012; 7:e48054. [PMID: 23110174 PMCID: PMC3480468 DOI: 10.1371/journal.pone.0048054] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2011] [Accepted: 09/26/2012] [Indexed: 11/22/2022] Open
Abstract
The presence of non-simultaneous maskers can result in strong impairment in auditory intensity resolution relative to a condition without maskers, and causes a complex pattern of effects that is difficult to explain on the basis of peripheral processing. We suggest that failures of selective attention to the target tones provide a useful framework for understanding these effects. Two experiments tested the hypothesis that the sequential grouping of the targets and the maskers into separate auditory objects facilitates selective attention and therefore reduces the masker-induced impairment in intensity resolution. In Experiment 1, a condition favoring the processing of the maskers and the targets as two separate auditory objects due to grouping by temporal proximity was contrasted with the usual forward masking setting where the masker and the target presented within each observation interval of the two-interval task can be expected to be grouped together. As expected, the former condition resulted in a significantly smaller masker-induced elevation of the intensity difference limens (DLs). In Experiment 2, embedding the targets in an isochronous sequence of maskers led to a significantly smaller DL-elevation than control conditions not favoring the perception of the maskers as a separate auditory stream. The observed effects of grouping are compatible with the assumption that a precise representation of target intensity is available at the decision stage, but that this information is used only in a suboptimal fashion due to limitations of selective attention. The data can be explained within a framework of object-based attention. The results impose constraints on physiological models of intensity discrimination. We discuss candidate structures for physiological correlates of the psychophysical data.
Collapse
Affiliation(s)
- Daniel Oberfeld
- Department of Psychology, Section Experimental Psychology, Johannes Gutenberg-Universität Mainz, Mainz, Germany.
| | | |
Collapse
|
43
|
Weintraub DM, Ramage EM, Sutton G, Ringdahl E, Boren A, Pasinski AC, Thaler N, Haderlie M, Allen DN, Snyder JS. Auditory stream segregation impairments in schizophrenia. Psychophysiology 2012; 49:1372-83. [PMID: 22913452 DOI: 10.1111/j.1469-8986.2012.01457.x] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2012] [Accepted: 07/11/2012] [Indexed: 11/27/2022]
Abstract
We used behavior and event-related potentials (ERPs) to examine auditory stream segregation in people with schizophrenia and control participants. During each trial, a context pattern was presented, consisting of low (A) and high (B) tones and silence (-) in a repeating ABA- pattern, with a frequency separation (Δf) of 3, 6, or 12 semitones. Next, a test ABA-pattern was presented that always had a 6-semitone Δf. Larger Δf during the context resulted in more perception of two streams and larger N1 and P2 ERPs, but less perception of two streams during the test pattern. These effects of Δf were smaller in schizophrenia. Individuals with schizophrenia also showed a reduced effect of prior perceptual judgments. Overall, the findings demonstrate that people with schizophrenia have abnormalities in segregating sounds. These abnormalities result from difficulties utilizing frequency cues in addition to reduced temporal context effects.
Collapse
Affiliation(s)
- David M Weintraub
- Department of Psychology, University of Nevada, Las Vegas, Las Vegas, Nevada, USA
| | | | | | | | | | | | | | | | | | | |
Collapse
|
44
|
Hupé JM, Pressnitzer D. The initial phase of auditory and visual scene analysis. Philos Trans R Soc Lond B Biol Sci 2012; 367:942-53. [PMID: 22371616 DOI: 10.1098/rstb.2011.0368] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Auditory streaming and visual plaids have been used extensively to study perceptual organization in each modality. Both stimuli can produce bistable alternations between grouped (one object) and split (two objects) interpretations. They also share two peculiar features: (i) at the onset of stimulus presentation, organization starts with a systematic bias towards the grouped interpretation; (ii) this first percept has 'inertia'; it lasts longer than the subsequent ones. As a result, the probability of forming different objects builds up over time, a landmark of both behavioural and neurophysiological data on auditory streaming. Here we show that first percept bias and inertia are independent. In plaid perception, inertia is due to a depth ordering ambiguity in the transparent (split) interpretation that makes plaid perception tristable rather than bistable: experimental manipulations removing the depth ambiguity suppressed inertia. However, the first percept bias persisted. We attempted a similar manipulation for auditory streaming by introducing level differences between streams, to bias which stream would appear in the perceptual foreground. Here both inertia and first percept bias persisted. We thus argue that the critical common feature of the onset of perceptual organization is the grouping bias, which may be related to the transition from temporally/spatially local to temporally/spatially global computation.
Collapse
Affiliation(s)
- Jean-Michel Hupé
- Centre de Recherche Cerveau et Cognition, Université de Toulouse and Centre National de la Recherche Scientifique, 31300 Toulouse, France.
| | | |
Collapse
|
45
|
Rimmele J, Schröger E, Bendixen A. Age-related changes in the use of regular patterns for auditory scene analysis. Hear Res 2012; 289:98-107. [PMID: 22543088 DOI: 10.1016/j.heares.2012.04.006] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/21/2011] [Revised: 03/31/2012] [Accepted: 04/09/2012] [Indexed: 11/30/2022]
Abstract
A recent approach to auditory processing suggests a close relationship of regularity processing in auditory sensory memory (ASM) and stream segregation, such that within-stream regularities can be used to stabilize stream segregation. The present study investigates age-related changes in how regular patterns are used for auditory scene analysis (ASA), when the stream containing the regularity is attended or unattended. In order to accomplish an intensity level deviant detection task, participants had to segregate the task-relevant pure tone sequence from an irrelevant distractor pure tone sequence, which randomly varied in level. In three conditions a simple spectro-temporal regularity ("Isochronous"), a more complex spectro-temporal regularity ("Rhythmic"), or no regularity ("Random") was embedded in either the attended target sequence (Experiment 1), or the unattended distractor sequence (Experiment 2). When the sequence containing the regularity was attended, older participants showed a similar increase of performance to younger adults in the conditions with regular patterns ("Isochronous" and "Rhythmic") compared to the "Random" condition. In contrast, when the sequence containing the regularity was unattended, older adults showed a specific performance decline compared to younger adults in the "Isochronous" condition. Results suggest a link between impaired automatic processing of regularities in ASM, and age-related deficits in the use of regular patterns for ASA.
Collapse
Affiliation(s)
- Johanna Rimmele
- Institute of Psychology, University of Leipzig, Leipzig, Germany.
| | | | | |
Collapse
|
46
|
Snyder JS, Gregg MK, Weintraub DM, Alain C. Attention, awareness, and the perception of auditory scenes. Front Psychol 2012; 3:15. [PMID: 22347201 PMCID: PMC3273855 DOI: 10.3389/fpsyg.2012.00015] [Citation(s) in RCA: 78] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2011] [Accepted: 01/11/2012] [Indexed: 11/25/2022] Open
Abstract
Auditory perception and cognition entail both low-level and high-level processes, which are likely to interact with each other to create our rich conscious experience of soundscapes. Recent research that we review has revealed numerous influences of high-level factors, such as attention, intention, and prior experience, on conscious auditory perception. And recently, studies have shown that auditory scene analysis tasks can exhibit multistability in a manner very similar to ambiguous visual stimuli, presenting a unique opportunity to study neural correlates of auditory awareness and the extent to which mechanisms of perception are shared across sensory modalities. Research has also led to a growing number of techniques through which auditory perception can be manipulated and even completely suppressed. Such findings have important consequences for our understanding of the mechanisms of perception and should also allow scientists to precisely distinguish the influences of different higher-level factors.
Collapse
Affiliation(s)
- Joel S. Snyder
- Department of Psychology, University of Nevada Las Vegas, Las Vegas, NV, USA
| | - Melissa K. Gregg
- Department of Psychology, University of Nevada Las Vegas, Las Vegas, NV, USA
| | - David M. Weintraub
- Department of Psychology, University of Nevada Las Vegas, Las Vegas, NV, USA
| | - Claude Alain
- The Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON, Canada
| |
Collapse
|
47
|
Richards VM, Carreira EM, Shen Y. Toward an objective measure for a "stream segregation" task. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2012; 131:EL8-EL13. [PMID: 22280734 PMCID: PMC3261051 DOI: 10.1121/1.3664107] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/19/2011] [Accepted: 10/28/2011] [Indexed: 05/31/2023]
Abstract
A procedure to estimate the relative contribution of "A" and "B" tones for a stream-segregation task is described. Listeners detected a delay in the penultimate A tone in an A-B-A-B sequence of tones. For small A-B frequency separations, for most listeners, classification models based on both the A and B tones were superior to models based on just the A tones. For large frequency separations, models based on just the A tones were superior, indicating the A and B tones were segregated. The results also revealed individual differences in the strategies adopted to complete the task.
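The classification-model logic can be sketched as a regression of each trial's binary response on trial-by-trial perturbations of the individual tones; comparing a model restricted to the A-tone predictors against one using both A and B predictors then shows whether the B tones influenced judgments. This is purely illustrative: the simulated listener, predictor layout, and use of scikit-learn are our assumptions, not the study's actual analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_tones = 500, 8                 # 8 perturbed tone positions (assumed)
X = rng.normal(size=(n_trials, n_tones))   # per-tone temporal perturbations
is_a = np.arange(n_tones) % 2 == 0         # even positions = A tones (assumed)

# Simulate a listener whose decisions depend only on the A tones,
# as when a large frequency separation segregates the B tones away.
y = (X[:, is_a].sum(axis=1) + rng.normal(size=n_trials)) > 0

full = LogisticRegression().fit(X, y)
a_only = LogisticRegression().fit(X[:, is_a], y)
# Similar accuracies imply the B tones carried no extra weight.
print(full.score(X, y), a_only.score(X[:, is_a], y))
```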
Collapse
Affiliation(s)
- Virginia M Richards
- Department of Cognitive Sciences, University of California, Irvine, 3151 Social Science Plaza, Irvine, California 92697-5100, USA.
| | | | | |
Collapse
|
48
|
Fidali BC, Poudrier È, Repp BH. Detecting perturbations in polyrhythms: effects of complexity and attentional strategies. PSYCHOLOGICAL RESEARCH 2011; 77:183-95. [DOI: 10.1007/s00426-011-0406-8] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2011] [Accepted: 12/17/2011] [Indexed: 10/14/2022]
|
49
|
Innes-Brown H, Marozeau J, Blamey P. The effect of visual cues on difficulty ratings for segregation of musical streams in listeners with impaired hearing. PLoS One 2011; 6:e29327. [PMID: 22195046 PMCID: PMC3240656 DOI: 10.1371/journal.pone.0029327] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2011] [Accepted: 11/25/2011] [Indexed: 12/03/2022] Open
Abstract
Background: Enjoyment of music is an important part of life that may be degraded for people with hearing impairments, especially those using cochlear implants. The ability to follow separate lines of melody is an important factor in music appreciation. This ability relies on effective auditory streaming, which is much reduced in people with hearing impairment, contributing to difficulties in music appreciation. The aim of this study was to assess whether visual cues could reduce the subjective difficulty of segregating a melody from interleaved background notes in normally hearing listeners, those using hearing aids, and those using cochlear implants.
Methodology/Principal Findings: Normally hearing listeners (N = 20), hearing aid users (N = 10), and cochlear implant users (N = 11) were asked to rate the difficulty of segregating a repeating four-note melody from random interleaved distracter notes. The pitch of the background notes was gradually increased or decreased throughout blocks, providing a range of difficulty from easy (with a large pitch separation between melody and distracter) to impossible (with the melody and distracter completely overlapping). Visual cues were provided on half the blocks, and difficulty ratings for blocks with and without visual cues were compared between groups. Visual cues reduced the subjective difficulty of extracting the melody from the distracter notes for normally hearing listeners and cochlear implant users, but not hearing aid users.
Conclusion/Significance: Simple visual cues may improve the ability of cochlear implant users to segregate lines of music, thus potentially increasing their enjoyment of music. More research is needed to determine what type of acoustic cues to encode visually in order to optimise the benefits they may provide.
Collapse
|
50
|
Devergie A, Grimault N, Gaudrain E, Healy EW, Berthommier F. The effect of lip-reading on primary stream segregation. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2011; 130:283-291. [PMID: 21786898 PMCID: PMC3155588 DOI: 10.1121/1.3592223] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/07/2010] [Revised: 03/16/2011] [Accepted: 04/25/2011] [Indexed: 05/31/2023]
Abstract
Lip-reading has been shown to improve the intelligibility of speech in multitalker situations, where auditory stream segregation naturally takes place. This study investigated whether the benefit of lip-reading is a result of a primary audiovisual interaction that enhances the obligatory streaming mechanism. Two behavioral experiments were conducted involving sequences of French vowels that alternated in fundamental frequency. In Experiment 1, subjects attempted to identify the order of items in a sequence. In Experiment 2, subjects attempted to detect a disruption to temporal isochrony across alternate items. Both tasks are disrupted by streaming, thus providing a measure of primary or obligatory streaming. Visual lip gestures articulating alternate vowels were synchronized with the auditory sequence. Overall, the results were consistent with the hypothesis that visual lip gestures enhance segregation by affecting primary auditory streaming. Moreover, increases in the naturalness of visual lip gestures and auditory vowels, and corresponding increases in audiovisual congruence may potentially lead to increases in the effect of visual lip gestures on streaming.
Collapse
Affiliation(s)
- Aymeric Devergie
- Centre de Recherche en Neurosciences de Lyon, UMR CNRS 5292 Université Lyon 1, 69366 Lyon Cedex 07, France
| | | | | | | | | |
Collapse
|