1
Bürgel M, Mares D, Siedenburg K. Enhanced salience of edge frequencies in auditory pattern recognition. Atten Percept Psychophys 2024. [PMID: 39461935] [DOI: 10.3758/s13414-024-02971-x]
Abstract
Within musical scenes or textures, sounds from certain instruments capture attention more prominently than others, hinting at biases in the perception of multisource mixtures. Besides musical factors, these effects might be related to frequency biases in auditory perception. Using an auditory pattern-recognition task, we studied the existence of such frequency biases. Mixtures of pure tone melodies were presented in six frequency bands. Listeners were instructed to assess whether the target melody was part of the mixture or not, with the target melody presented either before or after the mixture. In Experiment 1, the mixture always contained melodies in five out of the six bands. In Experiment 2, the mixture contained three bands that stemmed from the lower or the higher part of the range. As expected, Experiments 1 and 2 both highlighted strong effects of presentation order, with higher accuracies for the target presented before the mixture. Notably, Experiment 1 showed that edge frequencies yielded superior accuracies compared with center frequencies. Experiment 2 corroborated this finding by yielding enhanced accuracies for edge frequencies irrespective of the absolute frequency region. Our results highlight the salience of sound elements located at spectral edges within complex musical scenes. Overall, this implies that neither the high voice superiority effect nor the insensitivity to bass instruments observed by previous research can be explained by absolute frequency biases in auditory perception.
Affiliation(s)
- Michel Bürgel
- Dept. of Medical Physics and Acoustics, Carl von Ossietzky University of Oldenburg, 26129, Oldenburg, Germany.
- Diana Mares
- Dept. of Medical Physics and Acoustics, Carl von Ossietzky University of Oldenburg, 26129, Oldenburg, Germany.
- Kai Siedenburg
- Dept. of Medical Physics and Acoustics, Carl von Ossietzky University of Oldenburg, 26129, Oldenburg, Germany
- Signal Processing and Speech Communication Laboratory, Graz University of Technology, 8010, Graz, Austria
2
Loutrari A, Alqadi A, Jiang C, Liu F. Exploring the role of singing, semantics, and amusia screening in speech-in-noise perception in musicians and non-musicians. Cogn Process 2024; 25:147-161. [PMID: 37851154] [PMCID: PMC10827916] [DOI: 10.1007/s10339-023-01165-x]
Abstract
Sentence repetition has been the focus of extensive psycholinguistic research. The notion that music training can bolster speech perception in adverse auditory conditions has been met with mixed results. In this work, we sought to gauge the effect of babble noise on the immediate repetition of spoken and sung phrases of varying semantic content (expository, narrative, and anomalous), initially in 100 English-speaking monolinguals with and without music training. The two cohorts also completed several non-musical cognitive tests and the Montreal Battery of Evaluation of Amusia (MBEA). When disregarding MBEA results, musicians significantly outperformed non-musicians in overall repetition accuracy. Sung targets were recalled significantly better than spoken ones across groups in the presence of babble noise. Sung expository targets were recalled better than spoken expository ones, and semantically anomalous content was recalled more poorly in noise. Rerunning the analysis after eliminating the thirteen participants who were diagnosed with amusia showed no significant group differences. This suggests that the notion of enhanced speech perception, in noise or otherwise, in musicians needs to be evaluated with caution. Musicianship aside, this study showed for the first time that sung targets presented in babble noise seem to be recalled better than spoken ones. We discuss the present design and the methodological approach of screening for amusia as factors which may partially account for some of the mixed results in the field.
Affiliation(s)
- Ariadne Loutrari
- School of Psychology and Clinical Language Sciences, University of Reading, Earley Gate, Reading, RG6 6AL, UK
- Division of Psychology and Language Sciences, University College London, London, WC1N 1PF, UK
- Aseel Alqadi
- School of Psychology and Clinical Language Sciences, University of Reading, Earley Gate, Reading, RG6 6AL, UK
- Cunmei Jiang
- Music College, Shanghai Normal University, Shanghai, 200234, China
- Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Earley Gate, Reading, RG6 6AL, UK.
3
Sun Y, Oxenham V, Lo CY, Walsh J, Martens WL, Cremer P, Thompson WF. Acquired amusia after a right middle cerebral artery infarction - a case study. Neurocase 2024; 30:18-28. [PMID: 38734872] [DOI: 10.1080/13554794.2024.2350104]
Abstract
A 62-year-old musician (MM) developed amusia after a right middle cerebral artery infarction. Initially, MM showed melodic deficits while discriminating pitch-related differences in melodies, musical memory problems, and impaired sensitivity to tonal structures, but normal pitch discrimination and spectral resolution thresholds and normal cognitive and language abilities. His rhythmic processing was intact when pitch variations were removed. After 3 months, MM showed a large improvement in his sensitivity to tonality, but persistent melodic deficits and a decline in perceiving the metric structure of rhythmic sequences. We also found that visual cues aided melodic processing, a novel finding that may benefit future rehabilitation practice.
Affiliation(s)
- Yanan Sun
- Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Vincent Oxenham
- Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Neurology Department, Royal North Shore Hospital, Sydney, Australia
- Chi Yhun Lo
- Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Department of Psychology, Toronto Metropolitan University, Toronto, Canada
- Jessica Walsh
- Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Neurology Department, Royal North Shore Hospital, Sydney, Australia
- William L Martens
- Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Phillip Cremer
- Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Neurology Department, Royal North Shore Hospital, Sydney, Australia
- William Forde Thompson
- Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Faculty of Society and Design, Bond University, Queensland, Australia
4
Shen D, Ross B, Alain C. Temporal deployment of attention in musicians: Evidence from an attentional blink paradigm. Ann N Y Acad Sci 2023; 1530:110-123. [PMID: 37823710] [DOI: 10.1111/nyas.15069]
Abstract
The generalization of music training to unrelated nonmusical domains is well established and may reflect musicians' superior ability to regulate attention. We investigated the temporal deployment of attention in musicians and nonmusicians using scalp-recorded event-related potentials in an attentional blink (AB) paradigm. Participants listened to rapid sequences of stimuli and identified target and probe sounds. The AB was defined as a deficit in probe identification when the probe closely follows the target. The sequence of stimuli was preceded by a neutral or informative cue about the probe's position within the sequence. Musicians outperformed nonmusicians in identifying the target and probe. In both groups, cueing improved target and probe identification and reduced the AB. The informative cue elicited a sustained potential, which was more prominent in musicians than nonmusicians over left temporal areas and was associated with a larger N1 amplitude elicited by the target. The N1 was larger in musicians than nonmusicians, and its amplitude over the left frontocentral cortex of musicians correlated with accuracy. Together, these results reveal musicians' superior ability to regulate attention, allowing them to prepare for incoming stimuli and thereby improve sound object identification. This capacity to manage attentional resources to optimize task performance may generalize to nonmusical activities.
Affiliation(s)
- Dawei Shen
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
- Bernhard Ross
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Institute of Medical Sciences, University of Toronto, Toronto, Ontario, Canada
- Music and Health Science Research Collaboratory, University of Toronto, Toronto, Ontario, Canada
- Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
- Institute of Medical Sciences, University of Toronto, Toronto, Ontario, Canada
- Music and Health Science Research Collaboratory, University of Toronto, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
5
Johns MA, Calloway RC, Phillips I, Karuzis VP, Dutta K, Smith E, Shamma SA, Goupell MJ, Kuchinsky SE. Performance on stochastic figure-ground perception varies with individual differences in speech-in-noise recognition and working memory capacity. J Acoust Soc Am 2023; 153:286. [PMID: 36732241] [PMCID: PMC9851714] [DOI: 10.1121/10.0016756]
Abstract
Speech recognition in noisy environments can be challenging and requires listeners to accurately segregate a target speaker from irrelevant background noise. Stochastic figure-ground (SFG) tasks, in which temporally coherent inharmonic pure tones must be identified against a background, have been used to probe the non-linguistic auditory stream segregation processes important for speech-in-noise processing. However, little is known about the relationship between performance on SFG tasks and speech-in-noise tasks, or about the individual differences that may modulate such relationships. In this study, 37 younger normal-hearing adults performed an SFG task with target figure chords consisting of four, six, eight, or ten temporally coherent tones amongst a background of randomly varying tones. Stimuli were designed to be spectrally and temporally flat. An increased number of temporally coherent tones resulted in higher accuracy and faster reaction times (RTs). For ten target tones, faster RTs were associated with better scores on the Quick Speech-in-Noise task. Individual differences in working memory capacity and self-reported musicianship further modulated these relationships. Overall, the results demonstrate that the SFG task could serve as an assessment of auditory stream segregation accuracy and RT that is sensitive to individual differences in cognitive and auditory abilities, even among younger normal-hearing adults.
Affiliation(s)
- Michael A Johns
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, USA
- Regina C Calloway
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, USA
- Ian Phillips
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
- Valerie P Karuzis
- Applied Research Laboratory of Intelligence and Security, University of Maryland, College Park, Maryland 20742, USA
- Kelsey Dutta
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, USA
- Ed Smith
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Shihab A Shamma
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland 20742, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Stefanie E Kuchinsky
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
6
Wilbiks JMP, Yi SM. Musical Novices Are Unable to Judge Musical Quality from Brief Video Clips: A Failed Replication of Tsay (2014). Vision (Basel) 2022; 6:65. [PMID: 36412646] [PMCID: PMC9680492] [DOI: 10.3390/vision6040065]
Abstract
Research focusing on "thin slicing" suggests that, when judging others' moods, personality traits, and relationships, we can make relatively reliable decisions based on a small amount of information, in some instances within a matter of a few seconds. A similar result was found for judgements of the musical quality of ensemble performances by Tsay (2014), wherein musical novices were able to reliably choose the winner of a music competition based on visual information only (but not auditory or audiovisual information). Tsay argues that this occurs due to a lack of auditory expertise in musical novices, and that they extract quality information more accurately from visual movements. As part of the SCORE project (OSF, 2021), we conducted a direct replication of Tsay (2014). Findings showed that musical novices were unable to judge musical quality at a level greater than chance, and this result held for auditory, visual, and audiovisual presentation. This suggests that 6 s is not a sufficient amount of time for novices to judge the relative quality of a musical performance, regardless of the modality in which it is presented.
7
Sauvé SA, Marozeau J, Rich Zendel B. The effects of aging and musicianship on the use of auditory streaming cues. PLoS One 2022; 17:e0274631. [PMID: 36137151] [PMCID: PMC9498935] [DOI: 10.1371/journal.pone.0274631]
Abstract
Auditory stream segregation, or separating sounds into their respective sources and tracking them over time, is a fundamental auditory ability. Previous research has separately explored the impacts of aging and musicianship on the ability to separate and follow auditory streams. The current study evaluated the simultaneous effects of age and musicianship on auditory streaming induced by three physical features: intensity, spectral envelope, and temporal envelope. In the first study, older and younger musicians and non-musicians with normal hearing identified deviants in a four-note melody interleaved with distractors that were more or less similar to the melody in terms of intensity, spectral envelope, and temporal envelope. In the second study, older and younger musicians and non-musicians participated in a dissimilarity rating paradigm with pairs of melodies that differed along the same three features. Results suggested that auditory streaming skills are maintained in older adults, but that older adults rely on intensity more than younger adults, while musicianship is associated with increased sensitivity to spectral and temporal envelope, acoustic features that are typically less effective for stream segregation, particularly in older adults.
Affiliation(s)
- Sarah A. Sauvé
- Division of Community Health and Humanities, Faculty of Medicine, Memorial University of Newfoundland, St. John’s, Newfoundland and Labrador, Canada
- Jeremy Marozeau
- Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
- Benjamin Rich Zendel
- Division of Community Health and Humanities, Faculty of Medicine, Memorial University of Newfoundland, St. John’s, Newfoundland and Labrador, Canada
8
Johnson N, Shiju AM, Parmar A, Prabhu P. Evaluation of Auditory Stream Segregation in Musicians and Nonmusicians. Int Arch Otorhinolaryngol 2021; 25:e77-e80. [PMID: 33542755] [PMCID: PMC7851367] [DOI: 10.1055/s-0040-1709116]
Abstract
Introduction
One of the major cues that help in auditory stream segregation is spectral profiling. Musicians are trained to perceive fine structural variations in acoustic stimuli and have enhanced temporal perception and speech perception in noise.
Objective
To analyze the differences in spectral profile thresholds in musicians and nonmusicians.
Methods
The spectral profile analysis threshold was compared between 2 groups (musicians and nonmusicians) in the age range between 15 and 30 years old. The standard stimuli had 5 harmonics, all at the same amplitude (f0 = 330 Hz, mi4). The variable tone had a similar harmonic structure; however, the amplitude of its third harmonic component was higher, producing a different timbre in comparison with the standards. The subject had to identify the odd-timbre tone. The testing was performed at 60 dB HL in a sound-treated room.
Results
The results of the study showed that the profile analysis thresholds were significantly better in musicians compared with nonmusicians. The result of the study also showed that the profile analysis thresholds were better with an increase in the duration of music training. Thus, improved auditory processing in musicians could have resulted in a better profile analysis threshold.
Conclusions
Auditory stream segregation was found to be better in musicians compared with nonmusicians, and performance improved with the number of years of training. However, further studies on a larger group with more variables are essential to validate these results.
Affiliation(s)
- Naina Johnson
- Department of Audiology, All India Institute of Speech and Hearing, Mysore, India
- Annika Mariam Shiju
- Department of Audiology, All India Institute of Speech and Hearing, Mysore, India
- Adya Parmar
- Department of Audiology, All India Institute of Speech and Hearing, Mysore, India
- Prashanth Prabhu
- Department of Audiology, All India Institute of Speech and Hearing, Mysore, India
10
Coffey EBJ, Arseneau-Bruneau I, Zhang X, Zatorre RJ. The Music-In-Noise Task (MINT): A Tool for Dissecting Complex Auditory Perception. Front Neurosci 2019; 13:199. [PMID: 30930734] [PMCID: PMC6427094] [DOI: 10.3389/fnins.2019.00199]
Abstract
The ability to segregate target sounds in noisy backgrounds is relevant both to neuroscience and to clinical applications. Recent research suggests that hearing-in-noise (HIN) problems are solved using combinations of sub-skills that are applied according to task demand and information availability. While evidence is accumulating for a musician advantage in HIN, the exact nature of the reported training effect is not fully understood. Existing HIN tests focus on tasks requiring understanding of speech in the presence of competing sound. Because visual, spatial and predictive cues are not systematically considered in these tasks, few tools exist to investigate the most relevant components of cognitive processes involved in stream segregation. We present the Music-In-Noise Task (MINT) as a flexible tool to expand HIN measures beyond speech perception, and for addressing research questions pertaining to the relative contributions of HIN sub-skills, inter-individual differences in their use, and their neural correlates. The MINT uses a match-mismatch trial design: in four conditions (Baseline, Rhythm, Spatial, and Visual) subjects first hear a short instrumental musical excerpt embedded in an informational masker of "multi-music" noise, followed by either a matching or scrambled repetition of the target musical excerpt presented in silence; the four conditions differ according to the presence or absence of additional cues. In a fifth condition (Prediction), subjects hear the excerpt in silence as a target first, which helps to anticipate incoming information when the target is embedded in masking sound. Data from samples of young adults show that the MINT has good reliability and internal consistency, and demonstrate selective benefits of musicianship in the Prediction, Rhythm, and Visual subtasks. We also report a performance benefit of multilingualism that is separable from that of musicianship. Average MINT scores were correlated with scores on a sentence-in-noise perception task, but only accounted for a relatively small percentage of the variance, indicating that the MINT is sensitive to additional factors and can provide a complement and extension of speech-based tests for studying stream segregation. A customizable version of the MINT is made available for use and extension by the scientific community.
Affiliation(s)
- Emily B. J. Coffey
- Department of Psychology, Concordia University, Montreal, QC, Canada
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), Montreal, QC, Canada
- Isabelle Arseneau-Bruneau
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), Montreal, QC, Canada
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Xiaochen Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Robert J. Zatorre
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), Montreal, QC, Canada
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada
11
Audio Visual Integration with Competing Sources in the Framework of Audio Visual Speech Scene Analysis. Adv Exp Med Biol 2016. [DOI: 10.1007/978-3-319-25474-6_42]
12
Deike S, Heil P, Böckmann-Barthel M, Brechmann A. Decision making and ambiguity in auditory stream segregation. Front Neurosci 2015; 9:266. [PMID: 26321899] [PMCID: PMC4531241] [DOI: 10.3389/fnins.2015.00266]
Abstract
Researchers of auditory stream segregation have largely taken a bottom-up view on the link between physical stimulus parameters and the perceptual organization of sequences of ABAB sounds. However, in the majority of studies, researchers have relied on the reported decisions of the subjects regarding which of the predefined percepts (e.g., one stream or two streams) predominated when subjects listened to more or less ambiguous streaming sequences. When searching for neural mechanisms of stream segregation, it should be kept in mind that such decision processes may contribute to brain activation, as also suggested by recent human imaging data. The present study proposes that the uncertainty of a subject in making a decision about the perceptual organization of ambiguous streaming sequences may be reflected in the time required to make an initial decision. To this end, subjects had to decide on their current percept while listening to ABAB auditory streaming sequences. Each sequence had a duration of 30 s and was composed of A and B harmonic tone complexes differing in fundamental frequency (ΔF). Sequences with seven different ΔF were tested. We found that the initial decision time varied non-monotonically with ΔF and that it was significantly correlated with the degree of perceptual ambiguity defined from the proportions of time the subjects reported a one-stream or a two-stream percept subsequent to the first decision. This strong relation of the proposed measures of decision uncertainty and perceptual ambiguity should be taken into account when searching for neural correlates of auditory stream segregation.
Affiliation(s)
- Susann Deike
- Special Lab Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Magdeburg, Germany
- Peter Heil
- Department of Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany
- Martin Böckmann-Barthel
- Department of Experimental Audiology, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- André Brechmann
- Special Lab Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Magdeburg, Germany
13
Marozeau J, Innes-Brown H, Blamey PJ. The acoustic and perceptual cues affecting melody segregation for listeners with a cochlear implant. Front Psychol 2013; 4:790. [PMID: 24223563] [PMCID: PMC3818467] [DOI: 10.3389/fpsyg.2013.00790]
Abstract
Our ability to listen selectively to single sound sources in complex auditory environments is termed "auditory stream segregation." This ability is affected by peripheral disorders such as hearing loss, as well as by plasticity in central processing such as occurs with musical training. Brain plasticity induced by musical training can enhance the ability to segregate sound, leading to improvements in a variety of auditory abilities. The melody segregation ability of 12 cochlear-implant recipients was tested using a new method to determine the perceptual distance needed to segregate a simple 4-note melody from a background of interleaved random-pitch distractor notes. In experiment 1, participants rated the difficulty of segregating the melody from the distractor notes while four physical properties of the distractor notes were changed. In experiment 2, listeners were asked to rate the dissimilarity between melody patterns whose notes differed on the four physical properties simultaneously. Multidimensional scaling analysis transformed the dissimilarity ratings into perceptual distances. Regression between physical and perceptual cues then derived the minimal perceptual distance needed to segregate the melody. The most efficient streaming cue for CI users was loudness. Compared with normal-hearing listeners without musical backgrounds, CI users needed a greater difference on the perceptual dimension correlated with the temporal envelope for stream segregation. No differences in streaming efficiency were found between the perceptual dimensions linked to the F0 and the spectral envelope. Combined with our previous results in normally-hearing musicians and non-musicians, these results show that differences in training, as well as differences in peripheral auditory processing (hearing impairment and the use of a hearing device), influence the way listeners use different acoustic cues for segregating interleaved musical streams.
Affiliation(s)
- Jeremy Marozeau
- Department of Medical Bionics, University of Melbourne, Melbourne, VIC, Australia; Bionics Institute, Melbourne, VIC, Australia
14
Maarefvand M, Marozeau J, Blamey PJ. A cochlear implant user with exceptional musical hearing ability. Int J Audiol 2013; 52:424-32. [PMID: 23509878] [DOI: 10.3109/14992027.2012.762606]
Abstract
Although the perception of music is generally poor in cochlear implant users, there are a few excellent performers.
Objective
The aim of this study was to assess different aspects of music perception in one exceptional cochlear implant user.
Design
The assessments included pitch direction discrimination, melody and timbre recognition, relative and absolute pitch judgment, and consonance rating of musical notes presented through the sound processor(s).
Study Sample
An adult cochlear implant user with a musical background who lost her hearing postlingually, and five normally-hearing listeners with musical training, participated in the study.
Results
The CI user discriminated pitch direction for sounds differing by one semitone and recognized melodies with nearly 100% accuracy. Her results in timbre recognition were better than average published data for cochlear implant users. Her consonance ratings and relative and absolute pitch perception were comparable to those of normally-hearing listeners with musical training.
Conclusion
The results of this study show that excellent performance on musical perception tasks, including pitch perception, is possible with present-day cochlear implant technologies. Factors that may explain this user's exceptional performance are the short duration of deafness, pre- and post-deafness musical training, and perfect pitch abilities before the onset of deafness.
15
Reaction times reflect subjective auditory perception of tone sequences in macaque monkeys. Hear Res 2012; 294:133-42. [PMID: 22990003] [DOI: 10.1016/j.heares.2012.08.014]
Abstract
Perceptually ambiguous stimuli are useful for testing psychological and neuronal models of perceptual organization, e.g. for studying the brain processes that underlie sequential segregation and integration, because the same stimulus can give rise to different subjective experiences. For humans, a tone sequence that alternates between a low-frequency and a high-frequency tone is perceptually bistable and can be perceived as one or two streams. In the current study we present a new method based on response times (RTs) that allows the identification of ambiguous and unambiguous stimuli in subjects who cannot verbally report their subjective experience. We required two macaque monkeys (Macaca fascicularis) to detect the termination of a sequence of light flashes that was either presented alone or synchronized in different ways with a sequence of alternating low and high tones. We found that the monkeys responded faster to the termination of the flash sequence when the tone sequence terminated shortly before it and thus predicted its termination. This RT gain depended on the frequency separation of the tones: it was largest when the separation was small and the tones were presumably heard mainly as one stream, smallest when the separation was large and the tones were presumably heard as two streams, and of intermediate size for intermediate separations. Similar results were obtained from human subjects. We conclude that the observed RT gains reflect the perceptual organization of the tone sequence, and that tone sequences with an intermediate frequency separation are perceptually ambiguous for monkeys, as they are for humans.
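The RT-gain measure described above can be sketched in a few lines; all variable names and numbers below are hypothetical illustrations, not data from the study. The gain is simply the reduction in mean reaction time when the tone sequence predicts the end of the flash sequence, computed separately per frequency-separation condition:

```python
from statistics import mean

def rt_gain(rt_flash_alone, rt_with_tones):
    """RT gain (ms): how much faster responses are when the tone
    sequence predicts the termination of the flash sequence."""
    return mean(rt_flash_alone) - mean(rt_with_tones)

# Hypothetical reaction times (ms) for illustration only
rt_alone = [420, 435, 410]
rt_small_df = [350, 360, 345]  # small frequency separation: likely one stream
rt_large_df = [400, 410, 395]  # large frequency separation: likely two streams

print(rt_gain(rt_alone, rt_small_df))  # larger gain (one-stream percept)
print(rt_gain(rt_alone, rt_large_df))  # smaller gain (two-stream percept)
```

In the study's logic, a large gain indicates the tones were integrated into one stream (so their offset cued the flash offset), while a small gain indicates segregation into two streams; intermediate gains mark the perceptually ambiguous region.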
|
16
|
Innes-Brown H, Marozeau J, Blamey P. The effect of visual cues on difficulty ratings for segregation of musical streams in listeners with impaired hearing. PLoS One 2011; 6:e29327. [PMID: 22195046 PMCID: PMC3240656 DOI: 10.1371/journal.pone.0029327] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Received: 03/03/2011] [Accepted: 11/25/2011] [Indexed: 12/03/2022]
Abstract
Background: Enjoyment of music is an important part of life that may be degraded for people with hearing impairments, especially those using cochlear implants. The ability to follow separate lines of melody is an important factor in music appreciation. This ability relies on effective auditory streaming, which is much reduced in people with hearing impairment, contributing to difficulties in music appreciation. The aim of this study was to assess whether visual cues could reduce the subjective difficulty of segregating a melody from interleaved background notes in normally hearing listeners, those using hearing aids, and those using cochlear implants.
Methodology/Principal Findings: Normally hearing listeners (N = 20), hearing aid users (N = 10), and cochlear implant users (N = 11) were asked to rate the difficulty of segregating a repeating four-note melody from random interleaved distracter notes. The pitch of the background notes was gradually increased or decreased throughout blocks, providing a range of difficulty from easy (with a large pitch separation between melody and distracter) to impossible (with the melody and distracter completely overlapping). Visual cues were provided on half the blocks, and difficulty ratings for blocks with and without visual cues were compared between groups. Visual cues reduced the subjective difficulty of extracting the melody from the distracter notes for normally hearing listeners and cochlear implant users, but not hearing aid users.
Conclusion/Significance: Simple visual cues may improve the ability of cochlear implant users to segregate lines of music, thus potentially increasing their enjoyment of music. More research is needed to determine what type of acoustic cues to encode visually in order to optimise the benefits they may provide.
|