1
Kania D, Romaniszyn-Kania P, Tuszy A, Bugdol M, Ledwoń D, Czak M, Turner B, Bibrowicz K, Szurmik T, Pollak A, Mitas AW. Evaluation of physiological response and synchronisation errors during synchronous and pseudosynchronous stimulation trials. Sci Rep 2024; 14:8814. PMID: 38627479; PMCID: PMC11021516; DOI: 10.1038/s41598-024-59477-7.
Abstract
Rhythm perception and synchronisation is a musical ability with a neural basis, defined as the ability to perceive rhythm in music and to synchronise body movements with it. The study aimed to examine synchronisation errors and the physiological response of subjects to metrorhythmic stimuli under synchronous and pseudosynchronous stimulation (apparent synchronisation with an externally controlled rhythm in which the tone is in fact controlled or produced by the subject's own tapping). Nineteen subjects without diagnosed motor disorders participated in the study. Two tests were performed, in which the electromyography signal and reaction time were recorded using the NORAXON system. In addition, physiological signals such as electrodermal activity and blood volume pulse were measured using the Empatica E4. Study 1 consisted of an adaptation of the finger tapping test performed in pseudosynchrony with a given metrorhythmic stimulus at preferred, decreasing, and increasing tempi. Study 2 consisted of metrorhythmic synchronisation during a heel-stomping test. Numerous correlations and statistically significant parameters were found between the subjects' responses and their musical education and their musical and sports activities. Most of the differentiating characteristics showed evidence of a group division according to the undertaking of musical activities. Detailed analyses of synchronisation errors can contribute to the development of methods to improve the rehabilitation of subjects with motor dysfunction, and to the development of an expert system that considers personalised musical preferences.
Affiliation(s)
- Damian Kania
- Institute of Physiotherapy and Health Sciences, The Jerzy Kukuczka Academy of Physical Education in Katowice, Mikołowska 72A, 40-065, Katowice, Poland
- Patrycja Romaniszyn-Kania
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800, Zabrze, Poland
- Aleksandra Tuszy
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800, Zabrze, Poland
- Monika Bugdol
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800, Zabrze, Poland
- Daniel Ledwoń
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800, Zabrze, Poland
- Miroslaw Czak
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800, Zabrze, Poland
- Bruce Turner
- dBs Music, HE Music Faculty, 17 St Thomas St, Redcliffe, Bristol, BS1 6JS, UK
- Karol Bibrowicz
- Science and Research Center of Body Posture, College of Education and Therapy in Poznań, 61-473, Poznań, Poland
- Tomasz Szurmik
- Faculty of Arts and Educational Science, University of Silesia, ul. Bielska 62, 43-400, Cieszyn, Poland
- Anita Pollak
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800, Zabrze, Poland
- Institute of Psychology, University of Silesia, ul. Grazynskiego 53, 40-126, Katowice, Poland
- Andrzej W Mitas
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800, Zabrze, Poland
2
Shorey AE, King CJ, Whiteford KL, Stilp CE. Musical training is not associated with spectral context effects in instrument sound categorization. Atten Percept Psychophys 2024; 86:991-1007. PMID: 38216848; DOI: 10.3758/s13414-023-02839-6.
Abstract
Musicians display a variety of auditory perceptual benefits relative to people with little or no musical training; these benefits are collectively referred to as the "musician advantage." Importantly, musicians consistently outperform nonmusicians for tasks relating to pitch, but there are mixed reports as to musicians outperforming nonmusicians for timbre-related tasks. Due to their experience manipulating the timbre of their instrument or voice in performance, we hypothesized that musicians would be more sensitive to acoustic context effects stemming from the spectral changes in timbre across a musical context passage (played by a string quintet then filtered) and a target instrument sound (French horn or tenor saxophone; Experiment 1). Additionally, we investigated the role of a musician's primary instrument of instruction by recruiting French horn and tenor saxophone players to also complete this task (Experiment 2). Consistent with the musician advantage literature, musicians exhibited superior pitch discrimination to nonmusicians. Contrary to our main hypothesis, there was no difference between musicians and nonmusicians in how spectral context effects shaped instrument sound categorization. Thus, musicians may only outperform nonmusicians for some auditory skills relevant to music (e.g., pitch perception) but not others (e.g., timbre perception via spectral differences).
Affiliation(s)
- Anya E Shorey
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY, 40292, USA
- Caleb J King
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY, 40292, USA
- Kelly L Whiteford
- Department of Psychology, University of Minnesota, Minneapolis, MN, 55455, USA
- Christian E Stilp
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY, 40292, USA
3
Couvignou M, Tillmann B, Caclin A, Kolinsky R. Do developmental dyslexia and congenital amusia share underlying impairments? Child Neuropsychol 2023; 29:1294-1340. PMID: 36606656; DOI: 10.1080/09297049.2022.2162031.
Abstract
Developmental dyslexia and congenital amusia have common characteristics. Yet, their possible association in some individuals has been addressed only scarcely. Recently, two converging studies reported a sizable comorbidity rate between these two neurodevelopmental disorders (Couvignou et al., Cognitive Neuropsychology 2019; Couvignou & Kolinsky, Neuropsychologia 2021). However, the reason for their association remains unclear. Here, we investigate the hypothesis of shared underlying impairments between dyslexia and amusia. Fifteen dyslexic children with amusia (DYS+A), 15 dyslexic children without amusia (DYS-A), and two groups of 25 typically developing children matched on either chronological age (CA) or reading level (RL) were assessed with a behavioral battery aiming to investigate phonological and pitch processing capacities at auditory memory, perceptual awareness, and attentional levels. Overall, our results suggest that poor auditory serial-order memory increases susceptibility to comorbidity between dyslexia and amusia and may play a role in the development of the comorbid phenotype. In contrast, the impairments observed in the DYS+A children for auditory item memory, perceptual awareness, and attention might be a consequence of their reduced reading experience combined with weaker musical skills. Comparing DYS+A and DYS-A children suggests that the latter are more resourceful and/or have more effective compensatory strategies, or that their phenotype results from a different developmental trajectory. We will discuss the relevance of these findings for delving into the etiology of these two developmental disorders and address their implications for future research and practice.
Affiliation(s)
- Manon Couvignou
- Unité de Recherche en Neurosciences Cognitives (Unescog), Center for Research in Cognition & Neurosciences (CRCN), Université Libre de Bruxelles (ULB), Brussels, Belgium
- Barbara Tillmann
- Lyon Neuroscience Research Center, CNRS, UMR 5292, INSERM, U1028, Lyon, France
- University Lyon 1, Lyon, France
- Anne Caclin
- Lyon Neuroscience Research Center, CNRS, UMR 5292, INSERM, U1028, Lyon, France
- University Lyon 1, Lyon, France
- Régine Kolinsky
- Unité de Recherche en Neurosciences Cognitives (Unescog), Center for Research in Cognition & Neurosciences (CRCN), Université Libre de Bruxelles (ULB), Brussels, Belgium
- Fonds de la Recherche Scientifique-FNRS (FRS-FNRS), Brussels, Belgium
4
Brown JA, Bidelman GM. Attention, Musicality, and Familiarity Shape Cortical Speech Tracking at the Musical Cocktail Party. bioRxiv [Preprint] 2023:2023.10.28.562773. PMID: 37961204; PMCID: PMC10634879; DOI: 10.1101/2023.10.28.562773.
Abstract
The "cocktail party problem" challenges our ability to understand speech in noisy environments, which often include background music. Here, we explored the role of background music in speech-in-noise listening. Participants listened to an audiobook in familiar and unfamiliar music while tracking keywords in either speech or song lyrics. We used EEG to measure neural tracking of the audiobook. When speech was masked by music, the modeled peak latency at 50 ms (P1TRF) was prolonged compared to unmasked. Additionally, P1TRF amplitude was larger in unfamiliar background music, suggesting improved speech tracking. We observed prolonged latencies at 100 ms (N1TRF) when speech was not the attended stimulus, though only in less musical listeners. Our results suggest early neural representations of speech are enhanced with both attention and concurrent unfamiliar music, indicating familiar music is more distracting. One's ability to perceptually filter "musical noise" at the cocktail party depends on objective musical abilities.
Affiliation(s)
- Jane A. Brown
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
- Institute for Intelligent Systems, University of Memphis, Memphis, TN 38152, USA
- Gavin M. Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
- Cognitive Science Program, Indiana University, Bloomington, IN, USA
5
Brown JA, Bidelman GM. Familiarity of Background Music Modulates the Cortical Tracking of Target Speech at the "Cocktail Party". Brain Sci 2022; 12:1320. PMID: 36291252; PMCID: PMC9599198; DOI: 10.3390/brainsci12101320.
Abstract
The "cocktail party" problem-how a listener perceives speech in noisy environments-is typically studied using speech (multi-talker babble) or noise maskers. However, realistic cocktail party scenarios often include background music (e.g., coffee shops, concerts). Studies investigating music's effects on concurrent speech perception have predominantly used highly controlled synthetic music or shaped noise, which do not reflect naturalistic listening environments. Behaviorally, familiar background music and songs with vocals/lyrics inhibit concurrent speech recognition. Here, we investigated the neural bases of these effects. While recording multichannel EEG, participants listened to an audiobook while popular songs (or silence) played in the background at a 0 dB signal-to-noise ratio. Songs were either familiar or unfamiliar to listeners and featured either vocals or isolated instrumentals from the original audio recordings. Comprehension questions probed task engagement. We used temporal response functions (TRFs) to isolate cortical tracking to the target speech envelope and analyzed neural responses around 100 ms (i.e., auditory N1 wave). We found that speech comprehension was, expectedly, impaired during background music compared to silence. Target speech tracking was further hindered by the presence of vocals. When masked by familiar music, response latencies to speech were less susceptible to informational masking, suggesting concurrent neural tracking of speech was easier during music known to the listener. These differential effects of music familiarity were further exacerbated in listeners with less musical ability. Our neuroimaging results and their dependence on listening skills are consistent with early attentional-gain mechanisms where familiar music is easier to tune out (listeners already know the song's expectancies) and thus can allocate fewer attentional resources to the background music to better monitor concurrent speech material.
Affiliation(s)
- Jane A. Brown
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN 38152, USA
- Institute for Intelligent Systems, University of Memphis, Memphis, TN 38152, USA
- Gavin M. Bidelman
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN 38152, USA
- Institute for Intelligent Systems, University of Memphis, Memphis, TN 38152, USA
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN 47408, USA
- Program in Neuroscience, Indiana University, Bloomington, IN 47405, USA
6
Zendel BR. The importance of the motor system in the development of music-based forms of auditory rehabilitation. Ann N Y Acad Sci 2022; 1515:10-19. PMID: 35648040; DOI: 10.1111/nyas.14810.
Abstract
Hearing abilities decline with age, and one of the most commonly reported hearing issues in older adults is a difficulty understanding speech when there is loud background noise. Understanding speech in noise relies on numerous cognitive processes, including working memory, and is supported by numerous brain regions, including the motor and motor planning systems. Indeed, many working memory processes are supported by motor and premotor cortical regions. Interestingly, lifelong musicians and nonmusicians given music training over the course of weeks or months show an improved ability to understand speech when there is loud background noise. These benefits are associated with enhanced working memory abilities, and enhanced activity in motor and premotor cortical regions. Accordingly, it is likely that music training improves the coupling between the auditory and motor systems and promotes plasticity in these regions and regions that feed into auditory/motor areas. This leads to an enhanced ability to dynamically process incoming acoustic information, and is likely the reason that musicians and those who receive laboratory-based music training are better able to understand speech when there is background noise. Critically, these findings suggest that music-based forms of auditory rehabilitation are possible and should focus on tasks that promote auditory-motor interactions.
Affiliation(s)
- Benjamin Rich Zendel
- Faculty of Medicine, Memorial University of Newfoundland, St. John's, Newfoundland and Labrador, Canada
- Aging Research Centre - Newfoundland and Labrador, Grenfell Campus, Memorial University, Corner Brook, Newfoundland and Labrador, Canada
7
Del Solar Dorrego F, Vigeant MC. A study of the just noticeable difference of early decay time for symphonic halls. J Acoust Soc Am 2022; 151:80. PMID: 35105034; DOI: 10.1121/10.0009167.
Abstract
The just noticeable differences (JNDs) of room acoustic parameters are important for the design of concert halls and for room acoustics research in general. Precise knowledge of JNDs helps the concert hall designer assess the impact that changes in the geometry or materials of the hall will have on its perceived acoustics. When designing a concert hall, creating an appropriate feeling of reverberance for the audience is of prime importance. The early decay time (EDT) parameter has proved to be a better predictor of the perception of reverberance than the classical reverberation time (T30), but no studies have been conducted to specifically determine the EDT JND. In the present study, the EDT JND was investigated for broadband conditions and assessed for individual frequency ranges. A subjective study was conducted with 26 musically trained subjects, of whom 21 were considered reliable. The participants listened to orchestral music convolved with measured spatial room impulse responses from three concert halls. The stimuli were auralized in an anechoic chamber using third-order Ambisonic reproduction. The obtained values show that the JNDs for the broadband conditions are lower than those for the individual frequency ranges. The EDT JND for the broadband conditions was found to be approximately 18% of the EDT value.
Affiliation(s)
- Fernando Del Solar Dorrego
- Graduate Program in Acoustics, The Pennsylvania State University, University Park, Pennsylvania 16802, USA
- Michelle C Vigeant
- Graduate Program in Acoustics, The Pennsylvania State University, University Park, Pennsylvania 16802, USA
8
Zhang M, Denison RN, Pelli DG, Le TTC, Ihlefeld A. An auditory-visual tradeoff in susceptibility to clutter. Sci Rep 2021; 11:23540. PMID: 34876580; PMCID: PMC8651672; DOI: 10.1038/s41598-021-00328-0.
Abstract
Sensory cortical mechanisms combine auditory or visual features into perceived objects. This is difficult in noisy or cluttered environments. Knowing that individuals vary greatly in their susceptibility to clutter, we wondered whether there might be a relation between an individual's auditory and visual susceptibilities to clutter. In auditory masking, background sound makes spoken words unrecognizable. When masking arises due to interference at central auditory processing stages, beyond the cochlea, it is called informational masking. A strikingly similar phenomenon in vision, called visual crowding, occurs when nearby clutter makes a target object unrecognizable, despite being resolved at the retina. We here compare susceptibilities to auditory informational masking and visual crowding in the same participants. Surprisingly, across participants, we find a negative correlation (R = -0.7) between susceptibility to informational masking and crowding: Participants who have low susceptibility to auditory clutter tend to have high susceptibility to visual clutter, and vice versa. This reveals a tradeoff in the brain between auditory and visual processing.
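The R = -0.7 reported here is a standard Pearson correlation computed across participants' susceptibility scores in the two modalities. As an illustrative sketch only (the function and variable names are hypothetical, not the authors' code):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two sets of per-participant scores."""
    x = np.asarray(x, dtype=float) - np.mean(x)  # center each variable
    y = np.asarray(y, dtype=float) - np.mean(y)
    # Covariance divided by the product of standard deviations
    return float(np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y)))
```

A negative value, as in the abstract, means participants scoring high on one clutter measure tend to score low on the other.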
Affiliation(s)
- Min Zhang
- Department of Biomedical Engineering, New Jersey Institute of Technology, Newark, NJ, USA
- Department of Biomedical Engineering, Rutgers New Jersey Medical School, Newark, NJ, USA
- Rachel N Denison
- Department of Psychology, Boston University, Boston, MA, USA
- Denis G Pelli
- Department of Psychology, New York University, New York, NY, USA
- Thuy Tien C Le
- Department of Biomedical Engineering, New Jersey Institute of Technology, Newark, NJ, USA
- Department of Biomedical Engineering, Rutgers New Jersey Medical School, Newark, NJ, USA
- Antje Ihlefeld
- Department of Biomedical Engineering, New Jersey Institute of Technology, Newark, NJ, USA
9
Symons AE, Dick F, Tierney AT. Dimension-selective attention and dimensional salience modulate cortical tracking of acoustic dimensions. Neuroimage 2021; 244:118544. PMID: 34492294; DOI: 10.1016/j.neuroimage.2021.118544.
Abstract
Some theories of auditory categorization suggest that auditory dimensions that are strongly diagnostic for particular categories - for instance voice onset time or fundamental frequency in the case of some spoken consonants - attract attention. However, prior cognitive neuroscience research on auditory selective attention has largely focused on attention to simple auditory objects or streams, and so little is known about the neural mechanisms that underpin dimension-selective attention, or how the relative salience of variations along these dimensions might modulate neural signatures of attention. Here we investigate whether dimensional salience and dimension-selective attention modulate the cortical tracking of acoustic dimensions. In two experiments, participants listened to tone sequences varying in pitch and spectral peak frequency; these two dimensions changed at different rates. Inter-trial phase coherence (ITPC) and amplitude of the EEG signal at the frequencies tagged to pitch and spectral changes provided a measure of cortical tracking of these dimensions. In Experiment 1, tone sequences varied in the size of the pitch intervals, while the size of spectral peak intervals remained constant. Cortical tracking of pitch changes was greater for sequences with larger compared to smaller pitch intervals, with no difference in cortical tracking of spectral peak changes. In Experiment 2, participants selectively attended to either pitch or spectral peak. Cortical tracking was stronger in response to the attended compared to unattended dimension for both pitch and spectral peak. These findings suggest that attention can enhance the cortical tracking of specific acoustic dimensions rather than simply enhancing tracking of the auditory object as a whole.
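Inter-trial phase coherence (ITPC), the frequency-tagging measure used in this abstract, is the length of the mean resultant vector of per-trial phase angles at the tagged frequency bin. A minimal sketch of the computation (the function name is illustrative; the authors' analysis pipeline is not specified here):

```python
import numpy as np

def itpc(phases):
    """Inter-trial phase coherence at one frequency bin.
    phases: array of per-trial phase angles in radians.
    Returns 1.0 for perfectly consistent phase across trials,
    and values near 0 for random phase."""
    # Project each trial's phase onto the unit circle and average
    return float(np.abs(np.mean(np.exp(1j * np.asarray(phases)))))
```

In a frequency-tagging design, the phases would typically come from an FFT of each trial's EEG evaluated at the pitch-change or spectral-change rate.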
Affiliation(s)
- Ashley E Symons
- Department of Psychological Sciences, Birkbeck College, University of London, UK
- Fred Dick
- Department of Psychological Sciences, Birkbeck College, University of London, UK
- Division of Psychology & Language Sciences, University College London, UK
- Adam T Tierney
- Department of Psychological Sciences, Birkbeck College, University of London, UK
10
Individual differences in mental imagery in different modalities and levels of intentionality. Mem Cognit 2021; 50:29-44. PMID: 34462893; PMCID: PMC8763825; DOI: 10.3758/s13421-021-01209-7.
Abstract
Mental imagery is a highly common component of everyday cognitive functioning. While substantial progress is being made in clarifying this fundamental human function, much is still unclear or unknown. A more comprehensive account of mental imagery aspects would be gained by examining individual differences in age, sex, and background experience in an activity and their association with imagery in different modalities and intentionality levels. The current online study combined multiple imagery self-report measures in a sample (n = 279) with a substantial age range (18-65 years), aiming to identify whether age, sex, or background experience in sports, music, or video games were associated with aspects of imagery in the visual, auditory, or motor stimulus modality and voluntary or involuntary intentionality level. The findings show weak positive associations between age and increased vividness of voluntary auditory imagery and decreased involuntary musical imagery frequency, weak associations between being female and more vivid visual imagery, and relations of greater music and video game experience with higher involuntary musical imagery frequency. Moreover, all imagery stimulus modalities were associated with each other, for both intentionality levels, except involuntary musical imagery frequency, which was only related to higher voluntary auditory imagery vividness. These results replicate previous research but also contribute new insights, showing that individual differences in age, sex, and background experience are associated with various aspects of imagery such as modality, intentionality, vividness, and frequency. The study's findings can inform the growing domain of applications of mental imagery to clinical and pedagogical settings.
11
Abstract
OBJECTIVES Speech-in-noise (SIN) perception is essential for everyday communication. In most communication situations, the listener must process simultaneous complex auditory signals to understand the target speech or target sound. As the listening situation becomes more difficult, the ability to distinguish between speech and noise becomes dependent on recruiting additional cognitive resources, such as working memory (WM). Previous studies have explored correlations between WM and SIN perception in musicians and nonmusicians, with mixed findings. However, no study to date has examined the speech perception abilities of musicians and nonmusicians with similar WM capacity. The objectives of this study were to investigate (1) whether musical experience results in improved listening in adverse listening situations, and (2) whether the benefit of musical experience can be separated from the effect of greater WM capacity. DESIGN Forty-nine young musicians and nonmusicians were assigned to subgroups of high versus low WM based on performance on the backward digit span test. To investigate the effects of music training and WM on SIN perception, performance was assessed on clinical tests of speech perception in background noise. Listening effort (LE) was assessed in a dual-task paradigm and via self-report. We hypothesized that musicians would have an advantage when listening to SIN, at least in terms of reduced LE. RESULTS There was no statistically significant difference between musicians and nonmusicians, and no significant interaction between music training and WM, on any of the outcome measures used in this study. However, a significant effect of WM on SIN ability was found on both the Quick Speech-in-Noise (QuickSIN) test and the Hearing in Noise Test (HINT).
CONCLUSION The results of this experiment suggest that music training does not provide an advantage in adverse listening situations either in terms of improved speech understanding or reduced LE. While musicians have been shown to have heightened basic auditory abilities, the effect on SIN performance may be more subtle. Our results also show that regardless of prior music training, listeners with high WM capacity are able to perform significantly better on speech-in-noise tasks.
12
Zhang M, Alamatsaz N, Ihlefeld A. Hemodynamic Responses Link Individual Differences in Informational Masking to the Vicinity of Superior Temporal Gyrus. Front Neurosci 2021; 15:675326. PMID: 34366772; PMCID: PMC8339305; DOI: 10.3389/fnins.2021.675326.
Abstract
Suppressing unwanted background sound is crucial for aural communication. A particularly disruptive type of background sound, informational masking (IM), often interferes in social settings. However, IM mechanisms are incompletely understood. At present, IM is identified operationally: when a target should be audible, based on suprathreshold target/masker energy ratios, yet cannot be heard because target-like background sound interferes. We here confirm that speech identification thresholds differ dramatically between low- vs. high-IM background sound. However, speech detection thresholds are comparable across the two conditions. Moreover, functional near infrared spectroscopy recordings show that task-evoked blood oxygenation changes near the superior temporal gyrus (STG) covary with behavioral speech detection performance for high-IM but not low-IM background sound, suggesting that the STG is part of an IM-dependent network. Moreover, listeners who are more vulnerable to IM show increased hemodynamic recruitment near STG, an effect that cannot be explained based on differences in task difficulty across low- vs. high-IM. In contrast, task-evoked responses near another auditory region of cortex, the caudal inferior frontal sulcus (cIFS), do not predict behavioral sensitivity, suggesting that the cIFS belongs to an IM-independent network. Results are consistent with the idea that cortical gating shapes individual vulnerability to IM.
Affiliation(s)
- Min Zhang
- Department of Biomedical Engineering, New Jersey Institute of Technology, Newark, NJ, United States
- Rutgers Biomedical and Health Sciences, Rutgers University, Newark, NJ, United States
- Nima Alamatsaz
- Department of Biomedical Engineering, New Jersey Institute of Technology, Newark, NJ, United States
- Rutgers Biomedical and Health Sciences, Rutgers University, Newark, NJ, United States
- Antje Ihlefeld
- Department of Biomedical Engineering, New Jersey Institute of Technology, Newark, NJ, United States
13
Jennings SG. The role of the medial olivocochlear reflex in psychophysical masking and intensity resolution in humans: a review. J Neurophysiol 2021; 125:2279-2308. PMID: 33909513; PMCID: PMC8285664; DOI: 10.1152/jn.00672.2020.
Abstract
This review addresses the putative role of the medial olivocochlear (MOC) reflex in psychophysical masking and intensity resolution in humans. A framework for interpreting psychophysical results in terms of the expected influence of the MOC reflex is introduced. This framework is used to review the effects of a precursor or contralateral acoustic stimulation on 1) simultaneous masking of brief tones, 2) behavioral estimates of cochlear gain and frequency resolution in forward masking, 3) the buildup and decay of forward masking, and 4) measures of intensity resolution. Support, or lack thereof, for a role of the MOC reflex in psychophysical perception is discussed in terms of studies on estimates of MOC strength from otoacoustic emissions and the effects of resection of the olivocochlear bundle in patients with vestibular neurectomy. Novel, innovative approaches are needed to resolve the dissatisfying conclusion that current results are unable to definitively confirm or refute the role of the MOC reflex in masking and intensity resolution.
Affiliation(s)
- Skyler G Jennings
- Department of Communication Sciences and Disorders, The University of Utah, Salt Lake City, Utah
14
Liu Y, Xu R, Gong Q. Human Auditory-Frequency Tuning Is Sensitive to Tonal Language Experience. J Speech Lang Hear Res 2020; 63:4277-4288. PMID: 33151817; DOI: 10.1044/2020_jslhr-20-00152.
Abstract
Purpose The aim of this study is to investigate whether human auditory frequency tuning can be influenced by tonal language experience. Method Perceptual tuning measured via psychophysical tuning curves and cochlear tuning derived via stimulus-frequency otoacoustic emission suppression tuning curves in 14 native speakers of a tonal language (Mandarin) were compared to those of 14 native speakers of a nontonal language (English) at 1 and 4 kHz. Results Group comparisons of both psychophysical tuning curves (p = .046) and stimulus-frequency otoacoustic emission suppression tuning curves (p = .007) in the 4-kHz region indicated sharper frequency tuning in the Mandarin-speaking group relative to the English-speaking group. The auditory tuning was better at the higher (4 kHz) than the lower (1 kHz) probe frequencies (p < .001). Conclusions The sharper auditory tuning in the 4-kHz cochlear region is associated with long-term tonal language (i.e., Mandarin) experience. Experience-dependent plasticity of tonal language may occur before the sound signal reaches central neural stages, as peripheral as the cochlea.
Affiliation(s)
- Yin Liu
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Runyi Xu
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Qin Gong
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- School of Medicine, Shanghai University, China
15
Anderson SR, Glickman B, Oh Y, Reiss LAJ. Binaural pitch fusion: Effects of sound level in listeners with normal hearing. Hear Res 2020; 396:108067. PMID: 32961518; DOI: 10.1016/j.heares.2020.108067.
Abstract
Pitch is an important cue that allows the auditory system to distinguish between sound sources. Pitch cues are less useful when listeners are not able to discriminate different pitches between the two ears, a problem encountered by listeners with hearing impairment (HI). Many listeners with HI will fuse the pitch of two dichotically presented tones over a larger range of interaural frequency disparities, i.e., have a broader fusion range, than listeners with normal hearing (NH). One potential explanation for broader fusion in listeners with HI is that hearing aids stimulate at high sound levels. The present study investigated effects of overall sound levels on pitch fusion in listeners with NH. It was hypothesized that if sound level increased, then fusion range would increase. Fusion ranges were measured by presenting a fixed frequency tone to a reference ear simultaneously with a variable frequency tone to the opposite ear and finding the range of frequencies that were fused with the reference frequency. No significant effects of sound level (comfortable level ± 15 dB) on fusion range were found, even when tested within the range of levels where some listeners with HI show large fusion ranges. Results suggest that increased sound level does not explain increased fusion range in listeners with HI and imply that other factors associated with hearing loss might play a larger role.
Affiliation(s)
- Sean R Anderson
- Oregon Health and Science University, Portland, OR 97239, United States.
- Bess Glickman
- Oregon Health and Science University, Portland, OR 97239, United States
- Yonghee Oh
- Oregon Health and Science University, Portland, OR 97239, United States
- Lina A J Reiss
- Oregon Health and Science University, Portland, OR 97239, United States
16
Tarnowska E, Wicher A, Moore BCJ. No Influence of Musicianship on the Effect of Contralateral Stimulation on Frequency Selectivity. Trends Hear 2020; 24:2331216520939776. PMID: 32840175; PMCID: PMC7450455; DOI: 10.1177/2331216520939776.
Abstract
The efferent system may control the gain of the cochlea and thereby influence frequency selectivity. This effect can be assessed using contralateral stimulation (CS) applied to the ear opposite to that used to assess frequency selectivity. The effect of CS may be stronger for musicians than for nonmusicians. To assess whether this was the case, psychophysical tuning curves (PTCs) were compared for 12 musicians and 12 nonmusicians. The PTCs were measured with and without a 60-dB sound pressure level (SPL) pink-noise CS, using signal frequencies of 2 and 4 kHz. The sharpness of the PTCs was quantified using the measure Q10, the signal frequency divided by the PTC bandwidth measured 10 dB above the level at the tip. Q10 values were lower in the presence of the CS, but this effect did not differ significantly for musicians and nonmusicians. The main effect of group (musicians vs. nonmusicians) on the Q10 values was not significant. Overall, these results do not support the idea that musicianship enhances contralateral efferent gain control as measured using the effect of CS on PTCs.
Affiliation(s)
- Emilia Tarnowska
- Chair of Acoustics, Faculty of Physics, Adam Mickiewicz University, Poznań, Poland
- Andrzej Wicher
- Chair of Acoustics, Faculty of Physics, Adam Mickiewicz University, Poznań, Poland
17
Yashaswini L, Maruthy S. Effect of Music Training on Categorical Perception of Speech and Music. J Audiol Otol 2020; 24:140-148. PMID: 32575954; PMCID: PMC7364187; DOI: 10.7874/jao.2019.00500.
Abstract
Background and Objectives The aim of this study is to evaluate the effect of music training on the characteristics of auditory perception of speech and music. The perception of speech and music stimuli was assessed across their respective stimulus continua, and the resultant plots were compared between musicians and non-musicians. Subjects and Methods Thirty musicians with formal music training and twenty-seven non-musicians participated in the study (age: 20 to 30 years). They were assessed for identification of consonant-vowel syllables (/da/ to /ga/), vowels (/u/ to /a/), a vocal music note (/ri/ to /ga/), and an instrumental music note (/ri/ to /ga/) across their respective stimulus continua. The continua contained 15 tokens with equal step size between any adjacent tokens. The resultant identification scores were plotted against each token and analyzed for the presence of a categorical boundary. If a categorical boundary was found, the plots were analyzed for six parameters of categorical perception: the point of 50% crossover, the lower edge of the categorical boundary, the upper edge of the categorical boundary, the phoneme boundary width, the slope, and the intercepts. Results Overall, the results showed that both speech and music are perceived differently by musicians and non-musicians. In musicians, both speech and music are perceived categorically, while in non-musicians, only speech is perceived categorically. Conclusions The findings of the present study indicate that music is perceived categorically by musicians, even if the stimulus is devoid of vocal tract features. The findings support the view that categorical perception is strongly influenced by training, and the results are discussed in light of the motor theory of speech perception.
18
Laffere A, Dick F, Tierney A. Effects of auditory selective attention on neural phase: individual differences and short-term training. Neuroimage 2020; 213:116717. PMID: 32165265; DOI: 10.1016/j.neuroimage.2020.116717.
Abstract
How does the brain follow a sound that is mixed with others in a noisy environment? One possible strategy is to allocate attention to task-relevant time intervals. Prior work has linked auditory selective attention to alignment of neural modulations with stimulus temporal structure. However, since this prior research used relatively easy tasks and focused on analysis of main effects of attention across participants, relatively little is known about the neural foundations of individual differences in auditory selective attention. Here we investigated individual differences in auditory selective attention by asking participants to perform a 1-back task on a target auditory stream while ignoring a distractor auditory stream presented 180° out of phase. Neural entrainment to the attended auditory stream was strongly linked to individual differences in task performance. Some variability in performance was accounted for by degree of musical training, suggesting a link between long-term auditory experience and auditory selective attention. To investigate whether short-term improvements in auditory selective attention are possible, we gave participants 2 h of auditory selective attention training and found improvements in both task performance and enhancements of the effects of attention on neural phase angle. Our results suggest that although there exist large individual differences in auditory selective attention and attentional modulation of neural phase angle, this skill improves after a small amount of targeted training.
Affiliation(s)
- Aeron Laffere
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK
- Fred Dick
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK
- Division of Psychology & Language Sciences, UCL, Gower Street, London, WC1E 6BT, UK
- Adam Tierney
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK
19
Moore BCJ, Wan J, Varathanathan A, Naddell S, Baer T. No Effect of Musical Training on Frequency Selectivity Estimated Using Three Methods. Trends Hear 2019; 23:2331216519841980. PMID: 31081487; DOI: 10.1177/2331216519841980.
Abstract
It is widely believed that the frequency selectivity of the auditory system is largely determined by processes occurring in the cochlea. If so, musical training would not be expected to influence frequency selectivity. Consistent with this, auditory filter shapes for low center frequencies do not differ for musicians and nonmusicians. However, it has been reported that psychophysical tuning curves (PTCs) at 4000 Hz were sharper for musicians than for nonmusicians. This study explored the origin of the discrepancy across studies. Frequency selectivity was estimated for musicians and nonmusicians using three methods: fast PTCs with a masker that swept in frequency, "traditional" PTCs obtained using several fixed masker center frequencies, and the notched-noise method. The signal frequency was 4000 Hz. The data were fitted assuming that each side of the auditory filter had the shape of a rounded-exponential function. The sharpness of the auditory filters, estimated as the Q10 values, did not differ significantly between musicians and nonmusicians for any of the methods, but detection efficiency tended to be higher for the musicians. This is consistent with the idea that musicianship influences auditory proficiency but does not influence the peripheral processes that determine the frequency selectivity of the auditory system.
Affiliation(s)
- Jie Wan
- Department of Psychology, University of Cambridge, UK
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, the Netherlands
- Thomas Baer
- Department of Psychology, University of Cambridge, UK
20
Tarnowska E, Wicher A, Moore BCJ. The effect of musicianship, contralateral noise, and ear of presentation on the detection of changes in temporal fine structure. J Acoust Soc Am 2019; 146:1. PMID: 31370621; DOI: 10.1121/1.5114820.
Abstract
Musicians are better than non-musicians at discriminating changes in the fundamental frequency (F0) of harmonic complex tones. Such discrimination may be based on place cues derived from low resolved harmonics, envelope cues derived from high harmonics, and temporal fine structure (TFS) cues derived from both low and high harmonics. The present study compared the ability of highly trained violinists and non-musicians to discriminate changes in complex sounds that differed primarily in their TFS. The task was to discriminate harmonic (H) and frequency-shifted inharmonic (I) tones that were bandpass filtered such that the components were largely or completely unresolved. The effect of contralateral noise and ear of presentation was also investigated. It was hypothesized that contralateral noise would activate the efferent system, helping to preserve the neural representation of envelope fluctuations in the H and I stimuli, thereby improving their discrimination. Violinists were significantly better than non-musicians at discriminating the H and I tones. However, contralateral noise and ear of presentation had no effect. It is concluded that, compared to non-musicians, violinists have a superior ability to discriminate complex sounds based on their TFS, and this ability is unaffected by contralateral stimulation or ear of presentation.
Affiliation(s)
- Emilia Tarnowska
- Department of Psychoacoustics and Room Acoustics, Institute of Acoustics, Faculty of Physics, Adam Mickiewicz University, Poznań, Umultowska 85, 61-614 Poland
- Andrzej Wicher
- Department of Psychoacoustics and Room Acoustics, Institute of Acoustics, Faculty of Physics, Adam Mickiewicz University, Poznań, Umultowska 85, 61-614 Poland
- Brian C J Moore
- Department of Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom
21
Dick DA, Vigeant MC. An investigation of listener envelopment utilizing a spherical microphone array and third-order ambisonics reproduction. J Acoust Soc Am 2019; 145:2795. PMID: 31046314; DOI: 10.1121/1.5096161.
Abstract
Listener envelopment (LEV), the sense of being surrounded by the sound field, is a perception that has been found to be related to the overall impression of a concert hall. The purpose of this study was to investigate the relationship between the perception of LEV and the direction and arrival time of energy from spatial room impulse responses (IRs). IRs were obtained in a 2000-seat concert hall using a 32-channel spherical microphone array and analyzed using a third-order plane wave decomposition. Additionally, the IRs were convolved with anechoic music and processed for third-order Ambisonic reproductions and presented to subjects over a 30-loudspeaker array. Instances were found in which the energy in the late sound field did not correlate with LEV ratings as well as energy in a 70-100 ms time window. Follow-up listening tests were conducted with hybrid IRs containing portions of an enveloping IR and an unenveloping IR with crossover times ranging from 40 to 140 ms. Additional hybrid IRs were studied wherein portions of the spatial IRs were collapsed into all frontal energy with crossover times ranging from 40 to 120 ms. The tests confirmed that much of the important LEV information exists in the early portion of these IRs.
Affiliation(s)
- David A Dick
- Graduate Program in Acoustics, The Pennsylvania State University, University Park, Pennsylvania 16802, USA
- Michelle C Vigeant
- Graduate Program in Acoustics, The Pennsylvania State University, University Park, Pennsylvania 16802, USA
22
Graves JE, Oxenham AJ. Pitch discrimination with mixtures of three concurrent harmonic complexes. J Acoust Soc Am 2019; 145:2072. PMID: 31046318; PMCID: PMC6469983; DOI: 10.1121/1.5096639.
Abstract
In natural listening contexts, especially in music, it is common to hear three or more simultaneous pitches, but few empirical or theoretical studies have addressed how this is achieved. Place and pattern-recognition theories of pitch require at least some harmonics to be spectrally resolved for pitch to be extracted, but it is unclear how often such conditions exist when multiple complex tones are presented together. In three behavioral experiments, mixtures of three concurrent complexes were filtered into a single bandpass spectral region, and the relationship between the fundamental frequencies and spectral region was varied in order to manipulate the extent to which harmonics were resolved either before or after mixing. In experiment 1, listeners discriminated major from minor triads (a difference of 1 semitone in one note of the triad). In experiments 2 and 3, listeners compared the pitch of a probe tone with that of a subsequent target, embedded within two other tones. All three experiments demonstrated above-chance performance, even in conditions where the combinations of harmonic components were unlikely to be resolved after mixing, suggesting that fully resolved harmonics may not be necessary to extract the pitch from multiple simultaneous complexes.
Affiliation(s)
- Jackson E Graves
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
23
Yoo J, Bidelman GM. Linguistic, perceptual, and cognitive factors underlying musicians' benefits in noise-degraded speech perception. Hear Res 2019; 377:189-195. PMID: 30978607; DOI: 10.1016/j.heares.2019.03.021.
Abstract
Previous studies have reported better speech-in-noise (SIN) recognition in musicians relative to nonmusicians while others have failed to observe this "musician SIN advantage." Here, we aimed to clarify equivocal findings and determine the most relevant perceptual and cognitive factors that do and do not account for musicians' benefits in SIN processing. We measured behavioral performance in musicians and nonmusicians on a battery of SIN recognition, auditory backward masking (a marker of attention), fluid intelligence (IQ), and working memory tasks. We found that musicians outperformed nonmusicians in SIN recognition but also demonstrated better performance in IQ, working memory, and attention. SIN advantages were restricted to more complex speech tasks featuring sentence-level recognition with speech-on-speech masking (i.e., QuickSIN) whereas no group differences were observed in non-speech simultaneous (noise-on-tone) masking. This suggests musicians' advantage is limited to cases where the noise interference is linguistic in nature. Correlations showed SIN scores were associated with working memory, reinforcing the importance of general cognition to degraded speech perception. Lastly, listeners' years of music training predicted auditory attention scores, working memory skills, general fluid intelligence, and SIN perception (i.e., QuickSIN scores), implying that extensive musical training enhances perceptual and cognitive skills. Overall, our results suggest (i) enhanced SIN recognition in musicians is due to improved parsing of competing linguistic signals rather than signal-in-noise extraction, per se, and (ii) cognitive factors (working memory, attention, IQ) at least partially drive musicians' SIN advantages.
Affiliation(s)
- Jessica Yoo
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
- Gavin M Bidelman
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA
- University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA
24
Coffey EBJ, Arseneau-Bruneau I, Zhang X, Zatorre RJ. The Music-In-Noise Task (MINT): A Tool for Dissecting Complex Auditory Perception. Front Neurosci 2019; 13:199. PMID: 30930734; PMCID: PMC6427094; DOI: 10.3389/fnins.2019.00199.
Abstract
The ability to segregate target sounds in noisy backgrounds is relevant both to neuroscience and to clinical applications. Recent research suggests that hearing-in-noise (HIN) problems are solved using combinations of sub-skills that are applied according to task demand and information availability. While evidence is accumulating for a musician advantage in HIN, the exact nature of the reported training effect is not fully understood. Existing HIN tests focus on tasks requiring understanding of speech in the presence of competing sound. Because visual, spatial and predictive cues are not systematically considered in these tasks, few tools exist to investigate the most relevant components of cognitive processes involved in stream segregation. We present the Music-In-Noise Task (MINT) as a flexible tool to expand HIN measures beyond speech perception, and for addressing research questions pertaining to the relative contributions of HIN sub-skills, inter-individual differences in their use, and their neural correlates. The MINT uses a match-mismatch trial design: in four conditions (Baseline, Rhythm, Spatial, and Visual) subjects first hear a short instrumental musical excerpt embedded in an informational masker of "multi-music" noise, followed by either a matching or scrambled repetition of the target musical excerpt presented in silence; the four conditions differ according to the presence or absence of additional cues. In a fifth condition (Prediction), subjects hear the excerpt in silence as a target first, which helps to anticipate incoming information when the target is embedded in masking sound. Data from samples of young adults show that the MINT has good reliability and internal consistency, and demonstrate selective benefits of musicianship in the Prediction, Rhythm, and Visual subtasks. We also report a performance benefit of multilingualism that is separable from that of musicianship. Average MINT scores were correlated with scores on a sentence-in-noise perception task, but only accounted for a relatively small percentage of the variance, indicating that the MINT is sensitive to additional factors and can provide a complement and extension of speech-based tests for studying stream segregation. A customizable version of the MINT is made available for use and extension by the scientific community.
Affiliation(s)
- Emily B. J. Coffey
- Department of Psychology, Concordia University, Montreal, QC, Canada
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), Montreal, QC, Canada
- Isabelle Arseneau-Bruneau
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), Montreal, QC, Canada
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Xiaochen Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Robert J. Zatorre
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), Montreal, QC, Canada
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada
25
Morse-Fortier C, Parrish MM, Baran JA, Freyman RL. The Effects of Musical Training on Speech Detection in the Presence of Informational and Energetic Masking. Trends Hear 2017; 21:2331216517739427. PMID: 29161982; PMCID: PMC5703091; DOI: 10.1177/2331216517739427.
Abstract
Recent research has suggested that musicians have an advantage in some speech-in-noise paradigms, but not all. Whether musicians outperform nonmusicians on a given speech-in-noise task may well depend on the type of noise involved. To date, few groups have specifically studied the role that informational masking plays in the observation of a musician advantage. The current study investigated the effect of musicianship on listeners’ ability to overcome informational versus energetic masking of speech. Monosyllabic words were presented in four conditions that created similar energetic masking but either high or low informational masking. Two of these conditions used noise-vocoded target and masking stimuli to determine whether the absence of natural fine structure and spectral variations influenced any musician advantage. Forty young normal-hearing listeners (20 musicians and 20 nonmusicians) completed the study. There was a significant overall effect of participant group collapsing across the four conditions; however, planned comparisons showed musicians’ thresholds were only significantly better in the high informational masking natural speech condition, where the musician advantage was approximately 3 dB. These results add to the mounting evidence that informational masking plays a role in the presence and amount of musician benefit.
Affiliation(s)
- Mary M Parrish
- Department of Communication Disorders, University of Massachusetts Amherst, MA, USA
- Jane A Baran
- Department of Communication Disorders, University of Massachusetts Amherst, MA, USA
- Richard L Freyman
- Department of Communication Disorders, University of Massachusetts Amherst, MA, USA
26
Moore BCJ, Mariathasan S, Sęk AP. Effects of Age and Hearing Loss on the Discrimination of Amplitude and Frequency Modulation for 2- and 10-Hz Rates. Trends Hear 2019; 23:2331216519853963. PMID: 31250705; PMCID: PMC6600487; DOI: 10.1177/2331216519853963.
Abstract
Detection of frequency modulation (FM) with rate = 10 Hz may depend on conversion of FM to amplitude modulation (AM) in the cochlea, while detection of 2-Hz FM may depend on the use of temporal fine structure (TFS) information. TFS processing may worsen with greater age and hearing loss while AM processing probably does not. A two-stage experiment was conducted to test these ideas while controlling for the effects of detection efficiency. Stage 1 measured psychometric functions for the detection of AM alone and FM alone imposed on a 1-kHz carrier, using 2- and 10-Hz rates. Stage 2 assessed the discrimination of AM from FM at the same modulation rate when the detectability of the AM alone and FM alone was equated. Discrimination was better for the 2-Hz than for the 10-Hz rate for all young normal-hearing subjects and for some older subjects with normal hearing at 1 kHz. Other older subjects with normal hearing showed no clear difference in AM-FM discrimination for the 2- and 10-Hz rates, as was the case for most older hearing-impaired subjects. The results suggest that the ability to use TFS cues is reduced for some older people and most hearing-impaired people.
Affiliation(s)
- Brian C. J. Moore
- Department of Experimental Psychology, University of Cambridge, England
- Sashi Mariathasan
- Department of Experimental Psychology, University of Cambridge, England
- Aleksander P. Sęk
- Faculty of Physics, Institute of Acoustics, Adam Mickiewicz University, Poznań, Poland
27
Multisensory Integration in Short-term Memory: Musicians do Rock. Neuroscience 2018; 389:141-151. PMID: 28461217; DOI: 10.1016/j.neuroscience.2017.04.031.
Abstract
Demonstrated interactions between seeing and hearing led us to assess the link between music training and short-term memory for auditory, visual and audiovisual sequences of rapidly presented, quasi-random components. Visual sequences' components varied in luminance; auditory sequences' components varied in frequency. Concurrent components in audiovisual sequences were either congruent (the frequency of an auditory item increased monotonically with the luminance of the visual item it accompanied), or incongruent (an item's frequency was uncorrelated with luminance of the item it accompanied). Subjects judged whether the last four items in a sequence replicated its first four items. With audiovisual sequences, subjects were instructed to ignore the sequence's auditory components, basing their judgments solely on the visual input. Subjects with prior instrumental training significantly outperformed their untrained counterparts, with both auditory and visual sequences, and with sequences of correlated auditory and visual items. Reverse correlation showed that the presence of a correlated, concurrent auditory stream altered subjects' reliance on particular visual items in a sequence. Moreover, congruence between auditory and visual items produced performance above what would be predicted from simple summation of information from the two modalities, a result that might reflect a contribution from special-purpose, multimodal neural mechanisms.
28
Wollman I, Morillon B. Organizational principles of multidimensional predictions in human auditory attention. Sci Rep 2018; 8:13466. [PMID: 30194376] [PMCID: PMC6128843] [DOI: 10.1038/s41598-018-31878-5]
Abstract
Anticipating the future rests upon our ability to exploit contextual cues and to formulate valid internal models or predictions. It is currently unknown how multiple predictions combine to bias perceptual information processing, and in particular whether this is determined by physiological constraints, behavioral relevance (task demands), or past knowledge (perceptual expertise). In a series of behavioral auditory experiments involving musical experts and non-musicians, we investigated the respective and combined contributions of temporal and spectral predictions in multiple detection tasks. We show that temporal and spectral predictions alone systematically increase perceptual sensitivity, independently of task demands or expertise. When combined, however, spectral predictions benefit non-musicians more and dominate over temporal ones, and the extent of the spectrotemporal synergistic interaction depends on task demands. This suggests that the hierarchy of dominance primarily reflects the tonotopic organization of the auditory system and that expertise or attention have only a secondary modulatory influence.
Affiliation(s)
- Indiana Wollman
- Montreal Neurological Institute, McGill University, Montreal, Canada
- CIRMMT, Schulich School of Music, McGill University, Montreal, Canada
- Benjamin Morillon
- Montreal Neurological Institute, McGill University, Montreal, Canada.
- Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France.
29
Vanden Bosch der Nederlanden CM, Zaragoza C, Rubio-Garcia A, Clarkson E, Snyder JS. Change detection in complex auditory scenes is predicted by auditory memory, pitch perception, and years of musical training. Psychol Res 2018; 84:585-601. [PMID: 30120544] [DOI: 10.1007/s00426-018-1072-x]
Abstract
Our world is a sonically busy place and we use both acoustic information and experience-based knowledge to make sense of the sounds arriving at our ears. The knowledge we gain through experience has the potential to shape what sounds are prioritized in a complex scene. There are many examples of how visual expertise influences how we perceive objects in visual scenes, but few studies examine how auditory expertise is associated with attentional biases toward familiar real-world sounds in complex scenes. In the current study, we investigated whether musical expertise is associated with the ability to detect changes to real-world sounds in complex auditory scenes, and whether any such benefit is specific to musical instrument sounds. We also examined whether change detection is better for human-generated sounds in general or only communicative human sounds. We found that musicians had less change deafness overall. All listeners were better at detecting human communicative sounds compared to human non-communicative sounds, but this benefit was driven by speech sounds and sounds that were vocally generated. Musical listening skill, speech-in-noise, and executive function abilities were used to predict rates of change deafness. Auditory memory, musical training, fine-grained pitch processing, and an interaction between training and pitch processing accounted for 45.8% of the variance in change deafness. To better understand perceptual and cognitive expertise, it may be more important to measure various auditory skills and relate them to each other, as opposed to comparing experts to non-experts.
Affiliation(s)
- Christina M Vanden Bosch der Nederlanden
- Department of Psychology, University of Nevada, Las Vegas, USA
- The Brain and Mind Institute, Western University, 1151 Richmond St, London, ON, N6A 3K7, Canada
- Evan Clarkson
- Department of Psychology, University of Nevada, Las Vegas, USA
- Joel S Snyder
- Department of Psychology, University of Nevada, Las Vegas, USA
30
Wiegand K, Heiland S, Uhlig CH, Dykstra AR, Gutschalk A. Cortical networks for auditory detection with and without informational masking: Task effects and implications for conscious perception. Neuroimage 2018; 167:178-190. [DOI: 10.1016/j.neuroimage.2017.11.036]
31
Bianchi F, Hjortkjær J, Santurette S, Zatorre RJ, Siebner HR, Dau T. Subcortical and cortical correlates of pitch discrimination: Evidence for two levels of neuroplasticity in musicians. Neuroimage 2017; 163:398-412. [DOI: 10.1016/j.neuroimage.2017.07.057]
32
Abstract
Auditory perception is our main gateway to communication with others via speech and music, and it also plays an important role in alerting and orienting us to new events. This review provides an overview of selected topics pertaining to the perception and neural coding of sound, starting with the first stage of filtering in the cochlea and its profound impact on perception. The next topic, pitch, has been debated for millennia, but recent technical and theoretical developments continue to provide us with new insights. Cochlear filtering and pitch both play key roles in our ability to parse the auditory scene, enabling us to attend to one auditory object or stream while ignoring others. An improved understanding of the basic mechanisms of auditory perception will aid us in the quest to tackle the increasingly important problem of hearing loss in our aging population.
Affiliation(s)
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455
33
Madsen SMK, Whiteford KL, Oxenham AJ. Musicians do not benefit from differences in fundamental frequency when listening to speech in competing speech backgrounds. Sci Rep 2017; 7:12624. [PMID: 28974705] [PMCID: PMC5626707] [DOI: 10.1038/s41598-017-12937-9]
Abstract
Recent studies disagree on whether musicians have an advantage over non-musicians in understanding speech in noise. However, it has been suggested that musicians may be able to use differences in fundamental frequency (F0) to better understand target speech in the presence of interfering talkers. Here we studied a relatively large (N = 60) cohort of young adults, equally divided between non-musicians and highly trained musicians, to test whether the musicians were better able to understand speech either in noise or in a two-talker competing speech masker. The target speech and competing speech were presented with either their natural F0 contours or on a monotone F0, and the F0 difference between the target and masker was systematically varied. As expected, speech intelligibility improved with increasing F0 difference between the target and the two-talker masker for both natural and monotone speech. However, no significant intelligibility advantage was observed for musicians over non-musicians in any condition. Although F0 discrimination was significantly better for musicians than for non-musicians, it was not correlated with speech scores. Overall, the results do not support the hypothesis that musical training leads to improved speech intelligibility in complex speech or noise backgrounds.
Affiliation(s)
- Sara M K Madsen
- Hearing Systems, Department of Electrical Engineering, Technical University of Denmark, Ørsteds Plads 352, 2800, Kgs. Lyngby, Denmark.
- Kelly L Whiteford
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, MN, 55455, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, MN, 55455, USA
34
Deroche MLD, Limb CJ, Chatterjee M, Gracco VL. Similar abilities of musicians and non-musicians to segregate voices by fundamental frequency. J Acoust Soc Am 2017; 142:1739. [PMID: 29092612] [PMCID: PMC5626570] [DOI: 10.1121/1.5005496]
Abstract
Musicians can sometimes achieve better speech recognition in noisy backgrounds than non-musicians, a phenomenon referred to as the "musician advantage effect." In addition, musicians are known to possess a finer sense of pitch than non-musicians. The present study examined the hypothesis that the latter fact could explain the former. Four experiments measured speech reception threshold for a target voice against speech or non-speech maskers. Although differences in fundamental frequency (ΔF0s) were shown to be beneficial even when presented to opposite ears (experiment 1), the authors' attempt to maximize their use by directing the listener's attention to the target F0 led to unexpected impairments (experiment 2) and the authors' attempt to hinder their use by generating uncertainty about the competing F0s led to practically negligible effects (experiments 3 and 4). The benefits drawn from ΔF0s showed surprisingly little malleability for a cue that can be used in the complete absence of energetic masking. In half of the experiments, musicians obtained better thresholds than non-musicians, particularly in speech-on-speech conditions, but they did not reliably obtain larger ΔF0 benefits. Thus, the data do not support the hypothesis that the musician advantage effect is based on greater ability to exploit ΔF0s.
Affiliation(s)
- Mickael L D Deroche
- Centre for Research on Brain, Language and Music, McGill University, 3640 rue de la Montagne, Montreal H3G 2A8, Canada
- Charles J Limb
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco School of Medicine, 2233 Post Street, San Francisco, California 94115, USA
- Monita Chatterjee
- Auditory Prostheses and Perception Laboratory, Boys Town National Research Hospital, 555 North 30th Street, Omaha, Nebraska 68131, USA
- Vincent L Gracco
- Haskins Laboratories, 300 George Street, New Haven, Connecticut 06511, USA
35
Lawless MS, Vigeant MC. Effects of test method and participant musical training on preference ratings of stimuli with different reverberation times. J Acoust Soc Am 2017; 142:2258. [PMID: 29092592] [DOI: 10.1121/1.5006065]
Abstract
Selecting an appropriate listening test design for concert hall research depends on several factors, including the listening test method and the participants' critical-listening experience. Although expert listeners provide more reliable data, their perceptions may not be broadly representative. The present paper contains two studies that examined the validity and reliability of the data obtained from two listening test methods, a successive and a comparative method, and two types of participants, musicians and non-musicians. Participants rated their overall preference of auralizations generated from eight concert hall conditions with a range of reverberation times (0.0-7.2 s). Study 1, with 34 participants, assessed the two methods. The comparative method yielded results and reliability similar to those of the successive method. Additionally, the comparative method was rated as less difficult and more preferable. For study 2, an additional 37 participants rated the stimuli using the comparative method only. An analysis of variance of the responses from both studies revealed that musicians are better than non-musicians at discerning their preferences across stimuli. This result was confirmed with a k-means clustering analysis on the entire dataset that revealed five preference groups. Four groups exhibited clear preferences to the stimuli, while the fifth group, predominantly comprising non-musicians, demonstrated no clear preference.
Affiliation(s)
- Martin S Lawless
- Graduate Program in Acoustics, The Pennsylvania State University, University Park, Pennsylvania 16802, USA
- Michelle C Vigeant
- Graduate Program in Acoustics, The Pennsylvania State University, University Park, Pennsylvania 16802, USA
36
Bolders AC, Band GPH, Stallen PJM. Inconsistent Effect of Arousal on Early Auditory Perception. Front Psychol 2017; 8:447. [PMID: 28424639] [PMCID: PMC5372791] [DOI: 10.3389/fpsyg.2017.00447]
Abstract
Mood has been shown to influence cognitive performance. However, little is known about the influence of mood on sensory processing, specifically in the auditory domain. With the current study, we sought to investigate how auditory processing of neutral sounds is affected by the mood state of the listener. This was tested in two experiments by measuring masked-auditory detection thresholds before and after a standard mood-induction procedure. In the first experiment (N = 76), mood was induced by imagining a mood-appropriate event combined with listening to mood-inducing music. In the second experiment (N = 80), imagining was combined with affective picture viewing to exclude any possibility of confounding the results by acoustic properties of the music. In both experiments, the thresholds were determined by means of an adaptive staircase tracking method in a two-interval forced-choice task. Masked detection thresholds were compared between participants in four different moods (calm, happy, sad, and anxious), which enabled differentiation of mood effects along the dimensions arousal and pleasure. Results of the two experiments were analyzed both in separate analyses and in a combined analysis. The first experiment showed that, while there was no impact of pleasure level on the masked threshold, lower arousal was associated with a lower threshold (higher masked sensitivity). However, as indicated by an interaction effect between experiment and arousal, arousal did have a different effect on the threshold in Experiment 2, which showed a trend in the opposite direction. These results show that the effect of arousal on auditory-masked sensitivity may depend on the modality of the mood-inducing stimuli. As clear conclusions regarding the genuineness of the arousal effect on the masked threshold cannot be drawn, suggestions for further research that could clarify this issue are provided.
Affiliation(s)
- Anna C Bolders
- Cognitive Psychology Unit, Institute of Psychology, Leiden University, Leiden, Netherlands
- Guido P H Band
- Cognitive Psychology Unit, Institute of Psychology, Leiden University, Leiden, Netherlands
- Leiden Institute for Brain and Cognition, Leiden University, Leiden, Netherlands
- Pieter Jan M Stallen
- Cognitive Psychology Unit, Institute of Psychology, Leiden University, Leiden, Netherlands
37
Speech-in-noise perception in musicians: A review. Hear Res 2017; 352:49-69. [PMID: 28213134] [DOI: 10.1016/j.heares.2017.02.006]
Abstract
The ability to understand speech in the presence of competing sound sources poses an important neuroscience question: how does the nervous system solve this computational problem? It is also a critical clinical problem that disproportionally affects the elderly, children with language-related learning disorders, and those with hearing loss. Recent evidence that musicians have an advantage on this multifaceted skill has led to the suggestion that musical training might be used to improve or delay the decline of speech-in-noise (SIN) function. However, enhancements have not been universally reported, nor have the relative contributions of different bottom-up versus top-down processes, and their relation to preexisting factors, been disentangled. This information would help establish whether there is a real effect of experience, what exactly its nature is, and how future training-based interventions might target the most relevant components of cognitive processes. These questions are complicated by important differences in study design and uneven coverage of neuroimaging modality. In this review, we aim to systematize recent results from studies that have specifically looked at musician-related differences in SIN by their study design properties, to summarize the findings, and to identify knowledge gaps for future work.
38
Pelofi C, de Gardelle V, Egré P, Pressnitzer D. Interindividual variability in auditory scene analysis revealed by confidence judgements. Philos Trans R Soc Lond B Biol Sci 2017; 372. [PMID: 28044018] [DOI: 10.1098/rstb.2016.0107]
Abstract
Because musicians are trained to discern sounds within complex acoustic scenes, such as an orchestra playing, it has been hypothesized that musicianship improves general auditory scene analysis abilities. Here, we compared musicians and non-musicians in a behavioural paradigm using ambiguous stimuli, combining performance, reaction times and confidence measures. We used 'Shepard tones', for which listeners may report either an upward or a downward pitch shift for the same ambiguous tone pair. Musicians and non-musicians performed similarly on the pitch-shift direction task. In particular, both groups were at chance for the ambiguous case. However, groups differed in their reaction times and judgements of confidence. Musicians responded to the ambiguous case with long reaction times and low confidence, whereas non-musicians responded with fast reaction times and maximal confidence. In a subsequent experiment, non-musicians displayed reduced confidence for the ambiguous case when pure-tone components of the Shepard complex were made easier to discern. The results suggest an effect of musical training on scene analysis: we speculate that musicians were more likely to discern components within complex auditory scenes, perhaps because of enhanced attentional resolution, and thus discovered the ambiguity. For untrained listeners, stimulus ambiguity was not available to perceptual awareness. This article is part of the themed issue 'Auditory and visual scene analysis'.
Affiliation(s)
- C Pelofi
- Laboratoire des systèmes perceptifs, CNRS UMR 8248, École normale supérieure - PSL Research University, 75005 Paris, France
- Institut d'étude de la cognition, École normale supérieure - PSL Research University, 75005 Paris, France
- V de Gardelle
- Paris School of Economics & CNRS, École normale supérieure - PSL Research University, 75005 Paris, France
- P Egré
- Institut Jean Nicod, CNRS UMR 8129, École normale supérieure - PSL Research University, 75005 Paris, France
- Institut d'étude de la cognition, École normale supérieure - PSL Research University, 75005 Paris, France
- D Pressnitzer
- Laboratoire des systèmes perceptifs, CNRS UMR 8248, École normale supérieure - PSL Research University, 75005 Paris, France
- Institut d'étude de la cognition, École normale supérieure - PSL Research University, 75005 Paris, France
39
Communicating in Challenging Environments: Noise and Reverberation. In: The Frequency-Following Response. 2017. [DOI: 10.1007/978-3-319-47944-6_8]
40
Grose JH, Buss E, Hall JW. Loud Music Exposure and Cochlear Synaptopathy in Young Adults: Isolated Auditory Brainstem Response Effects but No Perceptual Consequences. Trends Hear 2017; 21:2331216517737417. [PMID: 29105620] [PMCID: PMC5676494] [DOI: 10.1177/2331216517737417]
Abstract
The purpose of this study was to test the hypothesis that listeners with frequent exposure to loud music exhibit deficits in suprathreshold auditory performance consistent with cochlear synaptopathy. Young adults with normal audiograms were recruited who either did (n = 31) or did not (n = 30) have a history of frequent attendance at loud music venues where the typical sound levels could be expected to result in temporary threshold shifts. A test battery was administered that comprised three sets of procedures: (a) electrophysiological tests including distortion product otoacoustic emissions, auditory brainstem responses, envelope following responses, and the acoustic change complex evoked by an interaural phase inversion; (b) psychoacoustic tests including temporal modulation detection, spectral modulation detection, and sensitivity to interaural phase; and (c) speech tests including filtered phoneme recognition and speech-in-noise recognition. The results demonstrated that a history of loud music exposure can lead to a profile of peripheral auditory function that is consistent with an interpretation of cochlear synaptopathy in humans, namely, modestly abnormal auditory brainstem response Wave I/Wave V ratios in the presence of normal distortion product otoacoustic emissions and normal audiometric thresholds. However, there were no other electrophysiological, psychophysical, or speech perception effects. The absence of any behavioral effects in suprathreshold sound processing indicated that, even if cochlear synaptopathy is a valid pathophysiological condition in humans, its perceptual sequelae are either too diffuse or too inconsequential to permit a simple differential diagnosis of hidden hearing loss.
Affiliation(s)
- John H. Grose
- Department of Otolaryngology—Head and Neck Surgery, University of North Carolina at Chapel Hill, NC, USA
- Emily Buss
- Department of Otolaryngology—Head and Neck Surgery, University of North Carolina at Chapel Hill, NC, USA
- Joseph W. Hall
- Department of Otolaryngology—Head and Neck Surgery, University of North Carolina at Chapel Hill, NC, USA
41
Clayton KK, Swaminathan J, Yazdanbakhsh A, Zuk J, Patel AD, Kidd G. Executive Function, Visual Attention and the Cocktail Party Problem in Musicians and Non-Musicians. PLoS One 2016; 11:e0157638. [PMID: 27384330] [PMCID: PMC4934907] [DOI: 10.1371/journal.pone.0157638]
Abstract
The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, “cocktail-party” like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention in musicians and non-musicians. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or were symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibition control and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking task (MOT). For the MOT task, the observers were required to track target dots (n = 1,2,3,4,5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory compared to non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task. 
Overall, these findings confirm the relationship between musicianship and cognitive factors including domain-general selective attention and working memory in solving the “cocktail party problem”.
Affiliation(s)
- Kameron K. Clayton
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, United States of America
- Jayaganesh Swaminathan
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, United States of America
- Arash Yazdanbakhsh
- Department for Psychological and Brain Sciences, Boston University, Boston, MA, United States of America
- Center for Computational Neuroscience and Neural Technology (CompNet), Boston University, Boston, MA, United States of America
- Jennifer Zuk
- Harvard Medical School, Harvard University, Boston, MA, United States of America
- Aniruddh D. Patel
- Department of Psychology, Tufts University, Medford, MA, United States of America
- Gerald Kidd
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, United States of America
42
Evans S, McGettigan C, Agnew ZK, Rosen S, Scott SK. Getting the Cocktail Party Started: Masking Effects in Speech Perception. J Cogn Neurosci 2015; 28:483-500. [PMID: 26696297] [DOI: 10.1162/jocn_a_00913]
Abstract
Spoken conversations typically take place in noisy environments, and different kinds of masking sounds place differing demands on cognitive resources. Previous studies, examining the modulation of neural activity associated with the properties of competing sounds, have shown that additional speech streams engage the superior temporal gyrus. However, the absence of a condition in which target speech was heard without additional masking made it difficult to identify brain networks specific to masking and to ascertain the extent to which competing speech was processed equivalently to target speech. In this study, we scanned young healthy adults with continuous fMRI, while they listened to stories masked by sounds that differed in their similarity to speech. We show that auditory attention and control networks are activated during attentive listening to masked speech in the absence of an overt behavioral task. We demonstrate that competing speech is processed predominantly in the left hemisphere within the same pathway as target speech but is not treated equivalently within that stream and that individuals who perform better in speech-in-noise tasks activate the left mid-posterior superior temporal gyrus more. Finally, we identify neural responses associated with the onset of sounds in the auditory environment; activity was found within right lateralized frontal regions consistent with a phasic alerting response. Taken together, these results provide a comprehensive account of the neural processes involved in listening in noise.
Affiliation(s)
- Zarinah K Agnew
- University College London
- University of California, San Francisco
43
Feng L, Oxenham AJ. New perspectives on the measurement and time course of auditory enhancement. J Exp Psychol Hum Percept Perform 2015; 41:1696-1708. [PMID: 26280269] [DOI: 10.1037/xhp0000115]
Abstract
A target sound can become more audible and may "pop out" from a simultaneously presented masker if the masker is presented first by itself, as a precursor. This phenomenon, known as auditory enhancement, may reflect the general perceptual principle of contrast enhancement, which facilitates adaptation to ongoing acoustic conditions and the detection of new events. Little is known about the mechanisms underlying enhancement, and potential confounding factors have made the size of the effect and its time course a point of contention. Here we measured enhancement as a function of precursor duration and delay between precursor offset and target onset, using 2 single-interval pitch comparison tasks, which involve either same-different or up-down judgments, to avoid the potential confounds of earlier studies. Although these 2 tasks elicit different levels of performance and may reflect different underlying mechanisms, they produced similar amounts of enhancement. The effect decreased with decreasing precursor duration, but remained present for precursors as short as 62.5 ms, and decreased with increasing gap between the precursor and target, but remained measurable 1 s after the precursor. Additional conditions, examining the effect of precursor/masker similarity and the possible role of grouping and cueing, suggest multiple sources of auditory enhancement.
Affiliation(s)
- Lei Feng
- Department of Otolaryngology, University of Minnesota
44
Vigeant MC, Celmer RD, Jasinski CM, Ahearn MJ, Schaeffler MJ, Giacomoni CB, Wells AP, Ormsbee CI. The effects of different test methods on the just noticeable difference of clarity index for music. J Acoust Soc Am 2015; 138:476-491. [PMID: 26233046] [DOI: 10.1121/1.4922955]
Abstract
The just noticeable differences (JNDs) of room acoustics metrics are necessary for research and design of performing arts venues. The goal of this work was to evaluate the effects of different testing methods on the measured JND of clarity index for music (C80). An initial study was conducted to verify the findings of other published works that the C80 JND is approximately 1 dB, as currently listed in ISO 3382:2009 (International Organization for Standardization, Switzerland, 2009); however, the results suggested a higher value. In the second study, the effects of using two variations of the method of constant stimuli were examined, where one variation required the subjects to evaluate the pair of signals by listening to each of them in their entirety, while the second approach allowed the participants to switch back and forth in real-time. More consistent results were obtained with the latter variation, and the results indicated a C80 JND greater than 1 dB. In the final study, based on the second study, an extensive training period using the first variation was required, and the data were collected using the second variation. The analysis revealed that, for the conditions used in this study (concert hall and chamber music hall), the C80 JND is approximately 3 dB.
Affiliation(s)
- Michelle C Vigeant, Robert D Celmer, Chris M Jasinski, Meghan J Ahearn, Matthew J Schaeffler, Clothilde B Giacomoni, Adam P Wells, Caitlin I Ormsbee
- Acoustics Program & Laboratory, Mechanical Engineering Department, University of Hartford, 200 Bloomfield Avenue, West Hartford, Connecticut 06117, USA
45
Swaminathan J, Mason CR, Streeter TM, Best V, Kidd G, Patel AD. Musical training, individual differences and the cocktail party problem. Sci Rep 2015; 5:11628. [PMID: 26112910 PMCID: PMC4481518 DOI: 10.1038/srep11628] [Citation(s) in RCA: 88] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2014] [Accepted: 06/02/2015] [Indexed: 11/09/2022] Open
Abstract
Are musicians better able to understand speech in noise than non-musicians? Recent findings have produced contradictory results. Here we addressed this question by asking musicians and non-musicians to understand target sentences masked by other sentences presented from different spatial locations, the classical 'cocktail party problem' in speech science. We found that musicians obtained a substantial benefit in this situation, with thresholds ~6 dB better than non-musicians. Large individual differences in performance were noted particularly for the non-musically trained group. Furthermore, in different conditions we manipulated the spatial location and intelligibility of the masking sentences, thus changing the amount of 'informational masking' (IM) while keeping the amount of 'energetic masking' (EM) relatively constant. When the maskers were unintelligible and spatially separated from the target (low in IM), musicians and non-musicians performed comparably. These results suggest that the characteristics of speech maskers and the amount of IM can influence the magnitude of the differences found between musicians and non-musicians in multiple-talker "cocktail party" environments. Furthermore, considering the task in terms of the EM-IM distinction provides a conceptual framework for future behavioral and neuroscientific studies which explore the underlying sensory and cognitive mechanisms contributing to enhanced "speech-in-noise" perception by musicians.
Affiliation(s)
- Christine R Mason, Timothy M Streeter, Virginia Best, Gerald Kidd
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA
46
Carey D, Rosen S, Krishnan S, Pearce MT, Shepherd A, Aydelott J, Dick F. Generality and specificity in the effects of musical expertise on perception and cognition. Cognition 2015; 137:81-105. [DOI: 10.1016/j.cognition.2014.12.005] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2013] [Revised: 11/03/2014] [Accepted: 12/18/2014] [Indexed: 10/24/2022]
47
Jones PR, Moore DR, Amitay S. Development of auditory selective attention: why children struggle to hear in noisy environments. Dev Psychol 2015; 51:353-69. [PMID: 25706591 PMCID: PMC4337492 DOI: 10.1037/a0038570] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2014] [Revised: 11/11/2014] [Accepted: 11/17/2014] [Indexed: 11/29/2022]
Abstract
Children's hearing deteriorates markedly in the presence of unpredictable noise. To explore why, 187 school-age children (4-11 years) and 15 adults performed a tone-in-noise detection task, in which the masking noise varied randomly between every presentation. Selective attention was evaluated by measuring the degree to which listeners were influenced by (i.e., gave weight to) each spectral region of the stimulus. Psychometric fits were also used to estimate levels of internal noise and bias. Levels of masking were found to decrease with age, becoming adult-like by 9-11 years. This change was explained by improvements in selective attention alone, with older listeners better able to ignore noise similar in frequency to the target. Consistent with this, age-related differences in masking were abolished when the noise was made more distant in frequency to the target. This work offers novel evidence that improvements in selective attention are critical for the normal development of auditory judgments.
48
Boebinger D, Evans S, Rosen S, Lima CF, Manly T, Scott SK. Musicians and non-musicians are equally adept at perceiving masked speech. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2015; 137:378-87. [PMID: 25618067 PMCID: PMC4434218 DOI: 10.1121/1.4904537] [Citation(s) in RCA: 98] [Impact Index Per Article: 10.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/19/2023]
Abstract
There is much interest in the idea that musicians perform better than non-musicians in understanding speech in background noise. Research in this area has often used energetic maskers, which have their effects primarily at the auditory periphery. However, masking interference can also occur at more central auditory levels, known as informational masking. This experiment extends existing research by using multiple maskers that vary in their informational content and similarity to speech, in order to examine differences in perception of masked speech between trained musicians (n = 25) and non-musicians (n = 25). Although musicians outperformed non-musicians on a measure of frequency discrimination, they showed no advantage in perceiving masked speech. Further analysis revealed that non-verbal IQ, rather than musicianship, significantly predicted speech reception thresholds in noise. The results strongly suggest that the contribution of general cognitive abilities needs to be taken into account in any investigations of individual variability for perceiving speech in noise.
Affiliation(s)
- Dana Boebinger
- Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, United Kingdom
- Samuel Evans
- Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, United Kingdom
- Stuart Rosen
- Speech, Hearing & Phonetic Sciences, University College London, 2 Wakefield Street, London WC1N 2PF, United Kingdom
- César F Lima
- Centre for Psychology at University of Porto, Rua Alfredo Allen, 4200-135 Porto, Portugal
- Tom Manly
- Medical Research Council Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge CB2 7EF, United Kingdom
- Sophie K Scott
- Division of Psychology and Language Sciences, University College London, Gower Street, London WC1E 6BT, United Kingdom
|
49
|
Mishra SK, Panda MR, Raj S. Influence of musical training on sensitivity to temporal fine structure. Int J Audiol 2014; 54:220-6. [PMID: 25395259 DOI: 10.3109/14992027.2014.969411] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
OBJECTIVE The objective of this study was to extend the findings that temporal fine structure encoding is altered in musicians by examining sensitivity to temporal fine structure (TFS) in an alternative (non-Western) musician model that is rarely adopted: Indian classical music. DESIGN The sensitivity to TFS was measured by the ability to discriminate two complex tones that differed in TFS but not in envelope repetition rate. STUDY SAMPLE Sixteen South Indian classical (Carnatic) musicians and 28 non-musicians with normal hearing participated in this study. RESULTS Musicians had a significantly lower relative frequency shift at threshold in the TFS task than non-musicians. A significant negative correlation was observed between years of musical experience and relative frequency shift at threshold in the TFS task. Test-retest repeatability of thresholds in the TFS task was similar for both musicians and non-musicians. CONCLUSIONS The enhanced performance of the Carnatic-trained musicians suggests that the musician advantage for frequency and harmonicity discrimination is not restricted to training in Western classical music, on which much of the previous research on musical training has narrowly focused. The perceptual judgments obtained from non-musicians were as reliable as those of musicians.
Affiliation(s)
- Srikanta K Mishra
- Department of Special Education and Communication Disorders, New Mexico State University, Las Cruces, NM, USA
50
Jennings SG, Ahlstrom JB, Dubno JR. Computational modeling of individual differences in behavioral estimates of cochlear nonlinearities. J Assoc Res Otolaryngol 2014; 15:945-60. [PMID: 25266264 DOI: 10.1007/s10162-014-0486-4] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2013] [Accepted: 09/01/2014] [Indexed: 02/07/2023] Open
Abstract
Temporal masking curves (TMCs) are often used to estimate cochlear compression in individuals with normal and impaired hearing. These estimates may yield a wide range of individual differences, even among subjects with similar quiet thresholds. This study used an auditory model to assess potential sources of variance in TMCs from 51 listeners in Poling et al. [J Assoc Res Otolaryngol, 13:91-108 (2012)]. These sources included threshold elevation, the contribution of outer and inner hair cell dysfunction to threshold elevation, compression of the off-frequency linear reference, and detection efficiency. Simulations suggest that detection efficiency is a primary factor contributing to individual differences in TMCs measured in normal-hearing subjects, while threshold elevation and the contribution of outer and inner hair cell dysfunction are primary factors in hearing-impaired subjects. Approximating the most compressive growth rate of the cochlear response from TMCs was achieved only in subjects with the highest detection efficiency. Simulations included off-frequency nonlinearity in basilar membrane and inner hair cell processing; however, this nonlinearity did not improve predictions, suggesting that other sources, such as the decay of masking and the strength of the medial olivocochlear reflex, may mimic off-frequency nonlinearity. Findings from this study suggest that sources of individual differences can play a strong role in behavioral estimates of compression, and these sources should be considered when using forward masking to study cochlear function in individual listeners or across groups of listeners.
Affiliation(s)
- Skyler G Jennings
- Department of Communication Sciences and Disorders, The University of Utah, 390 South 1530 East, BEHS 1201, Salt Lake City, UT 84112, USA