51. Bidelman GM, Schug JM, Jennings SG, Bhagat SP. Psychophysical auditory filter estimates reveal sharper cochlear tuning in musicians. J Acoust Soc Am 2014; 136:EL33-EL39. [PMID: 24993235] [DOI: 10.1121/1.4885484]
Abstract
Musicianship confers enhancements to hearing at nearly all levels of the auditory system from periphery to percept. Musicians' superior psychophysical abilities are particularly evident in spectral discrimination and noise-degraded listening tasks, achieving higher perceptual sensitivity than their nonmusician peers. Greater spectral acuity implies that musicianship may increase auditory filter selectivity. This hypothesis was directly tested by measuring both forward- and simultaneous-masked psychophysical tuning curves. Sharper filter tuning (i.e., higher Q10) was observed in musicians compared to nonmusicians. Findings suggest musicians' pervasive listening benefits may be facilitated, in part, by superior spectral processing/decomposition as early as the auditory periphery.
Affiliation(s)
- Gavin M Bidelman, Institute for Intelligent Systems, University of Memphis, Memphis, Tennessee 38105
- Jonathan M Schug, School of Communication Sciences & Disorders, University of Memphis, Memphis, Tennessee 38105
- Skyler G Jennings, Department of Communication Sciences & Disorders, University of Utah, Salt Lake City, Utah 84112
- Shaum P Bhagat, School of Communication Sciences & Disorders, University of Memphis, Memphis, Tennessee 38105

52. Krause MO, Kennedy MRT, Nelson PB. Masking release, processing speed and listening effort in adults with traumatic brain injury. Brain Inj 2014; 28:1473-84. [DOI: 10.3109/02699052.2014.920520]

53. Başkent D, van Engelshoven S, Galvin JJ. Susceptibility to interference by music and speech maskers in middle-aged adults. J Acoust Soc Am 2014; 135:EL147-EL153. [PMID: 24606308] [PMCID: PMC4043475] [DOI: 10.1121/1.4865261]
Abstract
Older listeners commonly complain about difficulty in understanding speech in noise. Previous studies have shown an age effect for both speech and steady noise maskers, and it is largest for speech maskers. In the present study, speech reception thresholds (SRTs) measured with competing speech, music, and steady noise maskers significantly differed between young (19 to 26 years) and middle-aged (51 to 63 years) adults. SRT differences were 2.1 dB for competing speech, 0.4-1.6 dB for the music maskers, and 0.8 dB for steady noise. The data suggest that aging effects are already evident in middle-aged adults without significant hearing impairment.
Affiliation(s)
- Deniz Başkent, Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Suzanne van Engelshoven, Department of Biomedical Engineering, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- John J Galvin, Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands

54. Allen EJ, Oxenham AJ. Symmetric interactions and interference between pitch and timbre. J Acoust Soc Am 2014; 135:1371-9. [PMID: 24606275] [PMCID: PMC3985978] [DOI: 10.1121/1.4863269]
Abstract
Variations in the spectral shape of harmonic tone complexes are perceived as timbre changes and can lead to poorer fundamental frequency (F0) or pitch discrimination. Less is known about the effects of F0 variations on spectral shape discrimination. The aims of the study were to determine whether the interactions between pitch and timbre are symmetric, and to test whether musical training affects listeners' ability to ignore variations in irrelevant perceptual dimensions. Difference limens (DLs) for F0 were measured with and without random, concurrent, variations in spectral centroid, and vice versa. Additionally, sensitivity was measured as the target parameter and the interfering parameter varied by the same amount, in terms of individual DLs. Results showed significant and similar interference between pitch (F0) and timbre (spectral centroid) dimensions, with upward spectral motion often confused for upward F0 motion, and vice versa. Musicians had better F0DLs than non-musicians on average, but similar spectral centroid DLs. Both groups showed similar interference effects, in terms of decreased sensitivity, in both dimensions. Results reveal symmetry in the interference effects between pitch and timbre, once differences in sensitivity between dimensions and subjects are controlled. Musical training does not reliably help to overcome these effects.
Affiliation(s)
- Emily J Allen, Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455
- Andrew J Oxenham, Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455

55. Moreno S, Bidelman GM. Examining neural plasticity and cognitive benefit through the unique lens of musical training. Hear Res 2014; 308:84-97. [DOI: 10.1016/j.heares.2013.09.012]

56. Ruggles DR, Freyman RL, Oxenham AJ. Influence of musical training on understanding voiced and whispered speech in noise. PLoS One 2014; 9:e86980. [PMID: 24489819] [PMCID: PMC3904968] [DOI: 10.1371/journal.pone.0086980]
Abstract
This study tested the hypothesis that the previously reported advantage of musicians over non-musicians in understanding speech in noise arises from more efficient or robust coding of periodic voiced speech, particularly in fluctuating backgrounds. Speech intelligibility was measured in listeners with extensive musical training, and in those with very little musical training or experience, using normal (voiced) or whispered (unvoiced) grammatically correct nonsense sentences in noise that was spectrally shaped to match the long-term spectrum of the speech, and was either continuous or gated with a 16-Hz square wave. Performance was also measured in clinical speech-in-noise tests and in pitch discrimination. Musicians exhibited enhanced pitch discrimination, as expected. However, no systematic or statistically significant advantage for musicians over non-musicians was found in understanding either voiced or whispered sentences in either continuous or gated noise. Musicians also showed no statistically significant advantage in the clinical speech-in-noise tests. Overall, the results provide no evidence for a significant difference between young adult musicians and non-musicians in their ability to understand speech in noise.
Affiliation(s)
- Dorea R. Ruggles, Department of Psychology, University of Minnesota, Minneapolis, Minnesota, United States of America
- Richard L. Freyman, Department of Communication Disorders, University of Massachusetts Amherst, Amherst, Massachusetts, United States of America
- Andrew J. Oxenham, Department of Psychology, University of Minnesota, Minneapolis, Minnesota, United States of America

57. Sheft S, Smayda K, Shafiro V, Maddox WT, Chandrasekaran B. Effect of musical training on static and dynamic measures of spectral-pattern discrimination. Proc Meet Acoust 2013; 19:050025. [PMID: 26500713] [PMCID: PMC4613787] [DOI: 10.1121/1.4799742]
Abstract
Both behavioral and physiological studies have demonstrated enhanced processing of speech in challenging listening environments attributable to musical training. The relationship, however, of this benefit to auditory abilities as assessed by psychoacoustic measures remains unclear. Using tasks previously shown to relate to speech-in-noise perception, the present study evaluated discrimination ability for static and dynamic spectral patterns by 49 listeners grouped as either musicians or nonmusicians. The two static conditions measured the ability to detect a change in the phase of a logarithmic sinusoidal spectral ripple of wideband noise, with ripple densities of 1.5 and 3.0 cycles per octave chosen to emphasize timbre or pitch distinctions, respectively. The dynamic conditions assessed temporal-pattern discrimination of 1-kHz pure tones frequency modulated by different lowpass noise samples, with thresholds estimated in terms of either stimulus duration or signal-to-noise ratio. Musicians performed significantly better than nonmusicians on all four tasks. Discriminant analysis showed that group membership was correctly predicted for 88% of the listeners, with the structure coefficient of each measure greater than 0.51. Results suggest that enhanced processing of static and dynamic spectral patterns defined by low-rate modulation may contribute to the relationship between musical training and speech-in-noise perception. [Supported by NIH.]
Affiliation(s)
- Stanley Sheft, Communication Disorders and Sciences, Rush University Medical Center, Chicago, IL 60612

58. Croghan NBH, Arehart KH, Kates JM. Quality and loudness judgments for music subjected to compression limiting. J Acoust Soc Am 2012; 132:1177-1188. [PMID: 22894236] [DOI: 10.1121/1.4730881]
Abstract
Dynamic-range compression (DRC) is used in the music industry to maximize loudness. The amount of compression applied to commercial recordings has increased over time due to a motivating perspective that louder music is always preferred. In contrast to this viewpoint, artists and consumers have argued that using large amounts of DRC negatively affects the quality of music. However, little research evidence has supported the claims of either position. The present study investigated how DRC affects the perceived loudness and sound quality of recorded music. Rock and classical music samples were peak-normalized and then processed using different amounts of DRC. Normal-hearing listeners rated the processed and unprocessed samples on overall loudness, dynamic range, pleasantness, and preference, using a scaled paired-comparison procedure in two conditions: un-equalized, in which the loudness of the music samples varied, and loudness-equalized, in which loudness differences were minimized. Results indicated that a small amount of compression was preferred in the un-equalized condition, but the highest levels of compression were generally detrimental to quality, whether loudness was equalized or varied. These findings are contrary to the "louder is better" mentality in the music industry and suggest that more conservative use of DRC may be preferred for commercial music.
Affiliation(s)
- Naomi B H Croghan, Department of Speech, Language, and Hearing Sciences, University of Colorado, Boulder, Colorado 80309, USA

59. Eskridge EN, Galvin JJ, Aronoff JM, Li T, Fu QJ. Speech perception with music maskers by cochlear implant users and normal-hearing listeners. J Speech Lang Hear Res 2012; 55:800-810. [PMID: 22223890] [PMCID: PMC5847337] [DOI: 10.1044/1092-4388(2011/11-0124)]
Abstract
Purpose: The goal of this study was to investigate how the spectral and temporal properties of background music may interfere with cochlear implant (CI) and normal-hearing (NH) listeners' speech understanding.
Method: Speech-recognition thresholds (SRTs) were adaptively measured in 11 CI and 9 NH subjects. CI subjects were tested while using their clinical processors; NH subjects were tested while listening to unprocessed audio. Speech was presented with different music maskers (excerpts from musical pieces) and with steady, speech-shaped noise. To estimate the contributions of energetic and informational masking, SRTs were also measured in "music-shaped noise" and in music-shaped noise modulated by the music temporal envelopes.
Results: NH performance was much better than CI performance. For both subject groups, SRTs were much lower with the music-related maskers than with speech-shaped noise. SRTs were strongly predicted by the amount of energetic masking in the music maskers. Unlike CI users, NH listeners obtained release from masking with envelope and fine-structure cues in the modulated noise and music maskers.
Conclusions: Although speech understanding was greatly limited by energetic masking in both subject groups, CI performance worsened as more spectrotemporal complexity was added to the maskers, most likely due to poor spectral resolution.

60. Eugênio ML, Escalda J, Lemos SMA. Desenvolvimento cognitivo, auditivo e linguístico em crianças expostas à música: produção de conhecimento nacional e internacional [Cognitive, auditory, and linguistic development in children exposed to music: national and international knowledge production]. Rev CEFAC 2012. [DOI: 10.1590/s1516-18462012005000038]
Abstract
Music is an important environmental factor for the development of motor, auditory, linguistic, cognitive, and visual skills, among others. Recent studies report a relationship between the study of music and improvements in auditory processing, in linguistic and metalinguistic skills, and in cognitive processes, all of which are abilities inherent to human communication. Speech-language pathology is concerned with the acquisition, development, and refinement of the skills needed for human communication. Thus, there appears to be an interrelationship between the fields of music and speech-language pathology. The aim of this study is to describe and analyze the scientific literature relevant to understanding the influence of music on auditory, linguistic, and cognitive skills. Despite the scarce scientific production on the topic, the studies reviewed point to a positive relationship between musical practice and overall child development. The most frequently addressed topic was auditory processing, followed by cognitive development and language. Music can be considered a true ally in speech-language therapy, and the findings underscore the importance of music education for children with phonological disorders, auditory processing disorders, and oral and written language disorders. Based on this literature review, new perspectives open up for work in speech-language pathology, so that existing gaps can be filled and new knowledge can be added to what has already been established to promote full child development.

61. Chen J, Baer T, Moore BCJ. Effect of enhancement of spectral changes on speech intelligibility and clarity preferences for the hearing impaired. J Acoust Soc Am 2012; 131:2987-2998. [PMID: 22501075] [DOI: 10.1121/1.3689556]
Abstract
Most information in speech is carried in spectral changes over time, rather than in static spectral shape per se. A form of signal processing aimed at enhancing spectral changes over time was developed and evaluated using hearing-impaired listeners. The signal processing was based on the overlap-add method, and the degree and type of enhancement could be manipulated via four parameters. Two experiments were conducted to assess speech intelligibility and clarity preferences. Three sets of parameter values (one corresponding to a control condition) and two types of masker (steady speech-spectrum noise and two-talker speech) were used, with two signal-to-masker ratios (SMRs) for each masker type. Generally, the effects of the processing were small, although intelligibility was improved by about 8 percentage points relative to the control condition for one set of parameter values using the steady noise masker at -6 dB SMR. The processed signals were not preferred over those for the control condition, except for the steady noise masker at -6 dB SMR. Further work is needed to determine whether tailoring the processing to the characteristics of the individual hearing-impaired listener is beneficial.
Affiliation(s)
- Jing Chen, Department of Experimental Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, England

62. Moore BCJ, Glasberg BR, Oxenham AJ. Effects of pulsing of a target tone on the ability to hear it out in different types of complex sounds. J Acoust Soc Am 2012; 131:2927-2937. [PMID: 22501070] [PMCID: PMC3543369] [DOI: 10.1121/1.3692243]
Abstract
Judgments of whether a sinusoidal probe is higher or lower in frequency than the closest partial ("target") in a multi-partial complex are improved when the target is pulsed on and off. These experiments explored the contribution of reduction in perceptual confusion and recovery from adaptation to this effect. In experiment 1, all partials except the target were replaced by noise to reduce perceptual confusion. Performance was much better than when the background was composed of multiple partials. When the level of the target was reduced to avoid ceiling effects, no effect of pulsing the target occurred. In experiment 2, the target and background partials were irregularly and independently amplitude modulated. This gave a large effect of pulsing the target, suggesting that if recovery from adaptation contributes to the effect, amplitude fluctuations do not prevent this. In experiment 3, the background was composed of multiple steady partials, but the target was irregularly amplitude modulated. This gave better performance than when the target was unmodulated, and a moderate effect of pulsing the target. It is argued that when the target and background are steady tones, pulsing the target may result both in reduction of perceptual confusion and in recovery from adaptation.
Affiliation(s)
- Brian C J Moore, Department of Experimental Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, England

63. Garinis A, Werner L, Abdala C. The relationship between MOC reflex and masked threshold. Hear Res 2011; 282:128-37. [PMID: 21878379] [DOI: 10.1016/j.heares.2011.08.007]
Abstract
Otoacoustic emission (OAE) amplitude can be reduced by acoustic stimulation. This effect is produced by the medial olivocochlear (MOC) reflex. Past studies have shown that the MOC reflex is related to listening in noise and attention. In the present study, the relationship between the strength of the contralateral MOC reflex and masked threshold was investigated in 19 adults. Detection thresholds were determined for a 1000-Hz, 300-ms tone presented simultaneously with one repetition of a 300-ms masker in an ongoing train of masker bursts. Three masking conditions were tested: 1) broadband noise, 2) a fixed-frequency 4-tone complex masker, and 3) a random-frequency 4-tone complex masker. Broadband noise was expected to produce energetic masking, and the tonal maskers were expected to produce informational masking in some listeners. DPOAEs were recorded at fine frequency intervals from 500 to 4000 Hz, with and without contralateral acoustic stimulation. MOC reflex strength was estimated as a reduction in baseline level and a shift in frequency of DPOAE fine-structure maxima near 1000 Hz. MOC reflex and psychophysical testing were completed in separate sessions. Individuals with poorer thresholds in broadband noise and in random-frequency maskers were found to have stronger MOC reflexes.
Affiliation(s)
- Angela Garinis, University of Washington, Department of Speech and Hearing Sciences, 1417 N.E. 42nd Street, Seattle, WA 98105-6246, USA

64.
Abstract
Numerous studies have shown that musicians outperform nonmusicians on a variety of tasks. Here we provide the first evidence that musicians have superior auditory recognition memory for both musical and nonmusical stimuli, compared to nonmusicians. However, this advantage did not generalize to the visual domain. Previously, we showed that auditory recognition memory is inferior to visual recognition memory. Would this be true even for trained musicians? We compared auditory and visual memory in musicians and nonmusicians using familiar music, spoken English, and visual objects. For both groups, memory for the auditory stimuli was inferior to memory for the visual objects. Thus, although considerable musical training is associated with better musical and nonmusical auditory memory, it does not increase the ability to remember sounds to the levels found with visual stimuli. This suggests a fundamental capacity difference between auditory and visual recognition memory, with a persistent advantage for the visual domain.
Affiliation(s)
- Michael A Cohen, Department of Psychology, Harvard University, Cambridge, MA, USA

65. Parbery-Clark A, Strait DL, Anderson S, Hittner E, Kraus N. Musical experience and the aging auditory system: implications for cognitive abilities and hearing speech in noise. PLoS One 2011; 6:e18082. [PMID: 21589653] [PMCID: PMC3092743] [DOI: 10.1371/journal.pone.0018082]
Abstract
Much of our daily communication occurs in the presence of background noise, compromising our ability to hear. While understanding speech in noise is a challenge for everyone, it becomes increasingly difficult as we age. Although aging is generally accompanied by hearing loss, this perceptual decline cannot fully account for the difficulties experienced by older adults for hearing in noise. Decreased cognitive skills concurrent with reduced perceptual acuity are thought to contribute to the difficulty older adults experience understanding speech in noise. Given that musical experience positively impacts speech perception in noise in young adults (ages 18-30), we asked whether musical experience benefits an older cohort of musicians (ages 45-65), potentially offsetting the age-related decline in speech-in-noise perceptual abilities and associated cognitive function (i.e., working memory). Consistent with performance in young adults, older musicians demonstrated enhanced speech-in-noise perception relative to nonmusicians along with greater auditory, but not visual, working memory capacity. By demonstrating that speech-in-noise perception and related cognitive function are enhanced in older musicians, our results imply that musical training may reduce the impact of age-related auditory decline.
Affiliation(s)
- Alexandra Parbery-Clark, Auditory Neuroscience Laboratory and Communication Sciences, Northwestern University, Evanston, Illinois, United States of America
- Dana L. Strait, Auditory Neuroscience Laboratory and Institute for Neuroscience, Northwestern University, Evanston, Illinois, United States of America
- Samira Anderson, Auditory Neuroscience Laboratory and Communication Sciences, Northwestern University, Evanston, Illinois, United States of America
- Emily Hittner, Auditory Neuroscience Laboratory, Northwestern University, Evanston, Illinois, United States of America
- Nina Kraus, Auditory Neuroscience Laboratory; Communication Sciences; Institute for Neuroscience; Departments of Neurobiology and Physiology; and Otolaryngology, Northwestern University, Evanston, Illinois, United States of America

66. Lutfi RA, Liu CJ, Stoelinga CNJ. Auditory discrimination of force of impact. J Acoust Soc Am 2011; 129:2104-2111. [PMID: 21476666] [PMCID: PMC3097070] [DOI: 10.1121/1.3543969]
Abstract
The auditory discrimination of force of impact was measured for three groups of listeners using sounds synthesized according to first-order equations of motion for the homogeneous, isotropic bar [Morse and Ingard (1968). Theoretical Acoustics, pp. 175-191]. The three groups were professional percussionists, nonmusicians, and individuals recruited from the general population without regard to musical background. In the two-interval, forced-choice procedure, listeners chose the sound corresponding to the greater force of impact as the length of the bar varied from one presentation to the next. From the equations of motion, a maximum-likelihood test for the task was determined to be of the form Δlog A + αΔlog f > 0, where A and f are the amplitude and frequency of any one partial and α = 0.5. Relative decision weights on Δlog f were obtained from the trial-by-trial responses of listeners and compared to α. Percussionists generally outperformed the other groups; however, the obtained decision weights of all listeners deviated significantly from α and showed variability within groups far in excess of the variability associated with replication. Providing correct feedback after each trial had little effect on the decision weights. The variability in these measures was comparable to that seen in studies involving the auditory discrimination of other source attributes.
Affiliation(s)
- Robert A Lutfi, Auditory Behavioral Research Laboratory and Department of Communicative Disorders, University of Wisconsin, Madison, Wisconsin 53706, USA

67. Vinnik E, Itskov PM, Balaban E. Individual differences in sound-in-noise perception are related to the strength of short-latency neural responses to noise. PLoS One 2011; 6:e17266. [PMID: 21387016] [PMCID: PMC3046163] [DOI: 10.1371/journal.pone.0017266]
Abstract
Important sounds can be easily missed or misidentified in the presence of extraneous noise. We describe an auditory illusion in which a continuous ongoing tone becomes inaudible during a brief, non-masking noise burst more than one octave away, which is unexpected given the frequency resolution of human hearing. Participants strongly susceptible to this illusory discontinuity did not perceive illusory auditory continuity (in which a sound subjectively continues during a burst of masking noise) when the noises were short, yet did so at longer noise durations. Participants who were not prone to illusory discontinuity showed robust early electroencephalographic responses at 40-66 ms after noise burst onset, whereas those prone to the illusion lacked these early responses. These data suggest that short-latency neural responses to auditory scene components reflect subsequent individual differences in the parsing of auditory scenes.

68.
Abstract
Although links between music training and cognitive abilities are relatively well-established, unresolved issues include the generality of the association, the direction of causation, and whether the association is mediated by executive function. Musically trained and untrained 9- to 12-year olds were compared on a measure of IQ and five measures of executive function. IQ and executive function were correlated. The musically trained group had higher IQs than their untrained counterparts and the advantage extended across the IQ subtests. The association between music training and executive function was negligible. These results provide no support for the hypothesis that the association between music training and IQ is mediated by executive function. When considered jointly with the available literature, the findings suggest that children with higher IQs are more likely than their lower-IQ counterparts to take music lessons, and to perform well on a variety of tests of cognitive ability except for those measuring executive function.

69. Itoh K, Suwazono S, Nakada T. Central auditory processing of noncontextual consonance in music: an evoked potential study. J Acoust Soc Am 2010; 128:3781-3787. [PMID: 21218909] [DOI: 10.1121/1.3500685]
Abstract
The consonance of individual chords presented out of musical context, or the noncontextual consonance of chords, is usually defined as the absence of roughness, which is a sensation perceived when slightly mistuned frequencies are not clearly resolved in the cochlea. The present work uses evoked potentials to demonstrate that the absence of roughness is not sufficient to explain the entirety of noncontextual consonance perception. Presented with a random sequence of various pure-tone intervals (0-13 semitones), listeners' cerebral cortical activities distinguished these stimuli according to their noncontextual consonance in a manner consistent with standard musical practice, even when the intervals exceeded the critical bandwidth (approximately three semitones). The roughness-based model of noncontextual consonance could not account for this result because these wide intervals had indistinguishably low levels of roughness. Further, this effect was evident only in musicians, indicating plasticity in the underlying neural mechanisms. The results are consistent with the hypothesis that, although the absence of roughness may represent an important aspect of noncontextual consonance, properties of intervals other than those related to roughness also contribute to this perception, underpinned by neural activity in the central auditory system that can be plastically modified by experience.
Affiliation(s)
- Kosuke Itoh
- Center for Integrated Human Brain Science, Brain Research Institute, University of Niigata, Asahimachi 1-757, Niigata 951-8585, Japan.
70
Bidelman GM, Krishnan A. Effects of reverberation on brainstem representation of speech in musicians and non-musicians. Brain Res 2010; 1355:112-25. [PMID: 20691672] [PMCID: PMC2939203] [DOI: 10.1016/j.brainres.2010.07.100]
Abstract
Perceptual and neurophysiological enhancements in linguistic processing in musicians suggest that domain specific experience may enhance neural resources recruited for language specific behaviors. In everyday situations, listeners are faced with extracting speech signals in degraded listening conditions. Here, we examine whether musical training provides resilience to the degradative effects of reverberation on subcortical representations of pitch and formant-related harmonic information of speech. Brainstem frequency-following responses (FFRs) were recorded from musicians and non-musician controls in response to the vowel /i/ in four different levels of reverberation and analyzed based on their spectro-temporal composition. For both groups, reverberation had little effect on the neural encoding of pitch but significantly degraded neural encoding of formant-related harmonics (i.e., vowel quality) suggesting a differential impact on the source-filter components of speech. However, in quiet and across nearly all reverberation conditions, musicians showed more robust responses than non-musicians. Neurophysiologic results were confirmed behaviorally by comparing brainstem spectral magnitudes with perceptual measures of fundamental (F0) and first formant (F1) frequency difference limens (DLs). For both types of discrimination, musicians obtained DLs which were 2-4 times better than non-musicians. Results suggest that musicians' enhanced neural encoding of acoustic features, an experience-dependent effect, is more resistant to reverberation degradation which may explain their enhanced perceptual ability on behaviorally relevant speech and/or music tasks in adverse listening conditions.
Affiliation(s)
- Gavin M. Bidelman
- Department of Speech Language Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
- Ananthanarayan Krishnan
- Department of Speech Language Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
71
Objective and subjective psychophysical measures of auditory stream integration and segregation. J Assoc Res Otolaryngol 2010; 11:709-24. [PMID: 20658165] [DOI: 10.1007/s10162-010-0227-2]
Abstract
The perceptual organization of sound sequences into auditory streams involves the integration of sounds into one stream and the segregation of sounds into separate streams. "Objective" psychophysical measures of auditory streaming can be obtained using behavioral tasks where performance is facilitated by segregation and hampered by integration, or vice versa. Traditionally, these two types of tasks have been tested in separate studies involving different listeners, procedures, and stimuli. Here, we tested subjects in two complementary temporal-gap discrimination tasks involving similar stimuli and procedures. One task was designed so that performance in it would be facilitated by perceptual integration; the other, so that performance would be facilitated by perceptual segregation. Thresholds were measured in both tasks under a wide range of conditions produced by varying three stimulus parameters known to influence stream formation: frequency separation, tone-presentation rate, and sequence length. In addition to these performance-based measures, subjective judgments of perceived segregation were collected in the same listeners under corresponding stimulus conditions. The patterns of results obtained in the two temporal-discrimination tasks, and the relationships between thresholds and perceived-segregation judgments, were mostly consistent with the hypothesis that stream segregation helped performance in one task and impaired performance in the other task. The tasks and stimuli described here may prove useful in future behavioral or neurophysiological experiments, which seek to manipulate and measure neural correlates of auditory streaming while minimizing differences between the physical stimuli.
72
Shi LF, Law Y. Masking effects of speech and music: does the masker's hierarchical structure matter? Int J Audiol 2010; 49:296-308. [PMID: 20151877] [DOI: 10.3109/14992020903350188]
Abstract
Speech and music are time-varying signals organized by parallel hierarchical rules. Through a series of four experiments, this study compared the masking effects of single-talker speech and instrumental music on speech perception while manipulating the complexity of hierarchical and temporal structures of the maskers. Listeners' word recognition was found to be similar between hierarchically intact and disrupted speech or classical music maskers (Experiment 1). When sentences served as the signal, significantly greater masking effects were observed with disrupted than intact speech or classical music maskers (Experiment 2), although not with jazz or serial music maskers, which differed from the classical music masker in their hierarchical structures (Experiment 3). Removing the classical music masker's temporal dynamics or partially restoring it affected listeners' sentence recognition; yet, differences in performance between intact and disrupted maskers remained robust (Experiment 4). Hence, the effect of structural expectancy was largely present across maskers when comparing them before and after their hierarchical structure was purposefully disrupted. This effect seemed to lend support to the auditory stream segregation theory.
Affiliation(s)
- Lu-Feng Shi
- Department of Communication Sciences and Disorders, Long Island University - Brooklyn Campus, New York 11201, USA.
73
Shi LF. Normal-hearing English-as-a-second-language listeners' recognition of English words in competing signals. Int J Audiol 2009; 48:260-70. [PMID: 19842801] [DOI: 10.1080/14992020802607431]
Abstract
English-as-a-second-language (ESL) listeners have difficulty perceiving English speech presented in background noise. The current study extended this line of investigation by including participants who varied widely in their age of English acquisition and length of English learning: 24 native English monolingual (EML), 12 simultaneous bilingual (SBL), 10 early ESL (E-ESL), and 14 late ESL (L-ESL) listeners. Word recognition scores were obtained in quiet and in the presence of speech-weighted noise, multi-talker babble, forward-playing music, and time-reversed music. All words and competing signals were presented at 45 dB HL. EML and SBL listeners' performances were found to be similar across test conditions. ESL listeners, especially L-ESL listeners, performed significantly more poorly in all conditions than EML and SBL listeners. Overall, speech-weighted noise and multi-talker babble showed a greater masking effect than music; however, the difference in performance between L-ESL and EML listeners was largest for the music maskers, indicating that L-ESL listeners are susceptible even to weaker maskers. Age of acquisition and length of learning were both shown to be good indicators of SBL and ESL listeners' performance.
Affiliation(s)
- Lu-Feng Shi
- Department of Communication Sciences & Disorders, Long Island University-Brooklyn Campus, Brooklyn, New York 11201, USA.
74

75
Helfer KS, Freyman RL. Lexical and indexical cues in masking by competing speech. J Acoust Soc Am 2009; 125:447-56. [PMID: 19173430] [PMCID: PMC2736724] [DOI: 10.1121/1.3035837]
Abstract
Three experiments were conducted using the TVM sentences, a new set of stimuli for competing speech research. These open-set sentences incorporate a cue name that allows the experimenter to direct the listener's attention to a target sentence. The first experiment compared the relative efficacy of directing the listener's attention to the cue name versus instructing the subject to listen for a particular talker's voice. Results demonstrated that listeners could use either cue about equally well to find the target sentence. Experiment 2 was designed to determine whether differences in intelligibility among talkers' voices that were noted when three utterances were presented together persisted when each talker's sentences were presented in steady-state noise. Results of experiment 2 showed only minor intelligibility differences between talkers' utterances presented in noise. The final experiment considered how providing accurate and inaccurate information about the target talker's voice influenced speech recognition performance. This voice cue was found to have minimal effect on listeners' ability to understand the target utterance or ignore a masking voice.
Affiliation(s)
- Karen S Helfer
- Department of Communication Disorders, University of Massachusetts Amherst, Amherst, Massachusetts 01003, USA
76
Jones GL, Litovsky RY. Role of masker predictability in the cocktail party problem. J Acoust Soc Am 2008; 124:3818-3830. [PMID: 19206808] [PMCID: PMC2676623] [DOI: 10.1121/1.2996336]
Abstract
In studies of the cocktail party problem, the number and locations of maskers are typically fixed throughout a block of trials, which leaves out uncertainty that exists in real-world environments. The current experiments examined whether there is (1) improved speech intelligibility and (2) increased spatial release from masking (SRM), as predictability of the number/locations of speech maskers is increased. In the first experiment, subjects identified a target word presented at a fixed level in the presence of 0, 1, or 2 maskers as predictability of the masker configuration ranged from 10% to 80%. The second experiment examined speech reception thresholds and SRM as (a) predictability of the masker configuration is increased from 20% to 80% and/or (b) the complexity of the listening environment is decreased. In the third experiment, predictability of the masker configuration was increased from 20% up to 100% while minimizing the onset delay between maskers and the target. All experiments showed no effect of predictability of the masker configuration on speech intelligibility or SRM. These results suggest that knowing the number and location(s) of maskers may not necessarily contribute significantly to solving the cocktail party problem, at least not when the location of the target is known.
Affiliation(s)
- Gary L Jones
- Department of Physiology, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
77
McDermott JH, Oxenham AJ. Music perception, pitch, and the auditory system. Curr Opin Neurobiol 2008; 18:452-63. [PMID: 18824100] [DOI: 10.1016/j.conb.2008.09.005]
Abstract
The perception of music depends on many culture-specific factors, but is also constrained by properties of the auditory system. This has been best characterized for those aspects of music that involve pitch. Pitch sequences are heard in terms of relative as well as absolute pitch. Pitch combinations give rise to emergent properties not present in the component notes. In this review we discuss the basic auditory mechanisms contributing to these and other perceptual effects in music.
Affiliation(s)
- Josh H McDermott
- Department of Psychology, University of Minnesota, United States.
78

79
Freyman RL, Balakrishnan U, Helfer KS. Spatial release from masking with noise-vocoded speech. J Acoust Soc Am 2008; 124:1627-37. [PMID: 19045654] [PMCID: PMC2736712] [DOI: 10.1121/1.2951964]
Abstract
This study investigated how confusability between target and masking utterances affects the masking release achieved through spatial separation. Important distinguishing characteristics between competing voices were removed by processing speech with six-channel envelope vocoding, which simulates some aspects of listening with a cochlear implant. In the first experiment, vocoded target nonsense sentences were presented against two-talker vocoded maskers in conditions that provide different spatial impressions but not reliable cues that lead to traditional release from masking. Surprisingly, no benefit of spatial separation was found. The absence of spatial release was hypothesized to be the result of the highly positive target-to-masker ratios necessary to understand vocoded speech, which may have been sufficient to reduce confusability. In experiment 2, words excised from the vocoded nonsense sentences were presented against the same vocoded two-talker masker in a four-alternative forced-choice detection paradigm where threshold performance was achieved at negative target-to-masker ratios. Here, the spatial release from masking was more than 20 dB. The results suggest the importance of signal-to-noise ratio in the observation of "informational" masking and indicate that careful attention should be paid to this type of masking as implant processing improves and listeners begin to achieve success in poorer listening environments.
Affiliation(s)
- Richard L Freyman
- Department of Communication Disorders, University of Massachusetts, 358 North Pleasant Street, Amherst, Massachusetts 01003, USA.
80
Gallun FJ, Durlach NI, Colburn HS, Shinn-Cunningham BG, Best V, Mason CR, Kidd G. The extent to which a position-based explanation accounts for binaural release from informational masking. J Acoust Soc Am 2008; 124:439-449. [PMID: 18646988] [PMCID: PMC2587211] [DOI: 10.1121/1.2924127]
Abstract
Detection was measured for a 500 Hz tone masked by noise (an "energetic" masker) or sets of ten randomly drawn tones (an "informational" masker). Presenting the maskers diotically and the target tone with a variety of interaural differences (interaural amplitude ratios and/or interaural time delays) resulted in reduced detection thresholds relative to when the target was presented diotically ("binaural release from masking"). Thresholds observed when time and amplitude differences applied to the target were "reinforcing" (favored the same ear, resulting in a lateralized position for the target) were not significantly different from thresholds obtained when differences were "opposing" (favored opposite ears, resulting in a centered position for the target). This irrelevance of differences in the perceived location of the target is a classic result for energetic maskers but had not previously been shown for informational maskers. However, this parallelism between the patterns of binaural release for energetic and informational maskers was not accompanied by high correlations between the patterns for individual listeners, supporting the idea that the mechanisms for binaural release from energetic and informational masking are fundamentally different.
Affiliation(s)
- Frederick J Gallun
- Hearing Research Center, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA.
81
Balakrishnan U, Freyman RL. Speech detection in spatial and nonspatial speech maskers. J Acoust Soc Am 2008; 123:2680-91. [PMID: 18529187] [PMCID: PMC2811546] [DOI: 10.1121/1.2902176]
Abstract
The effect of perceived spatial differences on masking release was examined using a 4AFC speech detection paradigm. Targets were 20 words produced by a female talker. Maskers were recordings of continuous streams of nonsense sentences spoken by two female talkers and mixed into each of two channels (two-talker, and the same masker time-reversed). Two masker spatial conditions were employed: "RF" with a 4 ms time lead to the loudspeaker 60 degrees horizontally to the right, and "FR" with the time lead to the front (0 degrees) loudspeaker. The reference nonspatial "F" masker was presented from the front loudspeaker only. Target presentation was always from the front loudspeaker. In Experiment 1, target detection threshold for both natural and time-reversed spatial maskers was 17-20 dB lower than that for the nonspatial masker, suggesting that significant release from informational masking occurs with spatial speech maskers regardless of masker understandability. In Experiment 2, the effectiveness of the FR and RF maskers was evaluated as the right loudspeaker output was attenuated until the two-source maskers were indistinguishable from the F masker, as measured independently in a discrimination task. Results indicated that spatial release from masking can be observed with barely noticeable target-masker spatial differences.
Affiliation(s)
- Uma Balakrishnan
- Department of Communication Disorders, University of Massachusetts, 358 N. Pleasant Street, Amherst, Massachusetts 01003, USA.
82

83
Gallun FJ, Mason CR, Kidd G. The ability to listen with independent ears. J Acoust Soc Am 2007; 122:2814-2825. [PMID: 18189571] [DOI: 10.1121/1.2780143]
Abstract
In three experiments, listeners identified speech processed into narrow bands and presented to the right ("target") ear. The ability of listeners to ignore (or even use) conflicting contralateral stimulation was examined by presenting various maskers to the target ear ("ipsilateral") and nontarget ear ("contralateral"). Theoretically, an absence of contralateral interference would imply selectively attending to only the target ear; the presence of interference from the contralateral stimulus would imply that listeners were unable to treat the stimuli at the two ears independently; and improved performance in the presence of informative contralateral stimulation would imply that listeners can process the signals at both ears and keep them separate rather than combining them. Experiments showed evidence of the ability to selectively process (or respond to) only the target ear in some, but not all, conditions. No evidence was found for improved performance due to contralateral stimulation. The pattern of interference found across experiments supports an interaction of stimulus-based factors (auditory grouping) and task-based factors (demand for processing resources) and suggests that listeners may not always be able to listen to the "better" ear even when it would be beneficial to do so.
Affiliation(s)
- Frederick J Gallun
- Department of Speech, Language and Hearing Sciences, and Hearing Research Center, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA.
84
Soncini F, Costa MJ. Efeito da prática musical no reconhecimento da fala no silêncio e no ruído [Effect of musical practice on speech recognition in quiet and in noise]. Pró-Fono R Atual Cient 2006; 18:161-70. [PMID: 16927621] [DOI: 10.1590/s0104-56872006000200005]
Abstract
BACKGROUND: auditory training improves the perception of complex acoustic signals such as speech. AIM: to verify whether the auditory training provided by musical practice influences the ability to recognize speech in quiet and in noise. METHOD: 55 individuals with no musical experience (non-musicians) and 45 individuals who had worked as professional musicians in military bands for at least 5 years (musicians) participated in the study. All volunteers were military personnel, male, right-handed, with normal hearing, and aged between 25 and 40 years. Using the Portuguese Sentence Lists test (Listas de Sentenças em Português, LSP), the sentence recognition threshold in quiet and the sentence recognition threshold in noise were measured, and from the latter the signal-to-noise (S/N) ratio was calculated. Sentences and noise (fixed at 65 dB HL) were presented monaurally through earphones. RESULTS: when the performance of the two groups was compared, statistical analysis revealed no significant difference between the mean sentence recognition thresholds in quiet. However, a statistically significant difference was found between the mean S/N ratios. CONCLUSION: in quiet, musicians and non-musicians performed similarly; however, in sentence recognition tasks in competing noise, musicians performed better, indicating that musical practice is an activity that improves the ability to recognize speech in noise.
85
Dye RH, Stellmack MA, Jurcin NF. Observer weighting strategies in interaural time-difference discrimination and monaural level discrimination for a multi-tone complex. J Acoust Soc Am 2005; 117:3079-90. [PMID: 15957776] [DOI: 10.1121/1.1861832]
Abstract
Two experiments measured listeners' abilities to weight information from different components in a complex of 553, 753, and 953 Hz. The goal was to determine whether or not the ability to adjust perceptual weights generalized across tasks. Weights were measured by binary logistic regression between stimulus values that were sampled from Gaussian distributions and listeners' responses. The first task was interaural time discrimination in which listeners judged the laterality of the target component. The second task was monaural level discrimination in which listeners indicated whether the level of the target component decreased or increased across two intervals. For both experiments, each of the three components served as the target. Ten listeners participated in both experiments. The results showed that those individuals who adjusted perceptual weights in the interaural time experiment could also do so in the monaural level discrimination task. The fact that the same individuals appeared to be analytic in both tasks is an indication that the weights measure the ability to attend to a particular region of the spectrum while ignoring other spectral regions.
Affiliation(s)
- Raymond H Dye
- Parmly Hearing Institute, Loyola University of Chicago, Chicago, Illinois 60626, USA
86
Kidd G, Mason CR, Richards VM. Multiple bursts, multiple looks, and stream coherence in the release from informational masking. J Acoust Soc Am 2003; 114:2835-2845. [PMID: 14650018] [DOI: 10.1121/1.1621864]
Abstract
In the simultaneous multitone masking paradigm introduced by Neff and Green [Percept. Psychophys. 41, 409-415 (1987)] the masker typically is a small number of tones having frequencies and levels that are randomly drawn on every presentation. Large amounts of masking for a pure-tone signal often occur that are thought to reflect central, rather than peripheral, limitations on processing. Previous work from this laboratory has indicated that playing a rapid succession of randomly drawn multitone maskers in each observation interval dramatically reduces the amount of masking that is observed relative to a single burst (SB). In this multiple-bursts-different (MBD) procedure, the signal tone is the only constant frequency component during the sequence of bursts and tends to perceptually segregate from the masker. In this study, the number of masker bursts and the interburst interval (IBI) were varied. The goals were to determine how the release from masking relative to the SB condition depends on the number of bursts and to examine whether increasing the IBI would cause each burst to be processed independently. If the latter were true, it might disrupt the perception of signal stream coherence, thereby diminishing the MBD advantage. However, multiple independent looks could also lead to an improvement in performance. For those subjects showing large amounts of informational masking in the SB condition, substantial reduction in masked thresholds occurred as the number of masker bursts increased, while masking increased as IBI lengthened. The results were not consistent with a simple version of a multiple-look model in which the information from each burst was combined optimally, but instead appear to be attributable to mechanisms involved in the perceptual organization of sounds.
Affiliation(s)
- Gerald Kidd
- Hearing Research Center and Programs in Communication Disorders, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA.
87
Durlach NI, Mason CR, Kidd G, Arbogast TL, Colburn HS, Shinn-Cunningham BG. Note on informational masking. J Acoust Soc Am 2003; 113:2984-7. [PMID: 12822768] [DOI: 10.1121/1.1570435]