1.
Heng JG, Zhang J, Bonetti L, Lim WPH, Vuust P, Agres K, Chen SHA. Understanding music and aging through the lens of Bayesian inference. Neurosci Biobehav Rev 2024; 163:105768. PMID: 38908730. DOI: 10.1016/j.neubiorev.2024.105768.
Abstract
Bayesian inference has recently gained momentum in explaining music perception and aging. A fundamental mechanism underlying Bayesian inference is the notion of prediction. This framework could explain how predictions pertaining to musical (melodic, rhythmic, harmonic) structures engender action, emotion, and learning, expanding related concepts of music research, such as musical expectancies, groove, pleasure, and tension. Moreover, a Bayesian perspective of music perception may offer new insights into the beneficial effects of music in aging. Aging could be framed as an optimization process of Bayesian inference. As predictive inferences refine over time, the reliance on consolidated priors increases, while the updating of prior models through Bayesian inference attenuates. This may affect the ability of older adults to estimate uncertainties in their environment, limiting their cognitive and behavioral repertoire. With Bayesian inference as an overarching framework, this review synthesizes the literature on predictive inferences in music and aging, and details how music could be a promising tool in preventive and rehabilitative interventions for older adults through the lens of Bayesian inference.
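The review's core claim, that updating of prior models attenuates as priors consolidate, falls out of standard precision-weighted Bayesian updating. A toy conjugate normal-normal sketch (illustrative only; the variable names and numbers are ours, not the paper's):

```python
def bayes_update(prior_mean, prior_precision, obs, obs_precision):
    """Conjugate normal-normal update: the posterior mean is a
    precision-weighted average of the prior mean and the observation."""
    post_precision = prior_precision + obs_precision
    post_mean = (prior_precision * prior_mean
                 + obs_precision * obs) / post_precision
    return post_mean, post_precision

# A weak (flexible) prior: a surprising observation shifts the estimate a lot.
m_weak, _ = bayes_update(prior_mean=0.0, prior_precision=1.0,
                         obs=10.0, obs_precision=1.0)

# A highly consolidated prior: the same observation barely moves the estimate,
# mirroring the attenuated model updating the review describes in aging.
m_strong, _ = bayes_update(prior_mean=0.0, prior_precision=100.0,
                           obs=10.0, obs_precision=1.0)
```

Here `m_weak` lands halfway between prior and observation, while `m_strong` stays close to the prior despite identical evidence.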
Affiliation(s)
- Jiamin Gladys Heng
- School of Computer Science and Engineering, Nanyang Technological University, Singapore
- Jiayi Zhang
- Interdisciplinary Graduate Program, Nanyang Technological University, Singapore; School of Social Sciences, Nanyang Technological University, Singapore; Centre for Research and Development in Learning, Nanyang Technological University, Singapore
- Leonardo Bonetti
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus, Aalborg, Denmark; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, United Kingdom; Department of Psychiatry, University of Oxford, United Kingdom; Department of Psychology, University of Bologna, Italy
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus, Aalborg, Denmark
- Kat Agres
- Centre for Music and Health, National University of Singapore, Singapore; Yong Siew Toh Conservatory of Music, National University of Singapore, Singapore
- Shen-Hsing Annabel Chen
- School of Social Sciences, Nanyang Technological University, Singapore; Centre for Research and Development in Learning, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; National Institute of Education, Nanyang Technological University, Singapore
2.
Faber SEM, Belden AG, Loui P, McIntosh R. Age-related variability in network engagement during music listening. Netw Neurosci 2023; 7:1404-1419. PMID: 38144689. PMCID: PMC10713012. DOI: 10.1162/netn_a_00333.
Abstract
Listening to music is an enjoyable behaviour that engages multiple networks of brain regions. As such, the act of music listening may offer a way to interrogate network activity, and to examine the reconfigurations of brain networks that have been observed in healthy aging. The present study is an exploratory examination of brain network dynamics during music listening in healthy older and younger adults. Network measures were extracted and analyzed together with behavioural data using a combination of hidden Markov modelling and partial least squares. We found age- and preference-related differences in fMRI data collected during music listening in healthy younger and older adults. Both age groups showed higher occupancy (the proportion of time a network was active) in a temporal-mesolimbic network while listening to self-selected music. Activity in this network was strongly positively correlated with liking and familiarity ratings in younger adults, but less so in older adults. Additionally, older adults showed a higher degree of correlation between liking and familiarity ratings consistent with past behavioural work on age-related dedifferentiation. We conclude that, while older adults do show network and behaviour patterns consistent with dedifferentiation, activity in the temporal-mesolimbic network is relatively robust to dedifferentiation. These findings may help explain how music listening remains meaningful and rewarding in old age.
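The occupancy measure used here, "the proportion of time a network was active", reduces to a simple count over the per-timepoint state sequence that a fitted hidden Markov model assigns. A minimal sketch (the state labels below are hypothetical, not the study's data):

```python
def fractional_occupancy(state_sequence, n_states):
    """Proportion of time points assigned to each hidden state,
    i.e. the fraction of time each network was active."""
    counts = [0] * n_states
    for state in state_sequence:
        counts[state] += 1
    return [c / len(state_sequence) for c in counts]

# Hypothetical HMM state labels for 8 fMRI time points; state 0 might be
# the temporal-mesolimbic network, states 1-2 other networks.
occupancy = fractional_occupancy([0, 0, 1, 2, 1, 0, 0, 0], n_states=3)
```

Higher occupancy for state 0 during self-selected music would correspond to the pattern reported in both age groups.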
Affiliation(s)
- Sarah E. M. Faber
- University of Toronto, Toronto, ON, Canada
- Simon Fraser University, Burnaby, BC, Canada
3.
Tichko P, Page N, Kim JC, Large EW, Loui P. Neural Entrainment to Musical Pulse in Naturalistic Music Is Preserved in Aging: Implications for Music-Based Interventions. Brain Sci 2022; 12:1676. PMID: 36552136. PMCID: PMC9775503. DOI: 10.3390/brainsci12121676.
Abstract
Neural entrainment to musical rhythm is thought to underlie the perception and production of music. In aging populations, the strength of neural entrainment to rhythm has been found to be attenuated, particularly during attentive listening to auditory streams. However, previous studies on neural entrainment to rhythm and aging have often employed artificial auditory rhythms or limited pieces of recorded, naturalistic music, failing to account for the diversity of rhythmic structures found in natural music. As part of a larger project assessing a novel music-based intervention for healthy aging, we investigated neural entrainment to musical rhythms in the electroencephalogram (EEG) while participants listened to self-selected musical recordings across a sample of younger and older adults. We specifically measured neural entrainment to the level of musical pulse, quantified here as the phase-locking value (PLV), after normalizing the PLVs to each musical recording's detected pulse frequency. As predicted, we observed strong neural phase-locking to musical pulse, and to the sub-harmonic and harmonic levels of musical meter. Overall, PLVs were not significantly different between older and younger adults. This preserved neural entrainment to musical pulse and rhythm could support the design of music-based interventions that aim to modulate endogenous brain activity via self-selected music for healthy cognitive aging.
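The phase-locking value named in this abstract is, in its standard form, the magnitude of the mean unit phasor of instantaneous phases at the analysis frequency. A stdlib-only sketch of that definition (toy phase values, not the study's EEG pipeline):

```python
import cmath
import math

def phase_locking_value(phases):
    """PLV: magnitude of the mean unit phasor. 1.0 means perfectly
    consistent phase at the analysis frequency; values near 0 mean
    the phases are spread uniformly around the circle."""
    mean_vector = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return abs(mean_vector)

# Strong entrainment: the EEG phase at the pulse frequency is identical
# across all 50 epochs.
plv_locked = phase_locking_value([0.3] * 50)

# No entrainment: phases are spread evenly around the circle.
plv_random = phase_locking_value([2 * math.pi * k / 50 for k in range(50)])
```

In practice the phases would come from a filter-plus-Hilbert or wavelet decomposition at each recording's detected pulse frequency; that normalization step is what the abstract describes.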
Affiliation(s)
- Parker Tichko
- Department of Music, Northeastern University, Boston, MA 02115, USA
- Nicole Page
- Department of Music, Northeastern University, Boston, MA 02115, USA
- Ji Chul Kim
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
- Edward W. Large
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
- Psyche Loui
- Department of Music, Northeastern University, Boston, MA 02115, USA
4.
Abstract
Music is ubiquitous. Although most people find music enjoyable, there are individual differences in the degree to which listeners derive pleasure from music. However, there has been little focus on how musical reward may change across the lifespan. Some theories predict that there would be little change, or even an increase in musical reward across the lifespan, while others suggest that older adults may have decreased capacity for musical reward. Here, we investigated musical reward across the lifespan. Participants consisted of American adults ranging from 20 to 85 years old (n = 20 participants in each 10-year age bin). Participants in Study 1 completed the Barcelona Music Reward Questionnaire (BMRQ), which is a multi-dimensional assessment of musical reward. We found a negative correlation between age and BMRQ scores, suggesting decreases in musical reward across the lifespan. When investigating which components were driving this effect, we found that the music seeking subscale was the strongest predictor of age. Participants in Study 2 completed the Aesthetic Experiences in Music Scale (AES-M), which focuses on intense emotional responses to music. In contrast to the BMRQ, we found no relationship between age and scores on the AES-M, suggesting that strong emotional responses to music are consistent across the lifespan. These results have implications for the use of music as a therapeutic tool in older adults. In addition, this work points to the importance of considering age when investigating reward for music and suggests that the ways individuals experience music may change across the lifespan.
Affiliation(s)
- Amy M Belfi
- Department of Psychological Science, Missouri University of Science and Technology, Rolla, MO, USA
- Georgina L Moreno
- Department of Psychology, University of Houston - Clear Lake, Houston, TX, USA
- Maria Gugliano
- Department of Psychological Science, Missouri University of Science and Technology, Rolla, MO, USA
- Claire Neill
- Department of Psychological Science, Missouri University of Science and Technology, Rolla, MO, USA
5.
Carcagno S, Plack CJ. Effects of age on psychophysical measures of auditory temporal processing and speech reception at low and high levels. Hear Res 2020; 400:108117. PMID: 33253994. PMCID: PMC7812372. DOI: 10.1016/j.heares.2020.108117.
Abstract
Highlights: We found little evidence of greater age-related hearing declines at high sound levels. Age-related temporal-processing declines exist independent of hearing loss. There was no evidence of age-related speech-reception deficits independent of hearing loss.
Age-related cochlear synaptopathy (CS) has been shown to occur in rodents with minimal noise exposure, and has been hypothesized to play a crucial role in age-related hearing declines in humans. It is not known to what extent age-related CS occurs in humans, and how it affects the coding of supra-threshold sounds and speech in noise. Because in rodents CS affects mainly low- and medium-spontaneous rate (L/M-SR) auditory-nerve fibers with rate-level functions covering medium-high levels, it should lead to greater deficits in the processing of sounds at high than at low stimulus levels. In this cross-sectional study the performance of 102 listeners across the age range (34 young, 34 middle-aged, 34 older) was assessed in a set of psychophysical temporal processing and speech reception in noise tests at both low, and high stimulus levels. Mixed-effect multiple regression models were used to estimate the effects of age while partialing out effects of audiometric thresholds, lifetime noise exposure, cognitive abilities (assessed with additional tests), and musical experience. Age was independently associated with performance deficits on several tests. However, only for one out of 13 tests were age effects credibly larger at the high compared to the low stimulus level. Overall these results do not provide much evidence that age-related CS, to the extent to which it may occur in humans according to the rodent model of greater L/M-SR synaptic loss, has substantial effects on psychophysical measures of auditory temporal processing or on speech reception in noise.
Affiliation(s)
- Samuele Carcagno
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, United Kingdom
- Christopher J Plack
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, United Kingdom; Manchester Centre for Audiology and Deafness, University of Manchester, Academic Health Science Centre, M13 9PL, United Kingdom
6.
Kessler DM, Ananthakrishnan S, Smith SB, D'Onofrio K, Gifford RH. Frequency Following Response and Speech Recognition Benefit for Combining a Cochlear Implant and Contralateral Hearing Aid. Trends Hear 2020; 24:2331216520902001. PMID: 32003296. PMCID: PMC7257083. DOI: 10.1177/2331216520902001.
Abstract
Multiple studies have shown significant speech recognition benefit when acoustic hearing is combined with a cochlear implant (CI) for a bimodal hearing configuration. However, this benefit varies greatly between individuals. There are few clinical measures correlated with bimodal benefit, and those correlations are driven by extreme values, prohibiting data-driven clinical counseling. This study evaluated the relationship between neural representation of fundamental frequency (F0) and temporal fine structure via the frequency following response (FFR) in the nonimplanted ear as well as spectral and temporal resolution of the nonimplanted ear and bimodal benefit for speech recognition in quiet and noise. Participants included 14 unilateral CI users who wore a hearing aid (HA) in the nonimplanted ear. Testing included speech recognition in quiet and in noise with the HA-alone, CI-alone, and in the bimodal condition (i.e., CI + HA), measures of spectral and temporal resolution in the nonimplanted ear, and FFR recording for a 170-ms /da/ stimulus in the nonimplanted ear. Even after controlling for four-frequency pure-tone average, there was a significant correlation (r = .83) between FFR F0 amplitude in the nonimplanted ear and bimodal benefit. Other measures of auditory function of the nonimplanted ear were not significantly correlated with bimodal benefit. The FFR holds potential as an objective tool that may allow data-driven counseling regarding expected benefit from the nonimplanted ear. It is possible that this information may eventually be used for clinical decision-making, particularly in difficult-to-test populations such as young children, regarding effectiveness of bimodal hearing versus bilateral CI candidacy.
Affiliation(s)
- David M Kessler
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Spencer B Smith
- Department of Communication Sciences and Disorders, The University of Texas at Austin, TX, USA
- Kristen D'Onofrio
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- René H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, USA
7.
Siedenburg K, Röttges S, Wagener KC, Hohmann V. Can You Hear Out the Melody? Testing Musical Scene Perception in Young Normal-Hearing and Older Hearing-Impaired Listeners. Trends Hear 2020; 24:2331216520945826. PMID: 32895034. PMCID: PMC7502688. DOI: 10.1177/2331216520945826.
Abstract
It is well known that hearing loss compromises auditory scene analysis abilities, as is usually manifested in difficulties of understanding speech in noise. Remarkably little is known about auditory scene analysis of hearing-impaired (HI) listeners when it comes to musical sounds. Specifically, it is unclear to which extent HI listeners are able to hear out a melody or an instrument from a musical mixture. Here, we tested a group of younger normal-hearing (yNH) and older HI (oHI) listeners with moderate hearing loss in their ability to match short melodies and instruments presented as part of mixtures. Four-tone sequences were used in conjunction with a simple musical accompaniment that acted as a masker (cello/piano dyads or spectrally matched noise). In each trial, a signal-masker mixture was presented, followed by two different versions of the signal alone. Listeners indicated which signal version was part of the mixture. Signal versions differed either in terms of the sequential order of the pitch sequence or in terms of timbre (flute vs. trumpet). Signal-to-masker thresholds were measured by varying the signal presentation level in an adaptive two-down/one-up procedure. We observed that thresholds of oHI listeners were elevated by 10 dB on average compared with those of yNH listeners. In contrast to yNH listeners, oHI listeners did not show evidence of listening in dips of the masker. Musical training of participants was associated with a lowering of thresholds. These results may indicate detrimental effects of hearing loss on central aspects of musical scene perception.
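The adaptive two-down/one-up rule used to measure thresholds in this study follows a standard logic: lower the signal level after two consecutive correct responses, raise it after any incorrect one, which converges on the 70.7%-correct point of the psychometric function. A minimal sketch (the response sequence and step size are hypothetical):

```python
def two_down_one_up(responses, start_level, step):
    """Adaptive track: decrease the signal level after two consecutive
    correct responses, increase it after any incorrect response.
    Converges on the 70.7%-correct point (Levitt's rule)."""
    level = start_level
    run = 0
    levels = [level]
    for correct in responses:
        if correct:
            run += 1
            if run == 2:      # two in a row correct -> make the task harder
                level -= step
                run = 0
        else:                 # any error -> make the task easier
            level += step
            run = 0
        levels.append(level)
    return levels

# Hypothetical trial-by-trial correctness for one listener:
track = two_down_one_up([True, True, True, False, True, True],
                        start_level=0, step=2)
```

The threshold is typically taken as the mean level at the last several reversal points of such a track.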
Affiliation(s)
- Kai Siedenburg
- Department of Medical Physics and Acoustics and Cluster of Excellence Hearing4all, Carl von Ossietzky University of Oldenburg
- Saskia Röttges
- Department of Medical Physics and Acoustics and Cluster of Excellence Hearing4all, Carl von Ossietzky University of Oldenburg
- Volker Hohmann
- Department of Medical Physics and Acoustics and Cluster of Excellence Hearing4all, Carl von Ossietzky University of Oldenburg; Hörzentrum Oldenburg GmbH & Hörtech gGmbH, Oldenburg, Germany
8.
Sattari K, Rahbar N, Ahadi M, Haghani H. The effects of a temporal processing-based auditory training program on the auditory skills of elderly users of hearing aids: a study protocol for a randomized clinical trial. F1000Res 2020; 9:425. PMID: 32595959. PMCID: PMC7308962. DOI: 10.12688/f1000research.22757.2.
Abstract
Background: One of the most important effects of age-related declines in neural processing speed is the impairment of temporal resolution, which leads to difficulty hearing in noisy environments. Since the central auditory system is highly plastic, by designing and implementing a temporal processing-based auditory training program, we can help the elderly improve their listening skills and speech understanding in noisy environments.
Methods: In the first phase of this research, based on the theoretical framework of temporal processing, an auditory training solution was developed as a software program. In the second phase, which will be described in the present study, the effects of the designed program on the listening skills of elderly users of hearing aids (age: 60-75 years) will be studied in the control and intervention groups. In the intervention group, the auditory training program will be implemented for three months (36 sessions), and the results of central tests (GIN, DPT, QuickSIN) and the electrophysiological speech-ABR test will be compared in both groups before, immediately after, and one month after the intervention.
Discussion: Since temporal processing is not sufficiently addressed in auditory training programs for the elderly with hearing impairments, implementation of a temporal processing-based auditory training program can reduce hearing problems in noisy environments among elderly users of hearing aids.
Trial registration: This study was registered as a clinical trial in the Iranian Registry of Clinical Trials (IRCT20190921044838N1) on December 25, 2019.
Affiliation(s)
- Karim Sattari
- Department of Audiology, Rehabilitation Research Center, School of Rehabilitation Sciences, Iran University of Medical Sciences, Tehran, Iran
- Nariman Rahbar
- Department of Audiology, Rehabilitation Research Center, School of Rehabilitation Sciences, Iran University of Medical Sciences, Tehran, Iran
- Mohsen Ahadi
- Department of Audiology, Rehabilitation Research Center, School of Rehabilitation Sciences, Iran University of Medical Sciences, Tehran, Iran
- Hamid Haghani
- Department of Biostatistics, School of Management and Information Technology, Iran University of Medical Sciences, Tehran, Iran
9.
Carcagno S, Lakhani S, Plack CJ. Consonance perception beyond the traditional existence region of pitch. J Acoust Soc Am 2019; 146:2279. PMID: 31671967. DOI: 10.1121/1.5127845.
Abstract
Some theories posit that the perception of consonance is based on neural periodicity detection, which is dependent on accurate phase locking of auditory nerve fibers to features of the stimulus waveform. In the current study, 15 listeners were asked to rate the pleasantness of complex tone dyads (two-note chords) forming various harmonic intervals and bandpass filtered in a high-frequency region (all components >5.8 kHz), where phase locking to the rapid stimulus fine structure is thought to be severely degraded or absent. The two notes were presented to opposite ears. Consonant intervals (minor third and perfect fifth) received higher ratings than dissonant intervals (minor second and tritone). The results could not be explained in terms of phase locking to the slower waveform envelope because the preference for consonant intervals was higher when the stimuli were harmonic, compared to a condition in which they were made inharmonic by shifting their component frequencies by a constant offset, so as to preserve their envelope periodicity. Overall, the results indicate that, if phase locking is indeed absent at frequencies greater than ∼5 kHz, neural periodicity detection is not necessary for the perception of consonance.
Affiliation(s)
- Samuele Carcagno
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, United Kingdom
- Saday Lakhani
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, United Kingdom
- Christopher J Plack
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, United Kingdom
10.
Carcagno S, Bucknall R, Woodhouse J, Fritz C, Plack CJ. Effect of back wood choice on the perceived quality of steel-string acoustic guitars. J Acoust Soc Am 2018; 144:3533. PMID: 30599660. DOI: 10.1121/1.5084735.
Abstract
Some of the most prized woods used for the backs and sides of acoustic guitars are expensive, rare, and from unsustainable sources. It is unclear to what extent back woods contribute to the sound and playability qualities of acoustic guitars. Six steel-string acoustic guitars were built for this study to the same design and material specifications, except for the back/side plates, which were made of woods varying widely in availability and price (Brazilian rosewood, Indian rosewood, mahogany, maple, sapele, and walnut). Bridge-admittance measurements revealed small differences between the modal properties of the guitars, which could be largely attributed to residual manufacturing variability rather than to the back/side plates. Overall sound quality ratings, given by 52 guitarists in a dimly lit room while wearing welder's goggles to prevent visual identification, were very similar between the six guitars. The results of a blinded ABX discrimination test, performed by another subset of 31 guitarists, indicate that guitarists could not easily distinguish the guitars by their sound or feel. Overall, the results suggest that the species of wood used for the back and sides of a steel-string acoustic guitar has only a marginal impact on its body mode properties and perceived sound.
Affiliation(s)
- Samuele Carcagno
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, United Kingdom
- Jim Woodhouse
- Engineering Department, Cambridge University, Cambridge, CB2 1PZ, United Kingdom
- Claudia Fritz
- Sorbonne Université, Centre National de la Recherche Scientifique, Institut Jean Le Rond d'Alembert, 75005, Paris, France
- Christopher J Plack
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, United Kingdom
11.
Prendergast G, Millman RE, Guest H, Munro KJ, Kluk K, Dewey RS, Hall DA, Heinz MG, Plack CJ. Effects of noise exposure on young adults with normal audiograms II: Behavioral measures. Hear Res 2017; 356:74-86. PMID: 29126651. PMCID: PMC5714059. DOI: 10.1016/j.heares.2017.10.007.
Abstract
An estimate of lifetime noise exposure was used as the primary predictor of performance on a range of behavioral tasks: frequency and intensity difference limens, amplitude modulation detection, interaural phase discrimination, the digit triplet speech test, the co-ordinate response speech measure, an auditory localization task, a musical consonance task and a subjective report of hearing ability. One hundred and thirty-eight participants (81 females) aged 18-36 years were tested, with a wide range of self-reported noise exposure. All had normal pure-tone audiograms up to 8 kHz. It was predicted that increased lifetime noise exposure, which we assume to be concordant with noise-induced cochlear synaptopathy, would elevate behavioral thresholds, in particular for stimuli with high levels in a high spectral region. However, the results showed little effect of noise exposure on performance. There were a number of weak relations with noise exposure across the test battery, although many of these were in the opposite direction to the predictions, and none were statistically significant after correction for multiple comparisons. There were also no strong correlations between electrophysiological measures of synaptopathy published previously and the behavioral measures reported here. Consistent with our previous electrophysiological results, the present results provide no evidence that noise exposure is related to significant perceptual deficits in young listeners with normal audiometric hearing. It is possible that the effects of noise-induced cochlear synaptopathy are only measurable in humans with extreme noise exposures, and that these effects always co-occur with a loss of audiometric sensitivity.
Affiliation(s)
- Garreth Prendergast
- Manchester Centre for Audiology and Deafness, University of Manchester, Manchester Academic Health Science Centre, M13 9PL, UK
- Rebecca E Millman
- Manchester Centre for Audiology and Deafness, University of Manchester, Manchester Academic Health Science Centre, M13 9PL, UK; NIHR Manchester Biomedical Research Centre, Central Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, M13 9WL, UK
- Hannah Guest
- Manchester Centre for Audiology and Deafness, University of Manchester, Manchester Academic Health Science Centre, M13 9PL, UK
- Kevin J Munro
- Manchester Centre for Audiology and Deafness, University of Manchester, Manchester Academic Health Science Centre, M13 9PL, UK; NIHR Manchester Biomedical Research Centre, Central Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, M13 9WL, UK
- Karolina Kluk
- Manchester Centre for Audiology and Deafness, University of Manchester, Manchester Academic Health Science Centre, M13 9PL, UK; NIHR Manchester Biomedical Research Centre, Central Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, M13 9WL, UK
- Rebecca S Dewey
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, Nottingham, NG7 2RD, UK; National Institute for Health Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, NG1 5DU, UK; Otology and Hearing Group, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, NG7 2UH, UK
- Deborah A Hall
- National Institute for Health Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, NG1 5DU, UK; Otology and Hearing Group, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, NG7 2UH, UK
- Michael G Heinz
- Department of Speech, Language, & Hearing Sciences and Biomedical Engineering, Purdue University, West Lafayette, IN, 47907, USA
- Christopher J Plack
- Manchester Centre for Audiology and Deafness, University of Manchester, Manchester Academic Health Science Centre, M13 9PL, UK; NIHR Manchester Biomedical Research Centre, Central Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, M13 9WL, UK; Department of Psychology, Lancaster University, Lancaster, LA1 4YF, UK
12.
Moreno-Gómez FN, Véliz G, Rojas M, Martínez C, Olmedo R, Panussis F, Dagnino-Subiabre A, Delgado C, Delano PH. Music Training and Education Slow the Deterioration of Music Perception Produced by Presbycusis in the Elderly. Front Aging Neurosci 2017; 9:149. PMID: 28579956. PMCID: PMC5437118. DOI: 10.3389/fnagi.2017.00149.
Abstract
The perception of music depends on the normal function of the peripheral and central auditory system. Aged subjects without hearing loss have altered music perception, including pitch and temporal features. Presbycusis or age-related hearing loss is a frequent condition in elderly people, produced by neurodegenerative processes that affect the cochlear receptor cells and brain circuits involved in auditory perception. Clinically, presbycusis patients have bilateral high-frequency hearing loss and deteriorated speech intelligibility. Music impairments in presbycusis subjects can be attributed to the normal aging processes and to presbycusis neuropathological changes. However, whether presbycusis further impairs music perception remains controversial. Here, we developed a computerized version of the Montreal battery of evaluation of amusia (MBEA) and assessed music perception in 175 Chilean adults aged between 18 and 90 years without hearing complaints and in symptomatic presbycusis patients. We give normative data for MBEA performance in a Latin-American population, showing age and educational effects. In addition, we found that symptomatic presbycusis was the most relevant factor determining global MBEA accuracy in aged subjects. Moreover, we show that melodic impairments in presbycusis individuals were diminished by music training, while performance in temporal tasks was affected by the educational level and music training. We conclude that music training and education are important factors as they can slow the deterioration of music perception produced by age-related hearing loss.
Collapse
Affiliation(s)
- Felipe N. Moreno-Gómez
- Laboratorio de Neurobiología de la Audición, Programa de Fisiología y Biofísica, Instituto de Ciencias Biomédicas (ICBM), Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Auditory and Cognition Center, AUCO, Santiago, Chile
- Departamento de Biología y Química, Facultad de Ciencias Básicas, Universidad Católica del Maule, Talca, Chile
| | - Guillermo Véliz
- Laboratorio de Neurobiología de la Audición, Programa de Fisiología y Biofísica, Instituto de Ciencias Biomédicas (ICBM), Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Departamento de Otorrinolaringología, Hospital Clínico de la Universidad de Chile, Santiago, Chile
| | - Marcos Rojas
- Laboratorio de Neurobiología de la Audición, Programa de Fisiología y Biofísica, Instituto de Ciencias Biomédicas (ICBM), Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Departamento de Otorrinolaringología, Hospital Clínico de la Universidad de Chile, Santiago, Chile
| | - Cristián Martínez
- Departamento de Otorrinolaringología, Hospital Clínico de la Universidad de Chile, Santiago, Chile
| | - Rubén Olmedo
- Departamento de Otorrinolaringología, Hospital Clínico de la Universidad de Chile, Santiago, Chile
| | - Felipe Panussis
- Departamento de Otorrinolaringología, Hospital Clínico de la Universidad de Chile, Santiago, Chile
| | - Alexies Dagnino-Subiabre
- Auditory and Cognition Center, AUCO, Santiago, Chile
- Laboratorio de Neurobiología del Stress, Centro de Neurobiología y Plasticidad Cerebral (CNPC), Instituto de Fisiología, Facultad de Ciencias, Universidad de Valparaíso, Valparaíso, Chile
| | - Carolina Delgado
- Auditory and Cognition Center, AUCO, Santiago, Chile
- Departamento Neurología y Neurocirugía, Hospital Clínico de la Universidad de Chile, Santiago, Chile
| | - Paul H. Delano
- Laboratorio de Neurobiología de la Audición, Programa de Fisiología y Biofísica, Instituto de Ciencias Biomédicas (ICBM), Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Auditory and Cognition Center, AUCO, Santiago, Chile
- Departamento de Otorrinolaringología, Hospital Clínico de la Universidad de Chile, Santiago, Chile
| |
Collapse
|
13
|
Ananthakrishnan S, Krishnan A, Bartlett E. Human Frequency Following Response: Neural Representation of Envelope and Temporal Fine Structure in Listeners with Normal Hearing and Sensorineural Hearing Loss. Ear Hear 2016; 37:e91-e103. [PMID: 26583482 DOI: 10.1097/aud.0000000000000247] [Citation(s) in RCA: 46] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVE Listeners with sensorineural hearing loss (SNHL) typically experience reduced speech perception, which is not completely restored with amplification. This likely occurs because cochlear damage, in addition to elevating audiometric thresholds, alters the neural representation of speech transmitted to higher centers along the auditory neuroaxis. While the deleterious effects of SNHL on speech perception in humans have been well-documented using behavioral paradigms, our understanding of the neural correlates underlying these perceptual deficits remains limited. Using the scalp-recorded frequency following response (FFR), the authors examine the effects of SNHL and aging on subcortical neural representation of acoustic features important for pitch and speech perception, namely the periodicity envelope (F0) and temporal fine structure (TFS; formant structure), as reflected in the phase-locked neural activity generating the FFR. DESIGN FFRs were obtained from 10 listeners with normal hearing (NH) and 9 listeners with mild-moderate SNHL in response to a steady-state English back vowel /u/ presented at multiple intensity levels. Use of multiple presentation levels facilitated comparisons at equal sound pressure level (SPL) and equal sensation level. In a second follow-up experiment to address the effect of age on envelope and TFS representation, FFRs were obtained from 25 NH and 19 listeners with mild to moderately severe SNHL to the same vowel stimulus presented at 80 dB SPL. Temporal waveforms, Fast Fourier Transform and spectrograms were used to evaluate the magnitude of the phase-locked activity at F0 (periodicity envelope) and F1 (TFS). RESULTS Neural representation of both envelope (F0) and TFS (F1) at equal SPLs was stronger in NH listeners compared with listeners with SNHL. Also, comparison of neural representation of F0 and F1 across stimulus levels expressed in SPL and sensation level (accounting for audibility) revealed that level-related changes in F0 and F1 magnitude were different for listeners with SNHL compared with listeners with NH. Furthermore, the degradation in subcortical neural representation was observed to persist in listeners with SNHL even when the effects of age were controlled for. CONCLUSIONS Overall, our results suggest a relatively greater degradation in the neural representation of TFS compared with periodicity envelope in individuals with SNHL. This degraded neural representation of TFS in SNHL, as reflected in the brainstem FFR, may reflect a disruption in the temporal pattern of phase-locked neural activity arising from altered tonotopic maps and/or wider filters causing poor frequency selectivity in these listeners. Finally, while preliminary results indicate that the deleterious effects of SNHL may be greater than age-related degradation in subcortical neural representation, the lack of a balanced age-matched control group in this study does not permit us to completely rule out the effects of age on subcortical neural representation.
Collapse
Affiliation(s)
- Saradha Ananthakrishnan
- Department of Speech Language Hearing Sciences, Purdue University, West Lafayette, Indiana, USA; Department of Audiology, Speech-Language Pathology and Deaf Studies, Towson University, Towson, Maryland, USA; Department of Biomedical Engineering, Purdue University, West Lafayette, Indiana, USA; and Department of Biological Sciences, Purdue University, West Lafayette, Indiana, USA
| | | | | |
Collapse
|
14
|
Plack CJ, Léger A, Prendergast G, Kluk K, Guest H, Munro KJ. Toward a Diagnostic Test for Hidden Hearing Loss. Trends Hear 2016; 20:2331216516657466. [PMID: 27604783 PMCID: PMC5017571 DOI: 10.1177/2331216516657466] [Citation(s) in RCA: 51] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2015] [Revised: 01/25/2016] [Accepted: 03/02/2016] [Indexed: 11/16/2022] Open
Abstract
Cochlear synaptopathy (or hidden hearing loss), due to noise exposure or aging, has been demonstrated in animal models using histological techniques. However, diagnosis of the condition in individual humans is problematic because of (a) test reliability and (b) lack of a gold standard validation measure. Wave I of the transient-evoked auditory brainstem response is a noninvasive electrophysiological measure of auditory nerve function and has been validated in the animal models. However, in humans, Wave I amplitude shows high variability both between and within individuals. The frequency-following response, a sustained evoked potential reflecting synchronous neural activity in the rostral brainstem, is potentially more robust than auditory brainstem response Wave I. However, the frequency-following response is a measure of central activity and may be dependent on individual differences in central processing. Psychophysical measures are also affected by intersubject variability in central processing. Differential measures may help to reduce intersubject variability due to unrelated factors. A measure can be compared, within an individual, between conditions that are affected differently by cochlear synaptopathy. Validation of the metrics is also an issue. Comparisons with animal models, computational modeling, auditory nerve imaging, and human temporal bone histology are all potential options for validation, but there are technical and practical hurdles and difficulties in interpretation. Despite the obstacles, a diagnostic test for hidden hearing loss is a worthwhile goal, with important implications for clinical practice and health surveillance.
Collapse
|
15
|
Jeong E, Ryu H. Melodic Contour Identification Reflects the Cognitive Threshold of Aging. Front Aging Neurosci 2016; 8:134. [PMID: 27378907 PMCID: PMC4904015 DOI: 10.3389/fnagi.2016.00134] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2016] [Accepted: 05/27/2016] [Indexed: 01/16/2023] Open
Abstract
Cognitive decline is a natural phenomenon of aging. Although there exists a consensus that sensitivity to acoustic features of music is associated with such decline, no solid evidence has yet shown that structural elements and contexts of music explain this loss of cognitive performance. This study examined the extent and the type of cognitive decline that is related to the contour identification task (CIT) using tones with different pitches (i.e., melodic contours). Both younger and older adult groups participated in the CIT given in three listening conditions (i.e., focused, selective, and alternating). Behavioral data (accuracy and response times) and hemodynamic reactions were measured using functional near-infrared spectroscopy (fNIRS). Our findings showed cognitive declines in the older adult group but with a subtle difference from the younger adult group. The accuracy of the melodic CITs given in the target-like distraction task (CIT2) was significantly lower than that in the environmental noise (CIT1) condition in the older adult group, indicating that CIT2 may be a benchmark test for age-specific cognitive decline. The fNIRS findings also agreed with this interpretation, revealing significant increases in oxygenated hemoglobin (oxyHb) concentration in the younger (p < 0.05 for Δpre–on task; p < 0.01 for Δon–post task) rather than the older adult group (n.s. for Δpre–on task; n.s. for Δon–post task). We further concluded that the oxyHb difference was present in the brain regions near the right dorsolateral prefrontal cortex. Taken together, these findings suggest that CIT2 (i.e., the melodic contour task in the target-like distraction) is an optimized task that could indicate the degree and type of age-related cognitive decline.
Collapse
Affiliation(s)
- Eunju Jeong
- Department of Arts and Technology, Hanyang University, Seoul, South Korea
| | - Hokyoung Ryu
- Department of Arts and Technology, Hanyang University, Seoul, South Korea
| |
Collapse
|
16
|
Cortical contributions to the auditory frequency-following response revealed by MEG. Nat Commun 2016; 7:11070. [PMID: 27009409 PMCID: PMC4820836 DOI: 10.1038/ncomms11070] [Citation(s) in RCA: 243] [Impact Index Per Article: 30.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2015] [Accepted: 02/17/2016] [Indexed: 11/09/2022] Open
Abstract
The auditory frequency-following response (FFR) to complex periodic sounds is used to study the subcortical auditory system, and has been proposed as a biomarker for disorders that feature abnormal sound processing. Despite its value in fundamental and clinical research, the neural origins of the FFR are unclear. Using magnetoencephalography, we observe a strong, right-asymmetric contribution to the FFR from the human auditory cortex at the fundamental frequency of the stimulus, in addition to signal from cochlear nucleus, inferior colliculus and medial geniculate. This finding is highly relevant for our understanding of plasticity and pathology in the auditory system, as well as higher-level cognition such as speech and music processing. It suggests that previous interpretations of the FFR may need re-examination using methods that allow for source separation. Auditory brainstem response (ABR) is used to study temporal encoding of auditory information in music and language. This study utilizes magnetoencephalography to localize both cortical and subcortical origins of the sustained frequency following response (FFR), the ABR component that encodes the periodicity of sound.
Collapse
|
17
|
|
18
|
On the Relevance of Natural Stimuli for the Study of Brainstem Correlates: The Example of Consonance Perception. PLoS One 2015; 10:e0145439. [PMID: 26720000 PMCID: PMC4697839 DOI: 10.1371/journal.pone.0145439] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2015] [Accepted: 12/03/2015] [Indexed: 11/19/2022] Open
Abstract
Some combinations of musical tones sound pleasing to Western listeners, and are termed consonant, while others sound discordant, and are termed dissonant. The perceptual phenomenon of consonance has been traced to the acoustic property of harmonicity. It has been repeatedly shown that neural correlates of consonance can be found as early as the auditory brainstem as reflected in the harmonicity of the scalp-recorded frequency-following response (FFR). “Neural Pitch Salience” (NPS) measured from FFRs—essentially a time-domain equivalent of the classic pattern recognition models of pitch—has been found to correlate with behavioral judgments of consonance for synthetic stimuli. Following the idea that the auditory system has evolved to process behaviorally relevant natural sounds, and in order to test the generalizability of this finding made with synthetic tones, we recorded FFRs for consonant and dissonant intervals composed of synthetic and natural stimuli. We found that NPS correlated with behavioral judgments of consonance and dissonance for synthetic but not for naturalistic sounds. These results suggest that while some form of harmonicity can be computed from the auditory brainstem response, the general percept of consonance and dissonance is not captured by this measure. It might either be represented in the brainstem in a different code (such as place code) or arise at higher levels of the auditory pathway. Our findings further illustrate the importance of using natural sounds, as a complementary tool to fully-controlled synthetic sounds, when probing auditory perception.
Collapse
|