1. Vinay, Moore BCJ. Exploiting individual differences to assess the role of place and phase locking cues in auditory frequency discrimination at 2 kHz. Sci Rep 2023; 13:13801. PMID: 37612303; PMCID: PMC10447419; DOI: 10.1038/s41598-023-40571-1.
Abstract
The relative role of place and temporal mechanisms in auditory frequency discrimination was assessed for a centre frequency of 2 kHz. Four measures of frequency discrimination were obtained for 63 normal-hearing participants: detection of frequency modulation using modulation rates of 2 Hz (FM2) and 20 Hz (FM20); detection of a change in frequency across successive pure tones (difference limen for frequency, DLF); and detection of changes in the temporal fine structure of bandpass filtered complex tones centred at 2 kHz (TFS). Previous work has suggested that: FM2 depends on the use of both temporal and place cues; FM20 depends primarily on the use of place cues because the temporal mechanism cannot track rapid changes in frequency; DLF depends primarily on temporal cues; TFS depends exclusively on temporal cues. This led to the following predicted patterns of the correlations of scores across participants: DLF and TFS should be highly correlated; FM2 should be correlated with DLF and TFS; FM20 should not be correlated with DLF or TFS. The results were broadly consistent with these predictions and with the idea that frequency discrimination at 2 kHz depends partly or primarily on temporal cues except for frequency modulation detection at a high rate.
Affiliation(s)
- Vinay
- Audiology Group, Department of Neuromedicine and Movement Science, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology (NTNU), Tungasletta 2, 7491, Trondheim, Norway.
- Brian C J Moore
- Cambridge Hearing Group, Department of Psychology, University of Cambridge, Cambridge, UK

2. Carter JA, Bidelman GM. Perceptual warping exposes categorical representations for speech in human brainstem responses. Neuroimage 2023; 269:119899. PMID: 36720437; PMCID: PMC9992300; DOI: 10.1016/j.neuroimage.2023.119899.
Abstract
The brain transforms continuous acoustic events into discrete category representations to downsample the speech signal for our perceptual-cognitive systems. Such phonetic categories are highly malleable, and their percepts can change depending on the surrounding stimulus context. Previous work suggests that this acoustic-phonetic mapping and the perceptual warping of speech emerge in the brain no earlier than auditory cortex. Here, we examined whether these auditory-category phenomena inherent to speech perception occur even earlier in the human brain, at the level of the auditory brainstem. We recorded speech-evoked frequency-following responses (FFRs) during a task designed to induce more or less warping of listeners' perceptual categories depending on the stimulus presentation order of a speech continuum (random, forward, or backward directions). We used a novel clustered stimulus paradigm to rapidly record the high trial counts needed for FFRs concurrent with active behavioral tasks. We found that serial stimulus order caused perceptual shifts (hysteresis) near listeners' category boundary, confirming that identical speech tokens are perceived differently depending on stimulus context. Critically, we further show that neural FFRs during active (but not passive) listening are enhanced for prototypical vs. category-ambiguous tokens and are biased in the direction of listeners' phonetic label even for acoustically identical speech stimuli. These findings were observed neither in the stimulus acoustics nor in model FFRs generated via a computational model of cochlear and auditory nerve transduction, confirming a central origin for the effects. Our data reveal that FFRs carry category-level information and suggest that top-down processing actively shapes the neural encoding and categorization of speech at subcortical levels. These findings suggest that the acoustic-phonetic mapping and perceptual warping in speech perception occur surprisingly early along the auditory neuroaxis, which might aid understanding by reducing the ambiguity inherent to the speech signal.
Affiliation(s)
- Jared A Carter
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Division of Clinical Neuroscience, School of Medicine, Hearing Sciences - Scottish Section, University of Nottingham, Glasgow, Scotland, UK
- Gavin M Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA; Program in Neuroscience, Indiana University, Bloomington, IN, USA.

3. Di Stefano N, Vuust P, Brattico E. Consonance and dissonance perception. A critical review of the historical sources, multidisciplinary findings, and main hypotheses. Phys Life Rev 2022; 43:273-304. PMID: 36372030; DOI: 10.1016/j.plrev.2022.10.004.
Abstract
Revealed more than two millennia ago by Pythagoras, consonance and dissonance (C/D) are foundational concepts in music theory, perception, and aesthetics. The search for the biological, acoustical, and cultural factors that affect C/D perception has resulted in descriptive accounts inspired by arithmetic, musicological, psychoacoustical, or neurobiological frameworks, without reaching a consensus. Here, we review the key historical sources and modern multidisciplinary findings on C/D and integrate them into three main hypotheses: the vocal similarity hypothesis (VSH), the psychocultural hypothesis (PH), and the sensorimotor hypothesis (SH). In illustrating the findings related to each hypothesis, we highlight their major conceptual, methodological, and terminological shortcomings. In an attempt to provide a unitary framework for understanding C/D, we bring together multidisciplinary research on human and animal vocalizations, which converges to suggest that auditory roughness is associated with distress/danger and therefore elicits defensive behavioral reactions and neural responses indicative of aversion. We therefore stress the primacy of vocality and roughness as key factors in explaining the C/D phenomenon, and we explore the (neuro)biological underpinnings of the attraction-aversion mechanisms triggered by C/D stimuli. Based on the reviewed evidence, while the aversive nature of dissonance appears solidly rooted in the multidisciplinary findings, the attractive nature of consonance remains a somewhat speculative claim that needs further investigation. Finally, we outline future directions for empirical research on C/D, especially regarding cross-modal and cross-cultural approaches.
Affiliation(s)
- Nicola Di Stefano
- Institute for Cognitive Sciences and Technologies (ISTC), National Research Council of Italy (CNR), Via San Martino della Battaglia 44, 00185 Rome, Italy.
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & Royal Academy of Music Aarhus/Aalborg (RAMA), 8000 Aarhus, Denmark.
- Elvira Brattico
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & Royal Academy of Music Aarhus/Aalborg (RAMA), 8000 Aarhus, Denmark; Department of Education, Psychology, Communication, University of Bari Aldo Moro, 70122 Bari, Italy.

4. Spence C, Di Stefano N. Crossmodal Harmony: Looking for the Meaning of Harmony Beyond Hearing. Iperception 2022; 13:20416695211073817. PMID: 35186248; PMCID: PMC8850342; DOI: 10.1177/20416695211073817.
Abstract
The notion of harmony was first developed in the context of metaphysics before being applied to the domain of music. However, in recent centuries, the term has often been used to describe especially pleasing combinations of colors by those working in the visual arts too. Similarly, the harmonization of flavors is nowadays often invoked as one of the guiding principles underpinning the deliberate pairing of food and drink. However, beyond the various uses of the term to describe and construct pleasurable unisensory perceptual experiences, it has also been suggested that music and painting may be combined harmoniously (e.g., see the literature on “color music”). Furthermore, those working in the area of “sonic seasoning” sometimes describe certain sonic compositions as harmonizing crossmodally with specific flavor sensations. In this review, we take a critical look at the putative meaning(s) of the term “harmony” when used in a crossmodal, or multisensory, context. Furthermore, we address the question of whether the term's use outside of a strictly unimodal auditory context should be considered literally or merely metaphorically (i.e., as a shorthand to describe those combinations of sensory stimuli that, for whatever reason, appear to go well together, and hence which can be processed especially fluently).
Affiliation(s)
- Charles Spence
- Crossmodal Research Laboratory, University of Oxford, Oxford, UK
- Nicola Di Stefano
- Institute for Cognitive Sciences and Technologies, National Research Council of Italy (CNR), Rome, Italy

5. Carcagno S, Plack CJ. Relations between speech-reception, psychophysical temporal processing, and subcortical electrophysiological measures of auditory function in humans. Hear Res 2022; 417:108456. PMID: 35149333; PMCID: PMC8935383; DOI: 10.1016/j.heares.2022.108456.

6. Carcagno S, Plack CJ. Effects of age on psychophysical measures of auditory temporal processing and speech reception at low and high levels. Hear Res 2020; 400:108117. PMID: 33253994; PMCID: PMC7812372; DOI: 10.1016/j.heares.2020.108117.
Abstract
We found little evidence of greater age-related hearing declines at high sound levels. There were age-related temporal-processing declines independent of hearing loss, but no evidence of age-related speech-reception deficits independent of hearing loss.
Age-related cochlear synaptopathy (CS) has been shown to occur in rodents with minimal noise exposure, and has been hypothesized to play a crucial role in age-related hearing declines in humans. It is not known to what extent age-related CS occurs in humans, or how it affects the coding of supra-threshold sounds and speech in noise. Because in rodents CS mainly affects low- and medium-spontaneous-rate (L/M-SR) auditory-nerve fibers, whose rate-level functions cover medium-to-high levels, it should lead to greater deficits in the processing of sounds at high than at low stimulus levels. In this cross-sectional study, the performance of 102 listeners across the adult age range (34 young, 34 middle-aged, 34 older) was assessed on a set of psychophysical temporal-processing and speech-reception-in-noise tests at both low and high stimulus levels. Mixed-effects multiple regression models were used to estimate the effects of age while partialing out the effects of audiometric thresholds, lifetime noise exposure, cognitive abilities (assessed with additional tests), and musical experience. Age was independently associated with performance deficits on several tests. However, for only one of 13 tests were age effects credibly larger at the high than at the low stimulus level. Overall, these results do not provide much evidence that age-related CS, to the extent that it occurs in humans according to the rodent model of greater L/M-SR synaptic loss, has substantial effects on psychophysical measures of auditory temporal processing or on speech reception in noise.
Affiliation(s)
- Samuele Carcagno
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, United Kingdom.
- Christopher J Plack
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, United Kingdom; Manchester Centre for Audiology and Deafness, University of Manchester, Academic Health Science Centre, M13 9PL, United Kingdom

7. Perceptual fusion of musical notes by native Amazonians suggests universal representations of musical intervals. Nat Commun 2020; 11:2786. PMID: 32493923; PMCID: PMC7270137; DOI: 10.1038/s41467-020-16448-6.
Abstract
Music perception is plausibly constrained by universal perceptual mechanisms adapted to natural sounds. Such constraints could arise from our dependence on harmonic frequency spectra for segregating concurrent sounds, but evidence has been circumstantial. We measured the extent to which concurrent musical notes are misperceived as a single sound, testing Westerners as well as native Amazonians with limited exposure to Western music. Both groups were more likely to mistake note combinations related by simple integer ratios as single sounds (‘fusion’). Thus, even with little exposure to Western harmony, acoustic constraints on sound segregation appear to induce perceptual structure on note combinations. However, fusion did not predict aesthetic judgments of intervals in Westerners, or in Amazonians, who were indifferent to consonance/dissonance. The results suggest universal perceptual mechanisms that could help explain cross-cultural regularities in musical systems, but indicate that these mechanisms interact with culture-specific influences to produce musical phenomena such as consonance. Music varies across cultures, but some features are widespread, consistent with biological constraints. Here, the authors report that both Western and native Amazonian listeners perceptually fuse concurrent notes related by simple-integer ratios, suggestive of one such biological constraint.

8. Cultural familiarity and musical expertise impact the pleasantness of consonance/dissonance but not its perceived tension. Sci Rep 2020; 10:8693. PMID: 32457382; PMCID: PMC7250829; DOI: 10.1038/s41598-020-65615-8.
Abstract
The contrast between consonance and dissonance is vital in making music emotionally meaningful. Consonance typically denotes perceived agreeableness and stability, while dissonance denotes disagreeableness and a need for resolution. This study addresses the perception of consonance/dissonance in single intervals and chords through two empirical experiments conducted online. Experiment 1 explored the perception of a representative sample of intervals and chords to investigate the overlap between the seven concepts (Consonance, Smoothness, Purity, Harmoniousness, Tension, Pleasantness, Preference) most used to denote consonance/dissonance in all available (60) empirical studies published since 1883. The results show that the concepts exhibit high correlations, albeit somewhat lower for non-musicians than for musicians. In Experiment 2, the stimuli's cultural familiarity was divided into three levels, and the correlations between the key concepts of Consonance, Tension, Harmoniousness, Pleasantness, and Preference were further examined. Cultural familiarity affected the correlations drastically for both musicians and non-musicians, but in different ways. Tension maintained relatively high correlations with Consonance across musical expertise and cultural familiarity levels, making it a useful concept for studies addressing both musicians and non-musicians. On the basis of the results, controlling for cultural familiarity and musical expertise is recommended for all studies investigating consonance/dissonance perception.

9. Carcagno S, Lakhani S, Plack CJ. Consonance perception beyond the traditional existence region of pitch. J Acoust Soc Am 2019; 146:2279. PMID: 31671967; DOI: 10.1121/1.5127845.
Abstract
Some theories posit that the perception of consonance is based on neural periodicity detection, which depends on accurate phase locking of auditory nerve fibers to features of the stimulus waveform. In the current study, 15 listeners were asked to rate the pleasantness of complex-tone dyads (two-note chords) forming various harmonic intervals and bandpass filtered in a high-frequency region (all components >5.8 kHz), where phase locking to the rapid stimulus fine structure is thought to be severely degraded or absent. The two notes were presented to opposite ears. Consonant intervals (minor third and perfect fifth) received higher ratings than dissonant intervals (minor second and tritone). The results could not be explained in terms of phase locking to the slower waveform envelope, because the preference for consonant intervals was higher when the stimuli were harmonic than in a condition in which they were made inharmonic by shifting their component frequencies by a constant offset so as to preserve their envelope periodicity. Overall, the results indicate that, if phase locking is indeed absent at frequencies greater than ∼5 kHz, neural periodicity detection is not necessary for the perception of consonance.
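For readers unfamiliar with the interval terminology, the consonant and dissonant intervals named in this abstract correspond to simpler vs. more complex frequency ratios. The sketch below is a generic music-theory illustration, not code from the study; the just-intonation ratios are conventional textbook values:

```python
# Just-intonation frequency ratios for the four intervals used in the study,
# alongside their 12-tone equal-tempered approximations (2^(semitones/12)).
# Consonant intervals have simpler integer ratios than dissonant ones.
INTERVALS = {
    # name: (semitones, just ratio as (numerator, denominator))
    "minor second":  (1, (16, 15)),   # dissonant
    "minor third":   (3, (6, 5)),     # consonant
    "tritone":       (6, (45, 32)),   # dissonant
    "perfect fifth": (7, (3, 2)),     # consonant
}

def equal_tempered_ratio(semitones: int) -> float:
    """Frequency ratio of an interval in 12-tone equal temperament."""
    return 2.0 ** (semitones / 12.0)

if __name__ == "__main__":
    for name, (st, (num, den)) in INTERVALS.items():
        print(f"{name:14s} just {num}/{den} = {num / den:.4f}, "
              f"equal-tempered {equal_tempered_ratio(st):.4f}")
```

Note how close the equal-tempered fifth (about 1.4983) sits to the just 3/2 ratio, whereas the tritone has no comparably simple ratio.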
Affiliation(s)
- Samuele Carcagno
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, United Kingdom
- Saday Lakhani
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, United Kingdom
- Christopher J Plack
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, United Kingdom

10. The pleasantness of sensory dissonance is mediated by musical style and expertise. Sci Rep 2019; 9:1070. PMID: 30705379; PMCID: PMC6355932; DOI: 10.1038/s41598-018-35873-8.
Abstract
Western musical styles use a large variety of chords and vertical sonorities. Based on objective acoustical properties, chords can be situated on a dissonant-consonant continuum. While this might to some extent converge with the unpleasant-pleasant continuum, subjective liking might diverge for various chord forms from music across different styles. Our study aimed to investigate how well appraisals of the roughness and pleasantness dimensions of isolated chords taken from real-world music are predicted by Parncutt's established model of sensory dissonance. Furthermore, we related these subjective ratings to the style of origin and acoustical features of the chords, as well as to the musical sophistication of the raters. Ratings were obtained for chords deemed representative of the harmonic language of three different musical styles (classical, jazz, and avant-garde music), plus randomly generated chords. Results indicate that pleasantness and roughness ratings were, on average, mirror opposites; however, their relative distribution differed greatly across styles, reflecting different underlying aesthetic ideals. Parncutt's model only weakly predicted ratings for all but the classical chords, suggesting that listeners' appraisal of the dissonance and pleasantness of chords depends not only on stimulus-side but also on listener-side factors. Indeed, we found that the level of musical sophistication negatively predicted listeners' tendency to rate the consonance and pleasantness of any one chord as coupled measures, suggesting that musical education and expertise may individuate how these musical dimensions are apprehended.

11. Carcagno S, Bucknall R, Woodhouse J, Fritz C, Plack CJ. Effect of back wood choice on the perceived quality of steel-string acoustic guitars. J Acoust Soc Am 2018; 144:3533. PMID: 30599660; DOI: 10.1121/1.5084735.
Abstract
Some of the most prized woods used for the backs and sides of acoustic guitars are expensive, rare, and from unsustainable sources. It is unclear to what extent back woods contribute to the sound and playability qualities of acoustic guitars. Six steel-string acoustic guitars were built for this study to the same design and material specifications except for the back/side plates which were made of woods varying widely in availability and price (Brazilian rosewood, Indian rosewood, mahogany, maple, sapele, and walnut). Bridge-admittance measurements revealed small differences between the modal properties of the guitars which could be largely attributed to residual manufacturing variability rather than to the back/side plates. Overall sound quality ratings, given by 52 guitarists in a dimly lit room while wearing welder's goggles to prevent visual identification, were very similar between the six guitars. The results of a blinded ABX discrimination test, performed by another subset of 31 guitarists, indicate that guitarists could not easily distinguish the guitars by their sound or feel. Overall, the results suggest that the species of wood used for the back and sides of a steel-string acoustic guitar has only a marginal impact on its body mode properties and perceived sound.
Affiliation(s)
- Samuele Carcagno
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, United Kingdom
- Jim Woodhouse
- Engineering Department, Cambridge University, Cambridge, CB2 1PZ, United Kingdom
- Claudia Fritz
- Sorbonne Université, Centre National de la Recherche Scientifique, Institut Jean Le Rond d'Alembert, 75005, Paris, France
- Christopher J Plack
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, United Kingdom

12. Peng F, Innes-Brown H, McKay CM, Fallon JB, Zhou Y, Wang X, Hu N, Hou W. Temporal Coding of Voice Pitch Contours in Mandarin Tones. Front Neural Circuits 2018; 12:55. PMID: 30087597; PMCID: PMC6066958; DOI: 10.3389/fncir.2018.00055.
Abstract
Accurate perception of time-variant pitch is important for speech recognition, particularly for tonal languages such as Mandarin, in which different lexical tones convey different semantic information. Previous studies reported that the auditory nerve and cochlear nucleus can encode different pitches through phase-locked neural activity. However, little is known about how the inferior colliculus (IC) encodes the time-variant periodicity pitch of natural speech. In this study, the Mandarin syllable /ba/ pronounced with each of the four lexical tones (flat, rising, falling-then-rising, and falling) was used as the stimulus set. Local field potentials (LFPs) and single-neuron activity were simultaneously recorded from 90 sites within the contralateral IC of six urethane-anesthetized and decerebrate guinea pigs in response to the four stimuli. Analysis of the temporal information in the LFPs showed that 93% of the LFPs exhibited robust encoding of periodicity pitch. Pitch strength of the LFPs, derived from the autocorrelogram, was significantly (p < 0.001) stronger for rising tones than for flat and falling tones. Pitch strength also increased significantly (p < 0.05) with characteristic frequency (CF). On the other hand, only 47% (42 of 90) of single neurons were significantly synchronized to the fundamental frequency of the stimulus, suggesting that the temporal spiking patterns of single IC neurons can encode the time-variant periodicity pitch of speech robustly. The difference between the number of LFPs and the number of single neurons that encode the time-variant F0 voice pitch supports the notion of a transition, at the level of the IC, from direct temporal coding in the spike trains of individual neurons to other forms of neural representation.
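The autocorrelogram-derived "pitch strength" mentioned above is, in essence, the height of the normalized autocorrelation peak at the best candidate period. A minimal pure-Python sketch of that computation follows; the sample rate, search range, and test signal are illustrative assumptions, not the study's parameters:

```python
import math

def autocorr_pitch_strength(x, fs, f_lo=50.0, f_hi=400.0):
    """Estimate (f0, pitch_strength) from the normalized autocorrelation.

    pitch_strength is the autocorrelation peak height (~0 for aperiodic
    signals, ~1 for strongly periodic ones) at the best candidate period
    searched between 1/f_hi and 1/f_lo seconds.
    """
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]          # remove DC offset
    energy = sum(v * v for v in x)
    best_lag, best_r = 1, -1.0
    for lag in range(max(1, int(fs / f_hi)), min(n - 1, int(fs / f_lo)) + 1):
        r = sum(x[i] * x[i + lag] for i in range(n - lag)) / energy
        if r > best_r:
            best_lag, best_r = lag, r
    return fs / best_lag, best_r

if __name__ == "__main__":
    fs = 8000
    # A 100 Hz tone: expect f0 near 100 Hz and pitch strength near 1.
    tone = [math.sin(2 * math.pi * 100 * t / fs) for t in range(4000)]
    f0, strength = autocorr_pitch_strength(tone, fs)
    print(f"f0 = {f0:.1f} Hz, pitch strength = {strength:.2f}")
```

A time-variant pitch contour, as in the study, would be tracked by applying this measure to short overlapping windows of the signal.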
Affiliation(s)
- Fei Peng
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China
- Collaborative Innovation Center for Brain Science, Chongqing University, Chongqing, China
- Hamish Innes-Brown
- Bionics Institute, East Melbourne, VIC, Australia
- Medical Bionics Department, University of Melbourne, Melbourne, VIC, Australia
- Colette M. McKay
- Bionics Institute, East Melbourne, VIC, Australia
- Medical Bionics Department, University of Melbourne, Melbourne, VIC, Australia
- James B. Fallon
- Bionics Institute, East Melbourne, VIC, Australia
- Medical Bionics Department, University of Melbourne, Melbourne, VIC, Australia
- Department of Otolaryngology, University of Melbourne, Melbourne, VIC, Australia
- Yi Zhou
- Chongqing Key Laboratory of Neurobiology, Department of Neurobiology, Third Military Medical University, Chongqing, China
- Xing Wang
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China
- Chongqing Medical Electronics Engineering Technology Research Center, Chongqing University, Chongqing, China
- Ning Hu
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China
- Collaborative Innovation Center for Brain Science, Chongqing University, Chongqing, China
- Wensheng Hou
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China
- Collaborative Innovation Center for Brain Science, Chongqing University, Chongqing, China
- Chongqing Medical Electronics Engineering Technology Research Center, Chongqing University, Chongqing, China

13. Bidelman GM. Subcortical sources dominate the neuroelectric auditory frequency-following response to speech. Neuroimage 2018; 175:56-69. PMID: 29604459; DOI: 10.1016/j.neuroimage.2018.03.060.
Abstract
Frequency-following responses (FFRs) are neurophonic potentials that provide a window into the encoding of complex sounds (e.g., speech/music), auditory disorders, and neuroplasticity. While the neural origins of the FFR remain debated, controversy has reemerged after demonstrations that FFRs recorded via magnetoencephalography (MEG) are dominated by cortical rather than brainstem structures, as previously assumed. Here, we recorded high-density (64-channel) FFRs via EEG and applied state-of-the-art source imaging techniques to the multichannel data (discrete dipole modeling, distributed imaging, independent component analysis, computational simulations). Our data confirm a mixture of generators localized to the bilateral auditory nerve (AN), brainstem inferior colliculus (BS), and bilateral primary auditory cortex (PAC). However, frequency-specific scrutiny of the source waveforms showed that the relative contribution of these nuclei to the aggregate FFR varied across stimulus frequencies. Whereas AN and BS sources produced robust FFRs up to ∼700 Hz, PAC showed weak phase locking with little FFR energy above the speech fundamental (100 Hz). Notably, CLARA imaging further showed that PAC activation was eradicated for FFRs >150 Hz, above which only subcortical sources remained active. Our results show that (i) the site of FFR generation varies critically with stimulus frequency; and (ii) opposite to the pattern observed in MEG, subcortical structures make the largest contribution to electrically recorded FFRs (AN ≥ BS > PAC). We infer that the cortical dominance observed in previous neuromagnetic data is likely due to the bias of MEG toward superficial brain tissue, underestimating the subcortical structures that drive most of the speech-FFR. Cleanly separating subcortical from cortical FFRs can be achieved by ensuring stimulus frequencies are >150-200 Hz, above the phase-locking limit of cortical neurons.
Affiliation(s)
- Gavin M Bidelman
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; University of Tennessee Health Science Center, Department of Anatomy and Neurobiology, Memphis, TN, USA.

14. Bader R. Cochlear spike synchronization and neuron coincidence detection model. Chaos 2018; 28:023105. PMID: 29495673; DOI: 10.1063/1.5011450.
Abstract
Coincidence detection of a spike pattern fed from the cochlea into a single neuron is investigated using a physical finite-difference model of the cochlea and a physiologically motivated neuron model. Previous studies have shown experimental evidence of increased spike synchronization in the nucleus cochlearis and the trapezoid body [Joris et al., J. Neurophysiol. 71(3), 1022-1036 and 1037-1051 (1994)], and models show tone-partial phase synchronization at the transition from mechanical waves on the basilar membrane into spike patterns [Ch. F. Babbs, J. Biophys. 2011, 435135]. Still, the traveling speed of waves on the basilar membrane causes a frequency-dependent time delay of simultaneously incoming sound wavefronts of up to 10 ms. The present model shows nearly perfect synchronization of multiple spike inputs into neuron outputs with interspike intervals (ISIs) at the periodicity of the incoming sound, for frequencies from about 30 to 300 Hz and for two different numbers of afferent nerve fiber inputs. Coincidence detection here serves as a fusion of multiple inputs into one single event, enhancing pitch periodicity detection for low frequencies, impulse detection, and sound or speech intelligibility through dereverberation.
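The fusion-by-coincidence idea described in this abstract can be caricatured by a unit that emits one output spike whenever enough delayed input spikes arrive within a short window, so that the output interspike intervals recover the input periodicity. This is a schematic illustration under assumed parameters (window, threshold, delays), not Bader's finite-difference model:

```python
def coincidence_detect(input_trains, window, threshold):
    """Fuse several spike trains into one output train.

    Emits an output spike whenever at least `threshold` input spikes
    (pooled across trains) fall within `window` seconds of each other;
    the pool is cleared after each output so one volley yields one spike.
    """
    events = sorted(t for train in input_trains for t in train)
    out, group = [], []
    for t in events:
        group = [g for g in group if t - g <= window] + [t]
        if len(group) >= threshold:
            out.append(t)
            group = []
    return out

if __name__ == "__main__":
    period = 0.01                      # 100 Hz periodicity in every fiber
    delays = [0.0, 0.001, 0.002]       # stand-ins for travel delays
    trains = [[d + k * period for k in range(20)] for d in delays]
    spikes = coincidence_detect(trains, window=0.003, threshold=3)
    isis = [b - a for a, b in zip(spikes, spikes[1:])]
    # Despite the per-fiber delays, output ISIs sit at the 10 ms period.
    print(f"{len(spikes)} output spikes, first ISI = {isis[0] * 1000:.1f} ms")
```

The design choice mirrors the abstract's point: multiple delayed inputs are fused into a single output event per cycle, preserving the stimulus periodicity in the output ISIs.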
Affiliation(s)
- Rolf Bader
- Institute of Systematic Musicology, University of Hamburg, Neue Rabenstr. 13, 20354 Hamburg, Germany
15
Prendergast G, Millman RE, Guest H, Munro KJ, Kluk K, Dewey RS, Hall DA, Heinz MG, Plack CJ. Effects of noise exposure on young adults with normal audiograms II: Behavioral measures. Hear Res 2017; 356:74-86. [PMID: 29126651 PMCID: PMC5714059 DOI: 10.1016/j.heares.2017.10.007] [Citation(s) in RCA: 78] [Impact Index Per Article: 11.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/10/2017] [Revised: 10/17/2017] [Accepted: 10/23/2017] [Indexed: 12/24/2022]
Abstract
An estimate of lifetime noise exposure was used as the primary predictor of performance on a range of behavioral tasks: frequency and intensity difference limens, amplitude modulation detection, interaural phase discrimination, the digit triplet speech test, the co-ordinate response speech measure, an auditory localization task, a musical consonance task and a subjective report of hearing ability. One hundred and thirty-eight participants (81 females) aged 18-36 years were tested, with a wide range of self-reported noise exposure. All had normal pure-tone audiograms up to 8 kHz. It was predicted that increased lifetime noise exposure, which we assume to be concordant with noise-induced cochlear synaptopathy, would elevate behavioral thresholds, in particular for stimuli with high levels in a high spectral region. However, the results showed little effect of noise exposure on performance. There were a number of weak relations with noise exposure across the test battery, although many of these were in the opposite direction to the predictions, and none were statistically significant after correction for multiple comparisons. There were also no strong correlations between electrophysiological measures of synaptopathy published previously and the behavioral measures reported here. Consistent with our previous electrophysiological results, the present results provide no evidence that noise exposure is related to significant perceptual deficits in young listeners with normal audiometric hearing. It is possible that the effects of noise-induced cochlear synaptopathy are only measurable in humans with extreme noise exposures, and that these effects always co-occur with a loss of audiometric sensitivity.
Affiliation(s)
- Garreth Prendergast
- Manchester Centre for Audiology and Deafness, University of Manchester, Manchester Academic Health Science Centre, M13 9PL, UK.
- Rebecca E Millman
- Manchester Centre for Audiology and Deafness, University of Manchester, Manchester Academic Health Science Centre, M13 9PL, UK; NIHR Manchester Biomedical Research Centre, Central Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, M13 9WL, UK
- Hannah Guest
- Manchester Centre for Audiology and Deafness, University of Manchester, Manchester Academic Health Science Centre, M13 9PL, UK
- Kevin J Munro
- Manchester Centre for Audiology and Deafness, University of Manchester, Manchester Academic Health Science Centre, M13 9PL, UK; NIHR Manchester Biomedical Research Centre, Central Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, M13 9WL, UK
- Karolina Kluk
- Manchester Centre for Audiology and Deafness, University of Manchester, Manchester Academic Health Science Centre, M13 9PL, UK; NIHR Manchester Biomedical Research Centre, Central Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, M13 9WL, UK
- Rebecca S Dewey
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham Nottingham, NG7 2RD, UK; National Institute for Health Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, NG1 5DU, UK; Otology and Hearing Group, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, NG7 2UH, UK
- Deborah A Hall
- National Institute for Health Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, NG1 5DU, UK; Otology and Hearing Group, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, NG7 2UH, UK
- Michael G Heinz
- Department of Speech, Language, & Hearing Sciences and Biomedical Engineering, Purdue University, West Lafayette, IN, 47907, USA
- Christopher J Plack
- Manchester Centre for Audiology and Deafness, University of Manchester, Manchester Academic Health Science Centre, M13 9PL, UK; NIHR Manchester Biomedical Research Centre, Central Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, M13 9WL, UK; Department of Psychology, Lancaster University, Lancaster, LA1 4YF, UK
16
Cortical Correlates of the Auditory Frequency-Following and Onset Responses: EEG and fMRI Evidence. J Neurosci 2017; 37:830-838. [PMID: 28123019 DOI: 10.1523/jneurosci.1265-16.2016] [Citation(s) in RCA: 70] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2016] [Revised: 11/01/2016] [Accepted: 11/06/2016] [Indexed: 11/21/2022] Open
Abstract
The frequency-following response (FFR) is a measure of the brain's periodic sound encoding. It is of increasing importance for studying the human auditory nervous system due to numerous associations with auditory cognition and dysfunction. Although the FFR is widely interpreted as originating from brainstem nuclei, a recent study using MEG suggested that there is also a right-lateralized contribution from the auditory cortex at the fundamental frequency (Coffey et al., 2016b). Our objectives in the present work were to validate and better localize this result using a completely different neuroimaging modality and to document the relationships between the FFR, the onset response, and cortical activity. Using a combination of EEG, fMRI, and diffusion-weighted imaging, we show that activity in the right auditory cortex is related to individual differences in FFR-fundamental frequency (f0) strength, a finding that was replicated with two independent stimulus sets, with and without acoustic energy at the fundamental frequency. We demonstrate a dissociation between this FFR-f0-sensitive response in the right auditory cortex and an area in the left auditory cortex that is sensitive to individual differences in the timing of the initial response to sound onset. Relationships to timing and their lateralization are supported by parallels in the microstructure of the underlying white matter, implicating a mechanism involving neural conduction efficiency. These data confirm that the FFR has a cortical contribution and suggest ways in which auditory neuroscience may be advanced by connecting early sound representation to measures of higher-level sound processing and cognitive function. SIGNIFICANCE STATEMENT The frequency-following response (FFR) is an EEG signal that is used to explore how the auditory system encodes temporal regularities in sound and is related to differences in auditory function between individuals. It is known that brainstem nuclei contribute to the FFR, but recent findings of an additional cortical source are more controversial. Here, we use fMRI to validate and extend the prediction from MEG data of a right auditory cortex contribution to the FFR. We also demonstrate a dissociation of FFR-related cortical activity from that related to the latency of the response to sound onset, which is found in the left auditory cortex. The findings provide a clearer picture of the cortical processes involved in the analysis of sound features.
17
Kim SG, Lepsien J, Fritz TH, Mildner T, Mueller K. Dissonance encoding in human inferior colliculus covaries with individual differences in dislike of dissonant music. Sci Rep 2017; 7:5726. [PMID: 28720776 PMCID: PMC5516034 DOI: 10.1038/s41598-017-06105-2] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2017] [Accepted: 06/09/2017] [Indexed: 12/20/2022] Open
Abstract
Harmony is one of the most fundamental elements of music that evoke an emotional response. The inferior colliculus (IC) is known to detect poor agreement among the harmonics of a sound, that is, dissonance. Electrophysiological evidence has implicated a relationship between a sustained auditory response, arising mainly from the brainstem, and the unpleasant emotion induced by dissonant harmony. Interestingly, an individual's dislike of dissonant harmony correlated with a reduced sustained auditory response. In the current paper, we report novel evidence based on functional magnetic resonance imaging (fMRI) for such a relationship between individual variability in dislike of dissonance and IC activation. Furthermore, for the first time, we show how dissonant harmony modulates functional connectivity of the IC and its association with behaviourally reported unpleasantness. The current findings support important contributions of low-level auditory processing and corticofugal interaction to musical harmony preference.
Affiliation(s)
- Seung-Goo Kim
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Jöran Lepsien
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Thomas Hans Fritz
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Institute for Psychoacoustics and Electronic Music, University of Ghent, Ghent, Belgium
- Toralf Mildner
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Karsten Mueller
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
18
Tichko P, Skoe E. Frequency-dependent fine structure in the frequency-following response: The byproduct of multiple generators. Hear Res 2017; 348:1-15. [DOI: 10.1016/j.heares.2017.01.014] [Citation(s) in RCA: 71] [Impact Index Per Article: 10.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/12/2016] [Revised: 01/12/2017] [Accepted: 01/22/2017] [Indexed: 11/28/2022]
19
Abstract
Previous research on harmony perception has mainly been concerned with horizontal aspects of harmony, paying less attention to how listeners perceive psychoacoustic qualities and emotions in single isolated chords. A recent study found mild dissonances to be preferred over consonances in single chord perception, although the authors did not systematically vary register and consonance; these omissions were explored here. An online empirical experiment was conducted in which participants (N = 410) evaluated chords on the dimensions of Valence, Tension, Energy, Consonance, and Preference; 15 different chords were played with piano timbre across two octaves. The results show significant differences on all dimensions across chord types, and a strong correlation between perceived dissonance and tension. Register and inversion contributed significantly to the evaluations, with nonmusicians distinguishing between triadic inversions similarly to musicians. The mildly dissonant minor ninth, major ninth, and minor seventh chords were rated highest for preference, regardless of musical sophistication. Theoretical explanations such as aggregate dyadic consonance, the inverted-U hypothesis, and psychoacoustic roughness, harmonicity, and sharpness are discussed to account for the preference for mild dissonance over consonance in single chord perception.
Affiliation(s)
- Imre Lahdelma
- University of Jyväskylä, Finland; University of Washington, USA
20
Jeong E, Ryu H. Melodic Contour Identification Reflects the Cognitive Threshold of Aging. Front Aging Neurosci 2016; 8:134. [PMID: 27378907 PMCID: PMC4904015 DOI: 10.3389/fnagi.2016.00134] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2016] [Accepted: 05/27/2016] [Indexed: 01/16/2023] Open
Abstract
Cognitive decline is a natural phenomenon of aging. Although there exists a consensus that sensitivity to acoustic features of music is associated with such decline, no solid evidence has yet shown that structural elements and contexts of music explain this loss of cognitive performance. This study examined the extent and the type of cognitive decline that is related to the contour identification task (CIT) using tones with different pitches (i.e., melodic contours). Both younger and older adult groups participated in the CIT given in three listening conditions (i.e., focused, selective, and alternating). Behavioral data (accuracy and response times) and hemodynamic reactions were measured using functional near-infrared spectroscopy (fNIRS). Our findings showed cognitive declines in the older adult group but with a subtle difference from the younger adult group. The accuracy of the melodic CITs given in the target-like distraction task (CIT2) was significantly lower than that in the environmental noise (CIT1) condition in the older adult group, indicating that CIT2 may be a benchmark test for age-specific cognitive decline. The fNIRS findings also agreed with this interpretation, revealing significant increases in oxygenated hemoglobin (oxyHb) concentration in the younger (p < 0.05 for Δpre-on task; p < 0.01 for Δon-post task) rather than the older adult group (n.s. for Δpre-on task; n.s. for Δon-post task). We further concluded that the oxyHb difference was present in the brain regions near the right dorsolateral prefrontal cortex. Taken together, these findings suggest that CIT2 (i.e., the melodic contour task in the target-like distraction) is an optimized task that could indicate the degree and type of age-related cognitive decline.
Affiliation(s)
- Eunju Jeong
- Department of Arts and Technology, Hanyang University, Seoul, South Korea
- Hokyoung Ryu
- Department of Arts and Technology, Hanyang University, Seoul, South Korea
21
Hall IC, Woolley SMN, Kwong-Brown U, Kelley DB. Sex differences and endocrine regulation of auditory-evoked, neural responses in African clawed frogs (Xenopus). J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2016; 202:17-34. [PMID: 26572136 PMCID: PMC4699871 DOI: 10.1007/s00359-015-1049-9] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2015] [Revised: 10/03/2015] [Accepted: 10/05/2015] [Indexed: 12/01/2022]
Abstract
Mating depends on the accurate detection of signals that convey species identity and reproductive state. In African clawed frogs, Xenopus, this information is conveyed by vocal signals that differ in temporal patterns and spectral features between sexes and across species. We characterized spectral sensitivity using auditory-evoked potentials (AEPs), commonly known as the auditory brainstem response, in males and females of four Xenopus species. In female X. amieti, X. petersii, and X. laevis, peripheral auditory sensitivity to the species' own dyad (the two species-specific dominant frequencies in the male advertisement call) is enhanced relative to males. Males were most sensitive to lower frequencies, including those in the male-directed release calls. Frequency sensitivity was influenced by endocrine state: ovariectomized females had male-like auditory tuning, while dihydrotestosterone-treated, ovariectomized females maintained female-like tuning. Thus, adult female Xenopus demonstrate an endocrine-dependent sensitivity to the spectral features of conspecific male advertisement calls that could facilitate mating. Xenopus AEPs resemble those of other species in stimulus and level dependence, and in sensitivity to the anesthetic MS222. AEPs were correlated with body size and sex within some species. A frequency-following response, probably encoded by the amphibian papilla, might facilitate dyad source localization via interaural time differences.
Affiliation(s)
- Ian C Hall
- Department of Biological Sciences, Columbia University, Fairchild Building, MC 2432, New York, NY, 10027, USA.
- Department of Biology, St. Mary's College of Maryland, Schaeffer Hall 258, St. Mary's City, MD, 20686, USA.
- Sarah M N Woolley
- Department of Psychology, Columbia University, Schermerhorn Hall, MC 5501, New York, NY, 10027, USA
- Ursula Kwong-Brown
- Department of Biological Sciences, Columbia University, Fairchild Building, MC 2432, New York, NY, 10027, USA
- Center for New Music and Audio Technologies, University of California, Berkeley, CA, 94720, USA
- Darcy B Kelley
- Department of Biological Sciences, Columbia University, Fairchild Building, MC 2432, New York, NY, 10027, USA
22
On the Relevance of Natural Stimuli for the Study of Brainstem Correlates: The Example of Consonance Perception. PLoS One 2015; 10:e0145439. [PMID: 26720000 PMCID: PMC4697839 DOI: 10.1371/journal.pone.0145439] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2015] [Accepted: 12/03/2015] [Indexed: 11/19/2022] Open
Abstract
Some combinations of musical tones sound pleasing to Western listeners, and are termed consonant, while others sound discordant, and are termed dissonant. The perceptual phenomenon of consonance has been traced to the acoustic property of harmonicity. It has been repeatedly shown that neural correlates of consonance can be found as early as the auditory brainstem as reflected in the harmonicity of the scalp-recorded frequency-following response (FFR). “Neural Pitch Salience” (NPS) measured from FFRs—essentially a time-domain equivalent of the classic pattern recognition models of pitch—has been found to correlate with behavioral judgments of consonance for synthetic stimuli. Following the idea that the auditory system has evolved to process behaviorally relevant natural sounds, and in order to test the generalizability of this finding made with synthetic tones, we recorded FFRs for consonant and dissonant intervals composed of synthetic and natural stimuli. We found that NPS correlated with behavioral judgments of consonance and dissonance for synthetic but not for naturalistic sounds. These results suggest that while some form of harmonicity can be computed from the auditory brainstem response, the general percept of consonance and dissonance is not captured by this measure. It might either be represented in the brainstem in a different code (such as place code) or arise at higher levels of the auditory pathway. Our findings further illustrate the importance of using natural sounds, as a complementary tool to fully-controlled synthetic sounds, when probing auditory perception.
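The "Neural Pitch Salience" metric described above is, as the abstract notes, essentially a time-domain pattern-recognition measure. A crude stand-in can be computed as the tallest normalized autocorrelation peak within the candidate pitch range. The sketch below (our own simplification, applied to synthetic dyads rather than recorded FFRs) reproduces the expected ordering for a consonant fifth versus a dissonant minor second:

```python
import numpy as np

def dyad(f1, f2, fs=16000, dur=0.2, n_harm=4):
    """Two complex tones (n_harm equal-amplitude harmonics each), summed."""
    t = np.arange(int(fs * dur)) / fs
    return sum(np.sin(2 * np.pi * h * f * t)
               for f in (f1, f2) for h in range(1, n_harm + 1))

def pitch_salience(x, fs=16000, fmin=50.0, fmax=500.0):
    """Tallest normalized autocorrelation peak in the candidate pitch range."""
    ac = np.correlate(x, x, mode="full")[x.size - 1:]   # one-sided ACF
    ac = ac / ac[0]                                     # normalize by energy
    lo, hi = int(fs / fmax), int(fs / fmin)             # lag range for 50-500 Hz
    return float(ac[lo:hi + 1].max())

fifth = pitch_salience(dyad(220.0, 330.0))     # 3:2 ratio, consonant
minor2 = pitch_salience(dyad(220.0, 234.67))   # ~16:15 ratio, dissonant
# The consonant dyad yields the higher salience (fifth > minor2)
```

The consonant fifth shares a common 110 Hz fundamental, so its autocorrelation peaks strongly at that period; the minor second has no common period within the pitch range and scores lower. For recorded FFRs the same computation would be applied to the averaged response waveform rather than to the stimulus.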
23
Wu D, Kendrick KM, Levitin DJ, Li C, Yao D. Bach Is the Father of Harmony: Revealed by a 1/f Fluctuation Analysis across Musical Genres. PLoS One 2015; 10:e0142431. [PMID: 26545104 PMCID: PMC4636347 DOI: 10.1371/journal.pone.0142431] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2015] [Accepted: 10/21/2015] [Indexed: 11/27/2022] Open
Abstract
Harmony is a fundamental attribute of music. Close connections exist between music and mathematics, since both pursue harmony and unity. In music, the consonance of notes played simultaneously partly determines our perception of harmony, is associated with aesthetic responses, and influences emotional expression. Consonance can therefore be considered a window through which to understand and analyze harmony. Here, for the first time, we used a 1/f fluctuation analysis to investigate whether the consonance fluctuation structure in music, across a wide range of composers and genres, follows the scale-free pattern that has been found for pitch, melody, rhythm, human body movements, brain activity, natural images, and geographical features. We then used a network graph approach to investigate which composers were the most influential both within and across genres. Our results showed that patterns of consonance in music did follow scale-free characteristics, suggesting that this feature is a universally evolved one in both music and the living world. Furthermore, our network analysis revealed that Bach's harmony patterns had the most influence on those used by other composers, followed closely by Mozart's.
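A 1/f fluctuation analysis of the kind described reduces to fitting a line to log power versus log frequency of the fluctuation series; the negative slope estimates the exponent β, with β ≈ 1 indicating scale-free structure. The sketch below (function names are ours; a real analysis would use the measured consonance time series of each piece) shows the estimator recovering a known exponent from a synthetic series:

```python
import numpy as np

def one_over_f_series(beta, n=4096, seed=1):
    """Synthesize a random-phase series whose power spectrum follows 1/f**beta."""
    rng = np.random.default_rng(seed)
    f = np.fft.rfftfreq(n)
    amp = np.zeros_like(f)
    amp[1:] = f[1:] ** (-beta / 2.0)           # power ~ amp**2 ~ f**-beta
    spec = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, f.size))
    return np.fft.irfft(spec, n)

def spectral_exponent(x):
    """Estimate beta by regressing log power on log frequency (slope = -beta)."""
    f = np.fft.rfftfreq(x.size)[1:-1]          # drop DC and Nyquist bins
    p = np.abs(np.fft.rfft(x))[1:-1] ** 2
    slope, _ = np.polyfit(np.log(f), np.log(p), 1)
    return -slope

beta_hat = spectral_exponent(one_over_f_series(1.0))   # recovers beta ≈ 1
```

In practice the periodogram of a real consonance series is noisy, so the fit is usually performed on spectra averaged across windows or pieces; the synthetic round trip here only illustrates the estimator itself.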
Affiliation(s)
- Dan Wu
- Department of Biomedical Engineering, School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Keith M. Kendrick
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Chaoyi Li
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Center for Life Sciences, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai, China
- Dezhong Yao
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
24
Losing the music: aging affects the perception and subcortical neural representation of musical harmony. J Neurosci 2015; 35:4071-80. [PMID: 25740534 DOI: 10.1523/jneurosci.3214-14.2015] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
When two musical notes with simple frequency ratios are played simultaneously, the resulting musical chord is pleasing and evokes a sense of resolution or "consonance". Complex frequency ratios, on the other hand, evoke feelings of tension or "dissonance". Consonance and dissonance form the basis of harmony, a central component of Western music. In earlier work, we provided evidence that consonance perception is based on neural temporal coding in the brainstem (Bones et al., 2014). Here, we show that for listeners with clinically normal hearing, aging is associated with a decline in both the perceptual distinction and the distinctiveness of the neural representations of different categories of two-note chords. Compared with younger listeners, older listeners rated consonant chords as less pleasant and dissonant chords as more pleasant. Older listeners also had less distinct neural representations of consonant and dissonant chords as measured using a Neural Consonance Index derived from the electrophysiological "frequency-following response." The results withstood a control for the effect of age on general affect, suggesting that different mechanisms are responsible for the perceived pleasantness of musical chords and affective voices and that, for listeners with clinically normal hearing, age-related differences in consonance perception are likely to be related to differences in neural temporal coding.
25
Bones O, Plack CJ. Subcortical representation of musical dyads: individual differences and neural generators. Hear Res 2015; 323:9-21. [PMID: 25636498 DOI: 10.1016/j.heares.2015.01.009] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/28/2014] [Revised: 01/07/2015] [Accepted: 01/19/2015] [Indexed: 10/24/2022]
Abstract
When two notes are played simultaneously they form a musical dyad. The sensation of pleasantness, or "consonance", of a dyad is likely driven by the harmonic relation of the frequency components of the combined spectrum of the two notes. Previous work has demonstrated a relation between individual preference for consonant over dissonant dyads and the strength of neural temporal coding of the harmonicity of consonant relative to dissonant dyads, as measured using the electrophysiological "frequency-following response" (FFR). However, this work also demonstrated that both these variables correlate strongly with musical experience. The current study was designed to determine whether the relation between consonance preference and neural temporal coding is maintained when controlling for musical experience. The results demonstrate that the strength of neural coding of harmonicity is predictive of individual preference for consonance even for non-musicians. An additional purpose of the current study was to assess the cochlear generation site of the FFR to low-frequency dyads. By comparing the reduction in FFR strength when high-pass masking noise was added with the predictions of a model of the auditory periphery, the results provide evidence that the FFR to low-frequency dyads arises in part from basal cochlear generators.
Affiliation(s)
- Oliver Bones
- School of Psychological Sciences, University of Manchester, Manchester M13 9PL, UK.
- Christopher J Plack
- School of Psychological Sciences, University of Manchester, Manchester M13 9PL, UK