1. Madsen SMK, Oxenham AJ. Mistuning perception in music is asymmetric and relies on both beats and inharmonicity. Communications Psychology 2024; 2:91. PMID: 39358548; PMCID: PMC11447020; DOI: 10.1038/s44271-024-00141-1.
Abstract
An out-of-tune singer or instrument can ruin the enjoyment of music. However, there is disagreement on how we perceive mistuning in natural music settings. To address this question, we presented listeners with in-tune and out-of-tune passages of two-part music and manipulated the two primary candidate acoustic cues: beats (fluctuations caused by interactions between nearby frequency components) and inharmonicity (non-integer harmonic frequency relationships) across seven experiments (Exp 1: N = 101; Exp 2: N = 63; Exp 3a: N = 87; Exp 3b: N = 28; Exp 3c: N = 69; Exp 4: N = 160; Exp 5: N = 105). Mistuning detection worsened markedly when removing either beating or inharmonicity cues, suggesting important contributions from both. The relative importance of the two cues varied reliably between listeners but was unaffected by musical experience. Finally, a general asymmetry in sensitivity to mistuning was discovered, with compressed pitch differences being more easily detected than stretched ones, thereby demonstrating a generalization of the previously found stretched-octave effect. Overall, the results reveal the acoustic underpinnings of the critical perceptual phenomenon of dissonance through mistuning in natural music.
Affiliation(s)
- Sara M K Madsen
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA.
- Hearing Systems Group, Department of Health Technology, Technical University of Denmark, Lyngby, Denmark.
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA.
2. Harrison PMC, MacConnachie JMC. Consonance in the carillon. The Journal of the Acoustical Society of America 2024; 156:1111-1122. PMID: 39145812; DOI: 10.1121/10.0028167.
Abstract
Previous psychological studies have shown that musical consonance is not only determined by the frequency ratios between tones, but also by the frequency spectra of those tones. However, these prior studies used artificial tones, specifically tones built from a small number of pure tones, which do not match the acoustic complexity of real musical instruments. The present study therefore investigated tones recorded from a real musical instrument, the Westerkerk Carillon, in a "dense rating" experiment in which participants (N = 113) rated musical intervals drawn from the continuous range of 0-15 semitones. Results show that the traditional consonances of the major third and the minor sixth become dissonances in the carillon and that small intervals (in particular 0.5-2.5 semitones) also become particularly dissonant. Computational modelling shows that these effects are primarily caused by interference between partials (e.g., beating), but that preference for harmonicity is also necessary to produce an accurate overall account of participants' preferences. The results support musicians' writings about the carillon and contribute to ongoing debates about the psychological mechanisms underpinning consonance perception, in particular disputing the recent claim that interference is largely irrelevant to consonance perception.
Affiliation(s)
- Peter M C Harrison
- Centre for Music and Science, Faculty of Music, University of Cambridge, Cambridge, United Kingdom
- James M C MacConnachie
- Centre for Music and Science, Faculty of Music, University of Cambridge, Cambridge, United Kingdom
3. Borjigin A, Bharadwaj HM. Individual Differences Elucidate the Perceptual Benefits Associated with Robust Temporal Fine-Structure Processing. bioRxiv [Preprint] 2024:2023.09.20.558670. PMID: 37790457; PMCID: PMC10542537; DOI: 10.1101/2023.09.20.558670.
Abstract
The auditory system is unique among sensory systems in its ability to phase lock to and precisely follow very fast cycle-by-cycle fluctuations in the phase of sound-driven cochlear vibrations. Yet, the perceptual role of this temporal fine structure (TFS) code is debated. This fundamental gap is attributable to our inability to experimentally manipulate TFS cues without altering other perceptually relevant cues. Here, we circumvented this limitation by leveraging individual differences across 200 participants to systematically compare variations in TFS sensitivity to performance in a range of speech perception tasks. TFS sensitivity was assessed through detection of interaural time/phase differences, while speech perception was evaluated by word identification under noise interference. Results suggest that greater TFS sensitivity is not associated with greater masking release from fundamental-frequency or spatial cues, but appears to contribute to resilience against the effects of reverberation. We also found that greater TFS sensitivity is associated with faster response times, indicating reduced listening effort. These findings highlight the perceptual significance of TFS coding for everyday hearing.
Affiliation(s)
- Agudemu Borjigin
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN 47907, USA
- Waisman Center, University of Wisconsin - Madison, Madison, WI 53705, USA
- Hari M. Bharadwaj
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN 47907, USA
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN 47907, USA
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA 15213, USA
4. Will JK, Roeske C, Degé F. Development of tonality and consonance categorization ability and preferences in 4- to 6-year-old children. Front Psychol 2024; 15:1270114. PMID: 39171227; PMCID: PMC11336827; DOI: 10.3389/fpsyg.2024.1270114.
Abstract
Consonance perception has been extensively studied in Western adults, but it is less clear how this perception develops in children during musical enculturation. We investigated how this development occurs in 4- to 6-year-old children by examining two complex musical skills (i.e., consonance and tonality preferences). Accordingly, we developed a child-focused approach to understand the underlying developmental processes of tonality and consonance preferences in 4- to 6-year-old children using a video interview format. As previous studies have confounded preference with perception, we examined each concept separately and measured perceptual abilities as categorization. For tonality, the ability to categorize tonal and atonal melodies developed by the age of 6 years. It is noteworthy that only children who could categorize successfully showed a preference for tonality at the age of 6. For consonance, we observed an early preference for consonance at 4 years of age, but this preference was only measurable with large differences between consonant and dissonant stimuli. We propose that tonality and consonance preferences develop during childhood with increasing categorization ability when the surrounding musical culture is marked by Western tonality and consonance.
Affiliation(s)
- Franziska Degé
- Max Planck Society, Max Planck Institute for Empirical Aesthetics, Music Department, Frankfurt, Germany
5. Wöhrle SD, Reuter C, Rupp A, Andermann M. Neuromagnetic representation of musical roundness in chord progressions. Front Neurosci 2024; 18:1383554. PMID: 38650622; PMCID: PMC11034485; DOI: 10.3389/fnins.2024.1383554.
Abstract
Introduction: Musical roundness perception relies on consonance/dissonance within a rule-based harmonic context, but also on individual characteristics of the listener. The present work tackles these aspects in a combined psychoacoustic and neurophysiological study, taking into account participants' musical aptitude.
Methods: Our paradigm employed cadence-like four-chord progressions, based on Western music theory. Chord progressions comprised naturalistic and artificial sounds; moreover, their single chords varied in consonance/dissonance and harmonic function. Thirty participants listened to the chord progressions while their cortical activity was measured with magnetoencephalography; afterwards, they rated the individual chord progressions with respect to their perceived roundness.
Results: Roundness ratings differed according to the degree of dissonance in the dominant chord at the progression's third position; this effect was pronounced in listeners with high musical aptitude. Interestingly, a corresponding pattern occurred in the neuromagnetic N1m response to the fourth chord (i.e., at the progression's resolution), again with somewhat stronger differentiation among musical listeners. The N1m magnitude seemed to increase during chord progressions that were considered particularly round, with the maximum difference after the final chord; here, however, the musical aptitude effect just missed significance.
Discussion: The roundness of chord progressions is reflected in participants' psychoacoustic ratings and in their transient cortical activity, with stronger differentiation among listeners with high musical aptitude. The concept of roundness might help to reframe consonance/dissonance toward a more holistic, gestalt-like understanding that covers chord relations in Western music.
Affiliation(s)
- Sophie D. Wöhrle
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Heidelberg, Germany
- Christoph Reuter
- Musicological Department (Acoustics/Music Psychology), University of Vienna, Vienna, Austria
- André Rupp
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Heidelberg, Germany
- Martin Andermann
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Heidelberg, Germany
6. Mok BA, Viswanathan V, Borjigin A, Singh R, Kafi H, Bharadwaj HM. Web-based psychoacoustics: Hearing screening, infrastructure, and validation. Behav Res Methods 2024; 56:1433-1448. PMID: 37326771; PMCID: PMC10704001; DOI: 10.3758/s13428-023-02101-9.
Abstract
Anonymous web-based experiments are increasingly used in many domains of behavioral research. However, online studies of auditory perception, especially of psychoacoustic phenomena pertaining to low-level sensory processing, are challenging because of limited available control of the acoustics, and the inability to perform audiometry to confirm normal-hearing status of participants. Here, we outline our approach to mitigate these challenges and validate our procedures by comparing web-based measurements to lab-based data on a range of classic psychoacoustic tasks. Individual tasks were created using jsPsych, an open-source JavaScript front-end library. Dynamic sequences of psychoacoustic tasks were implemented using Django, an open-source library for web applications, and combined with consent pages, questionnaires, and debriefing pages. Subjects were recruited via Prolific, a subject recruitment platform for web-based studies. Guided by a meta-analysis of lab-based data, we developed and validated a screening procedure to select participants for (putative) normal-hearing status based on their responses in a suprathreshold task and a survey. Headphone use was standardized by supplementing procedures from prior literature with a binaural hearing task. Individuals meeting all criteria were re-invited to complete a range of classic psychoacoustic tasks. For the re-invited participants, absolute thresholds were in excellent agreement with lab-based data for fundamental frequency discrimination, gap detection, and sensitivity to interaural time delay and level difference. Furthermore, word identification scores, consonant confusion patterns, and co-modulation masking release effect also matched lab-based studies. Our results suggest that web-based psychoacoustics is a viable complement to lab-based research. Source code for our infrastructure is provided.
Affiliation(s)
- Brittany A Mok
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, USA
- Vibha Viswanathan
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Agudemu Borjigin
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Ravinderjit Singh
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Homeira Kafi
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Hari M Bharadwaj
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, USA
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
7. Marjieh R, Harrison PMC, Lee H, Deligiannaki F, Jacoby N. Timbral effects on consonance disentangle psychoacoustic mechanisms and suggest perceptual origins for musical scales. Nat Commun 2024; 15:1482. PMID: 38369535; PMCID: PMC11258268; DOI: 10.1038/s41467-024-45812-z.
Abstract
The phenomenon of musical consonance is an essential feature in diverse musical styles. The traditional belief, supported by centuries of Western music theory and psychological studies, is that consonance derives from simple (harmonic) frequency ratios between tones and is insensitive to timbre. Here we show through five large-scale behavioral studies, comprising 235,440 human judgments from US and South Korean populations, that harmonic consonance preferences can be reshaped by timbral manipulations, even to the point of inducing preferences for inharmonic intervals. We show how such effects may suggest perceptual origins for diverse scale systems ranging from the gamelan's slendro scale to the tuning of Western mean-tone and equal-tempered scales. Through computational modeling we show that these timbral manipulations dissociate competing psychoacoustic mechanisms underlying consonance, and we derive an updated computational model combining liking of harmonicity, disliking of fast beats (roughness), and liking of slow beats. Altogether, this work showcases how large-scale behavioral experiments can inform classical questions in auditory perception.
Affiliation(s)
- Raja Marjieh
- Department of Psychology, Princeton University, Princeton, NJ, USA
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Peter M C Harrison
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Centre for Music and Science, University of Cambridge, Cambridge, UK
- Harin Lee
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Fotini Deligiannaki
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- German Aerospace Center (DLR), Institute for AI Safety and Security, Bonn, Germany
- Nori Jacoby
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
8. Rajappa N, Guest DR, Oxenham AJ. Benefits of Harmonicity for Hearing in Noise Are Limited to Detection and Pitch-Related Discrimination Tasks. Biology 2023; 12:1522. PMID: 38132348; PMCID: PMC10740545; DOI: 10.3390/biology12121522.
Abstract
Harmonic complex tones are easier to detect in noise than inharmonic complex tones, providing a potential perceptual advantage in complex auditory environments. Here, we explored whether the harmonic advantage extends to other auditory tasks that are important for navigating a noisy auditory environment, such as amplitude- and frequency-modulation detection. Sixty young normal-hearing listeners were tested, divided into two equal groups with and without musical training. Consistent with earlier studies, harmonic tones were easier to detect in noise than inharmonic tones, with a signal-to-noise ratio (SNR) advantage of about 2.5 dB, and the pitch discrimination of the harmonic tones was more accurate than that of inharmonic tones, even after differences in audibility were accounted for. In contrast, neither amplitude- nor frequency-modulation detection was superior with harmonic tones once differences in audibility were accounted for. Musical training was associated with better performance only in pitch-discrimination and frequency-modulation-detection tasks. The results confirm a detection and pitch-perception advantage for harmonic tones but reveal that the harmonic benefits do not extend to suprathreshold tasks that do not rely on extracting the fundamental frequency. A general theory is proposed that may account for the effects of both noise and memory on pitch-discrimination differences between harmonic and inharmonic tones.
Affiliation(s)
- Neha Rajappa
- Department of Psychology, University of Minnesota, Minneapolis, MN 55455, USA
- Daniel R. Guest
- Department of Biomedical Engineering, University of Rochester, Rochester, NY 14627, USA
- Andrew J. Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, MN 55455, USA
9. Treider JM, Kunst JR, Vuoskoski JK. The influence of musical parameters and subjective musical ratings on perceptions of culture. Sci Rep 2023; 13:20682. PMID: 38001153; PMCID: PMC10673861; DOI: 10.1038/s41598-023-45805-w.
Abstract
Recent research suggests that music can affect evaluations of other groups and cultures. However, little is known about the objective and subjective musical parameters that influence these evaluations. We aimed to fill this gap through two studies. Study 1 collected responses from 52 American participants who listened to 30 folk-song melodies from different parts of the world. Linear mixed-effects models tested the influence of objective and subjective musical parameters of these melodies on evaluations of the cultures from which they originated. Musical parameters consistently predicted cultural evaluations. The most prominent musical parameter was musical velocity, a measure of the number of pitch onsets, predicting more cultural warmth, competence, and evolvedness, and less cultural threat. Next, with a sample of 212 American participants, Study 2 used a within-subjects experiment to alter the tempo and dissonance for a subset of six melody excerpts from Study 1, testing for causal effects. Linear mixed-effects models revealed that both dissonance and slow tempo predicted more negative cultural evaluations. Together, both studies demonstrate how musical parameters can influence cultural perceptions. Avenues for future research are discussed.
Affiliation(s)
- John Melvin Treider
- Department of Psychology, University of Oslo, Postboks 1094, Blindern, 0317, Oslo, Norway
- Jonas R Kunst
- Department of Psychology, University of Oslo, Postboks 1094, Blindern, 0317, Oslo, Norway
- Jonna K Vuoskoski
- Department of Psychology, University of Oslo, Postboks 1094, Blindern, 0317, Oslo, Norway
- Department of Musicology, University of Oslo, Oslo, Norway
- RITMO Center for Interdisciplinary Studies in Time, Rhythm and Motion, Oslo, Norway
10. Milne AJ, Smit EA, Sarvasy HS, Dean RT. Evidence for a universal association of auditory roughness with musical stability. PLoS One 2023; 18:e0291642. PMID: 37729156; PMCID: PMC10511120; DOI: 10.1371/journal.pone.0291642.
Abstract
We provide evidence that the roughness of chords (a psychoacoustic property resulting from unresolved frequency components) is associated with perceived musical stability (operationalized as finishedness) in participants with differing levels and types of exposure to Western or Western-like music. Three groups of participants were tested in a remote cloud forest region of Papua New Guinea (PNG), and two groups in Sydney, Australia (musicians and non-musicians). Unlike prominent prior studies of consonance/dissonance across cultures, we framed the concept of consonance as stability rather than as pleasantness. We find a negative relationship between roughness and musical stability in every group, including the PNG community with minimal experience of musical harmony. The effect of roughness is stronger for the Sydney participants, particularly musicians. We find an effect of harmonicity, a psychoacoustic property resulting from chords having a spectral structure resembling a single pitched tone (such as that produced by human vowel sounds), only in the Sydney musician group, which indicates this feature's effect is mediated via a culture-dependent mechanism. In sum, these results underline the importance of both universal and cultural mechanisms in music cognition, and they suggest powerful implications for understanding the origin of pitch structures in Western tonal music, as well as for possibilities for new musical forms that align with humans' perceptual and cognitive biases. They also highlight the importance of how consonance/dissonance is operationalized and explained to participants, particularly those with minimal prior exposure to musical harmony.
Affiliation(s)
- Andrew J. Milne
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
- Eline A. Smit
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
- Department of Linguistics, University of Konstanz, Konstanz, Germany
- Hannah S. Sarvasy
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
- Roger T. Dean
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
11. de Cheveigné A. Why is the perceptual octave stretched? An account based on mismatched time constants within the auditory brainstem. The Journal of the Acoustical Society of America 2023; 153:2600. PMID: 37129672; DOI: 10.1121/10.0017978.
Abstract
This paper suggests an explanation for listeners' greater tolerance to positive than negative mistuning of the higher tone within an octave pair. It hypothesizes a neural circuit tuned to cancel the lower tone that also cancels the higher tone if that tone is in tune. Imperfect cancellation is the cue to mistuning of the octave. The circuit involves two neural pathways, one delayed with respect to the other, that feed a coincidence-sensitive neuron via excitatory and inhibitory synapses. A mismatch between the time constants of these two synapses results in an asymmetry in sensitivity to mistuning. Specifically, if the time constant of the delayed pathway is greater than that of the direct pathway, there is a greater tolerance to positive mistuning than to negative mistuning. The model is directly applicable to the harmonic octave (concurrent tones), but extending it to the melodic octave (successive tones) requires additional assumptions that are discussed. The paper reviews evidence from auditory psychophysics and physiology for and against this explanation.
Affiliation(s)
- Alain de Cheveigné
- Laboratoire des Systèmes Perceptifs, Unité Mixte de Recherche 8248, Centre National de la Recherche Scientifique, Paris, France
12. Klarlund M, Brattico E, Pearce M, Wu Y, Vuust P, Overgaard M, Du Y. Worlds apart? Testing the cultural distance hypothesis in music perception of Chinese and Western listeners. Cognition 2023; 235:105405. PMID: 36807031; DOI: 10.1016/j.cognition.2023.105405.
Abstract
According to the cultural distance hypothesis (CDH), individuals learn culture-specific statistical structures in music as internal stylistic models and use these models in predictive processing of music, with musical structures closer to their home culture being easier to predict. This cultural distance effect may be affected by domain-specific (musical ability) and domain-general individual characteristics (openness, implicit cultural bias). To test the CDH and its modulation by individual characteristics, we recruited Chinese and Western adults to categorize stylistically ambiguous and unambiguous Chinese and Western melodies by cultural origin. Categorization performance was better for unambiguous (low CD) than for ambiguous (high CD) melodies, and for in-culture melodies regardless of ambiguity for both groups, providing evidence for the CDH. Musical ability, but not other traits, correlated positively with melody categorization, suggesting that musical ability refines internal stylistic models. Therefore, both cultures show musical enculturation in their home culture with a modulatory effect of individual musical ability.
Affiliation(s)
- Mathias Klarlund
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Sino-Danish College, University of Chinese Academy of Sciences, Beijing, China; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Elvira Brattico
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark; Department of Education, Psychology, Communication, University of Bari Aldo Moro, Italy
- Marcus Pearce
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark; Music Cognition Lab, Queen Mary University of London, London, England, UK
- Yiyang Wu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Morten Overgaard
- Center for Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Yi Du
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, China; Chinese Institute for Brain Research, Beijing, China
13. Witek MAG, Matthews T, Bodak R, Blausz MW, Penhune V, Vuust P. Musicians and non-musicians show different preference profiles for single chords of varying harmonic complexity. PLoS One 2023; 18:e0281057. PMID: 36730271; PMCID: PMC9894397; DOI: 10.1371/journal.pone.0281057.
Abstract
The inverted U hypothesis in music predicts that listeners prefer intermediate levels of complexity. However, the shape of the liking response to harmonic complexity and the effect of musicianship remain unclear. Here, we tested whether the relationship between liking and harmonic complexity in single chords shows an inverted U shape and whether this U shape is different for musicians and non-musicians. We recorded these groups' liking ratings for four levels of harmonic complexity, indexed by their level of acoustic roughness, as well as several measures of inter-individual difference. Results showed that there is an inverted U-shaped relationship between harmonic complexity and liking in both musicians and non-musicians, but that the shape of the U is different for the two groups. Non-musicians' U is more left-skewed, with peak liking for low harmonic complexity, while musicians' U is more right-skewed, with the highest ratings for medium and low complexity. Furthermore, musicians who showed greater liking for medium compared to low complexity chords reported higher levels of active musical engagement and higher levels of openness to experience. This suggests that a combination of practical musical experience and personality is reflected in musicians' inverted U-shaped preference response to harmonic complexity in chords.
Affiliation(s)
- Maria A. G. Witek
- Department of Music, School of Languages, Cultures, Art History and Music, University of Birmingham, Birmingham, United Kingdom
- Tomas Matthews
- Center for Music in the Brain, Aarhus University and Royal Academy of Music, Aarhus, Denmark
- Rebeka Bodak
- Center for Music in the Brain, Aarhus University and Royal Academy of Music, Aarhus, Denmark
- Marta W. Blausz
- Department of Psychology, University of Southern Denmark, Odense, Denmark
- Virginia Penhune
- Department of Psychology, Concordia University, Montreal, Canada
- Peter Vuust
- Center for Music in the Brain, Aarhus University and Royal Academy of Music, Aarhus, Denmark
Collapse
|
14
|
Donhauser PW, Klein D. Audio-Tokens: A toolbox for rating, sorting and comparing audio samples in the browser. Behav Res Methods 2023; 55:508-515. [PMID: 35297013 PMCID: PMC10027774 DOI: 10.3758/s13428-022-01803-w]
Abstract
Here we describe a JavaScript toolbox to perform online rating studies with auditory material. The main feature of the toolbox is that audio samples are associated with visual tokens on the screen that control audio playback and can be manipulated depending on the type of rating. This allows the collection of single- and multidimensional feature ratings, as well as categorical and similarity ratings. The toolbox (github.com/pwdonh/audio_tokens) can be used via a plugin for the widely used jsPsych library, or via plain JavaScript for custom applications. We expect the toolbox to be useful in psychological research on speech and music perception, as well as for the curation and annotation of datasets in machine learning.
Affiliation(s)
- Peter W Donhauser, Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, H3A 2B4, Canada; Ernst Strüngmann Institute for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany
- Denise Klein, Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, H3A 2B4, Canada; Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, H3G 2A8, Canada
|
15
|
Hoeschele M, Wagner B, Mann DC. Lessons learned in animal acoustic cognition through comparisons with humans. Anim Cogn 2023; 26:97-116. [PMID: 36574158 PMCID: PMC9877085 DOI: 10.1007/s10071-022-01735-0]
Abstract
Humans are an interesting subject of study in comparative cognition. While humans have a lot of anecdotal and subjective knowledge about their own minds and behaviors, researchers tend not to study humans the way they study other species. Instead, comparisons between humans and other animals tend to be based on either assumptions about human behavior and cognition, or very different testing methods. Here we emphasize the importance of using insider knowledge about humans to form interesting research questions about animal cognition while simultaneously stepping back and treating humans like just another species, as if one were an alien researcher. This perspective is extremely helpful for identifying which aspects of cognitive processes may be interesting and relevant across the animal kingdom. Here we outline some examples of how this objective human-centric approach has helped us to advance knowledge in several areas of animal acoustic cognition (rhythm, harmonicity, and vocal units). We describe how this approach works, what kind of benefits we obtain, and how it can be applied to other areas of animal cognition. While an objective human-centric approach is not useful when studying traits that do not occur in humans (e.g., magnetic spatial navigation), it can be extremely helpful when studying traits that are relevant to humans (e.g., communication). Overall, we hope to entice more people working in animal cognition to use a similar approach to maximize the benefits of being part of the animal kingdom while maintaining a detached and scientific perspective on the human species.
Affiliation(s)
- Marisa Hoeschele, Acoustics Research Institute, Austrian Academy of Sciences, Wohllebengasse 12-14, 1040 Vienna, Austria
- Bernhard Wagner, Acoustics Research Institute, Austrian Academy of Sciences, Wohllebengasse 12-14, 1040 Vienna, Austria
- Dan C Mann, Konrad Lorenz Institute of Ethology, University of Veterinary Medicine, Vienna, Savoyenstrasse 1, 1160 Vienna, Austria
|
16
|
Di Stefano N, Vuust P, Brattico E. Consonance and dissonance perception. A critical review of the historical sources, multidisciplinary findings, and main hypotheses. Phys Life Rev 2022; 43:273-304. [PMID: 36372030 DOI: 10.1016/j.plrev.2022.10.004]
Abstract
Revealed more than two millennia ago by Pythagoras, consonance and dissonance (C/D) are foundational concepts in music theory, perception, and aesthetics. The search for the biological, acoustical, and cultural factors that affect C/D perception has resulted in descriptive accounts inspired by arithmetic, musicological, psychoacoustical or neurobiological frameworks without reaching a consensus. Here, we review the key historical sources and modern multidisciplinary findings on C/D and integrate them into three main hypotheses: the vocal similarity hypothesis (VSH), the psychocultural hypothesis (PH), and the sensorimotor hypothesis (SH). By illustrating the hypotheses-related findings, we highlight their major conceptual, methodological, and terminological shortcomings. To provide a unitary framework for understanding C/D, we bring together multidisciplinary research on human and animal vocalizations, which converges to suggest that auditory roughness is associated with distress/danger and, therefore, elicits defensive behavioral reactions and neural responses that indicate aversion. We therefore stress the primacy of vocality and roughness as key factors in the explanation of the C/D phenomenon, and we explore the (neuro)biological underpinnings of the attraction-aversion mechanisms that are triggered by C/D stimuli. Based on the reviewed evidence, while the aversive nature of dissonance appears solidly rooted in the multidisciplinary findings, the attractive nature of consonance remains a somewhat speculative claim that needs further investigation. Finally, we outline future directions for empirical research in C/D, especially regarding cross-modal and cross-cultural approaches.
Affiliation(s)
- Nicola Di Stefano, Institute for Cognitive Sciences and Technologies (ISTC), National Research Council of Italy (CNR), Via San Martino della Battaglia 44, 00185 Rome, Italy
- Peter Vuust, Center for Music in the Brain, Department of Clinical Medicine, Aarhus University Royal Academy of Music Aarhus/Aalborg (RAMA), 8000 Aarhus, Denmark
- Elvira Brattico, Center for Music in the Brain, Department of Clinical Medicine, Aarhus University Royal Academy of Music Aarhus/Aalborg (RAMA), 8000 Aarhus, Denmark; Department of Education, Psychology, Communication, University of Bari Aldo Moro, 70122 Bari, Italy
|
17
|
Loui P. New music system reveals spectral contribution to statistical learning. Cognition 2022; 224:105071. [PMID: 35227982 DOI: 10.1016/j.cognition.2022.105071]
Abstract
Knowledge of speech and music depends upon the ability to perceive relationships between sounds in order to form a stable mental representation of statistical structure. Although evidence exists for the learning of musical scale structure from the statistical properties of sound events, little research has been able to observe how specific acoustic features contribute to statistical learning independent of the effects of long-term exposure. Here, using a new musical system, we show that spectral content is an important cue for acquiring musical scale structure. In two experiments, participants completed probe-tone ratings before and after a half-hour period of exposure to melodies in a novel musical scale with a predefined statistical structure. In Experiment 1, participants were randomly assigned either to a no-exposure control group or to exposure groups who heard pure tone or complex tone sequences. In Experiment 2, participants were randomly assigned to exposure groups who heard complex tones constructed with odd harmonics or even harmonics. Learning outcomes were assessed by correlating pre- and post-exposure ratings with the statistical structure of tones within the exposure period. Spectral information significantly affected sensitivity to statistical structure: participants were able to learn after exposure to all tested timbres, but learned best with odd-harmonic timbres, which were congruent with the scale structure. Results show that spectral amplitude distribution is a useful cue for statistical learning, and suggest that musical scale structure might be acquired through exposure to spectral distribution in sounds.
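The learning measure described above (correlating pre/post-exposure probe-tone ratings with the exposure statistics) reduces to a Pearson correlation. A minimal sketch, in which the tone counts and one participant's ratings are invented for illustration rather than taken from the study:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical exposure statistics: how often each scale degree occurred,
# and one participant's post-exposure probe-tone "fit" ratings
tone_counts  = [40, 12, 25, 8, 30]
post_ratings = [6.1, 3.0, 5.2, 2.4, 5.8]

# A high r indicates the ratings track the statistical structure, i.e. learning
learning_score = pearson_r(post_ratings, tone_counts)
```

In the study's design, the same correlation computed on pre-exposure ratings serves as the baseline against which post-exposure sensitivity is compared.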
|
18
|
Guida A, Porret A. A SPoARC of Music: Musicians Spatialize Melodies but not All-Comers. Cogn Sci 2022; 46:e13139. [PMID: 35503037 DOI: 10.1111/cogs.13139]
Abstract
Recent studies on the spatial-positional association of response codes (SPoARC) effect have shown that when Western adults are asked to keep in mind sequences of verbal items, they mentally spatialize them along the horizontal axis, with the initial items being associated with the left and the last items being associated with the right. The origin of this mental line is still debated, but it has been theorized that it requires specific spatial cognitive structures, built through expertise, to emerge. We examined this hypothesis by testing for the first time whether Western individuals spatialize melodies from left to right and whether expertise in the musical domain is necessary for this effect to emerge. Two groups of participants (musicians and non-musicians) were asked to memorize sequences of four musical notes and to indicate whether a subsequent probe was part of the sequence by pressing a "yes" key or a "no" key with the left or right index finger. Left/right-hand key assignment was reversed at mid-experiment. The results showed a SPoARC effect only for the group of musicians. Moreover, no association between pitch and hand responses was observed in either of the two groups. These findings suggest a crucial role of expertise in the SPoARC effect.
|
19
|
Abstract
Hearing in noise is a core problem in audition, and a challenge for hearing-impaired listeners, yet the underlying mechanisms are poorly understood. We explored whether harmonic frequency relations, a signature property of many communication sounds, aid hearing in noise for normal hearing listeners. We measured detection thresholds in noise for tones and speech synthesized to have harmonic or inharmonic spectra. Harmonic signals were consistently easier to detect than otherwise identical inharmonic signals. Harmonicity also improved discrimination of sounds in noise. The largest benefits were observed for two-note up-down "pitch" discrimination and melodic contour discrimination, both of which could be performed equally well with harmonic and inharmonic tones in quiet, but which showed large harmonic advantages in noise. The results show that harmonicity facilitates hearing in noise, plausibly by providing a noise-robust pitch cue that aids detection and discrimination.
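A minimal sketch of the stimulus contrast described above: complex tones that are either harmonic (partials at integer multiples of f0) or made inharmonic by jittering each partial. The jitter range, partial count, and deterministic seed are illustrative choices, not the study's exact synthesis method:

```python
import math
import random

SR = 16000  # sample rate in Hz (illustrative)

def complex_tone(f0, n_partials=10, dur=0.5, jitter=0.0, seed=0):
    """Sum-of-sines complex tone. jitter=0 gives a harmonic spectrum;
    jitter>0 perturbs each partial by up to +/- jitter * f0, breaking
    the integer frequency relationships."""
    rng = random.Random(seed)
    freqs = [k * f0 + (rng.uniform(-jitter, jitter) * f0 if jitter else 0.0)
             for k in range(1, n_partials + 1)]
    n = int(SR * dur)
    signal = [sum(math.sin(2 * math.pi * f * t / SR) for f in freqs) / len(freqs)
              for t in range(n)]
    return signal, freqs

# Otherwise identical signals differing only in harmonicity
harm_sig, harm_freqs = complex_tone(200.0)
inharm_sig, inharm_freqs = complex_tone(200.0, jitter=0.3, seed=1)
```

Embedding both kinds of tone in the same noise then isolates harmonicity as the only cue distinguishing the detection conditions.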
|
20
|
Individualized Assays of Temporal Coding in the Ascending Human Auditory System. eNeuro 2022; 9:ENEURO.0378-21.2022. [PMID: 35193890 PMCID: PMC8925652 DOI: 10.1523/eneuro.0378-21.2022]
Abstract
Neural phase-locking to temporal fluctuations is a fundamental and unique mechanism by which acoustic information is encoded by the auditory system. The perceptual role of this metabolically expensive mechanism, neural phase-locking to temporal fine structure (TFS) in particular, is debated. Although hypothesized, it is unclear whether auditory perceptual deficits in certain clinical populations are attributable to deficits in TFS coding. Efforts to uncover the role of TFS have been impeded by the fact that there are no established assays for quantifying the fidelity of TFS coding at the individual level. While many candidates have been proposed, for an assay to be useful, it should not only intrinsically depend on TFS coding, but should also have the property that individual differences in the assay reflect TFS coding per se over and above other sources of variance. Here, we evaluate a range of behavioral and electroencephalogram (EEG)-based measures as candidate individualized measures of TFS sensitivity. Our comparisons of behavioral and EEG-based metrics suggest that extraneous variables dominate both behavioral scores and EEG amplitude metrics, rendering them ineffective. After adjusting behavioral scores using lapse rates, and extracting latency or percent-growth metrics from EEG, interaural timing sensitivity measures exhibit robust behavior-EEG correlations. Together with the fact that unambiguous theoretical links can be made relating binaural measures and phase-locking to TFS, our results suggest that these "adjusted" binaural assays may be well suited for quantifying individual TFS processing.
|
21
|
Lahdelma I, Eerola T, Armitage J. Is Harmonicity a Misnomer for Cultural Familiarity in Consonance Preferences? Front Psychol 2022; 13:802385. [PMID: 35153957 PMCID: PMC8833847 DOI: 10.3389/fpsyg.2022.802385]
|
22
|
Camarena A, Manchala G, Papadopoulos J, O’Connell SR, Goldsworthy RL. Pleasantness Ratings of Musical Dyads in Cochlear Implant Users. Brain Sci 2021; 12:33. [PMID: 35053777 PMCID: PMC8773901 DOI: 10.3390/brainsci12010033]
Abstract
Cochlear implants have been used to restore hearing to more than half a million people around the world. The restored hearing allows most recipients to understand spoken speech without relying on visual cues. While speech comprehension in quiet is generally high for recipients, many complain about the sound of music. The present study examines consonance and dissonance perception in nine cochlear implant users and eight people with no known hearing loss. Participants completed web-based assessments to characterize low-level psychophysical sensitivities to modulation and pitch, as well as higher-level measures of musical pleasantness and speech comprehension in background noise. The underlying hypothesis is that sensitivity to modulation and pitch, in addition to higher levels of musical sophistication, relates to higher-level measures of music and speech perception. This hypothesis was borne out, with strong correlations observed between measures of modulation and pitch sensitivity and measures of consonance ratings and speech recognition. Additionally, the cochlear implant users who were the most sensitive to modulations and pitch, and who had higher musical sophistication scores, gave pleasantness ratings similar to those with no known hearing loss. The implication is that better coding of, and focused rehabilitation for, modulation and pitch sensitivity will broadly improve perception of music and speech for cochlear implant users.
Affiliation(s)
- Andres Camarena, Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
- Grace Manchala, Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
- Julianne Papadopoulos, Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA; Thornton School of Music, University of Southern California, Los Angeles, CA 90089, USA
- Samantha R. O’Connell, Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
- Raymond L. Goldsworthy, Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
|
23
|
Neural correlates of acoustic dissonance in music: The role of musicianship, schematic and veridical expectations. PLoS One 2021; 16:e0260728. [PMID: 34852008 PMCID: PMC8635369 DOI: 10.1371/journal.pone.0260728]
Abstract
In Western music, harmonic expectations can be fulfilled or broken by unexpected chords. Musical irregularities in the absence of auditory deviance elicit well-studied neural responses (e.g. ERAN, P3, N5). These responses are sensitive to schematic expectations (induced by syntactic rules of chord succession) and veridical expectations about predictability (induced by experimental regularities). However, the cognitive and sensory contributions to these responses, and their plasticity as a result of musical training, remain under debate. In the present study, we explored whether the neural processing of pure acoustic violations is affected by schematic and veridical expectations. Moreover, we investigated whether these two factors interact with long-term musical training. In Experiment 1, we registered the ERPs elicited by dissonant clusters placed either at the middle or the ending position of chord cadences. In Experiment 2, we presented listeners with a high proportion of cadences ending in a dissonant chord. In both experiments, we compared the ERPs of musicians and non-musicians. Dissonant clusters elicited distinctive neural responses (an early negativity (EN), the P3, and the N5). While the EN was not affected by syntactic rules, the P3a and P3b were larger for dissonant closures than for middle dissonant chords. Interestingly, these components were larger in musicians than in non-musicians, whereas the N5 showed the opposite pattern. Finally, the predictability of dissonant closures in our experiment did not modulate any of the ERPs. Our study suggests that, at early time windows, dissonance is processed based on acoustic deviance independently of syntactic rules. However, at longer latencies, listeners may be able to engage integration mechanisms and further processes of attentional and structural analysis dependent on musical hierarchies, which are enhanced in musicians.
|
24
|
Homma NY, Bajo VM. Lemniscal Corticothalamic Feedback in Auditory Scene Analysis. Front Neurosci 2021; 15:723893. [PMID: 34489635 PMCID: PMC8417129 DOI: 10.3389/fnins.2021.723893]
Abstract
Sound information is transmitted from the ear to central auditory stations of the brain via several nuclei. In addition to these ascending pathways, there exist descending projections that can influence the information processing at each of these nuclei. A major descending pathway in the auditory system is the feedback projection from layer VI of the primary auditory cortex (A1) to the ventral division of the medial geniculate body (MGBv) in the thalamus. The corticothalamic axons have small glutamatergic terminals that can modulate thalamic processing and thalamocortical information transmission. Corticothalamic neurons also provide input to GABAergic neurons of the thalamic reticular nucleus (TRN), which receives collaterals from the ascending thalamic axons. The balance of corticothalamic and TRN inputs has been shown to refine frequency tuning, firing patterns, and gating of MGBv neurons. Therefore, the thalamus is not merely a relay stage in the chain of auditory nuclei but participates in complex aspects of sound processing that include top-down modulations. In this review, we aim (i) to examine how lemniscal corticothalamic feedback modulates responses in MGBv neurons, and (ii) to explore how the feedback contributes to auditory scene analysis, with a focus on frequency and harmonic perception. Finally, we will discuss potential implications of the role of corticothalamic feedback in music and speech perception, where precise spectral and temporal processing is essential.
Affiliation(s)
- Natsumi Y. Homma, Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA, United States; Coleman Memorial Laboratory, Department of Otolaryngology – Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, United States
- Victoria M. Bajo, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
|
25
|
Lahdelma I, Athanasopoulos G, Eerola T. Sweetness is in the ear of the beholder: chord preference across United Kingdom and Pakistani listeners. Ann N Y Acad Sci 2021; 1502:72-84. [PMID: 34240419 DOI: 10.1111/nyas.14655]
Abstract
The majority of research in the field of music perception has been conducted with Western participants, and it has remained unclear which aspects of music perception are culture dependent and which are universal. The current study compared how participants unfamiliar with Western music (people from the Khowar and Kalash tribes native to Northwest Pakistan with minimal exposure to Western music) and United Kingdom (UK) listeners perceive affect (positive versus negative) in musical chords, as well as their overall preference for these chords. The stimuli consisted of four distinct chord types (major, minor, augmented, and chromatic) and were played as both vertical blocks (pitches presented concurrently) and arpeggios (pitches presented successively). The results suggest that the major-positive/minor-negative affective distinction familiar to Western listeners is reversed for Northwest Pakistani listeners, arguably because of the reversed prevalence of these chords in the two music cultures. The aversion to the harsh dissonance of the chromatic cluster is present cross-culturally, but the preference for the consonance of the major triad varies between UK and Northwest Pakistani listeners, depending on cultural familiarity. Our findings imply not only notable cultural variation but also commonalities in chord perception across Western and non-Western listeners.
Affiliation(s)
- Imre Lahdelma, Department of Music, Durham University, Durham, United Kingdom
- Tuomas Eerola, Department of Music, Durham University, Durham, United Kingdom
|
26
|
Armitage J, Lahdelma I, Eerola T. Automatic responses to musical intervals: Contrasts in acoustic roughness predict affective priming in Western listeners. J Acoust Soc Am 2021; 150:551. [PMID: 34340511 DOI: 10.1121/10.0005623]
Abstract
The aim of the present study is to determine which acoustic components of harmonic consonance and dissonance influence automatic responses in a simple cognitive task. In a series of affective priming experiments, eight pairs of musical intervals were used to measure the influence of acoustic roughness and harmonicity on response times in a word-classification task conducted online. Interval pairs that contrasted in roughness induced a greater degree of affective priming than pairs that did not contrast in terms of their roughness. Contrasts in harmonicity did not induce affective priming. A follow-up experiment used detuned intervals to create higher levels of roughness contrasts. However, the detuning did not lead to any further increase in the size of the priming effect. More detailed analysis suggests that the presence of priming in intervals is binary: in the negative primes that create congruency effects, the intervals' fundamentals and overtones coincide within the same equivalent rectangular bandwidth (i.e., the minor and major seconds). Intervals that fall outside this equivalent rectangular bandwidth do not elicit priming effects, regardless of their dissonance or negative affect. The results are discussed in the context of recent developments in consonance/dissonance research and vocal similarity.
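The equivalent-rectangular-bandwidth criterion mentioned above can be checked directly with the standard Glasberg and Moore ERB formula: on this account, an interval should prime when its fundamentals fall within one ERB of each other. A minimal sketch; the reference pitch of 440 Hz and evaluating the ERB at the lower tone are illustrative choices, not the authors' analysis code:

```python
def erb(f_hz):
    """Glasberg & Moore (1990) equivalent rectangular bandwidth, in Hz."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def within_one_erb(f1, f2):
    """True if the two fundamentals are separated by less than one ERB
    (evaluated at the lower frequency)."""
    lo, hi = sorted((f1, f2))
    return (hi - lo) < erb(lo)

a4 = 440.0
semitone = 2 ** (1 / 12)
minor_second = within_one_erb(a4, a4 * semitone)       # expected to prime
major_second = within_one_erb(a4, a4 * semitone ** 2)  # expected to prime
minor_third = within_one_erb(a4, a4 * semitone ** 3)   # expected not to prime
```

At this register the seconds fall inside one ERB while the minor third falls outside it, matching the binary pattern the abstract describes.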
Affiliation(s)
- James Armitage, Department of Music, Durham University, Durham, DH1 3RL, United Kingdom
- Imre Lahdelma, Department of Music, Durham University, Durham, DH1 3RL, United Kingdom
- Tuomas Eerola, Department of Music, Durham University, Durham, DH1 3RL, United Kingdom
|
27
|
Memorisation and implicit perceptual learning are enhanced for preferred musical intervals and chords. Psychon Bull Rev 2021; 28:1623-1637. [PMID: 33945127 PMCID: PMC8500890 DOI: 10.3758/s13423-021-01922-z]
Abstract
Is it true that we learn better what we like? Current neuroaesthetic and neurocomputational models of aesthetic appreciation postulate the existence of a correlation between aesthetic appreciation and learning. However, even though aesthetic appreciation has been associated with attentional enhancements, systematic evidence demonstrating its influence on learning processes is still lacking. Here, in two experiments, we investigated the relationship between aesthetic preferences for consonance versus dissonance and the memorisation of musical intervals and chords. In Experiment 1, 60 participants were first asked to memorise and evaluate arpeggiated triad chords (memorisation phase); then, following a distraction task, chord memorisation accuracy was measured (recognition phase). Memorisation was significantly enhanced for subjectively preferred as compared with non-preferred chords. To explore the possible neural mechanisms underlying these results, we performed an EEG study designed to investigate implicit perceptual learning dynamics (Experiment 2). Through an auditory mismatch detection paradigm, electrophysiological responses to standard/deviant intervals were recorded while participants were asked to evaluate the beauty of the intervals. We found a significant trial-by-trial correlation between subjective aesthetic judgements and single-trial amplitude fluctuations of the ERP attention-related N1 component. Moreover, implicit perceptual learning, expressed by larger mismatch detection responses, was enhanced for more appreciated intervals. Altogether, our results show the existence of a relationship between aesthetic appreciation and implicit learning dynamics as well as higher-order learning processes, such as memorisation. This finding might suggest possible future applications in different research domains such as teaching and the rehabilitation of memory and attentional deficits.
|
28
|
de Cheveigné A. Harmonic Cancellation-A Fundamental of Auditory Scene Analysis. Trends Hear 2021; 25:23312165211041422. [PMID: 34698574 PMCID: PMC8552394 DOI: 10.1177/23312165211041422]
Abstract
This paper reviews the hypothesis of harmonic cancellation, according to which an interfering sound is suppressed or canceled on the basis of its harmonicity (or periodicity in the time domain) for the purpose of auditory scene analysis. It defines the concept, discusses theoretical arguments in its favor, and reviews experimental results that support or contradict it. If correct, the hypothesis may draw on time-domain processing of temporally accurate neural representations within the brainstem, as required also by the classic equalization-cancellation model of binaural unmasking. The hypothesis predicts that a target sound corrupted by interference will be easier to hear if the interference is harmonic than inharmonic, all else being equal. This prediction is borne out in a number of behavioral studies, but not all. The paper reviews those results with the aim of understanding the inconsistencies and reaching a reliable conclusion for, or against, the hypothesis of harmonic cancellation within the auditory system.
Affiliation(s)
- Alain de Cheveigné, Laboratoire des systèmes perceptifs, CNRS, Paris, France; Département d’études cognitives, École normale supérieure, PSL University, Paris, France; UCL Ear Institute, London, UK
|
29
|
McPherson MJ, McDermott JH. Time-dependent discrimination advantages for harmonic sounds suggest efficient coding for memory. Proc Natl Acad Sci U S A 2020; 117:32169-32180. [PMID: 33262275 PMCID: PMC7749397 DOI: 10.1073/pnas.2008956117]
Abstract
Perceptual systems have finite memory resources and must store incoming signals in compressed formats. To explore whether representations of a sound's pitch might derive from this need for compression, we compared discrimination of harmonic and inharmonic sounds across delays. In contrast to inharmonic spectra, harmonic spectra can be summarized, and thus compressed, using their fundamental frequency (f0). Participants heard two sounds and judged which was higher. Discrimination was comparable for harmonic and inharmonic sounds presented back-to-back, but was better for harmonic stimuli when the sounds were separated in time, implicating memory representations unique to harmonic sounds. Patterns of individual differences (correlations between thresholds in different conditions) indicated that listeners use different representations depending on the time delay between sounds, directly comparing the spectra of temporally adjacent sounds but transitioning to comparing f0s across delays. The need to store sound in memory appears to determine reliance on f0-based pitch and may explain its importance in music, in which listeners must extract relationships between notes separated in time.
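The compression argument above rests on harmonic spectra being summarizable by a single f0, whereas an inharmonic spectrum has to be stored partial by partial. A minimal sketch of that idea; the tolerance and example spectra are illustrative assumptions, not the study's stimuli:

```python
def compresses_to_f0(partials, f0, tol=0.01):
    """True if every partial lies within tol (relative to the partial's
    frequency) of an integer multiple of f0, i.e. the spectrum can be
    stored compactly as (f0, set of harmonic numbers)."""
    for p in partials:
        k = max(1, round(p / f0))          # nearest harmonic number
        if abs(p - k * f0) > tol * p:      # partial too far from k * f0
            return False
    return True

# A harmonic spectrum compresses to f0 = 200 Hz; a jittered one does not
harmonic = [200.0, 400.0, 600.0, 800.0]
inharmonic = [200.0, 417.0, 589.0, 843.0]
```

Note that the summary also works when the fundamental itself is absent (e.g. partials at 400 and 600 Hz still compress to f0 = 200 Hz), which is consistent with f0-based pitch being an inferred, not a spectral, attribute.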
Affiliation(s)
- Malinda J McPherson
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Boston, MA 02115
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
- Josh H McDermott
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Boston, MA 02115
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
- Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139
30
Carcagno S, Plack CJ. Effects of age on psychophysical measures of auditory temporal processing and speech reception at low and high levels. Hear Res 2020;400:108117. PMID: 33253994; PMCID: PMC7812372; DOI: 10.1016/j.heares.2020.108117.
Abstract
- We found little evidence of greater age-related hearing declines at high sound levels.
- There were age-related temporal-processing declines independent of hearing loss.
- We found no evidence of age-related speech-reception deficits independent of hearing loss.
Age-related cochlear synaptopathy (CS) has been shown to occur in rodents with minimal noise exposure, and has been hypothesized to play a crucial role in age-related hearing declines in humans. It is not known to what extent age-related CS occurs in humans, or how it affects the coding of supra-threshold sounds and speech in noise. Because in rodents CS affects mainly low- and medium-spontaneous-rate (L/M-SR) auditory-nerve fibers with rate-level functions covering medium-high levels, it should lead to greater deficits in the processing of sounds at high than at low stimulus levels. In this cross-sectional study, the performance of 102 listeners across the age range (34 young, 34 middle-aged, 34 older) was assessed in a set of psychophysical temporal-processing and speech-reception-in-noise tests at both low and high stimulus levels. Mixed-effect multiple regression models were used to estimate the effects of age while partialing out effects of audiometric thresholds, lifetime noise exposure, cognitive abilities (assessed with additional tests), and musical experience. Age was independently associated with performance deficits on several tests. However, only for one out of 13 tests were age effects credibly larger at the high than at the low stimulus level. Overall, these results do not provide much evidence that age-related CS, to the extent to which it may occur in humans according to the rodent model of greater L/M-SR synaptic loss, has substantial effects on psychophysical measures of auditory temporal processing or on speech reception in noise.
Affiliation(s)
- Samuele Carcagno
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, United Kingdom.
- Christopher J Plack
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, United Kingdom
- Manchester Centre for Audiology and Deafness, University of Manchester, Academic Health Science Centre, M13 9PL, United Kingdom
31
Renton AI, Painter DR, Mattingley JB. Differential Deployment of Visual Attention During Interactive Approach and Avoidance Behavior. Cereb Cortex 2020;29:2366-2383. PMID: 29750259; DOI: 10.1093/cercor/bhy105.
Abstract
The ability to coordinate approach and avoidance actions in dynamic environments represents the boundary between extinction and the continued survival of many animal species. It is therefore crucial that sensory systems allocate limited attentional resources to the most relevant information to facilitate planning and execution of appropriate actions. Prominent theories of how attention regulates visual processing focus on the distinction between behaviorally relevant and irrelevant visual inputs. To date, however, no study has directly compared the deployment of attention to visual inputs relevant for approach and avoidance behaviors, which naturally occur in dynamic, interactive environments. In two experiments, we combined electroencephalography, frequency tagging, and eye gaze measures to investigate whether the deployment of visual selective attention differs for items relevant for approach and avoidance actions. Participants maneuvered a cursor to approach and avoid contact with moving items in a continuous interactive task. The results indicated that while the approach and avoidance tasks recruited equivalent attentional resources overall, attentional biases were directed toward task-relevant items during approach, and away from task-relevant items during avoidance. We conclude that the deployment of visual attention is guided not only by relevance to a behavioral goal, but also by the nature of that goal.
Affiliation(s)
- Angela I Renton
- School of Psychology, The University of Queensland, St Lucia, Australia
- Queensland Brain Institute, The University of Queensland, St Lucia, Australia
- David R Painter
- School of Psychology, The University of Queensland, St Lucia, Australia
- Jason B Mattingley
- School of Psychology, The University of Queensland, St Lucia, Australia
- Queensland Brain Institute, The University of Queensland, St Lucia, Australia
32
Sarasso P, Neppi-Modona M, Sacco K, Ronga I. "Stopping for knowledge": The sense of beauty in the perception-action cycle. Neurosci Biobehav Rev 2020;118:723-738. PMID: 32926914; DOI: 10.1016/j.neubiorev.2020.09.004.
Abstract
According to a millennia-old philosophical debate, aesthetic emotions have been connected to knowledge acquisition. Recent scientific evidence, collected across different disciplinary domains, confirms this link, but also reveals that motor inhibition plays a crucial role in the process. In this review, we discuss multidisciplinary results and propose an original account of aesthetic appreciation (the stopping-for-knowledge hypothesis) framed within predictive coding theory. We discuss evidence showing that aesthetic emotions emerge in correspondence with an inhibition of motor behavior (i.e., minimizing action), promoting a simultaneous enhancement of perceptual processing at the level of sensory cortices (i.e., optimizing learning). Accordingly, we suggest that aesthetic appreciation may represent hedonic feedback on learning progress, motivating the individual to inhibit motor routines in order to seek further knowledge. Furthermore, the neuroimaging and neuropsychological studies we review reveal a strong association between aesthetic appreciation and activation of the dopaminergic reward-related circuits. Finally, we propose a number of possible applications of the stopping-for-knowledge hypothesis in the clinical and education domains.
Affiliation(s)
- P Sarasso
- BIP (BraIn Plasticity and Behaviour Changes) Research Group, Department of Psychology, University of Turin, Italy
- M Neppi-Modona
- BIP (BraIn Plasticity and Behaviour Changes) Research Group, Department of Psychology, University of Turin, Italy
- K Sacco
- BIP (BraIn Plasticity and Behaviour Changes) Research Group, Department of Psychology, University of Turin, Italy
- I Ronga
- BIP (BraIn Plasticity and Behaviour Changes) Research Group, Department of Psychology, University of Turin, Italy
33
Nikolsky A. The Pastoral Origin of Semiotically Functional Tonal Organization of Music. Front Psychol 2020;11:1358. PMID: 32848961; PMCID: PMC7396614; DOI: 10.3389/fpsyg.2020.01358.
Abstract
This paper presents a new line of inquiry into when and how music as a semiotic system was born. Each of eleven principal expressive aspects of music contains specific structural patterns whose configuration signifies a certain affective state. This distinguishes the tonal organization of music from the phonetic and prosodic organization of natural languages and animal communication. The question of music’s origin can therefore be answered by establishing the point in human history at which all eleven expressive aspects might have been abstracted from instinct-driven primate calls and used to express human psycho-emotional states. Etic analysis of acoustic parameters is the prime means of cross-examination of the typical patterns of expression of the basic emotions in human music versus animal vocal communication. A new method of such analysis is proposed here. The formation of such expressive aspects as meter, tempo, melodic intervals, and articulation can be explained by the influence of bipedal locomotion, the breathing cycle, and the heartbeat, long before Homo sapiens. However, two aspects, rhythm and melodic contour, most crucial for music as we know it, lack proxies in the Paleolithic lifestyle. The available ethnographic and developmental data lead one to believe that rhythmic and directional patterns of melody became involved in conveying emotion-related information in the process of frequent switching from one call type to another within a limited repertory of calls. Such calls are usually adopted for the ongoing caretaking of human youngsters and domestic animals. The efficacy of rhythm and pitch contour in affective communication must have been spontaneously discovered in new important cultural activities.
The most likely scenario for music to have become fully semiotically functional and to have spread wide enough to avoid extinctions is the formation of cross-specific communication between humans and domesticated animals during the Neolithic demographic explosion and the subsequent cultural revolution. Changes in distance during such communication must have promoted the integration between different expressive aspects and generated the basic musical grammar. The model of such communication can be found in the surviving tradition of Scandinavian pastoral music - kulning. This article discusses the most likely ways in which such music evolved.
34
Abstract
Why do humans make music? Theories of the evolution of musicality have focused mainly on the value of music for specific adaptive contexts such as mate selection, parental care, coalition signaling, and group cohesion. Synthesizing and extending previous proposals, we argue that social bonding is an overarching function that unifies all of these theories, and that musicality enabled social bonding at larger scales than grooming and other bonding mechanisms available in ancestral primate societies. We combine cross-disciplinary evidence from archaeology, anthropology, biology, musicology, psychology, and neuroscience into a unified framework that accounts for the biological and cultural evolution of music. We argue that the evolution of musicality involves gene-culture coevolution, through which proto-musical behaviors that initially arose and spread as cultural inventions had feedback effects on biological evolution due to their impact on social bonding. We emphasize the deep links between production, perception, prediction, and social reward arising from repetition, synchronization, and harmonization of rhythms and pitches, and summarize empirical evidence for these links at the levels of brain networks, physiological mechanisms, and behaviors across cultures and across species. Finally, we address potential criticisms and make testable predictions for future research, including neurobiological bases of musicality and relationships between human music, language, animal song, and other domains. The music and social bonding (MSB) hypothesis provides the most comprehensive theory to date of the biological and cultural evolution of music.
35
Prete G, Bondi D, Verratti V, Aloisi AM, Rai P, Tommasi L. Universality vs experience: a cross-cultural pilot study on the consonance effect in music at different altitudes. PeerJ 2020;8:e9344. PMID: 32704441; PMCID: PMC7350922; DOI: 10.7717/peerj.9344.
Abstract
Background: Previous studies have shown that music preferences are influenced by cultural “rules”, while others have suggested a universal preference for some features over others.
Methods: We investigated cultural differences in the “consonance effect”, that is, higher pleasantness judgments for consonant than for dissonant chords, as defined in Western music. Italian and Himalayan participants were asked to express pleasantness judgments for consonant and dissonant chords. An Italian and a Nepalese sample were tested both at 1,450 m and at 4,750 m of altitude, with the further aim of evaluating the effect of hypoxia on this task. A third sample consisted of two subgroups of Sherpas: lowlanders (1,450 m of altitude), often exposed to Western music, and highlanders (3,427 m of altitude), less exposed to Western music. All Sherpas were tested where they lived.
Results: Independently of altitude, the results confirmed the consonance effect in the Italian sample and the absence of such an effect in the Nepalese sample. Lowlander Sherpas showed the consonance effect, but highlander Sherpas did not.
Conclusions: The results of this pilot study show that neither hypoxia (altitude), nor demographic features (age, schooling, or playing music), nor ethnicity per se influences the consonance effect. We conclude that music preferences are attributable to music exposure.
Affiliation(s)
- Giulia Prete
- Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
- Danilo Bondi
- Department of Neuroscience, Imaging and Clinical Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
- Vittore Verratti
- Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
- Anna Maria Aloisi
- Department of Medicine, Surgery and Neuroscience, University of Siena, Siena, Italy
- Prabin Rai
- Unique College of Medical Science and Hospital, Rajbiraj, Nepal
- Mechi Technical Training Academy, Birtamode, Nepal
- Luca Tommasi
- Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
36
Wagner B, Bowling DL, Hoeschele M. Is consonance attractive to budgerigars? No evidence from a place preference study. Anim Cogn 2020;23:973-987. PMID: 32572655; PMCID: PMC7415764; DOI: 10.1007/s10071-020-01404-0.
Abstract
Consonant tone combinations occur naturally in the overtone series of harmonic sounds. These include sounds that many non-human animals produce to communicate. As such, non-human animals may be attracted to consonant intervals, interpreting them, e.g., as a feature of important social stimuli. There is preliminary evidence of attraction to consonance in various bird species in the wild, but few experimental studies with birds. We tested budgerigars (Melopsittacus undulatus) for attraction to consonant over dissonant intervals in two experiments. In Experiment 1, we tested humans and budgerigars using a place preference paradigm in which individuals could explore an environment with multiple sound sources. Both species were tested with consonant and dissonant versions of a previously studied piano melody, and we recorded time spent with each stimulus as a measure of attraction. Human females spent more time with consonant than dissonant stimuli in this experiment, but human males spent equal time with both stimulus types. Neither male nor female budgerigars spent more time with either stimulus type. In Experiment 2, we tested budgerigars with more ecologically relevant stimuli composed of sampled budgerigar vocalizations arranged into consonant or dissonant chords. These stimuli, however, also failed to produce any evidence of preference in budgerigar responses. We discuss these results in the context of ongoing research on consonance as a potential general feature of auditory perception in animals with harmonic vocalizations, with respect to similarities and differences between human and budgerigar vocal behaviour, and with respect to future methodological directions.
Affiliation(s)
- Bernhard Wagner
- Acoustics Research Institute, Wohllebengasse 12-14, 1040 Vienna, Austria
- Daniel L Bowling
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, 1201 Welch Rd. MSLS P-126, Stanford, CA 94305-5485, USA
- Department of Cognitive Biology, Althanstrasse 14 (UZA1), 1090 Vienna, Austria
- Marisa Hoeschele
- Acoustics Research Institute, Wohllebengasse 12-14, 1040 Vienna, Austria
- Department of Cognitive Biology, Althanstrasse 14 (UZA1), 1090 Vienna, Austria
37
Perceptual fusion of musical notes by native Amazonians suggests universal representations of musical intervals. Nat Commun 2020;11:2786. PMID: 32493923; PMCID: PMC7270137; DOI: 10.1038/s41467-020-16448-6.
Abstract
Music perception is plausibly constrained by universal perceptual mechanisms adapted to natural sounds. Such constraints could arise from our dependence on harmonic frequency spectra for segregating concurrent sounds, but evidence has been circumstantial. We measured the extent to which concurrent musical notes are misperceived as a single sound, testing Westerners as well as native Amazonians with limited exposure to Western music. Both groups were more likely to mistake note combinations related by simple integer ratios as single sounds (‘fusion’). Thus, even with little exposure to Western harmony, acoustic constraints on sound segregation appear to induce perceptual structure on note combinations. However, fusion did not predict aesthetic judgments of intervals in Westerners, or in Amazonians, who were indifferent to consonance/dissonance. The results suggest universal perceptual mechanisms that could help explain cross-cultural regularities in musical systems, but indicate that these mechanisms interact with culture-specific influences to produce musical phenomena such as consonance. Music varies across cultures, but some features are widespread, consistent with biological constraints. Here, the authors report that both Western and native Amazonian listeners perceptually fuse concurrent notes related by simple-integer ratios, suggestive of one such biological constraint.
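The "simple integer ratios" at issue here are the familiar just-intonation interval ratios (octave 2:1, perfect fifth 3:2, major third 5:4, and so on). As a rough, self-contained illustration of how a dyad relates to such ratios (our sketch, not the study's analysis code), one can find the small-integer ratio nearest to an equal-tempered interval and its mismatch in cents:

```python
import math
from fractions import Fraction

# Just-intonation ratios for some common intervals (standard textbook values).
JUST_RATIOS = {
    "octave": Fraction(2, 1),
    "perfect fifth": Fraction(3, 2),
    "perfect fourth": Fraction(4, 3),
    "major third": Fraction(5, 4),
    "minor third": Fraction(6, 5),
}

def equal_tempered_ratio(semitones: int) -> float:
    """Frequency ratio of an interval in 12-tone equal temperament."""
    return 2.0 ** (semitones / 12.0)

def nearest_just_interval(semitones: int):
    """Find the just interval closest (on a log-frequency scale) to the
    equal-tempered interval, and return the mismatch in cents."""
    et = equal_tempered_ratio(semitones)
    name, ratio = min(JUST_RATIOS.items(),
                      key=lambda kv: abs(math.log2(et / kv[1])))
    cents_off = 1200.0 * math.log2(et / ratio)
    return name, ratio, cents_off
```

For example, the equal-tempered perfect fifth (7 semitones) lies within about 2 cents of 3:2, while the equal-tempered major third is roughly 14 cents sharp of 5:4; on the account above, dyads near such small-integer ratios are the ones most prone to perceptual fusion.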
38
Andermann M, Patterson RD, Rupp A. Transient and sustained processing of musical consonance in auditory cortex and the effect of musicality. J Neurophysiol 2020;123:1320-1331. DOI: 10.1152/jn.00876.2018.
Abstract
In recent years, electroencephalography and magnetoencephalography (MEG) have both been used to investigate the response in human auditory cortex to musical sounds that are perceived as consonant or dissonant. These studies have typically focused on the transient components of the physiological activity at sound onset, specifically, the N1 wave of the auditory evoked potential and the auditory evoked field, respectively. Unfortunately, the morphology of the N1 wave is confounded by the prominent neural response to energy onset at stimulus onset. It is also the case that the perception of pitch is not limited to sound onset; the perception lasts as long as the note producing it. This suggests that consonance studies should also consider the sustained activity that appears after the transient components die away. The current MEG study shows how energy-balanced sounds can focus the response waves on the consonance-dissonance distinction rather than on energy changes, and how source modeling techniques can be used to measure the sustained field associated with extended consonant and dissonant sounds. The study shows that musical dyads evoke distinct transient and sustained neuromagnetic responses in auditory cortex. The form of the response depends both on whether the dyads are consonant or dissonant and on whether the listeners are musical or nonmusical. The results also show that auditory cortex requires more time for the early transient processing of dissonant dyads than it does for consonant dyads, and that the continuous representation of temporal regularity in auditory cortex might be modulated by processes beyond auditory cortex.
NEW & NOTEWORTHY: We report a magnetoencephalography (MEG) study of transient and sustained cortical consonance processing. Stimuli were long-duration, energy-balanced musical dyads that were either consonant or dissonant. Spatiotemporal source analysis revealed specific transient and sustained neuromagnetic activity in response to the dyads; in particular, the morphology of the responses was shaped by the dyad’s consonance and the listener’s musicality. Our results also suggest that the sustained representation of stimulus regularity might be modulated by processes beyond auditory cortex.
Affiliation(s)
- Martin Andermann
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Heidelberg, Germany
- Roy D. Patterson
- Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, United Kingdom
- André Rupp
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Heidelberg, Germany
39
Harrison PMC, Pearce MT. Simultaneous consonance in music perception and composition. Psychol Rev 2020;127:216-244. PMID: 31868392; PMCID: PMC7032667; DOI: 10.1037/rev0000169.
Abstract
Simultaneous consonance is a salient perceptual phenomenon corresponding to the perceived pleasantness of simultaneously sounding musical tones. Various competing theories of consonance have been proposed over the centuries, but recently a consensus has developed that simultaneous consonance is primarily driven by harmonicity perception. Here we question this view, substantiating our argument by critically reviewing historic consonance research from a broad variety of disciplines, reanalyzing consonance perception data from 4 previous behavioral studies representing more than 500 participants, and modeling three Western musical corpora representing more than 100,000 compositions. We conclude that simultaneous consonance is a composite phenomenon that derives in large part from three phenomena: interference, periodicity/harmonicity, and cultural familiarity. We formalize this conclusion with a computational model that predicts a musical chord's simultaneous consonance from these three features, and release this model in an open-source R package, incon, alongside 15 other computational models also evaluated in this paper. We hope that this package will facilitate further psychological and musicological research into simultaneous consonance. (PsycINFO Database Record (c) 2020 APA, all rights reserved).
Affiliation(s)
- Peter M C Harrison
- School of Electronic Engineering and Computer Science, Queen Mary University of London
- Marcus T Pearce
- School of Electronic Engineering and Computer Science, Queen Mary University of London
40
Sarasso P, Ronga I, Pistis A, Forte E, Garbarini F, Ricci R, Neppi-Modona M. Aesthetic appreciation of musical intervals enhances behavioural and neurophysiological indexes of attentional engagement and motor inhibition. Sci Rep 2019;9:18550. PMID: 31811225; PMCID: PMC6898439; DOI: 10.1038/s41598-019-55131-9.
Abstract
From Kant to current perspectives in neuroaesthetics, the experience of beauty has been described as disinterested, i.e. focusing on the stimulus perceptual features while neglecting self-referred concerns. At a neurophysiological level, some indirect evidence suggests that disinterested aesthetic appreciation might be associated with attentional enhancement and inhibition of motor behaviour. To test this hypothesis, we performed three auditory-evoked potential experiments, employing consonant and dissonant two-note musical intervals. Twenty-two volunteers judged the beauty of intervals (Aesthetic Judgement task) or responded to them as fast as possible (Detection task). In a third Go-NoGo task, a different group of twenty-two participants had to refrain from responding when hearing intervals. Individual aesthetic judgements positively correlated with response times in the Detection task, with slower motor responses for more appreciated intervals. Electrophysiological indexes of attentional engagement (N1/P2) and motor inhibition (N2/P3) were enhanced for more appreciated intervals. These findings represent the first experimental evidence confirming the disinterested interest hypothesis and may have important applications in research areas studying the effects of stimulus features on learning and motor behaviour.
Affiliation(s)
- P Sarasso
- SAMBA (SpAtial, Motor & Bodily Awareness) Research Group, Department of Psychology, University of Turin, Turin, Italy
- I Ronga
- MANIBUS Lab, Department of Psychology, University of Turin, Turin, Italy
- A Pistis
- SAMBA (SpAtial, Motor & Bodily Awareness) Research Group, Department of Psychology, University of Turin, Turin, Italy
- E Forte
- SAMBA (SpAtial, Motor & Bodily Awareness) Research Group, Department of Psychology, University of Turin, Turin, Italy
- F Garbarini
- MANIBUS Lab, Department of Psychology, University of Turin, Turin, Italy
- R Ricci
- SAMBA (SpAtial, Motor & Bodily Awareness) Research Group, Department of Psychology, University of Turin, Turin, Italy
- M Neppi-Modona
- SAMBA (SpAtial, Motor & Bodily Awareness) Research Group, Department of Psychology, University of Turin, Turin, Italy
41
Zhou L, Liu F, Jiang J, Jiang C. Impaired emotional processing of chords in congenital amusia: Electrophysiological and behavioral evidence. Brain Cogn 2019;135:103577. DOI: 10.1016/j.bandc.2019.06.001.
42
Carcagno S, Lakhani S, Plack CJ. Consonance perception beyond the traditional existence region of pitch. J Acoust Soc Am 2019;146:2279. PMID: 31671967; DOI: 10.1121/1.5127845.
Abstract
Some theories posit that the perception of consonance is based on neural periodicity detection, which is dependent on accurate phase locking of auditory nerve fibers to features of the stimulus waveform. In the current study, 15 listeners were asked to rate the pleasantness of complex tone dyads (2 note chords) forming various harmonic intervals and bandpass filtered in a high-frequency region (all components >5.8 kHz), where phase locking to the rapid stimulus fine structure is thought to be severely degraded or absent. The two notes were presented to opposite ears. Consonant intervals (minor third and perfect fifth) received higher ratings than dissonant intervals (minor second and tritone). The results could not be explained in terms of phase locking to the slower waveform envelope because the preference for consonant intervals was higher when the stimuli were harmonic, compared to a condition in which they were made inharmonic by shifting their component frequencies by a constant offset, so as to preserve their envelope periodicity. Overall the results indicate that, if phase locking is indeed absent at frequencies greater than ∼5 kHz, neural periodicity detection is not necessary for the perception of consonance.
Affiliation(s)
- Samuele Carcagno
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, United Kingdom
- Saday Lakhani
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, United Kingdom
- Christopher J Plack
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, United Kingdom
43
Pagès-Portabella C, Toro JM. Dissonant endings of chord progressions elicit a larger ERAN than ambiguous endings in musicians. Psychophysiology 2019;57:e13476. PMID: 31512751; DOI: 10.1111/psyp.13476.
Abstract
In major-minor tonal music, the hierarchical relationships and patterns of tension/release are essential for its composition and experience. For most listeners, tension leads to an expectation of resolution. Thus, when musical expectations are broken, they are usually perceived as erroneous and elicit specific neural responses such as the early right anterior negativity (ERAN). In the present study, we explored if different degrees of musical violations are processed differently after long-term musical training in comparison to day-to-day exposure. We registered the ERPs elicited by listening to unexpected chords in both musicians and nonmusicians. More specifically, we compared the responses of strong violations by unexpected dissonant endings and mild violations by unexpected but consonant endings (Neapolitan chords). Our results show that, irrespective of training, irregular endings elicited the ERAN. However, the ERAN for dissonant endings was larger in musicians than in nonmusicians. More importantly, we observed a modulation of the neural responses by the degree of violation only in musicians. In this group, the amplitude of the ERAN was larger for strong than for mild violations. These results suggest an early sensitivity of musicians to dissonance, which is processed as less expected than tonal irregularities. We also found that irregular endings elicited a P3 only in musicians. Our study suggests that, even though violations of harmonic expectancies are detected by all listeners, musical training modulates how different violations of the musical context are processed.
Affiliation(s)
- Carlota Pagès-Portabella
- Language & Comparative Cognition Group, Center for Brain & Cognition, Universitat Pompeu Fabra, Barcelona, Spain
- Juan M Toro
- Language & Comparative Cognition Group, Center for Brain & Cognition, Universitat Pompeu Fabra, Barcelona, Spain
- Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain
44
Spitzer ER, Landsberger DM, Friedmann DR, Galvin JJ. Pleasantness Ratings for Harmonic Intervals With Acoustic and Electric Hearing in Unilaterally Deaf Cochlear Implant Patients. Front Neurosci 2019; 13:922. [PMID: 31551686 PMCID: PMC6733976 DOI: 10.3389/fnins.2019.00922] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2019] [Accepted: 08/16/2019] [Indexed: 11/13/2022] Open
Abstract
Background
Harmony is an important part of tonal music that conveys context, form and emotion. Two notes sounded simultaneously form a harmonic interval. In normal-hearing (NH) listeners, some harmonic intervals (e.g., minor 2nd, tritone, major 7th) typically sound more dissonant than others (e.g., octave, major 3rd, 4th). Because of the limited spectro-temporal resolution afforded by cochlear implants (CIs), music perception is generally poor. However, CI users may still be sensitive to relative dissonance across intervals. In this study, dissonance ratings for harmonic intervals were measured in 11 unilaterally deaf CI patients, in whom ratings from the CI could be compared to those from the normal ear.
Methods
Stimuli consisted of pairs of equal amplitude MIDI piano tones. Intervals spanned a range of two octaves relative to two root notes (F3 or C4). Dissonance was assessed in terms of subjective pleasantness ratings for intervals presented to the NH ear alone, the CI ear alone, and both ears together (NH + CI). Ratings were collected for both root notes for within- and across-octave intervals (1–12 and 13–24 semitones). Participants rated the pleasantness of each interval by clicking on a line anchored with “least pleasant” and “most pleasant.” A follow-up experiment repeated the task with a smaller stimulus set.
Results
With NH-only listening, within-octave intervals minor 2nd, major 2nd, and major 7th were rated least pleasant; major 3rd, 5th, and octave were rated most pleasant. Across-octave counterparts were similarly rated. With CI-only listening, ratings were consistently lower and showed a reduced range. Mean ratings were highly correlated between NH-only and CI-only listening (r = 0.845, p < 0.001). Ratings were similar between NH-only and NH + CI listening, with no significant binaural enhancement/interference. The follow-up tests showed that ratings were reliable for the least and most pleasant intervals.
Discussion
Although pleasantness ratings were less differentiated for the CI ear than the NH ear, there were similarities between the two listening modes. Given the lack of spectro-temporal detail needed for harmonicity-based distinctions, temporal envelope interactions (within and across channels) associated with a perception of roughness may contribute to dissonance perception for harmonic intervals with CI-only listening.
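The intervals above are specified in equal-tempered semitones; as a reference for how their component frequencies relate to the root notes (F3 ≈ 174.61 Hz, C4 ≈ 261.63 Hz), here is a minimal sketch of the standard 12-TET formula (not taken from the study's stimulus code):

```python
def interval_freq(root_hz: float, semitones: int) -> float:
    """Frequency of the note `semitones` above a root, in 12-tone equal temperament."""
    return root_hz * 2 ** (semitones / 12)

C4 = 261.63  # Hz, middle C
# A harmonic interval is the root sounded together with a note n semitones above it.
tritone = (C4, interval_freq(C4, 6))   # 6 semitones: ratio sqrt(2), typically dissonant
fifth   = (C4, interval_freq(C4, 7))   # 7 semitones: ratio ~3:2, typically consonant
octave  = (C4, interval_freq(C4, 12))  # 12 semitones: ratio exactly 2:1
```

The two-octave range in the study simply extends `semitones` from 1 to 24.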
Affiliation(s)
- Emily R Spitzer
- Department of Otolaryngology-Head and Neck Surgery, New York University School of Medicine, New York, NY, United States
- David M Landsberger
- Department of Otolaryngology-Head and Neck Surgery, New York University School of Medicine, New York, NY, United States
- David R Friedmann
- Department of Otolaryngology-Head and Neck Surgery, New York University School of Medicine, New York, NY, United States
45
Abstract
An important aspect of the perceived quality of vocal music is the degree to which the vocalist sings in tune. Although most listeners seem sensitive to vocal mistuning, little is known about the development of this perceptual ability or how it differs between listeners. Motivated by a lack of suitable preexisting measures, we introduce in this article an adaptive and ecologically valid test of mistuning perception ability. The stimulus material consisted of short excerpts (6 to 12 s in length) from pop music performances (obtained from MedleyDB; Bittner et al., 2014) for which the vocal track was pitch-shifted relative to the instrumental tracks. In a first experiment, 333 listeners were tested on a two-alternative forced choice task that tested discrimination between a pitch-shifted and an unaltered version of the same audio clip. Explanatory item response modeling was then used to calibrate an adaptive version of the test. A subsequent validation experiment applied this adaptive test to 66 participants with a broad range of musical expertise, producing evidence of the test's reliability, convergent validity, and divergent validity. The test is ready to be deployed as an experimental tool and should make an important contribution to our understanding of the human ability to judge mistuning.
46
Smit EA, Milne AJ, Dean RT, Weidemann G. Perception of affect in unfamiliar musical chords. PLoS One 2019; 14:e0218570. [PMID: 31226170 PMCID: PMC6588276 DOI: 10.1371/journal.pone.0218570] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2019] [Accepted: 06/04/2019] [Indexed: 11/23/2022] Open
Abstract
This study investigates the role of extrinsic and intrinsic predictors in the perception of affect in mostly unfamiliar musical chords from the Bohlen-Pierce microtonal tuning system. Extrinsic predictors are derived, in part, from long-term statistical regularities in music; for example, the prevalence of a chord in a corpus of music that is relevant to a participant. Conversely, intrinsic predictors make no use of long-term statistical regularities in music; for example, psychoacoustic features inherent in the music, such as roughness. Two types of affect were measured for each chord: pleasantness/unpleasantness and happiness/sadness. We modelled the data with a number of novel and well-established intrinsic predictors, namely roughness, harmonicity, spectral entropy and average pitch height; and a single extrinsic predictor, 12-TET Dissimilarity, which was estimated as the chord's smallest distance to any 12-tone equally tempered chord. Musical sophistication was modelled as a potential moderator of the above predictors. Two experiments were conducted, each using slightly different tunings of the Bohlen-Pierce musical system: a just intonation version and an equal-tempered version. It was found that, across both tunings and across both affective responses, all the tested intrinsic features and 12-TET Dissimilarity have consistent influences in the expected direction. These results contrast with much current music perception research, which tends to assume the dominance of extrinsic over intrinsic predictors. This study highlights the importance of both intrinsic characteristics of the acoustic signal itself, as well as extrinsic factors, such as 12-TET Dissimilarity, on perception of affect in music.
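The 12-TET Dissimilarity predictor is described as a chord's smallest distance to any 12-tone equally tempered chord. One plausible formalization (an assumption for illustration; the authors' exact metric may differ) snaps each pitch independently to the nearest 100-cent grid point and sums the deviations:

```python
def dist_to_12tet(chord_cents):
    """City-block distance (in cents) from a chord to the nearest 12-TET chord.

    Each pitch is given in cents above an arbitrary reference; the nearest
    12-TET chord is obtained by snapping each pitch independently to the
    closest multiple of 100 cents (assumed metric, for illustration only).
    """
    def note_dev(c):
        d = c % 100.0
        return min(d, 100.0 - d)
    return sum(note_dev(c) for c in chord_cents)

# A 12-TET major triad (0, 400, 700 cents) has zero dissimilarity;
# a just-intonation major triad (0, 386, 702 cents) deviates by 14 + 2 cents.
print(dist_to_12tet([0, 400, 700]))  # 0.0
print(dist_to_12tet([0, 386, 702]))  # 16.0
```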
Affiliation(s)
- Eline Adrianne Smit
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Milperra, NSW, Australia
- Andrew J. Milne
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Milperra, NSW, Australia
- Roger T. Dean
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Milperra, NSW, Australia
- Gabrielle Weidemann
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Milperra, NSW, Australia
- School of Social Sciences and Psychology, Western Sydney University, Milperra, NSW, Australia
47
Graves JE, Oxenham AJ. Pitch discrimination with mixtures of three concurrent harmonic complexes. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2019; 145:2072. [PMID: 31046318 PMCID: PMC6469983 DOI: 10.1121/1.5096639] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/13/2018] [Revised: 02/19/2019] [Accepted: 03/13/2019] [Indexed: 06/09/2023]
Abstract
In natural listening contexts, especially in music, it is common to hear three or more simultaneous pitches, but few empirical or theoretical studies have addressed how this is achieved. Place and pattern-recognition theories of pitch require at least some harmonics to be spectrally resolved for pitch to be extracted, but it is unclear how often such conditions exist when multiple complex tones are presented together. In three behavioral experiments, mixtures of three concurrent complexes were filtered into a single bandpass spectral region, and the relationship between the fundamental frequencies and spectral region was varied in order to manipulate the extent to which harmonics were resolved either before or after mixing. In experiment 1, listeners discriminated major from minor triads (a difference of 1 semitone in one note of the triad). In experiments 2 and 3, listeners compared the pitch of a probe tone with that of a subsequent target, embedded within two other tones. All three experiments demonstrated above-chance performance, even in conditions where the combinations of harmonic components were unlikely to be resolved after mixing, suggesting that fully resolved harmonics may not be necessary to extract the pitch from multiple simultaneous complexes.
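The spectral bookkeeping behind this manipulation can be sketched as follows; the f0s, band edges, and the one-ERB resolvability criterion below are illustrative choices, not the study's actual parameters:

```python
def harmonics_in_band(f0, lo, hi):
    """Harmonic frequencies of a complex tone (integer multiples of f0) within [lo, hi] Hz."""
    n, out = 1, []
    while n * f0 <= hi:
        if n * f0 >= lo:
            out.append(n * f0)
        n += 1
    return out

def erb(f):
    """Equivalent rectangular bandwidth of the auditory filter at f Hz (Glasberg & Moore)."""
    return 24.7 * (4.37 * f / 1000 + 1)

# Three concurrent complexes (an illustrative triad of f0s) filtered into one band:
band = (1000.0, 2000.0)
mixture = sorted(f for f0 in (200.0, 250.0, 300.0)
                 for f in harmonics_in_band(f0, *band))
# Crude resolvability check: adjacent components of the mixture closer than
# one ERB are unlikely to be resolved after mixing.
unresolved = sum(1 for a, b in zip(mixture, mixture[1:]) if b - a < erb((a + b) / 2))
```

Even when each complex alone has resolvable harmonics in the band, most adjacent components of the mixture fall within one ERB of each other, which is the regime the experiments probe.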
Affiliation(s)
- Jackson E Graves
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
48
The pleasantness of sensory dissonance is mediated by musical style and expertise. Sci Rep 2019; 9:1070. [PMID: 30705379 PMCID: PMC6355932 DOI: 10.1038/s41598-018-35873-8] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2017] [Accepted: 11/09/2018] [Indexed: 12/13/2022] Open
Abstract
Western musical styles use a large variety of chords and vertical sonorities. Based on objective acoustical properties, chords can be situated on a dissonant-consonant continuum. While this may to some extent converge with the unpleasant-pleasant continuum, subjective liking can diverge for various chord forms across different musical styles. Our study aimed to investigate how well appraisals of the roughness and pleasantness dimensions of isolated chords taken from real-world music are predicted by Parncutt’s established model of sensory dissonance. Furthermore, we related these subjective ratings to the chords’ style of origin and acoustical features, as well as to the musical sophistication of the raters. Ratings were obtained for chords deemed representative of the harmonic language of three different musical styles (classical, jazz and avant-garde music), plus randomly generated chords. Results indicate that pleasantness and roughness ratings were, on average, mirror opposites; however, their relative distribution differed greatly across styles, reflecting different underlying aesthetic ideals. Parncutt’s model only weakly predicted ratings for all but the classical chords, suggesting that listeners’ appraisal of the dissonance and pleasantness of chords depends not only on stimulus-side but also on listener-side factors. Indeed, we found that musical sophistication negatively predicted the degree to which listeners’ consonance and pleasantness ratings of any one chord were coupled, suggesting that musical education and expertise may individuate how these musical dimensions are apprehended.
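Parncutt's model is considerably more elaborate, but the stimulus-side roughness features such models build on can be illustrated with a minimal pairwise sketch in the Plomp-Levelt tradition (the constants are a commonly used curve-fit parameterization due to Sethares; this is not Parncutt's model):

```python
import math

def pair_roughness(f1, f2):
    """Sensory roughness of two pure tones (Plomp-Levelt dissonance curve,
    Sethares' parameterization): zero at unison, peaking at roughly a quarter
    of a critical bandwidth, and vanishing for wide frequency separations."""
    fmin, x = min(f1, f2), abs(f2 - f1)
    s = 0.24 / (0.021 * fmin + 19.0)  # scales the curve with register
    return math.exp(-3.5 * s * x) - math.exp(-5.75 * s * x)

def chord_roughness(freqs):
    """Sum of pairwise roughness over all tone pairs in a chord."""
    return sum(pair_roughness(freqs[i], freqs[j])
               for i in range(len(freqs)) for j in range(i + 1, len(freqs)))

# A minor 2nd around middle C comes out rougher than a perfect 5th:
minor2nd = chord_roughness([261.6, 277.2])
fifth = chord_roughness([261.6, 392.0])
```

Real chords have many partials per note, so full models sum such terms over all partial pairs, weighted by amplitude; the study's point is that this stimulus-side quantity alone underdetermines the pleasantness ratings.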
49
Carcagno S, Bucknall R, Woodhouse J, Fritz C, Plack CJ. Effect of back wood choice on the perceived quality of steel-string acoustic guitars. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2018; 144:3533. [PMID: 30599660 DOI: 10.1121/1.5084735] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/01/2018] [Accepted: 12/06/2018] [Indexed: 06/09/2023]
Abstract
Some of the most prized woods used for the backs and sides of acoustic guitars are expensive, rare, and from unsustainable sources. It is unclear to what extent back woods contribute to the sound and playability qualities of acoustic guitars. Six steel-string acoustic guitars were built for this study to the same design and material specifications except for the back/side plates which were made of woods varying widely in availability and price (Brazilian rosewood, Indian rosewood, mahogany, maple, sapele, and walnut). Bridge-admittance measurements revealed small differences between the modal properties of the guitars which could be largely attributed to residual manufacturing variability rather than to the back/side plates. Overall sound quality ratings, given by 52 guitarists in a dimly lit room while wearing welder's goggles to prevent visual identification, were very similar between the six guitars. The results of a blinded ABX discrimination test, performed by another subset of 31 guitarists, indicate that guitarists could not easily distinguish the guitars by their sound or feel. Overall, the results suggest that the species of wood used for the back and sides of a steel-string acoustic guitar has only a marginal impact on its body mode properties and perceived sound.
Affiliation(s)
- Samuele Carcagno
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, United Kingdom
- Jim Woodhouse
- Engineering Department, Cambridge University, Cambridge, CB2 1PZ, United Kingdom
- Claudia Fritz
- Sorbonne Université, Centre National de la Recherche Scientifique, Institut Jean Le Rond d'Alembert, 75005, Paris, France
- Christopher J Plack
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, United Kingdom
50
Popham S, Boebinger D, Ellis DPW, Kawahara H, McDermott JH. Inharmonic speech reveals the role of harmonicity in the cocktail party problem. Nat Commun 2018; 9:2122. [PMID: 29844313 PMCID: PMC5974276 DOI: 10.1038/s41467-018-04551-8] [Citation(s) in RCA: 36] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2017] [Accepted: 05/08/2018] [Indexed: 11/22/2022] Open
Abstract
The "cocktail party problem" requires us to discern individual sound sources from mixtures of sources. The brain must use knowledge of natural sound regularities for this purpose. One much-discussed regularity is the tendency for frequencies to be harmonically related (integer multiples of a fundamental frequency). To test the role of harmonicity in real-world sound segregation, we developed speech analysis/synthesis tools to perturb the carrier frequencies of speech, disrupting harmonic frequency relations while maintaining the spectrotemporal envelope that determines phonemic content. We find that violations of harmonicity cause individual frequencies of speech to segregate from each other, impair the intelligibility of concurrent utterances despite leaving intelligibility of single utterances intact, and cause listeners to lose track of target talkers. However, additional segregation deficits result from replacing harmonic frequencies with noise (simulating whispering), suggesting additional grouping cues enabled by voiced speech excitation. Our results demonstrate acoustic grouping cues in real-world sound segregation.
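The paper's analysis/synthesis tools operate on recorded speech; as a toy illustration of the core manipulation (the jitter range here is an arbitrary choice, not the study's), one can perturb a harmonic series so its components are no longer integer multiples of a common fundamental:

```python
import random

def harmonic_series(f0, n):
    """Component frequencies of a harmonic complex: integer multiples of f0."""
    return [f0 * k for k in range(1, n + 1)]

def make_inharmonic(freqs, max_jitter=0.08, seed=0):
    """Perturb each component by up to +/- max_jitter of its own frequency,
    breaking the integer relations while keeping the overall spectrum similar."""
    rng = random.Random(seed)
    return [f * (1 + rng.uniform(-max_jitter, max_jitter)) for f in freqs]

harmonic = harmonic_series(200.0, 10)   # 200, 400, ..., 2000 Hz
inharmonic = make_inharmonic(harmonic)
# After jittering, the component-to-fundamental ratios are no longer integers,
# so the set of frequencies no longer implies a common f0.
```

In the actual stimuli the perturbed carriers are recombined with the original spectrotemporal envelope, preserving phonemic content while removing the harmonicity grouping cue.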
Affiliation(s)
- Sara Popham
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, 02139, USA
- Helen Wills Neuroscience Institute, UC Berkeley, Berkeley, CA, 94720, USA
- Dana Boebinger
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, 02139, USA
- Program in Speech and Hearing Sciences, Harvard University, Cambridge, MA, 02138, USA
- Josh H McDermott
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, 02139, USA
- Program in Speech and Hearing Sciences, Harvard University, Cambridge, MA, 02138, USA