1. Di Stefano N, Spence C. Should absolute pitch be considered as a unique kind of absolute sensory judgment in humans? A systematic and theoretical review of the literature. Cognition 2024;249:105805. PMID: 38761646. DOI: 10.1016/j.cognition.2024.105805.
Abstract
Absolute pitch is the name given to the rare ability to identify a musical note in an automatic and effortless manner without the need for a reference tone. Those individuals with absolute pitch can, for example, name the note they hear, identify all of the tones of a given chord, and/or name the pitches of everyday sounds, such as car horns or sirens. Hence, absolute pitch can be seen as providing a rare example of absolute sensory judgment in audition. Surprisingly, however, the intriguing question of whether such an ability presents unique features in the domain of sensory perception, or whether instead similar perceptual skills also exist in other sensory domains, has not been explicitly addressed previously. In this paper, this question is addressed by systematically reviewing research on absolute pitch using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) method. Thereafter, we compare absolute pitch with two rare types of sensory experience, namely synaesthesia and eidetic memory, to understand if and how these phenomena exhibit similar features to absolute pitch. Furthermore, a common absolute perceptual ability that has been often compared to absolute pitch, namely colour perception, is also discussed. Arguments are provided supporting the notion that none of the examined abilities can be considered like absolute pitch. Therefore, we conclude by suggesting that absolute pitch does indeed appear to constitute a unique kind of absolute sensory judgment in humans, and we discuss some open issues and novel directions for future research in absolute pitch.
Affiliation(s)
- Nicola Di Stefano
- Institute of Cognitive Sciences and Technologies, National Research Council of Italy (CNR), Via Gian Domenico Romagnosi, 18, 00196 Rome, Italy.
- Charles Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, Oxford, UK
2. Van Hedger SC, Bongiovanni NR, Heald SLM, Nusbaum HC. Absolute pitch judgments of familiar melodies generalize across timbre and octave. Mem Cognit 2023;51:1898-1910. PMID: 37165298. DOI: 10.3758/s13421-023-01429-z.
Abstract
Most listeners can determine when a familiar recording of music has been shifted in musical key by as little as one semitone (e.g., from B to C major). These findings appear to suggest that absolute pitch memory is widespread in the general population. However, the use of familiar recordings makes it unclear whether these findings genuinely reflect absolute melody-key associations for at least two reasons. First, listeners may be able to use spectral cues from the familiar instrumentation of the recordings to determine when a familiar recording has been shifted in pitch. Second, listeners may be able to rely solely on pitch height cues (e.g., relying on a feeling that an incorrect recording sounds "too high" or "too low"). Neither of these strategies would require an understanding of pitch chroma or musical key. The present experiments thus assessed whether listeners could make accurate absolute melody-key judgments when listening to novel versions of these melodies, differing from the iconic recording in timbre (Experiment 1) or timbre and octave (Experiment 2). Listeners in both experiments were able to select the correct-key version of the familiar melody at rates that were well above chance. These results fit within a growing body of research supporting the idea that most listeners, regardless of formal musical training, have robust representations of absolute pitch - based on pitch chroma - that generalize to novel listening situations. Implications for theories of auditory pitch memory are discussed.
Affiliation(s)
- Stephen C Van Hedger
- Department of Psychology, Huron University College at Western, 1349 Western Road, London, ON, N6G 1H3, Canada.
- Department of Psychology and Brain and Mind Institute, Western University, London, Ontario, Canada.
- Shannon L M Heald
- Department of Psychology, University of Chicago, Chicago, IL, USA
- Center for Practical Wisdom, University of Chicago, Chicago, IL, USA
- Howard C Nusbaum
- Department of Psychology, University of Chicago, Chicago, IL, USA
- Center for Practical Wisdom, University of Chicago, Chicago, IL, USA
3. Kim T, Chung M, Jeong E, Cho YS, Kwon OS, Kim SP. Cortical representation of musical pitch in event-related potentials. Biomed Eng Lett 2023;13:441-454. PMID: 37519879. PMCID: PMC10382469. DOI: 10.1007/s13534-023-00274-y.
Abstract
Neural coding of auditory stimulus frequency is well-documented; however, the cortical signals and perceptual correlates of pitch have not yet been comprehensively investigated. This study examined the temporal patterns of event-related potentials (ERP) in response to single tones of pitch chroma, with an assumption that these patterns would be more prominent in musically-trained individuals than in non-musically-trained individuals. Participants with and without musical training (N = 20) were presented with seven notes on the C major scale (C4, D4, E4, F4, G4, A4, and B4), and whole-brain activities were recorded. A linear regression analysis between the ERP amplitude and the seven notes showed that the ERP amplitude increased or decreased as the frequency of the pitch increased. Remarkably, these linear correlations were anti-symmetric between the hemispheres. Specifically, we found that ERP amplitudes of the left and right frontotemporal areas decreased and increased, respectively, as the pitch frequency increased. Although linear slopes were significant in both groups, the musically-trained group exhibited a marginally steeper slope, and their ERP amplitudes were most discriminant of pitch frequency at an earlier latency than in the non-musically-trained group (~ 460 ms vs ~ 630 ms after stimulus onset). Thus, the ERP amplitudes in frontotemporal areas varied according to the pitch frequency, with the musically-trained participants demonstrating a wider range of amplitudes and inter-hemispheric anti-symmetric patterns. Our findings may provide new insights into cortical processing of musical pitch, revealing anti-symmetric processing of musical pitch between hemispheres, which appears to be more pronounced in musically-trained people. Supplementary Information: The online version contains supplementary material available at 10.1007/s13534-023-00274-y.
Affiliation(s)
- Taehyoung Kim
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea
- Miyoung Chung
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea
- Eunju Jeong
- Department of Music and Science for Clinical Practice, College of Interdisciplinary Industrial Studies, Hanyang University, Seoul, Republic of Korea
- Yang Seok Cho
- School of Psychology, Korea University, Seoul, Republic of Korea
- Oh-Sang Kwon
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea
- Sung-Phil Kim
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea
4. Generalizing across tonal context, timbre, and octave in rapid absolute pitch training. Atten Percept Psychophys 2023;85:525-542. PMID: 36690914. DOI: 10.3758/s13414-023-02653-0.
Abstract
Absolute pitch (AP) is the rare ability to name any musical note without the use of a reference note. Given that genuine AP representations are based on the identification of isolated notes by their tone chroma, they are considered to be invariant to (1) surrounding tonal context, (2) changes in instrumental timbre, and (3) changes in octave register. However, there is considerable variability in the literature in terms of how AP is trained and tested along these dimensions, making recent claims about AP learning difficult to assess. Here, we examined the effect of tonal context on participant success with a single-note identification training paradigm, including how learning generalized to an untested instrument and octave. We found that participants were able to rapidly learn to distinguish C from other notes, with and without feedback and regardless of the tonal context in which C was presented. Participants were also able to partly generalize this skill to an untrained instrument. However, participants displayed the weakest generalization in recognizing C in a higher octave. The results indicate that participants were likely attending to pitch height in addition to pitch chroma - a conjecture that was supported by analyzing the pattern of response errors. These findings highlight the complex nature of note representation in AP, which requires note identification across contexts, going beyond the simple storage of a note fundamental. The importance of standardizing testing that spans both timbre and octave in assessing AP and further implications on past literature and future work are discussed.
5. Leite Filho CA, Rocha-Muniz CN, Pereira LD, Schochat E. Auditory temporal resolution and backward masking in musicians with absolute pitch. Front Neurosci 2023;17:1151776. PMID: 37139520. PMCID: PMC10149789. DOI: 10.3389/fnins.2023.1151776.
Abstract
Among the many questions regarding the ability to effortlessly name musical notes without a reference, also known as absolute pitch, the neural processes by which this phenomenon operates are still a matter of debate. Although a perceptual subprocess is currently accepted by the literature, the participation of some aspects of auditory processing still needs to be determined. We conducted two experiments to investigate the relationship between absolute pitch and two aspects of auditory temporal processing, namely temporal resolution and backward masking. In the first experiment, musicians were organized into two groups according to the presence of absolute pitch, as determined by a pitch identification test, and compared regarding their performance in the Gaps-in-Noise test, a gap detection task for assessing temporal resolution. Despite the lack of statistically significant difference between the groups, the Gaps-in-Noise test measures were significant predictors of the measures for pitch naming precision, even after controlling for possible confounding variables. In the second experiment, another two groups of musicians with and without absolute pitch were submitted to the backward masking test, with no difference between the groups and no correlation between backward masking and absolute pitch measures. The results from both experiments suggest that only part of temporal processing is involved in absolute pitch, indicating that not all aspects of auditory perception are related to the perceptual subprocess. Possible explanations for these findings include the notable overlap of brain areas involved in both temporal resolution and absolute pitch, which is not present in the case of backward masking, and the relevance of temporal resolution to analyze the temporal fine structure of sound in pitch perception.
Affiliation(s)
- Carlos Alberto Leite Filho
- Auditory Processing Lab, Department of Physical Therapy, Speech-Language Pathology and Occupational Therapy, School of Medicine, University of São Paulo, São Paulo, Brazil
- Caroline Nunes Rocha-Muniz
- Speech-Language Pathology Department, Santa Casa de São Paulo School of Medical Sciences, São Paulo, Brazil
- Liliane Desgualdo Pereira
- Neuroaudiology Lab, Department of Speech Therapy, Paulista School of Medicine, Federal University of São Paulo, São Paulo, Brazil
- Eliane Schochat
- Auditory Processing Lab, Department of Physical Therapy, Speech-Language Pathology and Occupational Therapy, School of Medicine, University of São Paulo, São Paulo, Brazil
6. Domain-specific hearing-in-noise performance is associated with absolute pitch proficiency. Sci Rep 2022;12:16344. PMID: 36175508. PMCID: PMC9521875. DOI: 10.1038/s41598-022-20869-2.
Abstract
Recent evidence suggests that musicians may have an advantage over non-musicians in perceiving speech against noisy backgrounds. Previously, musicians have been compared as a homogenous group, despite demonstrated heterogeneity, which may contribute to discrepancies between studies. Here, we investigated whether “quasi”-absolute pitch (AP) proficiency, viewed as a general trait that varies across a spectrum, accounts for the musician advantage in hearing-in-noise (HIN) performance, irrespective of whether the streams are speech or musical sounds. A cohort of 12 non-musicians and 42 trained musicians stratified into high, medium, or low AP proficiency identified speech or melody targets masked in noise (speech-shaped, multi-talker, and multi-music) under four signal-to-noise ratios (0, − 3, − 6, and − 9 dB). Cognitive abilities associated with HIN benefits, including auditory working memory and use of visuo-spatial cues, were assessed. AP proficiency was verified against pitch adjustment and relative pitch tasks. We found a domain-specific effect on HIN perception: quasi-AP abilities were related to improved perception of melody but not speech targets in noise. The quasi-AP advantage extended to tonal working memory and the use of spatial cues, but only during melodic stream segregation. Overall, the results do not support the putative musician advantage in speech-in-noise perception, but suggest a quasi-AP advantage in perceiving music under noisy environments.
7. Gao Z, Oxenham AJ. Voice disadvantage effects in absolute and relative pitch judgments. J Acoust Soc Am 2022;151:2414. PMID: 35461511. PMCID: PMC8993423. DOI: 10.1121/10.0010123.
Abstract
Absolute pitch (AP) possessors can identify musical notes without an external reference. Most AP studies have used musical instruments and pure tones for testing, rather than the human voice. However, the voice is crucial for human communication in both speech and music, and evidence for voice-specific neural processing mechanisms and brain regions suggests that AP processing of voice may be different. Here, musicians with AP or relative pitch (RP) completed online AP or RP note-naming tasks, respectively. Four synthetic sound categories were tested: voice, viola, simplified voice, and simplified viola. Simplified sounds had the same long-term spectral information but no temporal fluctuations (such as vibrato). The AP group was less accurate in judging the note names for voice than for viola in both the original and simplified conditions. A smaller, marginally significant effect was observed in the RP group. A voice disadvantage effect was also observed in a simple pitch discrimination task, even with simplified stimuli. To reconcile these results with voice-advantage effects in other domains, it is proposed that voices are processed in a way that voice- or speech-relevant features are facilitated at the expense of features that are less relevant to voice processing, such as fine-grained pitch information.
Affiliation(s)
- Zi Gao
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455, USA
8. Foster NEV, Beffa L, Lehmann A. Accuracy of Tempo Judgments in Disk Jockeys Compared to Musicians and Untrained Individuals. Front Psychol 2021;12:709979. PMID: 34675835. PMCID: PMC8525396. DOI: 10.3389/fpsyg.2021.709979.
Abstract
Professional disk jockeys (DJs) are an under-studied population whose performance involves creating new musical experiences by combining existing musical materials with a high level of temporal precision. In contemporary electronic dance music, these materials have a stable tempo and are composed with the expectation for further transformation during performance by a DJ for the audience of dancers. Thus, a fundamental aspect of DJ performance is synchronizing the tempo and phase of multiple pieces of music, so that over seconds or even minutes, they may be layered and transitioned without disrupting the rhythmic pulse. This has been accomplished traditionally by manipulating the speed of individual music pieces "by ear," without additional technological synchronization aids. However, the cumulative effect of this repeated practice on auditory tempo perception has not yet been evaluated. Well-known phenomena of experience-dependent plasticity in other populations, such as musicians, prompts the question of whether such effects exist in DJs in their domain of expertise. This pilot study examined auditory judgments of tempo in 10 professional DJs with experience mixing by ear, compared to 7 percussionists, 12 melodic instrumental musicians, and 11 untrained controls. Participants heard metronome sequences between 80 and 160 beats per minute (BPM) and estimated the tempo. In their most-trained tempo range, 120–139 BPM, DJs were more accurate (lower absolute percent error) than untrained participants. Within the DJ group, 120–139 BPM exhibited greater accuracy than slower tempos of 80–99 or 100–119 BPM. DJs did not differ in accuracy compared to percussionists or melodic musicians on any BPM range. Percussionists were more accurate than controls for 100–119 and 120–139 BPM. The results affirm the experience-dependent skill of professional DJs in temporal perception, with comparable performance to conventionally trained percussionists and instrumental musicians. Additionally, the pattern of results suggests a tempo-specific aspect to this training effect that may be more pronounced in DJs than percussionists and musicians. As one of the first demonstrations of enhanced auditory perception in this unorthodox music expert population, this work opens the way to testing whether DJs also have enhanced rhythmic production abilities, and investigating the neural substrates of this skill compared to conventional musicians.
Affiliation(s)
- Nicholas E V Foster
- Department of Otolaryngology Head and Neck Surgery, McGill University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Center for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Lauriane Beffa
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Center for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Alexandre Lehmann
- Department of Otolaryngology Head and Neck Surgery, McGill University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Center for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
9. Greber M, Klein C, Leipold S, Sele S, Jäncke L. Heterogeneity of EEG resting-state brain networks in absolute pitch. Int J Psychophysiol 2020;157:11-22. PMID: 32721558. DOI: 10.1016/j.ijpsycho.2020.07.007.
Abstract
The neural basis of absolute pitch (AP), the ability to effortlessly identify a musical tone without an external reference, is poorly understood. One of the key questions is whether perceptual or cognitive processes underlie the phenomenon, as both sensory and higher-order brain regions have been associated with AP. To integrate the perceptual and cognitive views on AP, here, we investigated joint contributions of sensory and higher-order brain regions to AP resting-state networks. We performed a comprehensive functional network analysis of source-level EEG in a large sample of AP musicians (n = 54) and non-AP musicians (n = 51), adopting two analysis approaches: First, we applied an ROI-based analysis to examine the connectivity between the auditory cortex and the dorsolateral prefrontal cortex (DLPFC) using several established functional connectivity measures. This analysis is a replication of a previous study which reported increased connectivity between these two regions in AP musicians. Second, we performed a whole-brain network-based analysis on the same functional connectivity measures to gain a more complete picture of the brain regions involved in a possibly large-scale network supporting AP ability. In our sample, the ROI-based analysis did not provide evidence for an AP-specific connectivity increase between the auditory cortex and the DLPFC. The whole-brain analysis revealed three networks with increased connectivity in AP musicians comprising nodes in frontal, temporal, subcortical, and occipital areas. Commonalities of the networks were found in both sensory and higher-order brain regions of the perisylvian area. Further research will be needed to confirm these exploratory results.
Affiliation(s)
- Marielle Greber
- Division Neuropsychology, Department of Psychology, University of Zurich, Zurich, Switzerland
- Carina Klein
- Division Neuropsychology, Department of Psychology, University of Zurich, Zurich, Switzerland
- Simon Leipold
- Division Neuropsychology, Department of Psychology, University of Zurich, Zurich, Switzerland; Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, USA
- Silvano Sele
- Division Neuropsychology, Department of Psychology, University of Zurich, Zurich, Switzerland; University Research Priority Program (URPP), Dynamics of Healthy Aging, University of Zurich, Zurich, Switzerland
- Lutz Jäncke
- Division Neuropsychology, Department of Psychology, University of Zurich, Zurich, Switzerland; University Research Priority Program (URPP), Dynamics of Healthy Aging, University of Zurich, Zurich, Switzerland
10. Letailleur A, Bisesi E, Legrain P. Strategies Used by Musicians to Identify Notes' Pitch: Cognitive Bricks and Mental Representations. Front Psychol 2020;11:1480. PMID: 32733333. PMCID: PMC7358308. DOI: 10.3389/fpsyg.2020.01480.
Abstract
To this day, the study of the substratum of thought and its implied mechanisms is rarely directly addressed. Nowadays, systemic approaches based on introspective methodologies are no longer fashionable and are often overlooked or ignored. Most frequently, reductionist approaches are followed for deciphering the neuronal circuits functionally associated with cognitive processes. However, we argue that systemic studies of individual thought may still contribute to a useful and complementary description of the multimodal nature of perception, because they can take into account individual diversity while still identifying the common features of perceptual processes. We propose to address this question by looking at one possible task for recognition of a "signifying sound", as an example of conceptual grasping of a perceptual response. By adopting a mixed approach combining qualitative analyses of interviews based on introspection with quantitative statistical analyses carried out on the resulting categorization, this study describes a variety of mental strategies used by musicians to identify notes' pitch. Sixty-seven musicians (music students and professionals) were interviewed, revealing that musicians utilize intermediate steps during note identification by selecting or activating cognitive bricks that help construct and reach the correct decision. We named these elements "mental anchorpoints" (MA). Although the anchorpoints are not universal, and differ between individuals, they can be grouped into categories related to three main sensory modalities - auditory, visual and kinesthetic. Such categorization enabled us to characterize the mental representations (MR) that allow musicians to name notes in relationship to eleven basic typologies of anchorpoints. We propose a conceptual framework which summarizes the process of note identification in five steps, starting from sensory detection and ending with the verbalization of the note pitch, passing through the pivotal role of MAs and MRs. We found that musicians use multiple strategies and select individual combinations of MAs belonging to these three different sensory modalities, both in isolation and in combination.
Affiliation(s)
- Alain Letailleur
- CNRS UMR 8131, Centre Georg Simmel Recherches Franco-Allemandes en Sciences Sociales, École des Hautes Études en Sciences Sociales (EHESS), Paris, France
- Erica Bisesi
- CNRS UMR 3571, Paris, France
- Unité Perception et Mémoire, Institut Pasteur, Paris, France
- Pierre Legrain
- CNRS UMR 3571, Paris, France
- Unité Perception et Mémoire, Institut Pasteur, Paris, France