1. Will JK, Roeske C, Degé F. Development of tonality and consonance categorization ability and preferences in 4- to 6-year-old children. Front Psychol 2024;15:1270114. PMID: 39171227; PMCID: PMC11336827; DOI: 10.3389/fpsyg.2024.1270114.
Abstract
Consonance perception has been extensively studied in Western adults, but it is less clear how this perception develops in children during musical enculturation. We investigated how this development occurs in 4- to 6-year-old children by examining two complex musical skills (i.e., consonance and tonality preferences). Accordingly, we developed a child-focused approach to understand the underlying developmental processes of tonality and consonance preferences in 4- to 6-year-old children using a video interview format. As previous studies have confounded preference with perception, we examined each concept separately and measured perceptual abilities as categorization. For tonality, the ability to categorize tonal and atonal melodies developed by the age of 6 years. It is noteworthy that only children who could categorize successfully showed a preference for tonality at the age of 6. For consonance, we observed an early preference for consonance at 4 years of age, but this preference was only measurable with large differences between consonant and dissonant stimuli. We propose that tonality and consonance preferences develop during childhood with increasing categorization ability when the surrounding musical culture is marked by Western tonality and consonance.
Affiliation(s)
- Franziska Degé
- Max Planck Society, Max Planck Institute for Empirical Aesthetics, Music Department, Frankfurt, Germany
2. Kalra L, Altman S, Bee MA. Perceptually salient differences in a species recognition cue do not promote auditory streaming in eastern grey treefrogs (Hyla versicolor). J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2024. PMID: 38733407; DOI: 10.1007/s00359-024-01702-9.
Abstract
Auditory streaming underlies a receiver's ability to organize complex mixtures of auditory input into distinct perceptual "streams" that represent different sound sources in the environment. During auditory streaming, sounds produced by the same source are integrated through time into a single, coherent auditory stream that is perceptually segregated from other concurrent sounds. Based on human psychoacoustic studies, one hypothesis regarding auditory streaming is that any sufficiently salient perceptual difference may lead to stream segregation. Here, we used the eastern grey treefrog, Hyla versicolor, to test this hypothesis in the context of vocal communication in a non-human animal. In this system, females choose their mate based on perceiving species-specific features of a male's pulsatile advertisement calls in social environments (choruses) characterized by mixtures of overlapping vocalizations. We employed an experimental paradigm from human psychoacoustics to design interleaved pulsatile sequences (ABAB…) that mimicked key features of the species' advertisement call, and in which alternating pulses differed in pulse rise time, which is a robust species recognition cue in eastern grey treefrogs. Using phonotaxis assays, we found no evidence that perceptually salient differences in pulse rise time promoted the segregation of interleaved pulse sequences into distinct auditory streams. These results do not support the hypothesis that any perceptually salient acoustic difference can be exploited as a cue for stream segregation in all species. We discuss these findings in the context of cues used for species recognition and auditory streaming.
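The interleaved-sequence paradigm described in this abstract can be sketched in a few lines: synthesize an ABAB pulse train whose alternating pulses differ only in onset rise time. All parameter values below (sample rate, 2 kHz carrier, 10 ms pulses, 1 ms vs. 5 ms rise) are invented for illustration and are not taken from the study.

```python
import numpy as np

fs = 44100                        # sample rate, Hz (illustrative)
pulse_dur, period = 0.010, 0.050  # 10 ms pulses every 50 ms (illustrative)

def pulse(rise_time):
    """A tone pulse with a linear onset ramp of the given rise time."""
    n = int(fs * pulse_dur)
    t = np.arange(n) / fs
    carrier = np.sin(2 * np.pi * 2000.0 * t)   # 2 kHz carrier
    env = np.ones(n)
    nr = int(fs * rise_time)
    env[:nr] = np.linspace(0.0, 1.0, nr)       # onset ramp (the manipulated cue)
    env[-nr:] *= np.linspace(1.0, 0.0, nr)     # matching offset ramp
    return carrier * env

# ABABABAB: A pulses get a fast (1 ms) rise, B pulses a slow (5 ms) rise.
seq = np.zeros(int(fs * period * 8))
for k in range(8):
    rise = 0.001 if k % 2 == 0 else 0.005
    start = int(k * period * fs)
    seq[start:start + int(fs * pulse_dur)] = pulse(rise)
```

If segregation occurred, a listener would hear the A and B pulses as two slower streams rather than one fast ABAB stream; the phonotaxis results above suggest the frogs did not.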
Affiliation(s)
- Lata Kalra
- Department of Ecology, Evolution, and Behavior, University of Minnesota, Saint Paul, MN 55108, USA
- Shoshana Altman
- Department of Ecology, Evolution, and Behavior, University of Minnesota, Saint Paul, MN 55108, USA
- Mark A Bee
- Department of Ecology, Evolution, and Behavior, University of Minnesota, Saint Paul, MN 55108, USA
3. Sankaran N, Leonard MK, Theunissen F, Chang EF. Encoding of melody in the human auditory cortex. Sci Adv 2024;10:eadk0010. PMID: 38363839; PMCID: PMC10871532; DOI: 10.1126/sciadv.adk0010.
Abstract
Melody is a core component of music in which discrete pitches are serially arranged to convey emotion and meaning. Perception varies along several pitch-based dimensions: (i) the absolute pitch of notes, (ii) the difference in pitch between successive notes, and (iii) the statistical expectation of each note given prior context. How the brain represents these dimensions and whether their encoding is specialized for music remains unknown. We recorded high-density neurophysiological activity directly from the human auditory cortex while participants listened to Western musical phrases. Pitch, pitch-change, and expectation were selectively encoded at different cortical sites, indicating a spatial map for representing distinct melodic dimensions. The same participants listened to spoken English, and we compared responses to music and speech. Cortical sites selective for music encoded expectation, while sites that encoded pitch and pitch-change in music used the same neural code to represent equivalent properties of speech. Findings reveal how the perception of melody recruits both music-specific and general-purpose sound representations.
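The three melodic dimensions named here can be made concrete with a toy computation: derive absolute pitch, pitch-change, and a simple surprisal estimate from a short note sequence. The bigram (first-order Markov) expectation model below is an illustrative stand-in, not the statistical model the authors used.

```python
import numpy as np

notes = np.array([60, 62, 64, 65, 64, 62, 60, 62])  # toy melody, MIDI pitches

pitch = notes.astype(float)        # dimension 1: absolute pitch of each note
pitch_change = np.diff(pitch)      # dimension 2: interval from the previous note

# Dimension 3: surprisal of each note given the preceding one, estimated from
# bigram counts over the sequence itself (add-one smoothing).
vocab = np.unique(notes)
idx = {p: i for i, p in enumerate(vocab)}
counts = np.ones((len(vocab), len(vocab)))           # Laplace smoothing
for prev, nxt in zip(notes[:-1], notes[1:]):
    counts[idx[prev], idx[nxt]] += 1
probs = counts / counts.sum(axis=1, keepdims=True)
surprisal = np.array([-np.log2(probs[idx[p], idx[n]])
                      for p, n in zip(notes[:-1], notes[1:])])
```

High surprisal marks notes that violate the learned transition statistics, which is the "expectation" dimension the study maps onto distinct cortical sites.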
Affiliation(s)
- Narayan Sankaran
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
- Matthew K. Leonard
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
- Frederic Theunissen
- Department of Psychology, University of California, Berkeley, 2121 Berkeley Way, Berkeley, CA 94720, USA
- Edward F. Chang
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
4. Zaatar MT, Alhakim K, Enayeh M, Tamer R. The transformative power of music: Insights into neuroplasticity, health, and disease. Brain Behav Immun Health 2024;35:100716. PMID: 38178844; PMCID: PMC10765015; DOI: 10.1016/j.bbih.2023.100716.
Abstract
Music is a universal language that can elicit profound emotional and cognitive responses. In this literature review, we explore the intricate relationship between music and the brain, from how it is decoded by the nervous system to its therapeutic potential in various disorders. Music engages a diverse network of brain regions and circuits, including sensory-motor processing, cognitive, memory, and emotional components. Music-induced brain network oscillations occur in specific frequency bands, and listening to one's preferred music can grant easier access to these brain functions. Moreover, music training can bring about structural and functional changes in the brain, and studies have shown its positive effects on social bonding, cognitive abilities, and language processing. We also discuss how music therapy can be used to retrain impaired brain circuits in different disorders. Understanding how music affects the brain can open up new avenues for music-based interventions in healthcare, education, and wellbeing.
Affiliation(s)
- Muriel T. Zaatar
- Department of Biological and Physical Sciences, American University in Dubai, Dubai, United Arab Emirates
5. Han M, Chien YF, Zhang Z, Wei Z, Li W. Music training affects listeners' processing of different types of accentuation information: Evidence from ERPs. Brain Cogn 2024;174:106120. PMID: 38142535; DOI: 10.1016/j.bandc.2023.106120.
Abstract
Previous studies found that prolonged musical training can enhance language processing, but few studies have examined whether and how musical training affects the processing of accentuation in spoken language. In this study, a vocabulary detection task was conducted, with single Chinese sentences as materials, to investigate how musicians and non-musicians process corrective accent and information accent in the sentence-middle and sentence-final positions. In the sentence-middle position, results of the cluster-based permutation t-tests showed significant differences in the 574-714 ms time window for the control group. In the sentence-final position, the cluster-based permutation t-tests revealed significant differences in the 612-810 ms time window for the music group and in the 616-812 ms time window for the control group. These significant positive effects were induced by the processing of information accent relative to that of corrective accent. These results suggest that both groups were able to distinguish corrective accent from information accent, but they processed the two accent types differently in the sentence-middle position. These findings show that musical training has a cross-domain effect on spoken language processing and that the accent position also affects its processing.
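Cluster-based permutation t-tests of the kind reported here can be illustrated with a minimal NumPy implementation on simulated paired ERP data. This sketch (sign-flip permutations, summed-|t| cluster mass) is a generic textbook version on synthetic data, not the authors' pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated paired data: subjects x timepoints per condition, with an
# injected effect in a "late" window (all numbers invented).
n_sub, n_time = 20, 100
cond_a = rng.normal(0.0, 1.0, (n_sub, n_time))
cond_b = rng.normal(0.0, 1.0, (n_sub, n_time))
cond_b[:, 60:80] += 1.0

diff = cond_a - cond_b
t_thresh = stats.t.ppf(0.975, df=n_sub - 1)   # cluster-forming threshold

def cluster_masses(d):
    """Paired t per timepoint; sum |t| within contiguous supra-threshold runs."""
    t = d.mean(0) / (d.std(0, ddof=1) / np.sqrt(d.shape[0]))
    above = np.abs(t) > t_thresh
    clusters, masses, start = [], [], None
    for i, flag in enumerate(np.append(above, False)):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            clusters.append((start, i))
            masses.append(float(np.abs(t[start:i]).sum()))
            start = None
    return clusters, masses

clusters, observed = cluster_masses(diff)

# Null distribution: random sign flips of each subject's difference wave,
# keeping the maximum cluster mass of each permutation.
null = np.empty(1000)
for k in range(1000):
    signs = rng.choice([-1.0, 1.0], size=(n_sub, 1))
    _, m = cluster_masses(diff * signs)
    null[k] = max(m, default=0.0)

p_values = [float((null >= m).mean()) for m in observed]
```

Comparing each observed cluster mass to the permutation maximum controls the family-wise error rate across timepoints, which is why such tests report time windows rather than single-sample p-values.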
Affiliation(s)
- Mei Han
- School of Public Health, Bengbu Medical University, China; Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, China
- Yu-Fu Chien
- Department of Chinese Language and Literature, Fudan University, China
- Zhenghua Zhang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, China; Department of Psychology, Renmin University of China, Beijing, China
- Zhen Wei
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, China
- Weijun Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, China
6. Papadaki E, Koustakas T, Werner A, Lindenberger U, Kühn S, Wenger E. Resting-state functional connectivity in an auditory network differs between aspiring professional and amateur musicians and correlates with performance. Brain Struct Funct 2023;228:2147-2163. PMID: 37792073; PMCID: PMC10587189; DOI: 10.1007/s00429-023-02711-1.
Abstract
Auditory experience-dependent plasticity is often studied in the domain of musical expertise. Available evidence suggests that years of musical practice are associated with structural and functional changes in auditory cortex and related brain regions. Resting-state functional magnetic resonance imaging (fMRI) can be used to investigate neural correlates of musical training and expertise beyond specific task influences. Here, we compared two groups of musicians with varying expertise: 24 aspiring professional musicians preparing for their entrance exam at Universities of Arts versus 17 amateur musicians without any such aspirations but who also performed music on a regular basis. We used an interval recognition task to define task-relevant brain regions and computed functional connectivity and graph-theoretical measures in this network on separately acquired resting-state data. Aspiring professionals performed significantly better on all behavioral indicators including interval recognition and also showed significantly greater network strength and global efficiency than amateur musicians. Critically, both average network strength and global efficiency were correlated with interval recognition task performance assessed in the scanner, and with an additional measure of interval identification ability. These findings demonstrate that task-informed resting-state fMRI can capture connectivity differences that correspond to expertise-related differences in behavior.
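The two graph-theoretical measures named here, network strength and global efficiency, can be computed directly from a weighted connectivity matrix. The sketch below uses made-up weights and the common 1/weight distance convention; the authors' exact definitions may differ.

```python
import numpy as np

# Toy 4-node weighted functional-connectivity matrix (invented values).
W = np.array([[0.0, 0.8, 0.3, 0.0],
              [0.8, 0.0, 0.5, 0.2],
              [0.3, 0.5, 0.0, 0.6],
              [0.0, 0.2, 0.6, 0.0]])

strength = W.sum(axis=1)          # per-node sum of connection weights
avg_strength = strength.mean()    # "network strength" summary

# Global efficiency: mean inverse shortest-path length over node pairs,
# with edge distance = 1/weight, via Floyd-Warshall.
n = W.shape[0]
dist = np.where(W > 0, 1.0 / np.where(W > 0, W, 1.0), np.inf)
np.fill_diagonal(dist, 0.0)
for k in range(n):
    dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
inv = 1.0 / dist[~np.eye(n, dtype=bool)]
global_efficiency = inv.mean()
```

Higher efficiency means any node can reach any other through strong connections in few steps, which is the sense in which the aspiring professionals' auditory network was "more integrated".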
Affiliation(s)
- Eleftheria Papadaki
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
- International Max Planck Research School on the Life Course (LIFE), Berlin, Germany
- Theodoros Koustakas
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
- André Werner
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
- Ulman Lindenberger
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin, Germany, and London, UK
- Simone Kühn
- Lise Meitner Group for Environmental Neuroscience, Max Planck Institute for Human Development, Berlin, Germany
- Neuronal Plasticity Working Group, Department of Psychiatry and Psychotherapy, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Elisabeth Wenger
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
7. Preniqi V, Kalimeri K, Saitis C. Soundscapes of morality: Linking music preferences and moral values through lyrics and audio. PLoS One 2023;18:e0294402. PMID: 38019770; PMCID: PMC10686442; DOI: 10.1371/journal.pone.0294402.
Abstract
Music is a fundamental element in every culture, serving as a universal means of expressing our emotions, feelings, and beliefs. This work investigates the link between our moral values and musical choices through lyrics and audio analyses. We align the psychometric scores of 1,480 participants to acoustics and lyrics features obtained from the top 5 songs of their preferred music artists from Facebook Page Likes. We employ a variety of lyric text processing techniques, including lexicon-based approaches and BERT-based embeddings, to identify each song's narrative, moral valence, attitude, and emotions. In addition, we extract both low- and high-level audio features to comprehend the encoded information in participants' musical choices and improve the moral inferences. We propose a Machine Learning approach and assess the predictive power of lyrical and acoustic features separately and in a multimodal framework for predicting moral values. Results indicate that lyrics and audio features from the artists people like inform us about their morality. Though the most predictive features vary per moral value, the models that utilised a combination of lyrics and audio characteristics were the most successful in predicting moral values, outperforming the models that only used basic features such as user demographics, the popularity of the artists, and the number of likes per user. Audio features boosted the accuracy in the prediction of empathy and equality compared to textual features, while the opposite happened for hierarchy and tradition, where higher prediction scores were driven by lyrical features. This demonstrates the importance of both lyrics and audio features in capturing moral values. The insights gained from our study have a broad range of potential uses, including customising the music experience to meet individual needs, music rehabilitation, or even effective communication campaign crafting.
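The multimodal comparison described here can be caricatured with synthetic data: fit the same regressor on lyric features, audio features, and their concatenation, and compare cross-validated R². The Ridge model, feature counts, and data below are illustrative choices, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic stand-ins: per-participant lyric summaries (e.g., lexicon or
# embedding scores) and audio summaries, each carrying part of the signal.
n = 300
lyrics = rng.normal(size=(n, 10))
audio = rng.normal(size=(n, 8))
moral_value = lyrics[:, 0] + 0.8 * audio[:, 0] + rng.normal(0.0, 0.5, n)

def cv_r2(X):
    """5-fold cross-validated R^2 of a Ridge regressor on feature matrix X."""
    return cross_val_score(Ridge(alpha=1.0), X, moral_value,
                           cv=5, scoring="r2").mean()

r2_lyrics = cv_r2(lyrics)
r2_audio = cv_r2(audio)
r2_multimodal = cv_r2(np.hstack([lyrics, audio]))
```

When each modality carries independent signal, as constructed here, the concatenated model outperforms either alone, mirroring the paper's multimodal result.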
Affiliation(s)
- Vjosa Preniqi
- Centre for Digital Music, Queen Mary University of London, London, United Kingdom
- Charalampos Saitis
- Centre for Digital Music, Queen Mary University of London, London, United Kingdom
8. Sankaran N, Leonard MK, Theunissen F, Chang EF. Encoding of melody in the human auditory cortex. bioRxiv [Preprint] 2023. PMID: 37905047; PMCID: PMC10614915; DOI: 10.1101/2023.10.17.562771.
Abstract
Melody is a core component of music in which discrete pitches are serially arranged to convey emotion and meaning. Perception of melody varies along several pitch-based dimensions: (1) the absolute pitch of notes, (2) the difference in pitch between successive notes, and (3) the higher-order statistical expectation of each note conditioned on its prior context. While humans readily perceive melody, how these dimensions are collectively represented in the brain and whether their encoding is specialized for music remains unknown. Here, we recorded high-density neurophysiological activity directly from the surface of human auditory cortex while Western participants listened to Western musical phrases. Pitch, pitch-change, and expectation were selectively encoded at different cortical sites, indicating a spatial code for representing distinct dimensions of melody. The same participants listened to spoken English, and we compared evoked responses to music and speech. Cortical sites selective for music were systematically driven by the encoding of expectation. In contrast, sites that encoded pitch and pitch-change used the same neural code to represent equivalent properties of speech. These findings reveal the multidimensional nature of melody encoding, consisting of both music-specific and domain-general sound representations in auditory cortex.
Teaser: The human brain contains both general-purpose and music-specific neural populations for processing distinct attributes of melody.
9. Gurariy G, Randall R, Greenberg AS. Neuroimaging evidence for the direct role of auditory scene analysis in object perception. Cereb Cortex 2023;33:6257-6272. PMID: 36562994; PMCID: PMC10183742; DOI: 10.1093/cercor/bhac501.
Abstract
Auditory Scene Analysis (ASA) refers to the grouping of acoustic signals into auditory objects. Previously, we have shown that perceived musicality of auditory sequences varies with high-level organizational features. Here, we explore the neural mechanisms mediating ASA and auditory object perception. Participants performed musicality judgments on randomly generated pure-tone sequences and manipulated versions of each sequence containing low-level changes (amplitude; timbre). Low-level manipulations affected auditory object perception as evidenced by changes in musicality ratings. fMRI was used to measure neural activation to sequences rated most and least musical, and the altered versions of each sequence. Next, we generated two partially overlapping networks: (i) a music processing network (music localizer) and (ii) an ASA network (base sequences vs. ASA manipulated sequences). Using Representational Similarity Analysis, we correlated the functional profiles of each ROI to a model generated from behavioral musicality ratings as well as models corresponding to low-level feature processing and music perception. Within overlapping regions, areas near primary auditory cortex correlated with low-level ASA models, whereas right IPS was correlated with musicality ratings. Shared neural mechanisms that correlate with behavior and underlie both ASA and music perception suggest that low-level features of auditory stimuli play a role in auditory object perception.
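The Representational Similarity Analysis step can be sketched as follows: build a model dissimilarity matrix from behavioral ratings, a neural one from response patterns, and rank-correlate them. All data below are synthetic stand-ins, not the study's measurements.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)

# Simulated experiment: 12 stimuli, each with a musicality rating and a
# 50-voxel response pattern constructed to track the ratings plus noise.
n_stim, n_vox = 12, 50
ratings = rng.uniform(1, 7, n_stim)
direction = rng.normal(size=n_vox)
patterns = ratings[:, None] * direction + rng.normal(0.0, 0.3, (n_stim, n_vox))

# Representational dissimilarity matrices (condensed upper-triangle form).
model_rdm = pdist(ratings[:, None], metric="euclidean")   # rating differences
neural_rdm = pdist(patterns, metric="euclidean")          # pattern distances

# RSA statistic: rank correlation between model and neural RDMs.
rho, p = spearmanr(model_rdm, neural_rdm)
```

A high rho means stimulus pairs judged similarly musical also evoke similar patterns in that ROI, which is the logic behind correlating right IPS with the behavioral ratings.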
Affiliation(s)
- Gennadiy Gurariy
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 8701 W Watertown Plank Rd, Milwaukee, WI 53233, United States
- Richard Randall
- School of Music and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, United States
- Adam S Greenberg
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 8701 W Watertown Plank Rd, Milwaukee, WI 53233, United States
10. Basiński K, Quiroga-Martinez DR, Vuust P. Temporal hierarchies in the predictive processing of melody - From pure tones to songs. Neurosci Biobehav Rev 2023;145:105007. PMID: 36535375; DOI: 10.1016/j.neubiorev.2022.105007.
Abstract
Listening to musical melodies is a complex task that engages perceptual and memory-related processes. The processes underlying melody cognition happen simultaneously on different timescales, ranging from milliseconds to minutes. Although attempts have been made, research on melody perception is yet to produce a unified framework of how melody processing is achieved in the brain. This may in part be due to the difficulty of integrating concepts such as perception, attention and memory, which pertain to different temporal scales. Recent theories on brain processing, which hold prediction as a fundamental principle, offer potential solutions to this problem and may provide a unifying framework for explaining the neural processes that enable melody perception on multiple temporal levels. In this article, we review empirical evidence for predictive coding on the levels of pitch formation, basic pitch-related auditory patterns, more complex regularity processing extracted from basic patterns and long-term expectations related to musical syntax. We also identify areas that would benefit from further inquiry and suggest future directions in research on musical melody perception.
Affiliation(s)
- Krzysztof Basiński
- Division of Quality of Life Research, Medical University of Gdańsk, Poland
- David Ricardo Quiroga-Martinez
- Helen Wills Neuroscience Institute & Department of Psychology, University of California Berkeley, USA; Center for Music in the Brain, Aarhus University & The Royal Academy of Music, Denmark
- Peter Vuust
- Center for Music in the Brain, Aarhus University & The Royal Academy of Music, Denmark
11. Kervin SR. The Key to Singing Off-Key: The Trained Singer and Pitch Perception Distortion. J Voice 2023:S0892-1997(22)00417-9. PMID: 36732108; DOI: 10.1016/j.jvoice.2022.12.016.
Abstract
OBJECTIVES: Pitch perception distortion (PPD) is a novel term describing a phenomenon in which an amplified, accompanied singer's perception of their sung pitch relative to band or accompaniment becomes ambiguous, leading to one of two conditions: a) the singer believes they are out of tune with the accompaniment, but are in tune as perceived by a listener, or b) the singer believes they are in tune with the accompaniment, but are not. This pilot study aims to investigate the existence and incidence of PPD among amplified, accompanied performers and identify associated variables.
DESIGN/METHODS: 115 singers were recruited to participate in an online survey, which collected information on musical training, performance environment, and PPD experience.
RESULTS: Reported PPD incidence was 68%, with 92% of respondents indicating that PPD occurred rarely. The factors reported as most associated with PPD experiences included loud stage volume, poor song familiarity, singing outside one's habitual pitch range, and singing loudly. Contrary to previous studies and our hypotheses, no association was found between modality of auditory feedback (e.g., in-ears versus floor monitors) and incidence of PPD. Additionally, higher levels of training were found to be associated with higher incidence of PPD.
CONCLUSIONS: The reported incidence supports that PPD exists beyond chance and anecdotal experience. In light of the highly trained sample, the data suggest that pitch accuracy in accompanied, amplified performance may be more associated with aural environment-specifically loud stage volume-and a highly trained singer's tuning strategy in response to that environment rather than a singer's mastery of vocal intonation skills in isolation. Loud stage volume was implicated as a primary factor associated with PPD, which may be related to the stapedius reflex. Future investigations will target attempted elicitation of PPD in trained singers after establishing baseline auditory reflex thresholds and objective measurements of intonation accuracy.
Affiliation(s)
- Sarah R Kervin
- New York University, Department of Communicative Sciences and Disorders, 665 Broadway #9, New York, NY, 10012; Grabscheid Voice and Swallowing Center, New York Eye and Ear Infirmary of Mount Sinai, 380 2nd Ave, 9th Fl, New York, NY, 10010.
12. Goldsworthy RL. Computational Modeling of Synchrony in the Auditory Nerve in Response to Acoustic and Electric Stimulation. Front Comput Neurosci 2022;16:889992. PMID: 35782089; PMCID: PMC9249013; DOI: 10.3389/fncom.2022.889992.
Abstract
Cochlear implants are medical devices that provide hearing to nearly one million people around the world. Outcomes are impressive with most recipients learning to understand speech through this new way of hearing. Music perception and speech reception in noise, however, are notably poor. These aspects of hearing critically depend on sensitivity to pitch, whether the musical pitch of an instrument or the vocal pitch of speech. The present article examines cues for pitch perception in the auditory nerve based on computational models. Modeled neural synchrony for pure and complex tones is examined for three different electric stimulation strategies including Continuous Interleaved Sampling (CIS), High-Fidelity CIS (HDCIS), and Peak-Derived Timing (PDT). Computational modeling of current spread and neuronal response are used to predict neural activity to electric and acoustic stimulation. It is shown that CIS does not provide neural synchrony to the frequency of pure tones nor to the fundamental component of complex tones. The newer HDCIS and PDT strategies restore synchrony to both the frequency of pure tones and to the fundamental component of complex tones. Current spread reduces spatial specificity of excitation as well as the temporal fidelity of neural synchrony, but modeled neural excitation restores precision of these cues. Overall, modeled neural excitation to electric stimulation that incorporates temporal fine structure (e.g., HDCIS and PDT) indicates neural synchrony comparable to that provided by acoustic stimulation. Discussion considers the importance of stimulation rate and long-term rehabilitation to provide temporal cues for pitch perception.
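A standard way such models quantify neural synchrony is vector strength (Goldberg and Brown's measure): the length of the mean resultant vector of spike times expressed as phases of the stimulus frequency, 1 for perfect phase locking and 0 for none. The sketch below applies it to simulated phase-locked versus unlocked spike trains; whether the paper uses exactly this index is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def vector_strength(spike_times, freq):
    """Mean resultant length of spike times mapped to stimulus phase."""
    phases = 2 * np.pi * freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

freq = 200.0                                    # pure-tone frequency, Hz
cycles = np.arange(200) / freq                  # one spike per stimulus cycle
locked = cycles + rng.normal(0, 0.0002, 200)    # tightly phase-locked train
random = rng.uniform(0.0, 1.0, 200)             # unlocked, uniform spike times

vs_locked = vector_strength(locked, freq)
vs_random = vector_strength(random, freq)
```

A strategy like CIS that discards temporal fine structure would leave spike timing unrelated to the tone's phase (low vector strength), whereas fine-structure strategies such as HDCIS and PDT restore high values, which is the contrast the modeling examines.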
13. Vuust P, Heggli OA, Friston KJ, Kringelbach ML. Music in the brain. Nat Rev Neurosci 2022;23:287-305. PMID: 35352057; DOI: 10.1038/s41583-022-00578-5.
Abstract
Music is ubiquitous across human cultures - as a source of affective and pleasurable experience, moving us both physically and emotionally - and learning to play music shapes both brain structure and brain function. Music processing in the brain - namely, the perception of melody, harmony and rhythm - has traditionally been studied as an auditory phenomenon using passive listening paradigms. However, when listening to music, we actively generate predictions about what is likely to happen next. This enactive aspect has led to a more comprehensive understanding of music processing involving brain structures implicated in action, emotion and learning. Here we review the cognitive neuroscience literature of music perception. We show that music perception, action, emotion and learning all rest on the human brain's fundamental capacity for prediction - as formulated by the predictive coding of music model. This Review elucidates how this formulation of music perception and expertise in individuals can be extended to account for the dynamics and underlying brain mechanisms of collective music making. This in turn has important implications for human creativity as evinced by music improvisation. These recent advances shed new light on what makes music meaningful from a neuroscientific perspective.
Affiliation(s)
- Peter Vuust
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark
- Ole A Heggli
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark
- Karl J Friston
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Morten L Kringelbach
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark; Department of Psychiatry, University of Oxford, Oxford, UK; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, UK
14. Guest DR, Oxenham AJ. Human discrimination and modeling of high-frequency complex tones shed light on the neural codes for pitch. PLoS Comput Biol 2022;18:e1009889. PMID: 35239639; PMCID: PMC8923464; DOI: 10.1371/journal.pcbi.1009889.
Abstract
Accurate pitch perception of harmonic complex tones is widely believed to rely on temporal fine structure information conveyed by the precise phase-locked responses of auditory-nerve fibers. However, accurate pitch perception remains possible even when spectrally resolved harmonics are presented at frequencies beyond the putative limits of neural phase locking, and it is unclear whether residual temporal information, or a coarser rate-place code, underlies this ability. We addressed this question by measuring human pitch discrimination at low and high frequencies for harmonic complex tones, presented either in isolation or in the presence of concurrent complex-tone maskers. We found that concurrent complex-tone maskers impaired performance at both low and high frequencies, although the impairment introduced by adding maskers at high frequencies relative to low frequencies differed between the tested masker types. We then combined simulated auditory-nerve responses to our stimuli with ideal-observer analysis to quantify the extent to which performance was limited by peripheral factors. We found that the worsening of both frequency discrimination and F0 discrimination at high frequencies could be well accounted for (in relative terms) by optimal decoding of all available information at the level of the auditory nerve. A Python package is provided to reproduce these results, and to simulate responses to acoustic stimuli from the three previously published models of the human auditory nerve used in our analyses.
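The ideal-observer analysis described above bounds discrimination performance by the information available in the simulated neural response. A hedged sketch of that logic for a generic Poisson rate-place population, using invented Gaussian tuning curves rather than the detailed auditory-nerve models the authors used, is:

```python
import numpy as np

# Sketch of ideal-observer decoding for a rate-place code: for independent
# Poisson units with tuning curves f_i(theta), the Fisher information is
# sum_i f_i'(theta)^2 / f_i(theta), and the best achievable discrimination
# threshold scales as 1/sqrt(I) (Cramer-Rao bound). Tuning shapes and all
# parameter values here are assumptions for illustration only.

def fisher_information(theta, centers, peak_rate=50.0, sigma=0.1):
    """Fisher information about log-frequency theta in a Poisson population."""
    rates = peak_rate * np.exp(-0.5 * ((theta - centers) / sigma) ** 2)
    d_rates = rates * (centers - theta) / sigma ** 2   # analytic derivative
    return np.sum(d_rates ** 2 / np.maximum(rates, 1e-12))

centers = np.linspace(-1.0, 1.0, 200)   # tonotopic array (log-frequency axis)
info = fisher_information(0.0, centers)
threshold = 1.0 / np.sqrt(info)         # lower bound on discriminable step
```

In this framing, "optimal decoding of all available information" means comparing measured human thresholds against this bound computed from the simulated periphery.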
Collapse
Affiliation(s)
- Daniel R. Guest
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota, United States of America
| | - Andrew J. Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota, United States of America
| |
Collapse
|
15
|
Goldsworthy RL, Bissmeyer SRS, Camarena A. Advantages of Pulse Rate Compared to Modulation Frequency for Temporal Pitch Perception in Cochlear Implant Users. J Assoc Res Otolaryngol 2022; 23:137-150. [PMID: 34981263 PMCID: PMC8782986 DOI: 10.1007/s10162-021-00828-w] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Accepted: 12/01/2021] [Indexed: 02/03/2023] Open
Abstract
Most cochlear implants encode the fundamental frequency of periodic sounds by amplitude modulation of constant-rate pulsatile stimulation. Pitch perception provided by such stimulation strategies is markedly poor. Two experiments are reported here that consider potential advantages of pulse rate compared to modulation frequency for providing stimulation timing cues for pitch. The first experiment examines beat frequency distortion that occurs when modulating constant-rate pulsatile stimulation. This distortion has been reported on previously, but the results presented here indicate that distortion occurs for higher stimulation rates than previously reported. The second experiment examines pitch resolution as provided by pulse rate compared to modulation frequency. The results indicate that pitch discrimination is better with pulse rate than with modulation frequency. The advantage was large for rates near what has been suggested as the upper limit of temporal pitch perception conveyed by cochlear implants. The results are relevant to sound processing design for cochlear implants particularly for algorithms that encode fundamental frequency into deep envelope modulations or into precisely timed pulsatile stimulation.
Collapse
Affiliation(s)
- Raymond L Goldsworthy
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA.
| | - Susan R S Bissmeyer
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
| | - Andres Camarena
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA
| |
Collapse
|
16
|
McGuire K, Firestone GM, Zhang N, Zhang F. The Acoustic Change Complex in Response to Frequency Changes and Its Correlation to Cochlear Implant Speech Outcomes. Front Hum Neurosci 2021; 15:757254. [PMID: 34744668 PMCID: PMC8566680 DOI: 10.3389/fnhum.2021.757254] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2021] [Accepted: 10/01/2021] [Indexed: 12/12/2022] Open
Abstract
One of the biggest challenges facing cochlear implant (CI) users is the high variability of hearing outcomes across patients. Since speech perception requires the detection of various dynamic changes in acoustic features (e.g., frequency, intensity, timing) in speech sounds, it is critical to examine the ability to detect within-stimulus acoustic changes in CI users. The primary objective of this study was to examine the auditory event-related potential (ERP) evoked by within-stimulus frequency changes (F-changes), a type of acoustic change complex (ACC), in adult CI users, and its correlation with speech outcomes. Twenty-one adult CI users (29 individual CI ears) were tested with psychoacoustic frequency change detection tasks, speech tests including Consonant-Nucleus-Consonant (CNC) word recognition, Arizona Biomedical Sentence Recognition in quiet and noise (AzBio-Q and AzBio-N), and the Digit-in-Noise (DIN) test, and electroencephalographic (EEG) recordings. The stimuli for the psychoacoustic tests and EEG recordings were pure tones at three different base frequencies (0.25, 1, and 4 kHz) that contained an F-change at the midpoint of the tone. Results showed that the frequency change detection threshold (FCDT), ACC N1' latency, and P2' latency did not differ across frequencies (p > 0.05). ACC N1'-P2' amplitude was significantly larger for 0.25 kHz than for the other base frequencies (p < 0.05). The mean N1' latency across the three base frequencies was negatively correlated with CNC word recognition (r = -0.40, p < 0.05) and CNC phoneme recognition (r = -0.40, p < 0.05), and positively correlated with mean FCDT (r = 0.46, p < 0.05). The P2' latency was positively correlated with DIN (r = 0.47, p < 0.05) and mean FCDT (r = 0.47, p < 0.05). There was no statistically significant correlation between N1'-P2' amplitude and speech outcomes (all ps > 0.05).
Results of this study indicated that variability in CI speech outcomes assessed with the CNC, AzBio-Q, and DIN tests can be partially explained (approximately 16-21%) by the variability of cortical sensory encoding of F-changes reflected by the ACC.
Collapse
Affiliation(s)
- Kelli McGuire
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
| | - Gabrielle M. Firestone
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
| | - Nanhua Zhang
- Division of Biostatistics and Epidemiology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, United States
| | - Fawen Zhang
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
| |
Collapse
|
17
|
Goldsworthy RL, Camarena A, Bissmeyer SRS. Pitch perception is more robust to interference and better resolved when provided by pulse rate than by modulation frequency of cochlear implant stimulation. Hear Res 2021; 409:108319. [PMID: 34340020 PMCID: PMC9343238 DOI: 10.1016/j.heares.2021.108319] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/17/2021] [Revised: 07/15/2021] [Accepted: 07/21/2021] [Indexed: 01/14/2023]
Abstract
Cochlear implants are medical devices that have been used to restore hearing to more than half a million people worldwide. Most recipients achieve high levels of speech comprehension through these devices, but speech comprehension in background noise and music appreciation in general are markedly poor compared to normal hearing. A key aspect of hearing that is notably diminished in cochlear implant outcomes is the sense of pitch provided by these devices. Pitch perception is an important factor affecting speech comprehension in background noise and is critical for music perception. The present article summarizes two experiments that examine the robustness and resolution of pitch perception as provided by cochlear implant stimulation timing. The driving hypothesis is that pitch conveyed by stimulation timing cues is more robust and better resolved when provided by variable pulse rates than by modulation frequency of constant-rate stimulation. Experiment 1 examines the robustness of hearing a large, one-octave pitch difference in the presence of interfering electrical stimulation. With robustness to interference characterized for an otherwise easily discernible pitch difference, Experiment 2 examines the resolution of discrimination thresholds in the presence of interference as conveyed by modulation frequency or by pulse rate. These experiments test for an advantage of stimulation with precise temporal cues. The results indicate that pitch provided by pulse rate is both more robust to interference and better resolved than when provided by modulation frequency. These results should inform the development of new sound processing strategies for cochlear implants designed to encode the fundamental frequency of sounds into precise temporal stimulation.
Collapse
Affiliation(s)
- Raymond L Goldsworthy
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States.
| | - Andres Camarena
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States; Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States
| | - Susan R S Bissmeyer
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States; Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States
| |
Collapse
|
18
|
Homma NY, Bajo VM. Lemniscal Corticothalamic Feedback in Auditory Scene Analysis. Front Neurosci 2021; 15:723893. [PMID: 34489635 PMCID: PMC8417129 DOI: 10.3389/fnins.2021.723893] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Accepted: 07/30/2021] [Indexed: 12/15/2022] Open
Abstract
Sound information is transmitted from the ear to the central auditory stations of the brain via several nuclei. In addition to these ascending pathways, there are descending projections that can influence information processing at each of these nuclei. A major descending pathway in the auditory system is the feedback projection from layer VI of the primary auditory cortex (A1) to the ventral division of the medial geniculate body (MGBv) in the thalamus. The corticothalamic axons have small glutamatergic terminals that can modulate thalamic processing and thalamocortical information transmission. Corticothalamic neurons also provide input to GABAergic neurons of the thalamic reticular nucleus (TRN), which also receives collaterals from the ascending thalamic axons. The balance of corticothalamic and TRN inputs has been shown to refine frequency tuning, firing patterns, and gating of MGBv neurons. Therefore, the thalamus is not merely a relay stage in the chain of auditory nuclei but participates in complex aspects of sound processing that include top-down modulation. In this review, we aim (i) to examine how lemniscal corticothalamic feedback modulates responses in MGBv neurons, and (ii) to explore how this feedback contributes to auditory scene analysis, particularly with respect to frequency and harmonic perception. Finally, we discuss potential implications of the role of corticothalamic feedback in music and speech perception, where precise spectral and temporal processing is essential.
Collapse
Affiliation(s)
- Natsumi Y. Homma
- Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA, United States
- Coleman Memorial Laboratory, Department of Otolaryngology – Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, United States
| | - Victoria M. Bajo
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
| |
Collapse
|
19
|
Wang L, Beaman CP, Jiang C, Liu F. Perception and Production of Statement-Question Intonation in Autism Spectrum Disorder: A Developmental Investigation. J Autism Dev Disord 2021; 52:3456-3472. [PMID: 34355295 PMCID: PMC9296411 DOI: 10.1007/s10803-021-05220-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/25/2021] [Indexed: 11/25/2022]
Abstract
Prosody or “melody in speech” in autism spectrum disorder (ASD) is often perceived as atypical. This study examined perception and production of statements and questions in 84 children, adolescents and adults with and without ASD, as well as participants’ pitch direction discrimination thresholds. The results suggested that the abilities to discriminate (in both speech and music conditions), identify, and imitate statement-question intonation were intact in individuals with ASD across age cohorts. Sensitivity to pitch direction predicted performance on intonation processing in both groups, who also exhibited similar developmental changes. These findings provide evidence for shared mechanisms in pitch processing between speech and music, as well as associations between low- and high-level pitch processing and between perception and production of pitch.
Collapse
Affiliation(s)
- Li Wang
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
| | - C Philip Beaman
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
| | - Cunmei Jiang
- Music College, Shanghai Normal University, Shanghai, China
| | - Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK.
| |
Collapse
|
20
|
Helpard L, Li H, Rohani SA, Zhu N, Rask-Andersen H, Agrawal S, Ladak HM. An Approach for Individualized Cochlear Frequency Mapping Determined from 3D Synchrotron Radiation Phase-Contrast Imaging. IEEE Trans Biomed Eng 2021; 68:3602-3611. [PMID: 33983877 DOI: 10.1109/tbme.2021.3080116] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
OBJECTIVE Cochlear implants are traditionally programmed to stimulate according to a generalized frequency map, where individual anatomic variability is not considered when selecting the centre frequency of stimulation of each implant electrode. However, high variability in cochlear size and spatial frequency distributions exists among individuals. Generalized cochlear implant frequency maps can result in large pitch perception errors and reduced hearing outcomes for cochlear implant recipients. The objective of this work was to develop an individualized frequency mapping technique for the human cochlea to allow for patient-specific cochlear implant stimulation. METHODS Ten cadaveric human cochleae were scanned using synchrotron radiation phase-contrast imaging (SR-PCI) combined with computed tomography (CT). For each cochlea, ground-truth angle-frequency measurements were obtained in three dimensions using the SR-PCI CT data. Using an approach designed to minimize perceptual error in frequency estimation, an individualized frequency function was determined to relate angular depth to frequency within the cochlea. RESULTS The individualized frequency mapping function significantly reduced pitch errors in comparison to the current gold-standard generalized approach. CONCLUSION AND SIGNIFICANCE This paper presents for the first time a cochlear frequency map that can be individualized using only the angular length of a cochlea. This approach can be applied in the clinical setting and has the potential to revolutionize cochlear implant programming for patients worldwide.
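For orientation, the generalized (non-individualized) human place-frequency map that such approaches refine is commonly approximated by Greenwood's function. The sketch below uses the standard human parameter values; the paper's individualized, angle-based function is not reproduced here.

```python
# Greenwood's place-frequency function for the human cochlea,
# F = A * (10**(a*x) - k), with x the fractional distance along the basilar
# membrane measured from the apex. Parameters A=165.4, a=2.1, k=0.88 are the
# standard human values; an individualized map would refit this relation
# (against angular depth) for each patient's anatomy.

def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Characteristic frequency (Hz) at fractional position x.

    x = 0 at the apex (low frequencies), x = 1 at the base (high frequencies).
    """
    return A * (10 ** (a * x) - k)

apex_hz = greenwood_frequency(0.0)   # lowest characteristic frequency
base_hz = greenwood_frequency(1.0)   # highest characteristic frequency
```

The clinical point of the paper is that the fixed parameters above ignore anatomical variation; fitting the mapping per cochlea is what reduces the pitch-place mismatch.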
Collapse
|
21
|
Liang Q, Zeng Y. Stylistic Composition of Melodies Based on a Brain-Inspired Spiking Neural Network. Front Syst Neurosci 2021; 15:639484. [PMID: 33776661 PMCID: PMC7991719 DOI: 10.3389/fnsys.2021.639484] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2020] [Accepted: 02/22/2021] [Indexed: 11/30/2022] Open
Abstract
Current neural-network-based algorithmic composition methods differ greatly from the human brain's composition process, yet the biological plausibility of composition and generative models is essential for the future of Artificial Intelligence. To explore this problem, this paper presents a spiking neural network inspired by brain structures and musical information processing mechanisms at multiple scales. Unlike previous methods, our model has three novel characteristics: (1) Inspired by brain structures, multiple brain regions with different cognitive functions, including musical memory and knowledge learning, are simulated and cooperate to generate stylistic melodies. A hierarchical neural network is constructed to formulate musical knowledge. (2) A biologically plausible neuron model is employed to construct the network, and synaptic connections are modulated using the spike-timing-dependent plasticity (STDP) learning rule. In addition, brain oscillations at different frequencies play an important role in the learning and generation processes. (3) Building on musical memory and knowledge learning, genre-based and composer-based melody composition can be achieved by different neural circuits; experiments show that the model can compose melodies in the styles of different composers or genres.
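The STDP rule named above can be sketched in a few lines. The time constants and amplitudes below are generic textbook values, not those used in the paper.

```python
import math

# Minimal pair-based spike-timing-dependent plasticity (STDP): a presynaptic
# spike shortly before a postsynaptic spike strengthens the synapse (LTP),
# the reverse ordering weakens it (LTD). Amplitudes and the 20 ms time
# constant are illustrative assumptions, not the paper's parameters.

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for one pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # pre-before-post: LTP
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)   # post-before-pre: LTD
    return 0.0

ltp = stdp_dw(t_pre=10.0, t_post=15.0)   # positive change: potentiation
ltd = stdp_dw(t_pre=15.0, t_post=10.0)   # negative change: depression
```

Making depression slightly stronger than potentiation (a_minus > a_plus) is a common choice to keep weights from saturating under uncorrelated firing.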
Collapse
Affiliation(s)
- Qian Liang
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China.,School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
| | - Yi Zeng
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China.,School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China.,National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China.,Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
| |
Collapse
|
22
|
Andermann M, Günther M, Patterson RD, Rupp A. Early cortical processing of pitch height and the role of adaptation and musicality. Neuroimage 2020; 225:117501. [PMID: 33169697 DOI: 10.1016/j.neuroimage.2020.117501] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2020] [Revised: 10/19/2020] [Accepted: 10/21/2020] [Indexed: 02/06/2023] Open
Abstract
Pitch is an important perceptual feature; however, it is poorly understood how its cortical correlates are shaped by absolute vs relative fundamental frequency (f0), and by neural adaptation. In this study, we assessed transient and sustained auditory evoked fields (AEFs) at the onset, progression, and offset of short pitch height sequences, taking into account the listener's musicality. We show that neuromagnetic activity reflects absolute f0 at pitch onset and offset, and relative f0 at transitions within pitch sequences; further, sequences with fixed f0 lead to larger response suppression than sequences with variable f0 contour, and to enhanced offset activity. Musical listeners exhibit stronger f0-related AEFs and larger differences between their responses to fixed vs variable sequences, both within sequences and at pitch offset. The results resemble prominent psychoacoustic phenomena in the perception of pitch contours; moreover, they suggest a strong influence of adaptive mechanisms on cortical pitch processing which, in turn, might be modulated by a listener's musical expertise.
Collapse
Affiliation(s)
- Martin Andermann
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany.
| | - Melanie Günther
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany
| | - Roy D Patterson
- Department of Physiology, Development and Neuroscience, University of Cambridge, Downing Street, Cambridge, CB2 3EG, United Kingdom
| | - André Rupp
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany
| |
Collapse
|
23
|
Usui K, Shinozaki J, Usui N, Terada K, Matsuda K, Kondo A, Tottori T, Nagamine T, Inoue Y. Retained absolute pitch after selective amygdalohippocampectomy. Epilepsy Behav Rep 2020; 14:100378. [PMID: 32984806 PMCID: PMC7494675 DOI: 10.1016/j.ebr.2020.100378] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2020] [Revised: 06/13/2020] [Accepted: 06/19/2020] [Indexed: 11/17/2022] Open
Abstract
This study assessed the pre-operative chronic condition and the effect of epilepsy surgery in a 21-year-old Japanese woman with drug-resistant right temporal lobe epilepsy (TLE). For this patient, it was crucially important to preserve language and her music capabilities, including absolute pitch (AP), which is found in less than 0.1% of the general population. The patient became seizure free, and her AP capability was preserved after selective amygdalohippocampectomy in the non-dominant right hemisphere. Most of the neuropsychological test (WAIS-III and WMS-R) scores remained in the normal range, except for low scores in verbal memory and a markedly improved attention/concentration index. The patient's pre- and postoperative brain function related to language and music capabilities was investigated using functional magnetic resonance imaging (fMRI) based on two language tasks and a music task (listening to melodies). While task performance was similar in the pre- and postoperative examinations, her brain activation patterns markedly differed. The most striking difference was during the music task: areas with significant activation existed in the bilateral frontal and temporal lobes before surgery, whereas postoperative activation was confined to a very limited region in the left angular gyrus. The authors speculate that the surgery triggered some change in functional organization in the brain, which contributed to preserving her capabilities. A music student with drug-resistant temporal lobe epilepsy (TLE) became seizure free. Postoperative evaluation exhibited almost stable AP ability and cognitive function. Brain activation patterns on fMRI showed a notable change after surgery. Surgery possibly triggered some change in functional organization of the brain. Change in functional organization possibly contributed to preserving the capabilities.
Collapse
Affiliation(s)
- Keiko Usui
- Department of Systems Neuroscience, School of Medicine, Sapporo Medical University, S1W17, Chuo-ku, Sapporo, Hokkaido 060-8556, Japan
- Corresponding author.
| | - Jun Shinozaki
- Department of Systems Neuroscience, School of Medicine, Sapporo Medical University, S1W17, Chuo-ku, Sapporo, Hokkaido 060-8556, Japan
| | - Naotaka Usui
- National Epilepsy Center, NHO Shizuoka Institute of Epilepsy and Neurological Disorders, Urushiyama 886, Aoi-ku, Shizuoka 420-8688, Japan
| | - Kiyohito Terada
- National Epilepsy Center, NHO Shizuoka Institute of Epilepsy and Neurological Disorders, Urushiyama 886, Aoi-ku, Shizuoka 420-8688, Japan
| | - Kazumi Matsuda
- National Epilepsy Center, NHO Shizuoka Institute of Epilepsy and Neurological Disorders, Urushiyama 886, Aoi-ku, Shizuoka 420-8688, Japan
| | - Akihiko Kondo
- National Epilepsy Center, NHO Shizuoka Institute of Epilepsy and Neurological Disorders, Urushiyama 886, Aoi-ku, Shizuoka 420-8688, Japan
| | - Takayasu Tottori
- National Epilepsy Center, NHO Shizuoka Institute of Epilepsy and Neurological Disorders, Urushiyama 886, Aoi-ku, Shizuoka 420-8688, Japan
| | - Takashi Nagamine
- Department of Systems Neuroscience, School of Medicine, Sapporo Medical University, S1W17, Chuo-ku, Sapporo, Hokkaido 060-8556, Japan
| | - Yushi Inoue
- National Epilepsy Center, NHO Shizuoka Institute of Epilepsy and Neurological Disorders, Urushiyama 886, Aoi-ku, Shizuoka 420-8688, Japan
| |
Collapse
|
24
|
Liang Q, Zeng Y, Xu B. Temporal-Sequential Learning With a Brain-Inspired Spiking Neural Network and Its Application to Musical Memory. Front Comput Neurosci 2020; 14:51. [PMID: 32714173 PMCID: PMC7343962 DOI: 10.3389/fncom.2020.00051] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2019] [Accepted: 05/11/2020] [Indexed: 11/13/2022] Open
Abstract
Sequence learning is a fundamental cognitive function of the brain. However, the ways in which sequential information is represented and memorized are not dealt with satisfactorily by existing models. To overcome this deficiency, this paper introduces a spiking neural network based on psychological and neurobiological findings at multiple scales. Compared with existing methods, our model has four novel features: (1) It contains several collaborative subnetworks similar to those in brain regions with different cognitive functions. The individual building blocks of the simulated areas are neural functional minicolumns composed of biologically plausible neurons. Both excitatory and inhibitory connections between neurons are modulated dynamically using a spike-timing-dependent plasticity learning rule. (2) Inspired by the mechanisms of the brain's cortical-striatal loop, a timing module is constructed to encode temporal information, which is essential in sequence learning but has not been processed well by traditional algorithms. (3) Goal-based and episodic retrievals can be achieved at different time scales. (4) Musical memory is used as an application to validate the model. Experiments show that the model can store a large number of melodies and recall them with high accuracy. In addition, it can recall an entire melody when given only an excerpt of it, or when the melody is played at a different tempo.
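The episodic-retrieval idea, completing a whole melody from a short excerpt, can be caricatured with a simple order-2 transition table. This is an associative toy under invented note names, not the paper's spiking minicolumn model.

```python
# Toy sequence recall by chaining: store (previous two notes) -> (next note)
# transitions, then regenerate a melody from any two-note cue. The melody and
# note names are invented; the paper's model does this with spiking dynamics.

def learn(melody):
    """Build an order-2 transition table from a note sequence."""
    table = {}
    for a, b, c in zip(melody, melody[1:], melody[2:]):
        table[(a, b)] = c
    return table

def recall(table, cue, length):
    """Extend a two-note cue by repeatedly following stored transitions."""
    out = list(cue)
    while len(out) < length and (out[-2], out[-1]) in table:
        out.append(table[(out[-2], out[-1])])
    return out

melody = ["C4", "D4", "E4", "F4", "G4", "A4"]
table = learn(melody)
completed = recall(table, ["C4", "D4"], len(melody))  # regenerates the melody
```

An order-2 (rather than order-1) key is the minimal way to disambiguate repeated notes; the spiking model achieves the analogous disambiguation through its timing module.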
Collapse
Affiliation(s)
- Qian Liang
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China.,School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
| | - Yi Zeng
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China.,School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China.,National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China.,Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
| | - Bo Xu
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China.,School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China.,Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
| |
Collapse
|
25
|
Little DF, Snyder JS, Elhilali M. Ensemble modeling of auditory streaming reveals potential sources of bistability across the perceptual hierarchy. PLoS Comput Biol 2020; 16:e1007746. [PMID: 32275706 PMCID: PMC7185718 DOI: 10.1371/journal.pcbi.1007746] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2019] [Revised: 04/27/2020] [Accepted: 02/25/2020] [Indexed: 11/19/2022] Open
Abstract
Perceptual bistability (the spontaneous, irregular fluctuation of perception between two interpretations of a stimulus) occurs when observing a large variety of ambiguous stimulus configurations. This phenomenon has the potential to serve as a tool for, among other things, understanding how function varies across individuals, given the large individual differences that manifest during perceptual bistability. Yet it remains difficult to interpret the functional processes at work without knowing where bistability arises during perception. In this study we explore the hypothesis that bistability originates from multiple sources distributed across the perceptual hierarchy. We develop a hierarchical model of auditory processing composed of three distinct levels: a Peripheral, tonotopic analysis; a Central analysis computing features found more centrally in the auditory system; and an Object analysis, where sounds are segmented into different streams. We model bistable perception within this system by applying adaptation, inhibition and noise to one or all of the three levels of the hierarchy. We evaluate a large ensemble of variations of this hierarchical model, where each model has a different configuration of adaptation, inhibition and noise. This approach avoids the assumption that a single configuration must be invoked to explain the data. Each model is evaluated on its ability to replicate two hallmarks of bistability during auditory streaming: the selectivity of bistability to specific stimulus configurations, and the characteristic log-normal pattern of perceptual switches. Consistent with a distributed origin, a broad range of model parameters across this hierarchy leads to a plausible form of perceptual bistability.
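The core adaptation-inhibition mechanism can be caricatured with two competing percepts: the dominant one slowly adapts while mutual inhibition, collapsed here into a hysteresis margin, delays the takeover. This is a drastic simplification, not the authors' three-level model, and all parameter values are invented. Without noise the alternation is periodic; injecting noise into the drives is what produces the irregular, log-normally distributed switch times the paper fits.

```python
# Minimal adaptation-driven alternation between two percepts. The dominant
# percept's adaptation a[d] grows toward 1 while the suppressed percept
# recovers toward 0; dominance flips only once the adaptation gap exceeds
# the inhibition margin h (hysteresis). Deterministic by construction.

def simulate_dominance(steps=6000, dt=1.0, tau_a=200.0, h=0.3):
    a = [0.0, 0.0]        # adaptation level of each percept
    d = 0                 # index of the currently dominant percept
    history = []
    for _ in range(steps):
        if a[d] - a[1 - d] > h:        # suppressed percept takes over
            d = 1 - d
        history.append(d)
        a[d] += dt * (1.0 - a[d]) / tau_a          # dominant percept adapts
        a[1 - d] += dt * (0.0 - a[1 - d]) / tau_a  # suppressed one recovers
    return history

history = simulate_dominance()
switches = sum(1 for i in range(1, len(history)) if history[i] != history[i - 1])
```

With slow adaptation (tau_a much longer than a time step) each percept dominates for roughly a hundred steps before the switch condition is met, giving the sustained alternation characteristic of bistable streaming.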
Collapse
Affiliation(s)
- David F. Little
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland, United States of America
| | - Joel S. Snyder
- Department of Psychology, University of Nevada, Las Vegas; Las Vegas, Nevada, United States of America
| | - Mounya Elhilali
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland, United States of America
| |
Collapse
|
26
|
Firestone GM, McGuire K, Liang C, Zhang N, Blankenship CM, Xiang J, Zhang F. A Preliminary Study of the Effects of Attentive Music Listening on Cochlear Implant Users' Speech Perception, Quality of Life, and Behavioral and Objective Measures of Frequency Change Detection. Front Hum Neurosci 2020; 14:110. [PMID: 32296318 PMCID: PMC7136537 DOI: 10.3389/fnhum.2020.00110] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2019] [Accepted: 03/11/2020] [Indexed: 11/17/2022] Open
Abstract
Introduction Most cochlear implant (CI) users have difficulty in listening tasks that rely strongly on the perception of frequency changes (e.g., speech perception in noise, musical melody perception). Some previous studies using behavioral or subjective assessments have shown that short-term music training can benefit CI users' perception of music and speech. Electroencephalographic (EEG) recordings may reveal the neural basis for music training benefits in CI users. Objective To examine the effects of short-term music training on CI hearing outcomes using a comprehensive test battery of subjective evaluation, behavioral tests, and EEG measures. Design Twelve adult CI users were recruited for a home-based music training program focused on attentive listening to music genres and materials with an emphasis on melody. The participants used a music streaming program (i.e., Pandora) downloaded onto personal electronic devices and listened attentively through a direct audio cable or through Bluetooth streaming. The training schedule was 40 min/session/day, 5 days/week, for either 4 or 8 weeks. The pre-training and post-training tests included: hearing thresholds, the Speech, Spatial and Qualities of Hearing Scale (SSQ12) questionnaire, psychoacoustic tests of frequency change detection threshold (FCDT), speech recognition tests (CNC words, AzBio sentences, and QuickSIN), and EEG responses to tones containing different magnitudes of frequency changes. Results All participants except one completed the 4- or 8-week training, a dropout rate of 8.33%. The remaining eleven participants performed all tests, except for two who did not take part in the EEG measures. Results showed a significant improvement in the FCDTs as well as in performance on CNC and QuickSIN after training (p < 0.05), but no significant improvement in SSQ scores (p > 0.05). Results of the EEG tests showed larger post-training cortical auditory evoked potentials (CAEPs) in seven of the nine participants, suggesting better cortical processing of both stimulus onset and within-stimulus frequency changes. Conclusion These preliminary data suggest that extensive, focused music listening can improve frequency perception and speech perception in CI users. Further studies that include a larger sample size and control groups are warranted to determine the efficacy of short-term music training in CI users.
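A frequency change detection threshold of the kind measured here is typically obtained with an adaptive procedure. The sketch below implements a generic 2-down/1-up staircase (Levitt-style) run against a simulated listener; the simulated psychometric function and every parameter are assumptions for illustration, not details from the study.

```python
import numpy as np

def staircase_fcdt(threshold_true=10.0, start=100.0,
                   step_factor=0.5 ** 0.25, n_reversals=8, seed=1):
    """2-down/1-up staircase converging near the ~70.7%-correct point.
    The 'listener' is simulated with a logistic psychometric function on
    the frequency change delta (Hz); returns the threshold estimate."""
    rng = np.random.default_rng(seed)

    def listener_correct(delta_hz):
        p = 1.0 / (1.0 + np.exp(-(delta_hz - threshold_true) / 2.0))
        p = 0.5 + 0.5 * p          # 2AFC: guessing floor at 50%
        return rng.random() < p

    delta, down_count, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if listener_correct(delta):
            down_count += 1
            if down_count == 2:     # two correct in a row -> make it harder
                down_count = 0
                if direction == +1:
                    reversals.append(delta)
                direction = -1
                delta *= step_factor
        else:                       # one error -> make it easier
            down_count = 0
            if direction == -1:
                reversals.append(delta)
            direction = +1
            delta /= step_factor
    return float(np.mean(reversals[-6:]))  # average the last reversals
```

The multiplicative step and reversal averaging are conventional choices; a real FCDT protocol would also fix the stimulus details (tone duration, base frequency, change location) that this sketch omits.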
Affiliation(s)
- Gabrielle M Firestone
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Kelli McGuire
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Chun Liang
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Nanhua Zhang
- Division of Biostatistics and Epidemiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, United States
- Chelsea M Blankenship
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Jing Xiang
- Department of Pediatrics and Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
- Fawen Zhang
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
27
Lu H, Zhang K, Liu Q. Reading fluency and pitch discrimination abilities in children with learning disabilities. Technol Health Care 2020; 28:361-370. [PMID: 32364169 PMCID: PMC7369083 DOI: 10.3233/thc-209037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND Pitch perception and pitch matching may be linked to individual reading skills. OBJECTIVE In this study, we administered pitch perception and pitch matching tasks to children with learning disabilities to determine whether performance on these tasks is connected to reading fluency. METHOD The study used several types of pitch discrimination tests and reading fluency tests to compare children with learning disabilities with typically developing children. RESULTS Children with learning disabilities differed significantly from typically developing children in both pitch discrimination accuracy and reading fluency. They also exhibited impaired pitch matching, which was linked to their reading skills. CONCLUSION The results indicate that the processing and production of speech may be influenced by an individual's musical pitch perception and matching ability, and they suggest that further research is needed on how deficits in musical pitch perception affect speech and language production in children and adults.
Affiliation(s)
- Haidan Lu
- Education and Rehabilitation Department, Faculty of Education, East China Normal University, Shanghai, China
- Kaili Zhang
- Education and Rehabilitation Department, Faculty of Education, East China Normal University, Shanghai, China
- Qiaoyun Liu
- Education and Rehabilitation Department, Faculty of Education, East China Normal University, Shanghai, China
28
Effects of syllable name change on pitch comparison. JOURNAL OF PACIFIC RIM PSYCHOLOGY 2020. [DOI: 10.1017/prp.2019.25] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
Previous research has found that musicians' pitch judgments, unlike non-musicians', are influenced by syllable names. Although non-musicians fail to identify absolute pitches, they recognize the direction of pitch change. The present experiment investigated whether non-musicians' judgments of pitch change can be influenced by the direction of syllable name change. Moreover, we examined the spatial, magnitudinal and sequential nature of pitches and syllable names. Participants (N = 33) listened to two successive tones sung with syllable names and judged the direction of pitch change by pressing vertically arranged buttons. Participants' accuracy in judging pitch change was influenced by the direction of syllable name change. However, response location did not interact with pitch change or syllable name change. A distance effect was found for pitches but not for syllable names. A sequence effect was also found: trials with early-in-sequence syllable names were responded to faster than trials with late-in-sequence syllable names. These results suggest that syllable names can influence non-musicians' pitch judgments in a relative context. We suggest that the sequential order of syllable names, a product of cultural activities, interferes with the judgment of pitch change.
29
Mehr SA, Singh M, Knox D, Ketter DM, Pickens-Jones D, Atwood S, Lucas C, Jacoby N, Egner AA, Hopkins EJ, Howard RM, Hartshorne JK, Jennings MV, Simson J, Bainbridge CM, Pinker S, O'Donnell TJ, Krasnow MM, Glowacki L. Universality and diversity in human song. Science 2019; 366:eaax0868. [PMID: 31753969 PMCID: PMC7001657 DOI: 10.1126/science.aax0868] [Citation(s) in RCA: 190] [Impact Index Per Article: 38.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2019] [Accepted: 10/24/2019] [Indexed: 12/22/2022]
Abstract
What is universal about music, and what varies? We built a corpus of ethnographic text on musical behavior from a representative sample of the world's societies, as well as a discography of audio recordings. The ethnographic corpus reveals that music (including songs with words) appears in every society observed; that music varies along three dimensions (formality, arousal, religiosity), more within societies than across them; and that music is associated with certain behavioral contexts such as infant care, healing, dance, and love. The discography, analyzed through machine summaries, amateur and expert listener ratings, and manual transcriptions, reveals that acoustic features of songs predict their primary behavioral context; that tonality is widespread, perhaps universal; that music varies in rhythmic and melodic complexity; and that elements of melodies and rhythms found worldwide follow power laws.
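Whether a set of element counts follows a power law can be checked, to a first approximation, by fitting a line in log-log rank-frequency space. This is a quick diagnostic only, not the corpus analysis used in the paper; maximum-likelihood estimators are preferred for formal power-law claims.

```python
import numpy as np

def powerlaw_exponent(counts):
    """Estimate the exponent b of a rank-frequency power law f(r) ~ r**(-b)
    by least squares on log(count) vs. log(rank)."""
    counts = np.sort(np.asarray(counts, dtype=float))[::-1]  # rank order
    ranks = np.arange(1, len(counts) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(counts), 1)
    return -slope

# Synthetic counts generated from f(r) = 1000 * r**-1.0
counts = 1000.0 * np.arange(1, 51) ** -1.0
# powerlaw_exponent(counts) recovers an exponent of 1.0
```

On real, noisy counts the log-log fit is biased in the tail, which is why the estimate should be treated as exploratory.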
Affiliation(s)
- Samuel A Mehr
- Data Science Initiative, Harvard University, Cambridge, MA 02138, USA
- Department of Psychology, Harvard University, Cambridge, MA 02138, USA
- School of Psychology, Victoria University of Wellington, Wellington, New Zealand
- Manvir Singh
- Department of Human Evolutionary Biology, Harvard University, Cambridge, MA 02138, USA
- Dean Knox
- Department of Politics, Princeton University, Princeton, NJ 08544, USA
- Daniel M Ketter
- Eastman School of Music, University of Rochester, Rochester, NY 14604, USA
- Department of Music, Missouri State University, Springfield, MO 65897, USA
- S Atwood
- Department of Psychology, Harvard University, Cambridge, MA 02138, USA
- Christopher Lucas
- Department of Political Science, Washington University, St. Louis, MO 63130, USA
- Nori Jacoby
- Computational Auditory Perception Group, Max Planck Institute for Empirical Aesthetics, 60322 Frankfurt am Main, Germany
- Alena A Egner
- Department of Psychology, Harvard University, Cambridge, MA 02138, USA
- Erin J Hopkins
- Department of Psychology, Harvard University, Cambridge, MA 02138, USA
- Rhea M Howard
- Department of Psychology, Harvard University, Cambridge, MA 02138, USA
- Jan Simson
- Department of Psychology, Harvard University, Cambridge, MA 02138, USA
- Department of Psychology, University of Konstanz, 78464 Konstanz, Germany
- Steven Pinker
- Department of Psychology, Harvard University, Cambridge, MA 02138, USA
- Max M Krasnow
- Department of Psychology, Harvard University, Cambridge, MA 02138, USA
- Luke Glowacki
- Department of Anthropology, Pennsylvania State University, State College, PA 16802, USA
30
Fuller C, Başkent D, Free R. Early Deafened, Late Implanted Cochlear Implant Users Appreciate Music More Than and Identify Music as Well as Postlingual Users. Front Neurosci 2019; 13:1050. [PMID: 31680802 PMCID: PMC6798179 DOI: 10.3389/fnins.2019.01050] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2019] [Accepted: 09/19/2019] [Indexed: 11/13/2022] Open
Abstract
Introduction: Typical cochlear implant (CI) users, namely those postlingually deafened and implanted, report that they do not enjoy listening to music and find it difficult to perceive. Another group of CI users, the early-deafened (during language acquisition) and late-implanted (after a long period of auditory deprivation; EDLI), report higher music appreciation, but is this related to better music perception? Materials and Methods: Sixteen EDLI and fifteen postlingually deafened (control group) CI users participated in the study. The inclusion criteria for EDLI were: severe or profound hearing loss with onset before the age of 6 years, implantation after the age of 16 years, and more than 1 year of CI experience. Subjectively, music perception and appreciation were evaluated using the Dutch Musical Background Questionnaire. Behaviorally, music perception was measured with melodic contour identification (MCI), using two instruments (piano and organ), each tested with and without a masking contour. The semitone distance between successive tones of the target varied from 1 to 3 semitones. Results: Subjectively, the EDLI group reported appreciating music more than the postlingually deafened CI users. Behaviorally, while the clinical phoneme recognition score was on average lower in the EDLI group, melodic contour identification did not differ significantly between the two groups. There was, however, an effect of instrument and masker for both groups: the piano was the best-recognized instrument, and for both instruments the masker with non-overlapping pitch was best recognized. Discussion: The EDLI group reported higher appreciation of music than the postlingual control group, even though behaviorally measured music perception did not differ significantly between the two groups. Both findings are surprising, since EDLI CI users would be expected to have poorer outcomes given their early deafness onset, long duration of auditory deprivation, and on average lower clinical speech scores. Perhaps the difficulty of music perception stems from electric hearing limitations common to both groups. The higher subjective appreciation in EDLI might be due to the lack of a musical memory: there is no ability to compare music heard via the CI with acoustic music perception. Overall, our findings support a benefit of implantation for a positive music experience in EDLI CI users.
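MCI stimuli of the kind described, contours whose successive tones are spaced 1 to 3 semitones apart, can be generated by mapping semitone steps to frequencies with the equal-temperament relation f = f0 · 2^(n/12). The contour names and base frequency below are hypothetical illustrations, not the study's stimulus set.

```python
import numpy as np

# Hypothetical contour shapes; each entry lists successive steps in units
# of the semitone spacing between adjacent tones.
CONTOURS = {
    "rising":         [0, 1, 2, 3, 4],
    "falling":        [4, 3, 2, 1, 0],
    "flat":           [0, 0, 0, 0, 0],
    "rising-falling": [0, 1, 2, 1, 0],
    "falling-rising": [2, 1, 0, 1, 2],
}

def contour_frequencies(shape, semitone_spacing=2, base_hz=220.0):
    """Map a named contour to tone frequencies in equal temperament:
    f = base_hz * 2 ** (semitones / 12)."""
    steps = np.array(CONTOURS[shape]) * semitone_spacing
    return base_hz * 2.0 ** (steps / 12.0)
```

For example, a rising contour with 3-semitone spacing spans exactly one octave over its five tones, since 4 x 3 = 12 semitones doubles the frequency.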
Affiliation(s)
- Christina Fuller
- Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands; Department of Otorhinolaryngology, Treant Zorggroep, Emmen, Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands
- Rolien Free
- Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands
31
Pagès-Portabella C, Toro JM. Dissonant endings of chord progressions elicit a larger ERAN than ambiguous endings in musicians. Psychophysiology 2019; 57:e13476. [PMID: 31512751 DOI: 10.1111/psyp.13476] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2018] [Revised: 07/30/2019] [Accepted: 08/08/2019] [Indexed: 11/29/2022]
Abstract
In major-minor tonal music, hierarchical relationships and patterns of tension and release are essential to its composition and experience. For most listeners, tension leads to an expectation of resolution. Thus, when musical expectations are broken, they are usually perceived as erroneous and elicit specific neural responses such as the early right anterior negativity (ERAN). In the present study, we explored whether different degrees of musical violation are processed differently after long-term musical training in comparison to day-to-day exposure. We registered the ERPs elicited by listening to unexpected chords in both musicians and nonmusicians. More specifically, we compared the responses to strong violations by unexpected dissonant endings and to mild violations by unexpected but consonant endings (Neapolitan chords). Our results show that, irrespective of training, irregular endings elicited the ERAN. However, the ERAN for dissonant endings was larger in musicians than in nonmusicians. More importantly, we observed a modulation of the neural responses by the degree of violation only in musicians. In this group, the amplitude of the ERAN was larger for strong than for mild violations. These results suggest an early sensitivity of musicians to dissonance, which is processed as less expected than tonal irregularities. We also found that irregular endings elicited a P3 only in musicians. Our study suggests that, even though violations of harmonic expectancies are detected by all listeners, musical training modulates how different violations of the musical context are processed.
Affiliation(s)
- Carlota Pagès-Portabella
- Language & Comparative Cognition Group, Center for Brain & Cognition, Universitat Pompeu Fabra, Barcelona, Spain
- Juan M Toro
- Language & Comparative Cognition Group, Center for Brain & Cognition, Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain
32
Leipold S, Greber M, Sele S, Jäncke L. Neural patterns reveal single-trial information on absolute pitch and relative pitch perception. Neuroimage 2019; 200:132-141. [PMID: 31238164 DOI: 10.1016/j.neuroimage.2019.06.030] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2019] [Accepted: 06/15/2019] [Indexed: 01/01/2023] Open
Abstract
Pitch is a fundamental attribute of sounds and yet is not perceived equally by all humans. Absolute pitch (AP) musicians perceive, recognize, and name pitches in absolute terms, whereas relative pitch (RP) musicians, representing the large majority of musicians, perceive pitches in relation to other pitches. In this study, we used electroencephalography (EEG) to investigate the neural representations underlying tone listening and tone labeling in a large sample of musicians (n = 105). Participants performed a pitch processing task with a listening and a labeling condition during EEG acquisition. Using a brain-decoding framework, we tested a prediction derived from both theoretical and empirical accounts of AP, namely that the representational similarity of listening and labeling is higher in AP musicians than in RP musicians. Consistent with the prediction, time-resolved single-trial EEG decoding revealed a higher representational similarity in AP musicians during late stages of pitch perception. Time-frequency-resolved EEG decoding further showed that the higher representational similarity was present in oscillations in the theta and beta frequency bands. Supplemental univariate analyses were less sensitive in detecting subtle group differences in the frequency domain. Taken together, the results suggest differences between AP and RP musicians in late pitch processing stages associated with cognition, rather than in early processing stages associated with perception.
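Time-resolved single-trial decoding of the kind described above can be sketched with a per-timepoint nearest-class-mean classifier. This is a simplified stand-in for the study's decoding framework: the split scheme, classifier, and the trials-by-channels-by-time data layout are all assumptions for illustration.

```python
import numpy as np

def timewise_decoding_accuracy(X, y, seed=0):
    """Time-resolved decoding sketch. At each time point, class means are
    estimated on a random half of the trials and the other half is
    classified by nearest mean. X: (trials, channels, time); y: 0/1 labels.
    Returns decoding accuracy per time point."""
    rng = np.random.default_rng(seed)
    n_trials, _, n_time = X.shape
    order = rng.permutation(n_trials)
    train, test = order[: n_trials // 2], order[n_trials // 2:]
    acc = np.empty(n_time)
    for t in range(n_time):
        m0 = X[train][y[train] == 0, :, t].mean(axis=0)
        m1 = X[train][y[train] == 1, :, t].mean(axis=0)
        d0 = np.linalg.norm(X[test][:, :, t] - m0, axis=1)
        d1 = np.linalg.norm(X[test][:, :, t] - m1, axis=1)
        acc[t] = np.mean((d1 < d0) == (y[test] == 1))
    return acc
```

On simulated trials where a class difference is present only in late time points, accuracy sits near chance early and rises late, which is the kind of time course used to localize processing stages.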
Affiliation(s)
- Simon Leipold
- Division Neuropsychology, Department of Psychology, University of Zurich, Zurich, Switzerland
- Marielle Greber
- Division Neuropsychology, Department of Psychology, University of Zurich, Zurich, Switzerland
- Silvano Sele
- Division Neuropsychology, Department of Psychology, University of Zurich, Zurich, Switzerland; University Research Priority Program (URPP), Dynamics of Healthy Aging, University of Zurich, Zurich, Switzerland
- Lutz Jäncke
- Division Neuropsychology, Department of Psychology, University of Zurich, Zurich, Switzerland; University Research Priority Program (URPP), Dynamics of Healthy Aging, University of Zurich, Zurich, Switzerland; Department of Special Education, King Abdulaziz University, Jeddah, Saudi Arabia
33
Weaver AJ, DiGiovanni JJ, Ries DT. Pspan: A New Tool for Assessing Pitch Temporal Processing and Patterning Capacity. Am J Audiol 2019; 28:322-332. [PMID: 31084578 DOI: 10.1044/2019_aja-18-0117] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
Purpose The purpose of this study was to evaluate whether merging the clinical pitch pattern test procedure with psychoacoustic adaptive methods would yield a tool capable of capturing individual differences in the pitch temporal processing and patterning capacity of children and adults. Method Sixty-six individuals, young children (ages 10-12 years, n = 22), older children (ages 13-15 years, n = 23), and adults (ages 18-33 years, n = 21), were recruited and assigned to subgroups based on the reported duration (in years) of instrumental music instruction. Additional background information was collected to assess whether the newly developed pitch temporal processing and patterning span, the Pspan, was sensitive to individual differences across participants. Results Evaluation of the Pspan task as a scale indicated good parallel reliability across runs, as assessed by Cronbach's alpha, and scores were normally distributed. Between-subjects analysis of variance indicated main effects for both the age groups and the music groups recruited for the study. A multiple regression analysis with Pspan scores as the dependent variable found that three measures of music instruction, age in years, and paternal education were predictive of enhanced temporal processing and patterning capacity for pitch input. Conclusions The outcomes suggest that the Pspan task is a time-efficient data collection tool that is sensitive to the duration of instrumental music instruction, maturation, and paternal education. In addition, the results indicate that the task is sensitive to age-related changes in auditory temporal processing and patterning performance during adolescence, when children are 10-15 years old.
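Parallel reliability of the kind reported for the Pspan can be computed as Cronbach's alpha over an n-participants by k-runs score matrix, alpha = k/(k-1) * (1 - sum of run variances / variance of totals). A minimal sketch (the data shape is an assumption; the paper's exact reliability computation may differ):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_participants, k_runs) score matrix:
    alpha = k/(k-1) * (1 - sum(run variances) / variance of row totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    run_vars = scores.var(axis=0, ddof=1).sum()   # variance of each run
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of totals
    return k / (k - 1) * (1.0 - run_vars / total_var)
```

When the runs are perfectly consistent (each run differing only by a constant offset), alpha equals 1; uncorrelated runs drive it toward 0.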
Affiliation(s)
- Aurora J. Weaver
- Auditory Psychophysics and Signal Processing Lab, Division of Communication Sciences and Disorders, Ohio University, Athens
- Auditory and Music Perception Lab, Department of Communication Disorders, Auburn University, AL
- Jeffrey J. DiGiovanni
- Auditory Psychophysics and Signal Processing Lab, Division of Communication Sciences and Disorders, Ohio University, Athens
- Department of Communication Sciences and Disorders, University of Cincinnati, OH
- Dennis T. Ries
- Department of Physical Medicine and Rehabilitation, University of Colorado–Anschutz Medical Campus, Aurora
34
Absolute and relative pitch processing in the human brain: neural and behavioral evidence. Brain Struct Funct 2019; 224:1723-1738. [DOI: 10.1007/s00429-019-01872-2] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2018] [Accepted: 04/03/2019] [Indexed: 12/11/2022]
35
Fuller CD, Galvin JJ, Maat B, Başkent D, Free RH. Comparison of Two Music Training Approaches on Music and Speech Perception in Cochlear Implant Users. Trends Hear 2019; 22:2331216518765379. [PMID: 29621947 PMCID: PMC5894911 DOI: 10.1177/2331216518765379] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
In normal-hearing (NH) adults, long-term music training may benefit music and speech perception, even when listening to spectro-temporally degraded signals such as those experienced by cochlear implant (CI) users. In this study, we compared two different music training approaches in CI users and their effects on speech and music perception, as it remains unclear which approach might be best. The approaches differed in terms of music exercises and social interaction. For the pitch/timbre group, melodic contour identification (MCI) training was performed using computer software. For the music therapy group, training involved face-to-face group exercises (rhythm perception, musical speech perception, music perception, singing, vocal emotion identification, and music improvisation). For the control group, training involved group nonmusic activities (e.g., writing, cooking, and woodworking). Training consisted of weekly 2-hr sessions over a 6-week period. Speech intelligibility in quiet and noise, vocal emotion identification, MCI, and quality of life (QoL) were measured before and after training. The different training approaches appeared to offer different benefits for music and speech perception. Training effects were observed within-domain (better MCI performance for the pitch/timbre group), with little cross-domain transfer of music training (emotion identification significantly improved for the music therapy group). While training had no significant effect on QoL, the music therapy group reported better perceptual skills across training sessions. These results suggest that more extensive and intensive training approaches that combine pitch training with the social aspects of music therapy may further benefit CI users.
Affiliation(s)
- Christina D Fuller
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands; Graduate School of Medical Sciences, University of Groningen, the Netherlands; Research School of Behavioral and Cognitive Neurosciences, University of Groningen, the Netherlands
- John J Galvin
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands; Graduate School of Medical Sciences, University of Groningen, the Netherlands; Research School of Behavioral and Cognitive Neurosciences, University of Groningen, the Netherlands; House Ear Institute, Los Angeles, CA, USA; Department of Head and Neck Surgery, David Geffen School of Medicine, UCLA, CA, USA
- Bert Maat
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands; Graduate School of Medical Sciences, University of Groningen, the Netherlands; Research School of Behavioral and Cognitive Neurosciences, University of Groningen, the Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands; Graduate School of Medical Sciences, University of Groningen, the Netherlands; Research School of Behavioral and Cognitive Neurosciences, University of Groningen, the Netherlands
- Rolien H Free
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands; Graduate School of Medical Sciences, University of Groningen, the Netherlands; Research School of Behavioral and Cognitive Neurosciences, University of Groningen, the Netherlands
36
A reevaluation of the electrophysiological correlates of absolute pitch and relative pitch: No evidence for an absolute pitch-specific negativity. Int J Psychophysiol 2019; 137:21-31. [PMID: 30610912 DOI: 10.1016/j.ijpsycho.2018.12.016] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2018] [Revised: 12/06/2018] [Accepted: 12/30/2018] [Indexed: 11/20/2022]
Abstract
Musicians with absolute pitch effortlessly identify the pitch of a sound without an external reference. Previous neuroscientific studies on absolute pitch have typically had small sample sizes and low statistical power, making them susceptible to false-positive findings. In a seminal study, Itoh et al. (2005) reported the elicitation of an absolute pitch-specific event-related potential component during tone listening, the AP negativity. Additionally, they identified several components as correlates of relative pitch, the ability to identify relations between pitches. Here, we attempted to replicate the main findings of Itoh et al.'s study in a large sample of musicians (n = 104) using both frequentist and Bayesian inference. We were not able to replicate the presence of an AP negativity during tone listening in individuals with high levels of absolute pitch, but we partially replicated the findings concerning the correlates of relative pitch. Our results are consistent with several previous studies reporting an absence of differences between musicians with and without absolute pitch in early auditory evoked potential components. We conclude that replication studies play a crucial role in assessing extraordinary findings, even more so in small fields where a single finding can have a large impact on further research.
37
Silva P, Spedo C, Baldassarini C, Benini C, Ferreira D, Barreira A, Leoni R. Brain functional and effective connectivity underlying the information processing speed assessed by the Symbol Digit Modalities Test. Neuroimage 2019; 184:761-770. [DOI: 10.1016/j.neuroimage.2018.09.080] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2018] [Revised: 09/24/2018] [Accepted: 09/26/2018] [Indexed: 11/30/2022] Open
38
Greber M, Rogenmoser L, Elmer S, Jäncke L. Electrophysiological Correlates of Absolute Pitch in a Passive Auditory Oddball Paradigm: a Direct Replication Attempt. eNeuro 2018; 5:ENEURO.0333-18.2018. [PMID: 30637328 PMCID: PMC6327942 DOI: 10.1523/eneuro.0333-18.2018] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2018] [Revised: 11/02/2018] [Accepted: 11/22/2018] [Indexed: 11/21/2022] Open
Abstract
Humans with absolute pitch (AP) are able to effortlessly name the pitch class of a sound without an external reference. The association of labels with pitches cannot be entirely suppressed even if it interferes with task demands. This suggests a high level of automaticity of pitch labeling in AP. The automatic nature of AP was further investigated in a study by Rogenmoser et al. (2015). Using a passive auditory oddball paradigm in combination with electroencephalography, they observed electrophysiological differences between musicians with and without AP in response to piano tones. Specifically, the AP musicians showed a smaller P3a, an event-related potential (ERP) component presumably reflecting early attentional processes. In contrast, they did not find group differences in the mismatch negativity (MMN), an ERP component associated with auditory memory processes. They concluded that early cognitive processes are facilitated in AP during passive listening and are more important for AP than the preceding sensory processes. In our direct replication study on a larger sample of musicians with (n = 54, 27 females, 27 males) and without (n = 50, 24 females, 26 males) AP, we successfully replicated the non-significant effects of AP on the MMN. However, we could not replicate the significant effects for the P3a. Additional Bayes factor analyses revealed moderate to strong evidence (Bayes factor > 3) for the null hypothesis for both MMN and P3a. Therefore, the results of this replication study do not support the postulated importance of cognitive facilitation in AP during passive tone listening.
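Evidence for the null of the kind quantified here is expressed as a Bayes factor BF01. As a rough illustration, and not the default-prior Bayes factors computed in the study, the sketch below uses the BIC approximation BF01 ~ exp((BIC_alt - BIC_null)/2) for a two-group mean comparison under a Gaussian likelihood.

```python
import numpy as np

def bf01_bic(group_a, group_b):
    """Approximate Bayes factor in favor of the null (equal group means)
    via BIC: BF01 ~ exp((BIC_alt - BIC_null) / 2). Uses concentrated
    Gaussian log-likelihoods; illustrative shortcut only."""
    x = np.concatenate([group_a, group_b])
    n = x.size
    # Null model: one common mean (parameters: mean, variance)
    rss0 = np.sum((x - x.mean()) ** 2)
    # Alternative: separate group means (parameters: 2 means, variance)
    rss1 = np.sum((group_a - group_a.mean()) ** 2) + \
           np.sum((group_b - group_b.mean()) ** 2)
    bic0 = n * np.log(rss0 / n) + 2 * np.log(n)
    bic1 = n * np.log(rss1 / n) + 3 * np.log(n)
    return float(np.exp((bic1 - bic0) / 2.0))
```

BF01 above 3 is conventionally read as moderate evidence for the null, the criterion cited in the abstract; identical groups yield BF01 well above 1, while a large mean difference drives it toward 0.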
Affiliation(s)
- Marielle Greber
- Division Neuropsychology, Department of Psychology, University of Zurich, CH-8050 Zurich, Switzerland
- Lars Rogenmoser
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20007
- Stefan Elmer
- Division Neuropsychology, Department of Psychology, University of Zurich, CH-8050 Zurich, Switzerland
- Lutz Jäncke
- Division Neuropsychology, Department of Psychology, University of Zurich, CH-8050 Zurich, Switzerland
- University Research Priority Program (URPP), Dynamics of Healthy Aging, University of Zurich, CH-8050 Zurich, Switzerland
- Department of Special Education, King Abdulaziz University, Jeddah 21589, Kingdom of Saudi Arabia
|
39
|
Mondelli MFCG, José IDS, José MR, Lopes NBF. Elaboration of an instrument to evaluate the recognition of Brazilian melodies in children. Braz J Otorhinolaryngol 2018; 85:690-697. [PMID: 30017874 PMCID: PMC9443065 DOI: 10.1016/j.bjorl.2018.05.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2018] [Revised: 05/09/2018] [Accepted: 05/28/2018] [Indexed: 10/28/2022] Open
Abstract
INTRODUCTION There is evidence for the importance of evaluating musical perception through objective and subjective instruments; in Brazil, such instruments are scarce. OBJECTIVE To develop an instrument to evaluate the recognition of traditional Brazilian melodies and to investigate the performance of children with typical hearing. METHODS The study was carried out after approval by the research ethics committee (1.198.607). The instrument was developed as web-based software using PHP 5.5.12, JavaScript, CSS, and HTML5, with a MySQL 5.6.17 database on an Apache 2.4.9 server. Fifteen melodies of Brazilian folk songs were recorded in a synthesized piano timbre, each lasting 12 s, with 4 s intervals between melodies. A total of 155 school-aged children of both sexes, aged 8-11 years and with typical hearing, participated in the study. The test was performed in a silent room; sound stimuli were amplified by a loudspeaker at 65 dB HL, positioned at 0° azimuth one meter from the participant, and children tapped on a notebook screen the title and illustration of the melody they recognized. Responses were recorded in the database. RESULTS The instrument, titled "Evaluation of recognition of traditional melodies in children", can be run on various devices (computers, notebooks, tablets, mobile phones) and operating systems (Windows, Macintosh, Android, Linux). Access: http://192.185.216.17/ivan/home/login.php by login and password. The most easily recognized melody was "Cai, cai balão" (89%) and the least recognized was "Capelinha de melão" (25.2%). The average time to complete the test was 3 min 15 s. CONCLUSION The development and application of the software proved effective for the studied population. This instrument may contribute to improved protocols for evaluating musical perception in children who use hearing aids and/or cochlear implants.
Affiliation(s)
- Ivan Dos Santos José
- Universidade de São Paulo (USP), Faculdade de Odontologia de Bauru, Programa de Pós-Graduação em Fonoaudiologia, Bauru, SP, Brazil
- Maria Renata José
- Universidade de São Paulo (USP), Faculdade de Odontologia de Bauru, Programa de Pós-Graduação em Fonoaudiologia, Bauru, SP, Brazil
|
40
|
Temporal Fine Structure Processing, Pitch, and Speech Perception in Adult Cochlear Implant Recipients. Ear Hear 2018; 39:679-686. [DOI: 10.1097/aud.0000000000000525] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
41
|
Multifractal analysis reveals music-like dynamic structure in songbird rhythms. Sci Rep 2018; 8:4570. [PMID: 29545558 PMCID: PMC5854712 DOI: 10.1038/s41598-018-22933-2] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2017] [Accepted: 03/01/2018] [Indexed: 01/01/2023] Open
Abstract
Music is thought to engage its listeners by driving feelings of surprise, tension, and relief through a dynamic mixture of predictable and unpredictable patterns, a property summarized here as “expressiveness”. Birdsong shares with music the goal of attracting its listeners’ attention and might use similar strategies to achieve this. Here we tested a thrush nightingale’s (Luscinia luscinia) rhythm, as represented by the song amplitude envelope (containing information on note timing, duration, and intensity), for evidence of expressiveness. We used multifractal analysis, which is designed to detect dynamic fluctuations in a signal between predictable and unpredictable states on multiple timescales (e.g., notes, subphrases, songs). Results show that the rhythm is strongly multifractal, indicating fluctuations between predictable and unpredictable patterns. Moreover, comparing original songs with re-synthesized songs that lack all subtle deviations from the “standard” note envelopes, we find that deviations in note intensity and duration contributed significantly to multifractality. This suggests that birdsong is made more dynamic by subtle note timing patterns, often similar to musical operations like accelerando or crescendo. While different sources of these dynamics are conceivable, this study shows that multi-timescale rhythm fluctuations can be detected in birdsong, paving the way to studying the mechanisms and function behind such patterns.
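The multifractal analysis described above can be sketched in outline. The fragment below is a simplified first-order MFDFA (integrated profile, linear detrending per window, q-th order fluctuation function, slope h(q)); the study's actual pipeline, scale range, q-range, and envelope-extraction step all differ, so treat this purely as an illustration of the method's skeleton.

```python
import math
import random

def _linear_detrend_sse(seg):
    """Least-squares fit seg ~ a + b*t; return the sum of squared residuals."""
    n = len(seg)
    t_mean = (n - 1) / 2.0
    y_mean = sum(seg) / n
    stt = sum((t - t_mean) ** 2 for t in range(n))
    sty = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(seg))
    b = sty / stt
    a = y_mean - b * t_mean
    return sum((y - (a + b * t)) ** 2 for t, y in enumerate(seg))

def mfdfa_h(signal, q, scales):
    """Generalized Hurst exponent h(q) via first-order MFDFA (assumes q != 0)."""
    n = len(signal)
    mean = sum(signal) / n
    # Step 1: integrated profile of the mean-removed signal
    profile, run = [], 0.0
    for x in signal:
        run += x - mean
        profile.append(run)
    log_s, log_f = [], []
    for s in scales:
        m = n // s
        # Step 2: variance of linearly detrended fluctuations in each window
        var = [_linear_detrend_sse(profile[v * s:(v + 1) * s]) / s for v in range(m)]
        # Step 3: q-th order fluctuation function F_q(s)
        fq = (sum(v ** (q / 2.0) for v in var) / m) ** (1.0 / q)
        log_s.append(math.log(s))
        log_f.append(math.log(fq))
    # Step 4: h(q) is the slope of log F_q(s) against log s
    k = len(scales)
    ls_mean = sum(log_s) / k
    lf_mean = sum(log_f) / k
    num = sum((a - ls_mean) * (b - lf_mean) for a, b in zip(log_s, log_f))
    den = sum((a - ls_mean) ** 2 for a in log_s)
    return num / den
```

A monofractal signal yields a roughly constant h(q); multifractality shows up as h(q) decreasing with q, and the width of that spread is the kind of statistic the comparison between original and re-synthesized songs rests on.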
|
42
|
Disbergen NR, Valente G, Formisano E, Zatorre RJ. Assessing Top-Down and Bottom-Up Contributions to Auditory Stream Segregation and Integration With Polyphonic Music. Front Neurosci 2018; 12:121. [PMID: 29563861 PMCID: PMC5845899 DOI: 10.3389/fnins.2018.00121] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2017] [Accepted: 02/15/2018] [Indexed: 11/24/2022] Open
Abstract
Polyphonic music listening well exemplifies processes typically involved in daily auditory scene analysis situations, relying on an interactive interplay between bottom-up and top-down processes. Most studies investigating scene analysis have used elementary auditory scenes, however real-world scene analysis is far more complex. In particular, music, contrary to most other natural auditory scenes, can be perceived by either integrating or, under attentive control, segregating sound streams, often carried by different instruments. One of the prominent bottom-up cues contributing to multi-instrument music perception is their timbre difference. In this work, we introduce and validate a novel paradigm designed to investigate, within naturalistic musical auditory scenes, attentive modulation as well as its interaction with bottom-up processes. Two psychophysical experiments are described, employing custom-composed two-voice polyphonic music pieces within a framework implementing a behavioral performance metric to validate listener instructions requiring either integration or segregation of scene elements. In Experiment 1, the listeners' locus of attention was switched between individual instruments or the aggregate (i.e., both instruments together), via a task requiring the detection of temporal modulations (i.e., triplets) incorporated within or across instruments. Subjects responded post-stimulus whether triplets were present in the to-be-attended instrument(s). Experiment 2 introduced the bottom-up manipulation by adding a three-level morphing of instrument timbre distance to the attentional framework. The task was designed to be used within neuroimaging paradigms; Experiment 2 was additionally validated behaviorally in the functional Magnetic Resonance Imaging (fMRI) environment. Experiment 1 subjects (N = 29, non-musicians) completed the task at high levels of accuracy, showing no group differences between any experimental conditions. 
Nineteen listeners also participated in Experiment 2, which showed a main effect of instrument timbre distance, although timbre-distance contrasts within attention conditions did not reveal a timbre effect. Correlating overall scores with morph-distance effects, computed by subtracting the largest from the smallest timbre-distance scores, showed an influence of general task difficulty on the timbre-distance effect. Comparison of laboratory and fMRI data showed that scanner noise had no adverse effect on task performance. These experimental paradigms enable the study of both bottom-up and top-down contributions to auditory stream segregation and integration within psychophysical and neuroimaging experiments.
Affiliation(s)
- Niels R. Disbergen
- Department of Cognitive Neuroscience, Maastricht University, Maastricht, Netherlands
- Maastricht Brain Imaging Center (MBIC), Maastricht, Netherlands
- Giancarlo Valente
- Department of Cognitive Neuroscience, Maastricht University, Maastricht, Netherlands
- Maastricht Brain Imaging Center (MBIC), Maastricht, Netherlands
- Elia Formisano
- Department of Cognitive Neuroscience, Maastricht University, Maastricht, Netherlands
- Maastricht Brain Imaging Center (MBIC), Maastricht, Netherlands
- Robert J. Zatorre
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- International Laboratory for Brain Music and Sound Research (BRAMS), Montreal, QC, Canada
|
43
|
Jafari Z, Malayeri S. Subcortical encoding of speech cues in children with congenital blindness. Restor Neurol Neurosci 2018; 34:757-68. [PMID: 27589504 DOI: 10.3233/rnn-160639] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND Congenital visual deprivation drives neural plasticity in different brain areas and provides an outstanding opportunity to study the neuroplastic capabilities of the brain. OBJECTIVES The present study investigated the effect of congenital blindness on subcortical auditory processing using electrophysiological and behavioral assessments in children. METHODS A total of 47 children aged 8-12 years, comprising 22 congenitally blind (CB) children and 25 normal-sighted (NS) controls, were studied. All children were tested using an auditory brainstem response (ABR) test with both click and speech stimuli. Speech recognition and musical abilities were tested using standard tools. RESULTS Significant differences were observed between the two groups in speech-ABR wave latencies A, F, and O (p ≤ 0.043), wave F amplitude (p = 0.039), V-A slope (p = 0.026), and the three spectral magnitudes F0, F1, and HF (p ≤ 0.002). CB children outperformed their NS peers on all subtests and the total score of musical abilities (p ≤ 0.003), and scored significantly higher on the nonsense-syllable test in noise (p = 0.034). In CB children only, significant negative correlations were found between the total music score and both wave A (p = 0.039) and wave F (p = 0.029) latencies, and between the nonsense-syllable-in-noise score and wave A latency (p = 0.041). CONCLUSION Our results suggest that neuroplasticity resulting from congenital blindness can be measured subcortically and has a heightened effect on temporal, musical, and speech processing abilities. The findings are discussed in terms of models of plasticity and the influence of corticofugal modulation on the processing of complex auditory stimuli.
Affiliation(s)
- Zahra Jafari
- Rehabilitation Research Center (RRC), Iran University of Medical Sciences (IUMS), Tehran, Iran
- Department of Basic Sciences in Rehabilitation, School of Rehabilitation Sciences, Iran University of Medical Sciences (IUMS), Tehran, Iran
- Canadian Center for Behavioral Neuroscience (CCBN), University of Lethbridge, Lethbridge, Alberta, Canada
|
44
|
McPherson MJ, McDermott JH. Diversity in pitch perception revealed by task dependence. Nat Hum Behav 2018; 2:52-66. [PMID: 30221202 PMCID: PMC6136452 DOI: 10.1038/s41562-017-0261-8] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2017] [Accepted: 11/08/2017] [Indexed: 01/12/2023]
Abstract
Pitch conveys critical information in speech, music, and other natural sounds, and is conventionally defined as the perceptual correlate of a sound's fundamental frequency (F0). Although pitch is widely assumed to be subserved by a single F0 estimation process, real-world pitch tasks vary enormously, raising the possibility of underlying mechanistic diversity. To probe pitch mechanisms we conducted a battery of pitch-related music and speech tasks using conventional harmonic sounds and inharmonic sounds whose frequencies lack a common F0. Some pitch-related abilities - those relying on musical interval or voice recognition - were strongly impaired by inharmonicity, suggesting a reliance on F0. However, other tasks, including those dependent on pitch contours in speech and music, were unaffected by inharmonicity, suggesting a mechanism that tracks the frequency spectrum rather than the F0. The results suggest that pitch perception is mediated by several different mechanisms, only some of which conform to traditional notions of pitch.
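The harmonic/inharmonic manipulation at the heart of this study can be sketched as follows. A harmonic complex places partials at integer multiples of an F0, while jittering each partial destroys the common F0. The jitter scheme, sample rate, and parameter values below are illustrative assumptions, not the stimulus-generation procedure McPherson and McDermott actually used.

```python
import math
import random

SR = 16000  # sample rate in Hz; an arbitrary choice for this sketch

def complex_tone(f0, n_partials, dur, jitter=0.0, seed=0):
    """Sum of sinusoidal partials at (possibly jittered) multiples of f0.

    jitter=0.0 gives a harmonic tone whose partials share the F0;
    jitter>0 perturbs each partial by up to +/- jitter*f0, removing the
    common F0 (an illustrative scheme, not the paper's exact one).
    """
    rng = random.Random(seed)
    freqs = []
    for k in range(1, n_partials + 1):
        f = k * f0 + (rng.uniform(-jitter, jitter) * f0 if jitter else 0.0)
        freqs.append(f)
    n = int(SR * dur)
    # Equal-amplitude partials, normalized so the waveform stays within [-1, 1]
    samples = [sum(math.sin(2 * math.pi * f * t / SR) for f in freqs) / n_partials
               for t in range(n)]
    return freqs, samples

harm_freqs, harm = complex_tone(200.0, 10, 0.05)
inharm_freqs, inharm = complex_tone(200.0, 10, 0.05, jitter=0.3, seed=1)
```

Tasks that survive this manipulation (e.g., contour tracking) plausibly operate on the frequency spectrum itself, whereas tasks that collapse under it (e.g., interval judgments) plausibly depend on an estimated F0.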
Affiliation(s)
- Malinda J McPherson
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA, USA
- Josh H McDermott
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA, USA
|
45
|
Yamazaki H, Easwar V, Polonenko MJ, Jiwani S, Wong DDE, Papsin BC, Gordon KA. Cortical hemispheric asymmetries are present at young ages and further develop into adolescence. Hum Brain Mapp 2017; 39:941-954. [PMID: 29134751 DOI: 10.1002/hbm.23893] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2016] [Revised: 10/07/2017] [Accepted: 11/08/2017] [Indexed: 02/01/2023] Open
Abstract
Specialization of the auditory cortices for pure-tone listening may develop with age. In adults, the right hemisphere dominates when listening to pure tones and music; we thus hypothesized that (a) asymmetric function between the auditory cortices increases with age and (b) this development is specific to tonal rather than broadband/non-tonal stimuli. Cortical responses to tone-bursts and broadband click-trains were recorded by multichannel electroencephalography in young children (5.1 ± 0.8 years old) and adolescents (15.2 ± 1.7 years old) with normal hearing. Peak dipole moments, indicating activity strength in the right and left auditory cortices, were calculated using the Time Restricted, Artefact and Coherence source Suppression (TRACS) beamformer. In young children, left-ear click-trains and tone-bursts evoked dominant responses in the contralateral right cortex, and right-ear click-trains evoked dominant responses in the contralateral left cortex, whereas responses to right-ear tone-bursts were more bilateral. In adolescents, peak activity dominated in the right cortex in most conditions (tone-bursts from either ear and click-trains from the left ear); right-ear click stimulation evoked bilateral activity. Thus, right-hemispheric specialization for monaural tonal stimuli begins in children as young as 5 years of age and becomes more prominent by adolescence. These changes were marked by dipole moments in the right auditory cortex that remained consistent with age, in contrast to decreases in dipole activity in all other stimulus conditions. Together, the findings reveal increasingly asymmetric function of the two auditory cortices, potentially supporting greater cortical specialization with development into adolescence.
Affiliation(s)
- Hiroshi Yamazaki
- Archie's Cochlear Implant Laboratory, Department of Otolaryngology, The Hospital for Sick Children, Toronto, Ontario, Canada
- Vijayalakshmi Easwar
- Archie's Cochlear Implant Laboratory, Department of Otolaryngology, The Hospital for Sick Children, Toronto, Ontario, Canada
- Melissa Jane Polonenko
- Archie's Cochlear Implant Laboratory, Department of Otolaryngology, The Hospital for Sick Children, Toronto, Ontario, Canada
- Institute of Medical Sciences, University of Toronto, Toronto, Ontario, Canada
- Salima Jiwani
- Archie's Cochlear Implant Laboratory, Department of Otolaryngology, The Hospital for Sick Children, Toronto, Ontario, Canada
- Daniel D E Wong
- Archie's Cochlear Implant Laboratory, Department of Otolaryngology, The Hospital for Sick Children, Toronto, Ontario, Canada
- Blake Croll Papsin
- Archie's Cochlear Implant Laboratory, Department of Otolaryngology, The Hospital for Sick Children, Toronto, Ontario, Canada
- Department of Otolaryngology, University of Toronto, Toronto, Ontario, Canada
- Karen Ann Gordon
- Archie's Cochlear Implant Laboratory, Department of Otolaryngology, The Hospital for Sick Children, Toronto, Ontario, Canada
- Department of Otolaryngology, University of Toronto, Toronto, Ontario, Canada
- Institute of Medical Sciences, University of Toronto, Toronto, Ontario, Canada
|
46
|
SymCHM—An Unsupervised Approach for Pattern Discovery in Symbolic Music with a Compositional Hierarchical Model. APPLIED SCIENCES-BASEL 2017. [DOI: 10.3390/app7111135] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
47
|
Abstract
Auditory perception is our main gateway to communication with others via speech and music, and it also plays an important role in alerting and orienting us to new events. This review provides an overview of selected topics pertaining to the perception and neural coding of sound, starting with the first stage of filtering in the cochlea and its profound impact on perception. The next topic, pitch, has been debated for millennia, but recent technical and theoretical developments continue to provide us with new insights. Cochlear filtering and pitch both play key roles in our ability to parse the auditory scene, enabling us to attend to one auditory object or stream while ignoring others. An improved understanding of the basic mechanisms of auditory perception will aid us in the quest to tackle the increasingly important problem of hearing loss in our aging population.
Affiliation(s)
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455
|
48
|
Meha-Bettison K, Sharma M, Ibrahim RK, Mandikal Vasuki PR. Enhanced speech perception in noise and cortical auditory evoked potentials in professional musicians. Int J Audiol 2017; 57:40-52. [DOI: 10.1080/14992027.2017.1380850] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
- Kiriana Meha-Bettison
- Australian Hearing, The Australian Hearing Hub, Macquarie University, Sydney, Australia
- Mridula Sharma
- Department of Linguistics, The Australian Hearing Hub, Macquarie University, Sydney, Australia
- The HEARing CRC, Audiology, Hearing and Speech Sciences, The University of Melbourne, Melbourne, Australia
- Ronny K. Ibrahim
- Department of Linguistics, The Australian Hearing Hub, Macquarie University, Sydney, Australia
- The HEARing CRC, Audiology, Hearing and Speech Sciences, The University of Melbourne, Melbourne, Australia
- Pragati Rao Mandikal Vasuki
- Department of Linguistics, The Australian Hearing Hub, Macquarie University, Sydney, Australia
- Audiology Research, Starkey Hearing Research Centre, Berkeley, USA
|
49
|
Casey MA. Music of the 7Ts: Predicting and Decoding Multivoxel fMRI Responses with Acoustic, Schematic, and Categorical Music Features. Front Psychol 2017; 8:1179. [PMID: 28769835 PMCID: PMC5509941 DOI: 10.3389/fpsyg.2017.01179] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2016] [Accepted: 06/28/2017] [Indexed: 11/26/2022] Open
Abstract
Underlying the experience of listening to music are parallel streams of auditory, categorical, and schematic qualia, whose representations and cortical organization remain largely unresolved. We collected high-field (7T) fMRI data in a music listening task and analyzed the data using multivariate decoding and stimulus-encoding models. Twenty subjects participated in the experiment, which measured BOLD responses evoked by naturalistic listening to twenty-five music clips from five genres. Our first analysis applied machine classification to the multivoxel patterns evoked in temporal cortex. Results yielded above-chance levels for both stimulus identification and genre classification, cross-validated by holding out data from several of the stimuli during model training and then testing decoding performance on the held-out data. Genre-model misclassifications were significantly correlated with those in a corresponding behavioral music categorization task, supporting the hypothesis that geometric properties of multivoxel pattern spaces underlie observed musical behavior. A second analysis employed a spherical-searchlight regression analysis that predicted multivoxel pattern responses to music features representing melody and harmony across a large area of cortex. The resulting prediction-accuracy maps yielded significant clusters in the temporal, frontal, parietal, and occipital lobes, as well as in the parahippocampal gyrus and the cerebellum. These maps provide evidence in support of our hypothesis that geometric properties of music cognition are neurally encoded as multivoxel representational spaces. The maps also reveal a cortical topography that differentially encodes categorical and absolute-pitch information in distributed and overlapping networks, with smaller specialized regions that encode tonal music information in relative-pitch representations.
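The decoding logic described above (train on all patterns except the held-out ones, then test on what was held out) can be illustrated with a toy stand-in. The sketch below uses a nearest-centroid decoder with leave-one-out cross-validation on simulated "voxel" patterns; the study itself used different classifiers, real fMRI responses, and multi-stimulus hold-outs, so the class names and dimensions here are purely hypothetical.

```python
import random

def nearest_centroid_loocv(patterns, labels):
    """Leave-one-out accuracy of a nearest-centroid decoder (a minimal
    stand-in for MVPA classifiers, not the study's actual model)."""
    correct = 0
    for i, (x, y) in enumerate(zip(patterns, labels)):
        centroids, counts = {}, {}
        for j, (xj, yj) in enumerate(zip(patterns, labels)):
            if j == i:
                continue  # hold out the test pattern
            sums = centroids.setdefault(yj, [0.0] * len(xj))
            for d, v in enumerate(xj):
                sums[d] += v
            counts[yj] = counts.get(yj, 0) + 1
        # Predict the class whose mean pattern is closest in squared distance
        pred = min(centroids,
                   key=lambda c: sum((xv - cv / counts[c]) ** 2
                                     for xv, cv in zip(x, centroids[c])))
        correct += (pred == y)
    return correct / len(labels)

# Synthetic "genre" patterns: two classes with shifted means across 50 voxels
rng = random.Random(0)
pats = [[rng.gauss(0.0, 1.0) for _ in range(50)] for _ in range(20)]
pats += [[rng.gauss(1.0, 1.0) for _ in range(50)] for _ in range(20)]
labs = ["jazz"] * 20 + ["metal"] * 20
acc = nearest_centroid_loocv(pats, labs)
```

Above-chance held-out accuracy is the criterion the paper relies on: if the decoder generalizes to patterns it never saw, the multivoxel geometry carries stimulus or genre information.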
Affiliation(s)
- Michael A Casey
- Bregman Music and Audio Lab, Computer Science and Music Departments, Dartmouth College, Hanover, NH, United States
|
50
|
Veltri T, Taroyan N, Overton PG. Nicotine enhances an auditory Event-Related Potential component which is inversely related to habituation. J Psychopharmacol 2017; 31:861-872. [PMID: 28675114 DOI: 10.1177/0269881117695860] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Nicotine is a psychoactive substance that is commonly consumed in the context of music. However, the reason why music and nicotine are co-consumed is uncertain. One possibility is that nicotine affects cognitive processes relevant to aspects of music appreciation in a beneficial way. Here we investigated this possibility using Event-Related Potentials. Participants underwent a simple decision-making task (to maintain attentional focus), responses to which were signalled by auditory stimuli. Unlike previous research looking at the effects of nicotine on auditory processing, we used complex tones that varied in pitch, a fundamental element of music. In addition, unlike most other studies, we tested non-smoking subjects to avoid withdrawal-related complications. We found that nicotine (4.0 mg, administered as gum) increased P2 amplitude in the frontal region. Since a decrease in P2 amplitude and latency is related to habituation processes, and an enhanced ability to disengage from irrelevant stimuli, our findings suggest that nicotine may cause a reduction in habituation, resulting in non-smokers being less able to adapt to repeated stimuli. A corollary of that decrease in adaptation may be that nicotine extends the temporal window during which a listener is able and willing to engage with a piece of music.
Affiliation(s)
- Theresa Veltri
- Department of Psychology, University of Sheffield, Sheffield, UK
- Naira Taroyan
- Department of Psychology, Sociology and Politics, Sheffield Hallam University, Sheffield, UK
- Paul G Overton
- Department of Psychology, University of Sheffield, Sheffield, UK
|