1
Luchesi LC, Cavalcanti JC, Lucci TK, David VF, Otta E, Monticelli PF. Zygosity Effects on Human Voice: Fundamental Frequency Analysis of Brazilian Twins' Speech. Twin Res Hum Genet 2024:1-8. [PMID: 39355961] [DOI: 10.1017/thg.2024.33]
Abstract
Voice production can be influenced by interindividual variation related to genetic, physiological, behavioral, and environmental factors. Here we examined the effect of zygosity on statistical descriptors of speaking fundamental frequency (F0). Our aims were: (1) to determine whether the genetic similarity between monozygotic (MZ) and dizygotic (DZ) twins affects F0 characteristics, and (2) to quantify the contribution of genetic factors to these characteristics. The study involved 79 same-sex twin pairs of Brazilian Portuguese speakers, comprising 65 MZ and 14 DZ pairs, aged 18 to 66 years (31.7 ± 11.6 years), with 21 male and 58 female pairs. Participants were recorded while uttering a greeting phrase and the Brazilian Portuguese version of the 'Happy Birthday to You' song. Speech segments were analyzed using the free software Praat, and F0 measures were automatically extracted on both the Hertz and semitone scales. Statistical descriptors of F0, including centrality, dispersion, and extreme values, were examined, and the ACE model (total genetic effects, A; shared environmental influences, C; and nonshared environmental influences, E) was employed to estimate the additive effects of monozygosity. As anticipated, we observed a zygosity effect on several F0 parameters, with greater similarity between MZ twins than between DZ twins. We discuss the genetic influences on F0 parameters and the absence of a monozygosity effect in two of them. Additionally, we briefly address potential biases associated with the measurement scale selected for statistical modeling. Finally, we explore the influence of genetic factors on F0 patterns, as well as environmental, life-history, and linguistic factors, particularly concerning F0 variation in speech.
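The Hertz and semitone scales mentioned in the abstract are related by a logarithmic transform, which is why the choice of scale can bias statistical modeling. A minimal sketch of the conversion and of typical F0 descriptors (the 100 Hz reference and the descriptor set are illustrative assumptions, not the study's exact configuration):

```python
import math
import statistics

def hz_to_semitones(f0_hz, ref_hz=100.0):
    """Convert a fundamental frequency in Hz to semitones re a reference.

    A 100 Hz reference (Praat's default for its semitone scale) is
    assumed here for illustration only.
    """
    return 12.0 * math.log2(f0_hz / ref_hz)

def f0_descriptors(f0_values_hz):
    """Centrality, dispersion, and extreme-value descriptors of an F0 track."""
    return {
        "mean": statistics.mean(f0_values_hz),
        "median": statistics.median(f0_values_hz),
        "sd": statistics.stdev(f0_values_hz),
        "min": min(f0_values_hz),
        "max": max(f0_values_hz),
    }
```

Because the transform is logarithmic, a fixed Hz difference maps to different semitone differences at low and high F0, so descriptors such as the standard deviation can rank speakers differently depending on the scale.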
Affiliation(s)
- Lilian C Luchesi
- Ethology and Bioacoustic Laboratory, Department of Psychology, Faculdade de Filosofia, Ciências e Letras de Ribeirão Preto, Universidade de São Paulo, Ribeirão Preto, São Paulo, Brazil
- Psychoethology and Human Ethology Laboratory, Department of Experimental Psychology, Instituto de Psicologia, Universidade de São Paulo, São Paulo
- Julio C Cavalcanti
- Integrated Acoustic Analysis and Cognition Laboratory, Pontifical Catholic University of São Paulo, Rua Ministro de Godoy, São Paulo, Brazil
- Institute of Language Studies, Department of Linguistics, University of Campinas, Campinas, São Paulo, Brazil
- Laboratory of Phonetics, Department of Linguistics, Stockholm University, Stockholm, Sweden
- Tania K Lucci
- Psychoethology and Human Ethology Laboratory, Department of Experimental Psychology, Instituto de Psicologia, Universidade de São Paulo, São Paulo
- Vinicius F David
- Psychoethology and Human Ethology Laboratory, Department of Experimental Psychology, Instituto de Psicologia, Universidade de São Paulo, São Paulo
- Emma Otta
- Psychoethology and Human Ethology Laboratory, Department of Experimental Psychology, Instituto de Psicologia, Universidade de São Paulo, São Paulo
- Patricia F Monticelli
- Ethology and Bioacoustic Laboratory, Department of Psychology, Faculdade de Filosofia, Ciências e Letras de Ribeirão Preto, Universidade de São Paulo, Ribeirão Preto, São Paulo, Brazil
2
McBride JM, Passmore S, Tlusty T. Convergent evolution in a large cross-cultural database of musical scales. PLoS One 2023; 18:e0284851. [PMID: 38091315] [PMCID: PMC10718441] [DOI: 10.1371/journal.pone.0284851]
Abstract
Scales, sets of discrete pitches that form the basis of melodies, are thought to be one of the most universal hallmarks of music. But we know relatively little about cross-cultural diversity of scales or how they evolved. To remedy this, we assemble a cross-cultural database (Database of Musical Scales: DaMuSc) of scale data, collected over the past century by various ethnomusicologists. Statistical analyses of the data highlight that certain intervals (e.g., the octave, fifth, second) are used frequently across cultures. Despite some diversity among scales, it is the similarities across societies which are most striking: step intervals are restricted to 100-400 cents; most scales are found close to equidistant 5- and 7-note scales. We discuss potential mechanisms of variation and selection in the evolution of scales, and how the assembled data may be used to examine the root causes of convergent evolution.
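The cents unit used for step intervals, and the equidistant 5- and 7-note reference scales, can be sketched as follows (function names are mine; DaMuSc's actual analysis pipeline is more involved):

```python
import math

def cents(f1_hz, f2_hz):
    """Interval between two frequencies in cents (1200 cents = 1 octave)."""
    return 1200.0 * math.log2(f2_hz / f1_hz)

def step_sizes(scale_cents):
    """Step intervals of a scale given cumulative note positions in cents."""
    return [b - a for a, b in zip(scale_cents, scale_cents[1:])]

def equidistant_scale(n_notes):
    """An n-note equal division of the octave, in cents (octave included)."""
    return [i * 1200.0 / n_notes for i in range(n_notes + 1)]
```

A scale's similarity to the 5- or 7-note equidistant reference can then be quantified by comparing its step sizes against the constant 240- or ~171-cent steps those references produce.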
Affiliation(s)
- John M. McBride
- Center for Soft and Living Matter, Institute for Basic Science, Ulsan, South Korea
- Sam Passmore
- Faculty of Environment and Information Studies, Keio University, Fujisawa, Japan
- Evolution of Cultural Diversity Initiative, College of Asia and the Pacific, Australian National University, Canberra, Australia
- Tsvi Tlusty
- Center for Soft and Living Matter, Institute for Basic Science, Ulsan, South Korea
- Departments of Physics and Chemistry, Ulsan National Institute of Science and Technology, Ulsan, South Korea
3
McPherson MJ, McDermott JH. Relative pitch representations and invariance to timbre. Cognition 2023; 232:105327. [PMID: 36495710] [PMCID: PMC10016107] [DOI: 10.1016/j.cognition.2022.105327]
Abstract
Information in speech and music is often conveyed through changes in fundamental frequency (f0), perceived by humans as "relative pitch". Relative pitch judgments are complicated by two facts. First, sounds can simultaneously vary in timbre due to filtering imposed by a vocal tract or instrument body. Second, relative pitch can be extracted in two ways: by measuring changes in constituent frequency components from one sound to another, or by estimating the f0 of each sound and comparing the estimates. We examined the effects of timbral differences on relative pitch judgments, and whether any invariance to timbre depends on whether judgments are based on constituent frequencies or their f0. Listeners performed up/down and interval discrimination tasks with pairs of spoken vowels, instrument notes, or synthetic tones, synthesized to be either harmonic or inharmonic. Inharmonic sounds lack a well-defined f0, such that relative pitch must be extracted from changes in individual frequencies. Pitch judgments were less accurate when vowels/instruments were different compared to when they were the same, and were biased by the associated timbre differences. However, this bias was similar for harmonic and inharmonic sounds, and was observed even in conditions where judgments of harmonic sounds were based on f0 representations. Relative pitch judgments are thus not invariant to timbre, even when timbral variation is naturalistic, and when such judgments are based on representations of f0.
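Harmonic versus inharmonic stimuli of the kind described above are commonly built by additive synthesis, with inharmonicity introduced by jittering each component away from an exact integer multiple of f0. A hedged sketch (the jitter scheme and parameters are illustrative assumptions, not the authors' exact synthesis procedure):

```python
import math
import random

def complex_tone(f0_hz, n_harmonics=10, dur_s=0.5, sr=16000, jitter=0.0, seed=0):
    """Additive synthesis of a complex tone.

    jitter=0 gives a harmonic tone (components at exact integer multiples
    of f0); jitter>0 perturbs each component by up to +/- jitter * f0,
    yielding an inharmonic tone with no well-defined f0.
    Returns (samples, component_frequencies).
    """
    rng = random.Random(seed)
    freqs = []
    for k in range(1, n_harmonics + 1):
        shift = rng.uniform(-jitter, jitter) * f0_hz if jitter else 0.0
        freqs.append(k * f0_hz + shift)
    n = int(dur_s * sr)
    samples = [sum(math.sin(2 * math.pi * f * t / sr) for f in freqs) / n_harmonics
               for t in range(n)]
    return samples, freqs
```

With jittered components, relative pitch can no longer be read off a single f0 estimate and must instead be tracked through shifts of the individual frequencies, which is the contrast the experiments exploit.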
Affiliation(s)
- Malinda J McPherson
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States of America; Program in Speech and Hearing Biosciences and Technology, Harvard University, Boston, MA 02115, United States of America; McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States of America.
- Josh H McDermott
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States of America; Program in Speech and Hearing Biosciences and Technology, Harvard University, Boston, MA 02115, United States of America; McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States of America; Center for Brains Minds and Machines, MIT, Cambridge, MA 02139, United States of America
4
Bissmeyer SRS, Ortiz JR, Gan H, Goldsworthy RL. Computer-based musical interval training program for cochlear implant users and listeners with no known hearing loss. Front Neurosci 2022; 16:903924. [PMID: 35968373] [PMCID: PMC9363605] [DOI: 10.3389/fnins.2022.903924]
Abstract
A musical interval is the difference in pitch between two sounds. The way that musical intervals are used in melodies relative to the tonal center of a key can strongly affect the emotion conveyed by the melody. The present study examines musical interval identification in people with no known hearing loss and in cochlear implant users. Pitch resolution varies widely among cochlear implant users with average resolution an order of magnitude worse than in normal hearing. The present study considers the effect of training on musical interval identification and tests for correlations between low-level psychophysics and higher-level musical abilities. The overarching hypothesis is that cochlear implant users are limited in their ability to identify musical intervals both by low-level access to frequency cues for pitch as well as higher-level mapping of the novel encoding of pitch that implants provide. Participants completed a 2-week, online interval identification training. The benchmark tests considered before and after interval identification training were pure tone detection thresholds, pure tone frequency discrimination, fundamental frequency discrimination, tonal and rhythm comparisons, and interval identification. The results indicate strong correlations between measures of pitch resolution with interval identification; however, only a small effect of training on interval identification was observed for the cochlear implant users. Discussion focuses on improving access to pitch cues for cochlear implant users and on improving auditory training for musical intervals.
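A musical interval of n semitones corresponds to a frequency ratio of 2^(n/12), which is the arithmetic underlying interval identification. A small illustration using standard Western interval names (the naming table and function are mine; the study's actual task interface is not described here):

```python
import math

# Standard equal-tempered interval names for 0-12 semitones.
INTERVAL_NAMES = {
    0: "unison", 1: "minor second", 2: "major second", 3: "minor third",
    4: "major third", 5: "perfect fourth", 6: "tritone", 7: "perfect fifth",
    8: "minor sixth", 9: "major sixth", 10: "minor seventh",
    11: "major seventh", 12: "octave",
}

def interval_name(f1_hz, f2_hz):
    """Name of the nearest equal-tempered interval between two pitches,
    ignoring direction; intervals wider than an octave stay in semitones."""
    semitones = abs(round(12 * math.log2(f2_hz / f1_hz)))
    return INTERVAL_NAMES.get(semitones, f"{semitones} semitones")
```

For example, a 3:2 frequency ratio rounds to 7 semitones, a perfect fifth; a listener whose frequency discrimination is coarser than a semitone cannot reliably separate adjacent interval categories.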
Affiliation(s)
- Susan Rebekah Subrahmanyam Bissmeyer
- Caruso Department of Otolaryngology, Auditory Research Center, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States
- *Correspondence: Susan Rebekah Subrahmanyam Bissmeyer
- Jacqueline Rose Ortiz
- Caruso Department of Otolaryngology, Auditory Research Center, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Helena Gan
- Caruso Department of Otolaryngology, Auditory Research Center, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Raymond Lee Goldsworthy
- Caruso Department of Otolaryngology, Auditory Research Center, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
5
Abstract
Hearing in noise is a core problem in audition, and a challenge for hearing-impaired listeners, yet the underlying mechanisms are poorly understood. We explored whether harmonic frequency relations, a signature property of many communication sounds, aid hearing in noise for normal hearing listeners. We measured detection thresholds in noise for tones and speech synthesized to have harmonic or inharmonic spectra. Harmonic signals were consistently easier to detect than otherwise identical inharmonic signals. Harmonicity also improved discrimination of sounds in noise. The largest benefits were observed for two-note up-down "pitch" discrimination and melodic contour discrimination, both of which could be performed equally well with harmonic and inharmonic tones in quiet, but which showed large harmonic advantages in noise. The results show that harmonicity facilitates hearing in noise, plausibly by providing a noise-robust pitch cue that aids detection and discrimination.
6
Nguyen DD, Chacon AM, Novakovic D, Hodges NJ, Carding PN, Madill C. Pitch Discrimination Testing in Patients with a Voice Disorder. J Clin Med 2022; 11:584. [PMID: 35160036] [PMCID: PMC8836960] [DOI: 10.3390/jcm11030584]
Abstract
Auditory perception plays an important role in voice control. Pitch discrimination (PD) is a key index of auditory perception and is influenced by a variety of factors. Little is known about the potential effects of voice disorders on PD and whether PD testing can differentiate people with and without a voice disorder. We therefore evaluated PD in a voice-disordered group (n = 71) and a non-voice-disordered control group (n = 80). The voice disorders included muscle tension dysphonia and neurological voice disorders, and all participants underwent PD testing as part of a comprehensive voice assessment. The percentage of accurate responses and the PD threshold were compared across groups. PD percentage accuracy was significantly lower in the voice-disordered group than in the control group, irrespective of musical background. Participants with voice disorders also required a larger pitch difference (a higher PD threshold) to discriminate pitches correctly. The mean PD threshold significantly discriminated the voice-disordered group from the control group. These results have implications for voice control and the pathogenesis of voice disorders. They support the inclusion of PD testing in comprehensive voice assessment and throughout the treatment process for patients with voice disorders.
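Pitch discrimination thresholds of the kind reported here are typically estimated with an adaptive procedure. A generic 2-down/1-up staircase sketch, not the authors' exact protocol (starting value, step size, and stopping rule are illustrative assumptions):

```python
def staircase_threshold(respond, start=6.0, step=1.0, n_reversals=8, floor=0.1):
    """2-down/1-up adaptive staircase estimating a discrimination threshold.

    `respond(delta)` should return True for a correct trial at pitch
    difference `delta` (e.g., in semitones). The rule converges near the
    70.7% correct point; the threshold is the mean of the reversal values.
    """
    delta, correct_streak, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(delta):
            correct_streak += 1
            if correct_streak == 2:          # two correct in a row -> harder
                correct_streak = 0
                if direction == +1:          # descending after ascending
                    reversals.append(delta)  # record the reversal
                direction = -1
                delta = max(floor, delta - step)
        else:                                # one error -> easier
            correct_streak = 0
            if direction == -1:
                reversals.append(delta)
            direction = +1
            delta += step
    return sum(reversals) / len(reversals)
```

With a deterministic listener who is correct whenever the difference is at least 2 semitones, the track oscillates around that boundary and the averaged reversals land between 1 and 2.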
Affiliation(s)
- Duy Duong Nguyen
- Voice Research Laboratory, Discipline of Speech Pathology, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW 2006, Australia
- National Hospital of Otorhinolaryngology, Hanoi 11519, Vietnam
- Antonia M. Chacon
- Voice Research Laboratory, Discipline of Speech Pathology, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW 2006, Australia
- Daniel Novakovic
- Voice Research Laboratory, Discipline of Speech Pathology, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW 2006, Australia
- The Canterbury Hospital, Campsie, NSW 2194, Australia
- Nicola J. Hodges
- School of Kinesiology, University of British Columbia, Vancouver, BC V6T 1Z1, Canada
- Paul N. Carding
- Faculty of Health and Life Sciences, Oxford Institute of Nursing, Midwifery and Allied Health Research, Oxford OX3 0BP, UK
- Catherine Madill
- Voice Research Laboratory, Discipline of Speech Pathology, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW 2006, Australia
7
Boebinger D, Norman-Haignere SV, McDermott JH, Kanwisher N. Music-selective neural populations arise without musical training. J Neurophysiol 2021; 125:2237-2263. [PMID: 33596723] [PMCID: PMC8285655] [DOI: 10.1152/jn.00588.2020]
Abstract
Recent work has shown that human auditory cortex contains neural populations anterior and posterior to primary auditory cortex that respond selectively to music. However, it is unknown how this selectivity for music arises. To test whether musical training is necessary, we measured fMRI responses to 192 natural sounds in 10 people with almost no musical training. When voxel responses were decomposed into underlying components, this group exhibited a music-selective component that was very similar in response profile and anatomical distribution to that previously seen in individuals with moderate musical training. We also found that musical genres that were less familiar to our participants (e.g., Balinese gamelan) produced strong responses within the music component, as did drum clips with rhythm but little melody, suggesting that these neural populations are broadly responsive to music as a whole. Our findings demonstrate that the signature properties of neural music selectivity do not require musical training to develop, showing that the music-selective neural populations are a fundamental and widespread property of the human brain.

NEW & NOTEWORTHY: We show that music-selective neural populations are clearly present in people without musical training, demonstrating that they are a fundamental and widespread property of the human brain. Additionally, we show music-selective neural populations respond strongly to music from unfamiliar genres as well as music with rhythm but little pitch information, suggesting that they are broadly responsive to music as a whole.
Affiliation(s)
- Dana Boebinger
- Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, Massachusetts
- Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Sam V Norman-Haignere
- Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, PSL Research University, CNRS, Paris, France
- Zuckerman Institute for Brain Research, Columbia University, New York, New York
- Josh H McDermott
- Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, Massachusetts
- Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Nancy Kanwisher
- Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, Massachusetts
8
Demany L, Monteiro G, Semal C, Shamma S, Carlyon RP. The perception of octave pitch affinity and harmonic fusion have a common origin. Hear Res 2021; 404:108213. [PMID: 33662686] [PMCID: PMC7614450] [DOI: 10.1016/j.heares.2021.108213]
Abstract
Musicians say that the pitches of tones with a frequency ratio of 2:1 (one octave) have a distinctive affinity, even if the tones do not have common spectral components. It has been suggested, however, that this affinity judgment has no biological basis and originates instead from an acculturation process ‒ the learning of musical rules unrelated to auditory physiology. We measured, in young amateur musicians, the perceptual detectability of octave mistunings for tones presented alternately (melodic condition) or simultaneously (harmonic condition). In the melodic condition, mistuning was detectable only by means of explicit pitch comparisons. In the harmonic condition, listeners could use a different and more efficient perceptual cue: in the absence of mistuning, the tones fused into a single sound percept; mistunings decreased fusion. Performance was globally better in the harmonic condition, in line with the hypothesis that listeners used a fusion cue in this condition; this hypothesis was also supported by results showing that an illusory simultaneity of the tones was much less advantageous than a real simultaneity. In the two conditions, mistuning detection was generally better for octave compressions than for octave stretchings. This asymmetry varied across listeners, but crucially the listener-specific asymmetries observed in the two conditions were highly correlated. Thus, the perception of the melodic octave appeared to be closely linked to the phenomenon of harmonic fusion. As harmonic fusion is thought to be determined by biological factors rather than factors related to musical culture or training, we argue that octave pitch affinity also has, at least in part, a biological basis.
Affiliation(s)
- Laurent Demany
- Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, CNRS, EPHE, and Université de Bordeaux, Bordeaux, France
- Guilherme Monteiro
- Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, CNRS, EPHE, and Université de Bordeaux, Bordeaux, France
- Catherine Semal
- Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, CNRS, EPHE, and Université de Bordeaux, Bordeaux, France; Bordeaux INP, Bordeaux, France
- Shihab Shamma
- Institute for Systems Research, University of Maryland, College Park, MD, United States; Département d'Etudes Cognitives, Ecole Normale Supérieure, Paris, France
- Robert P Carlyon
- Cambridge Hearing Group, MRC Cognition and Brain Sciences Unit, Cambridge, United Kingdom
9
Ashori M. Speech intelligibility and auditory perception of pre-school children with Hearing Aid, cochlear implant and Typical Hearing. J Otol 2020; 15:62-66. [PMID: 32440268] [PMCID: PMC7231984] [DOI: 10.1016/j.joto.2019.11.001]
Abstract
Purpose: There is growing interest in the speech intelligibility and auditory perception of deaf children. The aim of the present study was to compare the speech intelligibility and auditory perception of pre-school children with a Hearing Aid (HA), a Cochlear Implant (CI), and Typical Hearing (TH).
Methods: The research design was descriptive-analytic and comparative. The participants comprised 75 male pre-school children aged 4-6 years, recruited in 2017-2018 from Tehran, Iran, and divided into three groups of 25 children each. The first and second groups were selected from pre-school children with HA and CI, respectively, using convenience sampling, while the third group was selected from pre-school children with TH by random sampling. All children completed the Speech Intelligibility Rating and Categories of Auditory Performance questionnaires.
Results: The mean scores for speech intelligibility and auditory perception in the TH group were significantly higher than those of the other groups (P < 0.0001). The mean speech intelligibility scores of the CI group did not differ significantly from those of the HA group (P < 0.38). The mean auditory perception scores of the CI group were significantly higher than those of the HA group (P < 0.002).
Conclusion: Auditory perception in children with CI was significantly higher than in children with HA. This finding highlights the importance of cochlear implantation at a younger age and its significant impact on auditory perception in deaf children.
Affiliation(s)
- Mohammad Ashori
- Department of Psychology and Education of Children with Special Needs, University of Isfahan, Isfahan, Iran
10
Swanson BA, Marimuthu VMR, Mannell RH. Place and Temporal Cues in Cochlear Implant Pitch and Melody Perception. Front Neurosci 2019; 13:1266. [PMID: 31849583] [PMCID: PMC6888014] [DOI: 10.3389/fnins.2019.01266]
Abstract
The present study compared pitch and melody perception using cochlear place of excitation and temporal cues in six adult nucleus cochlear implant (CI) recipients. The stimuli were synthesized tones presented through a loudspeaker, and recipients used the Advanced Combinational Encoder (ACE) sound coding strategy on their own sound processors. Three types of tones were used, denoted H3, H4, and P5. H3 tones were harmonic tones with fundamental frequencies in the range C3-C4 (131-262 Hz), providing temporal pitch cues alone. H4 tones were harmonic tones with fundamental frequencies in the range C4-C5 (262-523 Hz), providing a mixture of temporal and place cues. P5 tones were pure tones with fundamental frequencies in the range C5-C6 (523-1046 Hz), providing place pitch cues alone. Four experimental procedures were used: pitch discrimination, pitch ranking, backward modified melodies, and warped modified melodies. In each trial of the modified melodies tests, subjects heard a familiar melody and a version with modified pitch (in randomized order), and had to select the unmodified melody. In all four procedures, many scores were much lower than would be expected for normal hearing listeners, implying that the strength of the perceived pitch was weak. Discrimination and ranking with H3 and P5 tones was poor for two-semitone intervals, but near perfect for intervals of five semitones and larger. H4 tones provided the lowest group mean scores in all four procedures, with some pitch reversals observed in pitch ranking. Group mean scores for P5 tones (place cues alone) were at least as high as those for H3 tones (temporal cues alone). The relatively good scores on the melody tasks with P5 tones were surprising, given the lack of temporal cues, raising the possibility of musical pitch using place cues alone. However, the alternative possibility that the CI recipients perceived the place cues as brightness, rather than musical pitch per se, cannot be excluded. 
These findings suggest that models of pitch perception need to incorporate place-of-excitation representations alongside temporal cues if they are to account for pitch and melody perception when temporal cues are unavailable.
Affiliation(s)
- Vijay M. R. Marimuthu
- Department of Linguistics, Faculty of Human Sciences, Macquarie University, Sydney, NSW, Australia
- Robert H. Mannell
- Department of Linguistics, Faculty of Human Sciences, Macquarie University, Sydney, NSW, Australia
11
Graves JE, Pralus A, Fornoni L, Oxenham AJ, Caclin A, Tillmann B. Short- and long-term memory for pitch and non-pitch contours: Insights from congenital amusia. Brain Cogn 2019; 136:103614. [PMID: 31546175] [PMCID: PMC6953621] [DOI: 10.1016/j.bandc.2019.103614]
Abstract
Congenital amusia is a neurodevelopmental disorder characterized by deficits in music perception, including discriminating and remembering melodies and melodic contours. As non-amusic listeners can perceive contours in dimensions other than pitch, such as loudness and brightness, our present study investigated whether amusics' pitch contour deficits also extend to these other auditory dimensions. Amusic and control participants performed an identification task for ten familiar melodies and a short-term memory task requiring the discrimination of changes in the contour of novel four-tone melodies. For both tasks, melodic contour was defined by pitch, brightness, or loudness. Amusic participants showed some ability to extract contours in all three dimensions. For familiar melodies, amusic participants showed impairment in all conditions, perhaps reflecting the fact that the long-term memory representations of the familiar melodies were defined in pitch. In the contour discrimination task with novel melodies, amusic participants exhibited less impairment for loudness-based melodies than for pitch- or brightness-based melodies, suggesting some specificity of the deficit for spectral changes, if not for pitch alone. The results suggest pitch and brightness may not be processed by the same mechanisms as loudness, and that short-term memory for loudness contours may be spared to some degree in congenital amusia.
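A melodic contour, in any auditory dimension, is just the direction of each successive change, which is what makes pitch-, loudness-, and brightness-based versions of the same melody comparable. A minimal sketch (the representation is mine, not the authors' stimulus code):

```python
def contour(values):
    """Contour of a melody in any dimension (pitch, loudness, brightness):
    the sign of each successive change (+1 up, -1 down, 0 same)."""
    return [(b > a) - (b < a) for a, b in zip(values, values[1:])]

def same_contour(a, b):
    """True if two melodies share a contour, regardless of dimension or range."""
    return contour(a) == contour(b)
```

For example, a four-tone melody rising then falling in pitch has the same contour as one rising then falling in loudness, which is exactly the kind of cross-dimensional equivalence the discrimination task relies on.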
Affiliation(s)
- Jackson E Graves
- Lyon Neuroscience Research Center (CRNL), CNRS, UMR 5292, Inserm U1028, Université Lyon 1, Lyon, France; Department of Psychology, University of Minnesota, Minneapolis, MN, USA; Laboratoire des systèmes perceptifs, Département d'études cognitives, École normale supérieure, PSL University, CNRS, 75005 Paris, France
- Agathe Pralus
- Lyon Neuroscience Research Center (CRNL), CNRS, UMR 5292, Inserm U1028, Université Lyon 1, Lyon, France
- Lesly Fornoni
- Lyon Neuroscience Research Center (CRNL), CNRS, UMR 5292, Inserm U1028, Université Lyon 1, Lyon, France
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
- Anne Caclin
- Lyon Neuroscience Research Center (CRNL), CNRS, UMR 5292, Inserm U1028, Université Lyon 1, Lyon, France
- Barbara Tillmann
- Lyon Neuroscience Research Center (CRNL), CNRS, UMR 5292, Inserm U1028, Université Lyon 1, Lyon, France
12
Marty N, Marty M, Pfeuty M. Relative contribution of pitch and brightness to the auditory kappa effect. Psychol Res 2019; 85:55-67. [PMID: 31440814] [DOI: 10.1007/s00426-019-01233-y]
Abstract
Pitch height is known to interfere with temporal judgment. This is the case in the auditory kappa effect, in which the relative pitch distance separating two tones extends the perceived duration of the inter-onset interval (IOI). However, pitch variations produced by manipulating the fundamental frequency of tones are accompanied by variations of the spectral centroid, which is related to perceived brightness. The present study aimed to determine the relative contribution of pitch and brightness to the auditory kappa effect. Forty-eight participants performed an AXB paradigm (tone X was shifted to be closer to either tone A or B) in three conditions: the three tones varied in both pitch and brightness (PB condition), pitch varied while brightness was fixed (P condition), or brightness varied while pitch was fixed (B condition). Pitch and brightness were modified by manipulating the fundamental frequency (F0) and the spectral centroid of the tones, respectively. In each condition, the percentage of trials in which the first IOI was perceived as shorter increased as X was closer (in pitch and/or brightness) to A. Furthermore, the magnitude of the effect was larger in the PB condition than in the P condition, while it did not differ between the PB and B conditions, suggesting that brightness may contribute more than pitch height to the auditory kappa effect. This study provides the first evidence that auditory brightness interferes with duration judgment and highlights the importance of jointly considering pitch height and brightness in future studies of auditory temporal processing.
Affiliation(s)
- Nicolas Marty
- Sorbonne University, 75000, Paris, France
- University of Bourgogne Franche-Comté, LEAD, UMR 5022, CNRS, 21000, Dijon, France
- Maxime Marty
- University of Bordeaux, INCIA, UMR 5287, CNRS, 146 rue Leo Saignat, 33076, Bordeaux, France
- Micha Pfeuty
- University of Bordeaux, INCIA, UMR 5287, CNRS, 146 rue Leo Saignat, 33076, Bordeaux, France
13
Zhang F, Underwood G, McGuire K, Liang C, Moore DR, Fu QJ. Frequency change detection and speech perception in cochlear implant users. Hear Res 2019; 379:12-20. [PMID: 31035223] [PMCID: PMC6571168] [DOI: 10.1016/j.heares.2019.04.007]
Abstract
Dynamic frequency changes in sound provide critical cues for speech perception. Most previous studies examining frequency discrimination in cochlear implant (CI) users have employed behavioral tasks in which target and reference tones (differing in frequency) are presented statically in separate time intervals, and participants identify the target frequency by comparing stimuli across those intervals. However, perceiving dynamic frequency changes in speech requires detection of within-interval frequency change. This study explored the relationship between detection of within-interval frequency changes and speech perception performance in CI users. Frequency change detection thresholds (FCDTs) were measured in 20 adult CI users using a 3-alternative forced-choice (3AFC) procedure. Stimuli were 1-s pure tones (base frequencies of 0.25, 1, and 4 kHz) with frequency changes occurring 0.5 s after tone onset. Speech tests were 1) Consonant-Nucleus-Consonant (CNC) monosyllabic word recognition, 2) Arizona Biomedical Sentence Recognition (AzBio) in quiet, 3) AzBio in noise (AzBio-N, +10 dB signal-to-noise ratio, SNR), and 4) Digits-in-Noise (DIN). Participants' subjective satisfaction with the CI was also assessed. Correlations between FCDTs and speech perception were all statistically significant, whereas satisfaction with CI use was not related to FCDTs after controlling for major demographic factors. DIN speech reception thresholds were significantly correlated with AzBio-N scores. The current findings suggest that the ability to detect within-interval frequency changes may play an important role in the speech perception performance of CI users. FCDT and DIN can serve as simple, rapid tests that require little or no linguistic background for predicting CI speech outcomes.
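Adaptive forced-choice procedures like the one used to measure FCDTs typically adjust the stimulus from trial to trial with a transformed up-down rule. A toy simulation of a generic 2-down/1-up track (the listener model, start level, and step sizes are illustrative assumptions, not the study's parameters):

```python
import random

def two_down_one_up(p_correct, start=32.0, step=8.0, n_reversals=8):
    """Simulate a 2-down/1-up adaptive track, which converges on the
    stimulus magnitude yielding ~70.7% correct (Levitt's rule).
    `p_correct(level)` models the listener; all values are illustrative."""
    level = start
    run, direction, reversals = 0, None, []
    while len(reversals) < n_reversals:
        if random.random() < p_correct(level):   # simulated trial outcome
            run += 1
            if run < 2:                          # need 2 correct to go down
                continue
            run, new_dir = 0, "down"
            level = max(level - step, 0.0)
        else:                                    # any miss -> easier
            run, new_dir = 0, "up"
            level += step
        if direction is not None and new_dir != direction:
            reversals.append(level)              # record level at reversal
            step = max(step / 2, 0.5)            # shrink step after reversals
        direction = new_dir
    return sum(reversals[-4:]) / 4               # mean of last 4 reversals

random.seed(1)
# Toy 3AFC psychometric function: chance is 1/3, detectability grows with
# the size of the frequency change (Hz).
threshold = two_down_one_up(lambda hz: min(0.99, 1 / 3 + hz / 30))
```

Averaging the final reversals gives the threshold estimate; the 2-down/1-up rule is just one common choice of convergence point.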
Affiliation(s)
- Fawen Zhang
- Department of Communication Sciences and Disorders, University of Cincinnati, Ohio, USA
- Gabrielle Underwood
- Department of Communication Sciences and Disorders, University of Cincinnati, Ohio, USA
- Kelli McGuire
- Department of Communication Sciences and Disorders, University of Cincinnati, Ohio, USA
- Chun Liang
- Department of Communication Sciences and Disorders, University of Cincinnati, Ohio, USA; Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, China
- David R Moore
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Otolaryngology, University of Cincinnati, Ohio, USA
- Qian-Jie Fu
- Department of Head and Neck Surgery, University of California, Los Angeles, Los Angeles, CA, USA
14
Madsen SMK, Marschall M, Dau T, Oxenham AJ. Speech perception is similar for musicians and non-musicians across a wide range of conditions. Sci Rep 2019; 9:10404. [PMID: 31320656] [PMCID: PMC6639310] [DOI: 10.1038/s41598-019-46728-1]
Abstract
It remains unclear whether musical training is associated with improved speech understanding in noisy environments, with different studies reaching differing conclusions. Even in studies that have reported an advantage for highly trained musicians, it is not known whether the benefits measured in laboratory tests extend to more ecologically valid situations. This study aimed to establish whether musicians are better than non-musicians at understanding speech in a background of competing speakers or speech-shaped noise under more realistic conditions, involving sounds presented in space via a spherical array of 64 loudspeakers, rather than over headphones, with and without simulated room reverberation. The study also included experiments measuring fundamental frequency difference limens (F0DLs), interaural time difference limens (ITDLs), and attentive tracking. Sixty-four participants (32 non-musicians and 32 musicians) were tested, with the two groups matched in age, sex, and IQ as assessed with Raven's Advanced Progressive Matrices. There was a significant benefit of musicianship for F0DLs, ITDLs, and attentive tracking; however, speech scores were not significantly different between the two groups. The results suggest no musician advantage for understanding speech in background noise or competing talkers under a variety of conditions.
Affiliation(s)
- Sara M K Madsen
- Hearing Systems Group, Department of Health Technology, Technical University of Denmark, Ørsteds Plads, 2800, Lyngby, Denmark
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, MN, 55455, USA
- Marton Marschall
- Hearing Systems Group, Department of Health Technology, Technical University of Denmark, Ørsteds Plads, 2800, Lyngby, Denmark
- Torsten Dau
- Hearing Systems Group, Department of Health Technology, Technical University of Denmark, Ørsteds Plads, 2800, Lyngby, Denmark
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, MN, 55455, USA
15
Weaver AJ, DiGiovanni JJ, Ries DT. Pspan: A New Tool for Assessing Pitch Temporal Processing and Patterning Capacity. Am J Audiol 2019; 28:322-332. [PMID: 31084578] [DOI: 10.1044/2019_aja-18-0117]
Abstract
Purpose: The purpose of this study was to evaluate whether merging the clinical pitch pattern test procedure with psychoacoustic adaptive methods would create a new tool capable of capturing individual differences in the pitch temporal processing and patterning capacity of children and adults. Method: Sixty-six individuals, young children (ages 10-12 years, n = 22), older children (ages 13-15 years, n = 23), and adults (ages 18-33 years, n = 21), were recruited and assigned to subgroups based on reported duration (years) of instrumental music instruction. Additional background information was collected to assess whether the newly developed pitch temporal processing and patterning span task (Pspan) was sensitive to individual differences across participants. Results: Evaluation of the Pspan task as a scale indicated good parallel reliability across runs, as assessed by Cronbach's alpha, and scores were normally distributed. Between-subjects analysis of variance indicated main effects for both the age groups and the music groups recruited for the study. A multiple regression analysis with Pspan scores as the dependent variable found that three measures of music instruction, age in years, and paternal education were predictive of enhanced temporal processing and patterning capacity for pitch input. Conclusions: The outcomes suggest that the Pspan task is a time-efficient data collection tool that is sensitive to the duration of instrumental music instruction, maturation, and paternal education. In addition, the results indicate that the task is sensitive to age-related changes in auditory temporal processing and patterning performance during adolescence, when children are 10-15 years old.
Affiliation(s)
- Aurora J. Weaver
- Auditory Psychophysics and Signal Processing Lab, Division of Communication Sciences and Disorders, Ohio University, Athens
- Auditory and Music Perception Lab, Department of Communication Disorders, Auburn University, AL
- Jeffrey J. DiGiovanni
- Auditory Psychophysics and Signal Processing Lab, Division of Communication Sciences and Disorders, Ohio University, Athens
- Department of Communication Sciences and Disorders, University of Cincinnati, OH
- Dennis T. Ries
- Department of Physical Medicine and Rehabilitation, University of Colorado–Anschutz Medical Campus, Aurora
16
Ireland K, Iyer TA, Penhune VB. Contributions of age of start, cognitive abilities and practice to musical task performance in childhood. PLoS One 2019; 14:e0216119. [PMID: 31022272] [PMCID: PMC6483258] [DOI: 10.1371/journal.pone.0216119]
Abstract
Studies with adult musicians show that beginning lessons before age seven is associated with better performance on musical tasks and enhancement in auditory and motor brain regions. It is hypothesized that early training interacts with periods of heightened neural development to promote greater plasticity and better learning and performance later in life. However, we do not know whether such effects can be observed in childhood. Moreover, we do not know the degree to which such effects are related to training, or whether early training has different effects on particular musical skills depending on their cognitive, perceptual, or motor requirements. To address these questions, we compared groups of child musicians who had started lessons earlier or later on age-normed tests of rhythm synchronization and melody discrimination. The groups were also matched for age, years of experience, working memory, and global cognitive ability. Results showed that children who started early performed better on simple melody discrimination and that scores on this task were predicted by both age of start (AoS) and cognitive ability. There was no effect of AoS for the more complex rhythm or transposed-melody tasks, but these scores were significantly predicted by working memory ability and, for transposed melodies, by hours of weekly practice. These findings provide the first evidence that earlier AoS for music training in childhood results in enhancement of specific musical skills. Integrating these results with those for adult musicians, we hypothesize that early training has an immediate impact on simple melody discrimination skills that develop early, while more complex abilities, like synchronization and transposition, require both further maturation and additional training.
Affiliation(s)
- Kierla Ireland
- Laboratory for Motor Learning and Neural Plasticity, Concordia University, Montreal, Quebec, Canada
- International Laboratory for Brain, Music, and Sound Research (BRAMS), Montreal, Quebec, Canada
- Thanya A. Iyer
- Laboratory for Motor Learning and Neural Plasticity, Concordia University, Montreal, Quebec, Canada
- International Laboratory for Brain, Music, and Sound Research (BRAMS), Montreal, Quebec, Canada
- Virginia B. Penhune
- Laboratory for Motor Learning and Neural Plasticity, Concordia University, Montreal, Quebec, Canada
- International Laboratory for Brain, Music, and Sound Research (BRAMS), Montreal, Quebec, Canada
17
Little DF, Cheng HH, Wright BA. Inducing musical-interval learning by combining task practice with periods of stimulus exposure alone. Atten Percept Psychophys 2019; 81:344-357. [PMID: 30136042] [PMCID: PMC6384134] [DOI: 10.3758/s13414-018-1584-x]
Abstract
A key component of musical proficiency is the ability to discriminate between and identify musical intervals, or fixed ratios between pitches. Acquiring these skills requires training, but little is known about how to best arrange the trials within a training session. To address this issue, learning on a musical-interval comparison task was evaluated for two four-day training regimens that employed equal numbers of stimulus presentations per day. A regimen of continuous practice yielded no learning, but a regimen that combined practice and stimulus exposure alone generated clear improvement. Learning in the practice-plus-exposure regimen was due to the combination of the two experiences, because two control groups who received only either the practice or the exposure from that regimen did not learn. Posttest performance suggested that this improvement in comparison learning generalized to an untrained stimulus type and an untrained musical-interval identification task. Naïve comparison performance, but not learning, was better for larger pitch-ratio differences and for individuals with more musical experience. The reported benefits of the practice-plus-exposure regimen mirror the outcomes for fine-grained discrimination and speech tasks, suggesting that a general learning principle is involved. In practical terms, it appears that combining practice and stimulus exposure alone is a particularly effective configuration for improving musical-interval perception.
Affiliation(s)
- David F Little
- Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Henry H Cheng
- Communication Sciences and Disorders, Northwestern University, Evanston, IL, 60208-3550, USA
- Beverly A Wright
- Communication Sciences and Disorders, Knowles Hearing Center, Northwestern Institute for Neuroscience, Northwestern University, Evanston, IL, 60208-3550, USA
18
Tong X, Choi W, Man YY. Tone language experience modulates the effect of long-term musical training on musical pitch perception. J Acoust Soc Am 2018; 144:690. [PMID: 30180694] [DOI: 10.1121/1.5049365]
Abstract
Long-term musical training is widely reported to enhance music pitch perception. However, it remains unclear whether tone language experience influences the effect of long-term musical training on musical pitch perception. The present study addressed this question by testing 30 Cantonese and 30 non-tonal language speakers, each divided equally into musician and non-musician groups, on pitch height and pitch interval discrimination. Musicians outperformed non-musicians among non-tonal language speakers, but not among Cantonese speakers on the pitch height discrimination task. However, musicians outperformed non-musicians among Cantonese speakers, but not among non-tonal language speakers on the pitch interval discrimination task. These results suggest that the effect of long-term musical training on musical pitch perception is shaped by tone language experience and varies across different pitch perception tasks.
Affiliation(s)
- Xiuli Tong
- Division of Speech and Hearing Sciences, The University of Hong Kong, Hong Kong
- William Choi
- Division of Speech and Hearing Sciences, The University of Hong Kong, Hong Kong
19
McPherson MJ, McDermott JH. Diversity in pitch perception revealed by task dependence. Nat Hum Behav 2018; 2:52-66. [PMID: 30221202] [PMCID: PMC6136452] [DOI: 10.1038/s41562-017-0261-8]
Abstract
Pitch conveys critical information in speech, music, and other natural sounds, and is conventionally defined as the perceptual correlate of a sound's fundamental frequency (F0). Although pitch is widely assumed to be subserved by a single F0 estimation process, real-world pitch tasks vary enormously, raising the possibility of underlying mechanistic diversity. To probe pitch mechanisms we conducted a battery of pitch-related music and speech tasks using conventional harmonic sounds and inharmonic sounds whose frequencies lack a common F0. Some pitch-related abilities - those relying on musical interval or voice recognition - were strongly impaired by inharmonicity, suggesting a reliance on F0. However, other tasks, including those dependent on pitch contours in speech and music, were unaffected by inharmonicity, suggesting a mechanism that tracks the frequency spectrum rather than the F0. The results suggest that pitch perception is mediated by several different mechanisms, only some of which conform to traditional notions of pitch.
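Inharmonic stimuli of the kind described here are commonly built by jittering each partial of a harmonic complex away from its integer multiple of F0, so that no common fundamental remains. A rough sketch of that manipulation (the jitter range and tone parameters are assumptions, not the paper's exact stimulus recipe):

```python
import numpy as np

rng = np.random.default_rng(0)

def complex_tone(f0, n_partials=10, sr=44100, dur=0.5, jitter=0.0):
    """Sum of unit-amplitude sinusoids at multiples of f0. With jitter > 0,
    each partial is displaced by up to +/- (jitter * f0), so the partials
    no longer share a common fundamental (an 'inharmonic' tone)."""
    t = np.arange(int(sr * dur)) / sr
    harmonics = np.arange(1, n_partials + 1)
    freqs = f0 * (harmonics + rng.uniform(-jitter, jitter, n_partials))
    return np.sum(np.sin(2 * np.pi * freqs[:, None] * t), axis=0)

harmonic = complex_tone(200)                 # partials at 200, 400, ... Hz
inharmonic = complex_tone(200, jitter=0.3)   # same spectral region, no F0
```

Because both tones occupy the same spectral region, comparing performance on them separates F0-based mechanisms from ones that track the frequency spectrum directly.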
Affiliation(s)
- Malinda J McPherson
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA, USA
- Josh H McDermott
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA, USA
20
Graves JE, Oxenham AJ. Familiar Tonal Context Improves Accuracy of Pitch Interval Perception. Front Psychol 2017; 8:1753. [PMID: 29062295] [PMCID: PMC5640898] [DOI: 10.3389/fpsyg.2017.01753]
Abstract
A fundamental feature of everyday music perception is sensitivity to familiar tonal structures such as musical keys. Many studies have suggested that a tonal context can enhance the perception and representation of pitch. Most of these studies have measured response time, which may reflect expectancy as opposed to perceptual accuracy. We instead used a performance-based measure, comparing participants’ ability to discriminate between a “small, in-tune” interval and a “large, mistuned” interval in conditions that involved familiar tonal relations (diatonic, or major, scale notes), unfamiliar tonal relations (whole-tone or mistuned-diatonic scale notes), repetition of a single pitch, or no tonal context. The context was established with a brief sequence of tones in Experiment 1 (melodic context), and a cadence-like two-chord progression in Experiment 2 (harmonic context). In both experiments, performance significantly differed across the context conditions, with a diatonic context providing a significant advantage over no context; however, no correlation with years of musical training was observed. The diatonic tonal context also provided an advantage over the whole-tone scale context condition in Experiment 1 (melodic context), and over the mistuned scale or repetition context conditions in Experiment 2 (harmonic context). However, the relatively small benefit to performance suggests that the main advantage of tonal context may be priming of expected stimuli, rather than enhanced accuracy of pitch interval representation.
Affiliation(s)
- Jackson E Graves
- Department of Psychology, University of Minnesota, Minneapolis, MN, United States
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, MN, United States
21
Standard-interval size affects interval-discrimination thresholds for pure-tone melodic pitch intervals. Hear Res 2017; 355:64-69. [PMID: 28935162] [DOI: 10.1016/j.heares.2017.09.008]
Abstract
Our ability to discriminate between pitch intervals of different sizes is not only an important aspect of speech and music perception, but also a useful means of evaluating higher-level pitch perception. The current study examined how pitch-interval discrimination was affected by the size of the intervals being compared, and by musical training. Using an adaptive procedure, pitch-interval discrimination thresholds were measured for sequentially presented pure-tone intervals with standard intervals of 1 semitone (minor second), 6 semitones (tritone), and 7 semitones (perfect fifth). Listeners were classified into three groups based on musical experience: non-musicians had less than 3 years of informal musical experience; amateur musicians had at least 10 years of experience but no formal music theory training; and expert musicians had at least 12 years of experience with 1 year of formal ear training, and were either currently pursuing or had earned a Bachelor's degree as a music major or minor. Consistent with previous studies, discrimination thresholds obtained from expert musicians were significantly lower than those from other listeners. Thresholds also varied significantly with the magnitude of the standard interval and were higher for conditions with a 6- or 7-semitone standard than with a 1-semitone standard. These data show that interval-discrimination thresholds are strongly affected by the size of the standard interval.
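For reference, an equal-tempered interval of n semitones corresponds to a frequency ratio of 2^(n/12). Two helper conversions (illustrative, not tied to this study's stimuli):

```python
import math

def semitones_to_ratio(n):
    """n equal-tempered semitones correspond to a frequency ratio of 2**(n/12)."""
    return 2 ** (n / 12)

def interval_in_semitones(f1, f2):
    """Size of the interval between two frequencies, in semitones."""
    return 12 * math.log2(f2 / f1)

# The study's standards: minor second (1 st), tritone (6 st), perfect fifth (7 st).
print(round(semitones_to_ratio(7), 4))            # 1.4983 (close to 3:2)
print(round(interval_in_semitones(440, 660), 2))  # 7.02
```

The logarithmic semitone scale is what makes intervals of equal musical size correspond to equal frequency ratios rather than equal differences in Hz.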
22
Zheng Y, Brette R. On the relation between pitch and level. Hear Res 2017; 348:63-69. [PMID: 28238889] [DOI: 10.1016/j.heares.2017.02.014]
Abstract
Pitch is the perceptual dimension along which musical notes are ordered from low to high. It is often described as the perceptual correlate of the periodicity of the sound's waveform. Previous reports have shown that pitch can depend slightly on sound level. We wanted to verify that these observations reflect genuine changes in perceived pitch, and were not due to procedural factors or confusion between dimensions of pitch and level. We first conducted a systematic pitch matching task and confirmed that the pitch of low frequency pure tones, but not complex tones, decreases by an amount equivalent to a change in frequency of more than half a semitone when level increases. We then showed that the structure of pitch shifts is anti-symmetric and transitive, as expected for changes in pitch. We also observed shifts in the same direction (although smaller) in an interval matching task. Finally, we observed that musicians are more precise in pitch matching tasks than non-musicians but show the same average shifts with level. These combined experiments confirm that the pitch of low frequency pure tones depends weakly but systematically on level. These observations pose a challenge to current theories of pitch.
Affiliation(s)
- Yi Zheng
- Sorbonne Universités, UPMC Univ Paris 06, INSERM, CNRS, Institut de la Vision, 17 rue Moreau, 75012 Paris, France; Institut d'Etudes de la Cognition, Ecole Normale Supérieure, Paris, France; Beijing Advanced Innovation Center for Future Education, Beijing Normal University, Beijing, China
- Romain Brette
- Sorbonne Universités, UPMC Univ Paris 06, INSERM, CNRS, Institut de la Vision, 17 rue Moreau, 75012 Paris, France; Institut d'Etudes de la Cognition, Ecole Normale Supérieure, Paris, France
23
Allen EJ, Burton PC, Olman CA, Oxenham AJ. Representations of Pitch and Timbre Variation in Human Auditory Cortex. J Neurosci 2017; 37:1284-1293. [PMID: 28025255] [PMCID: PMC5296797] [DOI: 10.1523/jneurosci.2336-16.2016]
Abstract
Pitch and timbre are two primary dimensions of auditory perception, but how they are represented in the human brain remains a matter of contention. Some animal studies of auditory cortical processing have suggested modular processing, with different brain regions preferentially coding for pitch or timbre, whereas other studies have suggested a distributed code for different attributes across the same population of neurons. This study tested whether variations in pitch and timbre elicit activity in distinct regions of the human temporal lobes. Listeners were presented with sequences of sounds that varied in either fundamental frequency (eliciting changes in pitch) or spectral centroid (eliciting changes in brightness, an important attribute of timbre), with the degree of pitch or timbre variation in each sequence parametrically manipulated. The BOLD responses from auditory cortex increased with increasing sequence variance along each perceptual dimension. The spatial extent, region, and laterality of the cortical regions most responsive to variations in pitch or timbre at the univariate level of analysis were largely overlapping. However, patterns of activation in response to pitch or timbre variations were discriminable in most subjects at an individual level using multivoxel pattern analysis, suggesting a distributed coding of the two dimensions bilaterally in human auditory cortex. SIGNIFICANCE STATEMENT Pitch and timbre are two crucial aspects of auditory perception. Pitch governs our perception of musical melodies and harmonies, and conveys both prosodic and (in tone languages) lexical information in speech. Brightness-an aspect of timbre or sound quality-allows us to distinguish different musical instruments and speech sounds. Frequency-mapping studies have revealed tonotopic organization in primary auditory cortex, but the use of pure tones or noise bands has precluded the possibility of dissociating pitch from brightness. Our results suggest a distributed code, with no clear anatomical distinctions between auditory cortical regions responsive to changes in either pitch or timbre, but also reveal a population code that can differentiate between changes in either dimension within the same cortical regions.
Affiliation(s)
- Emily J Allen
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455
- Philip C Burton
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455
- Cheryl A Olman
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455
24
Cognitive representation of "musical fractals": Processing hierarchy and recursion in the auditory domain. Cognition 2017; 161:31-45. [PMID: 28103526] [PMCID: PMC5348576] [DOI: 10.1016/j.cognition.2017.01.001]
Abstract
The human ability to process hierarchical structures has been a longstanding research topic. However, the nature of the cognitive machinery underlying this faculty remains controversial. Recursion, the ability to embed structures within structures of the same kind, has been proposed as a key component of our ability to parse and generate complex hierarchies. Here, we investigated the cognitive representation of both recursive and iterative processes in the auditory domain. The experiment used a two-alternative forced-choice paradigm: participants were exposed to three-step processes in which pure-tone sequences were built either through recursive or iterative processes, and had to choose the correct completion. Foils were constructed according to generative processes that did not match the previous steps. Both musicians and non-musicians were able to represent recursion in the auditory domain, although musicians performed better. We also observed that general ‘musical’ aptitudes played a role in both recursion and iteration, although the influence of musical training was somewhat independent of melodic memory. Moreover, unlike iteration, recursion in audition was well correlated with its non-auditory (recursive) analogues in the visual and action-sequencing domains. These results suggest that the cognitive machinery involved in establishing recursive representations is domain-general, even though this machinery requires access to information resulting from domain-specific processes.
25
Abstract
Pitch is a percept of sound that is based in part on fundamental frequency. Although pitch can be defined in a way that is clearly separable from other aspects of musical sounds, such as timbre, the perception of pitch is not a simple topic. Despite this, studying pitch separately from other aspects of sound has led to some interesting conclusions about how humans and other animals process acoustic signals. It turns out that pitch perception in humans is based on an assessment of pitch height, pitch chroma, relative pitch, and grouping principles. How pitch is broken down depends largely on the context. Most, if not all, of these principles appear to also be used by other species, but when and how accurately they are used varies across species and context. Studying how other animals compare to humans in their pitch abilities is partially a reevaluation of what we know about humans by considering ourselves in a biological context.
Affiliation(s)
- Marisa Hoeschele
- Department of Cognitive Biology, University of Vienna, Vienna, Austria
26
Todd AE, Mertens G, Van de Heyning P, Landsberger DM. Encoding a Melody Using Only Temporal Information for Cochlear-Implant and Normal-Hearing Listeners. Trends Hear 2017; 21:2331216517739745. [PMID: 29161987] [PMCID: PMC5703098] [DOI: 10.1177/2331216517739745]
Abstract
One way to provide pitch information to cochlear implant users is through amplitude-modulation rate. It is currently unknown whether amplitude-modulation rate can provide cochlear implant users with pitch information adequate for perceiving melodic information. In the present study, the notes of a song were encoded via amplitude-modulation rate of pulse trains on single electrodes at the apex or middle of long electrode arrays. The melody of the song was either physically correct or modified by compression or expansion. Nine cochlear implant users rated the extent to which the song was out of tune in the different conditions. Cochlear implant users on average did not show sensitivity to melody compression or expansion regardless of place of stimulation. These results were found despite the fact that three of the cochlear implant users showed the expected sensitivity to melody compression and expansion with the same task using acoustic pure tones in a contralateral acoustic ear. Normal-hearing listeners showed an inconsistent and weak effect of melody compression and expansion when the notes of the song were encoded with acoustic pulse rate. The results suggest that amplitude-modulation rate provides insufficient access to melodic information for cochlear-implant and normal-hearing listeners.
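Encoding a melody purely in amplitude-modulation rate can be illustrated in the acoustic domain by modulating a fixed carrier at each note's nominal frequency. This is a hypothetical acoustic analogue only; the carrier frequency, note durations, and note values below are assumptions, and the study itself delivered rate cues via electrical pulse trains on single electrodes:

```python
import numpy as np

def am_melody(mod_rates_hz, carrier_hz=4000, sr=44100, note_dur=0.25):
    """Concatenate notes whose 'pitch' is conveyed only by the sinusoidal
    amplitude-modulation rate of a fixed high-frequency carrier, so the
    melody is carried by temporal-envelope cues alone."""
    t = np.arange(int(sr * note_dur)) / sr
    notes = []
    for rate in mod_rates_hz:
        envelope = 0.5 * (1.0 + np.sin(2 * np.pi * rate * t))  # 100% AM depth
        notes.append(envelope * np.sin(2 * np.pi * carrier_hz * t))
    return np.concatenate(notes)

# Opening contour of a "Happy Birthday"-like melody as modulation rates (Hz):
signal = am_melody([262, 262, 294, 262, 349, 330])
```

Compressing or expanding the modulation-rate contour, as in the study, then tests whether listeners can hear that the melody is "out of tune" from rate cues alone.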
Affiliation(s)
- Ann E. Todd
- Department of Otolaryngology, New York University School of Medicine, NY, USA
- Griet Mertens
- Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital, University of Antwerp, Belgium
- Paul Van de Heyning
- Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital, University of Antwerp, Belgium
27
Slana A, Repovš G, Fitch WT, Gingras B. Harmonic context influences pitch class equivalence judgments through gestalt and congruency effects. Acta Psychol (Amst) 2016; 166:54-63. [PMID: 27058166] [DOI: 10.1016/j.actpsy.2016.03.006]
Abstract
The context in which a stimulus is presented shapes the way it is processed. This effect has been studied extensively in the field of visual perception. Our understanding of how context affects the processing of auditory stimuli is, however, rather limited. Western music is primarily built on melodies (succession of pitches) typically accompanied by chords (harmonic context), which provides a natural template for the study of context effects in auditory processing. Here, we investigated whether pitch class equivalence judgments of tones are affected by the harmonic context within which the target tones are embedded. Nineteen musicians and 19 non-musicians completed a change detection task in which they were asked to determine whether two successively presented target tones, heard either in isolation or with a chordal accompaniment (same or different chords), belonged to the same pitch class. Both musicians and non-musicians were most accurate when the chords remained the same, less so in the absence of chordal accompaniment, and least when the chords differed between both target tones. Further analysis investigating possible mechanisms underpinning these effects of harmonic context on task performance revealed that both a change in gestalt (change in either chord or pitch class), as well as incongruency between change in target tone pitch class and change in chords, led to reduced accuracy and longer reaction times. Our results demonstrate that, similarly to visual processing, auditory processing is influenced by gestalt and congruency effects.
28
Abstract
Most people are able to recognise familiar tunes even when played in a different key. It is assumed that this depends on a general capacity for relative pitch perception; the ability to recognise the pattern of inter-note intervals that characterises the tune. However, when healthy adults are required to detect rare deviant melodic patterns in a sequence of randomly transposed standard patterns they perform close to chance. Musically experienced participants perform better than naïve participants, but even they find the task difficult, despite the fact that musical education includes training in interval recognition. To understand the source of this difficulty we designed an experiment to explore the relative influence of the size of within-pattern intervals and between-pattern transpositions on detecting deviant melodic patterns. We found that task difficulty increases when patterns contain large intervals (5-7 semitones) rather than small intervals (1-3 semitones). While task difficulty increases substantially when transpositions are introduced, the effect of transposition size (large vs small) is weaker. Increasing the range of permissible intervals to be used also makes the task more difficult. Furthermore, providing an initial exact repetition followed by subsequent transpositions does not improve performance. Although musical training correlates with task performance, we find no evidence that violations to musical intervals important in Western music (i.e. the perfect fifth or fourth) are more easily detected. In summary, relative pitch perception does not appear to be conducive to simple explanations based exclusively on invariant physical ratios.
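The transposition invariance probed in this task can be captured computationally by comparing inter-note intervals rather than absolute pitches. A minimal sketch with hypothetical pitch values (MIDI note numbers, so one unit equals one semitone):

```python
def intervals(pitches):
    """Inter-note intervals (in semitones) of a melodic pattern."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def same_pattern(p1, p2):
    """True if p2 is a transposition of p1 (identical interval pattern)."""
    return intervals(p1) == intervals(p2)

standard   = [60, 62, 65]  # e.g. C4, D4, F4
transposed = [67, 69, 72]  # same pattern, shifted 7 semitones up
deviant    = [67, 70, 72]  # inner interval changed

print(same_pattern(standard, transposed))  # True
print(same_pattern(standard, deviant))     # False
```

An ideal relative-pitch listener would solve the task this way; the near-chance performance reported above suggests human listeners do not compute interval patterns with this kind of fidelity.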
29
Nikolsky A. Evolution of tonal organization in music mirrors symbolic representation of perceptual reality. Part-1: Prehistoric. Front Psychol 2015; 6:1405. [PMID: 26528193] [PMCID: PMC4607869] [DOI: 10.3389/fpsyg.2015.01405]
Abstract
This paper reveals the way in which musical pitch works as a peculiar form of cognition that reflects upon the organization of the surrounding world as perceived by the majority of music users within a socio-cultural formation. The evidence from music theory, ethnography, archeology, organology, anthropology, psychoacoustics, and evolutionary biology is plotted against experimental evidence. Much of the methodology for this investigation comes from studies conducted within the territory of the former USSR. To date, this methodology has remained solely confined to Russian-speaking scholars. A brief overview of pitch-set theory demonstrates the need to distinguish between vertical and horizontal harmony, laying out the framework for virtual music space that operates according to the perceptual laws of tonal gravity. Brought to life by the bifurcation of music and speech, tonal gravity passed through eleven discrete stages of development until the onset of tonality in the seventeenth century. Each stage presents its own method of integrating separate musical tones into an auditory-cognitive unity. The theory of "melodic intonation" is set forth as a counterpart to the harmonic theory of chords. Notions of tonality, modality, key, diatonicity, chromaticism, alteration, and modulation are defined in terms of their perception, and categorized according to the way in which they have developed historically. Tonal organization in music and perspective organization in fine arts are explained as products of the same underlying mental process. Music seems to act as a unique medium of symbolic representation of reality through the concept of pitch. Tonal organization of pitch reflects the culture of thinking adopted as a standard within a community of music users. Tonal organization might be a naturally formed system of optimizing individual perception of reality within a social group and its immediate environment, setting conventional standards of intellectual and emotional intelligence.
30
Graves JE, Micheyl C, Oxenham AJ. Expectations for melodic contours transcend pitch. J Exp Psychol Hum Percept Perform 2014; 40:2338-47. [PMID: 25365571] [DOI: 10.1037/a0038291]
Abstract
The question of what makes a good melody has interested composers, music theorists, and psychologists alike. Many of the observed principles of good "melodic continuation" involve melodic contour-the pattern of rising and falling pitch within a sequence. Previous work has shown that contour perception can extend beyond pitch to other auditory dimensions, such as brightness and loudness. Here, we show that the generalization of contour perception to nontraditional dimensions also extends to melodic expectations. In the first experiment, subjective ratings for 3-tone sequences that vary in brightness or loudness conformed to the same general contour-based expectations as pitch sequences. In the second experiment, we modified the sequence of melody presentation such that melodies with the same beginning were blocked together. This change produced substantively different results, but the patterns of ratings remained similar across the 3 auditory dimensions. Taken together, these results suggest that (a) certain well-known principles of melodic expectation (such as the expectation for a reversal following a skip) are dependent on long-term context, and (b) these expectations are not unique to the dimension of pitch and may instead reflect more general principles of perceptual organization.
31
Luo X, Masterson ME, Wu CC. Melodic interval perception by normal-hearing listeners and cochlear implant users. J Acoust Soc Am 2014; 136:1831-44. [PMID: 25324084] [PMCID: PMC4241717] [DOI: 10.1121/1.4894738]
Abstract
The perception of melodic intervals (sequential pitch differences) is essential to music perception. This study tested melodic interval perception in normal-hearing (NH) listeners and cochlear implant (CI) users. Melodic interval ranking was tested using an adaptive procedure. CI users had slightly higher interval ranking thresholds than NH listeners. Both groups' interval ranking thresholds, although not affected by root note, significantly increased with standard interval size and were higher for descending intervals than for ascending intervals. The pitch direction effect may be due to a procedural artifact or a difference in central processing. In another test, familiar melodies were played with all the intervals scaled by a single factor. Subjects rated how in tune the melodies were and adjusted the scaling factor until the melodies sounded the most in tune. CI users had lower final interval ratings and less change in interval rating as a function of scaling factor than NH listeners. For CI users, the root-mean-square error of the final scaling factors and the width of the interval rating function were significantly correlated with the average ranking threshold for ascending rather than descending intervals, suggesting that CI users may have focused on ascending intervals when rating and adjusting the melodies.
Affiliation(s)
- Xin Luo
- Department of Speech, Language, and Hearing Sciences, Purdue University, 500 Oval Drive, West Lafayette, Indiana 47907
- Megan E Masterson
- Department of Speech, Language, and Hearing Sciences, Purdue University, 500 Oval Drive, West Lafayette, Indiana 47907
- Ching-Chih Wu
- Department of Speech, Language, and Hearing Sciences, Purdue University, 500 Oval Drive, West Lafayette, Indiana 47907
32
Allen EJ, Oxenham AJ. Symmetric interactions and interference between pitch and timbre. J Acoust Soc Am 2014; 135:1371-9. [PMID: 24606275] [PMCID: PMC3985978] [DOI: 10.1121/1.4863269]
Abstract
Variations in the spectral shape of harmonic tone complexes are perceived as timbre changes and can lead to poorer fundamental frequency (F0) or pitch discrimination. Less is known about the effects of F0 variations on spectral shape discrimination. The aims of the study were to determine whether the interactions between pitch and timbre are symmetric, and to test whether musical training affects listeners' ability to ignore variations in irrelevant perceptual dimensions. Difference limens (DLs) for F0 were measured with and without random, concurrent, variations in spectral centroid, and vice versa. Additionally, sensitivity was measured as the target parameter and the interfering parameter varied by the same amount, in terms of individual DLs. Results showed significant and similar interference between pitch (F0) and timbre (spectral centroid) dimensions, with upward spectral motion often confused for upward F0 motion, and vice versa. Musicians had better F0DLs than non-musicians on average, but similar spectral centroid DLs. Both groups showed similar interference effects, in terms of decreased sensitivity, in both dimensions. Results reveal symmetry in the interference effects between pitch and timbre, once differences in sensitivity between dimensions and subjects are controlled. Musical training does not reliably help to overcome these effects.
Affiliation(s)
- Emily J Allen
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455
33
Chubb C, Dickson CA, Dean T, Fagan C, Mann DS, Wright CE, Guan M, Silva AE, Gregersen PK, Kowalsky E. Bimodal distribution of performance in discriminating major/minor modes. J Acoust Soc Am 2013; 134:3067-3078. [PMID: 24116441] [DOI: 10.1121/1.4816546]
Abstract
This study investigated the abilities of listeners to classify various sorts of musical stimuli as major vs minor. All stimuli combined four pure tones: low and high tonics (G5 and G6), dominant (D), and either a major third (B) or a minor third (B♭). Especially interesting results were obtained using tone-scrambles, randomly ordered sequences of pure tones presented at ≈15 per second. All tone-scrambles tested comprised 16 G's (G5's + G6's), 8 D's, and either 8 B's or 8 B♭'s. The distribution of proportion correct across 275 listeners tested over the course of three experiments was strikingly bimodal, with one mode very close to chance performance, and the other very close to perfect performance. Testing with tone-scrambles thus sorts listeners fairly cleanly into two subpopulations. Listeners in subpopulation 1 are sufficiently sensitive to major vs minor to classify tone-scrambles nearly perfectly; listeners in subpopulation 2 (comprising roughly 70% of the population) have very little sensitivity to major vs minor. Skill in classifying major vs minor tone-scrambles shows a modest correlation of around 0.5 with years of musical training.
Affiliation(s)
- Charles Chubb
- Department of Cognitive Sciences, University of California at Irvine, Irvine, California 92697-5100
34
Zarate JM, Ritson CR, Poeppel D. The effect of instrumental timbre on interval discrimination. PLoS One 2013; 8:e75410. [PMID: 24066179] [PMCID: PMC3774646] [DOI: 10.1371/journal.pone.0075410]
Abstract
We tested non-musicians and musicians in an auditory psychophysical experiment to assess the effects of timbre manipulation on pitch-interval discrimination. Both groups were asked to indicate the larger of two presented intervals, comprised of four sequentially presented pitches; the second or fourth stimulus within a trial was either a sinusoidal (or “pure”), flute, piano, or synthetic voice tone, while the remaining three stimuli were all pure tones. The interval-discrimination tasks were administered parametrically to assess performance across varying pitch distances between intervals (“interval-differences”). Irrespective of timbre, musicians displayed a steady improvement across interval-differences, while non-musicians only demonstrated enhanced interval discrimination at an interval-difference of 100 cents (one semitone in Western music). Surprisingly, the best discrimination performance across both groups was observed with pure-tone intervals, followed by intervals containing a piano tone. More specifically, we observed that: 1) timbre changes within a trial affect interval discrimination; and 2) the broad spectral characteristics of an instrumental timbre may influence perceived pitch or interval magnitude and make interval discrimination more difficult.
Affiliation(s)
- Jean Mary Zarate
- Department of Psychology, New York University, New York, New York, United States of America
- Caroline R. Ritson
- Department of Psychology, New York University, New York, New York, United States of America
- David Poeppel
- Department of Psychology, New York University, New York, New York, United States of America
35
Matteson SE, Olness GS, Caplow NJ. Toward a quantitative account of pitch distribution in spontaneous narrative: method and validation. J Acoust Soc Am 2013; 133:2953-2971. [PMID: 23654400] [PMCID: PMC3663868] [DOI: 10.1121/1.4796111]
Abstract
Pitch is well-known both to animate human discourse and to convey meaning in communication. The study of the statistical population distributions of pitch in discourse will undoubtedly benefit from methodological improvements. The current investigation examines a method that parameterizes pitch in discourse as musical pitch interval H measured in units of cents and that disaggregates the sequence of peak word-pitches using tools employed in time-series analysis and digital signal processing. The investigators test the proposed methodology by applying it to distributions in pitch interval of the peak word-pitch (collectively called the discourse gamut) that occur in simulated and actual spontaneous emotive narratives obtained from 17 middle-aged African-American adults. In rigorous tests, the analysis not only faithfully reproduced simulated distributions embedded in realistic time series that drift and include pitch breaks, but also revealed that the empirical distributions exhibit a common hidden structure when normalized to a slowly varying mode (called the gamut root) of their respective probability density functions. Quantitative differences between narratives reveal the speakers' relative propensity for the use of pitch levels corresponding to elevated degrees of a discourse gamut (the "e-la") superimposed upon a continuum that conforms systematically to an asymmetric Laplace distribution.
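The cents parameterization of pitch interval used in this abstract follows the standard definition of 1200 cents per octave (100 cents per equal-tempered semitone). A minimal sketch of the conversion:

```python
import math

def cents(f_hz, ref_hz):
    """Pitch interval H of f_hz relative to ref_hz, in cents (1200 per octave)."""
    return 1200 * math.log2(f_hz / ref_hz)

print(round(cents(880.0, 440.0)))   # one octave up: 1200 cents
print(round(cents(466.16, 440.0)))  # roughly one semitone up: ~100 cents
```

Negative values indicate intervals below the reference, which is convenient when the reference is a slowly varying gamut root rather than a fixed frequency.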
Affiliation(s)
- Samuel E Matteson
- Department of Physics, University of North Texas, 1155 Union Circle #311427, Denton, Texas 76203-5017, USA.
36
Bonnard D, Micheyl C, Semal C, Dauman R, Demany L. Auditory discrimination of frequency ratios: the octave singularity. J Exp Psychol Hum Percept Perform 2012; 39:788-801. [PMID: 23088507] [DOI: 10.1037/a0030095]
Abstract
Sensitivity to frequency ratios is essential for the perceptual processing of complex sounds and the appreciation of music. This study assessed the effect of ratio simplicity on ratio discrimination for pure tones presented either simultaneously or sequentially. Each stimulus consisted of four 100-ms pure tones, equally spaced in terms of frequency ratio and presented at a low intensity to limit interactions in the auditory periphery. Listeners had to discriminate between a reference frequency ratio of 0.97 octave (about 1.96:1) and target frequency ratios, which were larger than the reference. In the simultaneous condition, the obtained psychometric functions were nonmonotonic: as the target frequency ratio increased from 0.98 octave to 1.04 octaves, discrimination performance initially increased, then decreased, and then increased again; performance was better when the target was exactly one octave (2:1) than when the target was slightly larger. In the sequential condition, by contrast, the psychometric functions were monotonic and there was no effect of frequency ratio simplicity. A control experiment verified that the non-monotonicity observed in the simultaneous condition did not originate from peripheral interactions between the tones. Our results indicate that simultaneous octaves are recognized as "special" frequency intervals by a mechanism that is insensitive to the sign (positive or negative) of deviations from the octave, whereas this is apparently not the case for sequential octaves.
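The stimulus construction described above (four tones equally spaced in frequency ratio, spanning a given fraction of an octave) can be sketched as equal spacing in log frequency. The starting frequency here is an arbitrary assumption, not a value from the study:

```python
def equal_ratio_tones(f_start_hz, span_octaves, n=4):
    """n frequencies equally spaced in log-frequency, spanning span_octaves."""
    step = span_octaves / (n - 1)
    return [f_start_hz * 2 ** (i * step) for i in range(n)]

# Reference stimulus: four tones spanning 0.97 octave (ratio ~1.96:1)
tones = equal_ratio_tones(200.0, 0.97)
print([round(f, 1) for f in tones])
print(round(tones[-1] / tones[0], 2))  # -> 1.96
```

Target stimuli would simply use spans from 0.98 to 1.04 octaves; a span of exactly 1.0 makes the outer tones a 2:1 octave.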
Affiliation(s)
- Damien Bonnard
- INCIA, Université de Bordeaux and CNRS, 146 rue Leo-Saignat, Bordeaux Cedex, France
37
Mary Zarate J, Ritson CR, Poeppel D. Pitch-interval discrimination and musical expertise: is the semitone a perceptual boundary? J Acoust Soc Am 2012; 132:984-93. [PMID: 22894219] [PMCID: PMC3427364] [DOI: 10.1121/1.4733535]
Abstract
The ability to discriminate pitch changes (or intervals) is foundational for speech and music. In an auditory psychophysical experiment, musicians and non-musicians were tested with fixed- and roving-pitch discrimination tasks to investigate the effects of musical expertise on interval discrimination. The tasks were administered parametrically to assess performance across varying pitch distances between intervals. Both groups showed improvements in fixed-pitch interval discrimination as a function of increasing interval difference. Only musicians showed better roving-pitch interval discrimination as interval differences increased, suggesting that this task was too demanding for non-musicians. Musicians had better interval discrimination than non-musicians across most interval differences in both tasks. Interestingly, musicians exhibited improved interval discrimination starting at interval differences of 100 cents (a semitone in Western music), whereas non-musicians showed enhanced discrimination at interval differences exceeding 125 cents. Although exposure to Western music and speech may help establish a basic interval-discrimination threshold between 100 and 200 cents (intervals that occur often in Western languages and music), musical training presumably enhances auditory processing and reduces this threshold to a semitone. As musical expertise does not decrease this threshold beyond 100 cents, the semitone may represent a musical training-induced intervallic limit to acoustic processing.
Affiliation(s)
- Jean Mary Zarate
- Department of Psychology, New York University, 6 Washington Place, New York, New York 10003, USA.
38
Thompson WF, Peter V, Olsen KN, Stevens CJ. The effect of intensity on relative pitch. Q J Exp Psychol (Hove) 2012; 65:2054-72. [PMID: 22650967] [DOI: 10.1080/17470218.2012.678369]
Abstract
In two experiments, we examined the effect of intensity and intensity change on judgements of pitch differences or interval size. In Experiment 1, 39 musically untrained participants rated the size of the interval spanned by two pitches within individual gliding tones. Tones were presented at high intensity, low intensity, looming intensity (up-ramp), and fading intensity (down-ramp) and glided between two pitches spanning either 6 or 7 semitones (a tritone or a perfect fifth interval). The pitch shift occurred in either ascending or descending directions. Experiment 2 repeated the conditions of Experiment 1 but the shifts in pitch and intensity occurred across two discrete tones (i.e., a melodic interval). Results indicated that participants were sensitive to the differences in interval size presented: Ratings were significantly higher when two pitches differed by 7 semitones than when they differed by 6 semitones. However, ratings were also dependent on whether the interval was high or low in intensity, whether it increased or decreased in intensity across the two pitches, and whether the interval was ascending or descending in pitch. Such influences illustrate that the perception of pitch relations does not always adhere to a logarithmic function as implied by their musical labels, but that identical intervals are perceived as substantially different in size depending on other attributes of the sound source.