1
Vanden Bosch der Nederlanden CM, Qi X, Sequeira S, Seth P, Grahn JA, Joanisse MF, Hannon EE. Developmental changes in the categorization of speech and song. Dev Sci 2023; 26:e13346. PMID: 36419407; DOI: 10.1111/desc.13346.
Abstract
Music and language are two fundamental forms of human communication. Many studies examine the development of music- and language-specific knowledge, but few compare how listeners know whether they are listening to music or language. Although we readily differentiate these domains, how we distinguish music from language, and especially speech from song, is not obvious. In two studies, we asked how listeners categorize speech and song. Study 1 used online survey data to show that 4- to 17-year-olds and adults can verbalize distinctions between speech and song. At all ages, listeners described speech-song differences in terms of acoustic features, but compared with older children, 4- to 7-year-olds more often used volume to describe differences, suggesting that they are still learning which features are most useful for differentiating speech from song. Study 2 used a perceptual categorization task to demonstrate that 4- to 8-year-olds and adults readily categorize speech and song, but that this ability improves with age, especially for identifying song. Despite generally rating song as more speech-like, 4- and 6-year-olds rated ambiguous speech-song stimuli as more song-like than 8-year-olds and adults did. Four acoustic features predicted song ratings: F0 instability, utterance duration, harmonicity, and spectral flux. However, 4- and 6-year-olds' song ratings were better predicted by F0 instability than by harmonicity and utterance duration. These studies characterize how children develop conceptual and perceptual understandings of speech and song, and suggest that children under age 8 are still learning which features are important for categorizing utterances as speech or song.
RESEARCH HIGHLIGHTS:
- Children and adults conceptually and perceptually categorize speech and song from age 4.
- Listeners use F0 instability, harmonicity, spectral flux, and utterance duration to determine whether vocal stimuli sound like song.
- Acoustic cue weighting changes with age, becoming adult-like at age 8 for perceptual categorization and at age 12 for conceptual differentiation.
- Young children are still learning to categorize speech and song, which leaves open the possibility that music- and language-specific skills are not so domain-specific.
Affiliation(s)
- Xin Qi
- The Brain and Mind Institute, Western University, London, Canada
- Sarah Sequeira
- The Brain and Mind Institute, Western University, London, Canada
- Prakhar Seth
- The Brain and Mind Institute, Western University, London, Canada
- Jessica A Grahn
- The Brain and Mind Institute, Western University, London, Canada
- Department of Psychology, Western University, London, Canada
- Marc F Joanisse
- The Brain and Mind Institute, Western University, London, Canada
- Department of Psychology, Western University, London, Canada
- Erin E Hannon
- Department of Psychology, University of Nevada, Las Vegas, Nevada, USA
2
Sharman KM, Meissel K, Tait JE, Rudd G, Henderson AME. The effects of live parental infant-directed singing on infants, parents, and the parent-infant dyad: A systematic review of the literature. Infant Behav Dev 2023; 72:101859. PMID: 37343492; DOI: 10.1016/j.infbeh.2023.101859.
Abstract
Singing to infants is widely accepted across cultures as an enjoyable, positive, and beneficial interaction between parent and infant. Although the literature suggests that live infant-directed singing affects the infant, the singing parent, and the dyad in powerful ways, no systematic review of the evidence has yet been conducted. To this end, this systematic review identified 21 studies that investigated the effects of live parental infant-directed singing. These effects were categorized as relating to the infant, the parent, or the parent-infant dyad. Thematic analysis identified one main theme for each of these categories: infant-directed singing supports infants' emotional regulation, validates the parent's role, and promotes affect attunement within the dyad. The findings reinforce the benefits of live parental infant-directed singing for all parties involved, particularly when parents sing to typically developing infants born at full term; for pre-term infants, the findings were inconsistent. The implications of these findings are discussed.
Affiliation(s)
- Kirsten M Sharman
- School of Learning, Development and Professional Practice, Faculty of Education and Social Work, The University of Auckland, New Zealand
- Kane Meissel
- School of Learning, Development and Professional Practice, Faculty of Education and Social Work, The University of Auckland, New Zealand
- Josie E Tait
- School of Population Health, Faculty of Medical and Health Sciences, The University of Auckland, New Zealand
- Georgia Rudd
- School of Learning, Development and Professional Practice, Faculty of Education and Social Work, The University of Auckland, New Zealand
3
The influence of memory on the speech-to-song illusion. Mem Cognit 2022; 50:1804-1815. PMID: 35083717; PMCID: PMC9767999; DOI: 10.3758/s13421-021-01269-9.
Abstract
In the speech-to-song illusion, a spoken phrase is presented repeatedly and begins to sound as if it is being sung. Anecdotal reports suggest that subsequent presentations of a previously heard phrase enhance the illusion, even if several hours or days have elapsed between presentations. In Experiment 1, we examined in a controlled laboratory setting whether memory traces for a previously heard phrase would influence song-like ratings of a subsequent presentation of that phrase. Word lists that were played several times throughout the experimental session were rated as more song-like at the end of the experiment than word lists that were played only once. In Experiment 2, we examined whether the memory traces that influenced the speech-to-song illusion were abstract or exemplar-based by playing some word lists several times during the experiment in the same voice, and others several times but in different voices. Word lists played in the same voice were rated as more song-like at the end of the experiment than word lists played in different voices. Many previous studies have examined how various aspects of the stimulus itself influence the perception of the speech-to-song illusion; the present experiments demonstrate that memory traces of the stimulus also influence the illusion.
4
Vitevitch MS, Ng JW, Hatley E, Castro N. Phonological but not semantic influences on the speech-to-song illusion. Q J Exp Psychol (Hove) 2021; 74:585-597. PMID: 33089742; PMCID: PMC8287799; DOI: 10.1177/1747021820969144.
Abstract
In the speech-to-song illusion, a spoken phrase begins to sound as if it is being sung after several repetitions. Castro et al. (2018) used Node Structure Theory (NST; MacKay, 1987), a model of speech perception and production, to explain how the illusion occurs. Two experiments further test the mechanisms found in NST (priming, activation, and satiation) as an account of the speech-to-song illusion. In Experiment 1, phonological clustering coefficient influenced how quickly a lexical node could recover from satiation, thereby influencing song-like ratings for lists of words that were high versus low in phonological clustering coefficient. In Experiment 2, we used equivalence testing (i.e., the TOST procedure) to demonstrate that once lexical nodes are satiated, the higher-level semantic information associated with a word cannot differentially influence song-like ratings for lists of words varying in emotional arousal. The results of these two experiments further support the NST account of the speech-to-song illusion.
5
Cournoyer Lemaire E. Extraordinary times call for extraordinary measures: the use of music to communicate public health recommendations against the spread of COVID-19. Can J Public Health 2020; 111:477-479. PMID: 32696141; PMCID: PMC7373831; DOI: 10.17269/s41997-020-00379-2.
Abstract
To promote the population's adherence to COVID-19 public health preventive measures, the Quebec (Canada) government solicited the assistance of local music artists. This commentary aims to show how music has been used to communicate public health recommendations during the COVID-19 pandemic and to discuss, with support from research, the relevance of using music in this context. More specifically, music is discussed in terms of its powerful capacity to reach a large population; to capture the population's attention quickly and at scale despite age, language, or cultural barriers; to communicate messages effectively; and to influence individual behaviour. In this regard, the current COVID-19 pandemic demonstrates how music can be used as a communication tool and offers an interesting perspective for the consideration of music in future public health research.
Affiliation(s)
- Elise Cournoyer Lemaire
- Faculty of Medicine and Health Sciences, Department of Community Health Sciences, University of Sherbrooke, Longueuil, QC, Canada.
6
Groenveld G, Burgoyne JA, Sadakata M. I still hear a melody: investigating temporal dynamics of the Speech-to-Song Illusion. Psychol Res 2019; 84:1451-1459. PMID: 30627768; DOI: 10.1007/s00426-018-1135-z.
Abstract
The Speech-to-Song Illusion (STS) refers to a dramatic shift in the perception of short speech fragments which, when presented repeatedly, may start to sound like song. Anecdotally, once a speech fragment is perceived as song, it is difficult to unhear its melody, and these temporal dynamics of the STS illusion have theoretical implications. The goal of the current study was to capture this temporal effect. In our experiment, speech fragments that initially did not elicit the STS illusion were manipulated to have increasingly stable F0 contours, strengthening the perceived 'song-likeness' of a fragment. Over the course of trials, the manipulated speech fragments were presented repeatedly within blocks in decreasing, increasing, or random orders of F0 manipulation. A presentation order in which participants first heard the sentence with the maximum amount of F0 manipulation (the decreasing condition) yielded consistently higher overall song-like ratings than the other presentation orders (increasing or random). Our results thus capture the commonly reported phenomenon that it is hard to 'unhear' the illusion once a speech segment has been perceived as song.
Affiliation(s)
- Gerben Groenveld
- Musicology Department, University of Amsterdam, Amsterdam, The Netherlands
- John Ashley Burgoyne
- Musicology Department, University of Amsterdam, Amsterdam, The Netherlands
- Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, The Netherlands
- Makiko Sadakata
- Musicology Department, University of Amsterdam, Amsterdam, The Netherlands
- Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, The Netherlands
- Artificial Intelligence Department, Radboud University, Nijmegen, The Netherlands
7
Freitas C, Manzato E, Burini A, Taylor MJ, Lerch JP, Anagnostou E. Neural Correlates of Familiarity in Music Listening: A Systematic Review and a Neuroimaging Meta-Analysis. Front Neurosci 2018; 12:686. PMID: 30344470; PMCID: PMC6183416; DOI: 10.3389/fnins.2018.00686.
Abstract
Familiarity in music has been reported as an important factor modulating emotional and hedonic responses in the brain. Familiarity and repetition may increase the liking of a piece of music, thus inducing positive emotions. Neuroimaging studies have focused on identifying the brain regions involved in the processing of familiar and unfamiliar musical stimuli. However, the use of different modalities and experimental designs has led to discrepant results, and it is not clear which areas of the brain are most reliably engaged when listening to familiar and unfamiliar musical excerpts. In the present study, we conducted a systematic review of three databases (Medline, PsycINFO, and Embase) using the keywords (recognition OR familiar OR familiarity OR exposure effect OR repetition) AND (music OR song) AND (brain OR brains OR neuroimaging OR functional Magnetic Resonance Imaging OR Positron Emission Tomography OR Electroencephalography OR Event Related Potential OR Magnetoencephalography). Of the 704 titles identified, 23 neuroimaging studies met our inclusion criteria for the systematic review. After removing studies providing insufficient information or contrasts, 11 studies (involving 212 participants) qualified for the meta-analysis using the activation likelihood estimation (ALE) approach. Our results found no significant peak activations consistent across the included studies. Using a less conservative approach (p < 0.001, uncorrected for multiple comparisons), we found that the left superior frontal gyrus, the ventral lateral (VL) nucleus of the left thalamus, and the left medial surface of the superior frontal gyrus had the highest likelihood of being activated by familiar music, whereas the left insula and the right anterior cingulate cortex had the highest likelihood of being activated by unfamiliar music. We had expected limbic structures to emerge as top clusters when listening to familiar music; instead, music familiarity showed a motor pattern of activation. This could reflect audio-motor synchronization to the rhythm, which is more engaging for familiar tunes, and/or a covert sing-along response anticipating the melodic, harmonic, rhythmic, timbral, and lyric events in familiar songs. These data highlight the need for larger neuroimaging studies to understand the neural correlates of music familiarity.
Affiliation(s)
- Carina Freitas
- Faculty of Medicine, Institute of Medical Science, University of Toronto, Toronto, ON, Canada
- Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
- Margot J. Taylor
- Faculty of Medicine, Institute of Medical Science, University of Toronto, Toronto, ON, Canada
- Department of Diagnostic Imaging, Hospital for Sick Children, Toronto, ON, Canada
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Neuroscience & Mental Health Program, Hospital for Sick Children Research Institute, Toronto, ON, Canada
- Jason P. Lerch
- Neuroscience & Mental Health Program, Hospital for Sick Children Research Institute, Toronto, ON, Canada
- Mouse Imaging Centre, Hospital for Sick Children, Toronto, ON, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
- Evdokia Anagnostou
- Faculty of Medicine, Institute of Medical Science, University of Toronto, Toronto, ON, Canada
- Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
- Neuroscience & Mental Health Program, Hospital for Sick Children Research Institute, Toronto, ON, Canada
- Department of Pediatrics, University of Toronto, Toronto, ON, Canada
8
Castro N, Mendoza JM, Tampke EC, Vitevitch MS. An account of the Speech-to-Song Illusion using Node Structure Theory. PLoS One 2018; 13:e0198656. PMID: 29883451; PMCID: PMC5993277; DOI: 10.1371/journal.pone.0198656.
Abstract
In the Speech-to-Song Illusion, repetition of a spoken phrase results in it being perceived as if it were sung. Although a number of previous studies have examined which characteristics of a stimulus produce the illusion, there has until now been no description of the cognitive mechanism that underlies it. We suggest that the processes in Node Structure Theory that are used to explain normal language processing, as well as other auditory illusions, might also account for the Speech-to-Song Illusion. In six experiments we tested whether satiation of lexical nodes, combined with continued priming of syllable nodes, may lead to the Speech-to-Song Illusion. The results of these experiments provide evidence for the role of priming, activation, and satiation, as described in Node Structure Theory, in explaining the Speech-to-Song Illusion.
Affiliation(s)
- Nichol Castro
- Spoken Language Laboratory, Department of Psychology, University of Kansas, Lawrence, Kansas, United States of America
- Joshua M. Mendoza
- Spoken Language Laboratory, Department of Psychology, University of Kansas, Lawrence, Kansas, United States of America
- Elizabeth C. Tampke
- Spoken Language Laboratory, Department of Psychology, University of Kansas, Lawrence, Kansas, United States of America
- Michael S. Vitevitch
- Spoken Language Laboratory, Department of Psychology, University of Kansas, Lawrence, Kansas, United States of America
9
Graber E, Simchy-Gross R, Margulis EH. Musical and linguistic listening modes in the speech-to-song illusion bias timing perception and absolute pitch memory. J Acoust Soc Am 2017; 142:3593. PMID: 29289094; DOI: 10.1121/1.5016806.
Abstract
The speech-to-song (STS) illusion is a phenomenon in which some spoken utterances perceptually transform to song after repetition [Deutsch, Henthorn, and Lapidis (2011). J. Acoust. Soc. Am. 129, 2245-2252]. Tierney, Dick, Deutsch, and Sereno [(2013). Cereb. Cortex. 23, 249-254] developed a set of stimuli where half tend to transform to perceived song with repetition and half do not. Those that transform and those that do not can be understood to induce a musical or linguistic mode of listening, respectively. By comparing performance on perceptual tasks related to transforming and non-transforming utterances, the current study examines whether the musical mode of listening entails higher sensitivity to temporal regularity and better absolute pitch (AP) memory compared to the linguistic mode. In experiment 1, inter-stimulus intervals within STS trials were steady, slightly variable, or highly variable. Participants reported how temporally regular utterance entrances were. In experiment 2, participants performed an AP memory task after a blocked STS exposure phase. Utterances identically matching those used in the exposure phase were targets among transposed distractors in the test phase. Results indicate that listeners exhibit heightened awareness of temporal manipulations but reduced awareness of AP manipulations to transforming utterances. This methodology establishes a framework for implicitly differentiating musical from linguistic perception.
Affiliation(s)
- Emily Graber
- Center for Computer Research in Music and Acoustics, Stanford University, 660 Lomita Court, Stanford, California 94305, USA
- Rhimmon Simchy-Gross
- Department of Psychological Science, University of Arkansas, 216 Memorial Hall, Fayetteville, Arkansas 72701, USA
10
Abstract
Vocal theories of the origin of language rarely make a case for the precursor functions that underlay the evolution of speech. The vocal expression of emotion is unquestionably the best candidate for such a precursor, although most evolutionary models of both language and speech ignore emotion and prosody altogether. I present here a model for a joint prosodic precursor of language and music in which ritualized group-level vocalizations served as the ancestral state. This precursor combined not only affective and intonational aspects of prosody, but also holistic and combinatorial mechanisms of phrase generation. From this common stage, there was a bifurcation to form language and music as separate, though homologous, specializations. This separation of language and music was accompanied by their (re)unification in songs with words.
Affiliation(s)
- Steven Brown
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, ON, Canada
11
Acoustic parameters of infant-directed singing in mothers of infants with Down syndrome. Infant Behav Dev 2017; 49:151-160. PMID: 28934613; DOI: 10.1016/j.infbeh.2017.09.001.
Abstract
This study compared the acoustic parameters and degree of perceived warmth in two types of infant-directed (ID) songs, the lullaby and the playsong, between mothers of infants with Down syndrome (DS) and mothers of typically-developing (TD) infants. Participants included mothers of 15 DS infants and 15 TD infants between 3 and 9 months of age. Each mother's singing voice was digitally recorded while singing to her infant and subjected to feature extraction and data mining. Mothers of DS infants and TD infants sang both lullabies and playsongs with similar frequency. In comparison with mothers of TD infants, mothers of DS infants used a higher maximum pitch and more key changes during playsong. Mothers of DS infants also took more time to establish a rhythmic structure in their singing. These differences suggest mothers are sensitive to the attentional and arousal needs of their DS infants. Mothers of TD infants sang with a higher degree of perceived warmth, which does not agree with previous observations of "forceful warmth" in mothers of DS infants. In comparison with lullabies, all mothers sang playsongs with higher overall pitch and slower tempo. Playsongs were also distinguished by higher levels of spectral centroid properties related to emotional expressivity, as well as higher degrees of perceived warmth. These similarities help to define specific song types, and suggest that all mothers, including mothers of DS infants, sing in an expressive manner that can modulate infant arousal.
12
Peretz I, Vuvan D, Lagrois MÉ, Armony JL. Neural overlap in processing music and speech. Philos Trans R Soc Lond B Biol Sci 2016; 370:20140090. PMID: 25646513; DOI: 10.1098/rstb.2014.0090.
Abstract
Neural overlap in processing music and speech, as measured by the co-activation of brain regions in neuroimaging studies, may suggest that parts of the neural circuitries established for language were recycled during evolution for musicality or, vice versa, that musicality served as a springboard for the emergence of language. Such a perspective has important implications for several topics of general interest besides evolutionary origins. For instance, neural overlap is an important premise for the possibility that music training influences language acquisition and literacy. However, neural overlap in processing music and speech does not entail shared neural circuitry: neural separability between music and speech may occur within overlapping brain regions. In this paper, we review the evidence, outline the issues faced in interpreting such neural data, and argue that converging evidence from several methodologies is needed before neural overlap is taken as evidence of sharing.
Affiliation(s)
- Isabelle Peretz
- International Laboratory of Brain, Music and Sound Research (BRAMS), and Center for Research on Brain, Language and Music (CRBLM), University of Montreal, Montreal, Quebec, Canada
- Department of Psychology, University of Montreal, Montreal, Quebec, Canada
- Dominique Vuvan
- International Laboratory of Brain, Music and Sound Research (BRAMS), and Center for Research on Brain, Language and Music (CRBLM), University of Montreal, Montreal, Quebec, Canada
- Department of Psychology, University of Montreal, Montreal, Quebec, Canada
- Marie-Élaine Lagrois
- International Laboratory of Brain, Music and Sound Research (BRAMS), and Center for Research on Brain, Language and Music (CRBLM), University of Montreal, Montreal, Quebec, Canada
- Department of Psychology, University of Montreal, Montreal, Quebec, Canada
- Jorge L Armony
- International Laboratory of Brain, Music and Sound Research (BRAMS), and Center for Research on Brain, Language and Music (CRBLM), University of Montreal, Montreal, Quebec, Canada
- Department of Psychiatry, McGill University and Douglas Mental Health University Institute, Montreal, Quebec, Canada
13
Vanden Bosch der Nederlanden CM, Hannon EE, Snyder JS. Finding the music of speech: Musical knowledge influences pitch processing in speech. Cognition 2015; 143:135-140. PMID: 26151370; DOI: 10.1016/j.cognition.2015.06.015.
Abstract
Few studies comparing music and language processing have adequately controlled for low-level acoustical differences, making it unclear whether differences in music and language processing arise from domain-specific knowledge, acoustic characteristics, or both. We controlled acoustic characteristics by using the speech-to-song illusion, which often results in a perceptual transformation to song after several repetitions of an utterance. Participants performed a same-different pitch discrimination task for the initial repetition (heard as speech) and the final repetition (heard as song). Better detection was observed for pitch changes that violated rather than conformed to Western musical scale structure, but only when utterances transformed to song, indicating that music-specific pitch representations were activated and influenced perception. This shows that music-specific processes can be activated when an utterance is heard as song, suggesting that the high-level status of a stimulus as either language or music can be behaviorally dissociated from low-level acoustic factors.
14
Bhatara A, Laukka P, Levitin DJ. Expression of emotion in music and vocal communication: Introduction to the research topic. Front Psychol 2014; 5:399. PMID: 24829557; PMCID: PMC4017128; DOI: 10.3389/fpsyg.2014.00399.
Affiliation(s)
- Anjali Bhatara
- Sorbonne Paris Cité, Université Paris Descartes, Paris, France
- Laboratoire Psychologie de la Perception, CNRS, UMR 8242, Paris, France
- Petri Laukka
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Daniel J Levitin
- Department of Psychology, McGill University, Montreal, QC, Canada