1
Kachlicka M, Tierney A. Voice actors show enhanced neural tracking of pitch, prosody perception, and music perception. Cortex 2024; 178:213-222. PMID: 39024939. DOI: 10.1016/j.cortex.2024.06.016.
Abstract
Experiences with sound that make strong demands on the precision of perception, such as musical training and experience speaking a tone language, can enhance auditory neural encoding. Are high demands on the precision of perception necessary for training to drive auditory neural plasticity? Voice actors are an ideal subject population for answering this question. Voice acting requires exaggerating prosodic cues to convey emotion, character, and linguistic structure, drawing upon attention to sound, memory for sound features, and accurate sound production, but not fine perceptual precision. Here we assessed neural encoding of pitch using the frequency-following response (FFR), as well as prosody, music, and sound perception, in voice actors and a matched group of non-actors. We find that the consistency of neural sound encoding, prosody perception, and musical phrase perception are all enhanced in voice actors, suggesting that a range of neural and behavioural auditory processing enhancements can result from training which lacks fine perceptual precision. However, fine discrimination was not enhanced in voice actors but was linked to degree of musical experience, suggesting that low-level auditory processing can only be enhanced by demanding perceptual training. These findings suggest that training which taxes attention, memory, and production but is not perceptually taxing may be a way to boost neural encoding of sound and auditory pattern detection in individuals with poor auditory skills.
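The "consistency of neural sound encoding" reported above is commonly quantified as a split-half (inter-trial) correlation of the averaged FFR: trials are repeatedly divided into random halves, each half is averaged, and the two averages are correlated. The abstract does not give the authors' exact pipeline, so the sketch below illustrates this general family of measures under assumed conventions; the function name and parameters are illustrative, not taken from the study.

```python
import numpy as np

def neural_consistency(trials, n_splits=100, rng=None):
    """Split-half consistency of an averaged evoked response such as the FFR.

    trials: (n_trials, n_samples) array of single-trial responses.
    Repeatedly splits the trials into two random halves, averages each half,
    and correlates the two averages; returns the mean correlation over splits.
    """
    rng = np.random.default_rng(rng)
    n_trials = trials.shape[0]
    rs = []
    for _ in range(n_splits):
        order = rng.permutation(n_trials)
        half_a = trials[order[: n_trials // 2]].mean(axis=0)
        half_b = trials[order[n_trials // 2 :]].mean(axis=0)
        rs.append(np.corrcoef(half_a, half_b)[0, 1])
    return float(np.mean(rs))
```

For a response containing a repeatable signal the correlation approaches 1 as trial count grows, while for pure noise it hovers near 0, which is what makes the measure a useful index of encoding fidelity.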
Affiliation(s)
- Magdalena Kachlicka
- School of Psychological Sciences, Birkbeck, University of London, London, UK
- Adam Tierney
- School of Psychological Sciences, Birkbeck, University of London, London, UK
2
Kachlicka M, Patel AD, Liu F, Tierney A. Weighting of cues to categorization of song versus speech in tone-language and non-tone-language speakers. Cognition 2024; 246:105757. PMID: 38442588. DOI: 10.1016/j.cognition.2024.105757.
Abstract
One of the most important auditory categorization tasks a listener faces is determining a sound's domain, a process which is a prerequisite for successful within-domain categorization tasks such as recognizing different speech sounds or musical tones. Speech and song are universal in human cultures: how do listeners categorize a sequence of words as belonging to one or the other of these domains? There is growing interest in the acoustic cues that distinguish speech and song, but it remains unclear whether there are cross-cultural differences in the evidence upon which listeners rely when making this fundamental perceptual categorization. Here we use the speech-to-song illusion, in which some spoken phrases perceptually transform into song when repeated, to investigate cues to this domain-level categorization in native speakers of tone languages (Mandarin and Cantonese speakers residing in the United Kingdom and China) and in native speakers of a non-tone language (English). We find that native tone-language and non-tone-language listeners largely agree on which spoken phrases sound like song after repetition, and we also find that the strength of this transformation is not significantly different across language backgrounds or countries of residence. Furthermore, we find a striking similarity in the cues upon which listeners rely when perceiving word sequences as singing versus speech, including small pitch intervals, flat within-syllable pitch contours, and steady beats. These findings support the view that there are certain widespread cross-cultural similarities in the mechanisms by which listeners judge if a word sequence is spoken or sung.
Affiliation(s)
- Magdalena Kachlicka
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, United Kingdom
- Aniruddh D Patel
- Department of Psychology, Tufts University, 419 Boston Ave, Medford, USA; Program in Brain, Mind, and Consciousness, Canadian Institute for Advanced Research, 661 University Avenue, Toronto, Canada
- Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Whiteknights, Reading, United Kingdom
- Adam Tierney
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, United Kingdom
3
Symons AE, Holt LL, Tierney AT. Informational masking influences segmental and suprasegmental speech categorization. Psychon Bull Rev 2024; 31:686-696. PMID: 37658222. PMCID: PMC11061029. DOI: 10.3758/s13423-023-02364-5.
Abstract
Auditory categorization requires listeners to integrate acoustic information from multiple dimensions. Attentional theories suggest that acoustic dimensions that are informative attract attention and therefore receive greater perceptual weight during categorization. However, the acoustic environment is often noisy, with multiple sound sources competing for listeners' attention. Amid these adverse conditions, attentional theories predict that listeners will distribute attention more evenly across multiple dimensions. Here we test this prediction using an informational masking paradigm. In two experiments, listeners completed suprasegmental (focus) and segmental (voicing) speech categorization tasks in quiet or in the presence of competing speech. In both experiments, the target speech consisted of short words or phrases that varied in the extent to which fundamental frequency (F0) and durational information signalled category identity. To isolate effects of informational masking, target and competing speech were presented in opposite ears. Across both experiments, there was substantial individual variability in the relative weighting of the two dimensions. These individual differences were consistent across listening conditions, suggesting that they reflect stable perceptual strategies. Consistent with attentional theories of auditory categorization, listeners who relied on a single primary dimension in quiet shifted towards integrating across multiple dimensions in the presence of competing speech. These findings demonstrate that listeners make greater use of the redundancy present in speech when attentional resources are limited.
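One common way to estimate the "relative weighting of the two dimensions" described above is to fit a logistic regression predicting each listener's binary category responses from the standardized F0 and duration values, then normalize the coefficient magnitudes. The abstract does not specify the authors' analysis, so this numpy-only sketch is a hypothetical illustration of the general approach; the function name and default parameters are assumptions.

```python
import numpy as np

def relative_cue_weight(x1, x2, responses, lr=0.1, steps=2000):
    """Relative perceptual weight of cue x1 versus cue x2.

    x1, x2:    z-scored acoustic cue values (e.g., F0 and duration), shape (n,)
    responses: binary category responses (0/1), shape (n,)
    Fits a tiny logistic regression by gradient ascent and returns
    |b1| / (|b1| + |b2|): 1.0 means the listener relies only on cue 1,
    0.5 means both cues contribute equally.
    """
    X = np.column_stack([np.ones_like(x1), x1, x2])
    w = np.zeros(3)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted P(category 1)
        w += lr * X.T @ (responses - p) / len(responses)
    b1, b2 = abs(w[1]), abs(w[2])
    return b1 / (b1 + b2)
```

A weight near 1 indicates reliance on a single primary dimension; values near 0.5 indicate integration across both cues, the pattern the study observed under informational masking.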
Affiliation(s)
- A E Symons
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
- L L Holt
- Department of Psychology and Neuroscience Institute, Carnegie Mellon University, 500 Forbes Avenue, Pittsburgh, PA, USA
- A T Tierney
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
4
Shorey AE, King CJ, Whiteford KL, Stilp CE. Musical training is not associated with spectral context effects in instrument sound categorization. Atten Percept Psychophys 2024; 86:991-1007. PMID: 38216848. DOI: 10.3758/s13414-023-02839-6.
Abstract
Musicians display a variety of auditory perceptual benefits relative to people with little or no musical training; these benefits are collectively referred to as the "musician advantage." Importantly, musicians consistently outperform nonmusicians for tasks relating to pitch, but there are mixed reports as to musicians outperforming nonmusicians for timbre-related tasks. Due to their experience manipulating the timbre of their instrument or voice in performance, we hypothesized that musicians would be more sensitive to acoustic context effects stemming from the spectral changes in timbre across a musical context passage (played by a string quintet then filtered) and a target instrument sound (French horn or tenor saxophone; Experiment 1). Additionally, we investigated the role of a musician's primary instrument of instruction by recruiting French horn and tenor saxophone players to also complete this task (Experiment 2). Consistent with the musician advantage literature, musicians exhibited superior pitch discrimination to nonmusicians. Contrary to our main hypothesis, there was no difference between musicians and nonmusicians in how spectral context effects shaped instrument sound categorization. Thus, musicians may only outperform nonmusicians for some auditory skills relevant to music (e.g., pitch perception) but not others (e.g., timbre perception via spectral differences).
Affiliation(s)
- Anya E Shorey
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY 40292, USA
- Caleb J King
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY 40292, USA
- Kelly L Whiteford
- Department of Psychology, University of Minnesota, Minneapolis, MN 55455, USA
- Christian E Stilp
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY 40292, USA
5
Creel SC, Obiri-Yeboah M, Rose S. Language-to-music transfer effects depend on the tone language: Akan vs. East Asian tone languages. Mem Cognit 2023; 51:1624-1639. PMID: 37052771. PMCID: PMC10100610. DOI: 10.3758/s13421-023-01416-4.
Abstract
Recent research suggests that speaking a tone language confers benefits in processing pitch in nonlinguistic contexts such as music. This research largely compares speakers of nontone European languages (English, French) with speakers of tone languages in East Asia (Mandarin, Cantonese, Vietnamese, Thai). However, tone languages exist on multiple continents-notably, languages indigenous to Africa and the Americas. With one exception (Bradley, Psychomusicology, 26(4), 337-345, 2016), no research has assessed whether these tone languages also confer pitch processing advantages. Two studies presented a melody change detection task, using quasirandom note sequences drawn from Western major scale tone probabilities. Listeners were speakers of Akan, a tone language of Ghana, plus speakers from previously tested populations (nontone language speakers and East Asian tone language speakers). In both cases, East Asian tone language speakers showed the strongest musical pitch processing, but Akan speakers did not exceed nontone speakers, despite comparable or better instrument change detection. Results suggest more nuanced effects of tone languages on pitch processing. Greater numbers of tones, presence of contour tones in a language's tone inventory, or possibly greater functional load of tone may be more likely to confer pitch processing benefits than mere presence of tone contrasts.
Affiliation(s)
- Sarah C. Creel
- UC San Diego Cognitive Science, 9500 Gilman Drive Mail Code 0515, La Jolla, CA 92093-0515, USA
- Michael Obiri-Yeboah
- Georgetown University Linguistics, Washington, DC, USA
- UC San Diego Linguistics, San Diego, CA, USA
6
Liu J, Hilton CB, Bergelson E, Mehr SA. Language experience predicts music processing in a half-million speakers of fifty-four languages. Curr Biol 2023; 33:1916-1925.e4. PMID: 37105166. PMCID: PMC10306420. DOI: 10.1016/j.cub.2023.03.067.
Abstract
Tonal languages differ from other languages in their use of pitch (tones) to distinguish words. Lifelong experience speaking and hearing tonal languages has been argued to shape auditory processing in ways that generalize beyond the perception of linguistic pitch to the perception of pitch in other domains like music. We conducted a meta-analysis of prior studies testing this idea, finding moderate evidence supporting it. But prior studies were limited by mostly small sample sizes representing a small number of languages and countries, making it challenging to disentangle the effects of linguistic experience from variability in music training, cultural differences, and other potential confounds. To address these issues, we used web-based citizen science to assess music perception skill on a global scale in 34,034 native speakers of 19 tonal languages (e.g., Mandarin, Yoruba). We compared their performance to 459,066 native speakers of other languages, including 6 pitch-accented (e.g., Japanese) and 29 non-tonal languages (e.g., Hungarian). Whether or not participants had taken music lessons, native speakers of all 19 tonal languages had an improved ability to discriminate musical melodies on average, relative to speakers of non-tonal languages. But this improvement came with a trade-off: tonal language speakers were also worse at processing the musical beat. The results, which held across native speakers of many diverse languages and were robust to geographic and demographic variation, demonstrate that linguistic experience shapes music perception, with implications for relations between music, language, and culture in the human mind.
Affiliation(s)
- Jingxuan Liu
- Columbia Business School, Columbia University, 665 W 130th Street, New York, NY 10027, USA; Department of Psychology & Neuroscience, Duke University, 417 Chapel Drive, Durham, NC 27708, USA
- Courtney B Hilton
- Yale Child Study Center, Yale University, 300 George Street #900, New Haven, CT 06511, USA; School of Psychology, University of Auckland, 23 Symonds Street, Auckland 1010, New Zealand
- Elika Bergelson
- Department of Psychology & Neuroscience, Duke University, 417 Chapel Drive, Durham, NC 27708, USA
- Samuel A Mehr
- Yale Child Study Center, Yale University, 300 George Street #900, New Haven, CT 06511, USA; School of Psychology, University of Auckland, 23 Symonds Street, Auckland 1010, New Zealand
7
Theunissen F. Language and music: Singing voices and music talent. Curr Biol 2023; 33:R418-R420. PMID: 37220737. DOI: 10.1016/j.cub.2023.03.086.
Abstract
Native speakers of tonal languages show enhanced musical melody perception but diminished rhythm abilities. This effect has now been rigorously demonstrated in a new study that tested the musical IQ of half a million human participants across the globe.
Affiliation(s)
- Frédéric Theunissen
- Department of Psychology, Integrative Biology, and Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA
8
Jasmin K, Tierney A, Obasih C, Holt L. Short-term perceptual reweighting in suprasegmental categorization. Psychon Bull Rev 2023; 30:373-382. PMID: 35915382. PMCID: PMC9971089. DOI: 10.3758/s13423-022-02146-5.
Abstract
Segmental speech units such as phonemes are described as multidimensional categories whose perception involves contributions from multiple acoustic input dimensions, and the relative perceptual weights of these dimensions respond dynamically to context. For example, when speech is altered to create an "accent" in which two acoustic dimensions are correlated in a manner opposite that of long-term experience, the dimension that carries less perceptual weight is down-weighted to contribute less in category decisions. It remains unclear, however, whether this short-term reweighting extends to perception of suprasegmental features that span multiple phonemes, syllables, or words, in part because it has remained debatable whether suprasegmental features are perceived categorically. Here, we investigated the relative contribution of two acoustic dimensions to word emphasis. Participants categorized instances of a two-word phrase pronounced with typical covariation of fundamental frequency (F0) and duration, and in the context of an artificial "accent" in which F0 and duration (established in prior research on English speech as "primary" and "secondary" dimensions, respectively) covaried atypically. When categorizing "accented" speech, listeners rapidly down-weighted the secondary dimension (duration). This result indicates that listeners continually track short-term regularities across speech input and dynamically adjust the weight of acoustic evidence for suprasegmental decisions. Thus, dimension-based statistical learning appears to be a widespread phenomenon in speech perception extending to both segmental and suprasegmental categorization.
Affiliation(s)
- Kyle Jasmin
- Department of Psychology, Wolfson Building, Royal Holloway, University of London, Egham, Surrey, TW20 0EX, UK
- Lori Holt
- Carnegie Mellon University, Pittsburgh, PA, USA
9
D'Ascenzo S, Scerrati E, Villani C, Galatolo R, Lugli L, Nicoletti R. Does social distancing affect the processing of brand logos? Brain Behav 2022; 12:e2501. PMID: 35212187. PMCID: PMC8933757. DOI: 10.1002/brb3.2501.
Abstract
Social distancing and isolation were imposed to curb the spread of COVID-19. The present study investigates whether social distancing affects our cognitive system, in particular the processing of different types of brand logos at different moments of the pandemic in Italy. In a size discrimination task, six logos belonging to three categories (letters, symbols, and social images) were presented in their original format and in a spaced format. Two samples of participants were tested: one just after the pandemic reached Italy, the other six months later. Results showed an overall distancing effect (i.e., spaced stimuli are processed more slowly than original ones) that interacted with sample, reaching significance only for participants in the second sample. In both groups, however, the distancing effect was modulated by the type of logo, emerging only for social images. These results suggest that social distancing behaviours have been integrated into our cognitive system, as they appear to affect our perception of distance when social images are involved.
Affiliation(s)
- Stefania D'Ascenzo
- Department of Philosophy and Communication, University of Bologna, Bologna, Italy
- Elisa Scerrati
- Department of Biomedical, Metabolic and Neuroscience, University of Modena and Reggio Emilia, Reggio Emilia, Italy
- Caterina Villani
- Department of Philosophy and Communication, University of Bologna, Bologna, Italy
- Renata Galatolo
- Department of Philosophy and Communication, University of Bologna, Bologna, Italy
- Luisa Lugli
- Department of Philosophy and Communication, University of Bologna, Bologna, Italy
- Roberto Nicoletti
- Department of Philosophy and Communication, University of Bologna, Bologna, Italy
10
Symons AE, Dick F, Tierney AT. Dimension-selective attention and dimensional salience modulate cortical tracking of acoustic dimensions. Neuroimage 2021; 244:118544. PMID: 34492294. DOI: 10.1016/j.neuroimage.2021.118544.
Abstract
Some theories of auditory categorization suggest that auditory dimensions that are strongly diagnostic for particular categories - for instance voice onset time or fundamental frequency in the case of some spoken consonants - attract attention. However, prior cognitive neuroscience research on auditory selective attention has largely focused on attention to simple auditory objects or streams, and so little is known about the neural mechanisms that underpin dimension-selective attention, or how the relative salience of variations along these dimensions might modulate neural signatures of attention. Here we investigate whether dimensional salience and dimension-selective attention modulate the cortical tracking of acoustic dimensions. In two experiments, participants listened to tone sequences varying in pitch and spectral peak frequency; these two dimensions changed at different rates. Inter-trial phase coherence (ITPC) and amplitude of the EEG signal at the frequencies tagged to pitch and spectral changes provided a measure of cortical tracking of these dimensions. In Experiment 1, tone sequences varied in the size of the pitch intervals, while the size of spectral peak intervals remained constant. Cortical tracking of pitch changes was greater for sequences with larger compared to smaller pitch intervals, with no difference in cortical tracking of spectral peak changes. In Experiment 2, participants selectively attended to either pitch or spectral peak. Cortical tracking was stronger in response to the attended compared to unattended dimension for both pitch and spectral peak. These findings suggest that attention can enhance the cortical tracking of specific acoustic dimensions rather than simply enhancing tracking of the auditory object as a whole.
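Inter-trial phase coherence at a frequency-tagged rate, as used above, is the length of the mean resultant vector of per-trial phases at that frequency: 1 when every trial has the same phase, near 0 when phases are random. A minimal numpy sketch follows; the synthetic data and parameter values are illustrative, not taken from the study.

```python
import numpy as np

def itpc(trials, fs, freq):
    """Inter-trial phase coherence at one tagged frequency.

    trials: (n_trials, n_samples) array of EEG epochs
    fs:     sampling rate in Hz
    freq:   frequency of interest in Hz (e.g., the rate of pitch changes)
    Returns a value in [0, 1]; 1 means identical phase on every trial.
    """
    n = trials.shape[1]
    spectra = np.fft.rfft(trials, axis=1)
    k = int(round(freq * n / fs))           # FFT bin nearest the tagged rate
    phases = np.angle(spectra[:, k])        # one phase estimate per trial
    return float(np.abs(np.mean(np.exp(1j * phases))))

# Synthetic demo: a 5 Hz component that is phase-locked across trials
# versus one whose phase is random from trial to trial.
fs, n = 250, 1000
t = np.arange(n) / fs
rng = np.random.default_rng(0)
locked = np.stack([np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(n)
                   for _ in range(100)])
jittered = np.stack([np.sin(2 * np.pi * 5 * t + rng.uniform(0, 2 * np.pi))
                     for _ in range(100)])
```

Phase-locked trials yield ITPC near 1, while random-phase trials yield values near 1/sqrt(n_trials), so comparing ITPC at the pitch-tagged versus spectral-tagged rates indexes cortical tracking of each dimension separately.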
Affiliation(s)
- Ashley E Symons
- Department of Psychological Sciences, Birkbeck College, University of London, UK
- Fred Dick
- Department of Psychological Sciences, Birkbeck College, University of London, UK; Division of Psychology & Language Sciences, University College London, UK
- Adam T Tierney
- Department of Psychological Sciences, Birkbeck College, University of London, UK
11
Choi W. Musicianship Influences Language Effect on Musical Pitch Perception. Front Psychol 2021; 12:712753. PMID: 34690869. PMCID: PMC8527392. DOI: 10.3389/fpsyg.2021.712753.
Abstract
Given its practical implications, the effect of musicianship on language learning has been extensively researched. Interestingly, growing evidence suggests that language experience can also facilitate music perception, although the precise nature of this facilitation is not fully understood. To address this research gap, I investigated the interactive effect of language and musicianship on musical pitch and rhythm perception. Cantonese and English listeners, each divided into musician and non-musician groups, completed the Musical Ear Test and Raven's 2 Progressive Matrices. An interactive effect of language and musicianship was found for musical pitch but not rhythm perception. Consistent with previous studies, Cantonese language experience appeared to facilitate musical pitch perception, but this facilitatory effect was present only among the non-musicians; among the musicians, Cantonese language experience offered no perceptual advantage. These findings show that musicianship modulates the effect of language on musical pitch perception and, together with previous findings, offer two theoretical implications for the OPERA hypothesis: bi-directionality, and the mechanisms through which language experience and musicianship interact in different domains.
Affiliation(s)
- William Choi
- Academic Unit of Human Communication, Development, and Information Sciences, The University of Hong Kong, Hong Kong SAR, China
12
Jasmin K, Dick F, Tierney AT. The Multidimensional Battery of Prosody Perception (MBOPP). Wellcome Open Res 2021; 5:4. PMID: 35282675. PMCID: PMC8881696. DOI: 10.12688/wellcomeopenres.15607.2.
Abstract
Prosody can be defined as the rhythm and intonation patterns spanning words, phrases and sentences. Accurate perception of prosody is an important component of many aspects of language processing, such as parsing grammatical structures, recognizing words, and determining where emphasis may be placed. Prosody perception is important for language acquisition and can be impaired in language-related developmental disorders. However, existing assessments of prosodic perception suffer from some shortcomings. These include being unsuitable for use with typically developing adults due to ceiling effects and failing to allow the investigator to distinguish the unique contributions of individual acoustic features such as pitch and temporal cues. Here we present the Multi-Dimensional Battery of Prosody Perception (MBOPP), a novel tool for the assessment of prosody perception. It consists of two subtests: Linguistic Focus, which measures the ability to hear emphasis or sentential stress, and Phrase Boundaries, which measures the ability to hear where in a compound sentence one phrase ends, and another begins. Perception of individual acoustic dimensions (Pitch and Duration) can be examined separately, and test difficulty can be precisely calibrated by the experimenter because stimuli were created using a continuous voice morph space. We present validation analyses from a sample of 59 individuals and discuss how the battery might be deployed to examine perception of prosody in various populations.
Affiliation(s)
- Kyle Jasmin
- Department of Psychology, Royal Holloway, University of London, Egham, TW20 0EX, UK
- Frederic Dick
- Psychological Sciences, Birkbeck, University of London, London, WC1E 7HX, UK
13
Beccacece L, Abondio P, Cilli E, Restani D, Luiselli D. Human Genomics and the Biocultural Origin of Music. Int J Mol Sci 2021; 22:5397. PMID: 34065521. PMCID: PMC8160972. DOI: 10.3390/ijms22105397.
Abstract
Music is an exclusive feature of humankind. It can be considered a form of universal communication, only partly comparable to the vocalizations of songbirds. Several lines of research in this field address the origins of music as well as the genetic bases of musicality. On one hand, several hypotheses have been advanced about the evolution of music and its role, but debate continues, and comparative studies suggest a gradual evolution of some abilities underlying musicality in primates. On the other hand, genome-wide studies highlight several genes associated with musical aptitude, confirming a genetic basis for the different musical skills that humans display. Moreover, some genes associated with musicality are also involved in singing and song learning in songbirds, suggesting a likely evolutionary convergence between humans and songbirds. This comprehensive review presents the concept of music as a sociocultural manifestation within the current debate about its biocultural origin and evolutionary function, in the context of the most recent discoveries related to the cross-species genetics of musical production and perception.
Affiliation(s)
- Livia Beccacece
- Laboratory of Molecular Anthropology, Department of Biological, Geological and Environmental Sciences, University of Bologna, 40126 Bologna, Italy
- Paolo Abondio
- Laboratory of Molecular Anthropology, Department of Biological, Geological and Environmental Sciences, University of Bologna, 40126 Bologna, Italy
- Elisabetta Cilli
- Department of Cultural Heritage, University of Bologna, Ravenna Campus, 48121 Ravenna, Italy
- Donatella Restani
- Department of Cultural Heritage, University of Bologna, Ravenna Campus, 48121 Ravenna, Italy
- Donata Luiselli
- Department of Cultural Heritage, University of Bologna, Ravenna Campus, 48121 Ravenna, Italy