1. Frühholz S, Rodriguez P, Bonard M, Steiner F, Bobin M. Psychoacoustic and archeoacoustic nature of ancient Aztec skull whistles. Commun Psychol 2024; 2:108. [PMID: 39528620] [PMCID: PMC11555264] [DOI: 10.1038/s44271-024-00157-7]
Abstract
Many ancient cultures used musical tools for social and ritual procedures, with the Aztec skull whistle being a unique exemplar from postclassic Mesoamerica. Skull whistles can produce softer hiss-like but also aversive and scream-like sounds that were potentially meaningful for the sacrificial practices, mythological symbolism, or intimidating warfare of the Aztecs. However, solid psychoacoustic evidence for any of these theories is missing, especially regarding how human listeners cognitively and affectively respond to skull whistle sounds. Using psychoacoustic listening and classification experiments, we show that skull whistle sounds are predominantly perceived as aversive and scary and as having a hybrid natural-artificial origin. Skull whistle sounds attract mental attention by affectively mimicking other aversive and startling sounds produced by nature and technology. They were psychoacoustically classified as a hybrid mix of being voice- and scream-like but also as originating from technical mechanisms. Using human neuroimaging, we furthermore found that skull whistle sounds received specific decoding of their affective significance in the neural auditory system of human listeners, accompanied by higher-order auditory cognition and symbolic evaluations in fronto-insular-parietal brain systems. Skull whistles thus seem to be unique sound tools with specific psycho-affective effects on listeners, and Aztec communities might have capitalized on their scary and scream-like nature.
Affiliation(s)
- Sascha Frühholz
- Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland.
- Department of Psychology, University of Oslo, Oslo, Norway.
- Pablo Rodriguez
- Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland
- Mathilde Bonard
- Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland
- Florence Steiner
- Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland
- Marine Bobin
- Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland
2. Kattner F, Föcker J, Moshona CC, Marsh JE. When softer sounds are more distracting: task-irrelevant whispered speech causes disruption of serial recall. J Acoust Soc Am 2024; 156:3632-3648. [PMID: 39589332] [DOI: 10.1121/10.0034454]
Abstract
Two competing accounts propose that the disruption of short-term memory by irrelevant speech arises either due to interference-by-process (e.g., changing-state effect) or attentional capture, but it is unclear how whispering affects the irrelevant speech effect. According to the interference-by-process account, whispered speech should be less disruptive due to its reduced periodic spectro-temporal fine structure and lower amplitude modulations. In contrast, the attentional account predicts more disruption by whispered speech, possibly via enhanced listening effort in the case of a comprehended language. In two experiments, voiced and whispered speech (spoken sentences or monosyllabic words) were presented while participants memorized the order of visually presented letters. In both experiments, a changing-state effect was observed regardless of the phonation (sentences produced more disruption than "steady-state" words). Moreover, whispered speech (lower fluctuation strength) was more disruptive than voiced speech when participants understood the language (Experiment 1), but not when the language was incomprehensible (Experiment 2). The results suggest two functionally distinct mechanisms of auditory distraction: While changing-state speech causes automatic interference with seriation processes regardless of its meaning or intelligibility, whispering appears to contain cues that divert attention from the focal task primarily when presented in a comprehended language, possibly via enhanced listening effort.
Affiliation(s)
- Florian Kattner
- Institute for Mind, Brain and Behavior, Health and Medical University, Schiffbauergasse 14, 14467 Potsdam, Germany
- Julia Föcker
- College of Health and Science, School of Psychology, Sport Science and Wellbeing, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, United Kingdom
- Cleopatra Christina Moshona
- Engineering Acoustics, Institute of Fluid Dynamics and Technical Acoustics, Technische Universität Berlin, Einsteinufer 25, 10587 Berlin, Germany
- John E Marsh
- School of Psychology and Humanities, University of Central Lancashire, Preston, PR1 2HE, United Kingdom
- Department of Health, Learning and Technology, Luleå University of Technology, Luleå, Sweden
3. Piña Méndez Á, Taitz A, Palacios Rodríguez O, Rodríguez Leyva I, Assaneo MF. Speech's syllabic rhythm and articulatory features produced under different auditory feedback conditions identify Parkinsonism. Sci Rep 2024; 14:15787. [PMID: 38982177] [PMCID: PMC11233651] [DOI: 10.1038/s41598-024-65974-6]
Abstract
Diagnostic tests for Parkinsonism based on speech samples have shown promising results. Although abnormal auditory feedback integration during speech production and impaired rhythmic organization of speech are known in Parkinsonism, these aspects have not been incorporated into diagnostic tests. This study aimed to identify Parkinsonism using a novel speech behavioral test that involved rhythmically repeating syllables under different auditory feedback conditions. The study included 30 individuals with Parkinson's disease (PD) and 30 healthy subjects. Participants were asked to rhythmically repeat the PA-TA-KA syllable sequence, both whispering and speaking aloud, under various listening conditions. The results showed that, compared to controls, individuals with PD had difficulties whispering and articulating under altered auditory feedback conditions, exhibited delayed speech onset, and demonstrated inconsistent rhythmic structure across trials. These parameters were then fed into a supervised machine-learning algorithm to differentiate between the two groups. The algorithm achieved an accuracy of 85.4%, a sensitivity of 86.5%, and a specificity of 84.3%. This pilot study highlights the potential of the proposed behavioral paradigm as an objective and accessible (in both cost and time) test for identifying individuals with Parkinson's disease.
Affiliation(s)
- Ángeles Piña Méndez
- Faculty of Psychology, Autonomous University of San Luis Potosí, San Luis Potosí, Mexico
- M Florencia Assaneo
- Institute of Neurobiology, National Autonomous University of Mexico, Querétaro, Mexico.
4. Trost W, Trevor C, Fernandez N, Steiner F, Frühholz S. Live music stimulates the affective brain and emotionally entrains listeners in real time. Proc Natl Acad Sci U S A 2024; 121:e2316306121. [PMID: 38408255] [DOI: 10.1073/pnas.2316306121]
Abstract
Music is powerful in conveying emotions and triggering affective brain mechanisms. However, affective brain responses in previous studies were rather inconsistent, potentially because of the non-adaptive nature of the recorded music used so far. Live music, in contrast, can be dynamic and adaptive, and is often modulated in response to audience feedback to maximize emotional responses in listeners. Here, we introduce a closed-loop neurofeedback setup for studying emotional responses to live music. This setup linked live performances by musicians to neural processing in listeners, with listeners' amygdala activity displayed to musicians in real time. Brain activity was measured using functional MRI, and amygdala activity in particular was quantified in real time for the neurofeedback signal. Live pleasant and unpleasant piano music performed in response to amygdala neurofeedback from listeners was acoustically very different from comparable recorded music and elicited significantly higher and more consistent amygdala activity. Higher activity was also found in a broader neural network for emotion processing during live compared to recorded music. This included a predominance of aversive coding in the ventral striatum while listening to unpleasant music, and involvement of the thalamic pulvinar nucleus, presumably for regulating attentional and cortical flow mechanisms. Live music also stimulated a dense functional neural network with the amygdala as a central node influencing other brain systems. Finally, only live music showed a strong and positive coupling between features of the musical performance and brain activity in listeners, pointing to real-time and dynamic entrainment processes.
Affiliation(s)
- Wiebke Trost
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Caitlyn Trevor
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Natalia Fernandez
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Florence Steiner
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Sascha Frühholz
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich 8057, Switzerland
- Department of Psychology, University of Oslo, Oslo 0373, Norway
5. Steiner F, Fernandez N, Dietziker J, Stämpfli P, Seifritz E, Rey A, Frühholz S. Affective speech modulates a cortico-limbic network in real time. Prog Neurobiol 2022; 214:102278. [DOI: 10.1016/j.pneurobio.2022.102278]
6. ASMR amplifies low frequency and reduces high frequency oscillations. Cortex 2022; 149:85-100. [DOI: 10.1016/j.cortex.2022.01.004]
7. Nonverbal auditory communication - evidence for integrated neural systems for voice signal production and perception. Prog Neurobiol 2020; 199:101948. [PMID: 33189782] [DOI: 10.1016/j.pneurobio.2020.101948]
Abstract
While humans have developed a sophisticated and unique system of verbal auditory communication, they also share a more common and evolutionarily important nonverbal channel of voice signaling with many other mammalian and vertebrate species. This nonverbal communication is mediated and modulated by the acoustic properties of a voice signal, and is a powerful - yet often neglected - means of sending and perceiving socially relevant information. From the viewpoint of dyadic (involving a sender and a signal receiver) voice signal communication, we discuss the integrated neural dynamics in primate nonverbal voice signal production and perception. Most previous neurobiological models of voice communication modelled these neural dynamics from the limited perspective of either voice production or perception, largely disregarding the neural and cognitive commonalities of both functions. Taking a dyadic perspective on nonverbal communication, however, it turns out that the neural systems for voice production and perception are surprisingly similar. Based on the interdependence of both production and perception functions in communication, we first propose a re-grouping of the neural mechanisms of communication into auditory, limbic, and paramotor systems, with special consideration for a subsidiary basal-ganglia-centered system. Second, we propose that the similarity in the neural systems involved in voice signal production and perception is the result of the co-evolution of nonverbal voice production and perception systems promoted by their strong interdependence in dyadic interactions.
8. Dricu M, Frühholz S. A neurocognitive model of perceptual decision-making on emotional signals. Hum Brain Mapp 2020; 41:1532-1556. [PMID: 31868310] [PMCID: PMC7267943] [DOI: 10.1002/hbm.24893]
Abstract
Humans make various kinds of decisions about which emotions they perceive in others. Although it might seem like a split-second phenomenon, deliberating over which emotions we perceive unfolds across several stages of decisional processing. Neurocognitive models of general perception postulate that our brain first extracts sensory information about the world, then integrates these data into a percept, and lastly interprets it. The aim of the present study was to build an evidence-based neurocognitive model of perceptual decision-making on others' emotions. We conducted a series of meta-analyses of neuroimaging data spanning 30 years on explicit evaluations of others' emotional expressions. We find that emotion perception is rather an umbrella term for various perception paradigms, each with distinct neural structures that underlie task-related cognitive demands. Furthermore, the left amygdala was responsive across all classes of decisional paradigms, regardless of task-related demands. Based on these observations, we propose a neurocognitive model that outlines the information flow in the brain needed for successful evaluation of and decisions on other individuals' emotions.
HIGHLIGHTS:
- Emotion classification involves heterogeneous perception and decision-making tasks.
- Decision-making processes on emotions are rarely covered by existing emotion theories.
- We propose an evidence-based neurocognitive model of decision-making on emotions.
- Bilateral brain processes underlie nonverbal decisions; left-hemisphere processes underlie verbal decisions.
- The left amygdala is involved in any kind of decision on emotions.
Affiliation(s)
- Mihai Dricu
- Department of Psychology, University of Bern, Bern, Switzerland
- Sascha Frühholz
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich (ZNZ), University of Zurich and ETH Zurich, Zurich, Switzerland
- Center for Integrative Human Physiology (ZIHP), University of Zurich, Zurich, Switzerland
9. Frühholz S, Trost W, Grandjean D, Belin P. Neural oscillations in human auditory cortex revealed by fast fMRI during auditory perception. Neuroimage 2020; 207:116401. [DOI: 10.1016/j.neuroimage.2019.116401]
10. Chorna O, Filippa M, De Almeida JS, Lordier L, Monaci MG, Hüppi P, Grandjean D, Guzzetta A. Neuroprocessing mechanisms of music during fetal and neonatal development: a role in neuroplasticity and neurodevelopment. Neural Plast 2019; 2019:3972918. [PMID: 31015828] [PMCID: PMC6446122] [DOI: 10.1155/2019/3972918]
Abstract
The primary aim of this viewpoint article is to examine recent literature on fetal and neonatal processing of music. In particular, we examine the behavioral, neurophysiological, and neuroimaging literature describing fetal and neonatal music perception and processing up to the first days of term-equivalent life. Secondly, in light of the recent systematic reviews published on this topic, we discuss the impact of music interventions on the potential neuroplasticity pathways through which early exposure to music, live or recorded, may affect the fetal, preterm, and full-term infant brain. We conclude with recommendations for music stimuli selection and its role within the framework of early socioemotional development and environmental enrichment.
Affiliation(s)
- O. Chorna
- Department of Developmental Neuroscience, IRCCS Fondazione Stella Maris, Pisa, Italy
- M. Filippa
- Division of Development and Growth, Department of Pediatrics, University Hospital of Geneva, Geneva, Switzerland
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
- Social Science Department, University of Valle d'Aosta, Aosta, Italy
- J. Sa De Almeida
- Division of Development and Growth, Department of Pediatrics, University Hospital of Geneva, Geneva, Switzerland
- L. Lordier
- Division of Development and Growth, Department of Pediatrics, University Hospital of Geneva, Geneva, Switzerland
- M. G. Monaci
- Social Science Department, University of Valle d'Aosta, Aosta, Italy
- P. Hüppi
- Division of Development and Growth, Department of Pediatrics, University Hospital of Geneva, Geneva, Switzerland
- D. Grandjean
- Swiss Center for Affective Sciences and Department of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- A. Guzzetta
- Department of Developmental Neuroscience, IRCCS Fondazione Stella Maris, Pisa, Italy
- Department of Clinical and Experimental Medicine, University of Pisa, Pisa, Italy