1. Jahn KN, Wiegand-Shahani BM, Moturi V, Kashiwagura ST, Doak KR. Cochlear-implant simulated spectral degradation attenuates emotional responses to environmental sounds. Int J Audiol 2024:1-7. [PMID: 39146030] [DOI: 10.1080/14992027.2024.2385552]
Abstract
OBJECTIVE Cochlear implants (CIs) provide users with a spectrally degraded acoustic signal that could impact their auditory emotional experiences. This study evaluated the effects of CI-simulated spectral degradation on emotional valence and arousal elicited by environmental sounds. DESIGN Thirty emotionally evocative sounds were filtered through a noise-band vocoder. Participants rated the perceived valence and arousal elicited by each of the full-spectrum and vocoded stimuli. These ratings were compared across acoustic conditions (full-spectrum, vocoded) and as a function of stimulus type (unpleasant, neutral, pleasant). STUDY SAMPLE Twenty-five young adults (age 19 to 34 years) with normal hearing. RESULTS Emotional responses were less extreme for spectrally degraded (i.e., vocoded) sounds than for full-spectrum sounds. Specifically, spectrally degraded stimuli were perceived as more negative and less arousing than full-spectrum stimuli. CONCLUSION Because the study replicated CI spectral degradation while controlling for variables that are confounded in CI users, these findings indicate that CI spectral degradation can compress the range of sound-induced emotion independent of hearing loss and other idiosyncratic device- or person-level variables. Future work will characterize emotional reactions to sound in CI users via objective, psychoacoustic, and subjective measures.
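The noise-band vocoding used here as the degradation method is straightforward to sketch. Below is a minimal illustration on a mono signal, assuming log-spaced band edges, Hilbert-envelope extraction, and a channel count chosen for illustration only; the study's exact vocoder settings are not reproduced.

```python
# Minimal noise-band vocoder sketch (channel count and cutoffs are
# placeholders, not the study's settings; x is a mono signal).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, sr, n_channels=8, lo=100.0, hi=8000.0):
    """Replace the fine structure in each band with envelope-modulated noise."""
    edges = np.geomspace(lo, hi, n_channels + 1)   # log-spaced band edges
    noise = np.random.randn(len(x))
    out = np.zeros_like(x)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=sr, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))                # band envelope
        carrier = sosfiltfilt(sos, noise)          # band-limited noise carrier
        out += env * carrier
    return out / (np.max(np.abs(out)) + 1e-12)     # normalize to avoid clipping
```

Each analysis band's envelope re-modulates band-limited noise, discarding spectral fine structure while preserving the coarse envelope cues that CI processing transmits.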
Affiliation(s)
- Kelly N Jahn: Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, TX, USA; Callier Center for Communication Disorders, The University of Texas at Dallas, Dallas, TX, USA
- Braden M Wiegand-Shahani: Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, TX, USA; Callier Center for Communication Disorders, The University of Texas at Dallas, Dallas, TX, USA
- Vaishnavi Moturi: Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, TX, USA
- Sean Takamoto Kashiwagura: Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, TX, USA; Callier Center for Communication Disorders, The University of Texas at Dallas, Dallas, TX, USA
- Karlee R Doak: Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, TX, USA; Callier Center for Communication Disorders, The University of Texas at Dallas, Dallas, TX, USA
2. Barbosa Escobar F, Wang QJ. Inducing Novel Sound-Taste Correspondences via an Associative Learning Task. Cogn Sci 2024; 48:e13421. [PMID: 38500336] [DOI: 10.1111/cogs.13421]
Abstract
The interest in crossmodal correspondences, including those between sounds and tastes, has experienced rapid growth in recent years. However, the mechanisms underlying these correspondences are not well understood. In the present study (N = 302), we used an associative learning paradigm, based on previous literature using simple sounds with no consensual taste associations (i.e., square and triangle wave sounds at 200 Hz) and taste words (i.e., sweet and bitter), to test the influence of two potential mechanisms in establishing sound-taste correspondences and to investigate whether either learning mechanism could give rise to new and long-lasting associations. Specifically, we examined an emotional mediation account (i.e., using sad and happy emoji facial expressions) and a transitive path (i.e., sound-taste correspondences being mediated by color, using red and black colored squares). The results revealed that the associative learning paradigm mapping the triangle wave tone with a happy emoji facial expression induced a novel crossmodal correspondence between this sound and the word sweet. Importantly, we found that this novel association was still present two months after the experimental learning paradigm. None of the other mappings, emotional or transitive, gave rise to any significant associations between sound and taste. These findings provide evidence that new crossmodal correspondences between sounds and tastes can be created by leveraging the affective connection between both dimensions, helping elucidate the mechanisms underlying these associations. Moreover, these findings reveal that these associations can last for several weeks after the experimental session through which they were induced.
Affiliation(s)
- Francisco Barbosa Escobar: Department of Food Science, Faculty of Science, University of Copenhagen; Department of Marketing, Copenhagen Business School
- Qian Janice Wang: Department of Food Science, Faculty of Science, University of Copenhagen
3. Ma W, Bowers L, Behrend D, Hellmuth Margulis E, Forde Thompson W. Child word learning in song and speech. Q J Exp Psychol (Hove) 2024; 77:343-362. [PMID: 37073951] [DOI: 10.1177/17470218231172494]
Abstract
Listening to sung words rather than spoken words can facilitate word learning and memory in adults and school-aged children. To explore the development of this effect in young children, this study examined word learning (assessed as forming word-object associations) in 1- to 2-year-olds and 3- to 4-year-olds, and word long-term memory (LTM) in 4- to 5-year-olds several days after the initial learning. In an intermodal preferential looking paradigm, children were taught a pair of words presented in adult-directed speech (ADS) and a pair of sung words. Word learning performance was better with sung words than with ADS words in 1- to 2-year-olds (Experiments 1a and 1b), 3- to 4-year-olds (Experiment 1a), and 4- to 5-year-olds (Experiment 2b), revealing a benefit of song in word learning across all age ranges recruited. We also examined whether children successfully learned the words by comparing their performance against chance. The 1- to 2-year-olds only learned sung words, but the 3- to 4-year-olds learned both sung and ADS words, suggesting that the reliance on music features in word learning observed at ages 1-2 decreased with age. Furthermore, song facilitated word mapping and recognition processes. The 4- to 5-year-olds' LTM performance did not differ between sung and ADS words. However, the 4- to 5-year-olds reliably recalled sung words but not spoken words. The reliable LTM of sung words arose from hearing sung words during the initial learning rather than at test. Finally, the benefit of song on word learning and the reliable LTM of sung words observed at ages 3-5 cannot be explained as an attentional effect.
Affiliation(s)
- Weiyi Ma: School of Human Environmental Sciences, University of Arkansas, Fayetteville, AR, USA
- Lisa Bowers: Department of Rehabilitation, Human Resources and Communication Disorders, University of Arkansas, Fayetteville, AR, USA
- Douglas Behrend: Department of Psychological Science, University of Arkansas, Fayetteville, AR, USA
4. Wang L, Hu X, Ren Y, Lv J, Zhao S, Guo L, Liu T, Han J. Arousal modulates the amygdala-insula reciprocal connectivity during naturalistic emotional movie watching. Neuroimage 2023; 279:120316. [PMID: 37562718] [DOI: 10.1016/j.neuroimage.2023.120316]
Abstract
Emotional arousal is a complex state recruiting distributed cortical and subcortical structures, in which the amygdala and insula play an important role. Although previous neuroimaging studies have shown that the amygdala and insula manifest reciprocal connectivity, the effective connectivities and modulatory patterns on the amygdala-insula interactions underpinning arousal are still largely unknown. One reason may be the static and discrete laboratory brain imaging paradigms used in most existing studies. In this study, by integrating naturalistic-paradigm (i.e., movie watching) functional magnetic resonance imaging (fMRI) with a computational affective model that predicts dynamic arousal for the movie stimuli, we investigated the effective amygdala-insula interactions and the modulatory effect of the input arousal on the effective connections. Specifically, the predicted dynamic arousal of the movie served as regressors in a general linear model (GLM) analysis and brain activations were identified accordingly. The regions of interest (i.e., the bilateral amygdala and insula) were localized according to the GLM activation map. The effective connectivity and modulatory effect were then inferred using dynamic causal modeling (DCM). Our experimental results demonstrated that the amygdala was the site of the driving arousal input and that arousal had a modulatory effect on the reciprocal connections between amygdala and insula. Our study provides novel evidence on the underlying neural mechanisms of arousal in a dynamic naturalistic setting.
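The GLM step described above (a model-predicted arousal time course used as a regressor for movie-watching fMRI) can be sketched as follows. The HRF shape, data, and array sizes are placeholders, and the subsequent DCM stage is not reproduced.

```python
# Sketch of the GLM step: regress each voxel's BOLD series on an
# HRF-convolved arousal predictor. All names and data are illustrative.
import numpy as np

def hrf(t):                        # simplified gamma-shaped HRF (not SPM's exact form)
    return t ** 5 * np.exp(-t) / 120.0

tr, n_scans = 2.0, 300
kernel = hrf(np.arange(0, 30, tr))
arousal = np.random.rand(n_scans)                    # predicted arousal per TR (placeholder)
regressor = np.convolve(arousal, kernel)[:n_scans]   # convolve and trim to scan length

X = np.column_stack([regressor, np.ones(n_scans)])   # design matrix with intercept
Y = np.random.randn(n_scans, 5000)                   # voxel time series (placeholder data)
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)         # beta[0]: arousal effect per voxel
```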
Affiliation(s)
- Liting Wang: School of Automation, Northwestern Polytechnical University, Xi'an, China
- Xintao Hu: School of Automation, Northwestern Polytechnical University, Xi'an, China
- Yudan Ren: School of Information Science and Technology, Northwest University, Xi'an, China
- Jinglei Lv: School of Biomedical Engineering and Brain and Mind Centre, University of Sydney, Sydney, Australia
- Shijie Zhao: School of Automation, Northwestern Polytechnical University, Xi'an, China
- Lei Guo: School of Automation, Northwestern Polytechnical University, Xi'an, China
- Tianming Liu: School of Computing, University of Georgia, Athens, USA
- Junwei Han: School of Automation, Northwestern Polytechnical University, Xi'an, China
5. Ma W, Zhou P, Liang X, Thompson WF. Children across cultures respond emotionally to the acoustic environment. Cogn Emot 2023; 37:1144-1152. [PMID: 37338002] [DOI: 10.1080/02699931.2023.2225850]
Abstract
Among human and non-human animals, the ability to respond rapidly to biologically significant events in the environment is essential for survival and development. Research has confirmed that human adult listeners respond emotionally to environmental sounds by relying on the same acoustic cues that signal emotionality in speech prosody and music. However, it is unknown whether young children also respond emotionally to environmental sounds. Here, we report that changes in pitch, rate (i.e., playback speed), and intensity (i.e., amplitude) of environmental sounds trigger emotional responses in 3- to 6-year-old American and Chinese children, across four sound types: sounds of human actions, animal calls, machinery, and natural phenomena such as wind and waves. Children's responses did not differ across the four types of sounds used but developed with age - a finding observed in both American and Chinese children. Thus, the ability to respond emotionally to non-linguistic, non-music environmental sounds is evident at three years of age - an age when the ability to decode emotional prosody in language and music emerges. We argue that general mechanisms that support emotional prosody decoding are engaged by all sounds, as reflected in emotional responses to non-linguistic acoustic input such as music and environmental sounds.
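Two of the cue manipulations named above, rate and intensity, are simple signal operations (pitch shifting requires a dedicated algorithm). A minimal sketch, with file names and scaling factors that are hypothetical rather than the study's actual levels:

```python
# Illustrative rate (resampling) and intensity (gain) manipulations of an
# environmental sound. File names and factors are made up for illustration.
import numpy as np
import soundfile as sf
from scipy.signal import resample

x, sr = sf.read("dog_bark.wav")              # hypothetical stimulus file
faster = resample(x, int(len(x) / 1.5))      # played at sr: 1.5x speed, raised pitch
louder = np.clip(x * 10 ** (6 / 20), -1, 1)  # +6 dB gain, clipped to valid range
sf.write("dog_bark_fast.wav", faster, sr)
sf.write("dog_bark_loud.wav", louder, sr)
```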
Affiliation(s)
- Weiyi Ma: School of Human Environmental Sciences, University of Arkansas, Fayetteville, AR, USA
- Peng Zhou: School of International Studies, Zhejiang University, Hangzhou, People's Republic of China
- Xinya Liang: Department of Counseling, Leadership, and Research Methods, University of Arkansas, Fayetteville, AR, USA
6. Singh M, Mehr SA. Universality, domain-specificity, and development of psychological responses to music. Nat Rev Psychol 2023; 2:333-346. [PMID: 38143935] [PMCID: PMC10745197] [DOI: 10.1038/s44159-023-00182-z]
Abstract
Humans can find music happy, sad, fearful, or spiritual. They can be soothed by it or urged to dance. Whether these psychological responses reflect cognitive adaptations that evolved expressly for responding to music is an ongoing topic of study. In this Review, we examine three features of music-related psychological responses that help to elucidate whether the underlying cognitive systems are specialized adaptations: universality, domain-specificity, and early expression. Focusing on emotional and behavioural responses, we find evidence that the relevant psychological mechanisms are universal and arise early in development. However, the existing evidence cannot establish that these mechanisms are domain-specific. To the contrary, many findings suggest that universal psychological responses to music reflect more general properties of emotion, auditory perception, and other human cognitive capacities that evolved for non-musical purposes. Cultural evolution, driven by the tinkering of musical performers, evidently crafts music to compellingly appeal to shared psychological mechanisms, resulting in both universal patterns (such as form-function associations) and culturally idiosyncratic styles.
Affiliation(s)
- Manvir Singh: Institute for Advanced Study in Toulouse, University of Toulouse 1 Capitole, Toulouse, France
- Samuel A. Mehr: Yale Child Study Center, Yale University, New Haven, CT, USA; School of Psychology, University of Auckland, Auckland, New Zealand
7. Slow tempo music preserves attentional efficiency in young children. Atten Percept Psychophys 2022; 85:978-984. [PMID: 36577915] [DOI: 10.3758/s13414-022-02602-3]
Abstract
Past research has shown that listening to slow- or fast-tempo music can affect adults' executive attention (EA) performance. This study examined the immediate impact of brief exposure to slow- or fast-tempo music on EA performance in 4- to 6-year-old children. A within-subject design was used, where each child completed three blocks of the EA task after listening to fast-tempo music (fast-tempo block), slow-tempo music (slow-tempo block), and ocean waves (control block), with block order counterbalanced. In each block, children were also asked to report their pre-task subjective emotional status (experienced arousal and valence) before listening to music and their post-task emotional status after the EA task. Three major results emerged. First, reaction time (RT) was significantly faster in the slow-tempo block than in the fast-tempo block, suggesting that listening to slow-tempo music preserves processing efficiency relative to fast-tempo music. Second, children's accuracy rate in the EA task did not differ across blocks. Third, children's subjective emotional status did not differ across blocks and did not change across the pre- and post-task phases in any block, suggesting the faster RT observed in the slow-tempo block cannot be explained by changes in arousal or mood.
8. Musical and Non-Musical Sounds Influence the Flavour Perception of Chocolate Ice Cream and Emotional Responses. Foods 2022; 11:1784. [PMID: 35741981] [PMCID: PMC9223177] [DOI: 10.3390/foods11121784]
Abstract
Auditory cues, such as real-world sounds or music, influence how we perceive food. The main aim of the present study was to investigate the influence of negatively and positively valenced mixtures of musical and non-musical sounds on the affective states of participants and their perception of chocolate ice cream. Consuming ice cream while listening to liked music (LM) and while listening to the combination of liked music and pleasant sound (LMPS) gave rise to more positive emotions than listening to just pleasant sound (PS). Consuming ice cream during the LM condition resulted in the longest duration of perceived sweetness. On the other hand, the PS and LMPS conditions resulted in cocoa dominating for longer. Bitterness and roasted notes were dominant under the disliked music and unpleasant sound (DMUS) and disliked music (DM) conditions, respectively. Positive emotions correlated well with the temporal sensory perception of sweetness and cocoa when consuming chocolate ice cream under the positively valenced auditory conditions. In contrast, negative emotions were associated with bitter and roasted tastes/flavours under the negatively valenced auditory conditions. The combination of pleasant music and non-musical sound evoked more positive emotions than either presented in isolation. Taken together, the results of this study support the view that sensory attributes correlated well with the emotions evoked when consuming ice cream under auditory conditions varying in valence.
9. Grall C, Finn ES. Leveraging the power of media to drive cognition: a media-informed approach to naturalistic neuroscience. Soc Cogn Affect Neurosci 2022; 17:598-608. [PMID: 35257180] [PMCID: PMC9164202] [DOI: 10.1093/scan/nsac019]
Abstract
So-called 'naturalistic' stimuli have risen in popularity in cognitive, social and affective neuroscience over the last 15 years. However, a critical property of these stimuli is frequently overlooked: media (film, television, books and podcasts) are 'fundamentally not natural'. They are deliberately crafted products meant to elicit particular human thought, emotion and behavior. Here, we argue for a more informed approach to adopting media stimuli in experimental paradigms. We discuss the pitfalls of combining stimuli that are designed for research with those that are designed for other purposes (e.g. entertainment) under the umbrella term of 'naturalistic' and present strategies to improve rigor in the stimulus selection process. We assert that experiencing media should be considered a task akin to any other experimental task(s) and explain how this shift in perspective will compel more nuanced and generalizable research using these stimuli. Throughout, we offer theoretical and practical knowledge from multidisciplinary media research to raise the standard for the treatment of media stimuli in neuroscience research.
Affiliation(s)
- Clare Grall: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
- Emily S Finn: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
10. Tawdrous MM, D'Onofrio KL, Gifford R, Picou EM. Emotional Responses to Non-Speech Sounds for Hearing-aid and Bimodal Cochlear-Implant Listeners. Trends Hear 2022; 26:23312165221083091. [PMID: 35435773] [PMCID: PMC9019384] [DOI: 10.1177/23312165221083091]
Abstract
The purpose of this project was to evaluate differences between groups and device configurations for emotional responses to non-speech sounds. Three groups of adults participated: 1) listeners with normal hearing with no history of device use, 2) hearing aid candidates with or without hearing aid experience, and 3) bimodal cochlear-implant listeners with at least 6 months of implant use. Participants (n = 18 in each group) rated valence and arousal of pleasant, neutral, and unpleasant non-speech sounds. Listeners with normal hearing rated sounds without hearing devices. Hearing aid candidates rated sounds while using one or two hearing aids. Bimodal cochlear-implant listeners rated sounds while using a hearing aid alone, a cochlear implant alone, or the hearing aid and cochlear implant simultaneously. Analysis revealed significant differences between groups in ratings of pleasant and unpleasant stimuli; ratings from hearing aid candidates and bimodal cochlear-implant listeners were less extreme (less pleasant and less unpleasant) than were ratings from listeners with normal hearing. Hearing aid candidates' ratings were similar with one and two hearing aids. Bimodal cochlear-implant listeners' ratings of valence were higher (more pleasant) in the configuration without a hearing aid (implant only) than in the two configurations with a hearing aid (alone or with an implant). These data support the need for further investigation into hearing device optimization to improve emotional responses to non-speech sounds for adults with hearing loss.
Affiliation(s)
- Marina M. Tawdrous: School of Communication Sciences and Disorders, Western University, 1151 Richmond St, London, ON, N6A 3K7
- Kristen L. D'Onofrio: Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232; Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- René Gifford: Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232; Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- Erin M. Picou: Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232; Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
11. Bedoya D, Arias P, Rachman L, Liuni M, Canonne C, Goupil L, Aucouturier JJ. Even violins can cry: specifically vocal emotional behaviours also drive the perception of emotions in non-vocal music. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200396. [PMID: 34719254] [PMCID: PMC8558776] [DOI: 10.1098/rstb.2020.0396]
Abstract
A wealth of theoretical and empirical arguments has suggested that music triggers emotional responses by resembling the inflections of expressive vocalizations, but has done so using low-level acoustic parameters (pitch, loudness, speed) that, in fact, may not be processed by the listener in reference to the human voice. Here, we take advantage of recently available computational models that allow the simulation of three specifically vocal emotional behaviours: smiling, vocal tremor and vocal roughness. When applied to musical material, we find that these three acoustic manipulations trigger emotional perceptions that are remarkably similar to those observed for speech and scream sounds, and identical across musician and non-musician listeners. Strikingly, this holds not only for singing voice with and without musical background, but also for purely instrumental material. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part I)'.
Affiliation(s)
- D Bedoya: Science and Technology of Music and Sound, IRCAM/CNRS/Sorbonne Université, Paris, France
- P Arias: Science and Technology of Music and Sound, IRCAM/CNRS/Sorbonne Université, Paris, France; Department of Cognitive Science, Lund University, Lund, Sweden
- L Rachman: Faculty of Medical Sciences, University of Groningen, Groningen, The Netherlands
- M Liuni: Alta Voce SAS, Houilles, France
- C Canonne: Science and Technology of Music and Sound, IRCAM/CNRS/Sorbonne Université, Paris, France
- L Goupil: BabyDevLab, University of East London, London, UK
- J-J Aucouturier: FEMTO-ST Institute, Université de Bourgogne Franche-Comté/CNRS, Besançon, France
12. Picou EM, Rakita L, Buono GH, Moore TM. Effects of Increasing the Overall Level or Fitting Hearing Aids on Emotional Responses to Sounds. Trends Hear 2021; 25:23312165211049938. [PMID: 34866509] [PMCID: PMC8825634] [DOI: 10.1177/23312165211049938]
Abstract
Adults with hearing loss demonstrate a reduced range of emotional responses to nonspeech sounds compared to their peers with normal hearing. The purpose of this study was to evaluate two possible strategies for addressing the effects of hearing loss on emotional responses: (a) increasing overall level and (b) hearing aid use (with and without nonlinear frequency compression, NFC). Twenty-three adults (mean age = 65.5 years) with mild-to-severe sensorineural hearing loss and 17 adults (mean age = 56.2 years) with normal hearing participated. All adults provided ratings of valence and arousal without hearing aids in response to nonspeech sounds presented at a moderate and at a high level. Adults with hearing loss also provided ratings while using individually fitted study hearing aids with two settings (NFC-OFF or NFC-ON). Hearing loss and hearing aid use impacted ratings of valence but not arousal. Listeners with hearing loss rated pleasant sounds as less pleasant than their peers, confirming findings in the extant literature. For both groups, increasing the overall level resulted in lower ratings of valence. For listeners with hearing loss, the use of hearing aids (NFC-OFF) also resulted in lower ratings of valence but to a lesser extent than increasing the overall level. Activating NFC resulted in ratings that were similar to ratings without hearing aids (with a moderate presentation level) but did not improve ratings to match those from the listeners with normal hearing. These findings suggest that current interventions do not ameliorate the effects of hearing loss on emotional responses to sound.
Affiliation(s)
- Erin M Picou: Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, TN, USA
- Lori Rakita: Department of Otolaryngology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
- Gabrielle H Buono: Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, TN, USA
13. Jiang J, Meng Q, Ji J. Combining Music and Indoor Spatial Factors Helps to Improve College Students' Emotion During Communication. Front Psychol 2021; 12:703908. [PMID: 34594267] [PMCID: PMC8476911] [DOI: 10.3389/fpsyg.2021.703908]
Abstract
Against the background of weakening face-to-face social interaction, the mental health of college students deserves attention. There are few existing studies on the impact of audiovisual interaction on interactive behavior, especially emotional perception in specific spaces. This study aims to indicate whether the perception of one's music environment influences college students' emotions during communication in different indoor conditions, including spatial function, visual and sound atmospheres, and interior furnishings. The three-dimensional pleasure-arousal-dominance (PAD) emotional model was used to evaluate changes in emotion before and after communication. An acoustic environmental measurement was performed, and evaluations of emotion during communication were investigated through a questionnaire survey with 331 participants at six experimental sites: a classroom (CR), a learning corridor (LC), a coffee shop (CS), a fast food restaurant (FFR), a dormitory (DT), and a living room (LR). The following results were found. Firstly, the results in different functional spaces showed no significant effect of music on communication or on emotional states during communication. Secondly, the average score of the musical evaluation was 1.09 higher in the warm-toned space than in the cold-toned space. Thirdly, the effects of music on emotion during communication differed significantly across sound environments, and pleasure, arousal, and dominance could be efficiently enhanced by music in the quiet space. Fourthly, dominance was 0.63 higher in the minimally furnished space. Finally, we also investigated the influence of social characteristics on the effect of music on communication in different indoor spaces, in terms of intimacy level, gender combination, and group size. For instance, when there were more than two communicators in the dining space, pleasure and arousal could be efficiently enhanced by music. This study shows that combining the sound environment with spatial factors (for example, the visual and sound atmosphere) and interior furnishings can be an effective design strategy for promoting social interaction in indoor spaces.
Affiliation(s)
- Jiani Jiang: Key Laboratory of Cold Region Urban and Rural Human Settlement Environment Science and Technology, Ministry of Industry and Information Technology, School of Architecture, Harbin Institute of Technology, Harbin, China
- Qi Meng: Key Laboratory of Cold Region Urban and Rural Human Settlement Environment Science and Technology, Ministry of Industry and Information Technology, School of Architecture, Harbin Institute of Technology, Harbin, China
- Jingtao Ji: Key Laboratory of Cold Region Urban and Rural Human Settlement Environment Science and Technology, Ministry of Industry and Information Technology, School of Architecture, Harbin Institute of Technology, Harbin, China
14. Music's putative adaptive function hinges on a combination of distinct mechanisms. Behav Brain Sci 2021; 44:e72. [PMID: 34588057] [DOI: 10.1017/s0140525x20001752]
Abstract
Music's efficacy as a credible signal and/or as a tool for social bonding piggybacks on a diverse set of biological and cognitive processes, implying different proximate mechanisms. It is likely this multiplicity of mechanisms that explains why it is so difficult to account for music's putative biological role(s), as well as its possible origins, by proposing a single adaptive function.
15. Herff SA, Cecchetti G, Taruffi L, Déguernel K. Music influences vividness and content of imagined journeys in a directed visual imagery task. Sci Rep 2021; 11:15990. [PMID: 34362960] [PMCID: PMC8346606] [DOI: 10.1038/s41598-021-95260-8]
Abstract
Directed, intentional imagination is pivotal for self-regulation in the form of escapism and therapies for a wide variety of mental health conditions, such as anxiety and stress disorders, as well as phobias. Clinical application in particular benefits from increasing our understanding of imagination, as well as from non-invasive means of influencing it. To investigate imagination, this study draws on the prior observation that music can influence imagined content during non-directed mind-wandering, as well as the finding that relative orientation within time and space is retained in imagination. One hundred participants performed a directed imagination task that required watching a video of a figure travelling towards a barely visible landmark, and then closing their eyes and imagining a continuation of the journey. During each imagined journey, participants listened either to music or to silence. After the imagined journeys, participants reported vividness, the imagined time passed and distance travelled, as well as the imagined content. Bayesian mixed-effects models reveal strong evidence that vividness, sentiment, and imagined time passed and distance travelled are influenced by the music, and show that aspects of these effects can be modelled through features such as tempo. The results highlight music's potential to support therapies such as Exposure Therapy and Imagery Rescripting, which deploy directed imagination as a clinical tool.
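A rough frequentist stand-in for the Bayesian mixed-effects analysis described above might look like the following: vividness predicted by a music feature (tempo), with random intercepts per participant. Variable names, the predictor, and the simulated data are illustrative only, not the paper's model specification.

```python
# Mixed-effects sketch: vividness ~ tempo with per-participant random
# intercepts, on simulated placeholder data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "participant": np.repeat(np.arange(100), 6),   # 100 participants x 6 journeys
    "tempo": rng.uniform(60, 180, 600),            # bpm of the music heard (placeholder)
})
df["vividness"] = 3 + 0.01 * df["tempo"] + rng.normal(0, 1, 600)

model = smf.mixedlm("vividness ~ tempo", df, groups=df["participant"]).fit()
print(model.summary())                             # fixed effect of tempo on vividness
```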
Affiliation(s)
- Steffen A. Herff: École Polytechnique Fédérale de Lausanne, INN 115, 1015 Lausanne, Switzerland; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- Gabriele Cecchetti: École Polytechnique Fédérale de Lausanne, INN 115, 1015 Lausanne, Switzerland
- Liila Taruffi: Music Department, Durham University, Durham, UK
- Ken Déguernel: CNRS, Centrale Lille, UMR 9189 CRIStAL, Université de Lille, F-59000 Lille, France
16. The Influence of Action Video Gaming Experience on the Perception of Emotional Faces and Emotional Word Meaning. Neural Plast 2021; 2021:8841156. [PMID: 34135955] [PMCID: PMC8178008] [DOI: 10.1155/2021/8841156]
Abstract
Action video gaming (AVG) experience has been found to be related to sensorimotor and attentional development. However, the influence of AVG experience on the development of emotional perception skills is still unclear. Using behavioral and ERP measures, this study examined the relationship between AVG experience and the ability to decode emotional faces and emotional word meanings. AVG experts and amateurs completed an emotional word-face Stroop task prior to (the pregaming phase) and after (the postgaming phase) a 1 h AVG session. Within-group comparisons showed that after the 1 h AVG session, a more negative N400 was observed in both groups of participants, and a more negative N170 was observed in the experts. Between-group comparisons showed that the experts had a greater change of N170 and N400 amplitudes across phases than the amateurs. The results suggest that both the 1 h and long-term AVG experiences may be related to an increased difficulty of emotional perception. Furthermore, certain behavioral and ERP measures showed neither within- nor between-group differences, suggesting that the relationship between AVG experience and emotional perception skills still needs further research.
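The ERP measures referred to above reduce to trial-averaged mean amplitudes in fixed time windows. A minimal sketch on placeholder data; the windows and array sizes are conventional choices, not the study's exact parameters.

```python
# ERP sketch: average epochs, then take mean amplitude in N170 and N400
# windows at one electrode. All data here are placeholders.
import numpy as np

sr, t0 = 500, -0.2                        # sampling rate (Hz), epoch start (s)
epochs = np.random.randn(40, 350)         # 40 trials x 700 ms at one electrode
erp = epochs.mean(axis=0)                 # trial-averaged waveform

def mean_amp(erp, lo, hi):
    i0, i1 = int((lo - t0) * sr), int((hi - t0) * sr)
    return erp[i0:i1].mean()

n170 = mean_amp(erp, 0.13, 0.20)          # more negative = larger N170
n400 = mean_amp(erp, 0.30, 0.50)          # more negative = larger N400
```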
17. Kogan VV, Reiterer SM. Eros, Beauty, and Phon-Aesthetic Judgements of Language Sound. We Like It Flat and Fast, but Not Melodious. Comparing Phonetic and Acoustic Features of 16 European Languages. Front Hum Neurosci 2021; 15:578594. [PMID: 33708080] [PMCID: PMC7940689] [DOI: 10.3389/fnhum.2021.578594]
Abstract
This article concerns sound aesthetic preferences for European foreign languages. We investigated the phonetic-acoustic dimension of linguistic aesthetic pleasure to describe the "music" found in European languages. The Romance languages French, Italian, and Spanish take the lead when people talk about melodious language - the music-like effects in language (a.k.a. phonetic chill). On the other end of the melodiousness spectrum are German and Arabic, which are often considered to sound harsh and unattractive. Despite the public interest, limited research has been conducted on phonaesthetics, i.e., the subfield of phonetics concerned with the aesthetic properties of speech sounds (Crystal, 2008). Our goal is to fill this research gap by identifying the acoustic features that drive the auditory perception of language sound beauty. What is so music-like in a language that makes people say "it is music in my ears"? Forty-five central European participants listened to 16 auditorily presented European languages, rated each language on 22 binary characteristics (e.g., beautiful - ugly and funny - boring), and indicated their language familiarity, L2 background, liking of the speaker's voice, demographics, and musicality levels. Findings revealed that all factors in complex interplay explain a certain percentage of variance: familiarity and expertise in foreign languages, speaker voice characteristics, phonetic complexity, musical acoustic properties, and finally the musical expertise of the listener. The most important discovery was the trade-off between speech tempo and so-called linguistic melody (pitch variance): the faster the language, the flatter/more atonal it is in terms of pitch (speech melody), making it highly appealing acoustically (sounding beautiful and sexy), but not so melodious in a "musical" sense.
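The two acoustic quantities behind the reported trade-off, pitch variance ("linguistic melody") and speech tempo, could be estimated roughly as below. The file name is hypothetical, and the onset-based tempo proxy is an assumption of this sketch, not the study's feature definition.

```python
# Sketch: F0 variance and an onsets-per-second tempo proxy for one recording.
import librosa
import numpy as np

y, sr = librosa.load("speech_sample.wav", sr=None)          # hypothetical file
f0, voiced, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)   # F0 track in Hz (NaN = unvoiced)
pitch_variance = np.nanvar(f0)                              # "linguistic melody" proxy

onsets = librosa.onset.onset_detect(y=y, sr=sr)             # acoustic event onsets
tempo_proxy = len(onsets) / (len(y) / sr)                   # events per second
```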
Affiliation(s)
- Vita V Kogan: School of European Culture and Languages, University of Kent, Kent, United Kingdom
- Susanne M Reiterer: Department of Linguistics, University of Vienna, Vienna, Austria; Teacher Education Centre, University of Vienna, Vienna, Austria
18. Buono GH, Crukley J, Hornsby BWY, Picou EM. Loss of high- or low-frequency audibility can partially explain effects of hearing loss on emotional responses to non-speech sounds. Hear Res 2020; 401:108153. [PMID: 33360158] [DOI: 10.1016/j.heares.2020.108153]
Abstract
Hearing loss can disrupt emotional responses to sound. However, the impact of stimulus modality (multisensory versus unisensory) on this disruption, and the underlying mechanisms responsible, are unclear. The purposes of this project were to evaluate the effects of stimulus modality and filtering on emotional responses to non-speech stimuli. It was hypothesized that low- and high-pass filtering would result in less extreme ratings, but only for unisensory stimuli. Twenty-four adults (22-34 years old; 12 male) with normal hearing participated. Participants made ratings of valence and arousal in response to pleasant, neutral, and unpleasant non-speech sounds and/or pictures. Each participant completed ratings of five stimulus modalities: auditory-only, visual-only, auditory-visual, filtered auditory-only, and filtered auditory-visual. Half of the participants rated low-pass filtered stimuli (800 Hz cutoff), and half of the participants rated high-pass filtered stimuli (2000 Hz cutoff). Combining auditory and visual modalities resulted in more extreme (more pleasant and more unpleasant) ratings of valence in response to pleasant and unpleasant stimuli. In addition, low- and high-pass filtering of sounds resulted in less extreme ratings of valence (less pleasant and less unpleasant) and arousal (less exciting) in response to both auditory-only and auditory-visual stimuli. These results suggest that changes in audible spectral information are partially responsible for the noted changes in emotional responses to sound that accompany hearing loss. The findings also suggest the effects of hearing loss will generalize to multisensory stimuli if the stimuli include sound, although further work is warranted to confirm this in listeners with hearing loss.
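The two filtering conditions described above map directly onto standard Butterworth filters. A minimal sketch, with the filter order and the stimulus file name as assumptions of this illustration:

```python
# 800 Hz low-pass and 2000 Hz high-pass filtering of a stimulus file,
# matching the cutoffs stated in the abstract (order is a placeholder).
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

x, sr = sf.read("unpleasant_sound.wav")   # hypothetical stimulus file
lp = sosfiltfilt(butter(4, 800, btype="lowpass", fs=sr, output="sos"), x, axis=0)
hp = sosfiltfilt(butter(4, 2000, btype="highpass", fs=sr, output="sos"), x, axis=0)
sf.write("stim_lowpass.wav", lp, sr)
sf.write("stim_highpass.wav", hp, sr)
```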
Affiliation(s)
- Gabrielle H Buono: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN 37232, United States
- Jeffery Crukley: Department of Speech-Language Pathology, University of Toronto, Canada
- Benjamin W Y Hornsby: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN 37232, United States
- Erin M Picou: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN 37232, United States
19. Fiebig A, Jordan P, Moshona CC. Assessments of Acoustic Environments by Emotions - The Application of Emotion Theory in Soundscape. Front Psychol 2020; 11:573041. [PMID: 33329214] [PMCID: PMC7718000] [DOI: 10.3389/fpsyg.2020.573041]
Abstract
Human beings respond to their immediate environments in a variety of ways, with emotion playing a cardinal role. In evolutionary theories, emotions are thought to prepare an organism for action. The interplay of acoustic environments, emotions, and evolutionary needs is currently subject to discussion in soundscape research. Universal definitions of emotion and its nature are currently missing, but there seems to be a fundamental consensus that emotions are internal, evanescent, mostly conscious, relational, manifest in different forms, and serve a purpose. Research in this area is expanding, particularly in regards to the context-related, affective, and emotional processing of environmental stimuli. A number of studies present ways to determine the nature of emotions elicited by a soundscape and to measure these reliably. Yet the crucial question - which basic and complex emotions are triggered and how they relate to affective appraisal - has still not been conclusively answered. To help frame research on this topic, an overview of the theoretical background is presented that applies emotion theory to soundscape. Two latent fundamental dimensions are often found at the center of theoretical concepts of emotion: valence and arousal. These established universal dimensions can also be applied in the context of emotions that are elicited by soundscapes. Another, and perhaps more familiar, parallel is found between emotion and music. However, acoustic environments are more subtle than musical arrangements, rarely applying the compositional and artistic considerations frequently used in music. That said, the measurement of emotion in the context of soundscape studies is only of additional value if some fundamental inquiries are sufficiently answered: To what extent does the reporting act itself alter emotional responses? Are all important affective qualities consciously accessible and directly measurable by self-reports? How can emotion related to the environment be separated from affective predisposition? By means of a conceptual analysis of relevant soundscape publications, the consensus and conflicts on these fundamental questions in the light of soundscape theory are highlighted, and needed research actions are framed. The overview closes with a proposed modification to an existing, standardized framework to include the meaning of emotion in the design of soundscapes.
Affiliation(s)
- André Fiebig: Engineering Acoustics, Institute of Fluid Dynamics and Technical Acoustics, Technische Universität Berlin, Berlin, Germany
- Pamela Jordan: Amsterdam Centre for Ancient Studies and Archaeology, University of Amsterdam, Amsterdam, Netherlands
- Cleopatra Christina Moshona: Engineering Acoustics, Institute of Fluid Dynamics and Technical Acoustics, Technische Universität Berlin, Berlin, Germany
20.
Abstract
Background: In this study we measured the affective appraisal of sounds and video clips using a newly developed graphical self-report tool: the EmojiGrid. The EmojiGrid is a square grid, labeled with emoji that express different degrees of valence and arousal. Users rate the valence and arousal of a given stimulus by simply clicking on the grid. Methods: In Experiment I, observers (N=150, 74 males, mean age=25.2±3.5) used the EmojiGrid to rate their affective appraisal of 77 validated sound clips from nine different semantic categories, covering a large area of the affective space. In Experiment II, observers (N=60, 32 males, mean age=24.5±3.3) used the EmojiGrid to rate their affective appraisal of 50 validated film fragments varying in positive and negative affect (20 positive, 20 negative, 10 neutral). Results: The results of this study show that for both sound and video, the agreement between the mean ratings obtained with the EmojiGrid and those obtained with an alternative and validated affective rating tool in previous studies in the literature, is excellent for valence and good for arousal. Our results also show the typical universal U-shaped relation between mean valence and arousal that is commonly observed for affective sensory stimuli, both for sound and video. Conclusions: We conclude that the EmojiGrid can be used as an affective self-report tool for the assessment of sound and video-evoked emotions.
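Conceptually, an EmojiGrid response is just a click position on a square grid mapped to valence and arousal coordinates. A minimal sketch, assuming a pixel grid and a 1-9 scale; the tool's actual scaling may differ.

```python
# Map a click on a square affect grid to (valence, arousal) scores.
# Grid size and the 1-9 scale are assumptions of this sketch.
def grid_to_affect(x_px, y_px, size=500):
    valence = 1 + 8 * (x_px / size)        # left-to-right: unpleasant to pleasant
    arousal = 1 + 8 * (1 - y_px / size)    # bottom-to-top: calm to excited
    return valence, arousal

print(grid_to_affect(400, 100))            # upper-right click: pleasant, aroused
```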
Affiliation(s)
- Alexander Toet: Perceptual and Cognitive Systems, TNO, Soesterberg, 3769DE, The Netherlands; Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, 3584 CS, The Netherlands
- Jan B. F. van Erp: Perceptual and Cognitive Systems, TNO, Soesterberg, 3769DE, The Netherlands; Research Group Human Media Interaction, University of Twente, Enschede, 7522 NH, The Netherlands
21. Arias P, Rachman L, Liuni M, Aucouturier JJ. Beyond Correlation: Acoustic Transformation Methods for the Experimental Study of Emotional Voice and Speech. Emotion Review 2020. [DOI: 10.1177/1754073920934544]
Abstract
While acoustic analysis methods have become a commodity in voice emotion research, experiments that attempt not only to describe but to computationally manipulate expressive cues in emotional voice and speech have remained relatively rare. We give here a nontechnical overview of voice-transformation techniques from the audio signal-processing community that we believe are ripe for adoption in this context. We provide sound examples of what they can achieve, examples of experimental questions for which they can be used, and links to open-source implementations. We point at a number of methodological properties of these algorithms, such as being specific, parametric, exhaustive, and real-time, and describe the new possibilities that these open for the experimental study of the emotional voice.
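As one concrete instance of the transformation methods surveyed here, a simple offline pitch shift can be applied with librosa. This is a generic sketch, not the authors' parametric real-time tooling, and the file name is hypothetical.

```python
# Offline pitch-shift example (one of the expressive-cue manipulations
# discussed in this literature); parameters are illustrative.
import librosa
import soundfile as sf

y, sr = librosa.load("neutral_utterance.wav", sr=None)           # hypothetical file
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=1.0)     # +1 semitone
sf.write("utterance_shifted_up.wav", shifted, sr)
```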
Affiliation(s)
- Pablo Arias: STMS UMR9912, IRCAM/CNRS/Sorbonne Université, France
- Laura Rachman: STMS UMR9912, IRCAM/CNRS/Sorbonne Université, France
- Marco Liuni: STMS UMR9912, IRCAM/CNRS/Sorbonne Université, France
22.
Abstract
To ensure that listeners pay attention and do not habituate, emotionally intense vocalizations may be under evolutionary pressure to exploit processing biases in the auditory system by maximising their bottom-up salience. This "salience code" hypothesis was tested using 128 human nonverbal vocalizations representing eight emotions: amusement, anger, disgust, effort, fear, pain, pleasure, and sadness. As expected, within each emotion category salience ratings derived from pairwise comparisons strongly correlated with perceived emotion intensity. For example, while laughs as a class were less salient than screams of fear, salience scores almost perfectly explained the perceived intensity of both amusement and fear considered separately. Validating self-rated salience evaluations, high- vs. low-salience sounds caused 25% more recall errors in a short-term memory task, whereas emotion intensity had no independent effect on recall errors. Furthermore, the acoustic characteristics of salient vocalizations were similar to those previously described for non-emotional sounds (greater duration and intensity, high pitch, bright timbre, rapid modulations, and variable spectral characteristics), confirming that vocalizations were not salient merely because of their emotional content. The acoustic code in nonverbal communication is thus aligned with sensory biases, offering a general explanation for some non-arbitrary properties of human and animal high-arousal vocalizations.
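Salience scores "derived from pairwise comparisons" can be obtained with a Bradley-Terry-style fit, sketched below on placeholder comparison counts; the paper's exact scoring procedure may differ.

```python
# Bradley-Terry sketch: latent salience scores from pairwise judgments.
import numpy as np

def bradley_terry(wins, n_iter=100):
    """wins[i, j] = times sound i was judged more salient than sound j."""
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(n_iter):
        total = wins + wins.T                               # comparisons per pair
        expected = total / (p[:, None] + p[None, :])        # MM-algorithm denominator
        p = wins.sum(axis=1) / (expected.sum(axis=1) + 1e-12)
        p /= p.sum()                                        # fix the overall scale
    return p

wins = np.random.randint(0, 10, (8, 8))                     # placeholder counts
np.fill_diagonal(wins, 0)
salience = bradley_terry(wins)                              # one score per sound
```

These scores could then be correlated with mean emotion-intensity ratings per stimulus, which is the relationship the abstract reports.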
Affiliation(s)
- Andrey Anikin: Division of Cognitive Science, Lund University, Lund, Sweden
23. Reiterer SM, Kogan V, Seither-Preisler A, Pesek G. Foreign language learning motivation: Phonetic chill or Latin lover effect? Does sound structure or social stereotyping drive FLL? Psychology of Learning and Motivation 2020. [DOI: 10.1016/bs.plm.2020.02.003]
24. Hao Y, Yao L, Sun Q, Gupta D. Interaction of Self-Regulation and Contextual Effects on Pre-attentive Auditory Processing: A Combined EEG/ECG Study. Front Neurosci 2019; 13:638. [PMID: 31275111] [PMCID: PMC6593616] [DOI: 10.3389/fnins.2019.00638]
Abstract
Environmental changes are not always within the focus of our attention, and sensitive reactions (i.e., quicker and stronger responses) can be essential for an organism's survival and adaptation. Here we report that neurophysiological responses to sound changes outside the focus of attention are related to both the ambient acoustic context and regulation ability. We assessed electroencephalographic (EEG) mismatch negativity (MMN) latency and amplitude in response to sound changes in two contexts, ascending and descending pitch sequences, while participants were instructed to attend to muted videos. Prolonged latency and increased amplitude of the MMN at the fronto-central region occurred in ascending pitch sequences relative to descending sequences. We also assessed how regulation related to these contextual effects on the MMN. Reactions to changes in the ascending sequence were related to attention control (frontal EEG theta/beta ratio), indexing speed of reaction, and to autonomic regulation (heart-rate variability), indexing intensity of reaction. Moreover, sound changes in the ascending context were associated with more activation of the anterior cingulate cortex and insula, suggesting arousal effects and regulation processes. These findings suggest that the relation between speed and intensity is not fixed and may be modified by context and self-regulation ability. Specifically, cortical and cardiovascular indicators of self-regulation may specify different aspects of response sensitivity in terms of speed and intensity.
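The frontal theta/beta ratio used here as an attention-control index is a simple band-power ratio. A minimal sketch on placeholder data, with conventional band edges assumed:

```python
# Theta/beta ratio from the Welch power spectral density of one frontal
# channel. Data, epoch length, and band edges are placeholders.
import numpy as np
from scipy.signal import welch

sr = 250
eeg_fz = np.random.randn(60 * sr)                  # 60 s of one frontal channel
f, psd = welch(eeg_fz, fs=sr, nperseg=2 * sr)      # 0.5 Hz frequency resolution

def band_power(f, psd, lo, hi):
    m = (f >= lo) & (f < hi)
    return np.sum(psd[m]) * (f[1] - f[0])          # integrate PSD over the band

tbr = band_power(f, psd, 4, 8) / band_power(f, psd, 13, 30)   # theta / beta
```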
Affiliation(s)
- Yu Hao: Department of Design and Environmental Analysis, Cornell University, Ithaca, NY, United States
- Lin Yao: School of Electrical and Computer Engineering, Cornell University, Ithaca, NY, United States
- Qiuyan Sun: Department of Nutritional Science, Cornell University, Ithaca, NY, United States
- Disha Gupta: School of Medicine, New York University, New York, NY, United States
25. Reybrouck M, Podlipniak P. Preconceptual Spectral and Temporal Cues as a Source of Meaning in Speech and Music. Brain Sci 2019; 9:E53. [PMID: 30832292] [PMCID: PMC6468545] [DOI: 10.3390/brainsci9030053]
Abstract
This paper explores the importance of preconceptual meaning in speech and music, stressing the role of affective vocalizations as a common ancestral instrument in communicative interactions. Speech and music are sensory-rich stimuli, both at the level of production and perception, which involve different body channels, mainly the face and the voice. However, this bimodal approach has been challenged as being too restrictive. A broader conception argues for an action-oriented embodied approach that stresses the reciprocity between multisensory processing and articulatory-motor routines. There is, however, a distinction between language and music, with the latter being largely unable to function referentially. Contrary to the centrifugal tendency of language to direct the attention of the receiver away from the text or speech proper, music is centripetal in directing the listener's attention to the auditory material itself. Sound, therefore, can be considered the meeting point between speech and music, raising the question of which components are shared in the interpretation of sound across the two domains. To answer these questions, this paper elaborates on the following topics: (i) the relationship between speech and music, with a special focus on early vocalizations in humans and non-human primates; (ii) the transition from sound to meaning in speech and music; (iii) the role of emotion and affect in early sound processing; (iv) vocalizations and nonverbal affect bursts in communicative sound comprehension; and (v) the acoustic features of affective sound, with a special emphasis on temporal and spectrographic cues as parts of speech prosody and musical expressiveness.
Affiliation(s)
- Mark Reybrouck, Musicology Research Group, KU Leuven–University of Leuven, 3000 Leuven, Belgium; IPEM–Department of Musicology, Ghent University, 9000 Ghent, Belgium
- Piotr Podlipniak, Institute of Musicology, Adam Mickiewicz University in Poznań, ul. Umultowska 89D, 61-614 Poznań, Poland
27. Ma W, Zhou P. Three-year-old tone language learners are tolerant of tone mispronunciations spoken with familiar and novel tones. Cogent Psychology 2019. [DOI: 10.1080/23311908.2019.1690816]
Affiliation(s)
- Weiyi Ma, School of Human Environmental Sciences, University of Arkansas, Fayetteville, AR, USA
- Peng Zhou, Department of Foreign Languages and Literatures, Tsinghua University, Beijing, China
28. DAVID: An open-source platform for real-time transformation of infra-segmental emotional cues in running speech. Behav Res Methods 2018; 50:323-343. [PMID: 28374144] [PMCID: PMC5809549] [DOI: 10.3758/s13428-017-0873-y]
Abstract
We present an open-source software platform that transforms emotional cues expressed by speech signals using audio effects like pitch shifting, inflection, vibrato, and filtering. The emotional transformations can be applied to any audio file, but can also run in real time, using live input from a microphone, with less than 20-ms latency. We anticipate that this tool will be useful for the study of emotions in psychology and neuroscience, because it enables a high level of control over the acoustical and emotional content of experimental stimuli in a variety of laboratory situations, including real-time social situations. We present here the results of a series of validation experiments aimed at testing the tool against several methodological requirements: transformed emotions should be recognized at above-chance levels, remain valid in several languages (French, English, Swedish, and Japanese), and sound comparably natural to unprocessed speech.
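To illustrate the class of infra-segmental effect the platform applies, a vibrato can be implemented as a sinusoidally modulated delay line read with fractional interpolation. This is a minimal offline sketch, not DAVID's real-time implementation; the rate and depth values are illustrative:
```python
import numpy as np

def vibrato(signal, fs, rate_hz=6.0, depth_ms=0.6):
    """Apply vibrato by reading the signal through a sinusoidally varying delay."""
    n = np.arange(len(signal))
    delay = (depth_ms / 1000.0) * fs * (1 + np.sin(2 * np.pi * rate_hz * n / fs))
    read_pos = np.clip(n - delay, 0, len(signal) - 1)  # fractional read positions
    i = read_pos.astype(int)                           # linear interpolation
    frac = read_pos - i
    j = np.minimum(i + 1, len(signal) - 1)
    return (1 - frac) * signal[i] + frac * signal[j]

fs = 44100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 220 * t)   # 1-s test tone
wobbly = vibrato(tone, fs)           # periodic modulation of perceived pitch
```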
29. Aryani A, Conrad M, Schmidtke D, Jacobs A. Why 'piss' is ruder than 'pee'? The role of sound in affective meaning making. PLoS One 2018; 13:e0198430. [PMID: 29874293] [PMCID: PMC5991420] [DOI: 10.1371/journal.pone.0198430]
Abstract
Most language users agree that some words sound harsh (e.g. grotesque) whereas others sound soft and pleasing (e.g. lagoon). While this prominent feature of human language has always been creatively deployed in art and poetry, it is still largely unknown whether the sound of a word in itself makes any contribution to the word's meaning as perceived and interpreted by the listener. In a large-scale lexicon analysis, we focused on the affective substrates of words' meaning (i.e. affective meaning) and words' sound (i.e. affective sound); both being measured on a two-dimensional space of valence (ranging from pleasant to unpleasant) and arousal (ranging from calm to excited). We tested the hypothesis that the sound of a word possesses affective iconic characteristics that can implicitly influence listeners when evaluating the affective meaning of that word. The results show that a significant portion of the variance in affective meaning ratings of printed words depends on a number of spectral and temporal acoustic features extracted from these words after converting them to their spoken form (Study 1). In order to test the affective nature of this effect, we independently assessed the affective sound of these words using two different methods: through direct rating (Study 2a), and through acoustic models that we implemented based on pseudoword materials (Study 2b). In line with our hypothesis, the estimated contribution of words' sound to ratings of words' affective meaning was indeed associated with the affective sound of these words; with a stronger effect for arousal than for valence. Further analyses revealed crucial phonetic features potentially causing the effect of sound on meaning: For instance, words with short vowels, voiceless consonants, and hissing sibilants (as in 'piss') feel more arousing and negative. Our findings suggest that the process of meaning making is not solely determined by arbitrary mappings between formal aspects of words and concepts they refer to. Rather, even in silent reading, words' acoustic profiles provide affective perceptual cues that language users may implicitly use to construct words' overall meaning.
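The general approach, extracting spectral and temporal descriptors from spoken-word audio and regressing affective ratings on them, can be sketched in a few lines. The feature set, file names, and ratings below are illustrative assumptions, not the authors' exact models; librosa is assumed for feature extraction:
```python
import numpy as np
import librosa
from sklearn.linear_model import LinearRegression

def acoustic_features(path):
    """A few spectral/temporal descriptors of one spoken word."""
    y, sr = librosa.load(path, sr=None)
    return [
        librosa.feature.spectral_centroid(y=y, sr=sr).mean(),  # brightness
        librosa.feature.zero_crossing_rate(y).mean(),          # noisiness
        librosa.feature.rms(y=y).mean(),                       # energy
        len(y) / sr,                                           # duration (s)
    ]

# Hypothetical inputs: spoken-word recordings with arousal ratings.
paths = ["word_001.wav", "word_002.wav", "word_003.wav"]
arousal = np.array([6.1, 3.4, 4.8])

X = np.array([acoustic_features(p) for p in paths])
model = LinearRegression().fit(X, arousal)
print(model.score(X, arousal))  # share of rating variance explained by sound
```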
Affiliation(s)
- Arash Aryani, Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Berlin, Germany
- Markus Conrad, Department of Cognitive, Social and Organizational Psychology, University of La Laguna, La Laguna, Spain
- David Schmidtke, Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Berlin, Germany
- Arthur Jacobs, Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Berlin, Germany; Centre for Cognitive Neuroscience Berlin (CCNB), Berlin, Germany
30. Paquette S, Takerkart S, Saget S, Peretz I, Belin P. Cross-classification of musical and vocal emotions in the auditory cortex. Ann N Y Acad Sci 2018; 1423:329-337. [PMID: 29741242] [DOI: 10.1111/nyas.13666]
Abstract
Whether emotions carried by voice and music are processed by the brain using similar mechanisms has long been investigated. Yet neuroimaging studies do not provide a clear picture, mainly due to lack of control over stimuli. Here, we report a functional magnetic resonance imaging (fMRI) study using comparable stimulus material in the voice and music domains, the Montreal Affective Voices and the Musical Emotional Bursts, which include nonverbal short bursts of happiness, fear, sadness, and neutral expressions. We use a multivariate emotion-classification fMRI analysis involving cross-timbre classification as a means of comparing the neural mechanisms involved in processing emotional information in the two domains. We find, for affective stimuli in the violin, clarinet, or voice timbres, that local fMRI patterns in the bilateral auditory cortex and upper premotor regions support above-chance emotion classification when training and testing are performed within the same timbre category. More importantly, classifier performance generalized well across timbre in cross-classifying schemes, albeit with a slight accuracy drop when crossing the voice-music boundary, providing evidence for a shared neural code for processing musical and vocal emotions, possibly with a cost for the voice due to its evolutionary significance.
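The cross-timbre classification logic is simple to express: fit a classifier on voxel patterns from one timbre and test it on another. Below is a schematic sketch with placeholder arrays, not the authors' actual pipeline; above-chance cross-timbre accuracy is what would indicate a shared emotion code:
```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder pattern matrices (trials x voxels) with emotion labels:
# 0 = happiness, 1 = fear, 2 = sadness.
voice_X, voice_y = rng.standard_normal((60, 500)), np.repeat([0, 1, 2], 20)
violin_X, violin_y = rng.standard_normal((60, 500)), np.repeat([0, 1, 2], 20)

# Within-timbre decoding: train and test on voice patterns (held-out split).
clf = SVC(kernel="linear").fit(voice_X[::2], voice_y[::2])
within_acc = clf.score(voice_X[1::2], voice_y[1::2])

# Cross-timbre decoding: train on voice, test on violin. Generalization
# here is the signature of a timbre-invariant neural code for emotion.
clf = SVC(kernel="linear").fit(voice_X, voice_y)
cross_acc = clf.score(violin_X, violin_y)
print(within_acc, cross_acc)
```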
Affiliation(s)
- Sébastien Paquette, Department of Psychology, International Laboratory for Brain Music and Sound Research, Université de Montréal, Montreal, Canada; Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts
- Sylvain Takerkart, Institut de Neurosciences de La Timone, CNRS & Aix-Marseille University, Marseille, France
- Shinji Saget, Institut de Neurosciences de La Timone, CNRS & Aix-Marseille University, Marseille, France
- Isabelle Peretz, Department of Psychology, International Laboratory for Brain Music and Sound Research, Université de Montréal, Montreal, Canada
- Pascal Belin, Department of Psychology, International Laboratory for Brain Music and Sound Research, Université de Montréal, Montreal, Canada; Institut de Neurosciences de La Timone, CNRS & Aix-Marseille University, Marseille, France; Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
31. Picou EM, Singh G, Goy H, Russo F, Hickson L, Oxenham AJ, Buono GH, Ricketts TA, Launer S. Hearing, Emotion, Amplification, Research, and Training Workshop: Current Understanding of Hearing Loss and Emotion Perception and Priorities for Future Research. Trends Hear 2018; 22:2331216518803215. [PMID: 30270810] [PMCID: PMC6168729] [DOI: 10.1177/2331216518803215]
Abstract
How hearing loss and hearing rehabilitation affect patients' momentary emotional experiences has received little attention, despite its considerable potential to shape patients' psychosocial function. This article is a product of the Hearing, Emotion, Amplification, Research, and Training workshop, which was convened to develop a consensus document describing research on emotion perception relevant for hearing research. This article outlines conceptual frameworks for the investigation of emotion in hearing research; available subjective, objective, neurophysiologic, and peripheral physiologic data acquisition research methods; the effects of age and hearing loss on emotion perception; potential rehabilitation strategies; priorities for future research; and implications for clinical audiologic rehabilitation. More broadly, this article aims to increase awareness about emotion perception research in audiology and to stimulate additional research on the topic.
Affiliation(s)
- Erin M. Picou, Vanderbilt University School of Medicine, Nashville, TN, USA
- Gurjit Singh, Phonak Canada, Mississauga, ON, Canada; Department of Speech-Language Pathology, University of Toronto, ON, Canada; Department of Psychology, Ryerson University, Toronto, ON, Canada
- Huiwen Goy, Department of Psychology, Ryerson University, Toronto, ON, Canada
- Frank Russo, Department of Psychology, Ryerson University, Toronto, ON, Canada
- Louise Hickson, School of Health and Rehabilitation Sciences, University of Queensland, Brisbane, Australia
32. Jiang C, Liu F, Wong PCM. Sensitivity to musical emotion is influenced by tonal structure in congenital amusia. Sci Rep 2017; 7:7624. [PMID: 28790442] [PMCID: PMC5548738] [DOI: 10.1038/s41598-017-08005-x]
Abstract
Emotional communication in music depends on multiple attributes including psychoacoustic features and tonal system information, the latter of which is unique to music. The present study investigated whether congenital amusia, a lifelong disorder of musical processing, impacts sensitivity to musical emotion elicited by timbre and tonal system information. Twenty-six amusics and 26 matched controls made tension judgments on Western (familiar) and Indian (unfamiliar) melodies played on piano and sitar. Like controls, amusics used timbre cues to judge musical tension in Western and Indian melodies. While controls assigned significantly lower tension ratings to Western melodies compared to Indian melodies, thus showing a tonal familiarity effect on tension ratings, amusics provided comparable tension ratings for Western and Indian melodies on both timbres. Furthermore, amusics rated Western melodies as more tense compared to controls, as they relied less on tonality cues than controls in rating tension for Western melodies. The implications of these findings in terms of emotional responses to music are discussed.
Affiliation(s)
- Cunmei Jiang, Music College, Shanghai Normal University, Shanghai, China; Institute of Psychology, Shanghai Normal University, Shanghai, China
- Fang Liu, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Patrick C M Wong, Department of Linguistics and Modern Languages and Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong, China; The Chinese University of Hong Kong - Utrecht University Joint Center for Language, Mind and Brain, Hong Kong, China
33. What drives sound symbolism? Different acoustic cues underlie sound-size and sound-shape mappings. Sci Rep 2017; 7:5562. [PMID: 28717151] [PMCID: PMC5514121] [DOI: 10.1038/s41598-017-05965-y]
Abstract
Sound symbolism refers to the non-arbitrary mappings that exist between phonetic properties of speech sounds and their meaning. Despite there being an extensive literature on the topic, the acoustic features and psychological mechanisms that give rise to sound symbolism are not, as yet, altogether clear. The present study was designed to investigate whether different sets of acoustic cues predict size and shape symbolism, respectively. In two experiments, participants judged whether a given consonant-vowel speech sound was large or small, round or angular, using a size or shape scale. Visual size judgments were predicted by vowel formant F1 in combination with F2, and by vowel duration. Visual shape judgments were, however, predicted by formants F2 and F3. Size and shape symbolism were thus not induced by a common mechanism, but rather were distinctly affected by acoustic properties of speech sounds. These findings portray sound symbolism as a process that is not based merely on broad categorical contrasts, such as rounded/unrounded and front/back vowels. Rather, individuals seem to base their sound-symbolic judgments on specific sets of acoustic cues, extracted from speech sounds, which vary across judgment dimensions.
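The reported cue-to-judgment mappings amount to fitting separate models per judgment dimension and comparing which cues carry weight in each. A toy sketch of that analysis; the formant values, durations, and labels are made up purely for illustration:
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-stimulus cues: [F1 Hz, F2 Hz, vowel duration s].
cues = np.array([
    [300, 2300, 0.12],   # /i/-like: low F1, high F2, short
    [700,  900, 0.25],   # /a/-like: high F1, long
    [350,  800, 0.22],   # /u/-like: low F1 and F2
    [600, 1700, 0.15],
])
size_judgment  = np.array([0, 1, 1, 0])   # 0 = small, 1 = large
shape_judgment = np.array([1, 0, 0, 1])   # 0 = round, 1 = angular

# Separate models per dimension: the paper's claim is that different
# cue subsets (F1/duration vs. F2/F3) dominate in each model.
size_model  = LogisticRegression().fit(cues, size_judgment)
shape_model = LogisticRegression().fit(cues, shape_judgment)
print(size_model.coef_, shape_model.coef_)
```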
34. Lemaitre G, Houix O, Voisin F, Misdariis N, Susini P. Vocal Imitations of Non-Vocal Sounds. PLoS One 2016; 11:e0168167. [PMID: 27992480] [PMCID: PMC5161510] [DOI: 10.1371/journal.pone.0168167]
Abstract
Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes as no surprise that human speakers use many imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g. the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g. a car driving by) through a different system (the voice apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational “auditory sketches” (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to the best auditory sketches for most categories of sounds, and even to the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. Analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. These results offer perspectives for understanding how human listeners store and access long-term sound representations, and set the stage for the development of human-computer interfaces based on vocalizations.
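One common way to quantify the acoustic distance between an imitation and its referent, the kind of measure examined here, is dynamic time warping over MFCC sequences. A minimal sketch under assumed file names; librosa's DTW is one of several reasonable choices, not necessarily the authors' metric:
```python
import librosa

def acoustic_distance(path_a, path_b, n_mfcc=13):
    """DTW cost between the MFCC sequences of two recordings."""
    ya, sra = librosa.load(path_a, sr=None)
    yb, srb = librosa.load(path_b, sr=None)
    A = librosa.feature.mfcc(y=ya, sr=sra, n_mfcc=n_mfcc)
    B = librosa.feature.mfcc(y=yb, sr=srb, n_mfcc=n_mfcc)
    D, wp = librosa.sequence.dtw(X=A, Y=B)   # accumulated cost and warping path
    return D[-1, -1] / len(wp)               # path-length-normalized cost

# Hypothetical files: a referent sound and a vocal imitation of it.
print(acoustic_distance("car_passing.wav", "imitation_car.wav"))
```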
Affiliation(s)
- Guillaume Lemaitre, Equipe Perception et Design Sonores, STMS-IRCAM-CNRS-UPMC, Institut de Recherche et de Coordination Acoustique Musique, Paris, France
- Olivier Houix, Equipe Perception et Design Sonores, STMS-IRCAM-CNRS-UPMC, Institut de Recherche et de Coordination Acoustique Musique, Paris, France
- Frédéric Voisin, Equipe Perception et Design Sonores, STMS-IRCAM-CNRS-UPMC, Institut de Recherche et de Coordination Acoustique Musique, Paris, France
- Nicolas Misdariis, Equipe Perception et Design Sonores, STMS-IRCAM-CNRS-UPMC, Institut de Recherche et de Coordination Acoustique Musique, Paris, France
- Patrick Susini, Equipe Perception et Design Sonores, STMS-IRCAM-CNRS-UPMC, Institut de Recherche et de Coordination Acoustique Musique, Paris, France
35. Ma W, Zhou P, Singh L, Gao L. Spoken word recognition in young tone language learners: Age-dependent effects of segmental and suprasegmental variation. Cognition 2016; 159:139-155. [PMID: 27951429] [DOI: 10.1016/j.cognition.2016.11.011]
Abstract
The majority of the world's languages rely on both segmental (vowels, consonants) and suprasegmental (lexical tones) information to contrast the meanings of individual words. However, research on early language development has mostly focused on the acquisition of vowel-consonant languages. Developmental research comparing sensitivity to segmental and suprasegmental features in young tone learners is extremely rare. This study examined 2- and 3-year-old monolingual tone learners' sensitivity to vowels and tones. Experiment 1a tested the influence of vowel and tone variation on novel word learning. Vowel and tone variation hindered word recognition efficiency in both age groups. However, tone variation hindered word recognition accuracy only in 2-year-olds, while 3-year-olds were insensitive to tone variation. Experiment 1b demonstrated that 3-year-olds could use tones to learn new words when additional support was provided and, additionally, that Tone 3 words were exceptionally difficult to learn. Experiment 2 confirmed a similar pattern of results when children were presented with familiar words. This study is the first to show that despite the importance of tones in tone languages, vowels maintain primacy over tones in young children's word recognition and that tone sensitivity in word learning and recognition changes between 2 and 3 years of age. The findings suggest that early lexical processes are more tightly constrained by variation in vowels than by tones.
Affiliation(s)
- Weiyi Ma, ARC Centre of Excellence in Cognition and Its Disorders, Macquarie University, Sydney 2109, Australia; School of Linguistics and Literature, University of Electronic Science and Technology of China, Chengdu 610000, China
- Peng Zhou, Department of Foreign Languages and Literatures, Tsinghua University, Beijing 100084, China
- Leher Singh, Department of Psychology, National University of Singapore, 117570, Singapore
- Liqun Gao, Centre for Speech, Language and the Brain, Beijing Language and Culture University, 100066 Beijing, China