1. Calvillo-Torres R, Haro J, Ferré P, Poch C, Hinojosa JA. Sound symbolic associations in Spanish emotional words: affective dimensions and discrete emotions. Cogn Emot 2024:1-17. PMID: 38660751. DOI: 10.1080/02699931.2024.2345377.
Abstract
Sound symbolism refers to non-arbitrary associations between word forms and meaning, such as those observed for some properties of sounds and size or shape. Recent evidence suggests that these connections extend to emotional concepts. Here we investigated two types of non-arbitrary relationships. Study 1 examined whether iconicity scores (i.e., resemblance-based mappings between aspects of a word's form and its meaning) can be predicted from ratings on the affective dimensions of valence and arousal and/or the discrete emotions of happiness, anger, fear, disgust and sadness. Words denoting negative concepts were more likely to have more iconic word forms. Study 2 explored whether statistical regularities in single phonemes (i.e., systematicity) predicted ratings on affective dimensions and/or discrete emotions. Voiceless (/p/, /t/) and voiced plosives (/b/, /d/, /g/) were related to high-arousal words, whereas high-arousal negative words tended to include fricatives (/s/, /z/). Hissing consonants were also more likely to occur in words denoting all negative discrete emotions. Additionally, words conveying certain discrete emotions included specific phonemes. Overall, our data suggest that emotional features might explain variation in iconicity and provide new insight into phonemic patterns showing sound symbolic associations with the affective properties of words.
Affiliation(s)
- Rocío Calvillo-Torres
- Departamento de Psicología Experimental, Procesos Cognitivos y Logopedia, Universidad Complutense de Madrid, Madrid, Spain
- Juan Haro
- Departament de Psicologia and CRAMC, Universitat Rovira i Virgili, Tarragona, Spain
- Pilar Ferré
- Departament de Psicologia and CRAMC, Universitat Rovira i Virgili, Tarragona, Spain
- Claudia Poch
- Centro de Investigación Nebrija en Cognición (CINC), Universidad Nebrija, Madrid, Spain
- Departamento de Educación, Universidad de Nebrija, Madrid, Spain
- José A Hinojosa
- Departamento de Psicología Experimental, Procesos Cognitivos y Logopedia, Universidad Complutense de Madrid, Madrid, Spain
- Centro de Investigación Nebrija en Cognición (CINC), Universidad Nebrija, Madrid, Spain
- Instituto Pluridisciplinar, Universidad Complutense de Madrid, Madrid, Spain
2. Sidhu DM, Pexman PM. Is a boat bigger than a ship? Null results in the investigation of vowel sound symbolism on size judgements in real language. Q J Exp Psychol (Hove) 2023; 76:28-43. PMID: 35045778. PMCID: PMC9773152. DOI: 10.1177/17470218221078299.
Abstract
Sound symbolism is the phenomenon by which certain kinds of phonemes are associated with perceptual and/or semantic properties. In this article, we explored size sound symbolism (i.e., the mil/mal effect), in which high-front vowels (e.g., /i/) are associated with smallness, while low-back vowels (e.g., /ɑ/) are associated with largeness. This has previously been demonstrated with nonwords, but its impact on the processing of real language is unknown. We investigated this using a size judgement task in which participants classified words denoting small or large objects, each containing a small- or large-associated vowel, according to the size of the referent. Words were presented auditorily in Experiment 1 and visually in Experiment 2. We did not observe an effect of vowel congruence (i.e., between object size and the size association of a word's vowel) in either experiment. This suggests that there are limits to the impact of sound symbolism on the processing of real language.
Affiliation(s)
- David M Sidhu
- University of Calgary, Calgary, Alberta, Canada
- University College London, London, UK
- Correspondence: David Sidhu, University College London, London WC1E 6BT, UK
3. De Deyne S, Navarro DJ, Collell G, Perfors A. Visual and Affective Multimodal Models of Word Meaning in Language and Mind. Cogn Sci 2021; 45:e12922. PMID: 33432630. PMCID: PMC7816238. DOI: 10.1111/cogs.12922.
Abstract
One of the main limitations of natural language‐based approaches to meaning is that they do not incorporate multimodal representations the way humans do. In this study, we evaluate how well different kinds of models account for people's representations of both concrete and abstract concepts. The models we compare include unimodal distributional linguistic models as well as multimodal models which combine linguistic with perceptual or affective information. There are two types of linguistic models: those based on text corpora and those derived from word association data. We present two new studies and a reanalysis of a series of previous studies. The studies demonstrate that both visual and affective multimodal models better capture behavior that reflects human representations than unimodal linguistic models. The size of the multimodal advantage depends on the nature of semantic representations involved, and it is especially pronounced for basic‐level concepts that belong to the same superordinate category. Additional visual and affective features improve the accuracy of linguistic models based on text corpora more than those based on word associations; this suggests systematic qualitative differences between what information is encoded in natural language versus what information is reflected in word associations. Altogether, our work presents new evidence that multimodal information is important for capturing both abstract and concrete words and that fully representing word meaning requires more than purely linguistic information. Implications for both embodied and distributional views of semantic representation are discussed.
Affiliation(s)
- Simon De Deyne
- School of Psychological Sciences, University of Melbourne
- Andrew Perfors
- School of Psychological Sciences, University of Melbourne
4. Kambara T, Umemura T. The Relationships Between Initial Consonants in Japanese Sound Symbolic Words and Familiarity, Multi-Sensory Imageability, Emotional Valence, and Arousal. J Psycholinguist Res 2021; 50:831-842. PMID: 33394300. DOI: 10.1007/s10936-020-09749-w.
Abstract
Sound symbolic words consist of inevitable associations between sounds and meanings. We aimed to identify differences in familiarity, visual imageability, auditory imageability, tactile imageability, emotional valence, and arousal between Japanese sound symbolic words with voiced initial consonants (VCs; dakuon in Japanese; e.g., biribiri) and Japanese sound symbolic words with semi-voiced initial consonants (SVCs; handakuon in Japanese; e.g., piripiri), and between VCs (e.g., daradara) and Japanese sound symbolic words with voiceless initial consonants (VLCs; seion in Japanese; e.g., taratara). First, auditory imageability and arousal were significantly higher in VCs than SVCs, whereas familiarity, tactile imageability, and positive emotion (emotional valence) were significantly higher in SVCs than VCs. Second, visual imageability was higher in VCs than VLCs, while familiarity and positive emotion were higher in VLCs than VCs. Initial consonants in Japanese sound symbolic words could be associated with specific subjective evaluations such as familiarity, visual imageability, auditory imageability, tactile imageability, emotional valence, and arousal.
Affiliation(s)
- Toshimune Kambara
- Department of Psychology, Graduate School of Education, Hiroshima University, 1-1-1 Kagamiyama, Higashi-Hiroshima, Hiroshima, 7398524, Japan
- Tomotaka Umemura
- Department of Psychology, Graduate School of Education, Hiroshima University, 1-1-1 Kagamiyama, Higashi-Hiroshima, Hiroshima, 7398524, Japan
5. Namba S, Kambara T. Semantics Based on the Physical Characteristics of Facial Expressions Used to Produce Japanese Vowels. Behav Sci (Basel) 2020; 10:E157. PMID: 33066229. PMCID: PMC7602070. DOI: 10.3390/bs10100157.
Abstract
Previous studies have reported that verbal sounds are non-arbitrarily associated with specific meanings (e.g., sound symbolism and onomatopoeia), including visual forms of information such as facial expressions; however, it remains unclear how the mouth shapes used to utter each vowel create our semantic impressions. We asked 81 Japanese participants to evaluate mouth shapes associated with five Japanese vowels by using 10 five-item semantic differential scales. The results reveal that the physical characteristics of the facial expressions (mouth shapes) induced specific evaluations. For example, the mouth shape made to voice the vowel "a" was the one with the biggest, widest, and highest facial components compared to other mouth shapes, and people perceived words containing that vowel sound as bigger. The mouth shape used to pronounce the vowel "i" was perceived as more likable than those of the other four vowels. These findings indicate that the mouth shapes producing vowels imply specific meanings. Our study provides clues about the meaning of verbal sounds and what facial expressions in communication represent to the perceiver.
Affiliation(s)
- Shushi Namba
- Psychological Process Team, BZP, Robotics Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 6190288, Japan
- Toshimune Kambara
- Department of Psychology, Graduate School of Education, Hiroshima University, 1-1-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 7398524, Japan
6. Filippi P. Emotional Voice Intonation: A Communication Code at the Origins of Speech Processing and Word-Meaning Associations? J Nonverbal Behav 2020. DOI: 10.1007/s10919-020-00337-z.
Abstract
The aim of the present work is to investigate the facilitating effect of vocal emotional intonation on the evolution of the following processes involved in language: (a) identifying and producing phonemes, (b) processing compositional rules underlying vocal utterances, and (c) associating vocal utterances with meanings. To this end, firstly, I examine research on the presence of these abilities in animals, and the biologically ancient nature of emotional vocalizations. Secondly, I review research attesting to the facilitating effect of emotional voice intonation on these abilities in humans. Thirdly, building on these studies in animals and humans, and through taking an evolutionary perspective, I provide insights for future empirical work on the facilitating effect of emotional intonation on these three processes in animals and preverbal humans. In this work, I highlight the importance of a comparative approach to investigate language evolution empirically. This review supports Darwin’s hypothesis, according to which the ability to express emotions through voice modulation was a key step in the evolution of spoken language.
7
Abstract
Prior investigations have demonstrated that people tend to link pseudowords such as bouba to rounded shapes and kiki to spiky shapes, but the cognitive processes underlying this matching bias have remained controversial. Here, we present three experiments underscoring the fundamental role of emotional mediation in this sound–shape mapping. Using stimuli from key previous studies, we found that kiki-like pseudowords and spiky shapes, compared with bouba-like pseudowords and rounded shapes, consistently elicit higher levels of affective arousal, which we assessed through both subjective ratings (Experiment 1, N = 52) and acoustic models implemented on the basis of pseudoword material (Experiment 2, N = 70). Crucially, the mediating effect of arousal generalizes to novel pseudowords (Experiment 3, N = 64, which was preregistered). These findings highlight the role that human emotion may play in language development and evolution by grounding associations between abstract concepts (e.g., shapes) and linguistic signs (e.g., words) in the affective system.
Affiliation(s)
- Arash Aryani
- Department of Education and Psychology, Freie Universität Berlin
- Center for Cognitive Neuroscience Berlin, Freie Universität Berlin
- Morten H Christiansen
- Department of Psychology, Cornell University
- Interacting Minds Centre, Aarhus University
- School of Communication and Culture, Aarhus University
8. Aryani A, Hsu CT, Jacobs AM. Affective iconic words benefit from additional sound-meaning integration in the left amygdala. Hum Brain Mapp 2019; 40:5289-5300. PMID: 31444898. PMCID: PMC6864889. DOI: 10.1002/hbm.24772.
Abstract
Recent studies have shown that similarity between the sound and meaning of a word (i.e., iconicity) can provide more ready access to that word's meaning, but the neural mechanisms underlying this beneficial role of iconicity in semantic processing remain largely unknown. In an fMRI study, we focused on the affective domain and examined whether affective iconic words (e.g., high arousal in both sound and meaning) activate additional brain regions that integrate emotional information from different domains (i.e., sound and meaning). In line with our hypothesis, affective iconic words, compared to their non-iconic counterparts, elicited additional BOLD responses in the left amygdala, known for its role in the multimodal representation of emotions. Functional connectivity analyses revealed that the observed amygdalar activity was modulated by an interaction of the iconic condition and activations in two hubs representative of processing the sound (left superior temporal gyrus) and meaning (left inferior frontal gyrus) of words. These results provide a neural explanation for the facilitative role of iconicity in language processing and indicate that language users are sensitive to the interaction between the sound and meaning aspects of words, suggesting that iconicity is a general property of human language.
Affiliation(s)
- Arash Aryani
- Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Germany
- Chun-Ting Hsu
- Kokoro Research Center, Kyoto University, Kyoto, Japan
- Arthur M Jacobs
- Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Germany
- Centre for Cognitive Neuroscience Berlin (CCNB), Berlin, Germany
9. Reybrouck M, Podlipniak P. Preconceptual Spectral and Temporal Cues as a Source of Meaning in Speech and Music. Brain Sci 2019; 9:E53. PMID: 30832292. PMCID: PMC6468545. DOI: 10.3390/brainsci9030053.
Abstract
This paper explores the importance of preconceptual meaning in speech and music, stressing the role of affective vocalizations as a common ancestral instrument in communicative interactions. Speech and music are sensory rich stimuli, both at the level of production and perception, which involve different body channels, mainly the face and the voice. However, this bimodal approach has been challenged as being too restrictive. A broader conception argues for an action-oriented embodied approach that stresses the reciprocity between multisensory processing and articulatory-motor routines. There is, however, a distinction between language and music, with the latter being largely unable to function referentially. Contrary to the centrifugal tendency of language to direct the attention of the receiver away from the text or speech proper, music is centripetal in directing the listener's attention to the auditory material itself. Sound, therefore, can be considered as the meeting point between speech and music and the question can be raised as to the shared components between the interpretation of sound in the domain of speech and music. In order to answer these questions, this paper elaborates on the following topics: (i) The relationship between speech and music with a special focus on early vocalizations in humans and non-human primates; (ii) the transition from sound to meaning in speech and music; (iii) the role of emotion and affect in early sound processing; (iv) vocalizations and nonverbal affect burst in communicative sound comprehension; and (v) the acoustic features of affective sound with a special emphasis on temporal and spectrographic cues as parts of speech prosody and musical expressiveness.
Affiliation(s)
- Mark Reybrouck
- Musicology Research Group, KU Leuven-University of Leuven, 3000 Leuven, Belgium and IPEM-Department of Musicology, Ghent University, 9000 Ghent, Belgium
- Piotr Podlipniak
- Institute of Musicology, Adam Mickiewicz University in Poznań, ul. Umultowska 89D, 61-614 Poznań, Poland