1. Kilpatrick A, Ćwiek A. Using artificial intelligence to explore sound symbolic expressions of gender in American English. PeerJ Comput Sci 2024;10:e1811. PMID: 38283586; PMCID: PMC10821993; DOI: 10.7717/peerj-cs.1811. Received 2023-09-28; accepted 2023-12-18.
Abstract
This study investigates the extent to which gender can be inferred from the phonemes that make up given names and words in American English. Two extreme gradient boosted algorithms were constructed to classify words according to gender: one using a list of the most common given names (N ≈ 1,000) in North America and the other using the Glasgow Norms (N ≈ 5,500), a corpus of nouns, verbs, adjectives, and adverbs that have each been assigned a psycholinguistic score for how strongly they are associated with male or female behaviour. Both models report significant findings, but the model constructed using given names achieves greater accuracy despite being trained on a smaller dataset, suggesting that gender is expressed more robustly in given names than in other word classes. Feature importance was examined to determine which features contributed to the decision-making process. Feature importance scores revealed a general pattern across both models, but also showed that not all word classes express gender in the same way. Finally, the models were reconstructed and tested on the opposite dataset to determine whether they were useful in classifying opposite samples. The results showed that the models were less accurate when classifying opposite samples, suggesting that each is better suited to classifying words of its own class.
Affiliation(s)
- Alexander Kilpatrick
- International Communication, Nagoya University of Commerce and Business, Nagoya, Aichi, Japan
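The pipeline the abstract describes, count-based phonological features fed to a classifier whose feature importances are then inspected, can be sketched in miniature. This is a toy stand-in, not the paper's code: the name list is a handful of common names chosen for illustration, character counts stand in for phoneme counts, and a smoothed count-based classifier replaces XGBoost so the sketch needs no external dependencies.

```python
# Simplified stand-in for the study's pipeline: per-symbol counts extracted
# from names, then classification by gender. The paper trains extreme
# gradient boosted (XGBoost) models on phoneme features; here character
# counts and a smoothed count classifier keep the sketch dependency-free.
from collections import Counter, defaultdict

TRAIN = [("emma", "f"), ("olivia", "f"), ("sophia", "f"), ("mia", "f"),
         ("liam", "m"), ("noah", "m"), ("ethan", "m"), ("lucas", "m")]

def features(name):
    return Counter(name)  # character counts stand in for phoneme counts

# Per-class feature totals (the "training" step).
totals = defaultdict(Counter)
for name, gender in TRAIN:
    totals[gender] += features(name)

def classify(name):
    # Score each class by how often its training names used this name's
    # characters, with add-one smoothing so unseen characters do not
    # zero out a class.
    def score(gender):
        t = totals[gender]
        n = sum(t.values())
        s = 1.0
        for ch, k in features(name).items():
            s *= ((t[ch] + 1) / (n + 26)) ** k
        return s
    return max(totals, key=score)

print(classify("ava"))  # prints: f
```

In the real model, the analogue of inspecting `totals` is reading XGBoost's per-feature importance scores, which is how the study identifies which phonemes drive the gender decision.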
2. Sidhu DM, Athanasopoulou A, Archer SL, Czarnecki N, Curtin S, Pexman PM. The maluma/takete effect is late: No longitudinal evidence for shape sound symbolism in the first year. PLoS One 2023;18:e0287831. PMID: 37943758; PMCID: PMC10635456; DOI: 10.1371/journal.pone.0287831. Received 2022-12-09; accepted 2023-06-14.
Abstract
The maluma/takete effect refers to an association between certain language sounds (e.g., /m/ and /o/) and round shapes, and other language sounds (e.g., /t/ and /i/) and spiky shapes. This is an example of sound symbolism and stands in opposition to the arbitrariness of language. It is still unknown when sensitivity to sound symbolism emerges. In the present series of studies, we first confirmed that the classic maluma/takete effect is observed in adults using our novel 3-D object stimuli (Experiments 1a and 1b). We then conducted the first longitudinal test of the maluma/takete effect, testing infants at 4, 8, and 12 months of age (Experiment 2). Sensitivity to sound symbolism was measured with a looking-time preference task, in which infants were shown images of a round and a spiky 3-D object while hearing either a round- or spiky-sounding nonword. We did not detect a significant difference in looking time based on nonword type. We also collected a series of individual difference measures, including measures of vocabulary, movement ability, and babbling. Analyses of these measures revealed that 12-month-olds who babbled more showed greater sensitivity to sound symbolism. Finally, in Experiment 3, we had parents take home round or spiky 3-D printed objects to present to 7- to 8-month-old infants paired with either congruent or incongruent nonwords. This language experience had no effect on subsequent measures of sound symbolism sensitivity. Taken together, these studies demonstrate that sound symbolism is elusive in the first year, and they shed light on the mechanisms that may contribute to its eventual emergence.
Affiliation(s)
- David M. Sidhu
- Department of Psychology, Carleton University, Ottawa, Canada
- Angeliki Athanasopoulou
- School of Languages, Linguistics, Literatures, and Cultures, University of Calgary, Calgary, Canada
- Suzanne Curtin
- Department of Child and Youth Studies, Brock University, St. Catharines, Canada
- Penny M. Pexman
- Department of Psychology, University of Calgary, Calgary, Canada
3. Barany DA, Lacey S, Matthews KL, Nygaard LC, Sathian K. Neural basis of sound-symbolic pseudoword-shape correspondences. Neuropsychologia 2023;188:108657. PMID: 37543139; PMCID: PMC10529692; DOI: 10.1016/j.neuropsychologia.2023.108657. Received 2023-02-20; revised 2023-06-23; accepted 2023-08-02.
Abstract
Non-arbitrary mapping between the sound of a word and its meaning, termed sound symbolism, is commonly studied through crossmodal correspondences between sounds and visual shapes, e.g., auditory pseudowords, like 'mohloh' and 'kehteh', are matched to rounded and pointed visual shapes, respectively. Here, we used functional magnetic resonance imaging (fMRI) during a crossmodal matching task to investigate the hypotheses that sound symbolism (1) involves language processing; (2) depends on multisensory integration; (3) reflects embodiment of speech in hand movements. These hypotheses lead to corresponding neuroanatomical predictions of crossmodal congruency effects in (1) the language network; (2) areas mediating multisensory processing, including visual and auditory cortex; (3) regions responsible for sensorimotor control of the hand and mouth. Right-handed participants (n = 22) encountered audiovisual stimuli comprising a simultaneously presented visual shape (rounded or pointed) and an auditory pseudoword ('mohloh' or 'kehteh') and indicated via a right-hand keypress whether the stimuli matched or not. Reaction times were faster for congruent than incongruent stimuli. Univariate analysis showed that activity was greater for the congruent compared to the incongruent condition in the left primary and association auditory cortex, and left anterior fusiform/parahippocampal gyri. Multivoxel pattern analysis revealed higher classification accuracy for the audiovisual stimuli when congruent than when incongruent, in the pars opercularis of the left inferior frontal (Broca's area), the left supramarginal, and the right mid-occipital gyri. These findings, considered in relation to the neuroanatomical predictions, support the first two hypotheses and suggest that sound symbolism involves both language processing and multisensory integration.
Affiliation(s)
- Deborah A Barany
- Department of Kinesiology, University of Georgia and Augusta University/University of Georgia Medical Partnership, Athens, GA, 30602, USA
- Simon Lacey
- Department of Neurology, Penn State College of Medicine, Hershey, PA, 17033-0859, USA; Department of Neural & Behavioral Sciences, Penn State College of Medicine, Hershey, PA, 17033-0859, USA; Department of Psychology, Penn State College of Liberal Arts, University Park, PA, 16802, USA
- Kaitlyn L Matthews
- Department of Psychology, Emory University, Atlanta, GA, 30322, USA; Present address: Department of Psychological & Brain Sciences, Washington University in St. Louis, St. Louis, MO, 63130, USA
- Lynne C Nygaard
- Department of Psychology, Emory University, Atlanta, GA, 30322, USA
- K Sathian
- Department of Neurology, Penn State College of Medicine, Hershey, PA, 17033-0859, USA; Department of Neural & Behavioral Sciences, Penn State College of Medicine, Hershey, PA, 17033-0859, USA; Department of Psychology, Penn State College of Liberal Arts, University Park, PA, 16802, USA.
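The multivoxel pattern analysis (MVPA) logic reported in this study, training a classifier on voxel activity patterns and treating above-chance accuracy as evidence that a region distinguishes conditions, can be illustrated with synthetic data. Everything below is invented for illustration: real MVPA operates on cross-validated fMRI response patterns, typically with linear classifiers, whereas this sketch uses Gaussian noise and a nearest-centroid rule.

```python
# Toy illustration of the MVPA logic: if a classifier trained on voxel
# patterns separates congruent from incongruent trials above chance, the
# region's patterns carry information about congruency. Data are synthetic.
import random

random.seed(0)

def make_trial(condition, n_voxels=20, signal=0.8):
    # Congruent trials get a small additive signal on half the voxels.
    base = [random.gauss(0.0, 1.0) for _ in range(n_voxels)]
    if condition == "congruent":
        base = [v + signal if i % 2 == 0 else v for i, v in enumerate(base)]
    return base

def centroid(patterns):
    return [sum(p[i] for p in patterns) / len(patterns)
            for i in range(len(patterns[0]))]

def classify(pattern, centroids):
    # Nearest-centroid rule: assign the condition whose mean pattern is closest.
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(pattern, c))
    return min(centroids, key=lambda cond: dist(centroids[cond]))

conditions = ("congruent", "incongruent")
train = {c: [make_trial(c) for _ in range(40)] for c in conditions}
centroids = {c: centroid(p) for c, p in train.items()}

test = [(c, make_trial(c)) for c in conditions for _ in range(25)]
accuracy = sum(classify(p, centroids) == c for c, p in test) / len(test)
print(f"decoding accuracy: {accuracy:.2f}")  # well above the 0.5 chance level
```

In the study itself, this comparison is run within anatomically defined regions, which is how classification accuracy localizes congruency information to Broca's area, the supramarginal gyrus, and mid-occipital cortex.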
4. Barany DA, Lacey S, Matthews KL, Nygaard LC, Sathian K. Neural basis of sound-symbolic pseudoword-shape correspondences. bioRxiv [Preprint] 2023:2023.04.14.536865. PMID: 37425853; PMCID: PMC10327042; DOI: 10.1101/2023.04.14.536865.
Abstract
Non-arbitrary mapping between the sound of a word and its meaning, termed sound symbolism, is commonly studied through crossmodal correspondences between sounds and visual shapes, e.g., auditory pseudowords, like 'mohloh' and 'kehteh', are matched to rounded and pointed visual shapes, respectively. Here, we used functional magnetic resonance imaging (fMRI) during a crossmodal matching task to investigate the hypotheses that sound symbolism (1) involves language processing; (2) depends on multisensory integration; (3) reflects embodiment of speech in hand movements. These hypotheses lead to corresponding neuroanatomical predictions of crossmodal congruency effects in (1) the language network; (2) areas mediating multisensory processing, including visual and auditory cortex; (3) regions responsible for sensorimotor control of the hand and mouth. Right-handed participants (n = 22) encountered audiovisual stimuli comprising a simultaneously presented visual shape (rounded or pointed) and an auditory pseudoword ('mohloh' or 'kehteh') and indicated via a right-hand keypress whether the stimuli matched or not. Reaction times were faster for congruent than incongruent stimuli. Univariate analysis showed that activity was greater for the congruent compared to the incongruent condition in the left primary and association auditory cortex, and left anterior fusiform/parahippocampal gyri. Multivoxel pattern analysis revealed higher classification accuracy for the audiovisual stimuli when congruent than when incongruent, in the pars opercularis of the left inferior frontal (Broca's area), the left supramarginal, and the right mid-occipital gyri. These findings, considered in relation to the neuroanatomical predictions, support the first two hypotheses and suggest that sound symbolism involves both language processing and multisensory integration.
Highlights:
- fMRI investigation of sound-symbolic correspondences between auditory pseudowords and visual shapes
- Faster reaction times for congruent than incongruent audiovisual stimuli
- Greater activation in auditory and visual cortices for congruent stimuli
- Higher classification accuracy for congruent stimuli in language and visual areas
- Sound symbolism involves language processing and multisensory integration
Affiliation(s)
- Deborah A. Barany
- Department of Kinesiology, University of Georgia and Augusta University/University of Georgia Medical Partnership, Athens, GA, 30602, USA
- Simon Lacey
- Department of Neurology, Penn State Colleges of Medicine and Liberal Arts, Hershey, PA 17033-0859, USA
- Department of Neural & Behavioral Sciences, Penn State Colleges of Medicine and Liberal Arts, Hershey, PA 17033-0859, USA
- Department of Psychology, Penn State Colleges of Medicine and Liberal Arts, Hershey, PA 17033-0859, USA
- Kaitlyn L. Matthews
- Department of Psychology, Emory University, Atlanta, GA 30322, USA
- Present address: Department of Psychological & Brain Sciences, Washington University in St. Louis, St. Louis, MO 63130
- Lynne C. Nygaard
- Department of Psychology, Emory University, Atlanta, GA 30322, USA
- K. Sathian
- Department of Neurology, Penn State Colleges of Medicine and Liberal Arts, Hershey, PA 17033-0859, USA
- Department of Neural & Behavioral Sciences, Penn State Colleges of Medicine and Liberal Arts, Hershey, PA 17033-0859, USA
- Department of Psychology, Penn State Colleges of Medicine and Liberal Arts, Hershey, PA 17033-0859, USA
5. Sciortino P, Kayser C. Steady state visual evoked potentials reveal a signature of the pitch-size crossmodal association in visual cortex. Neuroimage 2023;273:120093. PMID: 37028733; DOI: 10.1016/j.neuroimage.2023.120093. Received 2023-02-02; revised 2023-03-31; accepted 2023-04-04.
Abstract
Crossmodal correspondences describe our tendency to associate sensory features from different modalities with each other, such as the pitch of a sound with the size of a visual object. While such crossmodal correspondences (or associations) are described in many behavioural studies, their neurophysiological correlates remain unclear. Under the current working model of multisensory perception, both a low- and a high-level account seem plausible. That is, the neurophysiological processes shaping these associations could commence in low-level sensory regions, or may predominantly emerge in high-level association regions of semantic and object identification networks. We exploited steady-state visual evoked potentials (SSVEPs) to probe this question directly, focusing on the associations between pitch and the visual features of size, hue, and chromatic saturation. We found that SSVEPs over occipital regions are sensitive to the congruency between pitch and size, and a source analysis pointed to an origin around the primary visual cortices. We speculate that this signature of the pitch-size association in low-level visual cortices reflects the successful pairing of congruent visual and acoustic object properties and may contribute to establishing causal relations between multisensory objects. Beyond this, our study also provides a paradigm that can be exploited to study other crossmodal associations involving visual stimuli.
6. Kilpatrick A, Ćwiek A, Lewis E, Kawahara S. A cross-linguistic, sound symbolic relationship between labial consonants, voiced plosives, and Pokémon friendship. Front Psychol 2023;14:1113143. PMID: 36910799; PMCID: PMC10000297; DOI: 10.3389/fpsyg.2023.1113143. Received 2022-12-01; accepted 2023-02-07.
Abstract
Introduction: This paper presents a cross-linguistic study of sound symbolism, analysing a six-language corpus of all Pokémon names available as of January 2022. It tests the effects of labial consonants and voiced plosives on a Pokémon attribute known as friendship, a mechanic in the core series of Pokémon video games that arguably reflects how friendly each Pokémon is.
Method: Poisson regression is used to examine the relationship between the friendship mechanic and the number of times /p/, /b/, /d/, /m/, /g/, and /w/ occur in the names of English, Japanese, Korean, Chinese, German, and French Pokémon.
Results: The bilabial plosives /p/ and /b/ typically represent high friendship values in Pokémon names, while /m/, /d/, and /g/ typically represent low friendship values. No association is found for /w/ in any language.
Discussion: Many previously known cases of cross-linguistic sound symbolic patterns can be explained by the relationship between how the sounds in words are articulated and the physical qualities of their referents. This study, however, builds on the underexplored relationship between sound symbolism and abstract qualities.
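The core analysis here, a Poisson regression of the friendship value on phoneme counts, can be sketched in miniature. The (count, friendship) pairs below are invented toy values, not Pokémon data, and the model is fit by plain gradient ascent on the Poisson log-likelihood rather than with a statistics package; exp(b1) is then the multiplicative change in expected friendship per additional occurrence of the phoneme.

```python
# Sketch of a Poisson regression with a log link: friendship ~ exp(b0 + b1*x),
# where x is how often a target consonant appears in a name. Toy data.
import math

# (occurrences of /b/ in the name, friendship value) -- invented for illustration
data = [(0, 2), (0, 3), (1, 4), (1, 5), (2, 7), (2, 8), (3, 12)]

def fit_poisson(data, lr=0.01, steps=20000):
    # Gradient ascent on the Poisson log-likelihood
    #   logL = sum(y * (b0 + b1*x) - exp(b0 + b1*x))
    b0, b1 = 0.0, 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in data:
            mu = math.exp(b0 + b1 * x)  # Poisson mean under the log link
            g0 += y - mu                # d logL / d b0
            g1 += (y - mu) * x          # d logL / d b1
        b0 += lr * g0 / len(data)
        b1 += lr * g1 / len(data)
    return b0, b1

b0, b1 = fit_poisson(data)
# exp(b1): multiplicative change in expected friendship per extra /b/
print(f"rate ratio per /b/: {math.exp(b1):.2f}")  # roughly 1.7 on this toy data
```

A positive b1, as in this toy fit, corresponds to the paper's finding for bilabial plosives (higher friendship), while the negative coefficients it reports for /m/, /d/, and /g/ would appear here as rate ratios below 1.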