1. Barany DA, Lacey S, Matthews KL, Nygaard LC, Sathian K. Neural basis of sound-symbolic pseudoword-shape correspondences. Neuropsychologia 2023; 188:108657. PMID: 37543139; PMCID: PMC10529692; DOI: 10.1016/j.neuropsychologia.2023.108657.
Abstract
Non-arbitrary mapping between the sound of a word and its meaning, termed sound symbolism, is commonly studied through crossmodal correspondences between sounds and visual shapes, e.g., auditory pseudowords, like 'mohloh' and 'kehteh', are matched to rounded and pointed visual shapes, respectively. Here, we used functional magnetic resonance imaging (fMRI) during a crossmodal matching task to investigate the hypotheses that sound symbolism (1) involves language processing; (2) depends on multisensory integration; (3) reflects embodiment of speech in hand movements. These hypotheses lead to corresponding neuroanatomical predictions of crossmodal congruency effects in (1) the language network; (2) areas mediating multisensory processing, including visual and auditory cortex; (3) regions responsible for sensorimotor control of the hand and mouth. Right-handed participants (n = 22) encountered audiovisual stimuli comprising a simultaneously presented visual shape (rounded or pointed) and an auditory pseudoword ('mohloh' or 'kehteh') and indicated via a right-hand keypress whether the stimuli matched or not. Reaction times were faster for congruent than incongruent stimuli. Univariate analysis showed that activity was greater for the congruent compared to the incongruent condition in the left primary and association auditory cortex, and left anterior fusiform/parahippocampal gyri. Multivoxel pattern analysis revealed higher classification accuracy for the audiovisual stimuli when congruent than when incongruent, in the pars opercularis of the left inferior frontal (Broca's area), the left supramarginal, and the right mid-occipital gyri. These findings, considered in relation to the neuroanatomical predictions, support the first two hypotheses and suggest that sound symbolism involves both language processing and multisensory integration.
Affiliation(s)
- Deborah A Barany
- Department of Kinesiology, University of Georgia and Augusta University/University of Georgia Medical Partnership, Athens, GA, 30602, USA
- Simon Lacey
- Department of Neurology, Penn State College of Medicine, Hershey, PA, 17033-0859, USA; Department of Neural & Behavioral Sciences, Penn State College of Medicine, Hershey, PA, 17033-0859, USA; Department of Psychology, Penn State College of Liberal Arts, University Park, PA, 16802, USA
- Kaitlyn L Matthews
- Department of Psychology, Emory University, Atlanta, GA, 30322, USA; Present address: Department of Psychological & Brain Sciences, Washington University in St. Louis, St. Louis, MO, 63130, USA
- Lynne C Nygaard
- Department of Psychology, Emory University, Atlanta, GA, 30322, USA
- K Sathian
- Department of Neurology, Penn State College of Medicine, Hershey, PA, 17033-0859, USA; Department of Neural & Behavioral Sciences, Penn State College of Medicine, Hershey, PA, 17033-0859, USA; Department of Psychology, Penn State College of Liberal Arts, University Park, PA, 16802, USA
2. Barany DA, Lacey S, Matthews KL, Nygaard LC, Sathian K. Neural basis of sound-symbolic pseudoword-shape correspondences. bioRxiv: The Preprint Server for Biology 2023:2023.04.14.536865. PMID: 37425853; PMCID: PMC10327042; DOI: 10.1101/2023.04.14.536865.
Highlights
- fMRI investigation of sound-symbolic correspondences between auditory pseudowords and visual shapes
- Faster reaction times for congruent than incongruent audiovisual stimuli
- Greater activation in auditory and visual cortices for congruent stimuli
- Higher classification accuracy for congruent stimuli in language and visual areas
- Sound symbolism involves language processing and multisensory integration
Affiliation(s)
- Deborah A. Barany
- Department of Kinesiology, University of Georgia and Augusta University/University of Georgia Medical Partnership, Athens, GA, 30602, USA
- Simon Lacey
- Department of Neurology, Penn State Colleges of Medicine and Liberal Arts, Hershey, PA 17033-0859, USA
- Department of Neural & Behavioral Sciences, Penn State Colleges of Medicine and Liberal Arts, Hershey, PA 17033-0859, USA
- Department of Psychology, Penn State Colleges of Medicine and Liberal Arts, Hershey, PA 17033-0859, USA
- Kaitlyn L. Matthews
- Department of Psychology, Emory University, Atlanta, GA 30322, USA
- Present address: Department of Psychological & Brain Sciences, Washington University in St. Louis, St. Louis, MO 63130
- Lynne C. Nygaard
- Department of Psychology, Emory University, Atlanta, GA 30322, USA
- K. Sathian
- Department of Neurology, Penn State Colleges of Medicine and Liberal Arts, Hershey, PA 17033-0859, USA
- Department of Neural & Behavioral Sciences, Penn State Colleges of Medicine and Liberal Arts, Hershey, PA 17033-0859, USA
- Department of Psychology, Penn State Colleges of Medicine and Liberal Arts, Hershey, PA 17033-0859, USA
3. Atilgan H, Koi JXJ, Wong E, Laakso I, Matilainen N, Pasqualotto A, Tanaka S, Chen SHA, Kitada R. Functional relevance of the extrastriate body area for visual and haptic object recognition: a preregistered fMRI-guided TMS study. Cereb Cortex Commun 2023; 4:tgad005. PMID: 37188067; PMCID: PMC10176024; DOI: 10.1093/texcom/tgad005.
Abstract
The extrastriate body area (EBA) is a region in the lateral occipito-temporal cortex (LOTC) that is sensitive to perceived body parts. Neuroimaging studies have suggested that EBA is involved in body and tool processing regardless of sensory modality. However, how essential this region is for visual tool processing and for nonvisual object processing remains controversial. In this preregistered fMRI-guided repetitive transcranial magnetic stimulation (rTMS) study, we examined the causal involvement of EBA in multisensory body and tool recognition. Participants used either vision or haptics to identify three object categories: hands, teapots (tools), and cars (control objects). Continuous theta-burst stimulation (cTBS) was applied over left EBA, right EBA, or the vertex (control site). Performance for visually perceived hands and teapots (relative to cars) was more strongly disrupted by cTBS over left EBA than over the vertex, whereas no such object-specific effect was observed in haptics. Simulation of the induced electric fields confirmed that cTBS affected regions including EBA. These results indicate that the LOTC is functionally relevant for visual hand and tool processing, whereas rTMS over EBA may affect object recognition differently in the two sensory modalities.
Affiliation(s)
- Hicret Atilgan
- Psychology, School of Social Sciences, Nanyang Technological University, 48 Nanyang Avenue, Singapore 639818, Singapore
- J X Janice Koi
- Psychology, School of Social Sciences, Nanyang Technological University, 48 Nanyang Avenue, Singapore 639818, Singapore
- Ern Wong
- IMT School for Advanced Studies Lucca, Piazza S. Francesco, 19, 55100 Lucca LU, Italy
- Ilkka Laakso
- Department of Electrical Engineering and Automation, Aalto University, Otakaari 3, 02150 Espoo, Finland
- Noora Matilainen
- Department of Electrical Engineering and Automation, Aalto University, Otakaari 3, 02150 Espoo, Finland
- Achille Pasqualotto
- Faculty of Human Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8577, Japan
- Satoshi Tanaka
- Department of Psychology, Hamamatsu University School of Medicine, 1-20-1 Handayama, Higashi Ward, Hamamatsu, Shizuoka 431-3192, Japan
- S H Annabel Chen
- Psychology, School of Social Sciences, Nanyang Technological University, 48 Nanyang Avenue, Singapore 639818, Singapore
- Centre for Research and Development in Learning, Nanyang Technological University, 61 Nanyang Drive, Singapore 637335, Singapore
- Lee Kong Chian School of Medicine (LKCMedicine), Nanyang Technological University, 11 Mandalay Road, Singapore 308232, Singapore
- Ryo Kitada
- Graduate School of Intercultural Studies, Kobe University, 12-1 Tsurukabuto, Nada Ward, Kobe, Hyogo 657-0013, Japan (corresponding author)
4. Tan ZY, Choo CM, Lin Y, Ho HN, Kitada R. The effect of temperature on tactile softness perception. IEEE Trans Haptics 2022; 15:638-645. PMID: 35951577; DOI: 10.1109/toh.2022.3198115.
Abstract
We are adept at discriminating object properties such as softness and temperature using touch. Previous studies have investigated each of these object properties separately, but the interactions between them are not fully understood. Tactile softness perception relies on multiple sensory cues, such as the size of the contact area, indentation depth, and exerted force. In addition to these cues, the temperature of a stimulus may contribute to tactile softness perception by changing sensitivity to changes in stimulus compliance. To test this hypothesis, we conducted two psychophysical experiments in which subjects estimated the magnitude of perceived softness after touching deformable objects whose compliance and temperature we varied. The linear functions of compliance fit to the magnitude estimates under cold conditions (9-15°C) were steeper than those fit under room-temperature conditions (21-25°C). These results indicate that cold temperatures can sharpen tactile softness perception of deformable surfaces by increasing sensitivity to differences in compliance.
5. Wong LS, Kwon J, Zheng Z, Styles SJ, Sakamoto M, Kitada R. Japanese sound-symbolic words for representing the hardness of an object are judged similarly by Japanese and English speakers. Front Psychol 2022; 13:830306. PMID: 35369145; PMCID: PMC8965287; DOI: 10.3389/fpsyg.2022.830306.
Abstract
Contrary to the assumption of arbitrariness in modern linguistics, sound symbolism, a non-arbitrary relationship between sounds and meanings, exists. Sound symbolism, including the “Bouba–Kiki” effect, implies the universality of such relationships: individuals from different cultural and linguistic backgrounds can relate sound-symbolic words to referents in similar ways, although the extent of these similarities remains to be fully understood. Here, we examined whether subjects from different countries could similarly infer surface texture properties from words that sound-symbolically represent hardness in Japanese. We prepared Japanese sound-symbolic words whose novelty was manipulated by a genetic algorithm (GA). Japanese speakers in Japan and English speakers in both Singapore and the United States rated these words on surface texture properties (hardness, warmness, and roughness) as well as familiarity. The results show that hardness-related words were rated as harder and rougher than softness-related words, regardless of novelty and country. Multivariate analyses of the ratings classified the hardness-related words along the hardness-softness dimension at over 80% accuracy, regardless of country. Multiple regression analyses revealed that the number of the speech sounds /g/ and /k/ predicted the ratings of surface texture properties in the non-Japanese countries, suggesting a systematic relationship between the phonetic features of a word and the perceptual quality it represents across culturally and linguistically diverse samples.
Affiliation(s)
- Li Shan Wong
- Division of Psychology, School of Social Sciences, Nanyang Technological University, Singapore, Singapore
- Jinhwan Kwon
- Faculty of Education, Kyoto University of Education, Kyoto, Japan
- Zane Zheng
- Department of Psychology, Lasell University, Newton, MA, United States
- Suzy J Styles
- Division of Psychology, School of Social Sciences, Nanyang Technological University, Singapore, Singapore
- Maki Sakamoto
- Department of Informatics, Graduate School of Informatics and Engineering, The University of Electro-Communications, Chofu, Japan
- Ryo Kitada
- Division of Psychology, School of Social Sciences, Nanyang Technological University, Singapore, Singapore; Graduate School of Intercultural Studies, Kobe University, Kobe, Japan
6. Yamagata K, Kwon J, Kawashima T, Shimoda W, Sakamoto M. Computer vision system for expressing texture using sound-symbolic words. Front Psychol 2021; 12:654779. PMID: 34690855; PMCID: PMC8529034; DOI: 10.3389/fpsyg.2021.654779.
Abstract
The major goals of texture research in computer vision are to understand, model, and process texture, and ultimately to simulate human visual information processing using computer technologies. The field has witnessed remarkable advances in material recognition using deep convolutional neural networks (DCNNs), which have enabled applications such as self-driving cars, facial and gesture recognition, and automatic number-plate recognition. However, it remains difficult for computer vision systems to "express" texture as humans do, because texture description is ambiguous and has no single correct answer. In this paper, we develop a computer vision method using DCNNs that expresses the texture of materials. To achieve this goal, we focus on Japanese "sound-symbolic" words, which can describe differences in texture sensation at fine resolution and are known to have strong, systematic sensory-sound associations. Because the phonemes of Japanese sound-symbolic words characterize categories of texture sensations, our method generates the phonemes and structure of sound-symbolic words that probabilistically correspond to the input images. In our evaluation, the sound-symbolic words output by the system achieved an accuracy of about 80%.
Affiliation(s)
- Koichi Yamagata
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Chofu, Japan
- Jinhwan Kwon
- Department of Education, Kyoto University of Education, Kyoto, Japan
- Takuya Kawashima
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Chofu, Japan
- Wataru Shimoda
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Chofu, Japan
- Maki Sakamoto
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Chofu, Japan