1. Vainio L, Wikström A, Vainio M. Pitch-based correspondences related to abstract concepts. Acta Psychol (Amst) 2025; 253:104754. PMID: 39862450. DOI: 10.1016/j.actpsy.2025.104754.
Abstract
Previous investigations have shown pitch-based correspondences with various perceptual and conceptual attributes. The present study reveals two novel pitch-based correspondences with highly abstract concepts. Three experiments with varying levels of implicitness of the association task showed that the concepts of future and in are associated with high-pitch sounds, while past and out are associated with low-pitch sounds. Hence, pitch-based correspondences can be observed even with temporal concepts that cannot be unambiguously represented in any perceptual format, at least not without spatial metaphorization. The correspondence effects were even more robust with the abstract temporal concepts of future/past than with the more concrete spatial concepts of in/out. We propose that these effects might emerge from semantic multimodal abstraction processes mediated by affective dimensions of particular concepts.
Affiliations
- L Vainio
  - Phonetics and speech synthesis research group, Department of Digital Humanities, Faculty of Arts, University of Helsinki, Unioninkatu 38, Helsinki, Finland
- A Wikström
  - Phonetics and speech synthesis research group, Department of Digital Humanities, Faculty of Arts, University of Helsinki, Unioninkatu 38, Helsinki, Finland
- M Vainio
  - Phonetics and speech synthesis research group, Department of Digital Humanities, Faculty of Arts, University of Helsinki, Unioninkatu 38, Helsinki, Finland
2. Groves K, Farbood MM, Carone B, Ripollés P, Zuanazzi A. Acoustic features of instrumental movie soundtracks elicit distinct and mostly non-overlapping extra-musical meanings in the mind of the listener. Sci Rep 2025; 15:2327. PMID: 39825090. PMCID: PMC11748619. DOI: 10.1038/s41598-025-86089-6.
Abstract
Music can evoke powerful emotions in listeners. However, the role that instrumental music (music without any vocal part) plays in conveying extra-musical meaning, above and beyond emotions, is still a debated question. We conducted a study wherein participants (N = 121) listened to twenty 15-second-long excerpts of polyphonic instrumental soundtrack music and reported (i) perceived emotions (e.g., happiness, sadness) as well as (ii) movie scene properties imagined during listening (e.g., scene brightness, character role). We systematically investigated how acoustic features of instrumental soundtrack excerpts (e.g., tempo, loudness) contributed to mental imagery of movie scenes. We show distinct and mostly non-overlapping contributions of acoustic features to the imagination of properties of movie scene settings, characters, actions, and objects. Moreover, we find that negatively valenced emotions fully mediate the relation between a subset of acoustic features and movie scene properties, providing evidence for the importance of emotional valence in evoking mental imagery. The data demonstrate the capacity of music to convey extra-musical semantic information through audition.
Affiliations
- Karleigh Groves
  - Department of Psychology, New York University, New York, NY, USA
  - Music and Audio Research Lab (MARL), New York University, New York, NY, USA
  - Center for Language, Music, and Emotion (CLaME), New York University, Max-Planck Institute, New York, NY, USA
  - Department of Psychology, Lehigh University, Bethlehem, PA, USA
- Morwaread Mary Farbood
  - Music and Audio Research Lab (MARL), New York University, New York, NY, USA
  - Center for Language, Music, and Emotion (CLaME), New York University, Max-Planck Institute, New York, NY, USA
  - Department of Music and Performing Arts Professions, New York University, New York, NY, USA
- Brandon Carone
  - Department of Psychology, New York University, New York, NY, USA
  - Music and Audio Research Lab (MARL), New York University, New York, NY, USA
  - Center for Language, Music, and Emotion (CLaME), New York University, Max-Planck Institute, New York, NY, USA
- Pablo Ripollés
  - Department of Psychology, New York University, New York, NY, USA
  - Music and Audio Research Lab (MARL), New York University, New York, NY, USA
  - Center for Language, Music, and Emotion (CLaME), New York University, Max-Planck Institute, New York, NY, USA
  - Department of Music and Performing Arts Professions, New York University, New York, NY, USA
- Arianna Zuanazzi
  - Department of Psychology, New York University, New York, NY, USA
  - Music and Audio Research Lab (MARL), New York University, New York, NY, USA
  - Center for Language, Music, and Emotion (CLaME), New York University, Max-Planck Institute, New York, NY, USA
3. Reymore L, Lindsey DT. Color and tone color: audiovisual crossmodal correspondences with musical instrument timbre. Front Psychol 2025; 15:1520131. PMID: 39839933. PMCID: PMC11747214. DOI: 10.3389/fpsyg.2024.1520131.
Abstract
Crossmodal correspondences, or widely shared tendencies for mapping experiences across sensory domains, are revealed in common descriptors of musical timbre such as bright, dark, and warm. Two experiments are reported in which participants listened to recordings of musical instruments playing major scales, selected colors to match the timbres, and rated the timbres on crossmodal semantic scales. Experiment A used three different keyboard instruments, each played in three pitch registers. Stimuli in Experiment B, representing six different orchestral instruments, were similar to those in Experiment A but were controlled for pitch register. Overall, results were consistent with hypothesized concordances between ratings on crossmodal timbre descriptors and participants' color associations. Semantic ratings predicted the lightness and saturation of colors matched to instrument timbres; effects were larger when both pitch register and instrument type varied (Experiment A) but were still evident when pitch register was held constant (Experiment B). We also observed a weak relationship between participant ratings of musical stimuli on the terms warm and cool and the warmth-coolness of selected colors in Experiment B only. Results were generally consistent with the hypothesis that instrument type and pitch register are related to color choice, though we speculate that these associations may only be relevant for certain instruments. Overall, the results have implications for our understanding of the relationship between music and color, suggesting that while timbre/color matching behavior is in many ways diverse, observable trends in strategy can in part be linked to crossmodal timbre semantics.
Affiliations
- Lindsey Reymore
  - School of Music, Dance and Theatre, Herberger Institute for Design and the Arts, Arizona State University, Tempe, AZ, United States
  - School of Music, The Ohio State University, Columbus, OH, United States
- Delwin T. Lindsey
  - Department of Psychology, The Ohio State University, Columbus, OH, United States
  - College of Optometry, The Ohio State University, Columbus, OH, United States
4. Antović M, Jovanović VŽ, Popović M. From spatial perception to referential meaning: convergent image schemas in the music of and texts about Beethoven's piano sonatas. Front Psychol 2024; 15:1497557. PMID: 39654938. PMCID: PMC11625561. DOI: 10.3389/fpsyg.2024.1497557.
Abstract
This paper approaches the connection between musical constructs and visuo-haptic experience through the lens of the cognitive-linguistic notion of the "image schema." The proposal is that the subconscious inference of spatial and haptic schematic constructs in music, such as vertical movement, will motivate their equally common occurrence in the language about that music, irrespective of the fact that this language never describes the musical structure in a one-to-one fashion. We have looked for five schemas in the scores for the first ten piano sonatas by Ludwig van Beethoven and three famous analytical and pedagogical texts about them: force, indicating changes in musical dynamics and referential invocation of power-related terms in the books; path, identifying vertical movement in the music and suggestions of upward- or downward motion in the texts; link, suggesting the presence or absence of musical slurs and references to attachment or detachment in the language; balance, indicating the loss and regain of consonance in the harmony and invocation of lost and recovered stability in the verbal semantics; and containment, allocating the nonharmonic tones that "belong" to their resolving notes in the scores and referring to physical or metaphorical enclosed areas in the texts. Results of the corpus analysis suggest the following conclusions: musical schemas outnumber linguistic ones sevenfold; moderate schema strengths are typical of both language and music; predominant valences are shared by language and music in three schemas out of five; hierarchies of five schemas by strength differ, though the strongest schemas are mostly shared. Yet the central finding is that the correlations between each schema pair for music and language, by scalarity and valence, are total. This implies that (1) schemas operate as semantic building blocks irrespective of the external "symbolical form" in which they are realized and (2) scalarized image schema complexes perceived in one cognitive mode may motivate the emergence of a corresponding number of the same complexes in another.
Affiliations
- Mihailo Antović
  - Faculty of Philosophy and Center for Cognitive Sciences, University of Niš, Niš, Serbia
5. Saitis C, Wallmark Z. Timbral brightness perception investigated through multimodal interference. Atten Percept Psychophys 2024; 86:1835-1845. PMID: 39090510. PMCID: PMC11410849. DOI: 10.3758/s13414-024-02934-2.
Abstract
Brightness is among the most studied aspects of timbre perception. Psychoacoustically, sounds described as "bright" versus "dark" typically exhibit a high versus low frequency emphasis in the spectrum. However, relatively little is known about the neurocognitive mechanisms that facilitate these metaphors we listen with. Do they originate in universal magnitude representations common to more than one sensory modality? Triangulating three different interaction paradigms, we used speeded classification to investigate whether intramodal, crossmodal, and amodal interference occurs when timbral brightness (modeled by the centroid of the spectral envelope) and pitch height, visual brightness, or numerical value processing are semantically congruent or incongruent. In four online experiments varying in priming strategy, onset timing, and response deadline, 189 total participants were presented with a baseline stimulus (a pitch, gray square, or numeral) and then asked to quickly identify a target stimulus that was higher/lower, brighter/darker, or greater/less than the baseline, after being primed with a bright or dark synthetic harmonic tone. Results suggest that timbral brightness modulates the perception of pitch and possibly visual brightness, but not numerical value. Semantically incongruent pitch height-timbral brightness shifts produced significantly slower reaction times (RTs) and higher error rates compared to congruent pairs. In the visual task, incongruent pairings of gray squares and tones elicited slower RTs than congruent pairings (in two experiments). No interference was observed in the number comparison task. These findings shed light on the embodied and multimodal nature of experiencing timbre.
Affiliations
- Zachary Wallmark
  - School of Music and Dance and Center for Translational Neuroscience, University of Oregon, Eugene, OR, USA
6. Vainio L, Myllylä IL, Wikström A, Vainio M. High-Pitched Sound is Open and Low-Pitched Sound is Closed: Representing the Spatial Meaning of Pitch Height. Cogn Sci 2024; 48:e13486. PMID: 39155515. DOI: 10.1111/cogs.13486.
Abstract
Research shows that high- and low-pitch sounds can be associated with various meanings. For example, high-pitch sounds are associated with small concepts, whereas low-pitch sounds are associated with large concepts. This study presents three experiments revealing that high-pitch sounds are also associated with open concepts and opening hand actions, while low-pitch sounds are associated with closed concepts and closing hand actions. In Experiment 1, this sound-meaning correspondence effect was shown using a two-alternative forced-choice task, while Experiments 2 and 3 used reaction time tasks to show this interaction. In Experiment 2, high-pitch vocalizations were found to facilitate opening hand gestures, and low-pitch vocalizations to facilitate closing hand gestures, when performed simultaneously. In Experiment 3, high-pitched vocalizations were produced particularly rapidly when the visual target stimulus presented an open object, and low-pitched vocalizations were produced particularly rapidly when the target presented a closed object. These findings are discussed in relation to the meaning of intonational cues, which are suggested to be based on cross-modal representation of conceptual spatial knowledge in sensory, motor, and affective systems. Additionally, this pitch-opening effect might share cognitive processes with other pitch-meaning effects.
Affiliations
- Lari Vainio
  - Department of Digital Humanities, Faculty of Arts, University of Helsinki
  - Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki
- Ida-Lotta Myllylä
  - Department of Digital Humanities, Faculty of Arts, University of Helsinki
- Alexandra Wikström
  - Department of Digital Humanities, Faculty of Arts, University of Helsinki
- Martti Vainio
  - Department of Digital Humanities, Faculty of Arts, University of Helsinki
7. Margiotoudi K, Fagot J, Meguerditchian A, Dautriche I. Humans (Homo sapiens) but not baboons (Papio papio) demonstrate crossmodal pitch-luminance correspondence. Am J Primatol 2024; 86:e23613. PMID: 38475662. DOI: 10.1002/ajp.23613.
Abstract
Humans spontaneously and consistently map information coming from different sensory modalities. Surprisingly, the phylogenetic origin of such cross-modal correspondences has been under-investigated. A notable exception is the study of Ludwig et al. ("Visuoauditory mappings between high luminance and high pitch are shared by chimpanzees [Pan troglodytes] and humans," Proceedings of the National Academy of Sciences, 108(51), 20661-20665), which reports that both humans and chimpanzees spontaneously map high-pitched sounds with bright objects and low-pitched sounds with dark objects. Our pre-registered study aimed to directly replicate this research with both humans and baboons (Papio papio), an Old World monkey that is more phylogenetically distant from humans than chimpanzees are. Following Ludwig et al., participants were presented with a visual classification task in which they had to sort black and white squares (low and high luminance) while background sounds (low- or high-pitched tones) were playing. Whereas we replicated the finding that humans' performance on the visual task was affected by congruency between the sound and the luminance of the target, we did not find any of those effects on baboons' performance. These results question the presence of a shared cross-modal pitch-luminance mapping in other nonhuman primates.
Affiliations
- Konstantina Margiotoudi
  - Centre de Recherche en Psychologie et Neurosciences, UMR7077, CNRS, Aix-Marseille Université, Marseille, France
  - Station de Primatologie-Celphedia UAR846, CNRS, Rousset, France
- Joel Fagot
  - Centre de Recherche en Psychologie et Neurosciences, UMR7077, CNRS, Aix-Marseille Université, Marseille, France
  - Station de Primatologie-Celphedia UAR846, CNRS, Rousset, France
- Adrien Meguerditchian
  - Centre de Recherche en Psychologie et Neurosciences, UMR7077, CNRS, Aix-Marseille Université, Marseille, France
  - Station de Primatologie-Celphedia UAR846, CNRS, Rousset, France
- Isabelle Dautriche
  - Centre de Recherche en Psychologie et Neurosciences, UMR7077, CNRS, Aix-Marseille Université, Marseille, France
8. Chen L. Synesthetic Correspondence: An Overview. Adv Exp Med Biol 2024; 1437:101-119. PMID: 38270856. DOI: 10.1007/978-981-99-7611-9_7.
Abstract
Intramodal and cross-modal perceptual grouping, based on spatial proximity and temporal closeness between multiple sensory stimuli, serves as an operational principle for building a coherent and meaningful representation of a multisensory event or object. To investigate cross-modal perceptual grouping, researchers have employed paradigms such as spatial/temporal ventriloquism and cross-modal dynamic capture, revealing conditional constraints as well as functional facilitations among various correspondences of sensory properties, supported by behavioral evidence, computational frameworks, and brain oscillation patterns. Typically, synesthetic correspondence, as a special type of cross-modal correspondence, can shape the efficiency and effect size of cross-modal interaction. For example, factors such as pitch/loudness in the auditory dimension and size/brightness in the visual dimension can modulate the strength of cross-modal temporal capture. This review summarizes the empirical behavioral findings, together with psychophysical and neurophysiological evidence, on cross-modal perceptual grouping and synesthetic correspondence. Finally, it discusses potential applications (such as artificial synesthesia devices), how synesthetic correspondence interfaces with semantics (sensory linguistics), and promising research questions in this field.
Affiliations
- Lihan Chen
  - School of Psychological and Cognitive Sciences, Peking University, Beijing, China
  - Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
  - Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, China
  - National Key Laboratory of General Artificial Intelligence, Peking University, Beijing, China
  - National Engineering Laboratory for Big Data Analysis and Applications, Peking University, Beijing, China
9. Uno K, Yokosawa K. Does cross-modal correspondence modulate modality-specific perceptual processing? Study using timing judgment tasks. Atten Percept Psychophys 2024; 86:273-284. PMID: 37932495. DOI: 10.3758/s13414-023-02812-3.
Abstract
Cross-modal correspondences refer to associations between stimulus features across sensory modalities. Previous studies have shown that cross-modal correspondences modulate reaction times for detecting and identifying stimuli in one modality when uninformative stimuli from another modality are present. However, it is unclear whether such modulation reflects changes in modality-specific perceptual processing. We used two psychophysical timing judgment tasks to examine the effects of audiovisual correspondences on visual perceptual processing. In Experiment 1, we conducted a temporal order judgment (TOJ) task that asked participants to judge which of two visual stimuli presented at various stimulus onset asynchronies (SOAs) appeared first. In Experiment 2, we conducted a simultaneity judgment (SJ) task that asked participants to report whether the two visual stimuli were simultaneous or successive. We also presented an unrelated auditory stimulus, either simultaneously with or preceding the first visual stimulus, and manipulated the congruency between the audiovisual stimuli. Experiment 1 indicated that the points of subjective simultaneity (PSSs) between the two visual stimuli estimated in the TOJ task shifted according to the audiovisual correspondence between auditory pitch and the visual features of vertical location and size. However, these audiovisual correspondences did not affect the PSSs estimated with the SJ task in Experiment 2. The differing results of the two tasks can be explained by a response bias triggered by audiovisual correspondence that was present only in the TOJ task. We concluded that audiovisual correspondence does not modulate visual perceptual timing and that changes in modality-specific perceptual processing might not underlie the congruency effects reported in previous studies.
Affiliations
- Kyuto Uno
  - Department of Psychology, Graduate School of Humanities and Sociology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
  - Japan Society for the Promotion of Science, 5-3-1 Kojimachi, Chiyoda-ku, Tokyo, 102-0083, Japan
  - Department of Psychology, Faculty of Human Sciences, Sophia University, 7-1 Kioi-cho, Chiyoda-ku, Tokyo, 102-8554, Japan
- Kazuhiko Yokosawa
  - Department of Psychology, Graduate School of Humanities and Sociology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
  - Tsukuba Gakuin University, 3-1 Azuma, Tsukuba-shi, Ibaraki, 305-0031, Japan
10. Barbosa Escobar F, Wang QJ. Tasty vibes: Uncovering crossmodal correspondences between tactile vibrations and basic tastes. Food Res Int 2023; 174:113613. PMID: 37986468. DOI: 10.1016/j.foodres.2023.113613.
Abstract
The interest in crossmodal correspondences individually involving the senses of touch and taste has grown rapidly in the last few decades. Several correspondences involving different tactile dimensions (e.g., hardness/softness, roughness/smoothness) have been uncovered, such as those between sweetness and softness and between roughness and sourness. However, a dimension that has long been overlooked, despite its pervasiveness and importance in everyday experiences, relates to tactile vibrations. The present study aimed to fill this gap and investigate crossmodal correspondences between basic tastes and vibrations. In the present study (N = 72), we uncovered these associations by having participants sample basic taste (i.e., sweet, salty, sour, bitter, umami) aqueous solutions and choose the frequency of vibration, delivered via a consumer-grade subwoofer wristband on their dominant hand, that they most strongly associated with each taste. We found that sourness was most strongly associated with frequencies around 98 Hz, and that sweetness and umami were associated with frequencies around 77 Hz. These correspondences may, to different extents, be based on affective and semantic mechanisms. The findings have relevant implications for theoretical research on multisensory integration and perception, and for the potential future application of these associations, through wearable technologies, to enhance eating experiences and promote healthier eating habits.
Affiliations
- Francisco Barbosa Escobar
  - Department of Food Science, Faculty of Science, University of Copenhagen, Frederiksberg, Denmark
  - Department of Marketing, Copenhagen Business School, Frederiksberg, Denmark
- Qian Janice Wang
  - Department of Food Science, Faculty of Science, University of Copenhagen, Frederiksberg, Denmark
11. Ohtake Y, Tanaka K, Yamamoto K. How many categories are there in crossmodal correspondences? A study based on exploratory factor analysis. PLoS One 2023; 18:e0294141. PMID: 37963160. PMCID: PMC10645324. DOI: 10.1371/journal.pone.0294141.
Abstract
Humans naturally associate stimulus features of one sensory modality with those of other modalities, such as associating bright light with high-pitched tones. This phenomenon, called crossmodal correspondence, is found between various stimulus features and has been suggested to fall into several types. However, it is not yet clear whether there are differences in the underlying mechanism between the different kinds of correspondences. This study used exploratory factor analysis to address this question. Through an online experiment platform, we asked Japanese adult participants (Experiment 1: N = 178, Experiment 2: N = 160) to rate the degree of correspondence between two auditory and five visual features. The results of the two experiments revealed that two factors underlie the subjective judgments of audiovisual crossmodal correspondences: one factor was composed of correspondences whose auditory and visual features can be expressed in common Japanese terms, such as the loudness-size and pitch-vertical position correspondences, and another factor was composed of correspondences whose features have no linguistic similarities, such as the pitch-brightness and pitch-shape correspondences. These results confirm that there are at least two types of crossmodal correspondences that are likely to differ in terms of language mediation.
Affiliations
- Yuka Ohtake
  - Graduate School of Human-Environment Studies, Kyushu University, Fukuoka, Japan
  - Japan Society for the Promotion of Science, Tokyo, Japan
- Kanji Tanaka
  - Faculty of Arts and Science, Kyushu University, Fukuoka, Japan
- Kentaro Yamamoto
  - Faculty of Human-Environment Studies, Kyushu University, Fukuoka, Japan
12. Del Gatto C, Indraccolo A, Pedale T, Brunetti R. Crossmodal interference on counting performance: Evidence for shared attentional resources. PLoS One 2023; 18:e0294057. PMID: 37948407. PMCID: PMC10637692. DOI: 10.1371/journal.pone.0294057.
Abstract
During the act of counting, our perceptual system may rely on information coming from different sensory channels. However, when the information coming from different sources is discordant, such as in the case of a de-synchronization between visual stimuli to be counted and irrelevant auditory stimuli, performance in a sequential counting task might deteriorate. Such deterioration may originate from two different mechanisms, both linked to exogenous attention attracted by auditory stimuli. Indeed, exogenous auditory triggers may infiltrate our internal "counter", interfering with the counting process and resulting in an overcount; alternatively, the exogenous auditory triggers may disrupt the internal "counter" by deviating participants' attention from the visual stimuli, resulting in an undercount. We tested these hypotheses by asking participants to count visual discs sequentially appearing on the screen while listening to task-irrelevant sounds, in systematically varied conditions: visual stimuli could be synchronized or de-synchronized with sounds; they could feature regular or irregular pacing; and their presentation speed could be fast (approx. 3/sec), moderate (approx. 2/sec), or slow (approx. 1.5/sec). Our results support the second hypothesis, since participants tended to undercount visual stimuli in all harder conditions (de-synchronized, irregular, fast sequences). We discuss these results in detail, adding novel elements to the study of crossmodal interference.
Affiliations
- Claudia Del Gatto
  - Experimental and Applied Psychology Laboratory, Department of Human Sciences, Università Europea di Roma, Rome, Italy
- Allegra Indraccolo
  - Experimental and Applied Psychology Laboratory, Department of Human Sciences, Università Europea di Roma, Rome, Italy
- Tiziana Pedale
  - Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
  - Functional Neuroimaging Laboratory, Fondazione Santa Lucia, IRCCS, Rome, Italy
- Riccardo Brunetti
  - Experimental and Applied Psychology Laboratory, Department of Human Sciences, Università Europea di Roma, Rome, Italy
13. Miyamoto K, Taniyama Y, Hine K, Nakauchi S. Congruency of color-sound crossmodal correspondence interacts with color and sound discrimination depending on color category. Iperception 2023; 14:20416695231196835. PMID: 37654696. PMCID: PMC10467208. DOI: 10.1177/20416695231196835.
Abstract
People occasionally associate color (e.g., hue) with sound (e.g., pitch). Previous studies have reported color-sound associations, which are examples of crossmodal correspondences. However, the association of both semantic and perceptual factors with color/sound discrimination in crossmodal correspondence remains unclear. To clarify this, three psychological experiments were conducted in which Stroop tasks were used to assess automatic processing of the association. We focused on the crossmodal correspondence between color (Experiment 1)/color word (Experiment 2) and sound. Participants discriminated the color/word or the sound presented simultaneously. The results showed bidirectional color-sound enhancement/interference of responses for certain associations of the crossmodal correspondence (blue-drop and yellow-shiny) in both experiments. These results suggest that these Stroop effects were caused by the semantic factor (color category) and that the perceptual factor (color appearance) was not necessary for the current results. In Experiment 3, response modulation by color labeling was investigated to clarify the influence of subjective labeling. Participants labeled a presented ambiguous color, a hue specification between two specific colors, while listening to the sound. The results revealed that the Stroop effect occurred only when the presented color was classified as the color related to the presented sound. This shows that subjective labeling plays a role in regulating the effect of crossmodal correspondences. These findings should contribute to the explanation of crossmodal correspondences through semantic mediation.
Collapse
Affiliation(s)
- Kyoko Hine
- Toyohashi University of Technology, Japan
| |
Collapse
|
14
|
Chen YC, Huang PC. Examining the automaticity and symmetry of sound-shape correspondences. Front Psychol 2023; 14:1172946. [PMID: 37342641 PMCID: PMC10277733 DOI: 10.3389/fpsyg.2023.1172946] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2023] [Accepted: 05/16/2023] [Indexed: 06/23/2023] Open
Abstract
Introduction A classic example of sound-shape correspondences is the mapping of the vowel /i/ with angular patterns and the vowel /u/ with rounded patterns. Such crossmodal correspondences have been reliably reported when tested in explicit matching tasks. Nevertheless, it remains unclear whether such sound-shape correspondences automatically occur and bidirectionally modulate people's perception. We address this question by adopting the explicit matching task and two implicit tasks. Methods In Experiment 1, we examined the sound-shape correspondences using the implicit association test (IAT), in which the sounds and shapes were both task-relevant, followed by an explicit matching task. In Experiments 2 and 3, we adopted the speeded classification task; when the target was a sound (or shape), a task-irrelevant shape (or sound) that was congruent or incongruent to the target was simultaneously presented. In addition, the participants performed the explicit matching task either before or after the speeded classification task. Results and Discussion The congruency effect was more pronounced in the IAT than in the speeded classification task; in addition, a bin analysis of RTs revealed that the congruency effect took time to develop. These findings suggest that the sound-shape correspondences were not completely automatic. The magnitude and onset of visual and auditory congruency effects were comparable, suggesting that the crossmodal modulations were symmetrical. Taken together, the sound-shape correspondences appeared not to be completely automatic, but their modulation was bidirectionally symmetrical once it occurred.
Collapse
Affiliation(s)
- Yi-Chuan Chen
- Department of Medicine, MacKay Medical College, New Taipei City, Taiwan
| | - Pi-Chun Huang
- Department of Psychology, National Cheng Kung University, Tainan, Taiwan
| |
Collapse
|
15
|
Barbosa Escobar F, Velasco C, Byrne DV, Wang QJ. Crossmodal associations between visual textures and temperature concepts. Q J Exp Psychol (Hove) 2023; 76:731-761. [PMID: 35414309 DOI: 10.1177/17470218221096452] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Visual textures are critical in how individuals form sensory expectations about objects, which include somatosensory properties such as temperature. This study aimed to uncover crossmodal associations between visual textures and temperature concepts. In Experiment 1 (N = 193), we evaluated crossmodal associations between 43 visual texture categories and different temperature concepts (via temperature words such as cold and hot) using an explicit forced-choice test. The results revealed associations between striped, cracked, matted, and waffled visual textures and high temperatures and between crystalline and flecked visual textures and low temperatures. In Experiment 2 (N = 247), we conducted six implicit association tests (IATs) pairing the two visual textures most strongly associated with low (crystalline and flecked) and high (striped and cracked) temperatures with the words cold and hot as per the results of Experiment 1. When pairing the crystalline and striped visual textures, the results revealed that crystalline was matched to the word cold, and striped was matched to the word hot. However, some associations found in the explicit test were not found in the IATs. In Experiment 3 (N = 124), we investigated how mappings between visual textures and concrete entities may influence crossmodal associations with temperature and these visual textures. Altogether, we found a range of association strengths and automaticity levels. Importantly, we found evidence of relative effects. Furthermore, some of these crossmodal associations are partly influenced by indirect mappings to concrete entities.
Collapse
Affiliation(s)
- Francisco Barbosa Escobar
- Food Quality Perception and Society Science Team, iSENSE Lab, Department of Food Science, Faculty of Technical Sciences, Aarhus University, Aarhus, Denmark
| | - Carlos Velasco
- Centre for Multisensory Marketing, Department of Marketing, BI Norwegian Business School, Oslo, Norway
| | - Derek Victor Byrne
- Food Quality Perception and Society Science Team, iSENSE Lab, Department of Food Science, Faculty of Technical Sciences, Aarhus University, Aarhus, Denmark
| | - Qian Janice Wang
- Food Quality Perception and Society Science Team, iSENSE Lab, Department of Food Science, Faculty of Technical Sciences, Aarhus University, Aarhus, Denmark
| |
Collapse
|
16
|
Spence C. Exploring Group Differences in the Crossmodal Correspondences. Multisens Res 2022; 35:495-536. [PMID: 35985650 DOI: 10.1163/22134808-bja10079] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2022] [Accepted: 07/22/2022] [Indexed: 11/19/2022]
Abstract
There has been a rapid growth of interest amongst researchers in the cross-modal correspondences in recent years. In part, this has resulted from the emerging realization of the important role that the correspondences can sometimes play in multisensory integration. In turn, this has led to an interest in the nature of any differences between individuals, or rather, between groups of individuals, in the strength and/or consensuality of cross-modal correspondences that may be observed in both neurotypically normal groups cross-culturally, developmentally, and across various special populations (including those who have lost a sense, as well as those with autistic tendencies). The hope is that our emerging understanding of such group differences may one day provide grounds for supporting the reality of the various different types of correspondence that have so far been proposed, namely structural, statistical, semantic, and hedonic (or emotionally mediated).
Collapse
Affiliation(s)
- Charles Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, New Radcliffe House, Walton Street, Oxford, OX2 6BW, UK
| |
Collapse
|
17
|
Morett LM, Feiler JB, Getz LM. Elucidating the influences of embodiment and conceptual metaphor on lexical and non-speech tone learning. Cognition 2022; 222:105014. [DOI: 10.1016/j.cognition.2022.105014] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2021] [Revised: 01/04/2022] [Accepted: 01/05/2022] [Indexed: 11/25/2022]
|
18
|
Zhang G, Wang W, Qu J, Li H, Song X, Wang Q. Perceptual influence of auditory pitch on motion speed. J Vis 2021; 21:11. [PMID: 34520509 PMCID: PMC8444457 DOI: 10.1167/jov.21.10.11] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
There is a cross-modal mapping between auditory pitch and many visual properties, but the relationship between auditory pitch and motion speed is unexplored. In this article, an object-collision task involving a falling ball and a baffle is used to explore the perceptual influence of auditory pitch on motion speed. Since cross-modal mapping can influence perceptual experience, this article also explores the influence of auditory pitch on action measures. In Experiment 1, 12 participants attempted to release a baffle to block a falling ball on the basis of speed judgment, and after each trial they were asked to rate the speed of the ball. The speed score and baffle release time were recorded and used for analysis of variance. Since making explicit judgments about speed can alter the processing of visual paths, another group of participants in Experiment 2 completed the experiment without making explicit speed judgments. Our results show that there is a cross-modal mapping between auditory pitch and motion speed: high and low tones shift perceived speed toward faster and slower, respectively.
Collapse
Affiliation(s)
- Gangsheng Zhang
- Graduate School, Air Force Engineering University, Xi'an, China.
| | - Wei Wang
- Air and Missile Defense College, Air Force Engineering University, Xi'an, China.
| | - Jue Qu
- Air and Missile Defense College, Air Force Engineering University, Xi'an, China.
| | - Hengwei Li
- Graduate School, Air Force Engineering University, Xi'an, China.
| | - Xincheng Song
- Graduate School, Air Force Engineering University, Xi'an, China.
| | - Qingli Wang
- Air and Missile Defense College, Air Force Engineering University, Xi'an, China.
| |
Collapse
|
19
|
Tsushima Y, Nishino Y, Ando H. Olfactory Stimulation Modulates Visual Perception Without Training. Front Neurosci 2021; 15:642584. [PMID: 34408620 PMCID: PMC8364961 DOI: 10.3389/fnins.2021.642584] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Accepted: 07/06/2021] [Indexed: 11/13/2022] Open
Abstract
Considerable research shows that olfactory stimulation affects other modalities in high-level cognitive functions such as emotion. It is less well known, however, that olfaction also modulates low-level perception in other sensory modalities. Although some studies have shown that olfaction influences low-level perception, all of them required specific experience such as perceptual training. To test the possibility that olfaction modulates low-level perception without training, we conducted a series of psychophysical and neuroimaging experiments. From the results of a visual task in which participants reported the speed of moving dots, we found that participants perceived slower motion with a lemon smell and faster motion with a vanilla smell, without any specific training. In functional magnetic resonance imaging (fMRI) studies, brain activity in the visual cortices [V1 and human middle temporal area (hMT)] changed depending on the type of olfactory stimulation. Our findings provide the first direct evidence that olfaction modulates low-level visual perception without training, indicating that the olfactory-visual effect is innate rather than acquired. The present results demonstrate a new crossmodal effect between olfaction and vision and offer a unique opportunity to reconsider fundamental roles of olfactory function.
Collapse
Affiliation(s)
- Yoshiaki Tsushima
- National Institute of Information and Communications Technology, Center for Information and Neural Networks, Osaka, Japan
| | - Yurie Nishino
- National Institute of Information and Communications Technology, Center for Information and Neural Networks, Osaka, Japan
| | - Hiroshi Ando
- National Institute of Information and Communications Technology, Universal Communication Research Institute, Kyoto, Japan
| |
Collapse
|
20
|
Spence C, Levitan CA. Explaining Crossmodal Correspondences Between Colours and Tastes. Iperception 2021; 12:20416695211018223. [PMID: 34211685 PMCID: PMC8216361 DOI: 10.1177/20416695211018223] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2021] [Accepted: 04/28/2021] [Indexed: 11/16/2022] Open
Abstract
For centuries, if not millennia, people have associated the basic tastes (e.g., sweet, bitter, salty, and sour) with specific colours. While the range of tastes may have changed, and the reasons for wanting to connect the senses in this rather surprising way have undoubtedly differed, there would nevertheless appear to be a surprisingly high degree of consistency regarding this crossmodal mapping among non-synaesthetes that merits further consideration. Traditionally, colour-taste correspondences have often been considered together with odour-colour and flavour-colour correspondences. However, the explanation for these various correspondences with the chemical senses may turn out to be qualitatively different, given the presence of identifiable source objects in the case of food aromas/flavours, but not necessarily in the case of basic tastes. While the internalization of the crossmodal statistics of the environment provides one appealing account for the existence of colour-taste correspondences, emotional mediation may also be relevant. Ultimately, while explaining colour-taste correspondences is of both theoretical and historical interest, the growing awareness of the robustness of colour-taste correspondences would currently seem to be of particular relevance to those working in the fields of design and multisensory experiential marketing.
Collapse
Affiliation(s)
- Charles Spence
- Department of Experimental Psychology, Oxford University, UK
| | - Carmel A Levitan
- Department of Cognitive Science, Occidental College, Los Angeles, California, United States
| |
Collapse
|
21
|
Schmitz L, Knoblich G, Deroy O, Vesper C. Crossmodal correspondences as common ground for joint action. Acta Psychol (Amst) 2021; 212:103222. [PMID: 33302228 PMCID: PMC7755874 DOI: 10.1016/j.actpsy.2020.103222] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2020] [Revised: 09/25/2020] [Accepted: 11/05/2020] [Indexed: 11/19/2022] Open
Abstract
When performing joint actions, people rely on common ground - shared information that provides the required basis for mutual understanding. Common ground can be based on people's interaction history or on knowledge and expectations people share, e.g., because they belong to the same culture or social class. Here, we suggest that people rely on yet another form of common ground, one that originates in their similarities in multisensory processing. Specifically, we focus on 'crossmodal correspondences' - nonarbitrary associations that people make between stimulus features in different sensory modalities, e.g., between stimuli in the auditory and the visual modality such as high-pitched sounds and small objects. Going beyond previous research that focused on investigating crossmodal correspondences in individuals, we propose that people can use these correspondences for communicating and coordinating with others. Initial support for our proposal comes from a communication game played in a public space (an art gallery) by pairs of visitors. We observed that pairs created nonverbal communication systems by spontaneously relying on 'crossmodal common ground'. Based on these results, we conclude that crossmodal correspondences not only occur within individuals but that they can also be actively used in joint action to facilitate the coordination between individuals.
Collapse
Affiliation(s)
- Laura Schmitz
- Department of Cognitive Science, Central European University, Budapest, Hungary; Institute for Sports Science, Leibniz Universität Hannover, Hannover, Germany
| | - Günther Knoblich
- Department of Cognitive Science, Central European University, Budapest, Hungary
| | - Ophelia Deroy
- Faculty of Philosophy, Ludwig-Maximilians-Universität, Munich, Germany; Munich Centre for Neuroscience, Ludwig-Maximilians-Universität, Munich, Germany; Institute of Philosophy, School of Advanced Study, University of London, London, UK
| | - Cordula Vesper
- Department of Cognitive Science, Central European University, Budapest, Hungary; Department of Linguistics, Cognitive Science and Semiotics, Aarhus University, Aarhus, Denmark; Interacting Minds Centre, Aarhus University, Aarhus, Denmark.
| |
Collapse
|
22
|
Spence C. Olfactory-colour crossmodal correspondences in art, science, and design. Cogn Res Princ Implic 2020; 5:52. [PMID: 33113051 PMCID: PMC7593372 DOI: 10.1186/s41235-020-00246-1] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2020] [Accepted: 09/03/2020] [Indexed: 01/28/2023] Open
Abstract
The last few years have seen a rapid growth of interest amongst researchers in the crossmodal correspondences. One of the correspondences that has long intrigued artists is the putative association between colours and odours. While traditionally conceptualised in terms of synaesthesia, over the last quarter century or so, at least 20 published peer-reviewed articles have assessed the consistent, and non-random, nature of the colours that people intuitively associate with specific (both familiar and unfamiliar) odours in a non-food context. Having demonstrated such consistent mappings amongst the general (i.e. non-synaesthetic) population, researchers have now started to investigate whether they are shared cross-culturally, and to document their developmental acquisition. Over the years, several different explanations have been put forward by researchers for the existence of crossmodal correspondences, including the statistical, semantic, structural, and emotional-mediation accounts. While several of these approaches would appear to have some explanatory validity as far as the odour-colour correspondences are concerned, contemporary researchers have focussed on learned associations as the dominant explanatory framework. The nature of the colour-odour associations that have been reported to date appear to depend on the familiarity of the odour and the ease of source naming, and hence the kind of association/representation that is accessed. While the bidirectionality of odour-colour correspondences has not yet been rigorously assessed, many designers are nevertheless already starting to build on odour-colour crossmodal correspondences in their packaging/labelling/branding work.
Collapse
Affiliation(s)
- Charles Spence
- Department of Experimental Psychology, Anna Watts Building, University of Oxford, Oxford, OX2 6GG, UK.
| |
Collapse
|
23
|
Spence C. Temperature-Based Crossmodal Correspondences: Causes and Consequences. Multisens Res 2020; 33:645-682. [PMID: 31923885 DOI: 10.1163/22134808-20191494] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2019] [Accepted: 11/13/2019] [Indexed: 12/15/2022]
Abstract
The last few years have seen an explosive growth of research interest in the crossmodal correspondences, the sometimes surprising associations that people experience between stimuli, attributes, or perceptual dimensions, such as between auditory pitch and visual size, or elevation. To date, the majority of this research has tended to focus on audiovisual correspondences. However, a variety of crossmodal correspondences have also been demonstrated with tactile stimuli, involving everything from felt shape to texture, and from weight through to temperature. In this review, I take a closer look at temperature-based correspondences. The empirical research not only supports the existence of robust crossmodal correspondences between temperature and colour (as captured by everyday phrases such as 'red hot') but also between temperature and auditory pitch. Importantly, such correspondences have (on occasion) been shown to influence everything from our thermal comfort in coloured environments through to our response to the thermal and chemical warmth associated with stimulation of the chemical senses, as when eating, drinking, and sniffing olfactory stimuli. Temperature-based correspondences are considered in terms of the four main classes of correspondence that have been identified to date, namely statistical, structural, semantic, and affective. The hope is that gaining a better understanding of temperature-based crossmodal correspondences may one day also potentially help in the design of more intuitive sensory-substitution devices, and support the delivery of immersive virtual and augmented reality experiences.
Collapse
Affiliation(s)
- Charles Spence
- Crossmodal Research Laboratory, Oxford University, Oxford, UK
| |
Collapse
|
24
|
Abstract
Cross-sensory correspondences can reflect crosstalk between aligned conceptual feature dimensions, though uncertainty remains regarding the identities of all the dimensions involved. It is unclear, for example, if heaviness contributes to correspondences separately from size. Taking steps to dissociate variations in heaviness from variations in size, the question was asked if a heaviness-brightness correspondence will induce a congruity effect during the speeded brightness classification of simple visual stimuli. Participants classified the stimuli according to whether they were brighter or darker than the mid-gray background against which they appeared. They registered their speeded decisions by manipulating (e.g., tapping) the object they were holding in either their left or right hand (e.g., left for bright, right for dark). With these two otherwise identical objects contrasting in their weight, stimuli were classified more quickly when the relative heaviness of the object needing to be manipulated corresponded with the brightness of the stimulus being classified (e.g., the heavier object for a darker stimulus). This novel congruity effect, in the guise of a stimulus-response (S-R) compatibility effect, was induced when heaviness was isolated as an enduring feature of the object needing to be manipulated. It was also undiminished when participants completed a concurrent verbal memory load task, countering claims that the heaviness-brightness correspondence is verbally mediated. Heaviness, alongside size, appears to contribute to cross-sensory correspondences in its own right and in a manner confirming the far-reaching influence of correspondences, extending here to the fluency with which people communicate simple ideas by manipulating a hand-held object.
Collapse
|
25
|
Spence C. Multisensory Flavour Perception: Blending, Mixing, Fusion, and Pairing Within and Between the Senses. Foods 2020; 9:E407. [PMID: 32244690 PMCID: PMC7230593 DOI: 10.3390/foods9040407] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2020] [Revised: 03/21/2020] [Accepted: 03/21/2020] [Indexed: 11/16/2022] Open
Abstract
This review summarizes the various outcomes that may occur when two or more elements are paired in the context of flavour perception. In the first part, I review the literature concerning what happens when flavours, ingredients, and/or culinary techniques are deliberately combined in a dish, drink, or food product. Sometimes the result is fusion but, if one is not careful, the result can equally well be confusion instead. In fact, blending, mixing, fusion, and flavour pairing all provide relevant examples of how the elements in a carefully-crafted multi-element tasting experience may be combined. While the aim is sometimes to obscure the relative contributions of the various elements to the mix (as in the case of blending), at other times, consumers/tasters are explicitly encouraged to contemplate/perceive the nature of the relationship between the contributing elements instead (e.g., as in the case of flavour pairing). There has been a noticeable surge in both popular and commercial interest in fusion foods and flavour pairing in recent years, and various of the 'rules' that have been put forward to help explain the successful combination of the elements in such food and/or beverage experiences are discussed. In the second part of the review, I examine the pairing of flavour stimuli with music/soundscapes, in the emerging field of 'sonic seasoning'. I suggest that the various perceptual pairing principles/outcomes identified when flavours are paired deliberately can also be meaningfully extended to provide a coherent framework when it comes to categorizing the ways in which what we hear can influence our flavour experiences, both in terms of the sensory-discriminative and hedonic response.
Collapse
Affiliation(s)
- Charles Spence
- Crossmodal Research Laboratory, Oxford University, Oxford OX2 6GG, UK
| |
Collapse
|
26
|
Korzeniowska AT, Root-Gutteridge H, Simner J, Reby D. Audio-visual crossmodal correspondences in domestic dogs ( Canis familiaris). Biol Lett 2019; 15:20190564. [PMID: 31718513 DOI: 10.1098/rsbl.2019.0564] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Crossmodal correspondences are intuitively held relationships between non-redundant features of a stimulus, such as auditory pitch and visual illumination. While a number of correspondences have been identified in humans to date (e.g. high pitch is intuitively felt to be luminant, angular and elevated in space), their evolutionary and developmental origins remain unclear. Here, we investigated the existence of audio-visual crossmodal correspondences in domestic dogs, and specifically, the known human correspondence in which high auditory pitch is associated with elevated spatial position. In an audio-visual attention task, we found that dogs engaged more with audio-visual stimuli that were congruent with human intuitions (high auditory pitch paired with a spatially elevated visual stimulus) compared to incongruent (low pitch paired with elevated visual stimulus). This result suggests that crossmodal correspondences are not a uniquely human or primate phenomenon and they cannot easily be dismissed as merely lexical conventions (i.e. matching 'high' pitch with 'high' elevation).
Collapse
Affiliation(s)
- A T Korzeniowska
- Mammal Vocal Communication and Cognition Research Group, MULTISENSE Lab, School of Psychology, University of Sussex, Falmer, Brighton BN1 9QH, UK
| | - H Root-Gutteridge
- Mammal Vocal Communication and Cognition Research Group, MULTISENSE Lab, School of Psychology, University of Sussex, Falmer, Brighton BN1 9QH, UK
| | - J Simner
- Mammal Vocal Communication and Cognition Research Group, MULTISENSE Lab, School of Psychology, University of Sussex, Falmer, Brighton BN1 9QH, UK
| | - D Reby
- Mammal Vocal Communication and Cognition Research Group, MULTISENSE Lab, School of Psychology, University of Sussex, Falmer, Brighton BN1 9QH, UK; Equipe Neuro-Ethologie Sensorielle, ENES/CRNL, CNRS UMR5292, INSERM UMR_S 1028, University of Lyon/Saint-Etienne, France
| |
Collapse
|
27
|
Sievers B, Lee C, Haslett W, Wheatley T. A multi-sensory code for emotional arousal. Proc Biol Sci 2019; 286:20190513. [PMID: 31288695 DOI: 10.1098/rspb.2019.0513] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
People express emotion using their voice, face and movement, as well as through abstract forms as in art, architecture and music. The structure of these expressions often seems intuitively linked to its meaning: romantic poetry is written in flowery curlicues, while the logos of death metal bands use spiky script. Here, we show that these associations are universally understood because they are signalled using a multi-sensory code for emotional arousal. Specifically, variation in the central tendency of the frequency spectrum of a stimulus-its spectral centroid-is used by signal senders to express emotional arousal, and by signal receivers to make emotional arousal judgements. We show that this code is used across sounds, shapes, speech and human body movements, providing a strong multi-sensory signal that can be used to efficiently estimate an agent's level of emotional arousal.
Collapse
Affiliation(s)
- Beau Sievers
- Department of Psychology, Harvard University, Cambridge, MA 02138, USA
| | - Caitlyn Lee
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
| | - William Haslett
- Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA
| | - Thalia Wheatley
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
| |
Collapse
|
28
|
|
29
|
Abstract
We report a series of 22 experiments in which the implicit association test (IAT) was used to investigate cross-modal correspondences between visual (luminance, hue [R-G, B-Y], saturation) and acoustic (loudness, pitch, formants [F1, F2], spectral centroid, trill) dimensions. Colors were sampled from the perceptually accurate CIE-Lab space, and the complex, vowel-like sounds were created with a formant synthesizer capable of separately manipulating individual acoustic properties. In line with previous reports, the loudness and pitch of acoustic stimuli were associated with both luminance and saturation of the presented colors. However, pitch was associated specifically with color lightness, whereas loudness mapped onto greater visual saliency. Manipulating the spectrum of sounds without modifying their pitch showed that an upward shift of spectral energy was associated with the same visual features (higher luminance and saturation) as higher pitch. In contrast, changing formant frequencies of synthetic vowels while minimizing the accompanying shifts in spectral centroid failed to reveal cross-modal correspondences with color. This may indicate that the commonly reported associations between vowels and colors are mediated by differences in the overall balance of low- and high-frequency energy in the spectrum rather than by vowel identity as such. Surprisingly, the hue of colors with the same luminance and saturation was not associated with any of the tested acoustic features, except for a weak preference to match higher pitch with blue (vs. yellow). We discuss these findings in the context of previous research and consider their implications for sound symbolism in world languages.
Collapse
Affiliation(s)
- Andrey Anikin
- Division of Cognitive Science, Department of Philosophy, Lund University, Box 192, SE-221 00, Lund, Sweden.
| | - N Johansson
- Center for Language and Literature, Lund University, Lund, Sweden
| |
Collapse
|
30
|
Spence C. On the Relative Nature of (Pitch-Based) Crossmodal Correspondences. Multisens Res 2019; 32:235-265. [DOI: 10.1163/22134808-20191407] [Citation(s) in RCA: 30] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2019] [Accepted: 02/21/2019] [Indexed: 11/19/2022]
Abstract
This review deals with the question of the relative vs absolute nature of crossmodal correspondences, with a specific focus on those correspondences involving the auditory dimension of pitch. Crossmodal correspondences have been defined as the often-surprising crossmodal associations that people experience between features, attributes, or dimensions of experience in different sensory modalities, when either physically present, or else merely imagined. In the literature, crossmodal correspondences have often been contrasted with synaesthesia in that the former are frequently said to be relative phenomena (e.g., it is the higher-pitched of two sounds that is matched with the smaller of two visual stimuli, say, rather than there being a specific one-to-one crossmodal mapping between a particular pitch of sound and size of object). By contrast, in the case of synaesthesia, the idiosyncratic mapping between inducer and concurrent tends to be absolute (e.g., it is a particular sonic inducer that elicits a specific colour concurrent). However, a closer analysis of the literature soon reveals that the distinction between relative and absolute in the case of crossmodal correspondences may not be as clear-cut as some commentators would have us believe. Furthermore, it is important to note that the relative vs absolute question may receive different answers depending on the particular (class of) correspondence under empirical investigation.
Collapse
Affiliation(s)
- Charles Spence
- Crossmodal Research Laboratory, Oxford University, Oxford, UK
| |
Collapse
|
31
|
Abstract
Sound symbolism refers to an association between phonemes and stimuli containing particular perceptual and/or semantic elements (e.g., objects of a certain size or shape). Some of the best-known examples include the mil/mal effect (Sapir, Journal of Experimental Psychology, 12, 225-239, 1929) and the maluma/takete effect (Köhler, 1929). Interest in this topic has been on the rise within psychology, and studies have demonstrated that sound symbolic effects are relevant for many facets of cognition, including language, action, memory, and categorization. Sound symbolism also provides a mechanism by which words' forms can have nonarbitrary, iconic relationships with their meanings. Although various proposals have been put forth for how phonetic features (both acoustic and articulatory) come to be associated with stimuli, there is as yet no generally agreed-upon explanation. We review five proposals: statistical co-occurrence between phonetic features and associated stimuli in the environment, a shared property among phonetic features and stimuli; neural factors; species-general, evolved associations; and patterns extracted from language. We identify a number of outstanding questions that need to be addressed on this topic and suggest next steps for the field.
Collapse
|
32
|
Ueda S. Effects of the Simultaneous Presentation of Corresponding Auditory and Visual Stimuli on Size Variance Perception. Iperception 2018; 9:2041669518815709. [PMID: 30559958 PMCID: PMC6291879 DOI: 10.1177/2041669518815709] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2018] [Accepted: 11/04/2018] [Indexed: 11/15/2022] Open
Abstract
To overcome limitations in perceptual bandwidth, humans condense various features of the environment into summary statistics. Variance is an index that represents both the diversity within a category and the reliability of the information about that diversity. Studies have shown that humans can efficiently perceive variance for visual stimuli; however, to enhance perception of environments, information about the external world can be obtained from multisensory modalities and integrated. Consequently, this study investigates, through two experiments, whether the precision of variance perception improves when visual information (size) and corresponding auditory information (pitch) are integrated. In Experiment 1, we measured the correspondence between visual size and auditory pitch for each participant by using adjustment measurements. The results showed a linear relationship between size and pitch-that is, the higher the pitch, the smaller the corresponding circle. In Experiment 2, sequences of visual stimuli were presented both with and without linked auditory tones, and the precision of perceived variance in size was measured. We consequently found that synchronized presentation of audio and visual stimuli that have the same variance improves the precision of perceived variance in size when compared with visual-only presentation. This suggests that audiovisual information may be automatically integrated in variance perception.
Collapse
Affiliation(s)
- Sachiyo Ueda
- Department of Computer Science and Engineering, Toyohashi University of Technology, Japan
| |
Collapse
|
33
|
Schmitz L, Vesper C, Sebanz N, Knoblich G. When Height Carries Weight: Communicating Hidden Object Properties for Joint Action. Cogn Sci 2018; 42:2021-2059. [PMID: 29936705 PMCID: PMC6120543 DOI: 10.1111/cogs.12638] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2017] [Revised: 05/19/2019] [Accepted: 05/23/2018] [Indexed: 11/29/2022]
Abstract
In the absence of pre-established communicative conventions, people create novel communication systems to successfully coordinate their actions toward a joint goal. In this study, we address two types of such novel communication systems: sensorimotor communication, where the kinematics of instrumental actions are systematically modulated, versus symbolic communication. We ask which of the two systems co-actors preferentially create when aiming to communicate about hidden object properties such as weight. The results of three experiments consistently show that actors who knew the weight of an object transmitted this weight information to their uninformed co-actors by systematically modulating their instrumental actions, grasping objects of particular weights at particular heights. This preference for sensorimotor communication was reduced in a fourth experiment where co-actors could communicate with weight-related symbols. Our findings demonstrate that the use of sensorimotor communication extends beyond the communication of spatial locations to non-spatial, hidden object properties.
Collapse
Affiliation(s)
- Laura Schmitz
- Department of Cognitive Science, Central European University
| | - Cordula Vesper
- Department of Cognitive Science, Central European University
- School of Communication and Culture, Aarhus University
| | - Natalie Sebanz
- Department of Cognitive Science, Central European University
| | | |
Collapse
|
34
|
Getz LM, Kubovy M. Questioning the automaticity of audiovisual correspondences. Cognition 2018; 175:101-108. [DOI: 10.1016/j.cognition.2018.02.015] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2017] [Revised: 02/11/2018] [Accepted: 02/13/2018] [Indexed: 11/27/2022]
|
35
|
Hamilton-Fletcher G, Wright TD, Ward J. Cross-Modal Correspondences Enhance Performance on a Colour-to-Sound Sensory Substitution Device. Multisens Res 2018; 29:337-63. [PMID: 29384607 DOI: 10.1163/22134808-00002519] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
Visual sensory substitution devices (SSDs) can represent visual characteristics through distinct patterns of sound, allowing a visually impaired user access to visual information. Previous SSDs have avoided colour or, when they have encoded it, have assigned sounds to colours in a largely unprincipled way. This study introduces a new tablet-based SSD termed the ‘Creole’ (so called because it combines tactile scanning with image sonification) and a new algorithm for converting colour to sound that is based on established cross-modal correspondences (intuitive mappings between different sensory dimensions). To test the utility of correspondences, we examined the colour–sound associative memory and object recognition abilities of sighted users who had their device either coded in line with or opposite to sound–colour correspondences. Improved colour memory and reduced colour-errors were made by users who had the correspondence-based mappings. Interestingly, the colour–sound mappings that provided the highest improvements during the associative memory task also saw the greatest gains for recognising realistic objects that also featured these colours, indicating a transfer of abilities from memory to recognition. These users were also marginally better at matching sounds to images varying in luminance, even though luminance was coded identically across the different versions of the device. These findings are discussed with relevance for both colour and correspondences for sensory substitution use.
Collapse
|
36
|
Symmetry and its role in the crossmodal correspondence between shape and taste. Atten Percept Psychophys 2017; 80:738-751. [DOI: 10.3758/s13414-017-1463-x] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
37
|
Blazhenkova O, Kumar MM. Angular Versus Curved Shapes: Correspondences and Emotional Processing. Perception 2017; 47:67-89. [DOI: 10.1177/0301006617731048] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
The present work aimed to systematically examine sensory and higher level correspondences to angular and curved shapes. Participants matched angular and curved abstract shapes to sensory experiences in five different modalities as well as to emotion, gender, and name attributes presented as written labels (Study 1) and real experiences (Study 2). The results demonstrated nonarbitrary mapping of angular and curved shapes to attributes from all basic sensory modalities (vision, audition, gustation, olfaction, and touch) and higher level attributes (emotion, gender, and name). Participants associated curved shapes with sweet taste, quiet or calm sound, vanilla smell, green color, smooth texture, relieved emotion, female gender, and wide-vowel names. In contrast, they associated angular shapes with sour taste, loud or dynamic sound, spicy or citrus smell, red color, rough texture, excited or surprise emotion, male gender, and narrow-vowel names. These prevalent correspondences were robust across different shape pairs as well as all sensory and higher level attributes, presented as both verbal labels and real sensory experiences. The second goal of this research was to examine the relationship between the shape correspondences and individual differences in emotional processing, assessed by self-report and performance measures. The results suggest that heightened emotional ability is associated with making shape attributions that go along with the found prevalent trends.
Collapse
|
38
|
Abstract
The renewed interest that has emerged around the topic of crossmodal correspondences in recent years has demonstrated that crossmodal matchings and mappings exist between the majority of sensory dimensions, and across all combinations of sensory modalities. This renewed interest also offers a rapidly-growing list of ways in which correspondences affect--or interact with--metaphorical understanding, feelings of 'knowing', behavioral tasks, learning, mental imagery, and perceptual experiences. Here we highlight why, more generally, crossmodal correspondences matter to theories of multisensory interactions.
Collapse
|
39
|
Iosifyan M, Korolkova O, Vlasov I. Emotional and Semantic Associations Between Cinematographic Aesthetics and Haptic Perception. Multisens Res 2017. [DOI: 10.1163/22134808-00002597] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
This study investigates systematic links between haptic perception and multimodal cinema perception. It differs from previous research conducted on cross-modal associations as it focuses on a complex intermodal stimulus, close to one people experience in reality: cinema. Participants chose materials that are most/least consistent with three-minute samples of films with elements of beauty and ugliness. We found that specific materials are associated with certain films significantly different from chance. Silk was associated with films including elements of beauty, while sandpaper was associated with films including elements of ugliness. To investigate the nature of this phenomenon, we tested the mediation effect of emotional/semantic representations on cinema–haptic associations. We found that affective representations at least partly explain the cross-modal associations between films and materials.
Collapse
Affiliation(s)
- Marina Iosifyan
- Moscow State University, Faculty of Psychology, Mokhovaya st. 11/9 125009 Moscow, Russia
| | - Olga Korolkova
- Center for Experimental Psychology, Moscow State University of Psychology and Education, 2a Shelepikhinskaya Quay, 123290 Moscow, Russia
| | - Igor Vlasov
- VTB Capital, 12, Presnenskaya emb. 123100 Moscow, Russia
| |
Collapse
|
40
|
Hamilton-Fletcher G, Witzel C, Reby D, Ward J. Sound Properties Associated With Equiluminant Colours. Multisens Res 2017; 30:337-362. [DOI: 10.1163/22134808-00002567] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2016] [Accepted: 03/27/2017] [Indexed: 11/19/2022]
Abstract
There is a widespread tendency to associate certain properties of sound with those of colour (e.g., higher pitches with lighter colours). Yet it is an open question how sound influences chroma or hue when properly controlling for lightness. To examine this, we asked participants to adjust physically equiluminant colours until they ‘went best’ with certain sounds. For pure tones, complex sine waves and vocal timbres, increases in frequency were associated with increases in chroma. Increasing the loudness of pure tones also increased chroma. Hue associations varied depending on the type of stimuli. In stimuli that involved only limited bands of frequencies (pure tones, vocal timbres), frequency correlated with hue, such that low frequencies gave blue hues and progressed to yellow hues at 800 Hz. Increasing the loudness of a pure tone was also associated with a shift from blue to yellow. However, for complex sounds that share the same bandwidth of frequencies (100–3200 Hz) but that vary in terms of which frequencies have the most power, all stimuli were associated with yellow hues. This suggests that the presence of high frequencies (above 800 Hz) consistently yields yellow hues. Overall we conclude that while pitch–chroma associations appear to flexibly re-apply themselves across a variety of contexts, frequencies above 800 Hz appear to produce yellow hues irrespective of context. These findings reveal new sound–colour correspondences previously obscured through not controlling for lightness. Findings are discussed in relation to understanding the underlying rules of cross-modal correspondences, synaesthesia, and optimising the sensory substitution of visual information through sound.
Collapse
Affiliation(s)
- Giles Hamilton-Fletcher
- School of Psychology, University of Sussex, Brighton, UK
- Sackler Centre for Consciousness Science, University of Sussex, Brighton, UK
| | - Christoph Witzel
- Allgemeine Psychologie, Justus-Liebig-Universität Gießen, Gießen, Germany
| | - David Reby
- School of Psychology, University of Sussex, Brighton, UK
| | - Jamie Ward
- School of Psychology, University of Sussex, Brighton, UK
- Sackler Centre for Consciousness Science, University of Sussex, Brighton, UK
| |
Collapse
|
41
|
Abstract
Everyday language reveals how stimuli encoded in one sensory feature domain can possess qualities normally associated with a different domain (e.g., higher pitch sounds are bright, light in weight, sharp, and thin). Such cross-sensory associations appear to reflect crosstalk among aligned (corresponding) feature dimensions, including brightness, heaviness, and sharpness. Evidence for heaviness being one such dimension is very limited, with heaviness appearing primarily as a verbal associate of other feature contrasts (e.g., darker objects and lower pitch sounds are heavier than their opposites). Given the presumed bidirectionality of the crosstalk between corresponding dimensions, heaviness should itself induce the cross-sensory associations observed elsewhere, including with brightness and pitch. Taking care to dissociate effects arising from the size and mass of an object, this is confirmed. When hidden objects varying independently in size and mass are lifted, objects that feel heavier are judged to be darker and to make lower pitch sounds than objects feeling less heavy. These judgements track the changes in perceived heaviness induced by the size-weight illusion. The potential involvement of language, natural scene statistics, and Bayesian processes in correspondences, and the effects they induce, is considered.
Collapse
Affiliation(s)
- Peter Walker
- Department of Psychology, Lancaster University, UK; Department of Psychology, Sunway University, Malaysia
| | | | - Brian Francis
- Department of Mathematics and Statistics, Lancaster University, UK
| |
Collapse
|
42
|
Velasco C, Woods AT, Petit O, Cheok AD, Spence C. Crossmodal correspondences between taste and shape, and their implications for product packaging: A review. Food Qual Prefer 2016. [DOI: 10.1016/j.foodqual.2016.03.005] [Citation(s) in RCA: 68] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
43
|
Krugliak A, Noppeney U. Synaesthetic interactions across vision and audition. Neuropsychologia 2016; 88:65-73. [PMID: 26427739 DOI: 10.1016/j.neuropsychologia.2015.09.027] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2015] [Revised: 08/14/2015] [Accepted: 09/21/2015] [Indexed: 11/28/2022]
Abstract
In everyday life our senses are exposed to a constant influx of sensory signals. The brain binds signals into a coherent percept based on temporal, spatial or semantic correspondences. In addition, synaesthetic correspondences may form important cues for multisensory binding. This study focussed on the synaesthetic correspondences between auditory pitch and visual size. While high pitch has been associated with small objects in static contexts, recent research has surprisingly found that increasing size is linked with rising pitch. The current study presented participants with small/large visual circles/discs together with high/low pitched pure tones in an intersensory selective attention paradigm. Whilst fixating a central cross participants discriminated between small and large visual size in the visual modality or between high and low pitch in the auditory modality. Across a series of five experiments, we observed convergent evidence that participants associated small visual size with low pitch and large visual size with high pitch. In other words, we observed the pitch-size mapping that has previously been observed only for dynamic contexts. We suggest that these contradictory findings may emerge because participants can interpret visual size as an index of permanent object size or distance (e.g. in motion) from the observer. Moreover, the pitch-size mapping may depend not only on relative but also on the absolute levels of pitch and size of the presented stimuli.
Collapse
Affiliation(s)
- Alexandra Krugliak
- Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Edgbaston B15 2TT, Birmingham, UK.
| | - Uta Noppeney
- Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Edgbaston B15 2TT, Birmingham, UK.
| |
Collapse
|
44
|
Kanaya S, Kariya K, Fujisaki W. Cross-Modal Correspondence Among Vision, Audition, and Touch in Natural Objects: An Investigation of the Perceptual Properties of Wood. Perception 2016; 45:1099-114. [DOI: 10.1177/0301006616652018] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Certain systematic relationships are often assumed between information conveyed from multiple sensory modalities; for instance, a small figure and a high pitch may be perceived as more harmonious. This phenomenon, termed cross-modal correspondence, may result from correlations between multi-sensory signals learned in daily experience of the natural environment. If so, we would observe cross-modal correspondences not only in the perception of artificial stimuli but also in perception of natural objects. To test this hypothesis, we reanalyzed data collected previously in our laboratory examining perceptions of the material properties of wood using vision, audition, and touch. We compared participant evaluations of three perceptual properties (surface brightness, sharpness of sound, and smoothness) of the wood blocks obtained separately via vision, audition, and touch. Significant positive correlations were identified for all properties in the audition–touch comparison, and for two of the three properties in the vision–touch comparison. By contrast, no properties exhibited significant positive correlations in the vision–audition comparison. These results suggest that we learn correlations between multi-sensory signals through experience; however, the strength of this statistical learning is apparently dependent on the particular combination of sensory modalities involved.
Collapse
Affiliation(s)
- Shoko Kanaya
- Human Information Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan
| | - Kenji Kariya
- Tsukuba Research Institute, Sumitomo Forestry Company, Tsukuba, Japan
| | - Waka Fujisaki
- Human Information Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan
| |
Collapse
|
45
|
Abstract
In this review, we distinguish strong and weak forms of synesthesia. Strong synesthesia is characterized by a vivid image in one sensory modality in response to stimulation in another one. Weak synesthesia is characterized by cross-sensory correspondences expressed through language, perceptual similarity, and perceptual interactions during information processing. Despite important phenomenological dissimilarities between strong and weak synesthesia, we maintain that the two forms draw on similar underlying mechanisms. The study of strong and weak synesthetic phenomena provides an opportunity to enrich scientists' understanding of basic mechanisms involved in perceptual coding and cross-modal information processing.
Collapse
Affiliation(s)
- Gail Martino
- The John B. Pierce Laboratory, New Haven, Connecticut
- Department of Diagnostic Radiology (G.M.) and Department of Epidemiology and Public Health (L.E.M.), Yale Medical School, Yale University, New Haven, Connecticut
| | | |
Collapse
|
46
|
Abstract
Grapheme-color synaesthesia is a rare condition in which perception of a letter or a digit is associated with concurrent perception of a color. Synaesthetes report that these color experiences are vivid and realistic. We used a Stroop task to show that synaesthetically induced color, like real color, is processed in color-opponent channels (red-green or blue-yellow). Synaesthetic color produced maximal interference with the perception and naming of the real color of a grapheme if the real color was opponent to the synaesthetic color. Interference was reduced considerably if the synaesthetic and real colors engaged different color channels (e.g., synaesthetic blue and real red). No dependence on color opponency was found for semantic conflicts between shape and color (e.g., a blue lemon). Thus, the neural representation of synaesthetic colors closely resembles that of real colors. This suggests involvement of early stages of visual processing in color synaesthesia and explains the vivid and realistic nature of synaesthetic experiences.
Collapse
Affiliation(s)
- Danko Nikolić
- Department of Neurophysiology, Max-Planck Institute for Brain Research, Frankfurt am Main, Germany.
| | | | | |
Collapse
|
47
|
Chen N, Tanaka K, Namatame M, Watanabe K. Color-Shape Associations in Deaf and Hearing People. Front Psychol 2016; 7:355. [PMID: 27014161 PMCID: PMC4791540 DOI: 10.3389/fpsyg.2016.00355] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2015] [Accepted: 02/26/2016] [Indexed: 11/13/2022] Open
Abstract
Studies have contended that neurotypical Japanese individuals exhibit consistent color-shape associations (red-circle, yellow-triangle, and blue-square) and that those color-shape associations could be constructed from common semantic information between colors and shapes through learning and/or language experiences. Here, we conducted two experiments using a direct questionnaire survey and an indirect behavioral test (Implicit Association Test), to examine whether the construction of color-shape associations entailed phonological information by comparing color-shape associations in deaf and hearing participants. The results of the direct questionnaire showed that deaf and hearing participants had similar patterns of color-shape associations (red-circle, yellow-triangle, and blue-square). However, deaf participants failed to show any facilitated processing of congruent pairs in the IAT tasks as hearing participants did. The present results suggest that color-shape associations in deaf participants may not be strong enough to be detected by indirect behavioral tasks and may be relatively weak in comparison to those of hearing participants. Thus, phonological information likely plays a role in the construction of color-shape associations.
Collapse
Affiliation(s)
- Na Chen
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan
| | - Kanji Tanaka
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan; Faculty of Science and Engineering, Waseda University, Tokyo, Japan
| | - Miki Namatame
- Department of Synthetic Design, Tsukuba University of Technology, Tsukuba, Japan
| | - Katsumi Watanabe
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan; Faculty of Science and Engineering, Waseda University, Tokyo, Japan
| |
Collapse
|
48
|
Velasco C, Woods AT, Marks LE, Cheok AD, Spence C. The semantic basis of taste-shape associations. PeerJ 2016; 4:e1644. [PMID: 26966646 PMCID: PMC4783761 DOI: 10.7717/peerj.1644] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2015] [Accepted: 01/10/2016] [Indexed: 11/30/2022] Open
Abstract
Previous research shows that people systematically match tastes with shapes. Here, we assess the extent to which matched taste and shape stimuli share a common semantic space and whether semantically congruent versus incongruent taste/shape associations can influence the speed with which people respond to both shapes and taste words. In Experiment 1, semantic differentiation was used to assess the semantic space of both taste words and shapes. The results suggest a common semantic space containing two principal components (seemingly, intensity and hedonics) and two principal clusters, one including round shapes and the taste word “sweet,” and the other including angular shapes and the taste words “salty,” “sour,” and “bitter.” The former cluster appears more positively-valenced whilst less potent than the latter. In Experiment 2, two speeded classification tasks assessed whether congruent versus incongruent mappings of stimuli and responses (e.g., sweet with round versus sweet with angular) would influence the speed of participants’ responding, to both shapes and taste words. The results revealed an overall effect of congruence with congruent trials yielding faster responses than their incongruent counterparts. These results are consistent with previous evidence suggesting a close relation (or crossmodal correspondence) between tastes and shape curvature that may derive from common semantic coding, perhaps along the intensity and hedonic dimensions.
Collapse
Affiliation(s)
- Carlos Velasco
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, Oxford, UK; Imagineering Institute, Iskandar, Malaysia
| | - Andy T Woods
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, Oxford, UK; Xperiment, UK
| | - Lawrence E Marks
- Sensory Information Processing, John B. Pierce Laboratory, New Haven, CT, USA; School of Public Health and Department of Psychology, Yale University, New Haven, CT, USA
| | - Adrian David Cheok
- Imagineering Institute, Iskandar, Malaysia; School of Mathematics, Engineering, and Computer Science, City University, London, UK
| | - Charles Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford , Oxford , UK
| |
Collapse
|
49
|
Etzi R, Spence C, Zampini M, Gallace A. When Sandpaper Is ‘Kiki’ and Satin Is ‘Bouba’: an Exploration of the Associations Between Words, Emotional States, and the Tactile Attributes of Everyday Materials. Multisens Res 2016; 29:133-55. [DOI: 10.1163/22134808-00002497] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
Over the last decade, scientists working on the topic of multisensory integration, as well as designers and marketers involved in trying to understand consumer behavior, have become increasingly interested in the non-arbitrary associations (e.g., sound symbolism) between different sensorial attributes of the stimuli they work with. Nevertheless, to date, little research in this area has investigated the presence of these crossmodal correspondences in the tactile evaluation of everyday materials. Here, we explore the presence and nature of the associations between tactile sensations, the sound of non-words, and people’s emotional states. Samples of cotton, satin, tinfoil, sandpaper, and abrasive sponge, were stroked along the participants’ forearm at the speed of 5 cm/s. Participants evaluated the materials along several dimensions, comprising scales anchored by pairs of non-words (e.g., Kiki/Bouba) and adjectives (e.g., ugly/beautiful). The results revealed that smoother textures were associated with non-words made up of round-shaped sounds (e.g., Maluma), whereas rougher textures were more strongly associated with sharp-transient sounds (e.g., Takete). The results also revealed the presence of a number of correspondences between tactile surfaces and adjectives related to visual and auditory attributes. For example, smooth textures were associated with features evoked by words such as ‘bright’ and ‘quiet’; by contrast, the rougher textures were associated with adjectives such as ‘dim’ and ‘loud’. The textures were also found to be associated with a number of emotional labels. Taken together, these results further our understanding of crossmodal correspondences involving the tactile modality and provide interesting insights in the applied field of design and marketing.
Collapse
Affiliation(s)
- Roberta Etzi
- Department of Psychology, University of Milano–Bicocca, Milan, Italy
- NeuroMI — Milan Center for Neuroscience, Milan, Italy
| | - Charles Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, UK
| | - Massimiliano Zampini
- CIMeC, Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy
- Department of Psychology and Cognitive Sciences, University of Trento, Rovereto, Italy
| | - Alberto Gallace
- Department of Psychology, University of Milano–Bicocca, Milan, Italy
- NeuroMI — Milan Center for Neuroscience, Milan, Italy
| |
Collapse
|
50
|
The size-brightness correspondence: evidence for crosstalk among aligned conceptual feature dimensions. Atten Percept Psychophys 2015; 77:2694-710. [PMID: 26294420 DOI: 10.3758/s13414-015-0977-3] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|