1
Spence C, Di Stefano N. What, if anything, can be considered an amodal sensory dimension? Psychon Bull Rev 2024; 31:1915-1933. PMID: 38381301; PMCID: PMC11543734; DOI: 10.3758/s13423-023-02447-3.
Abstract
The term 'amodal' is a key topic in several research fields across experimental psychology and cognitive neuroscience, including developmental and perception science. However, despite being regularly used in the literature, the term means something different to researchers working in these different contexts. Many developmental scientists conceive of the term as referring to those perceptual qualities, such as the size and shape of an object, that can be picked up by multiple senses (e.g., vision and touch potentially providing information relevant to the same physical stimulus/property). However, the amodal label is also widely used for qualities that are not directly sensory, such as numerosity, rhythm, and synchrony. Cognitive neuroscientists, by contrast, tend to use the term amodal to refer to those central cognitive processes and brain areas that do not appear to be preferentially responsive to a particular sensory modality, or to those symbolic or formal representations that essentially lack any modality and that are assumed to play a role in the higher processing of sensory information. Finally, perception scientists sometimes refer to the phenomenon of 'amodal completion', the spontaneous completion of perceptual information that is missing when occluded objects are presented to observers. In this paper, we review the different ways in which the term 'amodal' has been used in the literature and the evidence supporting each use. Moreover, we highlight some of the properties that have been suggested to be 'amodal' over the years. We then address some of the questions that arise from the reviewed evidence: Do different uses of the term refer to different domains, for example, sensory information, perceptual processes, or perceptual representations? Are there any commonalities among the different uses of the term? To what extent is research on cross-modal associations (or correspondences) related to, or able to shed light on, amodality? And how is the notion of amodal related to multisensory integration? Based on the reviewed evidence, it is argued that there is, as yet, no convincing empirical evidence to support the claim that amodal sensory qualities exist. We therefore suggest that the term amodal is more meaningful with respect to abstract cognition than to sensory perception, the latter being more adequately explained in terms of highly redundant cross-modal correspondences.
Affiliation(s)
- Charles Spence
- Department of Experimental Psychology, New Radcliffe House, University of Oxford, Oxford, OX2 6BW, UK.
- Crossmodal Research Laboratory, University of Oxford, Oxford, UK.
- Nicola Di Stefano
- Institute of Cognitive Sciences and Technologies, National Research Council of Italy (CNR), Rome, Italy
2
Abstract
Across the millennia, and across a range of disciplines, there has been a widespread desire to connect, or translate between, the senses in a manner that is meaningful, rather than arbitrary. Early examples were often inspired by the vivid, yet mostly idiosyncratic, crossmodal matches expressed by synaesthetes, often exploited for aesthetic purposes by writers, artists, and composers. A separate approach comes from those academic commentators who have attempted to translate between structurally similar dimensions of perceptual experience (such as pitch and colour). However, neither approach has succeeded in delivering consensually agreed crossmodal matches. As such, an alternative approach to sensory translation is needed. In this narrative historical review, focusing on the translation between audition and vision, we attempt to shed light on the topic by addressing the following three questions: (1) How is the topic of sensory translation related to synaesthesia, multisensory integration, and crossmodal associations? (2) Are there common processing mechanisms across the senses that can help to guarantee the success of sensory translation, or, rather, is mapping among the senses mediated by allegedly universal (e.g., amodal) stimulus dimensions? (3) Is the term 'translation' in the context of cross-sensory mappings used metaphorically or literally? Given the general mechanisms and concepts discussed throughout the review, the answers we come to regarding the nature of audio-visual translation are likely to apply to the translation between other perhaps less-frequently studied modality pairings as well.
Affiliation(s)
- Charles Spence
- Crossmodal Research Laboratory, University of Oxford, Oxford, UK.
- Department of Experimental Psychology, New Radcliffe House, University of Oxford, Oxford, OX2 6BW, UK.
- Nicola Di Stefano
- Institute of Cognitive Sciences and Technologies, National Research Council of Italy (CNR), Rome, Italy
3
Albertazzi L, Canal L, Micciolo R, Hachen I. Cross-Modal Perceptual Organization in Works of Art. Iperception 2020; 11:2041669520950750. PMID: 32922715; PMCID: PMC7459189; DOI: 10.1177/2041669520950750.
Abstract
This study investigates the existence of cross-modal correspondences between a series of paintings by Kandinsky and a series of selections from Schönberg's music. The experiment was conducted in two phases. In the first phase, by means of the Osgood semantic differential, participants evaluated the perceptual characteristics first of visual stimuli (pictures of Kandinsky's paintings, with varying perceptual characteristics and contents) and then of auditory stimuli (musical excerpts taken from Schönberg's piano works) on 11 pairs of adjectives rated on a continuous bipolar scale. In the second phase, participants were required to associate the pictures and musical excerpts. The results of the semantic differential test showed that certain paintings and musical excerpts were evaluated as semantically more similar, while others were evaluated as semantically more dissimilar. The direct association between musical excerpts and paintings showed both attractions and repulsions among the stimuli. The overall results provide significant insights into the relationship between concrete and abstract concepts and into the process of perceptual grouping in cross-modal phenomena.
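As an aside on the analysis approach described above, the following sketch illustrates how semantic-differential profiles obtained on a continuous bipolar scale can be compared across modalities. The adjective labels, rating values, and variable names are invented for illustration and are not taken from the study; smaller profile distances (or higher correlations) would correspond to pairings evaluated as semantically more similar.

```python
# Minimal sketch (not the authors' analysis code): comparing Osgood
# semantic-differential profiles across modalities. Ratings are on a
# continuous bipolar scale, here coded from -1 to +1; all values invented.
import numpy as np

# 11 bipolar adjective pairs (hypothetical labels for illustration only)
scales = ["rough-smooth", "gloomy-bright", "heavy-light", "static-dynamic",
          "cold-warm", "angular-round", "tense-relaxed", "turbid-clear",
          "harsh-gentle", "closed-open", "sad-serene"]

painting_profile = np.array([-0.6, 0.2, -0.4, 0.8, -0.3, -0.7, -0.5, 0.1, -0.6, 0.3, -0.2])
excerpt_profile  = np.array([-0.5, 0.1, -0.3, 0.7, -0.4, -0.6, -0.6, 0.0, -0.5, 0.2, -0.3])

# Smaller distance (or higher correlation) = semantically more similar,
# which is the basis for predicting attraction in the association task.
euclidean_distance = np.linalg.norm(painting_profile - excerpt_profile)
profile_correlation = np.corrcoef(painting_profile, excerpt_profile)[0, 1]

print(f"Euclidean distance:  {euclidean_distance:.3f}")
print(f"Profile correlation: {profile_correlation:.3f}")
```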
Affiliation(s)
- Liliana Albertazzi
- Neuroscience Area, International School for Advanced Studies (SISSA), Trieste, Italy
- Luisa Canal
- Neuroscience Area, International School for Advanced Studies (SISSA), Trieste, Italy
- Rocco Micciolo
- Neuroscience Area, International School for Advanced Studies (SISSA), Trieste, Italy
- Iacopo Hachen
- Neuroscience Area, International School for Advanced Studies (SISSA), Trieste, Italy
4
Kang H, Lancelin D, Pressnitzer D. Memory for Random Time Patterns in Audition, Touch, and Vision. Neuroscience 2018; 389:118-132. PMID: 29577997; DOI: 10.1016/j.neuroscience.2018.03.017.
Abstract
Perception deals with temporal sequences of events, like series of phonemes for audition, dynamic changes in pressure for touch textures, or moving objects for vision. Memory processes are thus needed to make sense of the temporal patterning of sensory information. Recently, we have shown that auditory temporal patterns can be learned rapidly and incidentally with repeated exposure [Kang et al., 2017]. Here, we tested whether rapid incidental learning of temporal patterns is specific to audition, or whether it is a more general property of sensory systems. We used the same behavioral task in three modalities: audition, touch, and vision, for stimuli having identical temporal statistics. Participants were presented with sequences of acoustic pulses for audition, motion pulses to the fingertips for touch, or light pulses for vision. Pulses were randomly and irregularly spaced, with all inter-pulse intervals in the sub-second range and all constrained to be longer than the temporal acuity in any modality. This led to pulse sequences with an average inter-pulse interval of 166 ms, a minimum inter-pulse interval of 60 ms, and a total duration of 1.2 s. Results showed that, if a random temporal pattern re-occurred at random times during an experimental block, it was rapidly learned, whatever the sensory modality. Moreover, patterns first learned in the auditory modality displayed transfer of learning to either touch or vision. This suggests that sensory systems may be exquisitely tuned to incidentally learn re-occurring temporal patterns, with possible cross-talk between the senses.
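A minimal sketch of one way to generate random pulse sequences with statistics like those reported above (total duration 1.2 s, minimum inter-pulse interval 60 ms, mean interval in the same range as the reported 166 ms). The generation scheme, function name, and parameter choices are assumptions for illustration, not the authors' stimulus code.

```python
# Illustrative sketch (assumed procedure): generate an irregular pulse
# sequence of fixed duration in which every inter-pulse interval exceeds
# a minimum spacing.
import numpy as np

def random_pulse_sequence(duration=1.2, n_pulses=8, min_ipi=0.060, rng=None):
    """Return sorted pulse onsets (s); first pulse at 0, last at `duration`,
    with every inter-pulse interval >= min_ipi."""
    rng = np.random.default_rng() if rng is None else rng
    n_gaps = n_pulses - 1
    slack = duration - n_gaps * min_ipi      # time left over after minimum gaps
    assert slack > 0, "constraints are infeasible"
    # Distribute the slack randomly over the gaps (Dirichlet keeps the sum fixed).
    gaps = min_ipi + slack * rng.dirichlet(np.ones(n_gaps))
    return np.concatenate(([0.0], np.cumsum(gaps)))

seq = random_pulse_sequence(rng=np.random.default_rng(0))
print("pulse onsets (s):", np.round(seq, 3))
print("mean IPI (ms):", round(float(np.diff(seq).mean()) * 1000, 1))  # ~171 ms with these settings
```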
Affiliation(s)
- HiJee Kang
- Laboratoire des Systèmes Perceptifs, Département d'études cognitives, École Normale Supérieure, PSL Research University, CNRS, 29 rue d'Ulm, 75005 Paris, France.
- Denis Lancelin
- Laboratoire des Systèmes Perceptifs, Département d'études cognitives, École Normale Supérieure, PSL Research University, CNRS, 29 rue d'Ulm, 75005 Paris, France
- Daniel Pressnitzer
- Laboratoire des Systèmes Perceptifs, Département d'études cognitives, École Normale Supérieure, PSL Research University, CNRS, 29 rue d'Ulm, 75005 Paris, France.
5
Albertazzi L, Canal L, Micciolo R, Ferrari F, Sitta S, Hachen I. Naturally Biased Associations Between Music and Poetry. Perception 2016; 46:139-160. PMID: 27770078; DOI: 10.1177/0301006616673851.
Abstract
The study analyzes the existence of naturally biased associations in the general population between a series of musical selections and a series of quatrains. Unlike other studies in the field, the association is tested between complex stimuli involving literary texts, which increases the load of the semantic factors. The stimuli were eight quatrains taken from the same poem and eight musical clips taken from a classical musical version of the poem. The experiment was conducted in two phases. First, participants were asked to rate 10 pairs of opposite adjectives on a continuous bipolar scale while reading a quatrain or listening to a musical clip; then they were asked to associate a given clip directly with the quatrains in decreasing order. The results showed significant associations between the semantics of the quatrains and the musical selections. They also confirmed the correspondences experienced by the composer when writing the musical version of the poem. Connotative dimensions such as rough or smooth, distressing or serene, turbid or clear, and gloomy or bright, which characterized both the semantic and the auditory stimuli, may have played a role in the associations. The results also shed light on the relative performance of the two methodologies adopted in the two phases of the test. Finally, specific musical components and their combinations are likely to have played an important role in the associations, an aspect that will be addressed in further studies.
Affiliation(s)
- Luisa Canal
- Department of Psychology and Cognitive Sciences, University of Trento, Italy
- Rocco Micciolo
- Department of Psychology and Cognitive Sciences, University of Trento, Italy
- Iacopo Hachen
- Center for Mind/Brain Sciences, University of Trento, Italy
6
Misselhorn J, Daume J, Engel AK, Friese U. A matter of attention: Crossmodal congruence enhances and impairs performance in a novel trimodal matching paradigm. Neuropsychologia 2015. PMID: 26209356; DOI: 10.1016/j.neuropsychologia.2015.07.022.
Abstract
A novel crossmodal matching paradigm including vision, audition, and somatosensation was developed to investigate the interaction between attention and crossmodal congruence in multisensory integration. To that end, all three modalities were stimulated concurrently while a bimodal focus was defined blockwise, and participants had to evaluate the congruence between stimulus intensity changes in the attended modalities. We found that crossmodal congruence improved performance if both the attended modalities and the task-irrelevant distractor were congruent. If the attended modalities were incongruent, the distractor impaired performance owing to its congruence with one of the attended modalities. The magnitude of crossmodal enhancement or impairment differed between attentional conditions: the largest crossmodal effects were seen in visual-tactile matching, intermediate effects in audio-visual matching, and the smallest effects in audio-tactile matching. We conclude that these differences in crossmodal matching likely reflect characteristics of multisensory neural network architecture. We discuss our results with respect to the timing of perceptual processing and state hypotheses for future physiological studies. Finally, etiological questions are addressed.
Affiliation(s)
- Jonas Misselhorn
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246 Hamburg, Germany.
- Jonathan Daume
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246 Hamburg, Germany
- Andreas K Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246 Hamburg, Germany
- Uwe Friese
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246 Hamburg, Germany
7
Stekelenburg JJ, Keetels M. The effect of synesthetic associations between the visual and auditory modalities on the Colavita effect. Exp Brain Res 2015; 234:1209-19. PMID: 26126803; PMCID: PMC4828489; DOI: 10.1007/s00221-015-4363-0.
Abstract
The Colavita effect refers to the phenomenon that, when confronted with an audiovisual stimulus, observers more often report having perceived the visual than the auditory component. The Colavita effect depends on low-level stimulus factors such as spatial and temporal proximity between the unimodal signals. Here, we examined whether the Colavita effect is modulated by synesthetic congruency between visual size and auditory pitch. If the Colavita effect depends on synesthetic congruency, we would expect a larger Colavita effect for synesthetically congruent size/pitch combinations (large visual stimulus/low-pitched tone; small visual stimulus/high-pitched tone) than for synesthetically incongruent combinations (large visual stimulus/high-pitched tone; small visual stimulus/low-pitched tone). Participants had to identify stimulus type (visual, auditory, or audiovisual). The study replicated the Colavita effect, in that participants more often reported the visual than the auditory component of the audiovisual stimuli. Synesthetic congruency had, however, no effect on the magnitude of the Colavita effect. EEG recordings to congruent and incongruent audiovisual pairings showed a late frontal congruency effect at 400-550 ms and an occipitoparietal effect at 690-800 ms, with neural sources in the anterior cingulate and premotor cortex for the 400- to 550-ms window, and in the premotor cortex, inferior parietal lobule, and posterior middle temporal gyrus for the 690- to 800-ms window. The electrophysiological data show that synesthetic congruency was probably detected in a processing stage subsequent to the Colavita effect. We conclude that, in a modality detection task, the Colavita effect can be modulated by low-level structural factors but not by higher-order associations between auditory and visual inputs.
Affiliation(s)
- Jeroen J Stekelenburg
- Department of Cognitive Neuropsychology, Tilburg University, P.O. Box 90153, Warandelaan 2, 5000 LE, Tilburg, The Netherlands.
- Mirjam Keetels
- Department of Cognitive Neuropsychology, Tilburg University, P.O. Box 90153, Warandelaan 2, 5000 LE, Tilburg, The Netherlands
8
Abstract
In many everyday situations, our senses are bombarded by many different unisensory signals at any given time. To gain the most veridical, and least variable, estimate of environmental stimuli/properties, we need to combine the individual noisy unisensory perceptual estimates that refer to the same object, while keeping those estimates belonging to different objects or events separate. How, though, does the brain "know" which stimuli to combine? Traditionally, researchers interested in the crossmodal binding problem have focused on the roles that spatial and temporal factors play in modulating multisensory integration. However, crossmodal correspondences between various unisensory features (such as between auditory pitch and visual size) may provide yet another important means of constraining the crossmodal binding problem. A large body of research now shows that people exhibit consistent crossmodal correspondences between many stimulus features in different sensory modalities. For example, people consistently match high-pitched sounds with small, bright objects that are located high up in space. The literature reviewed here supports the view that crossmodal correspondences need to be considered alongside semantic and spatiotemporal congruency, among the key constraints that help our brains solve the crossmodal binding problem.
9
Abstract
A sound presented in temporal proximity to a light can alter the perceived temporal occurrence of that light (temporal ventriloquism). Recent studies have suggested that pitch-size synesthetic congruency (i.e., a natural association between the relative pitch of a sound and the relative size of a visual stimulus) might affect this phenomenon. To reexamine this, participants made temporal order judgements about small- and large-sized visual stimuli while high- or low-pitched tones were presented before the first and after the second light. We replicated a previous study showing that, at large sound-light intervals, sensitivity for visual temporal order was better for synesthetically congruent than for incongruent pairs. However, this congruency effect could not be attributed to temporal ventriloquism, since it disappeared at short sound-light intervals, if compared with a synchronous audiovisual baseline condition that excluded response biases. In addition, synesthetic congruency did not affect temporal ventriloquism even if participants were made explicitly aware of congruency before testing. Our results thus challenge the view that synesthetic congruency affects temporal ventriloquism.
10
Abstract
After examination of the status of time in experimental psychology and a review of related major texts, 2 opposite approaches are presented in which time is either unified or fragmented. Unified time perception views, usually guided by Weber's law, are embodied in various models. After a brief review of old models and a description of the major contemporary models of time perception, views on fragmented time perception are presented as challenges for any unified time view. Fragmentation of psychological time emerges from (a) disruptions of the Weber function, which are caused by the types of interval presentation, by extensive practice, and by counting explicitly or not; and (b) modulations of time sensitivity and perceived duration by attention and interval structures. Weber's law is a useful guide for studying psychological time, but it is also reasonable to assume that more than one so-called central timekeeper could contribute to perceiving time.
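For reference, the Weber relation invoked by the unified-time models discussed here can be stated as below. The generalized form with an additive constant is a standard extension used to describe departures from strict proportionality at short durations, and is included only as context; it is not taken from this abstract.

```latex
\frac{\Delta t}{t} = k
\qquad \text{(strict Weber's law: the duration JND grows in proportion to the base duration } t\text{)}

\Delta t = k\,t + c
\qquad \text{(generalized form, where the constant } c \text{ captures deviations at short durations)}
```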
Affiliation(s)
- S Grondin
- Ecole de Psychologie, Université Laval, Québec, Canada.
11
Flowers JH, Hauer TA. Musical versus visual graphs: cross-modal equivalence in perception of time series data. Hum Factors 1995; 37:553-569. PMID: 8566997; DOI: 10.1518/001872095779049264.
Abstract
By applying multidimensional scaling procedures and other quantitative analyses to perceptual dissimilarity judgments, we compared the perceptual structure of visual line graphs depicting simulated time series data with that of auditory displays (musical graphs) presenting the same data. Highly similar and meaningful perceptual structures were demonstrated for both auditory and visual modalities, showing that important data characteristics (function slope, shape, and level) were perceptually salient in either presentation mode. Auditory graphics may be a highly useful alternative to traditional visual graphics for a variety of data presentation applications.
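A minimal sketch of the kind of multidimensional scaling (MDS) analysis referred to above, applied to a matrix of pairwise dissimilarity judgments. The stimulus labels and dissimilarity values are invented for illustration, and the scikit-learn call is simply one common way to run metric MDS, not the authors' actual procedure.

```python
# Illustrative sketch: recover a low-dimensional perceptual configuration
# from averaged pairwise dissimilarity judgments via multidimensional scaling.
import numpy as np
from sklearn.manifold import MDS

stimuli = ["rising-line", "falling-line", "U-shape", "flat-high", "flat-low"]  # hypothetical

# Symmetric matrix of averaged dissimilarity ratings (0 = identical); values invented.
dissim = np.array([
    [0.0, 0.9, 0.6, 0.5, 0.7],
    [0.9, 0.0, 0.6, 0.7, 0.5],
    [0.6, 0.6, 0.0, 0.8, 0.8],
    [0.5, 0.7, 0.8, 0.0, 0.4],
    [0.7, 0.5, 0.8, 0.4, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)

for name, (x, y) in zip(stimuli, coords):
    print(f"{name:12s} -> ({x:+.2f}, {y:+.2f})")
print("stress:", round(mds.stress_, 3))
```

Running the same analysis separately on the auditory and visual dissimilarity matrices and comparing the recovered configurations is the kind of cross-modal equivalence check the abstract describes.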
Affiliation(s)
- J H Flowers
- Department of Psychology, University of Nebraska, Lincoln 68588-0308, USA
12
Mahar D, Mackenzie B, McNicol D. Modality-specific differences in the processing of spatially, temporally, and spatiotemporally distributed information. Perception 1994; 23:1369-86. PMID: 7761246; DOI: 10.1068/p231369.
Abstract
The extent to which auditory, tactile, and visual perceptual representations are similar, particularly when dealing with speech and speech-like stimuli, was investigated. It was found that comparisons between auditory and tactile patterns were easier to perform than were similar comparisons between auditory and visual stimuli. This was true across a variety of styles of tactile and visual display, and was not due to limitations in the discriminability of the visual displays. The findings suggest that auditory and tactile representations of stimuli are more alike than are auditory and visual ones. It was also found that touch and vision differ in terms of the style of information distribution which they process most efficiently. Touch dealt with patterns best when the pattern was characterised by changes across time, whereas vision did best when spatially or spatiotemporally distributed patterns were presented. As the sense of hearing also seems to specialise in the processing of temporally ordered patterns, these results suggest one way in which the senses of hearing and touch differ from vision.
Affiliation(s)
- D Mahar
- Division of Psychology, Australian National University, Canberra
13
Abstract
The role of the human amygdala in cross-modal associations was investigated in two subjects: SM-046, who had bilateral damage circumscribed to the amygdala; and the patient known as Boswell, whose damage in both temporal lobes includes the amygdala and surrounding cortices. Neither subject was impaired on Tactile-Visual or Visual-Tactile cross-modal tasks using the Arc-Circle test, suggesting that the amygdala is not involved in cross-modal associations involving perceptually "equivalent" basic stimulus properties. On the other hand, the results are compatible with the amygdala's involvement in higher-order associations between exteroceptive sensory data and interoceptive data concerned with correlated somatic states.
Affiliation(s)
- F K Nahm
- Neurosciences Group, University of California, San Diego
14
Melara RD, Marks LE, Lesko KE. Optional processes in similarity judgments. Percept Psychophys 1992; 51:123-33. PMID: 1549431; DOI: 10.3758/bf03212237.
Abstract
This research investigates the nature of similarity relations among three pairs of interacting dimensions: (1) the integral dimensions of auditory pitch and loudness, (2) the configural dimensions of paired parentheses, and (3) the cross-modally corresponding dimensions of visual position and auditory pitch. We evaluated the rules by which information from each dimension combines in similarity judgments. Our claim is that, when judging similarity, processes that are obligatory, or what we call mandatory processes, can commingle with processes of choice, or what we call optional processes. By varying instructions, we found strong evidence of optional processing. Instructions to rate overall similarity encouraged subjects to attend to stimuli as wholes and led to a Euclidean rule in similarity scaling. Instructions to focus on dimensions encouraged subjects to consider each stimulus dimension separately and led to a city-block rule. We argue that optional processes may obscure mandatory ones, and so need to be identified before mandatory processes can be understood.
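The Euclidean and city-block combination rules contrasted above are the r = 2 and r = 1 special cases of the Minkowski metric. The sketch below, using invented two-dimensional stimulus coordinates, shows how the same pair of dimensional differences yields different predicted dissimilarities under each rule.

```python
# Illustrative sketch: Euclidean vs. city-block combination of dimensional
# differences, as special cases of the Minkowski metric.
import numpy as np

def minkowski(a, b, r):
    """Minkowski distance of order r between stimuli a and b."""
    return np.sum(np.abs(np.asarray(a) - np.asarray(b)) ** r) ** (1.0 / r)

# Hypothetical coordinates on two interacting dimensions (e.g., pitch, loudness),
# in arbitrary units; values are invented.
stim_a = (1.0, 2.0)
stim_b = (3.0, 5.0)

print("city-block (r=1):", minkowski(stim_a, stim_b, r=1))  # 2 + 3 = 5.0
print("Euclidean  (r=2):", minkowski(stim_a, stim_b, r=2))  # sqrt(13) ~= 3.61
```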
Affiliation(s)
- R D Melara
- Purdue University, West Lafayette, Indiana