1
Abstract
It is commonly agreed that vision is more sensitive to spatial information, while audition is more sensitive to temporal information. When both visual and auditory information are available simultaneously, the modality appropriateness hypothesis predicts that, depending on the task, the most appropriate (i.e., reliable) modality dominates perception. While previous research mainly focused on discrepant information from different sensory inputs to scrutinize the modality appropriateness hypothesis, the current study aimed at investigating the modality appropriateness hypothesis when multimodal information was provided in a nondiscrepant and simultaneous manner. To this end, participants performed a temporal rhythm reproduction task for which the auditory modality is known to be the most appropriate. The experiment comprised an auditory (i.e., beeps), a visual (i.e., flashing dots), and an audiovisual condition (i.e., beeps and dots simultaneously). Moreover, constant as well as variable interstimulus intervals were implemented. Results revealed higher accuracy and lower variability in the auditory condition for both interstimulus interval types when compared to the visual condition. More importantly, there were no differences between the auditory and the audiovisual condition across both interstimulus interval types. This indicates that the auditory modality dominated multimodal perception in the task, whereas the visual modality was disregarded and hence did not add to reproduction performance.
Affiliation(s)
- Alexandra Hildebrandt
- Department for the Psychology of Human Movement and Sport, Institute of Sport Science, Friedrich Schiller University Jena, Germany
- Eric Grießbach
- Department for the Psychology of Human Movement and Sport, Institute of Sport Science, Friedrich Schiller University Jena, Germany
- Rouwen Cañal-Bruland
- Department for the Psychology of Human Movement and Sport, Institute of Sport Science, Friedrich Schiller University Jena, Germany
2
Richards MD, Goltz HC, Wong AM. Audiovisual perception in amblyopia: A review and synthesis. Exp Eye Res 2019; 183:68-75. [DOI: 10.1016/j.exer.2018.04.017]
3
Sanders P, Thompson B, Corballis P, Searchfield G. On the Timing of Signals in Multisensory Integration and Crossmodal Interactions: a Scoping Review. Multisens Res 2019; 32:533-573. [PMID: 31137004] [DOI: 10.1163/22134808-20191331]
Abstract
A scoping review was undertaken to explore research investigating early interactions and integration of auditory and visual stimuli in the human brain. The focus was on methods used to study low-level multisensory temporal processing using simple stimuli in humans, and how this research has informed our understanding of multisensory perception. The study of multisensory temporal processing probes how the relative timing between signals affects perception. Several tasks, illusions, computational models, and neuroimaging techniques were identified in the literature search. Research into early audiovisual temporal processing in special populations was also reviewed. Recent research has continued to provide support for early integration of crossmodal information. These early interactions can influence higher-level factors, and vice versa. Temporal relationships between auditory and visual stimuli influence multisensory perception, and likely play a substantial role in solving the 'correspondence problem' (how the brain determines which sensory signals belong together, and which should be segregated).
Affiliation(s)
- Philip Sanders
- Section of Audiology, University of Auckland, Auckland, New Zealand; Centre for Brain Research, University of Auckland, New Zealand; Brain Research New Zealand - Rangahau Roro Aotearoa, New Zealand
- Benjamin Thompson
- Centre for Brain Research, University of Auckland, New Zealand; School of Optometry and Vision Science, University of Auckland, Auckland, New Zealand; School of Optometry and Vision Science, University of Waterloo, Waterloo, Canada
- Paul Corballis
- Centre for Brain Research, University of Auckland, New Zealand; Department of Psychology, University of Auckland, Auckland, New Zealand
- Grant Searchfield
- Section of Audiology, University of Auckland, Auckland, New Zealand; Centre for Brain Research, University of Auckland, New Zealand; Brain Research New Zealand - Rangahau Roro Aotearoa, New Zealand
4
Brooks CJ, Chan YM, Anderson AJ, McKendrick AM. Audiovisual Temporal Perception in Aging: The Role of Multisensory Integration and Age-Related Sensory Loss. Front Hum Neurosci 2018; 12:192. [PMID: 29867415] [PMCID: PMC5954093] [DOI: 10.3389/fnhum.2018.00192]
Abstract
Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information.
Affiliation(s)
- Cassandra J Brooks
- Department of Optometry and Vision Sciences, The University of Melbourne, Melbourne, VIC, Australia
- Yu Man Chan
- Department of Optometry and Vision Sciences, The University of Melbourne, Melbourne, VIC, Australia
- Andrew J Anderson
- Department of Optometry and Vision Sciences, The University of Melbourne, Melbourne, VIC, Australia
- Allison M McKendrick
- Department of Optometry and Vision Sciences, The University of Melbourne, Melbourne, VIC, Australia
5
Central–peripheral differences in audiovisual and visuotactile event perception. Atten Percept Psychophys 2017; 79:2552-2563. [DOI: 10.3758/s13414-017-1396-4]
6
Mendonça C, Mandelli P, Pulkki V. Modeling the Perception of Audiovisual Distance: Bayesian Causal Inference and Other Models. PLoS One 2016; 11:e0165391. [PMID: 27959919] [PMCID: PMC5154506] [DOI: 10.1371/journal.pone.0165391]
Abstract
Studies of audiovisual perception of distance are rare. Here, visual and auditory cue interactions in distance perception are tested against several multisensory models, including a modified causal inference model that adds predictions of estimate distributions. In our study, the audiovisual perception of distance was overall better explained by Bayesian causal inference than by other traditional models, such as sensory dominance, mandatory integration, and no interaction. Causal inference resolved with probability matching yielded the best fit to the data. Finally, we propose that sensory weights can also be estimated from causal inference. The analysis of the sensory weights allows us to obtain windows within which the audiovisual stimuli interact. We find that the visual stimulus always contributes more than 80% to the perception of visual distance. The visual stimulus also contributes more than 50% to the perception of auditory distance, but only within a mobile window of interaction, which ranges from 1 to 4 m.
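The Bayesian causal inference account compared in this abstract can be illustrated with a small numerical sketch. The Python snippet below is not the authors' implementation; it follows the standard formulation in which a zero-mean Gaussian prior over source position is integrated out analytically, and every parameter value is invented for illustration. It returns the posterior probability that an auditory and a visual measurement share a common cause.

```python
import math

def common_cause_posterior(x_a, x_v, sigma_a, sigma_v, sigma_p, p_common):
    """Posterior probability that an auditory and a visual measurement
    arose from a single common cause, assuming Gaussian measurement noise
    and a zero-mean Gaussian prior (width sigma_p) over source position."""
    va, vv, vp = sigma_a ** 2, sigma_v ** 2, sigma_p ** 2
    # Likelihood under one common source, with the source integrated out.
    d1 = va * vv + va * vp + vv * vp
    like_c1 = math.exp(
        -0.5 * ((x_a - x_v) ** 2 * vp + x_a ** 2 * vv + x_v ** 2 * va) / d1
    ) / (2 * math.pi * math.sqrt(d1))
    # Likelihood under two independent sources, each drawn from the prior.
    like_c2 = math.exp(
        -0.5 * (x_a ** 2 / (va + vp) + x_v ** 2 / (vv + vp))
    ) / (2 * math.pi * math.sqrt((va + vp) * (vv + vp)))
    return like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

# Nearby cues favor a common cause; widely separated cues favor separate causes.
near = common_cause_posterior(1.0, 1.2, sigma_a=0.5, sigma_v=0.2, sigma_p=2.0, p_common=0.5)
far = common_cause_posterior(1.0, 4.0, sigma_a=0.5, sigma_v=0.2, sigma_p=2.0, p_common=0.5)
```

With these made-up parameters, measurements that nearly coincide yield a high common-cause posterior, while widely separated ones make independent causes far more probable, which is the qualitative behavior the model class relies on.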
Affiliation(s)
- Catarina Mendonça
- Department of Signal Processing and Acoustics, Aalto University, Espoo, Finland
- Pietro Mandelli
- School of Industrial and Information Engineering, Polytechnic University of Milan, Milan, Italy
- Ville Pulkki
- Department of Signal Processing and Acoustics, Aalto University, Espoo, Finland
7
Bolognini N, Convento S, Casati C, Mancini F, Brighina F, Vallar G. Multisensory integration in hemianopia and unilateral spatial neglect: Evidence from the sound induced flash illusion. Neuropsychologia 2016; 87:134-143. [PMID: 27197073] [DOI: 10.1016/j.neuropsychologia.2016.05.015]
Abstract
Recent neuropsychological evidence suggests that acquired brain lesions can, in some instances, abolish the ability to integrate inputs from different sensory modalities, disrupting multisensory perception. We explored the ability to perceive multisensory events, in particular the integrity of audio-visual processing in the temporal domain, in brain-damaged patients with visual field defects (VFD), or with unilateral spatial neglect (USN), by assessing their sensitivity to the 'Sound-Induced Flash Illusion' (SIFI). The study yielded two key findings. Firstly, the 'fission' illusion (namely, seeing multiple flashes when a single flash is paired with multiple sounds) is reduced in both left- and right-brain-damaged patients with VFD, but not in right-brain-damaged patients with left USN. The disruption of the fission illusion is proportional to the extent of the occipital damage. Secondly, a reliable 'fusion' illusion (namely, seeing fewer flashes when a single sound is paired with multiple flashes) is evoked in USN patients, but neither in VFD patients nor in healthy participants. A control experiment showed that the fusion, but not the fission, illusion is lost in older participants (over 50 years old), as compared with younger healthy participants (under 30 years old). This evidence indicates that the fission and fusion illusions are dissociable multisensory phenomena, altered differently by impairments of visual perception (i.e. VFD) and spatial attention (i.e. USN). The occipital cortex represents a key cortical site for binding auditory and visual stimuli in the SIFI, while damage to right-hemisphere areas mediating spatial attention and awareness does not prevent the integration of audio-visual inputs in the temporal domain.
Affiliation(s)
- Nadia Bolognini
- Department of Psychology, and Milan Center for Neuroscience - NeuroMi, University of Milano-Bicocca, Milano, Italy; Laboratory of Neuropsychology, and Department of Neurorehabilitation Sciences, IRCSS Istituto Auxologico, Milano, Italy
- Silvia Convento
- Department of Psychology, and Milan Center for Neuroscience - NeuroMi, University of Milano-Bicocca, Milano, Italy; Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Carlotta Casati
- Laboratory of Neuropsychology, and Department of Neurorehabilitation Sciences, IRCSS Istituto Auxologico, Milano, Italy
- Flavia Mancini
- Department of Neuroscience, Physiology & Pharmacology, University College London, London, UK
- Filippo Brighina
- Department of Experimental Biomedicine and Clinical Neuroscience, University of Palermo, Palermo, Italy
- Giuseppe Vallar
- Department of Psychology, and Milan Center for Neuroscience - NeuroMi, University of Milano-Bicocca, Milano, Italy; Laboratory of Neuropsychology, and Department of Neurorehabilitation Sciences, IRCSS Istituto Auxologico, Milano, Italy
8
Powers AR III, Hillock-Dunn A, Wallace MT. Generalization of multisensory perceptual learning. Sci Rep 2016; 6:23374. [PMID: 27000988] [PMCID: PMC4802214] [DOI: 10.1038/srep23374]
Abstract
Life in a multisensory world requires the rapid and accurate integration of stimuli across the different senses. In this process, the temporal relationship between stimuli is critical in determining which stimuli share a common origin. Numerous studies have described a multisensory temporal binding window—the time window within which audiovisual stimuli are likely to be perceptually bound. In addition to characterizing this window’s size, recent work has shown it to be malleable, with the capacity for substantial narrowing following perceptual training. However, the generalization of these effects to other measures of perception is not known. This question was examined by characterizing the ability of training on a simultaneity judgment task to influence perception of the temporally-dependent sound-induced flash illusion (SIFI). Results do not demonstrate a change in performance on the SIFI itself following training. However, data do show an improved ability to discriminate rapidly-presented two-flash control conditions following training. Effects were specific to training and scaled with the degree of temporal window narrowing exhibited. Results do not support generalization of multisensory perceptual learning to other multisensory tasks. However, results do show that training results in improvements in visual temporal acuity, suggesting a generalization effect of multisensory training on unisensory abilities.
Affiliation(s)
- Albert R Powers III
- Kennedy Center, Vanderbilt University, Nashville, Tennessee, USA; Neuroscience Graduate Program, Vanderbilt University, Nashville, Tennessee, USA; Medical Scientist Training Program, Vanderbilt University School of Medicine, Nashville, Tennessee, USA; Department of Psychiatry, Yale University, New Haven, Connecticut, USA
- Andrea Hillock-Dunn
- Kennedy Center, Vanderbilt University, Nashville, Tennessee, USA; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, Tennessee, USA
- Mark T Wallace
- Kennedy Center, Vanderbilt University, Nashville, Tennessee, USA; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, Tennessee, USA; Neuroscience Graduate Program, Vanderbilt University, Nashville, Tennessee, USA
9
Andersen TS. The early maximum likelihood estimation model of audiovisual integration in speech perception. J Acoust Soc Am 2015; 137:2884-2891. [PMID: 25994715] [DOI: 10.1121/1.4916691]
Abstract
Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk-MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures favored more complex models. This difference between conventional error measures and cross-validation was found to be indicative of over-fitting in more complex models such as the FLMP.
Affiliation(s)
- Tobias S Andersen
- Section for Cognitive Systems, Department of Applied Mathematics and Computer Science, Technical University of Denmark, Richard Petersens Plads, Building 321, DK-2800 Kgs. Lyngby, Denmark
10
Whittingham KM, McDonald JS, Clifford CW. Synesthetes show normal sound-induced flash fission and fusion illusions. Vision Res 2014; 105:1-9. [DOI: 10.1016/j.visres.2014.08.010]
11
Phenomenology of the sound-induced flash illusion. Exp Brain Res 2014; 232:2207-20. [DOI: 10.1007/s00221-014-3912-2]
12
Mendonça C, Santos JA, López-Moliner J. The benefit of multisensory integration with biological motion signals. Exp Brain Res 2011; 213:185-92. [DOI: 10.1007/s00221-011-2620-4]
13
Besson P, Richiardi J, Bourdin C, Bringoux L, Mestre DR, Vercher JL. Bayesian networks and information theory for audio-visual perception modeling. Biol Cybern 2010; 103:213-226. [PMID: 20502912] [DOI: 10.1007/s00422-010-0392-8]
Abstract
Thanks to their different senses, human observers acquire multiple pieces of information from their environment. Complex cross-modal interactions occur during this perceptual process. This article proposes a framework to analyze and model these interactions through a rigorous, systematic, data-driven process. This requires considering the general relationships between the physical events or factors involved in the process, not only in quantitative terms, but also in terms of the influence of one factor on another. We use tools from information theory and probabilistic reasoning to derive relationships between the random variables of interest, where the central notion is that of conditional independence. Using mutual information analysis to guide the model elicitation process, a probabilistic causal model encoded as a Bayesian network is obtained. We exemplify the method using data collected in an audio-visual localization task with human subjects, and we show that it yields a well-motivated model with good predictive ability. The model elicitation process offers new prospects for the investigation of the cognitive mechanisms of multisensory perception.
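The mutual information analysis used here to guide model elicitation can be sketched for discrete variables. The snippet below is an illustrative plug-in estimator, not the authors' code, and the trial labels are hypothetical; two variables with zero mutual information are candidates for an independence assumption in the elicited network.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Plug-in estimate of I(X; Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), count in pxy.items():
        p_joint = count / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# Hypothetical discretized trials: stimulus side vs. reported side.
dependent = ([("left", "left")] * 40 + [("right", "right")] * 40
             + [("left", "right")] * 10 + [("right", "left")] * 10)
independent = [(a, b) for a in ("left", "right") for b in ("left", "right")] * 25

mi_dep = mutual_information(dependent)    # positive: report depends on stimulus
mi_ind = mutual_information(independent)  # zero for a perfectly factorized table
```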
Affiliation(s)
- Patricia Besson
- Institute of Movement Sciences, CNRS & Université de la Méditerranée, Marseille, France.
14
Kawabe T. Audiovisual temporal capture underlies flash fusion. Exp Brain Res 2009; 198:195-208. [PMID: 19521693] [DOI: 10.1007/s00221-009-1877-3]
Abstract
When sequential visual flashes are accompanied by a lower number of sequential auditory pulses, the perceived number of visual flashes is lower than the actual number, an illusion termed 'flash fusion'. We examined whether temporal capture of flashes by pulses underlay flash fusion. One of the visual flashes was given a luminance increment, and observers reported which flash had the luminance increment. Results showed that the pulse strongly captured the flashes in its temporal vicinity, resulting in flash fusion. Moreover, when one of the successive pulses was given a higher frequency than others, the luminance increment was perceptually paired with the pulse with the higher frequency. The pairing of audiovisual features disappeared when the temporal pattern of the pulse frequency was difficult for the observer to anticipate. These data indicate that flash fusion is caused by temporal capture of flashes by the pulse, and that feature matching between auditory and visual signals also contributes to the modulation of perceived temporal structure of flashes during flash fusion.
15
Philippi TG, van Erp JB, Werkhoven PJ. Multisensory temporal numerosity judgment. Brain Res 2008; 1242:116-25. [DOI: 10.1016/j.brainres.2008.05.056]
16
Bruns P, Getzmann S. Audiovisual influences on the perception of visual apparent motion: exploring the effect of a single sound. Acta Psychol (Amst) 2008; 129:273-83. [PMID: 18790468] [DOI: 10.1016/j.actpsy.2008.08.002]
Abstract
Previous research has shown that irrelevant sounds can facilitate the perception of visual apparent motion. Here the effectiveness of a single sound to facilitate motion perception was investigated in three experiments. Observers were presented with two discrete lights temporally separated by stimulus onset asynchronies from 0 to 350 ms. After each trial, observers classified their impression of the stimuli using a categorisation system. A short sound presented temporally (and spatially) midway between the lights facilitated the impression of motion relative to baseline (lights without sound), whereas a sound presented either before the first or after the second light or simultaneously with the lights did not affect motion impression. The facilitation effect also occurred with sound presented far from the visual display, as well as with continuous-sound that was started with the first light and terminated with the second light. No facilitation of visual motion perception occurred if the sound was part of a tone sequence that allowed for intramodal perceptual grouping of the auditory stimuli prior to the critical audiovisual stimuli. Taken together, the findings are consistent with a low-level audiovisual integration approach in which the perceptual system merges temporally proximate sound and light stimuli, thereby provoking the impression of a single multimodal moving object.
Affiliation(s)
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, D-20146 Hamburg, Germany.
17
Bresciani JP, Dammeier F, Ernst MO. Tri-modal integration of visual, tactile and auditory signals for the perception of sequences of events. Brain Res Bull 2008; 75:753-60. [PMID: 18394521] [DOI: 10.1016/j.brainresbull.2008.01.009]
Abstract
We investigated the interactions between visual, tactile and auditory sensory signals for the perception of sequences of events. Sequences of flashes, taps and beeps were presented simultaneously. For each session, subjects were instructed to count the number of events presented in one modality (Target) and to ignore the stimuli presented in the other modalities (Background). The number of events presented in the background sequence could differ from the number of events in the target sequence. For each session, we quantified the Background-evoked bias by comparing subjects' responses with and without Background (Target presented alone). Nine combinations between vision, touch and audition were tested. In each session but two, the Background significantly biased the Target. Vision was the most susceptible to Background-evoked bias and the least efficient in biasing the other two modalities. By contrast, audition was the least susceptible to Background-evoked bias and the most efficient in biasing the other two modalities. These differences were strongly correlated to the relative reliability of each modality. In line with this, the evoked biases were larger when the Background consisted of two instead of only one modality. These results show that for the perception of sequences of events: (1) vision, touch and audition are automatically integrated; (2) the respective contributions of the three modalities to the integrated percept differ; (3) the relative contribution of each modality depends on its relative reliability (1/variability); (4) task-irrelevant stimuli have more weight when presented in two rather than only one modality.
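The reliability weighting summarized in points (3) and (4), where each modality's contribution scales with 1/variability, is commonly formalized as inverse-variance weighting. The sketch below is illustrative only; the per-modality estimates and variances are invented, not taken from the study.

```python
def reliability_weights(variances):
    """Normalized weights proportional to reliability, i.e. 1 / variance."""
    rel = [1.0 / v for v in variances]
    total = sum(rel)
    return [r / total for r in rel]

def fused_estimate(estimates, variances):
    """Reliability-weighted average of unimodal estimates (the maximum-likelihood
    combination under independent Gaussian noise)."""
    return sum(w * e for w, e in zip(reliability_weights(variances), estimates))

# Invented per-modality event-count estimates and variances: audition is the
# most reliable, so it dominates; vision, the least reliable, barely counts.
estimates = [4.0, 4.2, 5.0]   # audition, touch, vision
variances = [0.2, 0.5, 2.0]
w_aud, w_touch, w_vis = reliability_weights(variances)
fused = fused_estimate(estimates, variances)
```

Under this rule the fused percept sits closest to the most reliable modality, matching the paper's observation that audition biased the other modalities most and was biased least.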
Affiliation(s)
- Jean-Pierre Bresciani
- Max-Planck-Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tuebingen, Germany
18
McCormick D, Mamassian P. What does the illusory-flash look like? Vision Res 2007; 48:63-9. [PMID: 18054372] [DOI: 10.1016/j.visres.2007.10.010]
Abstract
In the illusory-flash effect (Shams, L., Kamitani, Y., & Shimojo, S. (2000). Illusions. What you see is what you hear. Nature, 408, 788), one flash presented with two tones has a tendency to be seen as two flashes. Previous studies of this effect have been ill-equipped to establish whether this illusory-flash is the result of a genuine percept, or that of a shift in criterion. We addressed this issue by using a stimulus comprising two locations. This enabled contrast-threshold measurement by means of a location detection task. High-contrast white or black flashes were presented simultaneously to both locations, followed by threshold contrast flashes of the same contrast polarity at the two locations in half of the trials; observers reported whether or not the low-contrast flashes had been present. Irrelevant to the task, half of the trials contained one tone, the other half contained two tones. In this way, we were able to compute the change in sensitivity and shift in criterion between illusory and non-illusory trials. We observe both a decrease in visual sensitivity and a criterion shift in the illusory-flash conditions. In a second experiment, we were interested in determining whether this change in visual sensitivity gave rise to measurable visual attributes of the illusory-flash. If it has a contrast, it should interact with a spatio-temporally concurrent real flash. Using a similar two-location stimulus presentation, we found that under certain conditions, we were able to infer the polarity of the perceived illusory-flash. We conclude that the illusory-flash is indeed a perceptual effect with psychophysically assessable characteristics.
Affiliation(s)
- David McCormick
- Laboratoire Psychologie de la Perception, CNRS & Université Paris Descartes, 45 rue des Saints-Pères, 75270 Paris Cedex 06, France.