1
Feenders G. Attentional capture or multisensory integration? (Commentary on Bean et al., 2021). Eur J Neurosci 2023; 58:3714-3718. [PMID: 37697730] [DOI: 10.1111/ejn.16131]
Affiliation(s)
- Gesa Feenders
- Animal Physiology and Behaviour Group, Cluster of Excellence Hearing4all, Department of Neuroscience, School of Medicine and Health Sciences, University of Oldenburg, Oldenburg, Germany
2
Loss of audiovisual facilitation with age occurs for vergence eye movements but not for saccades. Sci Rep 2022; 12:4453. [PMID: 35292652] [PMCID: PMC8924254] [DOI: 10.1038/s41598-022-08072-9]
Abstract
Though saccade and vergence eye movements are fundamental for everyday life, the way these movements change as we age has not been sufficiently studied. The present study examines the effect of age on vergence and saccade eye movement characteristics (latency, peak and average velocity, amplitude) and on audiovisual facilitation. We compare the results for horizontal saccades and vergence movements toward visual and audiovisual targets in a young group of 22 participants (mean age 25 ± 2.5) and an elderly group of 45 participants (mean age 65 ± 6.9). The results show that, with increased age, latency of all eye movements increases, average velocity decreases, amplitude of vergence decreases, and audiovisual facilitation collapses for vergence eye movements in depth but is preserved for saccades. There is no effect on peak velocity, suggesting that, although the sensory and attentional mechanisms controlling the motor system do age, the motor system itself does not. The loss of audiovisual facilitation along the depth axis can be attributed to a physiologic decrease in the capacity for sound localization in depth with age, while left/right sound localization coupled with saccades is preserved. The results bring new insight into the effects of aging on multisensory control and attention.
3
Dozio N, Maggioni E, Pittera D, Gallace A, Obrist M. May I Smell Your Attention: Exploration of Smell and Sound for Visuospatial Attention in Virtual Reality. Front Psychol 2021; 12:671470. [PMID: 34366990] [PMCID: PMC8339311] [DOI: 10.3389/fpsyg.2021.671470]
Abstract
When interacting with technology, attention is mainly driven by audiovisual and increasingly haptic stimulation. Olfactory stimuli are widely neglected, although the sense of smell influences many of our daily life choices, affects our behavior, and can catch and direct our attention. In this study, we investigated the effect of smell and sound on visuospatial attention in a virtual environment. We implemented the Bells Test, an established neuropsychological test to assess attentional and visuospatial disorders, in virtual reality (VR). We conducted an experiment with 24 participants comparing the performance of users under three experimental conditions (smell, sound, and smell and sound). The results show that multisensory stimuli play a key role in driving the attention of the participants and highlight asymmetries in directing spatial attention. We discuss the relevance of the results within and beyond human-computer interaction (HCI), particularly with regard to the opportunity of using VR for rehabilitation and assessment procedures for patients with spatial attention deficits.
Affiliation(s)
- Nicolò Dozio: Politecnico di Milano, Department of Mechanical Engineering, Milan, Italy; Sussex Computer-Human Interaction Lab, Department of Informatics, University of Sussex, Brighton, United Kingdom
- Emanuela Maggioni: Sussex Computer-Human Interaction Lab, Department of Informatics, University of Sussex, Brighton, United Kingdom; Department of Computer Science, University College London, London, United Kingdom
- Dario Pittera: Sussex Computer-Human Interaction Lab, Department of Informatics, University of Sussex, Brighton, United Kingdom; Ultraleap Ltd., Bristol, United Kingdom
- Alberto Gallace: Mind and Behavior Technological Center - MibTec, University of Milano-Bicocca, Milan, Italy
- Marianna Obrist: Sussex Computer-Human Interaction Lab, Department of Informatics, University of Sussex, Brighton, United Kingdom; Department of Computer Science, University College London, London, United Kingdom
4
Blom JD, Nanuashvili N, Waters F. Time Distortions: A Systematic Review of Cases Characteristic of Alice in Wonderland Syndrome. Front Psychiatry 2021; 12:668633. [PMID: 34025485] [PMCID: PMC8138562] [DOI: 10.3389/fpsyt.2021.668633]
Abstract
Of the perceptual distortions characteristic of Alice in Wonderland syndrome, substantial alterations in the immediate experience of time are probably the least known and the most fascinating. We reviewed original case reports to examine the phenomenology and associated pathology of these time distortions in this syndrome. A systematic search in PubMed, Ovid Medline, and the historical literature yielded 59 publications that described 168 people experiencing time distortions, including 84 detailed individual case reports. We distinguished five different types of time distortion. The most common category comprises slow-motion and quick-motion phenomena. In 39% of all cases, time distortions were unimodal in nature, while in 61% there was additional involvement of the visual (49%), kinaesthetic (18%), and auditory modalities (14%). In all, 40% of all time distortions described were bimodal in nature and 19% trimodal, with 1% involving four modalities. Underlying neurological mechanisms are varied and may be triggered by intoxications, infectious diseases, metabolic disorders, CNS lesions, paroxysmal neurological disorders, and psychiatric disorders. Bizarre sensations of time alteration, such as time going backwards or moving in circles, were mostly associated with psychosis. Pathophysiologically, mainly occipital areas appear to be involved, although the temporal network is widely disseminated, with separate component timing mechanisms not always functioning synchronously, thus occasionally creating temporal mismatches within and across sensory modalities (desynchronization). Based on our findings, we propose a classification of time distortions and formulate implications for research and clinical practice.
Affiliation(s)
- Jan Dirk Blom: Outpatient Clinic for Uncommon Psychiatric Syndromes, Parnassia Psychiatric Institute, The Hague, Netherlands; Faculty of Social Sciences, Leiden University, Leiden, Netherlands; Department of Psychiatry, University of Groningen, Groningen, Netherlands
- Nutsa Nanuashvili: Amsterdam Brain and Cognition Center, University of Amsterdam, Amsterdam, Netherlands
- Flavie Waters: Clinical Research Centre, Graylands Hospital, North Metro Health Service Mental Health, Perth, WA, Australia; School of Psychological Sciences, University of Western Australia, Perth, WA, Australia
5
Colonius H, Diederich A. Formal models and quantitative measures of multisensory integration: a selective overview. Eur J Neurosci 2020; 51:1161-1178. [DOI: 10.1111/ejn.13813]
Affiliation(s)
- Hans Colonius: Department of Psychology, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany; Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA
- Adele Diederich: Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA; Life Sciences and Chemistry, Jacobs University Bremen, Bremen, Germany
6
Diederich A, Colonius H. Multisensory Integration and Exogenous Spatial Attention: A Time-window-of-integration Analysis. J Cogn Neurosci 2019; 31:699-710. [PMID: 30822208] [DOI: 10.1162/jocn_a_01386]
Abstract
Although it is well documented that occurrence of an irrelevant and nonpredictive sound facilitates motor responses to a subsequent target light appearing nearby, the cause of this "exogenous spatial cuing effect" has been under discussion. On the one hand, it has been postulated to be the result of a shift of visual spatial attention possibly triggered by parietal and/or cortical supramodal "attention" structures. On the other hand, the effect has been considered to be due to multisensory integration based on the activation of multisensory convergence structures in the brain. Recent RT experiments have suggested that multisensory integration and exogenous spatial cuing differ in their temporal profiles of facilitation: When the nontarget occurs 100-200 msec before the target, facilitation is likely driven by crossmodal exogenous spatial attention, whereas multisensory integration effects are still seen when target and nontarget are presented nearly simultaneously. Here, we develop an extension of the time-window-of-integration model that combines both mechanisms within the same formal framework. The model is illustrated by fitting it to data from a focused attention task with a visual target and an auditory nontarget presented at horizontally or vertically varying positions. Results show that both spatial cuing and multisensory integration may coexist in a single trial in bringing about crossmodal facilitation of RTs. Moreover, the formal analysis via time window of integration allows one to predict and quantify the contribution of either mechanism as they occur across different spatiotemporal conditions.
7
Meijer GT, Mertens PEC, Pennartz CMA, Olcese U, Lansink CS. The circuit architecture of cortical multisensory processing: Distinct functions jointly operating within a common anatomical network. Prog Neurobiol 2019; 174:1-15. [PMID: 30677428] [DOI: 10.1016/j.pneurobio.2019.01.004]
Abstract
Our perceptual systems continuously process sensory inputs from different modalities and organize these streams of information such that our subjective representation of the outside world is a unified experience. By doing so, they also enable further cognitive processing and behavioral action. While cortical multisensory processing has been extensively investigated in terms of psychophysics and mesoscale neural correlates, an in-depth understanding of the underlying circuit-level mechanisms is lacking. Previous studies on circuit-level mechanisms of multisensory processing have predominantly focused on cue integration, i.e., the mechanism by which sensory features from different modalities are combined to yield more reliable stimulus estimates than those obtained by using single sensory modalities. In this review, we expand the framework on the circuit-level mechanisms of cortical multisensory processing by highlighting that multisensory processing is a family of functions, rather than a single operation, which involves not only the integration but also the segregation of modalities. In addition, multisensory processing not only depends on stimulus features, but also on cognitive resources, such as attention and memory, as well as behavioral context, to determine the behavioral outcome. We focus on rodent models as a powerful instrument to study the circuit-level bases of multisensory processes, because they enable combining cell-type-specific recording and interventional techniques with complex behavioral paradigms. We conclude that distinct multisensory processes share overlapping anatomical substrates, are implemented by diverse neuronal micro-circuitries that operate in parallel, and are flexibly recruited based on factors such as stimulus features and behavioral constraints.
Affiliation(s)
- Guido T Meijer: Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands
- Paul E C Mertens: Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands
- Cyriel M A Pennartz: Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands
- Umberto Olcese: Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands
- Carien S Lansink: Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands
8
Spence C, Lee J, Van der Stoep N. Responding to sounds from unseen locations: crossmodal attentional orienting in response to sounds presented from the rear. Eur J Neurosci 2017; 51:1137-1150. [PMID: 28973789] [DOI: 10.1111/ejn.13733]
Abstract
To date, most of the research on spatial attention has focused on probing people's responses to stimuli presented in frontal space. That is, few researchers have attempted to assess what happens in the space that is currently unseen (essentially rear space). In a sense, then, 'out of sight' is, very much, 'out of mind'. In this review, we highlight what is presently known about the perception and processing of sensory stimuli (focusing on sounds) whose source is not currently visible. We briefly summarize known differences in the localizability of sounds presented from different locations in 3D space, and discuss the consequences for the crossmodal attentional and multisensory perceptual interactions taking place in various regions of space. The latest research now clearly shows that the kinds of crossmodal interactions that take place in rear space are very often different in kind from those that have been documented in frontal space. Developing a better understanding of how people respond to unseen sound sources in naturalistic environments by integrating findings emerging from multiple fields of research will likely lead to the design of better warning signals in the future. This review highlights the need for neuroscientists interested in spatial attention to spend more time researching what happens (in terms of the covert and overt crossmodal orienting of attention) in rear space.
Affiliation(s)
- Charles Spence: Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, OX1 3UD, UK
- Jae Lee: Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, OX1 3UD, UK
- Nathan Van der Stoep: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
9
Powers AR III, Hillock-Dunn A, Wallace MT. Generalization of multisensory perceptual learning. Sci Rep 2016; 6:23374. [PMID: 27000988] [PMCID: PMC4802214] [DOI: 10.1038/srep23374]
Abstract
Life in a multisensory world requires the rapid and accurate integration of stimuli across the different senses. In this process, the temporal relationship between stimuli is critical in determining which stimuli share a common origin. Numerous studies have described a multisensory temporal binding window—the time window within which audiovisual stimuli are likely to be perceptually bound. In addition to characterizing this window’s size, recent work has shown it to be malleable, with the capacity for substantial narrowing following perceptual training. However, the generalization of these effects to other measures of perception is not known. This question was examined by characterizing the ability of training on a simultaneity judgment task to influence perception of the temporally-dependent sound-induced flash illusion (SIFI). Results do not demonstrate a change in performance on the SIFI itself following training. However, data do show an improved ability to discriminate rapidly-presented two-flash control conditions following training. Effects were specific to training and scaled with the degree of temporal window narrowing exhibited. Results do not support generalization of multisensory perceptual learning to other multisensory tasks. However, results do show that training results in improvements in visual temporal acuity, suggesting a generalization effect of multisensory training on unisensory abilities.
Affiliation(s)
- Albert R Powers III: Kennedy Center, Vanderbilt University, Nashville, Tennessee, USA; Neuroscience Graduate Program, Vanderbilt University, Nashville, Tennessee, USA; Medical Scientist Training Program, Vanderbilt University School of Medicine, Nashville, Tennessee, USA; Department of Psychiatry, Yale University, New Haven, Connecticut, USA
- Andrea Hillock-Dunn: Kennedy Center, Vanderbilt University, Nashville, Tennessee, USA; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, Tennessee, USA
- Mark T Wallace: Kennedy Center, Vanderbilt University, Nashville, Tennessee, USA; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, Tennessee, USA; Neuroscience Graduate Program, Vanderbilt University, Nashville, Tennessee, USA
10
Diederich A, Colonius H, Kandil FI. Prior knowledge of spatiotemporal configuration facilitates crossmodal saccadic response. Exp Brain Res 2016; 234:2059-2076. [DOI: 10.1007/s00221-016-4609-5]
11
Van der Stoep N, Spence C, Nijboer TCW, Van der Stigchel S. On the relative contributions of multisensory integration and crossmodal exogenous spatial attention to multisensory response enhancement. Acta Psychol (Amst) 2015; 162:20-28. [PMID: 26436587] [DOI: 10.1016/j.actpsy.2015.09.010]
Abstract
Two processes that can give rise to multisensory response enhancement (MRE) are multisensory integration (MSI) and crossmodal exogenous spatial attention. It is, however, currently unclear what the relative contribution of each of these is to MRE. We investigated this issue using two tasks that are generally assumed to measure MSI (a redundant target effect task) and crossmodal exogenous spatial attention (a spatial cueing task). One block of trials consisted of unimodal auditory and visual targets designed to provide a unimodal baseline. In two other blocks of trials, the participants were presented with spatially and temporally aligned and misaligned audiovisual (AV) targets (0, 50, 100, and 200 ms SOA). In the integration block, the participants were instructed to respond to the onset of the first target stimulus that they detected (A or V). The instruction for the cueing block was to respond only to the onset of the visual targets. The targets could appear at one of three locations: left, center, and right. The participants were instructed to respond only to lateral targets. The results indicated that MRE was caused by MSI at 0 ms SOA. At 50 ms SOA, both crossmodal exogenous spatial attention and MSI contributed to the observed MRE, whereas the MRE observed at the 100 and 200 ms SOAs was attributable to crossmodal exogenous spatial attention, alerting, and temporal preparation. These results therefore suggest that there may be a temporal window in which both MSI and exogenous crossmodal spatial attention can contribute to multisensory response enhancement.
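The two quantities at stake in this abstract, multisensory response enhancement and integration beyond mere statistical facilitation, can be computed directly from per-trial reaction times. The Python sketch below is a minimal illustration; the function names and the synthetic data are assumptions for demonstration, not the analysis pipeline of the cited study.

```python
from statistics import mean

def mre_percent(rt_a, rt_v, rt_av):
    """Multisensory response enhancement relative to the faster
    unisensory condition: a common descriptive index (the exact
    formula used in any given study may differ)."""
    best_uni = min(mean(rt_a), mean(rt_v))
    return 100.0 * (best_uni - mean(rt_av)) / best_uni

def violates_race_model(rt_a, rt_v, rt_av, t):
    """Miller's race-model inequality at time t: if
    P(RT_AV <= t) > P(RT_A <= t) + P(RT_V <= t), the redundant-target
    speedup exceeds what a race between independent unisensory
    channels allows, pointing to integration rather than
    statistical facilitation."""
    cdf = lambda xs: sum(x <= t for x in xs) / len(xs)
    return cdf(rt_av) > min(1.0, cdf(rt_a) + cdf(rt_v))

# Illustrative (fabricated) reaction times in milliseconds.
rt_a = [300, 320, 340]   # auditory targets
rt_v = [280, 300, 320]   # visual targets
rt_av = [250, 260, 270]  # audiovisual targets
print(round(mre_percent(rt_a, rt_v, rt_av), 1))
print(violates_race_model(rt_a, rt_v, rt_av, t=255))
```

In practice the inequality is evaluated over a range of quantiles of the RT distributions, not a single time point as in this sketch.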
Affiliation(s)
- N Van der Stoep: Utrecht University, Department of Experimental Psychology, Helmholtz Institute, Utrecht, The Netherlands
- C Spence: Oxford University, Department of Experimental Psychology, Oxford, United Kingdom
- T C W Nijboer: Utrecht University, Department of Experimental Psychology, Helmholtz Institute, Utrecht, The Netherlands; Brain Center Rudolf Magnus, and Center of Excellence for Rehabilitation Medicine, University Medical Center Utrecht and De Hoogstraat Rehabilitation, The Netherlands
- S Van der Stigchel: Utrecht University, Department of Experimental Psychology, Helmholtz Institute, Utrecht, The Netherlands
12
Glazebrook CM, Welsh TN, Tremblay L. The processing of visual and auditory information for reaching movements. Psychol Res 2015; 80:757-773. [PMID: 26253323] [DOI: 10.1007/s00426-015-0689-2]
Abstract
Presenting target and non-target information in different modalities influences target localization if the non-target is within the spatiotemporal limits of perceptual integration. When using auditory and visual stimuli, the influence of a visual non-target on auditory target localization is greater than the reverse. It is not known, however, whether or how such perceptual effects extend to goal-directed behaviours. To gain insight into how audio-visual stimuli are integrated for motor tasks, the kinematics of reaching movements towards visual or auditory targets with or without a non-target in the other modality were examined. When present, the simultaneously presented non-target could be spatially coincident, to the left, or to the right of the target. Results revealed that auditory non-targets did not influence reaching trajectories towards a visual target, whereas visual non-targets influenced trajectories towards an auditory target. Interestingly, the biases induced by visual non-targets were present early in the trajectory and persisted until movement end. Subsequent experimentation indicated that the magnitude of the biases was equivalent whether participants performed a perceptual or motor task, whereas variability was greater for the motor versus the perceptual tasks. We propose that visually induced trajectory biases were driven by the perceived mislocation of the auditory target, which in turn affected both the movement plan and subsequent control of the movement. Such findings provide further evidence of the dominant role visual information processing plays in encoding spatial locations as well as planning and executing reaching action, even when reaching towards auditory targets.
Affiliation(s)
- Cheryl M Glazebrook: Faculty of Kinesiology and Recreation Management, University of Manitoba, 319 Max Bell Centre, Winnipeg, MB, R3T 2N2, Canada; Health, Leisure, and Human Performance Research Institute, University of Manitoba, Winnipeg, MB, R3T 2N2, Canada
- Timothy N Welsh: Faculty of Kinesiology and Physical Education, University of Toronto, Toronto, ON, M5S 2W6, Canada
- Luc Tremblay: Faculty of Kinesiology and Physical Education, University of Toronto, Toronto, ON, M5S 2W6, Canada
13
Wallace MT, Stevenson RA. The construct of the multisensory temporal binding window and its dysregulation in developmental disabilities. Neuropsychologia 2014; 64:105-123. [PMID: 25128432] [PMCID: PMC4326640] [DOI: 10.1016/j.neuropsychologia.2014.08.005]
Abstract
Behavior, perception and cognition are strongly shaped by the synthesis of information across the different sensory modalities. Such multisensory integration often results in performance and perceptual benefits that reflect the additional information conferred by having cues from multiple senses providing redundant or complementary information. The spatial and temporal relationships of these cues provide powerful statistical information about how these cues should be integrated or "bound" in order to create a unified perceptual representation. Much recent work has examined the temporal factors that are integral in multisensory processing, with many focused on the construct of the multisensory temporal binding window: the epoch of time within which stimuli from different modalities are likely to be integrated and perceptually bound. Emerging evidence suggests that this temporal window is altered in a series of neurodevelopmental disorders, including autism, dyslexia and schizophrenia. In addition to their role in sensory processing, these deficits in multisensory temporal function may play an important role in the perceptual and cognitive weaknesses that characterize these clinical disorders. Within this context, focus on improving the acuity of multisensory temporal function may have important implications for the amelioration of the "higher-order" deficits that serve as the defining features of these disorders.
Affiliation(s)
- Mark T Wallace: Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN 37232, USA; Department of Hearing & Speech Sciences, Vanderbilt University, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry, Vanderbilt University, Nashville, TN, USA
- Ryan A Stevenson: Department of Psychology, University of Toronto, Toronto, ON, Canada
14
Steenken R, Weber L, Colonius H, Diederich A. Designing driver assistance systems with crossmodal signals: multisensory integration rules for saccadic reaction times apply. PLoS One 2014; 9:e92666. [PMID: 24800823] [PMCID: PMC4011748] [DOI: 10.1371/journal.pone.0092666]
Abstract
Modern driver assistance systems make increasing use of auditory and tactile signals in order to reduce the driver's visual information load. This entails potential crossmodal interaction effects that need to be taken into account in designing an optimal system. Here we show that saccadic reaction times to visual targets (cockpit or outside mirror), presented in a driving simulator environment and accompanied by auditory or tactile accessories, follow some well-known spatiotemporal rules of multisensory integration, usually found under confined laboratory conditions. Auditory nontargets speed up reaction time by about 80 ms. The effect tends to be maximal when the nontarget is presented 50 ms before the target and when target and nontarget are spatially coincident. The effect of a tactile nontarget (vibrating steering wheel) was less pronounced and not spatially specific. It is shown that the average reaction times are well-described by the stochastic "time window of integration" model for multisensory integration developed by the authors. This two-stage model postulates that crossmodal interaction occurs only if the peripheral processes from the different sensory modalities terminate within a fixed temporal interval, and that the amount of crossmodal interaction manifests itself in an increase or decrease of second stage processing time. A qualitative test is consistent with the model prediction that the probability of interaction, but not the amount of crossmodal interaction, depends on target-nontarget onset asynchrony. A quantitative model fit yields estimates of individual participants' parameters, including the size of the time window. Some consequences for the design of driver assistance systems are discussed.
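The two-stage logic described here is easy to make concrete in simulation. The Python sketch below is a deliberately simplified Monte Carlo version of a time-window-of-integration model with exponential first-stage latencies; all parameter values (peripheral rates, window width, facilitation amount) are illustrative assumptions, not estimates from this study.

```python
import random

def twin_trial(soa_ms, window_ms=200.0, delta_ms=80.0,
               mean_target_ms=60.0, mean_nontarget_ms=50.0,
               second_stage_ms=250.0):
    """One simulated trial of a simplified TWIN model.

    First stage: independent exponential peripheral latencies race.
    Integration occurs only if the nontarget's peripheral process wins
    the race and the target's terminates within `window_ms` of it;
    integration then shortens second-stage processing by `delta_ms`.
    """
    t_target = random.expovariate(1.0 / mean_target_ms)
    t_nontarget = soa_ms + random.expovariate(1.0 / mean_nontarget_ms)
    integrated = t_nontarget < t_target < t_nontarget + window_ms
    return t_target + second_stage_ms - (delta_ms if integrated else 0.0)

def mean_rt(soa_ms, n=20000, seed=1):
    random.seed(seed)
    return sum(twin_trial(soa_ms) for _ in range(n)) / n

# Facilitation should shrink as the accessory signal is delayed,
# because fewer trials fall inside the integration window.
for soa in (0, 50, 100, 200):
    print(soa, round(mean_rt(soa), 1))
```

Note how the window size controls the *probability* of integration on a trial while `delta_ms` controls the *amount* of facilitation, mirroring the model's separation of the two, and matching the qualitative test reported in the abstract.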
Affiliation(s)
- Rike Steenken: Department of Psychology, European Medical School, Carl von Ossietzky Universität, Oldenburg, Germany
- Lars Weber: OFFIS, Department for Transportation, Human-Centred Design, Oldenburg, Germany
- Hans Colonius: Department of Psychology, Cluster of Excellence “Hearing4all”, and Research Center Neurosensory Science, European Medical School, Carl von Ossietzky Universität, Oldenburg, Germany
- Adele Diederich: School of Humanities and Social Sciences, Jacobs University, Bremen, Germany
15
Deuter CE, Schilling TM, Kuehl LK, Blumenthal TD, Schachinger H. Startle effects on saccadic responses to emotional target stimuli. Psychophysiology 2013; 50:1056-1063. [PMID: 23841560] [DOI: 10.1111/psyp.12083]
Abstract
Startle stimuli elicit various physiological and cognitive responses. This study investigated whether acoustic startle stimuli affect saccadic reactions in an emotional pro- or antisaccade task. Startle probes were presented either 500 ms before or simultaneous with an imperative stimulus that indicated whether a saccade towards or away from positive, neutral, or negative peripheral target pictures had to be performed. Valence interacted with saccade direction according to an approach-avoidance pattern of gaze behavior, with delayed prosaccades to negative targets and antisaccades away from positive targets. Acoustic startle stimuli preceding the presentation of peripheral target pictures speeded up the initiation of saccades, irrespective of stimulus valence. Results indicate a speeding of cognitive-motor processing by preceding startle stimuli.
Affiliation(s)
- Christian E Deuter
- Department of Clinical Psychophysiology, University of Trier, Trier, Germany
- Thomas M Schilling
- Department of Clinical Psychophysiology, University of Trier, Trier, Germany
- Linn K Kuehl
- Department of Psychiatry and Psychotherapy, Charité University Medicine, Berlin, Germany
- Terry D Blumenthal
- Department of Psychology, Wake Forest University, Winston-Salem, North Carolina, USA
- Hartmut Schachinger
- Department of Clinical Psychophysiology, University of Trier, Trier, Germany
|
16
|
Modeling Multisensory Processes in Saccadic Responses. Front Neurosci 2013. [DOI: 10.1201/9781439812174-18]
|
17
|
Colonius H, Diederich A. Focused attention vs. crossmodal signals paradigm: deriving predictions from the time-window-of-integration model. Front Integr Neurosci 2012; 6:62. [PMID: 22952460] [PMCID: PMC3430010] [DOI: 10.3389/fnint.2012.00062]
Abstract
In the crossmodal signals paradigm (CSP) participants are instructed to respond to a set of stimuli from different modalities, presented more or less simultaneously, as soon as a stimulus from any modality has been detected. In the focused attention paradigm (FAP), on the other hand, responses should only be made to a stimulus from a pre-defined target modality and stimuli from non-target modalities should be ignored. Whichever paradigm is being applied, a typical result is that responses tend to be faster to crossmodal stimuli than to unimodal stimuli, a phenomenon often referred to as “crossmodal interaction.” Here, we investigate predictions of the time-window-of-integration (TWIN) modeling framework previously proposed by the authors. It is shown that TWIN makes specific qualitative and quantitative predictions on how the two paradigms differ with respect to the probability of multisensory integration and the amount of response enhancement, including the effect of stimulus intensity (“inverse effectiveness”). Introducing a decision-theoretic framework for TWIN further allows comparing the two paradigms with respect to the predicted optimal time window size and its dependence on the prior probability that the crossmodal stimulus information refers to the same event. In order to test these predictions, experimental studies that systematically compare crossmodal effects under stimulus conditions that are identical except for the CSP-FAP instruction should be performed in the future.
Affiliation(s)
- Hans Colonius
- Department of Psychology, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
|
18
|
Does crossmodal correspondence modulate the facilitatory effect of auditory cues on visual search? Atten Percept Psychophys 2012; 74:1154-67. [DOI: 10.3758/s13414-012-0317-9]
|
20
|
Altieri N, Pisoni DB, Townsend JT. Some behavioral and neurobiological constraints on theories of audiovisual speech integration: a review and suggestions for new directions. Seeing Perceiving 2011; 24:513-39. [PMID: 21968081] [DOI: 10.1163/187847611x595864]
Abstract
Summerfield (1987) proposed several accounts of audiovisual speech perception, a field of research that has burgeoned in recent years. The proposed accounts included the integration of discrete phonetic features, vectors describing the values of independent acoustical and optical parameters, the filter function of the vocal tract, and articulatory dynamics of the vocal tract. The latter two accounts assume that the representations of audiovisual speech perception are based on abstract gestures, while the former two assume that the representations consist of symbolic or featural information obtained from visual and auditory modalities. Recent converging evidence from several different disciplines reveals that the general framework of Summerfield's feature-based theories should be expanded. An updated framework building upon the feature-based theories is presented. We propose a processing model arguing that auditory and visual brain circuits provide facilitatory information when the inputs are correctly timed, and that auditory and visual speech representations do not necessarily undergo translation into a common code during information processing. Future research on multisensory processing in speech perception should investigate the connections between auditory and visual brain regions, and utilize dynamic modeling tools to further understand the timing and information processing mechanisms involved in audiovisual speech integration.
Affiliation(s)
- Nicholas Altieri
- Department of Psychology, University of Oklahoma, OK 73072, USA
|
21
|
Chandrasekaran C, Lemus L, Trubanova A, Gondan M, Ghazanfar AA. Monkeys and humans share a common computation for face/voice integration. PLoS Comput Biol 2011; 7:e1002165. [PMID: 21998576] [PMCID: PMC3182859] [DOI: 10.1371/journal.pcbi.1002165]
Abstract
Speech production involves the movement of the mouth and other regions of the face resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, if they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a "race" model failed to account for their behavior patterns. Conversely, a "superposition model", positing the linear summation of activity patterns in response to visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates.
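The two candidate mechanisms contrasted in this abstract can be sketched side by side. In a race model, a response is triggered by whichever unisensory detector finishes first; in a superposition model, event streams from both channels add linearly and a response occurs once a criterion count is reached. The exponential/Poisson formulation and all rates below are illustrative assumptions for the sketch, not the authors' fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)

def race_rt(rate_a, rate_v, n=200_000):
    """Race model: the faster of two exponential unisensory
    finishing times determines the response time."""
    return float(np.minimum(rng.exponential(1 / rate_a, n),
                            rng.exponential(1 / rate_v, n)).mean())

def superposition_rt(rate_a, rate_v, criterion=5, n=200_000):
    """Superposition model: Poisson event streams from the two
    channels sum linearly; the response occurs at the criterion-th
    event. The superposed stream is Poisson with the summed rate,
    so that event time is Gamma(criterion, 1/(rate_a + rate_v))."""
    return float(rng.gamma(criterion, 1 / (rate_a + rate_v), n).mean())
```

With a criterion of one event the two models coincide; for higher criteria the superposition model predicts crossmodal speed-ups that grow with the summed input strength, which is the kind of linear-summation signature the paper reports for both species.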
Affiliation(s)
- Chandramouli Chandrasekaran
- Neuroscience Institute, Princeton University, Princeton, New Jersey, United States of America
- Department of Psychology, Princeton University, Princeton, New Jersey, United States of America
- Luis Lemus
- Neuroscience Institute, Princeton University, Princeton, New Jersey, United States of America
- Department of Psychology, Princeton University, Princeton, New Jersey, United States of America
- Andrea Trubanova
- Department of Psychology, Princeton University, Princeton, New Jersey, United States of America
- Marcus Autism Center, Emory University School of Medicine, Atlanta, Georgia, United States of America
- Matthias Gondan
- Department of Psychology, University of Regensburg, Regensburg, Germany
- Medical Biometry and Informatics, University of Heidelberg, Heidelberg, Germany
- Asif A. Ghazanfar
- Neuroscience Institute, Princeton University, Princeton, New Jersey, United States of America
- Department of Psychology, Princeton University, Princeton, New Jersey, United States of America
- Department of Ecology and Evolutionary Biology, Princeton University, Princeton, New Jersey, United States of America
|
22
|
Computing an optimal time window of audiovisual integration in focused attention tasks: illustrated by studies on effect of age and prior knowledge. Exp Brain Res 2011; 212:327-37. [PMID: 21626414] [DOI: 10.1007/s00221-011-2732-x]
Abstract
The concept of a "time window of integration" holds that information from different sensory modalities must not be perceived too far apart in time in order to be integrated into a multisensory perceptual event. Empirical estimates of window width differ widely, however, ranging from 40 to 600 ms depending on context and experimental paradigm. Searching for a theoretical derivation of window width, Colonius and Diederich (Front Integr Neurosci 2010) developed a decision-theoretic framework using a decision rule that is based on the prior probability of a common source, the likelihood of temporal disparities between the unimodal signals, and the payoff for making right or wrong decisions. Here, this framework is extended to the focused attention task where subjects are asked to respond to signals from a target modality only. Evoking the framework of the time-window-of-integration (TWIN) model, an explicit expression for optimal window width is obtained. The approach is probed on two published focused attention studies. The first is a saccadic reaction time study assessing the efficiency with which multisensory integration varies as a function of aging. Although the window widths for young and older adults differ by nearly 200 ms, presumably due to their different peripheral processing speeds, neither of them deviates significantly from the optimal values. In the second study, head saccadic reaction times to a perfectly aligned audiovisual stimulus pair had been shown to depend on the prior probability of spatial alignment. Intriguingly, they reflected the magnitude of the time-window widths predicted by our decision-theoretic framework, i.e., a larger time window is associated with a higher prior probability.
|
23
|
Neuner I, Stöcker T, Kellermann T, Ermer V, Wegener HP, Eickhoff SB, Schneider F, Shah NJ. Electrophysiology meets fMRI: neural correlates of the startle reflex assessed by simultaneous EMG-fMRI data acquisition. Hum Brain Mapp 2011; 31:1675-85. [PMID: 20205248] [DOI: 10.1002/hbm.20965]
Abstract
The startle reflex provides a unique tool for the investigation of sensorimotor gating and information processing. Simultaneous EMG-fMRI acquisition (i.e., online stimulation and recording in the MR environment) allows for the quantitative assessment of the neuronal correlates of the startle reflex and its modulations on a single trial level. This serves as the backbone for a startle response informed fMRI analysis, which is fed by data acquired in the same brain at the same time. We here present the first MR study using a single trial approach with simultaneously acquired EMG and fMRI data on the human startle response in 15 healthy young men. It investigates the neural correlates for isolated air puff startle pulses (PA), prepulse-pulse inhibition (PPI), and prepulse facilitation (PPF). We identified a common core network engaged by all three conditions (PA, PPI, and PPF), consisting of bilateral primary and secondary somatosensory cortices, right insula, right thalamus, right temporal pole, middle cingulate cortex, and cerebellum. The cerebellar vermis exhibits distinct activation patterns between the startle modifications. It is differentially activated with the highest amplitude for PPF, a lower activation for PA, and lowest for PPI. The orbital frontal cortex exhibits a differential activation pattern, not for the type of startle response but for the amplitude modification. For pulse alone it is close to zero; for PPI it is activated. This is in contrast to PPF, where it shows deactivation. In addition, the thalamus, the cerebellum, and the anterior cingulate cortex add to the modulation of the startle reflex.
Affiliation(s)
- Irene Neuner
- Department of Psychiatry and Psychotherapy, RWTH Aachen University, 52074 Aachen, Germany
|
24
|
Colonius H, Diederich A. The optimal time window of visual-auditory integration: a reaction time analysis. Front Integr Neurosci 2010; 4:11. [PMID: 20485476] [PMCID: PMC2871715] [DOI: 10.3389/fnint.2010.00011]
Abstract
The spatiotemporal window of integration has become a widely accepted concept in multisensory research: crossmodal information falling within this window is highly likely to be integrated, whereas information falling outside is not. Here we further probe this concept in a reaction time context with redundant crossmodal targets. An infinitely large time window would lead to mandatory integration, a zero-width time window would rule out integration entirely. Making explicit assumptions about the arrival time difference between peripheral sensory processing times triggered by a crossmodal stimulus set, we derive a decision rule that determines an optimal window width as a function of (i) the prior odds in favor of a common multisensory source, (ii) the likelihood of arrival time differences, and (iii) the payoff for making correct or wrong decisions; moreover, we suggest a detailed experimental setup to test the theory. Our approach is in line with the well-established framework for modeling multisensory integration as (nearly) optimal decision making, but none of those studies, to our knowledge, has considered reaction time as observable variable. The theory can easily be extended to reaction times collected under the focused attention paradigm. Possible variants of the theory to account for judgments of crossmodal simultaneity are discussed. Finally, neural underpinnings of the theory in terms of oscillatory responses in primary sensory cortices are hypothesized.
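The decision rule sketched in this abstract, integrate when the posterior odds of a common source, weighted by the payoff, favor it, yields a window width directly. The Gaussian likelihoods and all parameter values below are illustrative assumptions standing in for the paper's components (i) to (iii), not the authors' fitted quantities:

```python
import numpy as np

def gauss(x, sigma):
    # Zero-mean Gaussian density, used as a stand-in likelihood
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

def optimal_window(prior_common=0.5, sigma_common=50.0,
                   sigma_indep=250.0, gain=1.0, cost=1.0):
    """Width (ms) of the arrival-time-difference window within which
    integrating maximizes expected payoff. Assumes a narrow Gaussian
    likelihood of the disparity under 'common source' and a broad one
    under 'independent sources'; all parameters are illustrative.
    """
    d = np.linspace(-600.0, 600.0, 24_001)   # candidate disparities (ms)
    like_c = prior_common * gauss(d, sigma_common)
    like_i = (1 - prior_common) * gauss(d, sigma_indep)
    post_c = like_c / (like_c + like_i)      # posterior P(common | d)
    # Integrate wherever the expected gain of integrating exceeds
    # the expected cost of wrongly integrating independent sources
    integrate = post_c * gain > (1 - post_c) * cost
    return float(d[integrate].max() - d[integrate].min())
```

Consistent with the framework's prediction, the computed window widens as the prior probability of a common source increases and narrows as the cost of spurious integration grows.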
Affiliation(s)
- Hans Colonius
- Department of Psychology, University of Oldenburg, Oldenburg, Germany
|
25
|
Emotion and motor preparation: A transcranial magnetic stimulation study of corticospinal motor tract excitability. Cogn Affect Behav Neurosci 2009; 9:380-8. [PMID: 19897791] [DOI: 10.3758/cabn.9.4.380]
|
26
|
Time-Window-of-Integration (TWIN) Model for Saccadic Reaction Time: Effect of Auditory Masker Level on Visual–Auditory Spatial Interaction in Elevation. Brain Topogr 2009; 21:177-84. [PMID: 19337824] [DOI: 10.1007/s10548-009-0091-8]
|
27
|
Sinclair C, Hammond GR. Excitatory and inhibitory processes in primary motor cortex during the foreperiod of a warned reaction time task are unrelated to response expectancy. Exp Brain Res 2009; 194:103-13. [DOI: 10.1007/s00221-008-1684-2]
|
28
|
Diederich A, Colonius H. Crossmodal interaction in speeded responses: time window of integration model. Prog Brain Res 2009; 174:119-35. [PMID: 19477335] [DOI: 10.1016/s0079-6123(09)01311-9]
Abstract
Saccadic reaction time (SRT) to a visual stimulus tends to be faster when an auditory and/or somatosensory stimulus is presented in close temporal or spatial proximity, even when participants are instructed to ignore the accessory input (focused attention task). The time course of SRT as a function of stimulus onset asynchrony (SOA) is consistent with the time-window-of-integration (TWIN) model assuming a peripheral stage of parallel processing in separate sensory channels followed by a secondary stage of multisensory integration. TWIN has been shown to account for effects of the spatial configuration of the stimuli, for the effect of increasing the number of nontargets presented together with the target, for a possible warning effect of the nontarget, for effects of increasing the intensity of the nontarget, and for the effect of background noise on multisensory integration. Moreover, it has been able to accommodate some effects of aging on multisensory integration. There is empirical support for TWIN's tenet of the separability between spatial and temporal factors on multisensory integration. Besides presenting many features of TWIN within the context of crossmodal interaction modeling efforts, some possible directions on how the TWIN framework could serve to elucidate the link between perception and action are shown.
Affiliation(s)
- Adele Diederich
- School of Humanities and Social Sciences, Jacobs University, Bremen, Germany
|
29
|
Diederich A, Colonius H, Schomburg A. Assessing age-related multisensory enhancement with the time-window-of-integration model. Neuropsychologia 2008; 46:2556-62. [DOI: 10.1016/j.neuropsychologia.2008.03.026]
|
30
|
When a high-intensity "distractor" is better then a low-intensity one: modeling the effect of an auditory or tactile nontarget stimulus on visual saccadic reaction time. Brain Res 2008; 1242:219-30. [PMID: 18573240] [DOI: 10.1016/j.brainres.2008.05.081]
Abstract
In a focused attention task saccadic reaction time (SRT) to a visual target stimulus (LED) was measured with an auditory (white noise burst) or tactile (vibration applied to palm) nontarget presented in ipsi- or contralateral position to the target. Crossmodal facilitation of SRT was observed under all configurations and stimulus onset asynchrony (SOA) values ranging from -250 ms (nontarget prior to target) to 50 ms. This study specifically addressed the effect of varying nontarget intensity. While facilitation effects for auditory nontargets are somewhat more pronounced than for tactile ones, decreasing intensity slightly reduced facilitation for both types of nontargets. The time course of crossmodal mean SRT over SOA and the pattern of facilitation observed here suggest the existence of two distinct underlying mechanisms: (a) a spatially unspecific crossmodal warning triggered by the nontarget being detected early enough before the arrival of the target plus (b) a spatially specific multisensory integration mechanism triggered by the target processing time terminating within the time window of integration. It is shown that the time window of integration (TWIN) model introduced by the authors gives a reasonable quantitative account of the data relating observed SRT to the unobservable probability of integration and crossmodal warning for each SOA value under a high and low intensity level of the nontarget.
|
31
|
Steenken R, Diederich A, Colonius H. Time course of auditory masker effects: tapping the locus of audiovisual integration? Neurosci Lett 2008; 435:78-83. [PMID: 18355963] [DOI: 10.1016/j.neulet.2008.02.017]
Abstract
In a focused attention paradigm, saccadic reaction time (SRT) to a visual target tends to be shorter when an auditory accessory stimulus is presented in close temporal and spatial proximity. Observed SRT reductions typically diminish as spatial disparity between the stimuli increases. Here a visual target LED (500 ms duration) was presented above or below the fixation point and a simultaneously presented auditory accessory (2 ms duration) could appear at the same or the opposite vertical position. SRT enhancement was about 35 ms in the coincident and 10 ms in the disparate condition. In order to further probe the audiovisual integration mechanism, in addition to the auditory non-target an auditory masker (200 ms duration) was presented before, simultaneously with, or after the accessory stimulus. In all interstimulus interval (ISI) conditions, SRT enhancement went down both in the coincident and disparate configuration, but this decrement was fairly stable across the ISI values. If multisensory integration solely relied on a feed-forward process, one would expect a monotonic decrease of the masker effect with increasing ISI in the backward masking condition. It is therefore conceivable that the relatively high-energy masker causes a broad excitatory response of SC neurons. During this state, the spatial audio-visual information from multisensory association areas is fed back and merged with the spatially unspecific excitation pattern induced by the masker. Assuming that a certain threshold of activation has to be achieved in order to generate a saccade in the correct direction, the blurred joint output of noise and spatial audio-visual information needs more time to reach this threshold, prolonging SRT to an audio-visual object.
Affiliation(s)
- Rike Steenken
- Department of Psychology, University of Oldenburg, P.O. Box 2503, 26111 Oldenburg, Germany
|