101
The role of multisensory interplay in enabling temporal expectations. Cognition 2017; 170:130-146. PMID: 28992555. DOI: 10.1016/j.cognition.2017.09.015.
Abstract
Temporal regularities can guide our attention to focus on a particular moment in time and to be especially vigilant just then. Previous research provided evidence for the influence of temporal expectation on perceptual processing in unisensory auditory, visual, and tactile contexts. However, in real life we are often exposed to a complex and continuous stream of multisensory events. Here we tested, in a series of experiments, whether temporal expectations can enhance perception in multisensory contexts and whether this enhancement differs from that in unisensory contexts. Our discrimination paradigm contained near-threshold targets (subject-specific 75% discrimination accuracy) embedded in a sequence of distractors. The likelihood of target occurrence (early or late) was manipulated block-wise. Furthermore, we tested whether spatial and modality-specific target uncertainty (i.e., predictable vs. unpredictable target position or modality) would affect temporal expectation (TE), measured with perceptual sensitivity (d') and response times (RT). In all our experiments, hidden temporal regularities improved performance for expected multisensory targets. Moreover, multisensory performance was unaffected by spatial and modality-specific uncertainty, whereas unisensory TE effects on d' (but not RT) were modulated by spatial and modality-specific uncertainty. Additionally, the size of the temporal expectation effect, i.e., the increase in perceptual sensitivity and decrease in RT, scaled linearly with the likelihood of expected targets. Finally, temporal expectation effects were unaffected by varying target position within the stream. Together, our results strongly suggest that participants quickly adapt to novel temporal contexts, that they benefit from multisensory (relative to unisensory) stimulation, and that multisensory benefits are maximal when stimulus-driven uncertainty is highest. We propose that enhanced informational content (i.e., multisensory stimulation) enables the robust extraction of temporal regularities, which in turn boosts (uni-)sensory representations.
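Perceptual sensitivity (d') as used in this paradigm is the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch in Python (the log-linear correction is a common convention, not necessarily the paper's; the function name is illustrative):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Perceptual sensitivity d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) keeps the rates
    away from exactly 0 or 1, where the z-transform diverges.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return z(hit_rate) - z(fa_rate)
```

For example, 75 hits and 25 false alarms out of 100 trials each give a d' of roughly 1.3, whereas identical hit and false-alarm rates give a d' of 0.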
102
Colonius H, Wolff FH, Diederich A. Trimodal Race Model Inequalities in Multisensory Integration: I. Basics. Front Psychol 2017; 8:1141. PMID: 28744236. PMCID: PMC5504196. DOI: 10.3389/fpsyg.2017.01141.
Abstract
The race model inequality has become an important testing tool for the analysis of redundant-signals tasks. In crossmodal reaction time experiments, the strength of violation of the inequality is taken as a measure of multisensory integration occurring beyond probability summation. Here we extend previous results on trimodal race model inequalities and specify the underlying context invariance assumptions required for their validity. Simulation results comparing the race model and the superposition model for Erlang-distributed random variables illustrate the trimodal inequalities.
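For the bimodal case, Miller's race model inequality bounds the redundant-target distribution: F_AV(t) ≤ F_A(t) + F_V(t). A minimal empirical check (the paper's trimodal inequalities add a third unisensory term; function names are illustrative):

```python
def ecdf(sample, t):
    """Empirical cumulative distribution: proportion of RTs <= t."""
    return sum(rt <= t for rt in sample) / len(sample)

def race_violation(rt_av, rt_a, rt_v, t):
    """Amount by which the redundant-target CDF exceeds the race-model
    bound min(1, F_A(t) + F_V(t)) at time t. Positive values indicate
    integration beyond probability summation.
    """
    return ecdf(rt_av, t) - min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
```

In practice the bound is evaluated at several quantiles of the RT distributions rather than a single time point.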
Affiliation(s)
- Hans Colonius
- Cognitive Psychology Lab, Department of Psychology, University of Oldenburg, Oldenburg, Germany; Cluster of Excellence "Hearing4All," University of Oldenburg, Oldenburg, Germany
- Felix Hermann Wolff
- Cognitive Psychology Lab, Department of Psychology, University of Oldenburg, Oldenburg, Germany
- Adele Diederich
- Cognitive Science Lab, Life Sciences and Chemistry, Jacobs University Bremen, Bremen, Germany
103
Roy C, Lagarde J, Dotov D, Dalla Bella S. Walking to a multisensory beat. Brain Cogn 2017; 113:172-183. PMID: 28257971. DOI: 10.1016/j.bandc.2017.02.002.
Abstract
Living in a complex and multisensory environment demands constant interaction between perception and action. In everyday life we routinely and efficiently combine simultaneous signals coming from different modalities. There is evidence of a multisensory benefit in a variety of laboratory tasks (temporal judgement and reaction time tasks), but it is less clear whether this benefit extends to ecological tasks such as walking. Furthermore, the benefits of multimodal stimulation are linked to temporal properties such as the temporal window of integration and temporal recalibration, which have so far been examined in tasks involving single, non-repeating stimulus presentations. Here we investigate the same temporal properties in the context of a rhythmic task, namely audio-tactile stimulation during walking. We studied the effect of audio-tactile rhythmic cues on gait variability, and the ability to synchronize to the cues, in young adults. Participants walked with rhythmic cues presented at different stimulus-onset asynchronies. Comparing audio-tactile to unimodal stimulation revealed a multisensory benefit. Moreover, both the temporal window of integration and temporal recalibration mediated the response to multimodal stimulation. In sum, rhythmic behaviours obey the same principles as temporal discrimination and detection behaviours and can thus also benefit from multimodal stimulation.
Affiliation(s)
- Charlotte Roy
- EuroMov Laboratory, Montpellier University, 700 Avenue du Pic Saint Loup, 34090 Montpellier, France
- Julien Lagarde
- EuroMov Laboratory, Montpellier University, 700 Avenue du Pic Saint Loup, 34090 Montpellier, France
- Dobromir Dotov
- Instituto de Neurobiología, Juriquilla, Universidad Nacional Autonoma de México, Mexico
- Simone Dalla Bella
- EuroMov Laboratory, Montpellier University, 700 Avenue du Pic Saint Loup, 34090 Montpellier, France; Institut Universitaire de France, Paris, France; International Laboratory for Brain, Music, and Sound Research (BRAMS), Montreal, Canada; Department of Cognitive Psychology, WSFiZ, Warsaw, Poland
104
Smith E, Zhang S, Bennetto L. Temporal synchrony and audiovisual integration of speech and object stimuli in autism. Research in Autism Spectrum Disorders 2017; 39:11-19. PMID: 30220908. PMCID: PMC6135104. DOI: 10.1016/j.rasd.2017.04.001.
Abstract
BACKGROUND: Individuals with Autism Spectrum Disorders (ASD) have been shown to have multisensory integration deficits, which may lead to problems perceiving complex, multisensory environments. For example, understanding audiovisual speech requires integration of visual information from the lips and face with auditory information from the voice, and audiovisual speech integration deficits can lead to impaired understanding and comprehension. While there is strong evidence for an audiovisual speech integration impairment in ASD, it is unclear whether this impairment is due to low-level perceptual processes that affect all types of audiovisual integration or is specific to speech processing.
METHOD: Here, we measure audiovisual integration of basic speech (i.e., consonant-vowel utterances) and object stimuli (i.e., a bouncing ball) in adolescents with ASD and well-matched controls. We calculate a temporal window of integration (TWI) using each individual's ability to identify which of two videos (one temporally aligned and one misaligned) matches the auditory stimuli. The TWI measures tolerance for temporal asynchrony between the auditory and visual streams, and is an important feature of audiovisual perception.
RESULTS: While controls showed similar tolerance of asynchrony for the simple speech and object stimuli, individuals with ASD did not. Specifically, individuals with ASD showed less tolerance of asynchrony for speech stimuli than for object stimuli. In individuals with ASD, decreased tolerance for asynchrony in speech stimuli was associated with higher ratings of autism symptom severity.
CONCLUSIONS: These results suggest that audiovisual perception in ASD may vary for speech and object stimuli beyond what can be accounted for by stimulus complexity.
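The temporal window of integration (TWI) summarized above is the range of audiovisual asynchronies an observer still treats as matching. A coarse, illustrative stand-in for the psychometric fitting such studies use (the 75% criterion and grid-based width are assumptions):

```python
def twi_width(soas_ms, accuracy, criterion=0.75):
    """Approximate TWI width: the span of stimulus-onset asynchronies
    (in ms) at which matching accuracy stays at or above a criterion.
    Real studies typically fit a (possibly asymmetric) Gaussian instead.
    """
    above = [s for s, a in zip(soas_ms, accuracy) if a >= criterion]
    return float(max(above) - min(above)) if above else 0.0
```

A narrower width for speech than for object stimuli would correspond to the reduced asynchrony tolerance reported here for the ASD group.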
Affiliation(s)
- Elizabeth Smith
- Department of Clinical and Social Sciences in Psychology, University of Rochester, Rochester, NY, USA
- Shouling Zhang
- Department of Clinical and Social Sciences in Psychology, University of Rochester, Rochester, NY, USA
- Loisa Bennetto
- Department of Clinical and Social Sciences in Psychology, University of Rochester, Rochester, NY, USA
105
Petermeijer S, Bazilinskyy P, Bengler K, de Winter J. Take-over again: Investigating multimodal and directional TORs to get the driver back into the loop. Applied Ergonomics 2017; 62:204-215. PMID: 28411731. DOI: 10.1016/j.apergo.2017.02.023.
Abstract
When a highly automated car reaches its operational limits, it needs to provide a take-over request (TOR) so that the driver can resume control. The aim of this simulator-based study was to investigate the effects of TOR modality and left/right directionality on drivers' steering behaviour when facing a head-on collision without having received specific instructions regarding the directional nature of the TORs. Twenty-four participants drove three sessions in a highly automated car, each session with a different TOR modality (auditory, vibrotactile, and auditory-vibrotactile). Six TORs were provided per session, warning the participants about a stationary vehicle that had to be avoided by changing lane to the left or right. Two TORs were issued from the left, two from the right, and two from both the left and the right (i.e., nondirectional). The auditory stimuli were presented via speakers in the simulator (left, right, or both), and the vibrotactile stimuli via a tactile seat (with tactors activated on the left side, right side, or both). The results showed that the multimodal TORs yielded statistically significantly faster steer-touch times than the unimodal vibrotactile TOR, while no statistically significant differences were observed for brake times and lane change times. The unimodal auditory TOR yielded relatively low self-reported usefulness and satisfaction ratings. Almost all drivers overtook the stationary vehicle on the left regardless of the directionality of the TOR, and a post-experiment questionnaire revealed that most participants had not realized that some of the TORs were directional. We conclude that, of the three TOR modalities tested, the multimodal approach is preferred. Moreover, our results show that directional auditory and vibrotactile stimuli do not evoke a directional response in uninstructed drivers. More salient and semantically congruent cues, as well as explicit instructions, may be needed to guide a driver in a specific direction during a take-over scenario.
Affiliation(s)
- Sebastiaan Petermeijer
- Department for Ergonomics, Faculty of Mechanical Engineering, Technical University Munich, Boltzmannstraße 15, 85747 Garching, Germany
- Pavlo Bazilinskyy
- Department of BioMechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Mekelweg 2, 2628 CD Delft, The Netherlands
- Klaus Bengler
- Department for Ergonomics, Faculty of Mechanical Engineering, Technical University Munich, Boltzmannstraße 15, 85747 Garching, Germany
- Joost de Winter
- Department of BioMechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Mekelweg 2, 2628 CD Delft, The Netherlands
106
Multisensory Perception of Contradictory Information in an Environment of Varying Reliability: Evidence for Conscious Perception and Optimal Causal Inference. Sci Rep 2017; 7:3167. PMID: 28600573. PMCID: PMC5466670. DOI: 10.1038/s41598-017-03521-2.
Abstract
Two psychophysical experiments examined multisensory integration of visual-auditory (Experiment 1) and visual-tactile-auditory (Experiment 2) signals. Participants judged the location of these multimodal signals relative to a standard presented at the median plane of the body. A cue conflict was induced by presenting the visual signals with a constant spatial discrepancy relative to the other modalities. Extending previous studies, the reliability of certain modalities (visual in Experiment 1, visual and tactile in Experiment 2) was varied from trial to trial by presenting signals with either strong or weak location information (e.g., a relatively dense or dispersed dot cloud as the visual stimulus). We investigated how participants would adapt to the cue conflict under these varying reliability conditions and whether they had insight into their own performance. Over the course of the experiments, participants switched from an integration strategy to a selection strategy in Experiment 1 and to a calibration strategy in Experiment 2. Simulations of various multisensory perception strategies suggested that optimal causal inference in a varying-reliability environment depends not only on the amount of multimodal discrepancy, but also on the relative reliability of stimuli across the reliability conditions.
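The optimal-integration benchmark such causal-inference studies compare against is maximum-likelihood cue combination: each modality's estimate is weighted by its reliability (inverse variance). A minimal sketch (not the authors' simulation code):

```python
def fuse(estimates, sigmas):
    """Reliability-weighted (maximum-likelihood) cue combination.

    Each estimate is weighted by 1/sigma^2; the fused estimate's
    variance is the inverse of the summed reliabilities, so it is
    never larger than the best single cue's variance.
    """
    weights = [1.0 / s ** 2 for s in sigmas]
    total = sum(weights)
    fused = sum(w * x for w, x in zip(weights, estimates)) / total
    return fused, 1.0 / total
```

Two equally reliable cues at 10 and 20 fuse to 15 with half the variance of either cue alone; an unreliable cue is pulled toward the reliable one.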
107
Colonius H, Diederich A. Measuring multisensory integration: from reaction times to spike counts. Sci Rep 2017; 7:3023. PMID: 28596602. PMCID: PMC5465073. DOI: 10.1038/s41598-017-03219-5.
Abstract
A neuron is categorized as "multisensory" if there is a statistically significant difference between the response evoked, e.g., by a crossmodal stimulus combination and that evoked by the most effective of its components separately. Being responsive to multiple sensory modalities does not guarantee that a neuron has actually engaged in integrating its multiple sensory inputs: it could simply respond to the stimulus component eliciting the strongest response in a given trial. Crossmodal enhancement is commonly expressed as a proportion of the strongest mean unisensory response. This traditional index does not take into account any statistical dependency between the sensory channels under crossmodal stimulation. We propose an alternative index measuring by how much the multisensory response surpasses the level obtainable by optimally combining the unisensory responses, with optimality defined as probability summation under maximal negative stochastic dependence. The new index is analogous to measuring crossmodal enhancement in reaction time studies by the strength of violation of the "race model inequality", a numerical measure of multisensory integration. Since the new index tends to be smaller than the traditional one, neurons previously labeled as "multisensory" may lose that property. The index is easy to compute, and it is sensitive to variability in the data.
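The traditional index mentioned in the abstract expresses crossmodal enhancement as a percentage of the strongest mean unisensory response; the authors' dependence-aware alternative is not reproduced here. A sketch of the traditional index only (illustrative):

```python
def enhancement_index(mean_multisensory, mean_unisensory_responses):
    """Traditional crossmodal enhancement: the percentage by which the
    mean multisensory response exceeds the most effective mean
    unisensory response.
    """
    best_uni = max(mean_unisensory_responses)
    return 100.0 * (mean_multisensory - best_uni) / best_uni
```

A multisensory mean of 15 spikes against best-unisensory 10 gives 50% enhancement; values below zero indicate response depression.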
Affiliation(s)
- Hans Colonius
- Carl von Ossietzky Universität Oldenburg, Department of Psychology, Oldenburg, 26111, Germany
- Adele Diederich
- Jacobs University Bremen, Life Sciences & Chemistry, Bremen, 28759, Germany
108
Bao JY, Corrow SL, Schaefer H, Barton JJS. Cross-modal interactions of faces, voices and names in person familiarity. Visual Cognition 2017. DOI: 10.1080/13506285.2017.1329763.
Affiliation(s)
- Jing Ye Bao
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, Canada
- Sherryse L. Corrow
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, Canada
- Heidi Schaefer
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, Canada
- Jason J. S. Barton
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, Canada
109
Clemente F, Håkansson B, Cipriani C, Wessberg J, Kulbacka-Ortiz K, Brånemark R, Fredén Jansson KJ, Ortiz-Catalan M. Touch and Hearing Mediate Osseoperception. Sci Rep 2017; 7:45363. PMID: 28349945. PMCID: PMC5368565. DOI: 10.1038/srep45363.
Abstract
Osseoperception is the sensation arising from the mechanical stimulation of a bone-anchored prosthesis. Here we show that not only touch but also hearing is involved in this phenomenon. Using mechanical vibrations ranging from 0.1 to 6 kHz, we obtained four psychophysical measures (perception threshold, sensation discrimination, frequency discrimination, and reaction time) in 12 upper- and lower-limb amputees and found that subjects: consistently reported perceiving a sound when the stimulus was delivered at frequencies at or above 400 Hz; were able to discriminate frequency differences between stimuli delivered at high stimulation frequencies (~1500 Hz); and improved their reaction time for bimodal stimuli (i.e., when both vibration and sound were perceived). Our results demonstrate that osseoperception is a multisensory perception, which can explain the improved environmental perception of bone-anchored prosthesis users. This phenomenon might be exploited in novel prosthetic devices to enhance their control, ultimately improving amputees' quality of life.
Affiliation(s)
- Bo Håkansson
- Department of Signals and Systems, Chalmers University of Technology, Gothenburg, Sweden
- Johan Wessberg
- Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden
- Katarzyna Kulbacka-Ortiz
- Centre for Advanced Reconstruction of Extremities, Sahlgrenska University Hospital, Gothenburg, Sweden
- Rickard Brånemark
- Centre for Advanced Reconstruction of Extremities, Sahlgrenska University Hospital, Gothenburg, Sweden; International Center for Osseointegration Research, Education and Surgery (iCORES), Department of Orthopaedics, University of California, San Francisco, USA
- Max Ortiz-Catalan
- Department of Signals and Systems, Chalmers University of Technology, Gothenburg, Sweden; Integrum AB, Gothenburg, Sweden
110
Wahn B, Murali S, Sinnett S, König P. Auditory Stimulus Detection Partially Depends on Visuospatial Attentional Resources. Iperception 2017; 8:2041669516688026. PMID: 28203353. PMCID: PMC5298484. DOI: 10.1177/2041669516688026.
Abstract
Humans’ ability to detect relevant sensory information while being engaged in a demanding task is crucial in daily life. Yet, limited attentional resources restrict information processing. To date, it is still debated whether there are distinct pools of attentional resources for each sensory modality and to what extent the process of multisensory integration is dependent on attentional resources. We addressed these two questions using a dual task paradigm. Specifically, participants performed a multiple object tracking task and a detection task either separately or simultaneously. In the detection task, participants were required to detect visual, auditory, or audiovisual stimuli at varying stimulus intensities that were adjusted using a staircase procedure. We found that tasks significantly interfered. However, the interference was about 50% lower when tasks were performed in separate sensory modalities than in the same sensory modality, suggesting that attentional resources are partly shared. Moreover, we found that perceptual sensitivities were significantly improved for audiovisual stimuli relative to unisensory stimuli regardless of whether attentional resources were diverted to the multiple object tracking task or not. Overall, the present study supports the view that attentional resource allocation in multisensory processing is task-dependent and suggests that multisensory benefits are not dependent on attentional resources.
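The abstract notes that stimulus intensities were adjusted with a staircase procedure, but the exact rule is not given here. A common transformed 1-up/2-down variant (an assumption, not necessarily the one used; it converges near 70.7% correct) can be sketched as:

```python
def two_down_one_up(start, step, responses):
    """Transformed 1-up/2-down staircase: the intensity drops after
    two consecutive correct responses and rises after every error.
    Returns the full track of levels, starting at `start`.
    """
    level, streak, track = start, 0, [start]
    for correct in responses:
        if correct:
            streak += 1
            if streak == 2:
                level -= step
                streak = 0
        else:
            level += step
            streak = 0
        track.append(level)
    return track
```

Averaging the levels at reversal points then estimates the threshold intensity.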
Affiliation(s)
- Basil Wahn
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Supriya Murali
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Scott Sinnett
- Department of Psychology, University of Hawai'i at Mānoa, Honolulu, Hawai'i, USA
- Peter König
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany; Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
111
Gibney KD, Aligbe E, Eggleston BA, Nunes SR, Kerkhoff WG, Dean CL, Kwakye LD. Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity. Front Integr Neurosci 2017; 11:1. PMID: 28163675. PMCID: PMC5247431. DOI: 10.3389/fnint.2017.00001.
Abstract
The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and a McGurk task under various perceptual load conditions: no load (multisensory task with visual distractors present), low load (multisensory task while detecting the presence of a yellow letter among the visual distractors), and high load (multisensory task while detecting the presence of a number among the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than on unisensory trials. However, this multisensory speeding of response times violated the race model for the no and low perceptual load conditions only. Additionally, a geometric measure of Miller's inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration in the no load condition: no change in integration for the McGurk task with increasing load, but increases in integration for the detection task. The results of this study indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks, and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information.
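The "geometric measure of Miller's inequality" used in this study is commonly computed as the area under the positive part of the violation curve F_AV(t) − [F_A(t) + F_V(t)]. A minimal trapezoidal sketch (illustrative; the authors' exact implementation may differ):

```python
def violation_area(times, f_av, f_a, f_v):
    """Trapezoidal area of the positive part of the race-model
    violation F_AV(t) - [F_A(t) + F_V(t)], evaluated on a grid of
    time points. Larger areas mean stronger multisensory integration.
    """
    diff = [max(0.0, av - (a + v)) for av, a, v in zip(f_av, f_a, f_v)]
    area = 0.0
    for i in range(1, len(times)):
        area += 0.5 * (diff[i - 1] + diff[i]) * (times[i] - times[i - 1])
    return area
```

A shrinking area with increasing perceptual load is exactly the pattern reported above for the detection task.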
Affiliation(s)
- Kyla D Gibney
- Department of Neuroscience, Oberlin College, Oberlin, OH, USA
- Sarah R Nunes
- Department of Neuroscience, Oberlin College, Oberlin, OH, USA
- Leslie D Kwakye
- Department of Neuroscience, Oberlin College, Oberlin, OH, USA
112
Estimating the relative weights of visual and auditory tau versus heuristic-based cues for time-to-contact judgments in realistic, familiar scenes by older and younger adults. Atten Percept Psychophys 2017; 79:929-944. DOI: 10.3758/s13414-016-1270-9.
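The "tau" cue in this title is the classic optical time-to-contact variable: an approaching object's optical angle divided by its rate of expansion approximates the time remaining until contact, assuming a constant closing speed. As a one-line illustration (function and parameter names are ours):

```python
def tau(optical_angle_rad, expansion_rate_rad_per_s):
    """Tau estimate of time-to-contact (seconds): current optical
    angle divided by its rate of expansion. Valid for small angles
    and constant approach speed.
    """
    return optical_angle_rad / expansion_rate_rad_per_s
```

Heuristic cues, by contrast, rely on quantities such as image size or familiar object dimensions rather than this ratio.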
113
Schormans AL, Scott KE, Vo AMQ, Tyker A, Typlt M, Stolzberg D, Allman BL. Audiovisual Temporal Processing and Synchrony Perception in the Rat. Front Behav Neurosci 2017; 10:246. PMID: 28119580. PMCID: PMC5222817. DOI: 10.3389/fnbeh.2016.00246.
Abstract
Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contribute to an observer's ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task, in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task, in which they reported whether they perceived the auditory or the visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats (n = 7) trained to perform the simultaneity judgment task could accurately (~80%) distinguish synchronous from asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived as asynchronous. During the temporal order judgment task, rats (n = 7) perceived the synchronous audiovisual stimuli to be "visual first" on ~52% of the trials, and the smallest timing interval between the auditory and visual stimuli that each rat could detect (the just-noticeable difference, JND) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20-40 ms. Given that our behavioral and electrophysiological results were consistent with studies conducted on human participants and with previous recordings made in multisensory brain regions of other species, we suggest that the rat represents an effective model for studying audiovisual temporal synchrony at both the neuronal and the perceptual level.
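The JND values above come from each rat's temporal-order-judgment psychometric function. A simplified, interpolation-based sketch of that computation (the study's actual curve fitting likely differs):

```python
def jnd_from_toj(soas_ms, p_visual_first):
    """Just-noticeable difference from a temporal-order judgment:
    half the SOA distance between the 25% and 75% 'visual first'
    points, located by linear interpolation on the raw curve.
    """
    def crossing(target):
        points = list(zip(soas_ms, p_visual_first))
        for (s0, p0), (s1, p1) in zip(points, points[1:]):
            if min(p0, p1) <= target <= max(p0, p1) and p0 != p1:
                return s0 + (target - p0) * (s1 - s0) / (p1 - p0)
        return None
    lo, hi = crossing(0.25), crossing(0.75)
    return (hi - lo) / 2.0 if lo is not None and hi is not None else None
```

If the curve never spans both crossing points, no JND can be estimated and the function returns None.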
Affiliation(s)
- Ashley L Schormans
- Department of Anatomy and Cell Biology, Schulich School of Medicine and Dentistry, University of Western Ontario, London, ON, Canada
- Kaela E Scott
- Department of Anatomy and Cell Biology, Schulich School of Medicine and Dentistry, University of Western Ontario, London, ON, Canada
- Albert M Q Vo
- Department of Anatomy and Cell Biology, Schulich School of Medicine and Dentistry, University of Western Ontario, London, ON, Canada
- Anna Tyker
- Department of Anatomy and Cell Biology, Schulich School of Medicine and Dentistry, University of Western Ontario, London, ON, Canada
- Marei Typlt
- Department of Anatomy and Cell Biology, Schulich School of Medicine and Dentistry, University of Western Ontario, London, ON, Canada
- Daniel Stolzberg
- Department of Physiology and Pharmacology, Schulich School of Medicine and Dentistry, University of Western Ontario, London, ON, Canada
- Brian L Allman
- Department of Anatomy and Cell Biology, Schulich School of Medicine and Dentistry, University of Western Ontario, London, ON, Canada
114
Biondi F, Strayer DL, Rossi R, Gastaldi M, Mulatti C. Advanced driver assistance systems: Using multimodal redundant warnings to enhance road safety. Applied Ergonomics 2017; 58:238-244. PMID: 27633218. DOI: 10.1016/j.apergo.2016.06.016.
Abstract
This study investigated whether multimodal redundant warnings presented by advanced assistance systems reduce brake response times. Warnings presented by assistance systems are designed to inform drivers that evasive driving maneuvers are needed in order to avoid a potential accident; if poorly designed, they may distract drivers, slow their responses, and reduce road safety. In two experiments, participants drove a simulated vehicle equipped with a forward collision avoidance system. Auditory, vibrotactile, and multimodal warnings were presented when the time to collision was shorter than five seconds. The effects of these warnings were investigated while participants held a concurrent cell phone conversation (Exp. 1) or drove in high-density traffic (Exp. 2). Braking times and subjective workload were measured. Multimodal redundant warnings elicited faster braking reaction times, and remained effective even when drivers were talking on a cell phone (Exp. 1) or driving in dense traffic (Exp. 2). Multimodal warnings produced higher ratings of urgency, but ratings of frustration did not increase compared with the other warnings. These findings are important given that faster braking responses may reduce the potential for a collision.
Affiliation(s)
- Francesco Biondi
- Jaguar Land Rover, United Kingdom; University of Padova, Italy; Department of Psychology, University of Utah, Salt Lake City, UT, United States
- David L Strayer
- Department of Psychology, University of Utah, Salt Lake City, UT, United States
- Riccardo Rossi
- Department of Civil, Architectural and Environmental Engineering, University of Padova, Padova, Italy
- Massimiliano Gastaldi
- Department of Civil, Architectural and Environmental Engineering, University of Padova, Padova, Italy
- Claudio Mulatti
- Department of Developmental and Social Psychology, University of Padova, Padova, Italy
115
Chandrasekaran C. Computational principles and models of multisensory integration. Curr Opin Neurobiol 2016; 43:25-34. [PMID: 27918886 DOI: 10.1016/j.conb.2016.11.002]
Abstract
Combining information from multiple senses creates robust percepts, speeds up responses, enhances learning, and improves detection, discrimination, and recognition. In this review, I discuss computational models and principles that provide insight into how this process of multisensory integration occurs at the behavioral and neural level. My initial focus is on drift-diffusion and Bayesian models that can predict behavior in multisensory contexts. I then highlight how recent neurophysiological and perturbation experiments provide evidence for a distributed redundant network for multisensory integration. I also emphasize studies which show that task-relevant variables in multisensory contexts are distributed in heterogeneous neural populations. Finally, I describe dimensionality reduction methods and recurrent neural network models that may help decipher heterogeneous neural populations involved in multisensory integration.
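The Bayesian models surveyed in this review rest on a common idea: estimates from different senses are combined in proportion to their reliability. As a minimal illustration of that principle (a sketch with made-up numbers, not the review's own code), precision-weighted fusion of two noisy Gaussian estimates can be written as:

```python
def fuse_estimates(mu_a, var_a, mu_v, var_v):
    """Precision-weighted (maximum-likelihood) fusion of two noisy
    unisensory estimates, assuming independent Gaussian noise."""
    precision_a = 1.0 / var_a
    precision_v = 1.0 / var_v
    w_a = precision_a / (precision_a + precision_v)  # reliability weight
    mu_av = w_a * mu_a + (1.0 - w_a) * mu_v          # fused estimate
    var_av = 1.0 / (precision_a + precision_v)       # fused variance
    return mu_av, var_av

# A noisy auditory estimate (10, var 4) and a reliable visual one (0, var 1):
mu_av, var_av = fuse_estimates(10.0, 4.0, 0.0, 1.0)
# mu_av = 2.0 (pulled toward the reliable cue), var_av = 0.8
```

The fused variance (0.8) is smaller than either unisensory variance, which is the classic signature of statistically optimal integration: the multisensory percept is both robust and more precise.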
116
Multisensory integration is independent of perceived simultaneity. Exp Brain Res 2016; 235:763-775. [DOI: 10.1007/s00221-016-4822-2]
117
Roy C, Dalla Bella S, Lagarde J. To bridge or not to bridge the multisensory time gap: bimanual coordination to sound and touch with temporal lags. Exp Brain Res 2016; 235:135-151. [DOI: 10.1007/s00221-016-4776-4]
118
Noel JP, De Niear M, Van der Burg E, Wallace MT. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan. PLoS One 2016; 11:e0161698. [PMID: 27551918 PMCID: PMC4994953 DOI: 10.1371/journal.pone.0161698]
Abstract
Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations.
Affiliation(s)
- Jean-Paul Noel
- Neuroscience Graduate Program, Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN, 37235, United States of America
- Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN, 37235, United States of America
- Matthew De Niear
- Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN, 37235, United States of America
- Medical Scientist Training Program, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN, 37235, United States of America
- Erik Van der Burg
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- School of Psychology, University of Sydney, Sydney, Australia
- Mark T. Wallace
- Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN, 37235, United States of America
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, 37235, United States of America
- Department of Psychology, Vanderbilt University, Nashville, TN, 37235, United States of America
119
Multisensory perceptual learning is dependent upon task difficulty. Exp Brain Res 2016; 234:3269-3277. [PMID: 27401473 DOI: 10.1007/s00221-016-4724-3]
Abstract
There has been a growing interest in developing behavioral tasks to enhance temporal acuity as recent findings have demonstrated changes in temporal processing in a number of clinical conditions. Prior research has demonstrated that perceptual training can enhance temporal acuity both within and across different sensory modalities. Although certain forms of unisensory perceptual learning have been shown to be dependent upon task difficulty, this relationship has not been explored for multisensory learning. The present study sought to determine the effects of task difficulty on multisensory perceptual learning. Prior to and following a single training session, participants completed a simultaneity judgment (SJ) task, which required them to judge whether a visual stimulus (flash) and an auditory stimulus (beep), presented in synchrony or at various stimulus onset asynchronies (SOAs), occurred synchronously or asynchronously. During the training session, participants completed the same SJ task but received feedback regarding the accuracy of their responses. Participants were randomly assigned to one of three levels of difficulty during training: easy, moderate, and hard, which were distinguished based on the SOAs used during training. We report that only the most difficult (i.e., hard) training protocol enhanced temporal acuity. We conclude that perceptual training protocols for enhancing multisensory temporal acuity may be optimized by employing audiovisual stimuli for which it is difficult to discriminate temporal synchrony from asynchrony.
120
Noel JP, Lukowska M, Wallace M, Serino A. Multisensory simultaneity judgment and proximity to the body. J Vis 2016; 16:21. [PMID: 26891828 PMCID: PMC4777235 DOI: 10.1167/16.3.21]
Abstract
The integration of information across different sensory modalities is known to be dependent upon the statistical characteristics of the stimuli to be combined. For example, the spatial and temporal proximity of stimuli are important determinants with stimuli that are close in space and time being more likely to be bound. These multisensory interactions occur not only for singular points in space/time, but over “windows” of space and time that likely relate to the ecological statistics of real-world stimuli. Relatedly, human psychophysical work has demonstrated that individuals are highly prone to judge multisensory stimuli as co-occurring over a wide range of time—a so-called simultaneity window (SW). Similarly, there exists a spatial representation of peripersonal space (PPS) surrounding the body in which stimuli related to the body and to external events occurring near the body are highly likely to be jointly processed. In the current study, we sought to examine the interaction between these temporal and spatial dimensions of multisensory representation by measuring the SW for audiovisual stimuli through proximal–distal space (i.e., PPS and extrapersonal space). Results demonstrate that the audiovisual SWs within PPS are larger than outside PPS. In addition, we suggest that this effect is likely due to an automatic and additional computation of these multisensory events in a body-centered reference frame. We discuss the current findings in terms of the spatiotemporal constraints of multisensory interactions and the implication of distinct reference frames on this process.
121
Powers AR III, Hillock-Dunn A, Wallace MT. Generalization of multisensory perceptual learning. Sci Rep 2016; 6:23374. [PMID: 27000988 PMCID: PMC4802214 DOI: 10.1038/srep23374]
Abstract
Life in a multisensory world requires the rapid and accurate integration of stimuli across the different senses. In this process, the temporal relationship between stimuli is critical in determining which stimuli share a common origin. Numerous studies have described a multisensory temporal binding window—the time window within which audiovisual stimuli are likely to be perceptually bound. In addition to characterizing this window’s size, recent work has shown it to be malleable, with the capacity for substantial narrowing following perceptual training. However, the generalization of these effects to other measures of perception is not known. This question was examined by characterizing the ability of training on a simultaneity judgment task to influence perception of the temporally-dependent sound-induced flash illusion (SIFI). Results do not demonstrate a change in performance on the SIFI itself following training. However, data do show an improved ability to discriminate rapidly-presented two-flash control conditions following training. Effects were specific to training and scaled with the degree of temporal window narrowing exhibited. Results do not support generalization of multisensory perceptual learning to other multisensory tasks. However, results do show that training results in improvements in visual temporal acuity, suggesting a generalization effect of multisensory training on unisensory abilities.
Affiliation(s)
- Albert R Powers III
- Kennedy Center, Vanderbilt University, Nashville, Tennessee, USA; Neuroscience Graduate Program, Vanderbilt University, Nashville, Tennessee, USA; Medical Scientist Training Program, Vanderbilt University School of Medicine, Nashville, Tennessee, USA; Department of Psychiatry, Yale University, New Haven, Connecticut, USA
- Andrea Hillock-Dunn
- Kennedy Center, Vanderbilt University, Nashville, Tennessee, USA; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, Tennessee, USA
- Mark T Wallace
- Kennedy Center, Vanderbilt University, Nashville, Tennessee, USA; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, Tennessee, USA; Neuroscience Graduate Program, Vanderbilt University, Nashville, Tennessee, USA
122
When audiovisual correspondence disturbs visual processing. Exp Brain Res 2016; 234:1325-32. [PMID: 26884130 DOI: 10.1007/s00221-016-4591-y]
Abstract
Multisensory integration is known to create a more robust and reliable perceptual representation of one's environment. Specifically, a congruent auditory input can make a visual stimulus more salient, consequently enhancing the visibility and detection of the visual target. However, it remains largely unknown whether a congruent auditory input can also impair visual processing. In the current study, we demonstrate that temporally congruent auditory input disrupts visual processing, consequently slowing down visual target detection. More importantly, this cross-modal inhibition occurs only when the contrast of visual targets is high. When the contrast of visual targets is low, enhancement of visual target detection is observed, consistent with the prediction based on the principle of inverse effectiveness (PIE) in cross-modal integration. The switch of the behavioral effect of audiovisual interaction from benefit to cost further extends the PIE to encompass the suppressive cross-modal interaction.
123
Roberts W, Monem RG, Fillmore MT. Multisensory Stop Signals Can Reduce the Disinhibiting Effects of Alcohol in Adults. Alcohol Clin Exp Res 2016; 40:591-8. [PMID: 26853439 DOI: 10.1111/acer.12971]
Abstract
Background: Alcohol impairs drinkers' abilities to inhibit inappropriate responses. Certain stimulus conditions have been shown to facilitate behavioral control. Under conditions where individuals are presented with multiple inhibitory signals, the speed and consistency with which they are able to inhibit a response is improved. Recent research has shown that multisensory signals might protect against the disruptive effects of alcohol on mechanisms of behavioral control. This study examined whether multisensory stop signals can be used to improve inhibitory control, possibly by speeding attentional shifts toward inhibitory "stop" signals in the environment.
Methods: Twenty adult social drinkers performed a modified cued go/no-go task that measured the ability to inhibit prepotent responses following 0.64 g/kg alcohol and placebo. Response targets were presented as unimodal (visual) and as multisensory (visual + aural) stimuli.
Results: During unimodal response target trials, participants made more inhibitory failures under 0.64 g/kg alcohol compared to placebo. During multisensory trials, however, there was no significant effect of alcohol on inhibitory control.
Conclusions: These findings identify multisensory inhibitory signals as a potentially important environmental factor that can reduce the degree to which alcohol disinhibits behavior, possibly via intersensory co-activation between the visual and auditory pathways.
Affiliation(s)
- Walter Roberts
- Department of Psychology, University of Kentucky, Lexington, Kentucky
- Ramey G Monem
- Department of Psychology, University of Kentucky, Lexington, Kentucky
- Mark T Fillmore
- Department of Psychology, University of Kentucky, Lexington, Kentucky
124
True and Perceived Synchrony are Preferentially Associated With Particular Sensory Pairings. Sci Rep 2015; 5:17467. [PMID: 26621493 PMCID: PMC4664927 DOI: 10.1038/srep17467]
Abstract
Perception and behavior are fundamentally shaped by the integration of different sensory modalities into unique multisensory representations, a process governed by spatio-temporal correspondence. Prior work has characterized temporal perception using the point in time at which subjects are most likely to judge multisensory stimuli to be simultaneous (PSS) and the temporal binding window (TBW) over which participants are likely to do so. Here we examine the relationship between the PSS and the TBW within and between individuals, and within and between three sensory combinations: audiovisual, audiotactile and visuotactile. We demonstrate that TBWs correlate within individuals and across multisensory pairings, but PSSs do not. Further, we reveal that while the audiotactile and audiovisual pairings show tightly related TBWs, they also exhibit a differential relationship with respect to true and perceived multisensory synchrony. Thus, audiotactile and audiovisual temporal processing share mechanistic features yet are respectively functionally linked to objective and subjective synchrony.
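For readers unfamiliar with the two measures this abstract relies on, the PSS and TBW are typically recovered by fitting a curve to the proportion of "simultaneous" responses across SOAs. The sketch below uses hypothetical data and a crude grid-search Gaussian fit; it is illustrative only, not the paper's analysis pipeline, and the "TBW = 2 sigma" definition is just one common operationalization:

```python
import numpy as np

# Hypothetical synchrony-judgment data: proportion of "simultaneous"
# responses at each audiovisual SOA (ms; negative = auditory leads).
soas = np.array([-300.0, -200.0, -100.0, 0.0, 100.0, 200.0, 300.0])
p_sync = np.array([0.10, 0.35, 0.80, 0.95, 0.85, 0.40, 0.15])

# Grid-search least-squares fit of a Gaussian-shaped curve
#   p(soa) = peak * exp(-(soa - pss)^2 / (2 * sigma^2)).
best_sse, pss_hat, sigma_hat = np.inf, 0.0, 1.0
for pss in np.arange(-50.0, 51.0, 1.0):
    for sigma in np.arange(40.0, 301.0, 5.0):
        pred = p_sync.max() * np.exp(-(soas - pss) ** 2 / (2 * sigma ** 2))
        sse = float(np.sum((pred - p_sync) ** 2))
        if sse < best_sse:
            best_sse, pss_hat, sigma_hat = sse, pss, sigma

tbw = 2.0 * sigma_hat  # one operational definition of window width
print(f"PSS ~ {pss_hat:.0f} ms, TBW ~ {tbw:.0f} ms")
```

Within-subject correlations of the kind reported here would then be computed over the per-participant `pss_hat` and `tbw` estimates from each sensory pairing.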
125
Leone LM, McCourt ME. Dissociation of perception and action in audiovisual multisensory integration. Eur J Neurosci 2015; 42:2915-22. [PMID: 26417674 PMCID: PMC4715611 DOI: 10.1111/ejn.13087]
Abstract
The 'temporal rule' of multisensory integration (MI) proposes that unisensory stimuli, and the neuronal responses they evoke, must fall within a window of integration. Ecological validity demands that MI should occur only for physically simultaneous events (which may give rise to non-simultaneous neural activations), and spurious neural response simultaneities unrelated to environmental multisensory occurrences must somehow be rejected. Two experiments investigated the requirements of simultaneity for facilitative MI. Experiment 1 employed a reaction time (RT)/race model paradigm to measure audiovisual (AV) MI as a function of AV stimulus-onset asynchrony (SOA) under fully dark-adapted conditions for visual stimuli that were either rod- or cone-isolating. Auditory stimulus intensity was constant. Despite a 155-ms delay in mean RT to the scotopic vs. photopic stimulus, facilitative AV MI in both conditions occurred exclusively at an AV SOA of 0 ms. Thus, facilitative MI demands both physical and physiological simultaneity. Experiment 2 investigated the accuracy of simultaneity and temporal order judgements under the same stimulus conditions. Judgements of AV stimulus simultaneity or temporal order were significantly influenced by stimulus intensity, indicating different simultaneity requirements for these tasks. The possibility was considered that there are mechanisms by which the nervous system may take account of variations in response latency arising from changes in stimulus intensity in order to selectively integrate only those physiological simultaneities that arise from physical simultaneities. It was proposed that separate subsystems for AV MI exist that pertain to action and perception.
Affiliation(s)
- Lynnette M Leone
- Center for Visual and Cognitive Neuroscience, Department of Psychology, North Dakota State University, Fargo, ND, 58108, USA
- Mark E McCourt
- Center for Visual and Cognitive Neuroscience, Department of Psychology, North Dakota State University, Fargo, ND, 58108, USA
126
Zelic G, Mottet D, Lagarde J. Perceptuo-motor compatibility governs multisensory integration in bimanual coordination dynamics. Exp Brain Res 2015; 234:463-74. [DOI: 10.1007/s00221-015-4476-5]
127
Fukutomi M, Someya M, Ogawa H. Auditory modulation of wind-elicited walking behavior in the cricket Gryllus bimaculatus. J Exp Biol 2015; 218:3968-77. [PMID: 26519512 DOI: 10.1242/jeb.128751]
Abstract
Animals flexibly change their locomotion triggered by an identical stimulus depending on the environmental context and behavioral state. This indicates that additional sensory inputs in a different modality from the stimulus triggering the escape response affect the neuronal circuit governing that behavior. However, how the spatio-temporal relationships between these two stimuli effect a behavioral change remains unknown. We studied this question using crickets, which respond to a short air-puff with oriented walking activity mediated by the cercal sensory system. In addition, an acoustic stimulus, such as conspecific 'song' received by the tympanal organ, elicits a distinct oriented locomotion termed phonotaxis. In this study, we examined the cross-modal effects on wind-elicited walking when an air-puff was preceded by an acoustic stimulus and tested whether the auditory modulation depends on the coincidence of the direction of both stimuli. A preceding 10 kHz pure tone biased the wind-elicited walking in a backward direction and elevated the threshold of the wind-elicited response, whereas other movement parameters, including turn angle, reaction time, walking speed and distance, were unaffected. The auditory modulations, however, did not depend on the coincidence of the stimulus directions. A preceding sound consistently altered the wind-elicited walking direction and response probability throughout the experimental sessions, meaning that the auditory modulation did not result from previous experience or associative learning. These results suggest that the cricket nervous system is able to integrate auditory and air-puff stimuli, and modulate the wind-elicited escape behavior depending on the acoustic context.
Affiliation(s)
- Matasaburo Fukutomi
- Biosystems Science Course, Graduate School of Life Science, Hokkaido University, Sapporo 060-0810, Japan
- Makoto Someya
- Biosystems Science Course, Graduate School of Life Science, Hokkaido University, Sapporo 060-0810, Japan
- Hiroto Ogawa
- PRESTO, Japan Science and Technology Agency (JST), Kawaguchi 332-0012, Japan; Department of Biological Sciences, Faculty of Science, Hokkaido University, Sapporo 060-0810, Japan
128
Laing M, Rees A, Vuong QC. Amplitude-modulated stimuli reveal auditory-visual interactions in brain activity and brain connectivity. Front Psychol 2015; 6:1440. [PMID: 26483710 PMCID: PMC4591484 DOI: 10.3389/fpsyg.2015.01440]
Abstract
The temporal congruence between auditory and visual signals coming from the same source can be a powerful means by which the brain integrates information from different senses. To investigate how the brain uses temporal information to integrate auditory and visual information from continuous yet unfamiliar stimuli, we used amplitude-modulated tones and size-modulated shapes with which we could manipulate the temporal congruence between the sensory signals. These signals were independently modulated at a slow or a fast rate. Participants were presented with auditory-only, visual-only, or auditory-visual (AV) trials in the fMRI scanner. On AV trials, the auditory and visual signal could have the same (AV congruent) or different modulation rates (AV incongruent). Using psychophysiological interaction analyses, we found that auditory regions showed increased functional connectivity predominantly with frontal regions for AV incongruent relative to AV congruent stimuli. We further found that superior temporal regions, shown previously to integrate auditory and visual signals, showed increased connectivity with frontal and parietal regions for the same contrast. Our findings provide evidence that both activity in a network of brain regions and their connectivity are important for AV integration, and help to bridge the gap between transient and familiar AV stimuli used in previous studies.
Affiliation(s)
- Mark Laing
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
- Adrian Rees
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
- Quoc C Vuong
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
129
Schröter H, Bratzke D, Fiedler A, Birngruber T. Does semantic redundancy gain result from multiple semantic priming? Acta Psychol (Amst) 2015; 161:79-85. [PMID: 26342771 DOI: 10.1016/j.actpsy.2015.08.001]
Abstract
Fiedler, Schröter, and Ulrich (2013) reported faster responses to a single written word when the semantic content of this word (e.g., "elephant") matched both targets (e.g., "animal", "gray") as compared to a single target (e.g., "animal", "brown"). This semantic redundancy gain was explained by statistical facilitation due to a race of independent memory retrieval processes. The present experiment addresses one alternative explanation, namely that semantic redundancy gain results from multiple pre-activation of words that match both targets. In different blocks of trials, participants performed a redundant-targets task and a lexical decision task. The targets of the redundant-targets task served as primes in the lexical decision task. Replicating the findings of Fiedler et al., a semantic redundancy gain was observed in the redundant-targets task. Crucially, however, there was no evidence of a multiple semantic priming effect in the lexical decision task. This result suggests that semantic redundancy gain cannot be explained by multiple pre-activation of words that match both targets.
Affiliation(s)
- Daniel Bratzke
- Department of Psychology, University of Tübingen, Germany
- Anja Fiedler
- Department of Psychology, The University of Iowa, USA
130
Höchenberger R, Busch NA, Ohla K. Nonlinear response speedup in bimodal visual-olfactory object identification. Front Psychol 2015; 6:1477. [PMID: 26483730 PMCID: PMC4588124 DOI: 10.3389/fpsyg.2015.01477]
Abstract
Multisensory processes are vital in the perception of our environment. In the evaluation of foodstuff, redundant sensory inputs not only assist the identification of edible and nutritious substances, but also help avoid the ingestion of possibly hazardous substances. While it is known that the non-chemical senses already interact at early processing levels, it remains unclear whether the visual and olfactory senses exhibit comparable interaction effects. To address this question, we tested whether the perception of congruent bimodal visual-olfactory objects is facilitated compared to unimodal stimulation. We measured response times (RT) and accuracy during speeded object identification. The onset of the visual and olfactory constituents in bimodal trials was physically aligned in the first experiment and perceptually aligned in the second. We tested whether the data favored coactivation or parallel processing consistent with race models. A redundant-signals effect was observed for perceptually aligned redundant stimuli only, i.e., bimodal stimuli were identified faster than either of the unimodal components. Analysis of the RT distributions and accuracy data revealed that these observations could be explained by a race model. More specifically, visual and olfactory channels appeared to be operating in a parallel, positively dependent manner. While these results suggest the absence of early sensory interactions, future studies are needed to substantiate this interpretation.
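Race-model analyses of the kind this abstract describes commonly test Miller's (1982) race-model inequality: if bimodal RTs are faster than the bound set by the summed unisensory RT distributions, a parallel race of independent channels cannot explain the speedup. A minimal sketch with simulated (hypothetical) RTs, not the study's data:

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Maximum violation of Miller's race-model inequality,
    F_AV(t) <= min(1, F_A(t) + F_V(t)).
    A positive value means the bimodal RT distribution is faster than
    any parallel race of independent unisensory channels allows."""
    def ecdf(sample, t):
        sample = np.sort(np.asarray(sample, dtype=float))
        return np.searchsorted(sample, t, side="right") / sample.size

    bound = np.minimum(1.0, ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid))
    return float(np.max(ecdf(rt_av, t_grid) - bound))

# Simulated RTs (ms): bimodal responses markedly faster than unimodal.
rng = np.random.default_rng(0)
rt_a = rng.normal(320.0, 40.0, 500)
rt_v = rng.normal(330.0, 40.0, 500)
rt_av = rng.normal(240.0, 30.0, 500)
t_grid = np.linspace(150.0, 450.0, 301)
print(race_model_violation(rt_a, rt_v, rt_av, t_grid))  # positive => violation
```

A non-positive result, as reported for the physically aligned condition here, is consistent with parallel processing and does not require a coactivation account.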
Affiliation(s)
- Richard Höchenberger
- Psychophysiology of Food Perception, German Institute of Human Nutrition Potsdam-Rehbrücke (DIfE), Nuthetal, Germany
- Niko A Busch
- Institute of Medical Psychology, Charité - Universitätsmedizin Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt University, Berlin, Germany
- Kathrin Ohla
- Psychophysiology of Food Perception, German Institute of Human Nutrition Potsdam-Rehbrücke (DIfE), Nuthetal, Germany
131
Stevenson RA, Segers M, Ferber S, Barense MD, Camarata S, Wallace MT. Keeping time in the brain: Autism spectrum disorder and audiovisual temporal processing. Autism Res 2015; 9:720-38. [PMID: 26402725 DOI: 10.1002/aur.1566]
Abstract
A growing area of interest and relevance in the study of autism spectrum disorder (ASD) focuses on the relationship between multisensory temporal function and the behavioral, perceptual, and cognitive impairments observed in ASD. Atypical sensory processing is becoming increasingly recognized as a core component of autism, with evidence of atypical processing across a number of sensory modalities. These deviations from typical processing underscore the value of interpreting ASD within a multisensory framework. Furthermore, converging evidence illustrates that these differences in audiovisual processing may be specifically related to temporal processing. This review seeks to bridge the connection between temporal processing and audiovisual perception, and to elaborate on emerging data showing differences in audiovisual temporal function in autism. We also discuss the consequence of such changes, the specific impact on the processing of different classes of audiovisual stimuli (e.g. speech vs. nonspeech, etc.), and the presumptive brain processes and networks underlying audiovisual temporal integration. Finally, possible downstream behavioral implications, and possible remediation strategies are outlined. Autism Res 2016, 9: 720-738. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
Affiliation(s)
- Ryan A Stevenson
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Magali Segers
- Department of Psychology, York University, Toronto, Ontario, Canada
- Susanne Ferber
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada; Rotman Research Institute, Toronto, Ontario, Canada
- Morgan D Barense
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada; Rotman Research Institute, Toronto, Ontario, Canada
- Stephen Camarata
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, Tennessee
- Mark T Wallace
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, Tennessee; Vanderbilt Brain Institute, Vanderbilt University Medical Center, Nashville, Tennessee; Department of Psychology, Vanderbilt University, Nashville, Tennessee; Department of Psychiatry, Vanderbilt University Medical Center, Nashville, Tennessee
132
Tsui ASM, Ma YK, Ho A, Chow HM, Tseng CH. Bimodal emotion congruency is critical to preverbal infants’ abstract rule learning. Dev Sci 2015; 19:382-93. [DOI: 10.1111/desc.12319]
Affiliation(s)
- Angeline Sin Mei Tsui
- Department of Psychology, The University of Hong Kong, China; Department of Psychology, University of Ottawa, Canada
- Yuen Ki Ma
- Department of Psychology, The University of Hong Kong, China
- Anna Ho
- Department of Psychology, The University of Hong Kong, China
- Hiu Mei Chow
- Department of Psychology, The University of Hong Kong, China; Department of Psychology, University of Massachusetts Boston, USA
- Chia-huei Tseng
- Department of Psychology, The University of Hong Kong, China
133
Weisberg J, McCullough S, Emmorey K. Simultaneous perception of a spoken and a signed language: The brain basis of ASL-English code-blends. Brain Lang 2015; 147:96-106. [PMID: 26177161] [PMCID: PMC5769874] [DOI: 10.1016/j.bandl.2015.05.006]
Abstract
Code-blends (simultaneous words and signs) are a unique characteristic of bimodal bilingual communication. Using fMRI, we investigated code-blend comprehension in hearing native ASL-English bilinguals who made a semantic decision (edible?) about signs, audiovisual words, and semantically equivalent code-blends. English and ASL recruited a similar fronto-temporal network with expected modality differences: stronger activation for English in auditory regions of bilateral superior temporal cortex, and stronger activation for ASL in bilateral occipitotemporal visual regions and left parietal cortex. Code-blend comprehension elicited activity in a combination of these regions, and no cognitive control regions were additionally recruited. Furthermore, code-blends elicited reduced activation relative to ASL presented alone in bilateral prefrontal and visual extrastriate cortices, and relative to English alone in auditory association cortex. Consistent with behavioral facilitation observed during semantic decisions, the findings suggest that redundant semantic content induces more efficient neural processing in language and sensory regions during bimodal language integration.
Affiliation(s)
- Jill Weisberg
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA
- Stephen McCullough
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA
- Karen Emmorey
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA
134
Misselhorn J, Daume J, Engel AK, Friese U. A matter of attention: Crossmodal congruence enhances and impairs performance in a novel trimodal matching paradigm. Neuropsychologia 2015. [PMID: 26209356] [DOI: 10.1016/j.neuropsychologia.2015.07.022]
Abstract
A novel crossmodal matching paradigm including vision, audition, and somatosensation was developed in order to investigate the interaction between attention and crossmodal congruence in multisensory integration. To that end, all three modalities were stimulated concurrently while a bimodal focus was defined blockwise. Congruence between stimulus intensity changes in the attended modalities had to be evaluated. We found that crossmodal congruence improved performance if both the attended modalities and the task-irrelevant distractor were congruent. If the attended modalities were incongruent, the distractor impaired performance owing to its congruence relation to one of the attended modalities. The magnitude of crossmodal enhancement or impairment differed between attentional conditions: the largest crossmodal effects were seen in visual-tactile matching, intermediate effects in audio-visual matching, and the smallest effects in audio-tactile matching. We conclude that differences in crossmodal matching likely reflect characteristics of multisensory neural network architecture. We discuss our results with respect to the timing of perceptual processing and state hypotheses for future physiological studies. Finally, etiological questions are addressed.
Affiliation(s)
- Jonas Misselhorn
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246 Hamburg, Germany
- Jonathan Daume
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246 Hamburg, Germany
- Andreas K Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246 Hamburg, Germany
- Uwe Friese
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246 Hamburg, Germany
135
Abstract
Understanding the principles by which the brain combines information from different senses provides us with insight into the computational strategies used to maximize their utility. Prior studies of the superior colliculus (SC) neuron as a model suggest that the relative timing with which sensory cues appear is an important factor in this context. Cross-modal cues that are near-simultaneous are likely to be derived from the same event, and the neural inputs they generate are integrated more strongly than those from cues that are temporally displaced from one another. However, the present results from studies of cat SC neurons show that this "temporal principle" of multisensory integration is more nuanced than previously thought and reveal that the integration of temporally displaced sensory responses is also highly dependent on the relative efficacies with which they drive their common target neuron. Larger multisensory responses were achieved when stronger responses were advanced in time relative to weaker responses. This new temporal principle of integration suggests an inhibitory mechanism that better accounts for the sensitivity of the multisensory product to differences in the timing of cross-modal cues than do earlier mechanistic hypotheses based on response onset alignment or response overlap.
136
Li B, Yuan X, Huang X. The aftereffect of perceived duration is contingent on auditory frequency but not visual orientation. Sci Rep 2015; 5:10124. [PMID: 26054927] [PMCID: PMC4460570] [DOI: 10.1038/srep10124]
Abstract
Recent sensory history plays a critical role in duration perception. It has been established that after adapting to a particular duration, the test durations within a certain range appear to be distorted. To explore whether the aftereffect of perceived duration can be constrained by sensory modality and stimulus feature within a modality, the current study applied the technique of simultaneous sensory adaptation, by which observers were able to simultaneously adapt to two durations defined by two different stimuli. Using both simple visual and auditory stimuli, we found that the aftereffect of perceived duration is modality specific and contingent on auditory frequency but not visual orientation of the stimulus. These results demonstrate that there are independent timers responsible for the aftereffects of perceived duration in each sensory modality. Furthermore, the timer for the auditory modality may be located at a relatively earlier stage of sensory processing than the timer for the visual modality.
Affiliation(s)
- Baolin Li
- Key Laboratory of Cognition and Personality (SWU), Ministry of Education, Chongqing 400715, China; Faculty of Psychology, Southwest University, Chongqing 400715, China
- Xiangyong Yuan
- Key Laboratory of Cognition and Personality (SWU), Ministry of Education, Chongqing 400715, China; Faculty of Psychology, Southwest University, Chongqing 400715, China
- Xiting Huang
- Key Laboratory of Cognition and Personality (SWU), Ministry of Education, Chongqing 400715, China; Faculty of Psychology, Southwest University, Chongqing 400715, China
137
Hollensteiner KJ, Pieper F, Engler G, König P, Engel AK. Crossmodal integration improves sensory detection thresholds in the ferret. PLoS One 2015; 10:e0124952. [PMID: 25970327] [PMCID: PMC4430165] [DOI: 10.1371/journal.pone.0124952]
Abstract
During the last two decades, ferrets (Mustela putorius) have become established as a highly efficient animal model in several fields of neuroscience. Here we asked whether ferrets integrate sensory information according to the same principles established for other species. Since only a few methods and protocols are available for behaving ferrets, we developed a head-free, body-restrained approach allowing a standardized stimulation position and the utilization of the ferret’s natural response behavior. We established a behavioral paradigm to test audiovisual integration in the ferret. Animals had to detect a brief auditory and/or visual stimulus presented either left or right from their midline. We first determined detection thresholds for auditory amplitude and visual contrast. In a second step, we combined both modalities and compared psychometric fits and the reaction times between all conditions. We employed Maximum Likelihood Estimation (MLE) to model bimodal psychometric curves and to investigate whether ferrets integrate modalities in an optimal manner. Furthermore, to test for a redundant signal effect we pooled the reaction times of all animals to calculate a race model. We observed that bimodal detection thresholds were reduced and reaction times were faster in the bimodal compared to unimodal conditions. The race model and MLE modeling showed that ferrets integrate modalities in a statistically optimal fashion. Taken together, the data indicate that principles of multisensory integration previously demonstrated in other species also apply to crossmodal processing in the ferret.
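The MLE benchmark used in studies like this one predicts the bimodal estimate from inverse-variance weighting of the unimodal cues: the combined noise is always at or below the better single modality. A minimal sketch of that prediction (function names and the example noise values are ours, not taken from the paper):

```python
import math

def mle_bimodal_sigma(sigma_a: float, sigma_v: float) -> float:
    """Predicted noise (sigma) of the combined estimate under
    maximum-likelihood (inverse-variance) cue integration."""
    return math.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))

def mle_weights(sigma_a: float, sigma_v: float) -> tuple:
    """Reliability-proportional weights for each modality (sum to 1);
    the noisier cue receives the smaller weight."""
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)
    return w_a, 1.0 - w_a

# Illustrative, made-up unimodal noise values:
sigma_bi = mle_bimodal_sigma(2.0, 1.5)  # predicted bimodal sigma, <= 1.5
```

Comparing the measured bimodal threshold against `sigma_bi` is what "statistically optimal" integration means in this framework.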
Affiliation(s)
- Karl J. Hollensteiner
- Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Florian Pieper
- Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Gerhard Engler
- Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Peter König
- Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany; Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück, Germany
- Andreas K. Engel
- Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
138
Donohue SE, Green JJ, Woldorff MG. The effects of attention on the temporal integration of multisensory stimuli. Front Integr Neurosci 2015; 9:32. [PMID: 25954167] [PMCID: PMC4407588] [DOI: 10.3389/fnint.2015.00032]
Abstract
In unisensory contexts, spatially-focused attention tends to enhance perceptual processing. How attention influences the processing of multisensory stimuli, however, has been of much debate. In some cases, attention has been shown to be important for processes related to the integration of audio-visual stimuli, but in other cases such processes have been reported to occur independently of attention. To address these conflicting results, we performed three experiments to examine how attention interacts with a key facet of multisensory processing: the temporal window of integration (TWI). The first two experiments used a novel cued-spatial-attention version of the bounce/stream illusion, wherein two moving visual stimuli with intersecting paths tend to be perceived as bouncing off rather than streaming through each other when a brief sound occurs near in time. When the task was to report whether the visual stimuli appeared to bounce or stream, attention served to narrow this measure of the TWI and bias perception toward “streaming”. When the participants’ task was to explicitly judge the simultaneity of the sound with the intersection of the moving visual stimuli, however, the results were quite different. Specifically, attention served to mainly widen the TWI, increasing the likelihood of simultaneity perception, while also substantially increasing the simultaneity judgment accuracy when the stimuli were actually physically simultaneous. Finally, in Experiment 3, where the task was to judge the simultaneity of a simple, temporally discrete, flashed visual stimulus and the same brief tone pip, attention had no effect on the measured TWI. These results highlight the flexibility of attention in enhancing multisensory perception and show that the effects of attention on multisensory processing are highly dependent on the task demands and observer goals.
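TWI measures of the kind reported here are typically derived from how the proportion of "simultaneous" responses falls off with audiovisual asynchrony (SOA). As a rough illustration of the construct only (not the authors' fitting procedure), the window's width can be taken as the SOA range over which that proportion exceeds a criterion, interpolated at the crossings:

```python
def twi_width(soas_ms, p_simultaneous, criterion=0.5):
    """Width (ms) of the temporal window of integration: the SOA range
    over which P('simultaneous') stays at or above `criterion`, with
    linear interpolation at the two crossings."""
    pts = list(zip(soas_ms, p_simultaneous))
    above = [s for s, p in pts if p >= criterion]
    if not above:
        return 0.0
    lo, hi = min(above), max(above)
    for (s0, p0), (s1, p1) in zip(pts, pts[1:]):
        if p0 < criterion <= p1:            # rising edge
            lo = s0 + (criterion - p0) / (p1 - p0) * (s1 - s0)
        if p0 >= criterion > p1:            # falling edge
            hi = s0 + (criterion - p0) / (p1 - p0) * (s1 - s0)
    return hi - lo

# Synthetic (invented) judgment proportions peaking at 0 ms SOA:
soas = [-300, -200, -100, 0, 100, 200, 300]
p_sim = [0.1, 0.3, 0.8, 0.95, 0.8, 0.3, 0.1]
width = twi_width(soas, p_sim)  # window width in ms
```

On this reading, attention "widening" or "narrowing" the TWI corresponds to this width growing or shrinking across attentional conditions.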
Affiliation(s)
- Sarah E Donohue
- Center for Cognitive Neuroscience, Duke University, Durham, NC, USA; Department of Neurology, Otto-von-Guericke University Magdeburg, Magdeburg, Germany; Leibniz Institute for Neurobiology, Magdeburg, Germany
- Jessica J Green
- Center for Cognitive Neuroscience, Duke University, Durham, NC, USA; Department of Psychology, University of South Carolina, Columbia, SC, USA
- Marty G Woldorff
- Center for Cognitive Neuroscience, Duke University, Durham, NC, USA; Department of Neurology, Otto-von-Guericke University Magdeburg, Magdeburg, Germany; Leibniz Institute for Neurobiology, Magdeburg, Germany; Department of Psychiatry, Duke University, Durham, NC, USA
139
Kanayama N, Kimura K, Hiraki K. Cortical EEG components that reflect inverse effectiveness during visuotactile integration processing. Brain Res 2015; 1598:18-30. [DOI: 10.1016/j.brainres.2014.12.017]
140
Hauthal N, Debener S, Rach S, Sandmann P, Thorne JD. Visuo-tactile interactions in the congenitally deaf: a behavioral and event-related potential study. Front Integr Neurosci 2015; 8:98. [PMID: 25653602] [PMCID: PMC4300915] [DOI: 10.3389/fnint.2014.00098]
Abstract
Auditory deprivation is known to be accompanied by alterations in visual processing. Yet not much is known about tactile processing and the interplay of the intact sensory modalities in the deaf. We presented visual, tactile, and visuo-tactile stimuli to congenitally deaf and hearing individuals in a speeded detection task. Analyses of multisensory responses showed a redundant signals effect that was attributable to a coactivation mechanism in both groups, although the redundancy gain was less in the deaf. In line with these behavioral results, on a neural level, there were multisensory interactions in both groups that were again weaker in the deaf. In hearing but not deaf participants, somatosensory event-related potential N200 latencies were modulated by simultaneous visual stimulation. A comparison of unisensory responses between groups revealed larger N200 amplitudes for visual and shorter N200 latencies for tactile stimuli in the deaf. Furthermore, P300 amplitudes were also larger in the deaf. This group difference was significant for tactile and approached significance for visual targets. The differences in visual and tactile processing between deaf and hearing participants, however, were not reflected in behavior. Both the behavioral and electroencephalography (EEG) results suggest more pronounced multisensory interaction in hearing than in deaf individuals. Visuo-tactile enhancements could not be explained by perceptual deficiency, but could be partly attributable to inverse effectiveness.
Affiliation(s)
- Nadine Hauthal
- Neuropsychology Lab, Department of Psychology, Cluster of Excellence "Hearing4all," European Medical School, University of Oldenburg, Oldenburg, Germany
- Stefan Debener
- Neuropsychology Lab, Department of Psychology, Cluster of Excellence "Hearing4all," European Medical School, University of Oldenburg, Oldenburg, Germany; Research Center Neurosensory Science, University of Oldenburg, Oldenburg, Germany
- Stefan Rach
- Research Center Neurosensory Science, University of Oldenburg, Oldenburg, Germany; Experimental Psychology Lab, Department of Psychology, European Medical School, University of Oldenburg, Oldenburg, Germany; Department of Epidemiological Methods and Etiologic Research, Leibniz Institute for Prevention Research and Epidemiology - BIPS, Bremen, Germany
- Pascale Sandmann
- Neuropsychology Lab, Department of Psychology, Cluster of Excellence "Hearing4all," European Medical School, University of Oldenburg, Oldenburg, Germany; Department of Neurology, Cluster of Excellence "Hearing4all," Hannover Medical School, Hannover, Germany
- Jeremy D Thorne
- Neuropsychology Lab, Department of Psychology, Cluster of Excellence "Hearing4all," European Medical School, University of Oldenburg, Oldenburg, Germany
141
Teramoto W, Kakuya T. Visuotactile Peripersonal Space in Healthy Humans: Evidence from Crossmodal Congruency and Redundant Target Effects. Interdiscip Inf Sci 2015. [DOI: 10.4036/iis.2015.a.04]
Affiliation(s)
- Wataru Teramoto
- Department of Psychology, Faculty of Letters, Kumamoto University; Department of Computer Science and Systems Engineering, Muroran Institute of Technology
- Tomoaki Kakuya
- Department of Computer Science and Systems Engineering, Muroran Institute of Technology
142
Diederich A, Schomburg A, van Vugt M. Fronto-central theta oscillations are related to oscillations in saccadic response times (SRT): an EEG and behavioral data analysis. PLoS One 2014; 9:e112974. [PMID: 25405521] [PMCID: PMC4236144] [DOI: 10.1371/journal.pone.0112974]
Abstract
The phase reset hypothesis states that the phase of an ongoing neural oscillation, reflecting periodic fluctuations in neural activity between states of high and low excitability, can be shifted by the occurrence of a sensory stimulus so that the phase values become highly constant across trials (Schroeder et al., 2008). EEG/MEG studies have led to the hypothesis that coupled oscillatory activity in primary sensory cortices regulates multisensory processing (Senkowski et al., 2008). Here we followed up on a study in which evidence of phase reset was found using a purely behavioral paradigm, now also including EEG measures. In this paradigm, presentation of an auditory accessory stimulus was followed by a visual target with a stimulus-onset asynchrony (SOA) ranging from 0 to 404 ms in steps of 4 ms. This fine-grained stimulus presentation allowed us to run a spectral analysis on the mean SRT as a function of the SOA, which revealed distinct spectral peaks within a frequency range of 6 to 11 Hz with a mode of 7 Hz. The EEG analysis showed that the auditory stimulus caused a phase reset of 7-Hz brain oscillations in a widespread set of channels. Moreover, the average phase at which the visual target stimulus appeared differed significantly between slow and fast SRT trials. This effect was evident in three different analyses and occurred primarily in frontal and central electrodes.
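The spectral analysis described - treating mean SRT as a signal sampled every 4 ms of SOA - can be sketched with a plain FFT. The baseline and modulation amplitude below are invented for illustration; only the sampling scheme (0-404 ms in 4-ms steps) and the 7-Hz component come from the abstract:

```python
import numpy as np

# Simulated mean SRT as a function of SOA, with a 7-Hz oscillatory
# modulation riding on a flat baseline (values in ms are made up).
soa = np.arange(102) * 0.004                  # 0 .. 0.404 s, fs = 250 Hz
srt = 250 + 5 * np.sin(2 * np.pi * 7 * soa)   # mean SRT per SOA bin

# Spectrum of the demeaned SRT-vs-SOA curve
spectrum = np.abs(np.fft.rfft(srt - srt.mean()))
freqs = np.fft.rfftfreq(soa.size, d=0.004)
peak_hz = freqs[spectrum.argmax()]            # lands near 7 Hz
```

With 102 samples the frequency resolution is about 2.45 Hz, so the peak bin sits close to, not exactly at, 7 Hz; this is why such analyses report a range (6-11 Hz) with a mode rather than a point estimate.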
Affiliation(s)
- Adele Diederich
- School of Humanities and Social Sciences, Jacobs University Bremen, Bremen, Germany
- Annette Schomburg
- School of Humanities and Social Sciences, Jacobs University Bremen, Bremen, Germany
- Marieke van Vugt
- Dept of Artificial Intelligence and Cognitive Engineering, University of Groningen, Groningen, The Netherlands
143
Mohr B, Difrancesco S, Harrington K, Evans S, Pulvermüller F. Changes of right-hemispheric activation after constraint-induced, intensive language action therapy in chronic aphasia: fMRI evidence from auditory semantic processing. Front Hum Neurosci 2014; 8:919. [PMID: 25452721] [PMCID: PMC4231973] [DOI: 10.3389/fnhum.2014.00919]
Abstract
The role of the two hemispheres in the neurorehabilitation of language is still under dispute. This study explored the changes in language-evoked brain activation over a 2-week treatment interval with intensive constraint-induced aphasia therapy (CIAT), also called intensive language action therapy (ILAT). Functional magnetic resonance imaging (fMRI) was used to assess brain activation in perilesional left-hemispheric and in homotopic right-hemispheric areas during passive listening to high- and low-ambiguity sentences and non-speech control stimuli in patients with chronic non-fluent aphasia. All patients demonstrated significant clinical improvements of language functions after therapy. In an event-related fMRI experiment, a significant increase of BOLD signal was manifest in right inferior frontal and temporal areas. This activation increase was stronger for highly ambiguous sentences than for unambiguous ones. These results suggest that the known language improvements brought about by intensive constraint-induced language action therapy rely at least in part on circuits within the right-hemispheric homologs of left-perisylvian language areas, which are most strongly activated in the processing of semantically complex language.
Affiliation(s)
- Bettina Mohr
- Department of Psychiatry, Charité Universitätsmedizin, Campus Benjamin Franklin, Berlin, Germany
- Samuel Evans
- Institute of Cognitive Neuroscience, University College London, London, UK; Medical Research Council, Cognition and Brain Sciences Unit, Cambridge, UK
- Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Berlin, Germany
144
Wallace MT, Stevenson RA. The construct of the multisensory temporal binding window and its dysregulation in developmental disabilities. Neuropsychologia 2014; 64:105-23. [PMID: 25128432] [PMCID: PMC4326640] [DOI: 10.1016/j.neuropsychologia.2014.08.005]
Abstract
Behavior, perception and cognition are strongly shaped by the synthesis of information across the different sensory modalities. Such multisensory integration often results in performance and perceptual benefits that reflect the additional information conferred by having cues from multiple senses providing redundant or complementary information. The spatial and temporal relationships of these cues provide powerful statistical information about how these cues should be integrated or "bound" in order to create a unified perceptual representation. Much recent work has examined the temporal factors that are integral to multisensory processing, much of it focused on the construct of the multisensory temporal binding window - the epoch of time within which stimuli from different modalities are likely to be integrated and perceptually bound. Emerging evidence suggests that this temporal window is altered in a series of neurodevelopmental disorders, including autism, dyslexia and schizophrenia. In addition to their role in sensory processing, these deficits in multisensory temporal function may play an important role in the perceptual and cognitive weaknesses that characterize these clinical disorders. Within this context, focus on improving the acuity of multisensory temporal function may have important implications for the amelioration of the "higher-order" deficits that serve as the defining features of these disorders.
Affiliation(s)
- Mark T Wallace
- Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN 37232, USA; Department of Hearing & Speech Sciences, Vanderbilt University, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry, Vanderbilt University, Nashville, TN, USA
- Ryan A Stevenson
- Department of Psychology, University of Toronto, Toronto, ON, Canada
145
Stein BE, Stanford TR, Rowland BA. Development of multisensory integration from the perspective of the individual neuron. Nat Rev Neurosci 2014; 15:520-35. [PMID: 25158358] [DOI: 10.1038/nrn3742]
Abstract
The ability to use cues from multiple senses in concert is a fundamental aspect of brain function. It maximizes the brain’s use of the information available to it at any given moment and enhances the physiological salience of external events. Because each sense conveys a unique perspective of the external world, synthesizing information across senses affords computational benefits that cannot otherwise be achieved. Multisensory integration not only has substantial survival value but can also create unique experiences that emerge when signals from different sensory channels are bound together. However, neurons in a newborn’s brain are not capable of multisensory integration, and studies in the midbrain have shown that the development of this process is not predetermined. Rather, its emergence and maturation critically depend on cross-modal experiences that alter the underlying neural circuit in such a way that optimizes multisensory integrative capabilities for the environment in which the animal will function.
146
Göschl F, Engel AK, Friese U. Attention modulates visual-tactile interaction in spatial pattern matching. PLoS One 2014; 9:e106896. [PMID: 25203102] [PMCID: PMC4159283] [DOI: 10.1371/journal.pone.0106896]
Abstract
Factors influencing crossmodal interactions are manifold and operate in a stimulus-driven, bottom-up fashion, as well as via top-down control. Here, we evaluate the interplay of stimulus congruence and attention in a visual-tactile task. To this end, we used a matching paradigm requiring the identification of spatial patterns that were concurrently presented visually on a computer screen and haptically to the fingertips by means of a Braille stimulator. Stimulation in our paradigm was always bimodal with only the allocation of attention being manipulated between conditions. In separate blocks of the experiment, participants were instructed to (a) focus on a single modality to detect a specific target pattern, (b) pay attention to both modalities to detect a specific target pattern, or (c) to explicitly evaluate if the patterns in both modalities were congruent or not. For visual as well as tactile targets, congruent stimulus pairs led to quicker and more accurate detection compared to incongruent stimulation. This congruence facilitation effect was more prominent under divided attention. Incongruent stimulation led to behavioral decrements under divided attention as compared to selectively attending a single sensory channel. Additionally, when participants were asked to evaluate congruence explicitly, congruent stimulation was associated with better performance than incongruent stimulation. Our results extend previous findings from audiovisual studies, showing that stimulus congruence also resulted in behavioral improvements in visuotactile pattern matching. The interplay of stimulus processing and attentional control seems to be organized in a highly flexible fashion, with the integration of signals depending on both bottom-up and top-down factors, rather than occurring in an ‘all-or-nothing’ manner.
Collapse
Affiliation(s)
- Florian Göschl
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Andreas K. Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Uwe Friese
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
147
Pomper U, Brincker J, Harwood J, Prikhodko I, Senkowski D. Taking a call is facilitated by the multisensory processing of smartphone vibrations, sounds, and flashes. PLoS One 2014; 9:e103238. [PMID: 25116195 PMCID: PMC4130528 DOI: 10.1371/journal.pone.0103238] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2014] [Accepted: 06/27/2014] [Indexed: 11/17/2022] Open
Abstract
Many electronic devices that we use in our daily lives provide inputs that need to be processed and integrated by our senses. For instance, ringing, vibrating, and flashing indicate incoming calls and messages in smartphones. Whether the presentation of multiple smartphone stimuli simultaneously provides an advantage over the processing of the same stimuli presented in isolation has not yet been investigated. In this behavioral study we examined multisensory processing between visual (V), tactile (T), and auditory (A) stimuli produced by a smartphone. Unisensory V, T, and A stimuli as well as VA, AT, VT, and trisensory VAT stimuli were presented in random order. Participants responded to any stimulus appearance by touching the smartphone screen using the stimulated hand (Experiment 1), or the non-stimulated hand (Experiment 2). We examined violations of the race model to test whether shorter response times to multisensory stimuli exceed probability summations of unisensory stimuli. Significant violations of the race model, indicative of multisensory processing, were found for VA stimuli in both experiments and for VT stimuli in Experiment 1. Across participants, the strength of this effect was not associated with prior learning experience and daily use of smartphones. This indicates that this integration effect, similar to what has been previously reported for the integration of semantically meaningless stimuli, could involve bottom-up driven multisensory processes. Our study demonstrates for the first time that multisensory processing of smartphone stimuli facilitates taking a call. Thus, research on multisensory integration should be taken into consideration when designing electronic devices such as smartphones.
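The race model test described in this abstract compares the multisensory RT distribution against Miller's race model inequality, P(RT ≤ t | VA) ≤ P(RT ≤ t | V) + P(RT ≤ t | A): if bimodal responses are faster than this bound at any time point, probability summation of independent unisensory processes cannot explain the speedup. A minimal sketch of such a test; the function name, quantile grid, and synthetic data are illustrative, not the authors' actual analysis code:

```python
import numpy as np

def race_model_violation(rt_v, rt_a, rt_va, quantiles=np.arange(0.05, 1.0, 0.05)):
    """Evaluate Miller's race model inequality at a grid of quantiles
    of the multisensory RT distribution. Returns, per quantile, the
    difference between the multisensory CDF and the race-model bound
    min(1, F_V(t) + F_A(t)); positive values indicate violations."""
    t = np.quantile(rt_va, quantiles)  # time points from the bimodal distribution
    cdf = lambda rts, ts: np.array([(rts <= x).mean() for x in ts])
    bound = np.minimum(cdf(rt_v, t) + cdf(rt_a, t), 1.0)  # race-model upper bound
    return cdf(rt_va, t) - bound
```

Positive differences, typically in the fast (early-quantile) portion of the distribution, are the signature of multisensory processing the study reports for VA and VT stimuli.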
Affiliation(s)
- Ulrich Pomper
- Department of Psychiatry and Psychotherapy, St. Hedwig Hospital, Charité-Universitätsmedizin Berlin, Berlin, Germany
- Jana Brincker
- Department of Psychiatry and Psychotherapy, St. Hedwig Hospital, Charité-Universitätsmedizin Berlin, Berlin, Germany
- James Harwood
- Department of Psychiatry and Psychotherapy, St. Hedwig Hospital, Charité-Universitätsmedizin Berlin, Berlin, Germany
- Ivan Prikhodko
- Department of Psychiatry and Psychotherapy, St. Hedwig Hospital, Charité-Universitätsmedizin Berlin, Berlin, Germany
- Daniel Senkowski
- Department of Psychiatry and Psychotherapy, St. Hedwig Hospital, Charité-Universitätsmedizin Berlin, Berlin, Germany
148
Godlove JM, Whaite EO, Batista AP. Comparing temporal aspects of visual, tactile, and microstimulation feedback for motor control. J Neural Eng 2014; 11:046025. [PMID: 25028989 PMCID: PMC4156317 DOI: 10.1088/1741-2560/11/4/046025] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
OBJECTIVES Current brain-computer interfaces (BCIs) rely on visual feedback, requiring sustained visual attention to use the device. Improvements to BCIs may stem from the development of an effective way to provide quick feedback independent of vision. Tactile stimuli, either delivered on the skin surface, or directly to the brain via microstimulation in somatosensory cortex, could serve that purpose. We examined the effectiveness of vibrotactile stimuli and microstimulation as a means of non-visual feedback by using a fundamental element of feedback: the ability to react to a stimulus while already in motion. APPROACH Human and monkey subjects performed a center-out reach task which was, on occasion, interrupted with a stimulus cue that instructed a change in reach target. MAIN RESULTS Subjects generally responded faster to tactile cues than to visual cues. However, when we delivered cues via microstimulation in a monkey, its response was slower on average than for both tactile and visual cues. SIGNIFICANCE Tactile and microstimulation feedback can be used to rapidly adjust movements mid-flight. The relatively slow speed of microstimulation is surprising and warrants further investigation. Overall, these results highlight the importance of considering temporal aspects of feedback when designing alternative forms of feedback for BCIs.
Affiliation(s)
- Jason M Godlove
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA. Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
149
Schlesinger JJ, Stevenson RA, Shotwell MS, Wallace MT. Improving pulse oximetry pitch perception with multisensory perceptual training. Anesth Analg 2014; 118:1249-53. [PMID: 24846194 DOI: 10.1213/ane.0000000000000222] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
The pulse oximeter is a critical monitor in anesthesia practice designed to improve patient safety. Here, we present an approach to improve the ability of anesthesiologists to monitor arterial oxygen saturation via pulse oximetry through an audiovisual training process. Fifteen residents' abilities to detect auditory changes in pulse oximetry were measured before and after perceptual training. Training resulted in a 9% (95% confidence interval, 4%-14%, P = 0.0004, t(166) = 3.60) increase in detection accuracy, and a 72-millisecond (95% confidence interval, 40-103 milliseconds, P < 0.0001, t(166) = -4.52) speeding of response times in attentionally demanding and noisy conditions that were designed to simulate an operating room. This study illustrates the benefits of multisensory training and sets the stage for further work to better define the role of perceptual training in clinical anesthesiology.
Affiliation(s)
- Joseph J Schlesinger
- From the *Department of Anesthesiology, Division of Critical Care Medicine, and †Department of Anesthesiology, BH Robbins Scholars Program, ‡Department of Hearing and Speech Sciences, Vanderbilt University Medical Center; §Vanderbilt Brain Institute; ‖Vanderbilt Kennedy Center, Nashville, Tennessee; ¶Department of Psychology, University of Toronto, Toronto, Ontario, Canada; #Department of Biostatistics, Vanderbilt University, Nashville, Tennessee; **Department of Psychology, Vanderbilt University, Nashville, Tennessee; and ††Department of Psychiatry, Vanderbilt University, Nashville, Tennessee
150
Identifying and quantifying multisensory integration: a tutorial review. Brain Topogr 2014; 27:707-30. [PMID: 24722880 DOI: 10.1007/s10548-014-0365-7] [Citation(s) in RCA: 133] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2013] [Accepted: 03/26/2014] [Indexed: 12/19/2022]
Abstract
We process information from the world through multiple senses, and the brain must decide what information belongs together and what information should be segregated. One challenge in studying such multisensory integration is how to quantify the multisensory interactions, a challenge that is amplified by the host of methods that are now used to measure neural, behavioral, and perceptual responses. Many of the measures that have been developed to quantify multisensory integration (and which have been derived from single unit analyses) have been applied across these different methodologies without much consideration for the nature of the process being studied. Here, we provide a review focused on the means with which experimenters quantify multisensory processes and integration across a range of commonly used experimental methodologies. We emphasize the most commonly employed measures, including single- and multiunit responses, local field potentials, functional magnetic resonance imaging, and electroencephalography, along with behavioral measures of detection, accuracy, and response times. In each section, we discuss the different metrics commonly used to quantify multisensory interactions, including the rationale for their use, their advantages, and the drawbacks and caveats associated with them. Also discussed are possible alternatives to the most commonly used metrics.
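Among the single-unit-derived metrics the review surveys are the classic multisensory enhancement (interactive) index, which expresses the combined-modality response as a percentage change relative to the strongest unisensory response, and the additivity criterion, which compares the combined response to the sum of the unisensory responses. A minimal sketch under those standard definitions; the function names are illustrative and the inputs are assumed to be mean response magnitudes (e.g. spike counts):

```python
def multisensory_enhancement(cm, sm_max):
    """Percent enhancement of the combined-modality response (cm)
    relative to the largest unisensory response (sm_max):
    100 * (CM - SMmax) / SMmax. Negative values indicate
    response depression rather than enhancement."""
    return 100.0 * (cm - sm_max) / sm_max

def is_superadditive(cm, sm_a, sm_b):
    """Additivity criterion: the combined response exceeds the
    sum of the two unisensory responses."""
    return cm > sm_a + sm_b
```

As the review cautions, applying such spike-count-derived indices directly to BOLD or EEG amplitudes can be misleading, since those signals pool over large neural populations.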