1
Bräutigam LC, Leuthold H, Mackenzie IG, Mittelstädt V. Exploring behavioral adjustments of proportion congruency manipulations in an Eriksen flanker task with visual and auditory distractor modalities. Mem Cognit 2024; 52:91-114. PMID: 37548866; PMCID: PMC10806239; DOI: 10.3758/s13421-023-01447-x.
Abstract
The present study investigated global behavioral adaptation effects to conflict arising from different distractor modalities. Three experiments were conducted using an Eriksen flanker paradigm with constant visual targets but randomly varying auditory or visual distractors. In Experiment 1, the proportion of congruent to incongruent trials was varied for both distractor modalities, whereas in Experiments 2A and 2B, this proportion congruency (PC) manipulation was applied to trials with one distractor modality (inducer) to test potential behavioral transfer effects to trials with the other distractor modality (diagnostic). In all experiments, mean proportion congruency effects (PCEs) were present in trials with a PC manipulation, but there was no evidence of transfer to diagnostic trials in Experiments 2A and 2B. Distributional analyses (delta plots) provided further evidence for distractor modality-specific global behavioral adaptations by showing differences in the slope of delta plots with visual but not auditory distractors when increasing the ratio of congruent trials. Thus, it is suggested that distractor modalities constrain global behavioral adaptation effects due to the learning of modality-specific memory traces (e.g., distractor-target associations) and/or modality-specific cognitive control processes (e.g., suppression of modality-specific distractor-based activation). Moreover, additional analyses revealed partial transfer of the congruency sequence effect across trials with different distractor modalities, suggesting that distractor modality may differentially affect local and global behavioral adaptations.
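The delta-plot analysis mentioned above has a simple computational core: compare congruent and incongruent RT distributions quantile by quantile and track how the congruency effect changes with overall speed. A minimal illustrative sketch (simulated RTs; the quantile choices are assumptions, not the authors' exact analysis):

```python
import numpy as np

def delta_plot(rt_congruent, rt_incongruent, quantiles=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Quantile-based delta plot: the congruency effect (incongruent - congruent RT)
    as a function of mean RT at each quantile. The slope of this curve is the
    distributional measure referred to in the abstract."""
    q = np.asarray(quantiles)
    qc = np.quantile(rt_congruent, q)
    qi = np.quantile(rt_incongruent, q)
    return (qc + qi) / 2, qi - qc      # x: mean RT per quantile, y: delta

rng = np.random.default_rng(0)
x, delta = delta_plot(rng.normal(450, 60, 300), rng.normal(490, 80, 300))
print(np.round(x), np.round(delta))
```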
Affiliation(s)
- Linda C Bräutigam: Department of Psychology, University of Tübingen, Schleichstrasse 4, 72076 Tübingen, Germany
- Hartmut Leuthold: Department of Psychology, University of Tübingen, Schleichstrasse 4, 72076 Tübingen, Germany
- Ian G Mackenzie: Department of Psychology, University of Tübingen, Schleichstrasse 4, 72076 Tübingen, Germany
- Victor Mittelstädt: Department of Psychology, University of Tübingen, Schleichstrasse 4, 72076 Tübingen, Germany
2
Jones SA, Noppeney U. Multisensory Integration and Causal Inference in Typical and Atypical Populations. Adv Exp Med Biol 2024; 1437:59-76. PMID: 38270853; DOI: 10.1007/978-981-99-7611-9_4.
Abstract
Multisensory perception is critical for effective interaction with the environment, but human responses to multisensory stimuli vary across the lifespan and appear changed in some atypical populations. In this review chapter, we consider multisensory integration within a normative Bayesian framework. We begin by outlining the complex computational challenges of multisensory causal inference and reliability-weighted cue integration, and discuss whether healthy young adults behave in accordance with normative Bayesian models. We then compare their behaviour with that of various other populations (children, older adults, and those with neurological or neuropsychiatric disorders). In particular, we consider whether the differences seen in these groups are due only to changes in their computational parameters (such as sensory noise or perceptual priors), or whether the fundamental computational principles (such as reliability weighting) underlying multisensory perception may also be altered. We conclude by arguing that future research should aim explicitly to differentiate between these possibilities.
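Reliability-weighted cue integration, one of the normative computations discussed in this chapter, has a standard closed form: each cue is weighted by its inverse variance. A minimal sketch, with variable names and noise levels assumed purely for illustration:

```python
import numpy as np

def integrate_cues(x_a, sigma_a, x_v, sigma_v):
    """Reliability-weighted (inverse-variance) fusion of two cues,
    assuming independent Gaussian noise on each estimate."""
    w_a = 1 / sigma_a**2                              # reliability of the auditory cue
    w_v = 1 / sigma_v**2                              # reliability of the visual cue
    s_hat = (w_a * x_a + w_v * x_v) / (w_a + w_v)     # fused estimate
    sigma_fused = np.sqrt(1 / (w_a + w_v))            # fused uncertainty is never larger
    return s_hat, sigma_fused

# Example: a noisy auditory location estimate combined with a sharper visual one
print(integrate_cues(x_a=10.0, sigma_a=4.0, x_v=6.0, sigma_v=2.0))
```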
Affiliation(s)
- Samuel A Jones: Department of Psychology, Nottingham Trent University, Nottingham, UK
- Uta Noppeney: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
3
Grundei M, Schmidt TT, Blankenburg F. A multimodal cortical network of sensory expectation violation revealed by fMRI. Hum Brain Mapp 2023; 44:5871-5891. PMID: 37721377; PMCID: PMC10619418; DOI: 10.1002/hbm.26482.
Abstract
The brain is subjected to multi-modal sensory information in an environment governed by statistical dependencies. Mismatch responses (MMRs), classically recorded with EEG, have provided valuable insights into the brain's processing of regularities and the generation of corresponding sensory predictions. Only a few studies allow for comparisons of MMRs across multiple modalities within a simultaneous sensory stream, and their cross-modal context sensitivity remains unknown. Here, we used a tri-modal version of the roving stimulus paradigm in fMRI to elicit MMRs in the auditory, somatosensory and visual modality. Participants (N = 29) were simultaneously presented with sequences of low and high intensity stimuli in each of the three senses while actively observing the tri-modal input stream and occasionally reporting the intensity of the previous stimulus in a prompted modality. The sequences were based on a probabilistic model defining transition probabilities such that, for each modality, stimuli were more likely to repeat (p = .825) than change (p = .175) and stimulus intensities were equiprobable (p = .5). Moreover, each transition was conditional on the configuration of the other two modalities, giving the sequences global (cross-modal) predictive properties. We identified a shared mismatch network of modality-general inferior frontal and temporo-parietal areas as well as sensory areas, where the connectivity (psychophysiological interaction) between these regions was modulated during mismatch processing. Further, we found deviant responses within the network to be modulated by local stimulus repetition, which suggests highly comparable processing of expectation violation across modalities. Moreover, hierarchically higher regions of the mismatch network in the temporo-parietal area around the intraparietal sulcus were identified to signal cross-modal expectation violation. With the consistency of MMRs across audition, somatosensation and vision, our study provides insights into a shared cortical network of uni- and multi-modal expectation violation in response to sequence regularities.
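The transition statistics described above (repetition probability .825, change probability .175, equiprobable intensities) can be illustrated with a small generative simulation. This sketch omits the cross-modal conditional structure of the actual design and is not the authors' stimulus code; sequence length and names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
P_REPEAT = 0.825          # probability that a modality repeats its intensity
N_TRIALS = 500
MODALITIES = ["auditory", "somatosensory", "visual"]

# Start each modality at a random intensity (0 = low, 1 = high, equiprobable)
state = {m: int(rng.integers(0, 2)) for m in MODALITIES}
sequence = []

for _ in range(N_TRIALS):
    for m in MODALITIES:
        if rng.random() >= P_REPEAT:      # intensity changes with p = .175
            state[m] = 1 - state[m]
    sequence.append(dict(state))

# The empirical repetition rate per modality should approach .825
rep = {m: np.mean([sequence[i][m] == sequence[i - 1][m] for i in range(1, N_TRIALS)])
       for m in MODALITIES}
print(rep)
```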
Affiliation(s)
- Miro Grundei: Neurocomputation and Neuroimaging Unit, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Felix Blankenburg: Neurocomputation and Neuroimaging Unit, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
4
Villalonga MB, Sekuler R. Keep your finger on the pulse: Better rate perception and gap detection with vibrotactile compared to visual stimuli. Atten Percept Psychophys 2023; 85:2004-2017. PMID: 37587355; PMCID: PMC10545646; DOI: 10.3758/s13414-023-02736-y.
Abstract
Important characteristics of the environment can be represented in the temporal pattern of sensory stimulation. In two experiments, we compared accuracy of temporal processing by different modalities. Experiment 1 examined binary categorization of rate for visual (V) or vibrotactile (T) stimulus pulses presented at either 4 or 6 Hz. Inter-pulse intervals were either constant or variable, perturbed by random Gaussian variates. Subjects categorized the rate of T pulse sequences more accurately than V sequences. In V conditions only, subjects disproportionately tended to mis-categorize 4-Hz pulse rates, for all but the most variable sequences. In Experiment 2, we compared gap detection thresholds across modalities, using the same V and T pulses from Experiment 1 as well as bimodal (VT) pulses. Visual gap detection thresholds were larger (3×) than tactile thresholds. Additionally, performance with VT stimuli seemed to be nearly completely dominated by their T components. Together, these results suggest (i) that vibrotactile temporal acuity surpasses visual temporal acuity, and (ii) that vibrotactile stimulation has considerable, untapped potential to convey temporal information like that needed for eyes-free alerting signals.
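A minimal sketch of how such Gaussian-perturbed pulse trains can be generated (nominal 4 Hz vs. 6 Hz rates as above; the jitter magnitude and number of pulses are assumptions for illustration, not the authors' stimulus parameters):

```python
import numpy as np

def pulse_train(rate_hz, n_pulses=8, jitter_sd=0.0, rng=None):
    """Pulse onset times (s) for a nominal rate, with optional Gaussian
    perturbation of each inter-pulse interval (temporal-domain noise)."""
    rng = rng or np.random.default_rng()
    ipi = 1.0 / rate_hz                                    # nominal inter-pulse interval
    ipis = ipi + rng.normal(0.0, jitter_sd, n_pulses - 1)  # perturb each interval independently
    ipis = np.clip(ipis, 0.01, None)                       # keep intervals physically plausible
    return np.concatenate([[0.0], np.cumsum(ipis)])

rng = np.random.default_rng(1)
print(pulse_train(4, jitter_sd=0.02, rng=rng))   # ~4 Hz train
print(pulse_train(6, jitter_sd=0.02, rng=rng))   # ~6 Hz train
```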
Affiliation(s)
- Robert Sekuler: Department of Psychology, Brandeis University, Waltham, MA, USA; Program in Neuroscience, Brandeis University, Waltham, MA, USA
5
Stanley BM, Chen YC, Maurer D, Lewis TL, Shore DI. Developmental changes in audiotactile event perception. J Exp Child Psychol 2023; 230:105629. PMID: 36731280; DOI: 10.1016/j.jecp.2023.105629.
Abstract
The fission and fusion illusions provide measures of multisensory integration. The sound-induced tap fission illusion occurs when a tap is paired with two distractor sounds, resulting in the perception of two taps; the sound-induced tap fusion illusion occurs when two taps are paired with a single sound, resulting in the perception of a single tap. Using these illusions, we measured integration in three groups of children (9-, 11-, and 13-year-olds) and compared them with a group of adults. Based on accuracy, we derived a measure of magnitude of illusion and used a signal detection analysis to estimate perceptual discriminability and decisional criterion. All age groups showed a significant fission illusion, whereas only the three groups of children showed a significant fusion illusion. When compared with adults, the 9-year-olds showed larger fission and fusion illusions (i.e., reduced discriminability and greater bias), whereas the 11-year-olds were adult-like for fission but showed some differences for fusion: significantly worse discriminability and marginally greater magnitude and criterion. The 13-year-olds were adult-like on all measures. Based on the pattern of data, we speculate that the developmental trajectories for fission and fusion differ. We discuss these developmental results in the context of three non-mutually exclusive theoretical frameworks: sensory dominance, maximum likelihood estimation, and causal inference.
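The signal detection analysis referred to above typically derives discriminability (d') and criterion (c) from hit and false-alarm rates. A minimal equal-variance sketch, with made-up counts and an assumed mapping of responses to hits and false alarms (not the authors' analysis code):

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian SDT: discriminability d' and criterion c.
    A log-linear correction avoids infinite z-scores at rates of 0 or 1."""
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = norm.ppf(h) - norm.ppf(f)
    criterion = -0.5 * (norm.ppf(h) + norm.ppf(f))
    return d_prime, criterion

# Example for a fission block: "two taps" responses to real double taps vs.
# to single taps paired with two sounds (counts invented for illustration)
print(sdt_measures(hits=40, misses=10, false_alarms=25, correct_rejections=25))
```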
Affiliation(s)
- Brendan M Stanley: Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4K1, Canada
- Yi-Chuan Chen: Department of Medicine, Mackay Medical College, New Taipei City 252, Taiwan
- Daphne Maurer: Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4K1, Canada
- Terri L Lewis: Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4K1, Canada
- David I Shore: Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4K1, Canada; Multisensory Perception Laboratory, Division of Multisensory Mind Inc., Hamilton, Ontario L8S 4K1, Canada
6
Sun Y, Fu Q. How do irrelevant stimuli from another modality influence responses to the targets in a same-different task. Conscious Cogn 2023; 107:103455. PMID: 36586291; DOI: 10.1016/j.concog.2022.103455.
Abstract
It remains unclear whether multisensory interaction can implicitly occur at the abstract level. To address this issue, a same-different task was used to select comparable images and sounds in Experiment 1. Then, stimuli with various levels of discrimination difficulty were adopted in a modified same-different task in Experiments 2, 3, and 4. The results showed that a consistency effect could be observed in the testing phase only when the irrelevant stimuli were easily distinguishable. Moreover, when easily distinguishable irrelevant stimuli were simultaneously presented with difficult target stimuli in the training phase, irrelevant auditory stimuli facilitated responses to visual targets whereas irrelevant visual stimuli interfered with responses to auditory targets, indicating an asymmetry between the roles of vision and audition in abstract multisensory integration. The results suggest that abstract multisensory information can be implicitly integrated and that the inverse effectiveness principle might not apply to high-level processing in abstract multisensory integration.
Affiliation(s)
- Ying Sun: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Qiufang Fu: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
7
Sound-induced flash illusion is modulated by the depth of auditory stimuli: Evidence from younger and older adults. Atten Percept Psychophys 2022; 84:2040-2050. DOI: 10.3758/s13414-022-02537-9.
8
Hojatmadani M, Reed KB. The Role of Spatial and Modality Cues on Visual and Haptic Memory. IEEE Trans Haptics 2022; 15:154-163. PMID: 34415838; DOI: 10.1109/toh.2021.3106271.
Abstract
This study investigates the ability to remember a sequence of stimuli in two basic conditions: haptic and visual. Participants rely on a combination of modal and/or spatial information to perform a memory task. For this purpose, an experimental setup was developed based on the "Simon Says" memory game. Individuals receive a series of sensory stimuli and need to remember the sequence and repeat it. The stimuli in visual conditions are colored or white lights, and the stimuli in haptic conditions are vibration, hot, cold, and skin stretch. Results demonstrate that participants retained longer sequences in spatial conditions compared to the modal conditions. It is also demonstrated that participants performed better in visual conditions compared to haptic conditions. Participants were able to retain more complex spatial patterns and remember them faster in visual conditions compared to haptic conditions. A spatial difficulty ranking system was developed, indicating how easily each spatial pattern can be retained visually and haptically.
9
Long-term training reduces the responses to the sound-induced flash illusion. Atten Percept Psychophys 2021; 84:529-539. PMID: 34518970; DOI: 10.3758/s13414-021-02363-5.
Abstract
The sound-induced flash illusion (SiFI) is a robust, auditory-dominated phenomenon that is used as a reliable indicator of multisensory integration. Previous studies have indicated that the SiFI effect is correlated with perceptual sensitivity. However, to date, there is no consensus regarding how it changes with long-term training. The present study adopted the classic SiFI paradigm with feedback training to investigate the effect of a week of training on the SiFI effect. Both the training group and the control group completed a pretest and a posttest, but only the training group completed the 7-day behavioral training in between. The results showed that (1) long-term training reduced both the fission and fusion illusions by improving perceptual sensitivity, and (2) a "plateau effect" emerged during training, with performance tending to stabilize by the fifth day. These findings demonstrate that the SiFI can be modified by long-term training that improves perceptual sensitivity, especially for the fission illusion. The present study therefore extends work on perceptual training in the SiFI domain and provides evidence that SiFI-based training could be used as an intervention to improve the efficiency of multisensory integration.
10
Peng C, Peng W, Feng W, Zhang Y, Xiao J, Wang D. EEG Correlates of Sustained Attention Variability during Discrete Multi-finger Force Control Tasks. IEEE Trans Haptics 2021; 14:526-537. PMID: 33523817; DOI: 10.1109/toh.2021.3055842.
Abstract
The neurophysiological characteristics of sustained attention states are unclear in discrete multi-finger force control tasks. In this article, we developed an immersive visuo-haptic task for conducting stimulus-response measurements. Visual cues were randomly provided to signify the required amplitude and tolerance of fingertip force. Participants were required to respond to the visual cues by pressing force transducers with their fingertips. Response time variation was taken as a behavioral measure of sustained attention states during the task. Using z-scoring over time, the 50% of trials with low response-time variability were classified as the optimal state and the remaining high-variability trials as the suboptimal state. A 64-channel electroencephalogram (EEG) acquisition system was used to record brain activity during the tasks. The haptics-elicited potential amplitude at 20-40 ms latency over the frontal-central region decreased significantly in the optimal state. Furthermore, alpha-band (8-13 Hz) power was significantly suppressed in the frontal-central, right temporal, and parietal regions in the optimal state. Taken together, we identified neuroelectrophysiological features associated with sustained attention during multi-finger force control tasks, which could potentially be used in the development of closed-loop attention detection and training systems exploiting haptic interaction.
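A minimal sketch of the kind of variability-based state classification described above (z-scored response times, median split into low- and high-variability trials). The window size and the exact variability measure are assumptions, not the authors' pipeline:

```python
import numpy as np

def classify_attention_states(rts, win=9):
    """Median split of trials by local response-time variability:
    low-variability half -> 'optimal', high-variability half -> 'suboptimal'."""
    rts = np.asarray(rts, dtype=float)
    z = (rts - rts.mean()) / rts.std()                          # z-score RTs over time
    half = win // 2
    var = np.array([z[max(0, i - half): i + half + 1].var()     # local variability
                    for i in range(len(z))])
    return np.where(var <= np.median(var), "optimal", "suboptimal")

rng = np.random.default_rng(0)
print(classify_attention_states(rng.lognormal(-0.5, 0.3, 20)))
```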
11
The development of visuotactile congruency effects for sequences of events. J Exp Child Psychol 2021; 207:105094. PMID: 33714049; DOI: 10.1016/j.jecp.2021.105094.
Abstract
Sensitivity to the temporal coherence of visual and tactile signals increases perceptual reliability and is evident during infancy. However, it is not clear how, or whether, bidirectional visuotactile interactions change across childhood. Furthermore, no study has explored whether viewing a body modulates how children perceive visuotactile sequences of events. Here, children aged 5-7 years (n = 19), 8 and 9 years (n = 21), and 10-12 years (n = 24) and adults (n = 20) discriminated the number of target events (one or two) in a task-relevant modality (touch or vision) and ignored distractors (one or two) in the opposing modality. While participants performed the task, an image of either a hand or an object was presented. Children aged 5-7 years and 8 and 9 years showed larger crossmodal interference from visual distractors when discriminating tactile targets than the converse. Across age groups, this was strongest when two visual distractors were presented with one tactile target, implying a "fission-like" crossmodal effect (perceiving one event as two events). There was no influence of visual context (viewing a hand or non-hand image) on visuotactile interactions for any age group. Our results suggest robust interference from discontinuous visual information on tactile discrimination of sequences of events during early and middle childhood. These findings are discussed with respect to age-related changes in sensory dominance, selective attention, and multisensory processing.
12
Villalonga MB, Sussman RF, Sekuler R. Feeling the Beat (and Seeing It, Too): Vibrotactile, Visual, and Bimodal Rate Discrimination. Multisens Res 2020; 33:31-59. PMID: 31648198; DOI: 10.1163/22134808-20191413.
Abstract
Beats are among the basic units of perceptual experience. Produced by regular, intermittent stimulation, beats are most commonly associated with audition, but the experience of a beat can result from stimulation in other modalities as well. We studied the robustness of visual, vibrotactile, and bimodal signals as sources of beat perception. Subjects attempted to discriminate between pulse trains delivered at 3 Hz or at 6 Hz. To investigate signal robustness, we intentionally degraded signals on two-thirds of the trials using temporal-domain noise. On these trials, inter-pulse intervals (IPIs) were stochastic, perturbed independently from the nominal IPI by random samples from zero-mean Gaussian distributions with different variances. These perturbations produced directional changes in the IPIs, which either increased or decreased the likelihood of confusing the two pulse rates. In addition to affording an assay of signal robustness, this paradigm made it possible to gauge how subjects' judgments were influenced by successive IPIs. Logistic regression revealed a strong primacy effect: subjects' decisions were disproportionately influenced by a trial's initial IPIs. Response times and parameter estimates from drift-diffusion modeling showed that information accumulates more rapidly with bimodal stimulation than with either unimodal stimulus alone. Analysis of error rates within each condition suggested consistently optimal decision making, even with increased IPI variability. Finally, beat information delivered by vibrotactile signals proved just as robust as information conveyed by visual signals, confirming vibrotactile stimulation's potential as a communication channel.
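The primacy analysis described above can be sketched as a trial-wise logistic regression of the binary rate decision on the successive IPIs, with disproportionately large weights on the early intervals indicating primacy. A simulated illustration (not the authors' analysis code; all parameters are assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_ipis = 400, 5

# Simulate jittered inter-pulse intervals around 1/6 s (6 Hz) or 1/4 s (4 Hz)
rate_is_6hz = rng.integers(0, 2, n_trials)
nominal = np.where(rate_is_6hz, 1 / 6, 1 / 4)
ipis = nominal[:, None] + rng.normal(0, 0.02, (n_trials, n_ipis))

# Simulated observer weights early intervals more heavily (a primacy effect)
weights = np.array([3.0, 2.0, 1.0, 0.5, 0.25])
p_resp_6hz = 1 / (1 + np.exp((ipis - 0.208) @ weights * 40))
responses = rng.random(n_trials) < p_resp_6hz

# Fitted coefficients recover the relative influence of each successive IPI
model = LogisticRegression().fit(ipis, responses)
print(model.coef_.round(2))
```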
Affiliation(s)
- Rachel F Sussman: Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
- Robert Sekuler: Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
13
Arikan BE, van Kemenade BM, Podranski K, Steinsträter O, Straube B, Kircher T. Perceiving your hand moving: BOLD suppression in sensory cortices and the role of the cerebellum in the detection of feedback delays. J Vis 2020; 19:4. PMID: 31826249; DOI: 10.1167/19.14.4.
Abstract
Sensory consequences of self-generated as opposed to externally generated movements are perceived as less intense and lead to less neural activity in corresponding sensory cortices, presumably due to predictive mechanisms. Self-generated sensory inputs have been mostly studied in a single modality, using abstract feedback, with control conditions not differentiating efferent from reafferent feedback. Here we investigated the neural processing of (a) naturalistic action-feedback associations of (b) self-generated versus externally generated movements, and (c) how an additional (auditory) modality influences neural processing and detection of delays. Participants executed wrist movements using a passive movement device (PMD) as they watched their movements in real time or with variable delays (0-417 ms). The task was to judge whether there was a delay between the movement and its visual feedback. In the externally generated condition, movements were induced by the PMD to disentangle efferent from reafferent feedback. Half of the trials involved auditory beeps coupled to the onset of the visual feedback. We found reduced BOLD activity in visual, auditory, and somatosensory areas during self-generated compared with externally generated movements in unimodal and bimodal conditions. Anterior and posterior cerebellar areas were engaged for trials in which action-feedback delays were detected for self-generated movements. Specifically, the left cerebellar lobule IX was functionally connected with the right superior occipital gyrus. The results indicate efference copy-based predictive mechanisms specific to self-generated movements, leading to BOLD suppression in sensory areas. In addition, our results support the cerebellum's role in the detection of temporal prediction errors during our actions and their consequences.
Affiliation(s)
- B Ezgi Arikan: Department of Psychology, Justus-Liebig University Giessen, Giessen, Germany
- Bianca M van Kemenade: Department of Psychiatry and Psychotherapy, Philipps University Marburg, Marburg, Germany
- Kornelius Podranski: Department of Psychiatry and Psychotherapy, Philipps University Marburg, Marburg, Germany; Core Facility Brain Imaging, Faculty of Medicine, Philipps University Marburg, Marburg, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Olaf Steinsträter: Department of Psychiatry and Psychotherapy, Philipps University Marburg, Marburg, Germany; Core Facility Brain Imaging, Faculty of Medicine, Philipps University Marburg, Marburg, Germany
- Benjamin Straube: Department of Psychiatry and Psychotherapy, Philipps University Marburg, Marburg, Germany
- Tilo Kircher: Department of Psychiatry and Psychotherapy, Philipps University Marburg, Marburg, Germany
14
Sun Y, Liu X, Li B, Sava-Segal C, Wang A, Zhang M. Effects of Repetition Suppression on Sound Induced Flash Illusion With Aging. Front Psychol 2020; 11:216. PMID: 32153456; PMCID: PMC7047336; DOI: 10.3389/fpsyg.2020.00216.
Abstract
The sound-induced flash illusion (SiFI) is a classical auditory-dominated multisensory integration phenomenon in which the observer misperceives the number of visual flashes due to the simultaneous presentation of a different number of auditory beeps. Although the SiFI has been documented to correlate with perceptual sensitivity, to date there is no consensus on how this correspondence changes with aging. The present study was based on the SiFI paradigm (Shams et al., 2000), adding repeated auditory stimuli prior to the appearance of the audiovisual stimuli to investigate the effects of repetition suppression (RS) on the SiFI with aging. The repeated auditory stimuli consisted of one or two of the same auditory stimuli presented twice in succession, which were then followed by the audiovisual stimuli. By comparing the illusions in older and young adults, we aimed to explore the influence of aging on how auditory RS affects the SiFI. The results showed that both age groups exhibited SiFI effects; however, RS affected the fission and fusion illusions differently in the two groups. The illusion effect in older adults was weaker than in young adults. Specifically, RS affected only fission illusions in older adults but both fission and fusion illusions in young adults. Thus, the present study indicates that decreased perceptual sensitivity induced by auditory RS can weaken the SiFI in multisensory integration and that older adults are more susceptible to RS, perceiving the SiFI only weakly under auditory RS.
Affiliation(s)
- Yawen Sun: Department of Psychology, Soochow University, Suzhou, China
- Xiaole Liu: Department of Psychology, Soochow University, Suzhou, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Biqin Li: Laboratory of Psychology and Cognition Science, School of Psychology, Jiangxi Normal University, Nanchang, China
- Clara Sava-Segal: Department of Neurology & Neurological Sciences, Stanford University, Palo Alto, CA, United States
- Aijun Wang: Department of Psychology, Soochow University, Suzhou, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Ming Zhang: Department of Psychology, Soochow University, Suzhou, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
15
Association of Subclinical Neck Pain With Altered Multisensory Integration at Baseline and 4-Week Follow-up Relative to Asymptomatic Controls. J Manipulative Physiol Ther 2019; 41:81-91. PMID: 29482829; DOI: 10.1016/j.jmpt.2017.09.003.
Abstract
OBJECTIVE The purpose of this study was to test whether people with subclinical neck pain (SCNP) had altered visual, auditory, and multisensory response times, and whether these findings were consistent over time. METHODS Twenty-five volunteers (12 SCNP and 13 asymptomatic controls) were recruited from a Canadian university student population. A 2-alternative forced-choice discrimination task with multisensory redundancy was used to measure response times to the presentation of visual (color filled circles), auditory (verbalization of the color words, eg, red or blue), and multisensory (simultaneous audiovisual) stimuli at baseline and 4 weeks later. RESULTS The SCNP group was slower at both visual and multisensory tasks (P = .046, P = .020, respectively), with no change over 4 weeks. Auditory response times improved slightly but significantly after 4 weeks (P = .050) with no group difference. CONCLUSIONS This is the first study to report that people with SCNP have slower visual and multisensory response times than asymptomatic individuals. These differences persist over 4 weeks, suggesting that the multisensory technique is reliable and that these differences in the SCNP group do not improve on their own in the absence of treatment.
16
Multisensory Enhancement of Odor Object Processing in Primary Olfactory Cortex. Neuroscience 2019; 418:254-265. DOI: 10.1016/j.neuroscience.2019.08.040.
17
Ohla K, Höchenberger R, Freiherr J, Lundström JN. Superadditive and Subadditive Neural Processing of Dynamic Auditory-Visual Objects in the Presence of Congruent Odors. Chem Senses 2019; 43:35-44. PMID: 29045615; DOI: 10.1093/chemse/bjx068.
Abstract
Our sensory experiences comprise a variety of different inputs at any given time. Some of these experiences are unmistakable, others are ambiguous and profit from additional sensory information. Here, we explored whether the presence of a congruent odor influences the neural processing and sensory interaction of audio-visual objects using degraded videos (V) and sounds (A) of dynamic objects in unimodal and bimodal (AV) combinations without or with a congruent odor (VO, AO, AVO). Analyses of EEG data revealed superadditive and subadditive interaction effects. The topography and timing of these effects suggest evaluative rather than sensory processes as the underlying cause. Together, the results suggest that the mere presence of an odor affects the processing of A, V, and AV objects differently while multisensory interactions of AV and AVO objects have common neuronal mechanisms pointing to a robust, modality-independent network for the processing of redundant sensory information.
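Super- and subadditivity in this literature are usually assessed with the additive-model contrast, comparing the bimodal (or odor-added) response against the sum of the unimodal responses. A minimal sketch with made-up amplitudes, not values from the study:

```python
import numpy as np

def interaction_index(av, a, v):
    """Additive-model test: bimodal response minus the sum of unimodal responses.
    Positive values indicate superadditivity, negative values subadditivity."""
    return np.asarray(av) - (np.asarray(a) + np.asarray(v))

# Toy ERP amplitudes (µV) at one electrode/time point for three conditions
print(interaction_index(av=5.2, a=2.0, v=2.5))   # +0.7 -> superadditive
print(interaction_index(av=3.8, a=2.0, v=2.5))   # -0.7 -> subadditive
```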
Affiliation(s)
- Kathrin Ohla: German Institute of Human Nutrition Potsdam-Rehbruecke, Germany; Monell Chemical Senses Center, USA
- Jessica Freiherr: Uniklinik RWTH Aachen, Diagnostic and Interventional Neuroradiology, Germany; Fraunhofer-Institut für Verfahrenstechnik und Verpackung IVV, Sensory Analytics, Germany
- Johan N Lundström: Monell Chemical Senses Center, USA; Department of Clinical Neuroscience, Karolinska Institutet, Sweden
18
Abstract
Many skills rely on performing noisy mental computations on noisy sensory measurements. Bayesian models suggest that humans compensate for measurement noise and reduce behavioral variability by biasing perception toward prior expectations. Whether a similar strategy is employed to compensate for noise in downstream mental and sensorimotor computations is not known. We tested humans in a battery of tasks and found that tasks which involved more complex mental transformations resulted in increased bias, suggesting that humans are able to mitigate the effect of noise in both sensorimotor and mental transformations. These results indicate that humans delay inference in order to account for both measurement noise and noise in downstream computations. Humans compensate for sensory noise by biasing sensory estimates toward prior expectations, as predicted by models of Bayesian inference. Here, the authors show that humans perform ‘late inference’ downstream of sensory processing to mitigate the effects of noisy internal mental computations.
19
Abstract
Integration of sensory information across modalities can confer behavioral advantages by decreasing perceptual ambiguity, speeding reaction times, and increasing detection accuracy relative to unisensory stimuli. We asked how combinations of auditory, visual, and somatosensory events alter response time. Participants detected stimulation on one side of space (right or left) while ignoring stimulation on the other side of space. There were seven types of suprathreshold stimuli: auditory (tones from speakers), visual (sinusoidal contrast gratings), somatosensory (fingertip vibrations), audio-visual, somato-visual, audio-somatosensory, and audio-somato-visual. Response enhancement and race model analyses confirmed that bisensory and trisensory trials yielded faster response times than unisensory trials. Exploratory analysis of individual differences in intersensory facilitation revealed that participants fit into one of two groups: those who benefitted from trisensory information and those who did not.
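The race model analysis mentioned above is commonly implemented as Miller's race-model inequality, which bounds the redundant-target RT distribution by the sum of the unisensory distributions. A minimal sketch with simulated RTs; the time grid and distributions are assumptions for illustration, not data from the study:

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Miller's race-model inequality: the bimodal CDF must not exceed the sum
    of the unimodal CDFs. Positive values indicate violations, i.e. evidence
    for genuine multisensory (co-activation) facilitation."""
    cdf = lambda rts, t: np.mean(np.asarray(rts)[:, None] <= t, axis=0)
    bound = np.clip(cdf(rt_a, t_grid) + cdf(rt_v, t_grid), 0, 1)
    return cdf(rt_av, t_grid) - bound

rng = np.random.default_rng(0)
t = np.arange(150, 501, 10)           # ms
rt_a = rng.normal(330, 40, 200)       # simulated unisensory RTs
rt_v = rng.normal(350, 40, 200)
rt_av = rng.normal(280, 35, 200)      # faster bimodal RTs
print(race_model_violation(rt_a, rt_v, rt_av, t).max())
```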
20
Central–peripheral differences in audiovisual and visuotactile event perception. Atten Percept Psychophys 2017; 79:2552-2563. DOI: 10.3758/s13414-017-1396-4.
21
Newly acquired audio-visual associations bias perception in binocular rivalry. Vision Res 2017; 133:121-129. DOI: 10.1016/j.visres.2017.02.001.
22
23
Pratt H, Bleich N, Mittelman N. Spatio-temporal distribution of brain activity associated with audio-visually congruent and incongruent speech and the McGurk Effect. Brain Behav 2015; 5:e00407. PMID: 26664791; PMCID: PMC4667754; DOI: 10.1002/brb3.407.
Abstract
INTRODUCTION Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels were studied, along with the effects of audio-visual congruence/incongruence, with emphasis on the McGurk effect. The McGurk effect occurs when a clearly audible syllable with one consonant is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant, and the resulting percept is a syllable with a consonant other than the one presented auditorily. METHODS Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated, and effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. RESULTS Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (<200 msec) processing of the consonant was overall prominent in the left hemisphere, except for right-hemisphere prominence in superior parietal cortex and secondary visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, followed by increased activity around 100 msec that decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and was increased only to congruent audio-visual stimuli 30-70 msec following consonant onset. CONCLUSIONS The results suggest left-hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration. Initially (30-70 msec), subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join in the processing. Resolution of incongruent audio-visual inputs is then attempted, and if successful, McGurk perception occurs and cortical activity in the left hemisphere further increases between 170 and 260 msec.
Affiliation(s)
- Hillel Pratt: Evoked Potentials Laboratory, Technion - Israel Institute of Technology, Haifa 32000, Israel
- Naomi Bleich: Evoked Potentials Laboratory, Technion - Israel Institute of Technology, Haifa 32000, Israel
- Nomi Mittelman: Evoked Potentials Laboratory, Technion - Israel Institute of Technology, Haifa 32000, Israel
24
Alavash M, Hilgetag CC, Thiel CM, Gießing C. Persistency and flexibility of complex brain networks underlie dual-task interference. Hum Brain Mapp 2015; 36:3542-3562. PMID: 26095953; PMCID: PMC6869626; DOI: 10.1002/hbm.22861.
Abstract
Previous studies on multitasking suggest that performance decline during concurrent task processing arises from interfering brain modules. Here, we used graph-theoretical network analysis to define functional brain modules and relate the modular organization of complex brain networks to behavioral dual-task costs. Based on resting-state and task fMRI we explored two organizational aspects potentially associated with behavioral interference when human subjects performed a visuospatial and speech task simultaneously: the topological overlap between persistent single-task modules, and the flexibility of single-task modules in adaptation to the dual-task condition. Participants showed a significant decline in visuospatial accuracy in the dual-task compared with single visuospatial task. Global analysis of topological similarity between modules revealed that the overlap between single-task modules significantly correlated with the decline in visuospatial accuracy. Subjects with larger overlap between single-task modules showed higher behavioral interference. Furthermore, lower flexible reconfiguration of single-task modules in adaptation to the dual-task condition significantly correlated with larger decline in visuospatial accuracy. Subjects with lower modular flexibility showed higher behavioral interference. At the regional level, higher overlap between single-task modules and less modular flexibility in the somatomotor cortex positively correlated with the decline in visuospatial accuracy. Additionally, higher modular flexibility in cingulate and frontal control areas and lower flexibility in right-lateralized nodes comprising the middle occipital and superior temporal gyri supported dual-tasking. Our results suggest that persistency and flexibility of brain modules are important determinants of dual-task costs. We conclude that efficient dual-tasking benefits from a specific balance between flexibility and rigidity of functional brain modules.
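The overlap between modular partitions can be quantified in several ways; the sketch below uses a generic partition-similarity index (adjusted Rand) on toy module labels, which is only a stand-in for the specific topological-overlap and flexibility measures used in the paper:

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

# Module assignments of the same brain nodes under two single-task conditions
# (toy labels; the paper derives modules from graph-theoretical community detection)
modules_task1 = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2])
modules_task2 = np.array([0, 0, 1, 1, 1, 2, 2, 2, 0, 2])

# One simple partition-similarity measure:
# 1.0 = identical module structure, ~0 = no more overlap than expected by chance
print(adjusted_rand_score(modules_task1, modules_task2))
```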
Affiliation(s)
- Mohsen Alavash: Department of Psychology, Biological Psychology Lab, European Medical School, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany
- Claus C. Hilgetag: Department of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany; Department of Health Sciences, Boston University, Boston, Massachusetts 02215
- Christiane M. Thiel: Department of Psychology, Biological Psychology Lab, European Medical School, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany; Research Center Neurosensory Science, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany
- Carsten Gießing: Department of Psychology, Biological Psychology Lab, European Medical School, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany; Research Center Neurosensory Science, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany
25
Sella I, Reiner M, Pratt H. Natural stimuli from three coherent modalities enhance behavioral responses and electrophysiological cortical activity in humans. Int J Psychophysiol 2013; 93:45-55. PMID: 24315926; DOI: 10.1016/j.ijpsycho.2013.11.003.
Abstract
Cues that involve a number of sensory modalities are processed in the brain in an interactive multimodal manner rather than independently for each modality. We studied multimodal integration in a natural, yet fully controlled scene, implemented as an interactive game in an auditory-haptic-visual virtual environment. In this imitation of a natural scene, the targets of perception were ecologically valid uni-, bi- and tri-modal manifestations of a simple event: a ball hitting a wall. Subjects were engaged in the game while their behavioral and early cortical electrophysiological responses were measured. Behavioral results confirmed that tri-modal cues were detected faster and more accurately than bi-modal cues, which, likewise, showed advantages over unimodal responses. Event-Related Potentials (ERPs) were recorded, and the first 200 ms following stimulus onset was analyzed to reveal the latencies of cortical multimodal interactions as estimated by sLORETA. These electrophysiological findings indicated bi-modal as well as tri-modal interactions beginning very early (~30 ms), uniquely for each multimodal combination. The results suggest that early cortical multimodal integration accelerates cortical activity and, in turn, enhances performance measures. This acceleration registers on the scalp as sub-additive cortical activation.
Affiliation(s)
- Irit Sella: The Virtual Reality and NeuroCognition Laboratory, Technion - Israel Institute of Technology, Israel; Evoked Potentials Laboratory, Technion - Israel Institute of Technology, Israel
- Miriam Reiner: The Virtual Reality and NeuroCognition Laboratory, Technion - Israel Institute of Technology, Israel
- Hillel Pratt: Evoked Potentials Laboratory, Technion - Israel Institute of Technology, Israel
26
Wang W, Hu L, Cui H, Xie X, Hu Y. Spatio-temporal measures of electrophysiological correlates for behavioral multisensory enhancement during visual, auditory and somatosensory stimulation: A behavioral and ERP study. Neurosci Bull 2013; 29:715-724. PMID: 24293020; DOI: 10.1007/s12264-013-1386-z.
Abstract
Multisensory enhancement, as a facilitation phenomenon, is responsible for superior behavioral performance when an individual is responding to cross-modal versus modality-specific stimuli. However, the event-related potential (ERP) counterparts of behavioral multisensory enhancement are not well known. We recorded ERPs and behavioral data from 14 healthy volunteers with three types of target stimuli (modality-specific, bimodal, and trimodal) to examine the spatio-temporal electrophysiological characteristics of multisensory enhancement by comparing behavioral data with ERPs. We found a strong correlation between P3 latency and behavioral performance in terms of reaction time (RT) (R = 0.98, P <0.001), suggesting that P3 latency constitutes a temporal measure of behavioral multisensory enhancement. In addition, a fast RT and short P3 latency were found when comparing the modality-specific visual target with the modality-specific auditory and somatosensory targets. Our results indicate that behavioral multisensory enhancement can be identified by the latency and source distribution of the P3 component. These findings may advance our understanding of the neuronal mechanisms of multisensory enhancement.
Affiliation(s)
- Wuyi Wang: School of Precision Instruments and Opto-Electronics Engineering, Tianjin University, Tianjin, 300072, China
27
van Erp JBF, Philippi TG, Werkhoven P. Observers can reliably identify illusory flashes in the illusory flash paradigm. Exp Brain Res 2013; 226:73-79. DOI: 10.1007/s00221-013-3413-8.
28
Huang J, Gamble D, Sarnlertsophon K, Wang X, Hsiao S. Integration of auditory and tactile inputs in musical meter perception. Adv Exp Med Biol 2013; 787:453-461. PMID: 23716252; PMCID: PMC4324720; DOI: 10.1007/978-1-4614-1590-9_50.
Abstract
Musicians often say that they not only hear but also "feel" music. To explore the contribution of tactile information to "feeling" music, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter-recognition task. Subjects discriminated between two types of sequences, "duple" (march-like rhythms) and "triple" (waltz-like rhythms), presented in three conditions: (1) unimodal inputs (auditory or tactile alone); (2) various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts; and (3) bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70-85 %) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70-90 %) when all of the metrically important notes are assigned to one channel and is reduced to 60 % when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90 %). Performance dropped dramatically when subjects were presented with incongruent auditory cues (10 %), as opposed to incongruent tactile cues (60 %), demonstrating that auditory input dominates meter perception. These observations support the notion that meter perception is a cross-modal percept with tactile inputs underlying the perception of "feeling" music.
Affiliation(s)
- Juan Huang: The Solomon H. Snyder Department of Neuroscience, The Johns Hopkins University, Baltimore, MD 21205, USA
29
Hacker G, Brooks A, van der Zwan R. Sex discriminations made on the basis of ambiguous visual cues can be affected by the presence of an olfactory cue. BMC Psychol 2013; 1:10. PMID: 25566362; PMCID: PMC4270023; DOI: 10.1186/2050-7283-1-10.
Abstract
BACKGROUND Almost every interpersonal interaction is mediated by the sex of the individuals involved. Visual, auditory, and olfactory cues provide individuals with the opportunity to discriminate the sex of others from a distance and so prepare sex-appropriate behaviours for any impending interaction. The usefulness of that important social skill is mediated by the reliability of the sensory information. Sometimes cues in one domain will be ambiguous, and the perceptual processes mediating sex perceptions will need to integrate information from across the senses for better reliability. With that in mind, the experiment reported here was designed to explore the effect of olfactory-visual interactions on sex perceptions. METHODS Observers were presented visually with point-light walkers that were sexually ambiguous (not unequivocally female or male). They were asked to judge, using a two-alternative forced choice paradigm, the sex of each walker. Tested on two occasions, observers unknowingly made sex judgements in the presence or absence of pads soaked in male sweat. RESULTS The presence of male sweat was associated with higher proportions of 'male' judgements of both ambiguous female and ambiguous male walkers (F1,19 = 24.11, p < 0.01). CONCLUSION These findings suggest that olfactory cues can modulate visual sex discriminations made on the basis of biological motion cues. Importantly, they seem to do so even when the olfactory cue is not consciously perceived, suggesting these effects are mediated by perceptual rather than cognitive processes. These findings suggest that there exist cortical processes mediating sex perceptions that are capable of integrating visual and olfactory information. What is important is that this sensory integration takes place without conscious knowledge and that appropriate behaviour modifications may occur automatically.
Affiliation(s)
- Graeme Hacker: Laboratory of Cognitive Neuroscience and Behaviour, Southern Cross University, Coffs Harbour Campus, Hogbin Drive, Coffs Harbour, NSW 2450, Australia
- Anna Brooks: Laboratory of Cognitive Neuroscience and Behaviour, Southern Cross University, Coffs Harbour Campus, Hogbin Drive, Coffs Harbour, NSW 2450, Australia
- Rick van der Zwan: Laboratory of Cognitive Neuroscience and Behaviour, Southern Cross University, Coffs Harbour Campus, Hogbin Drive, Coffs Harbour, NSW 2450, Australia
30
Huang J, Gamble D, Sarnlertsophon K, Wang X, Hsiao S. Feeling music: integration of auditory and tactile inputs in musical meter perception. PLoS One 2012; 7:e48496. PMID: 23119038; PMCID: PMC3485368; DOI: 10.1371/journal.pone.0048496.
Abstract
Musicians often say that they not only hear, but also "feel" music. To explore the contribution of tactile information in "feeling" musical rhythm, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter recognition task. Subjects discriminated between two types of sequences, 'duple' (march-like rhythms) and 'triple' (waltz-like rhythms) presented in three conditions: 1) Unimodal inputs (auditory or tactile alone), 2) Various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts, and 3) Simultaneously presented bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70%-85%) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70%-90%) when all of the metrically important notes are assigned to one channel and is reduced to 60% when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90%). Performance drops dramatically when subjects were presented with incongruent auditory cues (10%), as opposed to incongruent tactile cues (60%), demonstrating that auditory input dominates meter perception. We believe that these results are the first demonstration of cross-modal sensory grouping between any two senses.
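A schematic sketch of the stimulus logic described above: duple vs. triple accent patterns, with notes randomly split between the auditory and tactile channels so that neither channel alone carries a coherent meter cue. Parameters and names are assumptions for illustration, not the authors' stimuli:

```python
import numpy as np

def meter_sequence(meter="duple", n_beats=12):
    """Accent pattern for a march-like (duple) or waltz-like (triple) meter:
    1 marks metrically important (accented) notes, 0 the weak beats."""
    period = 2 if meter == "duple" else 3
    return np.array([1 if i % period == 0 else 0 for i in range(n_beats)])

def split_between_channels(pattern, rng):
    """Randomly assign each note to the auditory or tactile channel, so that
    a single channel does not produce a coherent meter percept on its own."""
    to_auditory = rng.integers(0, 2, len(pattern)).astype(bool)
    return np.where(to_auditory, pattern, 0), np.where(~to_auditory, pattern, 0)

rng = np.random.default_rng(0)
aud, tac = split_between_channels(meter_sequence("triple"), rng)
print(aud, tac, sep="\n")
```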
Affiliation(s)
- Juan Huang: Zanvyl Krieger Mind/Brain Institute and the Solomon H. Snyder Department of Neuroscience, The Johns Hopkins University, Baltimore, Maryland, United States of America; Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, Maryland, United States of America
- Darik Gamble: Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, Maryland, United States of America
- Kristine Sarnlertsophon: Zanvyl Krieger Mind/Brain Institute and the Solomon H. Snyder Department of Neuroscience, The Johns Hopkins University, Baltimore, Maryland, United States of America
- Xiaoqin Wang: Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, Maryland, United States of America
- Steven Hsiao: Zanvyl Krieger Mind/Brain Institute and the Solomon H. Snyder Department of Neuroscience, The Johns Hopkins University, Baltimore, Maryland, United States of America
31
|
Guerraz M, Provost S, Narison R, Brugnon A, Virolle S, Bresciani JP. Integration of visual and proprioceptive afferents in kinesthesia. Neuroscience 2012; 223:258-68. [DOI: 10.1016/j.neuroscience.2012.07.059] [Citation(s) in RCA: 49] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2012] [Revised: 07/19/2012] [Accepted: 07/26/2012] [Indexed: 10/28/2022]
|
32
|
Wang WY, Hu L, Valentini E, Xie XB, Cui HY, Hu Y. Dynamic characteristics of multisensory facilitation and inhibition. Cogn Neurodyn 2012; 6:409-19. [PMID: 24082962 DOI: 10.1007/s11571-012-9197-x] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2011] [Revised: 02/14/2012] [Accepted: 03/01/2012] [Indexed: 11/29/2022] Open
Abstract
Multimodal integration, which mainly refers to multisensory facilitation and multisensory inhibition, is the process of merging multisensory information in the human brain. However, the neural mechanisms underlying the dynamic characteristics of multimodal integration are not fully understood. The objective of this study is to investigate the basic mechanisms of multimodal integration by assessing the intermodal influences of vision, audition, and somatosensation (the influence of multisensory background events on the target event). We used a timed target detection task and measured both behavioral and electroencephalographic responses to visual target events (green solid circle), auditory target events (2 kHz pure tone) and somatosensory target events (1.5 ± 0.1 mA square wave pulse) from 20 normal participants. There were significant differences in both behavioral performance and ERP components when comparing the unimodal target stimuli with multimodal (bimodal and trimodal) target stimuli for all target groups. A significant correlation between reaction time and P3 latency was observed across all target conditions. The perceptual processing of auditory target events (A) was inhibited by the background events, while the perceptual processing of somatosensory target events (S) was facilitated by the background events. In contrast, the perceptual processing of visual target events (V) remained impervious to multisensory background events.
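As a minimal illustration of the reported behaviour-ERP relationship, the snippet below correlates reaction time with single-trial P3 latency for one hypothetical target condition. The simulated values and layout are assumptions made purely for the example; they are not the study's data or analysis pipeline.

```python
# Illustrative sketch (not the authors' pipeline): correlating reaction time with
# P3 latency across trials of one assumed target condition.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-trial measures: reaction time (ms) and single-trial P3 latency (ms).
reaction_time = rng.normal(420, 60, size=200)
p3_latency = 0.6 * reaction_time + rng.normal(150, 40, size=200)

r, p = stats.pearsonr(reaction_time, p3_latency)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```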
Collapse
Affiliation(s)
- W Y Wang
- Chinese Academy of Medical Science and Peking Union Medical College, Institute of Biomedical Engineering, Tianjin, China ; Department of Orthopaedics and Traumatology, Duchess of Kent Children's Hospital, The University of Hong Kong, 12 Sandy Bay Road, Hong Kong, China
| | | | | | | | | | | |
Collapse
|
33
|
Heron J, Roach NW, Hanson JVM, McGraw PV, Whitaker D. Audiovisual time perception is spatially specific. Exp Brain Res 2012; 218:477-85. [PMID: 22367399 PMCID: PMC3324684 DOI: 10.1007/s00221-012-3038-3] [Citation(s) in RCA: 44] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2011] [Accepted: 02/09/2012] [Indexed: 11/19/2022]
Abstract
Our sensory systems face a daily barrage of auditory and visual signals whose arrival times form a wide range of audiovisual asynchronies. These temporal relationships constitute an important metric for the nervous system when surmising which signals originate from common external events. Internal consistency is known to be aided by sensory adaptation: repeated exposure to consistent asynchrony brings perceived arrival times closer to simultaneity. However, given the diverse nature of our audiovisual environment, functionally useful adaptation would need to be constrained to signals that were generated together. In the current study, we investigate the role of two potential constraining factors: spatial and contextual correspondence. By employing an experimental design that allows independent control of both factors, we show that observers are able to simultaneously adapt to two opposing temporal relationships, provided they are segregated in space. No such recalibration was observed when spatial segregation was replaced by contextual stimulus features (in this case, pitch and spatial frequency). These effects provide support for dedicated asynchrony mechanisms that interact with spatially selective mechanisms early in visual and auditory sensory pathways.
Collapse
Affiliation(s)
- James Heron
- Bradford School of Optometry and Vision Science, University of Bradford, Bradford, UK.
| | | | | | | | | |
Collapse
|
34
|
Abstract
In the present review, we focus on how commonalities in the ontogenetic development of the auditory and tactile sensory systems may inform the interplay between these signals in the temporal domain. In particular, we describe the results of behavioral studies that have investigated temporal resolution (in temporal order, synchrony/asynchrony, and simultaneity judgment tasks), as well as temporal numerosity perception, and similarities in the perception of frequency across touch and hearing. The evidence reviewed here highlights features of audiotactile temporal perception that are distinctive from those seen for other pairings of sensory modalities. For instance, audiotactile interactions are characterized in certain tasks (e.g., temporal numerosity judgments) by a more balanced reciprocal influence than are other modality pairings. Moreover, relative spatial position plays a different role in the temporal order and temporal recalibration processes for audiotactile stimulus pairings than for other modality pairings. The effect exerted by both the spatial arrangement of stimuli and attention on temporal order judgments is described. Moreover, a number of audiotactile interactions occurring during sensory-motor synchronization are highlighted. We also look at the audiotactile perception of rhythm and how it may be affected by musical training. The differences emerging from this body of research highlight the need for more extensive investigation into audiotactile temporal interactions. We conclude with a brief overview of some of the key issues deserving of further research in this area.
Collapse
|
35
|
Campos J, Bülthoff H. Multimodal Integration during Self-Motion in Virtual Reality. Front Neurosci 2011. [DOI: 10.1201/9781439812174-38] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
|
36
|
Campos J, Bülthoff H. Multimodal Integration during Self-Motion in Virtual Reality. Front Neurosci 2011. [DOI: 10.1201/b11092-38] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
|
37
|
Abstract
An account of intersensory integration is premised on knowing that different sensory inputs arise from the same object. Could the combination of the inputs nevertheless be impaired even though the “unity assumption” holds? Forty observers viewed a square through a minifying (50%) lens while they simultaneously touched the square. Half could see and half could not see their haptic explorations of the square. Both groups, however, had reason to believe that they were touching and viewing the same square. Subsequent matches of the inspected square were mutually biased by touch and vision when the exploratory movements were visible. However, the matches were biased in the direction of the square’s haptic size when observers could not see their exploratory movements. This impaired integration when the haptic explorations were not visible suggests that the unity assumption alone is not enough to promote intersensory integration.
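The mutual biasing of visual and haptic size matches described above is commonly interpreted against the standard reliability-weighted (maximum-likelihood) cue-combination model. The sketch below implements that textbook model under assumed noise levels; the numbers are illustrative only and are not drawn from the study.

```python
# A minimal sketch of reliability-weighted cue combination. All numbers are
# illustrative assumptions, not the study's data.
def combined_estimate(visual_size, haptic_size, sigma_v, sigma_h):
    """Fused size estimate: each cue is weighted by its relative reliability
    (inverse variance). If integration breaks down, the percept instead tracks a
    single cue (here, the haptic size, as reported when explorations were unseen)."""
    w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_h**2)
    return w_v * visual_size + (1 - w_v) * haptic_size

true_size = 10.0              # cm, physical square
visual_size = 0.5 * true_size # minifying (50%) lens halves the seen size
haptic_size = true_size       # touch remains veridical

print(combined_estimate(visual_size, haptic_size, sigma_v=0.5, sigma_h=1.0))
# With the more reliable visual cue, the fused estimate (6.0 cm) falls between the
# two single-cue values, i.e., each modality biases the other.
```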
Collapse
|
38
|
Frings C, Spence C. Crossmodal congruency effects based on stimulus identity. Brain Res 2010; 1354:113-22. [DOI: 10.1016/j.brainres.2010.07.058] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2009] [Revised: 01/25/2010] [Accepted: 07/17/2010] [Indexed: 10/19/2022]
|
39
|
Bendixen A, Grimm S, Deouell LY, Wetzel N, Mädebach A, Schröger E. The time-course of auditory and visual distraction effects in a new crossmodal paradigm. Neuropsychologia 2010; 48:2130-9. [DOI: 10.1016/j.neuropsychologia.2010.04.004] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2009] [Revised: 03/18/2010] [Accepted: 04/03/2010] [Indexed: 10/19/2022]
|
40
|
Visual stimulus locking of EEG is modulated by temporal congruency of auditory stimuli. Exp Brain Res 2009; 198:137-51. [PMID: 19526359 DOI: 10.1007/s00221-009-1867-5] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2008] [Accepted: 05/19/2009] [Indexed: 10/20/2022]
Abstract
Disparate sensory streams originating from a common underlying event share similar dynamics, and this plays an important part in multisensory integration. Here we investigate audiovisual binding by presenting continuously changing, temporally congruent and incongruent stimuli. Recorded EEG signals are used to quantify spectrotemporal and waveform locking of neural activity to stimulus dynamics. Spectrotemporal analysis reveals locking to visual stimulus dynamics in both a broad alpha and the beta band. The properties of these effects suggest they are a correlate of bottom-up processing in the visual system. Waveform locking reveals two cortically distinct processes that lock to visual stimulus dynamics with differing topographies and time lags relative to the stimuli. Most importantly, these are modulated in strength by the congruency of an accompanying auditory stream. In addition, the waveform locking found at occipital electrodes shows an increase over stimulus duration for visual and congruent audiovisual stimuli. Hence we argue that these effects reflect audiovisual interaction. We thus propose that spectrotemporal and waveform locking reflect different mechanisms involved in the processing of dynamic audiovisual stimuli.
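As an illustrative proxy (not the authors' exact analysis), "waveform locking" of EEG to continuously changing stimulus dynamics can be approximated by correlating the two signals over a range of time lags, as sketched below. The sampling rate, lag range, and simulated signals are assumptions made for the example.

```python
# Illustrative proxy for waveform locking: correlation between trial-averaged EEG
# at one electrode and the stimulus dynamics, evaluated over a range of lags.
import numpy as np

def waveform_locking(eeg: np.ndarray, stimulus: np.ndarray, max_lag: int):
    """Return the correlation between EEG and stimulus for each lag (in samples);
    positive lags mean the EEG follows the stimulus."""
    lags = np.arange(0, max_lag + 1)
    corrs = []
    for lag in lags:
        x = stimulus[: len(stimulus) - lag]
        y = eeg[lag:]
        corrs.append(np.corrcoef(x, y)[0, 1])
    return lags, np.array(corrs)

fs = 250                                   # Hz, assumed sampling rate
t = np.arange(0, 4, 1 / fs)                # 4 s of continuously changing stimulation
rng = np.random.default_rng(1)
stimulus = np.sin(2 * np.pi * 1.3 * t)     # slowly modulated stimulus dynamics
eeg = np.roll(stimulus, 25) + rng.normal(0, 0.5, t.size)  # locked response, ~100 ms lag

lags, corrs = waveform_locking(eeg, stimulus, max_lag=100)
print(f"peak locking at {lags[np.argmax(corrs)] / fs * 1000:.0f} ms lag")
```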
Collapse
|
41
|
Hecht D, Reiner M. Sensory dominance in combinations of audio, visual and haptic stimuli. Exp Brain Res 2008; 193:307-14. [PMID: 18985327 DOI: 10.1007/s00221-008-1626-z] [Citation(s) in RCA: 67] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2008] [Accepted: 10/15/2008] [Indexed: 11/25/2022]
Abstract
Participants presented with auditory, visual, or bi-sensory audio-visual stimuli in a speeded discrimination task fail to respond to the auditory component of the bi-sensory trials significantly more often than they fail to respond to the visual component, a 'visual dominance' effect. The current study investigated the sensory dominance phenomenon further in all combinations of auditory, visual and haptic stimuli. We found a similar visual dominance effect in bi-sensory trials of combined haptic-visual stimuli, but no bias towards either sensory modality in bi-sensory trials of haptic-auditory stimuli. When presented with tri-sensory trials of combined auditory-visual-haptic stimuli, participants made more errors of responding only to two corresponding sensory signals than errors of responding only to a single sensory modality; however, there were no biases towards either sensory modality (or sensory pair) in the distribution of either type of error (i.e. responding only to a single stimulus or to pairs of stimuli). These results suggest that while vision can dominate both the auditory and the haptic sensory modalities, this dominance is limited to bi-sensory combinations in which the visual signal is combined with a single other stimulus. In a tri-sensory combination, when a visual signal is presented simultaneously with both the auditory and the haptic signals, the probability of missing two signals is much smaller than that of missing only one, and the visual dominance therefore disappears.
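A small worked example helps to unpack the closing argument: if misses of the individual non-visual signals were independent, missing both at once would be far less likely than missing only one, leaving little room for visual dominance to appear in tri-sensory trials. The miss probabilities below are illustrative assumptions, not values from the study.

```python
# Worked example under an independence assumption; probabilities are hypothetical.
p_miss_auditory = 0.10
p_miss_haptic = 0.15

p_miss_both = p_miss_auditory * p_miss_haptic        # 0.015
p_miss_exactly_one = (p_miss_auditory * (1 - p_miss_haptic)
                      + (1 - p_miss_auditory) * p_miss_haptic)  # 0.22

print(p_miss_both, p_miss_exactly_one)
```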
Collapse
Affiliation(s)
- David Hecht
- The Touch Laboratory, Gutwirth Building, Department of Education in Technology and Science, Technion-Israel Institute of Technology, 32000 Haifa, Israel.
| | | |
Collapse
|