51. Exposure to first-person shooter videogames is associated with multisensory temporal precision and migraine incidence. Cortex 2020; 134:223-238. [PMID: 33291047; DOI: 10.1016/j.cortex.2020.10.009]
Abstract
Adaptive interactions with the environment require optimal integration and segregation of sensory information. Yet, temporal misalignments in the presentation of visual and auditory stimuli may generate illusory phenomena such as the sound-induced flash illusion, in which a single flash paired with multiple auditory stimuli induces the perception of multiple illusory flashes. This phenomenon has been shown to be robust and resistant to feedback training. According to a Bayesian account, this is due to a statistically optimal combination of the signals by the nervous system. From this perspective, individual susceptibility to the illusion might be moulded through prolonged experience. For example, repeated exposure to the illusion and prolonged training sessions have only a partial impact on the reported illusion. Therefore, extensive and immersive audio-visual experience, such as playing first-person shooter videogames, should sharpen the individual capacity to correctly integrate multisensory information over time, leading to more veridical perception. We tested this hypothesis by comparing the temporal profile of the sound-induced illusion in a group of expert first-person shooter gamers and a group of non-players. In line with the hypotheses, gamers experienced significantly narrower windows of illusion (~87 ms) than non-players (~105 ms), leading to more veridical reports in gamers (~68%) than in non-players (~59%). Moreover, in line with recent literature, we tested whether intensive audio-visual training in gamers could be related to the incidence of migraine, and found that its severity may be directly proportional to the time spent on videogames. Overall, these results suggest that continued training within audio-visual environments such as first-person shooter videogames improves temporal discrimination and sensory integration. This finding may pave the way for future therapeutic strategies based on self-administered multisensory training. On the other hand, the impact of intensive training on visual-related stress disorders, such as migraine incidence, should be taken into account as a risk factor during therapeutic planning.
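As context for the Bayesian account invoked in this abstract, the statistically optimal combination of two signals is commonly written as a reliability-weighted (maximum-likelihood) average; the following is a generic sketch of that standard formulation, not a formula taken from the paper:

\hat{s}_{AV} = w_A \hat{s}_A + w_V \hat{s}_V, \quad w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_V^2}, \quad w_V = 1 - w_A, \quad \sigma_{AV}^2 = \frac{\sigma_A^2 \sigma_V^2}{\sigma_A^2 + \sigma_V^2}

where \hat{s}_A and \hat{s}_V are the unisensory estimates with variances \sigma_A^2 and \sigma_V^2; the combined variance is never larger than that of the more reliable single cue, which is what "statistically optimal combination" refers to.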
52. Hirst RJ, McGovern DP, Setti A, Shams L, Newell FN. What you see is what you hear: Twenty years of research using the Sound-Induced Flash Illusion. Neurosci Biobehav Rev 2020; 118:759-774. [DOI: 10.1016/j.neubiorev.2020.09.006]
53. A-Izzeddin EJ, Grove PM. The Relationship Between Illusory Crescents and the Stream/Bounce Effect. Multisens Res 2020; 34:1-17. [PMID: 33535166; DOI: 10.1163/22134808-bja10040]
Abstract
We conducted two experiments to evaluate Meyerhoff and Scholl's (2018, Cognition 170, 88-94) hypothesis that illusory crescents contribute to resolutions in audiovisual stream/bounce displays. In Experiment 1, we measured illusory crescent size in the launching effect as a function of speed, overlap, and sound. In Experiment 2, we tabulated stream and bounce responses to similar stimuli with the same speed, sound, and overlap conditions as in Experiment 1. Our critical manipulation of target speed spanned the range from the ∼5 degrees/s typical of stream/bounce investigations up to the ∼38 degrees/s employed by Meyerhoff and Scholl. We replicated Meyerhoff and Scholl's findings at higher speeds, but not at slower speeds. Critically, we found that speed influenced crescent size judgements and bouncing responses in opposite directions. As target speed increased, illusory crescent size increased (Experiment 1), but the overall percentage of bounce responses decreased (Experiment 2). Additionally, we found that sound failed to enhance illusory crescent size at slower speeds but promoted bouncing responses at all speeds. The dissociation of the effects of speed and sound on illusory crescents from their effects on reported streaming/bouncing in similar displays provides compelling evidence against Meyerhoff and Scholl's hypothesis. We therefore conclude that illusory crescents do not account for the pattern of responses attributed to the stream/bounce effect.
Affiliation(s)
- Emily J A-Izzeddin, School of Psychology, The University of Queensland, St Lucia, QLD, 4072, Australia
- Philip M Grove, School of Psychology, The University of Queensland, St Lucia, QLD, 4072, Australia
54. Alkhamra RA, Abu-Dahab SMN. Sensory processing disorders in children with hearing impairment: Implications for multidisciplinary approach and early intervention. Int J Pediatr Otorhinolaryngol 2020; 136:110154. [PMID: 32521420; DOI: 10.1016/j.ijporl.2020.110154]
Abstract
OBJECTIVE To explore the differences in sensory processing between children with hearing impairments and children with normal hearing, and the variables that influence sensory processing disorder (SPD). METHODS The sensory processing abilities of 90 children were compared in three age-matched groups of 30: children with cochlear implants (CIs), with hearing aids (HAs), and with normal hearing (NH). The Arabic Sensory Profile (Arabic_SP) was used. RESULTS Findings are presented at the Arabic_SP section and factor levels. Sections: the NH group performed better (p < .05) than the CI group in 57% of the sections and better than the HA group in 14%. The CI group exhibited more signs of SPD than the HA group in vestibular processing, multisensory processing, and emotional-social responses. Factors: the NH group differed from the CI group on all the factors that showed significance, and from the HA group on inattention/distractibility and poor registration. There were marked differences between the CI and HA groups on all the factors except poor registration and fine motor/perceptual. The hearing loss variables that most affected results on the Arabic_SP were the age at receiving a hearing device and the type of hearing loss onset. CONCLUSION Along with speech and language problems, children with hearing impairment are especially vulnerable to SPD. Children with CIs and HAs are particularly susceptible to auditory processing disorders. Children with CIs face higher risks of balance, multisensory processing, social-emotional, and fine motor problems. SPD risk increased with a higher age at implantation. The findings underline the importance of a multidisciplinary approach to early detection and intervention for children with hearing impairment, especially those with CIs.
Affiliation(s)
- Rana A Alkhamra, Department of Hearing and Speech Sciences, Faculty of Rehabilitation Sciences, The University of Jordan, Amman, 11942, Jordan
- Sana M N Abu-Dahab, Department of Occupational Therapy, Faculty of Rehabilitation Sciences, The University of Jordan, Amman, 11942, Jordan
55. Zumer JM, White TP, Noppeney U. The neural mechanisms of audiotactile binding depend on asynchrony. Eur J Neurosci 2020; 52:4709-4731. [PMID: 32725895; DOI: 10.1111/ejn.14928]
Abstract
Asynchrony is a critical cue informing the brain whether sensory signals are caused by a common source and should be integrated or segregated. This psychophysics-electroencephalography (EEG) study investigated the influence of asynchrony on how the brain binds audiotactile (AT) signals to enable faster responses in a redundant target paradigm. Human participants actively responded (psychophysics) or passively attended (EEG) to noise bursts, "taps-to-the-face" and their AT combinations at seven AT asynchronies: 0, ±20, ±70 and ±500 ms. Behaviourally, observers were faster at detecting AT than unisensory stimuli within a temporal integration window: the redundant target effect was maximal for synchronous stimuli and declined with AT asynchronies of up to 70 ms. EEG revealed a cascade of AT interactions that relied on different neural mechanisms depending on AT asynchrony. At small (≤20 ms) asynchronies, AT interactions arose for event-related potentials (ERPs) at 110 ms and ~400 ms post-stimulus. Selectively at ±70 ms asynchronies, AT interactions were observed for the P200 ERP and for theta-band inter-trial coherence (ITC) and power at ~200 ms post-stimulus. In conclusion, AT binding was mediated by distinct neural mechanisms depending on the asynchrony of the AT signals. Early AT interactions in ERPs and in theta-band ITC and power were critical for the behavioural response facilitation within a ≤±70 ms temporal integration window.
Affiliation(s)
- Johanna M Zumer, School of Psychology, University of Birmingham, Birmingham, UK; Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, UK; Centre for Human Brain Health, University of Birmingham, Birmingham, UK; School of Life and Health Sciences, Aston University, Birmingham, UK
- Thomas P White, School of Psychology, University of Birmingham, Birmingham, UK; Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, UK
- Uta Noppeney, School of Psychology, University of Birmingham, Birmingham, UK; Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, UK; Centre for Human Brain Health, University of Birmingham, Birmingham, UK; Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, The Netherlands
56. Pomper U, Schmid R, Ansorge U. Continuous, Lateralized Auditory Stimulation Biases Visual Spatial Processing. Front Psychol 2020; 11:1183. [PMID: 32655440; PMCID: PMC7325992; DOI: 10.3389/fpsyg.2020.01183]
Abstract
Sounds in our environment can easily capture human visual attention. Previous studies have investigated the impact of spatially localized, brief sounds on concurrent visuospatial attention. However, little is known about how the presence of a continuous, lateralized auditory stimulus (e.g., a person talking next to you while driving a car) affects visual spatial attention (e.g., detection of critical events in traffic). In two experiments, we investigated whether a continuous auditory stream presented from one side biases visual spatial attention toward that side. Participants had to either passively or actively listen to sounds of various semantic complexities (tone pips, spoken digits, and a spoken story) while performing a visual target discrimination task. During both passive and active listening, we observed faster response times to visual targets presented spatially close to the relevant auditory stream. Additionally, we found that higher levels of semantic complexity of the presented sounds led to reduced visual discrimination sensitivity, but only during active listening to the sounds. We provide important novel results by showing that the presence of a continuous, ongoing auditory stimulus can affect visual processing even when the sounds are not endogenously attended. Together, our findings demonstrate the impact of ongoing sounds on visual processing in everyday scenarios such as moving about in traffic.
Affiliation(s)
- Ulrich Pomper, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Rebecca Schmid, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Ulrich Ansorge, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria; Cognitive Science Hub, University of Vienna, Vienna, Austria
57. Arikan BE, van Kemenade BM, Podranski K, Steinsträter O, Straube B, Kircher T. Perceiving your hand moving: BOLD suppression in sensory cortices and the role of the cerebellum in the detection of feedback delays. J Vis 2019; 19(14):4. [PMID: 31826249; DOI: 10.1167/19.14.4]
Abstract
Sensory consequences of self-generated as opposed to externally generated movements are perceived as less intense and lead to less neural activity in corresponding sensory cortices, presumably due to predictive mechanisms. Self-generated sensory inputs have been mostly studied in a single modality, using abstract feedback, with control conditions not differentiating efferent from reafferent feedback. Here we investigated the neural processing of (a) naturalistic action-feedback associations of (b) self-generated versus externally generated movements, and (c) how an additional (auditory) modality influences neural processing and detection of delays. Participants executed wrist movements using a passive movement device (PMD) as they watched their movements in real time or with variable delays (0-417 ms). The task was to judge whether there was a delay between the movement and its visual feedback. In the externally generated condition, movements were induced by the PMD to disentangle efferent from reafferent feedback. Half of the trials involved auditory beeps coupled to the onset of the visual feedback. We found reduced BOLD activity in visual, auditory, and somatosensory areas during self-generated compared with externally generated movements in unimodal and bimodal conditions. Anterior and posterior cerebellar areas were engaged for trials in which action-feedback delays were detected for self-generated movements. Specifically, the left cerebellar lobule IX was functionally connected with the right superior occipital gyrus. The results indicate efference copy-based predictive mechanisms specific to self-generated movements, leading to BOLD suppression in sensory areas. In addition, our results support the cerebellum's role in the detection of temporal prediction errors during our actions and their consequences.
Affiliation(s)
- B Ezgi Arikan, Department of Psychology, Justus-Liebig University Giessen, Giessen, Germany
- Bianca M van Kemenade, Department of Psychiatry and Psychotherapy, Philipps University Marburg, Marburg, Germany
- Kornelius Podranski, Department of Psychiatry and Psychotherapy, Philipps University Marburg, Marburg, Germany; Core Facility Brain Imaging, Faculty of Medicine, Philipps University Marburg, Marburg, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Olaf Steinsträter, Department of Psychiatry and Psychotherapy, Philipps University Marburg, Marburg, Germany; Core Facility Brain Imaging, Faculty of Medicine, Philipps University Marburg, Marburg, Germany
- Benjamin Straube, Department of Psychiatry and Psychotherapy, Philipps University Marburg, Marburg, Germany
- Tilo Kircher, Department of Psychiatry and Psychotherapy, Philipps University Marburg, Marburg, Germany
58. D’Agostino AR, Brown J, Fillmore MT. Redundant visual signals reduce the intensity of alcohol impairment. Drug Alcohol Depend 2020; 209:107945. [PMID: 32151879; PMCID: PMC7127954; DOI: 10.1016/j.drugalcdep.2020.107945]
Abstract
BACKGROUND Humans interact with multiple stimuli across several modalities each day. The "redundant signal effect" refers to the observation that individuals respond more quickly to stimuli when information is presented as multisensory, redundant stimuli (e.g., aurally and visually) rather than as a single stimulus presented to either modality alone. Studies of alcohol effects on human performance show that alcohol-induced impairment is reduced when subjects respond to redundant multisensory stimuli. However, redundant signals need not involve multisensory stimuli to facilitate behavior, as studies have shown facilitating effects of redundant unisensory signals delivered to the same sensory modality (e.g., two visual or two auditory signals). METHODS The current study examined the degree to which redundant visual signals would reduce alcohol impairment and compared the magnitude of this effect with that produced by redundant multisensory signals. On repeated test sessions, participants (n = 20) received placebo or 0.65 g/kg alcohol and performed a two-choice reaction time task that measured how quickly participants responded to four different signal conditions. The four conditions differed by the modality of the target presentation: visual, auditory, multisensory, and unisensory. RESULTS Alcohol slowed performance in all conditions, and reaction times were generally faster in the redundant signal conditions. Both multisensory and unisensory redundant signals reduced the impairing effects of alcohol compared with single signals. CONCLUSIONS These findings indicate that the ability of redundant signals to counteract alcohol impairment does not require multisensory input. Duplicate signals to the same modality can also reduce alcohol impairment.
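The race-model benchmark implied by such redundancy gains is usually tested with Miller's (1982) race-model inequality, F_AV(t) <= F_A(t) + F_V(t). The following Python sketch is purely illustrative (simulated reaction times and hypothetical variable names, not code or data from the study):

import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution of reaction times, evaluated at times t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / rts.size

def race_model_violation(rt_av, rt_a, rt_v, n_points=20):
    """Evaluate Miller's race-model inequality F_AV(t) <= F_A(t) + F_V(t).

    Returns the evaluation times and F_AV(t) - min(F_A(t) + F_V(t), 1); positive
    values mean redundant-target responses are faster than any race between the
    unisensory channels can explain (evidence for coactivation).
    """
    pooled = np.concatenate([rt_av, rt_a, rt_v])
    t = np.quantile(pooled, np.linspace(0.05, 0.95, n_points))
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
    return t, ecdf(rt_av, t) - bound

# Hypothetical simulated reaction times in ms (illustration only, not data from the study)
rng = np.random.default_rng(0)
rt_a = rng.normal(420, 60, 200)
rt_v = rng.normal(400, 60, 200)
rt_av = rng.normal(355, 50, 200)
times, violation = race_model_violation(rt_av, rt_a, rt_v)
print(round(float(violation.max()), 3))  # > 0 suggests a race-model violation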
Affiliation(s)
- Alexandra R. D’Agostino, Department of Psychology, University of Kentucky College of Arts and Sciences, 110 Kastle Hall, Lexington, KY 40506-0044, USA
- Jaime Brown, Department of Psychology, University of Kentucky College of Arts and Sciences, 110 Kastle Hall, Lexington, KY 40506-0044, USA
- Mark T. Fillmore, Department of Psychology, University of Kentucky College of Arts and Sciences, 110 Kastle Hall, Lexington, KY 40506-0044, USA (corresponding author; phone: (859) 277-4728)
59. Colonius H, Diederich A. Formal models and quantitative measures of multisensory integration: a selective overview. Eur J Neurosci 2020; 51:1161-1178. [DOI: 10.1111/ejn.13813]
Affiliation(s)
- Hans Colonius, Department of Psychology, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany; Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA
- Adele Diederich, Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA; Life Sciences and Chemistry, Jacobs University Bremen, Bremen, Germany
60. Biondi FN, Rossi R, Gastaldi M, Orsini F, Mulatti C. Precision teaching to improve drivers' lane maintenance. J Safety Res 2020; 72:225-229. [PMID: 32199567; DOI: 10.1016/j.jsr.2019.12.020]
Abstract
INTRODUCTION This study investigates the effect of precision teaching signals on lane maintenance. METHODS In experiment 1, the control group drove a simulator with no signals. In experiment 2, drivers were presented with auditory signals depending on their position within or outside the lane. In experiment 3, visual signals were presented in addition to auditory signals to examine the effect of redundancy on drivers' lane maintenance. RESULTS Results showed an improvement in lane maintenance in experiment 2. A cross-experiment analysis indicated that this effect was not the result of learning. Data from experiment 3 also showed that presenting redundant signals did not further reduce lane variability or help drivers maintain a more central position within the lane. CONCLUSIONS Taken together, the data suggest that precision teaching is effective as an educational tool to improve lane maintenance. Practical Applications: Our study shows the potential for precision teaching to serve as a valuable tool in driver training.
Affiliation(s)
- Francesco N Biondi, Department of Kinesiology, University of Windsor, 2555 College Ave, Windsor, ON, Canada; Department of Civil Engineering, University of Windsor, 2555 College Ave, Windsor, ON, Canada; Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Riccardo Rossi, Department of Civil Engineering, University of Padova, Via 8 Febbraio 1848, 2, Padova, Italy
- Massimiliano Gastaldi, Department of Civil Engineering, University of Padova, Via 8 Febbraio 1848, 2, Padova, Italy
- Federico Orsini, Department of Civil Engineering, University of Padova, Via 8 Febbraio 1848, 2, Padova, Italy
- Claudio Mulatti, Department of Psychology, University of Padova, Via 8 Febbraio 1848, 2, Padova, Italy
61. Klatt S, Smeeton NJ. Visual and Auditory Information During Decision Making in Sport. J Sport Exerc Psychol 2020; 42:15-25. [PMID: 31883505; DOI: 10.1123/jsep.2019-0107]
Abstract
In 2 experiments, the authors investigated the effects of bimodal integration in a sport-specific task. Beach volleyball players were required to make a tactical decision, responding either verbally or via a motor response, after being presented with visual, auditory, or both kinds of stimuli in a beach volleyball scenario. In Experiment 1, players made the correct decision in a game situation more often when visual and auditory information were congruent than in trials in which they experienced only one of the modalities or incongruent information. Decision-making accuracy was greater when motor, rather than verbal, responses were given. Experiment 2 replicated this congruence effect using different stimulus material and showed a decreasing effect of visual stimulation on decision making as a function of shorter visual stimulus durations. In conclusion, this study shows that bimodal integration of congruent visual and auditory information results in more accurate decision making in sport than unimodal information.
62. Scheller M, Garcia S, Bathelt J, de Haan M, Petrini K. Active touch facilitates object size perception in children but not adults: A multisensory event related potential study. Brain Res 2019; 1723:146381. [PMID: 31419429; DOI: 10.1016/j.brainres.2019.146381]
Abstract
In order to increase perceptual precision, the adult brain dynamically combines redundant information from different senses depending on their reliability. During object size estimation, for example, visual, auditory and haptic information can be integrated to increase the precision of the final size estimate. Young children, however, do not integrate sensory information optimally and instead rely on active touch. Whether this early haptic dominance is reflected in age-related differences in neural mechanisms, and whether it is driven by changes in bottom-up perceptual or top-down attentional processes, has not yet been investigated. Here, we recorded event-related potentials from a group of adults and children aged 5-7 years during an object size perception task using auditory, visual and haptic information. Multisensory information was presented either congruently (conveying the same information) or incongruently (conflicting information). No behavioral responses were required from participants. When haptic size information was available via actively tapping the objects, response amplitudes in the mid-parietal area were significantly reduced by information congruency in children, but not in adults, between 190-250 ms and 310-370 ms. These findings indicate that during object size perception only children's brain activity is modulated by active touch, supporting a neural maturational shift from sensory dominance in early childhood to optimal multisensory benefit in adulthood.
Affiliation(s)
- Joe Bathelt, Brain & Cognition, University of Amsterdam, Netherlands; UCL Great Ormond Street Institute of Child Health, UK
63. Colonius H, Diederich A. Dependency in multisensory integration: a copula-based analysis. Philos Trans A Math Phys Eng Sci 2019; 377:20180364. [PMID: 31522633; PMCID: PMC6754712; DOI: 10.1098/rsta.2018.0364]
Abstract
The notion of a copula has attracted attention in the field of contextuality and probability. A copula is a function that joins a multivariate distribution to its one-dimensional marginal distributions. It thereby allows the multivariate dependency to be characterized separately from the specific choice of margins. Here, we demonstrate the use of copulas by investigating the structure of dependency between processing stages in a stochastic model of multisensory integration, which describes the effect of stimulation by several sensory modalities on human reaction times. We derive explicit terms for the covariance and Kendall's tau between the processing stages and point out the specific role played by two stochastic order relations, the usual stochastic order and the likelihood ratio order, in determining the sign of dependency. This article is part of the theme issue 'Contextuality and probability in quantum mechanics and beyond'.
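For readers unfamiliar with the formalism, the two standard results referred to in this abstract can be sketched as follows (textbook definitions, not results specific to the paper). By Sklar's theorem, a joint distribution H with margins F_X and F_Y can be written as

H(x, y) = C\bigl(F_X(x), F_Y(y)\bigr),

where C is the copula, and Kendall's tau depends on the joint distribution only through C:

\tau = 4 \int_{[0,1]^2} C(u, v)\, \mathrm{d}C(u, v) - 1.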
Affiliation(s)
- Hans Colonius, Department of Psychology, Carl von Ossietzky Universität, Oldenburg, Germany; Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA
- Adele Diederich, Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA; Department of Chemistry and Life Sciences, Jacobs University Bremen, Bremen, Germany
64. Keane B, Bland NS, Matthews N, Carroll TJ, Wallis G. Rapid recalibration of temporal order judgements: Response bias accounts for contradictory results. Eur J Neurosci 2019; 51:1697-1710. [PMID: 31430402; DOI: 10.1111/ejn.14551]
Abstract
Recent history influences subsequent perception, decision-making and motor behaviours. In this article, we address a discrepancy in the effects of recent sensory history on the perceived timing of auditory and visual stimuli. In the synchrony judgement (SJ) task, similar timing relationships in consecutive trials seem more synchronous (i.e. less like the repeated temporal order). This effect is known as rapid recalibration and is consistent with a negative perceptual aftereffect. Interestingly, the opposite is found in the temporal order judgement (TOJ) task (positive rapid recalibration). We aimed to determine whether a simple bias to repeat judgements on consecutive trials (choice-repetition bias) could account for the discrepant results in these tasks. Preliminary simulations and analyses indicated that a choice-repetition bias could produce apparently positive rapid recalibration in the TOJ and not the SJ task. Our first experiment revealed no evidence of rapid recalibration of TOJs, but negative rapid recalibration of associated confidence. This suggests that timing perception was rapidly recalibrated, but that the negative recalibration effect was obfuscated by a positive bias effect. In our second experiment, we experimentally mitigated the choice-repetition bias effect and found negative rapid recalibration of TOJs. We therefore conclude that timing perception is negatively rapidly recalibrated, and this is observed consistently across timing tasks. These results contribute to a growing body of evidence that indicates multisensory perception is constantly undergoing recalibration, such that perceptual synchrony is maintained. This work also demonstrates that participants' task responses reflect judgements that are contaminated by independent biases of perception and decision-making.
Affiliation(s)
- Brendan Keane, Centre for Sensorimotor Performance, School of Human Movement and Nutrition Sciences, University of Queensland, Brisbane, Qld, Australia
- Nicholas S Bland, Queensland Brain Institute, University of Queensland, Brisbane, Qld, Australia; School of Health and Rehabilitation Sciences, University of Queensland, Brisbane, Qld, Australia
- Natasha Matthews, School of Psychology, University of Queensland, Brisbane, Qld, Australia
- Timothy J Carroll, Centre for Sensorimotor Performance, School of Human Movement and Nutrition Sciences, University of Queensland, Brisbane, Qld, Australia
- Guy Wallis, Centre for Sensorimotor Performance, School of Human Movement and Nutrition Sciences, University of Queensland, Brisbane, Qld, Australia
65. RSE-box: An analysis and modelling package to study response times to multiple signals. The Quantitative Methods for Psychology 2019; 15(2):112. [DOI: 10.20982/tqmp.15.2.p112]
66. Bao T, Su L, Kinnaird C, Kabeto M, Shull PB, Sienko KH. Vibrotactile display design: Quantifying the importance of age and various factors on reaction times. PLoS One 2019; 14:e0219737. [PMID: 31398207; PMCID: PMC6688825; DOI: 10.1371/journal.pone.0219737]
Abstract
Numerous factors affect reaction times to vibrotactile cues. It is therefore important to consider the relative magnitudes of these time delays when designing vibrotactile displays for real-time applications. The objectives of this study were to quantify reaction times to typical vibrotactile stimulus parameters through direct comparison within a single experimental setting, and to determine the relative importance of these factors on reaction times. Young (n = 10, 21.9 ± 1.3 yrs) and older adults (n = 13, 69.4 ± 5.0 yrs) performed simple reaction time tasks by responding to vibrotactile stimuli using a thumb trigger while frequency, location, auditory cues, number of tactors in the same location, and tactor type were varied. Participants also performed a secondary task in a subset of the trials. The factors investigated in this study affected reaction times by 20-300 ms depending on the specific stimulus condition (reaction time effects are noted in parentheses below). In general, auditory cues generated by the tactors (<20 ms), vibration frequency (<20 ms), number of tactors in the same location (<30 ms) and tactor type (<50 ms) had relatively small effects on reaction times, while stimulus location (20-120 ms) and a secondary cognitive task (>130 ms) had relatively large effects. The factors affected young and older adults' reaction times in a similar manner, but with different magnitudes. These findings can inform the development of vibrotactile displays by enabling designers to directly compare the relative effects of key factors on reaction times.
Affiliation(s)
- Tian Bao, Dept. of Mechanical Engineering, University of Michigan, Ann Arbor, Michigan, United States of America
- Lydia Su, Dept. of Mechanical Engineering, University of Michigan, Ann Arbor, Michigan, United States of America
- Catherine Kinnaird, Dept. of Mechanical Engineering, University of Michigan, Ann Arbor, Michigan, United States of America
- Mohammed Kabeto, Internal Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Peter B. Shull, State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai, China
- Kathleen H. Sienko, Dept. of Mechanical Engineering, University of Michigan, Ann Arbor, Michigan, United States of America
67. Chandrasekaran C, Blurton SP, Gondan M. Audiovisual detection at different intensities and delays. J Math Psychol 2019; 91:159-175. [PMID: 31404455; PMCID: PMC6688765; DOI: 10.1016/j.jmp.2019.05.001]
Abstract
In the redundant signals task, two target stimuli are associated with the same response. If both targets are presented together, redundancy gains are observed, as compared with single-target presentation. Different models explain these redundancy gains, including race and coactivation models (e.g., the Wiener diffusion superposition model, Schwarz, 1994, Journal of Mathematical Psychology, and the Ornstein-Uhlenbeck diffusion superposition model, Diederich, 1995, Journal of Mathematical Psychology). In the present study, two monkeys performed a simple detection task with auditory, visual and audiovisual stimuli of different intensities and onset asynchronies. In its basic form, a Wiener diffusion superposition model provided only a poor description of the observed data, especially of the detection rate (i.e., accuracy or hit rate) for low stimulus intensity. We expanded the model in two ways, by (A) adding a temporal deadline, that is, restricting the evidence accumulation process to a stopping time, and (B) adding a second "nogo" barrier representing target absence. We present closed-form solutions for the mean absorption times and absorption probabilities of a Wiener diffusion process with drift towards a single barrier in the presence of a temporal deadline (A), and numerically improved solutions for the two-barrier model (B). The best description of the data was obtained from the deadline model, which substantially outperformed the two-barrier approach.
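For orientation, the single-barrier Wiener first-passage quantities on which such closed-form solutions build are standard; as an illustrative sketch (generic notation, assuming drift \mu > 0 towards a barrier a > 0 and diffusion coefficient \sigma^2, not the exact parameterization used in the paper):

g(t) = \frac{a}{\sigma \sqrt{2\pi t^3}} \exp\!\left(-\frac{(a - \mu t)^2}{2\sigma^2 t}\right), \qquad P(T \le d) = \Phi\!\left(\frac{\mu d - a}{\sigma\sqrt{d}}\right) + e^{2\mu a/\sigma^2}\, \Phi\!\left(\frac{-\mu d - a}{\sigma\sqrt{d}}\right), \qquad E[T] = a/\mu,

where g is the first-passage time density (an inverse Gaussian), P(T \le d) is the probability of absorption before a deadline d, \Phi is the standard normal distribution function, and E[T] is the mean absorption time.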
Affiliation(s)
- Chandramouli Chandrasekaran, Department of Electrical Engineering, Stanford University, USA; Howard Hughes Medical Institute, Stanford University, USA; Department of Psychological and Brain Sciences, Boston University, USA; Department of Anatomy and Neurobiology, Boston University, USA
68. D’Agostino AR, Wesley MJ, Brown J, Fillmore MT. Effects of multisensory stop signals on alcohol-induced disinhibition in adults with ADHD. Exp Clin Psychopharmacol 2019; 27:247-256. [PMID: 30628812; PMCID: PMC6538486; DOI: 10.1037/pha0000251]
Abstract
Multisensory environments facilitate behavioral functioning in humans. The redundant signal effect (RSE) refers to the observation that individuals respond more quickly to stimuli when information is presented as multisensory, redundant stimuli (e.g., aurally and visually) rather than as a single stimulus presented to either modality alone. The RSE appears to be due to specialized multisensory neurons in the superior colliculus and association cortex that allow intersensory coactivation between the visual and auditory channels. Our studies show that the disinhibiting effects of alcohol are attenuated when stop signals are multisensory (e.g., visual + auditory stop signals) rather than unisensory (Roberts, Monem, & Fillmore, 2016). The present study expanded on this research to test the degree to which multisensory stop signals could also attenuate the disinhibiting effects of alcohol in those with attention-deficit hyperactivity disorder (ADHD), a clinical population characterized by poor impulse control. The study compared young adults with ADHD (n = 22) with healthy controls (n = 22) and examined the acute impairing effect of alcohol on response inhibition to stop signals that were presented as a unisensory (visual) stimulus or a multisensory (visual + auditory) stimulus. For controls, results showed that alcohol impaired response inhibition to unisensory stop signals but not to multisensory stop signals. Response inhibition in those with ADHD was impaired by alcohol regardless of whether stop signals were unisensory or multisensory. The failure of multisensory stimuli to attenuate alcohol impairment in those with ADHD highlights a specific vulnerability that could account for heightened sensitivity to the disruptive effects of alcohol.
Affiliation(s)
- Alexandra R. D’Agostino, Department of Psychology, University of Kentucky College of Arts and Sciences, 110 Kastle Hall, Lexington, KY 40506-0044, USA
- Michael J. Wesley, Department of Behavioral Science, University of Kentucky College of Medicine, 1100 Veterans Drive, Medical Behavioral Science Building Room 140, Lexington, KY 40536-0086, USA
- Jaime Brown, Department of Psychology, University of Kentucky College of Arts and Sciences, 110 Kastle Hall, Lexington, KY 40506-0044, USA
- Mark T. Fillmore, Department of Psychology, University of Kentucky College of Arts and Sciences, 110 Kastle Hall, Lexington, KY 40506-0044, USA
69. Stenzel H, Francombe J, Jackson PJB. Limits of Perceived Audio-Visual Spatial Coherence as Defined by Reaction Time Measurements. Front Neurosci 2019; 13:451. [PMID: 31191211; PMCID: PMC6538976; DOI: 10.3389/fnins.2019.00451]
Abstract
The ventriloquism effect describes the phenomenon whereby audio and visual signals with common features, such as a voice and a talking face, merge perceptually into one percept even if they are spatially misaligned. The boundaries of the fusion of spatially misaligned stimuli are of interest for the design of multimedia products, to ensure a perceptually satisfactory result. They have mainly been studied using continuous judgment scales and forced-choice measurement methods, and the results vary greatly between studies. The current experiment evaluates audio-visual fusion using reaction time (RT) measurements as an indirect method of measurement, in order to overcome these large variances. A two-alternative forced-choice (2AFC) word recognition test was designed and tested with noise and multi-talker speech background distractors. Visual signals were presented centrally and audio signals were presented at audio-visual offsets between 0° and 31° in azimuth. RT data were analyzed separately for the underlying Simon effect and for attentional effects. In the case of the attentional effects, three models were identified, but no single model could explain the observed RTs for all participants, so data were grouped and analyzed accordingly. The results show that significant differences in RTs are measured from 5° to 10° onwards for the Simon effect. The attentional effect varied at the same audio-visual offset for two out of the three defined participant groups. In contrast with prior research, these results suggest that, even for speech signals, small audio-visual offsets influence spatial integration subconsciously.
Affiliation(s)
- Hanne Stenzel, Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, United Kingdom
- Philip J. B. Jackson, Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, United Kingdom
70. Amsellem S, Höchenberger R, Ohla K. Visual-Olfactory Interactions: Bimodal Facilitation and Impact on the Subjective Experience. Chem Senses 2019. [PMID: 29528380; DOI: 10.1093/chemse/bjy018]
Abstract
Odors are inherently ambiguous and therefore susceptible to redundant sensory as well as contextual information. The identification of an odor object relies largely on visual input. Thus far, it is unclear whether visual and olfactory stimuli are indeed integrated at an early perceptual stage and what role the congruence between the visual and olfactory inputs plays. Previous studies on visual-olfactory interaction used either congruent or incongruent information, leaving it open whether nuances of visual-olfactory congruence shape perception differently. We aimed to answer 1) whether visual-olfactory information is integrated at early stages of processing, 2) whether visual-olfactory congruence is a gradual or dichotomous phenomenon, and 3) whether visual information influences bimodal stimulus evaluation and odor identity. We found a bimodal response time speedup that is consistent with parallel processing according to race models. Subjectively, the pleasantness of bimodal stimuli increased with increasing congruence, and orange images biased odor composition toward orange. Visual-olfactory congruence was perceived in gradual and distinct categories, consistent with the notion that congruence is a gradual phenomenon. Together, the data provide evidence for bimodal facilitation consistent with parallel processing of the visual and olfactory stimuli, and show that visual-olfactory interactions influence various levels of the subjective experience.
Affiliation(s)
- Sherlley Amsellem, Psychophysiology of Food Perception, German Institute of Human Nutrition Potsdam-Rehbruecke, Arthur-Scheunert Allee, Nuthetal, Germany
- Richard Höchenberger, Psychophysiology of Food Perception, German Institute of Human Nutrition Potsdam-Rehbruecke, Arthur-Scheunert Allee, Nuthetal, Germany
- Kathrin Ohla, Psychophysiology of Food Perception, German Institute of Human Nutrition Potsdam-Rehbruecke, Arthur-Scheunert Allee, Nuthetal, Germany
71. Visually induced gains in pitch discrimination: Linking audio-visual processing with auditory abilities. Atten Percept Psychophys 2019; 80:999-1010. [PMID: 29473142; DOI: 10.3758/s13414-017-1481-8]
Abstract
Perception is fundamentally a multisensory experience. The principle of inverse effectiveness (PoIE) states that the multisensory gain is maximal when responses to the unisensory constituents of the stimuli are weak. It is one of the basic principles underlying multisensory processing of spatiotemporally corresponding crossmodal stimuli and is well established at behavioral as well as neural levels. It is not yet clear, however, how modality-specific stimulus features influence discrimination of subtle changes in a crossmodally corresponding feature belonging to another modality. Here, we tested the hypothesis that reliance on visual cues for pitch discrimination follows the PoIE at the interindividual level (i.e., varies with varying levels of auditory-only pitch discrimination abilities). Using an oddball pitch discrimination task, we measured the effect of varying visually perceived vertical position in participants exhibiting a wide range of pitch discrimination abilities (i.e., musicians and nonmusicians). Visual cues significantly enhanced pitch discrimination as measured by the sensitivity index d', and more so in the crossmodally congruent than in the incongruent condition. The magnitude of the gain caused by compatible visual cues was associated with individual pitch discrimination thresholds, as predicted by the PoIE. This was not the case for the magnitude of the congruence effect, which was unrelated to individual pitch discrimination thresholds, indicating that the pitch-height association is robust to variations in auditory skills. Our findings shed light on individual differences in multisensory processing by suggesting that relevant multisensory information that crucially aids some perceivers' performance may be of less importance to others, depending on their unisensory abilities.
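The sensitivity index d' used here is the standard signal-detection measure; as a reminder (generic definition, not specific to this study):

d' = \Phi^{-1}(\text{hit rate}) - \Phi^{-1}(\text{false-alarm rate}),

where \Phi^{-1} is the inverse standard normal distribution function; larger d' indicates better discrimination of deviant (oddball) from standard tones.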
72. Lundqvist LM, Eriksson L. Age, cognitive load, and multimodal effects on driver response to directional warning. Appl Ergon 2019; 76:147-154. [PMID: 30642519; DOI: 10.1016/j.apergo.2019.01.002]
Abstract
Inattention can be considered a primary cause of vehicular accidents or crashes, and in-car warning signals are applied to alert the driver to take action even in automated vehicles. Because of age-related decline in older drivers' abilities, in-car warning signals may need to be adjusted for the older driver. We therefore investigated the effects of uni-, bi- and trimodal directional warnings (i.e., light, sound, vibration) on young and older drivers' responses in a driving simulator. A young group of 15 drivers (20-25 years of age) and an older group of 16 drivers (65-79 years of age) participated. In the simulations, a warning signal was presented at the left, the center, or the right in front of the participant. With a warning at the left, the center, and the right, the correct response was to steer to the right, brake, and steer to the left, respectively. The main results showed that the older drivers' responses were slower for each type of warning compared with the young drivers' responses. Overall, responses were slower with an added cognitively loading task. The only multimodal warning type inducing an overall faster response than its constituent warning types was the vibration-sound warning, and only for the older drivers. Additionally, with the groups' responses collapsed, such a true multimodal effect on response time was also found for the center vibration-sound warning (i.e., the braking response). The only multimodal warning showing a clear reduction in response errors compared with its constituent warning types was the vibration-sound warning for the older drivers during extra cognitive load. The main conclusion is that older drivers can benefit from bimodal warnings, as compared with unimodal ones, in terms of faster and more accurate responses. The potential superiority of trimodal warnings is nevertheless argued for.
Affiliation(s)
- Linda-Marie Lundqvist, Department of Social and Psychological Studies, Karlstad University, Karlstad, Sweden
- Lars Eriksson, Department of Social and Psychological Studies, Karlstad University, Karlstad, Sweden
73. Van der Stoep N, Van der Stigchel S, Van Engelen RC, Biesbroek JM, Nijboer TCW. Impairments in Multisensory Integration after Stroke. J Cogn Neurosci 2019; 31:885-899. [PMID: 30883294; DOI: 10.1162/jocn_a_01389]
Abstract
The integration of information from multiple senses leads to a plethora of behavioral benefits, most predominantly to faster and better detection, localization, and identification of events in the environment. Although previous studies of multisensory integration (MSI) in humans have provided insights into the neural underpinnings of MSI, studies of MSI at a behavioral level in individuals with brain damage are scarce. Here, a well-known psychophysical paradigm (the redundant target paradigm) was employed to quantify MSI in a group of stroke patients. The relation between MSI and lesion location was analyzed using lesion subtraction analysis. Twenty-one patients with ischemic infarctions and 14 healthy control participants responded to auditory, visual, and audiovisual targets in the left and right visual hemifield. Responses to audiovisual targets were faster than to unisensory targets. This could be due to MSI or statistical facilitation. Comparing the audiovisual RTs to the winner of a race between unisensory signals allowed us to determine whether participants could integrate auditory and visual information. The results indicated that (1) 33% of the patients showed an impairment in MSI; (2) patients with MSI impairment had left hemisphere and brainstem/cerebellar lesions; and (3) the left caudate, left pallidum, left putamen, left thalamus, left insula, left postcentral and precentral gyrus, left central opercular cortex, left amygdala, and left OFC were more often damaged in patients with MSI impairments. These results are the first to demonstrate the impact of brain damage on MSI in stroke patients using a well-established psychophysical paradigm.
Affiliation(s)
- Tanja C W Nijboer, Helmholtz Institute, Utrecht University; Brain Center Rudolph Magnus, University Medical Center, Utrecht University; Center for Brain Rehabilitation Medicine, Utrecht Medical Center, Utrecht University
74. Eördegh G, Őze A, Bodosi B, Puszta A, Pertich Á, Rosu A, Godó G, Nagy A. Multisensory guided associative learning in healthy humans. PLoS One 2019; 14:e0213094. [PMID: 30861023; PMCID: PMC6413907; DOI: 10.1371/journal.pone.0213094]
Abstract
Associative learning is a basic cognitive function by which discrete and often different percepts are linked together. The Rutgers Acquired Equivalence Test investigates a specific kind of associative learning, visually guided equivalence learning. The test consists of an acquisition (pair learning) and a test (rule transfer) phase, which are associated primarily with the function of the basal ganglia and the hippocampi, respectively. Earlier studies have shown that both of the brain structures fundamentally involved in visual associative learning, the basal ganglia and the hippocampi, receive not only visual but also multisensory information. However, no study has investigated whether multisensory-guided equivalence learning has any advantage over unimodal learning; thus, there were no data on the modality dependence or independence of equivalence learning. In the present study, we therefore introduced auditory- and multisensory (audiovisual)-guided equivalence learning paradigms and investigated the performance of 151 healthy volunteers in the visual as well as in the auditory and multisensory paradigms. Our results indicated that visual, auditory and multisensory guided associative learning is similarly effective in healthy humans, which suggests that the acquisition phase is fairly independent of the modality of the stimuli. On the other hand, in the test phase, where participants were presented with associations learned earlier as well as associations that had not been seen or heard before but were predictable, the multisensory stimuli elicited the best performance. The test phase, especially its generalization part, seems to be a harder cognitive task, in which multisensory information processing could improve the participants' performance.
Affiliation(s)
- Gabriella Eördegh, Department of Operative and Esthetic Dentistry, Faculty of Dentistry, University of Szeged, Szeged, Hungary
- Attila Őze, Department of Physiology, Faculty of Medicine, University of Szeged, Szeged, Hungary
- Balázs Bodosi, Department of Physiology, Faculty of Medicine, University of Szeged, Szeged, Hungary
- András Puszta, Department of Physiology, Faculty of Medicine, University of Szeged, Szeged, Hungary
- Ákos Pertich, Department of Physiology, Faculty of Medicine, University of Szeged, Szeged, Hungary
- Anett Rosu, Department of Psychiatry, Faculty of Medicine, University of Szeged, Szeged, Hungary
- György Godó, Csongrád County Health Care Center, Psychiatric Outpatient Care, Hódmezővásárhely, Hungary
- Attila Nagy, Department of Physiology, Faculty of Medicine, University of Szeged, Szeged, Hungary
75. Diederich A, Colonius H. Multisensory Integration and Exogenous Spatial Attention: A Time-window-of-integration Analysis. J Cogn Neurosci 2019; 31:699-710. [PMID: 30822208; DOI: 10.1162/jocn_a_01386]
Abstract
Although it is well documented that the occurrence of an irrelevant and nonpredictive sound facilitates motor responses to a subsequent target light appearing nearby, the cause of this "exogenous spatial cuing effect" has been under discussion. On the one hand, it has been postulated to be the result of a shift of visual spatial attention, possibly triggered by parietal and/or cortical supramodal "attention" structures. On the other hand, the effect has been considered to be due to multisensory integration based on the activation of multisensory convergence structures in the brain. Recent RT experiments have suggested that multisensory integration and exogenous spatial cuing differ in their temporal profiles of facilitation: when the nontarget occurs 100-200 msec before the target, facilitation is likely driven by crossmodal exogenous spatial attention, whereas multisensory integration effects are still seen when target and nontarget are presented nearly simultaneously. Here, we develop an extension of the time-window-of-integration model that combines both mechanisms within the same formal framework. The model is illustrated by fitting it to data from a focused attention task with a visual target and an auditory nontarget presented at horizontally or vertically varying positions. Results show that both spatial cuing and multisensory integration may coexist in a single trial in bringing about crossmodal facilitation of RTs. Moreover, the formal analysis via the time window of integration allows one to predict and quantify the contribution of either mechanism across different spatiotemporal conditions.
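In the time-window-of-integration (TWIN) framework referenced here, integration in a focused-attention trial is typically assumed to occur when the peripheral processing of the auditory nontarget terminates before that of the visual target and the target terminates within a window of width \omega thereafter; a schematic sketch of that assumption (generic notation, not the exact parameterization fitted in the paper):

P(I) = P(A < V \le A + \omega), \qquad E[\mathrm{RT}_{AV}] = E[W_1] + E[W_2] - P(I)\,\Delta,

where A and V are the peripheral processing times of the auditory and visual stimuli, W_1 and W_2 are the first- and second-stage processing times, and \Delta is the expected second-stage facilitation when integration occurs.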
Collapse
|
76
|
Ohla K, Höchenberger R, Freiherr J, Lundström JN. Superadditive and Subadditive Neural Processing of Dynamic Auditory-Visual Objects in the Presence of Congruent Odors. Chem Senses 2019; 43:35-44. [PMID: 29045615 DOI: 10.1093/chemse/bjx068] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Our sensory experiences comprise a variety of different inputs at any given time. Some of these experiences are unmistakable, others are ambiguous and profit from additional sensory information. Here, we explored whether the presence of a congruent odor influences the neural processing and sensory interaction of audio-visual objects using degraded videos (V) and sounds (A) of dynamic objects in unimodal and bimodal (AV) combinations without or with a congruent odor (VO, AO, AVO). Analyses of EEG data revealed superadditive and subadditive interaction effects. The topography and timing of these effects suggest evaluative rather than sensory processes as the underlying cause. Together, the results suggest that the mere presence of an odor affects the processing of A, V, and AV objects differently while multisensory interactions of AV and AVO objects have common neuronal mechanisms pointing to a robust, modality-independent network for the processing of redundant sensory information.
Collapse
Affiliation(s)
- Kathrin Ohla
- German Institute of Human Nutrition Potsdam-Rehbruecke, Germany
- Monell Chemical Senses Center, USA
| | | | - Jessica Freiherr
- Uniklinik RWTH Aachen, Diagnostic and Interventional Neuroradiology, Germany
- Fraunhofer-Institut für Verfahrenstechnik und Verpackung IVV, Sensory Analytics, Germany
| | - Johan N Lundström
- Monell Chemical Senses Center, USA
- Department of Clinical Neuroscience, Karolinska Institutet, Sweden
| |
Collapse
|
77
|
Meijer GT, Mertens PEC, Pennartz CMA, Olcese U, Lansink CS. The circuit architecture of cortical multisensory processing: Distinct functions jointly operating within a common anatomical network. Prog Neurobiol 2019; 174:1-15. [PMID: 30677428 DOI: 10.1016/j.pneurobio.2019.01.004] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2017] [Revised: 12/21/2018] [Accepted: 01/21/2019] [Indexed: 12/16/2022]
Abstract
Our perceptual systems continuously process sensory inputs from different modalities and organize these streams of information such that our subjective representation of the outside world is a unified experience. By doing so, they also enable further cognitive processing and behavioral action. While cortical multisensory processing has been extensively investigated in terms of psychophysics and mesoscale neural correlates, an in-depth understanding of the underlying circuit-level mechanisms is lacking. Previous studies on circuit-level mechanisms of multisensory processing have predominantly focused on cue integration, i.e., the mechanism by which sensory features from different modalities are combined to yield more reliable stimulus estimates than those obtained by using single sensory modalities. In this review, we expand the framework on the circuit-level mechanisms of cortical multisensory processing by highlighting that multisensory processing is a family of functions, rather than a single operation, which involves not only the integration but also the segregation of modalities. In addition, multisensory processing depends not only on stimulus features but also on cognitive resources, such as attention and memory, as well as on behavioral context, to determine the behavioral outcome. We focus on rodent models as a powerful instrument to study the circuit-level bases of multisensory processes, because they enable combining cell-type-specific recording and interventional techniques with complex behavioral paradigms. We conclude that distinct multisensory processes share overlapping anatomical substrates, are implemented by diverse neuronal micro-circuitries that operate in parallel, and are flexibly recruited based on factors such as stimulus features and behavioral constraints.
Collapse
Affiliation(s)
- Guido T Meijer
- Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands.
| | - Paul E C Mertens
- Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands.
| | - Cyriel M A Pennartz
- Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands.
| | - Umberto Olcese
- Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands.
| | - Carien S Lansink
- Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands.
| |
Collapse
|
78
|
Living and Working in a Multisensory World: From Basic Neuroscience to the Hospital. MULTIMODAL TECHNOLOGIES AND INTERACTION 2019. [DOI: 10.3390/mti3010002] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023] Open
Abstract
The intensive care unit (ICU) of a hospital is an environment subjected to ceaseless noise. Patient alarms contribute to the saturated auditory environment and often overwhelm healthcare providers with constant and false alarms. This may lead to alarm fatigue and prevent optimum patient care. In response, a multisensory alarm system developed with consideration for human neuroscience and basic music theory is proposed as a potential solution. The integration of auditory, visual, and other sensory output within an alarm system can be used to convey more meaningful clinical information about patient vital signs in the ICU and operating room to ultimately improve patient outcomes.
Collapse
|
79
|
Jerger S, Damian MF, Karl C, Abdi H. Developmental Shifts in Detection and Attention for Auditory, Visual, and Audiovisual Speech. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2018; 61:3095-3112. [PMID: 30515515 PMCID: PMC6440305 DOI: 10.1044/2018_jslhr-h-17-0343] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/10/2017] [Revised: 01/02/2018] [Accepted: 07/16/2018] [Indexed: 06/09/2023]
Abstract
PURPOSE Successful speech processing depends on our ability to detect and integrate multisensory cues, yet there is minimal research on multisensory speech detection and integration by children. To address this need, we studied the development of speech detection for auditory (A), visual (V), and audiovisual (AV) input. METHOD Participants were 115 typically developing children clustered into age groups between 4 and 14 years. Speech detection (quantified by response times [RTs]) was determined for 1 stimulus, /buh/, presented in A, V, and AV modes (articulating vs. static facial conditions). Performance was analyzed not only in terms of traditional mean RTs but also in terms of the faster versus slower RTs (defined by the 1st vs. 3rd quartiles of RT distributions). These time regions were conceptualized respectively as reflecting optimal detection with efficient focused attention versus less optimal detection with inefficient focused attention due to attentional lapses. RESULTS Mean RTs indicated better detection (a) of multisensory AV speech than A speech only in 4- to 5-year-olds and (b) of A and AV inputs than V input in all age groups. The faster RTs revealed that AV input did not improve detection in any group. The slower RTs indicated that (a) the processing of silent V input was significantly faster for the articulating than static face and (b) AV speech or facial input significantly minimized attentional lapses in all groups except 6- to 7-year-olds (a peaked U-shaped curve). Apparently, the AV benefit observed for mean performance in 4- to 5-year-olds arose from effects of attention. CONCLUSIONS The faster RTs indicated that AV input did not enhance detection in any group, but the slower RTs indicated that AV speech and dynamic V speech (mouthing) significantly minimized attentional lapses and thus did influence performance. Overall, A and AV inputs were detected consistently faster than V input; this result endorsed stimulus-bound auditory processing by these children.
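The quartile-based split of the RT distribution described above reduces to a simple summary computation. The sketch below only illustrates that kind of summary; the simulated data and the exact cut-off convention are assumptions, not the study's dataset or analysis code.

```python
import numpy as np

def rt_regions(rts):
    """Summarise an RT distribution: the overall mean plus the mean of the
    fastest responses (at or below the 1st quartile) and of the slowest
    responses (at or above the 3rd quartile)."""
    rts = np.asarray(rts, dtype=float)
    q1, q3 = np.percentile(rts, [25, 75])
    return {"mean": rts.mean(),
            "faster_region": rts[rts <= q1].mean(),
            "slower_region": rts[rts >= q3].mean()}

# Illustrative, simulated ex-Gaussian-like RTs in ms (not the study's data)
rng = np.random.default_rng(1)
simulated_rts = rng.normal(450, 60, 500) + rng.exponential(120, 500)
print(rt_regions(simulated_rts))
```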
Collapse
Affiliation(s)
- Susan Jerger
- School of Behavioral and Brain Sciences, GR4.1, University of Texas at Dallas, Richardson
- Callier Center for Communication Disorders, Richardson, TX
| | - Markus F. Damian
- School of Experimental Psychology, University of Bristol, United Kingdom
| | - Cassandra Karl
- School of Behavioral and Brain Sciences, GR4.1, University of Texas at Dallas, Richardson
- Callier Center for Communication Disorders, Richardson, TX
| | - Hervé Abdi
- School of Behavioral and Brain Sciences, GR4.1, University of Texas at Dallas, Richardson
| |
Collapse
|
80
|
Bazilinskyy P, de Winter J. Crowdsourced Measurement of Reaction Times to Audiovisual Stimuli With Various Degrees of Asynchrony. HUMAN FACTORS 2018; 60:1192-1206. [PMID: 30036098 PMCID: PMC6207992 DOI: 10.1177/0018720818787126] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/06/2017] [Accepted: 06/10/2018] [Indexed: 06/08/2023]
Abstract
OBJECTIVE This study was designed to replicate past research concerning reaction times to audiovisual stimuli with different stimulus onset asynchrony (SOA) using a large sample of crowdsourcing respondents. BACKGROUND Research has shown that reaction times are fastest when an auditory and a visual stimulus are presented simultaneously and that SOA causes an increase in reaction time, this increase being dependent on stimulus intensity. Research on audiovisual SOA has been conducted with small numbers of participants. METHOD Participants (N = 1,823) each performed 176 reaction time trials consisting of 29 SOA levels and three visual intensity levels, using CrowdFlower, with a compensation of US$0.20 per participant. Results were verified with a local Web-in-lab study (N = 34). RESULTS The results replicated past research, with a V shape of mean reaction time as a function of SOA, the V shape being stronger for lower-intensity visual stimuli. The level of SOA affected mainly the right side of the reaction time distribution, whereas the fastest 5% was hardly affected. The variability of reaction times was higher for the crowdsourcing study than for the Web-in-lab study. CONCLUSION Crowdsourcing is a promising medium for reaction time research that involves small temporal differences in stimulus presentation. The observed effects of SOA can be explained by an independent-channels mechanism and also by some participants not perceiving the auditory or visual stimulus, hardware variability, misinterpretation of the task instructions, or lapses in attention. APPLICATION The obtained knowledge on the distribution of reaction times may benefit the design of warning systems.
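Per-SOA summaries of this kind (mean RT and the fastest 5% of each distribution) amount to a simple grouped computation. The snippet below is a generic sketch with simulated trials, not the crowdsourcing pipeline itself; the SOA levels and RT model are illustrative assumptions.

```python
import numpy as np

def summarise_by_soa(soas, rts):
    """For each SOA level, return the mean RT and the 5th percentile
    ('fastest 5%') of the RT distribution. Inputs are per-trial arrays."""
    soas, rts = np.asarray(soas), np.asarray(rts, dtype=float)
    return {int(s): (rts[soas == s].mean(), np.percentile(rts[soas == s], 5))
            for s in np.unique(soas)}

# Simulated trials with a crude V-shaped mean RT over SOA (illustrative only)
rng = np.random.default_rng(2)
soas = rng.choice([-500, -250, 0, 250, 500], size=5000)      # ms, auditory lead/lag
rts = 300 + 0.2 * np.abs(soas) + rng.exponential(80, 5000)
print(summarise_by_soa(soas, rts))
```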
Collapse
Affiliation(s)
- Pavlo Bazilinskyy
- Pavlo Bazilinskyy, Department of BioMechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Mekelweg 2, 2628 CD Delft, the Netherlands; e-mail:
| | | |
Collapse
|
81
|
Audiovisual Lexical Retrieval Deficits Following Left Hemisphere Stroke. Brain Sci 2018; 8:brainsci8120206. [PMID: 30486517 PMCID: PMC6316523 DOI: 10.3390/brainsci8120206] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2018] [Revised: 11/18/2018] [Accepted: 11/27/2018] [Indexed: 11/27/2022] Open
Abstract
Binding the sensory features of what we hear and see across modalities allows the formation of a coherent percept and access to semantics. Previous work on object naming has focused on visual confrontation naming with limited research in nonverbal auditory or multisensory processing. To investigate neural substrates and sensory effects of lexical retrieval, we evaluated healthy adults (n = 118) and left hemisphere stroke patients (LHD, n = 42) in naming manipulable objects across auditory (sound), visual (picture), and multisensory (audiovisual) conditions. LHD patients were divided into groups with cortical, cortical–subcortical, or subcortical lesions (CO, CO–SC, SC), and specific lesion locations were investigated in a predictive model. Subjects showed lower accuracy in auditory naming relative to the other conditions. Controls demonstrated greater naming accuracy and faster reaction times across all conditions compared to LHD patients. Naming across conditions was most severely impaired in CO patients. Both auditory and visual naming accuracy were impacted by temporal lobe involvement, although auditory naming was sensitive to lesions extending subcortically. Only controls demonstrated significant improvement over visual naming with the addition of auditory cues (i.e., multisensory condition). Results support overlapping neural networks for visual and auditory modalities related to semantic integration in lexical retrieval and temporal lobe involvement, while multisensory integration was impacted by both occipital and temporal lobe lesion involvement. The findings support modality specificity in naming and suggest that auditory naming is mediated by a distributed cortical–subcortical network overlapping with networks mediating spatiotemporal aspects of skilled movements producing sound.
Collapse
|
82
|
Vision dominates audition in adults but not children: A meta-analysis of the Colavita effect. Neurosci Biobehav Rev 2018; 94:286-301. [DOI: 10.1016/j.neubiorev.2018.07.012] [Citation(s) in RCA: 38] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2017] [Revised: 07/18/2018] [Accepted: 07/22/2018] [Indexed: 01/17/2023]
|
83
|
Schormans AL, Allman BL. Behavioral Plasticity of Audiovisual Perception: Rapid Recalibration of Temporal Sensitivity but Not Perceptual Binding Following Adult-Onset Hearing Loss. Front Behav Neurosci 2018; 12:256. [PMID: 30429780 PMCID: PMC6220077 DOI: 10.3389/fnbeh.2018.00256] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2018] [Accepted: 10/11/2018] [Indexed: 11/13/2022] Open
Abstract
The ability to accurately integrate or bind stimuli from more than one sensory modality is highly dependent on the features of the stimuli, such as their intensity and relative timing. Previous studies have demonstrated that the ability to perceptually bind stimuli is impaired in various clinical conditions such as autism, dyslexia, schizophrenia, as well as aging. However, it remains unknown if adult-onset hearing loss, separate from aging, influences audiovisual temporal acuity. In the present study, rats were trained using appetitive operant conditioning to perform an audiovisual temporal order judgment (TOJ) task or synchrony judgment (SJ) task in order to investigate the nature and extent that audiovisual temporal acuity is affected by adult-onset hearing loss, with a specific focus on the time-course of perceptual changes following loud noise exposure. In our first series of experiments, we found that audiovisual temporal acuity in normal-hearing rats was influenced by sound intensity, such that when a quieter sound was presented, the rats were biased to perceive the audiovisual stimuli as asynchronous (SJ task), or as though the visual stimulus was presented first (TOJ task). Psychophysical testing demonstrated that noise-induced hearing loss did not alter the rats' temporal sensitivity 2-3 weeks post-noise exposure, despite rats showing an initial difficulty in differentiating the temporal order of audiovisual stimuli. Furthermore, consistent with normal-hearing rats, the timing at which the stimuli were perceived as simultaneous (i.e., the point of subjective simultaneity, PSS) remained sensitive to sound intensity following hearing loss. Contrary to the TOJ task, hearing loss resulted in persistent impairments in asynchrony detection during the SJ task, such that a greater proportion of trials were now perceived as synchronous. Moreover, psychophysical testing found that noise-exposed rats had altered audiovisual synchrony perception, consistent with impaired audiovisual perceptual binding (e.g., an increase in the temporal window of integration on the right side of simultaneity; right temporal binding window (TBW)). Ultimately, our collective results show for the first time that adult-onset hearing loss leads to behavioral plasticity of audiovisual perception, characterized by a rapid recalibration of temporal sensitivity but a persistent impairment in the perceptual binding of audiovisual stimuli.
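Psychophysical estimates such as the point of subjective simultaneity (PSS) and the width of a temporal binding window are typically obtained by fitting a function to the proportion of "synchronous" reports across SOAs. The sketch below shows one common approach (a Gaussian fit with a half-maximum width criterion); the data points and conventions are illustrative assumptions, not the study's exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def sj_curve(soa, pss, sigma, amp):
    """Proportion of 'synchronous' reports modelled as a Gaussian over SOA,
    peaking at the point of subjective simultaneity (PSS)."""
    return amp * np.exp(-(soa - pss) ** 2 / (2 * sigma ** 2))

# Illustrative group-level SJ data (proportion 'synchronous' per SOA, in ms)
soa = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)
p_sync = np.array([0.05, 0.15, 0.55, 0.80, 0.90, 0.85, 0.65, 0.25, 0.08])

(pss, sigma, amp), _ = curve_fit(sj_curve, soa, p_sync, p0=(0.0, 100.0, 1.0))
fwhm = 2 * abs(sigma) * np.sqrt(2 * np.log(2))  # window width at half the fitted peak
print(f"PSS = {pss:.1f} ms, window width (FWHM) = {fwhm:.0f} ms")
```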
Collapse
Affiliation(s)
- Ashley L Schormans
- Department of Anatomy and Cell Biology, Schulich School of Medicine and Dentistry, University of Western Ontario, London, ON, Canada
| | - Brian L Allman
- Department of Anatomy and Cell Biology, Schulich School of Medicine and Dentistry, University of Western Ontario, London, ON, Canada
| |
Collapse
|
84
|
Roque J, Lafraire J, Spence C, Auvray M. The influence of audiovisual stimuli cuing temperature, carbonation, and color on the categorization of freshness in beverages. J SENS STUD 2018. [DOI: 10.1111/joss.12469] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Affiliation(s)
- Jérémy Roque
- Breakthrough Innovation Group; Pernod Ricard; Paris, France
- Institut Paul Bocuse Research Center; Ecully, France
- Sorbonne Université, UPMC, CNRS, Institut des Systèmes Intelligents et de Robotique (ISIR); Paris, France
| | | | - Charles Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology; Oxford University; Oxford, United Kingdom
| | - Malika Auvray
- Sorbonne Université, UPMC, CNRS, Institut des Systèmes Intelligents et de Robotique (ISIR); Paris, France
| |
Collapse
|
85
|
Feenders G, Klump GM. Violation of the Unity Assumption Disrupts Temporal Ventriloquism Effect in Starlings. Front Psychol 2018; 9:1386. [PMID: 30154744 PMCID: PMC6102397 DOI: 10.3389/fpsyg.2018.01386] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2018] [Accepted: 07/17/2018] [Indexed: 11/13/2022] Open
Abstract
When stimuli from different sensory modalities are received, they may be combined by the brain to form a multisensory percept. One key mechanism for multisensory binding is the unity assumption, under which multisensory stimuli that share certain physical properties, like temporal and/or spatial correspondence, are grouped together as deriving from one object. In humans, evidence for a role of the unity assumption has been found in spatial tasks and also in temporal tasks using stimuli that share physical properties (speech-related stimuli, musical and synesthetically congruent stimuli). In our study, we investigate the role of the unity assumption in an animal model in a temporal order judgment task. When subjects are asked to indicate which of two spatially separated visual stimuli appeared first in time, performance improves when the visual stimuli are paired (in time) with spatially non-informative acoustic cues, a phenomenon known as the temporal ventriloquism effect. Here, we show that European starlings perform better when one singleton acoustic cue is paired with the first visual stimulus as compared to pairing with the second visual stimulus. This shows, in combination with our previous study, that a non-informative singleton acoustic cue, when temporally paired with the first visual stimulus, triggers alerting, whereas, when temporally paired with the second visual stimulus, it prevents a temporal ventriloquism effect because the unity assumption is violated. Thus, the unity assumption influences sensory perception not only in humans but also in an animal model. The importance of the unity assumption in this task supports the idea that the temporal ventriloquism effect, similar to the spatial ventriloquism effect, is based on multisensory binding and integration but not on alerting effects.
Collapse
Affiliation(s)
- Gesa Feenders
- Cluster of Excellence Hearing4all, Animal Physiology and Behaviour Group, Department of Neuroscience, School of Medicine and Health Sciences, University of Oldenburg, Oldenburg, Germany
| | - Georg M Klump
- Cluster of Excellence Hearing4all, Animal Physiology and Behaviour Group, Department of Neuroscience, School of Medicine and Health Sciences, University of Oldenburg, Oldenburg, Germany
| |
Collapse
|
86
|
Effect of acceleration of auditory inputs on the primary somatosensory cortex in humans. Sci Rep 2018; 8:12883. [PMID: 30150686 PMCID: PMC6110726 DOI: 10.1038/s41598-018-31319-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2018] [Accepted: 08/17/2018] [Indexed: 11/09/2022] Open
Abstract
Cross-modal interaction occurs during the early stages of processing in the sensory cortex; however, its effect on the speed of neuronal activity remains unclear. We used magnetoencephalography to investigate whether auditory stimulation influences the initial cortical activity in the primary somatosensory cortex. A 25-ms pure tone was randomly presented to the left or right side of healthy volunteers at 1000 ms into a train of electrical pulses applied to the left or right median nerve at 20 Hz for 1500 ms; a pulse train was used because no cross-modal effect was elicited by a single pulse. The latency of N20 m originating from Brodmann's area 3b was measured for each pulse. The auditory stimulation significantly shortened the N20 m latency at 1050 and 1100 ms. This reduction in N20 m latency was identical for the ipsilateral and contralateral sounds at both latency points. Therefore, somatosensory-auditory interaction, such as input to area 3b from the thalamus, occurred during the early stages of synaptic transmission. Auditory information that converged on the somatosensory system was considered to have arisen from the early stages of the feedforward pathway. Acceleration of information processing through this cross-modal interaction seemed to be partly due to faster processing in the sensory cortex.
Collapse
|
87
|
Wolf D, Mittelberg I, Rekittke LM, Bhavsar S, Zvyagintsev M, Haeck A, Cong F, Klasen M, Mathiak K. Interpretation of Social Interactions: Functional Imaging of Cognitive-Semiotic Categories During Naturalistic Viewing. Front Hum Neurosci 2018; 12:296. [PMID: 30154703 PMCID: PMC6102316 DOI: 10.3389/fnhum.2018.00296] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2018] [Accepted: 07/06/2018] [Indexed: 01/01/2023] Open
Abstract
Social interactions arise from patterns of communicative signs, whose perception and interpretation require a multitude of cognitive functions. The semiotic framework of Peirce's Universal Categories (UCs) laid ground for a novel cognitive-semiotic typology of social interactions. During functional magnetic resonance imaging (fMRI), 16 volunteers watched a movie narrative encompassing verbal and non-verbal social interactions. Three types of non-verbal interactions were coded ("unresolved," "non-habitual," and "habitual") based on a typology reflecting Peirce's UCs. As expected, the auditory cortex responded to verbal interactions, but non-verbal interactions modulated temporal areas as well. Conceivably, when speech was lacking, ambiguous visual information (unresolved interactions) primed auditory processing in contrast to learned behavioral patterns (habitual interactions). The latter recruited a parahippocampal-occipital network supporting conceptual processing and associative memory retrieval. Requesting semiotic contextualization, non-habitual interactions activated visuo-spatial and contextual rule-learning areas such as the temporo-parietal junction and right lateral prefrontal cortex. In summary, the cognitive-semiotic typology reflected distinct sensory and association networks underlying the interpretation of observed non-verbal social interactions.
Collapse
Affiliation(s)
- Dhana Wolf
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany; Natural Media Lab, Human Technology Centre (HumTec), RWTH Aachen University, Aachen, Germany
| | - Irene Mittelberg
- Natural Media Lab, Human Technology Centre (HumTec), RWTH Aachen University, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen University, Aachen, Germany
| | - Linn-Marlen Rekittke
- Natural Media Lab, Human Technology Centre (HumTec), RWTH Aachen University, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen University, Aachen, Germany
| | - Saurabh Bhavsar
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
| | - Mikhail Zvyagintsev
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany; Brain Imaging Facility, Interdisciplinary Centre for Clinical Studies (IZKF), Medical Faculty, RWTH Aachen University, Aachen, Germany
| | - Annina Haeck
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
| | - Fengyu Cong
- Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
| | - Martin Klasen
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
| | - Klaus Mathiak
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen University, Aachen, Germany; JARA-Translational Brain Medicine, Aachen, Germany
| |
Collapse
|
88
|
Finotti G, Migliorati D, Costantini M. Multisensory integration, body representation and hyperactivity of the immune system. Conscious Cogn 2018; 63:61-73. [PMID: 29957448 DOI: 10.1016/j.concog.2018.06.009] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2017] [Revised: 06/05/2018] [Accepted: 06/06/2018] [Indexed: 10/28/2022]
Abstract
Multisensory stimuli are integrated over a delimited window of temporal asynchronies. This window is highly variable across individuals, but the origins of this variability are still not clear. We hypothesized that immune system functioning could partially account for this variability. In two experiments, we compared key aspects of multisensory integration between allergic participants and healthy controls. First, we tested the temporal constraint of multisensory integration, as measured by the temporal binding window. Second, we tested multisensory body representation, as indexed by the Rubber Hand Illusion (RHI). Results showed that allergic participants have a narrower temporal binding window and are less susceptible to the RHI than healthy controls. Overall, we provide evidence linking multisensory integration processes and the activity of the immune system. The present findings are discussed within the context of the effect of immune molecules on the brain mechanisms enabling multisensory integration and multisensory body representation.
Collapse
Affiliation(s)
- Gianluca Finotti
- Centre for Brain Science, Department of Psychology, University of Essex, United Kingdom; Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio, Chieti, Italy; Institute for Advanced Biomedical Technologies - ITAB, University G. d'Annunzio, Chieti, Italy.
| | - Daniele Migliorati
- Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio, Chieti, Italy; Institute for Advanced Biomedical Technologies - ITAB, University G. d'Annunzio, Chieti, Italy
| | - Marcello Costantini
- Centre for Brain Science, Department of Psychology, University of Essex, United Kingdom; Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio, Chieti, Italy; Institute for Advanced Biomedical Technologies - ITAB, University G. d'Annunzio, Chieti, Italy.
| |
Collapse
|
89
|
Stevenson RA, Sheffield SW, Butera IM, Gifford RH, Wallace MT. Multisensory Integration in Cochlear Implant Recipients. Ear Hear 2018; 38:521-538. [PMID: 28399064 DOI: 10.1097/aud.0000000000000435] [Citation(s) in RCA: 47] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
Speech perception is inherently a multisensory process involving integration of auditory and visual cues. Multisensory integration in cochlear implant (CI) recipients is a unique circumstance in that the integration occurs after auditory deprivation and the provision of hearing via the CI. Despite the clear importance of multisensory cues for perception, in general, and for speech intelligibility, specifically, the topic of multisensory perceptual benefits in CI users has only recently begun to emerge as an area of inquiry. We review the research that has been conducted on multisensory integration in CI users to date and suggest a number of areas needing further research. The overall pattern of results indicates that many CI recipients show at least some perceptual gain that can be attributable to multisensory integration. The extent of this gain, however, varies based on a number of factors, including age of implantation and specific task being assessed (e.g., stimulus detection, phoneme perception, word recognition). Although both children and adults with CIs obtain audiovisual benefits for phoneme, word, and sentence stimuli, neither group shows demonstrable gain for suprasegmental feature perception. Additionally, only early-implanted children and the highest performing adults obtain audiovisual integration benefits similar to individuals with normal hearing. Increasing age of implantation in children is associated with poorer gains resultant from audiovisual integration, suggesting a sensitive period in development for the brain networks that subserve these integrative functions, as well as length of auditory experience. This finding highlights the need for early detection of and intervention for hearing loss, not only in terms of auditory perception, but also in terms of the behavioral and perceptual benefits of audiovisual processing. Importantly, patterns of auditory, visual, and audiovisual responses suggest that underlying integrative processes may be fundamentally different between CI users and typical-hearing listeners. Future research, particularly in low-level processing tasks such as signal detection will help to further assess mechanisms of multisensory integration for individuals with hearing loss, both with and without CIs.
Collapse
Affiliation(s)
- Ryan A Stevenson
- 1Department of Psychology, University of Western Ontario, London, Ontario, Canada; 2Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada; 3Walter Reed National Military Medical Center, Audiology and Speech Pathology Center, London, Ontario, Canada; 4Vanderbilt Brain Institute, Nashville, Tennessee; 5Vanderbilt Kennedy Center, Nashville, Tennessee; 6Department of Psychology, Vanderbilt University, Nashville, Tennessee; 7Department of Psychiatry, Vanderbilt University Medical Center, Nashville, Tennessee; and 8Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
| | | | | | | | | |
Collapse
|
90
|
Audiovisual integration in depth: multisensory binding and gain as a function of distance. Exp Brain Res 2018; 236:1939-1951. [PMID: 29700577 PMCID: PMC6010498 DOI: 10.1007/s00221-018-5274-7] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2017] [Accepted: 02/19/2018] [Indexed: 11/01/2022]
Abstract
The integration of information across sensory modalities is dependent on the spatiotemporal characteristics of the stimuli that are paired. Despite large variation in the distance over which events occur in our environment, relatively little is known regarding how stimulus-observer distance affects multisensory integration. Prior work has suggested that exteroceptive stimuli are integrated over larger temporal intervals in near relative to far space, and that larger multisensory facilitations are evident in far relative to near space. Here, we sought to examine the interrelationship between these previously established distance-related features of multisensory processing. Participants performed an audiovisual simultaneity judgment and redundant target task in near and far space, while audiovisual stimuli were presented at a range of temporal delays (i.e., stimulus onset asynchronies). In line with the previous findings, temporal acuity was poorer in near relative to far space. Furthermore, reaction times to asynchronously presented audiovisual targets suggested a temporal window for fast detection, that is, a range of stimulus asynchronies that was also larger in near as compared to far space. However, the range of reaction times over which multisensory response enhancement was observed was limited to a restricted range of relatively small (i.e., 150 ms) asynchronies, and did not differ significantly between near and far space. Furthermore, for synchronous presentations, these distance-related (i.e., near vs. far) modulations in temporal acuity and multisensory gain correlated negatively at an individual subject level. Thus, the findings support the conclusion that multisensory temporal binding and gain are asymmetrically modulated as a function of distance from the observer, and specify that this relationship is specific for temporally synchronous audiovisual stimulus presentations.
Collapse
|
91
|
Redundant-target processing is robust against changes to task load. COGNITIVE RESEARCH-PRINCIPLES AND IMPLICATIONS 2018; 3:4. [PMID: 29497688 PMCID: PMC5820380 DOI: 10.1186/s41235-017-0088-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/16/2017] [Accepted: 12/25/2017] [Indexed: 11/10/2022]
Abstract
Monitoring visual displays while performing other tasks is commonplace in many operational environments. Although dividing attention between tasks can impair monitoring accuracy and response times, it is unclear whether it also reduces processing efficiency for visual targets. Thus, the current three experiments examined the effects of dual-tasking on target processing in the visual periphery. A total of 120 undergraduate students performed a redundant-target task either by itself (Experiment 1a) or in conjunction with a manual tracking task (Experiments 1b-3). Target processing efficiency was assessed using measures of workload resilience. Processing of redundant targets in Experiments 1-2 was less efficient than predicted by a standard parallel race model, giving evidence for limited-capacity, parallel processing. However, when stimulus characteristics forced participants to process targets in serial (Experiment 3), processing efficiency became super-capacity. Across the three experiments, dual-tasking had no effect on target processing efficiency. Results suggest that a central task slows target detection in the display periphery, but does not change the efficiency with which multiple concurrent targets are processed.
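The abstract does not spell out its efficiency measure beyond "workload resilience"; a closely related and widely used statistic is the Townsend-Nozawa capacity coefficient, sketched below under that assumption. Values below 1 indicate limited capacity relative to the standard parallel race baseline, values above 1 super capacity.

```python
import numpy as np

def survivor(rts, t_grid):
    """Empirical survivor function S(t) = P(RT > t)."""
    rts = np.asarray(rts, dtype=float)
    return np.array([(rts > t).mean() for t in t_grid])

def capacity_coefficient(rt_redundant, rt_a, rt_b, t_grid):
    """Capacity coefficient for an OR (first-terminating) design:
    C(t) = log S_AB(t) / [log S_A(t) + log S_B(t)].
    C(t) < 1: limited capacity; C(t) = 1: unlimited (race baseline);
    C(t) > 1: super capacity."""
    with np.errstate(divide="ignore", invalid="ignore"):
        log_ab = np.log(survivor(rt_redundant, t_grid))
        log_single = np.log(survivor(rt_a, t_grid)) + np.log(survivor(rt_b, t_grid))
        return log_ab / log_single
```

In practice the coefficient is interpreted only at times t where all three survivor functions lie strictly between 0 and 1.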
Collapse
|
92
|
Does hearing aid use affect audiovisual integration in mild hearing impairment? Exp Brain Res 2018; 236:1161-1179. [PMID: 29453491 DOI: 10.1007/s00221-018-5206-6] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2017] [Accepted: 02/11/2018] [Indexed: 10/18/2022]
Abstract
There is converging evidence for altered audiovisual integration abilities in hearing-impaired individuals and those with profound hearing loss who are provided with cochlear implants, compared to normal-hearing adults. Still, little is known about the effects of hearing aid use on audiovisual integration in mild hearing loss, although this is one of the most prevalent conditions in the elderly and yet often remains untreated in its early stages. This study investigated differences in the strength of audiovisual integration between elderly hearing aid users and those with the same degree of mild hearing loss who were not using hearing aids, the non-users, by measuring their susceptibility to the sound-induced flash illusion. We also explored the corresponding window of integration by varying the stimulus onset asynchronies. To examine general group differences that are not attributable to specific hearing aid settings but rather reflect overall changes associated with habitual hearing aid use, the group of hearing aid users was tested unaided while individually controlling for audibility. We found greater audiovisual integration together with a wider window of integration in hearing aid users compared to their age-matched untreated peers. Signal detection analyses indicate that a change in perceptual sensitivity as well as in bias may underlie the observed effects. Our results and comparisons with other studies in normal-hearing older adults suggest that both mild hearing impairment and hearing aid use seem to affect audiovisual integration, possibly in the sense that hearing aid use may reverse the effects of hearing loss on audiovisual integration. We suggest that these findings may be particularly important for auditory rehabilitation and call for a longitudinal study.
Collapse
|
93
|
Sürig R, Bottari D, Röder B. Transfer of Audio-Visual Temporal Training to Temporal and Spatial Audio-Visual Tasks. Multisens Res 2018; 31:556-578. [PMID: 31264612 DOI: 10.1163/22134808-00002611] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2017] [Accepted: 09/21/2017] [Indexed: 12/19/2022]
Abstract
Temporal and spatial characteristics of sensory inputs are fundamental to multisensory integration because they provide probabilistic information as to whether or not multiple sensory inputs belong to the same event. The multisensory temporal binding window defines the time range within which two stimuli of different sensory modalities are merged into one percept and has been shown to depend on training. The aim of the present study was to evaluate the role of the training procedure for improving multisensory temporal discrimination and to test for a possible transfer of training to other multisensory tasks. Participants were trained over five sessions in a two-alternative forced-choice simultaneity judgment task. The task difficulty of each trial was either at each participant's threshold (adaptive group) or randomly chosen (control group). A possible transfer of improved multisensory temporal discrimination on multisensory binding was tested with a redundant signal paradigm in which the temporal alignment of auditory and visual stimuli was systematically varied. Moreover, the size of the spatial audio-visual ventriloquist effect was assessed. Adaptive training resulted in faster improvements compared to the control condition. Transfer effects were found for both tasks: The processing speed of auditory inputs and the size of the ventriloquist effect increased in the adaptive group following the training. We suggest that the relative precision of the temporal and spatial features of a cross-modal stimulus is weighted during multisensory integration. Thus, changes in the precision of temporal processing are expected to enhance the likelihood of multisensory integration for temporally aligned cross-modal stimuli.
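The abstract says only that difficulty was held at each participant's threshold in the adaptive group; one standard way to do this is an up/down staircase on the audio-visual SOA. The sketch below is a generic 2-down/1-up rule (converging near 70.7% correct) written under that assumption; `respond` is a hypothetical callback returning whether the simultaneity judgment on a trial was correct, and all parameter values are illustrative.

```python
def staircase_2d1u(respond, start_soa=300, step=20, floor=10, n_trials=80):
    """Generic 2-down/1-up adaptive staircase on SOA (ms).
    Two consecutive correct responses make the task harder (smaller SOA);
    any error makes it easier. `respond(soa)` must return True/False."""
    soa, streak, track = start_soa, 0, []
    for _ in range(n_trials):
        correct = respond(soa)
        track.append((soa, correct))
        if correct:
            streak += 1
            if streak == 2:                  # harder after two correct in a row
                soa = max(floor, soa - step)
                streak = 0
        else:                                # easier after any error
            soa = soa + step
            streak = 0
    return track
```

A threshold is then typically estimated from the SOA values at the final reversals of the track.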
Collapse
Affiliation(s)
- Ralf Sürig
- Biological Psychology and Neuropsychology, University of Hamburg, Von Melle Park 11, 20146 Hamburg, Germany
| | - Davide Bottari
- Biological Psychology and Neuropsychology, University of Hamburg, Von Melle Park 11, 20146 Hamburg, Germany.,IMT School for Advanced Studies Lucca, Lucca, Italy
| | - Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Von Melle Park 11, 20146 Hamburg, Germany
| |
Collapse
|
94
|
Couth S, Gowen E, Poliakoff E. Using Race Model Violation to Explore Multisensory Responses in Older Adults: Enhanced Multisensory Integration or Slower Unisensory Processing? Multisens Res 2018; 31:151-174. [DOI: 10.1163/22134808-00002588] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2017] [Accepted: 06/14/2017] [Indexed: 11/19/2022]
Abstract
Older adults exhibit greater multisensory reaction time (RT) facilitation than young adults. Since older adults exhibit greater violation of the race model (i.e., the cumulative distribution functions for multisensory RTs exceed those of the summed unisensory RTs), this has been attributed to enhanced multisensory integration. Here we explored whether (a) individual differences in RT distributions within each age group might drive this effect, and (b) the race model is more likely to be violated if unisensory RTs are slower. Young and older adults made speeded responses to visual, auditory or tactile stimuli, or any combination of these (bi-/tri-modal). The test of the race model suggested greater audiovisual integration for older adults, but only when accounting for individual differences in RT distributions. Moreover, correlations in both age groups showed that slower unisensory RTs were associated with a greater degree of race model violation. Therefore, greater race model violation may be due to greater ‘room for improvement’ from unisensory responses in older adults compared to young adults, and thus could falsely give the impression of enhanced multisensory integration.
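The race-model test referred to here compares the multisensory RT distribution against Miller's bound; wherever the multisensory CDF exceeds the sum of the unisensory CDFs, the race model is violated. The sketch below is a bare-bones version of that comparison; the paper's exact implementation (e.g., percentile binning and group-level statistics) may differ.

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution F(t) = P(RT <= t)."""
    rts = np.asarray(rts, dtype=float)
    return np.array([(rts <= t).mean() for t in t_grid])

def race_model_violation(rt_multi, rt_a, rt_v, t_grid):
    """Miller's inequality: under a parallel race, F_AV(t) <= F_A(t) + F_V(t)
    for every t. Returns F_AV(t) minus the (capped) bound; positive values
    mark violation, often read as evidence of integration."""
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_multi, t_grid) - bound
```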
Collapse
Affiliation(s)
- Samuel Couth
- Faculty of Biology, Medicine and Health, The University of Manchester, Oxford Road, Manchester M13 9PL, UK
| | - Emma Gowen
- Faculty of Biology, Medicine and Health, The University of Manchester, Oxford Road, Manchester M13 9PL, UK
| | - Ellen Poliakoff
- Faculty of Biology, Medicine and Health, The University of Manchester, Oxford Road, Manchester M13 9PL, UK
| |
Collapse
|
95
|
Bailey HD, Mullaney AB, Gibney KD, Kwakye LD. Audiovisual Integration Varies With Target and Environment Richness in Immersive Virtual Reality. Multisens Res 2018; 31:689-713. [PMID: 31264608 DOI: 10.1163/22134808-20181301] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2017] [Accepted: 02/26/2018] [Indexed: 11/19/2022]
Abstract
We are continually bombarded by information arriving at each of our senses; however, the brain seems to effortlessly integrate this separate information into a unified percept. Although multisensory integration has been researched extensively using simple computer tasks and stimuli, much less is known about how multisensory integration functions in real-world contexts. Additionally, several recent studies have demonstrated that multisensory integration varies tremendously across naturalistic stimuli. Virtual reality can be used to study multisensory integration in realistic settings because it combines realism with precise control over the environment and stimulus presentation. In the current study, we investigated whether multisensory integration, as measured by the redundant signals effect (RSE), is observable in naturalistic environments using virtual reality and whether it differs as a function of target and/or environment cue-richness. Participants detected auditory, visual, and audiovisual targets which varied in cue-richness within three distinct virtual worlds that also varied in cue-richness. We demonstrated integrative effects in each environment-by-target pairing and further showed a modest effect on multisensory integration as a function of target cue-richness, but only in the cue-rich environment. Our study is the first to definitively show that minimal and more naturalistic tasks elicit comparable redundant signals effects. Our results also suggest that multisensory integration may function differently depending on the features of the environment. The results of this study have important implications in the design of virtual multisensory environments that are currently being used for training, educational, and entertainment purposes.
Collapse
Affiliation(s)
| | | | - Kyla D Gibney
- Department of Neuroscience, Oberlin College, Oberlin, OH, USA
| | | |
Collapse
|
96
|
Timora J, Budd T. Steady-State EEG and Psychophysical Measures of Multisensory Integration to Cross-Modally Synchronous and Asynchronous Acoustic and Vibrotactile Amplitude Modulation Rate. Multisens Res 2018; 31:391-418. [DOI: 10.1163/22134808-00002549] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2016] [Accepted: 01/16/2017] [Indexed: 11/19/2022]
Abstract
According to the temporal principle of multisensory integration, cross-modal synchronisation of stimulus onset facilitates multisensory integration. This is typically observed as a greater response to multisensory stimulation relative to the sum of the constituent unisensory responses (i.e., superadditivity). The aim of the present study was to examine whether the temporal principle extends to the cross-modal synchrony of amplitude-modulation (AM) rate. It is well established that psychophysical sensitivity to AM stimulation is strongly influenced by AM rate, where the optimum rate differs according to sensory modality. This rate-dependent sensitivity is also apparent from EEG steady-state response (SSR) activity, which becomes entrained to the stimulation rate and is thought to reflect neural processing of the temporal characteristics of AM stimulation. In this study we investigated whether cross-modal congruence of AM rate reveals both psychophysical and EEG evidence of enhanced multisensory integration. To achieve this, EEG SSR and psychophysical sensitivity to simultaneous acoustic and/or vibrotactile AM stimuli were measured at cross-modally congruent and incongruent AM rates. While the results provided no evidence of superadditive multisensory SSR activity or psychophysical sensitivity, the complex pattern of results did reveal a consistent correspondence between SSR activity and psychophysical sensitivity to AM stimulation. This indicates that entrained EEG activity may provide a direct measure of cortical activity underlying multisensory integration. Consistent with the temporal principle of multisensory integration, increased vibrotactile SSR responses and psychophysical sensitivity were found for cross-modally congruent relative to incongruent AM rate. However, no corresponding increase in auditory SSR or psychophysical sensitivity was observed for cross-modally congruent AM rates. This complex pattern of results can be understood in terms of the likely influence of the principle of inverse effectiveness, where the temporal principle of multisensory integration was only evident in the context of reduced perceptual sensitivity for the vibrotactile but not the auditory modality.
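Super- and subadditivity are defined against the additive model: does the multisensory response exceed or fall short of the sum of the unisensory responses? The sketch below illustrates that comparison at the level of per-participant response amplitudes; it is a simplified stand-in for the study's SSR analysis, and the paired t-test against zero is an assumed, generic choice.

```python
import numpy as np
from scipy import stats

def additivity_test(resp_multi, resp_a, resp_t):
    """Compare multisensory response amplitudes (e.g., SSR power) with the
    sum of the corresponding unisensory responses. Positive mean
    differences indicate superadditivity, negative ones subadditivity."""
    resp_multi, resp_a, resp_t = map(np.asarray, (resp_multi, resp_a, resp_t))
    diff = resp_multi - (resp_a + resp_t)
    t_stat, p_val = stats.ttest_1samp(diff, 0.0)
    return diff.mean(), t_stat, p_val
```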
Collapse
Affiliation(s)
- Justin R. Timora
- Brain Imaging Lab, School of Psychology, University of Newcastle, Ourimbah, NSW, Australia
| | - Timothy W. Budd
- Brain Imaging Lab, School of Psychology, University of Newcastle, Ourimbah, NSW, Australia
| |
Collapse
|
97
|
Costantini M, Migliorati D, Donno B, Sirota M, Ferri F. Expected but omitted stimuli affect crossmodal interaction. Cognition 2017; 171:52-64. [PMID: 29107888 DOI: 10.1016/j.cognition.2017.10.016] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2016] [Revised: 10/19/2017] [Accepted: 10/20/2017] [Indexed: 11/29/2022]
Abstract
One of the most important abilities of our brain is to integrate input from different sensory modalities to create a coherent representation of the environment. Does expectation affect such multisensory integration? In this paper, we tackled this issue by taking advantage of the crossmodal congruency effect (CCE). Participants made elevation judgments to visual targets while ignoring tactile distractors. We manipulated the expectation of the tactile distractor by pairing the tactile stimulus to the index finger with a high-frequency tone and the tactile stimulus to the thumb with a low-frequency tone in 80% of the trials. In the remaining trials we delivered the tone and the visual target, but the tactile distractor was omitted (Study 1). Results fully replicated the basic crossmodal congruency effect. Strikingly, the CCE was also observed, though to a lesser degree, when the tactile distractor was not presented but merely expected. The contingencies between tones and tactile distractors were reversed in a follow-up study (Study 2), and the effect was further tested in two conceptual replications using different combinations of stimuli (Studies 5 and 6). Two control studies ruled out alternative explanations of the observed effect that would not involve a role for tactile distractors (Studies 3 and 4). Two additional control studies unequivocally proved the dependency of the CCE on the spatial and temporal expectation of the distractors (Studies 7 and 8). An internal small-scale meta-analysis showed that the crossmodal congruency effect with predicted distractors is a robust medium-sized effect. Our findings reveal that multisensory integration, one of the most basic and ubiquitous mechanisms to encode external events, benefits from expectation of sensory input.
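For reference, the crossmodal congruency effect itself is simply the RT cost of spatially incongruent relative to congruent visuo-tactile trials. A minimal illustration, with made-up numbers, is below.

```python
import numpy as np

def crossmodal_congruency_effect(rt_congruent, rt_incongruent):
    """CCE in ms: mean RT on incongruent trials minus mean RT on congruent
    trials (correct responses only, by convention)."""
    return float(np.mean(rt_incongruent) - np.mean(rt_congruent))

print(crossmodal_congruency_effect([512, 498, 530], [575, 560, 590]))  # illustrative values
```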
Collapse
Affiliation(s)
- Marcello Costantini
- Centre for Brain Science, Department of Psychology, University of Essex, Colchester, UK; Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University G. d'Annunzio, Chieti, Italy; Institute for Advanced Biomedical Technologies - ITAB, Foundation University G. d'Annunzio, Chieti, Italy.
| | - Daniele Migliorati
- Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University G. d'Annunzio, Chieti, Italy; Institute for Advanced Biomedical Technologies - ITAB, Foundation University G. d'Annunzio, Chieti, Italy
| | - Brunella Donno
- Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University G. d'Annunzio, Chieti, Italy; Institute for Advanced Biomedical Technologies - ITAB, Foundation University G. d'Annunzio, Chieti, Italy
| | - Miroslav Sirota
- Centre for Brain Science, Department of Psychology, University of Essex, Colchester, UK
| | - Francesca Ferri
- Centre for Brain Science, Department of Psychology, University of Essex, Colchester, UK.
| |
Collapse
|
98
|
De Niear MA, Gupta PB, Baum SH, Wallace MT. Perceptual training enhances temporal acuity for multisensory speech. Neurobiol Learn Mem 2017; 147:9-17. [PMID: 29107704 DOI: 10.1016/j.nlm.2017.10.016] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2017] [Revised: 10/19/2017] [Accepted: 10/27/2017] [Indexed: 11/30/2022]
Abstract
The temporal relationship between auditory and visual cues is a fundamental feature in the determination of whether these signals will be integrated. The window of perceived simultaneity, or temporal binding window (TBW), is a construct that describes the epoch of time during which asynchronous auditory and visual stimuli are likely to be perceptually bound. Recently, a number of studies have demonstrated the capacity for perceptual training to enhance temporal acuity for audiovisual stimuli (i.e., narrow the TBW). These studies, however, have only examined multisensory perceptual learning that develops in response to feedback that is provided when making judgments on simple, low-level audiovisual stimuli (i.e., flashes and beeps). Here we sought to determine if perceptual training was capable of altering temporal acuity for audiovisual speech. Furthermore, we also explored whether perceptual training with simple or complex audiovisual stimuli generalized across levels of stimulus complexity. Using a simultaneity judgment (SJ) task, we measured individuals' temporal acuity (as estimated by the TBW) prior to, immediately following, and one week after four consecutive days of perceptual training. We report that temporal acuity for audiovisual speech stimuli is enhanced following perceptual training using speech stimuli. Additionally, we find that changes in temporal acuity following perceptual training do not generalize across the levels of stimulus complexity in this study. Overall, the results suggest that perceptual training is capable of enhancing temporal acuity for audiovisual speech in adults, and that the dynamics of the changes in temporal acuity following perceptual training differ between simple audiovisual stimuli and more complex audiovisual speech stimuli.
Collapse
Affiliation(s)
- Matthew A De Niear
- Medical Scientist Training Program, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA; Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA.
| | - Pranjal B Gupta
- Undergraduate Neuroscience Program, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
| | - Sarah H Baum
- Department of Psychology, University of Washington, Seattle, WA 98195, USA
| | - Mark T Wallace
- Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37235, USA; Department of Psychology, Vanderbilt University, Nashville, TN 37235, USA; Department of Psychiatry, Vanderbilt University Medical Center, Nashville, TN 37235, USA
| |
Collapse
|
99
|
Abstract
Integration of sensory information across modalities can confer behavioral advantages by decreasing perceptual ambiguity, speeding reaction times, and increasing detection accuracy relative to unisensory stimuli. We asked how combinations of auditory, visual, and somatosensory events alter response time. Participants detected stimulation on one side of space (right or left) while ignoring stimulation on the other side of space. There were seven types of suprathreshold stimuli: auditory (tones from speakers), visual (sinusoidal contrast gratings), somatosensory (fingertip vibrations), audio-visual, somato-visual, audio-somatosensory, and audio-somato-visual. Response enhancement and race model analysis confirmed that bisensory and trisensory trials shortened response times relative to unisensory trials. Exploratory analysis of individual differences in intersensory facilitation revealed that participants fit into one of two groups: those who benefitted from trisensory information and those who did not.
Collapse
|
100
|
The role of multisensory interplay in enabling temporal expectations. Cognition 2017; 170:130-146. [PMID: 28992555 DOI: 10.1016/j.cognition.2017.09.015] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2016] [Revised: 09/22/2017] [Accepted: 09/26/2017] [Indexed: 11/23/2022]
Abstract
Temporal regularities can guide our attention to focus on a particular moment in time and to be especially vigilant just then. Previous research provided evidence for the influence of temporal expectation on perceptual processing in unisensory auditory, visual, and tactile contexts. However, in real life we are often exposed to a complex and continuous stream of multisensory events. Here we tested, in a series of experiments, whether temporal expectations can enhance perception in multisensory contexts and whether this enhancement differs from enhancements in unisensory contexts. Our discrimination paradigm contained near-threshold targets (subject-specific 75% discrimination accuracy) embedded in a sequence of distractors. The likelihood of target occurrence (early or late) was manipulated block-wise. Furthermore, we tested whether spatial and modality-specific target uncertainty (i.e., predictable vs. unpredictable target position or modality) would affect temporal expectation (TE) measured with perceptual sensitivity (d') and response times (RT). In all our experiments, hidden temporal regularities improved performance for expected multisensory targets. Moreover, multisensory performance was unaffected by spatial and modality-specific uncertainty, whereas unisensory TE effects on d' but not RT were modulated by spatial and modality-specific uncertainty. Additionally, the size of the temporal expectation effect, i.e., the increase in perceptual sensitivity and the decrease in RT, scaled linearly with the likelihood of expected targets. Finally, temporal expectation effects were unaffected by varying target position within the stream. Together, our results strongly suggest that participants quickly adapt to novel temporal contexts, that they benefit from multisensory (relative to unisensory) stimulation, and that multisensory benefits are maximal when stimulus-driven uncertainty is highest. We propose that enhanced informational content (i.e., multisensory stimulation) enables the robust extraction of temporal regularities, which in turn boost (uni-)sensory representations.
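Perceptual sensitivity (d') and bias of the kind reported here come directly from signal detection theory. The snippet below is a standard textbook computation from trial counts (with a common correction for extreme rates); it is a generic sketch, not the paper's analysis code, and the counts are made up.

```python
from scipy.stats import norm

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and criterion c from trial counts, using the
    log-linear correction to avoid infinite z-scores at rates of 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)

print(dprime_criterion(hits=42, misses=8, false_alarms=12, correct_rejections=38))
```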
Collapse
|