1. Ishida K, Nittono H. Multidimensional regularity processing in music: an examination using redundant signals effect. Exp Brain Res 2024. PMID: 39012473; DOI: 10.1007/s00221-024-06861-4
Abstract
Music is based on various regularities, ranging from the repetition of physical sounds to theoretically organized harmony and counterpoint. How are multidimensional regularities processed when we listen to music? The present study focuses on the redundant signals effect (RSE) as a novel approach to untangling the relationship between these regularities in music. The RSE refers to the occurrence of a shorter reaction time (RT) when two or three signals are presented simultaneously than when only one of these signals is presented, and it provides evidence that these signals are processed concurrently. In two experiments, chords that deviated from tonal (harmonic) and acoustic (intensity and timbre) regularities were presented occasionally in the final position of short chord sequences. The participants were asked to detect all deviant chords while withholding their responses to non-deviant chords (i.e., a Go/NoGo task). RSEs were observed for all double- and triple-deviant combinations, reflecting the processing of multidimensional regularities. Further analyses suggested evidence of coactivation by separate perceptual modules for the combination of tonal and acoustic deviants, but not for the combination of two acoustic deviants. These results imply that tonal and acoustic regularities are different enough to be processed as two discrete pieces of information. Examining the processes underlying the RSE may elucidate how the multiple regularities in music are processed in relation to one another.
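The coactivation evidence mentioned in this abstract is conventionally assessed with Miller's race model inequality: the cumulative RT distribution for redundant signals cannot exceed the sum of the single-signal distributions unless the signals jointly activate a shared response process. The following is an illustrative sketch only (not the authors' code; the function names and example RTs are hypothetical):

```python
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative RT distribution P(RT <= t) at each time in t."""
    rts = np.asarray(rts, dtype=float)
    return np.mean(rts[:, None] <= t, axis=0)

def race_model_violation(rt_redundant, rt_single1, rt_single2,
                         quantiles=np.arange(0.05, 1.0, 0.05)):
    """Test Miller's race model inequality:
    F_redundant(t) <= F_single1(t) + F_single2(t) for all t.
    Returns F_redundant(t) - bound(t); positive values at the fast
    quantiles indicate coactivation (a race model violation)."""
    pooled = np.concatenate([rt_redundant, rt_single1, rt_single2])
    t = np.quantile(pooled, quantiles)
    bound = np.minimum(ecdf(rt_single1, t) + ecdf(rt_single2, t), 1.0)
    return ecdf(rt_redundant, t) - bound
```

With strongly facilitated redundant-condition RTs (e.g., all faster than either single condition), the difference is positive at the fast quantiles, which is the signature interpreted as coactivation.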
Affiliation(s)
- Kai Ishida: Graduate School of Human Sciences, Osaka University, 1-2 Yamadaoka, Osaka, Osaka 565-0871, Japan
- Hiroshi Nittono: Graduate School of Human Sciences, Osaka University, 1-2 Yamadaoka, Osaka, Osaka 565-0871, Japan
2. Ampollini S, Ardizzi M, Ferroni F, Cigala A. Synchrony perception across senses: A systematic review of temporal binding window changes from infancy to adolescence in typical and atypical development. Neurosci Biobehav Rev 2024; 162:105711. PMID: 38729280; DOI: 10.1016/j.neubiorev.2024.105711
Abstract
Sensory integration is increasingly acknowledged as crucial for the development of cognitive and social abilities, yet its developmental trajectory is still little understood. This systematic review investigates the literature on developmental changes, from infancy through adolescence, in the Temporal Binding Window (TBW): the epoch of time within which sensory inputs are perceived as simultaneous and therefore integrated. Following comprehensive searches across the PubMed, Elsevier, and PsycInfo databases, only experimental, behavioral, English-language, peer-reviewed studies on multisensory temporal processing in 0-17-year-olds were included. Non-behavioral, non-multisensory, and non-human studies were excluded, as were studies that did not directly focus on the TBW. The selection process was performed independently by two authors. The 39 selected studies involved 2859 participants in total. Findings indicate a predisposition towards cross-modal asynchrony sensitivity and a composite, still unclear, developmental trajectory, with atypical development associated with increased asynchrony tolerance. These results highlight the need for consistent and thorough research into TBW development to inform potential interventions.
Affiliation(s)
- Silvia Ampollini: Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Borgo Carissimi 10, Parma 43121, Italy
- Martina Ardizzi: Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Via Volturno 39E, Parma 43121, Italy
- Francesca Ferroni: Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Via Volturno 39E, Parma 43121, Italy
- Ada Cigala: Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Borgo Carissimi 10, Parma 43121, Italy
3. Kreyenmeier P, Bhuiyan I, Gian M, Chow HM, Spering M. Smooth pursuit inhibition reveals audiovisual enhancement of fast movement control. J Vis 2024; 24:3. PMID: 38558158; PMCID: PMC10996987; DOI: 10.1167/jov.24.4.3
Abstract
The sudden onset of a visual object or event elicits an inhibition of eye movements at latencies approaching the minimum delay of visuomotor conductance in the brain. Typically, information presented via multiple sensory modalities, such as sound and vision, evokes stronger and more robust responses than unisensory information. Whether and how multisensory information affects ultra-short latency oculomotor inhibition is unknown. In two experiments, we investigate smooth pursuit and saccadic inhibition in response to multisensory distractors. Observers tracked a horizontally moving dot and were interrupted by an unpredictable visual, auditory, or audiovisual distractor. Distractors elicited a transient inhibition of pursuit eye velocity and catch-up saccade rate within ∼100 ms of their onset. Audiovisual distractors evoked stronger oculomotor inhibition than visual- or auditory-only distractors, indicating multisensory response enhancement. Multisensory response enhancement magnitudes were equal to the linear sum of responses to component stimuli. These results demonstrate that multisensory information affects eye movements even at ultra-short latencies, establishing a lower time boundary for multisensory-guided behavior. We conclude that oculomotor circuits must have privileged access to sensory information from multiple modalities, presumably via a fast, subcortical pathway.
Affiliation(s)
- Philipp Kreyenmeier: Department of Ophthalmology & Visual Sciences and Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia, Canada
- Ishmam Bhuiyan: Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Mathew Gian: Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Hiu Mei Chow: Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada; Department of Psychology, St. Thomas University, Fredericton, New Brunswick, Canada
- Miriam Spering: Department of Ophthalmology & Visual Sciences; Graduate Program in Neuroscience; Djavad Mowafaghian Center for Brain Health; and Institute for Computing, Information, and Cognitive Systems, University of British Columbia, Vancouver, British Columbia, Canada
4. Ross LA, Molholm S, Butler JS, Del Bene VA, Brima T, Foxe JJ. Neural correlates of audiovisual narrative speech perception in children and adults on the autism spectrum: A functional magnetic resonance imaging study. Autism Res 2024; 17:280-310. PMID: 38334251; DOI: 10.1002/aur.3104
Abstract
Autistic individuals show substantially reduced benefit from observing visual articulations during audiovisual speech perception, a multisensory integration deficit that is particularly relevant to social communication. This has mostly been studied using simple syllabic or word-level stimuli, and it remains unclear how altered lower-level multisensory integration translates to the processing of more complex, natural multisensory stimulus environments in autism. Here, functional neuroimaging was used to compare the neural correlates of audiovisual gain (AV-gain) in 41 autistic individuals with those of 41 age-matched non-autistic controls when presented with a complex audiovisual narrative. Participants were presented with continuous narration of a story in auditory-alone, visual-alone, and both synchronous and asynchronous audiovisual speech conditions. We hypothesized that previously identified differences in audiovisual speech processing in autism would be characterized by activation differences in brain regions well known to be associated with audiovisual enhancement in neurotypicals. However, our results did not provide evidence of altered processing of the auditory-alone, visual-alone, or audiovisual conditions, or of AV-gain, in the regions associated with the respective task when comparing activation patterns between groups. Instead, we found that autistic individuals responded with higher activations in mostly frontal regions where activation in response to the experimental conditions was below baseline (de-activations) in the control group. These frontal effects were observed in both unisensory and audiovisual conditions, suggesting that these altered activations were not specific to multisensory processing but reflective of more general mechanisms, such as altered disengagement of Default Mode Network processes during observation of the language stimulus across conditions.
Affiliation(s)
- Lars A Ross: The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, USA; Department of Imaging Sciences, University of Rochester Medical Center, Rochester, New York, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, USA
- Sophie Molholm: The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, University of Rochester School of Medicine and Dentistry, Rochester, New York, USA; The Cognitive Neurophysiology Laboratory, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, USA
- John S Butler: The Cognitive Neurophysiology Laboratory, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, USA; School of Mathematics and Statistics, Technological University Dublin, City Campus, Dublin, Ireland
- Victor A Del Bene: The Cognitive Neurophysiology Laboratory, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, USA; Department of Neurology, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, Alabama, USA
- Tufikameni Brima: The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, University of Rochester School of Medicine and Dentistry, Rochester, New York, USA
- John J Foxe: The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, University of Rochester School of Medicine and Dentistry, Rochester, New York, USA; The Cognitive Neurophysiology Laboratory, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, USA
5. Yu L, Xu J. The Development of Multisensory Integration at the Neuronal Level. Adv Exp Med Biol 2024; 1437:153-172. PMID: 38270859; DOI: 10.1007/978-981-99-7611-9_10
Abstract
Multisensory integration is a fundamental function of the brain. In the typical adult, multisensory neurons' responses to paired multisensory (e.g., audiovisual) cues are significantly more robust than the corresponding best unisensory responses in many brain regions. Synthesizing sensory signals from multiple modalities can speed up sensory processing and improve the salience of outside events or objects. Despite its significance, multisensory integration has been shown not to be a neonatal feature of the brain. Neurons' ability to effectively combine multisensory information does not emerge at once but develops gradually during early postnatal life (in cats, 4-12 weeks are required). Multisensory experience is critical for this developmental process. If animals are restricted from sensing normal visual scenes or sounds (i.e., deprived of the relevant multisensory experience), the development of the corresponding integrative ability is blocked until the appropriate multisensory experience is obtained. This chapter summarizes the extant literature on the development of multisensory integration (mainly using the cat superior colliculus as a model), sensory-deprivation-induced cross-modal plasticity, and how sensory experience (sensory exposure and perceptual learning) leads to plastic change and modification of neural circuits in cortical and subcortical areas.
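The "more robust" combined response described in this abstract is conventionally quantified with the multisensory enhancement index of Meredith and Stein, and super-additivity is defined against the sum of the unisensory responses. A minimal illustrative sketch (not from the chapter; the firing-rate values are hypothetical):

```python
def enhancement_index(cm_response, sm_responses):
    """Multisensory enhancement (%): how much the combined-modality
    response (CM) exceeds the best unisensory response (SMmax).
    enhancement = 100 * (CM - SMmax) / SMmax"""
    sm_max = max(sm_responses)
    return 100.0 * (cm_response - sm_max) / sm_max

def is_superadditive(cm_response, sm_responses):
    """Super-additivity: CM exceeds the sum of the unisensory responses."""
    return cm_response > sum(sm_responses)

# Hypothetical example: 12 spikes/trial to audiovisual stimulation
# vs. 5 (auditory alone) and 4 (visual alone):
# enhancement_index(12, [5, 4]) -> 140.0
# is_superadditive(12, [5, 4])  -> True
```

The same index is typically largest when the unisensory responses are weak, which is the inverse effectiveness principle discussed in this literature.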
Affiliation(s)
- Liping Yu: Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China
- Jinghong Xu: Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China
6. Jones SA, Noppeney U. Multisensory Integration and Causal Inference in Typical and Atypical Populations. Adv Exp Med Biol 2024; 1437:59-76. PMID: 38270853; DOI: 10.1007/978-981-99-7611-9_4
Abstract
Multisensory perception is critical for effective interaction with the environment, but human responses to multisensory stimuli vary across the lifespan and appear changed in some atypical populations. In this review chapter, we consider multisensory integration within a normative Bayesian framework. We begin by outlining the complex computational challenges of multisensory causal inference and reliability-weighted cue integration, and discuss whether healthy young adults behave in accordance with normative Bayesian models. We then compare their behaviour with various other human populations (children, older adults, and those with neurological or neuropsychiatric disorders). In particular, we consider whether the differences seen in these groups are due only to changes in their computational parameters (such as sensory noise or perceptual priors), or whether the fundamental computational principles (such as reliability weighting) underlying multisensory perception may also be altered. We conclude by arguing that future research should aim explicitly to differentiate between these possibilities.
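Reliability-weighted cue integration, as referenced in this abstract, has a standard normative form under Gaussian assumptions: each cue is weighted by its inverse variance, and the fused estimate is more reliable than either cue alone. A minimal sketch for illustration (not the chapter's code; the sensory estimates and variances are hypothetical):

```python
def fuse_cues(estimates, variances):
    """Maximum-likelihood (reliability-weighted) cue combination.
    Cue i gets weight w_i = (1/var_i) / sum_j (1/var_j); the fused
    variance 1 / sum_j (1/var_j) is never larger than the smallest
    single-cue variance (the hallmark of optimal integration)."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_var = 1.0 / total
    return fused, fused_var

# Hypothetical example: auditory location estimate 10 deg (variance 4)
# and visual estimate 8 deg (variance 1) ->
# fused estimate 8.4 deg, fused variance 0.8 (visual dominates).
```

Causal inference models extend this scheme by first weighing the probability that the two cues share a common source, integrating only to the degree that they do.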
Affiliation(s)
- Samuel A Jones: Department of Psychology, Nottingham Trent University, Nottingham, UK
- Uta Noppeney: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
7. Del Gatto C, Indraccolo A, Pedale T, Brunetti R. Crossmodal interference on counting performance: Evidence for shared attentional resources. PLoS One 2023; 18:e0294057. PMID: 37948407; PMCID: PMC10637692; DOI: 10.1371/journal.pone.0294057
Abstract
During the act of counting, our perceptual system may rely on information coming from different sensory channels. However, when the information coming from different sources is discordant, as in the case of a de-synchronization between visual stimuli to be counted and irrelevant auditory stimuli, performance in a sequential counting task might deteriorate. Such deterioration may originate from two different mechanisms, both linked to exogenous attention attracted by the auditory stimuli. On the one hand, exogenous auditory triggers may infiltrate our internal "counter", interfering with the counting process and resulting in an overcount; alternatively, they may disrupt the internal "counter" by deviating participants' attention from the visual stimuli, resulting in an undercount. We tested these hypotheses by asking participants to count visual discs sequentially appearing on the screen while listening to task-irrelevant sounds, under systematically varied conditions: the visual stimuli could be synchronized or de-synchronized with the sounds; they could feature regular or irregular pacing; and their presentation speed could be fast (approx. 3/sec), moderate (approx. 2/sec), or slow (approx. 1.5/sec). Our results support the second hypothesis, since participants tended to undercount visual stimuli in all of the harder conditions (de-synchronized, irregular, fast sequences). We discuss these results in detail, adding novel elements to the study of crossmodal interference.
Affiliation(s)
- Claudia Del Gatto: Experimental and Applied Psychology Laboratory, Department of Human Sciences, Università Europea di Roma, Rome, Italy
- Allegra Indraccolo: Experimental and Applied Psychology Laboratory, Department of Human Sciences, Università Europea di Roma, Rome, Italy
- Tiziana Pedale: Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy; Functional Neuroimaging Laboratory, Fondazione Santa Lucia, IRCCS, Rome, Italy
- Riccardo Brunetti: Experimental and Applied Psychology Laboratory, Department of Human Sciences, Università Europea di Roma, Rome, Italy
8. Ghaneirad E, Borgolte A, Sinke C, Čuš A, Bleich S, Szycik GR. The effect of multisensory semantic congruency on unisensory object recognition in schizophrenia. Front Psychiatry 2023; 14:1246879. PMID: 38025441; PMCID: PMC10646423; DOI: 10.3389/fpsyt.2023.1246879
Abstract
Multisensory, as opposed to unisensory, processing of stimuli has been found to enhance the performance (e.g., reaction time, accuracy, and discrimination) of healthy individuals across various tasks. However, this enhancement is not as pronounced in patients with schizophrenia (SZ), indicating impaired multisensory integration (MSI) in these individuals. To the best of our knowledge, no study has yet investigated the impact of MSI deficits in the context of working memory, a domain highly reliant on multisensory processing and substantially impaired in schizophrenia. To address this research gap, we employed two adapted versions of the continuous object recognition task to investigate the effect of single-trial multisensory encoding on subsequent object recognition in 21 schizophrenia patients and 21 healthy controls (HC). Participants were tasked with discriminating between initial and repeated presentations. For the initial presentations, half of the stimuli were audiovisual pairings, while the other half were presented unimodally. The task-relevant stimuli were then presented a second time in a unisensory manner (either auditory stimuli in the auditory task or visual stimuli in the visual task). To explore the impact of semantic context on multisensory encoding, half of the audiovisual pairings were selected to be semantically congruent, while the remaining pairs were not semantically related to each other. Consistent with prior studies, our findings demonstrated that the impact of single-trial multisensory presentation during encoding remains discernible during subsequent object recognition. This influence could be distinguished based on the semantic congruity between the auditory and visual stimuli presented during encoding, and the effect was more robust in the auditory task. In the auditory task, when congruent multisensory pairings were encoded, both participant groups demonstrated a multisensory facilitation effect, with improved accuracy and RT performance. Regarding incongruent audiovisual encoding, as expected, HC did not demonstrate an evident multisensory facilitation effect on memory performance. In contrast, SZ patients exhibited an atypically accelerated reaction time during subsequent auditory object recognition. Based on the predictive coding model, we propose that these observed deviations indicate a reduced semantic modulatory effect and anomalous prediction error signaling in SZ, particularly in the context of conflicting cross-modal sensory inputs.
Affiliation(s)
- Erfan Ghaneirad: Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
- Anna Borgolte: Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
- Christopher Sinke: Department of Psychiatry, Social Psychiatry and Psychotherapy, Division of Clinical Psychology and Sexual Medicine, Hannover Medical School, Hanover, Germany
- Anja Čuš: Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
- Stefan Bleich: Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany; Center for Systems Neuroscience, University of Veterinary Medicine, Hanover, Germany
- Gregor R. Szycik: Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
9. Newell FN, McKenna E, Seveso MA, Devine I, Alahmad F, Hirst RJ, O'Dowd A. Multisensory perception constrains the formation of object categories: a review of evidence from sensory-driven and predictive processes on categorical decisions. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220342. PMID: 37545304; PMCID: PMC10404931; DOI: 10.1098/rstb.2022.0342
Abstract
Although object categorization is a fundamental cognitive ability, it is also a complex process that goes beyond the perception and organization of sensory stimulation. Here we review existing evidence about how the human brain acquires and organizes multisensory inputs into object representations that may lead to conceptual knowledge in memory. We first focus on evidence for two processes in object perception: multisensory integration of redundant information (e.g. seeing and feeling a shape) and crossmodal, statistical learning of complementary information (e.g. the 'moo' sound of a cow and its visual shape). For both processes, the importance attributed to each sensory input in constructing a multisensory representation of an object depends on the working range of the specific sensory modality, the relative reliability or distinctiveness of the encoded information, and top-down predictions. Moreover, apart from sensory-driven influences on perception, the acquisition of featural information across modalities can affect semantic memory and, in turn, influence category decisions. In sum, we argue that both multisensory processes independently constrain the formation of object categories across the lifespan, possibly through early and late integration mechanisms, respectively, allowing us to efficiently achieve the everyday, but remarkable, ability of recognizing objects. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- F. N. Newell, E. McKenna, M. A. Seveso, I. Devine, F. Alahmad, R. J. Hirst, and A. O'Dowd: School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
10. Sadaf MUK, Sakib NU, Pannone A, Ravichandran H, Das S. A bio-inspired visuotactile neuron for multisensory integration. Nat Commun 2023; 14:5729. PMID: 37714853; PMCID: PMC10504285; DOI: 10.1038/s41467-023-40686-z
Abstract
Multisensory integration is a salient feature of the brain that enables better and faster responses than unisensory processing, especially when the unisensory cues are weak. Specialized neurons that receive convergent input from two or more sensory modalities are responsible for such multisensory integration. Solid-state devices that can emulate the response of these multisensory neurons can advance neuromorphic computing and bridge the gap between artificial and natural intelligence. Here, we introduce an artificial visuotactile neuron, based on the integration of a photosensitive monolayer MoS2 memtransistor and a triboelectric tactile sensor, that captures the three essential features of multisensory integration, namely, super-additive response, the inverse effectiveness effect, and temporal congruency. We have also realized a circuit that can encode visuotactile information into digital spiking events, with the probability of spiking determined by the strength of the visual and tactile cues. We believe that our comprehensive demonstration of a bio-inspired multisensory visuotactile neuron and spike-encoding circuitry will advance the field of neuromorphic computing, which has thus far primarily focused on unisensory intelligence and information processing.
Affiliation(s)
- Najam U Sakib: Engineering Science and Mechanics, Penn State University, University Park, PA 16802, USA
- Andrew Pannone: Engineering Science and Mechanics, Penn State University, University Park, PA 16802, USA
- Saptarshi Das: Engineering Science and Mechanics; Electrical Engineering; Materials Science and Engineering; and Materials Research Institute, Penn State University, University Park, PA 16802, USA
11. Schormans AL, Allman BL. An imbalance of excitation and inhibition in the multisensory cortex impairs the temporal acuity of audiovisual processing and perception. Cereb Cortex 2023; 33:9937-9953. PMID: 37464944; DOI: 10.1093/cercor/bhad256
Abstract
The neural integration of closely timed auditory and visual stimuli can offer several behavioral advantages; however, an overly broad window of temporal integration-a phenomenon observed in various neurodevelopmental disorders-could have far-reaching perceptual consequences. Non-invasive studies in humans have suggested that the level of GABAergic inhibition in the multisensory cortex influences the temporal window over which auditory and visual stimuli are bound into a unified percept. Although this suggestion aligns with the theory that an imbalance of cortical excitation and inhibition alters multisensory processing, no prior studies have performed experimental manipulations to determine the causal effects of a reduction of GABAergic inhibition on audiovisual temporal perception. To that end, we used a combination of in vivo electrophysiology, neuropharmacology, and translational behavioral testing in rats to provide the first mechanistic evidence that a reduction of GABAergic inhibition in the audiovisual cortex is sufficient to disrupt unisensory and multisensory processing across the cortical layers, and ultimately impair the temporal acuity of audiovisual perception and its rapid adaptation to recent sensory experience. Looking forward, our findings provide support for using rat models to further investigate the neural mechanisms underlying the audiovisual perceptual alterations observed in neurodevelopmental disorders, such as autism, schizophrenia, and dyslexia.
Affiliation(s)
- Ashley L Schormans: Department of Anatomy and Cell Biology, Schulich School of Medicine and Dentistry, University of Western Ontario, London, Ontario, Canada
- Brian L Allman: Department of Anatomy and Cell Biology, Schulich School of Medicine and Dentistry, University of Western Ontario, London, Ontario, Canada
12. Saltafossi M, Zaccaro A, Perrucci MG, Ferri F, Costantini M. The impact of cardiac phases on multisensory integration. Biol Psychol 2023; 182:108642. PMID: 37467844; DOI: 10.1016/j.biopsycho.2023.108642
Abstract
The brain continuously processes information coming from both the external environment and visceral signals generated by the body. This constant information exchange between the body and the brain allows signals originating from the oscillatory activity of the heart, among others, to influence perception. Here, we investigated how the cardiac phase modulates multisensory integration, which is the process that allows information from multiple senses to combine non-linearly to reduce environmental uncertainty. Forty healthy participants completed a Simple Detection Task with unimodal (Auditory, Visual, Tactile) and bimodal (Audio-Tactile, Audio-Visual, Visuo-Tactile) stimuli presented 250 ms and 500 ms after the R-peak of the electrocardiogram, that is, systole and diastole, respectively. First, we found a nonspecific effect of the cardiac cycle phases on detection of both unimodal and bimodal stimuli. Reaction times were faster for stimuli presented during diastole, compared to systole. Then, applying the Race Model Inequality approach to quantify multisensory integration, Audio-Tactile and Visuo-Tactile, but not Audio-Visual stimuli, showed higher integration when presented during diastole than during systole. These findings indicate that the impact of the cardiac phase on multisensory integration may be specific for stimuli including somatosensory (i.e., tactile) inputs. This suggests that the heartbeat-related noise, which according to the interoceptive predictive coding theory suppresses somatosensory inputs, also affects multisensory integration during systole. In conclusion, our data extend the interoceptive predictive coding theory to the multisensory domain. From a more mechanistic view, they may reflect a reduced optimization of neural oscillations orchestrating multisensory integration during systole.
Collapse
Affiliation(s)
- Martina Saltafossi
- Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy.
| | - Andrea Zaccaro
- Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
| | - Mauro Gianni Perrucci
- Department of Neuroscience, Imaging and Clinical Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy; Institute for Advanced Biomedical Technologies, ITAB, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
| | - Francesca Ferri
- Department of Neuroscience, Imaging and Clinical Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
| | - Marcello Costantini
- Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy; Institute for Advanced Biomedical Technologies, ITAB, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
| |
Collapse
|
13
|
Lanfranco RC, Chancel M, Ehrsson HH. Quantifying body ownership information processing and perceptual bias in the rubber hand illusion. Cognition 2023; 238:105491. [PMID: 37178590 DOI: 10.1016/j.cognition.2023.105491] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2023] [Revised: 05/02/2023] [Accepted: 05/04/2023] [Indexed: 05/15/2023]
Abstract
Bodily illusions have fascinated humankind for centuries, and researchers have studied them to learn about the perceptual and neural processes that underpin multisensory channels of bodily awareness. The influential rubber hand illusion (RHI) has been used to study changes in the sense of body ownership - that is, how a limb is perceived to belong to one's body, which is a fundamental building block in many theories of bodily awareness, self-consciousness, embodiment, and self-representation. However, the methods used to quantify perceptual changes in bodily illusions, including the RHI, have mainly relied on subjective questionnaires and rating scales, and the degree to which such illusory sensations depend on sensory information processing has been difficult to test directly. Here, we introduce a signal detection theory (SDT) framework to study the sense of body ownership in the RHI. We provide evidence that the illusion is associated with changes in body ownership sensitivity that depend on the information carried in the degree of asynchrony of correlated visual and tactile signals, as well as with perceptual bias and sensitivity that reflect the distance between the rubber hand and the participant's body. We found that the illusion's sensitivity to asynchrony is remarkably precise; even a 50 ms visuotactile delay significantly affected body ownership information processing. Our findings conclusively link changes in a complex bodily experience such as body ownership to basic sensory information processing and provide a proof of concept that SDT can be used to study bodily illusions.
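The signal detection theory framework described in this abstract rests on two standard quantities: sensitivity (d') and response bias (criterion c). A minimal sketch, using only the Python standard library; the counts below are illustrative and not from the study.

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Signal detection theory: sensitivity (d') and bias (criterion c)
    from detection counts. A log-linear correction (add 0.5 to counts,
    1 to totals) avoids infinite z-scores when a rate is 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical counts from a yes/no body-ownership judgment task
d, c = dprime_criterion(hits=40, misses=10, false_alarms=5, correct_rejections=45)
```

Higher d' indicates better discrimination of (here) synchronous from asynchronous visuotactile stimulation; a positive c indicates a conservative tendency to respond "no".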
Collapse
Affiliation(s)
- Renzo C Lanfranco
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden.
| | - Marie Chancel
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden; Psychology and Neurocognition Lab, Université Grenoble-Alpes, Grenoble, France
| | - H Henrik Ehrsson
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden.
| |
Collapse
|
14
|
Baş B, Yücel E. Sensory profiles of children using cochlear implant and auditory brainstem implant. Int J Pediatr Otorhinolaryngol 2023; 170:111584. [PMID: 37224736 DOI: 10.1016/j.ijporl.2023.111584] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/09/2022] [Revised: 04/18/2023] [Accepted: 04/29/2023] [Indexed: 05/26/2023]
Affiliation(s)
- Banu Baş
- Ankara Yıldırım Beyazıt University, Faculty of Health Sciences, Department of Audiology, Ankara, Turkey.
| | - Esra Yücel
- Hacettepe University, Faculty of Health Sciences, Department of Audiology, Ankara, Turkey
| |
Collapse
|
15
|
Williams AM, Angeloni CF, Geffen MN. Sound Improves Neuronal Encoding of Visual Stimuli in Mouse Primary Visual Cortex. J Neurosci 2023; 43:2885-2906. [PMID: 36944489 PMCID: PMC10124961 DOI: 10.1523/jneurosci.2444-21.2023] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2021] [Revised: 02/14/2023] [Accepted: 02/23/2023] [Indexed: 03/23/2023] Open
Abstract
In everyday life, we integrate visual and auditory information in routine tasks such as navigation and communication. While concurrent sound can improve visual perception, the neuronal correlates of audiovisual integration are not fully understood. Specifically, it remains unclear whether neuronal firing patterns in the primary visual cortex (V1) of awake animals demonstrate similar sound-induced improvement in visual discriminability. Furthermore, presentation of sound is associated with movement in the subjects, but little is understood about whether and how sound-associated movement affects audiovisual integration in V1. Here, we investigated how sound and movement interact to modulate V1 visual responses in awake, head-fixed mice and whether this interaction improves neuronal encoding of the visual stimulus. We presented visual drifting gratings with and without simultaneous auditory white noise to awake mice while recording mouse movement and V1 neuronal activity. Sound modulated activity of 80% of light-responsive neurons, with 95% of neurons increasing activity when the auditory stimulus was present. A generalized linear model (GLM) revealed that sound and movement had distinct and complementary effects on the neuronal visual responses. Furthermore, decoding of the visual stimulus from the neuronal activity was improved with sound, an effect that persisted even when controlling for movement. These results demonstrate that sound and movement modulate visual responses in complementary ways, improving neuronal representation of the visual stimulus.
This study clarifies the role of movement as a potential confound in neuronal audiovisual responses and expands our knowledge of how multimodal processing is mediated at a neuronal level in the awake brain. SIGNIFICANCE STATEMENT: Sound and movement are both known to modulate visual responses in the primary visual cortex; however, sound-induced movement has largely remained unaccounted for as a potential confound in audiovisual studies in awake animals. Here, the authors found that sound and movement both modulate visual responses in an important visual brain area, the primary visual cortex, in distinct, yet complementary ways. Furthermore, sound improved encoding of the visual stimulus even when accounting for movement. This study reconciles contrasting theories on the mechanism underlying audiovisual integration and asserts the primary visual cortex as a key brain region participating in tripartite sensory interactions.
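The GLM approach described in this abstract can be illustrated with a deliberately simplified regression sketch on synthetic data. The regressor names and effect sizes below are invented for illustration; the point is only that fitting visual, sound, and movement regressors jointly lets each effect be estimated while controlling for the others.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical trial-wise regressors: visual drive, sound on/off, running speed
visual = rng.uniform(0.0, 1.0, n)
sound = rng.integers(0, 2, n).astype(float)
movement = rng.uniform(0.0, 1.0, n)

# Synthetic firing rate with distinct sound and movement contributions
rate = 2.0 + 1.5 * visual + 0.8 * sound + 0.5 * movement + rng.normal(0.0, 0.1, n)

# Ordinary least squares: design matrix with an intercept column
X = np.column_stack([np.ones(n), visual, sound, movement])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
# beta recovers approximately [2.0, 1.5, 0.8, 0.5]
```

The study used a full GLM on spiking data; a Gaussian OLS fit is used here only to keep the sketch dependency-free.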
Collapse
Affiliation(s)
- Aaron M Williams
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, Pennsylvania, 19104
- Department of Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania, 19104
- Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania, 19104
| | - Christopher F Angeloni
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, Pennsylvania, 19104
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
| | - Maria N Geffen
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, Pennsylvania, 19104
- Department of Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania, 19104
- Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania, 19104
| |
Collapse
|
16
|
Nasemann J, Töllner T, Müller HJ, Shi Z. Hierarchy of Intra- and Cross-modal Redundancy Gains in Visuo-tactile Search: Evidence from the Posterior Contralateral Negativity. J Cogn Neurosci 2023; 35:543-570. [PMID: 36735602 DOI: 10.1162/jocn_a_01971] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Redundant combination of target features from separable dimensions can expedite visual search. The dimension-weighting account explains these "redundancy gains" by assuming that the attention-guiding priority map integrates the feature-contrast signals generated by targets within the respective dimensions. The present study investigated whether this hierarchical architecture is sufficient to explain the gains accruing from redundant targets defined by features in different modalities, or whether an additional level of modality-specific priority coding is necessary, as postulated by the modality-weighting account (MWA). To address this, we had observers perform a visuo-tactile search task in which targets popped out by a visual feature (color or shape) or a tactile feature (vibro-tactile frequency) as well as any combination of these features. The RT gains turned out to be larger for visuo-tactile versus visual redundant targets, as predicted by the MWA. In addition, we analyzed two lateralized event-related EEG components: the posterior (PCN) and central (CCN) contralateral negativities, which are associated with visual and tactile attentional selection, respectively. The CCN proved to be a stable somatosensory component, unaffected by cross-modal redundancies. In contrast, the PCN was sensitive to cross-modal redundancies, evidenced by earlier onsets and higher amplitudes, which could not be explained by linear superposition of the earlier CCN onto the later PCN. Moreover, linear mixed-effect modeling of the PCN amplitude and timing parameters accounted for approximately 25% of the behavioral RT variance. Together, these behavioral and PCN effects support the hierarchy of priority-signal computation assumed by the MWA.
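The behavioral "redundancy gain" discussed in this abstract is, at its simplest, the RT advantage of a redundant target over the best single-feature target. A minimal sketch with hypothetical reaction times (not data from the study):

```python
import statistics

def redundancy_gain(rt_redundant, *rt_single_conditions):
    """Redundancy gain in ms: how much faster the mean RT to a redundant
    target is than the fastest single-target condition."""
    best_single = min(statistics.mean(rts) for rts in rt_single_conditions)
    return best_single - statistics.mean(rt_redundant)

# Hypothetical per-trial RTs (ms): visuo-tactile redundant vs. single features
gain_vt = redundancy_gain(
    [420, 430, 440],    # redundant visuo-tactile target
    [480, 500, 520],    # visual-only target
    [470, 490, 510],    # tactile-only target
)
```

Distinguishing mere statistical facilitation from genuine coactivation requires the race-model test on the full RT distributions, not just this mean difference.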
Collapse
Affiliation(s)
- Jan Nasemann
- Ludwig-Maximilians-Universität München, Munich, Germany; Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Planegg, Germany
| | | | - Hermann J Müller
- Ludwig-Maximilians-Universität München, Munich, Germany; Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Planegg, Germany
| | - Zhuanghua Shi
- Ludwig-Maximilians-Universität München, Munich, Germany; Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Planegg, Germany
| |
Collapse
|
17
|
Tanner J, Keefer E, Cheng J, Helms Tillery S. Dynamic peripheral nerve stimulation can produce cortical activation similar to punctate mechanical stimuli. Front Hum Neurosci 2023; 17:1083307. [PMID: 37033904 PMCID: PMC10079952 DOI: 10.3389/fnhum.2023.1083307] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2022] [Accepted: 02/28/2023] [Indexed: 04/11/2023] Open
Abstract
During contact, phasic and tonic responses provide feedback that is used for task performance and perceptual processes. These disparate temporal dynamics are carried in peripheral nerves, and produce overlapping signals in cortex. Using longitudinal intrafascicular electrodes inserted into the median nerve of a nonhuman primate, we delivered composite stimulation consisting of onset and release bursts to capture rapidly adapting responses and sustained stochastic stimulation to capture the ongoing response of slowly adapting receptors. To measure the stimulation's effectiveness in producing natural responses, we monitored the local field potential in somatosensory cortex. We compared the cortical responses to peripheral nerve stimulation and vibrotactile/punctate stimulation of the fingertip, with particular focus on gamma band (30-65 Hz) responses. We found that vibrotactile stimulation produces consistently phase locked gamma throughout the duration of the stimulation. By contrast, punctate stimulation responses were phase locked at the onset and release of stimulation, but activity maintained through the stimulation was not phase locked. Using these responses as guideposts for assessing the response to the peripheral nerve stimulation, we found that constant frequency stimulation produced continual phase locking, whereas composite stimulation produced gamma enhancement throughout the stimulus, phase locked only at the onset and release of the stimulus. We describe this response as an "Appropriate Response in the gamma band" (ARγ), a trend seen in other sensory systems. This is the first such demonstration for intracortical somatosensory local field potentials. We argue that this stimulation paradigm produces a more biomimetic response in somatosensory cortex and is more likely to produce naturalistic sensations for readily usable neuroprosthetic feedback.
Collapse
Affiliation(s)
- Justin Tanner
- School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ, United States
| | | | - Jonathan Cheng
- University of Texas Southwestern Medical Center, Dallas, TX, United States
| | - Stephen Helms Tillery
- School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ, United States
| |
Collapse
|
18
|
Lakshminarayanan K, Ramu V, Rajendran J, Chandrasekaran KP, Shah R, Daulat SR, Moodley V, Madathil D. The Effect of Tactile Imagery Training on Reaction Time in Healthy Participants. Brain Sci 2023; 13:brainsci13020321. [PMID: 36831864 PMCID: PMC9954091 DOI: 10.3390/brainsci13020321] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2023] [Revised: 02/07/2023] [Accepted: 02/10/2023] [Indexed: 02/16/2023] Open
Abstract
BACKGROUND: Reaction time is an important measure of sensorimotor performance and coordination and has been shown to improve with training. Various training methods have been employed in the past to improve reaction time. Tactile imagery (TI) is a method of mentally simulating a tactile sensation and has been used in brain-computer interface applications. However, it remains unknown whether TI can have a learning effect and improve reaction time. OBJECTIVE: The purpose of this study was to investigate the effect of TI on reaction time in healthy participants. METHODS: We examined the reaction time to vibratory stimuli before and after a TI training session in an experimental group, and compared post-training with pre-training reaction times in the experimental group as well as with reaction times in a control group. A follow-up evaluation of reaction time was also conducted. RESULTS: TI training significantly improved reaction time, by approximately 25%, relative to before training (pre-TI right-hand mean ± SD: 456.62 ± 124.26 ms, pre-TI left-hand mean ± SD: 448.82 ± 124.50 ms, post-TI right-hand mean ± SD: 340.32 ± 65.59 ms, post-TI left-hand mean ± SD: 335.52 ± 59.01 ms). Furthermore, post-training reaction time showed a significant reduction compared with the control group, and the improvement persisted even four weeks after training. CONCLUSION: These findings indicate that TI training may serve as an alternate imagery strategy for improving reaction time without the need for physical practice.
Collapse
Affiliation(s)
- Kishor Lakshminarayanan
- Neuro-Rehabilitation Lab, Department of Sensors and Biomedical Engineering, School of Electronics Engineering, Vellore Institute of Technology, Vellore 632014, India
- Correspondence: Tel.: +91-9361-013563
| | - Vadivelan Ramu
- Neuro-Rehabilitation Lab, Department of Sensors and Biomedical Engineering, School of Electronics Engineering, Vellore Institute of Technology, Vellore 632014, India
| | - Janaane Rajendran
- Neuro-Rehabilitation Lab, Department of Sensors and Biomedical Engineering, School of Electronics Engineering, Vellore Institute of Technology, Vellore 632014, India
| | - Kamala Prasanna Chandrasekaran
- Neuro-Rehabilitation Lab, Department of Sensors and Biomedical Engineering, School of Electronics Engineering, Vellore Institute of Technology, Vellore 632014, India
| | - Rakshit Shah
- Department of Chemical and Biomedical Engineering, Cleveland State University, Cleveland, OH 44115, USA
| | - Sohail R. Daulat
- University of Arizona College of Medicine, Tucson, AZ 85724, USA
| | - Viashen Moodley
- Arizona Center for Hand to Shoulder Surgery, Phoenix, AZ 85004, USA
| | - Deepa Madathil
- Jindal Institute of Behavioural Sciences, O. P. Jindal Global University, Haryana 131001, India
| |
Collapse
|
19
|
Haptic shared control improves neural efficiency during myoelectric prosthesis use. Sci Rep 2023; 13:484. [PMID: 36627340 PMCID: PMC9832035 DOI: 10.1038/s41598-022-26673-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2022] [Accepted: 12/19/2022] [Indexed: 01/11/2023] Open
Abstract
Clinical myoelectric prostheses lack the sensory feedback and sufficient dexterity required to complete activities of daily living efficiently and accurately. Providing haptic feedback of relevant environmental cues to the user or imbuing the prosthesis with autonomous control authority have been separately shown to improve prosthesis utility. Few studies, however, have investigated the effect of combining these two approaches in a shared control paradigm, and none have evaluated such an approach from the perspective of neural efficiency (the relationship between task performance and mental effort measured directly from the brain). In this work, we analyzed the neural efficiency of 30 non-amputee participants in a grasp-and-lift task of a brittle object. Here, a myoelectric prosthesis featuring vibrotactile feedback of grip force and autonomous control of grasping was compared with a standard myoelectric prosthesis with and without vibrotactile feedback. As a measure of mental effort, we captured the prefrontal cortex activity changes using functional near infrared spectroscopy during the experiment. It was expected that the prosthesis with haptic shared control would improve both task performance and mental effort compared to the standard prosthesis. Results showed that only the haptic shared control system enabled users to achieve high neural efficiency, and that vibrotactile feedback was important for grasping with the appropriate grip force. These results indicate that the haptic shared control system synergistically combines the benefits of haptic feedback and autonomous controllers, and is well-poised to inform such hybrid advancements in myoelectric prosthesis technology.
Collapse
|
20
|
Yuan Y, He X, Yue Z. Working memory load modulates the processing of audiovisual distractors: A behavioral and event-related potentials study. Front Integr Neurosci 2023; 17:1120668. [PMID: 36908504 PMCID: PMC9995450 DOI: 10.3389/fnint.2023.1120668] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2022] [Accepted: 01/30/2023] [Indexed: 02/25/2023] Open
Abstract
The interplay between different modalities can help to perceive stimuli more effectively. However, very few studies have focused on how multisensory distractors affect task performance. By adopting behavioral and event-related potentials (ERPs) techniques, the present study examined whether multisensory audiovisual distractors could attract attention more effectively than unisensory distractors. Moreover, we explored whether such a process was modulated by working memory load. Across three experiments, n-back tasks (1-back and 2-back) were adopted with peripheral auditory, visual, or audiovisual distractors. Visual and auditory distractors were white discs and pure tones (Experiments 1 and 2), and pictures and sounds of animals (Experiment 3), respectively. Behavioral results in Experiment 1 showed a significant interference effect under the high working memory load condition but not under the low load condition. The responses to central letters with audiovisual distractors were significantly slower than those to letters without distractors, while no significant difference was found between the unisensory distractor and no-distractor conditions. Similarly, ERP results in Experiments 2 and 3 showed that integration occurred only under the high load condition: an early integration for simple audiovisual distractors (240-340 ms) and a late integration for complex audiovisual distractors (440-600 ms). These findings suggest that multisensory distractors can be integrated and effectively attract attention away from the main task, i.e., an interference effect. Moreover, this effect is pronounced only under the high working memory load condition.
Collapse
Affiliation(s)
- Yichen Yuan
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
| | - Xiang He
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
| | - Zhenzhu Yue
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
| |
Collapse
|
21
|
Vastano R, Costantini M, Alexander WH, Widerstrom-Noga E. Multisensory integration in humans with spinal cord injury. Sci Rep 2022; 12:22156. [PMID: 36550184 PMCID: PMC9780239 DOI: 10.1038/s41598-022-26678-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Accepted: 12/19/2022] [Indexed: 12/24/2022] Open
Abstract
Although multisensory integration (MSI) has been extensively studied, the underlying mechanisms remain a topic of ongoing debate. Here we investigate these mechanisms by comparing MSI in healthy controls to a clinical population with spinal cord injury (SCI). Deafferentation following SCI induces sensorimotor impairment, which may alter the ability to synthesize cross-modal information. We applied mathematical and computational modeling to reaction time data recorded in response to temporally congruent cross-modal stimuli. We found that MSI in both SCI and healthy controls is best explained by cross-modal perceptual competition, highlighting a common competition mechanism. Relative to controls, MSI impairments in SCI participants were better explained by reduced stimulus salience leading to increased cross-modal competition. By combining traditional analyses with model-based approaches, we examine how MSI is realized during normal function, and how it is compromised in a clinical population. Our findings support future investigations identifying and rehabilitating MSI deficits in clinical disorders.
Collapse
Affiliation(s)
- Roberta Vastano
- Department of Neurological Surgery, The Miami Project to Cure Paralysis, University of Miami, Miami, FL 33136, USA
| | - Marcello Costantini
- Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy; Institute for Advanced Biomedical Technologies, ITAB, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
| | - William H. Alexander
- Center for Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, USA; Department of Psychology, Florida Atlantic University, Boca Raton, USA; The Brain Institute, Florida Atlantic University, Boca Raton, USA
| | - Eva Widerstrom-Noga
- Department of Neurological Surgery, The Miami Project to Cure Paralysis, University of Miami, Miami, FL 33136, USA
| |
Collapse
|
22
|
Yang W, Yang X, Guo A, Li S, Li Z, Lin J, Ren Y, Yang J, Wu J, Zhang Z. Audiovisual integration of the dynamic hand-held tool at different stimulus intensities in aging. Front Hum Neurosci 2022; 16:968987. [PMID: 36590067 PMCID: PMC9794578 DOI: 10.3389/fnhum.2022.968987] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Accepted: 11/15/2022] [Indexed: 12/23/2022] Open
Abstract
Introduction: In comparison to the audiovisual integration of younger adults, the same process appears more complex and unstable in older adults. Previous research has found that stimulus intensity is one of the most important factors influencing audiovisual integration. Methods: The present study compared differences in audiovisual integration between older and younger adults using dynamic hand-held tool stimuli, such as a hand holding a hammer striking the floor. The effects of stimulus intensity on audiovisual integration were also compared. The intensity of the visual and auditory stimuli was regulated by modulating the contrast level and sound pressure level. Results: Behavioral results showed that both older and younger adults responded faster and with higher hit rates to audiovisual stimuli than to visual and auditory stimuli. Further results of event-related potentials (ERPs) revealed that during the early stage of 60-100 ms, in the low-intensity condition, audiovisual integration of the anterior brain region was greater in older adults than in younger adults; however, in the high-intensity condition, audiovisual integration of the right hemisphere region was greater in younger adults than in older adults. Moreover, audiovisual integration was greater in the low-intensity condition than in the high-intensity condition in older adults during the 60-100 ms, 120-160 ms, and 220-260 ms periods, showing inverse effectiveness. However, there was no difference in the audiovisual integration of younger adults across different intensity conditions. Discussion: The results suggested that there was an age-related dissociation between high- and low-intensity conditions in audiovisual integration of the dynamic hand-held tool stimulus. Older adults showed greater audiovisual integration in the lower intensity condition, which may be due to the activation of compensatory mechanisms.
Collapse
Affiliation(s)
- Weiping Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China; Brain and Cognition Research Center (BCRC), Faculty of Education, Hubei University, Wuhan, China
| | - Xiangfu Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
| | - Ao Guo
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
| | - Shengnan Li
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
| | - Zimo Li
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
| | - Jinfei Lin
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
| | - Yanna Ren
- Department of Psychology, College of Humanities and Management, Guizhou University of Traditional Chinese Medicine, Guiyang, China; *Correspondence: Yanna Ren, Zhilin Zhang
| | - Jiajia Yang
- Applied Brain Science Lab, Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
| | - Jinglong Wu
- Applied Brain Science Lab, Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan; Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
| | - Zhilin Zhang
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China; *Correspondence: Yanna Ren, Zhilin Zhang
| |
Collapse
|
23
|
Yang W, Li S, Guo A, Li Z, Yang X, Ren Y, Yang J, Wu J, Zhang Z. Auditory attentional load modulates the temporal dynamics of audiovisual integration in older adults: An ERPs study. Front Aging Neurosci 2022; 14:1007954. [PMID: 36325188 PMCID: PMC9618958 DOI: 10.3389/fnagi.2022.1007954] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2022] [Accepted: 09/23/2022] [Indexed: 12/02/2022] Open
Abstract
As older adults experience declines in perceptual ability, it is important for them to gain perceptual benefits from audiovisual integration. Attending to one or more auditory stimuli while performing other tasks is a common challenge for older adults in everyday life. Therefore, it is necessary to probe the effects of auditory attentional load on audiovisual integration in older adults. The present study used event-related potentials (ERPs) and a dual-task paradigm [Go/No-go task + rapid serial auditory presentation (RSAP) task] to investigate the temporal dynamics of audiovisual integration. Behavioral results showed that both older and younger adults responded faster and with higher accuracy to audiovisual stimuli than to either visual or auditory stimuli alone. ERPs revealed weaker audiovisual integration under the no-auditory-attentional-load condition at the earlier processing stages and, conversely, stronger integration in the late stages. Moreover, audiovisual integration was greater in older adults than in younger adults in the following time intervals: 60–90, 140–210, and 430–530 ms. Notably, only under the low load condition, in the 140–210 ms interval, was the audiovisual integration of older adults significantly greater than that of younger adults. These results delineate the temporal dynamics of the interaction between auditory attentional load and audiovisual integration in aging, suggesting that modulating auditory attentional load affects audiovisual integration, enhancing it in older adults.
Collapse
Affiliation(s)
- Weiping Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Brain and Cognition Research Center (BCRC), Faculty of Education, Hubei University, Wuhan, China
- Shengnan Li
- Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Ao Guo
- Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Zimo Li
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Xiangfu Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Yanna Ren
- Department of Psychology, College of Humanities and Management, Guizhou University of Traditional Chinese Medicine, Guiyang, China
- *Correspondence: Yanna Ren
- Jiajia Yang
- Applied Brain Science Lab, Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Jinglong Wu
- Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Science, Shenzhen, China
- Zhilin Zhang
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Science, Shenzhen, China
24
Ross LA, Molholm S, Butler JS, Bene VAD, Foxe JJ. Neural correlates of multisensory enhancement in audiovisual narrative speech perception: a fMRI investigation. Neuroimage 2022; 263:119598. [PMID: 36049699 DOI: 10.1016/j.neuroimage.2022.119598] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2022] [Revised: 08/26/2022] [Accepted: 08/28/2022] [Indexed: 11/25/2022] Open
Abstract
This fMRI study investigated the effect of seeing a speaker's articulatory movements while listening to a naturalistic narrative stimulus. The goal was to identify regions of the language network showing multisensory enhancement under synchronous audiovisual conditions. We expected this enhancement to emerge in regions known to underlie the integration of auditory and visual information, such as the posterior superior temporal gyrus, as well as in parts of the broader language network, including the semantic system. To this end, we presented 53 participants with a continuous narration of a story in auditory-alone, visual-alone, and both synchronous and asynchronous audiovisual speech conditions while recording brain activity using BOLD fMRI. We found multisensory enhancement in an extensive network of regions underlying multisensory integration and parts of the semantic network, as well as in extralinguistic regions not usually associated with multisensory integration, namely the primary visual cortex and the bilateral amygdala. The analysis also revealed involvement of thalamic regions along the visual and auditory pathways more commonly associated with early sensory processing. We conclude that under natural listening conditions, multisensory enhancement not only involves sites of multisensory integration but also many regions of the wider semantic network, including regions associated with extralinguistic sensory, perceptual and cognitive processing.
Affiliation(s)
- Lars A Ross
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; Department of Imaging Sciences, University of Rochester Medical Center, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA.
- Sophie Molholm
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA
- John S Butler
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA; School of Mathematical Sciences, Technological University Dublin, Kevin Street Campus, Dublin, Ireland
- Victor A Del Bene
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA; University of Alabama at Birmingham, Heersink School of Medicine, Department of Neurology, Birmingham, Alabama, 35233, USA
- John J Foxe
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA.
25
Dwyer P, Takarae Y, Zadeh I, Rivera SM, Saron CD. Multisensory integration and interactions across vision, hearing, and somatosensation in autism spectrum development and typical development. Neuropsychologia 2022; 175:108340. [PMID: 36028085 DOI: 10.1016/j.neuropsychologia.2022.108340] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2021] [Revised: 06/13/2022] [Accepted: 07/22/2022] [Indexed: 10/15/2022]
Abstract
Most prior studies of multisensory integration (MSI) in autism have measured MSI in only a single combination of modalities - typically audiovisual integration. The present study used onset reaction times (RTs) and 125-channel electroencephalography (EEG) to examine different forms of bimodal and trimodal MSI based on combinations of auditory (noise burst), somatosensory (finger tap), and visual (flash) stimuli presented in a spatially aligned manner using a custom desktop apparatus. A total of 36 autistic and 19 non-autistic adolescents between the ages of 11 and 14 participated. Significant RT multisensory facilitation relative to summed unisensory RTs was observed in both groups, as were significant differences between summed unisensory and multisensory ERPs. Although the present study's statistical approach was not intended to test effect latencies, these interactions may have begun as early as ∼45 ms, constituting "early" (<100 ms) MSI. RT and ERP measurements of MSI appeared independent of one another. Groups did not significantly differ in multisensory RT facilitation, but we found exploratory evidence of group differences in the magnitude of audiovisual interactions in ERPs. Future research should make greater efforts to explore MSI in under-represented populations, especially autistic people with intellectual disabilities and nonspeaking/minimally verbal autistic people.
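The comparison of multisensory RTs against summed unisensory performance described above is conventionally formalized as Miller's race-model inequality, P(RT_AV ≤ t) ≤ P(RT_A ≤ t) + P(RT_V ≤ t): violations indicate coactivation rather than a race between independent channels. A minimal, generic sketch (not this study's pipeline; the function name and quantile grid are illustrative):

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, quantiles=np.arange(0.05, 1.0, 0.05)):
    """Evaluate Miller's race-model inequality at quantiles of the
    multisensory RT distribution. Returns, per quantile, the amount by
    which the audiovisual CDF exceeds the race-model bound
    min(CDF_A + CDF_V, 1); positive values indicate violation,
    i.e. evidence of coactivation."""
    rt_a, rt_v, rt_av = (np.asarray(x, dtype=float) for x in (rt_a, rt_v, rt_av))
    ts = np.quantile(rt_av, quantiles)  # test points along the RT axis

    def ecdf(sample, ts):
        # empirical CDF of `sample` evaluated at each t in `ts`
        return np.mean(sample[:, None] <= ts[None, :], axis=0)

    bound = np.minimum(ecdf(rt_a, ts) + ecdf(rt_v, ts), 1.0)
    return ecdf(rt_av, ts) - bound
```

Violations are typically concentrated in the fast (early) quantiles, so tests usually focus on the lower part of the distribution.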
Affiliation(s)
- Patrick Dwyer
- Department of Psychology, UC Davis, USA; Center for Mind and Brain, UC Davis, USA.
- Yukari Takarae
- Department of Neurosciences, UC San Diego, USA; Department of Psychology, San Diego State University, USA
- Susan M Rivera
- Department of Psychology, UC Davis, USA; Center for Mind and Brain, UC Davis, USA; MIND Institute, UC Davis, USA
- Clifford D Saron
- Center for Mind and Brain, UC Davis, USA; MIND Institute, UC Davis, USA
26
Cinel C, Fernandez-Vargas J, Tremmel C, Citi L, Poli R. Enhancing performance with multisensory cues in a realistic target discrimination task. PLoS One 2022; 17:e0272320. [PMID: 35930533 PMCID: PMC9355224 DOI: 10.1371/journal.pone.0272320] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2021] [Accepted: 07/17/2022] [Indexed: 11/19/2022] Open
Abstract
Making decisions is an important aspect of people’s lives. Decisions can be highly critical in nature, with mistakes possibly resulting in extremely adverse consequences. Yet, such decisions often have to be made within a very short period of time and with limited information, which can reduce accuracy and efficiency. In this paper, we explore the possibility of increasing the speed and accuracy of users engaged in the discrimination of realistic targets presented for a very short time, in the presence of unimodal or bimodal cues. More specifically, we present results from an experiment where users were asked to discriminate between targets rapidly appearing in an indoor environment. Unimodal (auditory) or bimodal (audio-visual) cues could shortly precede the target stimulus, warning the users about its location. Our findings show that, when used to facilitate perceptual decisions under time pressure and with limited information in real-world scenarios, spoken cues can be effective in boosting performance (accuracy, reaction times or both), and even more so when presented in bimodal form. However, we also found that cue timing plays a critical role: if the cue-stimulus interval is too short, cues may offer no advantage. In a post-hoc analysis of our data, we also show that congruency between the response location and both the target location and the cues can interfere with speed and accuracy in the task. These effects should be taken into consideration, particularly when investigating performance in realistic tasks.
Affiliation(s)
- Caterina Cinel
- Brain Computer Interface and Neural Engineering Lab, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
- Jacobo Fernandez-Vargas
- Brain Computer Interface and Neural Engineering Lab, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
- Christoph Tremmel
- Brain Computer Interface and Neural Engineering Lab, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
- WellthLab, Electronics and Computer Science, University of Southampton, Southampton, United Kingdom
- Luca Citi
- Brain Computer Interface and Neural Engineering Lab, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
- Riccardo Poli
- Brain Computer Interface and Neural Engineering Lab, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
27
Michail G, Senkowski D, Holtkamp M, Wächter B, Keil J. Early beta oscillations in multisensory association areas underlie crossmodal performance enhancement. Neuroimage 2022; 257:119307. [PMID: 35577024 DOI: 10.1016/j.neuroimage.2022.119307] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2022] [Revised: 04/29/2022] [Accepted: 05/10/2022] [Indexed: 11/28/2022] Open
Abstract
The combination of signals from different sensory modalities can enhance perception and facilitate behavioral responses. While previous research described crossmodal influences in a wide range of tasks, it remains unclear how such influences drive performance enhancements. In particular, the neural mechanisms underlying performance-relevant crossmodal influences, as well as the latency and spatial profile of such influences are not well understood. Here, we examined data from high-density electroencephalography (N = 30) recordings to characterize the oscillatory signatures of crossmodal facilitation of response speed, as manifested in the speeding of visual responses by concurrent task-irrelevant auditory information. Using a data-driven analysis approach, we found that individual gains in response speed correlated with larger beta power difference (13-25 Hz) between the audiovisual and the visual condition, starting within 80 ms after stimulus onset in the secondary visual cortex and in multisensory association areas in the parietal cortex. In addition, we examined data from electrocorticography (ECoG) recordings in four epileptic patients in a comparable paradigm. These ECoG data revealed reduced beta power in audiovisual compared with visual trials in the superior temporal gyrus (STG). Collectively, our data suggest that the crossmodal facilitation of response speed is associated with reduced early beta power in multisensory association and secondary visual areas. The reduced early beta power may reflect an auditory-driven feedback signal to improve visual processing through attentional gating. These findings improve our understanding of the neural mechanisms underlying crossmodal response speed facilitation and highlight the critical role of beta oscillations in mediating behaviorally relevant multisensory processing.
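The band-limited power measure at the center of this study (beta, 13-25 Hz) can be illustrated with a plain periodogram; this is a generic sketch, not the paper's actual time-frequency pipeline (which would typically use wavelets or multitapers on source-localized data):

```python
import numpy as np

def band_power(signal, fs, band=(13.0, 25.0)):
    """Mean spectral power of `signal` within a frequency band
    (default: the beta range, 13-25 Hz), from an FFT periodogram.
    `fs` is the sampling rate in Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()
```

A condition difference (audiovisual minus visual beta power) computed per participant could then be correlated with the individual response-speed gain.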
Affiliation(s)
- Georgios Michail
- Department of Psychiatry and Psychotherapy, Charité Campus Mitte (CCM), Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charitéplatz 1, Berlin 10117, Germany.
- Daniel Senkowski
- Department of Psychiatry and Psychotherapy, Charité Campus Mitte (CCM), Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charitéplatz 1, Berlin 10117, Germany
- Martin Holtkamp
- Epilepsy-Center Berlin-Brandenburg, Institute for Diagnostics of Epilepsy, Berlin 10365, Germany; Department of Neurology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charité Campus Mitte (CCM), Charitéplatz 1, Berlin 10117, Germany
- Bettina Wächter
- Epilepsy-Center Berlin-Brandenburg, Institute for Diagnostics of Epilepsy, Berlin 10365, Germany
- Julian Keil
- Biological Psychology, Christian-Albrechts-University Kiel, Kiel 24118, Germany
28
Audiovisual Integration for Saccade and Vergence Eye Movements Increases with Presbycusis and Loss of Selective Attention on the Stroop Test. Brain Sci 2022; 12:brainsci12050591. [PMID: 35624979 PMCID: PMC9139407 DOI: 10.3390/brainsci12050591] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2022] [Revised: 04/23/2022] [Accepted: 04/28/2022] [Indexed: 11/17/2022] Open
Abstract
Multisensory integration is a capacity that allows us to merge information from different sensory modalities in order to improve the salience of the signal. Audiovisual integration is one of the most common forms of multisensory integration, as vision and hearing are the two senses humans use most frequently. However, the literature on the effect of age-related hearing loss (presbycusis) on audiovisual integration abilities is almost nonexistent, despite the growing prevalence of presbycusis in the population. In that context, this study aims to assess the relationship between presbycusis and audiovisual integration using tests of saccade and vergence eye movements to visual vs. audiovisual targets, with a pure tone as the auditory signal. Tests were run with the REMOBI and AIDEAL technologies coupled with the Pupil Core eye tracker. Hearing abilities, eye movement characteristics (latency, peak velocity, average velocity, amplitude) for saccade and vergence eye movements, and the Stroop Victoria test were measured in 69 elderly and 30 young participants. The results indicated (i) a dual pattern of aging effects on audiovisual integration for convergence (a decrease in the aged group relative to the young one, but an increase with age within the elderly group) and (ii) an improvement of audiovisual integration for saccades in people with presbycusis associated with lower selective attention scores on the Stroop test, regardless of age. These results bring new insight into a little-studied topic: audio-visuomotor integration in normal aging and in presbycusis. They highlight the potential interest of using eye movement targets in 3D space and pure-tone sounds to objectively evaluate audio-visuomotor integration capacities.
29
Perceived timing of cutaneous vibration and intracortical microstimulation of human somatosensory cortex. Brain Stimul 2022; 15:881-888. [DOI: 10.1016/j.brs.2022.05.015] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2021] [Revised: 05/18/2022] [Accepted: 05/20/2022] [Indexed: 11/19/2022] Open
30
Loss of audiovisual facilitation with age occurs for vergence eye movements but not for saccades. Sci Rep 2022; 12:4453. [PMID: 35292652 PMCID: PMC8924254 DOI: 10.1038/s41598-022-08072-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2021] [Accepted: 01/31/2022] [Indexed: 11/08/2022] Open
Abstract
Though saccade and vergence eye movements are fundamental for everyday life, the way these movements change as we age has not been sufficiently studied. The present study examines the effect of age on vergence and saccade eye movement characteristics (latency, peak and average velocity, amplitude) and on audiovisual facilitation. We compare the results for horizontal saccades and vergence movements toward visual and audiovisual targets in a young group of 22 participants (mean age 25 ± 2.5) and an elderly group of 45 participants (mean age 65 ± 6.9). The results show that, with increased age, the latency of all eye movements increases, average velocity decreases, vergence amplitude decreases, and audiovisual facilitation collapses for vergence eye movements in depth but is preserved for saccades. There is no effect on peak velocity, suggesting that, although the sensory and attentional mechanisms controlling the motor system do age, the motor system itself does not. The loss of audiovisual facilitation along the depth axis can be attributed to a physiological decrease in the capacity for sound localization in depth with age, while left/right sound localization coupled with saccades is preserved. The results bring new insight into the effects of aging on multisensory control and attention.
31
Johnston PR, Alain C, McIntosh AR. Individual Differences in Multisensory Processing Are Related to Broad Differences in the Balance of Local versus Distributed Information. J Cogn Neurosci 2022; 34:846-863. [PMID: 35195723 DOI: 10.1162/jocn_a_01835] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The brain's ability to extract information from multiple sensory channels is crucial to perception and effective engagement with the environment, but the individual differences observed in multisensory processing lack mechanistic explanation. We hypothesized that, from the perspective of information theory, individuals with more effective multisensory processing will exhibit a higher degree of shared information among distributed neural populations while engaged in a multisensory task, representing more effective coordination of information among regions. To investigate this, healthy young adults completed an audiovisual simultaneity judgment task to measure their temporal binding window (TBW), which quantifies the ability to distinguish fine discrepancies in timing between auditory and visual stimuli. EEG was then recorded during a second run of the simultaneity judgment task, and partial least squares was used to relate individual differences in the TBW width to source-localized EEG measures of local entropy and mutual information, indexing local and distributed processing of information, respectively. The narrowness of the TBW, reflecting more effective multisensory processing, was related to a broad pattern of higher mutual information and lower local entropy at multiple timescales. Furthermore, a small group of temporal and frontal cortical regions, including those previously implicated in multisensory integration and response selection, respectively, played a prominent role in this pattern. Overall, these findings suggest that individual differences in multisensory processing are related to widespread individual differences in the balance of distributed versus local information processing among a large subset of brain regions, with more distributed information being associated with more effective multisensory processing. The balance of distributed versus local information processing may therefore be a useful measure for exploring individual differences in multisensory processing, its relationship to higher cognitive traits, and its disruption in neurodevelopmental disorders and clinical conditions.
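The two information measures used above have simple definitional forms: local entropy H(X) = -Σ p log₂ p and mutual information I(X;Y) = H(X) + H(Y) - H(X,Y). As a toy, histogram-based illustration (not the multiscale, source-localized estimators used in the study; the bin count is an arbitrary choice):

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Mutual information I(X;Y) = H(X) + H(Y) - H(X,Y) in bits,
    estimated by discretizing two signals into a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]  # 0 * log 0 is taken as 0
        return -np.sum(p * np.log2(p))

    return entropy(p_x) + entropy(p_y) - entropy(p_xy.ravel())
```

Identical signals give I(X;X) = H(X), while independent signals give values near zero; higher shared information between regions corresponds to larger pairwise I.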
32
Huang G, Pitts BJ. Takeover requests for automated driving: The effects of signal direction, lead time, and modality on takeover performance. ACCIDENT; ANALYSIS AND PREVENTION 2022; 165:106534. [PMID: 34922107 DOI: 10.1016/j.aap.2021.106534] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Revised: 10/14/2021] [Accepted: 12/03/2021] [Indexed: 06/14/2023]
Abstract
Vehicle-to-driver takeover will still be needed in semi-autonomous vehicles. Due to the complexity of the takeover process, it is important to develop interfaces that support good takeover performance. Multimodal displays have been proposed as a candidate for the design of takeover requests (TORs), but many questions remain unanswered regarding the effectiveness of this approach. This study investigated the effects of takeover signal direction (ipsilateral vs. contralateral), lead time (4 vs. 7 s), and modality (uni-, bi-, and trimodal combinations of visual, auditory, and tactile signals) on automated vehicle takeover performance. Twenty-four participants rode in a simulated SAE Level 3 vehicle and performed a series of takeover tasks when presented with a TOR. Overall, unimodal and multimodal signals with a tactile component were associated with faster takeover and information-processing times and were perceived as most useful. Ipsilateral signals showed a marginally significant benefit in takeover times compared to contralateral signals. Finally, a shorter lead time was associated with faster takeover times but also poorer takeover quality. Findings from this study can inform the design of in-vehicle information and warning systems for next-generation transportation.
Affiliation(s)
- Gaojian Huang
- Department of Industrial and Systems Engineering, San Jose State University, One Washington Sq., San Jose, CA 95192, United States
- Brandon J Pitts
- School of Industrial Engineering, Purdue University, 315 N. Grant St., West Lafayette, IN 47907-2023, United States.
33
Wang Y, Wu B, Ma S, Wang D, Gan T, Liu H, Yang Z. Effect of mapping characteristic on audiovisual warning: Evidence from a simulated driving study. APPLIED ERGONOMICS 2022; 99:103638. [PMID: 34768226 DOI: 10.1016/j.apergo.2021.103638] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Revised: 10/16/2021] [Accepted: 10/31/2021] [Indexed: 06/13/2023]
Abstract
Advanced driver assistance systems (ADAS) can enhance road safety by sending warning signals to drivers. Multimodal signals are gaining attention in ADAS warning design because they offer redundant information that facilitates human-system communication. However, no consensus has been reached on which multimodal design offers the greatest benefit to road safety. Icons iconically map the real world and are associated with fast recognition and response times. Therefore, this study investigated whether visual and auditory icons would benefit the effectiveness of audiovisual multimodal warnings. Thirty-two participants (16 females) experienced four types of unimodal warnings (high- and low-mapping visual warnings and high- and low-mapping auditory warnings) and four types of audiovisual warnings (high-mapping visual + high-mapping auditory, low-mapping visual + low-mapping auditory, high-mapping visual + low-mapping auditory, and low-mapping visual + high-mapping auditory) in simulated driving conditions. Visual warnings were presented in a head-up display. Results showed that multimodal warnings outperformed unimodal warnings (i.e., a modality effect). We found a mapping effect in audiovisual warnings, but only high-mapping auditory constituents benefited warning effectiveness. Eye movement results revealed that the high-mapping constituents might distract drivers from the road. This study adds evidence that multimodal warnings can offer extra benefits to drivers and that high-mapping auditory signals should be included in multimodal warning design to achieve better driving performance.
Affiliation(s)
- Yuwei Wang
- Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Bohan Wu
- Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Shu Ma
- Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Duming Wang
- Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Tian Gan
- Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Hongyan Liu
- Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Zhen Yang
- Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China.
Collapse
|
34
|
Event-related potentials reveal early visual-tactile integration in the deaf. PSIHOLOGIJA 2022. [DOI: 10.2298/psi210407003l] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022] Open
Abstract
This study examined visual-tactile perceptual integration in deaf and normal-hearing individuals. Participants were presented with photos of faces or pictures of an oval, in either a visual mode or a visual-tactile mode, in a recognition learning task. Event-related potentials (ERPs) were recorded while participants recognized the faces and ovals in the learning stage. Results from the parietal-occipital region showed that photos of faces accompanied by vibration elicited more positive-going ERP responses than photos of faces without vibration, as indicated in the P1 and N170 components, in both deaf and hearing individuals. However, pictures of ovals accompanied by vibration produced more positive-going ERP responses than pictures of ovals without vibration in N170 only in deaf individuals. A reversed pattern appeared in the temporal region: photos of faces with vibration elicited less positive ERPs than photos of faces without vibration in both N170 and N300 for deaf individuals, but this pattern did not appear in N170 and N300 for normal-hearing individuals. The results suggest that multisensory integration across the visual and tactile modalities involves more fundamental perceptual regions than auditory regions. Moreover, auditory deprivation plays an essential role at the perceptual encoding stage of multisensory integration.
35
Choi I, Zhao Y, Gonzalez EJ, Follmer S. Augmenting Perceived Softness of Haptic Proxy Objects Through Transient Vibration and Visuo-Haptic Illusion in Virtual Reality. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:4387-4400. [PMID: 32746263 DOI: 10.1109/tvcg.2020.3002245] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
In this article, we investigate the effects of active transient vibration and visuo-haptic illusion to augment the perceived softness of haptic proxy objects. We introduce a system combining active transient vibration at the fingertip with visuo-haptic illusions. In our hand-held device, a voice coil actuator transmits active transient vibrations to the index fingertip, while a force sensor measures the force applied to passive proxy objects to create visuo-haptic illusions in virtual reality. We conducted three user studies to understand both the vibrotactile effect and its combined effect with visuo-haptic illusions. A preliminary study confirmed that active transient vibrations can intuitively alter the perceived softness of a proxy object. Our first study demonstrated that those same active transient vibrations can generate different perceptions of softness depending on the material of the proxy object used. In our second study, we evaluated the combination of active transient vibration and visuo-haptic illusion, and found that both significantly influence perceived softness, with the visuo-haptic effect being dominant. Our third study further investigated the vibrotactile effect while controlling for the visuo-haptic illusion. The combination of these two methods allows users to effectively perceive various levels of softness when interacting with haptic proxy objects.
36
Lagarrigue Y, Cappe C, Tallet J. Regular rhythmic and audio-visual stimulations enhance procedural learning of a perceptual-motor sequence in healthy adults: A pilot study. PLoS One 2021; 16:e0259081. [PMID: 34780497 PMCID: PMC8592429 DOI: 10.1371/journal.pone.0259081] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Accepted: 10/12/2021] [Indexed: 12/02/2022] Open
Abstract
Procedural learning is essential for the effortless execution of many everyday life activities. However, little is known about the conditions influencing the acquisition of procedural skills. The literature suggests that the sensory environment may influence the acquisition of perceptual-motor sequences, as tested by a Serial Reaction Time Task. In the current study, we investigated the effects of auditory stimulations on procedural learning of a visuo-motor sequence. Given that the literature shows that regular auditory rhythms and multisensory stimulations improve motor speed, we expected repeated practice with auditory stimulations, presented either simultaneously with the visual stimulations or at a regular tempo, to improve procedural learning (reaction times and errors) compared to control conditions (e.g., an irregular tempo). Our results suggest that both congruent audio-visual stimulations and regular rhythmic auditory stimulations promote procedural perceptual-motor learning. On the contrary, auditory stimulations with an irregular or very quick tempo impair learning. We discuss how regular rhythmic multisensory stimulations may improve procedural learning in terms of a multisensory rhythmic integration process.
Affiliation(s)
- Yannick Lagarrigue
- ToNIC, Toulouse NeuroImaging Center, Université de Toulouse, Inserm, UPS, Toulouse, France
- Céline Cappe
- Cerco, Centre de Recherche Cerveau et Cognition, Université de Toulouse, CNRS, UMR 5549, Toulouse, France
- Jessica Tallet
- ToNIC, Toulouse NeuroImaging Center, Université de Toulouse, Inserm, UPS, Toulouse, France
37
Ball F, Nentwich A, Noesselt T. Cross-modal perceptual enhancement of unisensory targets is uni-directional and does not affect temporal expectations. Vision Res 2021; 190:107962. [PMID: 34757275 DOI: 10.1016/j.visres.2021.107962] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2021] [Revised: 10/05/2021] [Accepted: 10/15/2021] [Indexed: 10/20/2022]
Abstract
Temporal structures in the environment can shape temporal expectations (TE), and previous studies demonstrated that TEs interact with multisensory interplay (MSI) when multisensory stimuli are presented synchronously. Here, we tested whether other types of MSI - evoked by asynchronous yet temporally flanking irrelevant stimuli - result in similar performance patterns. To this end, we presented sequences of 12 stimuli (10 Hz) which consisted of auditory (A), visual (V) or alternating auditory-visual stimuli (e.g. A-V-A-V-…) with either auditory or visual targets (Exp. 1). Participants discriminated target frequencies (auditory pitch or visual spatial frequency) embedded in these sequences. To test effects of TE, the proportion of early and late temporal target positions was manipulated run-wise. Performance for unisensory targets was affected by temporally flanking distractors, with auditory temporal flankers selectively improving visual target perception (Exp. 1). However, no effect of temporal expectation was observed. Control experiments (Exp. 2-3) tested whether this lack of TE effect was due to the higher presentation frequency in Exp. 1 relative to previous experiments. Importantly, even at higher stimulation frequencies, redundant multisensory targets (Exp. 2-3) reliably modulated TEs. Together, our results indicate that visual target detection was enhanced by MSI. However, this cross-modal enhancement - in contrast to the redundant target effect - was still insufficient to generate TEs. We posit that unisensory target representations were either unstable or insufficient for the generation of TEs while less demanding MSI still occurred, highlighting the need for robust stimulus representations when generating temporal expectations.
Affiliation(s)
- Felix Ball
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke-University Magdeburg, Germany.
- Annika Nentwich
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Germany
- Toemme Noesselt
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke-University Magdeburg, Germany
38. Vastano R, Costantini M, Widerstrom-Noga E. Maladaptive reorganization following SCI: The role of body representation and multisensory integration. Prog Neurobiol 2021; 208:102179. PMID: 34600947. DOI: 10.1016/j.pneurobio.2021.102179.
Abstract
In this review we focus on maladaptive brain reorganization after spinal cord injury (SCI), including the development of neuropathic pain, and its relationship with impairments in body representation and multisensory integration. We will discuss the implications of altered sensorimotor interactions after SCI with and without neuropathic pain and possible deficits in multisensory integration and body representation. Within this framework we will examine published research findings focused on the use of bodily illusions to manipulate multisensory body representation to induce analgesic effects in heterogeneous chronic pain populations and in SCI-related neuropathic pain. We propose that the development and intensification of neuropathic pain after SCI is partly dependent on brain reorganization associated with dysfunctional multisensory integration processes and distorted body representation. We conclude this review by suggesting future research avenues that may lead to a better understanding of the complex mechanisms underlying the sense of the body after SCI, with a focus on cortical changes.
Affiliation(s)
- Roberta Vastano
- University of Miami, Department of Neurological Surgery, The Miami Project to Cure Paralysis, Miami, FL, USA.
- Marcello Costantini
- Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy; Institute for Advanced Biomedical Technologies, ITAB, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy.
- Eva Widerstrom-Noga
- University of Miami, Department of Neurological Surgery, The Miami Project to Cure Paralysis, Miami, FL, USA.
39. Muller AM, Dalal TC, Stevenson RA. Schizotypal personality traits and multisensory integration: An investigation using the McGurk effect. Acta Psychol (Amst) 2021; 218:103354. PMID: 34174491. DOI: 10.1016/j.actpsy.2021.103354.
Abstract
Multisensory integration, the process by which sensory information from different sensory modalities is bound together, is hypothesized to contribute to perceptual symptomatology in schizophrenia, a population in which multisensory integration differences have been consistently found. Evidence is emerging that these differences extend across the schizophrenia spectrum, including individuals in the general population with higher levels of schizotypal traits. In the current study, we used the McGurk task as a measure of multisensory integration. We measured schizotypal traits using the Schizotypal Personality Questionnaire (SPQ), hypothesizing that higher levels of schizotypal traits, specifically on the Unusual Perceptual Experiences and Odd Speech subscales, would be associated with decreased multisensory integration of speech. Surprisingly, Unusual Perceptual Experiences were not associated with multisensory integration. However, Odd Speech was associated with multisensory integration, and this association extended more broadly across the Disorganized factor of the SPQ, including Odd or Eccentric Behaviour. Individuals with higher Odd or Eccentric Behaviour scores also demonstrated poorer lip-reading abilities, which partially explained performance in the McGurk task. This suggests that aberrant perceptual processes affecting individuals across the schizophrenia spectrum may relate to disorganized symptomatology.
Affiliation(s)
- Anne-Marie Muller
- Department of Psychology, University of Western Ontario, London, ON, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, Canada
- Tyler C Dalal
- Department of Psychology, University of Western Ontario, London, ON, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, Canada
- Ryan A Stevenson
- Department of Psychology, University of Western Ontario, London, ON, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, Canada.
40. Cornelio P, Velasco C, Obrist M. Multisensory Integration as per Technological Advances: A Review. Front Neurosci 2021; 15:652611. PMID: 34239410. PMCID: PMC8257956. DOI: 10.3389/fnins.2021.652611.
Abstract
Multisensory integration research has allowed us to better understand how humans integrate sensory information to produce a unitary experience of the external world. However, this field is often challenged by the limited ability to deliver and control sensory stimuli, especially when going beyond audio-visual events and outside laboratory settings. In this review, we examine the scope and challenges of new technology in the study of multisensory integration in a world that is increasingly characterized by a fusion of physical and digital/virtual events. We discuss multisensory integration research through the lens of novel multisensory technologies and, thus, bring research in human-computer interaction, experimental psychology, and neuroscience closer together. Today, for instance, displays have become volumetric so that visual content is no longer limited to 2D screens, new haptic devices enable tactile stimulation without physical contact, olfactory interfaces provide users with smells precisely synchronized with events in virtual environments, and novel gustatory interfaces enable taste perception through levitating stimuli. These technological advances offer new ways to control and deliver sensory stimulation for multisensory integration research beyond traditional laboratory settings and open up new ways to experiment with naturally occurring events in everyday life. Our review summarizes these multisensory technologies and discusses initial insights, building a bridge between the disciplines in order to advance the study of multisensory integration.
Affiliation(s)
- Patricia Cornelio
- Department of Computer Science, University College London, London, United Kingdom
- Carlos Velasco
- Centre for Multisensory Marketing, Department of Marketing, BI Norwegian Business School, Oslo, Norway
- Marianna Obrist
- Department of Computer Science, University College London, London, United Kingdom
41. Sutter K, Oostwoud Wijdenes L, van Beers RJ, Medendorp WP. Movement preparation time determines movement variability. J Neurophysiol 2021; 125:2375-2383. PMID: 34038240. DOI: 10.1152/jn.00087.2020.
Abstract
Faster movements are typically more variable, a speed-accuracy trade-off known as Fitts' law. Are movements that are initiated faster also more variable? Neurophysiological work has associated larger neural variability during motor preparation with longer reaction time (RT) and larger movement variability, implying that movement variability decreases with increasing RT. Here, we recorded over 30,000 reaching movements in 11 human participants who moved to visually cued targets. Half of the visual cues were accompanied by a beep to evoke a wide RT range in each participant. Results show that initial reach variability decreases with increasing RT, for voluntarily produced RTs up to ∼300 ms, whereas other kinematic aspects and endpoint accuracy remained unaffected. We conclude that movement preparation time determines initial movement variability. We suggest that the chosen movement preparation time reflects a trade-off between movement initiation and precision.
New & Noteworthy: Fitts' law describes the speed-accuracy trade-off in the execution of human movements. We examined whether there is also a trade-off between movement planning time and initial movement precision. We show that shorter reaction times result in higher initial movement variability. In other words, movement preparation time determines movement variability.
Affiliation(s)
- Katrin Sutter
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Leonie Oostwoud Wijdenes
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Robert J van Beers
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- W Pieter Medendorp
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
42. Chow HM, Harris DA, Eid S, Ciaramitaro VM. The feeling of "kiki": Comparing developmental changes in sound-shape correspondence for audio-visual and audio-tactile stimuli. J Exp Child Psychol 2021; 209:105167. PMID: 33915481. DOI: 10.1016/j.jecp.2021.105167.
Abstract
Sound-shape crossmodal correspondence, the naturally occurring associations between abstract visual shapes and nonsense sounds, is one aspect of multisensory processing that strengthens across early childhood. Little is known regarding whether school-aged children exhibit other variants of sound-shape correspondences such as audio-tactile (AT) associations between tactile shapes and nonsense sounds. Based on previous research in blind individuals suggesting the role of visual experience in establishing sound-shape correspondence, we hypothesized that children would show weaker AT association than adults and that children's AT association would be enhanced with visual experience of the shapes. In Experiment 1, we showed that, when asked to match shapes explored haptically via touch to nonsense words, 6- to 8-year-olds exhibited inconsistent AT associations, whereas older children and adults exhibited the expected AT associations, despite robust audio-visual (AV) associations found across all age groups in a related study. In Experiment 2, we confirmed the role of visual experience in enhancing AT association; here, 6- to 8-year-olds could exhibit the expected AT association if first exposed to the AV condition, whereas adults showed the expected AT association irrespective of whether the AV condition was tested first or second. Our finding suggests that AT sound-shape correspondence is weak early in development relative to AV sound-shape correspondence, paralleling previous findings on the development of other types of multisensory associations. The potential role of visual experience in the development of sound-shape correspondences in other senses is discussed.
Affiliation(s)
- Hiu Mei Chow
- Department of Psychology, Developmental and Brain Sciences, University of Massachusetts Boston, Boston, MA 02125, USA; Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia V6T 1Z4, Canada
- Daniel A Harris
- Department of Psychology, Developmental and Brain Sciences, University of Massachusetts Boston, Boston, MA 02125, USA; Division of Epidemiology, Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario M5T 3M7, Canada
- Sandy Eid
- Department of Psychology, Developmental and Brain Sciences, University of Massachusetts Boston, Boston, MA 02125, USA
- Vivian M Ciaramitaro
- Department of Psychology, Developmental and Brain Sciences, University of Massachusetts Boston, Boston, MA 02125, USA.
43. Perceptual timing precision with vibrotactile, auditory, and multisensory stimuli. Atten Percept Psychophys 2021; 83:2267-2280. PMID: 33772447. DOI: 10.3758/s13414-021-02254-9.
Abstract
The growing use of vibrotactile signaling devices makes it important to understand the perceptual limits on vibrotactile information processing. To promote that understanding, we carried out a pair of experiments on vibrotactile, auditory, and bimodal (synchronous vibrotactile and auditory) temporal acuity. On each trial, subjects experienced a set of isochronous, standard intervals (400 ms each), followed by one interval of variable duration (400 ± 1-80 ms). Intervals were demarcated by short vibrotactile, auditory, or bimodal pulses. Subjects categorized the timing of the last interval by describing the final pulse as either "early" or "late" relative to its predecessors. In Experiment 1, each trial contained three isochronous standard intervals, followed by an interval of variable length. In Experiment 2, the number of isochronous standard intervals per trial varied, from one to four. Psychometric modeling revealed that vibrotactile stimulation produced poorer temporal discrimination than either auditory or bimodal stimulation. Moreover, auditory signals dominated bimodal sensitivity, and inter-individual differences in temporal discriminability were reduced with bimodal stimulation. Additionally, varying the number of isochronous intervals in a trial failed to improve temporal sensitivity in either modality, suggesting that memory played a key role in judgments of interval duration.
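The psychometric modeling described above fits "early"/"late" judgments as a function of the final interval's deviation from the 400 ms standard. A minimal sketch of that kind of summary, using a simulated observer rather than the study's data (the parameter names `pse` and `jnd`, and the interpolation approach, are illustrative assumptions):

```python
# Illustrative sketch (not the authors' code) of summarizing "early"/"late"
# judgments psychometrically: the point of subjective equality (PSE) is the
# deviation where p("late") crosses 0.5, and the just-noticeable difference
# (JND) is half the distance between the 25% and 75% points.
import math

def p_late(dev_ms, pse=5.0, sigma=30.0):
    """Simulated observer: cumulative Gaussian over deviation from 400 ms."""
    return 0.5 * (1.0 + math.erf((dev_ms - pse) / (sigma * math.sqrt(2.0))))

def crossing(xs, ps, level):
    """Linearly interpolate the deviation at which p crosses `level`."""
    for (x0, p0), (x1, p1) in zip(zip(xs, ps), zip(xs[1:], ps[1:])):
        if p0 <= level <= p1:
            return x0 + (level - p0) * (x1 - x0) / (p1 - p0)
    raise ValueError("level not crossed")

xs = list(range(-80, 81, 5))            # final-interval deviations in ms
ps = [p_late(x) for x in xs]
pse = crossing(xs, ps, 0.5)
jnd = 0.5 * (crossing(xs, ps, 0.75) - crossing(xs, ps, 0.25))
print(round(pse, 1), round(jnd))
```

A smaller JND corresponds to a steeper psychometric function, i.e., better temporal discrimination; on this view, the paper's finding is that vibrotactile demarcation yields a larger JND than auditory or bimodal demarcation.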
44. Jagini KK. Temporal Binding in Multisensory and Motor-Sensory Contexts: Toward a Unified Model. Front Hum Neurosci 2021; 15:629437. PMID: 33841117. PMCID: PMC8026855. DOI: 10.3389/fnhum.2021.629437.
Abstract
Our senses receive a manifold of sensory signals at any given moment in our daily lives. For a coherent and unified representation of information and precise motor control, our brain needs to temporally bind the signals emanating from a common causal event and segregate others. Traditionally, different mechanisms were proposed for the temporal binding phenomenon in multisensory and motor-sensory contexts. This paper reviews the literature on the temporal binding phenomenon in both multisensory and motor-sensory contexts and suggests future research directions for advancing the field. Moreover, by critically evaluating the recent literature, this paper suggests that common computational principles are responsible for the temporal binding in multisensory and motor-sensory contexts. These computational principles are grounded in the Bayesian framework of uncertainty reduction rooted in the Helmholtzian idea of unconscious causal inference.
Affiliation(s)
- Kishore Kumar Jagini
- Center for Cognitive and Brain Sciences, Indian Institute of Technology Gandhinagar, Gandhinagar, India
45. Opoku-Baah C, Wallace MT. Binocular Enhancement of Multisensory Temporal Perception. Invest Ophthalmol Vis Sci 2021; 62:7. PMID: 33661284. PMCID: PMC7938005. DOI: 10.1167/iovs.62.3.7.
Abstract
Purpose: The goal of this study was to examine the behavioral effects and to suggest possible underlying mechanisms of binocularity on audiovisual temporal perception in normally sighted individuals.
Methods: Participants performed two audiovisual simultaneity judgment tasks - one using simple flashes and beeps and the other using audiovisual speech stimuli - with the left eye, right eye, and both eyes. Two measures, the point of subjective simultaneity (PSS) and the temporal binding window (TBW), an index of audiovisual temporal acuity, were derived for each viewing condition, stimulus type, and participant. The data were then modeled using causal inference, allowing us to determine whether binocularity affected low-level unisensory mechanisms (i.e., sensory noise level) or high-level multisensory mechanisms (i.e., the prior probability of inferring a common cause, pC=1).
Results: Whereas for the PSS there was no significant effect of viewing condition, for the TBW, a significant interaction between stimulus type and viewing condition was found. Post hoc analyses revealed a significantly narrower TBW during binocular than monocular viewing (average of left and right eyes) for the flash-beep condition but no difference between the viewing conditions for the speech stimuli. Modeling results showed no significant difference in pC=1 but a significant reduction in sensory noise during binocular performance on flash-beep trials.
Conclusions: Binocular viewing was found to enhance audiovisual temporal acuity as indexed by the TBW for simple low-level audiovisual stimuli. Furthermore, modeling results suggest that this effect may stem from enhanced sensory representations, evidenced as a reduction in sensory noise affecting the measurement of physical asynchrony during audiovisual temporal perception.
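The PSS and TBW measures used above can be read directly off a simultaneity-judgment curve. The sketch below is an assumed, simplified analysis (not the authors' causal-inference model): a simulated observer's probability of reporting "simultaneous" is a Gaussian over SOA, the PSS is its peak, and the TBW is taken as the curve's full width at half maximum.

```python
# Illustrative sketch (assumed analysis, not the study's code) of deriving the
# point of subjective simultaneity (PSS) and a temporal binding window (TBW)
# proxy from simultaneity-judgment data.
import math

def p_simultaneous(soa_ms, pss=20.0, sigma=120.0, amp=0.95):
    """Simulated observer: Gaussian over audiovisual SOA (ms)."""
    return amp * math.exp(-((soa_ms - pss) ** 2) / (2 * sigma ** 2))

def pss_and_tbw(soas, probs):
    """Peak location (PSS) and full width at half maximum (TBW proxy)."""
    peak = max(probs)
    pss = soas[probs.index(peak)]
    above = [s for s, p in zip(soas, probs) if p >= peak / 2.0]
    return pss, above[-1] - above[0]

soas = list(range(-400, 401, 10))        # audiovisual SOAs in ms
probs = [p_simultaneous(s) for s in soas]
pss, tbw = pss_and_tbw(soas, probs)
print(pss, tbw)  # 20 280
```

On this reading, the paper's binocular benefit for flash-beep stimuli corresponds to a narrower width (smaller TBW) at an unchanged peak (stable PSS).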
Affiliation(s)
- Collins Opoku-Baah
- Neuroscience Graduate Program, Vanderbilt University, Nashville, Tennessee, United States; Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee, United States
- Mark T Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee, United States; Department of Psychology, Vanderbilt University, Nashville, Tennessee, United States; Department of Hearing and Speech, Vanderbilt University Medical Center, Nashville, Tennessee, United States; Vanderbilt Vision Research Center, Nashville, Tennessee, United States; Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States; Department of Pharmacology, Vanderbilt University, Nashville, Tennessee, United States
46. Møller C, Garza-Villarreal EA, Hansen NC, Højlund A, Bærentsen KB, Chakravarty MM, Vuust P. Audiovisual structural connectivity in musicians and non-musicians: a cortical thickness and diffusion tensor imaging study. Sci Rep 2021; 11:4324. PMID: 33619288. PMCID: PMC7900203. DOI: 10.1038/s41598-021-83135-x.
Abstract
Our sensory systems provide complementary information about the multimodal objects and events that are the target of perception in everyday life. Professional musicians' specialization in the auditory domain is reflected in the morphology of their brains, which has distinctive characteristics, particularly in areas related to auditory and audio-motor activity. Here, we combined diffusion tensor imaging (DTI) with a behavioral measure of visually induced gain in pitch discrimination, and we used measures of cortical thickness (CT) correlations to assess how auditory specialization and musical expertise are reflected in the structural architecture of white and grey matter relevant to audiovisual processing. Across all participants (n = 45), we found a correlation (p < 0.001) between reliance on visual cues in pitch discrimination and the fractional anisotropy (FA) in the left inferior fronto-occipital fasciculus (IFOF), a structure connecting visual and auditory brain areas. Group analyses also revealed greater cortical thickness correlation between visual and auditory areas in non-musicians (n = 28) compared to musicians (n = 17), possibly reflecting musicians' auditory specialization (FDR < 10%). Our results corroborate and expand current knowledge of functional specialization with a specific focus on audition, and highlight the fact that perception is essentially multimodal while uni-sensory processing is a specialized task.
Affiliation(s)
- Cecilie Møller
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Universitetsbyen 3, Building 1710, 8000 Aarhus C, Denmark
- Eduardo A. Garza-Villarreal
- Center of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark; Institute of Neurobiology, Universidad Nacional Autónoma de México, Boulevard Juriquilla 3001, C.P. 76230 Querétaro, Querétaro, Mexico
- Niels Chr. Hansen
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Universitetsbyen 3, Building 1710, 8000 Aarhus C, Denmark; Aarhus Institute of Advanced Studies, Aarhus University, Aarhus, Denmark
- Andreas Højlund
- Center of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark; Interacting Minds Centre, Aarhus University, Aarhus, Denmark
- Klaus B. Bærentsen
- Department of Psychology, Aarhus University, Aarhus, Denmark
- M. Mallar Chakravarty
- Cerebral Imaging Center, Douglas Mental Health University Institute, Montreal, QC, Canada; Department of Psychiatry, McGill University, Montreal, QC, Canada; Department of Biological and Biomedical Engineering, McGill University, Montreal, QC, Canada
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Universitetsbyen 3, Building 1710, 8000 Aarhus C, Denmark
47. Effects of stimulus intensity on audiovisual integration in aging across the temporal dynamics of processing. Int J Psychophysiol 2021; 162:95-103. PMID: 33529642. DOI: 10.1016/j.ijpsycho.2021.01.017.
Abstract
Previous studies have drawn different conclusions about whether older adults benefit more from audiovisual integration, and such conflicts may have been due to the stimulus features investigated in those studies, such as stimulus intensity. In the current study, using ERPs, we compared the effects of stimulus intensity on audiovisual integration between young adults and older adults. The results showed that inverse effectiveness, the phenomenon whereby lowering the effectiveness of sensory stimuli increases the benefits of multisensory integration, was observed in young adults at earlier processing stages but was absent in older adults. Moreover, at the earlier processing stages (60-90 ms and 110-140 ms), older adults exhibited significantly greater audiovisual integration than young adults (all ps < 0.05). However, at the later processing stages (220-250 ms and 340-370 ms), young adults exhibited significantly greater audiovisual integration than older adults (all ps < 0.001). The results suggest an age-related dissociation between early and late integration, indicating that different audiovisual processing mechanisms are at play in older and young adults.
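The inverse-effectiveness principle referenced above is usually quantified as the multisensory response's gain over the best unisensory response. A toy illustration with made-up amplitudes (not data from the study; the function name is hypothetical):

```python
# Toy illustration (not the study's data) of the gain measure that underlies
# "inverse effectiveness": the multisensory benefit, relative to the best
# unisensory response, tends to grow as stimulus intensity drops.
def multisensory_enhancement(av, best_unisensory):
    """Percent enhancement of the audiovisual response over max(A, V)."""
    return 100.0 * (av - best_unisensory) / best_unisensory

# Hypothetical response amplitudes at high vs low stimulus intensity:
high = multisensory_enhancement(av=1.10, best_unisensory=1.00)  # ~10% gain
low = multisensory_enhancement(av=0.45, best_unisensory=0.30)   # ~50% gain
print(round(high, 1), round(low, 1))  # 10.0 50.0
```

Larger relative gains at weaker intensities are the signature the study finds in young adults' early ERP components but not in older adults'.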
48. Sensory capability and information integration independently explain the cognitive status of healthy older adults. Sci Rep 2020; 10:22437. PMID: 33384454. PMCID: PMC7775431. DOI: 10.1038/s41598-020-80069-8.
Abstract
While there is evidence that sensory processing and multisensory integration change with age, links between these alterations and their relation to cognitive status remain unclear. In this study, we assessed sensory thresholds and performance of healthy younger and older adults in a visuotactile delayed match-to-sample task. Using Bayesian structural equation modelling (BSEM), we explored the factors explaining cognitive status in the group of older adults. Additionally, we applied transcranial alternating current stimulation (tACS) to a parieto-central network found to underlie visuotactile interactions and working memory matching in our previous work. Response times and signal detection measures indicated enhanced multisensory integration and enhanced benefit from successful working memory matching in older adults. Further, tACS caused a frequency-specific speeding (20 Hz) and delaying (70 Hz) of responses. Data exploration suggested distinct underlying factors for sensory acuity and sensitivity d' on the one hand, and multisensory and working memory enhancement on the other. Finally, BSEM showed that these two factors, labelled 'sensory capability' and 'information integration', independently explained cognitive status. We conclude that sensory decline and enhanced information integration might relate to distinct processes of ageing and discuss a potential role of the parietal cortex in mediating augmented integration in older adults.
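The sensitivity measure d' mentioned above comes from signal detection theory: the separation, in z-units, between the hit rate and the false-alarm rate. A minimal sketch with assumed example rates (not rates from the study):

```python
# Minimal sketch of the signal-detection sensitivity measure d', computed from
# hit and false-alarm rates. The rates used below are assumed examples.
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

print(round(d_prime(0.85, 0.20), 2))  # 1.88
```

In practice, extreme rates of 0 or 1 are first nudged inward (e.g., by a half-trial correction), since the inverse normal CDF is undefined at those endpoints.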
49. Opoku-Baah C, Wallace MT. Brief period of monocular deprivation drives changes in audiovisual temporal perception. J Vis 2020; 20:8. PMID: 32761108. PMCID: PMC7438662. DOI: 10.1167/jov.20.8.8.
Abstract
The human brain retains a striking degree of plasticity into adulthood. Recent studies have demonstrated that a short period of altered visual experience (via monocular deprivation) can change the dynamics of binocular rivalry in favor of the deprived eye, a compensatory action thought to be mediated by an upregulation of cortical gain control mechanisms. Here, we sought to better understand the impact of monocular deprivation on multisensory abilities, specifically examining audiovisual temporal perception. Using an audiovisual simultaneity judgment task, we discovered that 90 minutes of monocular deprivation produced opposing effects on the temporal binding window depending on the eye used in the task. Thus, in those who performed the task with their deprived eye there was a narrowing of the temporal binding window, whereas in those performing the task with their nondeprived eye there was a widening of the temporal binding window. The effect was short lived, being observed only in the first 10 minutes of postdeprivation testing. These findings indicate that changes in visual experience in the adult can rapidly impact multisensory perceptual processes, a finding that has important clinical implications for those patients with adult-onset visual deprivation and for therapies founded on monocular deprivation.
Affiliation(s)
- Mark T Wallace
50. Multisensory action effects facilitate the performance of motor sequences. Atten Percept Psychophys 2020; 83:475-483. PMID: 33135098. PMCID: PMC7875850. DOI: 10.3758/s13414-020-02179-9.
Abstract
Research has shown that contingent, distinct action effects have a beneficial influence on motor sequence performance. Previous studies showed the beneficial influence of task-irrelevant action effects from one modality (auditory) on motor sequence performance, compared with no task-irrelevant action effects. The present study investigated the influence of task-irrelevant action effects on motor sequence performance from a multiple-modality perspective. We compared motor sequence performances of participants who received different task-irrelevant action effects in an auditory, visual, or audiovisual condition. In the auditory condition, key presses produced tones of a C-major scale that mapped to keys from left to right in ascending order. In the visual condition, key presses produced rectangles in different locations on the screen that mapped to keys from left to right in ascending order. In the audiovisual condition, both tone and rectangle effects were produced simultaneously by key presses. There were advantages for the audiovisual group in motor sequence initiation and execution. The results implied that, compared with unimodal action effects, action effects from multiple sensory modalities can prime an action faster and strengthen associations between successive actions, leading to faster motor sequence performance.