1. Chow HM, Ma YK, Tseng CH. Social and communicative not a prerequisite: Preverbal infants learn an abstract rule only from congruent audiovisual dynamic pitch-height patterns. J Exp Child Psychol 2024;248:106046. PMID: 39241321. DOI: 10.1016/j.jecp.2024.106046.
Abstract
Learning in the everyday environment often requires the flexible integration of relevant multisensory information. Previous research has demonstrated preverbal infants' capacity to extract an abstract rule from audiovisual temporal sequences matched in temporal synchrony. Interestingly, this capacity was recently reported to be modulated by crossmodal correspondence beyond spatiotemporal matching (e.g., consistent facial emotional expressions or articulatory mouth movements matched with sound). To investigate whether such modulatory influence applies to non-social and non-communicative stimuli, we conducted a critical test using audiovisual stimuli free of social information: visually upward (or downward) moving objects paired with a tone of congruent (ascending) or incongruent (descending) pitch. East Asian infants (8-10 months old) from a metropolitan area in Asia demonstrated successful abstract rule learning in the congruent audiovisual condition but weaker learning in the incongruent condition. This implies that preverbal infants use crossmodal dynamic pitch-height correspondence to integrate multisensory information before rule extraction. This result confirms that preverbal infants are ready to use non-social, non-communicative information to serve cognitive functions such as rule extraction in a multisensory context.
Affiliation(s)
- Hiu Mei Chow
- Department of Psychology, St. Thomas University, Fredericton, New Brunswick E3B 5G3, Canada
- Yuen Ki Ma
- Department of Psychology, The University of Hong Kong, Pokfulam, Hong Kong
- Chia-Huei Tseng
- Research Institute of Electrical Communication, Tohoku University, Sendai, Miyagi 980-0812, Japan
2. Giurgola S, Lo Gerfo E, Farnè A, Roy AC, Bolognini N. Multisensory integration and motor resonance in the primary motor cortex. Cortex 2024;179:235-246. PMID: 39213776. DOI: 10.1016/j.cortex.2024.07.015.
Abstract
Humans are endowed with a motor system that resonates to speech sounds, but whether concurrent visual information from lip movements can improve speech perception at a motor level through multisensory integration mechanisms remains unknown. Therefore, the aim of this study was to explore behavioral and neurophysiological correlates of multisensory influences on motor resonance in speech perception. Motor-evoked potentials (MEPs), elicited by single-pulse transcranial magnetic stimulation (TMS) applied over the left lip muscle (orbicularis oris) representation in the primary motor cortex, were recorded in healthy participants during the presentation of syllables in unimodal (visual or auditory) or multisensory (audio-visual) congruent or incongruent conditions. At the behavioral level, subjects showed better syllable identification in the congruent audio-visual condition than in the unimodal conditions, hence showing a multisensory enhancement effect. Accordingly, at the neurophysiological level, increased MEP amplitudes were found in the congruent audio-visual condition compared to the unimodal ones. Incongruent audio-visual syllables resulting in illusory percepts did not increase corticospinal excitability, which was in fact comparable to that induced by the real perception of the same syllable. In conclusion, seeing and hearing congruent bilabial syllables increases the excitability of the lip representation in the primary motor cortex, hence documenting that multisensory integration can facilitate speech processing by influencing motor resonance. These findings highlight the modulatory role of multisensory processing, showing that it can boost speech perception and that multisensory interactions occur not only within higher-order regions but also within primary motor areas, as shown by corticospinal excitability changes.
Affiliation(s)
- Serena Giurgola
- Department of Psychology & NeuroMI - Milan Center for Neuroscience, University of Milano-Bicocca, Milan, Italy
- Alessandro Farnè
- Impact Team of the Lyon Neuroscience Research Centre, INSERM U1028 CNRS UMR5292, University Claude Bernard Lyon 1, Lyon, France
- Alice C Roy
- Laboratoire Dynamique du Langage, Centre National de la Recherche Scientifique, UMR 5596, CNRS Université de Lyon 2, Lyon, France
- Nadia Bolognini
- Department of Psychology & NeuroMI - Milan Center for Neuroscience, University of Milano-Bicocca, Milan, Italy; IRCCS Istituto Auxologico Italiano, Laboratory of Neuropsychology, Milan, Italy
3. Vogler NW, Chen R, Virkler A, Tu VY, Gottfried JA, Geffen MN. Direct piriform-to-auditory cortical projections shape auditory-olfactory integration. bioRxiv [preprint] 2024:2024.07.11.602976. PMID: 39071445. PMCID: PMC11275881. DOI: 10.1101/2024.07.11.602976.
Abstract
In a real-world environment, the brain must integrate information from multiple sensory modalities, including the auditory and olfactory systems. However, little is known about the neuronal circuits governing how odors influence and modulate sound processing. Here, we investigated the mechanisms underlying auditory-olfactory integration using anatomical, electrophysiological, and optogenetic approaches, focusing on the auditory cortex as a key locus for cross-modal integration. First, retrograde and anterograde viral tracing strategies revealed a direct projection from the piriform cortex to the auditory cortex. Next, using in vivo electrophysiological recordings of neuronal activity in the auditory cortex of awake male or female mice, we found that odors modulate auditory cortical responses to sound. Finally, we used in vivo optogenetic manipulations during electrophysiology to demonstrate that olfactory modulation in auditory cortex, specifically, odor-driven enhancement of sound responses, depends on direct input from the piriform cortex. Together, our results identify a novel role of piriform-to-auditory cortical circuitry in shaping olfactory modulation in the auditory cortex, shedding new light on the neuronal mechanisms underlying auditory-olfactory integration.
Affiliation(s)
- Nathan W. Vogler
- Department of Otorhinolaryngology, Perelman School of Medicine, University of Pennsylvania
- Ruoyi Chen
- Department of Otorhinolaryngology, Perelman School of Medicine, University of Pennsylvania
- Alister Virkler
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania
- Violet Y. Tu
- Department of Otorhinolaryngology, Perelman School of Medicine, University of Pennsylvania
- Jay A. Gottfried
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania
- Maria N. Geffen
- Department of Otorhinolaryngology, Perelman School of Medicine, University of Pennsylvania
4. Senkowski D, Engel AK. Multi-timescale neural dynamics for multisensory integration. Nat Rev Neurosci 2024;25:625-642. PMID: 39090214. DOI: 10.1038/s41583-024-00845-7.
Abstract
Carrying out any everyday task, be it driving in traffic, conversing with friends or playing basketball, requires rapid selection, integration and segregation of stimuli from different sensory modalities. At present, even the most advanced artificial intelligence-based systems are unable to replicate the multisensory processes that the human brain routinely performs, but how neural circuits in the brain carry out these processes is still not well understood. In this Perspective, we discuss recent findings that shed fresh light on the oscillatory neural mechanisms that mediate multisensory integration (MI), including power modulations, phase resetting, phase-amplitude coupling and dynamic functional connectivity. We then consider studies that also suggest multi-timescale dynamics in intrinsic ongoing neural activity and during stimulus-driven bottom-up and cognitive top-down neural network processing in the context of MI. We propose a new concept of MI that emphasizes the critical role of neural dynamics at multiple timescales within and across brain networks, enabling the simultaneous integration, segregation, hierarchical structuring and selection of information in different time windows. To highlight predictions from our multi-timescale concept of MI, real-world scenarios in which multi-timescale processes may coordinate MI in a flexible and adaptive manner are considered.
Affiliation(s)
- Daniel Senkowski
- Department of Psychiatry and Neurosciences, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Andreas K Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
5. Çetinçelik M, Jordan-Barros A, Rowland CF, Snijders TM. The effect of visual speech cues on neural tracking of speech in 10-month-old infants. Eur J Neurosci 2024. PMID: 39188179. DOI: 10.1111/ejn.16492.
Abstract
While infants' sensitivity to visual speech cues and the benefit of these cues have been well-established by behavioural studies, there is little evidence on the effect of visual speech cues on infants' neural processing of continuous auditory speech. In this study, we investigated whether visual speech cues, such as the movements of the lips, jaw, and larynx, facilitate infants' neural speech tracking. Ten-month-old Dutch-learning infants watched videos of a speaker reciting passages in infant-directed speech while electroencephalography (EEG) was recorded. In the videos, either the full face of the speaker was displayed or the speaker's mouth and jaw were masked with a block, obstructing the visual speech cues. To assess neural tracking, speech-brain coherence (SBC) was calculated, focusing particularly on the stress and syllabic rates (1-1.75 Hz and 2.5-3.5 Hz, respectively, in our stimuli). First, overall SBC was compared to surrogate data; then, differences in SBC between the two conditions were tested at the frequencies of interest. Our results indicated that infants show significant tracking at both stress and syllabic rates. However, no differences were identified between the two conditions, meaning that infants' neural tracking was not modulated further by the presence of visual speech cues. Furthermore, we demonstrated that infants' neural tracking of low-frequency information is related to their subsequent vocabulary development at 18 months. Overall, this study provides evidence that infants' neural tracking of speech is not necessarily impaired when visual speech cues are not fully visible and that neural tracking may be a potential mechanism in successful language acquisition.
Affiliation(s)
- Melis Çetinçelik
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Department of Experimental Psychology, Utrecht University, Utrecht, The Netherlands
- Cognitive Neuropsychology Department, Tilburg University, Tilburg, The Netherlands
- Antonia Jordan-Barros
- Centre for Brain and Cognitive Development, Department of Psychological Science, Birkbeck, University of London, London, UK
- Experimental Psychology, University College London, London, UK
- Caroline F Rowland
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Tineke M Snijders
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Cognitive Neuropsychology Department, Tilburg University, Tilburg, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
6. Décaillet M, Denervaud S, Huguenin-Virchaux C, Besuchet L, Fischer Fumeaux CJ, Murray MM, Schneider J. The impact of premature birth on auditory-visual processes in very preterm schoolchildren. npj Sci Learn 2024;9:42. PMID: 38971881. PMCID: PMC11227572. DOI: 10.1038/s41539-024-00257-3.
Abstract
Interactions between stimuli from different sensory modalities and their integration are central to daily life, contributing to improved perception. Being born prematurely and the subsequent hospitalization can have an impact not only on sensory processes, but also on the manner in which information from different senses is combined (i.e., multisensory processes). Very preterm (VPT) children (<32 weeks gestational age) present impaired multisensory processes in early childhood persisting at least through the age of five. However, it remains largely unknown whether and how these consequences persist into later childhood. Here, we evaluated the integrity of auditory-visual multisensory processes in VPT schoolchildren. VPT children (N = 28; aged 8-10 years) received a standardized cognitive assessment and performed a simple detection task at their routine follow-up appointment. The simple detection task involved pressing a button as quickly as possible upon presentation of an auditory, visual, or simultaneous audio-visual stimulus. Compared to full-term (FT) children (N = 23; aged 6-11 years), reaction times of VPT children were generally slower and more variable, regardless of sensory modality. Nonetheless, both groups exhibited multisensory facilitation on mean reaction times and inter-quartile ranges. There was no evidence that standardized cognitive or clinical measures correlated with multisensory gains of VPT children. However, while gains in FT children exceeded predictions based on probability summation and thus forcibly invoked integrative processes, this was not the case for VPT children. Our findings provide evidence of atypical multisensory profiles in VPT children persisting into school age. These results could help in targeting supportive interventions for this vulnerable population.
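The "predictions based on probability summation" benchmark mentioned in this abstract is commonly operationalized as a race-model bound on reaction-time distributions: if two unisensory channels race independently, the multisensory CDF cannot exceed the sum of the unisensory CDFs at any time point. The abstract does not give the authors' exact formulation, so the following is a generic sketch with invented reaction-time distributions (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reaction-time samples (ms) for auditory, visual, and
# audio-visual detection; the AV distribution is deliberately fast enough
# to violate the race-model bound, mimicking true integration.
rt_a = rng.normal(320, 40, 5000)
rt_v = rng.normal(350, 45, 5000)
rt_av = rng.normal(265, 35, 5000)

def ecdf(samples, t):
    """Empirical cumulative distribution P(RT <= t) on a grid of times t."""
    return np.mean(samples[:, None] <= t, axis=0)

t_grid = np.linspace(150, 500, 200)
bound = np.clip(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 0, 1)  # race-model bound
f_av = ecdf(rt_av, t_grid)

# Any time point where the AV CDF exceeds the summed unisensory CDFs
# cannot be explained by two independent channels racing.
violation = np.max(f_av - bound)
print(f"max race-model violation: {violation:.3f}")
print("integration inferred" if violation > 0 else "race model suffices")
```

A positive violation is the signature the abstract describes for full-term children; for the VPT group, the multisensory facilitation observed did not exceed this bound.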
Affiliation(s)
- Marion Décaillet
- Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Solange Denervaud
- Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Cléo Huguenin-Virchaux
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Laureline Besuchet
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Céline J Fischer Fumeaux
- Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Micah M Murray
- Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Juliane Schneider
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
7. Ampollini S, Ardizzi M, Ferroni F, Cigala A. Synchrony perception across senses: A systematic review of temporal binding window changes from infancy to adolescence in typical and atypical development. Neurosci Biobehav Rev 2024;162:105711. PMID: 38729280. DOI: 10.1016/j.neubiorev.2024.105711.
Abstract
Sensory integration is increasingly acknowledged as being crucial for the development of cognitive and social abilities. However, its developmental trajectory is still little understood. This systematic review delves into the topic by investigating the literature on developmental changes, from infancy through adolescence, in the Temporal Binding Window (TBW), the epoch of time within which sensory inputs are perceived as simultaneous and therefore integrated. Following comprehensive searches across the PubMed, Elsevier, and PsycInfo databases, only experimental, behavioral, English-language, peer-reviewed studies on multisensory temporal processing in 0-17-year-olds were included. Non-behavioral, non-multisensory, and non-human studies were excluded, as were those that did not directly focus on the TBW. The selection process was performed independently by two authors. The 39 selected studies involved 2,859 participants in total. Findings indicate a predisposition towards cross-modal asynchrony sensitivity and a composite, still unclear, developmental trajectory, with atypical development associated with increased asynchrony tolerance. These results highlight the need for consistent and thorough research into TBW development to inform potential interventions.
Affiliation(s)
- Silvia Ampollini
- Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Borgo Carissimi, 10, Parma 43121, Italy
- Martina Ardizzi
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Via Volturno 39E, Parma 43121, Italy
- Francesca Ferroni
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Via Volturno 39E, Parma 43121, Italy
- Ada Cigala
- Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Borgo Carissimi, 10, Parma 43121, Italy
8. Salinas E, Stanford TR. Conditional independence as a statistical assessment of evidence integration processes. PLoS One 2024;19:e0297792. PMID: 38722936. PMCID: PMC11081312. DOI: 10.1371/journal.pone.0297792.
Abstract
Intuitively, combining multiple sources of evidence should lead to more accurate decisions than considering single sources of evidence individually. In practice, however, the proper computation may be difficult, or may require additional data that are inaccessible. Here, based on the concept of conditional independence, we consider expressions that can serve either as recipes for integrating evidence based on limited data, or as statistical benchmarks for characterizing evidence integration processes. Consider three events, A, B, and C. We find that, if A and B are conditionally independent with respect to C, then the probability that C occurs given that both A and B are known, P(C|A, B), can be easily calculated without the need to measure the full three-way dependency between A, B, and C. This simplified approach can be used in two general ways: to generate predictions by combining multiple (conditionally independent) sources of evidence, or to test whether separate sources of evidence are functionally independent of each other. These applications are demonstrated with four computer-simulated examples, which include detecting a disease based on repeated diagnostic testing, inferring biological age based on multiple biomarkers of aging, discriminating two spatial locations based on multiple cue stimuli (multisensory integration), and examining how behavioral performance in a visual search task depends on selection histories. Besides providing a sound prescription for predicting outcomes, this methodology may be useful for analyzing experimental data of many types.
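For a binary outcome C, the simplification described in this abstract amounts to combining the two single-source posteriors P(C|A) and P(C|B) through Bayes' rule, without ever measuring the three-way dependency. A minimal sketch of the idea, using the repeated-diagnostic-testing example with made-up prevalence, sensitivity, and specificity values (not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers for illustration: disease prevalence and the
# sensitivity/specificity of two tests that are conditionally
# independent given disease status.
prev = 0.1          # P(C): prior probability of disease
sens = [0.9, 0.8]   # P(test i positive | disease)
spec = [0.95, 0.85] # P(test i negative | no disease)

n = 1_000_000
disease = rng.random(n) < prev
t1 = np.where(disease, rng.random(n) < sens[0], rng.random(n) < 1 - spec[0])
t2 = np.where(disease, rng.random(n) < sens[1], rng.random(n) < 1 - spec[1])

# Empirical P(C | A, B): fraction diseased among double positives.
both_pos = t1 & t2
p_empirical = disease[both_pos].mean()

def posterior(sensitivity, fpr, prior):
    """P(C | single positive test), by Bayes' rule."""
    return sensitivity * prior / (sensitivity * prior + fpr * (1 - prior))

# Prediction from the two pairwise conditionals alone, assuming
# conditional independence of the tests given C (and given not-C).
pA = posterior(sens[0], 1 - spec[0], prev)
pB = posterior(sens[1], 1 - spec[1], prev)
num = (pA * pB) / prev
den = num + ((1 - pA) * (1 - pB)) / (1 - prev)
p_predicted = num / den

print(f"empirical P(C|A,B) = {p_empirical:.3f}")
print(f"predicted P(C|A,B) = {p_predicted:.3f}")
```

The two numbers agree here because the simulated tests really are conditionally independent given disease status; when they are not, the prediction diverges from the measured posterior, which is what lets the expression serve as a statistical benchmark for functional independence of evidence sources.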
Affiliation(s)
- Emilio Salinas
- Department of Neurobiology & Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina, United States of America
- Terrence R. Stanford
- Department of Neurobiology & Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina, United States of America
9. Huntley MK, Nguyen A, Albrecht MA, Marinovic W. Tactile cues are more intrinsically linked to motor timing than visual cues in visual-tactile sensorimotor synchronization. Atten Percept Psychophys 2024;86:1022-1037. PMID: 38263510. PMCID: PMC11062975. DOI: 10.3758/s13414-023-02828-9.
Abstract
Many tasks require precise synchronization with external sensory stimuli, such as driving a car. This study investigates whether combined visual-tactile information provides additional benefits to movement synchrony over separate visual and tactile stimuli and explores the relationship with the temporal binding window for multisensory integration. In Experiment 1, participants completed a sensorimotor synchronization task to examine movement variability and a simultaneity judgment task to measure the temporal binding window. Results showed similar synchronization variability between visual-tactile and tactile-only stimuli, but significantly lower variability than with visual-only stimuli. In Experiment 2, participants completed a visual-tactile sensorimotor synchronization task with cross-modal stimuli presented inside (stimulus onset asynchrony 80 ms) and outside (stimulus onset asynchrony 400 ms) the temporal binding window to examine temporal accuracy of movement execution. Participants synchronized their movement with the first stimulus in the cross-modal pair, either the visual or tactile stimulus. Results showed significantly greater temporal accuracy when only one stimulus was presented inside the window and the second stimulus was outside the window than when both stimuli were presented inside the window, with movement execution being more accurate when attending to the tactile stimulus. Overall, these findings indicate there may be a modality-specific benefit to sensorimotor synchronization performance, such that tactile cues are weighted more strongly than visual information, as tactile information is more intrinsically linked to motor timing than visual information. Further, our findings indicate that the visual-tactile temporal binding window is related to the temporal accuracy of movement execution.
Affiliation(s)
- Michelle K Huntley
- School of Population Health, Curtin University, Perth, Western Australia, Australia
- School of Psychology and Public Health, La Trobe University, Wodonga, Victoria, Australia
- An Nguyen
- School of Population Health, Curtin University, Perth, Western Australia, Australia
- Matthew A Albrecht
- Western Australia Centre for Road Safety Research, School of Psychological Science, University of Western Australia, Perth, Western Australia, Australia
- Welber Marinovic
- School of Population Health, Curtin University, Perth, Western Australia, Australia
10. Scheller M, Nardini M. Correctly establishing evidence for cue combination via gains in sensory precision: Why the choice of comparator matters. Behav Res Methods 2024;56:2842-2858. PMID: 37730934. PMCID: PMC11133123. DOI: 10.3758/s13428-023-02227-w.
Abstract
Studying how sensory signals from different sources (sensory cues) are integrated within or across multiple senses allows us to better understand the perceptual computations that lie at the foundation of adaptive behaviour. As such, determining the presence of precision gains, the classic hallmark of cue combination, is important for characterising perceptual systems, their development, and their functioning in clinical conditions. However, empirically measuring precision gains to distinguish cue combination from alternative perceptual strategies requires careful methodological considerations. Here, we note that the majority of existing studies that tested for cue combination either omitted this important contrast, or used an analysis approach that, unknowingly, strongly inflated false positives. Using simulations, we demonstrate that this approach enhances the chances of finding significant cue combination effects in up to 100% of cases, even when cues are not combined. We establish how this error arises when the wrong cue comparator is chosen and recommend an alternative analysis that is easy to implement but has been adopted by relatively few studies. By comparing combined-cue perceptual precision with the best single-cue precision, determined for each observer individually rather than at the group level, researchers can enhance the credibility of their reported effects. We also note that testing for deviations from optimal predictions alone is not sufficient to ascertain whether cues are combined. Taken together, to correctly test for perceptual precision gains, we advocate for careful comparator selection and task design to ensure that cue combination is tested with maximum power, while reducing the inflation of false positives.
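The comparator argument can be made concrete with the standard maximum-likelihood prediction for combined-cue variance, var_AB = var_A * var_B / (var_A + var_B), which always lies below the better single cue. A sketch with hypothetical per-observer noise levels (all values illustrative, not from the paper) showing why each observer's own best single cue is the safe comparator:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical single-cue noise levels (e.g., discrimination thresholds)
# for each observer; observer count and ranges are illustrative only.
n_obs = 20
sd_a = rng.uniform(1.0, 3.0, n_obs)   # noise of cue A per observer
sd_b = rng.uniform(1.0, 3.0, n_obs)   # noise of cue B per observer

# Maximum-likelihood (optimal) combined-cue prediction:
# var_AB = var_A * var_B / (var_A + var_B)
sd_opt = np.sqrt((sd_a**2 * sd_b**2) / (sd_a**2 + sd_b**2))

# The correct comparator: each observer's *own* best single cue.
best_single = np.minimum(sd_a, sd_b)

# Optimal combination always beats the best single cue...
assert np.all(sd_opt < best_single)

# ...but an observer who simply *chooses* their better cue (no combination
# at all) can still look "better than a single cue" when compared against a
# fixed or group-level comparator, inflating false positives.
cue_switcher = best_single.copy()   # performance without any integration
wrong_comparator = sd_a             # e.g., always comparing against cue A
spurious_gain = np.mean(cue_switcher < wrong_comparator)
true_gain = np.mean(cue_switcher < best_single)
print(f"apparent 'gains' vs. fixed comparator: {spurious_gain:.0%}")
print(f"gains vs. own best single cue:        {true_gain:.0%}")
```

The cue-switching observer shows zero genuine precision gain against the per-observer best single cue, while the fixed comparator credits a sizable fraction of observers with spurious "combination", which is the inflation the authors warn about.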
Affiliation(s)
- Meike Scheller
- Department of Psychology, Durham University, Durham, UK
- Marko Nardini
- Department of Psychology, Durham University, Durham, UK
11. O'Dowd A, O'Connor DMA, Hirst RJ, Setti A, Kenny RA, Newell FN. Nutrition is associated with differences in multisensory integration in healthy older adults. Nutr Neurosci 2024:1-11. PMID: 38386286. DOI: 10.1080/1028415x.2024.2316446.
Abstract
Diet can influence cognitive functioning in older adults and is a modifiable risk factor for cognitive decline. However, it is unknown if an association exists between diet and lower-level processes in the brain underpinning cognition, such as multisensory integration. We investigated whether temporal multisensory integration is associated with daily intake of fruit and vegetables (FV) or products high in fat/sugar/salt (FSS) in a large sample (N = 2,693) of older adults (mean age = 64.06 years, SD = 7.60; 56% female) from The Irish Longitudinal Study on Ageing (TILDA). Older adults completed a Food Frequency Questionnaire from which the total number of daily servings of FV and FSS items, respectively, was calculated. Older adults' susceptibility to the Sound-Induced Flash Illusion (SIFI) was used to measure the temporal precision of audio-visual integration, tested at three audio-visual Stimulus Onset Asynchronies (SOAs): 70, 150, and 230 ms. Older adults who self-reported a higher daily consumption of FV were less susceptible to the SIFI at the longest versus shortest SOAs (i.e., increased temporal precision) compared to those reporting the lowest daily consumption (p = .013). In contrast, older adults reporting a higher daily consumption of FSS items were more susceptible to the SIFI at the longer versus shortest SOAs (i.e., reduced temporal precision) compared to those reporting the lowest daily consumption (p < .001). The temporal precision of multisensory integration is differentially associated with levels of daily consumption of FV versus products high in FSS, consistent with broader evidence that habitual diet is associated with brain health.
Affiliation(s)
- Alan O'Dowd
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
- Deirdre M A O'Connor
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
- Mercer Institute for Successful Ageing, St. James Hospital, Dublin, Ireland
- Department of Medical Gerontology, School of Medicine, Trinity College Dublin, Dublin, Ireland
- Rebecca J Hirst
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
- Annalisa Setti
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
- School of Applied Psychology, University College Cork, Cork, Ireland
- Rose Anne Kenny
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
- Mercer Institute for Successful Ageing, St. James Hospital, Dublin, Ireland
- Department of Medical Gerontology, School of Medicine, Trinity College Dublin, Dublin, Ireland
- Fiona N Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
12. Loskutova E, Butler JS, Setti A, O'Brien C, Loughman J. Ability to Process Multisensory Information Is Impaired in Open Angle Glaucoma. J Glaucoma 2024;33:78-86. PMID: 37974328. DOI: 10.1097/ijg.0000000000002331.
Abstract
PRCIS Patients with glaucoma demonstrated deficiencies in their ability to process multisensory information when compared with controls, with those deficiencies being related to glaucoma severity. Impaired multisensory integration (MSI) may affect the quality of life in individuals with glaucoma and may contribute to the increased prevalence of falls and driving safety concerns. Therapeutic possibilities to influence cognition in glaucoma should be explored. PURPOSE Glaucoma is a neurodegenerative disease of the optic nerve that has also been linked to cognitive health decline. This study explored MSI as a function of glaucoma status and severity. METHODS MSI was assessed in 37 participants with open angle glaucoma relative to 18 age-matched healthy controls. The sound-induced flash illusion was used to assess MSI efficiency. Participants were presented with various combinations of simultaneous visual and/or auditory stimuli and were required to indicate the number of visual stimuli observed for each of the 96 total presentations. Central retinal sensitivity was assessed as an indicator of glaucoma severity (MAIA; CenterVue). RESULTS Participants with glaucoma performed with equivalent capacity to healthy controls on unisensory trials ( F1,53 =2.222, P =0.142). Both groups performed equivalently on congruent multisensory trials involving equal numbers of auditory and visual stimuli F1,53 =1.032, P =0.314). For incongruent presentations, that is, 2 beeps and 1 flash stimulus, individuals with glaucoma demonstrated a greater influence of the incongruent beeps when judging the number of flashes, indicating less efficient MSI relative to age-matched controls ( F1,53 =11.45, P <0.002). In addition, MSI performance was positively correlated with retinal sensitivity ( F3,49 =4.042, P <0.025), adjusted R ²=0.15). CONCLUSIONS Individuals with open angle glaucoma exhibited MSI deficiencies that relate to disease severity. 
The deficiencies observed were similar to those observed among older individuals with cognitive impairment and balance issues. Impaired MSI may, therefore, be relevant to the increased prevalence of falls observed among individuals with glaucoma, a concept that merits further investigation.
Affiliation(s)
- Ekaterina Loskutova
- Centre for Eye Research Ireland, School of Physics, Clinical & Optometric Sciences, Technological University Dublin, Dublin, Ireland
- John S Butler
- Centre for Eye Research Ireland, School of Mathematical Sciences, Technological University Dublin, Dublin, Ireland
- Annalisa Setti
- School of Applied Psychology, University College Cork, Cork, Ireland
- Colm O'Brien
- Department of Ophthalmology, Mater Misericordiae University Hospital, Dublin, Ireland
- James Loughman
- Centre for Eye Research Ireland, School of Physics, Clinical & Optometric Sciences, Technological University Dublin, Dublin, Ireland
13
Alemi R, Wolfe J, Neumann S, Manning J, Hanna L, Towler W, Wilson C, Bien A, Miller S, Schafer E, Gemignani J, Koirala N, Gracco VL, Deroche M. Motor Processing in Children With Cochlear Implants as Assessed by Functional Near-Infrared Spectroscopy. Percept Mot Skills 2024; 131:74-105. [PMID: 37977135 PMCID: PMC10863375 DOI: 10.1177/00315125231213167] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2023]
Abstract
Auditory-motor and visual-motor networks are often coupled in daily activities, such as when listening to music and dancing; but these networks are known to be highly malleable as a function of sensory input. Thus, congenital deafness may modify neural activities within the connections between the motor, auditory, and visual cortices. Here, we investigated whether the cortical responses of children with cochlear implants (CI) to a simple and repetitive motor task would differ from those of children with typical hearing (TH), and whether these responses related to their language development. Participants were 75 school-aged children, including 50 with CI (with varying language abilities) and 25 controls with TH. We used functional near-infrared spectroscopy (fNIRS) to record cortical responses over the whole brain as children squeezed the back triggers of a joystick that vibrated or not with the squeeze. Motor cortex activity was reflected by an increase in oxygenated hemoglobin concentration (HbO) and a decrease in deoxygenated hemoglobin concentration (HbR) in all children, irrespective of their hearing status. Unexpectedly, the visual cortex (supposedly an irrelevant region) was deactivated in this task, particularly for children with CI who had good language skills when compared to those with CI who had language delays. Presence or absence of vibrotactile feedback made no difference in cortical activation. These findings support the potential of fNIRS to examine cognitive functions related to language in children with CI.
Affiliation(s)
- Razieh Alemi
- Department of Psychology, Concordia University, Montreal, QC, Canada
- Jace Wolfe
- Oberkotter Foundation, Oklahoma City, OK, USA
- Sara Neumann
- Hearts for Hearing Foundation, Oklahoma City, OK, USA
- Jacy Manning
- Hearts for Hearing Foundation, Oklahoma City, OK, USA
- Lindsay Hanna
- Hearts for Hearing Foundation, Oklahoma City, OK, USA
- Will Towler
- Hearts for Hearing Foundation, Oklahoma City, OK, USA
- Caleb Wilson
- Department of Otolaryngology-Head & Neck Surgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA
- Alexander Bien
- Department of Otolaryngology-Head & Neck Surgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA
- Sharon Miller
- Department of Audiology & Speech-Language Pathology, University of North Texas, Denton, TX, USA
- Erin Schafer
- Department of Audiology & Speech-Language Pathology, University of North Texas, Denton, TX, USA
- Jessica Gemignani
- Department of Developmental and Social Psychology, University of Padua, Padova, Italy
- Mickael Deroche
- Department of Psychology, Concordia University, Montreal, QC, Canada
14
Feldman JI, Garla V, Dunham K, Markfeld JE, Bowman SM, Golden AJ, Daly C, Kaiser S, Mailapur N, Raj S, Santapuram P, Suzman E, Augustine AE, Muhumuza A, Cascio CJ, Williams KL, Kirby AV, Keceli-Kaysili B, Woynaroski TG. Longitudinal Relations Between Early Sensory Responsiveness and Later Communication in Infants with Autistic and Non-autistic Siblings. J Autism Dev Disord 2024; 54:594-606. [PMID: 36441431 PMCID: PMC9707174 DOI: 10.1007/s10803-022-05817-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/04/2022] [Indexed: 11/29/2022]
Abstract
Early differences in sensory responsiveness may contribute to difficulties with communication among autistic children; however, this theory has not been longitudinally assessed in infants at increased familial versus general population-level likelihood for autism (Sibs-autism vs. Sibs-NA) using a comprehensive battery of measures of sensory responsiveness and communication. In a sample of 40 infants (20 Sibs-autism, of whom six were later diagnosed with autism; 20 Sibs-NA), we (a) tested associations between sensory responsiveness at 12-18 months and communication 9 months later and (b) evaluated whether such associations were moderated by sibling group, autism diagnosis, or age. We found negative zero-order correlations between sensory responsiveness (i.e., caregiver-reported hyperresponsiveness and hyporesponsiveness; an observational measure of hyperresponsiveness) and later communication. Additionally, caregiver-reported sensory seeking was negatively associated with later expressive communication only in Sibs-NA. Limitations include our relatively small sample size of infants diagnosed with autism. Implications for future research are discussed.
Affiliation(s)
- Jacob I Feldman
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 8310 South Tower, 1215 21st Avenue South, Nashville, TN, 37232, USA.
- Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA.
- Varsha Garla
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Kacie Dunham
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Jennifer E Markfeld
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Sarah M Bowman
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 8310 South Tower, 1215 21st Avenue South, Nashville, TN, 37232, USA
- Department of Pediatrics, Cincinnati Children's Hospital, Cincinnati, OH, USA
- Alexandra J Golden
- Vanderbilt School of Medicine, Vanderbilt University, Nashville, TN, USA
- Claire Daly
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Sophia Kaiser
- Cognitive Studies Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Nisha Mailapur
- Economics Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Sweeya Raj
- Vanderbilt School of Medicine, Vanderbilt University, Nashville, TN, USA
- Pooja Santapuram
- Vanderbilt School of Medicine, Vanderbilt University, Nashville, TN, USA
- Department of Anesthesiology, Columbia University Irving Medical Center, New York, NY, USA
- Evan Suzman
- Master's Program in Biomedical Science, Vanderbilt University, Nashville, TN, USA
- University of Texas Southwestern Medical School, University of Texas, Dallas, TX, USA
- Ashley E Augustine
- Biological Sciences Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Department of Pediatrics, University Hospitals Cleveland, Cleveland, OH, USA
- Aine Muhumuza
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Carissa J Cascio
- Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Kathryn L Williams
- Department of Occupational Therapy and Occupational Science, Towson University, Towson, MD, USA
- Anne V Kirby
- Department of Occupational and Recreational Therapies, University of Utah, Salt Lake City, UT, USA
- Bahar Keceli-Kaysili
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 8310 South Tower, 1215 21st Avenue South, Nashville, TN, 37232, USA
- Tiffany G Woynaroski
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 8310 South Tower, 1215 21st Avenue South, Nashville, TN, 37232, USA
- Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
15
Qian Q, Cai S, Zhang X, Huang J, Chen Y, Wang A, Zhang M. Seeing is believing: Larger Colavita effect in school-aged children with attention-deficit/hyperactivity disorder. J Exp Child Psychol 2024; 238:105798. [PMID: 37844345 DOI: 10.1016/j.jecp.2023.105798] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2023] [Revised: 08/07/2023] [Accepted: 09/25/2023] [Indexed: 10/18/2023]
Abstract
Attention-deficit/hyperactivity disorder (ADHD) is a common neurodevelopmental disorder that leads to visually relevant compensatory activities and cognitive strategies in children. Previous studies have identified difficulties with audiovisual integration in children with ADHD, but the characteristics of the visual dominance effect when processing multisensory stimuli are not clear in this population. The current study used the Colavita paradigm to explore the visual dominance effect in school-aged children with ADHD. The results showed that, compared with typically developing children, children with ADHD had a higher proportion of "visual-auditory" trials and a lower proportion of "simultaneous" trials. The study also found that the proportion of visual-auditory trials in children with ADHD decreased as their Swanson, Nolan, and Pelham-IV rating scale (SNAP-IV) inattention scores increased. Together, these results indicate that school-aged children with ADHD had a larger Colavita effect, which decreased with the severity of inattentive symptoms. This may be due to an overreliance on visual information and an abnormal integration time window. The connection between multisensory cognitive processing performance and clinical symptoms found in the current study provides empirical and theoretical support for understanding multisensory and cognitive abilities in neurodevelopmental disorders.
Affiliation(s)
- Qinyue Qian
- Department of Psychology, Soochow University, Suzhou, Jiangsu 215123, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, Jiangsu 215123, China
- Shizhong Cai
- Department of Child and Adolescent Healthcare, Children's Hospital of Soochow University, Suzhou, Jiangsu 215003, China
- Xianghui Zhang
- Department of Psychology, Soochow University, Suzhou, Jiangsu 215123, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, Jiangsu 215123, China
- Jie Huang
- Department of Psychology, Soochow University, Suzhou, Jiangsu 215123, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, Jiangsu 215123, China
- Yan Chen
- Department of Child and Adolescent Healthcare, Children's Hospital of Soochow University, Suzhou, Jiangsu 215003, China.
- Aijun Wang
- Department of Psychology, Soochow University, Suzhou, Jiangsu 215123, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, Jiangsu 215123, China.
- Ming Zhang
- Department of Psychology, Soochow University, Suzhou, Jiangsu 215123, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, Jiangsu 215123, China; Department of Psychology, Suzhou University of Science and Technology, Suzhou, Jiangsu 215011, China; Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama 700-8530, Japan.
16
Park M, Blake R, Kim CY. Audiovisual interactions outside of visual awareness during motion adaptation. Neurosci Conscious 2024; 2024:niad027. [PMID: 38292024 PMCID: PMC10823907 DOI: 10.1093/nc/niad027] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2023] [Revised: 12/05/2023] [Accepted: 12/27/2023] [Indexed: 02/01/2024] Open
Abstract
Motion aftereffects (MAEs), illusory motion experienced in a direction opposite to that of real motion viewed during prior adaptation, have been used to assess audiovisual interactions. In a previous study from our laboratory, we demonstrated that a congruent direction of auditory motion presented concurrently with visual motion during adaptation strengthened the consequent visual MAE, compared to when auditory motion was incongruent in direction. Those judgments of MAE strength, however, could have been influenced by expectations or response bias from mere knowledge of the state of audiovisual congruity during adaptation. To prevent such knowledge, we employed continuous flash suppression to render visual motion perceptually invisible during adaptation, ensuring that observers were completely unaware of the visual adapting motion and aware only of the motion direction of the sound they were hearing. We found a small but statistically significant congruence effect of sound on the adaptation strength produced by invisible adapting motion. After considering alternative explanations for this finding, we conclude that auditory motion can impact the strength of visual processing produced by translational visual motion even when that motion transpires outside of awareness.
Affiliation(s)
- Minsun Park
- School of Psychology, Korea University, 145, Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea
- Randolph Blake
- Department of Psychology, Vanderbilt University, PMB 407817 2301 Vanderbilt Place, Nashville, TN 37240-7817, United States
- Chai-Youn Kim
- School of Psychology, Korea University, 145, Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea
17
Hisaizumi M, Tantam D. Enhanced sensitivity to pitch perception and its possible relation to language acquisition in autism. AUTISM & DEVELOPMENTAL LANGUAGE IMPAIRMENTS 2024; 9:23969415241248618. [PMID: 38817731 PMCID: PMC11138189 DOI: 10.1177/23969415241248618] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/01/2024]
Abstract
Background and aims Fascinations for or aversions to particular sounds are a familiar feature of autism, as is an ability to reproduce another person's utterances, precisely copying the other person's prosody as well as their words. Such observations seem to indicate not only that autistic people can pay close attention to what they hear, but also that they have the ability to perceive the finer details of auditory stimuli. This is consistent with the previously reported consensus that absolute pitch is more common in autistic individuals than in neurotypicals. We take this to suggest that autistic perception is attuned to fine detail. It is important to establish whether or not this is so, as autism is often presented as a deficit rather than a difference. We therefore undertook a narrative literature review of studies of auditory perception in autistic and nonautistic individuals, focussing on any differences in processing linguistic and nonlinguistic sounds. Main contributions We find persuasive evidence that nonlinguistic auditory perception in autistic children differs from that of nonautistic children. This is supported by the additional finding of a higher prevalence of absolute pitch and enhanced pitch-discriminating abilities in autistic children compared to neurotypical children. Such abilities appear to stem from atypical perception, which is biased toward local-level information necessary for processing pitch and other prosodic features. Enhanced pitch-discriminating abilities tend to be found in autistic individuals with a history of language delay, suggesting possible reciprocity. Research on various aspects of language development in autism also supports the hypothesis that atypical pitch perception may account for observed differences in language development in autism.
Conclusions The results of our review of previously published studies are consistent with the hypothesis that auditory perception, and particularly pitch perception, in autism is different from the norm but not always impaired. Detail-oriented pitch perception may be an advantage given the right environment. We speculate that unusually heightened sensitivity to pitch differences may come at the cost of the normal development of the perception of the sounds that contribute most to early language development. Implications The acquisition of speech and language may be a process that normally involves an enhanced perception of speech sounds at the expense of the processing of nonlinguistic sounds, but autistic children may not give speech sounds this same priority.
Affiliation(s)
- Digby Tantam
- Middlesex University, Existential Academy, London, UK
18
Todd JT, Bahrick LE. Individual Differences in Multisensory Attention Skills in Children with Autism Spectrum Disorder Predict Language and Symptom Severity: Evidence from the Multisensory Attention Assessment Protocol (MAAP). J Autism Dev Disord 2023; 53:4685-4710. [PMID: 36181648 PMCID: PMC10065966 DOI: 10.1007/s10803-022-05752-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/07/2022] [Indexed: 01/27/2023]
Abstract
Children with autism spectrum disorders (ASD) show atypical attention, particularly for social events. The new Multisensory Attention Assessment Protocol (MAAP) assesses fine-grained individual differences in attention disengagement, maintenance, and audiovisual matching for social and nonsocial events. We investigated the role of competing stimulation on attention, and relations with language and symptomatology in children with ASD and typical controls. Findings revealed: (1) the MAAP differentiated children with ASD from controls, (2) greater attention to social events predicted better language for both groups and lower symptom severity in children with ASD, (3) different pathways from attention to language were evident in children with ASD versus controls. The MAAP provides an ideal attention assessment for revealing diagnostic group differences and relations with outcomes.
Affiliation(s)
- James Torrence Todd
- Department of Psychology, Florida International University, 11200 South West 8 Street, Miami, FL, 33199, USA.
- Lorraine E Bahrick
- Department of Psychology, Florida International University, 11200 South West 8 Street, Miami, FL, 33199, USA
19
Bhaskaran AA, Gauvrit T, Vyas Y, Bony G, Ginger M, Frick A. Endogenous noise of neocortical neurons correlates with atypical sensory response variability in the Fmr1 -/y mouse model of autism. Nat Commun 2023; 14:7905. [PMID: 38036566 PMCID: PMC10689491 DOI: 10.1038/s41467-023-43777-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2023] [Accepted: 11/20/2023] [Indexed: 12/02/2023] Open
Abstract
Excessive neural variability of sensory responses is a hallmark of atypical sensory processing in autistic individuals, with cascading effects on other core autism symptoms but an unknown neurobiological substrate. Here, by recording neocortical single-neuron activity in a well-established mouse model of Fragile X syndrome and autism, we characterized atypical sensory processing and probed the role of endogenous noise sources in exaggerated response variability in males. The analysis of sensory stimulus evoked activity and spontaneous dynamics, as well as neuronal features, reveals a complex cellular and network phenotype. Neocortical sensory information processing is more variable and temporally imprecise. Increased trial-by-trial and inter-neuronal response variability is strongly related to key endogenous noise features, and may give rise to behavioural variability in sensory responsiveness in autism. We provide a novel preclinical framework for understanding the sources of endogenous noise and its contribution to core autism symptoms, and for testing the functional consequences of mechanism-based manipulation of noise.
Affiliation(s)
- Arjun A Bhaskaran
- INSERM, U1215 Neurocentre Magendie, 33077, Bordeaux, France
- University of Bordeaux, 33000, Bordeaux, France
- Department of Psychiatry, Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, BC, Canada
- Théo Gauvrit
- INSERM, U1215 Neurocentre Magendie, 33077, Bordeaux, France
- University of Bordeaux, 33000, Bordeaux, France
- Yukti Vyas
- INSERM, U1215 Neurocentre Magendie, 33077, Bordeaux, France
- University of Bordeaux, 33000, Bordeaux, France
- Guillaume Bony
- INSERM, U1215 Neurocentre Magendie, 33077, Bordeaux, France
- University of Bordeaux, 33000, Bordeaux, France
- Melanie Ginger
- INSERM, U1215 Neurocentre Magendie, 33077, Bordeaux, France
- University of Bordeaux, 33000, Bordeaux, France
- Andreas Frick
- INSERM, U1215 Neurocentre Magendie, 33077, Bordeaux, France.
- University of Bordeaux, 33000, Bordeaux, France.
20
Feldman JI, Dunham K, DiCarlo GE, Cassidy M, Liu Y, Suzman E, Williams ZJ, Pulliam G, Kaiser S, Wallace MT, Woynaroski TG. A Randomized Controlled Trial for Audiovisual Multisensory Perception in Autistic Youth. J Autism Dev Disord 2023; 53:4318-4335. [PMID: 36028729 PMCID: PMC9417081 DOI: 10.1007/s10803-022-05709-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/01/2022] [Indexed: 11/24/2022]
Abstract
Differences in audiovisual integration are commonly observed in autism. Temporal binding windows (TBWs) of audiovisual speech can be trained (i.e., narrowed) in non-autistic adults; this study evaluated a computer-based perceptual training in autistic youth and assessed whether treatment outcomes varied according to individual characteristics. Thirty autistic youth aged 8-21 were randomly assigned to a brief perceptual training (n = 15) or a control condition (n = 15). At post-test, the perceptual training group did not differ from the control group, on average, on TBWs for trained and untrained stimuli or on perception of the McGurk illusion. The training benefited youth with higher language and nonverbal IQ scores, but it widened TBWs in youth with co-occurring cognitive and language impairments.
Affiliation(s)
- Jacob I Feldman
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, MCE 8310 South Tower, 1215 21st Avenue South, Nashville, TN, 37232, USA.
- Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA.
- Kacie Dunham
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Gabriella E DiCarlo
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Mass General Brigham Neurology Residency Program, Harvard Medical School, Boston, MA, USA
- Medical Scientist Training Program, Vanderbilt University, Nashville, TN, USA
- Margaret Cassidy
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- National Institutes of Health, Bethesda, MD, USA
- Yupeng Liu
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Washington University School of Medicine, Washington University in St. Louis, St. Louis, MO, USA
- Evan Suzman
- Master's Program in Biomedical Science, Vanderbilt University, Nashville, TN, USA
- Southwestern School of Medicine, University of Texas, Dallas, TX, USA
- Zachary J Williams
- Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Medical Scientist Training Program, Vanderbilt University, Nashville, TN, USA
- Grace Pulliam
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Sophia Kaiser
- Cognitive Studies Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Mark T Wallace
- Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Pharmacology, Vanderbilt University, Nashville, TN, USA
- Tiffany G Woynaroski
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, MCE 8310 South Tower, 1215 21st Avenue South, Nashville, TN, 37232, USA
- Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
21
Ghaneirad E, Borgolte A, Sinke C, Čuš A, Bleich S, Szycik GR. The effect of multisensory semantic congruency on unisensory object recognition in schizophrenia. Front Psychiatry 2023; 14:1246879. [PMID: 38025441 PMCID: PMC10646423 DOI: 10.3389/fpsyt.2023.1246879] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/24/2023] [Accepted: 10/16/2023] [Indexed: 12/01/2023] Open
Abstract
Multisensory, as opposed to unisensory, processing of stimuli has been found to enhance the performance (e.g., reaction time, accuracy, and discrimination) of healthy individuals across various tasks. However, this enhancement is not as pronounced in patients with schizophrenia (SZ), indicating impaired multisensory integration (MSI) in these individuals. To the best of our knowledge, no study has yet investigated the impact of MSI deficits in the context of working memory, a domain highly reliant on multisensory processing and substantially impaired in schizophrenia. To address this research gap, we employed two adapted versions of the continuous object recognition task to investigate the effect of single-trial multisensory encoding on subsequent object recognition in 21 schizophrenia patients and 21 healthy controls (HC). Participants were tasked with discriminating between initial and repeated presentations. For the initial presentations, half of the stimuli were audiovisual pairings, while the other half were presented unimodally. The task-relevant stimuli were then presented a second time in a unisensory manner (either auditory stimuli in the auditory task or visual stimuli in the visual task). To explore the impact of semantic context on multisensory encoding, half of the audiovisual pairings were selected to be semantically congruent, while the remaining pairs were not semantically related to each other. Consistent with prior studies, our findings demonstrated that the impact of single-trial multisensory presentation during encoding remains discernible during subsequent object recognition. This influence could be distinguished based on the semantic congruity between the auditory and visual stimuli presented during encoding, and it was more robust in the auditory task. In that task, when congruent multisensory pairings were encoded, both participant groups demonstrated a multisensory facilitation effect.
This effect resulted in improved accuracy and reaction time. For incongruent audiovisual encoding, as expected, HC did not demonstrate an evident multisensory facilitation effect on memory performance. In contrast, patients with SZ exhibited an atypically accelerated reaction time during subsequent auditory object recognition. Based on the predictive coding model, we propose that these deviations indicate a reduced semantic modulatory effect and anomalous prediction-error signaling in SZ, particularly in the context of conflicting cross-modal sensory inputs.
Affiliation(s)
- Erfan Ghaneirad
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
- Anna Borgolte
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
- Christopher Sinke
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Division of Clinical Psychology and Sexual Medicine, Hannover Medical School, Hannover, Germany
- Anja Čuš
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
- Stefan Bleich
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
- Center for Systems Neuroscience, University of Veterinary Medicine, Hanover, Germany
- Gregor R. Szycik
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
22
Monti M, Molholm S, Cuppini C. Atypical development of causal inference in autism inferred through a neurocomputational model. Front Comput Neurosci 2023; 17:1258590. [PMID: 37927544 PMCID: PMC10620690 DOI: 10.3389/fncom.2023.1258590] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2023] [Accepted: 10/05/2023] [Indexed: 11/07/2023] Open
Abstract
In everyday life, the brain processes a multitude of stimuli from the surrounding environment, requiring the integration of information from different sensory modalities to form a coherent perception. This process, known as multisensory integration, enhances the brain's response to redundant congruent sensory cues. However, it is equally important for the brain to segregate sensory inputs from distinct events, to interact with and correctly perceive the multisensory environment. This problem, known as the causal inference problem, is closely related to multisensory integration. It is widely recognized that the ability to integrate information from different senses emerges during the developmental period, as a function of our experience with multisensory stimuli. Consequently, multisensory integrative abilities are altered in individuals who have atypical experiences with cross-modal cues, such as those on the autistic spectrum. However, no research has been conducted on the developmental trajectories of causal inference and its relationship with experience thus far. Here, we used a neurocomputational model to simulate and investigate the development of causal inference in both typically developing children and those on the autistic spectrum. Our results indicate that higher exposure to cross-modal cues accelerates the acquisition of causal inference abilities, and a minimum level of experience with multisensory stimuli is required to develop fully mature behavior. We then simulated the altered developmental trajectory of causal inference in individuals with autism by assuming reduced multisensory experience during training. The results suggest that causal inference reaches complete maturity much later in these individuals compared to neurotypical individuals.
Furthermore, we discuss the underlying neural mechanisms and network architecture involved in these processes, highlighting that the development of causal inference follows the evolution of the mechanisms subserving multisensory integration. Overall, this study provides a computational framework, unifying causal inference and multisensory integration, which allows us to suggest neural mechanisms and provide testable predictions about the development of such abilities in typically developing and autistic children.
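The causal inference computation whose development is modeled here can be illustrated with the standard Bayesian formulation for two Gaussian-noised cues (Körding et al.'s classic formulation); this is a generic sketch with illustrative parameter values, not the paper's neurocomputational network:

```python
from math import exp, pi, sqrt

def p_common(x_v, x_a, sig_v=2.0, sig_a=4.0, sig_p=10.0, prior=0.5):
    """Posterior probability that a visual cue x_v and an auditory cue x_a
    arise from a single cause. Gaussian likelihoods, zero-mean spatial prior;
    all parameter values are illustrative, not taken from the paper."""
    vv, aa, pp = sig_v**2, sig_a**2, sig_p**2
    # Likelihood under a common cause: one latent source s ~ N(0, pp),
    # integrated out analytically.
    var1 = vv * aa + vv * pp + aa * pp
    like1 = exp(-0.5 * ((x_v - x_a)**2 * pp + x_v**2 * aa + x_a**2 * vv) / var1) \
        / (2 * pi * sqrt(var1))
    # Likelihood under independent causes: two latent sources, one per modality.
    var2 = (vv + pp) * (aa + pp)
    like2 = exp(-0.5 * (x_v**2 / (vv + pp) + x_a**2 / (aa + pp))) \
        / (2 * pi * sqrt(var2))
    return prior * like1 / (prior * like1 + (1 - prior) * like2)

# Nearby cues favor a common cause; distant cues favor separate causes.
print(p_common(0.0, 1.0) > p_common(0.0, 15.0))  # True
```

In the developmental story above, what matures with multisensory experience is effectively the reliability of this comparison between the common-cause and independent-cause hypotheses.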
Collapse
Affiliation(s)
- Melissa Monti
- Department of Electrical, Electronic, and Information Engineering Guglielmo Marconi, University of Bologna, Bologna, Italy
- Sophie Molholm
- Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States
- Cristiano Cuppini
- Department of Electrical, Electronic, and Information Engineering Guglielmo Marconi, University of Bologna, Bologna, Italy
Collapse
23
Salinas E, Stanford TR. Conditional independence as a statistical assessment of evidence integration processes. bioRxiv 2023:2023.05.03.539321. [PMID: 37646001 PMCID: PMC10461915 DOI: 10.1101/2023.05.03.539321] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
Abstract
Intuitively, combining multiple sources of evidence should lead to more accurate decisions than considering single sources of evidence individually. In practice, however, the proper computation may be difficult, or may require additional data that are inaccessible. Here, based on the concept of conditional independence, we consider expressions that can serve either as recipes for integrating evidence based on limited data, or as statistical benchmarks for characterizing evidence integration processes. Consider three events, A, B, and C. We find that, if A and B are conditionally independent with respect to C, then the probability that C occurs given that both A and B are known, P(C|A,B), can be easily calculated without the need to measure the full three-way dependency between A, B, and C. This simplified approach can be used in two general ways: to generate predictions by combining multiple (conditionally independent) sources of evidence, or to test whether separate sources of evidence are functionally independent of each other. These applications are demonstrated with four computer-simulated examples, which include detecting a disease based on repeated diagnostic testing, inferring biological age based on multiple biomarkers of aging, discriminating two spatial locations based on multiple cue stimuli (multisensory integration), and examining how behavioral performance in a visual search task depends on selection histories. Besides providing a sound prescription for predicting outcomes, this methodology may be useful for analyzing experimental data of many types.
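For a binary event C, the simplified combination the abstract describes can be written down in a few lines; this is a minimal sketch of the recipe, with an invented repeated-testing example (the prevalence and per-test posteriors are illustrative numbers, not values from the preprint):

```python
def combine(p_c, p_c_given_a, p_c_given_b):
    """Combine two sources of evidence about a binary event C, assuming A and B
    are conditionally independent given C (and given not-C). Only the single-cue
    posteriors and the prior are needed, not the full three-way dependency."""
    num = p_c_given_a * p_c_given_b / p_c
    den = num + (1 - p_c_given_a) * (1 - p_c_given_b) / (1 - p_c)
    return num / den

# Hypothetical repeated diagnostic testing: 1% prevalence, and each positive
# test alone yields P(disease | positive) = 0.5. Two positives together:
print(round(combine(0.01, 0.5, 0.5), 3))  # 0.99
```

Two individually equivocal tests thus combine into near-certainty, because each one already represents a fifty-fold update on a 1% prior.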
Collapse
Affiliation(s)
- Emilio Salinas
- Department of Neurobiology & Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina, United States of America
- Terrence R Stanford
- Department of Neurobiology & Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina, United States of America
Collapse
24
Jiang Z, An X, Liu S, Yin E, Yan Y, Ming D. Neural oscillations reflect the individual differences in the temporal perception of audiovisual speech. Cereb Cortex 2023; 33:10575-10583. [PMID: 37727958 DOI: 10.1093/cercor/bhad304] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2023] [Revised: 08/01/2023] [Accepted: 08/02/2023] [Indexed: 09/21/2023] Open
Abstract
Multisensory integration occurs within a limited time interval between multimodal stimuli. Multisensory temporal perception varies widely among individuals and involves perceptual synchrony and temporal sensitivity processes. Previous studies explored the neural mechanisms of these individual differences with beep-flash stimuli, but not with speech. In this study, 28 subjects (16 male) performed an audiovisual speech (/ba/) simultaneity judgment task while their electroencephalogram was recorded. We examined the relationship between prestimulus neural oscillations (i.e. the pre-pronunciation movement-related oscillations) and temporal perception. Perceptual synchrony was quantified using the Point of Subjective Simultaneity and temporal sensitivity using the Temporal Binding Window. Our results revealed dissociated neural mechanisms for individual differences in the Temporal Binding Window and the Point of Subjective Simultaneity. Frontocentral delta power, reflecting top-down attention control, is positively related to the magnitude of individual auditory-leading Temporal Binding Windows (LTBWs), whereas parieto-occipital theta power, indexing bottom-up visual temporal attention specific to speech, is negatively associated with the magnitude of individual visual-leading Temporal Binding Windows (RTBWs). In addition, increased left frontal and bilateral temporoparietal occipital alpha power, reflecting general attentional states, is associated with increased Points of Subjective Simultaneity. Strengthening attention abilities might improve the audiovisual temporal perception of speech and further impact speech integration.
Collapse
Affiliation(s)
- Zeliang Jiang
- Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, China
- Xingwei An
- Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, China
- Shuang Liu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, China
- Erwei Yin
- Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, China
- Defense Innovation Institute, Academy of Military Sciences (AMS), 100071 Beijing, China
- Tianjin Artificial Intelligence Innovation Center (TAIIC), 300457 Tianjin, China
- Ye Yan
- Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, China
- Defense Innovation Institute, Academy of Military Sciences (AMS), 100071 Beijing, China
- Tianjin Artificial Intelligence Innovation Center (TAIIC), 300457 Tianjin, China
- Dong Ming
- Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, China
Collapse
25
Bruns P, Röder B. Development and experience-dependence of multisensory spatial processing. Trends Cogn Sci 2023; 27:961-973. [PMID: 37208286 DOI: 10.1016/j.tics.2023.04.012] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2022] [Revised: 04/24/2023] [Accepted: 04/25/2023] [Indexed: 05/21/2023]
Abstract
Multisensory spatial processes are fundamental for efficient interaction with the world. They include not only the integration of spatial cues across sensory modalities, but also the adjustment or recalibration of spatial representations to changing cue reliabilities, crossmodal correspondences, and causal structures. Yet how multisensory spatial functions emerge during ontogeny is poorly understood. New results suggest that temporal synchrony and enhanced multisensory associative learning capabilities first guide causal inference and initiate early coarse multisensory integration capabilities. These multisensory percepts are crucial for the alignment of spatial maps across sensory systems, and are used to derive more stable biases for adult crossmodal recalibration. The refinement of multisensory spatial integration with increasing age is further promoted by the inclusion of higher-order knowledge.
Collapse
Affiliation(s)
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany.
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
Collapse
26
Lanfranco RC, Chancel M, Ehrsson HH. Quantifying body ownership information processing and perceptual bias in the rubber hand illusion. Cognition 2023; 238:105491. [PMID: 37178590 DOI: 10.1016/j.cognition.2023.105491] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2023] [Revised: 05/02/2023] [Accepted: 05/04/2023] [Indexed: 05/15/2023]
Abstract
Bodily illusions have fascinated humankind for centuries, and researchers have studied them to learn about the perceptual and neural processes that underpin multisensory channels of bodily awareness. The influential rubber hand illusion (RHI) has been used to study changes in the sense of body ownership - that is, how a limb is perceived to belong to one's body, which is a fundamental building block in many theories of bodily awareness, self-consciousness, embodiment, and self-representation. However, the methods used to quantify perceptual changes in bodily illusions, including the RHI, have mainly relied on subjective questionnaires and rating scales, and the degree to which such illusory sensations depend on sensory information processing has been difficult to test directly. Here, we introduce a signal detection theory (SDT) framework to study the sense of body ownership in the RHI. We provide evidence that the illusion is associated with changes in body ownership sensitivity that depend on the information carried in the degree of asynchrony of correlated visual and tactile signals, as well as with perceptual bias and sensitivity that reflect the distance between the rubber hand and the participant's body. We found that the illusion's sensitivity to asynchrony is remarkably precise; even a 50 ms visuotactile delay significantly affected body ownership information processing. Our findings conclusively link changes in a complex bodily experience such as body ownership to basic sensory information processing and provide a proof of concept that SDT can be used to study bodily illusions.
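As a rough illustration of the SDT framework described here, sensitivity (d') and bias (criterion) can be computed from a yes/no detection table; this generic textbook sketch, with invented counts, is not the paper's actual analysis pipeline:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and criterion (c) from a yes/no detection table.
    A log-linear correction keeps z-scores finite at rates of 0 or 1."""
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(h) - z(f)
    criterion = -0.5 * (z(h) + z(f))
    return d_prime, criterion

# Hypothetical counts for one asynchrony condition (not data from the study):
d, c = sdt_measures(hits=40, misses=10, false_alarms=10, correct_rejections=40)
```

In the paper's logic, comparing d' across visuotactile asynchronies separates genuine changes in body ownership information processing from mere shifts in response bias, which is what the criterion term captures.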
Collapse
Affiliation(s)
- Renzo C Lanfranco
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden.
- Marie Chancel
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden; Psychology and Neurocognition Lab, Université Grenoble-Alpes, Grenoble, France
- H Henrik Ehrsson
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden.
Collapse
27
Dunham-Carr K, Feldman JI, Simon DM, Edmunds SR, Tu A, Kuang W, Conrad JG, Santapuram P, Wallace MT, Woynaroski TG. The Processing of Audiovisual Speech Is Linked with Vocabulary in Autistic and Nonautistic Children: An ERP Study. Brain Sci 2023; 13:1043. [PMID: 37508976 PMCID: PMC10377472 DOI: 10.3390/brainsci13071043] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2023] [Revised: 06/29/2023] [Accepted: 07/05/2023] [Indexed: 07/30/2023] Open
Abstract
Explaining individual differences in vocabulary in autism is critical, as understanding and using words to communicate are key predictors of long-term outcomes for autistic individuals. Differences in audiovisual speech processing may explain variability in vocabulary in autism. The efficiency of audiovisual speech processing can be indexed via amplitude suppression, wherein the amplitude of the event-related potential (ERP) is reduced at the P2 component in response to audiovisual speech compared to auditory-only speech. This study used electroencephalography (EEG) to measure P2 amplitudes in response to auditory-only and audiovisual speech and norm-referenced, standardized assessments to measure vocabulary in 25 autistic and 25 nonautistic children to determine whether amplitude suppression (a) differs or (b) explains variability in vocabulary in autistic and nonautistic children. A series of regression analyses evaluated associations between amplitude suppression and vocabulary scores. Both groups demonstrated P2 amplitude suppression, on average, in response to audiovisual speech relative to auditory-only speech. Between-group differences in mean amplitude suppression were nonsignificant. Individual differences in amplitude suppression were positively associated with expressive vocabulary through receptive vocabulary, as evidenced by a significant indirect effect observed across groups. The results suggest that efficiency of audiovisual speech processing may explain variance in vocabulary in autism.
Collapse
Affiliation(s)
- Kacie Dunham-Carr
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN 37232, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN 37232, USA
- Jacob I Feldman
- Frist Center for Autism and Innovation, Vanderbilt University, Nashville, TN 37232, USA
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37232, USA
- David M Simon
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN 37232, USA
- Sarah R Edmunds
- Department of Psychology, University of Washington, Seattle, WA 98195, USA
- Department of Psychology, University of South Carolina, Columbia, SC 29208, USA
- Department of Educational Studies, University of South Carolina, Columbia, SC 29208, USA
- Alexander Tu
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN 37232, USA
- Department of Otolaryngology and Communication Sciences, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Wayne Kuang
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN 37232, USA
- Department of Pediatrics, Los Angeles General Medical Center, Keck School of Medicine of University of Southern California, Los Angeles, CA 90033, USA
- Julie G Conrad
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN 37232, USA
- College of Medicine, University of Illinois Hospital, Chicago, IL 60612, USA
- Pooja Santapuram
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN 37232, USA
- Department of Anesthesiology, Columbia University Irving Medical Center, New York City, NY 10032, USA
- Mark T Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN 37232, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN 37232, USA
- Frist Center for Autism and Innovation, Vanderbilt University, Nashville, TN 37232, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN 37232, USA
- Department of Psychology, Vanderbilt University, Nashville, TN 37232, USA
- Department of Pharmacology, Vanderbilt University, Nashville, TN 37232, USA
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN 37232, USA
- Tiffany G Woynaroski
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN 37232, USA
- Frist Center for Autism and Innovation, Vanderbilt University, Nashville, TN 37232, USA
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37232, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN 37232, USA
- Department of Communication Sciences and Disorders, John A. Burns School of Medicine, University of Hawaii at Manoa, Honolulu, HI 96813, USA
Collapse
28
Bertaccini R, Ippolito G, Tarasi L, Zazio A, Stango A, Bortoletto M, Romei V. Rhythmic TMS as a Feasible Tool to Uncover the Oscillatory Signatures of Audiovisual Integration. Biomedicines 2023; 11:1746. [PMID: 37371840 DOI: 10.3390/biomedicines11061746] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2023] [Revised: 06/09/2023] [Accepted: 06/15/2023] [Indexed: 06/29/2023] Open
Abstract
Multisensory integration is quintessential to adaptive behavior, and clinical populations, most notably those reporting hallucinations, show significant impairments in this domain. Interestingly, altered cross-modal interactions have also been reported in healthy individuals when engaged in tasks such as the Sound-Induced Flash Illusion (SIFI). The temporal dynamics of the SIFI have recently been tied to the speed of occipital alpha rhythms (the individual alpha frequency, IAF), with faster oscillations entailing reduced temporal windows within which the illusion is experienced. However, entrainment-based protocols have not yet implemented rhythmic transcranial magnetic stimulation (rhTMS) to causally test this relationship, and it remains to be evaluated whether the acoustic and somatosensory sensations induced by rhTMS themselves interfere with the illusion. Here, we addressed these issues by asking 27 volunteers to perform a SIFI paradigm under Sham and active rhTMS protocols, delivered over the occipital pole at the IAF. Although TMS is known to modulate cortical excitability, results show that the SIFI occurred under both Sham and active rhTMS, with the illusory rate not differing significantly between baseline and stimulation conditions. This aligns with the discrete sampling hypothesis, according to which alpha amplitude modulation, known to reflect changes in cortical excitability, should not account for changes in the illusory rate. Moreover, these findings highlight the viability of rhTMS-based interventions as a means to probe the neuroelectric signatures of illusory and hallucinatory audiovisual experiences in healthy and neuropsychiatric populations.
Collapse
Affiliation(s)
- Riccardo Bertaccini
- Centro Studi e Ricerche in Neuroscienze Cognitive, Dipartimento di Psicologia, Alma Mater Studiorum-Università di Bologna, 47521 Cesena, Italy
- Neurophysiology Lab., IRCCS Istituto Centro San Giovanni di Dio Fatebenefratelli, 25125 Brescia, Italy
- Giuseppe Ippolito
- Centro Studi e Ricerche in Neuroscienze Cognitive, Dipartimento di Psicologia, Alma Mater Studiorum-Università di Bologna, 47521 Cesena, Italy
- Laboratory of Cognitive Neuroscience, Department of Languages and Literatures, Communication, Education and Society, University of Udine, 33100 Udine, Italy
- Luca Tarasi
- Centro Studi e Ricerche in Neuroscienze Cognitive, Dipartimento di Psicologia, Alma Mater Studiorum-Università di Bologna, 47521 Cesena, Italy
- Agnese Zazio
- Neurophysiology Lab., IRCCS Istituto Centro San Giovanni di Dio Fatebenefratelli, 25125 Brescia, Italy
- Antonietta Stango
- Neurophysiology Lab., IRCCS Istituto Centro San Giovanni di Dio Fatebenefratelli, 25125 Brescia, Italy
- Marta Bortoletto
- Neurophysiology Lab., IRCCS Istituto Centro San Giovanni di Dio Fatebenefratelli, 25125 Brescia, Italy
- Vincenzo Romei
- Centro Studi e Ricerche in Neuroscienze Cognitive, Dipartimento di Psicologia, Alma Mater Studiorum-Università di Bologna, 47521 Cesena, Italy
- Facultad de Lenguas y Educación, Universidad Antonio de Nebrija, 28015 Madrid, Spain
Collapse
29
Pulliam G, Feldman JI, Woynaroski TG. Audiovisual multisensory integration in individuals with reading and language impairments: A systematic review and meta-analysis. Neurosci Biobehav Rev 2023; 149:105130. [PMID: 36933815 PMCID: PMC10243286 DOI: 10.1016/j.neubiorev.2023.105130] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Revised: 03/09/2023] [Accepted: 03/10/2023] [Indexed: 03/18/2023]
Abstract
Differences in sensory function have been documented for a number of neurodevelopmental conditions, including reading and language impairments. Prior studies have measured audiovisual multisensory integration (i.e., the ability to combine inputs from the auditory and visual modalities) in these populations. The present study sought to systematically review and quantitatively synthesize the extant literature on audiovisual multisensory integration in individuals with reading and language impairments. A comprehensive search strategy yielded 56 reports, of which 38 were used to extract 109 group difference and 68 correlational effect sizes. There was an overall difference between individuals with reading and language impairments and comparison groups on audiovisual integration. There was a nonsignificant trend towards moderation by sample type (i.e., reading versus language), and evidence of publication/small-study bias for this model. Overall, there was a small but nonsignificant correlation between metrics of audiovisual integration and reading or language ability; this model was not moderated by sample or study characteristics, nor was there evidence of publication/small-study bias. Limitations and future directions for primary and meta-analytic research are discussed.
Collapse
Affiliation(s)
- Grace Pulliam
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S, MCE South Tower 8310, Nashville 37232, TN, USA
- Jacob I Feldman
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S, MCE South Tower 8310, Nashville 37232, TN, USA; Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA.
- Tiffany G Woynaroski
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S, MCE South Tower 8310, Nashville 37232, TN, USA; Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; John A. Burns School of Medicine, University of Hawaii, Manoa, HI, USA
Collapse
30
O'Dowd A, Hirst R, Setti A, Kenny R, Newell F. Longitudinal grip strength is associated with susceptibility to the Sound Induced Flash Illusion in older adults. Aging Brain 2023; 3:100076. [PMID: 37287584 PMCID: PMC10241972 DOI: 10.1016/j.nbas.2023.100076] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2023] [Revised: 04/20/2023] [Accepted: 05/03/2023] [Indexed: 06/09/2023] Open
Abstract
The precision of temporal multisensory integration is associated with specific aspects of physical functioning in ageing, including gait speed and incidents of falling. However, it is unknown if such an association exists between multisensory integration and grip strength, an important index of frailty and brain health and predictor of disease and mortality in older adults. Here, we investigated whether temporal multisensory integration is associated with longitudinal (eight-year) grip strength trajectories in a large sample of 2,061 older adults (mean age = 64.42 years, SD = 7.20; 52% female) drawn from The Irish Longitudinal Study on Ageing (TILDA). Grip strength (kg) for the dominant hand was assessed with a hand-held dynamometer across four testing waves. Longitudinal k-means clustering was applied to these data separately for sex (male, female) and age group (50-64, 65-74, 75+ years). At wave 3, older adults participated in the Sound Induced Flash Illusion (SIFI), a measure of the precision of temporal audio-visual integration, which included three audio-visual stimulus onset asynchronies (SOAs): 70, 150 and 230 ms. Results showed that older adults with a relatively lower (i.e., weaker) grip strength were more susceptible to the SIFI at the longer SOAs compared to those with a relatively higher (i.e., stronger) grip strength (p <.001). These novel findings suggest that older adults with relatively weaker grip strength exhibit an expanded temporal binding window for audio-visual events, possibly reflecting a reduction in the integrity of the central nervous system.
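The trajectory-clustering step described here can be sketched with plain Lloyd's k-means over per-participant trajectory vectors; the published analysis used a dedicated longitudinal k-means procedure applied separately by sex and age group, so treat this as a simplified stand-in with invented data:

```python
import random

def kmeans(trajectories, k, iters=100, seed=0):
    """Plain Lloyd's k-means over fixed-length trajectories (tuples of
    repeated measurements, e.g. grip strength in kg at four waves)."""
    rng = random.Random(seed)
    centers = rng.sample(trajectories, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        # Assign each trajectory to its nearest center (squared Euclidean).
        groups = [[] for _ in range(k)]
        for t in trajectories:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(t, centers[j])))
            groups[i].append(t)
        # Recompute centers as wave-by-wave means of each cluster.
        new = [tuple(sum(vals) / len(g) for vals in zip(*g)) if g else centers[i]
               for i, g in enumerate(groups)]
        if new == centers:
            break
        centers = new
    return centers, groups

# Two invented "weaker" and two "stronger" grip-strength trajectories (kg):
data = [(20, 19, 18, 17), (21, 20, 19, 18), (35, 34, 34, 33), (36, 35, 35, 34)]
centers, groups = kmeans(data, k=2)
```

Each resulting cluster corresponds to a longitudinal grip-strength profile (e.g. relatively weaker versus stronger), which is the grouping the study then related to SIFI susceptibility.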
Collapse
Affiliation(s)
- A. O'Dowd
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Ireland
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Ireland
- R.J. Hirst
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Ireland
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Ireland
- A. Setti
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Ireland
- School of Applied Psychology, University College Cork, Ireland
- R.A. Kenny
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Ireland
- Mercer Institute for Successful Ageing, St James's Hospital, Dublin, Ireland
- F.N. Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Ireland
Collapse
31
Kral A, Sharma A. Crossmodal plasticity in hearing loss. Trends Neurosci 2023; 46:377-393. [PMID: 36990952 PMCID: PMC10121905 DOI: 10.1016/j.tins.2023.02.004] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2022] [Revised: 01/27/2023] [Accepted: 02/21/2023] [Indexed: 03/29/2023]
Abstract
Crossmodal plasticity is a textbook example of the ability of the brain to reorganize based on use. We review evidence from the auditory system showing that such reorganization has significant limits, is dependent on pre-existing circuitry and top-down interactions, and that extensive reorganization is often absent. We argue that the evidence does not support the hypothesis that crossmodal reorganization is responsible for closing critical periods in deafness, and crossmodal plasticity instead represents a neuronal process that is dynamically adaptable. We evaluate the evidence for crossmodal changes in both developmental and adult-onset deafness, which start as early as mild-moderate hearing loss and show reversibility when hearing is restored. Finally, crossmodal plasticity does not appear to affect the neuronal preconditions for successful hearing restoration. Given its dynamic and versatile nature, we describe how this plasticity can be exploited for improving clinical outcomes after neurosensory restoration.
Collapse
Affiliation(s)
- Andrej Kral
- Institute of AudioNeuroTechnology and Department of Experimental Otology, Otolaryngology Clinics, Hannover Medical School, Hannover, Germany; Australian Hearing Hub, School of Medicine and Health Sciences, Macquarie University, Sydney, NSW, Australia
- Anu Sharma
- Department of Speech Language and Hearing Science, Center for Neuroscience, Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, USA.
Collapse
32
Du 杜彬 B, Yang 杨振 Z, Wang 王翠翠 C, Li 李媛媛 Y, Tao 陶沙 S. Short-term training helps second-language learners read like native readers: An ERP study. Brain Lang 2023; 239:105251. [PMID: 36931112 DOI: 10.1016/j.bandl.2023.105251] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/18/2022] [Revised: 03/04/2023] [Accepted: 03/07/2023] [Indexed: 05/10/2023]
Abstract
This randomized controlled trial examined what experience other than immersion may help adult learners read with native-like neural responses. We compared a group of 13 native Chinese speakers learning English who completed English letter-sound association training with another group of 12 who completed visual symbol-sound association training, and included a group of native English readers as the reference. The results showed that after three hours of training, all learners no longer showed the attenuated cross-modal mismatch negativity (MMN) to English letter-sound integration seen in the pretest. After six hours of training, the learners receiving English letter-sound association training showed enhanced cross-modal MMN and theta oscillations, as native English readers did. The enhanced neural responses were significantly correlated with better phonological awareness. Thus, with appropriately dosed training targeting critical second-language reading skills, adult learners can overcome the constraints of their native language background and learn to read with native-like neural responses.
Collapse
Affiliation(s)
- Bin Du 杜彬
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Zhen Yang 杨振
- Department of Psychology, Zhejiang Sci-Tech University, Jianggan District, Hangzhou, Zhejiang, China
- Cuicui Wang 王翠翠
- Zhejiang Philosophy and Social Science Laboratory for Research in Early Development and Childcare, Hangzhou Normal University, China; Centre for Cognition and Brain Disorders, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, China; Deqing Hospital of Hangzhou Normal University, China
- Yuanyuan Li 李媛媛
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Sha Tao 陶沙
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China.
Collapse
33
Wu T, Li S, Du D, Li R, Liu P, Yin Z, Zhang H, Qiao Y, Li A. Olfactory-auditory sensory integration in the lateral entorhinal cortex. Prog Neurobiol 2023; 221:102399. [PMID: 36581184 DOI: 10.1016/j.pneurobio.2022.102399] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2022] [Revised: 12/02/2022] [Accepted: 12/19/2022] [Indexed: 12/27/2022]
Abstract
Multisensory integration plays an important role in animal cognition. Although many studies have focused on visual-auditory integration, studies on olfactory-auditory integration are rare. Here, we investigated neural activity patterns and odor decoding in the lateral entorhinal cortex (LEC) under uni-sensory and multisensory stimuli in awake, head-fixed mice. Using specific retrograde tracing, we verified that the LEC receives direct inputs from the primary auditory cortex (AC) and the medial geniculate body (MGB). Strikingly, we found that mitral/tufted cells (M/Ts) in the olfactory bulb (OB) and neurons in the LEC respond to both olfactory and auditory stimuli. Sound decreased the neural responses evoked by odors in both the OB and LEC, for both excitatory and inhibitory responses. Interestingly, significant changes in odor decoding performance and modulation of odor-evoked local field potentials (LFPs) were observed only in the LEC. These data indicate that the LEC is a critical center for olfactory-auditory multisensory integration, with direct projections from both olfactory and auditory centers.
Collapse
Affiliation(s)
- Tingting Wu
- Jiangsu Key Laboratory of Brain Disease and Bioinformation, Research Center for Biochemistry and Molecular Biology, Xuzhou Medical University, Xuzhou 221004, China; Artificial Auditory Laboratory of Jiangsu Province, Xuzhou Medical University, Xuzhou 221004, China; Clinical Hearing Center, Department of Otorhinolaryngology - Head and Neck Surgery, Affiliated Hospital of Xuzhou Medical University, Xuzhou 221006, China; Department of Otolaryngology, Eye, Ear, Nose and Throat Hospital, Shanghai Key Clinical Disciplines of Otorhinolaryngology, Fudan University, Shanghai 200031, China
- Shan Li
- Jiangsu Key Laboratory of Brain Disease and Bioinformation, Research Center for Biochemistry and Molecular Biology, Xuzhou Medical University, Xuzhou 221004, China
- Deliang Du
- Jiangsu Key Laboratory of Brain Disease and Bioinformation, Research Center for Biochemistry and Molecular Biology, Xuzhou Medical University, Xuzhou 221004, China; Artificial Auditory Laboratory of Jiangsu Province, Xuzhou Medical University, Xuzhou 221004, China; Clinical Hearing Center, Department of Otorhinolaryngology - Head and Neck Surgery, Affiliated Hospital of Xuzhou Medical University, Xuzhou 221006, China
- Ruochen Li
- Jiangsu Key Laboratory of Brain Disease and Bioinformation, Research Center for Biochemistry and Molecular Biology, Xuzhou Medical University, Xuzhou 221004, China
- Penglai Liu
- Jiangsu Key Laboratory of Brain Disease and Bioinformation, Research Center for Biochemistry and Molecular Biology, Xuzhou Medical University, Xuzhou 221004, China
- Zhaoyang Yin
- Jiangsu Key Laboratory of Brain Disease and Bioinformation, Research Center for Biochemistry and Molecular Biology, Xuzhou Medical University, Xuzhou 221004, China
- Hongxing Zhang
- Jiangsu Province Key Laboratory of Anesthesiology, Xuzhou Medical University, Xuzhou 221004, China; Jiangsu Province Key Laboratory of Anesthesia and Analgesia Application Technology, Xuzhou Medical University, Xuzhou 221004, China; NMPA Key Laboratory for Research and Evaluation of Narcotic and Psychotropic Drugs, Xuzhou Medical University, Xuzhou 221004, China
- Yuehua Qiao
- Artificial Auditory Laboratory of Jiangsu Province, Xuzhou Medical University, Xuzhou 221004, China; Clinical Hearing Center, Department of Otorhinolaryngology - Head and Neck Surgery, Affiliated Hospital of Xuzhou Medical University, Xuzhou 221006, China.
- Anan Li
- Jiangsu Key Laboratory of Brain Disease and Bioinformation, Research Center for Biochemistry and Molecular Biology, Xuzhou Medical University, Xuzhou 221004, China.
34
Eisen-Enosh A, Farah N, Polat U, Mandel Y. Temporal synchronization elicits enhancement of binocular vision functions. iScience 2023; 26:105960. PMID: 36718367. PMCID: PMC9883208. DOI: 10.1016/j.isci.2023.105960.
Abstract
Integration of information over the CNS is an important neural process that affects our ability to perceive and react to the environment. The visual system is required to continuously integrate information arriving from two different sources (the eyes) to create a coherent percept with high spatiotemporal precision. Although this neural integration of information is assumed to be critical for visual performance, it can be impaired under some pathological or developmental conditions. Here we took advantage of a unique developmental condition, amblyopia ("lazy eye"), which is characterized by an impaired temporal synchronization between the two eyes, to meticulously study the effect of synchronization on the integration of binocular visual information. We measured the eyes' asynchrony and compensated for it (with millisecond temporal resolution) by providing time-shifted stimuli to the eyes. We found that the re-synchronization of the ocular input elicited a significant improvement in visual functions, and binocular functions, such as binocular summation and stereopsis, were regained. This phenomenon was also evident in neurophysiological measures. Our results can shed light on other neural processing aspects and might also have translational relevance for the field of training, rehabilitation, and perceptual learning.
Affiliation(s)
- Auria Eisen-Enosh
- School of Optometry and Vision Science, Bar-Ilan University, Ramat-Gan, Israel
- Nairouz Farah
- School of Optometry and Vision Science, Bar-Ilan University, Ramat-Gan, Israel
- Uri Polat
- School of Optometry and Vision Science, Bar-Ilan University, Ramat-Gan, Israel
- Yossi Mandel
- School of Optometry and Vision Science, Bar-Ilan University, Ramat-Gan, Israel; Institute for Nanotechnology and Advanced Materials (BINA), Bar-Ilan University, Ramat Gan, Israel; The Leslie and Susan Gonda (Goldschmied) Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, Israel; Corresponding author
35
von Eiff CI, Frühholz S, Korth D, Guntinas-Lichius O, Schweinberger SR. Crossmodal benefits to vocal emotion perception in cochlear implant users. iScience 2022; 25:105711. PMID: 36578321. PMCID: PMC9791346. DOI: 10.1016/j.isci.2022.105711.
Abstract
Speech comprehension counts as a benchmark outcome of cochlear implants (CIs)-disregarding the communicative importance of efficient integration of audiovisual (AV) socio-emotional information. We investigated effects of time-synchronized facial information on vocal emotion recognition (VER). In Experiment 1, 26 CI users and normal-hearing (NH) individuals classified emotions for auditory-only, AV congruent, or AV incongruent utterances. In Experiment 2, we compared crossmodal effects between groups with adaptive testing, calibrating auditory difficulty via voice morphs from emotional caricatures to anti-caricatures. CI users performed lower than NH individuals, and VER was correlated with life quality. Importantly, they showed larger benefits to VER with congruent facial emotional information even at equal auditory-only performance levels, suggesting that their larger crossmodal benefits result from deafness-related compensation rather than degraded acoustic representations. Crucially, vocal caricatures enhanced CI users' VER. Findings advocate AV stimuli during CI rehabilitation and suggest perspectives of caricaturing for both perceptual trainings and sound processor technology.
Affiliation(s)
- Celina Isabelle von Eiff
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany; Voice Research Unit, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany; DFG SPP 2392 Visual Communication (ViCom), Frankfurt am Main, Germany; Corresponding author
- Sascha Frühholz
- Department of Psychology (Cognitive and Affective Neuroscience), Faculty of Arts and Social Sciences, University of Zurich, 8050 Zurich, Switzerland; Department of Psychology, University of Oslo, 0373 Oslo, Norway
- Daniela Korth
- Department of Otorhinolaryngology, Jena University Hospital, 07747 Jena, Germany
- Stefan Robert Schweinberger
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany; Voice Research Unit, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany; DFG SPP 2392 Visual Communication (ViCom), Frankfurt am Main, Germany
36
Kearney BE, Lanius RA. The brain-body disconnect: A somatic sensory basis for trauma-related disorders. Front Neurosci 2022; 16:1015749. PMID: 36478879. PMCID: PMC9720153. DOI: 10.3389/fnins.2022.1015749.
Abstract
Although the manifestation of trauma in the body is a phenomenon well-endorsed by clinicians and traumatized individuals, the neurobiological underpinnings of this manifestation remain unclear. The notion of somatic sensory processing, which encompasses vestibular and somatosensory processing and relates to the sensory systems concerned with how the physical body exists in and relates to physical space, is introduced as a major contributor to overall regulatory, social-emotional, and self-referential functioning. From a phylogenetically and ontogenetically informed perspective, trauma-related symptomology is conceptualized to be grounded in brainstem-level somatic sensory processing dysfunction and its cascading influences on physiological arousal modulation, affect regulation, and higher-order capacities. Lastly, we introduce a novel hierarchical model bridging somatic sensory processes with limbic and neocortical mechanisms regulating an individual's emotional experience and sense of a relational, agentive self. This model provides a working framework for the neurobiologically informed assessment and treatment of trauma-related conditions from a somatic sensory processing perspective.
Affiliation(s)
- Breanne E. Kearney
- Department of Neuroscience, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Ruth A. Lanius
- Department of Neuroscience, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Department of Psychiatry, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
37
Pei C, Qiu Y, Li F, Huang X, Si Y, Li Y, Zhang X, Chen C, Liu Q, Cao Z, Ding N, Gao S, Alho K, Yao D, Xu P. The different brain areas occupied for integrating information of hierarchical linguistic units: a study based on EEG and TMS. Cereb Cortex 2022; 33:4740-4751. PMID: 36178127. DOI: 10.1093/cercor/bhac376.
Abstract
Human language units are hierarchical, and reading acquisition involves integrating multisensory information (typically from auditory and visual modalities) to access meaning. However, it is unclear how the brain processes and integrates language information at different linguistic units (words, phrases, and sentences) provided simultaneously in auditory and visual modalities. To address the issue, we presented participants with sequences of short Chinese sentences through auditory, visual, or combined audio-visual modalities while electroencephalographic responses were recorded. With a frequency tagging approach, we analyzed the neural representations of basic linguistic units (i.e. characters/monosyllabic words) and higher-level linguistic structures (i.e. phrases and sentences) across the 3 modalities separately. We found that audio-visual integration occurs in all linguistic units, and the brain areas involved in the integration varied across different linguistic levels. In particular, the integration of sentences activated the local left prefrontal area. Therefore, we used continuous theta-burst stimulation to verify that the left prefrontal cortex plays a vital role in the audio-visual integration of sentence information. Our findings suggest the advantage of bimodal language comprehension at hierarchical stages in language-related information processing and provide evidence for the causal role of the left prefrontal regions in processing information of audio-visual sentences.
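The frequency-tagging logic described in this abstract can be illustrated in a few lines: if syllables, phrases, and sentences recur at fixed rates, the neural response should show spectral peaks at exactly those rates. The rates, duration, and noise level below are invented for illustration and are not the study's parameters:

```python
import numpy as np

# Minimal sketch of frequency tagging: linguistic units presented at fixed
# rates (here 4 Hz syllables grouped into 2 Hz phrases and 1 Hz sentences)
# should produce spectral peaks at those rates in the recorded response.
fs = 100                      # sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)  # 20 s of simulated response
rng = np.random.default_rng(0)
signal = (1.0 * np.sin(2 * np.pi * 4 * t)    # syllable-rate component
          + 0.6 * np.sin(2 * np.pi * 2 * t)  # phrase-rate component
          + 0.4 * np.sin(2 * np.pi * 1 * t)  # sentence-rate component
          + rng.normal(0, 0.5, t.size))      # background noise

spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def peak_snr(f_target, half_bw=0.5):
    """Amplitude at the tagged frequency relative to neighboring bins."""
    target = spectrum[np.argmin(np.abs(freqs - f_target))]
    neighbors = spectrum[(np.abs(freqs - f_target) > 0.1)
                         & (np.abs(freqs - f_target) < half_bw)]
    return target / neighbors.mean()

for f in (1, 2, 4):
    print(f"{f} Hz tag SNR: {peak_snr(f):.1f}")
```

Comparing such peak amplitudes across auditory, visual, and audio-visual conditions (and across sensors) is the kind of contrast that localizes integration to different linguistic levels.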
Affiliation(s)
- Changfu Pei
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Yuan Qiu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Fali Li
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China; Research Unit of Neuroscience, Chinese Academy of Medical Science, 2019RU035, Chengdu, China
- Xunan Huang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, Sichuan, 611731, China
- Yajing Si
- School of Psychology, Xinxiang Medical University, Xinxiang, 453003, China
- Yuqin Li
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Xiabing Zhang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Chunli Chen
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Qiang Liu
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, Sichuan, 610066, China
- Zehong Cao
- STEM, Mawson Lakes Campus, University of South Australia, Adelaide, SA 5095, Australia
- Nai Ding
- College of Biomedical Engineering and Instrument Sciences, Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, 310007, China
- Shan Gao
- School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, Sichuan, 611731, China
- Kimmo Alho
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, FI 00014, Finland
- Dezhong Yao
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China; Research Unit of Neuroscience, Chinese Academy of Medical Science, 2019RU035, Chengdu, China
- Peng Xu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China; Research Unit of Neuroscience, Chinese Academy of Medical Science, 2019RU035, Chengdu, China; Radiation Oncology Key Laboratory of Sichuan Province, Chengdu, 610041, China
38
Ren Q, Marshall AC, Kaiser J, Schütz-Bosbach S. Multisensory Integration of Anticipated Cardiac Signals with Visual Targets Affects Their Detection among Multiple Visual Stimuli. Neuroimage 2022; 262:119549. DOI: 10.1016/j.neuroimage.2022.119549.
39
Noel JP, Paredes R, Terrebonne E, Feldman JI, Woynaroski T, Cascio CJ, Seriès P, Wallace MT. Inflexible Updating of the Self-Other Divide During a Social Context in Autism: Psychophysical, Electrophysiological, and Neural Network Modeling Evidence. Biol Psychiatry Cogn Neurosci Neuroimaging 2022; 7:756-764. PMID: 33845169. PMCID: PMC8521572. DOI: 10.1016/j.bpsc.2021.03.013.
Abstract
BACKGROUND Autism spectrum disorder (ASD) affects many aspects of life, from social interactions to (multi)sensory processing. Similarly, the condition expresses at a variety of levels of description, from genetics to neural circuits and interpersonal behavior. We attempt to bridge between domains and levels of description by detailing the behavioral, electrophysiological, and putative neural network basis of peripersonal space (PPS) updating in ASD during a social context, given that the encoding of this space relies on appropriate multisensory integration, is malleable by social context, and is thought to delineate the boundary between the self and others. METHODS Fifty (20 male/30 female) young adults, either diagnosed with ASD or age- and sex-matched individuals, took part in a visuotactile reaction time task indexing PPS, while high-density electroencephalography was continuously recorded. Neural network modeling was performed in silico. RESULTS Multisensory psychophysics demonstrates that while PPS in neurotypical individuals shrinks in the presence of others-as to "give space"-this does not occur in ASD. Likewise, electroencephalography recordings suggest that multisensory integration is altered by social context in neurotypical individuals but not in individuals with ASD. Finally, a biologically plausible neural network model shows, as a proof of principle, that PPS updating may be inflexible in ASD owing to the altered excitatory/inhibitory balance that characterizes neural circuits in animal models of ASD. CONCLUSIONS Findings are conceptually in line with recent statistical inference accounts, suggesting diminished flexibility in ASD, and further these observations by suggesting within an example relevant for social cognition that such inflexibility may be due to excitatory/inhibitory imbalances.
Affiliation(s)
- Jean-Paul Noel
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee; Center for Neural Science, New York University, New York, New York.
- Renato Paredes
- Institute for Adaptive and Neural Computation, University of Edinburgh, Edinburgh, United Kingdom
- Emily Terrebonne
- Undergraduate Neuroscience Program, Vanderbilt University, Nashville, Tennessee; School of Medicine and Health Sciences, George Washington University, Washington, District of Columbia
- Jacob I Feldman
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee; Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
- Tiffany Woynaroski
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee; Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
- Carissa J Cascio
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee; Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
- Peggy Seriès
- Institute for Adaptive and Neural Computation, University of Edinburgh, Edinburgh, United Kingdom
- Mark T Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee; Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee; Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
40
Are auditory cues special? Evidence from cross-modal distractor-induced blindness. Atten Percept Psychophys 2022; 85:889-904. PMID: 35902451. PMCID: PMC10066119. DOI: 10.3758/s13414-022-02540-0.
Abstract
A target that shares features with preceding distractor stimuli is less likely to be detected due to a distractor-driven activation of a negative attentional set. This transient impairment in perceiving the target (distractor-induced blindness/deafness) can be found within vision and audition. Recently, the phenomenon was observed in a cross-modal setting involving an auditory target and additional task-relevant visual information (cross-modal distractor-induced deafness). In the current study, consisting of three behavioral experiments, a visual target, indicated by an auditory cue, had to be detected despite the presence of visual distractors. Multiple distractors consistently led to reduced target detection if cue and target appeared in close temporal proximity, confirming cross-modal distractor-induced blindness. However, the effect on target detection was reduced compared to the effect of cross-modal distractor-induced deafness previously observed for reversed modalities. The physical features defining cue and target could not account for the diminished distractor effect in the current cross-modal task. Instead, this finding may be attributed to the auditory cue acting as an especially efficient release signal of the distractor-induced inhibition. Additionally, a multisensory enhancement of visual target detection by the concurrent auditory signal might have contributed to the reduced distractor effect.
41
The multisensory cocktail party problem in children: Synchrony-based segregation of multiple talking faces improves in early childhood. Cognition 2022; 228:105226. PMID: 35882100. DOI: 10.1016/j.cognition.2022.105226.
Abstract
Extraction of meaningful information from multiple talkers relies on perceptual segregation. The temporal synchrony statistics inherent in everyday audiovisual (AV) speech offer a powerful basis for perceptual segregation. We investigated the developmental emergence of synchrony-based perceptual segregation of multiple talkers in 3-7-year-old children. Children either saw four identical or four different faces articulating temporally jittered versions of the same utterance and heard the audible version of the same utterance either synchronized with one of the talkers or desynchronized with all of them. Eye tracking revealed that selective attention to the temporally synchronized talking face increased while attention to the desynchronized faces decreased with age and that attention to the talkers' mouth primarily drove responsiveness. These findings demonstrate that the temporal synchrony statistics inherent in fluent AV speech assume an increasingly greater role in perceptual segregation of the multisensory clutter created by multiple talking faces in early childhood.
42
Zhou W, Tian W, Xia J, Li Y, Li X, Yao T, Bi J, Zhu Z. Alterations in degree centrality and cognitive function in breast cancer patients after chemotherapy. Brain Imaging Behav 2022; 16:2248-2257. PMID: 35689165. DOI: 10.1007/s11682-022-00695-w.
Abstract
The goal of this study was to determine the presence or absence of persistent functional impairments in specific brain regions in breast cancer patients during the recovery period after chemotherapy. We calculated degree centrality (DC) and explored the correlation between brain changes and cognitive scores in 29 female patients with breast cancer who had completed chemotherapy within 1-6 years (C + group) and in 28 age-matched patients with breast cancer who did not receive chemotherapy (C- group). All patients underwent rs-fMRI and cognitive testing. Differences in brain functional activity were explored using DC parameters. Correlations between brain features and cognitive scores were analyzed via correlation analysis. Compared with the C- group, the C + group obtained significantly lower motor and cognitive subscores on the Fatigue Scale for Motor and Cognitive Functions and four subscale scores of the Functional Assessment of Cancer Therapy-Cognitive Function (P < 0.05). Furthermore, the C + group exhibited a significantly higher DC z-score (zDC) in the right superior temporal gyrus and left postcentral gyrus (P < 0.01, FWE-corrected), and a lower zDC in the left caudate nucleus (P < 0.01, FWE-corrected). We found a positive correlation between digit symbol test (DST) scores and zDC values in the right superior temporal gyrus (r = 0.709, P < 0.001), and a negative correlation between DST scores and zDC values in the right angular gyrus (r = -0.784, P < 0.001) and left superior parietal gyrus (r = -0.739, P < 0.001). Chemotherapy can cause abnormal brain activity and cognitive decline in patients with breast cancer, and these effects are likely to persist. DC can be used as an imaging marker for chemotherapy-related cognitive impairment after chemotherapy in breast cancer patients.
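As a rough sketch of the degree-centrality (DC) measure this study relies on: each region's DC is the number of other regions whose time courses correlate with it above some threshold, z-scored across regions to give the zDC values reported. The threshold and the synthetic data below are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np

# Degree centrality (DC) from a functional connectivity matrix:
# count suprathreshold correlations per region, then z-score ("zDC").
# Data and threshold are synthetic/illustrative.
rng = np.random.default_rng(1)
ts = rng.normal(size=(200, 50))        # 200 timepoints x 50 regions
corr = np.corrcoef(ts, rowvar=False)   # 50 x 50 correlation matrix
np.fill_diagonal(corr, 0)              # ignore self-correlation

dc = (corr > 0.1).sum(axis=0)          # binary degree per region
zdc = (dc - dc.mean()) / dc.std()      # z-scored DC across regions
print(zdc.shape)                       # one zDC value per region
```

Group comparisons of zDC maps (here, C+ vs. C- patients) and their correlations with cognitive scores are then computed region by region, with correction for multiple comparisons.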
Affiliation(s)
- Wensu Zhou
- Graduate School of Dalian Medical University, 116044, Dalian, China
- Weizhong Tian
- Department of Radiology, Taizhou People's Hospital, 225300, Taizhou, Jiangsu, China.
- Jianguo Xia
- Department of Radiology, Taizhou People's Hospital, 225300, Taizhou, Jiangsu, China.
- Yuan Li
- Department of Radiology, Taizhou People's Hospital, 225300, Taizhou, Jiangsu, China
- Xiaolu Li
- Graduate School of Dalian Medical University, 116044, Dalian, China
- Tianyi Yao
- Department of Breast and Thyroid Surgery, Taizhou People's Hospital, 225300, Taizhou, Jiangsu, China
- Jingcheng Bi
- Department of Breast and Thyroid Surgery, Taizhou People's Hospital, 225300, Taizhou, Jiangsu, China
- Zhengcai Zhu
- Department of Breast and Thyroid Surgery, Taizhou People's Hospital, 225300, Taizhou, Jiangsu, China
43
Bowsher-Murray C, Gerson S, von dem Hagen E, Jones CRG. The Components of Interpersonal Synchrony in the Typical Population and in Autism: A Conceptual Analysis. Front Psychol 2022; 13:897015. PMID: 35734455. PMCID: PMC9208202. DOI: 10.3389/fpsyg.2022.897015.
Abstract
Interpersonal synchrony - the tendency for social partners to temporally co-ordinate their behaviour when interacting - is a ubiquitous feature of social interactions. Synchronous interactions play a key role in development, and promote social bonding and a range of pro-social behavioural outcomes across the lifespan. The process of achieving and maintaining interpersonal synchrony is highly complex, with inputs required from across perceptual, temporal, motor, and socio-cognitive domains. In this conceptual analysis, we synthesise evidence from across these domains to establish the key components underpinning successful non-verbal interpersonal synchrony, how such processes interact, and factors that may moderate their operation. We also consider emerging evidence that interpersonal synchrony is reduced in autistic populations. We use our account of the components contributing to interpersonal synchrony in the typical population to identify potential points of divergence in interpersonal synchrony in autism. The relationship between interpersonal synchrony and broader aspects of social communication in autism are also considered, together with implications for future research.
Affiliation(s)
- Claire Bowsher-Murray
- Wales Autism Research Centre, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Cardiff University Centre for Human Developmental Science, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Sarah Gerson
- Cardiff University Centre for Human Developmental Science, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Elisabeth von dem Hagen
- Wales Autism Research Centre, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Cardiff University Centre for Human Developmental Science, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Cardiff University Brain Imaging Research Centre, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Catherine R. G. Jones
- Wales Autism Research Centre, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Cardiff University Centre for Human Developmental Science, School of Psychology, Cardiff University, Cardiff, United Kingdom
44
Noel JP, Shivkumar S, Dokka K, Haefner RM, Angelaki DE. Aberrant causal inference and presence of a compensatory mechanism in autism spectrum disorder. eLife 2022; 11:71866. PMID: 35579424. PMCID: PMC9170250. DOI: 10.7554/elife.71866.
Abstract
Autism spectrum disorder (ASD) is characterized by a panoply of social, communicative, and sensory anomalies. As such, a central goal of computational psychiatry is to ascribe the heterogenous phenotypes observed in ASD to a limited set of canonical computations that may have gone awry in the disorder. Here, we posit causal inference - the process of inferring a causal structure linking sensory signals to hidden world causes - as one such computation. We show that audio-visual integration is intact in ASD and in line with optimal models of cue combination, yet multisensory behavior is anomalous in ASD because this group operates under an internal model favoring integration (vs. segregation). Paradoxically, during explicit reports of common cause across spatial or temporal disparities, individuals with ASD were less and not more likely to report common cause, particularly at small cue disparities. Formal model fitting revealed differences in both the prior probability for common cause (p-common) and choice biases, which are dissociable in implicit but not explicit causal inference tasks. Together, this pattern of results suggests (i) different internal models in attributing world causes to sensory signals in ASD relative to neurotypical individuals given identical sensory cues, and (ii) the presence of an explicit compensatory mechanism in ASD, with these individuals putatively having learned to compensate for their bias to integrate in explicit reports.
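The causal-inference model class fitted here can be illustrated with a toy implementation in the spirit of the standard Bayesian formulation (Körding et al., 2007): the posterior probability of a common cause declines with cue disparity. The noise levels and the prior (p-common) below are made-up values for illustration, not fitted parameters from the study:

```python
import numpy as np

# Toy Bayesian causal inference for two cues xa (auditory) and xv (visual),
# with sensory noise sa, sv, a zero-mean spatial prior of width sp, and a
# prior probability of a common cause. All parameter values are illustrative.
def p_common(xa, xv, sa=1.0, sv=1.0, sp=10.0, prior=0.5):
    # Likelihood of the measurements under one common cause (cues fused)
    var1 = sa**2 * sv**2 + sa**2 * sp**2 + sv**2 * sp**2
    l1 = np.exp(-0.5 * ((xa - xv)**2 * sp**2 + xa**2 * sv**2 + xv**2 * sa**2)
                / var1) / (2 * np.pi * np.sqrt(var1))
    # Likelihood under two independent causes (cues segregated)
    va, vv = sa**2 + sp**2, sv**2 + sp**2
    l2 = (np.exp(-0.5 * xa**2 / va) / np.sqrt(2 * np.pi * va)
          * np.exp(-0.5 * xv**2 / vv) / np.sqrt(2 * np.pi * vv))
    # Posterior probability of a common cause
    return prior * l1 / (prior * l1 + (1 - prior) * l2)

for disparity in (0, 2, 8):
    print(f"disparity {disparity}: p(common) = {p_common(0.0, float(disparity)):.3f}")
```

Fitting the prior (p-common) and a report bias to implicit vs. explicit judgments is what lets the authors dissociate internal-model differences from compensatory response strategies.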
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York City, United States
- Kalpana Dokka
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
- Ralf M Haefner
- Brain and Cognitive Sciences, University of Rochester, Rochester, United States
- Dora E Angelaki
- Center for Neural Science, New York University, New York City, United States
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
45
Albini F, Pisoni A, Salvatore A, Calzolari E, Casati C, Marzoli SB, Falini A, Crespi SA, Godi C, Castellano A, Bolognini N, Vallar G. Aftereffects to Prism Exposure without Adaptation: A Single Case Study. Brain Sci 2022; 12:480. [PMID: 35448011 PMCID: PMC9028811 DOI: 10.3390/brainsci12040480] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Revised: 03/07/2022] [Accepted: 03/24/2022] [Indexed: 02/05/2023] Open
Abstract
Visuo-motor adaptation to optical prisms that displace the visual scene laterally (Prism Adaptation, PA) is a behavioral method used for the experimental investigation of visuomotor plasticity and, in clinical settings, for temporarily ameliorating and rehabilitating unilateral spatial neglect. This study investigated the building up of PA, and the presence of the typically occurring subsequent Aftereffects (AEs), in a brain-damaged patient (TMA) suffering from apperceptive agnosia and a right visual half-field defect, with bilateral atrophy of the parieto-occipital cortices, regions involved in PA and AEs. Base-Right prisms and control neutral lenses were used. PA was achieved by repeated pointing movements toward three types of stimuli: visual, auditory, and bimodal audio-visual. The presence and the magnitude of AEs were assessed by proprioceptive, visual, visuo-proprioceptive, and auditory-proprioceptive straight-ahead pointing tasks. The patient's brain connectivity was investigated by Diffusion Tensor Imaging (DTI). Unlike control participants, TMA did not show any adaptation to prism exposure, but her AEs were largely preserved. These findings indicate that AEs may occur even in the absence of PA, as indexed by a reduction of the pointing error, revealing a dissociation between the classical measures of PA and AEs. In the PA process, error reduction and its feedback may thus be less central to the building up of AEs than the sensorimotor pointing activity per se.
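The AE measure used here reduces to a shift in straight-ahead pointing after prism removal relative to baseline; a minimal sketch (function name and sample values are illustrative, not from the study):

```python
def aftereffect(pre_deg, post_deg):
    """Prism aftereffect: mean straight-ahead pointing position (degrees)
    after prism exposure minus the pre-exposure mean. With Base-Right
    prisms, a negative (leftward) shift is the expected aftereffect."""
    return sum(post_deg) / len(post_deg) - sum(pre_deg) / len(pre_deg)
```

The dissociation reported for TMA means this post-minus-pre shift was present even though her pointing error during exposure never decreased.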
Affiliation(s)
- Federica Albini
- Department of Psychology, University of Milano-Bicocca, 20126 Milano, Italy
- Alberto Pisoni
- Department of Psychology, University of Milano-Bicocca, 20126 Milano, Italy
- Anna Salvatore
- Department of Psychology, University of Milano-Bicocca, 20126 Milano, Italy
- Elena Calzolari
- Neuro-Otology Unit, Division of Brain Sciences, Imperial College London, London SW7 2AZ, UK
- Carlotta Casati
- Experimental Laboratory of Research in Clinical Neuropsychology, IRCCS Istituto Auxologico Italiano, 20155 Milano, Italy
- Department of Neurorehabilitation Sciences, IRCCS Istituto Auxologico Italiano, 20155 Milano, Italy
- Stefania Bianchi Marzoli
- Laboratory of Neuro-Ophthalmology and Ocular Electrophysiology, IRCCS Istituto Auxologico Italiano, 20155 Milano, Italy
- Andrea Falini
- Neuroradiology Unit and CERMAC, IRCCS San Raffaele Scientific Institute, Vita-Salute San Raffaele University, 20132 Milano, Italy
- Sofia Allegra Crespi
- Neuroradiology Unit and CERMAC, IRCCS San Raffaele Scientific Institute, Vita-Salute San Raffaele University, 20132 Milano, Italy
- Claudia Godi
- Neuroradiology Unit and CERMAC, IRCCS San Raffaele Scientific Institute, Vita-Salute San Raffaele University, 20132 Milano, Italy
- Antonella Castellano
- Neuroradiology Unit and CERMAC, IRCCS San Raffaele Scientific Institute, Vita-Salute San Raffaele University, 20132 Milano, Italy
- Nadia Bolognini
- Department of Psychology, University of Milano-Bicocca, 20126 Milano, Italy
- Experimental Laboratory of Research in Clinical Neuropsychology, IRCCS Istituto Auxologico Italiano, 20155 Milano, Italy
- Giuseppe Vallar
- Department of Psychology, University of Milano-Bicocca, 20126 Milano, Italy
- Experimental Laboratory of Research in Clinical Neuropsychology, IRCCS Istituto Auxologico Italiano, 20155 Milano, Italy
46
Johnston PR, Alain C, McIntosh AR. Individual Differences in Multisensory Processing Are Related to Broad Differences in the Balance of Local versus Distributed Information. J Cogn Neurosci 2022; 34:846-863. [PMID: 35195723 DOI: 10.1162/jocn_a_01835] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The brain's ability to extract information from multiple sensory channels is crucial to perception and effective engagement with the environment, but the individual differences observed in multisensory processing lack mechanistic explanation. We hypothesized that, from the perspective of information theory, individuals with more effective multisensory processing will exhibit a higher degree of shared information among distributed neural populations while engaged in a multisensory task, representing more effective coordination of information among regions. To investigate this, healthy young adults completed an audiovisual simultaneity judgment task to measure their temporal binding window (TBW), which quantifies the ability to distinguish fine discrepancies in timing between auditory and visual stimuli. EEG was then recorded during a second run of the simultaneity judgment task, and partial least squares was used to relate individual differences in the TBW width to source-localized EEG measures of local entropy and mutual information, indexing local and distributed processing of information, respectively. The narrowness of the TBW, reflecting more effective multisensory processing, was related to a broad pattern of higher mutual information and lower local entropy at multiple timescales. Furthermore, a small group of temporal and frontal cortical regions, including those previously implicated in multisensory integration and response selection, respectively, played a prominent role in this pattern. Overall, these findings suggest that individual differences in multisensory processing are related to widespread individual differences in the balance of distributed versus local information processing among a large subset of brain regions, with more distributed information being associated with more effective multisensory processing. The balance of distributed versus local information processing may therefore be a useful measure for exploring individual differences in multisensory processing, its relationship to higher cognitive traits, and its disruption in neurodevelopmental disorders and clinical conditions.
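The two information measures contrasted here can be illustrated on discretized signals: local entropy is the Shannon entropy of one channel, and mutual information is the shared information between two channels. This is a toy stdlib sketch, not the source-localized EEG pipeline the authors used.

```python
from collections import Counter
import math

def entropy(xs):
    """Shannon entropy (bits) of a discrete sequence."""
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) on paired discrete sequences;
    high values index information shared (coordinated) across channels."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))
```

Two identical binary sequences share all their information (MI equals the entropy of either), while unrelated sequences share none; the reported pattern, lower local entropy with higher mutual information, corresponds to regions trading local variability for coordination.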
47
Wang L, Lin L, Sun Y, Hou S, Ren J. The effect of movement speed on audiovisual temporal integration in streaming-bouncing illusion. Exp Brain Res 2022; 240:1139-1149. [PMID: 35147722 DOI: 10.1007/s00221-022-06312-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Accepted: 01/18/2022] [Indexed: 11/04/2022]
Abstract
Motion perception in real situations is often stimulated by multisensory information. Speed is an essential characteristic of moving objects; however, it is currently unclear whether speed affects the process of audiovisual temporal integration in motion perception. Therefore, this study used a streaming-bouncing task (a bistable motion perception task; SB task) combined with a simultaneity judgment task (SJ task) to explore the effect of speed on audiovisual temporal integration from implicit and explicit perspectives. The experiment used a within-subjects design with two speed conditions (fast/slow), eleven audiovisual conditions [stimulus onset asynchrony (SOA): 0, ±60, ±120, ±180, ±240, and ±300 ms], and a visual-only condition. A total of 30 participants were recruited; they completed the SB task and the SJ task in succession. The results showed the following: (1) the optimal times needed to induce the "bouncing" illusion and the maximum audiovisual bounce-inducing effect (ABE) magnitude were much earlier than the optimal time for audiovisual synchrony; (2) speed, as a bottom-up factor, affected the proportion of "bouncing" percepts in the SB illusion but did not affect the ABE magnitude; (3) speed also affected audiovisual temporal integration in motion perception, the main manifestation being that the point of subjective simultaneity (PSS) was earlier in the fast than in the slow speed condition in the SJ task; and (4) performance on the SB and SJ tasks was unrelated. In conclusion, the time to complete maximum audiovisual integration differed from the optimal time for synchrony perception; moreover, speed affected audiovisual temporal integration in motion perception, but only in explicit temporal tasks.
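A PSS like the one reported here is read off a simultaneity-judgment curve by fitting a peaked function to the proportion of "simultaneous" responses across SOAs; the fitted peak location is the PSS. The sketch below uses a scaled Gaussian and a deliberately simple grid search (the functional form and parameter grids are illustrative assumptions, not the authors' fitting procedure).

```python
import math

# The SOA levels from the design (ms): 0 and +/-60 to +/-300
SOAS = [-300, -240, -180, -120, -60, 0, 60, 120, 180, 240, 300]

def fit_pss(p_simultaneous, soas=SOAS):
    """Fit p(SOA) = a * exp(-(SOA - mu)^2 / (2 * s^2)) to the proportion of
    'simultaneous' responses by grid search over mu (PSS, ms), width s, and
    amplitude a. Returns (mu, s); mu is the point of subjective simultaneity."""
    best_err, best_mu, best_s = None, None, None
    for mu in range(-100, 101, 5):        # candidate PSS values (ms)
        for s in range(40, 301, 10):      # candidate widths (ms)
            for a10 in range(5, 11):      # candidate amplitudes 0.5 .. 1.0
                a = a10 / 10
                err = sum((p - a * math.exp(-(t - mu)**2 / (2 * s * s)))**2
                          for t, p in zip(soas, p_simultaneous))
                if best_err is None or err < best_err:
                    best_err, best_mu, best_s = err, mu, s
    return best_mu, best_s
```

On data generated from a Gaussian peaking at -40 ms, the grid search recovers that peak; the fast-versus-slow comparison in the study amounts to comparing the fitted mu across conditions.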
Affiliation(s)
- Luning Wang
- School of Psychology, Shanghai University of Sport, Shanghai, 200438, China
- Liyue Lin
- School of Psychology, Shanghai University of Sport, Shanghai, 200438, China
- Yujia Sun
- China Table Tennis College, Shanghai University of Sport, Shanghai, 200438, China
- Shuang Hou
- School of Psychology, Shanghai University of Sport, Shanghai, 200438, China
- Jie Ren
- China Table Tennis College, Shanghai University of Sport, Shanghai, 200438, China
48
Giurgola S, Casati C, Stampatori C, Perucca L, Mattioli F, Vallar G, Bolognini N. Abnormal multisensory integration in relapsing–remitting multiple sclerosis. Exp Brain Res 2022; 240:953-968. [PMID: 35094114 PMCID: PMC8918188 DOI: 10.1007/s00221-022-06310-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Accepted: 01/15/2022] [Indexed: 12/22/2022]
Abstract
The Temporal Binding Window (TBW) is a reliable index of the efficiency of multisensory integration, the process that allows individuals to infer which sensory inputs from different modalities pertain to the same event. TBW alterations have been reported in some neurological and neuropsychiatric disorders and seem to negatively affect cognition and behavior. So far, it is still unknown whether deficits of multisensory integration, as indexed by an abnormal TBW, are present even in Multiple Sclerosis. We addressed this issue by testing 25 participants affected by relapsing–remitting Multiple Sclerosis (RRMS) and 30 age-matched healthy controls. Participants completed a simultaneity judgment task (SJ2) to assess the audio-visual TBW; two unimodal SJ2 versions were used as control tasks. Individuals with RRMS showed an enlarged audio-visual TBW (width range = −166 to +198 ms), as compared to healthy controls (width range = −177 to +66 ms), thus showing an increased tendency to integrate temporally asynchronous visual and auditory stimuli. Instead, simultaneity perception of unimodal (visual or auditory) events overall did not differ from that of controls. These results provide the first evidence of a selective deficit of multisensory integration in individuals affected by RRMS, beyond the well-known motor and cognitive impairments. The reduced multisensory temporal acuity is likely due to a disruption, by multiple sclerosis, of the neural interplay between different sensory systems.
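A TBW width of the kind compared across groups can be extracted from a simultaneity-judgment curve as the SOA interval over which the proportion of "simultaneous" responses stays above a criterion fraction of its peak. The criterion and interpolation below are illustrative conventions, not necessarily those used in the study.

```python
def tbw_width(soas, p_sim, criterion=0.75):
    """Width (ms) of the SOA window where the proportion of 'simultaneous'
    responses exceeds criterion * peak, linearly interpolating the two
    threshold crossings. An enlarged width means asynchronous audio-visual
    stimuli are still judged simultaneous (a wider binding window)."""
    thr = criterion * max(p_sim)
    left = right = None
    for (t0, p0), (t1, p1) in zip(zip(soas, p_sim),
                                  zip(soas[1:], p_sim[1:])):
        if left is None and p0 < thr <= p1:   # rising crossing (keep first)
            left = t0 + (thr - p0) * (t1 - t0) / (p1 - p0)
        if p0 >= thr > p1:                    # falling crossing (keep last)
            right = t0 + (thr - p0) * (t1 - t0) / (p1 - p0)
    return right - left
```

On a symmetric curve peaking at 0 ms, the crossings fall at equal and opposite SOAs; the RRMS result corresponds to these crossings moving outward relative to controls.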
Affiliation(s)
- Serena Giurgola
- Department of Psychology and NeuroMI, University of Milano-Bicocca, Piazza dell’Ateneo Nuovo 1, 20126 Milan, Italy
- Carlotta Casati
- Neuropsychology Laboratory, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Neurorehabilitation Sciences, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Laura Perucca
- Neurorehabilitation Sciences, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Flavia Mattioli
- Neuropsychology Unit, Spedali Civili of Brescia, Brescia, Italy
- Giuseppe Vallar
- Department of Psychology and NeuroMI, University of Milano-Bicocca, Piazza dell’Ateneo Nuovo 1, 20126 Milan, Italy
- Neuropsychology Laboratory, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Nadia Bolognini
- Department of Psychology and NeuroMI, University of Milano-Bicocca, Piazza dell’Ateneo Nuovo 1, 20126 Milan, Italy
- Neuropsychology Laboratory, IRCCS Istituto Auxologico Italiano, Milan, Italy
49
Hirst RJ, Setti A, De Looze C, Kenny RA, Newell FN. Multisensory integration precision is associated with better cognitive performance over time in older adults: A large-scale exploratory study. AGING BRAIN 2022; 2:100038. [PMID: 36908873 PMCID: PMC9997173 DOI: 10.1016/j.nbas.2022.100038] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2021] [Revised: 02/17/2022] [Accepted: 03/01/2022] [Indexed: 11/15/2022] Open
Abstract
Age-related sensory decline impacts cognitive performance and exposes individuals to a greater risk of cognitive decline. Integration across the senses also changes with age, yet the link between multisensory perception and cognitive ageing is poorly understood. We explored the relationship between multisensory integration and cognitive function in 2875 adults aged 50+ from The Irish Longitudinal Study on Ageing. Multisensory integration was assessed at several audio-visual temporal asynchronies using the Sound Induced Flash Illusion (SIFI). More precise integration (i.e. less illusion susceptibility at larger temporal asynchronies) was cross-sectionally associated with faster Choice Response Times and Colour Trail Task performance, and fewer errors on the Sustained Attention to Response Task. We then used k-means clustering to identify groups with different 10-year cognitive trajectories on measures available longitudinally: delayed recall, immediate recall, and verbal fluency. Across measures, groups with consistently higher performance trajectories had more precise multisensory integration. These findings support broad links between multisensory integration and several cognitive measures, including processing speed, attention and memory, rather than an association with any specific subdomain.
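The trajectory-clustering step treats each participant's repeated cognitive scores as one vector and groups similar vectors. A plain k-means sketch (pure Python, deterministic initialization from the first k points for reproducibility; the real analysis would use a vetted implementation and proper initialization):

```python
def kmeans(points, k, iters=10):
    """Minimal k-means on equal-length trajectories (tuples of scores).
    Returns (centers, clusters); each center is the mean trajectory of
    its cluster, e.g. a 'consistently high' vs 'consistently low' group."""
    centers = [tuple(p) for p in points[:k]]      # simple deterministic init
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                          # assign to nearest center
            j = min(range(k),
                    key=lambda c: sum((a - b)**2
                                      for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        new_centers = []
        for j, cl in enumerate(clusters):         # recompute cluster means
            if cl:
                new_centers.append(tuple(sum(dim) / len(cl)
                                         for dim in zip(*cl)))
            else:
                new_centers.append(centers[j])    # keep an empty cluster's center
        centers = new_centers
    return centers, clusters
```

With well-separated high and low trajectories the two recovered clusters correspond to the performance groups whose multisensory precision the study then compares.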
Affiliation(s)
- Rebecca J. Hirst
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Ireland
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Ireland
- Corresponding author at: Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland.
- Annalisa Setti
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Ireland
- School of Applied Psychology, University College Cork, Ireland
- Céline De Looze
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Ireland
- Rose Anne Kenny
- The Irish Longitudinal Study on Ageing, Trinity College Dublin, Ireland
- Mercer Institute for Successful Ageing, St. James Hospital, Dublin, Ireland
- Fiona N. Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Ireland
50
Gröhn C, Norgren E, Eriksson L. A systematic review of the neural correlates of multisensory integration in schizophrenia. SCHIZOPHRENIA RESEARCH-COGNITION 2021; 27:100219. [PMID: 34660211 PMCID: PMC8502765 DOI: 10.1016/j.scog.2021.100219] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/14/2021] [Revised: 09/27/2021] [Accepted: 09/27/2021] [Indexed: 01/01/2023]
Abstract
Multisensory integration (MSI), in which sensory signals from different modalities are unified, is necessary for our comprehensive perception of, and effective adaptation to, the objects and events around us. However, individuals with schizophrenia show impairments in MSI, which could explain typical symptoms such as hallucinations and reality distortion. Because the neural correlates of aberrant MSI in schizophrenia help us understand the pathophysiology of this psychiatric disorder, we performed a systematic review of the current research on this subject. The literature search covered brain-imaging studies that investigated MSI in diagnosed schizophrenia patients compared to healthy controls. Seventeen of 317 identified studies were finally included. To assess risk of bias, the Newcastle-Ottawa quality assessment was used, and the review was written according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The results indicated that multisensory processes in schizophrenia are associated with aberrant, mainly reduced, neural activity in several brain regions, as measured by event-related potentials, oscillations, activity, and connectivity. The conclusion is that a fronto-temporal region, comprising the inferior frontal gyrus, middle temporal gyrus, and superior temporal gyrus/sulcus, along with the fusiform gyrus and the dorsal visual stream in the occipito-parietal lobe, are possible key regions of deficient MSI in schizophrenia.
Affiliation(s)
- Lars Eriksson
- Corresponding author at: Department of Social and Psychological Studies, Karlstad University, SE-651 88 Karlstad, Sweden.