1
Tót K, Braunitzer G, Harcsa-Pintér N, Kiss Á, Bodosi B, Tajti J, Csáti A, Eördegh G, Nagy A. Enhanced audiovisual associative pair learning in migraine without aura in adult patients: An unexpected finding. Cephalalgia 2024; 44:3331024241258722. PMID: 39093997. DOI: 10.1177/03331024241258722.
Abstract
BACKGROUND: Altered sensory processing in migraine has been demonstrated by several studies in unimodal, and especially visual, tasks. While there is some limited evidence hinting at potential alterations in multisensory processing among migraine sufferers, this aspect remains relatively unexplored. This study investigated the interictal cognitive performance of migraine patients without aura compared to matched controls, focusing on associative learning, recall, and transfer abilities through the Sound-Face Test, an audiovisual test based on the principles of the Rutgers Acquired Equivalence Test.
MATERIALS AND METHODS: The performance of 42 volunteering migraine patients was compared to the data of 42 matched controls, selected from a database of healthy volunteers who had taken the test earlier. The study aimed to compare the groups' performance in learning, recall, and the ability to transfer learned associations.
RESULTS: Migraine patients demonstrated significantly superior associative learning as compared to controls, requiring fewer trials and making fewer errors during the acquisition phase. However, no significant differences were observed in retrieval error ratios, generalization error ratios, or reaction times between migraine patients and controls in later stages of the test.
CONCLUSION: The results of our study support those of previous investigations, which concluded that multisensory processing exhibits a unique pattern in migraine. The specific finding that associative audiovisual pair learning is more effective in adult migraine patients than in matched controls is unexpected. If the phenomenon is not an artifact, it may be assumed to be a combined result of the hypersensitivity present in migraine and the sensory threshold-lowering effect of multisensory integration.
Affiliation(s)
- Kálmán Tót
- Department of Physiology, Albert Szent-Györgyi Medical School, University of Szeged, Szeged, Hungary
- Gábor Braunitzer
- Nyírő Gyula Hospital, Laboratory for Perception & Cognition and Clinical Neuroscience, Budapest, Hungary
- Noémi Harcsa-Pintér
- Department of Physiology, Albert Szent-Györgyi Medical School, University of Szeged, Szeged, Hungary
- Ádám Kiss
- Department of Physiology, Albert Szent-Györgyi Medical School, University of Szeged, Szeged, Hungary
- Balázs Bodosi
- Department of Physiology, Albert Szent-Györgyi Medical School, University of Szeged, Szeged, Hungary
- János Tajti
- Department of Neurology, Albert Szent-Györgyi Medical School, University of Szeged, Szeged, Hungary
- Anett Csáti
- Department of Neurology, Albert Szent-Györgyi Medical School, University of Szeged, Szeged, Hungary
- Gabriella Eördegh
- Department of Theoretical Health Sciences and Health Management, Faculty of Health Sciences and Social Studies, University of Szeged, Szeged, Hungary
- Attila Nagy
- Department of Physiology, Albert Szent-Györgyi Medical School, University of Szeged, Szeged, Hungary
2
Smyre SA, Bean NL, Stein BE, Rowland BA. The brain can develop conflicting multisensory principles to guide behavior. Cereb Cortex 2024; 34:bhae247. PMID: 38879756. PMCID: PMC11179994. DOI: 10.1093/cercor/bhae247.
Abstract
Midbrain multisensory neurons undergo a significant postnatal transition in how they process cross-modal (e.g. visual-auditory) signals. In early stages, signals derived from common events are processed competitively; however, at later stages they are processed cooperatively such that their salience is enhanced. This transition reflects adaptation to cross-modal configurations that are consistently experienced and become informative about which correspond to common events. Tested here was the assumption that overt behaviors follow a similar maturation. Cats were reared in omnidirectional sound, thereby compromising the experience needed for this developmental process. Animals were then repeatedly exposed to different configurations of visual and auditory stimuli (e.g. spatiotemporally congruent or spatially disparate) that varied on each side of space, and their behavior was assessed using a detection/localization task. Animals showed enhanced performance to stimuli consistent with the experience provided: congruent stimuli elicited enhanced behaviors where spatially congruent cross-modal experience was provided, and spatially disparate stimuli elicited enhanced behaviors where spatially disparate cross-modal experience was provided. Cross-modal configurations not consistent with experience did not enhance responses. The presumptive benefit of such flexibility in the multisensory developmental process is to sensitize neural circuits (and the behaviors they control) to the features of the environment in which they will function. These experiments reveal that these processes have a high degree of flexibility, such that two (conflicting) multisensory principles can be implemented by cross-modal experience on opposite sides of space even within the same animal.
Affiliation(s)
- Scott A Smyre
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston Salem, NC 27157, United States
- Naomi L Bean
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston Salem, NC 27157, United States
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston Salem, NC 27157, United States
- Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston Salem, NC 27157, United States
3
Wang L, Xin H, Buren Q, Zhang Y, Han Y, Ouyang B, Sun Z, Bao Y, Dong C. Specific rules for time and space of multisensory plasticity in the superior colliculus. Brain Res 2024; 1828:148774. PMID: 38244758. DOI: 10.1016/j.brainres.2024.148774.
Abstract
Cat superior colliculus (SC) neurons commonly combine information from different senses, which facilitates event detection and localization. Integration in SC multisensory neurons depends on the spatial and temporal relationships between cross-modal cues. Here, we revealed short-term plasticity of the temporal/spatial integration process during adulthood that adapts multisensory integration to reliable changes in environmental conditions. Short-term experience alters the temporal preferences of SC multisensory neurons, and this plasticity is limited to changes in cross-modal timing (a factor commonly induced by events at different distances from the receiver). However, this plasticity was not evident in response to changes in the cross-modal spatial configuration.
Affiliation(s)
- Linghong Wang
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Hongmei Xin
- School of Humanities Education, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Qiqige Buren
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Yan Zhang
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Yaxin Han
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Biao Ouyang
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Zhe Sun
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Yulong Bao
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Chao Dong
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
4
Kreyenmeier P, Bhuiyan I, Gian M, Chow HM, Spering M. Smooth pursuit inhibition reveals audiovisual enhancement of fast movement control. J Vis 2024; 24:3. PMID: 38558158. PMCID: PMC10996987. DOI: 10.1167/jov.24.4.3.
Abstract
The sudden onset of a visual object or event elicits an inhibition of eye movements at latencies approaching the minimum delay of visuomotor conductance in the brain. Typically, information presented via multiple sensory modalities, such as sound and vision, evokes stronger and more robust responses than unisensory information. Whether and how multisensory information affects ultra-short latency oculomotor inhibition is unknown. In two experiments, we investigate smooth pursuit and saccadic inhibition in response to multisensory distractors. Observers tracked a horizontally moving dot and were interrupted by an unpredictable visual, auditory, or audiovisual distractor. Distractors elicited a transient inhibition of pursuit eye velocity and catch-up saccade rate within ∼100 ms of their onset. Audiovisual distractors evoked stronger oculomotor inhibition than visual- or auditory-only distractors, indicating multisensory response enhancement. Multisensory response enhancement magnitudes were equal to the linear sum of responses to component stimuli. These results demonstrate that multisensory information affects eye movements even at ultra-short latencies, establishing a lower time boundary for multisensory-guided behavior. We conclude that oculomotor circuits must have privileged access to sensory information from multiple modalities, presumably via a fast, subcortical pathway.
Affiliation(s)
- Philipp Kreyenmeier
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia, Canada
- Ishmam Bhuiyan
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Mathew Gian
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Hiu Mei Chow
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Department of Psychology, St. Thomas University, Fredericton, New Brunswick, Canada
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia, Canada
- Djavad Mowafaghian Center for Brain Health, University of British Columbia, Vancouver, British Columbia, Canada
- Institute for Computing, Information, and Cognitive Systems, University of British Columbia, Vancouver, British Columbia, Canada
5
Frissen I, Mars F. Planning lane changes using advance visual and haptic information. Psychol Res 2024; 88:363-378. PMID: 37801088. DOI: 10.1007/s00426-023-01879-9.
Abstract
Taking a motor planning perspective, this study investigates whether haptic force cues displayed on the steering wheel are more effective than visual cues in signaling the direction of an upcoming lane change. Licensed drivers drove in a fixed-base driving simulator equipped with an active steering system for realistic force feedback. They were instructed to make lane changes upon registering a directional cue. Cues were delivered according to the movement precuing technique employing a pair of precues and imperative cues which could be either visual, haptic, or crossmodal (a visual precue with a haptic imperative cue, and vice versa). The main dependent variable was response time. Additional analyses were conducted on steering wheel angle profiles and the rate of initial steering errors. Conditions with a haptic imperative cue produced considerably faster responses than conditions with a visual imperative cue, irrespective of the precue modality. Valid and invalid precues produced the typical gains and costs, with one exception. There appeared to be little cost in response time or initial steering errors associated with invalid cueing when both cues were haptic. The results are consistent with the hypothesis that imperative haptic cues facilitate action selection while visual stimuli require additional time-consuming cognitive processing.
Affiliation(s)
- Ilja Frissen
- School of Information Studies, McGill University, 3661 Rue Peel, Montreal, QC, H3A 1X1, Canada
- Franck Mars
- Centrale Nantes, CNRS, LS2N, Nantes Université, 44000, Nantes, France
6
Ross LA, Molholm S, Butler JS, Del Bene VA, Brima T, Foxe JJ. Neural correlates of audiovisual narrative speech perception in children and adults on the autism spectrum: A functional magnetic resonance imaging study. Autism Res 2024; 17:280-310. PMID: 38334251. DOI: 10.1002/aur.3104.
Abstract
Autistic individuals show substantially reduced benefit from observing visual articulations during audiovisual speech perception, a multisensory integration deficit that is particularly relevant to social communication. This has mostly been studied using simple syllabic or word-level stimuli, and it remains unclear how altered lower-level multisensory integration translates to the processing of more complex natural multisensory stimulus environments in autism. Here, functional neuroimaging was used to compare neural correlates of audiovisual gain (AV-gain) in 41 autistic individuals to those of 41 age-matched non-autistic controls when presented with a complex audiovisual narrative. Participants were presented with continuous narration of a story in auditory-alone, visual-alone, and both synchronous and asynchronous audiovisual speech conditions. We hypothesized that previously identified differences in audiovisual speech processing in autism would be characterized by activation differences in brain regions well known to be associated with audiovisual enhancement in neurotypicals. However, our results did not provide evidence for altered processing of the auditory-alone, visual-alone, or audiovisual conditions, or of AV-gain, in regions associated with the respective task when comparing activation patterns between groups. Instead, we found that autistic individuals responded with higher activations in mostly frontal regions where the activation to the experimental conditions was below baseline (de-activations) in the control group. These frontal effects were observed in both unisensory and audiovisual conditions, suggesting that these altered activations were not specific to multisensory processing but reflective of more general mechanisms, such as an altered disengagement of Default Mode Network processes during the observation of the language stimulus across conditions.
Affiliation(s)
- Lars A Ross
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, USA
- Department of Imaging Sciences, University of Rochester Medical Center, University of Rochester School of Medicine and Dentistry, Rochester, New York, USA
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, USA
- Sophie Molholm
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, USA
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, USA
- John S Butler
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, USA
- School of Mathematics and Statistics, Technological University Dublin, City Campus, Dublin, Ireland
- Victor A Del Bene
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, USA
- Heersink School of Medicine, Department of Neurology, University of Alabama at Birmingham, Birmingham, Alabama, USA
- Tufikameni Brima
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, USA
- John J Foxe
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, USA
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, USA
7
Yu L, Xu J. The Development of Multisensory Integration at the Neuronal Level. Adv Exp Med Biol 2024; 1437:153-172. PMID: 38270859. DOI: 10.1007/978-981-99-7611-9_10.
Abstract
Multisensory integration is a fundamental function of the brain. In the typical adult, multisensory neurons' response to paired multisensory (e.g., audiovisual) cues is significantly more robust than the corresponding best unisensory response in many brain regions. Synthesizing sensory signals from multiple modalities can speed up sensory processing and improve the salience of outside events or objects. Despite its significance, multisensory integration has been shown not to be a neonatal feature of the brain. Neurons' ability to effectively combine multisensory information does not occur rapidly but develops gradually during early postnatal life (for cats, 4-12 weeks are required). Multisensory experience is critical for this developing process. If animals are restricted from sensing normal visual scenes or sounds (deprived of the relevant multisensory experience), the development of the corresponding integrative ability can be blocked until the appropriate multisensory experience is obtained. This section summarizes the extant literature on the development of multisensory integration (mainly using the cat superior colliculus as a model), sensory-deprivation-induced cross-modal plasticity, and how sensory experience (sensory exposure and perceptual learning) leads to the plastic change and modification of neural circuits in cortical and subcortical areas.
Affiliation(s)
- Liping Yu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China
- Jinghong Xu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China
8
Saltafossi M, Zaccaro A, Perrucci MG, Ferri F, Costantini M. The impact of cardiac phases on multisensory integration. Biol Psychol 2023; 182:108642. PMID: 37467844. DOI: 10.1016/j.biopsycho.2023.108642.
Abstract
The brain continuously processes information coming from both the external environment and visceral signals generated by the body. This constant information exchange between the body and the brain allows signals originating from the oscillatory activity of the heart, among others, to influence perception. Here, we investigated how the cardiac phase modulates multisensory integration, which is the process that allows information from multiple senses to combine non-linearly to reduce environmental uncertainty. Forty healthy participants completed a Simple Detection Task with unimodal (Auditory, Visual, Tactile) and bimodal (Audio-Tactile, Audio-Visual, Visuo-Tactile) stimuli presented 250 ms and 500 ms after the R-peak of the electrocardiogram, that is, systole and diastole, respectively. First, we found a nonspecific effect of the cardiac cycle phases on detection of both unimodal and bimodal stimuli. Reaction times were faster for stimuli presented during diastole, compared to systole. Then, applying the Race Model Inequality approach to quantify multisensory integration, Audio-Tactile and Visuo-Tactile, but not Audio-Visual stimuli, showed higher integration when presented during diastole than during systole. These findings indicate that the impact of the cardiac phase on multisensory integration may be specific for stimuli including somatosensory (i.e., tactile) inputs. This suggests that the heartbeat-related noise, which according to the interoceptive predictive coding theory suppresses somatosensory inputs, also affects multisensory integration during systole. In conclusion, our data extend the interoceptive predictive coding theory to the multisensory domain. From a more mechanistic view, they may reflect a reduced optimization of neural oscillations orchestrating multisensory integration during systole.
Affiliation(s)
- Martina Saltafossi
- Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
- Andrea Zaccaro
- Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
- Mauro Gianni Perrucci
- Department of Neuroscience, Imaging and Clinical Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
- Institute for Advanced Biomedical Technologies, ITAB, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
- Francesca Ferri
- Department of Neuroscience, Imaging and Clinical Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
- Marcello Costantini
- Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
- Institute for Advanced Biomedical Technologies, ITAB, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
9
Benarroch E. What Are the Functions of the Superior Colliculus and Its Involvement in Neurologic Disorders? Neurology 2023; 100:784-790. PMID: 37068960. PMCID: PMC10115501. DOI: 10.1212/wnl.0000000000207254.
10
Smyre SA, Bean NL, Stein BE, Rowland BA. Predictability alters multisensory responses by modulating unisensory inputs. Front Neurosci 2023; 17:1150168. PMID: 37065927. PMCID: PMC10090419. DOI: 10.3389/fnins.2023.1150168.
Abstract
The multisensory (deep) layers of the superior colliculus (SC) play an important role in detecting, localizing, and guiding orientation responses to salient events in the environment. Essential to this role is the ability of SC neurons to enhance their responses to events detected by more than one sensory modality and to become desensitized (‘attenuated’ or ‘habituated’) or sensitized (‘potentiated’) to events that are predictable via modulatory dynamics. To identify the nature of these modulatory dynamics, we examined how the repetition of different sensory stimuli affected the unisensory and multisensory responses of neurons in the cat SC. Neurons were presented with 2 Hz stimulus trains of three identical visual, auditory, or combined visual–auditory stimuli, followed by a fourth stimulus that was either the same or different (‘switch’). Modulatory dynamics proved to be sensory-specific: they did not transfer when the stimulus switched to another modality. However, they did transfer when switching from the visual–auditory stimulus train to either of its modality-specific component stimuli and vice versa. These observations suggest that predictions, in the form of modulatory dynamics induced by stimulus repetition, are independently sourced from and applied to the modality-specific inputs to the multisensory neuron. This falsifies several plausible mechanisms for these modulatory dynamics: they neither produce general changes in the neuron’s transform, nor are they dependent on the neuron’s output.
11
Mackey CA, Dylla M, Bohlen P, Grigsby J, Hrnicek A, Mayfield J, Ramachandran R. Hierarchical differences in the encoding of sound and choice in the subcortical auditory system. J Neurophysiol 2023; 129:591-608. PMID: 36651913. PMCID: PMC9988536. DOI: 10.1152/jn.00439.2022.
Abstract
Detection of sounds is a fundamental function of the auditory system. Although studies of auditory cortex have gained substantial insight into detection performance using behaving animals, previous subcortical studies have mostly taken place under anesthesia, in passively listening animals, or have not measured performance at threshold. These limitations preclude direct comparisons between neuronal responses and behavior. To address this, we simultaneously measured auditory detection performance and single-unit activity in the inferior colliculus (IC) and cochlear nucleus (CN) in macaques. The spontaneous activity and response variability of CN neurons were higher than those observed for IC neurons. Signal detection theoretic methods revealed that the magnitude of responses of IC neurons provided more reliable estimates of psychometric threshold and slope compared with the responses of single CN neurons. However, pooling small populations of CN neurons provided reliable estimates of psychometric threshold and slope, suggesting sufficient information in CN population activity. Trial-by-trial correlations between spike count and behavioral response emerged 50-75 ms after sound onset for most IC neurons, but for few neurons in the CN. These results highlight hierarchical differences between neurometric-psychometric correlations in CN and IC and have important implications for how subcortical information could be decoded.

NEW & NOTEWORTHY: The cerebral cortex is widely recognized to play a role in sensory processing and decision-making. Accounts of the neural basis of auditory perception and its dysfunction are based on this idea. However, significantly less attention has been paid to midbrain and brainstem structures in this regard. Here, we find that subcortical auditory neurons represent stimulus information sufficient for detection and predict behavioral choice on a trial-by-trial basis.
Affiliation(s)
- Chase A Mackey
- Neuroscience Graduate Program, Vanderbilt University, Nashville, Tennessee, United States
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Margit Dylla
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Peter Bohlen
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Jason Grigsby
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Andrew Hrnicek
- Department of Neurobiology and Anatomy, Wake Forest University Health Sciences, Winston-Salem, North Carolina, United States
- Jackson Mayfield
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Ramnarayan Ramachandran
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
12
Bean NL, Smyre SA, Stein BE, Rowland BA. Noise-rearing precludes the behavioral benefits of multisensory integration. Cereb Cortex 2023; 33:948-958. PMID: 35332919. PMCID: PMC9930622. DOI: 10.1093/cercor/bhac113.
Abstract
Concordant visual-auditory stimuli enhance the responses of individual superior colliculus (SC) neurons. This neuronal capacity for "multisensory integration" is not innate: it is acquired only after substantial cross-modal (e.g. auditory-visual) experience. Masking transient auditory cues by raising animals in omnidirectional sound ("noise-rearing") precludes their ability to obtain this experience and the ability of the SC to construct a normal multisensory (auditory-visual) transform. SC responses to combinations of concordant visual-auditory stimuli are depressed, rather than enhanced. The present experiments examined the behavioral consequence of this rearing condition in a simple detection/localization task. In the first experiment, the auditory component of the concordant cross-modal pair was novel, and only the visual stimulus was a target. In the second experiment, both component stimuli were targets. Noise-reared animals failed to show multisensory performance benefits in either experiment. These results reveal a close parallel between behavior and single neuron physiology in the multisensory deficits that are induced when noise disrupts early visual-auditory experience.
Affiliation(s)
- Naomi L Bean
- Corresponding author: Wake Forest School of Medicine, Medical Center Blvd., Winston Salem, NC 27157, United States.
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston Salem, NC 27157, United States
- Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston Salem, NC 27157, United States
13
Vastano R, Costantini M, Alexander WH, Widerstrom-Noga E. Multisensory integration in humans with spinal cord injury. Sci Rep 2022; 12:22156. [PMID: 36550184] [PMCID: PMC9780239] [DOI: 10.1038/s41598-022-26678-x]
Abstract
Although multisensory integration (MSI) has been extensively studied, the underlying mechanisms remain a topic of ongoing debate. Here we investigate these mechanisms by comparing MSI in healthy controls to a clinical population with spinal cord injury (SCI). Deafferentation following SCI induces sensorimotor impairment, which may alter the ability to synthesize cross-modal information. We applied mathematical and computational modeling to reaction time data recorded in response to temporally congruent cross-modal stimuli. We found that MSI in both SCI and healthy controls is best explained by cross-modal perceptual competition, highlighting a common competition mechanism. Relative to controls, MSI impairments in SCI participants were better explained by reduced stimulus salience leading to increased cross-modal competition. By combining traditional analyses with model-based approaches, we examine how MSI is realized during normal function, and how it is compromised in a clinical population. Our findings support future investigations identifying and rehabilitating MSI deficits in clinical disorders.
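The mathematical modeling of reaction times described here builds on standard cross-modal RT analyses; a common baseline in this literature is the race-model inequality, which bounds how fast a non-integrating (parallel-race) system could respond. A sketch with synthetic data — this is not the authors' model or dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic reaction times (ms) for visual, tactile, and bimodal conditions.
rt_v = rng.normal(420, 40, 500)
rt_t = rng.normal(440, 40, 500)
rt_vt = rng.normal(370, 35, 500)  # faster bimodal RTs

def ecdf(sample, t):
    """Empirical cumulative distribution of `sample` evaluated at times `t`."""
    return np.mean(sample[:, None] <= t[None, :], axis=0)

t = np.linspace(250, 600, 200)
# Race-model bound: a non-integrating system cannot beat F_V(t) + F_T(t).
bound = np.minimum(ecdf(rt_v, t) + ecdf(rt_t, t), 1.0)
# Any exceedance of the bound is taken as evidence for cross-modal interaction.
violation = np.any(ecdf(rt_vt, t) > bound)
```

With these synthetic parameters the bimodal distribution exceeds the bound at short latencies, the classic signature of multisensory facilitation beyond statistical redundancy.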
Affiliation(s)
- Roberta Vastano
- Department of Neurological Surgery, The Miami Project to Cure Paralysis, University of Miami, Miami, FL 33136, USA
- Marcello Costantini
- Department of Psychological, Health and Territorial Sciences, “G. d’Annunzio” University of Chieti-Pescara, Chieti, Italy; Institute for Advanced Biomedical Technologies, ITAB, “G. d’Annunzio” University of Chieti-Pescara, Chieti, Italy
- William H. Alexander
- Center for Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, USA; Department of Psychology, Florida Atlantic University, Boca Raton, USA; The Brain Institute, Florida Atlantic University, Boca Raton, USA
- Eva Widerstrom-Noga
- Department of Neurological Surgery, The Miami Project to Cure Paralysis, University of Miami, Miami, FL 33136, USA
14
Cuppini C, Magosso E, Monti M, Ursino M, Yau JM. A neurocomputational analysis of visual bias on bimanual tactile spatial perception during a crossmodal exposure. Front Neural Circuits 2022; 16:933455. [PMID: 36439678] [PMCID: PMC9684216] [DOI: 10.3389/fncir.2022.933455]
Abstract
Vision and touch both support spatial information processing. These sensory systems also exhibit highly specific interactions in spatial perception, which may reflect multisensory representations that are learned through visuo-tactile (VT) experiences. Recently, Wani and colleagues reported that task-irrelevant visual cues bias tactile perception, in a brightness-dependent manner, on a task requiring participants to detect unimanual and bimanual cues. Importantly, tactile performance remained spatially biased after VT exposure, even when no visual cues were presented. These effects on bimanual touch conceivably reflect cross-modal learning, but the neural substrates that are changed by VT experience are unclear. We previously described a neural network capable of simulating VT spatial interactions. Here, we exploited this model to test different hypotheses regarding potential network-level changes that may underlie the VT learning effects. Simulation results indicated that VT learning effects are inconsistent with plasticity restricted to unisensory visual and tactile hand representations. Similarly, VT learning effects were also inconsistent with changes restricted to the strength of inter-hemispheric inhibitory interactions. Instead, we found that both the hand representations and the inter-hemispheric inhibitory interactions need to be plastic to fully recapitulate VT learning effects. Our results imply that crossmodal learning of bimanual spatial perception involves multiple changes distributed over a VT processing cortical network.
Affiliation(s)
- Cristiano Cuppini
- Department of Electrical, Electronic, and Information Engineering “Guglielmo Marconi,” University of Bologna, Bologna, Italy. Correspondence: Cristiano Cuppini.
- Elisa Magosso
- Department of Electrical, Electronic, and Information Engineering “Guglielmo Marconi,” University of Bologna, Bologna, Italy
- Melissa Monti
- Department of Electrical, Electronic, and Information Engineering “Guglielmo Marconi,” University of Bologna, Bologna, Italy
- Mauro Ursino
- Department of Electrical, Electronic, and Information Engineering “Guglielmo Marconi,” University of Bologna, Bologna, Italy
- Jeffrey M. Yau
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, United States
15
Ross LA, Molholm S, Butler JS, Del Bene VA, Foxe JJ. Neural correlates of multisensory enhancement in audiovisual narrative speech perception: a fMRI investigation. Neuroimage 2022; 263:119598. [PMID: 36049699] [DOI: 10.1016/j.neuroimage.2022.119598]
Abstract
This fMRI study investigated the effect of seeing articulatory movements of a speaker while listening to a naturalistic narrative stimulus. It had the goal to identify regions of the language network showing multisensory enhancement under synchronous audiovisual conditions. We expected this enhancement to emerge in regions known to underlie the integration of auditory and visual information such as the posterior superior temporal gyrus as well as parts of the broader language network, including the semantic system. To this end we presented 53 participants with a continuous narration of a story in auditory alone, visual alone, and both synchronous and asynchronous audiovisual speech conditions while recording brain activity using BOLD fMRI. We found multisensory enhancement in an extensive network of regions underlying multisensory integration and parts of the semantic network as well as extralinguistic regions not usually associated with multisensory integration, namely the primary visual cortex and the bilateral amygdala. Analysis also revealed involvement of thalamic brain regions along the visual and auditory pathways more commonly associated with early sensory processing. We conclude that under natural listening conditions, multisensory enhancement not only involves sites of multisensory integration but many regions of the wider semantic network and includes regions associated with extralinguistic sensory, perceptual and cognitive processing.
Affiliation(s)
- Lars A Ross
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; Department of Imaging Sciences, University of Rochester Medical Center, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA.
- Sophie Molholm
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA
- John S Butler
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA; School of Mathematical Sciences, Technological University Dublin, Kevin Street Campus, Dublin, Ireland
- Victor A Del Bene
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA; University of Alabama at Birmingham, Heersink School of Medicine, Department of Neurology, Birmingham, Alabama, 35233, USA
- John J Foxe
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA
16
Peng X, Jiang H, Yang J, Shi R, Feng J, Liang Y. Effects of Temporal Characteristics on Pilots Perceiving Audiovisual Warning Signals Under Different Perceptual Loads. Front Psychol 2022; 13:808150. [PMID: 35222196] [PMCID: PMC8867071] [DOI: 10.3389/fpsyg.2022.808150]
Abstract
Our research aimed to investigate the effectiveness of auditory, visual, and audiovisual warning signals for capturing the attention of the pilot, and how stimulus onset asynchronies (SOA) in audiovisual stimuli affect pilots perceiving the bimodal warning signals under different perceptual load conditions. In experiment 1 of the low perceptual load condition, participants discriminated the location (right vs. left) of visual targets preceded by five different types of warning signals. In experiment 2 of high perceptual load, participants completed the location task identical to a low load condition and a digit detection task in a rapid serial visual presentation (RSVP) stream. The main effect of warning signals in two experiments showed that visual and auditory cues presented simultaneously (AV) could effectively and efficiently arouse the attention of the pilots in high and low load conditions. Specifically, auditory (A), AV, and visual preceding auditory stimulus by 100 ms (VA100) increased the spatial orientation to a valid position in low load conditions. With the increase in visual perceptual load, auditory preceding the visual stimulus by 100 ms (AV100) and A warning signals had stronger spatial orientation. The results are expected to theoretically support the optimization design of the cockpit display interface, contributing to immediate flight crew awareness.
Affiliation(s)
- Xing Peng
- Institute of Aviation Human Factors and Cognitive Neuroscience, College of Flight Technology, Civil Aviation Flight University of China, Guanghan, China
- Hao Jiang
- Institute of Aviation Human Factors and Cognitive Neuroscience, College of Flight Technology, Civil Aviation Flight University of China, Guanghan, China
- Jiazhong Yang
- Institute of Aviation Human Factors and Cognitive Neuroscience, College of Flight Technology, Civil Aviation Flight University of China, Guanghan, China
- Rong Shi
- Institute of Aviation Human Factors and Cognitive Neuroscience, College of Flight Technology, Civil Aviation Flight University of China, Guanghan, China
- Junyi Feng
- Technical Support Center, Operation Control Department, Beijing Capital Airlines, Beijing, China
- Yaowei Liang
- Institute of Aviation Human Factors and Cognitive Neuroscience, College of Flight Technology, Civil Aviation Flight University of China, Guanghan, China; Flying Department of Southwest Branch, Air China Limited, Chengdu, China
17
Audiovisual integration in the Mauthner cell enhances escape probability and reduces response latency. Sci Rep 2022; 12:1097. [PMID: 35058502] [PMCID: PMC8776867] [DOI: 10.1038/s41598-022-04998-2]
Abstract
Fast and accurate threat detection is critical for animal survival. Reducing perceptual ambiguity by integrating multiple sources of sensory information can enhance perception and reduce response latency. However, studies addressing the link between behavioral correlates of multisensory integration and its underlying neural basis are rare. Fish that detect an urgent threat escape with an explosive behavior known as C-start. The C-start is driven by an identified neural circuit centered on the Mauthner cell, an identified neuron capable of triggering escapes in response to visual and auditory stimuli. Here we demonstrate that goldfish can integrate visual looms and brief auditory stimuli to increase C-start probability. This multisensory enhancement is inversely correlated to the salience of the stimuli, with weaker auditory cues producing a proportionally stronger multisensory effect. We also show that multisensory stimuli reduced C-start response latency, with most escapes locked to the presentation of the auditory cue. We make a direct link between behavioral data and its underlying neural mechanism by reproducing the behavioral data with an integrate-and-fire computational model of the Mauthner cell. This model of the Mauthner cell circuit suggests that excitatory inputs integrated at the soma are key elements in multisensory decision making during fast C-start escapes. This provides a simple but powerful mechanism to enhance threat detection and survival.
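The integrate-and-fire account of the Mauthner cell can be sketched in a few lines: visual and auditory currents sum at the soma, and a threshold crossing stands in for C-start initiation. Parameter values and input shapes below are illustrative assumptions, not the paper's fitted model:

```python
# Leaky integrate-and-fire sketch of the Mauthner cell: visual and auditory
# inputs sum at the soma; the first threshold crossing stands in for a C-start.
# tau, threshold, and input amplitudes are illustrative, not fitted values.

def mauthner_spike_time(i_visual, i_auditory, tau=10.0, v_thresh=1.0,
                        dt=0.1, t_max=100.0):
    """Return the time (ms) of the first spike, or None if threshold is never reached."""
    v = 0.0
    for step in range(int(t_max / dt)):
        t = step * dt
        i_total = i_visual(t) + i_auditory(t)   # somatic summation of both channels
        v += dt * (-v / tau + i_total)          # leaky integration (Euler step)
        if v >= v_thresh:
            return t
    return None

loom = lambda t: 0.05 * (t / 100.0)                 # slowly ramping visual loom
pip = lambda t: 0.6 if 40.0 <= t < 45.0 else 0.0    # brief auditory pip at 40 ms
silent = lambda t: 0.0

t_visual_only = mauthner_spike_time(loom, silent)   # weak loom alone: no escape
t_multisensory = mauthner_spike_time(loom, pip)     # loom + pip: escape fires
```

The weak loom alone never reaches threshold, while the same loom combined with a brief pip fires shortly after pip onset, mirroring the reported probability increase and the locking of escapes to the auditory cue.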
18
Jiang H, Stanford TR, Rowland BA, Stein BE. Association Cortex Is Essential to Reverse Hemianopia by Multisensory Training. Cereb Cortex 2021; 31:5015-5023. [PMID: 34056645] [DOI: 10.1093/cercor/bhab138]
Abstract
Hemianopia induced by unilateral visual cortex lesions can be resolved by repeatedly exposing the blinded hemifield to auditory-visual stimuli. This rehabilitative "training" paradigm depends on mechanisms of multisensory plasticity that restore the lost visual responsiveness of multisensory neurons in the ipsilesional superior colliculus (SC) so that they can once again support vision in the blinded hemifield. These changes are thought to operate via the convergent visual and auditory signals relayed to the SC from association cortex (the anterior ectosylvian sulcus [AES], in cat). The present study tested this assumption by cryogenically deactivating ipsilesional AES in hemianopic, anesthetized cats during weekly multisensory training sessions. No signs of visual recovery were evident in this condition, even after providing animals with up to twice the number of training sessions required for effective rehabilitation. Subsequent training under the same conditions, but with AES active, reversed the hemianopia within the normal timeframe. These results indicate that the corticotectal circuit that is normally engaged in SC multisensory plasticity has to be operational for the brain to use visual-auditory experience to resolve hemianopia.
Affiliation(s)
- Huai Jiang
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
- Terrence R Stanford
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
- Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
19
Rezaul Karim AKM, Proulx MJ, de Sousa AA, Likova LT. Neuroplasticity and Crossmodal Connectivity in the Normal, Healthy Brain. Psychol Neurosci 2021; 14:298-334. [PMID: 36937077] [PMCID: PMC10019101] [DOI: 10.1037/pne0000258]
Abstract
Objective: Neuroplasticity enables the brain to establish new crossmodal connections or reorganize old connections which are essential to perceiving a multisensorial world. The intent of this review is to identify and summarize the current developments in neuroplasticity and crossmodal connectivity, and deepen understanding of how crossmodal connectivity develops in the normal, healthy brain, highlighting novel perspectives about the principles that guide this connectivity. Methods: To the above end, a narrative review is carried out. The data documented in prior relevant studies in neuroscience, psychology and other related fields available in a wide range of prominent electronic databases are critically assessed, synthesized, interpreted with qualitative rather than quantitative elements, and linked together to form new propositions and hypotheses about neuroplasticity and crossmodal connectivity. Results: Three major themes are identified. First, it appears that neuroplasticity operates by following eight fundamental principles and crossmodal integration operates by following three principles. Second, two different forms of crossmodal connectivity, namely direct crossmodal connectivity and indirect crossmodal connectivity, are suggested to operate in both unisensory and multisensory perception. Third, three principles possibly guide the development of crossmodal connectivity into adulthood. These are labeled as the principle of innate crossmodality, the principle of evolution-driven 'neuromodular' reorganization and the principle of multimodal experience. These principles are combined to develop a three-factor interaction model of crossmodal connectivity. Conclusions: The hypothesized principles and the proposed model together advance understanding of neuroplasticity, the nature of crossmodal connectivity, and how such connectivity develops in the normal, healthy brain.
20
Sultan N, Mughal AM, Islam MNU, Malik FM. High-gain observer-based nonlinear control scheme for biomechanical sit to stand movement in the presence of sensory feedback delays. PLoS One 2021; 16:e0256049. [PMID: 34383831] [PMCID: PMC8360614] [DOI: 10.1371/journal.pone.0256049]
Abstract
Sit-to-stand movement (STS) is a mundane activity, controlled by the central-nervous-system (CNS) via a complex neurophysiological mechanism that involves coordination of limbs for successful execution. Detailed analysis and accurate simulations of STS task have significant importance in clinical intervention, rehabilitation process, and better design for assistive devices. The CNS controls STS motion by taking inputs from proprioceptors. These input signals suffer delay in transmission to CNS making movement control and coordination more complex which may lead to larger body exertion or instability. This paper deals with the problem of STS movement execution in the presence of proprioceptive feedback delays in joint position and velocity. We present a high-gain observer (HGO) based feedback linearization control technique to mimic the CNS in controlling the STS transfer. The HGO estimates immeasurable delayed states to generate input signals for feedback. The feedback linearization output control law generates the passive torques at joints to execute the STS movement. The H2 dynamic controller calculates the optimal linear gains by using physiological variables. The whole scheme is simulated in MATLAB/Simulink. The simulations illustrate physiologically improved results. The ankle, knee, and hip joint position profiles show a high correlation of 0.91, 0.97, 0.80 with the experimentally generated reference profiles. The faster observer dynamics and global boundness of controller result in compensation of delays. The low error and high correlation of simulation results demonstrate (1) the reliability and effectiveness of the proposed scheme for customization of human models and (2) highlight the fact that for detailed analysis and accurate simulations of STS movement the modeling scheme must consider nonlinearities of the system.
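The high-gain observer (HGO) idea — copy the plant dynamics and inject the output error with gains scaled by powers of 1/ε — can be sketched on a generic double integrator. This is a toy stand-in for the role the HGO plays in the paper (estimating unmeasured states for feedback), not the STS biomechanical model itself:

```python
import numpy as np

# High-gain observer sketch on a double integrator: only position is measured,
# velocity is reconstructed by the observer. Plant, gains, and feedback law are
# generic illustrations, not the paper's STS model or H2-optimized gains.

dt, eps = 1e-3, 0.05           # integration step; small eps -> high observer gains
a1, a2 = 2.0, 1.0              # observer gain coefficients (a1/eps, a2/eps**2)

x = np.array([1.0, 0.0])       # true state: [position, velocity]
xh = np.array([0.0, 0.0])      # observer estimate, deliberately started wrong

for _ in range(5000):          # simulate 5 s
    u = -2.0 * x[0] - 1.0 * x[1]   # stabilizing feedback (true state, for simplicity)
    y = x[0]                       # measured output: position only
    # plant: x1' = x2, x2' = u
    x = x + dt * np.array([x[1], u])
    # observer: copy of the plant plus output-error injection terms a_i / eps**i
    e = y - xh[0]
    xh = xh + dt * np.array([xh[1] + (a1 / eps) * e,
                             u + (a2 / eps**2) * e])

est_err = np.abs(x - xh).max()  # estimation error after the fast transient
```

With ε = 0.05 the observer error poles sit at roughly -1/ε times the plant timescale, so the estimate converges much faster than the plant moves; shrinking ε further speeds convergence at the cost of larger transient peaking, the classic HGO trade-off.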
Affiliation(s)
- Nadia Sultan
- Department of Electrical Engineering, Bahria University Islamabad, Islamabad, Pakistan
- Asif Mahmood Mughal
- Department of Electrical Engineering, Bahria University Islamabad, Islamabad, Pakistan
- Fahad Mumtaz Malik
- Department of Electrical Engineering, CE&ME National University of Sciences and Technology Islamabad, Islamabad, Pakistan
21
Sultan N, Najam Ul Islam M, Mughal AM. Nonlinear postural control paradigm for larger perturbations in the presence of neural delays. Biol Cybern 2021; 115:397-414. [PMID: 34373936] [DOI: 10.1007/s00422-021-00889-3]
Abstract
Maintaining balance is an essential skill regulated by the central nervous system (CNS) that helps humans to function effectively. Developing a physiologically motivated computational model of a neural controller with good performance is a central component for a large range of potential applications, such as the development of therapeutic and assistive devices, diagnosis of balance disorders, and designing robotic control systems. In this paper, we characterize the biomechanics of postural control system by considering the musculoskeletal dynamics in the sagittal plane, proprioceptive feedback, and a neural controller. The model includes several physiological structures, such as the feedforward and feedback mechanism, sensory noise, and proprioceptive feedback delays. A high-gain observer (HGO)-based feedback linearization controller represents the CNS analog in the modeling paradigm. The HGO gives an estimation of delayed states and the feedback linearization control law generates the feedback torques at joints to execute postural recovery movements. The whole scheme is simulated in MATLAB/Simulink. The simulation results show that our proposed scheme is robust against larger perturbations, sensory noises, feedback delays and retains a strong disturbance rejection and trajectory tracking capability. Overall, these results demonstrate that the nonlinear system dynamics, the feedforward and feedback mechanism, and physiological latencies play a key role in shaping the motor control process.
Affiliation(s)
- Nadia Sultan
- Department of Electrical Engineering, Bahria University Islamabad, Naval Complex E-8, Islamabad, Pakistan.
- Muhammad Najam Ul Islam
- Department of Electrical Engineering, Bahria University Islamabad, Naval Complex E-8, Islamabad, Pakistan
- Asif Mahmood Mughal
- Department of Electrical Engineering, Bahria University Islamabad, Naval Complex E-8, Islamabad, Pakistan
22
Moran JK, Keil J, Masurovsky A, Gutwinski S, Montag C, Senkowski D. Multisensory Processing Can Compensate for Top-Down Attention Deficits in Schizophrenia. Cereb Cortex 2021; 31:5536-5548. [PMID: 34274967] [DOI: 10.1093/cercor/bhab177]
Abstract
Studies on schizophrenia (SCZ) and aberrant multisensory integration (MSI) show conflicting results, which are potentially confounded by attention deficits in SCZ. To test this, we examined the interplay between MSI and intersensory attention (IA) in healthy controls (HCs) (N = 27) and in SCZ (N = 27). Evoked brain potentials to unisensory-visual (V), unisensory-tactile (T), or spatiotemporally aligned bisensory VT stimuli were measured with high-density electroencephalography, while participants attended blockwise to either visual or tactile inputs. Behaviorally, IA effects in SCZ, relative to HC, were diminished for unisensory stimuli, but not for bisensory stimuli. At the neural level, we observed reduced IA effects for bisensory stimuli over mediofrontal scalp regions (230-320 ms) in SCZ. The analysis of MSI, using the additive approach, revealed multiple phases of integration over occipital and frontal scalp regions (240-364 ms), which did not differ between HC and SCZ. Furthermore, IA and MSI effects were both positively related to the behavioral performance in SCZ, indicating that IA and MSI mutually facilitate bisensory stimulus processing. Multisensory processing could facilitate stimulus processing and compensate for top-down attention deficits in SCZ. Differences in attentional demands, which may be differentially compensated by multisensory processing, could account for previous conflicting findings on MSI in SCZ.
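The "additive approach" mentioned here tests whether the bisensory response differs from the sum of the unisensory responses (VT vs. V + T); a positive residual is read as superadditive interaction. A sketch with synthetic ERP-like waveforms — Gaussians standing in for evoked potentials, not the study's EEG data:

```python
import numpy as np

# Additive-model test for multisensory interaction: compare the bisensory (VT)
# response against the sum of the unisensory responses (V + T). The waveforms
# are synthetic Gaussian "ERPs" for illustration only.

t = np.linspace(0.0, 0.5, 251)                        # time (s)
bump = lambda mu, amp: amp * np.exp(-((t - mu) ** 2) / (2 * 0.02 ** 2))

erp_v = bump(0.15, 2.0)        # unisensory visual response
erp_t = bump(0.18, 1.5)        # unisensory tactile response
erp_vt = bump(0.16, 4.5)       # bisensory response, larger than either alone

interaction = erp_vt - (erp_v + erp_t)   # additive-model residual: VT - (V + T)
superadditive = interaction.max() > 0    # any interval where VT exceeds V + T?
```

In practice the residual is evaluated statistically across subjects and time windows; here the synthetic VT waveform exceeds the unisensory sum around its peak, the superadditive signature.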
Affiliation(s)
- James K Moran
- Department of Psychiatry and Psychotherapy, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, St. Hedwig Hospital, 10115 Berlin, Germany
- Julian Keil
- Department of Psychiatry and Psychotherapy, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, St. Hedwig Hospital, 10115 Berlin, Germany; Biological Psychology, Christian-Albrechts-University Kiel, 24118 Kiel, Germany
- Alexander Masurovsky
- Department of Psychiatry and Psychotherapy, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, St. Hedwig Hospital, 10115 Berlin, Germany
- Stefan Gutwinski
- Department of Psychiatry and Psychotherapy, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, St. Hedwig Hospital, 10115 Berlin, Germany
- Christiane Montag
- Department of Psychiatry and Psychotherapy, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, St. Hedwig Hospital, 10115 Berlin, Germany
- Daniel Senkowski
- Department of Psychiatry and Psychotherapy, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, St. Hedwig Hospital, 10115 Berlin, Germany
23
Smyre SA, Wang Z, Stein BE, Rowland BA. Multisensory enhancement of overt behavior requires multisensory experience. Eur J Neurosci 2021; 54:4514-4527. [PMID: 34013578] [DOI: 10.1111/ejn.15315]
Abstract
The superior colliculus (SC) is richly endowed with neurons that integrate cues from different senses to enhance their physiological responses and the overt behaviors they mediate. However, in the absence of experience with cross-modal combinations (e.g., visual-auditory), they fail to develop this characteristic multisensory capability: Their multisensory responses are no greater than their most effective unisensory responses. Presumably, this impairment in neural development would be reflected as corresponding impairments in SC-mediated behavioral capabilities such as detection and localization performance. Here, we tested that assumption directly in cats raised to adulthood in darkness. They, along with a normally reared cohort, were trained to approach brief visual or auditory stimuli. The animals were then tested with these stimuli individually and in combination under ambient light conditions consistent with their rearing conditions and home environment as well as under the opposite lighting condition. As expected, normally reared animals detected and localized the cross-modal combinations significantly better than their individual component stimuli. However, dark-reared animals showed significant defects in multisensory detection and localization performance. The results indicate that a physiological impairment in single multisensory SC neurons is predictive of an impairment in overt multisensory behaviors.
Affiliation(s)
- Scott A Smyre
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC, USA
- Zhengyang Wang
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC, USA
- Barry E Stein
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC, USA
- Benjamin A Rowland
- Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC, USA
24
Kirkels LAMH, Zhang W, Rezvani Z, van Wezel RJA, van Wanrooij MM. Visual motion integration of bidirectional transparent motion in mouse opto-locomotor reflexes. Sci Rep 2021; 11:10490. [PMID: 34006985] [PMCID: PMC8131598] [DOI: 10.1038/s41598-021-89974-y]
Abstract
Visual motion perception depends on readout of direction selective sensors. We investigated in mice whether the response to bidirectional transparent motion, activating oppositely tuned sensors, reflects integration (averaging) or winner-take-all (mutual inhibition) mechanisms. We measured whole body opto-locomotor reflexes (OLRs) to bidirectional oppositely moving random dot patterns (leftward and rightward) and compared the response to predictions based on responses to unidirectional motion (leftward or rightward). In addition, responses were compared to stimulation with stationary patterns. When comparing OLRs to bidirectional and unidirectional conditions, we found that the OLR to bidirectional motion best fits an averaging model. These results reflect integration mechanisms in neural responses to contradicting sensory evidence as has been documented for other sensory and motor domains.
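The averaging-versus-winner-take-all comparison reduces to checking which prediction derived from the unidirectional responses lies closer to the observed bidirectional response. A toy sketch with made-up amplitudes (the study uses measured OLR traces, not scalars):

```python
# Model comparison for bidirectional transparent motion: is the reflex to
# leftward+rightward motion better predicted by averaging the unidirectional
# responses, or by a winner-take-all (mutual inhibition) readout?
# Amplitudes are illustrative, not measured OLR data.

olr_left = -1.0      # reflex amplitude to leftward motion (arbitrary units)
olr_right = 1.2      # reflex amplitude to rightward motion
olr_bidir = 0.08     # observed response to bidirectional transparent motion

pred_avg = (olr_left + olr_right) / 2.0        # integration: average of both
pred_wta = max(olr_left, olr_right, key=abs)   # winner-take-all: strongest wins

err_avg = abs(olr_bidir - pred_avg)
err_wta = abs(olr_bidir - pred_wta)
best_model = "averaging" if err_avg < err_wta else "winner-take-all"
```

With a near-zero bidirectional response, the averaging prediction (the two unidirectional responses largely cancel) wins by a wide margin, matching the paper's conclusion that the OLR reflects integration rather than mutual inhibition.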
Affiliation(s)
- L A M H Kirkels
- Department of Biophysics, Donders Institute, Radboud University, Nijmegen, The Netherlands
- W Zhang
- Department of Biophysics, Donders Institute, Radboud University, Nijmegen, The Netherlands
- Z Rezvani
- School of Computer Science, Institute for Research in Fundamental Sciences, Tehran, Iran
- R J A van Wezel
- Department of Biophysics, Donders Institute, Radboud University, Nijmegen, The Netherlands; Biomedical Signals and Systems, TechMed Centre, Twente University, Enschede, The Netherlands
- M M van Wanrooij
- Department of Biophysics, Donders Institute, Radboud University, Nijmegen, The Netherlands
25
Zheng M, Xu J, Keniston L, Wu J, Chang S, Yu L. Choice-dependent cross-modal interaction in the medial prefrontal cortex of rats. Mol Brain 2021; 14:13. [PMID: 33446258 PMCID: PMC7809823 DOI: 10.1186/s13041-021-00732-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Accepted: 01/08/2021] [Indexed: 11/25/2022] Open
Abstract
Cross-modal interaction (CMI) can significantly influence perceptual and decision-making processes in many circumstances. However, it remains poorly understood what integrative strategies the brain employs to deal with different task contexts. To explore this, we examined neural activity in the medial prefrontal cortex (mPFC) of rats performing cue-guided two-alternative forced-choice tasks. In a task requiring rats to discriminate stimuli based on an auditory cue, the simultaneous presentation of an uninformative visual cue substantially strengthened mPFC neurons' capability of auditory discrimination, mainly through enhancing the response to the preferred cue. It also increased the number of neurons revealing a cue preference. When the task was changed slightly so that a visual cue, like the auditory one, denoted a specific behavioral direction, mPFC neurons frequently showed a different CMI pattern, with cross-modal enhancement best evoked in information-congruent multisensory trials. In a free-choice task, however, the majority of neurons failed to show a cross-modal enhancement effect or cue preference. These results indicate that CMI at the neuronal level is context-dependent, in a way that differs from what has been shown in previous studies.
Affiliation(s)
- Mengyao Zheng
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai 200062, China
- Jinghong Xu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai 200062, China
- Les Keniston
- Department of Physical Therapy, University of Maryland Eastern Shore, Princess Anne, MD 21853, USA
- Jing Wu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai 200062, China
- Song Chang
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai 200062, China
- Liping Yu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai 200062, China
26
Zumer JM, White TP, Noppeney U. The neural mechanisms of audiotactile binding depend on asynchrony. Eur J Neurosci 2020; 52:4709-4731. [PMID: 32725895 DOI: 10.1111/ejn.14928] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2020] [Revised: 07/06/2020] [Accepted: 07/24/2020] [Indexed: 11/30/2022]
Abstract
Asynchrony is a critical cue informing the brain whether sensory signals are caused by a common source and should be integrated or segregated. This psychophysics-electroencephalography (EEG) study investigated the influence of asynchrony on how the brain binds audiotactile (AT) signals to enable faster responses in a redundant target paradigm. Human participants actively responded (psychophysics) or passively attended (EEG) to noise bursts, "taps-to-the-face" and their AT combinations at seven AT asynchronies: 0, ±20, ±70 and ±500 ms. Behaviourally, observers were faster at detecting AT than unisensory stimuli within a temporal integration window: the redundant target effect was maximal for synchronous stimuli and declined within a ≤70 ms AT asynchrony. EEG revealed a cascade of AT interactions that relied on different neural mechanisms depending on AT asynchrony. At small (≤20 ms) asynchronies, AT interactions arose for evoked response potentials (ERPs) at 110 ms and ~400 ms post-stimulus. Selectively at ±70 ms asynchronies, AT interactions were observed for the P200 ERP, theta-band inter-trial coherence (ITC) and power at ~200 ms post-stimulus. In conclusion, AT binding was mediated by distinct neural mechanisms depending on the asynchrony of the AT signals. Early AT interactions in ERPs and theta-band ITC and power were critical for the behavioural response facilitation within a ≤±70 ms temporal integration window.
Affiliation(s)
- Johanna M Zumer
- School of Psychology, University of Birmingham, Birmingham, UK; Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, UK; Centre for Human Brain Health, University of Birmingham, Birmingham, UK; School of Life and Health Sciences, Aston University, Birmingham, UK
- Thomas P White
- School of Psychology, University of Birmingham, Birmingham, UK; Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, UK
- Uta Noppeney
- School of Psychology, University of Birmingham, Birmingham, UK; Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, UK; Centre for Human Brain Health, University of Birmingham, Birmingham, UK; Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, The Netherlands
27
Differential Rapid Plasticity in Auditory and Visual Responses in the Primarily Multisensory Orbitofrontal Cortex. eNeuro 2020; 7:ENEURO.0061-20.2020. [PMID: 32424057 PMCID: PMC7294472 DOI: 10.1523/eneuro.0061-20.2020] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Accepted: 03/26/2020] [Indexed: 01/17/2023] Open
Abstract
Given the connectivity of the orbitofrontal cortex (OFC) with sensory areas and areas involved in goal execution, it is likely that the OFC, along with its function in reward processing, also plays a role in perception-based multisensory decision-making. To understand the mechanisms involved in multisensory decision-making, it is important to first know how different sensory stimuli are encoded in single neurons of the mouse OFC. Ruling out effects of behavioral state, memory, and other factors, we studied responses of the anesthetized mouse OFC to auditory, visual, and audiovisual/multisensory stimuli, multisensory associations, and the sensory-driven input organization of the OFC. Almost all OFC single neurons were found to be multisensory in nature, with sublinear to supralinear integration of the component unisensory stimuli. With a novel multisensory oddball stimulus set, we show that the OFC receives both unisensory and multisensory inputs, further corroborated by retrograde tracers showing labeling in secondary auditory and visual cortices, which we find to have similar multisensory integration and responses. With long audiovisual pairing/association, we show rapid plasticity in OFC single neurons, with a strong visual bias, leading to strong depression of auditory responses and effective enhancement of visual responses. Such rapid multisensory-association-driven plasticity is absent in the auditory and visual cortices, suggesting its emergence in the OFC. Based on these results, we propose a hypothetical local circuit model in the OFC that integrates auditory and visual information and participates in computing stimulus value in dynamic multisensory environments.
28
Wang Z, Yu L, Xu J, Stein BE, Rowland BA. Experience Creates the Multisensory Transform in the Superior Colliculus. Front Integr Neurosci 2020; 14:18. [PMID: 32425761 PMCID: PMC7212431 DOI: 10.3389/fnint.2020.00018] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2019] [Accepted: 03/18/2020] [Indexed: 11/15/2022] Open
Abstract
Although the ability to integrate information across the senses is compromised in some individuals for unknown reasons, similar defects have been observed when animals are reared without multisensory experience. The experience-dependent development of multisensory integration has been studied most extensively using the visual-auditory neuron of the cat superior colliculus (SC) as a neural model. In the normally-developed adult, SC neurons react to concordant visual-auditory stimuli by integrating their inputs in real-time to produce non-linearly amplified multisensory responses. However, when prevented from gathering visual-auditory experience, their multisensory responses are no more robust than their responses to the individual component stimuli. The mechanisms operating in this defective state are poorly understood. Here we examined the responses of SC neurons in “naïve” (i.e., dark-reared) and “neurotypic” (i.e., normally-reared) animals on a millisecond-by-millisecond basis to determine whether multisensory experience changes the operation by which unisensory signals are converted into multisensory outputs (the “multisensory transform”), or whether it changes the dynamics of the unisensory inputs to that transform (e.g., their synchronization and/or alignment). The results reveal that the major impact of experience was on the multisensory transform itself. Whereas neurotypic multisensory responses exhibited non-linear amplification near their onset followed by linear amplification thereafter, the naive responses showed no integration in the initial phase of the response and a computation consistent with competition in its later phases. The results suggest that multisensory experience creates an entirely new computation by which convergent unisensory inputs are used cooperatively to enhance the physiological salience of cross-modal events and thereby facilitate normal perception and behavior.
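The "non-linearly amplified" responses described above are conventionally quantified in the SC literature with a multisensory enhancement index, comparing the cross-modal response to the largest unisensory response. A minimal sketch follows; the firing-rate values and names are illustrative assumptions, not data from this study.

```python
def multisensory_enhancement(cm, sm_max):
    """Percent by which the cross-modal (CM) response exceeds the largest
    unisensory response (SMmax): 100 * (CM - SMmax) / SMmax."""
    return 100.0 * (cm - sm_max) / sm_max

# Hypothetical impulse counts for one SC neuron:
resp_visual, resp_auditory = 4.0, 6.0
resp_audiovisual = 15.0   # exceeds 4 + 6 as well, i.e. superadditive
me = multisensory_enhancement(resp_audiovisual, max(resp_visual, resp_auditory))
print(me)  # 150.0
```

An enhancement of 150% means the multisensory response is two and a half times the best unisensory response; values near 0 in the "naïve" state correspond to responses no greater than the most effective component stimulus.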
Affiliation(s)
- Zhengyang Wang
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, China
- Liping Yu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China
- Jinghong Xu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, United States
- Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, United States
29
Shaw LH, Freedman EG, Crosse MJ, Nicholas E, Chen AM, Braiman MS, Molholm S, Foxe JJ. Operating in a Multisensory Context: Assessing the Interplay Between Multisensory Reaction Time Facilitation and Inter-sensory Task-switching Effects. Neuroscience 2020; 436:122-135. [PMID: 32325100 DOI: 10.1016/j.neuroscience.2020.04.013] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2019] [Revised: 04/03/2020] [Accepted: 04/06/2020] [Indexed: 11/28/2022]
Abstract
Individuals respond faster to presentations of bisensory stimuli (e.g. audio-visual targets) than to presentations of either unisensory constituent in isolation (i.e. to the auditory-alone or visual-alone components of an audio-visual stimulus). This well-established multisensory speeding effect, termed the redundant signals effect (RSE), is not predicted by simple linear summation of the unisensory response time probability distributions. Rather, the speeding is typically faster than this prediction, leading researchers to ascribe the RSE to a so-called co-activation account. According to this account, multisensory neural processing occurs whereby the unisensory inputs are integrated to produce more effective sensory-motor activation. However, the typical paradigm used to test for RSE involves random sequencing of unisensory and bisensory inputs in a mixed design, raising the possibility of an alternate attention-switching account. This intermixed design requires participants to switch between sensory modalities on many task trials (e.g. from responding to a visual stimulus to an auditory stimulus). Here we show that much, if not all, of the RSE under this paradigm can be attributed to slowing of reaction times to unisensory stimuli resulting from modality switching, and is not in fact due to speeding of responses to AV stimuli. As such, the present data do not support a co-activation account, but rather suggest that switching and mixing costs akin to those observed during classic task-switching paradigms account for the observed RSE.
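The "simple linear summation of the unisensory response time probability distributions" mentioned above is the race-model (Miller) bound: under independent racing channels with no co-activation, the multisensory RT distribution can never exceed the sum of the unisensory distributions. The sketch below, with names of our own choosing rather than the paper's, checks that bound on empirical RT samples.

```python
def ecdf(rts, t):
    """Empirical cumulative distribution: proportion of RTs at or below t (ms)."""
    return sum(rt <= t for rt in rts) / len(rts)

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Largest positive exceedance of P(RT_AV <= t) over the race bound
    min(1, P(RT_A <= t) + P(RT_V <= t)) across t_grid; 0.0 means no violation,
    and a positive value is conventionally taken as evidence of co-activation."""
    excess = (ecdf(rt_av, t) - min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
              for t in t_grid)
    return max(0.0, *excess)
```

A mixed-design confound like the modality-switching account in this paper would inflate the unisensory RTs, and hence the apparent violation, which is why the authors separate switch costs from genuine multisensory speeding.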
Affiliation(s)
- Luke H Shaw
- The Cognitive Neurophysiology Laboratory, The Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY 14642, USA
- Edward G Freedman
- The Cognitive Neurophysiology Laboratory, The Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY 14642, USA
- Michael J Crosse
- The Cognitive Neurophysiology Laboratory, Department of Pediatrics & Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY 10461, USA
- Eric Nicholas
- The Cognitive Neurophysiology Laboratory, The Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY 14642, USA
- Allen M Chen
- The Cognitive Neurophysiology Laboratory, The Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY 14642, USA
- Matthew S Braiman
- The Cognitive Neurophysiology Laboratory, The Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY 14642, USA
- Sophie Molholm
- The Cognitive Neurophysiology Laboratory, The Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY 14642, USA; The Cognitive Neurophysiology Laboratory, Department of Pediatrics & Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY 10461, USA
- John J Foxe
- The Cognitive Neurophysiology Laboratory, The Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY 14642, USA; The Cognitive Neurophysiology Laboratory, Department of Pediatrics & Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY 10461, USA
30
Gharaei S, Honnuraiah S, Arabzadeh E, Stuart GJ. Superior colliculus modulates cortical coding of somatosensory information. Nat Commun 2020; 11:1693. [PMID: 32245963 PMCID: PMC7125203 DOI: 10.1038/s41467-020-15443-1] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2019] [Accepted: 03/02/2020] [Indexed: 12/05/2022] Open
Abstract
The cortex modulates activity in superior colliculus via a direct projection. What is largely unknown is whether (and if so how) the superior colliculus modulates activity in the cortex. Here, we investigate this issue and show that optogenetic activation of superior colliculus changes the input-output relationship of neurons in somatosensory cortex, enhancing responses to low amplitude whisker deflections. While there is no direct pathway from superior colliculus to somatosensory cortex, we found that activation of superior colliculus drives spiking in the posterior medial (POm) nucleus of the thalamus via a powerful monosynaptic pathway. Furthermore, POm neurons receiving input from superior colliculus provide monosynaptic excitatory input to somatosensory cortex. Silencing POm abolished the capacity of superior colliculus to modulate cortical whisker responses. Our findings indicate that the superior colliculus, which plays a key role in attention, modulates sensory processing in somatosensory cortex via a powerful di-synaptic pathway through the thalamus.
Affiliation(s)
- Saba Gharaei
- Eccles Institute of Neuroscience, John Curtin School of Medical Research, The Australian National University, Canberra, ACT, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, The Australian National University Node, Canberra, ACT, Australia
- Suraj Honnuraiah
- Eccles Institute of Neuroscience, John Curtin School of Medical Research, The Australian National University, Canberra, ACT, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, The Australian National University Node, Canberra, ACT, Australia
- Ehsan Arabzadeh
- Eccles Institute of Neuroscience, John Curtin School of Medical Research, The Australian National University, Canberra, ACT, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, The Australian National University Node, Canberra, ACT, Australia
- Greg J Stuart
- Eccles Institute of Neuroscience, John Curtin School of Medical Research, The Australian National University, Canberra, ACT, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, The Australian National University Node, Canberra, ACT, Australia
31
Colonius H, Diederich A. Formal models and quantitative measures of multisensory integration: a selective overview. Eur J Neurosci 2020; 51:1161-1178. [DOI: 10.1111/ejn.13813] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2017] [Revised: 12/18/2017] [Accepted: 12/20/2017] [Indexed: 11/26/2022]
Affiliation(s)
- Hans Colonius
- Department of Psychology, Carl von Ossietzky Universität Oldenburg, Oldenburg 26111, Germany; Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA
- Adele Diederich
- Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA; Life Sciences and Chemistry, Jacobs University Bremen, Bremen, Germany
32
Stein BE, Rowland BA. Using superior colliculus principles of multisensory integration to reverse hemianopia. Neuropsychologia 2020; 141:107413. [PMID: 32113921 DOI: 10.1016/j.neuropsychologia.2020.107413] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2019] [Revised: 02/04/2020] [Accepted: 02/24/2020] [Indexed: 11/18/2022]
Abstract
The diversity of our senses conveys many advantages; it enables them to compensate for one another when needed, and the information they provide about a common event can be integrated to facilitate its processing and, ultimately, adaptive responses. These cooperative interactions are produced by multisensory neurons. A well-studied model in this context is the multisensory neuron in the output layers of the superior colliculus (SC). These neurons integrate and amplify their cross-modal (e.g., visual-auditory) inputs, thereby enhancing the physiological salience of the initiating event and the probability that it will elicit SC-mediated detection, localization, and orientation behavior. Repeated experience with the same visual-auditory stimulus can also increase the neuron's sensitivity to these individual inputs. This observation raised the possibility that such plasticity could be engaged to restore visual responsiveness when compromised. For example, unilateral lesions of visual cortex compromise the visual responsiveness of neurons in the multisensory output layers of the ipsilesional SC and produce profound contralesional blindness (hemianopia). The possibility that multisensory plasticity could restore the visual responses of these neurons, and reverse blindness, was tested in the cat model of hemianopia. Hemianopic subjects were repeatedly presented with spatiotemporally congruent visual-auditory stimulus pairs in the blinded hemifield on a daily or weekly basis. After several weeks of this multisensory exposure paradigm, visual responsiveness was restored in SC neurons and behavioral responses were elicited by visual stimuli in the previously blind hemifield. The constraints on the effectiveness of this procedure proved to be the same as those constraining SC multisensory plasticity: whereas repeated presentation of a congruent visual-auditory stimulus was highly effective, neither exposure to its individual component stimuli nor to these stimuli in non-congruent configurations was effective. The restored visual responsiveness proved to be robust, highly competitive with that in the intact hemifield, and sufficient to support visual discrimination.
Affiliation(s)
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd, Winston-Salem, NC 27157, USA
- Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd, Winston-Salem, NC 27157, USA
33
Klatt S, Smeeton NJ. Visual and Auditory Information During Decision Making in Sport. J Sport Exerc Psychol 2020; 42:15-25. [PMID: 31883505 DOI: 10.1123/jsep.2019-0107] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/14/2019] [Revised: 10/07/2019] [Accepted: 10/30/2019] [Indexed: 06/10/2023]
Abstract
In 2 experiments, the authors investigated the effects of bimodal integration in a sport-specific task. Beach volleyball players were required to make a tactical decision, responding either verbally or via a motor response, after being presented with visual, auditory, or both kinds of stimuli in a beach volleyball scenario. In Experiment 1, players made the correct decision in a game situation more often when visual and auditory information were congruent than in trials in which they experienced only one of the modalities or incongruent information. Decision-making accuracy was greater when motor, rather than verbal, responses were given. Experiment 2 replicated this congruence effect using different stimulus material and showed a decreasing effect of visual stimulation on decision making as a function of shorter visual stimulus durations. In conclusion, this study shows that bimodal integration of congruent visual and auditory information results in more accurate decision making in sport than unimodal information.
34
Foxe JJ, Del Bene VA, Ross LA, Ridgway EM, Francisco AA, Molholm S. Multisensory Audiovisual Processing in Children With a Sensory Processing Disorder (II): Speech Integration Under Noisy Environmental Conditions. Front Integr Neurosci 2020; 14:39. [PMID: 32765229 PMCID: PMC7381232 DOI: 10.3389/fnint.2020.00039] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2020] [Accepted: 06/16/2020] [Indexed: 12/02/2022] Open
Abstract
Background: There exists a cohort of children and adults who exhibit an inordinately high degree of discomfort when experiencing what would be considered moderate and manageable levels of sensory input. That is, they show over-responsivity in the face of entirely typical sound, light, touch, taste, or smell inputs, and this occurs to such an extent that it interferes with their daily functioning and reaches clinical levels of dysfunction. What marks these individuals apart is that this sensory processing disorder (SPD) is observed in the absence of other symptom clusters that would result in a diagnosis of Autism, ADHD, or other neurodevelopmental disorders more typically associated with sensory processing difficulties. One major theory forwarded to account for these SPDs posits a deficit in multisensory integration, such that the various sensory inputs are not appropriately integrated into the central nervous system, leading to an overwhelming sensory-perceptual environment, and in turn to the sensory-defensive phenotype observed in these individuals. Methods: We tested whether children (6-16 years) with an over-responsive SPD phenotype (N = 12) integrated multisensory speech differently from age-matched typically-developing controls (TD: N = 12). Participants identified monosyllabic words while background noise level and sensory modality (auditory-alone, visual-alone, audiovisual) were varied in pseudorandom order. Improved word identification when speech was both seen and heard compared to when it was simply heard served to index multisensory speech integration. Results: School-aged children with an SPD show a deficit in the ability to benefit from the combination of both seen and heard speech inputs under noisy environmental conditions, suggesting that these children do not benefit from multisensory integrative processing to the same extent as their typically developing peers. 
In contrast, auditory-alone performance did not differ between the groups, signifying that this multisensory deficit is not simply due to impaired processing of auditory speech. Conclusions: Children with an over-responsive SPD show a substantial reduction in their ability to benefit from complementary audiovisual speech to enhance speech perception in a noisy environment. This has clear implications for performance in the classroom and other learning environments. Impaired multisensory integration may contribute to the sensory over-reactivity that is definitional of SPD.
Affiliation(s)
- John J Foxe
- The Cognitive Neurophysiology Laboratory, Department of Neuroscience, The Ernest J. Del Monte Institute for Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY, United States; The Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine and Montefiore Medical Center, Bronx, NY, United States; The Dominic P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States
- Victor A Del Bene
- The Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine and Montefiore Medical Center, Bronx, NY, United States
- Lars A Ross
- The Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine and Montefiore Medical Center, Bronx, NY, United States
- Elizabeth M Ridgway
- The Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine and Montefiore Medical Center, Bronx, NY, United States
- Ana A Francisco
- The Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine and Montefiore Medical Center, Bronx, NY, United States
- Sophie Molholm
- The Cognitive Neurophysiology Laboratory, Department of Neuroscience, The Ernest J. Del Monte Institute for Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY, United States; The Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine and Montefiore Medical Center, Bronx, NY, United States; The Dominic P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States
35
Sugiyama S, Kinukawa T, Takeuchi N, Nishihara M, Shioiri T, Inui K. Tactile Cross-Modal Acceleration Effects on Auditory Steady-State Response. Front Integr Neurosci 2019; 13:72. [PMID: 31920574 PMCID: PMC6927992 DOI: 10.3389/fnint.2019.00072] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2019] [Accepted: 12/02/2019] [Indexed: 01/09/2023] Open
Abstract
In the sensory cortex, cross-modal interaction occurs during the early cortical stages of processing; however, its effect on the speed of neuronal activity remains unclear. In this study, we used magnetoencephalography (MEG) to investigate whether tactile stimulation influences auditory steady-state responses (ASSRs). To this end, a 0.5-ms electrical pulse was randomly presented to the dorsum of the left or right hand of 12 healthy volunteers at 700 ms while a train of 25-ms pure tones was applied to the left or right side at 75 dB for 1,200 ms. Peak latencies of the 40-Hz ASSR were measured. Our results indicated that tactile stimulation significantly shortened subsequent ASSR latency. This cross-modal effect was observed from approximately 50 ms to 125 ms after the onset of tactile stimulation. The somatosensory information that appeared to converge on the auditory system may have arisen during the early processing stages, with the reduced ASSR latency indicating that a new sensory event from the cross-modal inputs served to increase the speed of ongoing sensory processing. Collectively, our findings indicate that ASSR latency changes are a sensitive index of accelerated processing.
Affiliation(s)
- Shunsuke Sugiyama
- Department of Psychiatry and Psychotherapy, Gifu University Graduate School of Medicine, Gifu, Japan
- Tomoaki Kinukawa
- Department of Anesthesiology, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Makoto Nishihara
- Multidisciplinary Pain Center, Aichi Medical University, Nagakute, Japan
- Toshiki Shioiri
- Department of Psychiatry and Psychotherapy, Gifu University Graduate School of Medicine, Gifu, Japan
- Koji Inui
- Department of Functioning and Disability, Institute for Developmental Research, Kasugai, Japan
36
Roy C, Dalla Bella S, Pla S, Lagarde J. Multisensory integration and behavioral stability. Psychol Res 2019; 85:879-886. [PMID: 31792611 DOI: 10.1007/s00426-019-01273-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2019] [Accepted: 11/18/2019] [Indexed: 11/28/2022]
Abstract
Information coming from multiple senses, as compared to a single one, typically enhances our performance. The multisensory improvement has been extensively examined in perception studies, as well as in tasks involving a motor response like a simple reaction time. However, how this effect extends to more complex behavior, typically involving the coordination of movements, such as bimanual coordination or walking, is still unclear. A critical element in achieving motor coordination in complex behavior is its stability. Reaching a stable state in the coordination pattern allows complex behavior to be sustained over time (e.g., without interruption or negative consequences, like falling). This study focuses on the relation between stability in the coordination of movement patterns, like walking, and multisensory improvement. Participants walk with unimodal and audio-tactile metronomes presented either at their preferred rate or at a slower walking rate, the instruction being to synchronize their steps to the metronomes. Walking at a slower rate makes gait more variable than walking at the preferred rate. Interestingly, however, the multimodal stimuli enhance the stability of motor coordination but only in the slower condition. Thus, the reduced stability of the coordination pattern (at a slower gait rate) prompts the sensorimotor system to capitalize on multimodal stimulation. These findings provide evidence of a new link between multisensory improvement and behavioral stability in the context of an ecological sensorimotor task.
Collapse
Affiliation(s)
- Charlotte Roy
- EuroMov Laboratory, Montpellier University, Montpellier, France
- Applied Cognitive Psychology Laboratory, Ulm University, Albert-Einstein-Allee 43, 89081 Ulm, Germany
| | - Simone Dalla Bella
- EuroMov Laboratory, Montpellier University, Montpellier, France
- International Laboratory for Brain, Music, and Sound Research (BRAMS), Montreal, Canada
- Department of Psychology, University of Montreal, Montreal, Canada
| | - Simon Pla
- EuroMov Laboratory, Montpellier University, Montpellier, France
| | - Julien Lagarde
- EuroMov Laboratory, Montpellier University, Montpellier, France
| |
Collapse
|
37
|
Cross-Modal Competition: The Default Computation for Multisensory Processing. J Neurosci 2018; 39:1374-1385. [PMID: 30573648 DOI: 10.1523/jneurosci.1806-18.2018] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2018] [Revised: 12/04/2018] [Accepted: 12/08/2018] [Indexed: 11/21/2022] Open
Abstract
Mature multisensory superior colliculus (SC) neurons integrate information across the senses to enhance their responses to spatiotemporally congruent cross-modal stimuli. The development of this neurotypic feature of SC neurons requires experience with cross-modal cues. In the absence of such experience the response of an SC neuron to congruent cross-modal cues is no more robust than its response to the most effective component cue. This "default" or "naive" state is believed to be one in which cross-modal signals do not interact. The present results challenge this characterization by identifying interactions between visual-auditory signals in male and female cats reared without visual-auditory experience. By manipulating the relative effectiveness of the visual and auditory cross-modal cues that were presented to each of these naive neurons, an active competition between cross-modal signals was revealed. Although contrary to current expectations, this result is explained by a neuro-computational model in which the default interaction is mutual inhibition. These findings suggest that multisensory neurons at all maturational stages are capable of some form of multisensory integration, and use experience with cross-modal stimuli to transition from their initial state of competition to their mature state of cooperation. By doing so, they develop the ability to enhance the physiological salience of cross-modal events thereby increasing their impact on the sensorimotor circuitry of the SC, and the likelihood that biologically significant events will elicit SC-mediated overt behaviors. SIGNIFICANCE STATEMENT: The present results demonstrate that the default mode of multisensory processing in the superior colliculus is competition, not non-integration as previously characterized.
A neuro-computational model explains how these competitive dynamics can be implemented via mutual inhibition, and how this default mode is superseded by the emergence of cooperative interactions during development.
Collapse
|
38
|
Gharaei S, Arabzadeh E, Solomon SG. Integration of visual and whisker signals in rat superior colliculus. Sci Rep 2018; 8:16445. [PMID: 30401871 PMCID: PMC6219574 DOI: 10.1038/s41598-018-34661-8] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2018] [Accepted: 10/16/2018] [Indexed: 12/12/2022] Open
Abstract
Multisensory integration is a process by which signals from different sensory modalities are combined to facilitate detection and localization of external events. One substrate for multisensory integration is the midbrain superior colliculus (SC), which plays an important role in orienting behavior. In rodent SC, visual and somatosensory (whisker) representations are in approximate registration, but whether and how these signals interact is unclear. We measured spiking activity in SC of anesthetized hooded rats during presentation of visual and whisker stimuli, tested simultaneously or in isolation. Visual responses were found in all layers, but were primarily located in superficial layers. Whisker responsive sites were primarily found in intermediate layers. In single- and multi-unit recording sites, spiking activity was usually only sensitive to one modality when stimuli were presented in isolation. By contrast, we observed robust and primarily suppressive interactions when stimuli were presented simultaneously to both modalities. We conclude that while visual and whisker representations in SC of rat are partially overlapping, there is limited excitatory convergence onto individual sites. Multimodal integration may instead rely on suppressive interactions between modalities.
Collapse
Affiliation(s)
- Saba Gharaei
- Discipline of Physiology, School of Medical Sciences, The University of Sydney, Sydney, Australia
- Eccles Institute of Neuroscience, John Curtin School of Medical Research, The Australian National University, Canberra, Australia
- Australian Research Council Centre of Excellence for Integrative Brain Function, The Australian National University Node, Canberra, Australia
| | - Ehsan Arabzadeh
- Eccles Institute of Neuroscience, John Curtin School of Medical Research, The Australian National University, Canberra, Australia
- Australian Research Council Centre of Excellence for Integrative Brain Function, The Australian National University Node, Canberra, Australia
| | - Samuel G Solomon
- Discipline of Physiology, School of Medical Sciences, The University of Sydney, Sydney, Australia
- Institute of Behavioural Neuroscience, University College London, London, UK
| |
Collapse
|
39
|
Xu J, Bi T, Wu J, Meng F, Wang K, Hu J, Han X, Zhang J, Zhou X, Keniston L, Yu L. Spatial receptive field shift by preceding cross-modal stimulation in the cat superior colliculus. J Physiol 2018; 596:5033-5050. [PMID: 30144059 DOI: 10.1113/jp275427] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2018] [Accepted: 08/21/2018] [Indexed: 12/11/2022] Open
Abstract
KEY POINTS: It has been known for some time that sensory information of one type can bias the spatial perception of another modality. However, there is a lack of evidence of this occurring in individual neurons. In the present study, we found that the spatial receptive field of superior colliculus multisensory neurons could be dynamically shifted by a preceding stimulus in a different modality. The extent to which the receptive field shifted was dependent on both temporal and spatial gaps between the preceding and following stimuli, as well as the salience of the preceding stimulus. This result provides a neural mechanism that could underlie the process of cross-modal spatial calibration. ABSTRACT: Psychophysical studies have shown that the different senses can be spatially entrained by each other. This can be observed in certain phenomena, such as ventriloquism, in which a visual stimulus can attract the perceived location of a spatially discordant sound. However, the neural mechanism underlying this cross-modal spatial recalibration has remained unclear, as has whether it takes place dynamically. We explored these issues in multisensory neurons of the cat superior colliculus (SC), a midbrain structure that involves both cross-modal and sensorimotor integration. Sequential cross-modal stimulation showed that the preceding stimulus can shift the receptive field (RF) of the lagging response. This cross-modal spatial calibration took place in both auditory and visual RFs, although auditory RFs shifted slightly more. By contrast, if a preceding stimulus was from the same modality, it failed to induce a similarly substantial RF shift. The extent of the RF shift was dependent on both temporal and spatial gaps between the preceding and following stimuli, as well as the salience of the preceding stimulus. A narrow time gap and high stimulus salience were able to induce larger RF shifts.
In addition, when both visual and auditory stimuli were presented simultaneously, a substantial RF shift toward the location-fixed stimulus was also induced. These results, taken together, reveal an online cross-modal process and reflect the details of the organization of SC inter-sensory spatial calibration.
Collapse
Affiliation(s)
- Jinghong Xu
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
| | - Tingting Bi
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
| | - Jing Wu
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
| | - Fanzhu Meng
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
| | - Kun Wang
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
| | - Jiawei Hu
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
| | - Xiao Han
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
| | - Jiping Zhang
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
| | - Xiaoming Zhou
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
| | - Les Keniston
- Department of Physical Therapy, University of Maryland Eastern Shore, Princess Anne, MD, USA
| | - Liping Yu
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
| |
Collapse
|
40
|
Effect of acceleration of auditory inputs on the primary somatosensory cortex in humans. Sci Rep 2018; 8:12883. [PMID: 30150686 PMCID: PMC6110726 DOI: 10.1038/s41598-018-31319-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2018] [Accepted: 08/17/2018] [Indexed: 11/09/2022] Open
Abstract
Cross-modal interaction occurs during the early stages of processing in the sensory cortex; however, its effect on neuronal activity speed remains unclear. We used magnetoencephalography to investigate whether auditory stimulation influences the initial cortical activity in the primary somatosensory cortex. A 25-ms pure tone was randomly presented to the left or right side of healthy volunteers at 1000 ms while electrical pulses were applied to the left or right median nerve at 20 Hz for 1500 ms; a pulse train was used because we did not observe any cross-modal effect elicited by a single pulse. The latency of N20m originating from Brodmann's area 3b was measured for each pulse. The auditory stimulation significantly shortened the N20m latency at 1050 and 1100 ms. This reduction in N20m latency was identical for the ipsilateral and contralateral sounds for both latency points. Therefore, somatosensory-auditory interaction, such as input to area 3b from the thalamus, occurred during the early stages of synaptic transmission. Auditory information that converged on the somatosensory system was considered to have arisen from the early stages of the feedforward pathway. Acceleration of information processing through the cross-modal interaction seemed to be partly due to faster processing in the sensory cortex.
Collapse
|
41
|
Michail G, Keil J. High cognitive load enhances the susceptibility to non-speech audiovisual illusions. Sci Rep 2018; 8:11530. [PMID: 30069059 PMCID: PMC6070496 DOI: 10.1038/s41598-018-30007-6] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2018] [Accepted: 07/20/2018] [Indexed: 12/03/2022] Open
Abstract
The role of attentional processes in the integration of input from different sensory modalities is complex and multifaceted. Importantly, little is known about how simple, non-linguistic stimuli are integrated when the resources available for sensory processing are exhausted. We studied this question by examining multisensory integration under conditions of limited endogenous attentional resources. Multisensory integration was assessed through the sound-induced flash illusion (SIFI), in which a flash presented simultaneously with two short auditory beeps is often perceived as two flashes, while cognitive load was manipulated using an n-back task. A one-way repeated measures ANOVA revealed that increased cognitive demands had a significant effect on the perception of the illusion, while post-hoc tests showed that participants' illusion perception was increased when attentional resources were limited. Additional analysis demonstrated that this effect was not related to a response bias. These findings provide evidence that the integration of non-speech, audiovisual stimuli is enhanced under reduced attentional resources, and therefore support the notion that top-down attentional control plays an essential role in multisensory integration.
Collapse
Affiliation(s)
- Georgios Michail
- Department of Psychiatry and Psychotherapy, Multisensory Integration Lab, Charité Universitätsmedizin Berlin, Berlin, Germany.
| | - Julian Keil
- Department of Psychiatry and Psychotherapy, Multisensory Integration Lab, Charité Universitätsmedizin Berlin, Berlin, Germany
- Biological Psychology, Christian-Albrechts-University Kiel, Kiel, Germany
| |
Collapse
|
42
|
Someya M, Ogawa H. Multisensory enhancement of burst activity in an insect auditory neuron. J Neurophysiol 2018; 120:139-148. [PMID: 29641303 DOI: 10.1152/jn.00798.2017] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Detecting predators is crucial for survival. In insects, a few sensory interneurons receiving sensory input from a distinct receptive organ extract specific features informing the animal about approaching predators and mediate avoidance behaviors. Although integration of multiple sensory cues relevant to the predator enhances sensitivity and precision, it has not been established whether the sensory interneurons that act as predator detectors integrate multiple modalities of sensory inputs elicited by predators. Using intracellular recording techniques, we found that the cricket auditory neuron AN2, which is sensitive to the ultrasound-like echolocation calls of bats, responds to airflow stimuli transduced by the cercal organ, a mechanoreceptor in the abdomen. AN2 enhanced spike outputs in response to cross-modal stimuli combining sound with airflow, and the linearity of the summation of multisensory integration depended on the magnitude of the evoked response. The enhanced AN2 activity contained bursts, triggering avoidance behavior. Moreover, cross-modal stimuli elicited larger and longer lasting excitatory postsynaptic potentials (EPSP) than unimodal stimuli, which would result from a sublinear summation of EPSPs evoked respectively by sound or airflow. The persistence of EPSPs was correlated with the occurrence and structure of burst activity. Our findings indicate that AN2 integrates bimodal signals and that multisensory integration rather than unimodal stimulation alone more reliably generates bursting activity. NEW & NOTEWORTHY Crickets detect ultrasound with their tympanum and airflow with their cercal organ and process them as alert signals of predators. These sensory signals are integrated by auditory neuron AN2 in the early stages of sensory processing. Multisensory inputs from different sensory channels enhanced excitatory postsynaptic potentials to facilitate burst firing, which could trigger avoidance steering in flying crickets. 
Our results highlight the cellular basis of multisensory integration in AN2 and possible effects on escape behavior.
Collapse
Affiliation(s)
- Makoto Someya
- Graduate School of Life Science, Hokkaido University , Sapporo , Japan
| | - Hiroto Ogawa
- Department of Biological Sciences, Faculty of Science, Hokkaido University , Sapporo , Japan
| |
Collapse
|
43
|
Riley MR, Qi XL, Constantinidis C. Functional specialization of areas along the anterior-posterior axis of the primate prefrontal cortex. Cereb Cortex 2018; 27:3683-3697. [PMID: 27371761 DOI: 10.1093/cercor/bhw190] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Functional specialization of areas along the anterior-posterior axis of the lateral prefrontal cortex has been speculated, but little evidence exists about distinct neurophysiological properties between prefrontal sub-regions. To address this issue, we divided the lateral prefrontal cortex into a posterior-dorsal, a mid-dorsal, an anterior-dorsal, a posterior-ventral, and an anterior-ventral region. Selectivity for spatial locations, shapes, and colors was evaluated in six monkeys never trained in working memory tasks, while they viewed the stimuli passively. Recordings from over two thousand neurons revealed systematic differences between anterior and posterior regions. In the dorsal prefrontal cortex, anterior regions exhibited the largest receptive fields, longest response latencies, and lowest amount of information for stimuli. In the ventral prefrontal cortex, posterior regions were characterized by a low percentage of responsive neurons to any stimuli we used, consistent with high specialization for stimulus features. Additionally, spatial information was more prominent in the dorsal and color in ventral regions. Our results provide neurophysiological evidence for a rostral-caudal gradient of stimulus selectivity through the prefrontal cortex, suggesting that posterior areas are selective for stimuli even when these are not relevant for execution of a task, and that anterior areas are likely engaged in more abstract operations.
Collapse
Affiliation(s)
- Mitchell R Riley
- Department of Neurobiology & Anatomy, Wake Forest School of Medicine, Winston-Salem, NC 27157, USA
| | - Xue-Lian Qi
- Department of Neurobiology & Anatomy, Wake Forest School of Medicine, Winston-Salem, NC 27157, USA
| | - Christos Constantinidis
- Department of Neurobiology & Anatomy, Wake Forest School of Medicine, Winston-Salem, NC 27157, USA
| |
Collapse
|
44
|
Zhou B, Feng G, Chen W, Zhou W. Olfaction Warps Visual Time Perception. Cereb Cortex 2018; 28:1718-1728. [PMID: 28334302 DOI: 10.1093/cercor/bhx068] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2016] [Accepted: 03/01/2017] [Indexed: 11/12/2022] Open
Abstract
Our perception of the world builds upon dynamic inputs from multiple senses with different temporal resolutions, and is threaded with the passing of subjective time. How time is extracted from multisensory inputs is scarcely known. Utilizing psychophysical testing and electroencephalography, we show in healthy human adults that odors modulate object visibility around critical flicker-fusion frequency (CFF)-the limit at which chromatic flickers become perceived as a stable color-and effectively alter CFF in a congruency-based manner, even though they afford no clear environmental temporal information. The behavioral gain produced by a congruent relative to an incongruent odor is accompanied by elevated neural oscillatory power around the object's flicker frequency in the right temporal region ~150-300 ms after object onset, and is not mediated by visual awareness. In parallel, odors bias the subjective duration of visual objects without affecting one's temporal sensitivity. These findings point to a neuronal network in the right temporal cortex that executes flexible temporal filtering of upstream visual inputs based on olfactory information. Moreover, they collectively indicate that the very process of sensory integration at the stage of object processing twists time perception, hence casting new insights into the neural timing of multisensory events.
Collapse
Affiliation(s)
- Bin Zhou
- Institute of Psychology, CAS Key Laboratory of Behavioral Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Beijing 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
| | - Guo Feng
- Institute of Psychology, CAS Key Laboratory of Behavioral Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Beijing 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
| | - Wei Chen
- Institute of Psychology, CAS Key Laboratory of Behavioral Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Beijing 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
| | - Wen Zhou
- Institute of Psychology, CAS Key Laboratory of Behavioral Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Beijing 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
| |
Collapse
|
45
|
Hauser CK, Zhu D, Stanford TR, Salinas E. Motor selection dynamics in FEF explain the reaction time variance of saccades to single targets. eLife 2018; 7:33456. [PMID: 29652247 PMCID: PMC5947991 DOI: 10.7554/elife.33456] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2017] [Accepted: 04/12/2018] [Indexed: 01/26/2023] Open
Abstract
In studies of voluntary movement, a most elemental quantity is the reaction time (RT) between the onset of a visual stimulus and a saccade toward it. However, this RT demonstrates extremely high variability which, in spite of extensive research, remains unexplained. It is well established that, when a visual target appears, oculomotor activity gradually builds up until a critical level is reached, at which point a saccade is triggered. Here, based on computational work and single-neuron recordings from monkey frontal eye field (FEF), we show that this rise-to-threshold process starts from a dynamic initial state that already contains other incipient, internally driven motor plans, which compete with the target-driven activity to varying degrees. The ensuing conflict resolution process, which manifests in subtle covariations between baseline activity, build-up rate, and threshold, consists of fundamentally deterministic interactions, and explains the observed RT distributions while invoking only a small amount of intrinsic randomness. As we examine the space around us our eyes move in short steps, looking toward a new location about four times a second. Neurons in a region of the brain called the frontal eye field help initiate these eye movements, which are known as saccades. Each neuron contributes to a saccade with a specific direction and size. Before a saccade, the relevant neurons in the frontal eye field steadily increase their activity. When this activity reaches a critical threshold, the visual system issues a command to move the eyes in the appropriate direction. So a saccade that moves the eyes to the right requires a specific group of neurons to be strongly activated – but, at the same time, the neurons responsible for movement to the left need to be less active. Imagine that you have to move your eyes as quickly as possible to look at a spot of light that appears on a screen. 
Some of the time your eyes will start to move about 100 milliseconds after the light appears. But on other attempts, your eyes will not start moving until 300 milliseconds after the light came on. What causes this variability? To find out, Hauser et al. recorded from neurons in monkeys trained to perform such a task. When the spot of light appeared many different neurons were active, suggesting there is conflict between the plan that would move the eyes toward the target and plans to look at other locations. That is, when the target appears, the monkey is already thinking of looking somewhere. The time required to resolve this conflict depends on how far apart the target and the competing locations are from one another, and on how much the competing neurons have increased their activity before the target appears. Similar mechanisms are likely to operate when we sit at the dinner table and look for the salt shaker, for example, and so the results presented by Hauser et al. will help us to understand how we direct our attention to different points in space. Understanding how these processes work in more detail will help us to discern what happens when they go wrong, as occurs in attention deficit disorders like ADHD.
Collapse
Affiliation(s)
- Christopher K Hauser
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, United States
| | - Dantong Zhu
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, United States
| | - Terrence R Stanford
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, United States
| | - Emilio Salinas
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, United States
| |
Collapse
|
46
|
Bach EC, Vaughan JW, Stein BE, Rowland BA. Pulsed Stimuli Elicit More Robust Multisensory Enhancement than Expected. Front Integr Neurosci 2018; 11:40. [PMID: 29354037 PMCID: PMC5758560 DOI: 10.3389/fnint.2017.00040] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2017] [Accepted: 12/15/2017] [Indexed: 11/28/2022] Open
Abstract
Neurons in the superior colliculus (SC) integrate cross-modal inputs to generate responses that are more robust than to either input alone, and are frequently greater than their sum (superadditive enhancement). Previously, the principles of a real-time multisensory transform were identified and used to accurately predict a neuron's responses to combinations of brief flashes and noise bursts. However, environmental stimuli frequently have more complex temporal structures that elicit very different response dynamics than previously examined. The present study tested whether such stimuli (i.e., pulsed) would be treated similarly by the multisensory transform. Pulsed visual and auditory stimuli elicited responses composed of higher discharge rates that had multiple peaks temporally aligned to the stimulus pulses. Combinations of pulsed cues elicited multiple peaks of superadditive enhancement within the response window. Measured over the entire response, this resulted in larger enhancements than expected from the enhancements elicited by non-pulsed ("sustained") stimuli. However, as with sustained stimuli, the dynamics of multisensory responses to pulsed stimuli were highly related to the temporal dynamics of the unisensory inputs. This suggests that the specific characteristics of the multisensory transform are not determined by the external features of the cross-modal stimulus configuration; rather, the temporal structure and alignment of the unisensory inputs are the dominant factors driving the magnitude of the multisensory product.
Affiliation(s)
- Eva C Bach, Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, United States
- John W Vaughan, Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, United States
- Barry E Stein, Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, United States
- Benjamin A Rowland, Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, United States
47
Bremen P, Massoudi R, Van Wanrooij MM, Van Opstal AJ. Audio-Visual Integration in a Redundant Target Paradigm: A Comparison between Rhesus Macaque and Man. Front Syst Neurosci 2017; 11:89. [PMID: 29238295 PMCID: PMC5712580 DOI: 10.3389/fnsys.2017.00089] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2017] [Accepted: 11/16/2017] [Indexed: 11/13/2022] Open
Abstract
The mechanisms underlying multi-sensory interactions are still poorly understood despite considerable progress made since the first neurophysiological recordings of multi-sensory neurons. While the majority of single-cell neurophysiology has been performed in anesthetized or passive-awake laboratory animals, the vast majority of behavioral data stems from studies with human subjects. Interpretation of neurophysiological data implicitly assumes that laboratory animals exhibit perceptual phenomena comparable or identical to those observed in human subjects. To explicitly test this underlying assumption, we here characterized how two rhesus macaques and four humans detect changes in intensity of auditory, visual, and audio-visual stimuli. These intensity changes consisted of a gradual envelope modulation for the sound, and a luminance step for the LED. Subjects had to detect any perceived intensity change as fast as possible. By comparing the monkeys' results with those obtained from the human subjects, we found that (1) unimodal reaction times differed across modality, acoustic modulation frequency, and species, (2) the largest facilitation of reaction times with the audio-visual stimuli was observed when stimulus onset asynchronies were such that the unimodal reactions would occur at the same time (response, rather than physical, synchrony), and (3) the largest audio-visual reaction-time facilitation was observed when unimodal auditory stimuli were difficult to detect, i.e., at slow unimodal reaction times. We conclude that despite marked unimodal heterogeneity, similar multisensory rules applied to both species. Single-cell neurophysiology in the rhesus macaque may therefore yield valuable insights into the mechanisms governing audio-visual integration that may be informative of the processes taking place in the human brain.
Affiliation(s)
- Peter Bremen, Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands; Department of Neuroscience, Erasmus Medical Center, Rotterdam, Netherlands
- Rooholla Massoudi, Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands; Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, United Kingdom
- Marc M Van Wanrooij, Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- A J Van Opstal, Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
48
|
Scandurra A, Alterisio A, Aria M, Vernese R, D'Aniello B. Should I fetch one or the other? A study on dogs on the object choice in the bimodal contrasting paradigm. Anim Cogn 2017; 21:119-126. [PMID: 29134447 DOI: 10.1007/s10071-017-1145-z] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2017] [Revised: 10/17/2017] [Accepted: 11/10/2017] [Indexed: 11/27/2022]
Abstract
The present study assessed how dogs weigh gestural versus verbal information communicated to them by humans in transitive actions. The dogs were trained by their owners to fetch an object under three conditions: a bimodal congruent condition, in which gestures and voices were used simultaneously; a unimodal gestural condition, in which only gestures were used; and a unimodal verbal condition, in which only voices were used. An additional condition, defined as the bimodal incongruent condition, was later added, in which the gesture contrasted with the verbal command: the owner indicated one object while pronouncing the name of the other object visible to the dogs. In the incongruent condition, seven out of nine dogs chose to follow the gestural indication and performed above chance, two were at chance, and none of the dogs followed the verbal cues above chance. As a group, the dogs followed the gestural command in 73.6% of cases, above chance. The analysis of latencies in the above-mentioned four conditions revealed significant differences: performance was slower in the unimodal verbal and gestural conditions than in both the bimodal incongruent and congruent conditions. No statistical differences were observed between the unimodal and bimodal conditions. Our results demonstrate that dogs trained to respond equally well to gestural and verbal commands choose, to a significant extent, to follow the gestural command rather than the verbal one in transitive actions. Furthermore, responses in the bimodal conditions were quicker than in the unimodal ones.
Affiliation(s)
- Anna Scandurra, Department of Biology, University of Naples "Federico II", Via Cinthia, 80126, Naples, Italy
- Alessandra Alterisio, Department of Biology, University of Naples "Federico II", Via Cinthia, 80126, Naples, Italy
- Massimo Aria, Department of Economics and Statistics, University of Naples "Federico II", Naples, Italy
- Rosaria Vernese, Dog training center La voce del cane, Via Pisciarelli, Naples, Italy
- Biagio D'Aniello, Department of Biology, University of Naples "Federico II", Via Cinthia, 80126, Naples, Italy
49
|
Brown DR, Cavanagh JF. The sound and the fury: Late positive potential is sensitive to sound affect. Psychophysiology 2017; 54:1812-1825. [PMID: 28726287 DOI: 10.1111/psyp.12959] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2016] [Revised: 04/19/2017] [Accepted: 04/24/2017] [Indexed: 01/10/2023]
Abstract
Emotion is an emergent construct of multiple distinct neural processes. EEG is uniquely sensitive to real-time neural computations, and thus is a promising tool to study the construction of emotion. This series of studies aimed to probe the mechanistic contribution of the late positive potential (LPP) to multimodal emotion perception. Experiment 1 revealed that LPP amplitudes for visual images, sounds, and visual images paired with sounds were larger for negatively rated stimuli than for neutrally rated stimuli. Experiment 2 manipulated this audiovisual enhancement by altering the valence pairings with congruent (e.g., positive audio + positive visual) or conflicting emotional pairs (e.g., positive audio + negative visual). Negative visual stimuli evoked larger early LPP amplitudes than positive visual stimuli, regardless of sound pairing. However, time frequency analyses revealed significant midfrontal theta-band power differences for conflicting over congruent stimuli pairs, suggesting very early (∼500 ms) realization of thematic fidelity violations. Interestingly, late LPP modulations were reflective of the opposite pattern of congruency, whereby congruent over conflicting pairs had larger LPP amplitudes. Together, these findings suggest that enhanced parietal activity for affective valence is modality independent and sensitive to complex affective processes. Furthermore, these findings suggest that altered neural activities for affective visual stimuli are enhanced by concurrent affective sounds, paving the way toward an understanding of the construction of multimodal affective experience.
Affiliation(s)
- Darin R Brown, Department of Psychology, University of New Mexico, Albuquerque, New Mexico, USA
- James F Cavanagh, Department of Psychology, University of New Mexico, Albuquerque, New Mexico, USA
50
|
Multisensory Perception of Contradictory Information in an Environment of Varying Reliability: Evidence for Conscious Perception and Optimal Causal Inference. Sci Rep 2017; 7:3167. [PMID: 28600573 PMCID: PMC5466670 DOI: 10.1038/s41598-017-03521-2] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2016] [Accepted: 05/01/2017] [Indexed: 11/23/2022] Open
Abstract
Two psychophysical experiments examined multisensory integration of visual-auditory (Experiment 1) and visual-tactile-auditory (Experiment 2) signals. Participants judged the location of these multimodal signals relative to a standard presented at the median plane of the body. A cue conflict was induced by presenting the visual signals with a constant spatial discrepancy to the other modalities. Extending previous studies, the reliability of certain modalities (visual in Experiment 1, visual and tactile in Experiment 2) was varied from trial to trial by presenting signals with either strong or weak location information (e.g., a relatively dense or dispersed dot cloud as the visual stimulus). We investigated how participants would adapt to the cue conflict arising from the contradictory information under these varying reliability conditions, and whether participants had insight into their own performance. Over the course of both experiments, participants switched from an integration strategy to a selection strategy in Experiment 1 and to a calibration strategy in Experiment 2. Simulations of various multisensory perception strategies suggested that optimal causal inference in a varying reliability environment depends not only on the amount of multimodal discrepancy, but also on the relative reliability of stimuli across the reliability conditions.