1. de Vignemont F, Farnè A. Peripersonal space: why so last-second? Philos Trans R Soc Lond B Biol Sci 2024; 379:20230159. PMID: 39155714. DOI: 10.1098/rstb.2023.0159.
Abstract
A vast range of neurophysiological, neuropsychological and behavioural results in monkeys and humans have shown that the immediate surroundings of the body, also known as peripersonal space (PPS), are processed in a unique way. Three roles have been ascribed to PPS mechanisms: to react to threats, to avoid obstacles and to act on objects. However, in many circumstances, one does not wait for objects or agents to enter PPS to plan these behaviours. Typically, one has more chances to survive if one starts running away from the lion when one sees it in the distance than if it is a few steps away. PPS makes sense in shortsighted creatures but we are not such creatures. The crucial question is thus twofold: (i) why are these adaptive processes triggered only at the last second or even milliseconds? And (ii) what is their exact contribution, especially for defensive and navigational behaviours? Here, we propose that PPS mechanisms correspond to a plan B, useful in unpredictable situations or when other anticipatory mechanisms have failed. Furthermore, we argue that there are energetic, cognitive and behavioural costs to PPS mechanisms, which explain why this plan B is triggered only at the last second. This article is part of the theme issue 'Minds in movement: embodied cognition in the age of artificial intelligence'.
Affiliation(s)
- Alessandro Farnè
- Impact Team of the Lyon Neuroscience Research Centre, INSERM U1028, CNRS UMR5292, University Claude Bernard Lyon 1, Lyon, France
2. Cang J, Chen C, Li C, Liu Y. Genetically defined neuron types underlying visuomotor transformation in the superior colliculus. Nat Rev Neurosci 2024. PMID: 39333418. DOI: 10.1038/s41583-024-00856-4.
Abstract
The superior colliculus (SC) is a conserved midbrain structure that is important for transforming visual and other sensory information into motor actions. Decades of investigations in numerous species have made the SC and its nonmammalian homologue, the optic tectum, one of the best studied structures in the brain, with rich information now available regarding its anatomical organization, its extensive inputs and outputs and its important functions in many reflexive and cognitive behaviours. Excitingly, recent studies using modern genomic and physiological approaches have begun to reveal the diverse neuronal subtypes in the SC, as well as their unique functions in visuomotor transformation. Studies have also started to uncover how subtypes of SC neurons form intricate circuits to mediate visual processing and visually guided behaviours. Here, we review these recent discoveries on the cell types and neuronal circuits underlying visuomotor transformations mediated by the SC. We also highlight the important future directions made possible by these new developments.
Affiliation(s)
- Jianhua Cang
- Department of Biology, University of Virginia, Charlottesville, VA, USA
- Department of Psychology, University of Virginia, Charlottesville, VA, USA
- Chen Chen
- Department of Psychology, University of Virginia, Charlottesville, VA, USA
- Chuiwen Li
- Department of Psychology, University of Virginia, Charlottesville, VA, USA
- Yuanming Liu
- Department of Biology, University of Virginia, Charlottesville, VA, USA
3. Sherman SM, Usrey WM. Transthalamic Pathways for Cortical Function. J Neurosci 2024; 44:e0909242024. PMID: 39197951. PMCID: PMC11358609. DOI: 10.1523/jneurosci.0909-24.2024.
Abstract
The cerebral cortex contains multiple, distinct areas that individually perform specific computations. A particular strength of the cortex is the communication of signals between cortical areas that allows the outputs of these compartmentalized computations to influence and build on each other, thereby dramatically increasing the processing power of the cortex and its role in sensation, action, and cognition. Determining how the cortex communicates signals between individual areas is, therefore, critical for understanding cortical function. Historically, corticocortical communication was thought to occur exclusively by direct anatomical connections between areas that often sequentially linked cortical areas in a hierarchical fashion. More recently, anatomical, physiological, and behavioral evidence is accumulating indicating a role for the higher-order thalamus in corticocortical communication. Specifically, the transthalamic pathway involves projections from one area of the cortex to neurons in the higher-order thalamus that, in turn, project to another area of the cortex. Here, we consider the evidence for and implications of having two routes for corticocortical communication with an emphasis on unique processing available in the transthalamic pathway and the consequences of disorders and diseases that affect transthalamic communication.
Affiliation(s)
- S Murray Sherman
- Department of Neurobiology, University of Chicago, Chicago, Illinois 60637
- W Martin Usrey
- Center for Neuroscience, University of California, Davis, California 95618
4. Peng B, Huang JJ, Li Z, Zhang LI, Tao HW. Cross-modal enhancement of defensive behavior via parabigemino-collicular projections. Curr Biol 2024; 34:3616-3631.e5. PMID: 39019036. PMCID: PMC11373540. DOI: 10.1016/j.cub.2024.06.052.
Abstract
Effective detection and avoidance of environmental threats are crucial for animals' survival. Integration of sensory cues associated with threats across different modalities can significantly enhance animals' detection and behavioral responses. However, the neural circuit-level mechanisms underlying the modulation of defensive behavior or fear response under simultaneous multimodal sensory inputs remain poorly understood. Here, we report in mice that bimodal looming stimuli combining coherent visual and auditory signals elicit more robust defensive/fear reactions than unimodal stimuli. These include intensified escape and prolonged hiding, suggesting a heightened defensive/fear state. These various responses depend on the activity of the superior colliculus (SC), while its downstream nucleus, the parabigeminal nucleus (PBG), predominantly influences the duration of hiding behavior. PBG temporally integrates visual and auditory signals and enhances the salience of threat signals by amplifying SC sensory responses through its feedback projection to the visual layer of the SC. Our results suggest an evolutionarily conserved pathway in defense circuits for multisensory integration and cross-modality enhancement.
Affiliation(s)
- Bo Peng
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA; Neuroscience Graduate Program, University of Southern California, Los Angeles, CA 90089, USA
- Junxiang J Huang
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA; Graduate Program in Biomedical and Biological Sciences, University of Southern California, Los Angeles, CA 90033, USA
- Zhong Li
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
- Li I Zhang
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA; Department of Physiology and Neuroscience, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
- Huizhong Whit Tao
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA; Department of Physiology and Neuroscience, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
5. Huang YT, Wu CT, Fang YXM, Fu CK, Koike S, Chao ZC. Crossmodal hierarchical predictive coding for audiovisual sequences in the human brain. Commun Biol 2024; 7:965. PMID: 39122960. PMCID: PMC11316022. DOI: 10.1038/s42003-024-06677-6.
Abstract
Predictive coding theory suggests the brain anticipates sensory information using prior knowledge. While this theory has been extensively researched within individual sensory modalities, evidence for predictive processing across sensory modalities is limited. Here, we examine how crossmodal knowledge is represented and learned in the brain, by identifying the hierarchical networks underlying crossmodal predictions when information of one sensory modality leads to a prediction in another modality. We record electroencephalogram (EEG) during a crossmodal audiovisual local-global oddball paradigm, in which the predictability of transitions between tones and images is manipulated at both the stimulus and sequence levels. To dissect the complex predictive signals in our EEG data, we employed a model-fitting approach to untangle neural interactions across modalities and hierarchies. The model-fitting result demonstrates that audiovisual integration occurs at both the levels of individual stimulus interactions and multi-stimulus sequences. Furthermore, we identify the spatio-spectro-temporal signatures of prediction-error signals across hierarchies and modalities, and reveal that auditory and visual prediction errors are rapidly redirected to the central-parietal electrodes during learning through alpha-band interactions. Our study suggests a crossmodal predictive coding mechanism where unimodal predictions are processed by distributed brain networks to form crossmodal knowledge.
Affiliation(s)
- Yiyuan Teresa Huang
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Tokyo, Japan
- Department of Multidisciplinary Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
- Chien-Te Wu
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Tokyo, Japan
- School of Occupational Therapy, College of Medicine, National Taiwan University, Taipei, Taiwan
- Yi-Xin Miranda Fang
- School of Occupational Therapy, College of Medicine, National Taiwan University, Taipei, Taiwan
- Chin-Kun Fu
- School of Occupational Therapy, College of Medicine, National Taiwan University, Taipei, Taiwan
- Shinsuke Koike
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Tokyo, Japan
- Department of Multidisciplinary Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
- University of Tokyo Institute for Diversity & Adaptation of Human Mind (UTIDAHM), Tokyo, Japan
- Zenas C Chao
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Tokyo, Japan
6. Vannasing P, Dionne-Dostie E, Tremblay J, Paquette N, Collignon O, Gallagher A. Electrophysiological responses of audiovisual integration from infancy to adulthood. Brain Cogn 2024; 178:106180. PMID: 38815526. DOI: 10.1016/j.bandc.2024.106180.
Abstract
Our ability to merge information from different senses into a unified percept is a crucial perceptual process for efficient interaction with our multisensory environment. Yet, the developmental process underlying how the brain implements multisensory integration (MSI) remains poorly known. This cross-sectional study aims to characterize the developmental patterns of audiovisual events in 131 individuals aged from 3 months to 30 years. Electroencephalography (EEG) was recorded during a passive task, including simple auditory, visual, and audiovisual stimuli. In addition to examining age-related variations in MSI responses, we investigated Event-Related Potentials (ERPs) linked with auditory and visual stimulation alone. This was done to depict the typical developmental trajectory of unisensory processing from infancy to adulthood within our sample and to contextualize the maturation effects of MSI in relation to unisensory development. Comparing the neural response to audiovisual stimuli to the sum of the unisensory responses revealed signs of MSI in the ERPs, more specifically between the P2 and N2 components (P2 effect). Furthermore, adult-like MSI responses emerge relatively late in development, around 8 years old. The automatic integration of simple audiovisual stimuli is a long developmental process that emerges during childhood and continues to mature during adolescence, with ERP latencies decreasing with age.
Affiliation(s)
- Phetsamone Vannasing
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada
- Emmanuelle Dionne-Dostie
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada
- Julie Tremblay
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada
- Natacha Paquette
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada
- Olivier Collignon
- Institute of Psychology (IPSY) and Institute of Neuroscience (IoNS), Université Catholique de Louvain, Louvain-La-Neuve, Belgium; School of Health Sciences, HES-SO Valais-Wallis, The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Anne Gallagher
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada; Cerebrum, Department of Psychology, University of Montreal, Montreal, QC, Canada
7. Wang L, Xin H, Buren Q, Zhang Y, Han Y, Ouyang B, Sun Z, Bao Y, Dong C. Specific rules for time and space of multisensory plasticity in the superior colliculus. Brain Res 2024; 1828:148774. PMID: 38244758. DOI: 10.1016/j.brainres.2024.148774.
Abstract
Cat superior colliculus (SC) neurons commonly combine information from different senses, which facilitates event detection and localization. Integration in SC multisensory neurons depends on the spatial and temporal relationships between cross-modal cues. Here, we revealed the parallel process of short-term plasticity in the temporal/spatial integration process during adulthood that adapts multisensory integration to reliable changes in environmental conditions. Short-term experience alters the temporal preferences of SC multisensory neurons, and this short-term plasticity in the temporal/spatial integration process is limited to changes in cross-modal timing (a factor commonly induced by events at different distances from the receiver). However, this plasticity was not evident in response to changes in the cross-modal spatial configuration.
Affiliation(s)
- Linghong Wang
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Hongmei Xin
- School of Humanities Education, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Qiqige Buren
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Yan Zhang
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Yaxin Han
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Biao Ouyang
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Zhe Sun
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Yulong Bao
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
- Chao Dong
- School of Basic Medicine, Inner Mongolia Medical University, Inner Mongolia, Hohhot 010110, China
8. Brizzi G, Sansoni M, Di Lernia D, Frisone F, Tuena C, Riva G. The multisensory mind: a systematic review of multisensory integration processing in Anorexia and Bulimia Nervosa. J Eat Disord 2023; 11:204. PMID: 37974266. PMCID: PMC10655389. DOI: 10.1186/s40337-023-00930-9.
Abstract
Individuals with Anorexia Nervosa and Bulimia Nervosa present alterations in the way they experience their bodies. Body experience results from a multisensory integration process in which information from different sensory domains and spatial reference frames is combined into a coherent percept. Given the critical role of the body in the onset and maintenance of both Anorexia Nervosa and Bulimia Nervosa, we conducted a systematic review to examine multisensory integration abilities of individuals affected by these two conditions and investigate whether they exhibit impairments in crossmodal integration. We searched for studies evaluating crossmodal integration in individuals with a current diagnosis of Anorexia Nervosa and Bulimia Nervosa as compared to healthy individuals from both behavioral and neurobiological perspectives. A search of PubMed, PsycINFO, and Web of Sciences databases was performed to extract relevant articles. Of the 2348 studies retrieved, 911 were unique articles. After the screening, 13 articles were included. Studies revealed multisensory integration abnormalities in patients affected by Anorexia Nervosa; only one included individuals with Bulimia Nervosa and observed less severe impairments compared to healthy controls. Overall, results seemed to support the presence of multisensory deficits in Anorexia Nervosa, especially when integrating interoceptive and exteroceptive information. We proposed the Predictive Coding framework for understanding our findings and suggested future lines of investigation.
Affiliation(s)
- Giulia Brizzi
- Applied Technology for Neuro-Psychology Laboratory, IRCCS Istituto Auxologico Italiano, Via Magnasco 2, 20149, Milan, Italy
- Maria Sansoni
- Humane Technology Laboratory, Università Cattolica del Sacro Cuore, Largo Gemelli, 1, 20121, Milan, Italy
- Department of Psychology, Università Cattolica del Sacro Cuore, Largo Gemelli, 1, 20121, Milan, Italy
- Daniele Di Lernia
- Applied Technology for Neuro-Psychology Laboratory, IRCCS Istituto Auxologico Italiano, Via Magnasco 2, 20149, Milan, Italy
- Fabio Frisone
- Humane Technology Laboratory, Università Cattolica del Sacro Cuore, Largo Gemelli, 1, 20121, Milan, Italy
- Department of Psychology, Università Cattolica del Sacro Cuore, Largo Gemelli, 1, 20121, Milan, Italy
- Cosimo Tuena
- Applied Technology for Neuro-Psychology Laboratory, IRCCS Istituto Auxologico Italiano, Via Magnasco 2, 20149, Milan, Italy
- Giuseppe Riva
- Applied Technology for Neuro-Psychology Laboratory, IRCCS Istituto Auxologico Italiano, Via Magnasco 2, 20149, Milan, Italy
- Humane Technology Laboratory, Università Cattolica del Sacro Cuore, Largo Gemelli, 1, 20121, Milan, Italy
9. Weidner F, Maier JE, Broll W. Eating, Smelling, and Seeing: Investigating Multisensory Integration and (In)congruent Stimuli while Eating in VR. IEEE Trans Vis Comput Graph 2023; PP:2423-2433. PMID: 37027726. DOI: 10.1109/tvcg.2023.3247099.
Abstract
Integrating taste in AR/VR applications has various promising use cases - from social eating to the treatment of disorders. Despite many successful AR/VR applications that alter the taste of beverages and food, the relationship between olfaction, gustation, and vision during the process of multisensory integration (MSI) has not been fully explored yet. Thus, we present the results of a study in which participants were confronted with congruent and incongruent visual and olfactory stimuli while eating a tasteless food product in VR. We were interested (1) if participants integrate bi-modal congruent stimuli and (2) if vision guides MSI during congruent/incongruent conditions. Our results contain three main findings: First, and surprisingly, participants were not always able to detect congruent visual-olfactory stimuli when eating a portion of tasteless food. Second, when confronted with tri-modal incongruent cues, a majority of participants did not rely on any of the presented cues when forced to identify what they eat; this includes vision which has previously been shown to dominate MSI. Third, although research has shown that basic taste qualities like sweetness, saltiness, or sourness can be influenced by congruent cues, doing so with more complex flavors (e.g., zucchini or carrot) proved to be harder to achieve. We discuss our results in the context of multimodal integration, and within the domain of multisensory AR/VR. Our results are a necessary building block for future human-food interaction in XR that relies on smell, taste, and vision and are foundational for applied applications such as affective AR/VR.
10. Brain-inspired multisensory integration neural network for cross-modal recognition through spatiotemporal dynamics and deep learning. Cogn Neurodyn 2023. DOI: 10.1007/s11571-023-09932-4.
11. Cunningham J, O'Dowd A, Broglio SP, Newell FN, Kelly Á, Joyce O, Januszewski J, Wilson F. Multisensory perception is not influenced by previous concussion history in retired rugby union players. Brain Inj 2022; 36:1123-1132. PMID: 35994241. DOI: 10.1080/02699052.2022.2109732.
Abstract
BACKGROUND To assess whether concussion history adversely affects multisensory integration, we compared susceptibility to the Sound-Induced Flash Illusion (SIFI) in retired professional rugby players compared to controls.
METHODS Retired professional rugby players (N = 58) and retired international rowers (N = 26) completed a self-report concussion history questionnaire and the SIFI task. Susceptibility to the SIFI (i.e., perceiving two flashes in response to one flash paired with two beeps) was assessed at three stimulus onset asynchronies (70 ms, 150 ms or 230 ms). Logistic mixed-effects regression modeling was implemented to evaluate how athlete grouping, previous concussion history and total number of years playing sport impacted susceptibility to the SIFI. The statistical significance of a fixed effect of interest was determined by a likelihood ratio test.
RESULTS Former rugby players had significantly more self-reported concussions than the rower group (p < 0.001). There was no impact of athlete grouping (i.e., retired professional rugby players and retired international rowers), years of participation in elite sport or concussion history on performance in the SIFI.
CONCLUSION A career in professional rugby, concussion history or number of years participating in professional rugby was not found to be predictive of performance on the SIFI task.
Affiliation(s)
- Joice Cunningham
- Discipline of Physiotherapy, School of Medicine, Trinity College Dublin, Dublin, Ireland
- Alan O'Dowd
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Steven P Broglio
- Michigan Concussion Center, University of Michigan, Ann Arbor, Michigan, USA
- Fiona N Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Áine Kelly
- Department of Physiology, School of Medicine, Trinity College Dublin, Dublin, Ireland
- Oisín Joyce
- Department of Physiology, School of Medicine, Trinity College Dublin, Dublin, Ireland
- Julia Januszewski
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Fiona Wilson
- Discipline of Physiotherapy, School of Medicine, Trinity College Dublin, Dublin, Ireland
12. Kheirkhah K, Moradi V, Kavianpour I, Farahani S. Comparison of Maturity in Auditory-Visual Multisensory Processes With Sound-Induced Flash Illusion Test in Children and Adults. Cureus 2022; 14:e27631. PMID: 36072200. PMCID: PMC9437373. DOI: 10.7759/cureus.27631.
13. Mondal AK, Sailopal A, Singla P, AP P. SSDMM-VAE: variational multi-modal disentangled representation learning. Appl Intell 2022. DOI: 10.1007/s10489-022-03936-z.
14. Goldenberg D, Tiede MK, Bennett RT, Whalen DH. Congruent aero-tactile stimuli bias perception of voicing continua. Front Hum Neurosci 2022; 16:879981. PMID: 35911601. PMCID: PMC9334670. DOI: 10.3389/fnhum.2022.879981.
Abstract
Multimodal integration is the formation of a coherent percept from different sensory inputs such as vision, audition, and somatosensation. Most research on multimodal integration in speech perception has focused on audio-visual integration. In recent years, audio-tactile integration has also been investigated, and it has been established that puffs of air applied to the skin and timed with listening tasks shift the perception of voicing by naive listeners. The current study has replicated and extended these findings by testing the effect of air puffs on gradations of voice onset time along a continuum rather than the voiced and voiceless endpoints of the original work. Three continua were tested: bilabial (“pa/ba”), velar (“ka/ga”), and a vowel continuum (“head/hid”) used as a control. The presence of air puffs was found to significantly increase the likelihood of choosing voiceless responses for the two VOT continua but had no effect on choices for the vowel continuum. Analysis of response times revealed that the presence of air puffs lengthened responses for intermediate (ambiguous) stimuli and shortened them for endpoint (non-ambiguous) stimuli. The slowest response times were observed for the intermediate steps for all three continua, but for the bilabial continuum this effect interacted with the presence of air puffs: responses were slower in the presence of air puffs, and faster in their absence. This suggests that during integration auditory and aero-tactile inputs are weighted differently by the perceptual system, with the latter exerting greater influence in those cases where the auditory cues for voicing are ambiguous.
Affiliation(s)
- Mark K. Tiede
- Haskins Laboratories, New Haven, CT, United States
- Ryan T. Bennett
- Department of Linguistics, University of California, Santa Cruz, Santa Cruz, CA, United States
- D. H. Whalen
- Haskins Laboratories, New Haven, CT, United States
- The Graduate Center, City University of New York (CUNY), New York, NY, United States
- Department of Linguistics, Yale University, New Haven, CT, United States
15. James LS, Baier AL, Page RA, Clements P, Hunter KL, Taylor RC, Ryan MJ. Cross-modal facilitation of auditory discrimination in a frog. Biol Lett 2022; 18:20220098. PMID: 35765810. PMCID: PMC9240679. DOI: 10.1098/rsbl.2022.0098.
Abstract
Stimulation in one sensory modality can affect perception in a separate modality, resulting in diverse effects including illusions in humans. This can also result in cross-modal facilitation, a process where sensory performance in one modality is improved by stimulation in another modality. For instance, a simple sound can improve performance in a visual task in both humans and cats. However, the range of contexts and underlying mechanisms that evoke such facilitation effects remain poorly understood. Here, we demonstrated cross-modal stimulation in wild-caught túngara frogs, a species with well-studied acoustic preferences in females. We first identified that a combined visual and seismic cue (vocal sac movement and water ripple) was behaviourally relevant for females choosing between two courtship calls in a phonotaxis assay. We then found that this combined cross-modal stimulus rescued a species-typical acoustic preference in the presence of background noise that otherwise abolished the preference. These results highlight how cross-modal stimulation can prime attention in receivers to improve performance during decision-making. With this, we provide the foundation for future work uncovering the processes and conditions that promote cross-modal facilitation effects.
Affiliation(s)
- Logan S. James: Department of Integrative Biology, University of Texas, Austin, TX 78712, USA; Smithsonian Tropical Research Institute, Apartado 0843-03092, Balboa, Ancón, Republic of Panama
- A. Leonie Baier: Department of Integrative Biology, University of Texas, Austin, TX 78712, USA; Smithsonian Tropical Research Institute, Apartado 0843-03092, Balboa, Ancón, Republic of Panama
- Rachel A. Page: Smithsonian Tropical Research Institute, Apartado 0843-03092, Balboa, Ancón, Republic of Panama
- Paul Clements: Henson School of Technology, Salisbury University, 1101 Camden Ave, Salisbury, MD 21801, USA
- Kimberly L. Hunter: Department of Biological Sciences, Salisbury University, 1101 Camden Ave, Salisbury, MD 21801, USA
- Ryan C. Taylor: Smithsonian Tropical Research Institute, Apartado 0843-03092, Balboa, Ancón, Republic of Panama; Department of Biological Sciences, Salisbury University, 1101 Camden Ave, Salisbury, MD 21801, USA
- Michael J. Ryan: Department of Integrative Biology, University of Texas, Austin, TX 78712, USA; Smithsonian Tropical Research Institute, Apartado 0843-03092, Balboa, Ancón, Republic of Panama
16
Cycling multisensory changes in migraine: more than a headache. Curr Opin Neurol 2022; 35:367-372. [PMID: 35674081] [DOI: 10.1097/wco.0000000000001059]
Abstract
PURPOSE OF REVIEW: Research on migraine usually focuses on the headache; however, accumulating evidence suggests that migraine not only changes the somatosensory system for nociception (pain), but also the other modalities of perception, such as the visual, auditory or tactile senses. More importantly, the multisensory changes exist beyond the headache (ictal) phase of migraine and show cyclic changes, suggesting a central generator driving the multiple sensory changes across different migraine phases. This review summarizes the latest studies that explored the cyclic sensory changes of migraine.
RECENT FINDINGS: Considerable evidence from recent neurophysiological and functional imaging studies suggests that alterations in brain activation start at least 48 h before the migraine headache and outlast the pain itself by 24 h. Several sensory modalities are involved, with cyclic changes in sensitivity that peak during the ictal phase.
SUMMARY: In many ways, migraine represents more than just vascular-mediated headaches. Migraine alters the propagation of sensory information long before the headache attack starts.
17
Jure R. The “Primitive Brain Dysfunction” Theory of Autism: The Superior Colliculus Role. Front Integr Neurosci 2022; 16:797391. [PMID: 35712344] [PMCID: PMC9194533] [DOI: 10.3389/fnint.2022.797391]
Abstract
A better understanding of the pathogenesis of autism will help clarify our conception of the complexity of normal brain development. The crucial deficit may lie in the postnatal changes that vision produces in the brainstem nuclei during early life. The superior colliculus is the primary brainstem visual center. Although difficult to examine in humans with present techniques, it is known to support behaviors essential for every vertebrate to survive, such as the ability to pay attention to relevant stimuli and to produce automatic motor responses based on sensory input. From birth to death, it acts as a brain sentinel that influences basic aspects of our behavior. It is the main brainstem hub that lies between the environment and the rest of the higher neural system, making continuous, implicit decisions about where to direct our attention. The conserved cortex-like organization of the superior colliculus in all vertebrates allows the early appearance of primitive emotionally-related behaviors essential for survival. It contains first-line specialized neurons enabling the detection and tracking of faces and movements from birth. During development, it also sends the appropriate impulses to help shape brain areas necessary for social-communicative abilities. These abilities require the analysis of numerous variables, such as the simultaneous evaluation of incoming information sustained by separate brain networks (visual, auditory and sensory-motor, social, emotional, etc.), and predictive capabilities which compare present events to previous experiences and possible responses. These critical aspects of decision-making allow us to evaluate the impact that our response or behavior may provoke in others. The purpose of this review is to show that several enigmas about the complexity of autism might be explained by disruptions of collicular and brainstem functions. The results of two separate lines of investigation, (1) the cognitive, etiologic, and pathogenic aspects of autism on the one hand, and (2) the functional anatomy of the colliculus on the other, are considered in order to bridge the gap between basic brain science and clinical studies and to promote future research in this unexplored area.
18
An effective integrated genetic programming and neural network model for electronic nose calibration of air pollution monitoring application. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07129-0]
19
Information in Explaining Cognition: How to Evaluate It? Philosophies 2022. [DOI: 10.3390/philosophies7020028]
Abstract
The claims that “The brain processes information” or “Cognition is information processing” are accepted as truisms in cognitive science. However, it is unclear how to evaluate such claims absent a specification of “information” as it is used by neurocognitive theories. The aim of this article is, thus, to identify the key features of information that information-based neurocognitive theories posit. A systematic identification of these features can reveal the explanatory role that information plays in specific neurocognitive theories, and can, therefore, be both theoretically and practically important. These features can be used, in turn, as desiderata against which candidate theories of information may be evaluated. After discussing some characteristics of explanation in cognitive science and their implications for “information”, three notions are briefly introduced: natural, sensory, and endogenous information. Subsequently, six desiderata are identified and defended based on cognitive scientific practices. The global workspace theory of consciousness is then used as a specific case study that arguably posits either five or six corresponding features of information.
20
Azzam MB, Easteal RA. Pedagogical Strategies for the Enhancement of Medical Education. Med Sci Educ 2021; 31:2041-2048. [PMID: 34950531] [PMCID: PMC8651936] [DOI: 10.1007/s40670-021-01385-w]
Abstract
Clearly, memory and learning are essential to medical education. To make memory and learning more robust and long-term, educators should turn to the advances in neuroscience and cognitive science to direct their efforts. This paper describes the memory pathways and stages with emphasis leading to long-term memory storage. Particular stress is placed on this storage as a construct known as schema. Leading from this background, several pedagogical strategies are described: cognitive load, dual encoding, spiral syllabus, bridging and chunking, sleep consolidation, and retrieval practice.
Affiliation(s)
- Mohammad B. Azzam: Faculty of Education, Western University, London, ON N6G 1G7, Canada; Department of Biomedical and Molecular Sciences, Queen’s University, Kingston, ON K7L 3N6, Canada
- Ronald A. Easteal: Department of Biomedical and Molecular Sciences, Queen’s University, Kingston, ON K7L 3N6, Canada
21
Azzam MB, Easteal RA. Retrieval Practice for Improving Long-Term Retention in Anatomical Education: A Quasi-Experimental Study. Med Sci Educ 2021; 31:1305-1310. [PMID: 34457972] [PMCID: PMC8368804] [DOI: 10.1007/s40670-021-01298-8]
Abstract
It is generally assumed by students that learning takes place during repeated episodes of rereading and rote memorization of course materials. Over the past few decades, however, research has increasingly indicated that this approach can and should be enhanced with learning paradigms such as retrieval practice (RP). RP occurs when students practice retrieving their consolidated semantic memories by informally testing themselves. This strategy results in the re-encoding and re-consolidation of existing semantic memories, thus strengthening their schemas. The purpose of this quasi-experimental study was to assess the effects of the implementation of RP on student performance on the final exam in a large, undergraduate Gross Anatomy course. It was hypothesized that student participation in RP during class would improve their performance on the final exam in the course. The participants (N = 248) were mainly in Life Sciences, Kinesiology, and Physical Education programs. They answered RP questions using TopHat©, an online educational software platform. The results of this study indicated that student performance on the final exam was enhanced when students engaged in RP. It was concluded that the use of RP effectively enhances learning and long-term retention of semantic memory. In addition to the traditional testing 'of' learning, teachers are encouraged to implement testing, in the form of RP, in their classrooms 'for' learning.
Affiliation(s)
- Mohammad B. Azzam: Faculty of Education, Western University, London, ON N6G 1G7, Canada; Department of Biomedical and Molecular Sciences, Queen’s University, Kingston, ON K7L 3N6, Canada
- Ronald A. Easteal: Department of Biomedical and Molecular Sciences, Queen’s University, Kingston, ON K7L 3N6, Canada
22
Miller LJ, Marco EJ, Chu RC, Camarata S. Editorial: Sensory Processing Across the Lifespan: A 25-Year Initiative to Understand Neurophysiology, Behaviors, and Treatment Effectiveness for Sensory Processing. Front Integr Neurosci 2021; 15:652218. [PMID: 33897385] [PMCID: PMC8063042] [DOI: 10.3389/fnint.2021.652218]
Affiliation(s)
- Lucy Jane Miller: Department of Pediatrics (Emeritus), University of Colorado, Denver, CO, United States; Sensory Therapies and Research Institute for Sensory Processing Disorder, Centennial, CO, United States
- Elysa J Marco: Cortica (United States), San Diego, CA, United States
- Robyn C Chu: Radiology & Biomedical Imaging, University of California San Francisco, San Francisco, CA, United States; Growing Healthy Children Therapy Services, Rescue, CA, United States
- Stephen Camarata: School of Medicine, Vanderbilt University, Nashville, TN, United States
23
Dalal TC, Muller AM, Stevenson RA. The Relationship Between Multisensory Temporal Processing and Schizotypal Traits. Multisens Res 2021; 34:1-19. [PMID: 33706260] [DOI: 10.1163/22134808-bja10044]
Abstract
Recent literature has suggested that deficits in sensory processing are associated with schizophrenia (SCZ), and more specifically hallucination severity. The DSM-5's shift towards a dimensional approach to diagnostic criteria has led to SCZ and schizotypal personality disorder (SPD) being classified as schizophrenia spectrum disorders. With SCZ and SPD overlapping in aetiology and symptomatology, such as sensory abnormalities, it is important to investigate whether these deficits commonly reported in SCZ extend to non-clinical expressions of SPD. In this study, we investigated whether levels of SPD traits were related to audiovisual multisensory temporal processing in a non-clinical sample, revealing two novel findings. First, less precise multisensory temporal processing was related to higher overall levels of SPD symptomatology. Second, this relationship was specific to the cognitive-perceptual domain of SPD symptomatology, and more specifically, the Unusual Perceptual Experiences and Odd Beliefs or Magical Thinking symptomatology. The current study provides an initial look at the relationship between multisensory temporal processing and schizotypal traits. Additionally, it builds on the previous literature by suggesting that less precise multisensory temporal processing is not exclusive to SCZ but may also be related to non-clinical expressions of schizotypal traits in the general population.
Affiliation(s)
- Tyler C Dalal: Department of Psychology, University of Western Ontario, London, ON, M6G 2N5, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, M6G 2N5, Canada
- Anne-Marie Muller: Department of Psychology, University of Western Ontario, London, ON, M6G 2N5, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, M6G 2N5, Canada
- Ryan A Stevenson: Department of Psychology, University of Western Ontario, London, ON, M6G 2N5, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, M6G 2N5, Canada
24
Purpura G, Febbrini Del Magro E, Caputo R, Cioni G, Tinelli F. Visuo-haptic transfer for object recognition in children with peripheral visual impairment. Vision Res 2020; 178:12-17. [PMID: 33070030] [DOI: 10.1016/j.visres.2020.06.008]
Abstract
It is well known that early visual experience is critical for the development of multisensory processing abilities, and for this reason an early vision impairment could hinder the transfer of different sensory information during the exploration and recognition of the surrounding environment. Recently, we verified that visuo-haptic transfer for object recognition emerges early in typically developing children but matures slowly during the school-age period. Subsequently, we verified the presence of a slower trend of development in unisensory and multisensory skills in children with early abnormal motor and sensory experiences due to brain lesions. Here, we investigated unimodal visual information, unimodal haptic information and visuo-haptic information transfer in children with a diagnosis of low vision due to congenital visual impairment. Unimodal and bimodal processes for object recognition were explored in 11 children with low vision and the results were matched with those of 22 controls. Participants were tested using a clinical protocol involving visual exploration of black-and-white photographs of common objects, haptic exploration of real objects and visuo-haptic transfer of these two types of information. Results show a normal development of haptic unisensory processing in children with low vision and a significant difference in multisensory transfer between the two groups. In children with visual impairment, multisensory processes do not facilitate the recognition of common objects as in typical children, probably because early visual impairment may impact the cross-sensory calibration of vision and touch.
Affiliation(s)
- Giulia Purpura: Department of Developmental Neuroscience, IRCCS Stella Maris Foundation, Pisa, Italy
- Elena Febbrini Del Magro: Pediatric Ophthalmology Unit, Meyer Children's Hospital, University of Florence, Florence, Italy
- Roberto Caputo: Pediatric Ophthalmology Unit, Meyer Children's Hospital, University of Florence, Florence, Italy
- Giovanni Cioni: Department of Developmental Neuroscience, IRCCS Stella Maris Foundation, Pisa, Italy; Department of Clinical and Experimental Medicine, University of Pisa, Italy
- Francesca Tinelli: Department of Developmental Neuroscience, IRCCS Stella Maris Foundation, Pisa, Italy
25
Lin Y, Ding H, Zhang Y. Multisensory Integration of Emotion in Schizophrenic Patients. Multisens Res 2020; 33:865-901. [PMID: 33706267] [DOI: 10.1163/22134808-bja10016]
Abstract
Multisensory integration (MSI) of emotion has been increasingly recognized as an essential element of schizophrenic patients' impairments, leading to the breakdown of their interpersonal functioning. The present review provides an updated synopsis of schizophrenics' MSI abilities in emotion processing by examining relevant behavioral and neurological research. Existing behavioral studies have adopted well-established experimental paradigms to investigate how participants understand multisensory emotion stimuli, and interpret their reciprocal interactions. Yet it remains controversial with regard to congruence-induced facilitation effects, modality dominance effects, and generalized vs specific impairment hypotheses. Such inconsistencies are likely due to differences and variations in experimental manipulations, participants' clinical symptomatology, and cognitive abilities. Recent electrophysiological and neuroimaging research has revealed aberrant indices in event-related potential (ERP) and brain activation patterns, further suggesting impaired temporal processing and dysfunctional brain regions, connectivity and circuities at different stages of MSI in emotion processing. The limitations of existing studies and implications for future MSI work are discussed in light of research designs and techniques, study samples and stimuli, and clinical applications.
Affiliation(s)
- Yi Lin: Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, 800 Dong Chuan Rd., Minhang District, Shanghai, 200240, China
- Hongwei Ding: Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, 800 Dong Chuan Rd., Minhang District, Shanghai, 200240, China
- Yang Zhang: Department of Speech-Language-Hearing Sciences & Center for Neurobehavioral Development, University of Minnesota, Twin Cities, MN 55455, USA
26
Sulubacak U, Caglayan O, Grönroos SA, Rouhe A, Elliott D, Specia L, Tiedemann J. Multimodal machine translation through visuals and speech. Machine Translation 2020. [DOI: 10.1007/s10590-020-09250-0]
Abstract
Multimodal machine translation involves drawing information from more than one modality, based on the assumption that the additional modalities will contain useful alternative views of the input data. The most prominent tasks in this area are spoken language translation, image-guided translation, and video-guided translation, which exploit audio and visual modalities, respectively. These tasks are distinguished from their monolingual counterparts of speech recognition, image captioning, and video captioning by the requirement of models to generate outputs in a different language. This survey reviews the major data resources for these tasks, the evaluation campaigns concentrated around them, the state of the art in end-to-end and pipeline approaches, and also the challenges in performance evaluation. The paper concludes with a discussion of directions for future research in these areas: the need for more expansive and challenging datasets, for targeted evaluations of model performance, and for multimodality in both the input and output space.
27
Sandini TM, Marks WN, Tahir NB, Song Y, Greba Q, Howland JG. NMDA Receptors in Visual and Olfactory Sensory Integration in Male Long Evans Rats: A Role for the Orbitofrontal Cortex. Neuroscience 2020; 440:230-238. [PMID: 32497759] [DOI: 10.1016/j.neuroscience.2020.05.041]
Abstract
Sensory integration (SI) is a cognitive process whereby the brain uses unimodal or multimodal sensory features to create a comprehensive representation of the environment. Integration of sensory input is necessary to achieve a coherent perception of the environment, and to subsequently plan and coordinate action. The neural mechanisms mediating SI are poorly understood; however, recent studies suggest that the regulation of SI involves N-methyl-d-aspartate receptors (NMDARs) in orbitofrontal cortex (OFC). Thus, we tested this hypothesis directly in two experiments using object oddity tests that require SI for visual and olfactory stimuli. First, we blocked NMDARs with acute CPP treatment (i.p., 10 mg/kg) and tested rats in unimodal visual and olfactory SI tests, and respective control unimodal oddity tests that do not require SI. Second, we used intra-OFC infusions of AP5 (30 mM) to examine the role of NMDARs in the OFC in the oddity tests requiring SI. Systemic blockade of NMDARs impaired performance on the visual tests regardless of whether SI was required for determining oddity. In the olfactory tests, systemic treatment with CPP impaired the test requiring SI while sparing olfactory oddity, demonstrating a selective impairment in the olfactory SI. Intra-OFC blockade of NMDARs impaired olfactory SI, without effect on visual SI, demonstrating that intra-OFC NMDARs are essential for olfactory, but not visual SI. The present results are discussed in the context of the function of the OFC and its associated circuitry.
Affiliation(s)
- Thaísa M Sandini: Department of Anatomy, Physiology, and Pharmacology, University of Saskatchewan, Saskatoon, Saskatchewan S7N 5E5, Canada
- Wendie N Marks: Department of Anatomy, Physiology, and Pharmacology, University of Saskatchewan, Saskatoon, Saskatchewan S7N 5E5, Canada
- Nimra B Tahir: Department of Anatomy, Physiology, and Pharmacology, University of Saskatchewan, Saskatoon, Saskatchewan S7N 5E5, Canada
- Yuanyi Song: Department of Anatomy, Physiology, and Pharmacology, University of Saskatchewan, Saskatoon, Saskatchewan S7N 5E5, Canada
- Quentin Greba: Department of Anatomy, Physiology, and Pharmacology, University of Saskatchewan, Saskatoon, Saskatchewan S7N 5E5, Canada
- John G Howland: Department of Anatomy, Physiology, and Pharmacology, University of Saskatchewan, Saskatoon, Saskatchewan S7N 5E5, Canada
28
Jicol C, Lloyd-Esenkaya T, Proulx MJ, Lange-Smith S, Scheller M, O'Neill E, Petrini K. Efficiency of Sensory Substitution Devices Alone and in Combination With Self-Motion for Spatial Navigation in Sighted and Visually Impaired. Front Psychol 2020; 11:1443. [PMID: 32754082] [PMCID: PMC7381305] [DOI: 10.3389/fpsyg.2020.01443]
Abstract
Human adults can optimally combine vision with self-motion to facilitate navigation. In the absence of visual input (e.g., dark environments and visual impairments), sensory substitution devices (SSDs), such as The vOICe or BrainPort, which translate visual information into auditory or tactile information, could be used to increase navigation precision when integrated together or with self-motion. In Experiment 1, we compared and jointly assessed The vOICe and BrainPort in an aerial maps task performed by a group of sighted participants. In Experiment 2, we examined whether sighted individuals and a group of visually impaired (VI) individuals could benefit from using The vOICe, with and without self-motion, to accurately navigate a three-dimensional (3D) environment. In both studies, 3D motion tracking data were used to determine the level of precision with which participants performed two different tasks (an egocentric and an allocentric task) and three different conditions (two unisensory conditions and one multisensory condition). In Experiment 1, we found no benefit of using the devices together. In Experiment 2, the sighted performance during The vOICe was almost as good as that for self-motion despite a short training period, although we found no benefit (reduction in variability) of using The vOICe and self-motion in combination compared to the two in isolation. In contrast, the group of VI participants did benefit from combining The vOICe and self-motion despite the low number of trials. Finally, while both groups became more accurate in their use of The vOICe with increased trials, only the VI group showed an increased level of accuracy in the combined condition. Our findings highlight how exploiting non-visual multisensory integration to develop new assistive technologies could be key to help blind and VI persons, especially due to their difficulty in attaining allocentric information.
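The abstract above describes SSDs such as The vOICe, which render an image as sound by scanning the image left to right, mapping vertical position to pitch and pixel brightness to loudness. The sketch below illustrates that style of image-to-audio encoding only in broad strokes; the function name, sampling rate, and frequency range are illustrative assumptions, not the device's actual parameters.

```python
import numpy as np

def image_to_audio(image, duration=1.0, sr=8000, f_min=500.0, f_max=5000.0):
    """Hypothetical vOICe-style encoder: columns of a grayscale image
    (values in [0, 1]) are scanned left to right over `duration` seconds;
    row position sets pitch (top = high) and brightness sets loudness."""
    rows, cols = image.shape
    samples_per_col = int(sr * duration / cols)
    # One sinusoid per row, frequencies spaced log-uniformly; row 0 (top)
    # gets the highest pitch.
    freqs = np.geomspace(f_min, f_max, rows)[::-1]
    chunks = []
    for c in range(cols):
        start = c * samples_per_col
        t = np.arange(start, start + samples_per_col) / sr
        col = image[:, c]  # brightness profile of this column
        # Sum the row oscillators, each weighted by its pixel brightness.
        chunk = (col[:, None] * np.sin(2 * np.pi * freqs[:, None] * t[None, :])).sum(axis=0)
        chunks.append(chunk)
    wave = np.concatenate(chunks)
    peak = np.abs(wave).max()
    return wave / peak if peak > 0 else wave  # normalize to [-1, 1]
```

A bright pixel near the top of the frame thus produces a loud, high-pitched tone at the moment the scan reaches its column, which is the cue a trained listener learns to interpret spatially.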
Affiliation(s)
- Crescent Jicol: Department of Psychology, University of Bath, Bath, United Kingdom
- Michael J Proulx: Department of Psychology, University of Bath, Bath, United Kingdom
- Simon Lange-Smith: School of Sport and Exercise Sciences, Liverpool John Moores University, Liverpool, United Kingdom
- Meike Scheller: Department of Psychology, University of Bath, Bath, United Kingdom
- Eamonn O'Neill: Department of Computer Science, University of Bath, Bath, United Kingdom
- Karin Petrini: Department of Psychology, University of Bath, Bath, United Kingdom
29
Armstrong-Gallegos S. Problems in Audiovisual Filtering for Children with Special Educational Needs. Iperception 2020; 11:2041669520951816. [PMID: 32922716] [PMCID: PMC7457682] [DOI: 10.1177/2041669520951816]
Abstract
There is pervasive evidence that problems in sensory processing occur across a range of developmental disorders, but their aetiology and clinical significance remain unclear. The present study investigated the relation between sensory processing and literacy skills in children with and without a background of special educational needs (SEN). Twenty-six children aged between 7 and 12 years old, from both regular classes and SEN programmes, participated. Following baseline tests of literacy, fine motor skills and naming speed, two sets of instruments were administered: the carer-assessed Child Sensory Profile-2 and a novel Audiovisual Animal Stroop (AVAS) test. The SEN group showed significantly higher ratings on three Child Sensory Profile-2 quadrants, together with body position ratings. The SEN participants also showed a specific deficit when required to ignore an accompanying incongruent auditory stimulus on the AVAS. Interestingly, AVAS performance correlated significantly with literacy scores and with the sensory profile scores. It is proposed that the children with SEN showed a specific deficit in "filtering out" irrelevant auditory input. The results highlight the importance of including analysis of sensory processes within theoretical and applied approaches to developmental differences and suggest promising new approaches to the understanding, assessment, and support of children with SEN.
30
Adeel A. Conscious Multisensory Integration: Introducing a Universal Contextual Field in Biological and Deep Artificial Neural Networks. Front Comput Neurosci 2020; 14:15. [PMID: 32508610] [PMCID: PMC7248356] [DOI: 10.3389/fncom.2020.00015]
Abstract
Conscious awareness plays a major role in human cognition and adaptive behavior, though its function in multisensory integration is not yet fully understood, hence, questions remain: How does the brain integrate the incoming multisensory signals with respect to different external environments? How are the roles of these multisensory signals defined to adhere to the anticipated behavioral-constraint of the environment? This work seeks to articulate a novel theory on conscious multisensory integration (CMI) that addresses the aforementioned research challenges. Specifically, the well-established contextual field (CF) in pyramidal cells and coherent infomax theory (Kay et al., 1998; Kay and Phillips, 2011) is split into two functionally distinctive integrated input fields: local contextual field (LCF) and universal contextual field (UCF). LCF defines the modulatory sensory signal coming from some other parts of the brain (in principle from anywhere in space-time) and UCF defines the outside environment and anticipated behavior (based on past learning and reasoning). Both LCF and UCF are integrated with the receptive field (RF) to develop a new class of contextually-adaptive neuron (CAN), which adapts to changing environments. The proposed theory is evaluated using human contextual audio-visual (AV) speech modeling. Simulation results provide new insights into contextual modulation and selective multisensory information amplification/suppression. The central hypothesis reviewed here suggests that the pyramidal cell, in addition to the classical excitatory and inhibitory signals, receives LCF and UCF inputs. The UCF (as a steering force or tuner) plays a decisive role in precisely selecting whether to amplify/suppress the transmission of relevant/irrelevant feedforward signals, without changing the content e.g., which information is worth paying more attention to? 
This contrasts with the unconditional excitatory and inhibitory activity in existing deep neural networks (DNNs) and is called conditional amplification/suppression.
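One way to make the abstract's RF/LCF/UCF split concrete is a toy unit in which context rescales, but never drives, the receptive-field signal, loosely in the spirit of the coherent infomax transfer function the paper cites. Everything below (the function name, the context weighting, the exact gain term) is an illustrative assumption, not the paper's actual model.

```python
import math

def can_unit(rf_drive, lcf, ucf, k1=1.0, k2=0.5):
    """Toy contextually-adaptive neuron (CAN) sketch.

    rf_drive: net receptive-field input; the only signal that can drive output.
    lcf, ucf: local and universal contextual fields; they modulate gain only.
    Context amplifies rf_drive when the two agree in sign, suppresses it when
    they disagree, and can never create output on its own (rf_drive == 0
    always yields 0)."""
    context = k1 * lcf + k2 * ucf  # UCF enters as a weighted steering force
    # Multiplicative gain: 1 when context is absent, > 1 for congruent
    # rf/context pairs, < 1 for incongruent ones.
    gain = 0.5 * (1.0 + math.exp(2.0 * rf_drive * context))
    return rf_drive * gain
```

The point of the multiplicative form is exactly the abstract's claim: the contextual fields decide whether to amplify or suppress the feedforward signal without changing its content.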
Collapse
Affiliation(s)
- Ahsan Adeel
- Oxford Computational Neuroscience, Nuffield Department of Surgical Sciences, John Radcliffe Hospital, University of Oxford, Oxford, United Kingdom
- School of Mathematics and Computer Science, University of Wolverhampton, Wolverhampton, United Kingdom
31
Webster PJ, Frum C, Kurowski-Burt A, Bauer CE, Wen S, Ramadan JH, Baker KA, Lewis JW. Processing of Real-World, Dynamic Natural Stimuli in Autism is Linked to Corticobasal Function. Autism Res 2020; 13:539-549. [PMID: 31944557 PMCID: PMC7418054 DOI: 10.1002/aur.2250] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2019] [Revised: 11/14/2019] [Accepted: 11/24/2019] [Indexed: 11/06/2022]
Abstract
Many individuals with autism spectrum disorder (ASD) have been shown to perceive everyday sensory information differently than peers without autism. Research examining these sensory differences has primarily utilized nonnatural stimuli or static photographs of natural scenes; few studies have utilized dynamic, real-world nonverbal stimuli. Therefore, in this study, we used functional magnetic resonance imaging to characterize brain activation of individuals with high-functioning autism when viewing and listening to a video of a real-world scene (a person bouncing a ball) and anticipating the bounce. We investigated both multisensory and unisensory processing and hypothesized that individuals with ASD would show differential activation in (a) primary auditory and visual sensory cortical and association areas, and in (b) cortical and subcortical regions where auditory and visual information is integrated (e.g., temporo-parietal junction, pulvinar, superior colliculus). Contrary to our hypotheses, the whole-brain analysis revealed similar activation between the groups in these brain regions. However, compared to controls, the ASD group showed significant hypoactivation in the left intraparietal sulcus and left putamen/globus pallidus. We theorize that this hypoactivation reflected underconnectivity for mediating spatiotemporal processing of the visual biological motion stimuli with the task demands of anticipating the timing of the bounce event. The paradigm thus may have tapped into a specific left-lateralized aberrant corticobasal circuit or loop involved in initiating or inhibiting motor responses. This was consistent with a dual "when versus where" psychophysical model of corticobasal function, which may reflect core differences in sensory processing of real-world, nonverbal natural stimuli in ASD. Autism Res 2020, 13: 539-549. © 2020 International Society for Autism Research, Wiley Periodicals, Inc.
LAY SUMMARY: To understand how individuals with autism perceive the real world, we used magnetic resonance imaging to examine brain activation in individuals with autism while they watched a video of someone bouncing a basketball. Those with autism had similar activation to controls in auditory and visual sensory brain regions, but less activation in an area that processes information about body movements and in a region involved in modulating movements. These areas are important for understanding the actions of others and developing social skills.
Affiliation(s)
- Paula J Webster
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, West Virginia
| | - Chris Frum
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, West Virginia
| | - Amy Kurowski-Burt
- Division of Occupational Therapy, Department of Human Performance, West Virginia University, Morgantown, West Virginia
| | - Christopher E Bauer
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, West Virginia
| | - Sijin Wen
- Department of Biostatistics, West Virginia University, Morgantown, West Virginia
| | - Jad H Ramadan
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, West Virginia
| | - Kathryn A Baker
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, West Virginia
| | - James W Lewis
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, West Virginia
32
Bajpai A, Powell JC, Young AJ, Mazumdar A. Enhancing Physical Human Evasion of Moving Threats Using Tactile Cues. IEEE TRANSACTIONS ON HAPTICS 2020; 13:32-37. [PMID: 31899432 DOI: 10.1109/toh.2019.2962664] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
New human-centric approaches to safety can combine overhead camera views with situational awareness tools to enable humans to avoid rapidly evolving threats such as moving machines or falling debris. This article explores how 360° information can be used to inform humans of potential collisions. Specifically, we quantify how different individual (tactile, audio, and visual) and combined cue modalities affect failure rates and reaction times. Human-subject experiments were conducted in a custom virtual reality environment that simulates objects rapidly moving toward the subject. To successfully perform the task, the human subject must physically move their body out of the path of the moving threat before a collision occurs. This exploration of full-body physical response differentiates this article from previous related studies. The results of the 18-subject study provide quantified data on a range of cues and cue combinations, with failure rates and reaction times quantified as a function of index of difficulty (Fitts' law) and threat directionality. The results confirm the hypothesis that the addition of tactile cues statistically improves performance over non-tactile cues with regard to failure rate and reaction time, demonstrating how sensory cues can improve human physical response to rapid threats.
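The index of difficulty in such studies comes from Fitts' law. As a minimal sketch, the following uses the Shannon formulation; this choice, and the mapping of "distance" and "width" onto an evasion task, are assumptions, since the abstract does not specify which variant the authors used:

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Fitts' index of difficulty (Shannon formulation), in bits.

    distance: movement amplitude -- here, hypothetically, how far the
              subject must move to escape the threat's path.
    width:    tolerance of the target region -- e.g., the size of the
              safe zone the subject must reach.

    A larger required movement or a tighter tolerance yields a higher
    ID, predicting longer movement times and, plausibly, higher
    failure rates against a fast-moving threat.
    """
    return math.log2(distance / width + 1.0)

# Tripling the required movement at a fixed tolerance raises the ID:
easy = index_of_difficulty(distance=0.5, width=0.5)   # 1 bit
hard = index_of_difficulty(distance=1.5, width=0.5)   # 2 bits
```

Parameterizing trials by ID in this way lets failure rate and reaction time be reported as functions of a single task-difficulty axis, as the study does.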
33
Zhou Z, Liu X, Chen S, Zhang Z, Liu Y, Montardy Q, Tang Y, Wei P, Liu N, Li L, Song R, Lai J, He X, Chen C, Bi G, Feng G, Xu F, Wang L. A VTA GABAergic Neural Circuit Mediates Visually Evoked Innate Defensive Responses. Neuron 2019; 103:473-488.e6. [PMID: 31202540 DOI: 10.1016/j.neuron.2019.05.027] [Citation(s) in RCA: 105] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2018] [Revised: 03/11/2019] [Accepted: 05/15/2019] [Indexed: 12/14/2022]
Abstract
Innate defensive responses are essential for animal survival and are conserved across species. The ventral tegmental area (VTA) plays important roles in learned appetitive and aversive behaviors, but whether it plays a role in mediating or modulating innate defensive responses is currently unknown. We report that VTA GABA+ neurons respond to a looming stimulus. Inhibition of VTA GABA+ neurons reduced looming-evoked defensive flight behavior, and photoactivation of these neurons resulted in defense-like flight behavior. Using viral tracing and electrophysiological recordings, we show that VTA GABA+ neurons receive direct excitatory inputs from the superior colliculus (SC). Furthermore, we show that glutamatergic SC-VTA projections synapse onto VTA GABA+ neurons that project to the central nucleus of the amygdala (CeA) and that the CeA is involved in mediating the defensive behavior. Our findings demonstrate that aerial threat-related visual information is relayed to VTA GABA+ neurons mediating innate behavioral responses, suggesting a more general role of the VTA.
Affiliation(s)
- Zheng Zhou
- Shenzhen Key Lab of Neuropsychiatric Modulation and Collaborative Innovation Center for Brain Science, Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, CAS Center for Excellence in Brain Science and Intelligence Technology, Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen 518055, China; University of the Chinese Academy of Sciences, Beijing 100049, China
| | - Xuemei Liu
- Shenzhen Key Lab of Neuropsychiatric Modulation and Collaborative Innovation Center for Brain Science, Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, CAS Center for Excellence in Brain Science and Intelligence Technology, Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen 518055, China; University of the Chinese Academy of Sciences, Beijing 100049, China
| | - Shanping Chen
- Shenzhen Key Lab of Neuropsychiatric Modulation and Collaborative Innovation Center for Brain Science, Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, CAS Center for Excellence in Brain Science and Intelligence Technology, Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen 518055, China; University of the Chinese Academy of Sciences, Beijing 100049, China
| | - Zhijian Zhang
- Center for Brain Science, Key Laboratory of Magnetic Resonance in Biological Systems and State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, CAS, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Wuhan 430071, China
| | - Yuanming Liu
- Shenzhen Key Lab of Neuropsychiatric Modulation and Collaborative Innovation Center for Brain Science, Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, CAS Center for Excellence in Brain Science and Intelligence Technology, Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen 518055, China
| | - Quentin Montardy
- Shenzhen Key Lab of Neuropsychiatric Modulation and Collaborative Innovation Center for Brain Science, Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, CAS Center for Excellence in Brain Science and Intelligence Technology, Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen 518055, China
| | - Yongqiang Tang
- Shenzhen Key Lab of Neuropsychiatric Modulation and Collaborative Innovation Center for Brain Science, Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, CAS Center for Excellence in Brain Science and Intelligence Technology, Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen 518055, China; University of the Chinese Academy of Sciences, Beijing 100049, China
| | - Pengfei Wei
- Shenzhen Key Lab of Neuropsychiatric Modulation and Collaborative Innovation Center for Brain Science, Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, CAS Center for Excellence in Brain Science and Intelligence Technology, Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen 518055, China
| | - Nan Liu
- Shenzhen Key Lab of Neuropsychiatric Modulation and Collaborative Innovation Center for Brain Science, Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, CAS Center for Excellence in Brain Science and Intelligence Technology, Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen 518055, China; University of the Chinese Academy of Sciences, Beijing 100049, China
| | - Lei Li
- Shenzhen Key Lab of Neuropsychiatric Modulation and Collaborative Innovation Center for Brain Science, Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, CAS Center for Excellence in Brain Science and Intelligence Technology, Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen 518055, China
| | - Ru Song
- Shenzhen Key Lab of Neuropsychiatric Modulation and Collaborative Innovation Center for Brain Science, Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, CAS Center for Excellence in Brain Science and Intelligence Technology, Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen 518055, China
| | - Juan Lai
- Shenzhen Key Lab of Neuropsychiatric Modulation and Collaborative Innovation Center for Brain Science, Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, CAS Center for Excellence in Brain Science and Intelligence Technology, Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen 518055, China
| | - Xiaobin He
- Center for Brain Science, Key Laboratory of Magnetic Resonance in Biological Systems and State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, CAS, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Wuhan 430071, China
| | - Chen Chen
- Shenzhen Key Lab of Neuropsychiatric Modulation and Collaborative Innovation Center for Brain Science, Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, CAS Center for Excellence in Brain Science and Intelligence Technology, Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen 518055, China
| | - Guoqiang Bi
- School of Life Sciences, University of Science and Technology of China, Hefei, China
| | - Guoping Feng
- McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| | - Fuqiang Xu
- Center for Brain Science, Key Laboratory of Magnetic Resonance in Biological Systems and State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, CAS, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Wuhan 430071, China.
| | - Liping Wang
- Shenzhen Key Lab of Neuropsychiatric Modulation and Collaborative Innovation Center for Brain Science, Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, CAS Center for Excellence in Brain Science and Intelligence Technology, Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen 518055, China; University of the Chinese Academy of Sciences, Beijing 100049, China.
34
MRI acoustic noise-modulated computer animations for patient distraction and entertainment with application in pediatric psychiatric patients. Magn Reson Imaging 2019; 61:16-19. [PMID: 31078614 DOI: 10.1016/j.mri.2019.05.014] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2019] [Revised: 05/06/2019] [Accepted: 05/06/2019] [Indexed: 11/20/2022]
Abstract
PURPOSE To reduce patient anxiety caused by the MRI scanner acoustic noise. MATERIAL AND METHODS We developed a simple and low-cost system for patient distraction using visual computer animations that were synchronized to the MRI scanner's acoustic noise during the MRI exam. The system was implemented on a 3T MRI system and tested in 28 pediatric patients with bipolar disorder. The patients were randomized to receive noise-synchronized animations in the form of abstract animations in addition to music (n = 13, F/M = 6/7, age = 10.9 ± 2.5 years) or, as a control, receive only music (n = 15, F/M = 7/8, age = 11.6 ± 2.3 years). After completion of the scans, all subjects answered a questionnaire about their scan experience and the perceived scan duration. RESULTS The scan duration with multisensory input (animations and music) was perceived to be ~15% shorter than in the control group (43 min vs. 50 min, P < 0.05). However, the overall scan experience was scored less favorably (3.9 vs. 4.6 in the control group, P < 0.04). CONCLUSIONS This simple system provided patient distraction and entertainment leading to perceived shorter scan times, but the provided visualization with abstract animations was not favored by this patient cohort.
35
Pathways for smiling, disgust and fear recognition in blindsight patients. Neuropsychologia 2019; 128:6-13. [DOI: 10.1016/j.neuropsychologia.2017.08.028] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2017] [Revised: 08/03/2017] [Accepted: 08/28/2017] [Indexed: 01/08/2023]
36
Guellaï B, Callin A, Bevilacqua F, Schwarz D, Pitti A, Boucenna S, Gratier M. Sensus Communis: Some Perspectives on the Origins of Non-synchronous Cross-Sensory Associations. Front Psychol 2019; 10:523. [PMID: 30899237 PMCID: PMC6416194 DOI: 10.3389/fpsyg.2019.00523] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2018] [Accepted: 02/22/2019] [Indexed: 11/13/2022] Open
Abstract
Adults readily make associations between stimuli perceived consecutively through different sense modalities, such as shapes and sounds. Researchers have only recently begun to investigate such correspondences in infants but only a handful of studies have focused on infants less than a year old. Are infants able to make cross-sensory correspondences from birth? Do certain correspondences require extensive real-world experience? Some studies have shown that newborns are able to match stimuli perceived in different sense modalities. Yet, the origins and mechanisms underlying these abilities are unclear. The present paper explores these questions and reviews some hypotheses on the emergence and early development of cross-sensory associations and their possible links with language development. Indeed, if infants can perceive cross-sensory correspondences between events that share certain features but are not strictly contingent or co-located, one may posit that they are using a "sixth sense" in Aristotle's sense of the term. And a likely candidate for explaining this mechanism, as Aristotle suggested, is movement.
Affiliation(s)
- Bahia Guellaï
- Laboratoire Ethologie, Cognition, Développement, Université Paris Nanterre, Nanterre, France
| | - Annabel Callin
- Laboratoire Ethologie, Cognition, Développement, Université Paris Nanterre, Nanterre, France
| | | | | | - Alexandre Pitti
- Laboratoire ETIS, Université Cergy-Pontoise, Cergy-Pontoise, France
| | - Sofiane Boucenna
- Laboratoire ETIS, Université Cergy-Pontoise, Cergy-Pontoise, France
| | - Maya Gratier
- Laboratoire Ethologie, Cognition, Développement, Université Paris Nanterre, Nanterre, France
37
Meijer GT, Mertens PEC, Pennartz CMA, Olcese U, Lansink CS. The circuit architecture of cortical multisensory processing: Distinct functions jointly operating within a common anatomical network. Prog Neurobiol 2019; 174:1-15. [PMID: 30677428 DOI: 10.1016/j.pneurobio.2019.01.004] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2017] [Revised: 12/21/2018] [Accepted: 01/21/2019] [Indexed: 12/16/2022]
Abstract
Our perceptual systems continuously process sensory inputs from different modalities and organize these streams of information such that our subjective representation of the outside world is a unified experience. By doing so, they also enable further cognitive processing and behavioral action. While cortical multisensory processing has been extensively investigated in terms of psychophysics and mesoscale neural correlates, an in-depth understanding of the underlying circuit-level mechanisms is lacking. Previous studies on circuit-level mechanisms of multisensory processing have predominantly focused on cue integration, i.e., the mechanism by which sensory features from different modalities are combined to yield more reliable stimulus estimates than those obtained by using single sensory modalities. In this review, we expand the framework on the circuit-level mechanisms of cortical multisensory processing by highlighting that multisensory processing is a family of functions - rather than a single operation - which involves not only the integration but also the segregation of modalities. In addition, multisensory processing depends not only on stimulus features but also on cognitive resources, such as attention and memory, as well as behavioral context, to determine the behavioral outcome. We focus on rodent models as a powerful instrument to study the circuit-level bases of multisensory processes, because they enable combining cell-type-specific recording and interventional techniques with complex behavioral paradigms. We conclude that distinct multisensory processes share overlapping anatomical substrates, are implemented by diverse neuronal micro-circuitries that operate in parallel, and are flexibly recruited based on factors such as stimulus features and behavioral constraints.
Affiliation(s)
- Guido T Meijer
- Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands.
| | - Paul E C Mertens
- Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands.
| | - Cyriel M A Pennartz
- Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands.
| | - Umberto Olcese
- Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands.
| | - Carien S Lansink
- Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands.
38
Abstract
After exposure to visual input in the first year of life, the brain experiences subtle but massive changes that appear crucial for communicative/emotional and social human development. The absence of this input could explain the very high prevalence of autism in children with total congenital blindness. The present theory postulates that the superior colliculus (SC) is the key structure for such changes, for several reasons: it dominates visual behavior during the first months of life; it is ready at birth for complex visual tasks; it has a significant influence on several hemispheric regions; it is the main brain hub that permanently integrates visual and non-visual, external and internal information (bottom-up and top-down, respectively); and it possesses the enigmatic ability to take non-conscious decisions about where to focus attention. It is also a sentinel that triggers the subcortical mechanisms which drive social motivation to follow faces from birth and to react automatically to emotional stimuli. Through indirect connections it also simultaneously activates several cortical structures necessary to develop social cognition and to accomplish the multiattentional task required for conscious social interaction in real-life settings. Genetic or non-genetic prenatal or early postnatal factors could disrupt SC functions, resulting in autism; the timing of postnatal biological disruption matches the timing of clinical autism manifestations. Astonishing coincidences between etiologies, clinical manifestations, and cognitive and pathogenic autism theories on one side, and SC functions on the other, are disclosed in this review. Although the visual system dependent on the SC is usually considered accessory to the canonical LGN pathway, its imprinting gives the brain qualitatively specific functions not supplied by any other brain structure.
Affiliation(s)
- Rubin Jure
- Centro Privado de Neurología y Neuropsicología Infanto Juvenil WERNICKE, Córdoba, Argentina
39
Feldman JI, Dunham K, Cassidy M, Wallace MT, Liu Y, Woynaroski TG. Audiovisual multisensory integration in individuals with autism spectrum disorder: A systematic review and meta-analysis. Neurosci Biobehav Rev 2018; 95:220-234. [PMID: 30287245 PMCID: PMC6291229 DOI: 10.1016/j.neubiorev.2018.09.020] [Citation(s) in RCA: 87] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2018] [Revised: 09/10/2018] [Accepted: 09/25/2018] [Indexed: 02/04/2023]
Abstract
An ever-growing literature has aimed to determine how individuals with autism spectrum disorder (ASD) differ from their typically developing (TD) peers on measures of multisensory integration (MSI) and to ascertain the degree to which differences in MSI are associated with the broad range of symptoms associated with ASD. Findings, however, have been highly variable across the studies carried out to date. The present work systematically reviews and quantitatively synthesizes the large literature on audiovisual MSI in individuals with ASD to evaluate the cumulative evidence for (a) group differences between individuals with ASD and TD peers, (b) correlations between MSI and autism symptoms in individuals with ASD, and (c) study-level factors that may moderate findings (i.e., explain differential effects) observed across studies. To identify eligible studies, a comprehensive search strategy was employed using the ProQuest search engine, PubMed database, forward and backward citation searches, direct author contact, and hand-searching of select conference proceedings. A significant between-group difference in MSI was evident in the literature, with individuals with ASD demonstrating worse audiovisual integration on average across studies compared to TD controls. This effect was moderated by mean participant age, such that between-group differences were more pronounced in younger samples. The mean correlation between MSI and autism and related symptomatology was also significant, indicating that increased audiovisual integration in individuals with ASD is associated with better language/communication abilities and/or reduced autism symptom severity in the extant literature. This effect was moderated by whether the stimuli were linguistic versus non-linguistic in nature, such that correlation magnitudes tended to be significantly greater when linguistic stimuli were utilized in the measure of MSI.
Limitations and future directions for primary and meta-analytic research are discussed.
Affiliation(s)
- Jacob I Feldman
- Department of Hearing and Speech Sciences, Vanderbilt University, 1215 21st Ave S, MCE South Tower 8310, Nashville, TN, 37232, USA.
| | - Kacie Dunham
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
| | - Margaret Cassidy
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
| | - Mark T Wallace
- Department of Psychology, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Pharmacology, Vanderbilt University, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, 110 Magnolia Cir, Nashville, TN, 37203, USA; Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN, 37232, USA; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S, MCE South Tower 8310, Nashville, TN, 37232, USA.
| | - Yupeng Liu
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
| | - Tiffany G Woynaroski
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, 110 Magnolia Cir, Nashville, TN, 37203, USA; Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN, 37232, USA; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S, MCE South Tower 8310, Nashville, TN, 37232, USA.
40
Bertini C, Pietrelli M, Braghittoni D, Làdavas E. Pulvinar Lesions Disrupt Fear-Related Implicit Visual Processing in Hemianopic Patients. Front Psychol 2018; 9:2329. [PMID: 30524351 PMCID: PMC6261973 DOI: 10.3389/fpsyg.2018.02329] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2018] [Accepted: 11/06/2018] [Indexed: 11/13/2022] Open
Abstract
The processing of emotional stimuli in the absence of awareness has been widely investigated in patients with lesions to the primary visual pathway since the classical studies on affective blindsight. In addition, recent evidence has shown that in hemianopic patients without blindsight only unseen fearful faces can be implicitly processed, inducing enhanced visual encoding (Cecere et al., 2014) and response facilitation (Bertini et al., 2013, 2017) to stimuli presented in their intact field. This fear-specific facilitation has been suggested to be mediated by activity in the spared visual subcortical pathway, comprising the superior colliculus (SC), the pulvinar and the amygdala. This suggests that the pulvinar might represent a critical relay structure, conveying threat-related visual information through the subcortical visual circuit. To test this hypothesis, hemianopic patients, with or without pulvinar lesions, performed a go/no-go task in which they had to discriminate simple visual stimuli, consisting of Gabor patches, displayed in their intact visual field, during the simultaneous presentation of faces with fearful, happy, and neutral expressions in their blind visual field. In line with previous evidence, hemianopic patients without pulvinar lesions showed response facilitation to stimuli displayed in the intact field only when concurrent fearful faces were shown in their blind field. In contrast, no facilitatory effect was found in hemianopic patients with lesions of the pulvinar. These findings reveal that pulvinar lesions disrupt the implicit visual processing of fearful stimuli in hemianopic patients, therefore suggesting a pivotal role of this structure in relaying fear-related visual information from the SC to the amygdala.
Affiliation(s)
- Caterina Bertini
- Department of Psychology, University of Bologna, Bologna, Italy; Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Cesena, Italy
- Mattia Pietrelli
- Department of Psychology, University of Bologna, Bologna, Italy; Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Cesena, Italy
- Davide Braghittoni
- Department of Psychology, University of Bologna, Bologna, Italy; Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Cesena, Italy
- Elisabetta Làdavas
- Department of Psychology, University of Bologna, Bologna, Italy; Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Cesena, Italy
|
41
|
Elorette C, Forcelli PA, Saunders RC, Malkova L. Colocalization of Tectal Inputs With Amygdala-Projecting Neurons in the Macaque Pulvinar. Front Neural Circuits 2018; 12:91. [PMID: 30405362] [PMCID: PMC6207581] [DOI: 10.3389/fncir.2018.00091] [Citation(s) in RCA: 16]
Abstract
Neuropsychological and neuroimaging studies have suggested the presence of a fast, subcortical route for the processing of emotionally-salient visual information in the primate brain. This putative pathway consists of the superior colliculus (SC), pulvinar and amygdala. While the presence of such a pathway has been confirmed in sub-primate species, it has yet to be documented in the primate brain using conventional anatomical methods. We injected retrograde tracers into the amygdala and anterograde tracers into the colliculus, and examined regions of colocalization of these signals within the pulvinar of the macaque. Anterograde tracers injected into the SC labeled axonal projections within the pulvinar, primarily within the oral, lateral and medial subdivisions. These axonal projections from the colliculus colocalized with cell bodies within the pulvinar that were labeled by retrograde tracer injected into the lateral amygdala. This zone of overlap was most notable in the medial portions of the medial (PM), oral (PO) and inferior pulvinar (PI), and was often densely concentrated in the vicinity of the brachium of the SC. These data provide an anatomical basis for the previously suggested pathway mediating fast processing of emotionally salient information.
Affiliation(s)
- Catherine Elorette
- Interdisciplinary Program in Neuroscience, Georgetown University School of Medicine, Washington, DC, United States
- Department of Pharmacology and Physiology, Georgetown University School of Medicine, Washington, DC, United States
- Patrick A. Forcelli
- Interdisciplinary Program in Neuroscience, Georgetown University School of Medicine, Washington, DC, United States
- Department of Pharmacology and Physiology, Georgetown University School of Medicine, Washington, DC, United States
- Department of Neuroscience, Georgetown University School of Medicine, Washington, DC, United States
- Richard C. Saunders
- Laboratory of Neuropsychology, National Institute of Mental Health (NIMH), Bethesda, MD, United States
- Ludise Malkova
- Interdisciplinary Program in Neuroscience, Georgetown University School of Medicine, Washington, DC, United States
- Department of Pharmacology and Physiology, Georgetown University School of Medicine, Washington, DC, United States
|
42
|
Xu J, Bi T, Wu J, Meng F, Wang K, Hu J, Han X, Zhang J, Zhou X, Keniston L, Yu L. Spatial receptive field shift by preceding cross-modal stimulation in the cat superior colliculus. J Physiol 2018; 596:5033-5050. [PMID: 30144059] [DOI: 10.1113/jp275427] [Citation(s) in RCA: 2]
Abstract
KEY POINTS: It has been known for some time that sensory information of one type can bias the spatial perception of another modality. However, there has been a lack of evidence of this occurring in individual neurons. In the present study, we found that the spatial receptive field of superior colliculus multisensory neurons can be dynamically shifted by a preceding stimulus in a different modality. The extent of the shift depends on both the temporal and spatial gaps between the preceding and following stimuli, as well as on the salience of the preceding stimulus. This result provides a neural mechanism that could underlie cross-modal spatial calibration. ABSTRACT: Psychophysical studies have shown that the different senses can be spatially entrained by each other. This can be observed in phenomena such as ventriloquism, in which a visual stimulus attracts the perceived location of a spatially discordant sound. However, the neural mechanism underlying this cross-modal spatial recalibration has remained unclear, as has whether it takes place dynamically. We explored these issues in multisensory neurons of the cat superior colliculus (SC), a midbrain structure involved in both cross-modal and sensorimotor integration. Sequential cross-modal stimulation showed that the preceding stimulus can shift the receptive field (RF) of the lagging response. This cross-modal spatial calibration took place in both auditory and visual RFs, although auditory RFs shifted slightly more. By contrast, a preceding stimulus from the same modality failed to induce a similarly substantial RF shift. The extent of the RF shift depended on both the temporal and spatial gaps between the preceding and following stimuli, as well as on the salience of the preceding stimulus: a narrow time gap and high stimulus salience induced larger RF shifts. In addition, when visual and auditory stimuli were presented simultaneously, a substantial RF shift toward the location-fixed stimulus was also induced. Taken together, these results reveal an online cross-modal process and reflect the organization of inter-sensory spatial calibration in the SC.
Affiliation(s)
- Jinghong Xu, Tingting Bi, Jing Wu, Fanzhu Meng, Kun Wang, Jiawei Hu, Xiao Han, Jiping Zhang, Xiaoming Zhou, Liping Yu
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, and Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
- Les Keniston
- Department of Physical Therapy, University of Maryland Eastern Shore, Princess Anne, MD, USA
|
43
|
Hoffmann S, Borges U, Bröker L, Laborde S, Liepelt R, Lobinger BH, Löffler J, Musculus L, Raab M. The Psychophysiology of Action: A Multidisciplinary Endeavor for Integrating Action and Cognition. Front Psychol 2018; 9:1423. [PMID: 30210379] [PMCID: PMC6124386] [DOI: 10.3389/fpsyg.2018.01423] [Citation(s) in RCA: 7]
Abstract
There is a vast literature on the integration of action and cognition. Although this broad research area is of great interest to many disciplines, such as sports science, psychology and cognitive neuroscience, few attempts have so far been made to bring the different perspectives together. Our goal is to provide a perspective that sparks a debate across theoretical borders and integrates different disciplines via psychophysiology. To boost advances in this research field, it is necessary not only to become aware of the different relevant areas but also to consider methodological aspects and challenges. We briefly describe the most relevant theoretical accounts of how internal and external information processes or factors interact and, on this basis, argue that research programs should consider three dimensions: (a) the dynamics of movements; (b) multivariate measures; and (c) dynamic statistical parameters. Only with such an extended perspective on theoretical and methodological accounts will it be possible to integrate the dynamics of action into theoretical advances.
Affiliation(s)
- Sven Hoffmann, Uirassu Borges, Laura Bröker, Roman Liepelt, Babett H Lobinger, Jonna Löffler, Lisa Musculus
- Department of Performance Psychology, Institute of Psychology, German Sport University Cologne, Cologne, Germany
- Sylvain Laborde
- Department of Performance Psychology, Institute of Psychology, German Sport University Cologne, Cologne, Germany; EA 4260, Normandie Université, Caen, France
- Markus Raab
- Department of Performance Psychology, Institute of Psychology, German Sport University Cologne, Cologne, Germany; School of Applied Sciences, London South Bank University, London, United Kingdom
|
44
|
Mendez-Balbuena I, Arrieta P, Huidobro N, Flores A, Lemuz-Lopez R, Trenado C, Manjarrez E. Augmenting EEG-global-coherence with auditory and visual noise: Multisensory internal stochastic resonance. Medicine (Baltimore) 2018; 97:e12008. [PMID: 30170407] [PMCID: PMC6393074] [DOI: 10.1097/md.0000000000012008] [Citation(s) in RCA: 9]
Abstract
The present investigation documents the electrophysiological occurrence of multisensory internal stochastic resonance (MISR) in human electroencephalographic (EEG) coherence elicited by auditory and visual noise. We define MISR of EEG coherence as the phenomenon whereby an intermediate level of input noise in one sensory modality enhances EEG coherence in response to another noisy sensory modality. Here, EEG coherence is computed as the global weighted coherence (GWC), modulated by quasi-Brownian noise. Specifically, we examined whether a particular level of auditory noise together with constant visual noise (experimental condition 1), or a specified level of visual noise together with constant auditory noise (experimental condition 2), improves the EEG's GWC. We compared GWC between ongoing EEG basal activity (BA), zero noise (ZN), optimal noise (ON), and high noise (HN). The data disclosed an intermediate level of input noise that enhances the GWC for the majority of the subjects, thus demonstrating for the first time the occurrence of multisensory internal stochastic resonance (SR) in visuo-auditory processing within the central nervous system.
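The stochastic-resonance principle the study builds on, that an intermediate noise level maximizes a system's sensitivity to a weak input, can be illustrated with a minimal threshold-detector simulation. This is a generic sketch, not the authors' EEG/GWC analysis; the signal, threshold, and noise levels are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# A subthreshold periodic signal: on its own it never crosses the threshold.
t = np.arange(0, 20, 0.01)
signal = 0.8 * np.sin(2 * np.pi * 0.5 * t)
threshold = 1.0

def detection_score(noise_sd, n_trials=50):
    """Mean correlation between the signal and the thresholded noisy input."""
    corrs = []
    for _ in range(n_trials):
        noisy = signal + rng.normal(0.0, noise_sd, size=t.size)
        out = (noisy > threshold).astype(float)
        # Guard against a constant output (no threshold crossings).
        corrs.append(np.corrcoef(out, signal)[0, 1] if out.std() > 0 else 0.0)
    return float(np.mean(corrs))

scores = [detection_score(sd) for sd in (0.05, 0.4, 5.0)]  # low, medium, high
# The intermediate noise level should give the highest signal-output
# correlation: the inverted-U signature of stochastic resonance.
```

The same inverted-U logic underlies the study's BA/ZN/ON/HN comparison, with GWC in place of the toy correlation measure used here.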
Affiliation(s)
- Rafael Lemuz-Lopez
- Faculty of Computational Sciences, Benemérita Universidad Autónoma de Puebla, Puebla, México
- Carlos Trenado
- Department of Psychology and Neurosciences, Translational Neuromodulation Unit, Leibniz Research Centre for Working Environment and Human Factors, Technical University Dortmund, Dortmund, Germany
|
45
|
Zhang L, Fu Q, Swanson A, Weitlauf A, Warren Z, Sarkar N. Design and Evaluation of a Collaborative Virtual Environment (CoMove) for Autism Spectrum Disorder Intervention. ACM Trans Access Comput 2018. [DOI: 10.1145/3209687] [Citation(s) in RCA: 4]
Abstract
Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder characterized in part by core deficits in social interaction and communication. A collaborative virtual environment (CVE), a computer-based, distributed virtual space in which multiple users interact with one another and/or with virtual items, has the potential to support flexible, safe, and peer-based social interaction. In this article, we present the design of a CVE system, called CoMove, with the ultimate goal of measuring and potentially enhancing the collaborative interactions and verbal communication of children with ASD as they play collaborative puzzle games with their typically developing (TD) peers in remote locations. CoMove has two distinguishing characteristics: (i) the ability to promote important collaborative behaviors (including information sharing, sequential interactions, and simultaneous interactions) and to provide real-time feedback based on users' game performance; and (ii) an objective way to measure and index important aspects of collaboration and verbal-communication skills during system interaction. A feasibility study with 14 pairs (7 ASD/TD pairs and 7 TD/TD pairs) was conducted as an initial test of CoMove. The results confirmed the system's feasibility and suggested its potential to index important aspects of collaboration and verbal communication.
Affiliation(s)
- Lian Zhang
- Department of Electrical Engineering & Computer Science, Vanderbilt University, Nashville, TN
- Qiang Fu
- Department of Electrical Engineering & Computer Science, Vanderbilt University, Nashville, TN
- Amy Swanson
- Treatment and Research Institute for Autism Spectrum Disorders, Vanderbilt University, Nashville, TN
- Amy Weitlauf
- Treatment and Research Institute for Autism Spectrum Disorders, Vanderbilt University, Nashville, TN
- Zachary Warren
- Treatment and Research Institute for Autism Spectrum Disorders, Vanderbilt University, Nashville, TN
- Nilanjan Sarkar
- Department of Mechanical Engineering, Vanderbilt University, Nashville, TN
|
46
|
Schatton A, Mendoza E, Grube K, Scharff C. FoxP in bees: A comparative study on the developmental and adult expression pattern in three bee species considering isoforms and circuitry. J Comp Neurol 2018. [PMID: 29536541] [DOI: 10.1002/cne.24430] [Citation(s) in RCA: 4]
Abstract
Mutations in the transcription factors FOXP1, FOXP2, and FOXP4 affect human cognition, including language. The FoxP gene locus is evolutionarily ancient and highly conserved in its DNA-binding domain. In Drosophila melanogaster, FoxP has been implicated in courtship behavior, decision making, and specific types of motor learning. Because honeybees (Apis mellifera, Am) excel at navigation and symbolic dance communication, they are a particularly suitable insect species in which to investigate a potential link between neural FoxP expression and cognition. We characterized two AmFoxP isoforms and mapped their expression in the brain during development and in adult foragers. Using a custom-made antiserum and in situ hybridization, we describe 11 AmFoxP-expressing neuron populations. FoxP was expressed in equivalent patterns in two other representatives of Apidae, a closely related dwarf bee and a bumblebee species. Neural tracing revealed that the largest FoxP-expressing neuron cluster in honeybees projects into a posterior tract that connects the optic lobe to the posterior lateral protocerebrum, predicting a function in visual processing. Our data provide an entry point for future experiments assessing the function of FoxP in eusocial Hymenoptera.
Affiliation(s)
- Adriana Schatton, Ezequiel Mendoza, Kathrin Grube, Constance Scharff
- Institute for Animal Behavior, Freie Universität Berlin, Berlin, 14195, Germany
|
47
|
Purpura G, Cioni G, Tinelli F. Multisensory-Based Rehabilitation Approach: Translational Insights from Animal Models to Early Intervention. Front Neurosci 2017; 11:430. [PMID: 28798661] [PMCID: PMC5526840] [DOI: 10.3389/fnins.2017.00430] [Citation(s) in RCA: 24]
Abstract
Multisensory processes permit the combination of several inputs, coming from different sensory systems, allowing for a coherent representation of biological events and facilitating adaptation to the environment. For these reasons, their application in neurological and neuropsychological rehabilitation has grown over recent decades. Recent studies in animal and human models have indicated, on the one hand, that multisensory integration matures gradually during postnatal life, with development closely linked to environment and experience, and, on the other, that modality-specific information seems not to benefit from redundancy across multiple sense modalities and is more readily perceived under unimodal than multimodal stimulation. In this review, the development of multisensory processing is analyzed, highlighting the clinical effects of its manipulation for the rehabilitation of sensory disorders in animal and human models. In addition, new methods of early intervention based on a multisensory rehabilitation approach and their application to different infant populations at risk of neurodevelopmental disabilities are discussed.
Affiliation(s)
- Giulia Purpura
- Department of Developmental Neuroscience, Fondazione Stella Maris (IRCCS), Pisa, Italy
- Giovanni Cioni
- Department of Developmental Neuroscience, Fondazione Stella Maris (IRCCS), Pisa, Italy; Department of Clinical and Experimental Medicine, University of Pisa, Pisa, Italy
- Francesca Tinelli
- Department of Developmental Neuroscience, Fondazione Stella Maris (IRCCS), Pisa, Italy
|
48
|
|
49
|
Gauy MM, Meier F, Steger A. Multiassociative Memory: Recurrent Synapses Increase Storage Capacity. Neural Comput 2017; 29:1375-1405. [DOI: 10.1162/neco_a_00954] [Citation(s) in RCA: 2]
Abstract
The connection density of nearby neurons in the cortex has been observed to be around 0.1, whereas longer-range connections are present at a much sparser density (Kalisman, Silberberg, & Markram, 2005). We propose a memory association model that qualitatively explains these empirical observations. The model we consider is a multiassociative, sparse, Willshaw-like model consisting of binary threshold neurons and binary synapses. It uses recurrent synapses for iterative retrieval of stored memories. We quantify the usefulness of recurrent synapses by simulating the model for small network sizes and by doing a precise mathematical analysis for large network sizes. Given the network parameters, we can determine the precise values of recurrent and afferent synapse densities that optimize the storage capacity of the network. If the network size is like that of a cortical column, the predicted optimal recurrent density lies in a range that is compatible with biological measurements. Furthermore, we show that our model is able to surpass the standard Willshaw model in the multiassociative case if the information capacity is normalized per strong synapse or per bits required to store the model, as considered in Knoblauch, Palm, and Sommer (2010).
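The storage scheme described here, binary threshold neurons with binary clipped-Hebbian synapses, can be sketched as a minimal Willshaw-style hetero-associative memory. This is a simplified illustration with arbitrary network parameters; it omits the paper's recurrent synapses, iterative retrieval, and capacity analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200           # neurons per layer
k = 8             # active units per sparse binary pattern
n_patterns = 30   # stored associations (well below capacity for this n, k)

def sparse_pattern():
    v = np.zeros(n, dtype=np.uint8)
    v[rng.choice(n, size=k, replace=False)] = 1
    return v

# Store hetero-associations x -> y with binary (clipped Hebbian) synapses:
# a synapse is switched on if its pre- and postsynaptic units are ever co-active.
X = np.array([sparse_pattern() for _ in range(n_patterns)])
Y = np.array([sparse_pattern() for _ in range(n_patterns)])
W = np.zeros((n, n), dtype=np.uint8)
for x, y in zip(X, Y):
    W |= np.outer(y, x)

def retrieve(x):
    # Binary threshold units: a unit fires iff all k active inputs reach it.
    s = W.astype(int) @ x.astype(int)
    return (s >= k).astype(np.uint8)

recalled = retrieve(X[0])
# Every unit of the stored Y[0] is guaranteed to fire; at higher memory
# loads, additional spurious units may fire as the synapse matrix saturates.
```

In the paper's model, recurrent synapses within the output layer are then used to iteratively clean up such spurious activations, which is what increases the storage capacity.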
Affiliation(s)
- Marcelo Matheus Gauy
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich 8092, Switzerland
- Florian Meier
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich 8092, Switzerland
- Angelika Steger
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich 8092, Switzerland; Collegium Helveticum, Zurich 8090, Switzerland
|
50
|
Purpura G, Cioni G, Tinelli F. Development of visuo-haptic transfer for object recognition in typical preschool and school-aged children. Child Neuropsychol 2017; 24:657-670. [DOI: 10.1080/09297049.2017.1316974] [Citation(s) in RCA: 6]
Affiliation(s)
- Giulia Purpura
- Department of Developmental Neuroscience, IRCCS Stella Maris Foundation, Pisa, Italy
- Giovanni Cioni
- Department of Developmental Neuroscience, IRCCS Stella Maris Foundation, Pisa, Italy
- Department of Clinical and Experimental Medicine, University of Pisa, Pisa, Italy
- Francesca Tinelli
- Department of Developmental Neuroscience, IRCCS Stella Maris Foundation, Pisa, Italy
|