1. Karunanayaka PR, Lu J, Elyan R, Yang QX, Sathian K. Olfactory-trigeminal integration in the primary olfactory cortex. Hum Brain Mapp 2024; 45:e26772. PMID: 38962966; PMCID: PMC11222875; DOI: 10.1002/hbm.26772.
Abstract
Humans naturally integrate signals from the olfactory and intranasal trigeminal systems. A tight interplay has been demonstrated between these two systems, yet the neural circuitry mediating olfactory-trigeminal (OT) integration remains poorly understood. Using functional magnetic resonance imaging (fMRI) combined with psychophysics, this study investigated the neural mechanisms underlying OT integration. Fifteen participants with normal olfactory function performed a localization task with air-puff stimuli, phenylethyl alcohol (PEA; rose odor), or a combination thereof while being scanned. The ability to localize PEA to either nostril was at chance. Yet, its presence significantly improved the localization accuracy of weak, but not strong, air-puffs when both stimuli were delivered concurrently to the same nostril, but not when different nostrils received the two stimuli. This enhancement in localization accuracy, exemplifying the principles of spatial coincidence and inverse effectiveness in multisensory integration, was associated with multisensory integrative activity in the primary olfactory (POC), orbitofrontal (OFC), superior temporal (STC), inferior parietal (IPC) and cingulate cortices, and in the cerebellum. Multisensory enhancement in most of these regions correlated with behavioral multisensory enhancement, as did increases in connectivity between some of these regions. We interpret these findings as indicating that the POC is part of a distributed brain network mediating integration between the olfactory and trigeminal systems.

Practitioner points:
- Psychophysical and neuroimaging study of olfactory-trigeminal (OT) integration.
- Behavior, cortical activity, and network connectivity show OT integration.
- OT integration obeys the principles of inverse effectiveness and spatial coincidence.
- Behavioral and neural measures of OT integration are correlated.
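As an editorial aside, the inverse-effectiveness pattern described in this abstract can be sketched numerically. The accuracy values and the enhancement index below are illustrative assumptions, not data or analysis code from the study:

```python
# Hypothetical sketch of multisensory enhancement and inverse effectiveness.
# All accuracy values are invented for illustration only.

def enhancement_index(acc_multisensory: float, acc_best_unisensory: float) -> float:
    """Percent gain of the multisensory condition over the best
    unisensory condition (a common multisensory-enhancement metric)."""
    return 100.0 * (acc_multisensory - acc_best_unisensory) / acc_best_unisensory

# Inverse effectiveness: the weaker stimulus shows the larger multisensory gain.
weak_puff = {"unisensory": 0.55, "with_odor": 0.70}    # weak air-puff (assumed)
strong_puff = {"unisensory": 0.90, "with_odor": 0.92}  # strong air-puff (assumed)

gain_weak = enhancement_index(weak_puff["with_odor"], weak_puff["unisensory"])
gain_strong = enhancement_index(strong_puff["with_odor"], strong_puff["unisensory"])

print(f"weak-stimulus gain:   {gain_weak:.1f}%")    # larger relative benefit
print(f"strong-stimulus gain: {gain_strong:.1f}%")  # near-ceiling, small benefit
assert gain_weak > gain_strong  # the inverse-effectiveness pattern
```

With these assumed numbers the weak stimulus gains roughly 27% while the strong one gains about 2%, mirroring the weak-versus-strong air-puff asymmetry the abstract reports.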
Affiliation(s)
- Prasanna R. Karunanayaka
- Department of Radiology, Pennsylvania State University College of Medicine, Hershey, Pennsylvania, USA
- Department of Neural and Behavioral Sciences, Pennsylvania State University College of Medicine, Hershey, Pennsylvania, USA
- Department of Public Health Sciences, Pennsylvania State University College of Medicine, Hershey, Pennsylvania, USA
- Jiaming Lu
- Department of Radiology, Pennsylvania State University College of Medicine, Hershey, Pennsylvania, USA
- Drum Tower Hospital, Medical School of Nanjing University, Nanjing, China
- Rommy Elyan
- Department of Radiology, Pennsylvania State University College of Medicine, Hershey, Pennsylvania, USA
- Qing X. Yang
- Department of Radiology, Pennsylvania State University College of Medicine, Hershey, Pennsylvania, USA
- Department of Neurosurgery, Pennsylvania State University College of Medicine, Hershey, Pennsylvania, USA
- K. Sathian
- Department of Neural and Behavioral Sciences, Pennsylvania State University College of Medicine, Hershey, Pennsylvania, USA
- Department of Neurology, Penn State Health Milton S. Hershey Medical Center, Hershey, Pennsylvania, USA
- Department of Psychology, Pennsylvania State University College of Liberal Arts, State College, Pennsylvania, USA
2. Liu W, Cheng Y, Yuan X, Jiang Y. Linear integration of multisensory signals in the pupil. Psychophysiology 2024; 61:e14453. PMID: 37813676; DOI: 10.1111/psyp.14453.
Abstract
The pupil of the eye responds to various salient signals from different modalities, but there is no consensus on how these pupillary responses are integrated when multiple signals appear simultaneously. Both linear and nonlinear integration have been found previously. The current study aimed to reexamine the nature of pupillary integration, and specifically focused on the early, transient pupillary responses due to its close relationship with orienting. To separate the early pupillary responses out of the pupil time series, we adopted a pupil oscillation paradigm in which sensory stimuli were periodically presented. The simulation analysis confirmed that the amplitude of the pupil oscillation, induced by stimuli repeatedly presented at relatively high rates, can precisely reflect the early, transient pupillary responses without involving the late and sustained pupillary responses. The experimental results then showed that the amplitude of pupil oscillation induced by a series of simultaneous audiovisual stimuli equaled to a linear summation of the oscillatory amplitudes when unisensory stimuli were presented alone. Moreover, the tonic arousal levels, indicated by the baseline pupil size, cannot shift the summation from linear to nonlinear. These findings together support the additive nature of multisensory pupillary integration for the early, orienting-related pupillary responses. The additive nature of pupillary integration further implies that multiple pupillary responses may be independent of each other, irrespective of their potential cognitive and neural drivers.
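The additivity test at the heart of this abstract can be illustrated with synthetic signals. The sketch below is not the authors' analysis: sampling rate, stimulation rate, amplitudes, and the assumption of phase-aligned responses are all invented, and it simply shows that for phase-aligned oscillations the Fourier amplitude at the stimulation frequency of a summed (audiovisual) signal equals the sum of the unisensory amplitudes:

```python
# Hedged illustration of the linear-summation logic with synthetic pupil
# oscillations (all parameters assumed, not taken from the study).
import numpy as np

FS = 50.0       # sampling rate in Hz (assumed)
F_STIM = 1.25   # periodic stimulus presentation rate in Hz (assumed)
N = 2000        # 40 s of signal; F_STIM falls exactly on an FFT bin
t = np.arange(N) / FS

def amplitude_at(signal: np.ndarray, freq: float) -> float:
    """Amplitude of the Fourier component of `signal` at `freq`."""
    spectrum = np.fft.rfft(signal) / len(signal) * 2
    freqs = np.fft.rfftfreq(len(signal), 1 / FS)
    return np.abs(spectrum[np.argmin(np.abs(freqs - freq))])

# Synthetic, phase-aligned unisensory oscillations and their additive sum.
pupil_a = 0.10 * np.sin(2 * np.pi * F_STIM * t)   # auditory-only response
pupil_v = 0.15 * np.sin(2 * np.pi * F_STIM * t)   # visual-only response
pupil_av = pupil_a + pupil_v                      # linear-integration hypothesis

amp_a = amplitude_at(pupil_a, F_STIM)
amp_v = amplitude_at(pupil_v, F_STIM)
amp_av = amplitude_at(pupil_av, F_STIM)
print(f"A: {amp_a:.3f}  V: {amp_v:.3f}  AV: {amp_av:.3f}")
assert abs(amp_av - (amp_a + amp_v)) < 1e-6  # additive prediction holds
```

A nonlinear (e.g., saturating) integration rule would break this equality, which is the contrast the study's oscillation paradigm was designed to detect.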
Affiliation(s)
- Wenjie Liu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
- Yuhui Cheng
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
- Xiangyong Yuan
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
- Yi Jiang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
3. Ross LA, Molholm S, Butler JS, Del Bene VA, Brima T, Foxe JJ. Neural correlates of audiovisual narrative speech perception in children and adults on the autism spectrum: A functional magnetic resonance imaging study. Autism Res 2024; 17:280-310. PMID: 38334251; DOI: 10.1002/aur.3104.
Abstract
Autistic individuals show substantially reduced benefit from observing visual articulations during audiovisual speech perception, a multisensory integration deficit that is particularly relevant to social communication. This has mostly been studied using simple syllabic or word-level stimuli, and it remains unclear how altered lower-level multisensory integration translates to the processing of more complex, natural multisensory stimulus environments in autism. Here, functional neuroimaging was used to compare the neural correlates of audiovisual gain (AV-gain) in 41 autistic individuals with those of 41 age-matched non-autistic controls presented with a complex audiovisual narrative. Participants were presented with continuous narration of a story in auditory-alone, visual-alone, and both synchronous and asynchronous audiovisual speech conditions. We hypothesized that previously identified differences in audiovisual speech processing in autism would be characterized by activation differences in brain regions well known to be associated with audiovisual enhancement in neurotypicals. However, our results did not provide evidence for altered processing of the auditory-alone, visual-alone, or audiovisual conditions, or of AV-gain, in regions associated with the respective task when comparing activation patterns between groups. Instead, we found that autistic individuals responded with higher activations in mostly frontal regions where activation in the experimental conditions was below baseline (de-activations) in the control group. These frontal effects were observed in both unisensory and audiovisual conditions, suggesting that these altered activations were not specific to multisensory processing but reflect more general mechanisms, such as altered disengagement of Default Mode Network processes during observation of the language stimulus across conditions.
Affiliation(s)
- Lars A Ross
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, USA
- Department of Imaging Sciences, University of Rochester Medical Center, University of Rochester School of Medicine and Dentistry, Rochester, New York, USA
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, USA
- Sophie Molholm
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, USA
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, USA
- John S Butler
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, USA
- School of Mathematics and Statistics, Technological University Dublin, City Campus, Dublin, Ireland
- Victor A Del Bene
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, USA
- Heersink School of Medicine, Department of Neurology, University of Alabama at Birmingham, Birmingham, Alabama, USA
- Tufikameni Brima
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, USA
- John J Foxe
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, USA
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, USA
4. Heß T, Themann P, Oehlwein C, Milani TL. Does Impaired Plantar Cutaneous Vibration Perception Contribute to Axial Motor Symptoms in Parkinson's Disease? Effects of Medication and Subthalamic Nucleus Deep Brain Stimulation. Brain Sci 2023; 13:1681. PMID: 38137129; PMCID: PMC10742284; DOI: 10.3390/brainsci13121681.
Abstract
OBJECTIVE: To investigate whether impaired plantar cutaneous vibration perception contributes to axial motor symptoms in Parkinson's disease (PD), and whether anti-parkinsonian medication and subthalamic nucleus deep brain stimulation (STN-DBS) have different effects. METHODS: Three groups were evaluated: PD patients in the medication "on" state (PD-MED), PD patients in the medication "on" state who were additionally "on" STN-DBS (PD-MED-DBS), and healthy subjects (HS) as a reference. Motor performance was analyzed using a pressure distribution platform. Plantar cutaneous vibration perception thresholds (VPTs) were investigated using a customized vibration exciter at 30 Hz. RESULTS: Motor performance of PD-MED and PD-MED-DBS was characterized by greater postural sway, smaller limits-of-stability ranges, and slower gait due to shorter strides, fewer steps per minute, and broader stride widths compared to HS. Comparing patient groups, PD-MED-DBS showed better overall motor performance than PD-MED, particularly for the functional limits of stability and gait. VPTs were significantly higher for PD-MED than for HS, suggesting impaired plantar cutaneous vibration perception in PD; however, PD-MED-DBS showed less impaired cutaneous vibration perception than PD-MED. CONCLUSIONS: PD patients show poorer motor performance than healthy subjects. Anti-parkinsonian medication in tandem with STN-DBS appears superior to medication alone for normalizing both axial motor symptoms and plantar cutaneous vibration perception. Consequently, based on our results and the findings of the literature, impaired plantar cutaneous vibration perception might contribute to axial motor symptoms in PD.
Affiliation(s)
- Tobias Heß
- Department of Human Locomotion, Chemnitz University of Technology, 09126 Chemnitz, Germany
- Peter Themann
- Department of Neurology and Parkinson, Clinic at Tharandter Forest, 09633 Halsbruecke, Germany
- Christian Oehlwein
- Neurological Outpatient Clinic for Parkinson Disease and Deep Brain Stimulation, 07551 Gera, Germany
- Thomas L. Milani
- Department of Human Locomotion, Chemnitz University of Technology, 09126 Chemnitz, Germany
5. Luo X, Qu H, Wang Y, Yi Z, Zhang J, Zhang M. Supervised Learning in Multilayer Spiking Neural Networks With Spike Temporal Error Backpropagation. IEEE Trans Neural Netw Learn Syst 2023; 34:10141-10153. PMID: 35436200; DOI: 10.1109/tnnls.2022.3164930.
Abstract
Brain-inspired spiking neural networks (SNNs) offer the advantages of lower power consumption and powerful computing capability. However, the lack of effective learning algorithms has obstructed the theoretical advance and applications of SNNs. Most existing learning algorithms for SNNs are based on synaptic weight adjustment. However, neuroscience findings confirm that synaptic delays can also be modulated to play an important role in the learning process. Here, we propose a gradient descent-based learning algorithm for synaptic delays to enhance the sequential learning performance of a single spiking neuron. Moreover, we extend the proposed method to multilayer SNNs with spike temporal-based error backpropagation. In the proposed multilayer learning algorithm, information is encoded in the relative timing of individual neuronal spikes, and learning is performed based on the exact derivatives of the postsynaptic spike times with respect to presynaptic spike times. Experimental results on both synthetic and realistic datasets show significant improvements in learning efficiency and accuracy over existing spike temporal-based learning algorithms. We also evaluate the proposed learning method in an SNN-based multimodal computational model for audiovisual pattern recognition, where it achieves better performance than its counterparts.
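The core idea of delay learning via spike-time derivatives can be sketched in a toy, single-neuron setting. This is not the authors' algorithm: the spike-response kernel, weights, spike times, target time, and learning rate below are all assumptions chosen for illustration. The output spike time is defined implicitly by the threshold crossing, so its derivative with respect to each delay follows from the implicit-function theorem, and gradient descent on the delays moves the spike toward a target time:

```python
# Toy sketch (illustrative assumptions throughout): learn synaptic delays so
# a spike-response-model neuron fires at a target time, using the derivative
# of the output spike time with respect to each delay.
import numpy as np

TAU, THETA = 3.0, 1.0             # kernel time constant (ms), firing threshold

def eps(s):                        # alpha-shaped spike-response kernel
    return np.where(s > 0, (s / TAU) * np.exp(1 - s / TAU), 0.0)

def eps_prime(s):                  # derivative of the kernel, d(eps)/ds
    return np.where(s > 0, np.exp(1 - s / TAU) * (1 - s / TAU) / TAU, 0.0)

def first_spike_time(w, t_in, d, t_grid):
    """First crossing of u(t) = sum_i w_i * eps(t - t_in_i - d_i) over THETA."""
    u = (w * eps(t_grid[:, None] - t_in - d)).sum(axis=1)
    crossed = np.nonzero(u >= THETA)[0]
    return t_grid[crossed[0]] if crossed.size else None

w = np.array([0.8, 0.6, 0.9])      # fixed synaptic weights (assumed)
t_in = np.array([0.0, 1.0, 2.0])   # presynaptic spike times in ms (assumed)
d = np.array([1.0, 1.0, 1.0])      # learnable synaptic delays (ms)
t_grid = np.arange(0.0, 30.0, 0.01)
t_target, eta = 4.0, 0.5           # target output spike time, learning rate

for _ in range(200):
    t_out = first_spike_time(w, t_in, d, t_grid)
    if t_out is None:
        break                              # neuron fell silent; stop
    s = t_out - t_in - d
    du_dt = (w * eps_prime(s)).sum()       # membrane slope at the spike
    dtout_dd = w * eps_prime(s) / du_dt    # implicit-function theorem
    d = np.clip(d - eta * (t_out - t_target) * dtout_dd, 0.0, None)

t_out = first_spike_time(w, t_in, d, t_grid)
print(f"output spike at {t_out:.2f} ms (target {t_target} ms)")
```

The paper's multilayer method chains analogous derivatives of postsynaptic spike times with respect to presynaptic spike times through the layers; this sketch shows only the single-neuron, delay-only ingredient.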
6. Romanski LM, Sharma KK. Multisensory interactions of face and vocal information during perception and memory in ventrolateral prefrontal cortex. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220343. PMID: 37545305; PMCID: PMC10404928; DOI: 10.1098/rstb.2022.0343.
Abstract
The ventral frontal lobe is a critical node in the circuit that underlies communication, a multisensory process where sensory features of faces and vocalizations come together. The neural basis of face and vocal integration is a topic of great importance since the integration of multiple sensory signals is essential for the decisions that govern our social interactions. Investigations have shown that the macaque ventrolateral prefrontal cortex (VLPFC), a proposed homologue of the human inferior frontal gyrus, is involved in the processing, integration and remembering of audiovisual signals. Single neurons in VLPFC encode and integrate species-specific faces and corresponding vocalizations. During working memory, VLPFC neurons maintain face and vocal information online and exhibit selective activity for face and vocal stimuli. Population analyses indicate that identity, a critical feature of social stimuli, is encoded by VLPFC neurons and dictates the structure of dynamic population activity in the VLPFC during the perception of vocalizations and their corresponding facial expressions. These studies suggest that VLPFC may play a primary role in integrating face and vocal stimuli with contextual information, in order to support decision making during social communication. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Lizabeth M. Romanski
- Department of Neuroscience, University of Rochester School of Medicine, Rochester, NY 14642, USA
- Keshov K. Sharma
- Department of Neuroscience, University of Rochester School of Medicine, Rochester, NY 14642, USA
7. Jiang Y, Qiao R, Shi Y, Tang Y, Hou Z, Tian Y. The effects of attention in auditory-visual integration revealed by time-varying networks. Front Neurosci 2023; 17:1235480. PMID: 37600005; PMCID: PMC10434229; DOI: 10.3389/fnins.2023.1235480.
Abstract
Attention and audiovisual integration are crucial subjects in the field of brain information processing. Many previous studies have sought to determine the relationship between them through specific experiments but have failed to reach a unified conclusion. These studies explored the relationship within the frameworks of early, late, and parallel integration, though network analysis has been employed only sparingly. In this study, we employed time-varying network analysis, which offers comprehensive and dynamic insight into cognitive processing, to explore the relationship between attention and auditory-visual integration, combining high-spatial-resolution functional magnetic resonance imaging (fMRI) with high-temporal-resolution electroencephalography (EEG). First, a generalized linear model (GLM) was employed to find task-related fMRI activations, which were selected as regions of interest (ROIs) to serve as nodes of the time-varying network. Then, the electrical activity of the auditory-visual cortex was estimated via the normalized minimum norm estimation (MNE) source localization method. Finally, the time-varying network was constructed using the adaptive directed transfer function (ADTF) method. Task-related fMRI activations were mainly observed in the bilateral temporoparietal junction (TPJ), superior temporal gyrus (STG), and primary visual and auditory areas, and the time-varying network analysis revealed that V1/A1↔STG connectivity occurred before TPJ↔STG connectivity. These results support the view that auditory-visual integration occurs before attention, in line with the early integration framework.
Affiliation(s)
- Yuhao Jiang
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Central Nervous System Drug Key Laboratory of Sichuan Province, Luzhou, China
- Rui Qiao
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Yupan Shi
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Yi Tang
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Zhengjun Hou
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Yin Tian
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
8. Ahmed F, Nidiffer AR, O'Sullivan AE, Zuk NJ, Lalor EC. The integration of continuous audio and visual speech in a cocktail-party environment depends on attention. Neuroimage 2023; 274:120143. PMID: 37121375; DOI: 10.1016/j.neuroimage.2023.120143.
Abstract
In noisy environments, our ability to understand speech benefits greatly from seeing the speaker's face. This is attributed to the brain's ability to integrate audio and visual information, a process known as multisensory integration. In addition, selective attention plays an enormous role in what we understand, the so-called cocktail-party phenomenon. But how attention and multisensory integration interact remains incompletely understood, particularly in the case of natural, continuous speech. Here, we addressed this issue by analyzing EEG data recorded from participants who undertook a multisensory cocktail-party task using natural speech. To assess multisensory integration, we modeled the EEG responses to the speech in two ways. The first assumed that audiovisual speech processing is simply a linear combination of audio speech processing and visual speech processing (an A + V model), while the second allowed for the possibility of audiovisual interactions (an AV model). Applying these models to the data revealed that EEG responses to attended audiovisual speech were better explained by an AV model, providing evidence for multisensory integration. In contrast, responses to unattended audiovisual speech were best captured by an A + V model, suggesting that multisensory integration is suppressed for unattended speech. Follow-up analyses revealed limited evidence for early multisensory integration of unattended AV speech, with no integration at later levels of processing. We take these findings as evidence that the integration of natural audio and visual speech occurs at multiple levels of processing in the brain, each of which can be differentially affected by attention.
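The A + V versus AV model comparison can be illustrated with a minimal synthetic example. This is not the paper's analysis pipeline (which fits temporal response functions to EEG); it is a hedged sketch, with invented features, coefficients, and noise, showing the underlying logic: when the response contains a genuine audiovisual interaction, a model with an interaction term explains more variance than the purely additive one:

```python
# Hedged illustration of the "A + V" vs "AV" model logic with synthetic data
# and ordinary least squares (all features and coefficients are assumptions).
import numpy as np

rng = np.random.default_rng(1)
n = 5000
audio = rng.standard_normal(n)    # stand-in for an audio speech feature
visual = rng.standard_normal(n)   # stand-in for a visual speech feature

# Simulated neural response containing a genuine multisensory interaction.
eeg = 1.0 * audio + 0.8 * visual + 0.5 * audio * visual + rng.standard_normal(n)

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """Variance explained by a least-squares fit of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

X_additive = np.column_stack([audio, visual])                  # "A + V" model
X_interact = np.column_stack([audio, visual, audio * visual])  # "AV" model

r2_add = r_squared(X_additive, eeg)
r2_av = r_squared(X_interact, eeg)
print(f"A+V model R^2: {r2_add:.3f}   AV model R^2: {r2_av:.3f}")
assert r2_av > r2_add  # the interaction is captured only by the AV model
```

In the study's terms, the attended-speech EEG behaved like this synthetic case (AV wins), while unattended speech was fit equally well by the additive model.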
Affiliation(s)
- Farhin Ahmed
- Department of Biomedical Engineering, Department of Neuroscience, and Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY 14627, USA
- Aaron R Nidiffer
- Department of Biomedical Engineering, Department of Neuroscience, and Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY 14627, USA
- Aisling E O'Sullivan
- Department of Biomedical Engineering, Department of Neuroscience, and Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY 14627, USA; School of Engineering, Trinity Centre for Biomedical Engineering, and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin 2, Ireland
- Nathaniel J Zuk
- Edmond & Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, Israel
- Edmund C Lalor
- Department of Biomedical Engineering, Department of Neuroscience, and Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY 14627, USA; School of Engineering, Trinity Centre for Biomedical Engineering, and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin 2, Ireland.
9. Vakhrushev R, Cheng FPH, Schacht A, Pooresmaeili A. Differential effects of intra-modal and cross-modal reward value on perception: ERP evidence. PLoS One 2023; 18:e0287900. PMID: 37390067; PMCID: PMC10313067; DOI: 10.1371/journal.pone.0287900.
Abstract
In natural environments, objects comprise multiple features from the same or different sensory modalities, but it is not known how perception of an object is affected by the value associations of its constituent parts. The present study compares intra- and cross-modal value-driven effects on behavioral and electrophysiological correlates of perception. Human participants first learned the reward associations of visual and auditory cues. Subsequently, they performed a visual discrimination task in the presence of previously rewarded, task-irrelevant visual or auditory cues (intra- and cross-modal cues, respectively). During the conditioning phase, when reward associations were learned and reward cues were the target of the task, high-value stimuli of both modalities enhanced the electrophysiological correlates of sensory processing at posterior electrodes. During the post-conditioning phase, when reward delivery was halted and previously rewarded stimuli were task-irrelevant, cross-modal value significantly enhanced the behavioral measures of visual sensitivity, whereas intra-modal value produced only an insignificant decrement. Analysis of the simultaneously recorded event-related potentials (ERPs) at posterior electrodes revealed similar findings: an early (90-120 ms) suppression of ERPs evoked by high-value intra-modal stimuli, and a later value-driven modulation by cross-modal stimuli, with enhanced response positivity for high- compared to low-value stimuli starting in the N1 window (180-250 ms) and extending to the P3 (300-600 ms) responses. These results indicate that sensory processing of a compound stimulus comprising a visual target and task-irrelevant visual or auditory cues is modulated by the reward value of both sensory modalities, but such modulations rely on distinct underlying mechanisms.
Affiliation(s)
- Roman Vakhrushev
- Perception and Cognition Lab, European Neuroscience Institute Goettingen- A Joint Initiative of the University Medical Center Goettingen and the Max-Planck-Society, Goettingen, Germany
- Felicia Pei-Hsin Cheng
- Perception and Cognition Lab, European Neuroscience Institute Goettingen- A Joint Initiative of the University Medical Center Goettingen and the Max-Planck-Society, Goettingen, Germany
- Anne Schacht
- Affective Neuroscience and Psychophysiology Laboratory, Georg-Elias-Müller-Institute of Psychology, Georg-August University, Goettingen, Germany
- Arezoo Pooresmaeili
- Perception and Cognition Lab, European Neuroscience Institute Goettingen- A Joint Initiative of the University Medical Center Goettingen and the Max-Planck-Society, Goettingen, Germany
10. Pinardi M, Di Stefano N, Di Pino G, Spence C. Exploring crossmodal correspondences for future research in human movement augmentation. Front Psychol 2023; 14:1190103. PMID: 37397340; PMCID: PMC10308310; DOI: 10.3389/fpsyg.2023.1190103.
Abstract
"Crossmodal correspondences" are the consistent mappings between perceptual dimensions or stimuli from different sensory domains, which have been widely observed in the general population and investigated by experimental psychologists in recent years. At the same time, the emerging field of human movement augmentation (i.e., the enhancement of an individual's motor abilities by means of artificial devices) has been struggling with the question of how to relay supplementary information concerning the state of the artificial device and its interaction with the environment to the user, which may help the latter to control the device more effectively. To date, this challenge has not been explicitly addressed by capitalizing on our emerging knowledge concerning crossmodal correspondences, despite these being tightly related to multisensory integration. In this perspective paper, we introduce some of the latest research findings on the crossmodal correspondences and their potential role in human augmentation. We then consider three ways in which the former might impact the latter, and the feasibility of this process. First, crossmodal correspondences, given the documented effect on attentional processing, might facilitate the integration of device status information (e.g., concerning position) coming from different sensory modalities (e.g., haptic and visual), thus increasing their usefulness for motor control and embodiment. Second, by capitalizing on their widespread and seemingly spontaneous nature, crossmodal correspondences might be exploited to reduce the cognitive burden caused by additional sensory inputs and the time required for the human brain to adapt the representation of the body to the presence of the artificial device. Third, to accomplish the first two points, the benefits of crossmodal correspondences should be maintained even after sensory substitution, a strategy commonly used when implementing supplementary feedback.
Affiliation(s)
- Mattia Pinardi
- NeXT Lab, Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
- Nicola Di Stefano
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Giovanni Di Pino
- NeXT Lab, Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
- Charles Spence
- Crossmodal Research Laboratory, University of Oxford, Oxford, United Kingdom
11
Lin YR, Chi CH, Chang YL. Differential decay of gist and detail memory in older adults with amnestic mild cognitive impairment. Cortex 2023; 164:112-128. [PMID: 37207409 DOI: 10.1016/j.cortex.2023.04.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2022] [Revised: 02/19/2023] [Accepted: 04/11/2023] [Indexed: 05/21/2023]
Abstract
Amnestic mild cognitive impairment (aMCI) has been identified as a risk factor for dementia due to Alzheimer's disease. The medial temporal structures, which are crucial for memory processing, are the earliest affected regions in the brains of patients with aMCI, and episodic memory performance has been identified as a reliable way to discriminate between patients with aMCI and cognitively normal older adults. However, whether the detail and gist memory of patients with aMCI and cognitively normal older adults decay differently remains unclear. In this study, we hypothesized that detail and gist memory would be retrieved differentially, with a larger group performance gap in detail memory than in gist memory. In addition, we explored whether the group performance gap in detail and gist memory would widen over a 14-day period. Furthermore, we hypothesized that unisensory (audio-only) and multisensory (audiovisual) encoding would lead to differences in retrieval, with the multisensory condition reducing the between- and within-group performance gaps observed under the unisensory condition. We conducted analyses of covariance controlling for age, sex, and education, as well as correlational analyses, to examine behavioral performance and the association between behavioral data and brain variables. Compared with cognitively normal older adults, the patients with aMCI performed poorly on both detail and gist memory tests, and this performance gap persisted over time. Moreover, the memory performance of the patients with aMCI was enhanced by the provision of multisensory information, and bimodal input was significantly associated with medial temporal structure variables. Overall, our findings suggest that detail and gist memory decay differently, with a longer-lasting group gap in gist memory than in detail memory. Multisensory encoding effectively reduced or overcame the between- and within-group gaps between time intervals, especially for gist memory, compared with unisensory encoding.
Affiliation(s)
- Yu-Ruei Lin
- Department of Psychology, College of Science, National Taiwan University, Taipei, Taiwan
- Chia-Hsing Chi
- Department of Psychology, College of Science, National Taiwan University, Taipei, Taiwan
- Yu-Ling Chang
- Department of Psychology, College of Science, National Taiwan University, Taipei, Taiwan; Department of Neurology, National Taiwan University Hospital, College of Medicine, National Taiwan University, Taipei, Taiwan; Neurobiology and Cognitive Science Center, National Taiwan University, Taipei, Taiwan; Center for Artificial Intelligence and Advanced Robotics, National Taiwan University, Taipei, Taiwan.
12
Scheliga S, Kellermann T, Lampert A, Rolke R, Spehr M, Habel U. Neural correlates of multisensory integration in the human brain: an ALE meta-analysis. Rev Neurosci 2023; 34:223-245. [PMID: 36084305 DOI: 10.1515/revneuro-2022-0065] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Accepted: 07/22/2022] [Indexed: 02/07/2023]
Abstract
Previous fMRI research identified the superior temporal sulcus as a central integration area for audiovisual stimuli. However, less is known about a general multisensory integration network across the senses. Therefore, we conducted an activation likelihood estimation (ALE) meta-analysis across multiple sensory modalities to identify a common brain network. We included 49 studies covering all Aristotelian senses, i.e., auditory, visual, tactile, gustatory, and olfactory stimuli. The analysis revealed significant activation in the bilateral superior temporal gyrus, middle temporal gyrus, thalamus, right insula, and left inferior frontal gyrus. We assume these regions to be part of a general multisensory integration network comprising different functional roles. Here, the thalamus operates as a first subcortical relay projecting sensory information to higher cortical integration centers in the superior temporal gyrus/sulcus, while conflict-processing brain regions such as the insula and inferior frontal gyrus facilitate the integration of incongruent information. We additionally performed meta-analytic connectivity modelling and found that each brain region showed co-activations within the identified multisensory integration network. Therefore, by including multiple sensory modalities in our meta-analysis, the results may provide evidence for a common brain network that supports different functional roles for multisensory integration.
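The ALE logic behind such a meta-analysis can be conveyed in miniature: each experiment contributes a modeled activation (MA) map built by placing a Gaussian kernel at every reported focus, and the ALE score at each voxel is the probabilistic union of the MA maps across experiments. The sketch below is a toy 1-D illustration only, not the actual ALE implementation; the grid, foci, kernel width, and peak value are all invented.

```python
import numpy as np

def modeled_activation(grid, foci, sigma=2.0, peak=0.9):
    """Per-experiment modeled activation (MA) map: probability that at
    least one of the experiment's foci activates each grid point, with
    Gaussian spatial uncertainty around every reported focus."""
    ma = np.zeros_like(grid)
    for focus in foci:
        p = peak * np.exp(-((grid - focus) ** 2) / (2 * sigma ** 2))
        ma = 1 - (1 - ma) * (1 - p)  # probabilistic union over foci
    return ma

def ale_map(grid, experiments, sigma=2.0):
    """ALE score: voxelwise probabilistic union of MA maps across experiments."""
    ale = np.zeros_like(grid)
    for foci in experiments:
        ale = 1 - (1 - ale) * (1 - modeled_activation(grid, foci, sigma))
    return ale

# Three hypothetical "experiments", each a list of reported peak coordinates:
grid = np.arange(0.0, 50.0)
experiments = [[10.0, 30.0], [11.0], [29.0, 31.0]]
scores = ale_map(grid, experiments)
# Convergence is strongest where foci from several experiments cluster (around 30).
```

The union formula makes the score rise with cross-experiment convergence rather than with the raw number of foci, which is the core idea of coordinate-based meta-analysis.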
Affiliation(s)
- Sebastian Scheliga
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Thilo Kellermann
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany; JARA-Institute Brain Structure Function Relationship, Pauwelsstraße 30, 52074 Aachen, Germany
- Angelika Lampert
- Institute of Physiology, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Roman Rolke
- Department of Palliative Medicine, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Marc Spehr
- Department of Chemosensation, RWTH Aachen University, Institute for Biology, Worringerweg 3, 52074 Aachen, Germany
- Ute Habel
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany; JARA-Institute Brain Structure Function Relationship, Pauwelsstraße 30, 52074 Aachen, Germany
13
Benetti S, Ferrari A, Pavani F. Multimodal processing in face-to-face interactions: A bridging link between psycholinguistics and sensory neuroscience. Front Hum Neurosci 2023; 17:1108354. [PMID: 36816496 PMCID: PMC9932987 DOI: 10.3389/fnhum.2023.1108354] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Accepted: 01/11/2023] [Indexed: 02/05/2023] Open
Abstract
In face-to-face communication, humans are faced with multiple layers of discontinuous multimodal signals, such as head, face, hand gestures, speech and non-speech sounds, which need to be interpreted as coherent and unified communicative actions. This implies a fundamental computational challenge: optimally binding only signals belonging to the same communicative action while segregating signals that are not connected by the communicative content. How do we achieve such an extraordinary feat, reliably, and efficiently? To address this question, we need to further move the study of human communication beyond speech-centred perspectives and promote a multimodal approach combined with interdisciplinary cooperation. Accordingly, we seek to reconcile two explanatory frameworks recently proposed in psycholinguistics and sensory neuroscience into a neurocognitive model of multimodal face-to-face communication. First, we introduce a psycholinguistic framework that characterises face-to-face communication at three parallel processing levels: multiplex signals, multimodal gestalts and multilevel predictions. Second, we consider the recent proposal of a lateral neural visual pathway specifically dedicated to the dynamic aspects of social perception and reconceive it from a multimodal perspective ("lateral processing pathway"). Third, we reconcile the two frameworks into a neurocognitive model that proposes how multiplex signals, multimodal gestalts, and multilevel predictions may be implemented along the lateral processing pathway. Finally, we advocate a multimodal and multidisciplinary research approach, combining state-of-the-art imaging techniques, computational modelling and artificial intelligence for future empirical testing of our model.
Affiliation(s)
- Stefania Benetti
- Centre for Mind/Brain Sciences, University of Trento, Trento, Italy; Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy
- Ambra Ferrari
- Max Planck Institute for Psycholinguistics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
- Francesco Pavani
- Centre for Mind/Brain Sciences, University of Trento, Trento, Italy; Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy
14
Vastano R, Costantini M, Alexander WH, Widerstrom-Noga E. Multisensory integration in humans with spinal cord injury. Sci Rep 2022; 12:22156. [PMID: 36550184 PMCID: PMC9780239 DOI: 10.1038/s41598-022-26678-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Accepted: 12/19/2022] [Indexed: 12/24/2022] Open
Abstract
Although multisensory integration (MSI) has been extensively studied, the underlying mechanisms remain a topic of ongoing debate. Here we investigate these mechanisms by comparing MSI in healthy controls to a clinical population with spinal cord injury (SCI). Deafferentation following SCI induces sensorimotor impairment, which may alter the ability to synthesize cross-modal information. We applied mathematical and computational modeling to reaction time data recorded in response to temporally congruent cross-modal stimuli. We found that MSI in both SCI and healthy controls is best explained by cross-modal perceptual competition, highlighting a common competition mechanism. Relative to controls, MSI impairments in SCI participants were better explained by reduced stimulus salience leading to increased cross-modal competition. By combining traditional analyses with model-based approaches, we examine how MSI is realized during normal function, and how it is compromised in a clinical population. Our findings support future investigations identifying and rehabilitating MSI deficits in clinical disorders.
Affiliation(s)
- Roberta Vastano
- Department of Neurological Surgery, The Miami Project to Cure Paralysis, University of Miami, Miami, FL 33136, USA
- Marcello Costantini
- Department of Psychological, Health and Territorial Sciences, “G. d’Annunzio” University of Chieti-Pescara, Chieti, Italy; Institute for Advanced Biomedical Technologies, ITAB, “G. d’Annunzio” University of Chieti-Pescara, Chieti, Italy
- William H. Alexander
- Center for Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, USA; Department of Psychology, Florida Atlantic University, Boca Raton, USA; The Brain Institute, Florida Atlantic University, Boca Raton, USA
- Eva Widerstrom-Noga
- Department of Neurological Surgery, The Miami Project to Cure Paralysis, University of Miami, Miami, FL 33136, USA
15
Yang W, Yang X, Guo A, Li S, Li Z, Lin J, Ren Y, Yang J, Wu J, Zhang Z. Audiovisual integration of the dynamic hand-held tool at different stimulus intensities in aging. Front Hum Neurosci 2022; 16:968987. [PMID: 36590067 PMCID: PMC9794578 DOI: 10.3389/fnhum.2022.968987] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Accepted: 11/15/2022] [Indexed: 12/23/2022] Open
Abstract
Introduction: In comparison to the audiovisual integration of younger adults, the same process appears more complex and unstable in older adults. Previous research has found that stimulus intensity is one of the most important factors influencing audiovisual integration. Methods: The present study compared differences in audiovisual integration between older and younger adults using dynamic hand-held tool stimuli, such as holding a hammer hitting the floor. Meanwhile, the effects of stimulus intensity on audiovisual integration were compared. The intensity of the visual and auditory stimuli was regulated by modulating the contrast level and sound pressure level. Results: Behavioral results showed that both older and younger adults responded faster and with higher hit rates to audiovisual stimuli than to visual and auditory stimuli. Further results of event-related potentials (ERPs) revealed that during the early stage of 60-100 ms, in the low-intensity condition, audiovisual integration of the anterior brain region was greater in older adults than in younger adults; however, in the high-intensity condition, audiovisual integration of the right hemisphere region was greater in younger adults than in older adults. Moreover, audiovisual integration was greater in the low-intensity condition than in the high-intensity condition in older adults during the 60-100 ms, 120-160 ms, and 220-260 ms periods, showing inverse effectiveness. However, there was no difference in the audiovisual integration of younger adults across different intensity conditions. Discussion: The results suggested that there was an age-related dissociation between high- and low-intensity conditions with audiovisual integration of the dynamic hand-held tool stimulus. Older adults showed greater audiovisual integration in the lower intensity condition, which may be due to the activation of compensatory mechanisms.
Affiliation(s)
- Weiping Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China; Brain and Cognition Research Center (BCRC), Faculty of Education, Hubei University, Wuhan, China
- Xiangfu Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Ao Guo
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Shengnan Li
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Zimo Li
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Jinfei Lin
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Yanna Ren
- Department of Psychology, College of Humanities and Management, Guizhou University of Traditional Chinese Medicine, Guiyang, China
- Jiajia Yang
- Applied Brain Science Lab, Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Jinglong Wu
- Applied Brain Science Lab, Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan; Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
- Zhilin Zhang
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
16
Gao C, Green JJ, Yang X, Oh S, Kim J, Shinkareva SV. Audiovisual integration in the human brain: a coordinate-based meta-analysis. Cereb Cortex 2022; 33:5574-5584. [PMID: 36336347 PMCID: PMC10152097 DOI: 10.1093/cercor/bhac443] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Revised: 10/09/2022] [Accepted: 10/10/2022] [Indexed: 11/09/2022] Open
Abstract
People can seamlessly integrate a vast array of information from what they see and hear in the noisy and uncertain world. However, the neural underpinnings of audiovisual integration continue to be a topic of debate. Using strict inclusion criteria, we performed an activation likelihood estimation meta-analysis on 121 neuroimaging experiments with a total of 2,092 participants. We found that audiovisual integration is linked with the coexistence of multiple integration sites, including early cortical, subcortical, and higher association areas. Although activity was consistently found within the superior temporal cortex, different portions of this cortical region were identified depending on the analytical contrast used, complexity of the stimuli, and modality within which attention was directed. The context-dependent neural activity related to audiovisual integration suggests a flexible rather than fixed neural pathway for audiovisual integration. Together, our findings highlight a flexible multiple pathways model for audiovisual integration, with superior temporal cortex as the central node in these neural assemblies.
Affiliation(s)
- Chuanji Gao
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Jessica J Green
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC 29201, USA
- Xuan Yang
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC 29201, USA
- Sewon Oh
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC 29201, USA
- Jongwan Kim
- Department of Psychology, Jeonbuk National University, Jeonju, South Korea
- Svetlana V Shinkareva
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC 29201, USA
17
Gori M, Bertonati G, Campus C, Amadeo MB. Multisensory representations of space and time in sensory cortices. Hum Brain Mapp 2022; 44:656-667. [PMID: 36169038 PMCID: PMC9842891 DOI: 10.1002/hbm.26090] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 08/05/2022] [Accepted: 09/07/2022] [Indexed: 01/25/2023] Open
Abstract
Clear evidence has demonstrated a supramodal organization of sensory cortices, with multisensory processing occurring even at early stages of information encoding. Within this context, early recruitment of sensory areas is necessary for the development of fine domain-specific (i.e., spatial or temporal) skills regardless of the sensory modality involved, with auditory areas playing a crucial role in temporal processing and visual areas in spatial processing. Given the domain-specificity and the multisensory nature of sensory areas, in this study we hypothesized that the preferential domains of representation (i.e., space and time) of visual and auditory cortices are also evident in the early processing of multisensory information. Thus, we measured the event-related potential (ERP) responses of 16 participants while they performed multisensory spatial and temporal bisection tasks. Audiovisual stimuli occurred at three different spatial positions and time lags, and participants had to evaluate whether the second stimulus was spatially (spatial bisection task) or temporally (temporal bisection task) farther from the first or third audiovisual stimulus. As predicted, the second audiovisual stimulus of both spatial and temporal bisection tasks elicited an early ERP response (time window 50-90 ms) in visual and auditory regions. However, this early ERP component was more substantial in the occipital areas during the spatial bisection task, and in the temporal regions during the temporal bisection task. Overall, these results confirm the domain specificity of visual and auditory cortices and reveal that this specificity also selectively modulates cortical activity in response to multisensory stimuli.
Affiliation(s)
- Monica Gori
- Unit for Visually Impaired People (U‐VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Giorgia Bertonati
- Unit for Visually Impaired People (U‐VIP), Istituto Italiano di Tecnologia, Genoa, Italy; Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), Università degli Studi di Genova, Genoa, Italy
- Claudio Campus
- Unit for Visually Impaired People (U‐VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Maria Bianca Amadeo
- Unit for Visually Impaired People (U‐VIP), Istituto Italiano di Tecnologia, Genoa, Italy
18
Ross LA, Molholm S, Butler JS, Bene VAD, Foxe JJ. Neural correlates of multisensory enhancement in audiovisual narrative speech perception: a fMRI investigation. Neuroimage 2022; 263:119598. [PMID: 36049699 DOI: 10.1016/j.neuroimage.2022.119598] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2022] [Revised: 08/26/2022] [Accepted: 08/28/2022] [Indexed: 11/25/2022] Open
Abstract
This fMRI study investigated the effect of seeing the articulatory movements of a speaker while listening to a naturalistic narrative stimulus, with the goal of identifying regions of the language network showing multisensory enhancement under synchronous audiovisual conditions. We expected this enhancement to emerge in regions known to underlie the integration of auditory and visual information, such as the posterior superior temporal gyrus, as well as in parts of the broader language network, including the semantic system. To this end, we presented 53 participants with a continuous narration of a story in auditory-alone, visual-alone, and both synchronous and asynchronous audiovisual speech conditions while recording brain activity using BOLD fMRI. We found multisensory enhancement in an extensive network of regions underlying multisensory integration and parts of the semantic network, as well as in extralinguistic regions not usually associated with multisensory integration, namely the primary visual cortex and the bilateral amygdala. The analysis also revealed involvement of thalamic regions along the visual and auditory pathways more commonly associated with early sensory processing. We conclude that under natural listening conditions, multisensory enhancement not only involves sites of multisensory integration but also many regions of the wider semantic network, including regions associated with extralinguistic sensory, perceptual, and cognitive processing.
Affiliation(s)
- Lars A Ross
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; Department of Imaging Sciences, University of Rochester Medical Center, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA.
- Sophie Molholm
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA
- John S Butler
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA; School of Mathematical Sciences, Technological University Dublin, Kevin Street Campus, Dublin, Ireland
- Victor A Del Bene
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA; University of Alabama at Birmingham, Heersink School of Medicine, Department of Neurology, Birmingham, Alabama, 35233, USA
- John J Foxe
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA.
19
Li Y, Wang T, Yang Y, Dai W, Wu Y, Li L, Han C, Zhong L, Li L, Wang G, Dou F, Xing D. Cascaded normalizations for spatial integration in the primary visual cortex of primates. Cell Rep 2022; 40:111221. [PMID: 35977486 DOI: 10.1016/j.celrep.2022.111221] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Revised: 04/19/2022] [Accepted: 07/25/2022] [Indexed: 11/03/2022] Open
Abstract
Spatial integration of visual information is an important function in the brain. However, the neural computation underlying spatial integration in the visual cortex remains unclear. In this study, we recorded laminar responses in V1 of awake monkeys driven by visual stimuli with grating patches and annuli of different sizes. We find three important response properties related to spatial integration that differ significantly between input and output layers: neurons in output layers have stronger surround suppression, smaller receptive fields (RFs), and higher sensitivity to grating annuli partially covering their RFs. These interlaminar differences can be explained by a descriptive model composed of two global divisions (normalization) and a local subtraction. Our results suggest that suppression arising from cascaded normalizations (CNs) is essential for spatial integration and laminar processing in the visual cortex. Interestingly, the features of spatial integration in convolutional neural networks, especially in their lower layers, differ from our findings in V1.
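The flavor of a cascaded-normalization model can be conveyed with a toy size-tuning curve in which a center drive is divided successively by two surround pools of increasing spatial extent (two global divisions). All profiles and parameter values below are invented for illustration; this is a sketch of the general model class, not the authors' fitted model.

```python
import numpy as np

def pooled_drive(size, width):
    """Drive pooled over a grating patch of a given diameter, weighted by
    a Gaussian spatial profile of the given width (simple Riemann sum)."""
    x = np.linspace(-size / 2.0, size / 2.0, 201)
    return np.exp(-(x / width) ** 2).sum() * (x[1] - x[0])

def size_tuning(size, w_center=1.0, w_near=3.0, w_far=6.0,
                gain=2.0, s1=0.5, s2=0.5, k=0.1):
    """Toy cascade: the center drive is divided first by a near-surround
    pool, then by a broader far-surround pool."""
    center = gain * pooled_drive(size, w_center)
    r1 = center / (s1 + pooled_drive(size, w_near))   # first normalization
    return r1 / (s2 + k * pooled_drive(size, w_far))  # second normalization

sizes = np.linspace(0.2, 12.0, 60)
responses = np.array([size_tuning(s) for s in sizes])
# The response peaks at an intermediate patch size and then declines as the
# surround pools keep growing: the signature of surround suppression.
```

Because the two suppressive pools have different spatial extents, stacking the divisions produces stronger suppression for large stimuli than either division alone, mirroring the stronger surround suppression reported for output layers.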
Affiliation(s)
- Yang Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Tian Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; College of Life Sciences, Beijing Normal University, Beijing 100875, China
- Yi Yang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Weifeng Dai
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Yujie Wu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Lianfeng Li
- China Academy of Launch Vehicle Technology, Beijing 100076, China
- Chuanliang Han
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Lvyan Zhong
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Liang Li
- Beijing Institute of Basic Medical Sciences, Beijing 100005, China
- Gang Wang
- Beijing Institute of Basic Medical Sciences, Beijing 100005, China
- Fei Dou
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; College of Life Sciences, Beijing Normal University, Beijing 100875, China
- Dajun Xing
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China.
20
Tsutsuse KS, Vibell J, Sinnett S. EXPRESS: Multisensory Perception of Natural Versus Unnatural Motion. Q J Exp Psychol (Hove) 2022; 76:1233-1244. [PMID: 35658653 DOI: 10.1177/17470218221108251] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
Previous research has shown that visual perception is influenced by Newtonian constraints. Kominsky et al. (2017) showed that humans detect unnatural motion, in which objects violate Newtonian constraints by moving faster after colliding with another object, more quickly than motion in collisions that do not violate those constraints. These findings show that the perceptual system distinguishes between realistic and unrealistic causal events. However, real-world collisions are rarely silent. The present study extends this research by including a sound at the collision point between two objects to evaluate how multisensory integration influences the perception of natural versus unnatural colliding events. Participants viewed an array of three simultaneous videos, each depicting two objects moving in a horizontal back-and-forth motion. Two of the videos showed the objects moving at the same speed, while the third video was an oddball that either moved faster before the collision and slower after (natural target), or slower before the collision and faster after (unnatural target). A brief click was presented at the collision point of one or none of the videos. Participants were asked to indicate the oddball video via keypress. Replicating Kominsky et al. (2017), participants were faster at identifying unnatural target motion events than natural target motion events, both with and without sound. The findings also demonstrated lower accuracy rates for unnatural events than for natural events, especially when a sound was added. These findings suggest that the addition of a sound may have distracted participants, possibly owing to limitations in attentional resources.
Affiliation(s)
- Kayla Soma Tsutsuse
- Department of Psychology, University of Hawaii at Manoa, 2530 Dole Street, Sakamaki D412, Honolulu, HI 96822-3949
- Jonas Vibell
- Department of Psychology, University of Hawaii at Manoa, 2530 Dole Street, Sakamaki D412, Honolulu, HI 96822-3949
- Scott Sinnett
- Department of Psychology, University of Hawaii at Manoa, 2530 Dole Street, Sakamaki D412, Honolulu, HI 96822-3949
21
Wang Y, Zeng Y. Multisensory Concept Learning Framework Based on Spiking Neural Networks. Front Syst Neurosci 2022; 16:845177. [PMID: 35645741 PMCID: PMC9133338 DOI: 10.3389/fnsys.2022.845177] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2021] [Accepted: 04/20/2022] [Indexed: 11/13/2022] Open
Abstract
Concept learning depends strongly on multisensory integration. In this study, we propose a multisensory concept learning framework based on brain-inspired spiking neural networks that creates integrated vectors from a concept's perceptual strength in the auditory, gustatory, haptic, olfactory, and visual modalities. Under different assumptions, two paradigms, Independent Merge (IM) and Associate Merge (AM), are designed within the framework. For testing, we employed eight distinct neural models and three multisensory representation datasets. The experiments show that the integrated vectors are closer to human representations than the non-integrated ones. Furthermore, we systematically analyze the similarities and differences between the IM and AM paradigms and validate the generality of our framework.
Affiliation(s)
- Yuwei Wang
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Yi Zeng
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- *Correspondence: Yi Zeng

22
Tang X, Yuan M, Shi Z, Gao M, Ren R, Wei M, Gao Y. Multisensory integration attenuates visually induced oculomotor inhibition of return. J Vis 2022; 22:7. [PMID: 35297999 PMCID: PMC8944392 DOI: 10.1167/jov.22.4.7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Inhibition of return (IOR) is a mechanism of the attention system involving a bias toward novel stimuli and delayed responses to targets at previously attended locations. According to the two-component theory, IOR consists of a perceptual component and an oculomotor component (oculomotor IOR [O-IOR]), depending on whether the eye movement system is activated. Previous studies have shown that multisensory integration weakens IOR when attention is divided between the visual and auditory modalities. However, it remains unclear whether this attenuation of the O-IOR effect by multisensory integration also occurs when the oculomotor system is activated. Here, in two eye movement experiments, we investigated the effect of multisensory integration on O-IOR using the exogenous spatial cueing paradigm. In Experiment 1, we found a greater visual O-IOR effect compared with audiovisual and auditory O-IOR under divided modality attention. The relative multisensory response enhancement (rMRE) and violations of Miller's bound showed a greater magnitude of multisensory integration at the cued location than at the uncued location. In Experiment 2, the magnitude of the audiovisual O-IOR effect was significantly smaller than that of the visual O-IOR under single visual modality selective attention. Implications for the effect of multisensory integration on O-IOR under conditions of oculomotor system activation are discussed, shedding new light on the two-component theory of IOR.
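The rMRE and Miller's bound referred to above have standard formulations in the redundant-signals literature. The sketch below uses made-up reaction times, not data from this study, to show one common way to compute a relative multisensory response enhancement and to test the race-model inequality.

```python
import numpy as np

# Made-up reaction times (ms); not data from the study.
rt_v  = np.array([420, 450, 465, 480, 510, 530])   # visual targets
rt_a  = np.array([400, 430, 455, 470, 500, 525])   # auditory targets
rt_av = np.array([350, 370, 390, 410, 440, 460])   # audiovisual targets

# Relative multisensory response enhancement: speed-up of the multisensory
# mean RT relative to the fastest unisensory mean RT, in percent.
fastest_uni = min(rt_v.mean(), rt_a.mean())
rmre = 100 * (fastest_uni - rt_av.mean()) / fastest_uni

# Miller's bound (race-model inequality): for every time t,
#   P(RT_av <= t) <= P(RT_a <= t) + P(RT_v <= t).
# Empirical CDF values above this bound indicate integration beyond what
# two parallel independent channels (a "race") could produce.
def ecdf(sample, t):
    """Proportion of responses no slower than t."""
    return np.mean(sample <= t)

ts = np.arange(340, 560, 10)
violations = [t for t in ts
              if ecdf(rt_av, t) > min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))]

print(f"rMRE = {rmre:.1f}%, race-model violations at t = {violations}")
```

With these toy numbers the audiovisual distribution sits well to the left of both unisensory distributions, so the bound is violated over a range of early time points, which is the signature of genuine multisensory integration rather than statistical facilitation.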
Affiliation(s)
- Xiaoyu Tang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Mengying Yuan
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Zhongyu Shi
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Min Gao
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Rongxia Ren
- Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Ming Wei
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Yulin Gao
- Department of Psychology, Jilin University, Changchun, China

23
Benetti S, Collignon O. Cross-modal integration and plasticity in the superior temporal cortex. Handb Clin Neurol 2022; 187:127-143. [PMID: 35964967 DOI: 10.1016/b978-0-12-823493-8.00026-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5]
Abstract
In congenitally deaf people, temporal regions typically believed to be primarily auditory enhance their response to nonauditory information. The neural mechanisms and functional principles underlying this phenomenon, as well as its impact on auditory recovery after sensory restoration, remain debated. In this chapter, we demonstrate that the cross-modal recruitment of temporal regions by visual inputs in congenitally deaf people follows organizational principles known to be present in the hearing brain. We propose that the functional and structural mechanisms allowing optimal convergence of multisensory information in the temporal cortex of hearing people also provide the neural scaffolding for feeding visual or tactile information into the deafened temporal areas. Innate in nature, such anatomo-functional links between the auditory and other sensory systems would represent the common substrate of both early multisensory integration and the expression of selective cross-modal plasticity in the superior temporal cortex.
Affiliation(s)
- Stefania Benetti
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Trento, Italy
- Olivier Collignon
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Trento, Italy; Institute for Research in Psychology and Neuroscience, Faculty of Psychology and Educational Science, UC Louvain, Louvain-la-Neuve, Belgium

24
Liang J, Li Y, Zhang Z, Luo W. Sound gaps boost emotional audiovisual integration independent of attention: Evidence from an ERP study. Biol Psychol 2021; 168:108246. [PMID: 34968556 DOI: 10.1016/j.biopsycho.2021.108246] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7]
Abstract
The emotion discrimination paradigm was adopted to study the effect of interrupted sound on visual emotional processing under different attentional states. There were two experiments: in Experiment 1, participants judged facial expressions (explicit task); in Experiment 2, they judged the position of a bar (implicit task). In Experiment 1, ERP results showed that the sound gap accelerated the P1 component only for neutral faces. In Experiment 2, this accelerating effect on P1 existed regardless of the emotional condition. Combining the two experiments, the P1 findings suggest that a sound gap enhances bottom-up attention. The N170 and late positive component (LPC) were modulated by facial emotion in both experiments, with larger responses to fearful than to neutral faces. Comparing the two experiments, the explicit task induced a larger LPC than the implicit task. Overall, sound gaps boosted audiovisual integration through bottom-up attention at early integration stages, while cognitive expectations led to top-down attention at late stages.
Affiliation(s)
- Junyu Liang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian 116029, Liaoning Province, China
- Yuchen Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian 116029, Liaoning Province, China
- Zhao Zhang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Institute of Psychology, Weifang Medical University, Weifang 216053, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian 116029, Liaoning Province, China
- Wenbo Luo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian 116029, Liaoning Province, China

25
Ball F, Nentwich A, Noesselt T. Cross-modal perceptual enhancement of unisensory targets is uni-directional and does not affect temporal expectations. Vision Res 2021; 190:107962. [PMID: 34757275 DOI: 10.1016/j.visres.2021.107962] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Temporal structures in the environment can shape temporal expectations (TE); and previous studies demonstrated that TEs interact with multisensory interplay (MSI) when multisensory stimuli are presented synchronously. Here, we tested whether other types of MSI - evoked by asynchronous yet temporally flanking irrelevant stimuli - result in similar performance patterns. To this end, we presented sequences of 12 stimuli (10 Hz) which consisted of auditory (A), visual (V) or alternating auditory-visual stimuli (e.g. A-V-A-V-…) with either auditory or visual targets (Exp. 1). Participants discriminated target frequencies (auditory pitch or visual spatial frequency) embedded in these sequences. To test effects of TE, the proportion of early and late temporal target positions was manipulated run-wise. Performance for unisensory targets was affected by temporally flanking distractors, with auditory temporal flankers selectively improving visual target perception (Exp. 1). However, no effect of temporal expectation was observed. Control experiments (Exp. 2-3) tested whether this lack of TE effect was due to the higher presentation frequency in Exp. 1 relative to previous experiments. Importantly, even at higher stimulation frequencies, redundant multisensory targets (Exp. 2-3) reliably modulated TEs. Together, our results indicate that visual target detection was enhanced by MSI. However, this cross-modal enhancement - in contrast to the redundant target effect - was still insufficient to generate TEs. We posit that unisensory target representations were either unstable or insufficient for the generation of TEs while less demanding MSI still occurred, highlighting the need for robust stimulus representations when generating temporal expectations.
Affiliation(s)
- Felix Ball
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke-University Magdeburg, Germany
- Annika Nentwich
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Germany
- Toemme Noesselt
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke-University Magdeburg, Germany

26
Schulze M, Aslan B, Stöcker T, Stirnberg R, Lux S, Philipsen A. Disentangling early versus late audiovisual integration in adult ADHD: a combined behavioural and resting-state connectivity study. J Psychiatry Neurosci 2021; 46:E528-E537. [PMID: 34548387 PMCID: PMC8526154 DOI: 10.1503/jpn.210017] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3]
Abstract
BACKGROUND Studies investigating sensory processing in attention-deficit/hyperactivity disorder (ADHD) have shown altered visual and auditory processing. However, evidence is lacking for audiovisual interplay - namely, multisensory integration. As well, neuronal dysregulation at rest (e.g., aberrant within- or between-network functional connectivity) may account for difficulties with integration across the senses in ADHD. We investigated whether sensory processing was altered at the multimodal level in adult ADHD and included resting-state functional connectivity to illustrate a possible overlap between deficient network connectivity and the ability to integrate stimuli. METHODS We tested 25 patients with ADHD and 24 healthy controls using 2 illusionary paradigms: the sound-induced flash illusion and the McGurk illusion. We applied the Mann-Whitney U test to assess statistical differences between groups. We acquired resting-state functional MRIs on a 3.0 T Siemens magnetic resonance scanner, using a highly accelerated 3-dimensional echo planar imaging sequence. RESULTS For the sound-induced flash illusion, susceptibility and reaction time were not different between the 2 groups. For the McGurk illusion, susceptibility was significantly lower for patients with ADHD, and reaction times were significantly longer. At a neuronal level, resting-state functional connectivity in the ADHD group was more highly regulated in polymodal regions that play a role in binding unimodal sensory inputs from different modalities and enabling sensory-to-cognition integration. LIMITATIONS We did not explicitly screen for autism spectrum disorder, which has high rates of comorbidity with ADHD and also involves impairments in multisensory integration. Although the patients were carefully screened by our outpatient department, we could not rule out the possibility of autism spectrum disorder in some participants. 
CONCLUSION Unimodal hypersensitivity seems to have no influence on the integration of basal stimuli, but it might have negative consequences for the multisensory integration of complex stimuli. This finding was supported by observations of higher resting-state functional connectivity between unimodal sensory areas and polymodal multisensory integration convergence zones for complex stimuli.
Affiliation(s)
- Marcel Schulze
- From the Department of Psychiatry and Psychotherapy, University of Bonn, Bonn, Germany (Schulze, Aslan, Lux, Philipsen); Biopsychology and Cognitive Neuroscience, Faculty of Psychology and Sports Science, Bielefeld University, Bielefeld, Germany (Schulze); the German Centre for Neurodegenerative Diseases (DZNE), Bonn, Germany (Stöcker, Stirnberg); and the Department of Physics and Astronomy, University of Bonn, Bonn, Germany (Stöcker)

27
A Comparative Eye Tracking Study of Usability—Towards Sustainable Web Design. Sustainability 2021. [DOI: 10.3390/su131810415] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7]
Abstract
Websites are one of the most frequently used communication environments, and creating sustainable web designs should be an objective for all companies. Ensuring high usability is proving to be one of the main contributors to sustainable web design, reducing usage time, eliminating frustration and increasing satisfaction and retention. The present paper studies the usability of different website landing pages, seeking to identify the elements, structures and designs that increase usability. The study analyzed the behavior of 22 participants during their interaction with five different landing pages while they performed three tasks on each webpage and freely viewed each page for one minute. The stimuli were five different banking websites, each presenting the task content in a different mode (text, image, symbol, graph, etc.). The data obtained from the eye tracker (fixation location, order and duration, saccades, revisits of the same element, etc.), together with the data from the applied survey, led to several conclusions: the top, center and right sides of the webpage attract the most attention; the use of pictures depicting persons increases visibility; scanpaths follow vertical and horizontal directions; numerical data should be presented through graphs or tables. Even if a user's past experience influences their experience on a website, we show that the design of the webpage itself has a greater influence on webpage usability.
28
Gao C, Wedell DH, Shinkareva SV. Evaluating non-affective cross-modal congruence effects on emotion perception. Cogn Emot 2021; 35:1634-1651. [PMID: 34486494 DOI: 10.1080/02699931.2021.1973966] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Although numerous studies have shown that people are more likely to integrate consistent visual and auditory signals, the role of non-affective congruence in emotion perception is unclear. This registered report examined the influence of non-affective cross-modal congruence on emotion perception. In Experiment 1, non-affective congruence was manipulated by matching or mismatching gender between visual and auditory modalities. Participants were instructed to attend to emotion information from only one modality while ignoring the other modality. Experiment 2 tested the inverse effectiveness rule by including both noise and noiseless conditions. Across two experiments, we found the effects of task-irrelevant emotional signals from one modality on emotional perception in the other modality, reflected in affective congruence, facilitation, and affective incongruence effects. The effects were stronger for the attend-auditory compared to the attend-visual condition, supporting a visual dominance effect. The effects were stronger for the noise compared to the noiseless condition, consistent with the inverse effectiveness rule. We did not find evidence for the effects of non-affective congruence on audiovisual integration of emotion across two experiments, suggesting that audiovisual integration of emotion may not require automatic integration of non-affective congruence information.
Affiliation(s)
- Chuanji Gao
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC, USA
- Douglas H Wedell
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC, USA
- Svetlana V Shinkareva
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC, USA

29
Effects of Musical Training, Timbre, and Response Orientation on the ROMPR Effect. J Cogn Enhanc 2021. [DOI: 10.1007/s41465-021-00213-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
30
Multisensory integration of visual cues from first- to third-person perspective avatars in the perception of self-motion. Atten Percept Psychophys 2021; 83:2634-2655. [PMID: 33864205 DOI: 10.3758/s13414-021-02276-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7]
Abstract
In the perception of self-motion, visual cues originating from an embodied humanoid avatar seen from a first-person perspective (1st-PP) are processed in the same way as those originating from a person's own body. Here, we sought to determine whether the user's and avatar's bodies in virtual reality have to be colocalized for this visual integration. In Experiment 1, participants saw a whole-body avatar in a virtual mirror facing them. The mirror perspective could be supplemented with a fully visible 1st-PP avatar or a suggested one (with the arms hidden by a virtual board). In Experiment 2, the avatar was viewed from the mirror perspective or a third-person perspective (3rd-PP) rotated 90° left or right. During an initial embodiment phase in both experiments, the avatar's forearms faithfully reproduced the participant's real movements. Next, kinaesthetic illusions were induced on the static right arm from the vision of passive displacements of the avatar's arms enhanced by passive displacement of the participant's left arm. Results showed that this manipulation elicited kinaesthetic illusions regardless of the avatar's perspective in Experiments 1 and 2. However, illusions were more likely to occur when the mirror perspective was supplemented with the view of the 1st-PP avatar's body than with the mirror perspective only (Experiment 1), just as they are more likely to occur in the latter condition than with the 3rd-PP (Experiment 2). Our results show that colocalization of the user's and avatar's bodies is an important, but not essential, factor in visual integration for self-motion perception.
31
Tang X, Wang X, Peng X, Li Q, Zhang C, Wang A, Zhang M. Electrophysiological evidence of different neural processing between visual and audiovisual inhibition of return. Sci Rep 2021; 11:8056. [PMID: 33850180 PMCID: PMC8044137 DOI: 10.1038/s41598-021-86999-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3]
Abstract
Inhibition of return (IOR) refers to slower responses to targets appearing on the same side as the cue (valid locations) than to targets appearing on the opposite side from the cue (invalid locations). Previous behavioural studies have found that the visual IOR is larger than the audiovisual IOR when attention is directed to both the visual and auditory modalities. Utilising the high temporal resolution of the event-related potential (ERP) technique, we explored possible neural correlates of the behavioural IOR difference between visual and audiovisual targets. The behavioural results revealed that the visual IOR was larger than the audiovisual IOR. The ERP results showed that the visual IOR effect was generated from the P1 and N2 components, whereas the audiovisual IOR effect was derived only from the P3 component. Multisensory integration (MSI) of audiovisual targets occurred on the P1, N1 and P3 components, which may offset the reduced perceptual processing due to audiovisual IOR. The early versus late differences in the neural processing of visual and audiovisual IOR imply that the two target types may rely on different inhibitory orienting mechanisms.
Affiliation(s)
- Xiaoyu Tang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, 116029, China
- Xueli Wang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, 116029, China
- Xing Peng
- Institute of Aviation Human Factors and Ergonomics, Civil Aviation Flight University of China, Guanghan, 618307, China
- Qi Li
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, 130022, China
- Chi Zhang
- School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, China
- Aijun Wang
- Department of Psychology, Soochow University, Suzhou, 215123, China
- Ming Zhang
- Department of Psychology, Soochow University, Suzhou, 215123, China

32
Abstract
We present a primer on multisensory experiences, the different components of this concept, as well as a reflection of its implications for individuals and society. We define multisensory experiences, illustrate how to understand them, elaborate on the role of technology in such experiences, and present the three laws of multisensory experiences, which can guide discussion on their implications. Further, we introduce the case of multisensory experiences in the context of eating and human-food interaction to illustrate how its components operationalize. We expect that this article provides a first point of contact for those interested in multisensory experiences, as well as multisensory experiences in the context of human-food interaction.
33
Aspects of Industrial Design and Their Implications for Society. Case Studies on the Influence of Packaging Design and Placement at the Point of Sale. Appl Sci (Basel) 2021. [DOI: 10.3390/app11020517] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0]
Abstract
Manufacturing engineering is responsible for the design, development and improvement of production systems that convert raw materials into finished products. Each product is designed to be sold to numerous potential consumers, so the importance of the stimuli surrounding the product, in its packaging and at the point of sale, should not be underestimated. The environmental, social and ethical commitments of industrial design (and their implications for manufacturing) are establishing universal principles in a common effort to foster a more harmonious and sustainable society. This work uses eye-tracking biometric techniques to analyse the level of information saturation generated by the concentration of stimuli in packaging and the retail channel, which may lower the level of attention directed at the product itself. This research confirms that every product associated with a manufacturing process seeks to respond to a need, so the associated responsibility is significant. This suggests that designers should incorporate knowledge from multiple fields, including marketing strategies, design, research and development, basic knowledge related to production, integration management and communication skills. More than 50% of consumer attention is dedicated to other elements that accompany the product, so it is important to consider this in the design phase. The results can be used to improve efficiency both in generating product attention and in stimulus design for the purchasing process.
34
Lalonde K, Werner LA. Development of the Mechanisms Underlying Audiovisual Speech Perception Benefit. Brain Sci 2021; 11:49. [PMID: 33466253 PMCID: PMC7824772 DOI: 10.3390/brainsci11010049] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7]
Abstract
The natural environments in which infants and children learn speech and language are noisy and multimodal. Adults rely on the multimodal nature of speech to compensate for noisy environments during speech communication. Multiple mechanisms underlie mature audiovisual benefit to speech perception, including reduced uncertainty as to when auditory speech will occur, use of correlations between the amplitude envelope of auditory and visual signals in fluent speech, and use of visual phonetic knowledge for lexical access. This paper reviews evidence regarding infants' and children's use of temporal and phonetic mechanisms in audiovisual speech perception benefit. The ability to use temporal cues for audiovisual speech perception benefit emerges in infancy. Although infants are sensitive to the correspondence between auditory and visual phonetic cues, the ability to use this correspondence for audiovisual benefit may not emerge until age four. A more cohesive account of the development of audiovisual speech perception may follow from a more thorough understanding of the development of sensitivity to and use of various temporal and phonetic cues.
Affiliation(s)
- Kaylah Lalonde
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE 68131, USA
- Lynne A. Werner
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA 98105, USA

35
Fontanillo Lopez CA, Li G, Zhang D. Beyond Technologies of Electroencephalography-Based Brain-Computer Interfaces: A Systematic Review From Commercial and Ethical Aspects. Front Neurosci 2020; 14:611130. [PMID: 33390892 PMCID: PMC7773904 DOI: 10.3389/fnins.2020.611130] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8]
Abstract
The deployment of electroencephalographic techniques for commercial applications has undergone rapid growth in recent decades. As these techniques continue to expand in consumer markets as suitable tools for monitoring brain activity, their transformative potential necessitates equally significant ethical inquiry. One of the main questions that arises when evaluating such applications is whether they should be aligned with the main ethical concerns reported by scholars and experts. The present work attempts to unify these disciplines of knowledge by performing a comprehensive scan of the major electroencephalographic market applications as well as the most relevant ethical concerns arising from the existing literature. In this literature review, different databases were consulted, which presented conceptual and empirical discussions and findings about commercial and ethical aspects of electroencephalography. Subsequently, the content was extracted from the articles and the main conclusions were presented. Finally, an external assessment of the outcomes was conducted in consultation with an expert panel in topic areas such as biomedical engineering, biomechatronics, and neuroscience. The ultimate purpose of this review is to provide a genuine insight into cutting-edge practical attempts at electroencephalography. By the same token, it seeks to highlight the overlap between market needs and the ethical standards that should govern the deployment of electroencephalographic consumer-grade solutions, providing a practical approach that overcomes the engineering myopia of certain ethical discussions.
Affiliation(s)
- Guangye Li
- The Robotics Institute, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Dingguo Zhang
- The Department of Electronic and Electrical Engineering, University of Bath, Bath, United Kingdom

36
Gao C, Xie W, Green JJ, Wedell DH, Jia X, Guo C, Shinkareva SV. Evoked and induced power oscillations linked to audiovisual integration of affect. Biol Psychol 2020; 158:108006. [PMID: 33301827 DOI: 10.1016/j.biopsycho.2020.108006] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5]
Abstract
Our affective experiences are influenced by combined multisensory information. Although the enhanced effects of congruent audiovisual information on our affective experiences have been well documented, the role of neural oscillations in the audiovisual integration of affective signals remains unclear. First, it is unclear whether oscillatory activity changes as a function of valence. Second, the function of phase-locked and non-phase-locked power changes in audiovisual integration of affect has not yet been clearly distinguished. To fill this gap, the present study performed time-frequency analyses on EEG data acquired while participants perceived positive, neutral and negative naturalistic video and music clips. A comparison between the congruent audiovisual condition and the sum of unimodal conditions was used to identify supra-additive (Audiovisual > Visual + Auditory) or sub-additive (Audiovisual < Visual + Auditory) integration effects. The results showed that early evoked sub-additive theta and sustained induced supra-additive delta and beta activities are linked to audiovisual integration of affect regardless of affective content.
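The supra-additive versus sub-additive criterion used above compares the multisensory response against the sum of the unimodal responses. The sketch below illustrates that additive test on simulated band-power values; all numbers are invented and do not come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated single-trial band power (e.g., theta) per condition, in
# arbitrary units; values are synthetic, not the study's data.
n_trials = 200
power_a  = rng.normal(1.0, 0.3, n_trials)   # auditory alone
power_v  = rng.normal(1.2, 0.3, n_trials)   # visual alone
power_av = rng.normal(2.6, 0.3, n_trials)   # audiovisual

# Additive criterion: compare AV against the sum of the unimodal responses.
#   AV > A + V  -> supra-additive integration
#   AV < A + V  -> sub-additive integration
sum_uni = power_a + power_v
diff = power_av.mean() - sum_uni.mean()

label = "supra-additive" if diff > 0 else "sub-additive"
print(f"AV - (A + V) = {diff:.2f} -> {label}")
```

In practice this contrast would be computed per time-frequency bin and tested statistically across participants; the simulation here only shows the sign convention of the additive comparison.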
Affiliation(s)
- Chuanji Gao
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC, 29201, USA
- Wanze Xie
- Children's Hospital, Harvard Medical School, Boston, MA, 02215, USA
- Jessica J Green
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC, 29201, USA
- Douglas H Wedell
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC, 29201, USA
- Xi Jia
- Beijing Key Laboratory of Learning and Cognition, School of Psychology, Capital Normal University, Beijing, 10048, PR China
- Chunyan Guo
- Beijing Key Laboratory of Learning and Cognition, School of Psychology, Capital Normal University, Beijing, 10048, PR China
- Svetlana V Shinkareva
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC, 29201, USA.
37
Sinding C, Thibault H, Hummel T, Thomas-Danguin T. Odor-Induced Saltiness Enhancement: Insights Into The Brain Chronometry Of Flavor Perception. Neuroscience 2020; 452:126-137. [PMID: 33197506 DOI: 10.1016/j.neuroscience.2020.10.029] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2019] [Revised: 10/13/2020] [Accepted: 10/26/2020] [Indexed: 10/23/2022]
Abstract
Flavor perception results from the integration of at least odor and taste. Evidence for such integration is that odors can have taste properties (odor-induced taste). Most brain areas involved in flavor perception are high-level areas; however, primary gustatory and olfactory areas also show activations in response to a combination of odor and taste. While the regions involved in flavor perception are now quite well identified, the organization of the network is not yet understood. Using a close-to-real salty soup model with electroencephalography (EEG) recording, we evaluated whether odor-induced saltiness enhancement would result in differences of amplitude and/or latency, mainly in the late cognitive P3 peak and/or in the early sensory P1 peak. Three target solutions were created from the same base of green-pea soup: (i) with a "usual" salt concentration (PPS2), (ii) with "reduced" salt (PPS1: -50%), and (iii) with reduced salt and a "beef stock" odor (PPS1B). Sensory data showed that the beef odor produced saltiness enhancement in PPS1B in comparison to PPS1. As the main EEG result, the late cognitive P3 peak was delayed by 25 ms in the odor-added solution PPS1B compared to PPS1. The odor alone did not explain the amplitude and longer latency of this P3 peak. These results support the classical view that high-level integratory areas process odor-taste interactions, with potential top-down effects on primary sensory regions.
Affiliation(s)
- Charlotte Sinding
- Centre des Sciences du Goût et de l'Alimentation, AgroSup Dijon, CNRS, INRAE, Université Bourgogne Franche-Comté, F-21000 Dijon, France.
- Henri Thibault
- Centre des Sciences du Goût et de l'Alimentation, AgroSup Dijon, CNRS, INRAE, Université Bourgogne Franche-Comté, F-21000 Dijon, France
- Thomas Hummel
- Smell & Taste Clinic, Department of Otorhinolaryngology, TU Dresden, Dresden, Germany
- Thierry Thomas-Danguin
- Centre des Sciences du Goût et de l'Alimentation, AgroSup Dijon, CNRS, INRAE, Université Bourgogne Franche-Comté, F-21000 Dijon, France
38
Riva G, Mancuso V, Cavedoni S, Stramba-Badiale C. Virtual reality in neurorehabilitation: a review of its effects on multiple cognitive domains. Expert Rev Med Devices 2020; 17:1035-1061. [PMID: 32962433 DOI: 10.1080/17434440.2020.1825939] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
INTRODUCTION Neurological diseases frequently cause adult-onset disability and have increased the demand for rehabilitative interventions. Neurorehabilitation has been progressively relying on computer-assisted programs and, more recently, on virtual reality (VR). Current reviews explore VR-based neurorehabilitation for assessing and treating the most common neurological pathologies; however, none of them has specifically explored the impact of VR on multiple cognitive domains. AREAS COVERED The present work reviews six years of literature (2015-2020) on VR in neurorehabilitation with the purpose of analyzing its effects on memory, attention, executive functions, language, and visuospatial ability. EXPERT OPINION Our review suggests that VR-based neurorehabilitation shows encouraging results for executive functions and visuospatial abilities, for both acute and neurodegenerative conditions. Conversely, memory and attention outcomes are conflicting, and language did not show significant improvements following VR-based rehabilitation. Within five years, it is plausible that VR-based interventions will be provided on standalone and mobile platforms that will not need a PC to work, with reduced latency and improved user interaction.
Affiliation(s)
- Giuseppe Riva
- Applied Technology for Neuro-Psychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy; Department of Psychology, Catholic University of the Sacred Heart, Milan, Italy
- Valentina Mancuso
- Applied Technology for Neuro-Psychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Silvia Cavedoni
- Applied Technology for Neuro-Psychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Chiara Stramba-Badiale
- Applied Technology for Neuro-Psychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy
39
Shared Representation of Visual and Auditory Motion Directions in the Human Middle-Temporal Cortex. Curr Biol 2020; 30:2289-2299.e8. [DOI: 10.1016/j.cub.2020.04.039] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2019] [Revised: 03/03/2020] [Accepted: 04/16/2020] [Indexed: 11/23/2022]
40
Carlsen AN, Maslovat D, Kaga K. An unperceived acoustic stimulus decreases reaction time to visual information in a patient with cortical deafness. Sci Rep 2020; 10:5825. [PMID: 32242039 PMCID: PMC7118083 DOI: 10.1038/s41598-020-62450-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2019] [Accepted: 03/13/2020] [Indexed: 11/16/2022] Open
Abstract
Responding to multiple stimuli of different modalities has been shown to reduce reaction time (RT), yet many different processes can potentially contribute to multisensory response enhancement. To investigate the neural circuits involved in voluntary response initiation, an acoustic stimulus of varying intensities (80, 105, or 120 dB) was presented during a visual RT task to a patient with profound bilateral cortical deafness and an intact auditory brainstem response. Despite being unable to consciously perceive sound, RT was reliably shortened (~100 ms) on trials where the unperceived acoustic stimulus was presented, confirming the presence of multisensory response enhancement. Although the exact locus of this enhancement is unclear, these results cannot be attributed to involvement of the auditory cortex. Thus, these data provide new and compelling evidence that activation from subcortical auditory processing circuits can contribute to other cortical or subcortical areas responsible for the initiation of a response, without the need for conscious perception.
Affiliation(s)
- Dana Maslovat
- School of Kinesiology, University of British Columbia, Vancouver, Canada
- Kimitaka Kaga
- National Institute of Sensory Organs, National Tokyo Medical Center, Tokyo, Japan
41
Ciraolo MF, O’Hanlon SM, Robinson CW, Sinnett S. Stimulus Onset Modulates Auditory and Visual Dominance. Vision (Basel) 2020; 4:vision4010014. [PMID: 32121428 PMCID: PMC7157246 DOI: 10.3390/vision4010014] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2019] [Revised: 02/09/2020] [Accepted: 02/21/2020] [Indexed: 12/05/2022] Open
Abstract
Investigations of multisensory integration have demonstrated that, under certain conditions, one modality is more likely to dominate the other. While the direction of this relationship typically favors the visual modality, the effect can be reversed to show auditory dominance under some conditions. The experiments presented here use an oddball detection paradigm with variable stimulus timings to test the hypothesis that a stimulus that is presented earlier will be processed first and therefore contribute to sensory dominance. Additionally, we compared two measures of sensory dominance (slowdown scores and error rate) to determine whether the type of measure used can affect which modality appears to dominate. When stimuli were presented asynchronously, analysis of slowdown scores and error rates yielded the same result; for both the 1- and 3-button versions of the task, participants were more likely to show auditory dominance when the auditory stimulus preceded the visual stimulus, whereas evidence for visual dominance was observed as the auditory stimulus was delayed. In contrast, for the simultaneous condition, slowdown scores indicated auditory dominance, whereas error rates indicated visual dominance. Overall, these results provide empirical support for the hypothesis that the modality that engages processing first is more likely to show dominance, and suggest that more explicit measures of sensory dominance may favor the visual modality.
Affiliation(s)
- Margeaux F. Ciraolo
- College of Health Solutions, Arizona State University, 550 N 3rd St., Phoenix, AZ 85004, USA
- Samantha M. O’Hanlon
- School of Psychological Science, Oregon State University, 2950 SW Jefferson Way, Corvallis, OR 97331, USA
- Christopher W. Robinson
- Department of Psychology, The Ohio State University at Newark, 1179 University Dr., Newark, OH 43055, USA
- Scott Sinnett
- Department of Psychology, University of Hawai’i at Mānoa, 2530 Dole St., Sakamaki C400, Honolulu, HI 96822, USA
42
Xu X, Hanganu-Opatz IL, Bieler M. Cross-Talk of Low-Level Sensory and High-Level Cognitive Processing: Development, Mechanisms, and Relevance for Cross-Modal Abilities of the Brain. Front Neurorobot 2020; 14:7. [PMID: 32116637 PMCID: PMC7034303 DOI: 10.3389/fnbot.2020.00007] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2019] [Accepted: 01/27/2020] [Indexed: 12/18/2022] Open
Abstract
The emergence of cross-modal learning capabilities requires the interaction of neural areas accounting for sensory and cognitive processing. Convergence of multiple sensory inputs is observed in low-level sensory cortices including primary somatosensory (S1), visual (V1), and auditory cortex (A1), as well as in high-level areas such as prefrontal cortex (PFC). Evidence shows that local neural activity and functional connectivity between sensory cortices participate in cross-modal processing. However, little is known about the functional interplay between neural areas underlying sensory and cognitive processing required for cross-modal learning capabilities across life. Here we review our current knowledge on the interdependence of low- and high-level cortices for the emergence of cross-modal processing in rodents. First, we summarize the mechanisms underlying the integration of multiple senses and how cross-modal processing in primary sensory cortices might be modified by top-down modulation of the PFC. Second, we examine the critical factors and developmental mechanisms that account for the interaction between neuronal networks involved in sensory and cognitive processing. Finally, we discuss the applicability and relevance of cross-modal processing for brain-inspired intelligent robotics. An in-depth understanding of the factors and mechanisms controlling cross-modal processing might inspire the refinement of robotic systems by better mimicking neural computations.
Affiliation(s)
- Xiaxia Xu
- Developmental Neurophysiology, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Ileana L Hanganu-Opatz
- Developmental Neurophysiology, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Malte Bieler
- Laboratory for Neural Computation, Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
43
Kramer A, Röder B, Bruns P. Feedback Modulates Audio-Visual Spatial Recalibration. Front Integr Neurosci 2020; 13:74. [PMID: 32009913 PMCID: PMC6979315 DOI: 10.3389/fnint.2019.00074] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2019] [Accepted: 12/10/2019] [Indexed: 11/13/2022] Open
Abstract
In an ever-changing environment, crossmodal recalibration is crucial to maintain precise and coherent spatial estimates across different sensory modalities. Accordingly, it has been found that perceived auditory space is recalibrated toward vision after consistent exposure to spatially misaligned audio-visual stimuli. While this so-called ventriloquism aftereffect (VAE) yields internal consistency between vision and audition, it does not necessarily lead to consistency between the perceptual representation of space and the actual environment. For this purpose, feedback about the true state of the external world might be necessary. Here, we tested whether the size of the VAE is modulated by external feedback and reward. During adaptation, audio-visual stimuli with a fixed spatial discrepancy were presented. Participants had to localize the sound and received feedback about the magnitude of their localization error. In half of the sessions the feedback was based on the position of the visual stimulus (VS), and in the other half it was based on the position of the auditory stimulus. An additional monetary reward was given if the localization error fell below a threshold based on participants’ performance in the pretest. As expected, when error feedback was based on the position of the VS, auditory localization during adaptation trials shifted toward the position of the VS. Conversely, feedback based on the position of the auditory stimuli reduced the visual influence on auditory localization (i.e., the ventriloquism effect) and improved sound localization accuracy. After adaptation with error feedback based on the VS position, a typical auditory VAE (but no visual aftereffect) was observed in subsequent unimodal localization tests. By contrast, when feedback was based on the position of the auditory stimuli during adaptation, no auditory VAE was observed in subsequent unimodal auditory trials. Importantly, in this situation no visual aftereffect was found either. As feedback did not change the physical attributes of the audio-visual stimulation during adaptation, the present findings suggest that crossmodal recalibration is subject to top-down influences. Such top-down influences might help prevent miscalibration of audition toward conflicting visual stimulation in situations in which external feedback indicates that visual information is inaccurate.
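The aftereffect measure implied above (a shift of unimodal auditory localization after adaptation) can be sketched as a pre/post contrast. The function name and the error values are illustrative assumptions, not data from the study:

```python
def ventriloquism_aftereffect(pre_errors, post_errors):
    """Mean shift of unimodal auditory localization after adaptation, signed
    so that positive values mean a shift toward the adapted visual offset."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(post_errors) - mean(pre_errors)

# Illustrative localization errors in degrees (positive = toward the visual side)
pre = [0.5, -0.2, 0.1, 0.0]   # before adaptation
post = [2.1, 1.8, 2.4, 1.9]   # after adaptation with VS-based feedback
vae = ventriloquism_aftereffect(pre, post)  # positive: auditory space shifted
```

A near-zero value of this measure after auditory-based feedback, as reported above, would indicate that recalibration toward vision was suppressed.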
Affiliation(s)
- Alexander Kramer
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
44
Maccora S, Bolognini N, Cosentino G, Baschi R, Vallar G, Fierro B, Brighina F. Multisensorial Perception in Chronic Migraine and the Role of Medication Overuse. J Pain 2020; 21:919-929. [PMID: 31904501 DOI: 10.1016/j.jpain.2019.12.005] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/11/2019] [Revised: 11/12/2019] [Accepted: 12/04/2019] [Indexed: 01/03/2023]
Abstract
Multisensory processing can be assessed by measuring susceptibility to crossmodal illusions such as the Sound-Induced Flash Illusion (SIFI). When a single flash is accompanied by 2 or more beeps, it is perceived as multiple flashes (fission illusion); conversely, a fusion illusion is experienced when more flashes are matched with a single beep, leading to the perception of a single flash. Such illusory perceptions are associated with crossmodal changes in visual cortical excitability. Indeed, increasing occipital cortical excitability by means of transcranial electrical currents disrupts the SIFI (i.e., the fission illusion). Similarly, a reduced fission illusion was shown in patients with episodic migraine, especially during the attack, in agreement with the pathophysiological model of cortical hyperexcitability in this disease. If episodic migraine patients present with a reduced SIFI especially during the attack, we hypothesized that chronic migraine (CM) patients should consistently report fewer illusory effects than healthy controls; drug intake could also affect the SIFI. On this basis, we studied proneness to the SIFI in CM patients (n = 63), including 52 patients with Medication Overuse Headache (MOH), compared to 24 healthy controls. All migraine patients showed fewer fission phenomena than controls (P < .0001). Triptan MOH patients (n = 23) presented significantly fewer fission effects than the other CM groups (P = .008). This exploratory study suggests that CM - both with and without medication overuse - is associated with higher visual cortical responsiveness, which causes a deficit of multisensory processing as assessed by the SIFI. PERSPECTIVE: This observational study shows reduced susceptibility to the SIFI in CM, confirming and extending previous results in episodic migraine. MOH contributes to this phenomenon, especially in the case of triptans.
Affiliation(s)
- Simona Maccora
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (BIND), University of Palermo, Palermo, Italy
- Nadia Bolognini
- Department of Psychology, Milan Center for Neuroscience - NeuroMi, University of Milano-Bicocca, Milano, Italy; Laboratory of Neuropsychology, IRCCS Istituto Auxologico, Milano, Italy
- Giuseppe Cosentino
- Department of Brain and Behavioural Sciences, University of Pavia, Italy; IRCCS Mondino Foundation, Pavia, Italy
- Roberta Baschi
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (BIND), University of Palermo, Palermo, Italy
- Giuseppe Vallar
- Department of Psychology, Milan Center for Neuroscience - NeuroMi, University of Milano-Bicocca, Milano, Italy; Laboratory of Neuropsychology, IRCCS Istituto Auxologico, Milano, Italy
- Brigida Fierro
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (BIND), University of Palermo, Palermo, Italy
- Filippo Brighina
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (BIND), University of Palermo, Palermo, Italy.
45
Li Y, Wang F, Chen Y, Cichocki A, Sejnowski T. The Effects of Audiovisual Inputs on Solving the Cocktail Party Problem in the Human Brain: An fMRI Study. Cereb Cortex 2019; 28:3623-3637. [PMID: 29029039 DOI: 10.1093/cercor/bhx235] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2017] [Indexed: 11/13/2022] Open
Abstract
At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem.
Affiliation(s)
- Yuanqing Li
- Center for Brain Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou, China; Guangzhou Key Laboratory of Brain Computer Interaction and Applications, Guangzhou, China
- Fangyi Wang
- Center for Brain Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou, China; Guangzhou Key Laboratory of Brain Computer Interaction and Applications, Guangzhou, China
- Yongbin Chen
- Center for Brain Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou, China; Guangzhou Key Laboratory of Brain Computer Interaction and Applications, Guangzhou, China
- Andrzej Cichocki
- RIKEN Brain Science Institute, Wako-shi, Japan; Skolkovo Institute of Science and Technology (Skoltech), Moscow, Russia
- Terrence Sejnowski
- Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA, USA
46
Gao C, Weber CE, Shinkareva SV. The brain basis of audiovisual affective processing: Evidence from a coordinate-based activation likelihood estimation meta-analysis. Cortex 2019; 120:66-77. [DOI: 10.1016/j.cortex.2019.05.016] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2019] [Revised: 05/03/2019] [Accepted: 05/28/2019] [Indexed: 01/19/2023]
47
Stickel S, Weismann P, Kellermann T, Regenbogen C, Habel U, Freiherr J, Chechko N. Audio-visual and olfactory-visual integration in healthy participants and subjects with autism spectrum disorder. Hum Brain Mapp 2019; 40:4470-4486. [PMID: 31301203 PMCID: PMC6865810 DOI: 10.1002/hbm.24715] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2019] [Revised: 05/23/2019] [Accepted: 07/01/2019] [Indexed: 01/22/2023] Open
Abstract
The human capacity to integrate sensory signals has been investigated with respect to different sensory modalities. A common denominator of the neural network underlying the integration of sensory cues has yet to be identified. Additionally, brain imaging data from patients with autism spectrum disorder (ASD) do not cover disparities in neuronal sensory processing. In this fMRI study, we compared the underlying neural networks of both olfactory-visual and auditory-visual integration in patients with ASD and a group of matched healthy participants. The aim was to disentangle sensory-specific networks so as to derive a potential (amodal) common source of multisensory integration (MSI) and to investigate differences in brain networks with sensory processing in individuals with ASD. In both groups, similar neural networks were found to be involved in the olfactory-visual and auditory-visual integration processes, including the primary visual cortex, the inferior parietal sulcus (IPS), and the medial and inferior frontal cortices. Amygdala activation was observed specifically during olfactory-visual integration, with superior temporal activation having been seen during auditory-visual integration. A dynamic causal modeling analysis revealed a nonlinear top-down IPS modulation of the connection between the respective primary sensory regions in both experimental conditions and in both groups. Thus, we demonstrate that MSI has shared neural sources across olfactory-visual and audio-visual stimulation in patients and controls. The enhanced recruitment of the IPS to modulate changes between areas is relevant to sensory perception. Our results also indicate that, with respect to MSI processing, adults with ASD do not significantly differ from their healthy counterparts.
Affiliation(s)
- Susanne Stickel
- Department of Psychiatry, Psychotherapy and Psychosomatics, Faculty of Medicine, RWTH Aachen, Aachen, Germany
- Institute of Neuroscience and Medicine: JARA-Institute Brain Structure Function Relationship (INM 10), Research Center Jülich, Jülich, Germany
- Pauline Weismann
- Department of Psychiatry and Psychotherapy, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Thilo Kellermann
- Department of Psychiatry, Psychotherapy and Psychosomatics, Faculty of Medicine, RWTH Aachen, Aachen, Germany
- Institute of Neuroscience and Medicine: JARA-Institute Brain Structure Function Relationship (INM 10), Research Center Jülich, Jülich, Germany
- Christina Regenbogen
- Department of Psychiatry, Psychotherapy and Psychosomatics, Faculty of Medicine, RWTH Aachen, Aachen, Germany
- Institute of Neuroscience and Medicine: JARA-Institute Brain Structure Function Relationship (INM 10), Research Center Jülich, Jülich, Germany
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Ute Habel
- Department of Psychiatry, Psychotherapy and Psychosomatics, Faculty of Medicine, RWTH Aachen, Aachen, Germany
- Institute of Neuroscience and Medicine: JARA-Institute Brain Structure Function Relationship (INM 10), Research Center Jülich, Jülich, Germany
- Jessica Freiherr
- Department of Psychiatry and Psychotherapy, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Sensory Analytics, Fraunhofer Institute for Process Engineering and Packaging IVV, Freising, Germany
- Natalya Chechko
- Department of Psychiatry, Psychotherapy and Psychosomatics, Faculty of Medicine, RWTH Aachen, Aachen, Germany
- Institute of Neuroscience and Medicine: JARA-Institute Brain Structure Function Relationship (INM 10), Research Center Jülich, Jülich, Germany
48
Baumard J, Osiurak F. Is Bodily Experience an Epiphenomenon of Multisensory Integration and Cognition? Front Hum Neurosci 2019; 13:316. [PMID: 31572151 PMCID: PMC6749066 DOI: 10.3389/fnhum.2019.00316] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2019] [Accepted: 08/26/2019] [Indexed: 11/19/2022] Open
Affiliation(s)
- François Osiurak
- Laboratory for the Study of Cognitive Mechanisms (EA 3082), University of Lyon, Lyon, France; French University Institute, Paris, France
49
Ostrolenk A, Bao VA, Mottron L, Collignon O, Bertone A. Reduced multisensory facilitation in adolescents and adults on the Autism Spectrum. Sci Rep 2019; 9:11965. [PMID: 31427634 PMCID: PMC6700191 DOI: 10.1038/s41598-019-48413-9] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2018] [Accepted: 07/29/2019] [Indexed: 01/01/2023] Open
Abstract
Individuals with autism are reported to integrate information from visual and auditory channels in an idiosyncratic way. Multisensory integration (MSI) of simple, non-social stimuli (i.e., flashes and beeps) was evaluated in adolescents and adults with (n = 20) and without autism (n = 19) using a reaction time (RT) paradigm with audio, visual, and audiovisual stimuli. For each participant, the race model analysis compares the RTs in the audiovisual condition to a bound computed from the unimodal RTs that reflects the effect of redundancy. If the actual audiovisual RTs are significantly faster than this bound, the race model is violated, indicating evidence of MSI. Our results show that the race model violation occurred only for the typically developing (TD) group. While the TD group shows evidence of MSI, the autism group does not. These results suggest that multisensory integration of simple information, devoid of social content or complexity, is altered in autism. Individuals with autism may not benefit from the advantage conferred by multisensory stimulation to the same extent as TD individuals. Altered MSI for simple, non-social information may have cascading effects on more complex perceptual processes related to language and behaviour in autism.
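The race model test described above is commonly implemented via Miller's inequality, which bounds the audiovisual RT cumulative distribution by the sum of the unimodal cumulative distributions. A minimal sketch with simulated RTs follows; the function names, grid, and all numbers are illustrative assumptions, not data from the study:

```python
import numpy as np

def ecdf(rts, t):
    """Empirical CDF of reaction times, evaluated at each time in t."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t, side="right") / rts.size

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """Largest excess of F_AV(t) over Miller's bound min(1, F_A(t) + F_V(t)).
    A positive value means the redundant-signals RTs are faster than any race
    of independent unimodal channels could produce, i.e., evidence of MSI."""
    bound = np.minimum(1.0, ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid))
    return float(np.max(ecdf(rt_av, t_grid) - bound))

# Simulated RTs in ms (illustrative): audiovisual responses markedly faster
rng = np.random.default_rng(0)
rt_a = rng.normal(320, 30, 200)   # auditory-only
rt_v = rng.normal(340, 30, 200)   # visual-only
rt_av = rng.normal(230, 25, 200)  # audiovisual (redundant)
violation = race_model_violation(rt_av, rt_a, rt_v, np.linspace(150, 450, 61))
# violation > 0 for these simulated data: the race model bound is exceeded
```

In a real analysis the group-level decision uses a statistical test of the excess across participants and quantiles rather than a single maximum.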
Affiliation(s)
- Alexia Ostrolenk
- Perceptual Neuroscience Lab for Autism and Development (PNLab), McGill University, Montreal, Canada; University of Montreal Center of Excellence for Pervasive Developmental Disorders (CETEDUM), CIUSSS du Nord-de-l'Île de Montréal, Montreal, Canada
- Vanessa A Bao
- Perceptual Neuroscience Lab for Autism and Development (PNLab), McGill University, Montreal, Canada; School/Applied Child Psychology, Department of Education and Counselling Psychology, McGill University, Montreal, Canada
- Laurent Mottron
- University of Montreal Center of Excellence for Pervasive Developmental Disorders (CETEDUM), CIUSSS du Nord-de-l'Île de Montréal, Montreal, Canada
- Olivier Collignon
- Centre for Mind/Brain Science (CIMeC), University of Trento, Trento, Italy; Institut de recherche en Psychologie (IPSY) et en Neuroscience (IoNS), Université de Louvain-la-Neuve, Ottignies-Louvain-la-Neuve, Belgium
- Armando Bertone
- Perceptual Neuroscience Lab for Autism and Development (PNLab), McGill University, Montreal, Canada; School/Applied Child Psychology, Department of Education and Counselling Psychology, McGill University, Montreal, Canada; University of Montreal Center of Excellence for Pervasive Developmental Disorders (CETEDUM), CIUSSS du Nord-de-l'Île de Montréal, Montreal, Canada
50
Sirous M, Sinning N, Schneider TR, Friese U, Lorenz J, Engel AK. Chemosensory Event-Related Potentials in Response to Nasal Propylene Glycol Stimulation. Front Hum Neurosci 2019; 13:99. [PMID: 30949040 PMCID: PMC6435593 DOI: 10.3389/fnhum.2019.00099] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2018] [Accepted: 03/04/2019] [Indexed: 11/13/2022] Open
Abstract
Propylene glycol, also denoted 1,2-propanediol (C3H8O2), often serves as a solvent for the dilution of olfactory stimuli. It is assumed to be a neutral substance and has been used in many behavioral and electrophysiological studies to dilute pure olfactory stimuli. However, the effect of propylene glycol on perception and on neuronal responses has hitherto never been studied. In this study, we used a threshold test to examine whether nasal propylene glycol stimulation is recognizable by humans. Participants were able to recognize propylene glycol at a threshold of 42% concentration and reported a slight cooling effect. In addition to the threshold test, we recorded electroencephalography (EEG) during nasal propylene glycol stimulation to study the neuronal processing of the stimulus. We used a flow olfactometer and stimulated 15 volunteers with three different concentrations of propylene glycol (40 trials each) and with water as a control condition (40 trials). To evaluate the neuronal response, we analyzed event-related potentials (ERPs) and power modulations. The task of the volunteers was to identify a change (olfactory, thermal, or tactile) in the continuous air flow generated by the flow olfactometer. The analysis of the ERPs showed that propylene glycol generates a clear P2 component, which was also visible in the frequency domain as an evoked power response in the theta band. Source analysis of the P2 revealed widespread involvement of brain regions, including the postcentral gyrus, the insula and adjacent operculum, the thalamus, and the cerebellum. Thus, it is possible that trigeminal stimulation can at least partly account for the sensations and brain responses elicited by propylene glycol. Based on these results, we conclude that the use of high propylene glycol concentrations to dilute fragrances complicates the interpretation of presumably purely olfactory effects.
Affiliation(s)
- Mohammad Sirous
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Nico Sinning
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Till R Schneider
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Uwe Friese
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Faculty of Life Science, MSH Medical School Hamburg, Hamburg, Germany
- Jürgen Lorenz
- Faculty of Life Science, Laboratory of Human Biology and Physiology, Applied Science University, Hamburg, Germany
- Andreas K Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany