1. Orioli G, Dragovic D, Farroni T. Perception of visual and audiovisual trajectories toward and away from the body in the first postnatal year. J Exp Child Psychol 2024; 243:105921. [PMID: 38615600] [DOI: 10.1016/j.jecp.2024.105921]
Abstract
Perceiving motion in depth is important in everyday life, especially motion in relation to the body. Visual and auditory cues inform us about motion in space when presented in isolation from each other, but the most comprehensive information is obtained through the combination of both of these cues. We traced the development of infants' ability to discriminate between visual motion trajectories across peripersonal space and to match these with auditory cues specifying the same peripersonal motion. We measured 5-month-old (n = 20) and 9-month-old (n = 20) infants' visual preferences for visual motion toward or away from their body (presented simultaneously and side by side) across three conditions: (a) visual displays presented alone, (b) paired with a sound increasing in intensity, and (c) paired with a sound decreasing in intensity. Both groups preferred approaching motion in the visual-only condition. When the visual displays were paired with a sound increasing in intensity, neither group showed a visual preference. When a sound decreasing in intensity was played instead, the 5-month-olds preferred the receding (spatiotemporally congruent) visual stimulus, whereas the 9-month-olds preferred the approaching (spatiotemporally incongruent) visual stimulus. We speculate that in the approaching sound condition, the behavioral salience of the sound could have led infants to focus on the auditory information alone, in order to prepare a motor response, and to neglect the visual stimuli. In the receding sound condition, instead, the difference in response patterns in the two groups may have been driven by infants' emerging motor abilities and their developing predictive processing mechanisms supporting and influencing each other.
Affiliations
- Giulia Orioli: Centre for Developmental Science, School of Psychology, University of Birmingham, Birmingham B15 2SB, UK; Department of Developmental Psychology and Socialization, University of Padova, 35131 Padova, Italy
- Danica Dragovic: Paediatric Unit, Hospital of Monfalcone, 34074 Monfalcone, Italy
- Teresa Farroni: Department of Developmental Psychology and Socialization, University of Padova, 35131 Padova, Italy

2. Basile GA, Tatti E, Bertino S, Milardi D, Genovese G, Bruno A, Muscatello MRA, Ciurleo R, Cerasa A, Quartarone A, Cacciola A. Neuroanatomical correlates of peripersonal space: bridging the gap between perception, action, emotion and social cognition. Brain Struct Funct 2024; 229:1047-1072. [PMID: 38683211] [PMCID: PMC11147881] [DOI: 10.1007/s00429-024-02781-9]
Abstract
Peripersonal space (PPS) is a construct referring to the portion of space immediately surrounding our bodies, where most of the interactions between the subject and the environment, including other individuals, take place. Decades of animal and human neuroscience research have revealed that the brain holds a separate representation of this region of space: this distinct spatial representation has evolved to give appropriate weight to stimuli that are close to the body and to prompt an appropriate behavioral response. The neural underpinnings of this construct have been thoroughly investigated across generations of anatomical and electrophysiological studies in animal models and, more recently, neuroimaging experiments in human subjects. Here, we provide a comprehensive overview of the anatomical circuitry underlying PPS representation in the human brain. Gathering evidence from multiple areas of research, we identified cortical and subcortical regions that are involved in specific aspects of PPS encoding. We show how these regions are part of segregated, yet integrated, functional networks within the brain, which are in turn involved in higher-order integration of information. This wide-scale circuitry accounts for the relevance of PPS encoding in multiple brain functions, including not only motor planning and visuospatial attention but also emotional and social cognitive aspects. A complete characterization of these circuits may clarify the derangements of PPS representation observed in different neurological and neuropsychiatric diseases.
Affiliations
- Gianpaolo Antonio Basile: Brain Mapping Lab, Department of Biomedical, Dental Sciences and Morphological and Functional Imaging, University of Messina, Messina, Italy
- Elisa Tatti: Department of Molecular, Cellular & Biomedical Sciences, CUNY School of Medicine, New York, NY 10031, USA
- Salvatore Bertino: Brain Mapping Lab, Department of Biomedical, Dental Sciences and Morphological and Functional Imaging, University of Messina, Messina, Italy; Department of Clinical and Experimental Medicine, University of Messina, Messina, Italy
- Demetrio Milardi: Brain Mapping Lab, Department of Biomedical, Dental Sciences and Morphological and Functional Imaging, University of Messina, Messina, Italy
- Antonio Bruno: Psychiatry Unit, University Hospital "G. Martino", Messina, Italy; Department of Biomedical, Dental Sciences and Morphological and Functional Imaging, University of Messina, Messina, Italy
- Maria Rosaria Anna Muscatello: Psychiatry Unit, University Hospital "G. Martino", Messina, Italy; Department of Biomedical, Dental Sciences and Morphological and Functional Imaging, University of Messina, Messina, Italy
- Antonio Cerasa: S. Anna Institute, Crotone, Italy; Institute for Biomedical Research and Innovation (IRIB), National Research Council of Italy, Messina, Italy; Pharmacotechnology Documentation and Transfer Unit, Preclinical and Translational Pharmacology, Department of Pharmacy, Health Science and Nutrition, University of Calabria, Rende, Italy
- Alberto Cacciola: Brain Mapping Lab, Department of Biomedical, Dental Sciences and Morphological and Functional Imaging, University of Messina, Messina, Italy

3. Bao X, Lomber SG. Visual modulation of auditory evoked potentials in the cat. Sci Rep 2024; 14:7177. [PMID: 38531940] [DOI: 10.1038/s41598-024-57075-1]
Abstract
Visual modulation of the auditory system is not only a neural substrate for multisensory processing but also serves as a backup input underlying cross-modal plasticity in deaf individuals. Event-related potential (ERP) studies in humans have provided evidence of multiple-stage audiovisual interactions, ranging from tens to hundreds of milliseconds after stimulus presentation. However, it is still unknown whether the time course of visual modulation of auditory ERPs can be characterized in animal models. EEG signals were recorded in sedated cats from subdermal needle electrodes. The auditory stimuli (clicks) and visual stimuli (flashes) were timed by two independent Poisson processes and were presented either simultaneously or alone. The visual-only ERPs were subtracted from the audiovisual ERPs before being compared with the auditory-only ERPs. N1 amplitude showed a trend of transitioning from suppression to facilitation, with a disruption at a flash-to-click delay of ~100 ms. We concluded that visual modulation as a function of stimulus onset asynchrony (SOA) over an extended range is more complex than previously characterized with short SOAs, and that its periodic pattern can be interpreted under the "phase resetting" hypothesis.
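The subtraction logic described here (removing the visual-only response from the audiovisual response before comparing with the auditory-only response) follows the standard additive-model approach to ERP interaction analysis. Below is a minimal sketch of that comparison, assuming epochs have already been averaged into per-condition ERP arrays (channels x time); the array shapes, sampling rate, and analysis window are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Illustrative per-condition average ERPs: shape (n_channels, n_times).
# In the additive model, any audiovisual interaction is the residual
# (AV - V) - A, i.e., what remains after the unisensory responses are
# accounted for.
rng = np.random.default_rng(0)
n_channels, n_times = 32, 500                      # hypothetical montage and epoch length
erp_a = rng.normal(size=(n_channels, n_times))     # auditory-only ERP
erp_v = rng.normal(size=(n_channels, n_times))     # visual-only ERP
erp_av = rng.normal(size=(n_channels, n_times))    # audiovisual ERP

# Remove the visual contribution before comparing with the auditory-only ERP.
erp_av_minus_v = erp_av - erp_v

# Cross-modal modulation: positive values = facilitation, negative = suppression.
interaction = erp_av_minus_v - erp_a

# Example readout: mean modulation in an N1-like window (assumed 80-120 ms at 1 kHz).
n1_window = slice(80, 120)
print("Mean N1-window modulation per channel:", interaction[:, n1_window].mean(axis=1))
```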
Affiliations
- Xiaohan Bao: Integrated Program in Neuroscience, McGill University, Montreal, QC H3G 1Y6, Canada
- Stephen G Lomber: Department of Physiology, McGill University, McIntyre Medical Sciences Building, Rm 1223, 3655 Promenade Sir William Osler, Montreal, QC H3G 1Y6, Canada

4. Yu L, Xu J. The Development of Multisensory Integration at the Neuronal Level. Adv Exp Med Biol 2024; 1437:153-172. [PMID: 38270859] [DOI: 10.1007/978-981-99-7611-9_10]
Abstract
Multisensory integration is a fundamental function of the brain. In the typical adult, multisensory neurons' responses to paired multisensory (e.g., audiovisual) cues are significantly more robust than the corresponding best unisensory response in many brain regions. Synthesizing sensory signals from multiple modalities can speed up sensory processing and improve the salience of external events or objects. Despite its significance, multisensory integration has been shown not to be a neonatal feature of the brain. Neurons' ability to effectively combine multisensory information does not emerge rapidly but develops gradually during early postnatal life (in cats, this requires 4-12 weeks). Multisensory experience is critical for this developmental process. If animals are prevented from experiencing normal visual scenes or sounds (i.e., deprived of the relevant multisensory experience), the development of the corresponding integrative ability is blocked until the appropriate multisensory experience is obtained. This section summarizes the extant literature on the development of multisensory integration (mainly using the cat superior colliculus as a model), sensory-deprivation-induced cross-modal plasticity, and how sensory experience (sensory exposure and perceptual learning) leads to plastic changes and the modification of neural circuits in cortical and subcortical areas.
Affiliations
- Liping Yu: Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China
- Jinghong Xu: Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China

5. Marchesotti S, Bernasconi F, Rognini G, De Lucia M, Bleuler H, Blanke O. Neural signatures of visuo-motor integration during human-robot interactions. Front Neurorobot 2023; 16:1034615. [PMID: 36776553] [PMCID: PMC9908758] [DOI: 10.3389/fnbot.2022.1034615]
Abstract
Visuo-motor integration shapes our daily experience and underpins the sense of feeling in control over our actions. The last decade has seen a surge in robotically and virtually mediated interactions, whereby bodily actions ultimately result in an artificial movement. Yet despite the growing number of applications, evidence on the neurophysiological correlates of visuo-motor processing during human-machine interactions under dynamic conditions remains scarce. Here we address this issue by employing a bimanual robotic interface able to track voluntary hand movements, rendered in real time as the motion of two virtual hands. We experimentally manipulated the visual feedback in the virtual reality with spatial and temporal conflicts and investigated their impact on (1) visuo-motor integration and (2) the subjective experience of being the author of one's actions (i.e., the sense of agency). Using somatosensory evoked responses measured with electroencephalography, we investigated the neural differences arising when the integration between motor commands and visual feedback is disrupted. Our results show that the right posterior parietal cortex encodes differences between congruent and spatially incongruent interactions. The experimental manipulations also induced a decrease in the sense of agency over the robotically mediated actions. These findings offer solid neurophysiological grounds that can be used in the future to monitor integration mechanisms during movements and, ultimately, to enhance subjective experience during human-machine interactions.
Affiliations
- Silvia Marchesotti: Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics and Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Geneva, Switzerland; Laboratory of Robotic Systems, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland (corresponding author)
- Fosco Bernasconi: Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics and Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Giulio Rognini: Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics and Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Geneva, Switzerland; Laboratory of Robotic Systems, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Marzia De Lucia: Laboratoire de Recherche en Neuroimagerie, University Hospital (CHUV) and University of Lausanne (UNIL), Lausanne, Switzerland
- Hannes Bleuler: Laboratory of Robotic Systems, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Olaf Blanke: Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics and Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Geneva, Switzerland; Department of Clinical Neurosciences, Faculty of Medicine, University Hospital, Geneva, Switzerland

6. Gori M, Bertonati G, Campus C, Amadeo MB. Multisensory representations of space and time in sensory cortices. Hum Brain Mapp 2022; 44:656-667. [PMID: 36169038] [PMCID: PMC9842891] [DOI: 10.1002/hbm.26090]
Abstract
Clear evidence has demonstrated a supramodal organization of sensory cortices, with multisensory processing occurring even at early stages of information encoding. Within this context, early recruitment of sensory areas is necessary for the development of fine domain-specific (i.e., spatial or temporal) skills regardless of the sensory modality involved, with auditory areas playing a crucial role in temporal processing and visual areas in spatial processing. Given the domain specificity and the multisensory nature of sensory areas, in this study we hypothesized that the preferential domains of representation (i.e., space and time) of visual and auditory cortices are also evident in the early processing of multisensory information. Thus, we measured the event-related potential (ERP) responses of 16 participants while they performed multisensory spatial and temporal bisection tasks. Audiovisual stimuli occurred at three different spatial positions and time lags, and participants had to evaluate whether the second stimulus was farther from the first or the third audiovisual stimulus in space (spatial bisection task) or in time (temporal bisection task). As predicted, the second audiovisual stimulus of both tasks elicited an early ERP response (time window 50-90 ms) in visual and auditory regions. However, this early ERP component was larger in occipital areas during the spatial bisection task, and in temporal regions during the temporal bisection task. Overall, these results confirm the domain specificity of visual and auditory cortices and reveal that this specificity also selectively modulates cortical activity in response to multisensory stimuli.
Affiliations
- Monica Gori: Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Giorgia Bertonati: Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy; Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), Università degli Studi di Genova, Genoa, Italy
- Claudio Campus: Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Maria Bianca Amadeo: Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy

7. Are auditory cues special? Evidence from cross-modal distractor-induced blindness. Atten Percept Psychophys 2022; 85:889-904. [PMID: 35902451] [PMCID: PMC10066119] [DOI: 10.3758/s13414-022-02540-0]
Abstract
A target that shares features with preceding distractor stimuli is less likely to be detected due to a distractor-driven activation of a negative attentional set. This transient impairment in perceiving the target (distractor-induced blindness/deafness) can be found within vision and audition. Recently, the phenomenon was observed in a cross-modal setting involving an auditory target and additional task-relevant visual information (cross-modal distractor-induced deafness). In the current study, consisting of three behavioral experiments, a visual target, indicated by an auditory cue, had to be detected despite the presence of visual distractors. Multiple distractors consistently led to reduced target detection if cue and target appeared in close temporal proximity, confirming cross-modal distractor-induced blindness. However, the effect on target detection was reduced compared to the effect of cross-modal distractor-induced deafness previously observed for reversed modalities. The physical features defining cue and target could not account for the diminished distractor effect in the current cross-modal task. Instead, this finding may be attributed to the auditory cue acting as an especially efficient release signal of the distractor-induced inhibition. Additionally, a multisensory enhancement of visual target detection by the concurrent auditory signal might have contributed to the reduced distractor effect.
8. Chua SFA, Liu Y, Harris JM, Otto TU. No selective integration required: A race model explains responses to audiovisual motion-in-depth. Cognition 2022; 227:105204. [PMID: 35753178] [DOI: 10.1016/j.cognition.2022.105204]
Abstract
Looming motion is an ecologically salient signal that often signifies danger. In both audition and vision, humans show behavioral biases in response to perceiving looming motion, which is suggested to reflect an adaptation for survival. However, it is an open question whether such biases also occur in the combined processing of multisensory signals. Toward this aim, Cappe, Thut, Romei, and Murray (2009) found that responses to audiovisual signals were faster for congruent looming motion compared with receding motion or incongruent combinations. They considered this evidence for selective integration of multisensory looming signals. To test this proposal, here we successfully replicate the behavioral results of Cappe et al. (2009). We then show that the redundant signals effect (RSE; a speedup of multisensory compared with unisensory responses) is not distinct for congruent looming motion. Instead, as predicted by a simple probability summation rule, the RSE is primarily modulated by the looming bias in audition, which suggests that multisensory processing inherits a unisensory effect. Finally, we compare a large set of so-called race models that implement probability summation but allow for interference between auditory and visual processing. The best-fitting model, selected by the Akaike Information Criterion (AIC), explained the RSE across conditions virtually perfectly, with interference parameters that were either constant or varied only with auditory motion. In the absence of effects jointly caused by auditory and visual motion, we conclude that selective integration is not required to explain the behavioral benefits that occur with audiovisual looming motion.
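The probability-summation benchmark referenced here is Raab's independent race prediction, F_race(t) = F_A(t) + F_V(t) - F_A(t)F_V(t), where F_A and F_V are the unisensory response-time distribution functions. The sketch below computes this baseline from empirical RT distributions; it omits the interference parameters fitted in the paper, and the RT arrays are synthetic placeholders rather than the study's data.

```python
import numpy as np

def ecdf(rts, grid):
    """Empirical cumulative distribution of response times evaluated on a grid."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, grid, side="right") / rts.size

# Placeholder unisensory and redundant-target RTs (in ms); real data would come
# from the auditory-only, visual-only, and audiovisual conditions.
rng = np.random.default_rng(1)
rt_a = rng.normal(420, 60, size=200)
rt_v = rng.normal(450, 70, size=200)
rt_av = rng.normal(380, 55, size=200)

grid = np.linspace(200, 700, 101)
f_a, f_v, f_av = ecdf(rt_a, grid), ecdf(rt_v, grid), ecdf(rt_av, grid)

# Raab's (1962) independent race prediction: probability that at least one
# unisensory process has finished by time t.
f_race = f_a + f_v - f_a * f_v

# Redundant signals effect relative to the race prediction: positive values
# would indicate responses faster than probability summation allows.
violation = f_av - f_race
print("Max violation of the race prediction:", violation.max())
```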
Affiliations
- S F Andrew Chua: School of Psychology & Neuroscience, University of St Andrews, St Mary's Quad, South Street, St Andrews KY16 9JP, United Kingdom
- Yue Liu: School of Psychology & Neuroscience, University of St Andrews, St Mary's Quad, South Street, St Andrews KY16 9JP, United Kingdom
- Julie M Harris: School of Psychology & Neuroscience, University of St Andrews, St Mary's Quad, South Street, St Andrews KY16 9JP, United Kingdom
- Thomas U Otto: School of Psychology & Neuroscience, University of St Andrews, St Mary's Quad, South Street, St Andrews KY16 9JP, United Kingdom

9. Gil-Guevara O, Bernal HA, Riveros AJ. Honey bees respond to multimodal stimuli following the Principle of Inverse Effectiveness. J Exp Biol 2022; 225:275501. [PMID: 35531628] [PMCID: PMC9206449] [DOI: 10.1242/jeb.243832]
Abstract
Multisensory integration is assumed to entail benefits for receivers across multiple ecological contexts. However, the effectiveness of signal integration is constrained by features of the spatiotemporal and intensity domains. How sensory modalities are integrated during tasks facilitated by learning and memory, such as pollination, remains unsolved. Honey bees use olfactory and visual cues during foraging, making them a good model for studying the use of multimodal signals. Here, we examined the effect of stimulus intensity on both learning and memory performance of bees trained using unimodal or bimodal stimuli. We measured performance and response latency across planned discrete levels of stimulus intensity. We employed the proboscis extension response conditioning protocol in honey bees, using an electromechanical setup that allowed us to control olfactory and visual stimuli simultaneously and precisely at different intensities. Our results show that bimodal enhancement during learning and memory became larger as intensity decreased, that is, when the separate individual components were least effective. Still, this effect was not detectable for response latency. Remarkably, these results support the principle of inverse effectiveness, traditionally studied in vertebrates, which predicts that multisensory stimuli are integrated more effectively when the best unisensory response is relatively weak. Thus, we argue that the performance of the bees with a bimodal stimulus depends on the interaction and intensity of its individual components. We further hold that the inclusion of findings across all levels of analysis enriches the traditional understanding of the mechanics and reliance of complex signals in honey bees. Summary: Bimodal enhancement during learning and memory tasks in Africanized honey bees increases as the stimulus intensity of its unimodal components decreases; this indicates that learning performance depends on the interaction between the intensity of its components and the nature of the sensory modalities involved, supporting the principle of inverse effectiveness.
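The principle of inverse effectiveness is conventionally quantified with the multisensory enhancement index of Stein and Meredith, ME = 100 x (CM - max(A, V)) / max(A, V), where CM is the response to the combined stimulus and A and V are the unisensory responses. The sketch below only illustrates the expected pattern (proportionally larger enhancement at lower stimulus intensities); the numbers are invented, not the bees' data, and the paper's own statistical treatment may differ.

```python
import numpy as np

def enhancement_index(combined, best_unisensory):
    """Multisensory enhancement (%) relative to the best unisensory response."""
    return 100.0 * (combined - best_unisensory) / best_unisensory

# Illustrative proportions of conditioned responses at three intensity levels
# (low, medium, high); values are made up to show the expected pattern.
unimodal_best = np.array([0.20, 0.45, 0.70])   # best single-modality performance
bimodal = np.array([0.35, 0.60, 0.78])         # performance with the compound stimulus

for level, uni, bi in zip(["low", "medium", "high"], unimodal_best, bimodal):
    print(f"{level:>6} intensity: enhancement = {enhancement_index(bi, uni):5.1f}%")

# Inverse effectiveness: the weaker the best unisensory response, the larger
# the proportional multisensory gain (here ~75% at low vs ~11% at high intensity).
```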
Affiliations
- Oswaldo Gil-Guevara: Departamento de Biología, Facultad de Ciencias Naturales, Universidad del Rosario, Cra. 26 #63B-48, Bogotá, Colombia
- Hernan A. Bernal: Programa de Ingeniería Biomédica, Escuela de Medicina y Ciencias de la Salud, Universidad del Rosario, Bogotá, Colombia
- Andre J. Riveros: Departamento de Biología, Facultad de Ciencias Naturales, Universidad del Rosario, Cra. 26 #63B-48, Bogotá, Colombia

10. Gröhn C, Norgren E, Eriksson L. A systematic review of the neural correlates of multisensory integration in schizophrenia. Schizophr Res Cogn 2021; 27:100219. [PMID: 34660211] [PMCID: PMC8502765] [DOI: 10.1016/j.scog.2021.100219]
Abstract
Multisensory integration (MSI), in which sensory signals from different modalities are unified, is necessary for our comprehensive perception of and effective adaptation to the objects and events around us. However, individuals with schizophrenia suffer from impairments in MSI, which could explain typical symptoms such as hallucinations and reality distortion. Because the neural correlates of aberrant MSI in schizophrenia help us understand the physiognomy of this psychiatric disorder, we performed a systematic review of the current research on this subject. The literature search targeted brain-imaging studies that investigated MSI in patients diagnosed with schizophrenia compared with healthy controls. Seventeen of 317 identified studies were finally included. To assess risk of bias, the Newcastle-Ottawa quality assessment was used, and the review was written according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). The results indicated that multisensory processes in schizophrenia are associated with aberrant, mainly reduced, neural activity in several brain regions, as measured by event-related potentials, oscillations, activity and connectivity. The conclusion is that a fronto-temporal region, comprising the inferior frontal gyrus, middle temporal gyrus and superior temporal gyrus/sulcus, along with the fusiform gyrus and the dorsal visual stream in the occipito-parietal cortex, are possible key regions of deficient MSI in schizophrenia.
Affiliations
- Lars Eriksson: Department of Social and Psychological Studies, Karlstad University, SE-651 88 Karlstad, Sweden (corresponding author)

11. Dias JW, McClaskey CM, Harris KC. Audiovisual speech is more than the sum of its parts: Auditory-visual superadditivity compensates for age-related declines in audible and lipread speech intelligibility. Psychol Aging 2021; 36:520-530. [PMID: 34124922] [PMCID: PMC8427734] [DOI: 10.1037/pag0000613]
Abstract
Multisensory input can improve perception of ambiguous unisensory information. For example, speech heard in noise can be more accurately identified when listeners see a speaker's articulating face. Importantly, these multisensory effects can be superadditive to listeners' ability to process unisensory speech, such that audiovisual speech identification is better than the sum of auditory-only and visual-only speech identification. Age-related declines in auditory and visual speech perception have been hypothesized to be concomitant with stronger cross-sensory influences on audiovisual speech identification, but little evidence exists to support this. Currently, studies do not account for the multisensory superadditive benefit of auditory-visual input in their metrics of the auditory or visual influence on audiovisual speech perception. Here we treat multisensory superadditivity as independent from unisensory auditory and visual processing. In the current investigation, older and younger adults identified auditory, visual, and audiovisual speech in noisy listening conditions. Performance across these conditions was used to compute conventional metrics of the auditory and visual influence on audiovisual speech identification and a metric of auditory-visual superadditivity. Consistent with past work, auditory and visual speech identification declined with age, audiovisual speech identification was preserved, and no age-related differences in the auditory or visual influence on audiovisual speech identification were observed. However, we found that auditory-visual superadditivity improved with age. The novel findings suggest that multisensory superadditivity is independent of unisensory processing. As auditory and visual speech identification decline with age, compensatory changes in multisensory superadditivity may preserve audiovisual speech identification in older adults.
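One simple way to make the distinction between unisensory influence and superadditivity concrete is sketched below, assuming accuracy is measured as proportion correct in each condition. The function names and the exact formulas are illustrative assumptions, not the metrics used in the paper.

```python
def audiovisual_metrics(p_a, p_v, p_av):
    """Toy metrics for audiovisual speech identification (proportions correct).

    visual_gain    : benefit of adding the face to the auditory signal (AV - A)
    auditory_gain  : benefit of adding the voice to the visual signal (AV - V)
    superadditivity: how much AV exceeds the simple sum of unisensory scores
    """
    return {
        "visual_gain": p_av - p_a,
        "auditory_gain": p_av - p_v,
        "superadditivity": p_av - (p_a + p_v),
    }

# Hypothetical older-adult profile: poorer unisensory scores, preserved AV score.
print(audiovisual_metrics(p_a=0.35, p_v=0.15, p_av=0.70))   # superadditivity = +0.20
# Hypothetical younger-adult profile: better unisensory scores, similar AV score.
print(audiovisual_metrics(p_a=0.55, p_v=0.25, p_av=0.75))   # superadditivity = -0.05
```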
Affiliations
- James W Dias: Department of Otolaryngology-Head and Neck Surgery

12. Keefe JM, Pokta E, Störmer VS. Cross-modal orienting of exogenous attention results in visual-cortical facilitation, not suppression. Sci Rep 2021; 11:10237. [PMID: 33986384] [PMCID: PMC8119727] [DOI: 10.1038/s41598-021-89654-x]
Abstract
Attention may be oriented exogenously (i.e., involuntarily) to the location of salient stimuli, resulting in improved perception. However, it is unknown whether exogenous attention improves perception by facilitating processing of attended information, suppressing processing of unattended information, or both. To test this question, we measured behavioral performance and cue-elicited neural changes in the electroencephalogram as participants (N = 19) performed a task in which a spatially non-predictive auditory cue preceded a visual target. Critically, this cue was either presented at a peripheral target location or from the center of the screen, allowing us to isolate spatially specific attentional activity. We find that both behavior and attention-mediated changes in visual-cortical activity are enhanced at the location of a cue prior to the onset of a target, but that behavior and neural activity at an unattended target location is equivalent to that following a central cue that does not direct attention (i.e., baseline). These results suggest that exogenous attention operates via facilitation of information at an attended location.
Affiliations
- Jonathan M Keefe: Department of Psychology, University of California, San Diego, 92092, USA
- Emilia Pokta: Department of Psychology, University of California, San Diego, 92092, USA
- Viola S Störmer: Department of Psychology, University of California, San Diego, 92092, USA; Department of Brain and Psychological Sciences, Dartmouth College, Hanover, USA

13. Tovar DA, Murray MM, Wallace MT. Selective Enhancement of Object Representations through Multisensory Integration. J Neurosci 2020; 40:5604-5615. [PMID: 32499378] [PMCID: PMC7363464] [DOI: 10.1523/jneurosci.2139-19.2020]
Abstract
Objects are the fundamental building blocks of how we create a representation of the external world. One major distinction among objects is between those that are animate versus those that are inanimate. In addition, many objects are specified by more than a single sense, yet the nature by which multisensory objects are represented by the brain remains poorly understood. Using representational similarity analysis of male and female human EEG signals, we show enhanced encoding of audiovisual objects when compared with their corresponding visual and auditory objects. Surprisingly, we discovered that the often-found processing advantages for animate objects were not evident under multisensory conditions. This was due to a greater neural enhancement of inanimate objects-which are more weakly encoded under unisensory conditions. Further analysis showed that the selective enhancement of inanimate audiovisual objects corresponded with an increase in shared representations across brain areas, suggesting that the enhancement was mediated by multisensory integration. Moreover, a distance-to-bound analysis provided critical links between neural findings and behavior. Improvements in neural decoding at the individual exemplar level for audiovisual inanimate objects predicted reaction time differences between multisensory and unisensory presentations during a Go/No-Go animate categorization task. Links between neural activity and behavioral measures were most evident at intervals of 100-200 ms and 350-500 ms after stimulus presentation, corresponding to time periods associated with sensory evidence accumulation and decision-making, respectively. Collectively, these findings provide key insights into a fundamental process the brain uses to maximize the information it captures across sensory systems to perform object recognition.
Significance Statement: Our world is filled with ever-changing sensory information that we are able to seamlessly transform into a coherent and meaningful perceptual experience. We accomplish this feat by combining different stimulus features into objects. However, despite the fact that these features span multiple senses, little is known about how the brain combines the various forms of sensory information into object representations. Here, we used EEG and machine learning to study how the brain processes auditory, visual, and audiovisual objects. Surprisingly, we found that nonliving (i.e., inanimate) objects, which are more difficult to process with one sense alone, benefited the most from engaging multiple senses.
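Representational similarity analysis of the kind described here compares the condition-by-condition dissimilarity structure of neural response patterns across sensory contexts. A minimal sketch, assuming trial-averaged EEG patterns (one channel vector per object exemplar and condition), is given below; the synthetic data and the distance choice (1 - Pearson correlation) are illustrative and do not reproduce the paper's decoding-based pipeline.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns of every pair of object exemplars."""
    return 1.0 - np.corrcoef(patterns)

def rdm_similarity(rdm_1, rdm_2):
    """Spearman-style comparison of two RDMs using their upper triangles."""
    iu = np.triu_indices_from(rdm_1, k=1)
    a, b = rdm_1[iu], rdm_2[iu]
    a_ranks = a.argsort().argsort()
    b_ranks = b.argsort().argsort()
    return np.corrcoef(a_ranks, b_ranks)[0, 1]

# Illustrative data: 20 object exemplars x 64 channels per condition.
rng = np.random.default_rng(2)
auditory = rng.normal(size=(20, 64))
visual = rng.normal(size=(20, 64))
audiovisual = 0.5 * (auditory + visual) + rng.normal(scale=0.5, size=(20, 64))

# Stronger shared structure between the AV condition and the unisensory ones
# would indicate that object information is preserved (or enhanced) when the
# two senses are combined.
print("AV vs A:", rdm_similarity(rdm(audiovisual), rdm(auditory)))
print("AV vs V:", rdm_similarity(rdm(audiovisual), rdm(visual)))
```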
Affiliations
- David A Tovar: School of Medicine, Vanderbilt University, Nashville, Tennessee 37240; Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee 37240
- Micah M Murray: The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, Lausanne University Hospital and University of Lausanne (CHUV-UNIL), 1011 Lausanne, Switzerland; Sensory, Cognitive and Perceptual Neuroscience Section, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, 1015 Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des aveugles and University of Lausanne, 1002 Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37240
- Mark T Wallace: School of Medicine, Vanderbilt University, Nashville, Tennessee 37240; Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee 37240; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37240; Department of Psychology, Vanderbilt University, Nashville, Tennessee 37240; Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37240; Department of Pharmacology, Vanderbilt University, Nashville, Tennessee 37240

14. The interplay between multisensory integration and perceptual decision making. Neuroimage 2020; 222:116970. [PMID: 32454204] [DOI: 10.1016/j.neuroimage.2020.116970]
Abstract
Facing perceptual uncertainty, the brain combines information from different senses to make optimal perceptual decisions and to guide behavior. However, decision making has been investigated mostly in unimodal contexts. Thus, how the brain integrates multisensory information during decision making is still unclear. Two opposing, but not mutually exclusive, scenarios are plausible: either the brain thoroughly combines the signals from different modalities before starting to build a supramodal decision, or unimodal signals are integrated during decision formation. To answer this question, we devised a paradigm mimicking naturalistic situations where human participants were exposed to continuous cacophonous audiovisual inputs containing an unpredictable signal cue in one or two modalities and had to perform a signal detection task or a cue categorization task. First, model-based analyses of behavioral data indicated that multisensory integration takes place alongside perceptual decision making. Next, using supervised machine learning on concurrently recorded EEG, we identified neural signatures of two processing stages: sensory encoding and decision formation. Generalization analyses across experimental conditions and time revealed that multisensory cues were processed faster during both stages. We further established that acceleration of neural dynamics during sensory encoding and decision formation was directly linked to multisensory integration. Our results were consistent across both signal detection and categorization tasks. Taken together, the results revealed a continuous dynamic interplay between multisensory integration and decision making processes (mixed scenario), with integration of multimodal information taking place both during sensory encoding as well as decision formation.
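The time-resolved and temporal-generalization decoding described above (train a classifier at one time point, test it at every other time point) can be sketched as follows. The synthetic data, single train/test split, and classifier choice keep this deliberately minimal and are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_trials, n_channels, n_times = 200, 32, 60
labels = rng.integers(0, 2, size=n_trials)           # e.g., signal present vs absent
eeg = rng.normal(size=(n_trials, n_channels, n_times))
# Inject a label-dependent pattern in a late window so decoding has something to find.
eeg[labels == 1, :5, 30:45] += 0.8

train = slice(0, 100)    # in practice: proper cross-validation folds
test = slice(100, 200)

# Temporal generalization matrix: accuracy when training at time t and testing at t'.
tgm = np.zeros((n_times, n_times))
for t_train in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(eeg[train, :, t_train], labels[train])
    for t_test in range(n_times):
        tgm[t_train, t_test] = clf.score(eeg[test, :, t_test], labels[test])

# Earlier or broader above-chance generalization for multisensory trials would be
# read as faster dynamics of sensory encoding and decision formation.
print("Peak decoding accuracy:", tgm.max())
```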
15. Gau R, Bazin PL, Trampel R, Turner R, Noppeney U. Resolving multisensory and attentional influences across cortical depth in sensory cortices. eLife 2020; 9:46856. [PMID: 31913119] [PMCID: PMC6984812] [DOI: 10.7554/elife.46856]
Abstract
In our environment, our senses are bombarded with a myriad of signals, only a subset of which is relevant for our goals. Using sub-millimeter-resolution fMRI at 7T, we resolved BOLD-response and activation patterns across cortical depth in early sensory cortices to auditory, visual and audiovisual stimuli under auditory or visual attention. In visual cortices, auditory stimulation induced widespread inhibition irrespective of attention, whereas auditory relative to visual attention suppressed mainly central visual field representations. In auditory cortices, visual stimulation suppressed activations, but amplified responses to concurrent auditory stimuli, in a patchy topography. Critically, multisensory interactions in auditory cortices were stronger in deeper laminae, while attentional influences were greatest at the surface. These distinct depth-dependent profiles suggest that multisensory and attentional mechanisms regulate sensory processing via partly distinct circuitries. Our findings are crucial for understanding how the brain regulates information flow across senses to interact with our complex multisensory world.
Affiliations
- Remi Gau: Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom; Institute of Psychology, Université Catholique de Louvain, Louvain-la-Neuve, Belgium; Institute of Neuroscience, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
- Pierre-Louis Bazin: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Integrative Model-based Cognitive Neuroscience research unit, University of Amsterdam, Amsterdam, Netherlands
- Robert Trampel: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Robert Turner: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Sir Peter Mansfield Imaging Centre, University of Nottingham, Nottingham, United Kingdom
- Uta Noppeney: Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands

16. Wallace MT, Woynaroski TG, Stevenson RA. Multisensory Integration as a Window into Orderly and Disrupted Cognition and Communication. Annu Rev Psychol 2020; 71:193-219. [DOI: 10.1146/annurev-psych-010419-051112]
Abstract
During our everyday lives, we are confronted with a vast amount of information from several sensory modalities. This multisensory information needs to be appropriately integrated for us to effectively engage with and learn from our world. Research carried out over the last half century has provided new insights into the way such multisensory processing improves human performance and perception; the neurophysiological foundations of multisensory function; the time course for its development; how multisensory abilities differ in clinical populations; and, most recently, the links between multisensory processing and cognitive abilities. This review summarizes the extant literature on multisensory function in typical and atypical circumstances, discusses the implications of the work carried out to date for theory and research, and points toward next steps for advancing the field.
Affiliations
- Mark T. Wallace: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA; Departments of Psychology and Pharmacology, Vanderbilt University, Nashville, Tennessee 37232, USA; Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee 37232, USA; Vanderbilt Kennedy Center, Nashville, Tennessee 37203, USA
- Tiffany G. Woynaroski: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee 37232, USA; Vanderbilt Kennedy Center, Nashville, Tennessee 37203, USA
- Ryan A. Stevenson: Departments of Psychology and Psychiatry and Program in Neuroscience, University of Western Ontario, London, Ontario N6A 3K7, Canada; Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada

17. Bernasconi F, Noel JP, Park HD, Faivre N, Seeck M, Spinelli L, Schaller K, Blanke O, Serino A. Audio-Tactile and Peripersonal Space Processing Around the Trunk in Human Parietal and Temporal Cortex: An Intracranial EEG Study. Cereb Cortex 2019; 28:3385-3397. [PMID: 30010843] [PMCID: PMC6095214] [DOI: 10.1093/cercor/bhy156]
Abstract
Interactions with the environment happen within one's peripersonal space (PPS), the space surrounding the body. Studies in monkeys and humans have highlighted a multisensory, distributed cortical network representing the PPS. However, knowledge about the temporal dynamics of PPS processing around the trunk is lacking. Here, we recorded intracranial electroencephalography (iEEG) in humans while administering tactile stimulation (T), approaching auditory stimuli (A), and the two combined (AT). To map PPS, tactile stimulation was delivered when the sound was far from, at an intermediate distance from, or close to the body. Nineteen percent of the electrodes showed AT multisensory integration. Among those, 30% showed a PPS effect, that is, a modulation of the response as a function of the distance between the sound and the body. AT multisensory integration and PPS effects had similar spatiotemporal characteristics, with an early response (~50 ms) in the insular cortex and later responses (~200 ms) in the precentral and postcentral gyri. The superior temporal cortex showed a different response pattern, with AT multisensory integration at ~100 ms but no PPS effect. These results represent the first iEEG delineation of PPS processing in humans and show that PPS and multisensory integration occur at similar neural sites and time periods, suggesting that PPS representation is based on a spatial modulation of multisensory integration.
Affiliations
- Fosco Bernasconi: Laboratory of Cognitive Neuroscience, Brain Mind Institute, Swiss Federal Institute of Technology (Ecole Polytechnique Fédérale de Lausanne), Geneva, Switzerland; Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Jean-Paul Noel: Laboratory of Cognitive Neuroscience, Brain Mind Institute, Swiss Federal Institute of Technology (Ecole Polytechnique Fédérale de Lausanne), Geneva, Switzerland; Neuroscience Graduate Program, Vanderbilt University, Nashville, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, USA
- Hyeong Dong Park: Laboratory of Cognitive Neuroscience, Brain Mind Institute, Swiss Federal Institute of Technology (Ecole Polytechnique Fédérale de Lausanne), Geneva, Switzerland; Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Nathan Faivre: Laboratory of Cognitive Neuroscience, Brain Mind Institute, Swiss Federal Institute of Technology (Ecole Polytechnique Fédérale de Lausanne), Geneva, Switzerland; Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Geneva, Switzerland; Centre d'Economie de la Sorbonne, CNRS UMR 8174, Paris, France
- Margitta Seeck: Presurgical Epilepsy Evaluation Unit, Neurology Department, University Hospital of Geneva, Geneva, Switzerland
- Laurent Spinelli: Presurgical Epilepsy Evaluation Unit, Neurology Department, University Hospital of Geneva, Geneva, Switzerland
- Karl Schaller: Department of Neurosurgery, Geneva University Hospital (HUG), 4 Rue Gabrielle-Perret-Gentil, Geneva, Switzerland
- Olaf Blanke: Laboratory of Cognitive Neuroscience, Brain Mind Institute, Swiss Federal Institute of Technology (Ecole Polytechnique Fédérale de Lausanne), Geneva, Switzerland; Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Geneva, Switzerland; Centre d'Economie de la Sorbonne, CNRS UMR 8174, Paris, France
- Andrea Serino: Laboratory of Cognitive Neuroscience, Brain Mind Institute, Swiss Federal Institute of Technology (Ecole Polytechnique Fédérale de Lausanne), Geneva, Switzerland; Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Geneva, Switzerland; MySpace Lab, Department of Clinical Neuroscience, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Lausanne, Switzerland

18. Zhao S, Wang Y, Feng C, Feng W. Multiple phases of cross-sensory interactions associated with the audiovisual bounce-inducing effect. Biol Psychol 2019; 149:107805. [PMID: 31689465] [DOI: 10.1016/j.biopsycho.2019.107805]
Abstract
Using event-related potential (ERP) recordings, the present study investigated the cross-modal neural activities underlying the audiovisual bounce-inducing effect (ABE) via a novel experimental design wherein the audiovisual bouncing trials were induced solely by the ABE. The within-subject (percept-based) analysis showed that early cross-modal interactions within 100-200 ms after sound onset over fronto-central and occipital regions were associated with the occurrence of the ABE, but the cross-modal interaction at a later latency (ND250, 220-280 ms) over fronto-central region did not differ between ABE trials and non-ABE trials. The between-subject analysis indicated that the cross-modal interaction revealed by ND250 was larger for subjects who perceived the ABE more frequently. These findings suggest that the ABE is generated as a consequence of the rapid interplay between the variations of early cross-modal interactions and the general multisensory binding predisposition at an individual level.
Affiliations
- Song Zhao: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Yajie Wang: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Chengzhi Feng: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Wenfeng Feng: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China

19. Bidelman GM, Myers MH. Frontal cortex selectively overrides auditory processing to bias perception for looming sonic motion. Brain Res 2019; 1726:146507. [PMID: 31606413] [DOI: 10.1016/j.brainres.2019.146507]
Abstract
Rising-intensity sounds signal approaching objects traveling toward an observer. A variety of species preferentially respond to looming over receding auditory motion, reflecting an evolutionary perceptual bias for recognizing approaching threats. We probed the neural origins of this stark perceptual anisotropy to reveal how the brain creates privilege for auditory looming events. While recording neural activity via electroencephalography (EEG), human listeners rapidly judged whether dynamic (intensity-varying) tones were looming or receding in percept. Behaviorally, listeners responded faster to auditory looms, confirming a perceptual bias for approaching signals. EEG source analysis revealed sensory activation localized to primary auditory cortex (PAC) and decision-related activity in prefrontal cortex (PFC) within 200 ms after sound onset, followed by additional expansive PFC activation by 500 ms. Notably, early PFC (but not PAC) activity rapidly differentiated looming and receding stimuli, and this effect roughly co-occurred with sound arrival in auditory cortex. Brain-behavior correlations revealed an association between PFC neural latencies and listeners' speed of sonic motion judgments. Directed functional connectivity revealed stronger information flow from PFC to PAC during looming vs. receding sounds. Our electrophysiological data reveal a critical, previously undocumented role of prefrontal cortex in judging dynamic sonic motion. Both a faster neural bias and a functional override of obligatory sensory processing via selective, directional PFC signaling toward the auditory system establish the perceptual privilege for approaching, looming sounds.
Affiliations
- Gavin M Bidelman: Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA
- Mark H Myers: University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA

20. Cross-modal size-contrast illusion: Acoustic increases in intensity and bandwidth modulate haptic representation of object size. Sci Rep 2019; 9:14440. [PMID: 31595003] [PMCID: PMC6783429] [DOI: 10.1038/s41598-019-50912-8]
Abstract
Changes in the retinal size of stationary objects provide a cue to the observer's motion in the environment: increases indicate the observer's forward motion, and decreases backward motion. In this study, a series of images, each comprising a pair of pine-tree figures, was translated into the auditory modality using sensory substitution software. The resulting auditory stimuli were presented in an ascending sequence (i.e., increasing in intensity and bandwidth, compatible with forward motion), a descending sequence (i.e., decreasing in intensity and bandwidth, compatible with backward motion), or in a scrambled order. During the presentation of the stimuli, blindfolded participants estimated the lengths of wooden sticks by haptics. Results showed that those exposed to the stimuli compatible with forward motion underestimated the lengths of the sticks. This consistent underestimation may share some aspects with visual size-contrast effects such as the Ebbinghaus illusion. In contrast, participants in the other two conditions did not show errors of such magnitude in size estimation, which is consistent with the "adaptive perceptual bias" toward acoustic increases in intensity and bandwidth. In sum, we report a novel cross-modal size-contrast illusion, which reveals that auditory motion cues compatible with listeners' forward motion modulate haptic representations of object size.
21. Cai H, Xu X, Zhang Y, Cong X, Lu X, Huo X. Elevated lead levels from e-waste exposure are linked to sensory integration difficulties in preschool children. Neurotoxicology 2019; 71:150-158. [PMID: 30664973] [DOI: 10.1016/j.neuro.2019.01.004]
Abstract
Exposure to lead is associated with adverse effects on neurodevelopment. However, studies of the effects of lead on sensory integration are few. The purpose of this research is to investigate the effect of lead exposure on child sensory integration by correlating the blood lead levels of children with sensory processing measures. A total of 574 children, from 3 to 6 years of age, 358 from an electronic waste (e-waste) recycling town named Guiyu, and 216 from Haojiang, a nearby town with no e-waste recycling activity, were recruited in this study. The median blood lead level in Guiyu children was 4.88 μg/dL, higher than the 3.47 μg/dL blood lead level in Haojiang children (P < 0.001). 47.2% of Guiyu children had blood lead levels exceeding 5 μg/dL. The median concentration of serum cortisol, an HPA-axis biomarker, in Guiyu children was significantly lower than in Haojiang, and was negatively correlated with blood lead levels. All subscale scores and the total score of the Sensory Processing Measure (Hong Kong Chinese version, SPM-HKC) in Guiyu children were higher than Haojiang children, indicating greater difficulties, especially for touch, body awareness, balance and motion, and total sensory systems. Sensory processing scores were positively correlated with blood lead, except for touch, which was negatively correlated with serum cortisol levels. Simultaneously, all subscale scores and the total SPM-HKC scores for children with high blood lead levels (blood lead > 5 μg/dL) were higher than those in the low blood lead level group (blood lead < 5 μg/dL), especially for hearing, touch, body awareness, balance and motion, and total sensory systems. Our findings suggest that lead exposure in e-waste recycling areas may result in a decrease in serum cortisol levels and an increase in child sensory integration difficulties. Cortisol may be involved in touch-related sensory integration difficulties.
Collapse
Affiliation(s)
- Haoxing Cai
- Laboratory of Environmental Medicine and Developmental Toxicology, Shantou University Medical College, Shantou 515041, Guangdong, China
| | - Xijin Xu
- Laboratory of Environmental Medicine and Developmental Toxicology, Shantou University Medical College, Shantou 515041, Guangdong, China; Department of Cell Biology and Genetics, Shantou University Medical College, Shantou 515041, Guangdong, China
| | - Yu Zhang
- Laboratory of Environmental Medicine and Developmental Toxicology, Shantou University Medical College, Shantou 515041, Guangdong, China; Immunoendocrinology, Division of Medical Biology, Department of Pathology and Medical Biology, University of Groningen and University Medical Center Groningen (UMCG), Hanzeplein 1, 9713 GZ Groningen, the Netherlands
| | - Xiaowei Cong
- Laboratory of Environmental Medicine and Developmental Toxicology, Shantou University Medical College, Shantou 515041, Guangdong, China
| | - Xueling Lu
- Laboratory of Environmental Medicine and Developmental Toxicology, Shantou University Medical College, Shantou 515041, Guangdong, China; Department of Epidemiology, University of Groningen, University Medical Center Groningen, Hanzeplein 1, 9713 GZ Groningen, the Netherlands
| | - Xia Huo
- Laboratory of Environmental Medicine and Developmental Toxicology, Guangdong Key Laboratory of Environmental Pollution and Health, School of Environment, Jinan University, 855 East Xingye Avenue, Guangzhou 511486, Guangdong, China.
| |
Collapse
|
22
|
Glatz C, Chuang LL. The time course of auditory looming cues in redirecting visuo-spatial attention. Sci Rep 2019; 9:743. [PMID: 30679468 PMCID: PMC6345893 DOI: 10.1038/s41598-018-36033-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2018] [Accepted: 11/14/2018] [Indexed: 11/09/2022] Open
Abstract
By orienting attention, auditory cues can improve the discrimination of spatially congruent visual targets. Looming sounds that increase in intensity are processed preferentially by the brain. Thus, we investigated whether auditory looming cues can orient visuo-spatial attention more effectively than static and receding sounds. Specifically, different auditory cues could redirect attention away from a continuous central visuo-motor tracking task to peripheral visual targets that appeared occasionally. To investigate the time course of crossmodal cuing, Experiment 1 presented visual targets at different time points across a 500 ms auditory cue's presentation. No benefits were found for simultaneous audio-visual cue-target presentation. The largest crossmodal benefit occurred at an early cue-target onset asynchrony (CTOA = 250 ms), regardless of auditory cue type, and it diminished at CTOA = 500 ms for static and receding cues. However, auditory looming cues showed a late crossmodal cuing benefit at CTOA = 500 ms. Experiment 2 showed that this late auditory looming cue benefit was independent of the cue's intensity at the moment the visual target appeared. Thus, we conclude that the late crossmodal benefit throughout an auditory looming cue's presentation is due to its increasing intensity profile. The neural basis for this benefit and its ecological implications are discussed.
Collapse
Affiliation(s)
- Christiane Glatz
- Max Planck Institute for Biological Cybernetics, Department Human Perception, Cognition, and Action, Tübingen, 72076, Germany.,Graduate Training Centre of Neuroscience, University of Tübingen, Tübingen, 72074, Germany
| | - Lewis L Chuang
- Max Planck Institute for Biological Cybernetics, Department Human Perception, Cognition, and Action, Tübingen, 72076, Germany.,Institute for Informatics, Ludwig-Maximilians-Universität, Munich, 80337, Germany.
| |
Collapse
|
23
|
|
24
|
Noel JP, Simon D, Thelen A, Maier A, Blake R, Wallace MT. Probing Electrophysiological Indices of Perceptual Awareness across Unisensory and Multisensory Modalities. J Cogn Neurosci 2018; 30:814-828. [PMID: 29488853 PMCID: PMC10804124 DOI: 10.1162/jocn_a_01247] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2024]
Abstract
The neural underpinnings of perceptual awareness have been extensively studied using unisensory (e.g., visual alone) stimuli. However, perception is generally multisensory, and it is unclear whether the neural architecture uncovered in these studies directly translates to the multisensory domain. Here, we use EEG to examine brain responses associated with the processing of visual, auditory, and audiovisual stimuli presented near threshold levels of detectability, with the aim of deciphering similarities and differences in the neural signals indexing the transition into perceptual awareness across vision, audition, and combined visual-auditory (multisensory) processing. More specifically, we examine (1) the presence of late evoked potentials (∼>300 msec), (2) the across-trial reproducibility, and (3) the evoked complexity associated with perceived versus nonperceived stimuli. Results reveal that, although perceived stimuli are associated with the presence of late evoked potentials across each of the examined sensory modalities, between-trial variability and EEG complexity differed for unisensory versus multisensory conditions. Whereas across-trial variability and complexity differed for perceived versus nonperceived stimuli in the visual and auditory conditions, this was not the case for the multisensory condition. Taken together, these results suggest that there are fundamental differences in the neural correlates of perceptual awareness for unisensory versus multisensory stimuli. Specifically, the work argues that the presence of late evoked potentials, as opposed to neural reproducibility or complexity, most closely tracks perceptual awareness regardless of the nature of the sensory stimulus. In addition, the current findings suggest a greater similarity between the neural correlates of perceptual awareness of unisensory (visual and auditory) stimuli when compared with multisensory stimuli.
Collapse
Affiliation(s)
- Jean-Paul Noel
- Neuroscience Graduate Program, Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
- Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
| | - David Simon
- Neuroscience Graduate Program, Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
- Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
| | - Antonia Thelen
- Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
| | - Alexander Maier
- Department of Psychology, Vanderbilt University, Nashville, TN 37235, USA
| | - Randolph Blake
- Department of Psychology, Vanderbilt University, Nashville, TN 37235, USA
| | - Mark T. Wallace
- Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
- Department of Psychology, Vanderbilt University, Nashville, TN 37235, USA
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Department of Psychiatry, Vanderbilt University Medical Center, Nashville, TN 37235, USA
| |
Collapse
|
25
|
Yamasaki D, Miyoshi K, Altmann CF, Ashida H. Front-Presented Looming Sound Selectively Alters the Perceived Size of a Visual Looming Object. Perception 2018; 47:751-771. [PMID: 29783921 DOI: 10.1177/0301006618777708] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Despite accumulating evidence for the spatial rule governing cross-modal interaction, whereby interaction depends on the spatial consistency of the stimuli, it is still unclear whether 3D spatial consistency (i.e., front/rear of the body) of stimuli also regulates audiovisual interaction. We investigated how sounds with increasing/decreasing intensity (looming/receding sounds) presented from the front or rear space of the body affect the perceived size of a dynamic visual object. Participants performed a size-matching task (Experiments 1 and 2) and a size adjustment task (Experiment 3) on visual stimuli with increasing/decreasing diameter, while being exposed to a front- or rear-presented sound with increasing/decreasing intensity. Throughout these experiments, we demonstrated that only the front-presented looming sound caused overestimation of the size of the spatially consistent looming visual stimulus, but not of the spatially inconsistent or receding visual stimuli. The receding sound had no significant effect on vision. Our results reveal that looming sound alters dynamic visual size perception depending on consistency in the approaching quality and the front-rear spatial location of the audiovisual stimuli, suggesting that the human brain processes audiovisual inputs differently based on their 3D spatial consistency. This selective interaction between looming signals should contribute to faster detection of approaching threats. Our findings extend the spatial rule governing audiovisual interaction into 3D space.
Collapse
Affiliation(s)
| | | | - Christian F Altmann
- Human Brain Research Center, Graduate School of Medicine, Kyoto University, Japan
| | | |
Collapse
|
26
|
Huang R, Chen C, Sereno MI. Spatiotemporal integration of looming visual and tactile stimuli near the face. Hum Brain Mapp 2018; 39:2156-2176. [PMID: 29411461 PMCID: PMC5895522 DOI: 10.1002/hbm.23995] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2017] [Revised: 01/10/2018] [Accepted: 01/26/2018] [Indexed: 12/27/2022] Open
Abstract
Real-world objects approaching or passing by an observer often generate visual, auditory, and tactile signals with different onsets and durations. Prompt detection and avoidance of an impending threat depend on precise binding of looming signals across modalities. Here we constructed a multisensory apparatus to study the spatiotemporal integration of looming visual and tactile stimuli near the face. In a psychophysical experiment, subjects assessed the subjective synchrony between a looming ball and an air puff delivered to the same side of the face with a varying temporal offset. Multisensory stimuli with similar onset times were perceived as completely out of sync and assessed with the lowest subjective synchrony index (SSI). Across subjects, the SSI peaked at an offset between 800 and 1,000 ms, where the multisensory stimuli were perceived as optimally in sync. In an fMRI experiment, tactile, visual, tactile-visual out-of-sync (TVoS), and tactile-visual in-sync (TViS) stimuli were delivered to either side of the face in randomized events. Group-average statistical responses to different stimuli were compared within each surface-based region of interest (sROI) outlined on the cortical surface. Most sROIs showed a preference for contralateral stimuli and higher responses to multisensory than unisensory stimuli. In several bilateral sROIs, particularly the human MT+ complex and V6A, responses to spatially aligned multisensory stimuli (TVoS) were further enhanced when the stimuli were in-sync (TViS), as expressed by TVoS < TViS. This study demonstrates the perceptual and neural mechanisms of multisensory integration near the face, which has potential applications in the development of multisensory entertainment systems and media.
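A simple way to locate the kind of synchrony peak reported here is to fit a quadratic to mean synchrony ratings across offsets and read off the vertex. The sketch below does this on invented numbers; the offsets and ratings are illustrative only, not the study's data.

```python
import numpy as np

# Hypothetical mean subjective synchrony index (SSI) per visual-tactile onset offset (ms).
offsets_ms = np.array([0, 200, 400, 600, 800, 1000, 1200], float)
mean_ssi   = np.array([1.2, 2.0, 3.1, 4.0, 4.6, 4.5, 3.2])

a, b, c = np.polyfit(offsets_ms, mean_ssi, 2)    # quadratic fit (highest power first)
peak = -b / (2 * a)                              # vertex of the fitted parabola
print(f"estimated SSI peak near {peak:.0f} ms")  # ~820 ms with these illustrative values
```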
Collapse
Affiliation(s)
- Ruey‐Song Huang
- Institute for Neural Computation, University of California, San Diego, La Jolla, California
| | - Ching‐fu Chen
- Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, California
| | - Martin I. Sereno
- Department of Psychology and Neuroimaging Center, San Diego State University, San Diego, California
- Experimental Psychology, University College London, London, UK
| |
Collapse
|
27
|
Audiovisual integration in depth: multisensory binding and gain as a function of distance. Exp Brain Res 2018; 236:1939-1951. [PMID: 29700577 PMCID: PMC6010498 DOI: 10.1007/s00221-018-5274-7] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2017] [Accepted: 02/19/2018] [Indexed: 11/01/2022]
Abstract
The integration of information across sensory modalities is dependent on the spatiotemporal characteristics of the stimuli that are paired. Despite large variation in the distance over which events occur in our environment, relatively little is known regarding how stimulus-observer distance affects multisensory integration. Prior work has suggested that exteroceptive stimuli are integrated over larger temporal intervals in near relative to far space, and that larger multisensory facilitations are evident in far relative to near space. Here, we sought to examine the interrelationship between these previously established distance-related features of multisensory processing. Participants performed an audiovisual simultaneity judgment and a redundant target task in near and far space, while audiovisual stimuli were presented at a range of temporal delays (i.e., stimulus onset asynchronies). In line with previous findings, temporal acuity was poorer in near relative to far space. Furthermore, reaction times to asynchronously presented audiovisual targets suggested a temporal window for fast detection, that is, a range of stimulus asynchronies that was also larger in near as compared with far space. However, the range of reaction times over which multisensory response enhancement was observed was limited to a restricted range of relatively small (i.e., 150 ms) asynchronies and did not differ significantly between near and far space. Furthermore, for synchronous presentations, these distance-related (i.e., near vs. far) modulations in temporal acuity and multisensory gain correlated negatively at the individual-subject level. Thus, the findings support the conclusion that multisensory temporal binding and gain are asymmetrically modulated as a function of distance from the observer, and specify that this relationship holds specifically for temporally synchronous audiovisual stimulus presentations.
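Simultaneity-judgment data of the kind described here are often summarized by fitting a Gaussian to the proportion of "simultaneous" responses across stimulus onset asynchronies and deriving a criterion-based window width. The sketch below illustrates that procedure on invented data; the SOAs, response rates, and the 75% criterion are assumptions for illustration, not the study's values or its exact method.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical proportion of "simultaneous" responses at each audiovisual SOA (ms).
soa = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], float)
p_sync = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.85, 0.55, 0.20, 0.08])

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

(amp, mu, sigma), _ = curve_fit(gauss, soa, p_sync, p0=[1.0, 0.0, 150.0])
# One common convention: the binding window spans the SOAs where the fitted
# curve exceeds a criterion (here 75% of its amplitude).
half_width = sigma * np.sqrt(2 * np.log(1 / 0.75))
print(f"window centre ~{mu:.0f} ms, criterion-based half-width ~{half_width:.0f} ms")
```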
Collapse
|
28
|
Murray MM, Thelen A, Ionta S, Wallace MT. Contributions of Intraindividual and Interindividual Differences to Multisensory Processes. J Cogn Neurosci 2018; 31:360-376. [PMID: 29488852 DOI: 10.1162/jocn_a_01246] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Most evidence on the neural and perceptual correlates of sensory processing derives from studies that have focused on only a single sensory modality and averaged the data from groups of participants. Although valuable, such studies ignore the substantial interindividual and intraindividual differences that are undoubtedly at play. Such variability plays an integral role in both the behavioral/perceptual realms and in the neural correlates of these processes, but substantially less is known when compared with group-averaged data. Recently, it has been shown that the presentation of stimuli from two or more sensory modalities (i.e., multisensory stimulation) not only results in the well-established performance gains but also gives rise to reductions in behavioral and neural response variability. To better understand the relationship between neural and behavioral response variability under multisensory conditions, this study investigated both behavior and brain activity in a task requiring participants to discriminate moving versus static stimuli presented in either a unisensory or multisensory context. EEG data were analyzed with respect to intraindividual and interindividual differences in RTs. The results showed that trial-by-trial variability of RTs was significantly reduced under audiovisual presentation conditions as compared with visual-only presentations across all participants. Intraindividual variability of RTs was linked to changes in correlated activity between clusters within an occipital to frontal network. In addition, interindividual variability of RTs was linked to differential recruitment of medial frontal cortices. The present findings highlight differences in the brain networks that support behavioral benefits during unisensory versus multisensory motion detection and provide an important view into the functional dynamics within neuronal networks underpinning intraindividual performance differences.
Collapse
Affiliation(s)
- Micah M Murray
- Vaudois University Hospital Center and University of Lausanne.,Center for Biomedical Imaging of Lausanne and Geneva.,Fondation Asile des Aveugles and University of Lausanne.,Vanderbilt University Medical Center
| | | | - Silvio Ionta
- Vaudois University Hospital Center and University of Lausanne.,Fondation Asile des Aveugles and University of Lausanne.,ETH Zürich
| | - Mark T Wallace
- Vanderbilt University Medical Center.,Vanderbilt University
| |
Collapse
|
29
|
Myers MH, Iannaccone A, Bidelman GM. A pilot investigation of audiovisual processing and multisensory integration in patients with inherited retinal dystrophies. BMC Ophthalmol 2017; 17:240. [PMID: 29212538 PMCID: PMC5719743 DOI: 10.1186/s12886-017-0640-y] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2017] [Accepted: 11/29/2017] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND In this study, we examined audiovisual (AV) processing in normal and visually impaired individuals who exhibit partial loss of vision due to inherited retinal dystrophies (IRDs). METHODS Two groups were analyzed for this pilot study: Group 1 was composed of IRD participants: two with autosomal dominant retinitis pigmentosa (RP), two with autosomal recessive cone-rod dystrophy (CORD), and two with the related complex disorder, Bardet-Biedl syndrome (BBS); Group 2 was composed of 15 non-IRD participants (controls). Audiovisual looming and receding stimuli (conveying perceptual motion) were used to assess the cortical processing and integration of unimodal (A or V) and multimodal (AV) sensory cues. Electroencephalography (EEG) was used to simultaneously resolve the temporal and spatial characteristics of AV processing and assess differences in neural responses between groups. Measurement of AV integration was accomplished via quantification of the EEG's spectral power and event-related brain potentials (ERPs). RESULTS Results show that IRD individuals exhibit reduced AV integration for concurrent audio and visual (AV) stimuli but increased brain activity during the unimodal A (but not V) presentation. This was corroborated in behavioral responses, where IRD patients showed slower and less accurate judgments of AV and V stimuli but more accurate responses in the A-alone condition. CONCLUSIONS Collectively, our findings imply a neural compensation from auditory sensory brain areas due to visual deprivation.
Collapse
Affiliation(s)
- Mark H Myers
- Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, 38163, USA.
| | - Alessandro Iannaccone
- Department of Ophthalmology, Center for Retinal Degenerations and Ophthalmic Genetic Diseases, Duke University School of Medicine, Durham, NC, USA
| | - Gavin M Bidelman
- Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, 38163, USA.,School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA.,Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA
| |
Collapse
|
30
|
Boyle SC, Kayser SJ, Kayser C. Neural correlates of multisensory reliability and perceptual weights emerge at early latencies during audio-visual integration. Eur J Neurosci 2017; 46:2565-2577. [PMID: 28940728 PMCID: PMC5725738 DOI: 10.1111/ejn.13724] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2017] [Revised: 09/11/2017] [Accepted: 09/18/2017] [Indexed: 12/24/2022]
Abstract
To make accurate perceptual estimates, observers must take the reliability of sensory information into account. Despite many behavioural studies showing that subjects weight individual sensory cues in proportion to their reliabilities, it is still unclear when during a trial neuronal responses are modulated by the reliability of sensory information or when they reflect the perceptual weights attributed to each sensory input. We investigated these questions using a combination of psychophysics, EEG‐based neuroimaging and single‐trial decoding. Our results show that the weighted integration of sensory information in the brain is a dynamic process; effects of sensory reliability on task‐relevant EEG components were evident 84 ms after stimulus onset, while neural correlates of perceptual weights emerged 120 ms after stimulus onset. These neural processes had different underlying sources, arising from sensory and parietal regions, respectively. Together these results reveal the temporal dynamics of perceptual and neural audio‐visual integration and support the notion of temporally early and functionally specific multisensory processes in the brain.
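The reliability-weighting principle referred to here is usually formalized as maximum-likelihood cue combination, in which each cue's weight is proportional to its inverse variance. A minimal sketch with invented auditory and visual estimates follows; the numbers are illustrative only.

```python
import numpy as np

def reliability_weights(sigmas):
    # Maximum-likelihood cue combination: each cue is weighted by its
    # reliability (inverse variance), normalised to sum to 1.
    rel = 1.0 / np.asarray(sigmas, float) ** 2
    return rel / rel.sum()

# Hypothetical auditory and visual estimates with different noise levels:
estimates = np.array([11.0, 9.0])   # audio, visual (illustrative units)
sigmas    = np.array([2.0, 1.0])    # the visual cue is more reliable here
w = reliability_weights(sigmas)
fused = np.dot(w, estimates)
fused_sigma = np.sqrt(1.0 / np.sum(1.0 / sigmas ** 2))
print(w, fused, fused_sigma)        # weights [0.2 0.8], fused estimate 9.4, sigma ~0.89
```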
Collapse
Affiliation(s)
- Stephanie C Boyle
- Institute of Neuroscience and Psychology, University of Glasgow, Hillhead Street 58, Glasgow, G12 8QB, UK
| | - Stephanie J Kayser
- Institute of Neuroscience and Psychology, University of Glasgow, Hillhead Street 58, Glasgow, G12 8QB, UK
| | - Christoph Kayser
- Institute of Neuroscience and Psychology, University of Glasgow, Hillhead Street 58, Glasgow, G12 8QB, UK
| |
Collapse
|
31
|
Interoceptive signals impact visual processing: Cardiac modulation of visual body perception. Neuroimage 2017; 158:176-185. [DOI: 10.1016/j.neuroimage.2017.06.064] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2017] [Revised: 06/19/2017] [Accepted: 06/22/2017] [Indexed: 11/19/2022] Open
|
32
|
Asymmetries in behavioral and neural responses to spectral cues demonstrate the generality of auditory looming bias. Proc Natl Acad Sci U S A 2017; 114:9743-9748. [PMID: 28827336 DOI: 10.1073/pnas.1703247114] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Studies of auditory looming bias have shown that sources increasing in intensity are more salient than sources decreasing in intensity. Researchers have argued that listeners are more sensitive to approaching sounds compared with receding sounds, reflecting an evolutionary pressure. However, these studies only manipulated overall sound intensity; therefore, it is unclear whether looming bias is truly a perceptual bias for changes in source distance, or only in sound intensity. Here we demonstrate both behavioral and neural correlates of looming bias without manipulating overall sound intensity. In natural environments, the pinnae induce spectral cues that give rise to a sense of externalization; when spectral cues are unnatural, sounds are perceived as closer to the listener. We manipulated the contrast of individually tailored spectral cues to create sounds of similar intensity but different naturalness. We confirmed that sounds were perceived as approaching when spectral contrast decreased, and perceived as receding when spectral contrast increased. We measured behavior and electroencephalography while listeners judged motion direction. Behavioral responses showed a looming bias in that responses were more consistent for sounds perceived as approaching than for sounds perceived as receding. In a control experiment, looming bias disappeared when spectral contrast changes were discontinuous, suggesting that perceived motion in distance and not distance itself was driving the bias. Neurally, looming bias was reflected in an asymmetry of late event-related potentials associated with motion evaluation. Hence, both our behavioral and neural findings support a generalization of the auditory looming bias, representing a perceptual preference for approaching auditory objects.
Collapse
|
33
|
Being First Matters: Topographical Representational Similarity Analysis of ERP Signals Reveals Separate Networks for Audiovisual Temporal Binding Depending on the Leading Sense. J Neurosci 2017; 37:5274-5287. [PMID: 28450537 PMCID: PMC5456109 DOI: 10.1523/jneurosci.2926-16.2017] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2016] [Revised: 02/20/2017] [Accepted: 02/25/2017] [Indexed: 11/30/2022] Open
Abstract
In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Intersensory timing is crucial in this process because only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window, revealing asymmetries in its size and plasticity depending on the leading input: auditory–visual (AV) or visual–auditory (VA). Here, we tested whether separate neuronal mechanisms underlie this AV–VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV–VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV–VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing the AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between the AV and VA maps at each time point (500 ms window after stimulus) and then correlated with two alternative similarity model matrices: AVmaps = VAmaps versus AVmaps ≠ VAmaps. The tRSA results favored the AVmaps ≠ VAmaps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in auditory and visual sensory systems. SIGNIFICANCE STATEMENT Intersensory timing is a crucial aspect of multisensory integration, determining whether and how inputs in one modality enhance stimulus processing in another modality. Our research demonstrates that evaluating synchrony of auditory-leading (AV) versus visual-leading (VA) audiovisual stimulus pairs is characterized by two distinct patterns of brain activity. This suggests that audiovisual integration is not a unitary process and that different binding mechanisms are recruited in the brain based on the leading sense. These mechanisms may be relevant for supporting different classes of multisensory operations, for example, auditory enhancement of visual attention (AV) and visual enhancement of auditory speech (VA).
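The data-side computation in a time-resolved topographical similarity analysis is a spatial (across-electrode) correlation between two scalp maps at each time point. The sketch below illustrates only that step, on simulated data; the array shapes, sampling rate, and values are assumptions, and the model-matrix comparison used in the study is not reproduced here.

```python
import numpy as np

# Simulated ERP data sets of shape (n_electrodes, n_timepoints), e.g. 500 ms at 500 Hz.
rng = np.random.default_rng(0)
n_elec, n_time = 64, 250
av_maps = rng.standard_normal((n_elec, n_time))
va_maps = rng.standard_normal((n_elec, n_time))

def map_correlation(a, b):
    # Pearson correlation across electrodes between two scalp maps.
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

similarity = np.array([map_correlation(av_maps[:, t], va_maps[:, t])
                       for t in range(n_time)])
# In the study this time course would then be compared against the two model
# matrices (AV maps identical to vs. different from VA maps).
print(similarity.shape, similarity[:5])
```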
Collapse
|
34
|
Abstract
The implementation of computer games in physical therapy is motivated by characteristics such as attractiveness, motivation, and engagement, but these do not guarantee the intended therapeutic effect of the interventions. Yet these characteristics are important variables in physical therapy interventions because they involve reward-related dopaminergic systems in the brain that are known to facilitate learning through long-term potentiation of neural connections. In this perspective we propose a way to apply game design approaches to therapy development by "designing" therapy sessions so that they trigger the physical and cognitive behavioral patterns required for treatment and neurological recovery. We also advocate that improving game knowledge among therapists and improving communication between therapists and game designers may open a novel avenue for designing applied games with specific therapeutic input, thereby making gamification in therapy a realistic and promising prospect that may optimize clinical practice.
Collapse
|
35
|
Neuhoff JG. Looming sounds are perceived as faster than receding sounds. Cogn Res Princ Implic 2016; 1:15. [PMID: 28180166 PMCID: PMC5256440 DOI: 10.1186/s41235-016-0017-4] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/08/2016] [Accepted: 09/23/2016] [Indexed: 11/17/2022]
Abstract
Each year thousands of people are killed by looming motor vehicles. Throughout our evolutionary history looming objects have posed a threat to survival and perceptual systems have evolved unique solutions to confront these environmental challenges. Vision provides an accurate representation of time-to-contact with a looming object and usually allows us to interact successfully with the object if required. However, audition functions as a warning system and yields an anticipatory representation of arrival time, indicating that the object has arrived when it is still some distance away. The bias provides a temporal margin of safety that allows more time to initiate defensive actions. In two studies this bias was shown to influence the perception of the speed of looming and receding sound sources. Listeners heard looming and receding sound sources and judged how fast they were moving. Listeners perceived the speed of looming sounds as faster than that of equivalent receding sounds. Listeners also showed better discrimination of the speed of looming sounds than receding sounds. Finally, close sounds were perceived as faster than distant sounds. The results suggest a prioritization of the perception of the speed of looming and receding sounds that mirrors the level of threat posed by moving objects in the environment.
Collapse
Affiliation(s)
- John G Neuhoff
- Department of Psychology, The College of Wooster, Wooster, OH 44691 USA
| |
Collapse
|
36
|
Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss. Atten Percept Psychophys 2016; 78:373-95. [PMID: 26590050 PMCID: PMC4744263 DOI: 10.3758/s13414-015-1015-1] [Citation(s) in RCA: 96] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/05/2022]
Abstract
Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
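The sound-level cue discussed here follows, in the free field, the inverse-square law: level falls by about 6 dB for every doubling of source distance. A small sketch makes the compression of this cue at far distances explicit; the distances are chosen arbitrarily for illustration.

```python
import math

def level_change_db(ref_distance_m, new_distance_m):
    # Free-field inverse-square law: ~6 dB attenuation per doubling of distance.
    return -20.0 * math.log10(new_distance_m / ref_distance_m)

for d in (1, 2, 4, 8):
    print(f"{d} m: {level_change_db(1, d):6.1f} dB relative to 1 m")
# 0.0, -6.0, -12.0, -18.1 dB: the level cue changes steeply nearby but
# becomes increasingly compressed at far distances.
```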
Collapse
|
37
|
Murray MM, Lewkowicz DJ, Amedi A, Wallace MT. Multisensory Processes: A Balancing Act across the Lifespan. Trends Neurosci 2016; 39:567-579. [PMID: 27282408 PMCID: PMC4967384 DOI: 10.1016/j.tins.2016.05.003] [Citation(s) in RCA: 137] [Impact Index Per Article: 17.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2016] [Revised: 04/13/2016] [Accepted: 05/12/2016] [Indexed: 11/20/2022]
Abstract
Multisensory processes are fundamental in scaffolding perception, cognition, learning, and behavior. How and when stimuli from different sensory modalities are integrated rather than treated as separate entities is poorly understood. We review how the relative reliance on stimulus characteristics versus learned associations dynamically shapes multisensory processes. We illustrate the dynamism in multisensory function across two timescales: one long term that operates across the lifespan and one short term that operates during the learning of new multisensory relations. In addition, we highlight the importance of task contingencies. We conclude that these highly dynamic multisensory processes, based on the relative weighting of stimulus characteristics and learned associations, provide both stability and flexibility to brain functions over a wide range of temporal scales.
Collapse
Affiliation(s)
- Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Clinical Neurosciences and Department of Radiology, University Hospital Centre and University of Lausanne, Lausanne, Switzerland; Electroencephalography Brain Mapping Core, Centre for Biomedical Imaging (CIBM), Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne, Jules Gonin Eye Hospital, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA.
| | - David J Lewkowicz
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA
| | - Amir Amedi
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada (IMRIC), Hadassah Medical School, Hebrew University of Jerusalem, Jerusalem, Israel; Interdisciplinary and Cognitive Science Program, The Edmond & Lily Safra Center for Brain Sciences (ELSC), Hebrew University of Jerusalem, Jerusalem, Israel
| | - Mark T Wallace
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry, Vanderbilt University, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA.
| |
Collapse
|
38
|
van Leeuwen TM, Trautmann-Lengsfeld SA, Wallace MT, Engel AK, Murray MM. Bridging the gap: Synaesthesia and multisensory processes. Neuropsychologia 2016; 88:1-4. [DOI: 10.1016/j.neuropsychologia.2016.06.007] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
39
|
Noel JP, Lukowska M, Wallace M, Serino A. Multisensory simultaneity judgment and proximity to the body. J Vis 2016; 16:21. [PMID: 26891828 PMCID: PMC4777235 DOI: 10.1167/16.3.21] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/03/2022] Open
Abstract
The integration of information across different sensory modalities is known to be dependent upon the statistical characteristics of the stimuli to be combined. For example, the spatial and temporal proximity of stimuli are important determinants with stimuli that are close in space and time being more likely to be bound. These multisensory interactions occur not only for singular points in space/time, but over “windows” of space and time that likely relate to the ecological statistics of real-world stimuli. Relatedly, human psychophysical work has demonstrated that individuals are highly prone to judge multisensory stimuli as co-occurring over a wide range of time—a so-called simultaneity window (SW). Similarly, there exists a spatial representation of peripersonal space (PPS) surrounding the body in which stimuli related to the body and to external events occurring near the body are highly likely to be jointly processed. In the current study, we sought to examine the interaction between these temporal and spatial dimensions of multisensory representation by measuring the SW for audiovisual stimuli through proximal–distal space (i.e., PPS and extrapersonal space). Results demonstrate that the audiovisual SWs within PPS are larger than outside PPS. In addition, we suggest that this effect is likely due to an automatic and additional computation of these multisensory events in a body-centered reference frame. We discuss the current findings in terms of the spatiotemporal constraints of multisensory interactions and the implication of distinct reference frames on this process.
Collapse
|
40
|
Rosenblum LD, Dias JW, Dorsi J. The supramodal brain: implications for auditory perception. J Cogn Psychol 2016. [DOI: 10.1080/20445911.2016.1181691] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
|
41
|
ten Oever S, Romei V, van Atteveldt N, Soto-Faraco S, Murray MM, Matusz PJ. The COGs (context, object, and goals) in multisensory processing. Exp Brain Res 2016; 234:1307-23. [PMID: 26931340 DOI: 10.1007/s00221-016-4590-z] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2015] [Accepted: 01/30/2016] [Indexed: 12/20/2022]
Abstract
Our understanding of how perception operates in real-world environments has been substantially advanced by studying both multisensory processes and "top-down" control processes influencing sensory processing via activity from higher-order brain areas, such as attention, memory, and expectations. As the two topics have been traditionally studied separately, the mechanisms orchestrating real-world multisensory processing remain unclear. Past work has revealed that the observer's goals gate the influence of many multisensory processes on brain and behavioural responses, whereas some other multisensory processes might occur independently of these goals. Consequently, other forms of top-down control beyond goal dependence are necessary to explain the full range of multisensory effects currently reported at the brain and the cognitive level. These forms of control include sensitivity to stimulus context as well as the detection of matches (or lack thereof) between a multisensory stimulus and categorical attributes of naturalistic objects (e.g. tools, animals). In this review we discuss and integrate the existing findings that demonstrate the importance of such goal-, object- and context-based top-down control over multisensory processing. We then put forward a few principles emerging from this literature review with respect to the mechanisms underlying multisensory processing and discuss their possible broader implications.
Collapse
Affiliation(s)
- Sanne ten Oever
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
| | - Vincenzo Romei
- Department of Psychology, Centre for Brain Science, University of Essex, Colchester, UK
| | - Nienke van Atteveldt
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands.,Department of Educational Neuroscience, Faculty of Psychology and Education and Institute LEARN!, VU University Amsterdam, Amsterdam, The Netherlands
| | - Salvador Soto-Faraco
- Multisensory Research Group, Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain.,Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
| | - Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology and Neurorehabilitation Service and Department of Radiology, Centre Hospitalier Universitaire Vaudois (CHUV), University Hospital Center and University of Lausanne, BH7.081, rue du Bugnon 46, 1011, Lausanne, Switzerland.,EEG Brain Mapping Core, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland.,Department of Ophthalmology, Jules-Gonin Eye Hospital, University of Lausanne, Lausanne, Switzerland
| | - Pawel J Matusz
- The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology and Neurorehabilitation Service and Department of Radiology, Centre Hospitalier Universitaire Vaudois (CHUV), University Hospital Center and University of Lausanne, BH7.081, rue du Bugnon 46, 1011, Lausanne, Switzerland. .,Attention, Brain, and Cognitive Development Group, Department of Experimental Psychology, University of Oxford, Oxford, UK.
| |
Collapse
|
42
|
Interactions between space and effectiveness in human multisensory performance. Neuropsychologia 2016; 88:83-91. [PMID: 26826522 DOI: 10.1016/j.neuropsychologia.2016.01.031] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2015] [Revised: 12/30/2015] [Accepted: 01/26/2016] [Indexed: 11/23/2022]
Abstract
Several stimulus factors are important in multisensory integration, including the spatial and temporal relationships of the paired stimuli as well as their effectiveness. Changes in these factors have been shown to dramatically change the nature and magnitude of multisensory interactions. Typically, these factors are considered in isolation, although there is a growing appreciation for the fact that they are likely to be strongly interrelated. Here, we examined interactions between two of these factors - spatial location and effectiveness - in dictating performance in the localization of an audiovisual target. A psychophysical experiment was conducted in which participants reported the perceived location of visual flashes and auditory noise bursts presented alone and in combination. Stimuli were presented at four spatial locations relative to fixation (0°, 30°, 60°, 90°) and at two intensity levels (high, low). Multisensory combinations were always spatially coincident and of the matching intensity (high-high or low-low). In responding to visual stimuli alone, localization accuracy decreased and response times (RTs) increased as stimuli were presented at more eccentric locations. In responding to auditory stimuli, performance was poorest at the 30° and 60° locations. For both visual and auditory stimuli, accuracy was greater and RTs were faster for more intense stimuli. For responses to visual-auditory stimulus combinations, performance enhancements were found at locations in which the unisensory performance was lowest, results concordant with the concept of inverse effectiveness. RTs for these multisensory presentations frequently violated race-model predictions, implying integration of these inputs, and a significant location-by-intensity interaction was observed. Performance gains under multisensory conditions were larger as stimuli were positioned at more peripheral locations, and this increase was most pronounced for the low-intensity conditions. These results provide strong support that the effects of stimulus location and effectiveness on multisensory integration are interdependent, with both contributing to the overall effectiveness of the stimuli in driving the resultant multisensory response.
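The race-model test referred to here is commonly Miller's inequality: if the cumulative audiovisual RT distribution exceeds the sum of the two unisensory distributions at any latency, a parallel race between independent channels cannot explain the speed-up. A minimal sketch on invented reaction times follows; the RT values and time grid are illustrative only.

```python
import numpy as np

def ecdf(rt, t_grid):
    # Empirical cumulative RT distribution P(RT <= t) evaluated on a common time grid.
    rt = np.sort(np.asarray(rt, float))
    return np.searchsorted(rt, t_grid, side="right") / rt.size

# Hypothetical reaction times (ms) for auditory-only, visual-only and audiovisual targets.
rt_a  = [310, 290, 330, 350, 300, 320, 340]
rt_v  = [360, 340, 380, 400, 350, 370, 390]
rt_av = [250, 270, 260, 280, 255, 265, 275]

t = np.arange(200, 501, 10)
bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)    # Miller's race-model bound
violation = ecdf(rt_av, t) - bound
print("race model violated at", t[violation > 0], "ms")   # CDF above the bound implies integration
```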
Collapse
|
43
|
Gau R, Noppeney U. How prior expectations shape multisensory perception. Neuroimage 2016; 124:876-886. [DOI: 10.1016/j.neuroimage.2015.09.045] [Citation(s) in RCA: 65] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2015] [Accepted: 09/20/2015] [Indexed: 11/24/2022] Open
|
44
|
Macaluso E, Noppeney U, Talsma D, Vercillo T, Hartcher-O’Brien J, Adam R. The Curious Incident of Attention in Multisensory Integration: Bottom-up vs. Top-down. Multisens Res 2016. [DOI: 10.1163/22134808-00002528] [Citation(s) in RCA: 50] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
The role attention plays in our experience of a coherent, multisensory world is still controversial. On the one hand, a subset of inputs may be selected for detailed processing and multisensory integration in a top-down manner, i.e., guidance of multisensory integration by attention. On the other hand, stimuli may be integrated in a bottom-up fashion according to low-level properties such as spatial coincidence, thereby capturing attention. Moreover, attention itself is multifaceted and can be described via both top-down and bottom-up mechanisms. Thus, the interaction between attention and multisensory integration is complex and situation-dependent. The authors of this opinion paper are researchers who have contributed to this discussion from behavioural, computational and neurophysiological perspectives. We posed a series of questions, the goal of which was to illustrate the interplay between bottom-up and top-down processes in various multisensory scenarios, in order to clarify the standpoint taken by each author and with the hope of reaching a consensus. Although divergence of viewpoints emerges in the current responses, there is also considerable overlap: in general, it can be concluded that the amount of influence that attention exerts on multisensory integration depends on the current task as well as the prior knowledge and expectations of the observer. Moreover, stimulus properties such as reliability and salience also determine how open the processing is to influences of attention.
Collapse
Affiliation(s)
| | - Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, UK
| | - Durk Talsma
- Department of Experimental Psychology, Ghent University, Henri Dunantlaan 2, B-9000 Ghent, Belgium
| | | | | | - Ruth Adam
- Institute for Stroke and Dementia Research, Klinikum der Universität München, Ludwig-Maximilians-Universität LMU, Munich, Germany
| |
Collapse
|
45
|
Baum S, Colonius H, Thelen A, Micheli C, Wallace M. Above the Mean: Examining Variability in Behavioral and Neural Responses to Multisensory Stimuli. Multisens Res 2016. [DOI: 10.1163/22134808-00002536] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
Even when experimental conditions are kept constant, a robust and consistent finding in both behavioral and neural experiments designed to examine multisensory processing is striking variability. Although this variability has often been considered uninteresting noise (a term that is laden with strong connotations), emerging work suggests that differences in variability may be an important aspect in describing differences in performance between individuals and groups. In the current review, derived from a symposium at the 2015 International Multisensory Research Forum in Pisa, Italy, we focus on several aspects of variability as it relates to multisensory function. This effort seeks to expand our understanding of variability at levels of coding and analysis ranging from the single neuron through large networks and on to behavioral processes, and encompasses a number of the multimodal approaches that are used to evaluate and characterize multisensory processing including single-unit neurophysiology, electroencephalography (EEG), functional magnetic resonance imaging (fMRI), and electrocorticography (ECoG).
Collapse
Affiliation(s)
- Sarah H. Baum
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
| | - Hans Colonius
- Department of Cognitive Psychology, University of Oldenburg, Oldenburg, Germany
| | - Antonia Thelen
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
| | - Cristiano Micheli
- Department of Psychology, Carl-von-Ossietzky University, Oldenburg, Germany
| | - Mark T. Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
| |
Collapse
|
46
|
van der Stoep N, Serino A, Farnè A, Di Luca M, Spence C. Depth: the Forgotten Dimension in Multisensory Research. Multisens Res 2016. [DOI: 10.1163/22134808-00002525] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
The last quarter of a century has seen a dramatic rise of interest in the spatial constraints on multisensory integration. However, until recently, the majority of this research has investigated integration in the space directly in front of the observer. The space around us, however, extends in three spatial dimensions in the front and to the rear beyond such a limited area. The question to be addressed in this review concerns whether multisensory integration operates according to the same rules throughout the whole of three-dimensional space. The results reviewed here not only show that the space around us seems to be divided into distinct functional regions, but they also suggest that multisensory interactions are modulated by the region of space in which stimuli happen to be presented. We highlight a number of key limitations with previous research in this area, including: (1) The focus on only a very narrow region of two-dimensional space in front of the observer; (2) the use of static stimuli in most research; (3) the study of observers who themselves have been mostly static; and (4) the study of isolated observers. All of these factors may change the way in which the senses interact at any given distance, as can the emotional state/personality of the observer. In summarizing these salient issues, we hope to encourage researchers to consider these factors in their own research in order to gain a better understanding of the spatial constraints on multisensory integration as they affect us in our everyday life.
Collapse
Affiliation(s)
- N. van der Stoep
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
| | - A. Serino
- Center for Neuroprosthetics, EPFL, Lausanne, Switzerland
| | - A. Farnè
- ImpAct Team, Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR5292, 69000 Lyon, France
| | - M. Di Luca
- School of Psychology, CNCR, University of Birmingham, Birmingham, United Kingdom
| | - C. Spence
- Department of Experimental Psychology, Oxford University, Oxford, United Kingdom
| |
Collapse
|
47
|
Hidaka S, Teramoto W, Sugita Y. Spatiotemporal Processing in Crossmodal Interactions for Perception of the External World: A Review. Front Integr Neurosci 2015; 9:62. [PMID: 26733827 PMCID: PMC4686600 DOI: 10.3389/fnint.2015.00062] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2015] [Accepted: 12/03/2015] [Indexed: 11/13/2022] Open
Abstract
Research regarding crossmodal interactions has garnered much interest in the last few decades. A variety of studies have demonstrated that multisensory information (vision, audition, tactile sensation, and so on) can perceptually interact in the spatial and temporal domains. Findings regarding crossmodal interactions in the spatiotemporal domain (i.e., motion processing) have also been reported, with updates in the last few years. In this review, we summarize past and recent findings on spatiotemporal processing in crossmodal interactions regarding perception of the external world. A traditional view regarding crossmodal interactions holds that vision is superior to audition in spatial processing, but audition is dominant over vision in temporal processing. Similarly, vision is considered to have dominant effects over the other sensory modalities (i.e., visual capture) in spatiotemporal processing. However, recent findings demonstrate that sound can have a driving effect on visual motion perception. Moreover, studies regarding perceptual associative learning have reported that, after an association is established between a sound sequence without spatial information and visual motion information, the sound sequence can trigger visual motion perception. Other sensory information, such as motor action or smell, has also exhibited similar driving effects on visual motion perception. Additionally, recent brain imaging studies demonstrate that similar activation patterns can be observed in several brain areas, including the motion processing areas, for spatiotemporal information from different sensory modalities. Based on these findings, we suggest that multimodal information interacts mutually in spatiotemporal processing during perception of the external world and that common underlying perceptual and neural mechanisms exist for spatiotemporal processing.
Collapse
Affiliation(s)
- Souta Hidaka
- Department of Psychology, Rikkyo University, Saitama, Japan
| | - Wataru Teramoto
- Department of Psychology, Kumamoto University, Kumamoto, Japan
| | - Yoichi Sugita
- Department of Psychology, Waseda University, Tokyo, Japan
| |
Collapse
|
48
|
Matusz PJ, Thelen A, Amrein S, Geiser E, Anken J, Murray MM. The role of auditory cortices in the retrieval of single-trial auditory-visual object memories. Eur J Neurosci 2015; 41:699-708. [PMID: 25728186 DOI: 10.1111/ejn.12804] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2014] [Revised: 11/13/2014] [Accepted: 11/13/2014] [Indexed: 11/28/2022]
Abstract
Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time.
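The discrimination index d' used here is computed from hit and false-alarm rates as z(H) - z(FA). A minimal sketch follows, using a standard log-linear correction for extreme rates; the trial counts and rates are hypothetical, not the study's data.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, n_signal, n_noise):
    # Sensitivity index d' = z(H) - z(FA), with a log-linear correction so that
    # rates of exactly 0 or 1 do not yield infinite z-scores.
    h = (hit_rate * n_signal + 0.5) / (n_signal + 1)
    f = (fa_rate * n_noise + 0.5) / (n_noise + 1)
    z = NormalDist().inv_cdf
    return z(h) - z(f)

# Hypothetical old/new recognition counts: 80% hits to repeated sounds, 20% false alarms.
print(round(d_prime(0.80, 0.20, 100, 100), 2))   # ~1.66
```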
Collapse
Affiliation(s)
- Pawel J Matusz
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Clinical Neurosciences and Department of Radiology, Vaudois University Hospital Center and University of Lausanne, Lausanne, Switzerland; Attention, Behaviour, and Cognitive Development Group, Department of Experimental Psychology, University of Oxford, Oxford, UK; University of Social Sciences and Humanities, Faculty in Wroclaw, Wroclaw, Poland
| | | | | | | | | | | |
Collapse
|
49
|
The effects of stereo disparity on the behavioural and electrophysiological correlates of perception of audio–visual motion in depth. Neuropsychologia 2015; 78:51-62. [DOI: 10.1016/j.neuropsychologia.2015.09.023] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2015] [Revised: 09/09/2015] [Accepted: 09/15/2015] [Indexed: 11/18/2022]
|
50
|
Baum SH, Stevenson RA, Wallace MT. Behavioral, perceptual, and neural alterations in sensory and multisensory function in autism spectrum disorder. Prog Neurobiol 2015; 134:140-60. [PMID: 26455789 PMCID: PMC4730891 DOI: 10.1016/j.pneurobio.2015.09.007] [Citation(s) in RCA: 231] [Impact Index Per Article: 25.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2015] [Revised: 08/21/2015] [Accepted: 09/05/2015] [Indexed: 01/24/2023]
Abstract
Although sensory processing challenges have been noted since the first clinical descriptions of autism, it has taken until the release of the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) in 2013 for sensory problems to be included as part of the core symptoms of autism spectrum disorder (ASD) in the diagnostic profile. Because sensory information forms the building blocks for higher-order social and cognitive functions, we argue that sensory processing is not only an additional piece of the puzzle, but rather a critical cornerstone for characterizing and understanding ASD. In this review we discuss what is currently known about sensory processing in ASD, how sensory function fits within contemporary models of ASD, and what is understood about the differences in the underlying neural processing of sensory and social communication observed between individuals with and without ASD. In addition to highlighting the sensory features associated with ASD, we also emphasize the importance of multisensory processing in building perceptual and cognitive representations, and how deficits in multisensory integration may also be a core characteristic of ASD.
Collapse
Affiliation(s)
- Sarah H Baum
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
| | - Ryan A Stevenson
- Department of Psychology, University of Toronto, Toronto, ON, Canada
| | - Mark T Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry, Vanderbilt University, Nashville, TN, USA.
| |
Collapse
|