1
Ngetich R, Burleigh TL, Czakó A, Vékony T, Németh D, Demetrovics Z. Working memory performance in disordered gambling and gaming: A systematic review. Compr Psychiatry 2023; 126:152408. [PMID: 37573802] [DOI: 10.1016/j.comppsych.2023.152408]
Abstract
BACKGROUND Converging evidence suggests that gaming and gambling disorders are associated with executive dysfunction, but the involvement of different components of executive functions (EF) in these forms of behavioural addiction is unclear. AIM In a systematic review, we aim to uncover the association between working memory (WM), a crucial component of EF, and disordered gaming and gambling. Note that, in the context of this review, gaming is used synonymously with video gaming. METHODS Adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), we systematically searched for studies published from 2012 onwards. RESULTS The search yielded 6081 records after removing duplicates, from which 17 peer-reviewed journal articles were eligible for inclusion. The associations between WM and problem or disordered gaming and gambling were categorized separately to observe possible differences; essentially, problem gaming or gambling presents lesser severity and clinical significance than the corresponding disorder. The results demonstrate reduced auditory-verbal WM in individuals with gambling disorder. Decreased WM capacity was also associated with problem gambling, with problem-gambling severity correlating with decreased WM capacity. Similarly, gaming disorder was associated with decreased WM: patients with gaming disorder had lower WM capacity than healthy controls. CONCLUSION Working memory appears to be a significant predictor of gambling and gaming disorders. Therefore, holistic treatment approaches that incorporate cognitive techniques to enhance working memory may significantly boost treatment success for gambling and gaming disorders.
Affiliation(s)
- Ronald Ngetich: Centre of Excellence in Responsible Gaming, University of Gibraltar, Gibraltar, Gibraltar
- Tyrone L Burleigh: Centre of Excellence in Responsible Gaming, University of Gibraltar, Gibraltar, Gibraltar
- Andrea Czakó: Centre of Excellence in Responsible Gaming, University of Gibraltar, Gibraltar, Gibraltar; Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Teodóra Vékony: INSERM, Université Claude Bernard Lyon 1, CNRS, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, Bron, France
- Dezso Németh: Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary; INSERM, Université Claude Bernard Lyon 1, CNRS, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, Bron, France; Brain, Memory and Language Research Group, Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
- Zsolt Demetrovics: Centre of Excellence in Responsible Gaming, University of Gibraltar, Gibraltar, Gibraltar; Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
2
Audiovisual Training in Virtual Reality Improves Auditory Spatial Adaptation in Unilateral Hearing Loss Patients. J Clin Med 2023; 12:jcm12062357. [PMID: 36983357] [PMCID: PMC10058351] [DOI: 10.3390/jcm12062357]
Abstract
Unilateral hearing loss (UHL) leads to an alteration of binaural cues, resulting in a significant increase in spatial errors in the horizontal plane. In this study, nineteen patients with UHL were recruited and randomized in a cross-over design into two groups: a first group (n = 9) that received spatial audiovisual training in the first session and non-spatial audiovisual training in the second session (2 to 4 weeks after the first session), and a second group (n = 10) that received the same training in the opposite order (non-spatial, then spatial). A sound localization test using head-pointing (LOCATEST) was completed before and after each training session. The results showed a significant decrease in head-pointing localization errors after spatial training for group 1 (24.85° ± 15.8° vs. 16.17° ± 11.28°; p < 0.001). The number of head movements during spatial training for the 19 participants did not change (p = 0.79); nonetheless, hand-pointing errors and reaction times decreased significantly by the end of spatial training (p < 0.001). This study suggests that audiovisual spatial training can improve and induce spatial adaptation to a monaural deficit through the optimization of effective head movements. Virtual reality systems are relevant tools that can be used in clinics to develop training programs for patients with hearing impairments.
3
Intensive Training of Spatial Hearing Promotes Auditory Abilities of Bilateral Cochlear Implant Adults: A Pilot Study. Ear Hear 2023; 44:61-76. [PMID: 35943235] [DOI: 10.1097/aud.0000000000001256]
Abstract
OBJECTIVE The aim of this study was to evaluate the feasibility of a virtual reality-based spatial hearing training protocol in bilateral cochlear implant (CI) users and to provide pilot data on the impact of this training on different qualities of hearing. DESIGN Twelve bilateral CI adults aged between 19 and 69 followed an intensive 10-week rehabilitation program comprising eight virtual reality training sessions (two per week) interspersed with several evaluation sessions (2 weeks before training started, after four and eight training sessions, and 1 month after the end of training). During each 45-minute training session, participants localized a sound source whose position varied in azimuth and/or in elevation. At the start of each trial, CI users received no information about sound location, but after each response, feedback was given to enable error correction. Participants were divided into two groups: a multisensory feedback group (audiovisual spatial cue) and a unisensory group (visual spatial cue) that received feedback only in a wholly intact sensory modality. Training benefits were measured at each evaluation point using three tests: 3D sound localization in virtual reality, the French Matrix test, and the Speech, Spatial and other Qualities of Hearing questionnaire. RESULTS The training was well accepted and all participants attended the whole rehabilitation program. Four training sessions spread across 2 weeks were insufficient to induce significant performance changes, whereas performance on all three tests improved after eight training sessions. Front-back confusions decreased from 32% to 14.1% (p = 0.017); the speech recognition threshold improved from 1.5 dB to -0.7 dB signal-to-noise ratio (p = 0.029), and eight CI users successfully achieved a negative signal-to-noise ratio. One month after the end of structured training, these performance improvements were still present, and quality of life was significantly improved for both self-reports of sound localization (from 5.3 to 6.7, p = 0.015) and speech understanding (from 5.2 to 5.9, p = 0.048). CONCLUSIONS This pilot study shows the feasibility and potential clinical relevance of this type of intervention involving a sensory immersive environment and could pave the way for more systematic rehabilitation programs after cochlear implantation.
4
Gaveau V, Coudert A, Salemme R, Koun E, Desoche C, Truy E, Farnè A, Pavani F. Benefits of active listening during 3D sound localization. Exp Brain Res 2022; 240:2817-2833. [PMID: 36071210] [PMCID: PMC9587935] [DOI: 10.1007/s00221-022-06456-x]
Abstract
In everyday life, sound localization entails more than just the extraction and processing of auditory cues. When determining sound position in three dimensions, the brain also considers the available visual information (e.g., visual cues to sound position) and resolves perceptual ambiguities through active listening behavior (e.g., spontaneous head movements while listening). Here, we examined to what extent spontaneous head movements improve sound localization in 3D—azimuth, elevation, and depth—by comparing static vs. active listening postures. To this aim, we developed a novel approach to sound localization based on sounds delivered in the environment, brought into alignment thanks to a VR system. Our system proved effective for the delivery of sounds at predetermined and repeatable positions in 3D space, without imposing a physically constrained posture and with minimal training. In addition, it allowed measuring participant behavior (hand, head, and eye position) in real time. We report that active listening improved 3D sound localization, primarily by improving the accuracy and reducing the variability of responses in azimuth and elevation. The more participants made spontaneous head movements, the better their 3D sound localization performance. Thus, we provide proof of concept of a novel approach to the study of spatial hearing, with potential for clinical and industrial applications.
Affiliation(s)
- V Gaveau: Integrative Multisensory Perception Action & Cognition Team (ImpAct), Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, 69500 Bron, France; University of Lyon 1, Lyon, France
- A Coudert: ImpAct Team, Lyon Neuroscience Research Center, Bron, France; University of Lyon 1, Lyon, France; ENT Departments, Hôpital Femme-Mère-Enfant and Edouard Herriot University Hospitals, Lyon, France
- R Salemme: ImpAct Team, Lyon Neuroscience Research Center, Bron, France; University of Lyon 1, Lyon, France; Neuro-immersion, Lyon, France
- E Koun: ImpAct Team, Lyon Neuroscience Research Center, Bron, France; University of Lyon 1, Lyon, France
- C Desoche: ImpAct Team, Lyon Neuroscience Research Center, Bron, France; University of Lyon 1, Lyon, France; Neuro-immersion, Lyon, France
- E Truy: ImpAct Team, Lyon Neuroscience Research Center, Bron, France; University of Lyon 1, Lyon, France; ENT Departments, Hôpital Femme-Mère-Enfant and Edouard Herriot University Hospitals, Lyon, France
- A Farnè: ImpAct Team, Lyon Neuroscience Research Center, Bron, France; University of Lyon 1, Lyon, France; Neuro-immersion, Lyon, France
- F Pavani: ImpAct Team, Lyon Neuroscience Research Center, Bron, France; University of Lyon 1, Lyon, France; Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, Italy
5
Skirzewski M, Molotchnikoff S, Hernandez LF, Maya-Vetencourt JF. Multisensory Integration: Is Medial Prefrontal Cortex Signaling Relevant for the Treatment of Higher-Order Visual Dysfunctions? Front Mol Neurosci 2022; 14:806376. [PMID: 35110996] [PMCID: PMC8801884] [DOI: 10.3389/fnmol.2021.806376]
Abstract
In the mammalian brain, information processing in sensory modalities and global mechanisms of multisensory integration facilitate perception. Emerging experimental evidence suggests that the contribution of multisensory integration to sensory perception is far more complex than previously expected. Here we review how associative areas such as the prefrontal cortex, which receive and integrate inputs from diverse sensory modalities, can affect information processing in unisensory systems via processes of downstream signaling. We focus our attention on the influence of the medial prefrontal cortex on the processing of information in the visual system and whether this phenomenon can be exploited clinically to treat higher-order visual dysfunctions. We propose that non-invasive and multisensory stimulation strategies such as environmental enrichment and/or attention-related tasks could be of clinical relevance in the fight against cerebral visual impairment.
Affiliation(s)
- Miguel Skirzewski: Rodent Cognition Research and Innovation Core, University of Western Ontario, London, ON, Canada
- Stéphane Molotchnikoff: Département de Sciences Biologiques, Université de Montréal, Montreal, QC, Canada; Département de Génie Electrique et Génie Informatique, Université de Sherbrooke, Sherbrooke, QC, Canada
- Luis F. Hernandez: Knoebel Institute for Healthy Aging, University of Denver, Denver, CO, United States
- José Fernando Maya-Vetencourt (corresponding author): Department of Biology, University of Pisa, Pisa, Italy; Centre for Synaptic Neuroscience, Istituto Italiano di Tecnologia (IIT), Genova, Italy
6
Sievers B, Parkinson C, Kohler PJ, Hughes JM, Fogelson SV, Wheatley T. Visual and auditory brain areas share a representational structure that supports emotion perception. Curr Biol 2021; 31:5192-5203.e4. [PMID: 34644547] [DOI: 10.1016/j.cub.2021.09.043]
Abstract
Emotionally expressive music and dance occur together across the world. This may be because features shared across the senses are represented the same way even in different sensory brain areas, putting music and movement in directly comparable terms. These shared representations may arise from a general need to identify environmentally relevant combinations of sensory features, particularly those that communicate emotion. To test the hypothesis that visual and auditory brain areas share a representational structure, we created music and animation stimuli with crossmodally matched features expressing a range of emotions. Participants confirmed that each emotion corresponded to a set of features shared across music and movement. A subset of participants viewed both music and animation during brain scanning, revealing that representations in auditory and visual brain areas were similar to one another. This shared representation captured not only simple stimulus features but also combinations of features associated with emotion judgments. The posterior superior temporal cortex represented both music and movement using this same structure, suggesting supramodal abstraction of sensory content. Further exploratory analysis revealed that early visual cortex used this shared representational structure even when stimuli were presented auditorily. We propose that crossmodally shared representations support mutually reinforcing dynamics across auditory and visual brain areas, facilitating crossmodal comparison. These shared representations may help explain why emotions are so readily perceived and why some dynamic emotional expressions can generalize across cultural contexts.
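The abstract does not name the analysis behind "representations ... were similar to one another," but a common way to quantify shared representational structure is to correlate representational dissimilarity matrices (RDMs) across regions. The sketch below (Python, with made-up stimulus and voxel counts) illustrates only that generic idea, not the authors' actual pipeline:

```python
# Hedged sketch of RDM comparison; the stimulus/voxel counts and random data
# are illustrative, not drawn from the study.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
auditory = rng.standard_normal((38, 200))      # 38 stimuli x 200 voxels (assumed)
visual = rng.standard_normal((38, 150))        # same stimuli, different region

rdm_a = pdist(auditory, metric='correlation')  # condensed RDM per region
rdm_v = pdist(visual, metric='correlation')
rho, p = spearmanr(rdm_a, rdm_v)               # similar structure -> high rho
print(f"RDM similarity: rho={rho:.3f}, p={p:.3f}")
```

A high rank correlation between the two RDMs would mean the two regions order stimulus pairs by similarity in the same way, which is the sense of "shared structure" used in the abstract.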
Affiliation(s)
- Beau Sievers: Department of Psychology, Harvard University, Cambridge, MA 02138, USA; Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
- Carolyn Parkinson: Department of Psychology, University of California, Los Angeles, Los Angeles, CA 90095, USA; Brain Research Institute, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Peter J Kohler: Department of Psychology, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada
- Thalia Wheatley: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA; Santa Fe Institute, Santa Fe, NM 87501, USA
7
Wang J, Liu J, Lai K, Zhang Q, Zheng Y, Wang S, Liang M. Mirror Mechanism Behind Visual-Auditory Interaction: Evidence From Event-Related Potentials in Children With Cochlear Implants. Front Neurosci 2021; 15:692520. [PMID: 34504413] [PMCID: PMC8421565] [DOI: 10.3389/fnins.2021.692520]
Abstract
The mechanism underlying visually induced auditory interaction is still under discussion. Here, we provide evidence that the mirror mechanism underlies visual–auditory interactions. In this study, visual stimuli were divided into two major groups—mirror stimuli that were able to activate mirror neurons and non-mirror stimuli that were not. The two groups were further divided into six subgroups as follows: visual speech-related mirror stimuli, visual speech-irrelevant mirror stimuli, and non-mirror stimuli with four different luminance levels. Participants were 25 children with cochlear implants (CIs) who underwent an event-related potential (ERP) and speech recognition task. The main results were as follows: (1) there were significant differences in P1, N1, and P2 ERPs between mirror stimuli and non-mirror stimuli; (2) these ERP differences between mirror and non-mirror stimuli were partly driven by Brodmann areas 41 and 42 in the superior temporal gyrus; (3) ERP component differences between visual speech-related mirror and non-mirror stimuli were partly driven by Brodmann area 39 (visual speech area), which was not observed when comparing the visual speech-irrelevant stimulus and non-mirror groups; and (4) ERPs evoked by visual speech-related mirror stimuli had more components correlated with speech recognition than ERPs evoked by non-mirror stimuli, while ERPs evoked by speech-irrelevant mirror stimuli were not significantly different from those induced by the non-mirror stimuli. These results indicate the following: (1) mirror and non-mirror stimuli differ in their associated neural activation; (2) the visual–auditory interaction possibly led to ERP differences, as Brodmann areas 41 and 42 constitute the primary auditory cortex; (3) mirror neurons could be responsible for the ERP differences, considering that Brodmann area 39 is associated with processing information about speech-related mirror stimuli; and (4) ERPs evoked by visual speech-related mirror stimuli could better reflect speech recognition ability. These results support the hypothesis that a mirror mechanism underlies visual–auditory interactions.
Affiliation(s)
- Junbo Wang: Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Guangzhou, China
- Jiahao Liu: Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Guangzhou, China
- Kaiyin Lai: South China Normal University, Guangzhou, China
- Qi Zhang: School of Foreign Languages, Shenzhen University, Shenzhen, China
- Yiqing Zheng: Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Guangzhou, China
- Maojin Liang: Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Guangzhou, China
8
Ceuleers D, Dhooge I, Degeest S, Van Steen H, Keppler H, Baudonck N. The Effects of Age, Gender and Test Stimuli on Visual Speech Perception: A Preliminary Study. Folia Phoniatr Logop 2021; 74:131-140. [PMID: 34348290] [DOI: 10.1159/000518205]
Abstract
INTRODUCTION To the best of our knowledge, there is a lack of reliable, validated, and standardized (Dutch) measuring instruments to document visual speech perception in a structured way. This study aimed to: (1) evaluate the effects of age, gender, and the word list used on visual speech perception, examined with a first version of the Dutch Test for (Audio-)Visual Speech Perception at word level (TAUVIS-words), and (2) assess the internal reliability of the TAUVIS-words. METHODS Thirty-nine normal-hearing adults were included, divided into three age categories: (1) younger adults, age 18-39 years; (2) middle-aged adults, age 40-59 years; and (3) older adults, age >60 years. The TAUVIS-words consist of 4 word lists, i.e., 2 monosyllabic word lists (MS 1 and MS 2) and 2 polysyllabic word lists (PS 1 and PS 2). A first exploration of the effects of age, gender, and test stimuli (i.e., the word list used) on visual speech perception was conducted using the TAUVIS-words. A mixed-design analysis of variance (ANOVA) was conducted to analyze the results statistically. Lastly, the internal reliability of the TAUVIS-words was assessed by calculating Cronbach's α. RESULTS The results revealed a significant effect of the word list used. More specifically, the score for MS 1 was significantly better than that for PS 2, and the score for PS 1 was significantly better than that for PS 2. Furthermore, a significant main effect of gender was found: women scored significantly better than men. The effect of age was not significant. The TAUVIS word lists were found to have good internal reliability. CONCLUSION This study was a first exploration of the effects of age, gender, and test stimuli on visual speech perception using the TAUVIS-words. Further research is necessary to optimize and validate the TAUVIS-words, making use of a larger study sample.
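For reference, the internal-reliability statistic reported here (misspelled "Chronbach" in the source) is conventionally computed as

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),$$

where $k$ is the number of items (here, words per list), $\sigma^{2}_{Y_i}$ is the variance of item $i$, and $\sigma^{2}_{X}$ is the variance of the total list score; values around 0.8 or higher are usually read as good internal reliability.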
Affiliation(s)
- Dorien Ceuleers: Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
- Ingeborg Dhooge: Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium; Department of Ear, Nose, and Throat, Ghent University, Ghent, Belgium
- Sofie Degeest: Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
- Hannah Keppler: Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium; Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium
- Nele Baudonck: Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium
9
Alvarez-Alonso MJ, de-la-Peña C, Ortega Z, Scott R. Boys-Specific Text-Comprehension Enhancement With Dual Visual-Auditory Text Presentation Among 12-14 Years-Old Students. Front Psychol 2021; 12:574685. [PMID: 33897513] [PMCID: PMC8062718] [DOI: 10.3389/fpsyg.2021.574685]
Abstract
Quality of language comprehension determines performance in all kinds of activities, including academics. Processing of words initially develops as auditory and gradually extends to the visual domain as children learn to read. School failure is highly related to listening and reading comprehension problems. In this study we analyzed sex differences in comprehension of texts in Spanish (standardized reading test PROLEC-R) in three modalities (visual, auditory, and both simultaneously: dual modality) presented to 12- to 14-year-old students who were native speakers of Spanish. We controlled relevant cognitive variables such as attention (d2), phonological and semantic fluency (FAS), and speed of processing (WISC subtest Coding). Girls' comprehension was similar in the three presentation modalities; however, boys benefited substantially from dual modality compared to boys exposed only to visual or auditory text presentation. With respect to the relation between text comprehension and school performance, students with low grades in Spanish showed low auditory comprehension. Interestingly, the visual and dual modalities preserved comprehension levels in these low-skilled students. Our results suggest that visual-text support during auditory language presentation could benefit students with low school performance, especially boys. They also encourage future research to evaluate classroom implementation of the rapidly developing technology of simultaneous speech transcription, which could additionally benefit non-native students, especially those recently incorporated into school or newly arrived in a country from abroad.
Affiliation(s)
- Maria Jose Alvarez-Alonso: Departamento de Psicología Evolutiva y Psicobiología, Universidad Internacional de la Rioja, Logroño, Spain
- Cristina de-la-Peña: Departamento de Psicología Evolutiva y Psicobiología, Universidad Internacional de la Rioja, Logroño, Spain
- Zaira Ortega: Departamento de Psicología Evolutiva y Psicobiología, Universidad Internacional de la Rioja, Logroño, Spain
- Ricardo Scott: Departamento de Psicología Evolutiva y Psicobiología, Universidad Internacional de la Rioja, Logroño, Spain; Departamento de Psicología Evolutiva y Didáctica, Universidad de Alicante, Alicante, Spain
10
Wu P, Liu J. Learning Causal Temporal Relation and Feature Discrimination for Anomaly Detection. IEEE Trans Image Process 2021; 30:3513-3527. [PMID: 33656993] [DOI: 10.1109/tip.2021.3062192]
Abstract
Weakly supervised anomaly detection is a challenging task since frame-level labels are not given in the training phase. Previous studies generally employ neural networks to learn features, produce frame-level predictions, and then use a multiple instance learning (MIL)-based classification loss to ensure the interclass separability of the learned features; all operations simply take the current time information as input and ignore historical observations. These solutions are widely applicable but overlook two essential factors, i.e., the temporal cue and feature discrimination. The former introduces temporal context to enhance the current-time feature, and the latter enforces that samples of different categories be more separable in the feature space. In this article, we propose a method consisting of four modules that leverage these two ignored factors. The causal temporal relation (CTR) module captures local-range temporal dependencies among features to enhance them. The classifier (CL) projects enhanced features into the category space using causal convolution, further expanding the temporal modeling range. Two additional modules, the compactness (CP) and dispersion (DP) modules, are designed to learn the discriminative power of features: the compactness module ensures the intraclass compactness of normal features, and the dispersion module enhances interclass dispersion. Extensive experiments on three public benchmarks demonstrate the significance of causal temporal relations and feature discrimination for anomaly detection and the superiority of the proposed method.
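The defining property of the causal convolutions named above is that a frame's output depends only on current and past frames. The sketch below, which assumes PyTorch and uses illustrative layer sizes rather than the paper's implementation, shows the usual left-padding trick:

```python
# Hypothetical sketch of a causal 1D convolution of the kind the CTR and CL
# modules rely on; feature dimensions are made up for illustration.
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1D convolution that sees only current and past time steps."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.pad = kernel_size - 1                  # amount of left padding
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size)

    def forward(self, x):                           # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))     # pad the past side only
        return self.conv(x)                         # output has same length

# Toy usage: enhance per-snippet video features with local past context.
feats = torch.randn(2, 512, 32)                     # 2 videos, 512-d feats, 32 snippets
ctr = CausalConv1d(512, 512)
enhanced = ctr(feats)
print(enhanced.shape)                               # torch.Size([2, 512, 32])
```

Left-padding by kernel_size − 1 is what makes the filter causal: the receptive field covers only present and past snippets, matching the abstract's point that future information is not used.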
11
Spatiotemporal Coding in the Macaque Supplementary Eye Fields: Landmark Influence in the Target-to-Gaze Transformation. eNeuro 2021; 8:ENEURO.0446-20.2020. [PMID: 33318073] [PMCID: PMC7877461] [DOI: 10.1523/eneuro.0446-20.2020]
Abstract
Eye-centered (egocentric) and landmark-centered (allocentric) visual signals influence spatial cognition, navigation, and goal-directed action, but the neural mechanisms that integrate these signals for motor control are poorly understood. A likely candidate for egocentric/allocentric integration in the gaze control system is the supplementary eye fields (SEF), a mediofrontal structure with high-level “executive” functions, spatially tuned visual/motor response fields, and reciprocal projections with the frontal eye fields (FEF). To test this hypothesis, we trained two head-unrestrained monkeys (Macaca mulatta) to saccade toward a remembered visual target in the presence of a visual landmark that shifted during the delay, causing gaze end points to shift partially in the same direction. A total of 256 SEF neurons were recorded, including 68 with spatially tuned response fields. Model fits to the latter established that, like the FEF and superior colliculus (SC), spatially tuned SEF responses primarily showed an egocentric (eye-centered) target-to-gaze position transformation. However, the landmark shift influenced this default egocentric transformation: during the delay, motor neurons (with no visual response) showed a transient but unintegrated shift (i.e., not correlated with the target-to-gaze transformation), whereas during the saccade-related burst visuomotor (VM) neurons showed an integrated shift (i.e., correlated with the target-to-gaze transformation). This differed from our simultaneous FEF recordings (Bharmauria et al., 2020), which showed a transient shift in VM neurons, followed by an integrated response in all motor responses. Based on these findings and past literature, we propose that prefrontal cortex incorporates landmark-centered information into a distributed, eye-centered target-to-gaze transformation through a reciprocal prefrontal circuit.
12
Csonka M, Mardmomen N, Webster PJ, Brefczynski-Lewis JA, Frum C, Lewis JW. Meta-Analyses Support a Taxonomic Model for Representations of Different Categories of Audio-Visual Interaction Events in the Human Brain. Cereb Cortex Commun 2021; 2:tgab002. [PMID: 33718874] [PMCID: PMC7941256] [DOI: 10.1093/texcom/tgab002]
Abstract
Our ability to perceive meaningful action events involving objects, people, and other animate agents is characterized in part by an interplay of visual and auditory sensory processing and their cross-modal interactions. However, this multisensory ability can be altered or dysfunctional in some hearing and sighted individuals, and in some clinical populations. The present meta-analysis sought to test current hypotheses regarding neurobiological architectures that may mediate audio-visual multisensory processing. Reported coordinates from 82 neuroimaging studies (137 experiments) that revealed some form of audio-visual interaction in discrete brain regions were compiled, converted to a common coordinate space, and then organized along specific categorical dimensions to generate activation likelihood estimate (ALE) brain maps and various contrasts of those derived maps. The results revealed brain regions (cortical "hubs") preferentially involved in multisensory processing along different stimulus category dimensions, including 1) living versus nonliving audio-visual events, 2) audio-visual events involving vocalizations versus actions by living sources, 3) emotionally valent events, and 4) dynamic-visual versus static-visual audio-visual stimuli. These meta-analysis results are discussed in the context of neurocomputational theories of semantic knowledge representations and perception, and the brain volumes of interest are available for download to facilitate data interpretation for future neuroimaging studies.
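For readers unfamiliar with the ALE statistic used here, the standard formulation (as implemented in common ALE tools; the abstract does not restate it) models each reported focus as a 3-D Gaussian to obtain a modeled activation map $\mathrm{MA}_i$ per experiment, then combines the maps voxel-wise as a union:

$$\mathrm{ALE}(v) \;=\; 1 - \prod_{i=1}^{N}\bigl(1 - \mathrm{MA}_i(v)\bigr),$$

so a voxel's ALE value reflects the probability that at least one of the $N$ experiments truly activated it.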
Affiliation(s)
- Matt Csonka: Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Nadia Mardmomen: Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Paula J Webster: Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Julie A Brefczynski-Lewis: Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Chris Frum: Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- James W Lewis: Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
13
Audiovisual Bimodal and Interactive Effects for Soundscape Design of the Indoor Environments: A Systematic Review. Sustainability 2021. [DOI: 10.3390/su13010339]
Abstract
A growing number of soundscape studies involving audiovisual factors have been conducted; however, their bimodal and interactive effects on indoor soundscape evaluations have not yet been thoroughly reviewed. The overarching goal of this systematic review was to develop a framework for designing sustainable indoor soundscapes by focusing on audiovisual factors and their relations. A search for individual studies was conducted through three databases and search engines: Scopus, Web of Science, and PubMed. Based on a qualitative review of the thirty selected papers, a framework of indoor soundscape evaluation concerning visual and audiovisual indicators was proposed. Overall, greenery was the most important visual variable, followed by water features, in moderating the noise annoyance perceived by occupants in given indoor environments. The presence of visual information and sound-source visibility moderated perceived noise annoyance and influenced other audio-related perceptions. Furthermore, sound sources impacted multiple perceptual responses (audio, visual, cognitive, and emotional) related to the overall soundscape experience when certain visual factors were interactively involved. The proposed framework highlights the potential of audiovisual bimodality and interactivity for designing indoor sound environments more effectively.
14
Maccora S, Bolognini N, Cosentino G, Baschi R, Vallar G, Fierro B, Brighina F. Multisensorial Perception in Chronic Migraine and the Role of Medication Overuse. J Pain 2020; 21:919-929. [PMID: 31904501] [DOI: 10.1016/j.jpain.2019.12.005]
Abstract
Multisensory processing can be assessed by measuring susceptibility to crossmodal illusions such as the Sound-Induced Flash Illusion (SIFI). When a single flash is accompanied by 2 or more beeps, it is perceived as multiple flashes (fission illusion); conversely, a fusion illusion is experienced when multiple flashes are paired with a single beep, leading to the perception of a single flash. Such illusory perceptions are associated with crossmodal changes in visual cortical excitability. Indeed, increasing occipital cortical excitability by means of transcranial electrical currents disrupts the SIFI (i.e., the fission illusion). Similarly, a reduced fission illusion was shown in patients with episodic migraine, especially during the attack, in agreement with the pathophysiological model of cortical hyperexcitability in this disease. Given that episodic migraine patients present with reduced SIFI especially during the attack, we hypothesized that chronic migraine (CM) patients should consistently report fewer illusory effects than healthy controls; drug intake could also affect the SIFI. On this basis, we studied proneness to the SIFI in CM patients (n = 63), including 52 patients with Medication Overuse Headache (MOH), compared to 24 healthy controls. All migraine patients showed fewer fission phenomena than controls (P < .0001). Triptan MOH patients (n = 23) showed significantly fewer fission effects than the other CM groups (P = .008). This exploratory study suggests that CM - both with and without medication overuse - is associated with higher visual cortical responsiveness, which causes deficits in multisensory processing as assessed by the SIFI. PERSPECTIVE: This observational study shows reduced susceptibility to the SIFI in CM, confirming and extending previous results in episodic migraine. MOH contributes to this phenomenon, especially in the case of triptans.
Affiliation(s)
- Simona Maccora: Department of Biomedicine, Neuroscience and Advanced Diagnostics (BIND), University of Palermo, Palermo, Italy
- Nadia Bolognini: Department of Psychology, Milan Center for Neuroscience - NeuroMi, University of Milano-Bicocca, Milano, Italy; Laboratory of Neuropsychology, IRCCS Istituto Auxologico, Milano, Italy
- Giuseppe Cosentino: Department of Brain and Behavioural Sciences, University of Pavia, Pavia, Italy; IRCCS Mondino Foundation, Pavia, Italy
- Roberta Baschi: Department of Biomedicine, Neuroscience and Advanced Diagnostics (BIND), University of Palermo, Palermo, Italy
- Giuseppe Vallar: Department of Psychology, Milan Center for Neuroscience - NeuroMi, University of Milano-Bicocca, Milano, Italy; Laboratory of Neuropsychology, IRCCS Istituto Auxologico, Milano, Italy
- Brigida Fierro: Department of Biomedicine, Neuroscience and Advanced Diagnostics (BIND), University of Palermo, Palermo, Italy
- Filippo Brighina: Department of Biomedicine, Neuroscience and Advanced Diagnostics (BIND), University of Palermo, Palermo, Italy
15
Retter TL, Webster MA, Jiang F. Directional Visual Motion Is Represented in the Auditory and Association Cortices of Early Deaf Individuals. J Cogn Neurosci 2019; 31:1126-1140. [PMID: 30726181] [PMCID: PMC6599583] [DOI: 10.1162/jocn_a_01378]
Abstract
Individuals who are deaf since early life may show enhanced performance at some visual tasks, including discrimination of directional motion. The neural substrates of such behavioral enhancements remain difficult to identify in humans, although neural plasticity has been shown for early deaf people in the auditory and association cortices, including the primary auditory cortex (PAC) and STS region, respectively. Here, we investigated whether neural responses in auditory and association cortices of early deaf individuals are reorganized to be sensitive to directional visual motion. To capture direction-selective responses, we recorded fMRI responses frequency-tagged to the 0.1-Hz presentation of central directional (100% coherent random dot) motion persisting for 2 sec contrasted with nondirectional (0% coherent) motion for 8 sec. We found direction-selective responses in the STS region in both deaf and hearing participants, but the extent of activation in the right STS region was 5.5 times larger for deaf participants. Minimal but significant direction-selective responses were also found in the PAC of deaf participants, both at the group level and in five of six individuals. In response to stimuli presented separately in the right and left visual fields, the relative activation across the right and left hemispheres was similar in both the PAC and STS region of deaf participants. Notably, the enhanced right-hemisphere activation could support the right visual field advantage reported previously in behavioral studies. Taken together, these results show that the reorganized auditory cortices of early deaf individuals are sensitive to directional motion. Speculatively, these results suggest that auditory and association regions can be remapped to support enhanced visual performance.
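Frequency tagging, as used here, isolates the direction-selective response as spectral amplitude at the stimulation rate. The sketch below shows that readout in plain numpy; only the 0.1 Hz tag comes from the abstract, while the TR and run length are assumptions made so the tag falls on an exact frequency bin:

```python
# Hedged sketch of a frequency-tagged readout; TR, run length, and the
# synthetic voxel signal are illustrative, not the study's acquisition.
import numpy as np

tr = 2.0                                   # assumed repetition time (s)
n_vols = 150                               # assumed run length: 300 s
t = np.arange(n_vols) * tr
f_tag = 0.1                                # tagged stimulation frequency (Hz)

rng = np.random.default_rng(0)
voxel = np.sin(2 * np.pi * f_tag * t) + rng.standard_normal(n_vols)

spectrum = np.fft.rfft(voxel - voxel.mean())
freqs = np.fft.rfftfreq(n_vols, d=tr)
k = np.argmin(np.abs(freqs - f_tag))       # bin closest to the 0.1 Hz tag
amp = 2 * np.abs(spectrum[k]) / n_vols     # amplitude at the tagged frequency
print(f"tagged-frequency amplitude: {amp:.2f}")  # ~1.0 for the unit sinusoid
```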
16
Ohla K, Höchenberger R, Freiherr J, Lundström JN. Superadditive and Subadditive Neural Processing of Dynamic Auditory-Visual Objects in the Presence of Congruent Odors. Chem Senses 2019; 43:35-44. [PMID: 29045615] [DOI: 10.1093/chemse/bjx068]
Abstract
Our sensory experiences comprise a variety of different inputs at any given time. Some of these experiences are unmistakable; others are ambiguous and profit from additional sensory information. Here, we explored whether the presence of a congruent odor influences the neural processing and sensory interaction of audio-visual objects, using degraded videos (V) and sounds (A) of dynamic objects in unimodal and bimodal (AV) combinations, with or without a congruent odor (VO, AO, AVO). Analyses of EEG data revealed superadditive and subadditive interaction effects. The topography and timing of these effects suggest evaluative rather than sensory processes as the underlying cause. Together, the results suggest that the mere presence of an odor affects the processing of A, V, and AV objects differently, while multisensory interactions of AV and AVO objects share common neuronal mechanisms, pointing to a robust, modality-independent network for the processing of redundant sensory information.
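The super/subadditive labels follow the standard additive-model criterion for multisensory interaction, which this abstract assumes rather than restates: the bimodal response is compared against the sum of the unimodal ones,

$$R_{AV} > R_A + R_V \quad \text{(superadditive)}, \qquad R_{AV} < R_A + R_V \quad \text{(subadditive)},$$

with analogous contrasts formed for the odor conditions.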
Affiliation(s)
- Kathrin Ohla: German Institute of Human Nutrition Potsdam-Rehbruecke, Germany; Monell Chemical Senses Center, USA
- Jessica Freiherr: Uniklinik RWTH Aachen, Diagnostic and Interventional Neuroradiology, Germany; Fraunhofer-Institut für Verfahrenstechnik und Verpackung IVV, Sensory Analytics, Germany
- Johan N Lundström: Monell Chemical Senses Center, USA; Department of Clinical Neuroscience, Karolinska Institutet, Sweden
17
The Effects of Visual Cues, Blindfolding, Synesthetic Experience, and Musical Training on Pure-Tone Frequency Discrimination. Behav Sci (Basel) 2018; 9:bs9010002. [PMID: 30586857] [PMCID: PMC6358848] [DOI: 10.3390/bs9010002]
Abstract
How perceptual limits can be reduced has long been examined by psychologists. This study investigated whether visual cues, blindfolding, visual-auditory synesthetic experience, and musical training could facilitate a smaller frequency difference limen (FDL) in a gliding frequency discrimination test. Ninety university students with no visual or auditory impairment were recruited for this study, which used one between-subjects factor (blindfolded vs. visual cues) and one within-subjects factor (control vs. experimental session). Their FDLs were tested with an alternative forced-choice task (gliding upwards/gliding downwards/no change), and two questionnaires (Vividness of Mental Imagery Questionnaire and Projector–Associator Test) were used to assess their tendency toward synesthesia. Participants provided with visual cues and those with musical training showed a significantly smaller FDL; on the other hand, being blindfolded or having prior synesthetic experience did not significantly reduce the FDL. However, no pattern was found between the perception of upward and downward frequency glides. Overall, the current study suggests that inter-sensory perception can be enhanced through training and the facilitation of visual–auditory interaction under the multiple resource model. Future studies are recommended to verify the effects of music practice on auditory percepts and the different mechanisms involved in perceiving upward and downward frequency glides.
18
Comparison of 3D printed prostate models with standard radiological information to aid understanding of the precise location of prostate cancer: A construct validation study. PLoS One 2018; 13:e0199477. [PMID: 29940018] [PMCID: PMC6016928] [DOI: 10.1371/journal.pone.0199477]
Abstract
Background To investigate the reliability with which healthcare professionals with different levels of expertise are able to identify the exact location of prostate cancer (PCA) after (A) reading written magnetic resonance imaging (MRI) reports, (B) attending MRI presentations in multidisciplinary team meetings (MDT), and (C) examining 3D printed prostate models, a new technology for conveying the location of PCA lesions. Methods We used three different PCA cases to assess the three information tools. Construct validation was performed using two healthcare groups with different levels of expertise: (1) nine expert urologists in PCA and (2) nine medical students. After each information tool, the study participants plotted the tumor location in a 2-dimensional prostate diagram. A scoring system was established to evaluate the drawings in terms of accuracy of the plotted tumor position. Data are shown as median scores with interquartile range. Results Within the expert group, no significant difference was seen in the overall scoring results between the information tools (p = 0.34). Medical students performed significantly worse with MDT information (p = 0.03). Experts performed better with all three information tools compared to students, resulting in a significantly higher (by 25%) overall total score (25.0 [22.3-26.7] vs. 20.0 [15.0-24.0], p < 0.001). The difference was largest after MDT information, where experts scored 49% higher (p < 0.001), and second largest with the 3D printed models, where experts scored 17% higher (p = 0.07). No difference was found between experts and students in the written MRI report scores. Conclusions 3D printed models provided a better orientation guide to medical students than MDT MRI presentations. This indicates that 3D printed models might be easier to understand than the current gold standard of MDT conferences. Therefore, 3D models may play an increasingly important role in providing orientation guidance for less experienced individuals, such as surgical trainees.
19
Cordani L, Tagliazucchi E, Vetter C, Hassemer C, Roenneberg T, Stehle JH, Kell CA. Endogenous modulation of human visual cortex activity improves perception at twilight. Nat Commun 2018; 9:1274. [PMID: 29636448] [PMCID: PMC5893589] [DOI: 10.1038/s41467-018-03660-8]
Abstract
Perception, particularly in the visual domain, is drastically influenced by rhythmic changes in ambient lighting conditions. Anticipation of daylight changes by the circadian system is critical for survival. However, the neural bases of time-of-day-dependent modulation in human perception are not yet understood. We used fMRI to study brain dynamics during resting-state and close-to-threshold visual perception repeatedly at six times of the day. Here we report that resting-state signal variance drops endogenously at times coinciding with dawn and dusk, notably in sensory cortices only. In parallel, perception-related signal variance in visual cortices decreases and correlates negatively with detection performance, identifying an anticipatory mechanism that compensates for the deteriorated visual signal quality at dawn and dusk. Generally, our findings imply that decreases in spontaneous neural activity improve close-to-threshold perception.
Affiliation(s)
- Lorenzo Cordani: Cognitive Neuroscience Group, Brain Imaging Center, Goethe University, 60528 Frankfurt am Main, Germany; Department of Neurology, Goethe University, 60528 Frankfurt am Main, Germany
- Enzo Tagliazucchi: Cognitive Neuroscience Group, Brain Imaging Center, Goethe University, 60528 Frankfurt am Main, Germany; Brain and Spine Institute, Hôpital Pitié Salpêtrière, 75013 Paris, France; Departamento de Física, Instituto de Física de Buenos Aires-CONICET, 1428 Buenos Aires, Argentina
- Céline Vetter: Department of Integrative Physiology, University of Colorado, Boulder, CO 80310, USA; Institute of Medical Psychology, Ludwig Maximilian University, 80336 Munich, Germany
- Christian Hassemer: Cognitive Neuroscience Group, Brain Imaging Center, Goethe University, 60528 Frankfurt am Main, Germany; Institute of Anatomy III, Goethe University, 60590 Frankfurt am Main, Germany
- Till Roenneberg: Institute of Medical Psychology, Ludwig Maximilian University, 80336 Munich, Germany
- Jörg H Stehle: Institute of Anatomy III, Goethe University, 60590 Frankfurt am Main, Germany
- Christian A Kell: Cognitive Neuroscience Group, Brain Imaging Center, Goethe University, 60528 Frankfurt am Main, Germany; Department of Neurology, Goethe University, 60528 Frankfurt am Main, Germany
20
Sound changes that lead to seeing longer-lasting shapes. Atten Percept Psychophys 2018; 80:986-998. [PMID: 29380283] [DOI: 10.3758/s13414-017-1475-6]
Abstract
To survive, people must construct an accurate representation of the world around them. There is a body of research on visual scene analysis and a largely separate literature on auditory scene analysis. The current study follows up research from the smaller literature on audiovisual scene analysis. Prior work demonstrated that when there is an abrupt size change to a moving object, observers tend to see two objects rather than one: the abrupt visual change enhances visible persistence of the briefly presented different-sized object. Moreover, if a sequence of tones accompanies the moving object, visible persistence is enhanced if the tone frequency suddenly changes at the same time that the object's size changes. Here, we show that although a sound change must occur at roughly the same time as a visual change to enhance visible persistence, there is a fairly wide time frame during which the sound change can occur. In addition, the impact of a sound change on visible persistence is not simply a matter of the physical pattern: the same pattern of sound can enhance visible persistence or not, depending on how the pattern is itself perceived. Specifically, a change in a tone's frequency can enhance visible persistence when it accompanies a visual size change, but the same frequency change will not do so if it is embedded in a larger pattern that makes the change merely a continuation of alternating frequencies. The current study supports a scene analysis process that is both multimodal and actively constructive.
21
Okita M, Yukihiro T, Miyamoto K, Morioka S, Kaba H. Defective imitation of finger configurations in patients with damage in the right or left hemispheres: An integration disorder of visual and somatosensory information? Brain Cogn 2017; 113:109-116. [DOI: 10.1016/j.bandc.2017.01.009]
22
Abdoli S, Ho LC, Zhang JW, Dong CM, Lau C, Wu EX. Diffusion tensor imaging reveals changes in the adult rat brain following long-term and passive moderate acoustic exposure. J Acoust Soc Am 2016; 140:4540. [PMID: 28040046] [DOI: 10.1121/1.4972300]
Abstract
This study investigated neuroanatomical changes following long-term acoustic exposure at moderate sound pressure level (SPL) under passive conditions, without coupled behavioral training. The authors utilized diffusion tensor imaging (DTI) to detect morphological changes in white matter. DTI data from adult rats (n = 8) exposed to continuous acoustic stimulation at moderate SPL for 2 months were compared with data from rats (n = 8) reared under standard acoustic conditions. Two distinct forms of DTI analysis were applied sequentially. First, the images were analyzed using voxel-based statistics, which revealed greater fractional anisotropy (FA) of the pyramidal tract and decreased FA of the tectospinal and trigeminothalamic tracts in the exposed rats. Region-of-interest analysis confirmed (p < 0.05) that FA had increased in the pyramidal tract but did not show a statistically significant difference in the FA of the tectospinal or trigeminothalamic tract. These results show that long-term, passive acoustic exposure at moderate SPL increases the organization of white matter in the pyramidal tract.
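For context, the FA metric compared here is the standard normalized variance of the diffusion tensor's eigenvalues $\lambda_1, \lambda_2, \lambda_3$ (a textbook definition, not specific to this paper):

$$\mathrm{FA} \;=\; \sqrt{\tfrac{3}{2}}\; \frac{\sqrt{(\lambda_1-\bar\lambda)^2 + (\lambda_2-\bar\lambda)^2 + (\lambda_3-\bar\lambda)^2}}{\sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}}, \qquad \bar\lambda = \tfrac{\lambda_1+\lambda_2+\lambda_3}{3},$$

ranging from 0 (isotropic diffusion) to 1 (diffusion along a single axis), so higher FA is read as more organized white matter.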
Affiliation(s)
- Sherwin Abdoli: Keck School of Medicine, University of Southern California, 1975 Zonal Avenue, Los Angeles, California 90033, USA
- Leon C Ho: Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, 21 Sassoon Road, Pokfulam, Hong Kong, China
- Jevin W Zhang: Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, 21 Sassoon Road, Pokfulam, Hong Kong, China
- Celia M Dong: Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, 21 Sassoon Road, Pokfulam, Hong Kong, China
- Condon Lau: Department of Physics and Materials Science, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong, China
- Ed X Wu: Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, 21 Sassoon Road, Pokfulam, Hong Kong, China
23
Kumar GV, Halder T, Jaiswal AK, Mukherjee A, Roy D, Banerjee A. Large Scale Functional Brain Networks Underlying Temporal Integration of Audio-Visual Speech Perception: An EEG Study. Front Psychol 2016; 7:1558. [PMID: 27790169] [PMCID: PMC5062921] [DOI: 10.3389/fpsyg.2016.01558]
Abstract
Observable lip movements of the speaker influence perception of auditory speech. A classical example of this influence is reported by listeners who perceive an illusory (cross-modal) speech sound (McGurk effect) when presented with incongruent audio-visual (AV) speech stimuli. Recent neuroimaging studies of AV speech perception accentuate the role of frontal and parietal regions and of integrative brain sites in the vicinity of the superior temporal sulcus (STS) in multisensory speech perception. However, whether and how networks across the whole brain participate in multisensory perceptual processing remains an open question. We posit that large-scale functional connectivity among neural populations in distributed brain sites may provide valuable insight into the processing and fusion of AV speech. Varying the psychophysical parameters in tandem with electroencephalogram (EEG) recordings, we exploited the trial-by-trial perceptual variability of incongruent AV speech stimuli to identify the characteristics of the large-scale cortical network that facilitates multisensory perception during synchronous and asynchronous AV speech. We evaluated the spectral landscape of EEG signals during multisensory speech perception at varying AV lags. Functional connectivity dynamics for all sensor pairs were computed using the time-frequency global coherence, the vector sum of pairwise coherence changes over time. During synchronous AV speech, we observed enhanced global gamma-band coherence and decreased alpha- and beta-band coherence underlying cross-modal (illusory) perception compared to unisensory perception in a temporal window of 300-600 ms following stimulus onset. During asynchronous speech stimuli, global broadband coherence was observed during cross-modal perception at earlier times, along with pre-stimulus decreases of lower-frequency power, e.g., alpha rhythms for positive AV lags and theta rhythms for negative AV lags. Thus, our study indicates that the temporal integration underlying multisensory speech perception needs to be understood within the framework of large-scale functional brain network mechanisms, in addition to the established cortical loci of multisensory speech perception.
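The connectivity measure named here aggregates spectral coherence over all sensor pairs. A simplified, band-averaged stand-in (a plain mean over pairs rather than the paper's vector-sum, time-resolved definition) can be sketched in Python:

```python
import numpy as np
from itertools import combinations
from scipy.signal import coherence

def global_band_coherence(data, fs, band, nperseg=256):
    """Aggregate pairwise spectral coherence across all sensor pairs.

    data : array of shape (n_sensors, n_samples)
    band : (f_lo, f_hi) frequency band in Hz
    Returns the mean band-limited coherence over all sensor pairs --
    a simple stand-in for time-frequency global coherence.
    """
    n_sensors = data.shape[0]
    vals = []
    for i, j in combinations(range(n_sensors), 2):
        f, cxy = coherence(data[i], data[j], fs=fs, nperseg=nperseg)
        mask = (f >= band[0]) & (f <= band[1])
        vals.append(cxy[mask].mean())
    return np.mean(vals)

# Example: 8 channels of noise, gamma band (30-80 Hz)
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 4096))
print(global_band_coherence(eeg, fs=512, band=(30, 80)))
```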
Collapse
Affiliation(s)
- G Vinodh Kumar
- Cognitive Brain Lab, National Brain Research Centre Gurgaon, India
| | - Tamesh Halder
- Cognitive Brain Lab, National Brain Research Centre Gurgaon, India
| | - Amit K Jaiswal
- Cognitive Brain Lab, National Brain Research Centre Gurgaon, India
| | | | - Dipanjan Roy
- Centre for Behavioural and Cognitive Sciences, University of Allahabad Allahabad, India
| | - Arpan Banerjee
- Cognitive Brain Lab, National Brain Research Centre Gurgaon, India
| |
Collapse
|
24
|
|
25
|
Greenaway R, Pring L, Schepers A, Isaacs DP, Dale NJ. Neuropsychological presentation and adaptive skills in high-functioning adolescents with visual impairment: A preliminary investigation. Appl Neuropsychol Child 2016; 6:145-157. [DOI: 10.1080/21622965.2015.1129608] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
26
|
Leske S, Ruhnau P, Frey J, Lithari C, Müller N, Hartmann T, Weisz N. Prestimulus Network Integration of Auditory Cortex Predisposes Near-Threshold Perception Independently of Local Excitability. Cereb Cortex 2015; 25:4898-907. [PMID: 26408799 PMCID: PMC4635927 DOI: 10.1093/cercor/bhv212] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
Abstract
An ever-increasing number of studies are pointing to the importance of network properties of the brain for understanding behavior such as conscious perception. However, with regard to the influence of prestimulus brain states on perception, this network perspective has rarely been taken. Our recent framework predicts that brain regions crucial for a conscious percept are coupled prior to stimulus arrival, forming pre-established pathways of information flow that influence perceptual awareness. Using magnetoencephalography (MEG) and graph-theoretical measures, we investigated auditory conscious perception in a near-threshold (NT) task and found strong support for this framework. Relevant auditory regions showed increased prestimulus interhemispheric connectivity. The left auditory cortex was characterized by hub-like behavior and an enhanced integration into the brain's functional network prior to perceptual awareness. Right auditory regions were decoupled from non-auditory regions, presumably forming an integrated information-processing unit with the left auditory cortex. In addition, we show for the first time for the auditory modality that local excitability, measured by decreased alpha power in the auditory cortex, increases prior to conscious percepts. Importantly, we were able to show that connectivity states seem to be largely independent of local excitability states in the context of an NT paradigm.
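Hub-like behavior and network integration of this kind are typically quantified from a connectivity matrix. A minimal sketch of two such graph metrics, node strength and degree (the function and threshold below are illustrative, not taken from the study):

```python
import numpy as np

def node_strength_and_degree(conn, threshold=0.3):
    """Simple graph metrics over a functional connectivity matrix.

    conn : symmetric (n, n) matrix of pairwise coherence/correlation.
    Node strength = sum of connection weights; degree = count of
    suprathreshold edges. Hub-like nodes score high on both.
    """
    conn = np.asarray(conn, dtype=float).copy()
    np.fill_diagonal(conn, 0.0)
    strength = conn.sum(axis=1)
    degree = (conn > threshold).sum(axis=1)
    return strength, degree

# Example: toy 5-node network where node 0 acts as a hub
rng = np.random.default_rng(1)
w = rng.uniform(0, 0.2, size=(5, 5))
w = (w + w.T) / 2
w[0, 1:] = w[1:, 0] = 0.6        # node 0 strongly coupled to all others
s, d = node_strength_and_degree(w)
print(s, d)                       # node 0 has the largest strength/degree
```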
Collapse
Affiliation(s)
- Sabine Leske
- Department of Psychology, University of Konstanz, 78457 Konstanz, Germany
| | - Philipp Ruhnau
- Center for Mind/Brain Sciences (CIMeC), University of Trento, 38123 Mattarello (TN), Italy
| | - Julia Frey
- Center for Mind/Brain Sciences (CIMeC), University of Trento, 38123 Mattarello (TN), Italy
| | - Chrysa Lithari
- Center for Mind/Brain Sciences (CIMeC), University of Trento, 38123 Mattarello (TN), Italy
| | - Nadia Müller
- Department of Neurology, Epilepsy Center, University Hospital Erlangen, 91054 Erlangen, Germany
| | - Thomas Hartmann
- Center for Mind/Brain Sciences (CIMeC), University of Trento, 38123 Mattarello (TN), Italy
| | - Nathan Weisz
- Center for Mind/Brain Sciences (CIMeC), University of Trento, 38123 Mattarello (TN), Italy
| |
Collapse
|
27
|
Chen Z, Yuan W. Central plasticity and dysfunction elicited by aural deprivation in the critical period. Front Neural Circuits 2015; 9:26. [PMID: 26082685 PMCID: PMC4451366 DOI: 10.3389/fncir.2015.00026] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2014] [Accepted: 05/13/2015] [Indexed: 12/31/2022] Open
Abstract
The acoustic signal is crucial for animals to obtain information from the surrounding environment. Like other sensory modalities, the central auditory system undergoes adaptive changes (i.e., plasticity) during the developmental stage as well as other stages of life. Owing to this plasticity, auditory centers may be susceptible to various factors, such as medical intervention, variation in ambient acoustic signals, and lesion of the peripheral hearing organ. There are critical periods during which auditory centers are vulnerable to abnormal experiences. Particularly in the early postnatal development period, aural inputs are essential for the functional maturation of auditory centers. An aural deprivation model, which can be achieved by attenuating or blocking the peripheral acoustic afferent input to the auditory center, is ideal for investigating plastic changes of auditory centers. Generally, auditory plasticity includes structural and functional changes, some of which can be irreversible. Aural deprivation can distort tonotopic maps, disrupt binaural integration, reorganize the neural network, and change synaptic transmission in the primary auditory cortex or at lower levels of the auditory system. The regulation of specific gene expression and modified signaling pathways may constitute the underlying molecular mechanism of these plastic changes. By studying this model, researchers may explore the pathogenesis of hearing loss and reveal plastic changes of the auditory cortex, facilitating therapeutic advances for patients with severe hearing loss. After summarizing the developmental features of auditory centers in aurally deprived animals and discussing central auditory remodeling in hearing loss patients, we stress the significance of an early and well-designed auditory training program for hearing rehabilitation.
Collapse
Affiliation(s)
- Zhiji Chen
- Department of Otorhinolaryngology Head and Neck Surgery, Southwest Hospital, Third Military Medical University Chongqing, China
| | - Wei Yuan
- Department of Otorhinolaryngology Head and Neck Surgery, Southwest Hospital, Third Military Medical University Chongqing, China
| |
Collapse
|
28
|
van Wassenhove V, Grzeczkowski L. Visual-induced expectations modulate auditory cortical responses. Front Neurosci 2015; 9:11. [PMID: 25705174 PMCID: PMC4319385 DOI: 10.3389/fnins.2015.00011] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2014] [Accepted: 01/11/2015] [Indexed: 11/13/2022] Open
Abstract
Active sensing has important consequences for multisensory processing (Schroeder et al., 2010). Here, we asked whether, in the absence of saccades, the position of the eyes and the timing of transient color changes of visual stimuli could selectively affect the excitability of auditory cortex by predicting the “where” and the “when” of a sound, respectively. Human participants were recorded with magnetoencephalography (MEG) while maintaining the position of their eyes on the left, right, or center of the screen. Participants counted color changes of the fixation cross while neglecting sounds, which could be presented to the left, right, or both ears. First, clear alpha power increases were observed in auditory cortices, consistent with participants' attention being directed to visual inputs. Second, color changes elicited robust modulations of auditory cortex responses (“when” prediction), seen as ramping activity, early alpha phase-locked responses, and enhanced high-gamma band responses in the side contralateral to sound presentation. Third, no modulations of auditory evoked or oscillatory activity were found to be specific to eye position. Altogether, our results suggest that visual transience can automatically elicit a prediction of “when” a sound will occur by changing the excitability of auditory cortices, irrespective of the attended modality, eye position, or spatial congruency of auditory and visual events. By contrast, auditory cortical responses were not significantly affected by eye position, suggesting that “where” predictions may require active sensing or saccadic reset to modulate auditory cortex responses, notably in the absence of spatial orientation to sounds.
Collapse
Affiliation(s)
- Virginie van Wassenhove
- CEA, DSV/I2BM, NeuroSpin; INSERM, Cognitive Neuroimaging Unit, U992; Université Paris-Sud Gif-sur-Yvette, France
| | - Lukasz Grzeczkowski
- CEA, DSV/I2BM, NeuroSpin; INSERM, Cognitive Neuroimaging Unit, U992; Université Paris-Sud Gif-sur-Yvette, France ; Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne Lausanne, Switzerland
| |
Collapse
|
29
|
Abstract
The auditory cortex is a network of areas in the part of the brain that receives inputs from the subcortical auditory pathways in the brainstem and thalamus. Through an elaborate network of intrinsic and extrinsic connections, the auditory cortex is thought to bring about the conscious perception of sound and provide a basis for the comprehension and production of meaningful utterances. In this chapter, the organization of auditory cortex is described with an emphasis on its anatomic features and the flow of information within the network. These features are then used to introduce key neurophysiologic concepts that are being intensively studied in humans and animal models. The discussion is presented in the context of our working model of the primate auditory cortex and extensions to humans. The material is presented in the context of six underlying principles, which reflect distinct, but related, aspects of anatomic and physiologic organization: (1) the division of auditory cortex into regions; (2) the subdivision of regions into areas; (3) tonotopic organization of areas; (4) thalamocortical connections; (5) serial and parallel organization of connections; and (6) topographic relationships between auditory and auditory-related areas. Although the functional roles of the various components of this network remain poorly defined, a more complete understanding is emerging from ongoing studies that link auditory behavior to its anatomic and physiologic substrates.
Collapse
Affiliation(s)
- Troy A Hackett
- Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine and Department of Psychology, Vanderbilt University, Nashville, TN, USA.
| |
Collapse
|
30
|
Guo X, Li X, Ge X, Tong S. Audiovisual congruency and incongruency effects on auditory intensity discrimination. Neurosci Lett 2015; 584:241-6. [PMID: 25450137 DOI: 10.1016/j.neulet.2014.10.043] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2014] [Revised: 10/20/2014] [Accepted: 10/26/2014] [Indexed: 11/17/2022]
Abstract
This study used an S1-S2 matching paradigm to investigate the influence of visual (size) change on auditory intensity discrimination. Behavioral results showed that subjects made more errors and spent more time discriminating a change in auditory intensity when it was accompanied by an incongruent visual change, whereas performance for congruent audiovisual stimuli was better, especially when the auditory stimulus changed. Event-related potential difference waves revealed that audiovisual interactions for multimodal mismatched information processing activated the right frontal and left centro-parietal cortices around 300-400 ms post S1 onset.
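The behavioral contrast described here reduces to a per-condition error-rate and response-time summary. A hedged sketch over a hypothetical trial log (all column names and values are invented for illustration):

```python
import pandas as pd

# Hypothetical single-subject trial log for an S1-S2 matching task:
# 'congruent' marks whether the visual size change matched the auditory
# intensity change; 'correct' and 'rt' are the response and latency.
trials = pd.DataFrame({
    "congruent": [True, True, False, False, True, False],
    "correct":   [1, 1, 0, 1, 1, 0],
    "rt":        [0.52, 0.49, 0.71, 0.66, 0.50, 0.74],  # seconds
})

summary = trials.groupby("congruent").agg(
    error_rate=("correct", lambda x: 1 - x.mean()),
    mean_rt=("rt", "mean"),
)
print(summary)  # incongruent trials: more errors, slower responses
```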
Collapse
Affiliation(s)
- Xiaoli Guo
- School of Biomedical Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240, China
| | - Xuan Li
- School of Biomedical Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240, China
| | - Xiaoli Ge
- School of Biomedical Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240, China
| | - Shanbao Tong
- School of Biomedical Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240, China; Med-X Research Institute, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240, China.
| |
Collapse
|
31
|
Gerdes ABM, Wieser MJ, Alpers GW. Emotional pictures and sounds: a review of multimodal interactions of emotion cues in multiple domains. Front Psychol 2014; 5:1351. [PMID: 25520679 PMCID: PMC4248815 DOI: 10.3389/fpsyg.2014.01351] [Citation(s) in RCA: 57] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2014] [Accepted: 11/06/2014] [Indexed: 01/28/2023] Open
Abstract
In everyday life, multiple sensory channels jointly trigger emotional experiences and one channel may alter processing in another channel. For example, seeing an emotional facial expression and hearing the voice's emotional tone will jointly create the emotional experience. This example, where auditory and visual input is related to social communication, has gained considerable attention from researchers. However, interactions of visual and auditory emotional information are not limited to social communication but can extend to much broader contexts including human, animal, and environmental cues. In this article, we review current research on audiovisual emotion processing beyond face-voice stimuli to develop a broader perspective on multimodal interactions in emotion processing. We argue that current concepts of multimodality should be extended to consider an ecologically valid variety of stimuli in audiovisual emotion processing. Therefore, we provide an overview of studies in which emotional sounds and interactions with complex pictures of scenes were investigated. In addition to behavioral studies, we focus on neuroimaging, electrophysiological, and peripheral physiological findings. Furthermore, we integrate these findings and identify similarities and differences. We conclude with suggestions for future research.
Collapse
Affiliation(s)
- Antje B M Gerdes
- Clinical and Biological Psychology, Department of Psychology, School of Social Sciences, University of Mannheim Mannheim, Germany
| | | | - Georg W Alpers
- Clinical and Biological Psychology, Department of Psychology, School of Social Sciences, University of Mannheim Mannheim, Germany ; Otto-Selz Institute, University of Mannheim Mannheim, Germany
| |
Collapse
|
32
|
Silent music reading: auditory imagery and visuotonal modality transfer in singers and non-singers. Brain Cogn 2014; 91:35-44. [PMID: 25222292 DOI: 10.1016/j.bandc.2014.08.002] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2014] [Revised: 06/20/2014] [Accepted: 08/10/2014] [Indexed: 11/21/2022]
Abstract
In daily life, responses are often facilitated by anticipatory imagery of expected targets which are announced by associated stimuli from different sensory modalities. Silent music reading represents an intriguing case of visuotonal modality transfer in working memory, as it induces highly defined auditory imagery on the basis of presented visuospatial information (i.e., musical notes). Using functional MRI and a delayed sequence matching-to-sample paradigm, we compared brain activations during retention intervals (10 s) of visual (VV) or tonal (TT) unimodal maintenance versus visuospatial-to-tonal modality transfer (VT) tasks. Visual or tonal sequences comprised six elements, white squares or tones, which were low, middle, or high with regard to vertical screen position or pitch, respectively (presentation duration: 1.5 s). For the cross-modal condition (VT, session 3), the visuospatial elements from condition VV (session 1) were re-defined as low, middle or high "notes" indicating low, middle or high tones from condition TT (session 2), respectively, and subjects had to match tonal sequences (probe) to previously presented note sequences. Tasks alternately had low or high cognitive load. To evaluate possible effects of music reading expertise, 15 singers and 15 non-musicians were included. Scanner task performance was excellent in both groups. Despite the identity of the applied visuospatial stimuli, visuotonal modality transfer versus visual maintenance (VT>VV) induced "inhibition" of visual brain areas and activation of primary and higher auditory brain areas, which exceeded the auditory activation elicited by tonal stimulation (VT>TT). This transfer-related visual-to-auditory activation shift occurred in both groups but was more pronounced in experts. Frontoparietal areas were activated by higher cognitive load but not by modality transfer. The auditory brain showed a potential to anticipate expected auditory target stimuli on the basis of non-auditory information, and sensory brain activation mirrored expectation rather than stimulation. Silent music reading probably relies on these basic neurocognitive mechanisms.
Collapse
|
33
|
Phenomenology of the sound-induced flash illusion. Exp Brain Res 2014; 232:2207-20. [DOI: 10.1007/s00221-014-3912-2] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2013] [Accepted: 03/07/2014] [Indexed: 10/25/2022]
|
34
|
Carlile S, Balachandar K, Kelly H. Accommodating to new ears: the effects of sensory and sensory-motor feedback. J Acoust Soc Am 2014; 135:2002-2011. [PMID: 25234999 DOI: 10.1121/1.4868369] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Changing the shape of the outer ear using small in-ear molds degrades sound localization performance, consistent with the distortion of monaural spectral cues to location. It has been shown recently that adult listeners re-calibrate to these new spectral cues for locations both inside and outside the visual field. This raises the question of what teacher signal drives this remarkable functional plasticity. Furthermore, large individual differences in the extent and rate of accommodation suggest that a number of factors may be driving this process. A training paradigm exploiting multi-modal and sensory-motor feedback during accommodation was examined to determine whether it might accelerate this process. To standardize the modification of the spectral cues, molds filling 40% of the volume of each outer ear were custom made for each subject. Daily training sessions of about an hour, involving repetitive auditory stimuli and exploratory behavior by the subject, significantly improved the extent of accommodation as measured by both front-back confusions and polar angle localization errors, with some improvement in the rate of accommodation demonstrated by front-back confusion errors. This work has implications both for the process by which a coherent representation of auditory space is maintained and for accommodative training for hearing aid wearers.
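The two outcome measures named here, front-back confusions and angular localization error, can be scored directly from target and response directions. A simplified azimuth-only sketch (not the authors' full spherical scoring):

```python
import numpy as np

def localization_errors(target_az, response_az):
    """Score sound-localization trials (azimuths in degrees, 0 = front).

    A front-back confusion is counted when target and response fall in
    opposite front/back hemifields; angular error here is reduced to
    absolute circular azimuth error on non-confused trials.
    """
    t = np.asarray(target_az, float)
    r = np.asarray(response_az, float)
    front_t = np.abs(((t + 180) % 360) - 180) < 90
    front_r = np.abs(((r + 180) % 360) - 180) < 90
    confused = front_t != front_r
    err = np.abs(((r - t + 180) % 360) - 180)   # circular difference
    return confused.mean(), err[~confused].mean()

fb_rate, mean_err = localization_errors([10, 170, -30], [15, 20, -40])
print(fb_rate, mean_err)  # one front-back confusion; 7.5 deg mean error
```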
Collapse
Affiliation(s)
- Simon Carlile
- School of Medical Sciences and The Bosch Institute, University of Sydney, Sydney, New South Wales 2006, Australia
| | - Kapilesh Balachandar
- School of Medical Sciences and The Bosch Institute, University of Sydney, Sydney, New South Wales 2006, Australia
| | - Heather Kelly
- School of Medical Sciences and The Bosch Institute, University of Sydney, Sydney, New South Wales 2006, Australia
| |
Collapse
|
35
|
Besle J, Hussain Z, Giard MH, Bertrand O. The Representation of Audiovisual Regularities in the Human Brain. J Cogn Neurosci 2013. [DOI: 10.1162/jocn_a_00334] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Neural representation of auditory regularities can be probed using the mismatch negativity (MMN), a component of event-related potentials (ERPs) generated in the auditory cortex by any violation of that regularity. Although several studies have shown that visual information can influence or even trigger an MMN by altering an acoustic regularity, it is not known whether audiovisual regularities are encoded in the auditory representation supporting MMN generation. We compared the MMNs elicited by the auditory violation of (a) an auditory regularity (a succession of identical standard sounds), (b) an audiovisual regularity (a succession of identical audiovisual stimuli), and (c) an auditory regularity accompanied by variable visual stimuli. In all three conditions, the physical difference between the standard and the deviant sound was identical. We found that the MMN triggered by the same auditory deviance was larger for audiovisual regularities than for auditory-only regularities or for auditory regularities paired with variable visual stimuli, suggesting that the visual regularity influenced the representation of the auditory regularity. This result provides evidence for the encoding of audiovisual regularities in the human brain.
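The MMN is conventionally computed as the deviant-minus-standard difference wave. A minimal sketch of that computation (the latency window below is a common convention assumed for illustration, not taken from this paper):

```python
import numpy as np

def mmn_amplitude(deviant_erp, standard_erp, times, window=(0.1, 0.25)):
    """MMN as the deviant-minus-standard difference wave.

    deviant_erp, standard_erp : trial-averaged voltage traces (1D, volts)
    times : sample times in seconds
    Returns the mean difference in a typical MMN latency window.
    """
    diff = np.asarray(deviant_erp) - np.asarray(standard_erp)
    mask = (times >= window[0]) & (times <= window[1])
    return diff[mask].mean()

# Example with synthetic traces sampled at 500 Hz
times = np.arange(-0.1, 0.5, 0.002)
standard = np.zeros_like(times)
deviant = -2e-6 * np.exp(-((times - 0.17) ** 2) / (2 * 0.03 ** 2))
print(mmn_amplitude(deviant, standard, times))  # negative => an MMN
```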
Collapse
Affiliation(s)
- Julien Besle
- 1Lyon Neuroscience Research Centre, DYCOG Team (CRNL, INSERM/CNRS), Lyon, France
- 2Université Lyon 1
- 3University of Nottingham
| | | | - Marie-Hélène Giard
- 1Lyon Neuroscience Research Centre, DYCOG Team (CRNL, INSERM/CNRS), Lyon, France
- 2Université Lyon 1
| | - Olivier Bertrand
- 1Lyon Neuroscience Research Centre, DYCOG Team (CRNL, INSERM/CNRS), Lyon, France
- 2Université Lyon 1
| |
Collapse
|
36
|
Keil J, Müller N, Hartmann T, Weisz N. Prestimulus Beta Power and Phase Synchrony Influence the Sound-Induced Flash Illusion. Cereb Cortex 2013; 24:1278-88. [DOI: 10.1093/cercor/bhs409] [Citation(s) in RCA: 70] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
|
37
|
Rauschecker JP. Processing Streams in Auditory Cortex. Neural Correlates of Auditory Cognition 2013. [DOI: 10.1007/978-1-4614-2350-8_2] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
|
38
|
An FMRI study of the neural systems involved in visually cued auditory top-down spatial and temporal attention. PLoS One 2012; 7:e49948. [PMID: 23166800 PMCID: PMC3499497 DOI: 10.1371/journal.pone.0049948] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2011] [Accepted: 10/18/2012] [Indexed: 02/04/2023] Open
Abstract
Top-down attention to spatial and temporal cues has been thoroughly studied in the visual domain. However, because the neural systems that are important for auditory top-down temporal attention (i.e., attention based on time-interval cues) remain undefined, the differences in brain activity between attention directed to auditory spatial locations and attention directed to time intervals are unclear. Using fMRI (functional magnetic resonance imaging), we measured the activations caused by cue-target paradigms by inducing the visual cueing of attention to an auditory target within a spatial or temporal domain. Imaging results showed that the dorsal frontoparietal network (dFPN), which consists of the bilateral intraparietal sulcus and the frontal eye field (FEF), responded to spatial orienting of attention, but activity was absent in the bilateral FEF during temporal orienting of attention. Furthermore, the fMRI results indicated that activity in the right ventrolateral prefrontal cortex (VLPFC) was significantly stronger during spatial orienting of attention than during temporal orienting of attention, while the dorsolateral prefrontal cortex (DLPFC) showed no significant differences between the two processes. We conclude that the bilateral dFPN and the right VLPFC contribute to auditory spatial orienting of attention. Furthermore, specific activations related to temporal cognition were confirmed within the superior occipital gyrus, tegmentum, motor area, thalamus, and putamen.
Collapse
|
39
|
Tse CY, Low KA, Fabiani M, Gratton G. Rules Rule! Brain Activity Dissociates the Representations of Stimulus Contingencies with Varying Levels of Complexity. J Cogn Neurosci 2012; 24:1941-59. [DOI: 10.1162/jocn_a_00229] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The significance of stimuli is linked not only to their nature but also to the sequential structure in which they are embedded, which gives rise to contingency rules. Humans have an extraordinary ability to extract and exploit these rules, as exemplified by the role of grammar and syntax in language. To study the brain representations of contingency rules, we recorded ERPs and the event-related optical signal (EROS, which uses near-infrared light to measure the optical changes associated with neuronal responses). We used sequences of high- and low-frequency tones varying according to three contingency rules, which were orthogonally manipulated and differed in processing requirements: a Single Repetition rule required only template matching, a Local Probability rule required relating a stimulus to its context, and a Global Probability rule could be derived through template matching or with reference to the global sequence context. ERP activity at 200-300 msec was related to the Single Repetition and Global Probability rules (reflecting access to representations based on template matching), whereas longer-latency activity (300-450 msec) was related to the Local Probability and Global Probability rules (reflecting access to representations incorporating contextual information). EROS responses with corresponding latencies indicated that the earlier activity involved the superior temporal gyrus, whereas later responses involved a fronto-parietal network. This suggests that the brain can simultaneously hold different models of stimulus contingencies at different levels of the information processing system according to their processing requirements, as indicated by the latency and location of the corresponding brain activity.
Collapse
Affiliation(s)
- Chun-Yu Tse
- 1University of Illinois at Urbana-Champaign
- 2National University of Singapore
| | | | | | | |
Collapse
|
40
|
Schiller PH, Kwak MC, Slocum WM. Visual and auditory cue integration for the generation of saccadic eye movements in monkeys and lever pressing in humans. Eur J Neurosci 2012; 36:2500-4. [PMID: 22621264 DOI: 10.1111/j.1460-9568.2012.08133.x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
This study examined how effectively visual and auditory cues can be integrated in the brain for the generation of motor responses. The latencies with which saccadic eye movements are produced in humans and monkeys form, under certain conditions, a bimodal distribution, the first mode of which has been termed express saccades. In humans, a much higher percentage of express saccades is generated when both visual and auditory cues are provided compared with the single presentation of these cues [H. C. Hughes et al. (1994) J. Exp. Psychol. Hum. Percept. Perform., 20, 131-153]. In this study, we addressed two questions: first, do monkeys also integrate visual and auditory cues for express saccade generation as do humans and second, does such integration take place in humans when, instead of eye movements, the task is to press levers with fingers? Our results show that (i) in monkeys, as in humans, the combined visual and auditory cues generate a much higher percentage of express saccades than do singly presented cues and (ii) the latencies with which levers are pressed by humans are shorter when both visual and auditory cues are provided compared with the presentation of single cues, but the distribution in all cases is unimodal; response latencies in the express range seen in the execution of saccadic eye movements are not obtained with lever pressing.
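Express saccades are identified by their latency falling in the early mode of the bimodal distribution. A minimal sketch of scoring their proportion (the 70-120 ms window is a commonly used convention assumed for illustration, not taken from this study):

```python
import numpy as np

def express_saccade_rate(latencies_ms, window=(70, 120)):
    """Fraction of saccades whose latency falls in the express range."""
    lat = np.asarray(latencies_ms, float)
    in_window = (lat >= window[0]) & (lat <= window[1])
    return in_window.mean()

# Bimodal toy data: an express mode near 95 ms, a regular mode near 180 ms
rng = np.random.default_rng(2)
lat = np.concatenate([rng.normal(95, 8, 40), rng.normal(180, 20, 60)])
print(express_saccade_rate(lat))  # ~0.4 for this combined-cue-like mixture
```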
Collapse
Affiliation(s)
- Peter H Schiller
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA, USA.
| | | | | |
Collapse
|
41
|
Li X, Ge X, Sun J, Tong S. Locating the sources for cross-modal interactions and decision making during judging the visual-affected auditory intensity change. Annu Int Conf IEEE Eng Med Biol Soc 2011; 2011:3067-70. [PMID: 22254987 DOI: 10.1109/iembs.2011.6090838] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Audiovisual interaction has been one of the most important topics in cognitive neuroscience. Visual stimuli can significantly impact auditory perception, and vice versa. Nevertheless, how much a change in visual stimuli influences the perception of auditory change remains to be investigated. In this paper, we designed an audiovisual experiment in which subjects were required to judge whether there was a change in the intensities of two sounds separated by a 150 ms interval, while two size-changing visual stimuli were presented simultaneously. Behavioral results demonstrated that incongruent audiovisual change could result in an illusory perception of a change in sound intensity. For the correctly judged trials, source analysis showed two characteristic windows after the first auditory stimulus: (i) a 160-200 ms window including the auditory P200 and visual N100 waves, related to audiovisual interaction and working memory of the first stimulus, with sources localized in the insula and agranular retrolimbic area; and (ii) a 300-400 ms window for the P300, with sources in the premotor cortex and caudate nucleus, related to later audiovisual interaction, change discrimination, and working memory. These preliminary results imply two stages in the audiovisual change perception task, involving the insula, agranular retrolimbic area, premotor cortex, and caudate nucleus.
Collapse
Affiliation(s)
- Xuan Li
- Med-X Research Institute, Shanghai Jiao Tong University, Shanghai 200030, China
| | | | | | | |
Collapse
|
42
|
Visuotactile interactions in the congenitally acallosal brain: Evidence for early cerebral plasticity. Neuropsychologia 2011; 49:3908-16. [DOI: 10.1016/j.neuropsychologia.2011.10.008] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2011] [Revised: 09/21/2011] [Accepted: 10/07/2011] [Indexed: 11/20/2022]
|
43
|
Bulkin DA, Groh JM. Distribution of eye position information in the monkey inferior colliculus. J Neurophysiol 2011; 107:785-95. [PMID: 22031775 DOI: 10.1152/jn.00662.2011] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The inferior colliculus (IC) is thought to have two main subdivisions, a central region that forms an important stop on the ascending auditory pathway and a surrounding shell region that may play a more modulatory role. In this study, we investigated whether eye position affects activity in both the central and shell regions. Accordingly, we mapped the location of eye position-sensitive neurons in six monkeys making spontaneous eye movements by sampling multiunit activity at regularly spaced intervals throughout the IC. We used a functional map based on auditory response patterns to estimate the anatomical location of recordings, in conjunction with structural MRI and histology. We found eye position-sensitive sites throughout the IC, including at 27% of sites in tonotopically organized recording penetrations (putatively the central nucleus). Recordings from surrounding tissue showed a larger proportion of sites indicating an influence of eye position (33-43%). When present, the magnitude of the change in activity due to eye position was often comparable to that seen for sound frequency. Our results indicate that the primary ascending auditory pathway is influenced by the position of the eyes. Because eye position is essential for visual-auditory integration, our findings suggest that computations underlying visual-auditory integration begin early in the ascending auditory pathway.
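Classifying a recording site as eye-position sensitive amounts to testing whether its activity covaries with gaze. A crude least-squares sketch of that idea (the linear model and example values are illustrative, not the paper's actual statistics):

```python
import numpy as np

def eye_position_sensitivity(firing_rate, eye_pos_deg):
    """Test whether a site's activity varies with horizontal eye position.

    Fits rate = a * eye_position + b by least squares and returns the
    slope plus the correlation coefficient; a crude stand-in for the
    tests used to classify eye-position-sensitive sites.
    """
    x = np.asarray(eye_pos_deg, float)
    y = np.asarray(firing_rate, float)
    slope, intercept = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]
    return slope, r

# Example: rate rises ~0.5 spikes/s per degree of rightward fixation
rng = np.random.default_rng(3)
eye = rng.uniform(-20, 20, 200)
rate = 30 + 0.5 * eye + rng.normal(0, 2, 200)
print(eye_position_sensitivity(rate, eye))
```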
Collapse
Affiliation(s)
- David A Bulkin
- Department of Psychology, Cornell University, Ithaca, New York, USA.
| | | |
Collapse
|
44
|
Jacome DE. Sound induced photisms in pontine and extrapontine myelinolysis. Clin Neurol Neurosurg 2011; 113:503-5. [DOI: 10.1016/j.clineuro.2011.01.014] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2009] [Revised: 01/26/2011] [Accepted: 01/29/2011] [Indexed: 11/16/2022]
|
45
|
Łukaszewicz Z, Soluch P, Niemczyk K, Lachowska M. [Correlation of auditory-verbal skills in patients with cochlear implants and their evaluation in positron emission tomography (PET)]. Otolaryngol Pol 2010; 64:10-6. [PMID: 21171304 DOI: 10.1016/s0030-6657(10)70002-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
INTRODUCTION It was assumed that in the central nervous system (CNS) of patients above 15 years of age, mechanisms of neuronal change remain possible. Such changes would allow the reconstruction or formation of the natural activation pattern of the brain structures responsible for auditory speech processing. AIM The aim of the study was to observe whether there are any dynamic functional changes in the central nervous system and how they correlate with the auditory-verbal skills of the patients. MATERIAL AND METHODS Nine right-handed patients between 15 and 36 years of age were examined, 6 females and 3 males. All of them were treated with cochlear implantation for profound sensorineural hearing loss and remain in regular follow-up in the Department of Otolaryngology at the Medical University of Warsaw. In the present study, the patients were examined within 24 hours after the first fitting of the speech processor of the cochlear implant, and again 1 and 2 years later. The examinations consisted of positron emission tomography (PET) of the brain and audiological tests including speech assessment. Four patients were postlingually deaf, and 5 were prelingually deaf. RESULTS Postlingually deaf patients achieved great improvement in hearing and speech understanding. In their first PET examination, very intensive activation of visual cortex V1 and V2 (BA17 and BA18) was observed, with no significant activation in the dominant (left) hemisphere of the brain. In PET examinations performed 1 and 2 years after cochlear implantation, activation of V1 and V2 was no longer observed; instead, particular regions of the left hemisphere became active. In prelingually deaf patients, no significant changes in the central nervous system were noticeable in either PET or speech assessment, although their hearing abilities improved. CONCLUSIONS A positive correlation was observed between the level of speech understanding, linguistic skills, and the activation of appropriate areas of the left hemisphere of the brain in postlingually deaf patients treated with cochlear implants. No such correlation was noted in prelingually deaf patients treated with the same method.
Collapse
|
46
|
Multisensory integration: resolving sensory ambiguities to build novel representations. Curr Opin Neurobiol 2010; 20:353-60. [PMID: 20471245 DOI: 10.1016/j.conb.2010.04.009] [Citation(s) in RCA: 67] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2010] [Revised: 04/10/2010] [Accepted: 04/14/2010] [Indexed: 11/19/2022]
Abstract
Multisensory integration plays several important roles in the nervous system. One is to combine information from multiple complementary cues to improve stimulus detection and discrimination. Another is to resolve peripheral sensory ambiguities and create novel internal representations that do not exist at the level of individual sensors. Here we focus on how ambiguities inherent in vestibular, proprioceptive and visual signals are resolved to create behaviorally useful internal estimates of our self-motion. We review recent studies that have shed new light on the nature of these estimates and how multiple, but individually ambiguous, sensory signals are processed and combined to compute them. We emphasize the need to combine experiments with theoretical insights to understand the transformations that are being performed.
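Combining complementary cues to improve detection and discrimination is classically formalized as reliability-weighted (maximum-likelihood) averaging. A short numeric sketch of that standard normative model, included as background rather than as a result of this review:

```python
import numpy as np

def fuse_cues(estimates, variances):
    """Reliability-weighted (maximum-likelihood) cue combination.

    Each cue's estimate is weighted by its inverse variance; the fused
    variance is never larger than that of the best single cue.
    """
    est = np.asarray(estimates, float)
    w = 1.0 / np.asarray(variances, float)
    fused = (w * est).sum() / w.sum()
    fused_var = 1.0 / w.sum()
    return fused, fused_var

# Visual heading estimate (10 deg, var 4) + vestibular (16 deg, var 8)
print(fuse_cues([10, 16], [4, 8]))  # (12.0, ~2.67) -- reduced variance
```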
Collapse
|
47
|
Hirvenkari L, Jousmäki V, Lamminmäki S, Saarinen VM, Sams ME, Hari R. Gaze-Direction-Based MEG Averaging During Audiovisual Speech Perception. Front Hum Neurosci 2010; 4:17. [PMID: 20300464 PMCID: PMC2839848 DOI: 10.3389/fnhum.2010.00017] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2009] [Accepted: 02/10/2010] [Indexed: 11/13/2022] Open
Abstract
To take a step towards real-life-like experimental setups, we simultaneously recorded magnetoencephalographic (MEG) signals and subjects' gaze direction during audiovisual speech perception. The stimuli were utterances of /apa/ dubbed onto two side-by-side female faces articulating /apa/ (congruent) and /aka/ (incongruent) in synchrony, repeated once every 3 s. Subjects (N = 10) were free to decide which face they viewed, and responses were averaged into two categories according to gaze direction. The right-hemisphere 100-ms response to the onset of the second vowel (N100m') was a fifth smaller for incongruent than for congruent stimuli. The results demonstrate the feasibility of realistic viewing conditions with gaze-based averaging of MEG signals.
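Gaze-based averaging reduces to sorting epochs by which face was fixated before averaging. A minimal sketch under assumed data shapes (single channel, one gaze coordinate per trial; all names invented for illustration):

```python
import numpy as np

def gaze_based_average(epochs, gaze_x, boundary=0.0):
    """Average trial epochs by gaze-direction category.

    epochs : (n_trials, n_samples) single-channel trial data
    gaze_x : horizontal gaze coordinate per trial (left < boundary <= right)
    Returns (left_average, right_average).
    """
    epochs = np.asarray(epochs, float)
    gaze_x = np.asarray(gaze_x, float)
    left = epochs[gaze_x < boundary].mean(axis=0)
    right = epochs[gaze_x >= boundary].mean(axis=0)
    return left, right

rng = np.random.default_rng(4)
data = rng.standard_normal((100, 300))   # 100 trials, 300 samples each
gaze = rng.uniform(-1, 1, 100)           # fixated face per trial
l_avg, r_avg = gaze_based_average(data, gaze)
print(l_avg.shape, r_avg.shape)
```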
Collapse
Affiliation(s)
- Lotta Hirvenkari
- Brain Research Unit, Low Temperature Laboratory, Aalto University School of Science and Technology Espoo, Finland
| | | | | | | | | | | |
Collapse
|
48
|
Monaci G, Vandergheynst P, Sommer FT. Learning bimodal structure in audio-visual data. IEEE Trans Neural Netw 2009; 20:1898-910. [PMID: 19963447 DOI: 10.1109/tnn.2009.2032182] [Citation(s) in RCA: 27] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
A novel model is presented to learn bimodally informative structures from audio-visual signals. The signal is represented as a sparse sum of audio-visual kernels. Each kernel is a bimodal function consisting of synchronous snippets of an audio waveform and a spatio-temporal visual basis function. To represent an audio-visual signal, the kernels can be positioned independently and arbitrarily in space and time. The proposed algorithm uses unsupervised learning to form dictionaries of bimodal kernels from audio-visual material. The basis functions that emerge during learning capture salient audio-visual data structures. In addition, it is demonstrated that the learned dictionary can be used to locate sources of sound in the movie frame. Specifically, in sequences containing two speakers, the algorithm can robustly localize a speaker even in the presence of severe acoustic and visual distracters.
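Representing a signal as a sparse sum of kernels is typically done by greedy decomposition. A minimal 1-D, fixed-position matching-pursuit sketch of the idea (the paper's kernels are audio-visual and shiftable in space and time, which this toy version does not attempt):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=5):
    """Greedy sparse decomposition: signal ~ sum of scaled dictionary atoms.

    dictionary : (n_kernels, n_samples) unit-norm kernels.
    Returns the list of (atom index, coefficient) pairs and the residual.
    """
    residual = np.asarray(signal, float).copy()
    code = []
    for _ in range(n_atoms):
        scores = dictionary @ residual          # correlation with each atom
        k = int(np.argmax(np.abs(scores)))
        coeff = scores[k]
        residual -= coeff * dictionary[k]
        code.append((k, coeff))
    return code, residual

# Toy dictionary of unit-norm sinusoids; recover a 2-atom mixture
n = 256
t = np.arange(n)
atoms = np.stack([np.sin(2 * np.pi * f * t / n) for f in (3, 7, 11)])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)
sig = 2.0 * atoms[0] + 0.5 * atoms[2]
code, res = matching_pursuit(sig, atoms, n_atoms=2)
print(code, np.linalg.norm(res))  # picks atoms 0 and 2; residual ~ 0
```

The learned dictionaries in the paper generalize this greedy scheme to bimodal kernels whose audio and visual parts are placed independently in space and time.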
Collapse
Affiliation(s)
- Gianluca Monaci
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA 94720-3190 USA.
| | | | | |
Collapse
|
49
|
Budinger E, Scheich H. Anatomical connections suitable for the direct processing of neuronal information of different modalities via the rodent primary auditory cortex. Hear Res 2009; 258:16-27. [DOI: 10.1016/j.heares.2009.04.021] [Citation(s) in RCA: 95] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/30/2009] [Revised: 04/30/2009] [Accepted: 04/30/2009] [Indexed: 10/20/2022]
|
50
|
Phetsom J, Khammuang S, Suwannawon P, Sarnthima R. Copper-Alginate Encapsulation of Crude Laccase from Lentinus polychrous Lev. and Their Effectiveness in Synthetic Dyes Decolorizations. J Biol Sci 2009. [DOI: 10.3923/jbs.2009.573.583] [Citation(s) in RCA: 29] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
|