1
Kvasova D, Coll L, Stewart T, Soto-Faraco S. Crossmodal semantic congruence guides spontaneous orienting in real-life scenes. Psychol Res 2024; 88:2138-2148. [PMID: 39105825] [DOI: 10.1007/s00426-024-02018-8]
Abstract
In real-world scenes, objects and events are often interconnected within a rich web of semantic relationships. These semantic links help parse information efficiently and make sense of the sensory environment. It has been shown that, during goal-directed search, hearing the characteristic sound of an everyday object helps find the affiliated object both in artificial visual search arrays and in naturalistic, real-life video clips. However, whether crossmodal semantic congruence also triggers orienting during spontaneous, non-goal-directed observation is unknown. Here, we investigated whether crossmodal semantic congruence can attract spontaneous, overt visual attention when viewing naturalistic, dynamic scenes. We used eye-tracking whilst participants (N = 45) watched video clips presented alongside sounds of varying semantic relatedness to objects present within the scene. We found that characteristic sounds increased the probability of looking at, the number of fixations to, and the total dwell time on semantically corresponding visual objects, compared to when the same scenes were presented with semantically neutral sounds or with background noise only. Interestingly, hearing object sounds that had no matching object in the scene led to increased visual exploration. These results suggest that crossmodal semantic information affects spontaneous gaze on realistic scenes, and therefore how information is sampled. Our findings extend beyond known effects of object-based crossmodal interactions with simple stimulus arrays and shed new light on the role that audio-visual semantic relationships play in the perception of everyday-life scenarios.
Affiliation(s)
- Daria Kvasova
- Center for Brain and Cognition, Department of Communication and Information Technologies, Universitat Pompeu Fabra, Carrer de Ramón Trias i Fargas 25-27, Barcelona, 08005, Spain
- Llucia Coll
- Multiple Sclerosis Centre of Catalonia (Cemcat), Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Travis Stewart
- Center for Brain and Cognition, Department of Communication and Information Technologies, Universitat Pompeu Fabra, Carrer de Ramón Trias i Fargas 25-27, Barcelona, 08005, Spain
- Salvador Soto-Faraco
- Center for Brain and Cognition, Department of Communication and Information Technologies, Universitat Pompeu Fabra, Carrer de Ramón Trias i Fargas 25-27, Barcelona, 08005, Spain
- Institució Catalana de Recerca i Estudis Avançats (ICREA), Passeig de Lluís Companys, 23, Barcelona, 08010, Spain
2
Levels of neuroticism can predict attentional performance during cross-modal nonspatial repetition inhibition. Atten Percept Psychophys 2022; 84:2552-2561. [PMID: 36253587] [DOI: 10.3758/s13414-022-02583-3]
Abstract
Inhibition of return (IOR) refers to the slower response to targets presented at previously attended locations, and such repetition-induced inhibition has been found to be differentially associated with personality traits. Although it has been well documented how personality traits affect spatial IOR, a mechanism associated with the attentional orienting network, there is not yet a consensus as to the relationship between personality traits and nonspatial repetition inhibition, a mechanism associated with the attentional executive network. The present study herein examined how the Big Five personality traits relate to cross-modal nonspatial repetition inhibition. Participants completed the NEO-PI-R and performed a cross-modal nonspatial repetition inhibition task built on the prime-neutral cue-target paradigm, in which the relationships of the identities and modalities between the prime and the target were manipulated. The results showed a significant nonspatial inhibitory effect and the effect was larger under the visual-auditory condition than under the auditory-visual condition. More importantly, neuroticism was associated with decreased cross-modal nonspatial inhibitory effect, presumably due to impaired attentional control. However, such a result was only found in the visual-auditory condition. We propose that retrieving previous prime representations under the visual-auditory condition requires a large consumption of cognitive resources, making inhibitory control more difficult for individuals with high neuroticism. These findings provide new insight into the influence of personality traits on attentional performance requiring nonspatial inhibitory control and enrich the relationship between neuroticism and repetition-induced inhibition.
3
Fan Y, Fang K, Sun R, Shen D, Yang J, Tang Y, Fang G. Hierarchical auditory perception for species discrimination and individual recognition in the music frog. Curr Zool 2021; 68:581-591. [DOI: 10.1093/cz/zoab085]
Abstract
The ability to discriminate species and recognize individuals is crucial for reproductive success and/or survival in most animals. However, the temporal order and neural localization of these decision-making processes has remained unclear. In this study, event-related potentials (ERPs) were measured in the telencephalon, diencephalon, and mesencephalon of the music frog Nidirana daunchina. These ERPs were elicited by calls from 1 group of heterospecifics (recorded from a sympatric anuran species) and 2 groups of conspecifics that differed in their fundamental frequencies. In terms of the polarity and position within the ERP waveform, auditory ERPs generally consist of 4 main components that link to selective attention (N1), stimulus evaluation (P2), identification (N2), and classification (P3). These occur around 100, 200, 250, and 300 ms after stimulus onset, respectively. Our results show that the N1 amplitudes differed significantly between the heterospecific and conspecific calls, but not between the 2 groups of conspecific calls that differed in fundamental frequency. On the other hand, the N2 amplitudes were significantly different between the 2 groups of conspecific calls, suggesting that the music frogs discriminated the species first, followed by individual identification, since N1 and N2 relate to selective attention and stimuli identification, respectively. Moreover, the P2 amplitudes evoked in females were significantly greater than those in males, indicating the existence of sexual dimorphism in auditory discrimination. In addition, both the N1 amplitudes in the left diencephalon and the P2 amplitudes in the left telencephalon were greater than in other brain areas, suggesting left hemispheric dominance in auditory perception. Taken together, our results support the hypothesis that species discrimination and identification of individual characteristics are accomplished sequentially, and that auditory perception exhibits differences between sexes and in spatial dominance.
Affiliation(s)
- Yanzhu Fan
- Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu 610041, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
- Ke Fang
- Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu 610041, China
- School of Life Science, Anhui University, Hefei 230601, China
- Ruolei Sun
- Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu 610041, China
- School of Life Science, Anhui University, Hefei 230601, China
- Di Shen
- Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu 610041, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
- Jing Yang
- Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu 610041, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
- Yezhong Tang
- Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu 610041, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
- Guangzhan Fang
- Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu 610041, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
4
Skerritt-Davis B, Elhilali M. Neural Encoding of Auditory Statistics. J Neurosci 2021; 41:6726-6739. [PMID: 34193552] [PMCID: PMC8336711] [DOI: 10.1523/jneurosci.1887-20.2021]
Abstract
The human brain extracts statistical regularities embedded in real-world scenes to sift through the complexity stemming from changing dynamics and entwined uncertainty along multiple perceptual dimensions (e.g., pitch, timbre, location). While there is evidence that sensory dynamics along different auditory dimensions are tracked independently by separate cortical networks, how these statistics are integrated to give rise to unified objects remains unknown, particularly in dynamic scenes that lack conspicuous coupling between features. Using tone sequences with stochastic regularities along spectral and spatial dimensions, this study examines behavioral and electrophysiological responses from human listeners (male and female) to changing statistics in auditory sequences and uses a computational model of predictive Bayesian inference to formulate multiple hypotheses for statistical integration across features. Neural responses reveal multiplexed brain responses reflecting both local statistics along individual features in frontocentral networks, together with global (object-level) processing in centroparietal networks. Independent tracking of local surprisal along each acoustic feature reveals linear modulation of neural responses, while global melody-level statistics follow a nonlinear integration of statistical beliefs across features to guide perception. Near-identical results are obtained in separate experiments along spectral and spatial acoustic dimensions, suggesting a common mechanism for statistical inference in the brain. Potential variations in statistical integration strategies and memory deployment shed light on individual variability between listeners in terms of behavioral efficacy and fidelity of neural encoding of stochastic change in acoustic sequences.
SIGNIFICANCE STATEMENT: The world around us is complex and ever changing: in everyday listening, sound sources evolve along multiple dimensions, such as pitch, timbre, and spatial location, and they exhibit emergent statistical properties that change over time. In the face of this complexity, the brain builds an internal representation of the external world by collecting statistics from the sensory input along multiple dimensions. Using a Bayesian predictive inference model, this work considers alternative hypotheses for how statistics are combined across sensory dimensions. Behavioral and neural responses from human listeners show the brain multiplexes two representations, where local statistics along each feature linearly affect neural responses, and global statistics nonlinearly combine statistical beliefs across dimensions to shape perception of stochastic auditory sequences.
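The contrast between feature-wise ("local") surprisal and a combined ("global") statistic can be made concrete with a toy computation. The sketch below is purely illustrative and is not the authors' model: it assumes a running Gaussian predictive model per feature, computes per-tone surprisal for a hypothetical pitch and a hypothetical location feature, and contrasts an additive read-out with one possible nonlinear combination across features.

```python
import numpy as np

def running_gaussian_surprisal(x, eps=1e-6):
    """Surprisal (-log likelihood) of each sample under a Gaussian fitted
    to all preceding samples of the same feature (illustrative only)."""
    s = np.zeros_like(x, dtype=float)
    for t in range(2, len(x)):
        mu, sd = x[:t].mean(), x[:t].std() + eps
        p = np.exp(-0.5 * ((x[t] - mu) / sd) ** 2) / (np.sqrt(2 * np.pi) * sd)
        s[t] = -np.log(p + eps)
    return s

rng = np.random.default_rng(0)
pitch = np.concatenate([rng.normal(0, 1, 30), rng.normal(4, 1, 30)])  # change in mean statistics
azimuth = rng.normal(0, 1, 60)                                        # statistically stable feature

s_pitch = running_gaussian_surprisal(pitch)
s_loc = running_gaussian_surprisal(azimuth)

local_linear = s_pitch + s_loc                 # additive, feature-wise ("local") read-out
global_nonlinear = np.maximum(s_pitch, s_loc)  # one possible nonlinear combination across features
```

In the study itself these quantities were estimated with a Bayesian predictive-inference model and related to EEG responses; the sketch only illustrates the distinction between feature-wise and combined statistics.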
5
Xie Y, Li Y, Guan M, Duan H, Xu X, Fang P. Audiovisual working memory and association with resting-state regional homogeneity. Behav Brain Res 2021; 411:113382. [PMID: 34044090] [DOI: 10.1016/j.bbr.2021.113382]
Abstract
Multisensory processing is a prominent research topic. However, multisensory working memory has received comparatively little attention. The present study aimed to investigate behavioral performance on an audiovisual working memory task and its association with resting-state functional magnetic resonance imaging (fMRI) regional homogeneity (ReHo). A total of 128 healthy participants were enrolled in this study. The participants completed a modified Sternberg working memory task using complex auditory and visual objects as materials under different encoding conditions, including semantically congruent audiovisual, semantically incongruent audiovisual, and single-modality (auditory or visual) object encoding. Two subgroups received resting-state fMRI scans according to their behavioral performance. The results showed that semantically congruent audiovisual object encoding sped up later unisensory memory recognition in this task. Moreover, the high-behavioral-performance (response time, RT) group showed increased ReHo in the executive control network (ECN) and decreased ReHo in the default mode network (DMN) and salience network (SN). In addition, resting-state ReHo values in ECN nodes (e.g., middle frontal gyrus and superior frontal gyrus) were correlated with RT. These findings suggest that semantically congruent audiovisual encoding is superior to unisensory encoding for later memory recognition and may involve distinct functional networks such as the ECN.
Affiliation(s)
- Yuanjun Xie
- School of Education, Xin Yang College, Xin Yang, China; Department of Radiology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Yanyan Li
- School of Education, Xin Yang College, Xin Yang, China
- Muzhen Guan
- Department of Mental Health, Xi'an Medical University, Xi'an, China
- Haidan Duan
- School of Education, Xin Yang College, Xin Yang, China
- Xiliang Xu
- School of Education, Xin Yang College, Xin Yang, China
- Peng Fang
- Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
6
Wang A, Wu X, Tang X, Zhang M. How modality processing differences affect cross-modal nonspatial repetition inhibition. Psych J 2020; 9:306-315. [PMID: 31908147] [DOI: 10.1002/pchj.332]
Abstract
Although previous studies have demonstrated that identity-based repetition inhibition could occur across modalities, whether the modality processing difference or attentional set caused differences between the unimodal and cross-modal conditions was unknown. To investigate this question in both visual-auditory and auditory-visual patterns, the present study adopted a cross-modal "prime-neutral cue-target" priming paradigm, in which a neutral event was presented between the prime and the target. The relationships of the identities and modalities between the prime and the target were manipulated such that their modalities and identities could either be the same or different. Our results showed that (a) identity-based repetition inhibition occurred under both unimodal and cross-modal conditions, (b) response times to auditory targets were slower than those to visual targets, and (c) identity-based repetition inhibition was larger while discriminating repeated auditory targets than visual targets regardless of whether the prime was visual or auditory. These results suggested that nonspatial repetition inhibition can occur across modalities and that it was not in general larger or smaller than unimodal repetition inhibition, as this difference was due to modality processing differences.
Affiliation(s)
- Aijun Wang
- Department of Psychology, Soochow University, Suzhou, China
- Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Xiaogang Wu
- Department of Psychology, Soochow University, Suzhou, China
- Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Xiaoyu Tang
- School of Psychology, Liaoning Normal University, Dalian, China
- Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Dalian, China
- Ming Zhang
- Department of Psychology, Soochow University, Suzhou, China
- Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
7
Ohla K, Höchenberger R, Freiherr J, Lundström JN. Superadditive and Subadditive Neural Processing of Dynamic Auditory-Visual Objects in the Presence of Congruent Odors. Chem Senses 2019; 43:35-44. [PMID: 29045615] [DOI: 10.1093/chemse/bjx068]
Abstract
Our sensory experiences comprise a variety of different inputs at any given time. Some of these experiences are unmistakable, others are ambiguous and profit from additional sensory information. Here, we explored whether the presence of a congruent odor influences the neural processing and sensory interaction of audio-visual objects using degraded videos (V) and sounds (A) of dynamic objects in unimodal and bimodal (AV) combinations without or with a congruent odor (VO, AO, AVO). Analyses of EEG data revealed superadditive and subadditive interaction effects. The topography and timing of these effects suggest evaluative rather than sensory processes as the underlying cause. Together, the results suggest that the mere presence of an odor affects the processing of A, V, and AV objects differently while multisensory interactions of AV and AVO objects have common neuronal mechanisms pointing to a robust, modality-independent network for the processing of redundant sensory information.
Affiliation(s)
- Kathrin Ohla
- German Institute of Human Nutrition Potsdam-Rehbruecke, Germany
- Monell Chemical Senses Center, USA
- Jessica Freiherr
- Uniklinik RWTH Aachen, Diagnostic and Interventional Neuroradiology, Germany
- Fraunhofer-Institut für Verfahrenstechnik und Verpackung IVV, Sensory Analytics, Germany
- Johan N Lundström
- Monell Chemical Senses Center, USA
- Department of Clinical Neuroscience, Karolinska Institutet, Sweden
8
Xu W, Kolozsvári OB, Oostenveld R, Leppänen PHT, Hämäläinen JA. Audiovisual Processing of Chinese Characters Elicits Suppression and Congruency Effects in MEG. Front Hum Neurosci 2019; 13:18. [PMID: 30787872] [PMCID: PMC6372538] [DOI: 10.3389/fnhum.2019.00018]
Abstract
Learning to associate written letters/characters with speech sounds is crucial for reading acquisition. Most previous studies have focused on audiovisual integration in alphabetic languages. Less is known about logographic languages such as Chinese characters, which map onto mostly syllable-based morphemes in the spoken language. Here we investigated how long-term exposure to native language affects the underlying neural mechanisms of audiovisual integration in a logographic language using magnetoencephalography (MEG). MEG sensor and source data from 12 adult native Chinese speakers and a control group of 13 adult Finnish speakers were analyzed for audiovisual suppression (bimodal responses vs. sum of unimodal responses) and congruency (bimodal incongruent responses vs. bimodal congruent responses) effects. The suppressive integration effect was found in the left angular and supramarginal gyri (205-365 ms), left inferior frontal and left temporal cortices (575-800 ms) in the Chinese group. The Finnish group showed a distinct suppression effect only in the right parietal and occipital cortices at a relatively early time window (285-460 ms). The congruency effect was only observed in the Chinese group in left inferior frontal and superior temporal cortex in a late time window (about 500-800 ms) probably related to modulatory feedback from multi-sensory regions and semantic processing. The audiovisual integration in a logographic language showed a clear resemblance to that in alphabetic languages in the left superior temporal cortex, but with activation specific to the logographic stimuli observed in the left inferior frontal cortex. The current MEG study indicated that learning of logographic languages has a large impact on the audiovisual integration of written characters with some distinct features compared to previous results on alphabetic languages.
Affiliation(s)
- Weiyong Xu
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
- Jyväskylä Centre for Interdisciplinary Brain Research, Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
- Orsolya Beatrix Kolozsvári
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
- Jyväskylä Centre for Interdisciplinary Brain Research, Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
- Robert Oostenveld
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- NatMEG, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Paavo Herman Tapio Leppänen
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
- Jyväskylä Centre for Interdisciplinary Brain Research, Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
- Jarmo Arvid Hämäläinen
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
- Jyväskylä Centre for Interdisciplinary Brain Research, Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
9
Fang K, Zhang B, Brauth SE, Tang Y, Fang G. The first call note of the Anhui tree frog (Rhacophorus zhoukaiya) is acoustically suited for enabling individual recognition. Bioacoustics 2018. [DOI: 10.1080/09524622.2017.1422805]
Affiliation(s)
- Ke Fang
- School of Life Science, Anhui University, Hefei, China
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu, China
- Baowei Zhang
- School of Life Science, Anhui University, Hefei, China
- Steven E. Brauth
- Department of Psychology, University of Maryland, College Park, MD, USA
- Yezhong Tang
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu, China
- Guangzhan Fang
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu, China
10
Being First Matters: Topographical Representational Similarity Analysis of ERP Signals Reveals Separate Networks for Audiovisual Temporal Binding Depending on the Leading Sense. J Neurosci 2017; 37:5274-5287. [PMID: 28450537] [PMCID: PMC5456109] [DOI: 10.1523/jneurosci.2926-16.2017]
Abstract
In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Intersensory timing is crucial in this process because only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window, revealing asymmetries in its size and plasticity depending on the leading input: auditory–visual (AV) or visual–auditory (VA). Here, we tested whether separate neuronal mechanisms underlie this AV–VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV–VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV–VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing the AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between the AV and VA maps at each time point (500 ms window after stimulus) and then correlated with two alternative similarity model matrices: AVmaps = VAmaps versus AVmaps ≠ VAmaps. The tRSA results favored the AVmaps ≠ VAmaps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such a dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in auditory and visual sensory systems.
SIGNIFICANCE STATEMENT: Intersensory timing is a crucial aspect of multisensory integration, determining whether and how inputs in one modality enhance stimulus processing in another modality. Our research demonstrates that evaluating synchrony of auditory-leading (AV) versus visual-leading (VA) audiovisual stimulus pairs is characterized by two distinct patterns of brain activity. This suggests that audiovisual integration is not a unitary process and that different binding mechanisms are recruited in the brain based on the leading sense. These mechanisms may be relevant for supporting different classes of multisensory operations, for example, auditory enhancement of visual attention (AV) and visual enhancement of auditory speech (VA).
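The core logic of the time-resolved topographic comparison described in this abstract can be sketched as follows. The arrays, channel count, and threshold below are hypothetical and this is not the authors' pipeline; it only illustrates the idea of spatially correlating AV and VA topographies at each time point and reading the result against the two candidate models.

```python
import numpy as np

# Hypothetical ERP topographies: one map (electrodes) per time point.
n_elec, n_times = 160, 250          # e.g., 160 channels over a 500 ms window at 500 Hz (assumed)
rng = np.random.default_rng(1)
av_maps = rng.normal(size=(n_elec, n_times))
va_maps = rng.normal(size=(n_elec, n_times))

# Time-resolved spatial similarity between the AV and VA topographies.
spatial_r = np.array([np.corrcoef(av_maps[:, t], va_maps[:, t])[0, 1] for t in range(n_times)])

# Under the "AVmaps = VAmaps" model the spatial correlation should stay high;
# under "AVmaps != VAmaps" it should stay low. A simple illustrative read-out:
supports_different_networks = spatial_r < 0.5   # threshold chosen for illustration only
```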
11
Fang G, Yang P, Xue F, Cui J, Brauth SE, Tang Y. Sound Classification and Call Discrimination Are Decoded in Order as Revealed by Event-Related Potential Components in Frogs. Brain Behav Evol 2015; 86:232-45. [DOI: 10.1159/000441215]
Abstract
Species that use communication sounds to coordinate social and reproductive behavior must be able to distinguish vocalizations from nonvocal sounds as well as to identify individual vocalization types. In this study we sought to identify the neural localization of the processes involved and the temporal order in which they occur in an anuran species, the music frog Babina daunchina. To do this we measured telencephalic and mesencephalic event-related potentials (ERPs) elicited by synthesized white noise (WN), highly sexually attractive (HSA) calls produced by males from inside nests and male calls of low sexual attractiveness (LSA) produced outside of nests. Each stimulus possessed similar temporal structures. The results showed the following: (1) the amplitudes of the first negative ERP component (N1) at ∼100 ms differed significantly between WN and conspecific calls but not between HSA and LSA calls, indicating that discrimination between conspecific calls and nonvocal sounds occurs in ∼100 ms, (2) the amplitudes of the second positive ERP component (P2) at ∼200 ms in the difference waves between HSA calls and WN were significantly higher than between LSA calls and WN in the right telencephalon, implying that call characteristic identification occurs in ∼200 ms and (3) WN evoked a larger third positive ERP component (P3) at ∼300 ms than conspecific calls, suggesting the frogs had classified the conspecific calls into one category and perceived WN as novel. Thus, both the detection of sounds and the identification of call characteristics are accomplished quickly in a specific temporal order, as reflected by ERP components. In addition, the most dynamic ERP patterns appeared in the left mesencephalon and the right telencephalon, indicating the two brain regions might play key roles in anuran vocal communication.
12
Matusz PJ, Thelen A, Amrein S, Geiser E, Anken J, Murray MM. The role of auditory cortices in the retrieval of single-trial auditory-visual object memories. Eur J Neurosci 2015; 41:699-708. [PMID: 25728186] [DOI: 10.1111/ejn.12804]
Abstract
Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time.
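The discrimination measure d' reported above is the standard signal-detection index: the difference between the z-transformed hit and false-alarm rates. A minimal sketch follows; the trial counts are made up, and the 1/(2N) correction for extreme rates is one common convention, not necessarily the one used in the paper.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    with a 1/(2N) correction applied to rates of exactly 0 or 1."""
    n_old = hits + misses
    n_new = false_alarms + correct_rejections
    hr = min(max(hits / n_old, 1 / (2 * n_old)), 1 - 1 / (2 * n_old))
    far = min(max(false_alarms / n_new, 1 / (2 * n_new)), 1 - 1 / (2 * n_new))
    return norm.ppf(hr) - norm.ppf(far)

# Example: repeated ("old") sounds correctly recognized vs. new sounds falsely called "old"
print(d_prime(hits=40, misses=10, false_alarms=12, correct_rejections=38))
```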
Affiliation(s)
- Pawel J Matusz
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Clinical Neurosciences and Department of Radiology, Vaudois University Hospital Center and University of Lausanne, Lausanne, Switzerland; Attention, Behaviour, and Cognitive Development Group, Department of Experimental Psychology, University of Oxford, Oxford, UK; University of Social Sciences and Humanities, Faculty in Wroclaw, Wroclaw, Poland
13
Stevenson RA, Segers M, Ferber S, Barense MD, Camarata S, Wallace MT. Keeping time in the brain: Autism spectrum disorder and audiovisual temporal processing. Autism Res 2015; 9:720-38. [PMID: 26402725] [DOI: 10.1002/aur.1566]
Abstract
A growing area of interest and relevance in the study of autism spectrum disorder (ASD) focuses on the relationship between multisensory temporal function and the behavioral, perceptual, and cognitive impairments observed in ASD. Atypical sensory processing is becoming increasingly recognized as a core component of autism, with evidence of atypical processing across a number of sensory modalities. These deviations from typical processing underscore the value of interpreting ASD within a multisensory framework. Furthermore, converging evidence illustrates that these differences in audiovisual processing may be specifically related to temporal processing. This review seeks to bridge the connection between temporal processing and audiovisual perception, and to elaborate on emerging data showing differences in audiovisual temporal function in autism. We also discuss the consequence of such changes, the specific impact on the processing of different classes of audiovisual stimuli (e.g. speech vs. nonspeech, etc.), and the presumptive brain processes and networks underlying audiovisual temporal integration. Finally, possible downstream behavioral implications, and possible remediation strategies are outlined. Autism Res 2016, 9: 720-738. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
Affiliation(s)
- Ryan A Stevenson
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Magali Segers
- Department of Psychology, York University, Toronto, Ontario, Canada
- Susanne Ferber
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Rotman Research Institute, Toronto, Ontario, Canada
- Morgan D Barense
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Rotman Research Institute, Toronto, Ontario, Canada
- Stephen Camarata
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, Tennessee
- Mark T Wallace
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, Tennessee
- Vanderbilt Brain Institute, Vanderbilt University Medical Center, Nashville, Tennessee
- Department of Psychology, Vanderbilt University, Nashville, Tennessee
- Department of Psychiatry, Vanderbilt University Medical Center, Nashville, Tennessee
14
Murray MM, Thelen A, Thut G, Romei V, Martuzzi R, Matusz PJ. The multisensory function of the human primary visual cortex. Neuropsychologia 2015; 83:161-169. [PMID: 26275965] [DOI: 10.1016/j.neuropsychologia.2015.08.011]
Abstract
It has been nearly 10 years since Ghazanfar and Schroeder (2006) proposed that the neocortex is essentially multisensory in nature. However, it is only recently that sufficient and hard evidence that supports this proposal has accrued. We review evidence that activity within the human primary visual cortex plays an active role in multisensory processes and directly impacts behavioural outcome. This evidence emerges from a full palette of human brain imaging and brain mapping methods with which multisensory processes are quantitatively assessed by taking advantage of particular strengths of each technique as well as advances in signal analyses. Several general conclusions about multisensory processes in the human primary visual cortex are supported relatively solidly. First, haemodynamic methods (fMRI/PET) show that there is both convergence and integration occurring within primary visual cortex. Second, primary visual cortex is involved in multisensory processes during early post-stimulus stages (as revealed by EEG/ERP/ERFs as well as TMS). Third, multisensory effects in primary visual cortex directly impact behaviour and perception, as revealed by correlational (EEG/ERPs/ERFs) as well as more causal measures (TMS/tACS). While the provocative claim of Ghazanfar and Schroeder (2006) that the whole of neocortex is multisensory in function has yet to be demonstrated, this can now be considered established in the case of the human primary visual cortex.
Affiliation(s)
- Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology and Neurorehabilitation Service and Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland; EEG Brain Mapping Core, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Antonia Thelen
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Gregor Thut
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, United Kingdom
- Vincenzo Romei
- Centre for Brain Science, Department of Psychology, University of Essex, Colchester, United Kingdom
- Roberto Martuzzi
- Laboratory of Cognitive Neuroscience, Brain-Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Switzerland
- Pawel J Matusz
- The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology and Neurorehabilitation Service and Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Attention, Brain, and Cognitive Development Group, Department of Experimental Psychology, University of Oxford, United Kingdom
15
Anticipating action effects recruits audiovisual movement representations in the ventral premotor cortex. Brain Cogn 2014; 92C:39-47. [DOI: 10.1016/j.bandc.2014.09.010]
16
Jost LB, Eberhard-Moscicka AK, Frisch C, Dellwo V, Maurer U. Integration of Spoken and Written Words in Beginning Readers: A Topographic ERP Study. Brain Topogr 2013; 27:786-800. [DOI: 10.1007/s10548-013-0336-4]
17
Elmer S, Sollberger S, Meyer M, Jäncke L. An Empirical Reevaluation of Absolute Pitch: Behavioral and Electrophysiological Measurements. J Cogn Neurosci 2013; 25:1736-53. [DOI: 10.1162/jocn_a_00410]
Abstract
Here, we reevaluated the “two-component” model of absolute pitch (AP) by combining behavioral and electrophysiological measurements. This specific model postulates that AP is driven by a perceptual encoding ability (i.e., pitch memory) plus an associative memory component (i.e., pitch labeling). To test these predictions, during EEG measurements AP and non-AP (NAP) musicians were passively exposed to piano tones (first component of the model) and additionally instructed to judge whether combinations of tones and labels were conceptually associated or not (second component of the model). Auditory-evoked N1/P2 potentials did not reveal differences between the two groups, thus indicating that AP is not necessarily driven by a differential pitch encoding ability at the processing level of the auditory cortex. Otherwise, AP musicians performed the conceptual association task with an order of magnitude better accuracy and shorter RTs than NAP musicians did, this result clearly pointing to distinctive conceptual associations in AP possessors. Most notably, this behavioral superiority was reflected by an increased N400 effect and accompanied by a subsequent late positive component, the latter not being distinguishable in NAP musicians.
Affiliation(s)
- Martin Meyer
- University of Zurich
- Center for Integrative Human Physiology, Zurich, Switzerland
- International Normal Aging and Plasticity Imaging Center, Zurich, Switzerland
- Lutz Jäncke
- University of Zurich
- Center for Integrative Human Physiology, Zurich, Switzerland
- International Normal Aging and Plasticity Imaging Center, Zurich, Switzerland
18
Talja S, Alho K, Rinne T. Source analysis of event-related potentials during pitch discrimination and pitch memory tasks. Brain Topogr 2013; 28:445-58. [PMID: 24043402] [DOI: 10.1007/s10548-013-0307-9]
Abstract
Our previous studies using fMRI have demonstrated that activations in human auditory cortex (AC) are strongly dependent on the characteristics of the task. The present study tested whether source estimation of scalp-recorded event-related potentials (ERPs) can be used to investigate task-dependent AC activations. Subjects were presented with frequency-varying two-part tones during pitch discrimination, pitch n-back memory, and visual tasks identical to our previous fMRI study (Rinne et al., J Neurosci 29:13338-13343, 2009). ERPs and their minimum-norm source estimates in AC were strongly modulated by task at 200-700 ms from tone onset. As in the fMRI study, the pitch discrimination and pitch memory tasks were associated with distinct AC activation patterns. In the pitch discrimination task, increased activity in the anterior AC was detected relatively late at 300-700 ms from tone onset. Therefore, this activity was probably not associated with enhanced pitch processing but rather with the actual discrimination process (comparison between the two parts of tone). Increased activity in more posterior areas associated with the pitch memory task, in turn, occurred at 200-700 ms suggesting that this activity was related to operations on pitch categories after pitch analysis was completed. Finally, decreased activity associated with the pitch memory task occurred at 150-300 ms consistent with the notion that, in the demanding pitch memory task, spectrotemporal analysis is actively halted as soon as category information has been obtained. These results demonstrate that ERP source analysis can be used to complement fMRI to investigate task-dependent activations of human AC.
Affiliation(s)
- Suvi Talja
- Institute of Behavioural Sciences, University of Helsinki, PO Box 9, 00014, Helsinki, Finland
19
Colavita dominance effect revisited: the effect of semantic congruity. Atten Percept Psychophys 2013; 75:1827-39. [DOI: 10.3758/s13414-013-0530-1]
20
Jäncke L, Rogenmoser L, Meyer M, Elmer S. Pre-attentive modulation of brain responses to tones in coloured-hearing synesthetes. BMC Neurosci 2012; 13:151. [PMID: 23241212] [PMCID: PMC3547775] [DOI: 10.1186/1471-2202-13-151]
Abstract
BACKGROUND Coloured-hearing (CH) synesthesia is a perceptual phenomenon in which an acoustic stimulus (the inducer) initiates a concurrent colour perception (the concurrent). Individuals with CH synesthesia "see" colours when hearing tones, words, or music; this specific phenomenon suggesting a close relationship between auditory and visual representations. To date, it is still unknown whether the perception of colours is associated with a modulation of brain functions in the inducing brain area, namely in the auditory-related cortex and associated brain areas. In addition, there is an on-going debate as to whether attention to the inducer is necessarily required for eliciting a visual concurrent, or whether the latter can emerge in a pre-attentive fashion. RESULTS By using the EEG technique in the context of a pre-attentive mismatch negativity (MMN) paradigm, we show that the binding of tones and colours in CH synesthetes is associated with increased MMN amplitudes in response to deviant tones supposed to induce novel concurrent colour perceptions. Most notably, the increased MMN amplitudes we revealed in the CH synesthetes were associated with stronger intracerebral current densities originating from the auditory cortex, parietal cortex, and ventral visual areas. CONCLUSIONS The automatic binding of tones and colours in CH synesthetes is accompanied by an early pre-attentive process recruiting the auditory cortex, inferior and superior parietal lobules, as well as ventral occipital areas.
Affiliation(s)
- Lutz Jäncke
- Division Neuropsychology, Institute of Psychology, University of Zurich, Binzmühlestrasse 14/25, Zurich CH-8050, Switzerland
- Center for Integrative Human Physiology, Zurich, Switzerland
- International Normal Aging and Plasticity Imaging Center (INAPIC), Zurich, Switzerland
- Research Unit “Plasticity and learning in the aging brain”, University of Zurich, Zurich, Switzerland
- Lars Rogenmoser
- Division Neuropsychology, Institute of Psychology, University of Zurich, Binzmühlestrasse 14/25, Zurich CH-8050, Switzerland
- Martin Meyer
- Center for Integrative Human Physiology, Zurich, Switzerland
- Research Unit “Plasticity and learning in the aging brain”, University of Zurich, Zurich, Switzerland
- Stefan Elmer
- Division Neuropsychology, Institute of Psychology, University of Zurich, Binzmühlestrasse 14/25, Zurich CH-8050, Switzerland
21
Michel CM, Murray MM. Towards the utilization of EEG as a brain imaging tool. Neuroimage 2012; 61:371-85. [DOI: 10.1016/j.neuroimage.2011.12.039]
22
Abstract
Multisensory interactions are a fundamental feature of brain organization. Principles governing multisensory processing have been established by varying stimulus location, timing and efficacy independently. Determining whether and how such principles operate when stimuli vary dynamically in their perceived distance (as when looming/receding) provides an assay for synergy among the above principles and also means for linking multisensory interactions between rudimentary stimuli with higher-order signals used for communication and motor planning. Human participants indicated movement of looming or receding versus static stimuli that were visual, auditory, or multisensory combinations while 160-channel EEG was recorded. Multivariate EEG analyses and distributed source estimations were performed. Nonlinear interactions between looming signals were observed at early poststimulus latencies (∼75 ms) in analyses of voltage waveforms, global field power, and source estimations. These looming-specific interactions positively correlated with reaction time facilitation, providing direct links between neural and performance metrics of multisensory integration. Statistical analyses of source estimations identified looming-specific interactions within the right claustrum/insula extending inferiorly into the amygdala and also within the bilateral cuneus extending into the inferior and lateral occipital cortices. Multisensory effects common to all conditions, regardless of perceived distance and congruity, followed (∼115 ms) and manifested as faster transition between temporally stable brain networks (vs summed responses to unisensory conditions). We demonstrate the early-latency, synergistic interplay between existing principles of multisensory interactions. Such findings change the manner in which to model multisensory interactions at neural and behavioral/perceptual levels. We also provide neurophysiologic backing for the notion that looming signals receive preferential treatment during perception.
23
Elmer S, Meyer M, Jäncke L. The spatiotemporal characteristics of elementary audiovisual speech and music processing in musically untrained subjects. Int J Psychophysiol 2012; 83:259-68. [DOI: 10.1016/j.ijpsycho.2011.09.011]
24
Abstract
Recent studies of multisensory integration compel a redefinition of fundamental sensory processes, including, but not limited to, how visual inputs influence the localization of sounds and suppression of their echoes.
Affiliation(s)
- Micah M Murray
- Electroencephalography Brain Mapping Core, Center for Biomedical Imaging of Lausanne and Geneva, rue du Bugnon 46, BH08.078, 1011 Lausanne, Switzerland
25
Current perspectives and methods in studying neural mechanisms of multisensory interactions. Neurosci Biobehav Rev 2011; 36:111-33. [PMID: 21569794] [DOI: 10.1016/j.neubiorev.2011.04.015]
Abstract
In the past decade neuroscience has witnessed major advances in the field of multisensory interactions. A large body of research has revealed several new types of cross-sensory interactions. In addition, multisensory interactions have been reported at temporal and spatial system levels previously thought of as strictly unimodal. We review the findings that have led to the current broad consensus that most, if not all, higher- as well as lower-level neural processes are in some form multisensory. We continue by outlining the progress that has been made in identifying the functional significance of different types of interactions, for example, in subserving stimulus binding and enhancement of perceptual certainty. Finally, we provide a critical introduction to cutting-edge methods from Bayes-optimal integration to multivoxel pattern analysis as applied to multisensory research at different system levels.
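As a pointer to the first method mentioned above, the textbook case of Bayes-optimal integration of two independent Gaussian cues reduces to a reliability-weighted average whose variance is lower than that of either cue alone. The sketch below only illustrates that formula; the numbers are hypothetical and not drawn from any particular study.

```python
def bayes_optimal_fusion(mu_a, var_a, mu_v, var_v):
    """Maximum-likelihood (Bayes-optimal) fusion of two independent Gaussian cues:
    the fused estimate is a reliability-weighted average, and the fused variance
    is smaller than either single-cue variance."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    mu = w_a * mu_a + (1 - w_a) * mu_v
    var = 1 / (1 / var_a + 1 / var_v)
    return mu, var

# Example: a noisy auditory location estimate combined with a precise visual one
print(bayes_optimal_fusion(mu_a=10.0, var_a=4.0, mu_v=8.0, var_v=1.0))  # -> (8.4, 0.8)
```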
26
Brain dynamics underlying training-induced improvement in suppressing inappropriate action. J Neurosci 2010; 30:13670-8. [PMID: 20943907] [DOI: 10.1523/jneurosci.2064-10.2010]
Abstract
Inhibitory control, a core component of executive functions, refers to our ability to suppress intended or ongoing cognitive or motor processes. Mostly based on Go/NoGo paradigms, a considerable amount of literature reports that inhibitory control of responses to "NoGo" stimuli is mediated by top-down mechanisms manifesting ∼200 ms after stimulus onset within frontoparietal networks. However, whether inhibitory functions in humans can be trained and the supporting neurophysiological mechanisms remain unresolved. We addressed these issues by contrasting auditory evoked potentials (AEPs) to left-lateralized "Go" and right NoGo stimuli recorded at the beginning versus the end of 30 min of active auditory spatial Go/NoGo training, as well as during passive listening of the same stimuli before versus after the training session, generating two separate 2 × 2 within-subject designs. Training improved Go/NoGo proficiency. Response times to Go stimuli decreased. During active training, AEPs to NoGo, but not Go, stimuli modulated topographically with training 61-104 ms after stimulus onset, indicative of changes in the underlying brain network. Source estimations revealed that this modulation followed from decreased activity within left parietal cortices, which in turn predicted the extent of behavioral improvement. During passive listening, in contrast, effects were limited to topographic modulations of AEPs in response to Go stimuli over the 31-81 ms interval, mediated by decreased right anterior temporoparietal activity. We discuss our results in terms of the development of an automatic and bottom-up form of inhibitory control with training and a differential effect of Go/NoGo training during active executive control versus passive listening conditions.
27
Sperdin HF, Cappe C, Murray MM. Auditory-somatosensory multisensory interactions in humans: dissociating detection and spatial discrimination. Neuropsychologia 2010; 48:3696-705. [PMID: 20833194] [DOI: 10.1016/j.neuropsychologia.2010.09.001]
Abstract
Simple reaction times (RTs) to auditory-somatosensory (AS) multisensory stimuli are facilitated over their unisensory counterparts both when stimuli are delivered to the same location and when separated. In two experiments we addressed the possibility that top-down and/or task-related influences can dynamically impact the spatial representations mediating these effects and the extent to which multisensory facilitation will be observed. Participants performed a simple detection task in response to auditory, somatosensory, or simultaneous AS stimuli that in turn were either spatially aligned or misaligned by lateralizing the stimuli. Additionally, we also informed the participants that they would be retrogradely queried (one-third of trials) regarding the side where a given stimulus in a given sensory modality was presented. In this way, we sought to have participants attending to all possible spatial locations and sensory modalities, while nonetheless having them perform a simple detection task. Experiment 1 provided no cues prior to stimulus delivery. Experiment 2 included spatially uninformative cues (50% of trials). In both experiments, multisensory conditions significantly facilitated detection RTs with no evidence for differences according to spatial alignment (though general benefits of cuing were observed in Experiment 2). Facilitated detection occurs even when attending to spatial information. Performance with probes, quantified using sensitivity (d'), was impaired following multisensory trials in general and significantly more so following misaligned multisensory trials. This indicates that spatial information is not available, despite being task-relevant. The collective results support a model wherein early AS interactions may result in a loss of spatial acuity for unisensory information.
Affiliation(s)
- Holger F Sperdin
- Neuropsychology and Neurorehabilitation Service, Department of Clinical Neurosciences, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland
28
|
Dynamic modulation of short-term synaptic plasticity in the auditory cortex: the role of norepinephrine. Hear Res 2010; 271:26-36. [PMID: 20816739 DOI: 10.1016/j.heares.2010.08.014] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/05/2009] [Revised: 07/30/2010] [Accepted: 08/25/2010] [Indexed: 11/20/2022]
Abstract
Norepinephrine (NE) is an important modulator of neuronal activity in the auditory cortex. Using patch-clamp recording and a paired-pulse protocol in an auditory cortex slice preparation, we recently demonstrated that NE affects cortical inhibition in a layer-specific manner, by decreasing apical but increasing basal inhibition onto layer II/III pyramidal cell dendrites. In the present study we used a similar protocol to investigate how noradrenergic modulation of inhibition depends on stimulus frequency, using 1-s-long pulse trains at 5, 10, and 20 Hz. The study was conducted using pharmacologically isolated inhibitory postsynaptic currents (IPSCs) evoked by electrical stimulation of axons either in layer I (LI-eIPSCs) or in layer II/III (LII/III-eIPSCs). We found that: 1) LI-eIPSCs display less synaptic depression than LII/III-eIPSCs at all frequencies tested; 2) in both types of synapse, depression had a presynaptic component that could be altered by manipulating [Ca²⁺]₀; and 3) NE modestly altered short-term synaptic plasticity at low and intermediate (5-10 Hz) frequencies, but at the high stimulation frequency (20 Hz) it selectively enhanced synaptic facilitation in LI-eIPSCs while increasing synaptic depression of LII/III-eIPSCs in the later (>250 ms) part of the response. We speculate that these mechanisms may limit the temporal window for top-down synaptic integration as well as the duration and intensity of stimulus-evoked gamma oscillations triggered by complex auditory stimuli during alertness.
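Short-term depression and facilitation in such train protocols are commonly expressed as IPSC amplitudes normalized to the first response of the train. The sketch below illustrates only that normalization; the amplitude values are invented, and this is not the authors' analysis code.

import numpy as np

def normalized_train_amplitudes(ipsc_amplitudes):
    # Normalize each IPSC in a stimulus train to the first response:
    # values < 1 indicate short-term depression, values > 1 facilitation.
    amps = np.asarray(ipsc_amplitudes, dtype=float)
    return amps / amps[0]

# Hypothetical amplitudes (pA) for the first five pulses of a 20 Hz train
li_eipsc = [120.0, 112.0, 106.0, 101.0, 99.0]    # modest depression (layer I stimulation)
lii_eipsc = [150.0, 110.0, 90.0, 76.0, 70.0]     # stronger depression (layer II/III stimulation)
print(normalized_train_amplitudes(li_eipsc))
print(normalized_train_amplitudes(lii_eipsc))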
Collapse
|
29
|
Abstract
The ability to discriminate conspecific vocalizations is observed across species and early during development. However, its neurophysiologic mechanism remains controversial, particularly regarding whether it involves specialized processes with dedicated neural machinery. We identified spatiotemporal brain mechanisms for conspecific vocalization discrimination in humans by applying electrical neuroimaging analyses to auditory evoked potentials (AEPs) in response to acoustically and psychophysically controlled nonverbal human and animal vocalizations as well as sounds of man-made objects. AEP strength modulations in the absence of topographic modulations are suggestive of statistically indistinguishable brain networks. First, responses to human versus animal vocalizations were significantly stronger, but topographically indistinguishable, starting at 169-219 ms after stimulus onset and within regions of the right superior temporal sulcus and superior temporal gyrus. This effect correlated with another AEP strength modulation occurring at 291-357 ms that was localized within the left inferior prefrontal and precentral gyri. Temporally segregated and spatially distributed stages of vocalization discrimination are thus functionally coupled and demonstrate how conventional views of functional specialization must incorporate network dynamics. Second, vocalization discrimination is not subject to facilitated processing in time, but instead lags more general categorization by approximately 100 ms, indicative of hierarchical processing during object discrimination. Third, although differences between human and animal vocalizations persisted when analyses were performed at a single-object level or extended to include additional (man-made) sound categories, at no latency were responses to human vocalizations stronger than those to all other categories. Vocalization discrimination transpires at times synchronous with that of face discrimination but is not functionally specialized.
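The reported coupling between the two strength modulations can be pictured as an across-participant correlation of window-averaged GFP differences (human minus animal vocalizations) in the two latency windows. Below is a minimal sketch under that reading; the participant count, sampling rate, and random data are assumptions, and the paper's actual analysis may differ.

import numpy as np
from scipy.stats import pearsonr

def window_strength_difference(gfp_a, gfp_b, times, t_start, t_end):
    # Mean GFP difference (condition A minus condition B) within a latency window,
    # computed separately for each participant (rows).
    mask = (times >= t_start) & (times <= t_end)
    return (gfp_a[:, mask] - gfp_b[:, mask]).mean(axis=1)

# Hypothetical per-participant GFP time courses (participants x time points), 1 kHz sampling assumed
rng = np.random.default_rng(1)
times = np.arange(-100, 500)                      # ms relative to stimulus onset
gfp_human = rng.random((20, times.size))
gfp_animal = rng.random((20, times.size))

early = window_strength_difference(gfp_human, gfp_animal, times, 169, 219)
late = window_strength_difference(gfp_human, gfp_animal, times, 291, 357)
print(pearsonr(early, late))                      # across-participant coupling of the two effects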
Collapse
|
30
|
Sperdin HF, Cappe C, Murray MM. The behavioral relevance of multisensory neural response interactions. Front Neurosci 2010; 4:9. [PMID: 20582260 PMCID: PMC2891631 DOI: 10.3389/neuro.01.009.2010] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2009] [Accepted: 12/04/2009] [Indexed: 11/24/2022] Open
Abstract
Sensory information can interact to impact perception and behavior. Foods are appreciated according to their appearance, smell, taste and texture. Athletes and dancers combine visual, auditory, and somatosensory information to coordinate their movements. Under laboratory settings, detection and discrimination are likewise facilitated by multisensory signals. Research over the past several decades has shown that the requisite anatomy exists to support interactions between sensory systems in regions canonically designated as exclusively unisensory in their function and, more recently, that neural response interactions occur within these same regions, including even primary cortices and thalamic nuclei, at early post-stimulus latencies. Here, we review evidence concerning direct links between early, low-level neural response interactions and behavioral measures of multisensory integration.
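The behavioral facilitation that this review ties to early neural response interactions is often benchmarked against Miller's race-model inequality, which asks whether multisensory reaction times are faster than probability summation of the unisensory distributions predicts. The following is a minimal sketch of that test with invented reaction times; it is offered as background and is not the analysis of any specific study reviewed here.

import numpy as np

def race_model_violation(rt_a, rt_s, rt_multi, probs=np.linspace(0.05, 0.95, 19)):
    # Compare the multisensory RT cumulative distribution with Miller's race-model bound,
    # min(1, CDF_auditory + CDF_somatosensory), evaluated at multisensory RT quantiles.
    # Positive values at fast quantiles indicate facilitation beyond probability summation.
    t = np.quantile(rt_multi, probs)
    cdf = lambda rts: (np.asarray(rts)[:, None] <= t[None, :]).mean(axis=0)
    bound = np.minimum(cdf(rt_a) + cdf(rt_s), 1.0)
    return cdf(rt_multi) - bound

# Hypothetical reaction-time samples (ms)
rng = np.random.default_rng(2)
rt_auditory = rng.normal(320, 40, 200)
rt_somatosensory = rng.normal(330, 40, 200)
rt_multisensory = rng.normal(280, 35, 200)
print(race_model_violation(rt_auditory, rt_somatosensory, rt_multisensory).round(2))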
Collapse
Affiliation(s)
- Holger F. Sperdin
- The Functional Electrical Neuroimaging Laboratory, Neuropsychology and Neurorehabilitation Service and Radiology Service, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland
| | - Céline Cappe
- The Functional Electrical Neuroimaging Laboratory, Neuropsychology and Neurorehabilitation Service and Radiology Service, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland
| | - Micah M. Murray
- The Functional Electrical Neuroimaging Laboratory, Neuropsychology and Neurorehabilitation Service and Radiology Service, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland
- The Electroencephalography Brain Mapping Core, Centre for Biomedical Imaging, Lausanne, Switzerland
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
| |
Collapse
|
31
|
Interhemispheric coupling between the posterior sylvian regions impacts successful auditory temporal order judgment. Neuropsychologia 2010; 48:2579-85. [PMID: 20457165 DOI: 10.1016/j.neuropsychologia.2010.05.004] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2009] [Revised: 03/10/2010] [Accepted: 05/01/2010] [Indexed: 11/24/2022]
Abstract
Accurate perception of the temporal order of sensory events is a prerequisite in numerous functions ranging from language comprehension to motor coordination. We investigated the spatio-temporal brain dynamics of auditory temporal order judgment (aTOJ) using electrical neuroimaging analyses of auditory evoked potentials (AEPs) recorded while participants completed a near-threshold task requiring spatial discrimination of left-right and right-left sound sequences. AEPs to sound pairs modulated topographically as a function of aTOJ accuracy over the 39-77 ms post-stimulus period, indicating the engagement of distinct configurations of brain networks during early auditory processing stages. Source estimations revealed that accurate and inaccurate performance were both linked to activity within bilateral posterior sylvian regions (PSR). However, activity within left, but not right, PSR predicted behavioral performance, suggesting that left PSR activity during the early encoding phases of pairs of auditory spatial stimuli appears critical for the perception of their order of occurrence. Correlation analyses of the source estimations further revealed that activity between left and right PSR was significantly correlated in the inaccurate but not the accurate condition, indicating that aTOJ accuracy depends on functional decoupling between homotopic PSR areas. These results support a model of temporal order processing wherein behaviorally relevant temporal information--i.e. a temporal 'stamp'--is extracted within the early stages of cortical processing in left PSR but is critically modulated by inputs from right PSR. We discuss our results with regard to current models of temporal order processing, namely gating and latency mechanisms.
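The decoupling result rests on correlating left and right PSR source activity across observations within each accuracy condition. A minimal sketch of such an interregional correlation is given below; the participant-level values are invented and constructed only to show the expected pattern, so it should not be read as the authors' procedure.

import numpy as np
from scipy.stats import pearsonr

# Hypothetical window-averaged (39-77 ms) source activity per participant, arbitrary units
rng = np.random.default_rng(3)
left_accurate = rng.random(12)
right_accurate = rng.random(12)                                   # independent: weak coupling expected
left_inaccurate = rng.random(12)
right_inaccurate = 0.8 * left_inaccurate + 0.2 * rng.random(12)   # constructed to be coupled

print(pearsonr(left_accurate, right_accurate))      # accurate condition: low correlation
print(pearsonr(left_inaccurate, right_inaccurate))  # inaccurate condition: high correlation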
Collapse
|
32
|
Spierer L, De Lucia M, Bernasconi F, Grivel J, Bourquin NMP, Clarke S, Murray MM. Learning-induced plasticity in human audition: objects, time, and space. Hear Res 2010; 271:88-102. [PMID: 20430070 DOI: 10.1016/j.heares.2010.03.086] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/23/2009] [Revised: 02/16/2010] [Accepted: 03/03/2010] [Indexed: 10/19/2022]
Abstract
The human auditory system comprises specialized but interacting anatomic and functional pathways encoding object, spatial, and temporal information. We review how learning-induced plasticity manifests along these pathways and to what extent common mechanisms subserve such plasticity. A first series of experiments establishes a temporal hierarchy along which sounds of objects are discriminated, from basic to fine-grained categorical boundaries and on to learned representations. A widespread network of temporal and (pre)frontal brain regions contributes to object discrimination via recursive processing. Learning-induced plasticity typically manifested as repetition suppression within a common set of brain regions. A second series of experiments considered how the temporal sequence of sound sources is represented. We show that lateralized responsiveness during the initial encoding phase of pairs of auditory spatial stimuli is critical for their accurate ordered perception. Finally, we consider how spatial representations are formed and modified through training-induced learning. A population-based model of spatial processing is supported wherein temporal and parietal structures interact in the encoding of relative and absolute spatial information over the initial ~300 ms post-stimulus onset. Collectively, these data provide insights into the functional organization of human audition and open directions for new developments in targeted diagnostic and neurorehabilitation strategies.
Collapse
Affiliation(s)
- Lucas Spierer
- Neuropsychology and Neurorehabilitation Service, Department of Clinical Neuroscience, Vaudois University Hospital Center and University of Lausanne, Switzerland
| | | | | | | | | | | | | |
Collapse
|
33
|
Bernasconi F, Grivel J, Murray MM, Spierer L. Plastic brain mechanisms for attaining auditory temporal order judgment proficiency. Neuroimage 2010; 50:1271-9. [DOI: 10.1016/j.neuroimage.2010.01.016] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2009] [Revised: 01/04/2010] [Accepted: 01/06/2010] [Indexed: 10/20/2022] Open
|
34
|
De Lucia M, Cocchi L, Martuzzi R, Meuli RA, Clarke S, Murray MM. Perceptual and Semantic Contributions to Repetition Priming of Environmental Sounds. Cereb Cortex 2009; 20:1676-84. [DOI: 10.1093/cercor/bhp230] [Citation(s) in RCA: 28] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
|
35
|
De Lucia M, Camen C, Clarke S, Murray MM. The role of actions in auditory object discrimination. Neuroimage 2009; 48:475-85. [DOI: 10.1016/j.neuroimage.2009.06.041] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2009] [Revised: 06/12/2009] [Accepted: 06/16/2009] [Indexed: 10/20/2022] Open
|