1. Yusuf PA, Hubka P, Konerding W, Land R, Tillein J, Kral A. Congenital deafness reduces alpha-gamma cross-frequency coupling in the auditory cortex. Hear Res 2024;449:109032. PMID: 38797035. DOI: 10.1016/j.heares.2024.109032.
Abstract
Neurons within a neuronal network can be grouped by bottom-up and top-down influences using synchrony in neuronal oscillations. This creates the representation of perceptual objects from sensory features. Oscillatory activity can be differentiated into stimulus-phase-locked (evoked) and non-phase-locked (induced) components. The former is mainly determined by sensory input, the latter by higher-level (cortical) processing. Effects of auditory deprivation on cortical oscillations have been studied in congenitally deaf cats (CDCs) using cochlear implant (CI) stimulation. CI-induced alpha, beta, and gamma activity were compromised in the auditory cortex of CDCs. Furthermore, top-down information flow between secondary and primary auditory areas in hearing cats, conveyed by induced alpha oscillations, was lost in CDCs. Here we used the matching pursuit algorithm to assess components of such oscillatory activity in local field potentials recorded in primary field A1. In addition to the loss of induced alpha oscillations, we also found a loss of evoked theta activity in CDCs. The loss of theta and alpha activity in CDCs can be directly related to reduced high-frequency (gamma-band) activity through cross-frequency coupling. Here we quantified such cross-frequency coupling in adult 1) hearing-experienced, acoustically stimulated cats (aHCs); 2) hearing-experienced cats acutely deafened pharmacologically and then stimulated electrically through CIs (eHCs); and 3) electrically stimulated CDCs. We found significant cross-frequency coupling in all animal groups in >70% of auditory-responsive sites. The predominant coupling in aHCs and eHCs was between theta/alpha phase and gamma power. In CDCs such coupling was lost and replaced by coupling of alpha oscillations to delta/theta phase. Thus, alpha/theta oscillations synchronize high-frequency gamma activity only in hearing-experienced cats.
The absence of induced alpha and theta oscillations contributes to the loss of induced gamma power in CDCs, thereby signifying impaired local network activity.
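Phase-amplitude coupling of the kind described above (a gamma amplitude envelope riding on theta/alpha phase) is commonly quantified with a modulation index. The sketch below is an illustrative Python implementation of the Tort-style index on a synthetic signal; it is not the authors' matching-pursuit-based pipeline, and the band limits and filter settings are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    # Zero-phase Butterworth band-pass filter
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def modulation_index(x, fs, phase_band=(4, 12), amp_band=(30, 80), n_bins=18):
    # Phase of the slow (theta/alpha) band and amplitude envelope of the
    # fast (gamma) band, both obtained via the Hilbert transform
    phase = np.angle(hilbert(bandpass(x, phase_band[0], phase_band[1], fs)))
    amp = np.abs(hilbert(bandpass(x, amp_band[0], amp_band[1], fs)))
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    mean_amp = np.array([amp[(phase >= edges[k]) & (phase < edges[k + 1])].mean()
                         for k in range(n_bins)])
    p = mean_amp / mean_amp.sum()  # gamma-amplitude distribution over phase bins
    # Normalized KL divergence from the uniform distribution: 0 = no coupling
    return (np.log(n_bins) + np.sum(p * np.log(p))) / np.log(n_bins)

# Synthetic check: gamma bursts whose amplitude follows theta phase
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
coupled = theta + 0.5 * (1 + theta) * np.sin(2 * np.pi * 50 * t)
rng = np.random.default_rng(0)
mi_coupled = modulation_index(coupled, fs)
mi_noise = modulation_index(rng.standard_normal(t.size), fs)
```

On the synthetic signal the coupled trace yields a clearly larger index than white noise, which is the qualitative contrast the study draws between hearing-experienced and congenitally deaf animals.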
Affiliation(s)
- Prasandhya A Yusuf
- Hannover Medical School, Institute of AudioNeuroTechnology and Department of Experimental Otology of the ENT Clinics, Hannover, Germany; Faculty of Medicine University of Indonesia, Department of Medical Physiology and Biophysics / Medical Technology IMERI, Jakarta, Indonesia.
- Peter Hubka
- Hannover Medical School, Institute of AudioNeuroTechnology and Department of Experimental Otology of the ENT Clinics, Hannover, Germany
- Wiebke Konerding
- Hannover Medical School, Institute of AudioNeuroTechnology and Department of Experimental Otology of the ENT Clinics, Hannover, Germany
- Rüdiger Land
- Hannover Medical School, Institute of AudioNeuroTechnology and Department of Experimental Otology of the ENT Clinics, Hannover, Germany
- Jochen Tillein
- J.W. Goethe University, Department of Otorhinolaryngology, Frankfurt am Main, Germany
- Andrej Kral
- Hannover Medical School, Institute of AudioNeuroTechnology and Department of Experimental Otology of the ENT Clinics, Hannover, Germany; Australian Hearing Hub, School of Medicine and Health Sciences, Macquarie University, Sydney, Australia
2. Hou TW, Yang CC, Lai TH, Wu YH, Yang CP. Light Therapy in Chronic Migraine. Curr Pain Headache Rep 2024;28:621-626. PMID: 38865075. DOI: 10.1007/s11916-024-01258-y.
Abstract
PURPOSE OF REVIEW: This review assesses the effectiveness and safety of light therapy, particularly green light therapy, as an emerging non-pharmacological treatment for chronic migraine (CM). It aims to highlight alternative or complementary approaches to traditional pharmacological remedies, underscoring the need for diverse treatment options. RECENT FINDINGS: Although sensitivity to light is a defining feature of migraine, light therapy has shown promise in providing substantial symptom relief. Studies have provided insights into green light therapy's role in managing CM, consistently demonstrating its efficacy in reducing the frequency, severity, and symptoms of migraines. Additional observed benefits include improvements in sleep quality and reductions in anxiety. Importantly, green light therapy has been associated with minimal side effects, indicating its potential as a suitable option for migraine sufferers. Other forms of light therapy, such as infrared polarized light, low-level laser therapy (LLLT), and intravascular irradiation of blood (ILIB), are also being explored for potential therapeutic effects. Light therapies, especially green light therapy, are recognized as promising, safe, and non-pharmacological interventions for treating CM; they have been shown to decrease headache frequency and enhance overall quality of life. However, current studies are often limited by small sample sizes, and more extensive clinical trials are needed to better understand the full impact of light therapies. Other light-based treatments, such as LLLT and ILIB, warrant further research to broaden the scope of effective migraine management strategies.
Affiliation(s)
- Tsung-Wei Hou
- Department of Neurology, Taichung Veteran General Hospital, Taichung, Taiwan
- Cheng-Chia Yang
- Department of Healthcare Administration, Asia University, Taichung, Taiwan
- Tzu-Hsien Lai
- Department of Neurology, Far Eastern Memorial Hospital, New Taipei, Taiwan
- School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ying-Hui Wu
- Department of Family Medicine, Kuang-Tien General Hospital, Taichung, Taiwan
- Chun-Pai Yang
- Department of Neurology, Kuang Tien General Hospital, Taichung, Taiwan
- Ph.D. Program in Translational Medicine, National Chung Hsing University, Taichung, Taiwan
3. Chalas N, Karagiorgis A, Bamidis P, Paraskevopoulos E. The impact of musical training in symbolic and non-symbolic audiovisual judgements of magnitude. PLoS One 2022;17:e0266165. PMID: 35511806. PMCID: PMC9070945. DOI: 10.1371/journal.pone.0266165.
Abstract
Quantity estimation can be represented in either an analog or a symbolic manner, and recent evidence suggests that analog and symbolic representations of quantities interact. Nonetheless, those two representational forms may be enhanced by convergent multisensory information. Here, we elucidate those interactions using high-density electroencephalography (EEG) and an audiovisual oddball paradigm. Participants were presented with simultaneous audiovisual tokens in which the co-varying pitch of tones was combined with the embedded cardinality of dot patterns. Incongruencies were elicited independently in the symbolic and non-symbolic modalities within the audiovisual percept, violating the newly acquired rule that "the higher the pitch of the tone, the larger the cardinality of the figure." The effect of neural plasticity on symbolic and non-symbolic numerical representations of quantities was investigated through a cross-sectional design comparing musicians to musically naïve controls. Each individual's cortical activity was reconstructed and statistically modeled for a predefined time window of the evoked response (130–170 ms). In summary, we show that symbolic and non-symbolic processing of magnitudes is reorganized in cortical space, with professional musicians showing altered activity in motor and temporal areas. We therefore argue that the symbolic representation of quantities is altered through musical training.
Affiliation(s)
- Nikos Chalas
- Institute for Biomagnetism and Biosignal analysis, University of Münster, Münster, Germany
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Alexandros Karagiorgis
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Panagiotis Bamidis
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Evangelos Paraskevopoulos
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Department of Psychology, University of Cyprus, Nicosia, Cyprus
4. Neurophysiological Verbal Working Memory Patterns in Children: Searching for a Benchmark of Modality Differences in Audio/Video Stimuli Processing. Comput Intell Neurosci 2021;2021:4158580. PMID: 34966418. PMCID: PMC8712130. DOI: 10.1155/2021/4158580.
Abstract
Exploration of the specific brain areas involved in verbal working memory (VWM) is a powerful but not widely used tool for studying different sensory modalities, especially in children. In this study, for the first time, we used electroencephalography (EEG) to investigate neurophysiological similarities and differences in response to the same verbal stimuli, presented in the auditory and visual modalities during an n-back task with varying memory load in children. Since VWM plays an important role in learning ability, we wanted to investigate whether children process verbal input from auditory and visual stimuli through the same neural patterns and whether performance varies depending on the sensory modality. Performance in terms of reaction times was better in the visual than the auditory modality (p = 0.008) and worse as memory load increased, regardless of modality (p < 0.001). EEG activation was proportionally influenced by task level and was evident in the theta band over the prefrontal cortex (p = 0.021), along the midline (p = 0.003), and over the left hemisphere (p = 0.003). Differences between the two modalities were seen only in the gamma band over the parietal cortices (p = 0.009). The values of a brainwave-based engagement index, innovatively used here to test children in a dual-modality VWM paradigm, varied with n-back task level (p = 0.001) and correlated negatively (p = 0.002) with performance, suggesting its computational effectiveness in detecting changes in mental state during memory tasks involving children. Overall, our findings suggest that auditory and visual VWM involve the same cortical areas (frontal, parietal, occipital, and midline) and that the significant differences in theta-band cortical activation were related more to memory load than to sensory modality, suggesting that VWM in the child's brain involves a cross-modal processing pattern.
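Brainwave-based engagement indices of the kind mentioned above are band-power ratios computed from the EEG spectrum. The abstract does not give the study's exact formula, so the sketch below uses one widely cited definition, beta / (alpha + theta), as an assumption for illustration; the band limits are also conventional choices, not the authors' values.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    # Integrated power spectral density in [lo, hi) Hz, via Welch's method
    f, pxx = welch(x, fs=fs, nperseg=int(fs))
    band = (f >= lo) & (f < hi)
    return pxx[band].sum() * (f[1] - f[0])

def engagement_index(x, fs):
    # beta / (alpha + theta): an assumed, commonly used engagement ratio
    theta = band_power(x, fs, 4, 8)
    alpha = band_power(x, fs, 8, 13)
    beta = band_power(x, fs, 13, 30)
    return beta / (alpha + theta)

# Two synthetic traces: one alpha-dominant, one beta-dominant
fs = 250.0
t = np.arange(0, 20, 1 / fs)
alpha_dominant = np.sin(2 * np.pi * 10 * t) + 0.2 * np.sin(2 * np.pi * 20 * t)
beta_dominant = 0.2 * np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 20 * t)
ei_low = engagement_index(alpha_dominant, fs)
ei_high = engagement_index(beta_dominant, fs)
```

The beta-dominant trace yields the larger index, matching the intuition that the index rises with faster, task-engaged rhythms relative to slower ones.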
5. Jerath R, Beveridge C. Multimodal Integration and Phenomenal Spatiotemporal Binding: A Perspective From the Default Space Theory. Front Integr Neurosci 2019;13:2. PMID: 30804763. PMCID: PMC6371768. DOI: 10.3389/fnint.2019.00002.
Abstract
How does the integrated and unified conscious experience arise from the vastly distributed activities of the nervous system? How is information from the many cones of the retina bound with information coming from the cochlea to create the association of sounds with objects in visual space? In this perspective article, we offer a novel viewpoint on the "binding problem", describing a metastable operation of the brain and body that may provide insight into it. In our view, which is a component of the Default Space Theory (DST), consciousness arises from a metastable synchronization of local computations into a global coherence by a framework of widespread slow and ultraslow oscillations coordinated by the thalamus. We reinforce a notion shared by consciousness researchers such as Revonsuo and the Fingelkurts brothers that a spatiotemporal matrix is the foundation of phenomenological experience and that this phenomenology is directly tied to bioelectric operations of the nervous system. Through the oscillatory binding system we describe, cognitive neuroscientists may be able to more accurately correlate bioelectric activity of the brain and body with the phenomenology of human experience.
Affiliation(s)
- Ravinder Jerath
- Charitable Medical Healthcare Foundation, Augusta, GA, United States
- Connor Beveridge
- Charitable Medical Healthcare Foundation, Augusta, GA, United States
6. He Y, Nagels A, Schlesewsky M, Straube B. The Role of Gamma Oscillations During Integration of Metaphoric Gestures and Abstract Speech. Front Psychol 2018;9:1348. PMID: 30104995. PMCID: PMC6077537. DOI: 10.3389/fpsyg.2018.01348.
Abstract
Metaphoric (MP) co-speech gestures are commonly used in daily communication. They convey abstract information through gestures that are clearly concrete (e.g., raising a hand for "the level of the football game is high"). Understanding MP co-speech gestures requires multisensory integration of abstract speech and concrete gestures at the semantic level. While semantic gesture-speech integration has been extensively investigated using functional magnetic resonance imaging, evidence from electroencephalography (EEG) is rare. In the current study, we conducted an EEG experiment investigating the processing of MP vs. iconic (IC) co-speech gestures in different contexts, to reveal the oscillatory signature of MP gesture integration. German participants (n = 20) viewed video clips of an actor performing both types of gestures, accompanied by either comprehensible German or incomprehensible Russian (R) speech, or speaking German sentences without any gestures. Time-frequency analysis of the EEG data showed that, when gestures were accompanied by comprehensible German speech, MP gestures elicited decreased gamma-band power (50–70 Hz) between 500 and 700 ms at parietal electrodes compared with IC gestures, and the source of this effect was localized to the right middle temporal gyrus. This difference likely reflects integration processes, as it was reduced in the Russian-language and no-gesture conditions. Our findings provide the first empirical evidence of a functional relationship between gamma-band oscillations and higher-level semantic processes in a multisensory setting.
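Time-frequency analyses like the 50–70 Hz gamma-power measure above are typically obtained by convolving the signal with complex Morlet wavelets. The sketch below shows that basic operation at a single frequency; it is an illustrative sketch, not the authors' analysis code, and the sampling rate, cycle count, and test frequencies are assumptions.

```python
import numpy as np

def morlet_power(x, fs, freq, n_cycles=7):
    # Time course of power at one frequency: convolve with a complex
    # Morlet wavelet and square the magnitude of the result
    sigma_t = n_cycles / (2 * np.pi * freq)          # wavelet width in seconds
    t = np.arange(-4 * sigma_t, 4 * sigma_t + 1 / fs, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.abs(wavelet).sum()                 # amplitude normalization
    return np.abs(np.convolve(x, wavelet, mode="same")) ** 2

# Synthetic check: a pure 60 Hz "gamma" oscillation
fs = 500.0
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 60 * t)
p60 = morlet_power(sig, fs, 60).mean()   # power probed at 60 Hz
p20 = morlet_power(sig, fs, 20).mean()   # power probed at 20 Hz
```

Probing the 60 Hz signal at 60 Hz returns far more power than probing it at 20 Hz, which is the frequency selectivity the band-limited gamma analysis relies on.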
Affiliation(s)
- Yifei He
- Translational Neuroimaging Lab, Department of Psychiatry and Psychotherapy, Marburg Center for Mind, Brain and Behavior, Philipps-University Marburg, Marburg, Germany
- Arne Nagels
- Department of General Linguistics, Johannes Gutenberg University Mainz, Mainz, Germany
- Matthias Schlesewsky
- School of Psychology, Social Work and Social Policy, University of South Australia, Adelaide, SA, Australia
- Benjamin Straube
- Translational Neuroimaging Lab, Department of Psychiatry and Psychotherapy, Marburg Center for Mind, Brain and Behavior, Philipps-University Marburg, Marburg, Germany
7. Francisco AA, Jesse A, Groen MA, McQueen JM. A General Audiovisual Temporal Processing Deficit in Adult Readers With Dyslexia. J Speech Lang Hear Res 2017;60:144-158. PMID: 28056152. DOI: 10.1044/2016_jslhr-h-15-0375.
Abstract
PURPOSE: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. METHOD: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of audiovisual speech and nonspeech stimuli, their time window of audiovisual integration for speech (using incongruent /aCa/ syllables), and their audiovisual perception of phonetic categories. RESULTS: Adult readers with dyslexia showed less sensitivity to audiovisual simultaneity than typical readers for both speech and nonspeech events. We found no differences between readers with dyslexia and typical readers in the temporal window of integration for audiovisual speech or in the audiovisual perception of phonetic categories. CONCLUSIONS: The results suggest an audiovisual temporal deficit in dyslexia that is not specific to speech-related events. However, the differences found for audiovisual temporal sensitivity did not translate into a deficit in audiovisual speech perception. There thus seems to be a dissociation between simultaneity judgment and perception, suggesting a multisensory system that uses different mechanisms across tasks. Alternatively, the audiovisual deficit in dyslexia may be observable only when explicit judgments about audiovisual simultaneity are required.
Affiliation(s)
- Ana A Francisco
- Behavioural Science Institute, Radboud University, Nijmegen, the Netherlands
- Alexandra Jesse
- Department of Psychological and Brain Sciences, University of Massachusetts, Amherst
- Margriet A Groen
- Behavioural Science Institute, Radboud University, Nijmegen, the Netherlands
- James M McQueen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
8. Timm J, Schönwiesner M, Schröger E, SanMiguel I. Sensory suppression of brain responses to self-generated sounds is observed with and without the perception of agency. Cortex 2016;80:5-20. DOI: 10.1016/j.cortex.2016.03.018.
9. Chuang SW, Chuang CH, Yu YH, King JT, Lin CT. EEG Alpha and Gamma Modulators Mediate Motion Sickness-Related Spectral Responses. Int J Neural Syst 2016;26:1650007. DOI: 10.1142/s0129065716500076.
Abstract
Motion sickness (MS) is a common experience of travelers. To provide insights into the brain dynamics associated with MS, this study recruited 19 subjects for an electroencephalogram (EEG) experiment in a virtual-reality driving environment. When riding on consecutive winding roads, subjects experienced postural instability and sensory conflict between visual and vestibular stimuli, and rated their level of MS on a six-point scale. Independent component analysis (ICA) was used to separate the filtered EEG signals into maximally temporally independent components (ICs). The logarithmic spectra of ICs of interest, with dimensionality reduced by principal component analysis, were then decomposed by a second ICA to find spectrally fixed and temporally independent modulators (IMs). Results demonstrated that a higher degree of MS was accompanied by increased activation of alpha and gamma IMs across remote independent brain processes covering motor, parietal, and occipital areas. This co-modulatory spectral change in the alpha and gamma bands reveals the neurophysiological demand to regulate conflicts among multimodal sensory systems during MS.
Affiliation(s)
- Shang-Wen Chuang
- Brain Research Center, National Chiao Tung University, 1001 Ta-Hsueh Road, Hsinchu, 30010, Taiwan
- Chun-Hsiang Chuang
- Brain Research Center, National Chiao Tung University, 1001 Ta-Hsueh Road, Hsinchu, 30010, Taiwan
- Faculty of Engineering and Information Technology, University of Technology Sydney, 15 Broadway, Ultimo NSW, 2007, Australia
- Yi-Hsin Yu
- Brain Research Center, National Chiao Tung University, 1001 Ta-Hsueh Road, Hsinchu, 30010, Taiwan
- Jung-Tai King
- Brain Research Center, National Chiao Tung University, 1001 Ta-Hsueh Road, Hsinchu, 30010, Taiwan
- Chin-Teng Lin
- Brain Research Center, National Chiao Tung University, 1001 Ta-Hsueh Road, Hsinchu, 30010, Taiwan
- Faculty of Engineering and Information Technology, University of Technology Sydney, 15 Broadway, Ultimo NSW, 2007, Australia
10. Pinheiro AP, Barros C, Pedrosa J. Salience in a social landscape: electrophysiological effects of task-irrelevant and infrequent vocal change. Soc Cogn Affect Neurosci 2015;11:127-139. PMID: 26468268. DOI: 10.1093/scan/nsv103.
Abstract
In a dynamically changing social environment, humans have to face the challenge of prioritizing stimuli that compete for attention. In the context of social communication, the voice is the most important sound category. However, the existing studies do not directly address whether and how the salience of an unexpected vocal change in an auditory sequence influences the orientation of attention. In this study, frequent tones were interspersed with task-relevant infrequent tones and task-irrelevant infrequent vocal sounds (neutral, happy and angry vocalizations). Eighteen healthy college students were asked to count infrequent tones. A combined event-related potential (ERP) and EEG time-frequency approach was used, with the focus on the P3 component and on the early auditory evoked gamma band response, respectively. A spatial-temporal principal component analysis was used to disentangle potentially overlapping ERP components. Although no condition differences were observed in the 210-310 ms window, larger positive responses were observed for emotional than neutral vocalizations in the 310-410 ms window. Furthermore, the phase synchronization of the early auditory evoked gamma oscillation was enhanced for happy vocalizations. These findings support the idea that the brain prioritizes the processing of emotional stimuli, by devoting more attentional resources to salient social signals even when they are not task-relevant.
Affiliation(s)
- Ana P Pinheiro
- Neuropsychophysiology Laboratory, School of Psychology, University of Minho, Braga, Portugal
- Carla Barros
- Neuropsychophysiology Laboratory, School of Psychology, University of Minho, Braga, Portugal
- João Pedrosa
- Neuropsychophysiology Laboratory, School of Psychology, University of Minho, Braga, Portugal
11. Pallesen KJ, Bailey CJ, Brattico E, Gjedde A, Palva JM, Palva S. Experience Drives Synchronization: The Phase and Amplitude Dynamics of Neural Oscillations to Musical Chords Are Differentially Modulated by Musical Expertise. PLoS One 2015;10:e0134211. PMID: 26291324. PMCID: PMC4546391. DOI: 10.1371/journal.pone.0134211.
Abstract
Musical expertise is associated with structural and functional changes in the brain that underlie facilitated auditory perception. We investigated whether the phase locking (PL) and amplitude modulations (AM) of neuronal oscillations in response to musical chords are correlated with musical expertise and whether they reflect the prototypicality of chords in Western tonal music. To this aim, we recorded magnetoencephalography (MEG) while musicians and non-musicians were presented with common prototypical major and minor chords, and with uncommon, non-prototypical dissonant and mistuned chords, while watching a silenced movie. We then analyzed the PL and AM of ongoing oscillations in the theta (4–8 Hz), alpha (8–14 Hz), beta (14–30 Hz), and gamma (30–80 Hz) bands in response to these chords. We found that musical expertise was associated with strengthened PL of ongoing oscillations to chords over a wide frequency range during the first 300 ms from stimulus onset, as opposed to increased alpha-band AM to chords over temporal MEG channels. In musicians, gamma-band PL was strongest to non-prototypical chords, while in non-musicians PL was strongest to minor chords. In both musicians and non-musicians, the long-latency (>200 ms) gamma-band PL was also sensitive to chord identity, and particularly to the amplitude modulations (beats) of the dissonant chord. These findings suggest that musical expertise modulates oscillation PL to musical chords and that the strength of these modulations depends on chord prototypicality.
Affiliation(s)
- Karen Johanne Pallesen
- Department of Neuroscience and Pharmacology, University of Copenhagen, Copenhagen, Denmark
- The Research Clinic for Functional Disorders and Psychosomatics, Aarhus University Hospital, Aarhus, Denmark
- Center of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark
- Elvira Brattico
- Helsinki Collegium for Advanced Studies, University of Helsinki, Helsinki, Finland
- Cognitive Brain Research Unit, Institute of Behavioral Science, University of Helsinki, Helsinki, Finland
- Albert Gjedde
- Department of Neuroscience and Pharmacology, University of Copenhagen, Copenhagen, Denmark
- Center of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark
- Pathophysiology and Experimental Tomography Center, Aarhus University Hospital, Aarhus, Denmark
- J. Matias Palva
- Neuroscience Center, University of Helsinki, Helsinki, Finland
- Satu Palva
- Neuroscience Center, University of Helsinki, Helsinki, Finland
- BioMag Laboratory, HUS Medical Imaging Center, Helsinki University Central Hospital, Helsinki, Finland
12. Lockwood G, Tuomainen J. Ideophones in Japanese modulate the P2 and late positive complex responses. Front Psychol 2015;6:933. PMID: 26191031. PMCID: PMC4488605. DOI: 10.3389/fpsyg.2015.00933.
Abstract
Sound symbolism, or the direct link between sound and meaning, is typologically and behaviorally attested across languages. However, neuroimaging research has mostly focused on artificial non-words or individual segments, which do not represent sound symbolism in natural language. We used EEG to compare Japanese ideophones, which are phonologically distinctive sound-symbolic lexical words, with arbitrary adverbs during a sentence reading task. Ideophones elicited a larger visual P2 response than arbitrary adverbs, as well as a sustained late positive complex. Our results and the previous literature suggest that the larger P2 may indicate the integration of sound and sensory information by association, in response to the distinctive phonology of ideophones. The late positive complex may reflect the facilitated lexical retrieval of arbitrary words in comparison to ideophones. This account provides new evidence that ideophones exhibit cross-modal correspondences similar to those proposed for non-words and individual sounds.
Affiliation(s)
- Gwilym Lockwood
- Department of Neurobiology of Language, Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Division of Psychology and Language Sciences, University College London, UK
- Jyrki Tuomainen
- Division of Psychology and Language Sciences, University College London, UK
13. Microsaccadic responses indicate fast categorization of sounds: a novel approach to study auditory cognition. J Neurosci 2014;34:11152-11158. PMID: 25122911. DOI: 10.1523/jneurosci.1568-14.2014.
Abstract
The mental chronometry of the human brain's processing of sounds to be categorized as targets has been studied intensively in cognitive neuroscience. According to current theories, a series of successive stages consisting of the registration, identification, and categorization of the sound must be completed before participants are able to report the sound as a target by button press after ∼300-500 ms. Here we use miniature eye movements as a tool to study the categorization of a sound as a target or nontarget, showing that an initial categorization is present already after 80-100 ms. During visual fixation, the rate of microsaccades, the fastest components of miniature eye movements, is transiently modulated after auditory stimulation. In two experiments, we measured microsaccade rates in human participants in an auditory three-tone oddball paradigm (including rare nontarget sounds) and observed a difference in microsaccade rates between targets and nontargets as early as 142 ms after sound onset. This finding was replicated in a third experiment with directed saccades, measured in a paradigm in which tones had to be matched to score-like visual symbols. Considering the delays introduced by (motor) signal transmission and data analysis constraints, the brain must have differentiated target from nontarget sounds as early as 80-100 ms after sound onset in both paradigms. We suggest that predictive processing of expected input makes higher cognitive attributes, such as a sound's identity and category, available already during early sensory processing. The measurement of eye movements is thus a promising approach to the investigation of hearing.
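Microsaccade-rate analyses of the kind described above start from detecting microsaccades in the gaze trace, usually with a velocity-threshold algorithm in the spirit of Engbert and Kliegl (2003): a smoothed velocity estimate, a median-based adaptive threshold, and runs of supra-threshold samples of a minimum duration. The sketch below illustrates that approach on a synthetic trace; it is not the paper's exact implementation, and the parameters (lambda, minimum duration) are conventional assumptions.

```python
import numpy as np

def detect_microsaccades(x, y, fs, lam=6.0, min_dur=3):
    # 5-point velocity estimate (position units per second)
    vx = fs * (x[4:] + x[3:-1] - x[1:-3] - x[:-4]) / 6.0
    vy = fs * (y[4:] + y[3:-1] - y[1:-3] - y[:-4]) / 6.0
    # Median-based velocity spread: robust to the saccades themselves
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
    # Elliptic threshold criterion across the two velocity components
    above = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0
    # Group consecutive supra-threshold samples into candidate events
    d = np.diff(above.astype(int))
    starts = np.flatnonzero(d == 1) + 1
    ends = np.flatnonzero(d == -1) + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, above.size]
    return [(s, e) for s, e in zip(starts, ends) if e - s >= min_dur]

# Synthetic fixation trace: slow drift plus one fast 10-sample ramp in x
fs = 500.0
n = 1000
t = np.arange(n) / fs
x = 0.1 * np.sin(2 * np.pi * 0.5 * t)
y = 0.1 * np.cos(2 * np.pi * 0.7 * t)
x[500:510] += np.linspace(0.0, 1.0, 10)   # ~50 deg/s movement
x[510:] += 1.0
events = detect_microsaccades(x, y, fs)
```

The adaptive threshold means slow drift never triggers a detection, while the single fast ramp is returned as one event; counting such events per unit time gives the microsaccade rate the study tracks after sound onset.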
14. Pieszek M, Schröger E, Widmann A. Separate and concurrent symbolic predictions of sound features are processed differently. Front Psychol 2014;5:1295. PMID: 25477832. PMCID: PMC4235414. DOI: 10.3389/fpsyg.2014.01295.
Abstract
The studies investigated the impact of predictive visual information about the pitch and location of a forthcoming sound on sound processing. In Symbol-to-Sound matching paradigms, symbols induced predictions of particular sounds. The brain's error signals (IR and N2b components of the event-related potential) were measured in response to occasional violations of the prediction, i.e., when a sound was incongruent to the corresponding symbol. IR and N2b index the detection of prediction violations at different levels, IR at a sensory and N2b at a cognitive level. Participants evaluated the congruency between prediction and actual sound by button press. When the prediction referred to only the pitch or only the location feature (Experiment 1), the violation of each feature elicited IR and N2b. The IRs to pitch and location violations revealed differences in time course and topography, suggesting that they were generated in feature-specific sensory areas. When the prediction referred to both features concurrently (Experiment 2), that is, the symbol predicted the sound's pitch and location, either one or both predictions were violated. Unexpectedly, no significant effects in the IR range were obtained. However, N2b was elicited in response to all violations. N2b in response to concurrent violations of pitch and location had a shorter latency. We conclude that associative predictions can be established by arbitrary rule-based symbols and for different sound features, and that concurrent violations are processed in parallel. In complex situations, as in Experiment 2, capacity limitations appear to affect processing in a hierarchical manner. While predictions were presumably not reliably established at sensory levels (absence of IR), they were established at more cognitive levels, where sounds are represented categorially (presence of N2b).
Affiliation(s)
- Marika Pieszek
- Cognitive incl. Biological Psychology, Institute of Psychology, University of Leipzig, Leipzig, Germany
- Erich Schröger
- Cognitive incl. Biological Psychology, Institute of Psychology, University of Leipzig, Leipzig, Germany
- Andreas Widmann
- Cognitive incl. Biological Psychology, Institute of Psychology, University of Leipzig, Leipzig, Germany
15
Time–Frequency Analysis of Event-Related Potentials: A Brief Tutorial. Brain Topogr 2013; 27:438-50. [DOI: 10.1007/s10548-013-0327-5] [Citation(s) in RCA: 61] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2013] [Accepted: 10/21/2013] [Indexed: 10/26/2022]
16
Changes in saccadic eye movement (SEM) and quantitative EEG parameter in bipolar patients. J Affect Disord 2013; 145:378-85. [PMID: 22832171 DOI: 10.1016/j.jad.2012.04.049] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/02/2012] [Accepted: 04/25/2012] [Indexed: 12/22/2022]
Abstract
BACKGROUND There is increasing evidence that neurocognitive dysfunction is associated with the different states of Bipolar Disorder. Gamma coherence is strongly related to cognitive processes and cortico-cortical communication. This paper aims to shed light on cortical gamma coherence in bipolar patients and a control group during a prosaccadic attention task. We hypothesized that gamma coherence acts as a main neural mechanism underlying information processing and that it is altered in bipolar patients. METHOD Thirty-two subjects (12 healthy controls and 20 bipolar patients) were enrolled in this study. The subjects performed a prosaccadic attention task while their brain activity pattern was recorded using quantitative electroencephalography (20 channels). RESULTS We observed that the manic group presented shorter saccade latency compared to the depression and control groups. The main finding was greater gamma coherence in the control group over the right hemisphere of both frontal and motor cortices during execution of the prosaccadic attention task. LIMITATIONS The findings need to be confirmed in larger samples and in bipolar patients before the start of pharmacological treatment. CONCLUSIONS Our findings suggest disrupted whole-brain functional connectivity in manic patients, representing a deregulation of cortical inhibitory mechanisms. Thus, our results reinforce our hypothesis that greater gamma coherence in the right and left frontal cortices in the manic group produces "noise" during information processing, and highlight that gamma coherence might be a biomarker for cognitive dysfunction during the manic state.
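Gamma-band coherence between electrode pairs, as analyzed in this study, is commonly estimated with Welch-based magnitude-squared coherence. A hedged sketch, assuming scipy and a 30-60 Hz gamma band (the study's exact band limits and estimator settings may differ):

```python
import numpy as np
from scipy.signal import coherence

def gamma_coherence(x, y, fs, band=(30.0, 60.0), nperseg=512):
    """Magnitude-squared coherence between two EEG channels,
    averaged over a gamma band (band limits are an assumption)."""
    f, cxy = coherence(x, y, fs=fs, nperseg=nperseg)
    mask = (f >= band[0]) & (f <= band[1])
    return cxy[mask].mean()
```

A value near 1 indicates strongly phase-consistent activity between the two sites in the band; independent noise yields a small positive bias that shrinks with the number of averaged segments.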
17
Simoens VL, Tervaniemi M. Auditory short-term memory activation during score reading. PLoS One 2013; 8:e53691. [PMID: 23326487 PMCID: PMC3543329 DOI: 10.1371/journal.pone.0053691] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2012] [Accepted: 12/04/2012] [Indexed: 11/19/2022] Open
Abstract
Performing music on the basis of reading a score requires reading ahead of what is being played in order to anticipate the necessary actions to produce the notes. Score reading thus not only involves the decoding of a visual score and the comparison to the auditory feedback, but also short-term storage of the musical information due to the delay of the auditory feedback during reading ahead. This study investigates the mechanisms of encoding of musical information in short-term memory during such a complicated procedure. There were three parts in this study. First, professional musicians participated in an electroencephalographic (EEG) experiment to study the slow wave potentials during a time interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material to be compared with delayed auditory material, as is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment was performed to determine which type of distractor would be the most interfering with the score reading-like task. Third, the self-reported strategies of the participants were also analyzed. All three parts of this study point towards the same conclusion, according to which during music score reading the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback.
Affiliation(s)
- Veerle L Simoens
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland.
18
Pieszek M, Widmann A, Gruber T, Schröger E. The human brain maintains contradictory and redundant auditory sensory predictions. PLoS One 2013; 8:e53634. [PMID: 23308266 PMCID: PMC3538730 DOI: 10.1371/journal.pone.0053634] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2012] [Accepted: 12/03/2012] [Indexed: 11/19/2022] Open
Abstract
Computational and experimental research has revealed that auditory sensory predictions are derived from regularities of the current environment by using internal generative models. However, so far, what has not been addressed is how the auditory system handles situations giving rise to redundant or even contradictory predictions derived from different sources of information. To this end, we measured error signals in the event-related brain potentials (ERPs) in response to violations of auditory predictions. Sounds could be predicted on the basis of overall probability, i.e., one sound was presented frequently and another sound rarely. Furthermore, each sound was predicted by an informative visual cue. Participants’ task was to use the cue and to discriminate the two sounds as fast as possible. Violations of the probability based prediction (i.e., a rare sound) as well as violations of the visual-auditory prediction (i.e., an incongruent sound) elicited error signals in the ERPs (Mismatch Negativity [MMN] and Incongruency Response [IR]). Particular error signals were observed even when the overall probability and the visual symbol predicted different sounds. That is, the auditory system concurrently maintains and tests contradictory predictions. Moreover, if the same sound was predicted, we observed an additive error signal (scalp potential and primary current density) equaling the sum of the specific error signals. Thus, the auditory system maintains and tolerates redundant and contradictory predictions that are represented functionally independently. We argue that the auditory system exploits all currently active regularities in order to optimally prepare for future events.
Affiliation(s)
- Marika Pieszek
- Cognitive incl. Biological Psychology, Institute of Psychology, University of Leipzig, Leipzig, Germany.
19
Liu B, Wu G, Meng X. Cross-modal priming effect based on short-term experience of ecologically unrelated audio-visual information: An event-related potential study. Neuroscience 2012; 223:21-7. [PMID: 22698696 DOI: 10.1016/j.neuroscience.2012.06.009] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2012] [Revised: 06/02/2012] [Accepted: 06/05/2012] [Indexed: 01/12/2023]
20
Abstract
Spoken sentence comprehension relies on rapid and effortless temporal integration of speech units displayed at different rates. Temporal integration refers to how chunks of information perceived at different time scales are linked together by the listener in mapping speech sounds onto meaning. The neural implementation of this integration remains unclear. This study explores the role of short and long windows of integration in accessing meaning from long samples of speech. In a cross-linguistic study, we explore the time course of oscillatory brain activity between 1 and 100 Hz, recorded using EEG, during the processing of native and foreign languages. We compare oscillatory responses in a group of Italian and Spanish native speakers while they attentively listen to Italian, Japanese, and Spanish utterances, played either forward or backward. The results show that both groups of participants display a significant increase in gamma band power (55–75 Hz) only when they listen to their native language played forward. The increase in gamma power starts around 1000 msec after the onset of the utterance and decreases by its end, resembling the time course of access to meaning during speech perception. In contrast, changes in low-frequency power show similar patterns for both native and foreign languages. We propose that gamma band power reflects a temporal binding phenomenon concerning the coordination of neural assemblies involved in accessing meaning of long samples of speech.
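A gamma-band power time course such as the 55-75 Hz measure described above is often obtained by bandpass filtering and taking the squared Hilbert envelope. A sketch under assumed filter settings, not the study's exact pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power_envelope(x, fs, band=(55.0, 75.0), order=4):
    """Instantaneous power in a frequency band: zero-phase bandpass
    (Butterworth, forward-backward) followed by the squared Hilbert
    amplitude envelope."""
    nyq = fs / 2
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, x)          # zero-phase filtering
    return np.abs(hilbert(filtered)) ** 2  # instantaneous power
```

Averaging such envelopes across trials and comparing conditions (native forward vs. other utterances) gives the band-power time courses described in the abstract.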
Affiliation(s)
- Marcela Peña
- Scuola Internazionale Superiore di Studi Avanzati, Trieste, Italy
- Pontificia Universidad Católica de Chile
- Lucia Melloni
- Pontificia Universidad Católica de Chile
- Max Planck Institute for Brain Research, Frankfurt am Main, Germany
21
Widmann A, Schröger E, Tervaniemi M, Pakarinen S, Kujala T. Mapping symbols to sounds: electrophysiological correlates of the impaired reading process in dyslexia. Front Psychol 2012; 3:60. [PMID: 22403564 PMCID: PMC3291877 DOI: 10.3389/fpsyg.2012.00060] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2011] [Accepted: 02/15/2012] [Indexed: 12/01/2022] Open
Abstract
Dyslexic and control first-grade school children were compared in a Symbol-to-Sound matching test based on a non-linguistic audiovisual training which is known to have a remediating effect on dyslexia. Visual symbol patterns had to be matched with predicted sound patterns. Sounds incongruent with the corresponding visual symbol (thus not matching the prediction) elicited the N2b and P3a event-related potential (ERP) components relative to congruent sounds in control children. Their ERPs resembled the ERP effects previously reported for healthy adults with this paradigm. In dyslexic children, N2b onset latency was delayed and its amplitude significantly reduced over the left hemisphere, whereas P3a was absent. Moreover, N2b amplitudes significantly correlated with the reading skills. ERPs to sound changes in a control condition were unaffected. In addition, correctly predicted sounds, that is, sounds that are congruent with the visual symbol, elicited an early induced auditory gamma band response (GBR) reflecting synchronization of brain activity in normal-reading children, as previously observed in healthy adults. However, dyslexic children showed no GBR. This indicates that visual symbolic and auditory sensory information are not integrated into a unitary audiovisual object representation in these children. Finally, incongruent sounds were followed by a later desynchronization of brain activity in the gamma band in both groups. This desynchronization was significantly larger in dyslexic children. Although both groups accomplished the task successfully, remarkable group differences in brain responses suggest that normal-reading children and dyslexic children recruit (partly) different brain mechanisms when solving the task. We propose that abnormal ERPs and GBRs in dyslexic readers indicate a deficit resulting in a widespread impairment in processing and integrating auditory and visual information and contributing to the reading impairment in dyslexia.
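The distinction between induced and evoked (phase-locked) oscillatory responses drawn in this abstract can be operationalized as follows; this is a generic sketch, not the authors' analysis code:

```python
import numpy as np

def evoked_and_induced_power(trials):
    """Separate phase-locked (evoked) from non-phase-locked (induced)
    activity. trials: (n_trials, n_samples) array of bandpass-filtered
    single-trial signals, time-locked to stimulus onset.

    Evoked power: power of the across-trial average, which survives
    averaging only for phase-locked activity. Induced power: mean
    single-trial power of the residual after subtracting the evoked
    response from each trial.
    """
    erp = trials.mean(axis=0)                 # phase-locked component
    evoked = erp ** 2
    induced = ((trials - erp) ** 2).mean(axis=0)
    return evoked, induced
```

Activity with random phase across trials cancels in the average and therefore appears almost entirely in the induced term.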
Affiliation(s)
- Andreas Widmann
- Institute of Psychology, University of Leipzig, Leipzig, Germany
22
Bendixen A, SanMiguel I, Schröger E. Early electrophysiological indicators for predictive processing in audition: A review. Int J Psychophysiol 2012; 83:120-31. [PMID: 21867734 DOI: 10.1016/j.ijpsycho.2011.08.003] [Citation(s) in RCA: 228] [Impact Index Per Article: 17.5] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2011] [Revised: 07/28/2011] [Accepted: 08/08/2011] [Indexed: 11/28/2022]
Affiliation(s)
- Alexandra Bendixen
- Institute for Psychology, University of Leipzig, Seeburgstraße 14-20, Leipzig, Germany.
23
Top down influence on visuo-tactile interaction modulates neural oscillatory responses. Neuroimage 2012; 59:3406-17. [DOI: 10.1016/j.neuroimage.2011.11.076] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2011] [Revised: 10/07/2011] [Accepted: 11/25/2011] [Indexed: 10/14/2022] Open
24
Grimm S, Recasens M, Althen H, Escera C. Ultrafast tracking of sound location changes as revealed by human auditory evoked potentials. Biol Psychol 2012; 89:232-9. [DOI: 10.1016/j.biopsycho.2011.10.014] [Citation(s) in RCA: 36] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2011] [Revised: 09/14/2011] [Accepted: 10/16/2011] [Indexed: 11/26/2022]
25
Cohen MX, Wilmes KA, van de Vijver I. Cortical electrophysiological network dynamics of feedback learning. Trends Cogn Sci 2011; 15:558-66. [DOI: 10.1016/j.tics.2011.10.004] [Citation(s) in RCA: 99] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2011] [Revised: 10/19/2011] [Accepted: 10/20/2011] [Indexed: 10/15/2022]
26
Engel A, Senkowski D, Schneider T. Multisensory Integration through Neural Coherence. Front Neurosci 2011. [DOI: 10.1201/9781439812174-10] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
27
Engel A, Senkowski D, Schneider T. Multisensory Integration through Neural Coherence. Front Neurosci 2011. [DOI: 10.1201/b11092-10] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
28
Mossbridge JA, Grabowecky M, Suzuki S. Changes in auditory frequency guide visual-spatial attention. Cognition 2011; 121:133-9. [PMID: 21741633 DOI: 10.1016/j.cognition.2011.06.003] [Citation(s) in RCA: 40] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2010] [Revised: 04/21/2011] [Accepted: 06/03/2011] [Indexed: 11/26/2022]
Abstract
How do the characteristics of sounds influence the allocation of visual-spatial attention? Natural sounds typically change in frequency. Here we demonstrate that the direction of frequency change guides visual-spatial attention more strongly than the average or ending frequency, and provide evidence suggesting that this cross-modal effect may be mediated by perceptual experience. We used a Go/No-Go color-matching task to avoid response compatibility confounds. Participants performed the task either with their heads upright or tilted by 90°, misaligning the head-centered and environmental axes. The first of two colored circles was presented at fixation and the second was presented in one of four surrounding positions in a cardinal or diagonal direction. Either an ascending or descending auditory-frequency sweep was presented coincident with the first circle. Participants were instructed to respond to the color match between the two circles and to ignore the uninformative sounds. Ascending frequency sweeps facilitated performance (response time and/or sensitivity) when the second circle was presented at the cardinal top position and descending sweeps facilitated performance when the second circle was presented at the cardinal bottom position; there were no effects of the average or ending frequency. The sweeps had no effects when circles were presented at diagonal locations, and head tilt entirely eliminated the effect. Thus, visual-spatial cueing by pitch change is narrowly tuned to vertical directions and dominates any effect of average or ending frequency. Because this cross-modal cueing is dependent on the alignment of head-centered and environmental axes, it may develop through associative learning during waking upright experience.
Affiliation(s)
- Julia A Mossbridge
- Department of Psychology, Northwestern University, Evanston, IL 60208, USA.
29
The effects of visual material and temporal synchrony on the processing of letters and speech sounds. Exp Brain Res 2011; 211:287-98. [DOI: 10.1007/s00221-011-2686-z] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2010] [Accepted: 04/06/2011] [Indexed: 10/18/2022]
30
Park JY, Park H, Kim JI, Park HJ. Consonant chords stimulate higher EEG gamma activity than dissonant chords. Neurosci Lett 2011; 488:101-5. [DOI: 10.1016/j.neulet.2010.11.011] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2010] [Revised: 10/28/2010] [Accepted: 11/03/2010] [Indexed: 10/18/2022]
31
Poch C, Campo P, Parmentier FBR, Ruiz-Vargas JM, Elsley JV, Castellanos NP, Maestú F, del Pozo F. Explicit processing of verbal and spatial features during letter-location binding modulates oscillatory activity of a fronto-parietal network. Neuropsychologia 2010; 48:3846-54. [PMID: 20868702 DOI: 10.1016/j.neuropsychologia.2010.09.015] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2010] [Revised: 07/27/2010] [Accepted: 09/15/2010] [Indexed: 11/18/2022]
Abstract
The present study investigated the binding of verbal and spatial features in immediate memory. In a recent study, we demonstrated incidental and asymmetrical letter-location binding effects when participants attended to letter features (but not when they attended to location features); these effects were associated with greater oscillatory activity over prefrontal and posterior regions during the retention period. We were interested in whether the patterns of brain activity associated with the incidental binding of letters and locations observed when only the verbal feature is attended differ from those reflecting the binding resulting from the controlled/explicit processing of both verbal and spatial features. To achieve this, neural activity was recorded using magnetoencephalography (MEG) while participants performed two working memory tasks. Both tasks were identical in terms of their perceptual characteristics and only differed with respect to the task instructions. One of the tasks required participants to process both letters and locations. In the other, participants were instructed to memorize only the letters, regardless of their location. Time-frequency representation of MEG data based on the wavelet transform of the signals was calculated on a single trial basis during the maintenance period of both tasks. Critically, despite equivalent behavioural binding effects in both tasks, single and dual feature encoding relied on different neuroanatomical and neural oscillatory correlates. We propose that enhanced activation of an anterior-posterior dorsal network observed in the task requiring the processing of both features reflects the necessity for allocating greater resources to intentionally process verbal and spatial features in this task.
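Single-trial wavelet-based time-frequency representations like those used in this MEG study are typically computed by convolving each trial with complex Morlet wavelets. A sketch with assumed parameters (7-cycle wavelets, unit-energy normalization), not the study's exact settings:

```python
import numpy as np

def morlet_tfr(trial, fs, freqs, n_cycles=7.0):
    """Single-trial time-frequency power via convolution with
    complex Morlet wavelets. trial: 1-D signal; freqs: list of
    center frequencies (Hz)."""
    n = trial.size
    power = np.empty((len(freqs), n))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)        # temporal width
        t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit energy
        analytic = np.convolve(trial, wavelet, mode="same")
        power[i] = np.abs(analytic) ** 2
    return power
```

Averaging such single-trial power maps across trials yields total (evoked plus induced) power during the maintenance period.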
Affiliation(s)
- Claudia Poch
- Laboratory of Cognitive and Computational Neuroscience, Complutense University of Madrid-Polytechnic University of Madrid, Madrid, Spain
32
Scalp-Recorded Induced Gamma-Band Responses to Auditory Stimulation and Its Correlations with Saccadic Muscle-Activity. Brain Topogr 2010; 24:30-9. [DOI: 10.1007/s10548-010-0157-7] [Citation(s) in RCA: 33] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2010] [Accepted: 07/15/2010] [Indexed: 11/26/2022]
33
Slabu L, Escera C, Grimm S, Costa-Faidella J. Early change detection in humans as revealed by auditory brainstem and middle-latency evoked potentials. Eur J Neurosci 2010; 32:859-65. [DOI: 10.1111/j.1460-9568.2010.07324.x] [Citation(s) in RCA: 85] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
34
Is my mobile ringing? Evidence for rapid processing of a personally significant sound in humans. J Neurosci 2010; 30:7310-3. [PMID: 20505097 DOI: 10.1523/jneurosci.1113-10.2010] [Citation(s) in RCA: 38] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Anecdotal reports as well as empirical observations suggest a preferential processing of personally significant sounds. The utterance of one's own name, the ringing of one's own telephone, or the like appear to be especially effective for capturing attention. However, there is a lack of knowledge about the time course and functional neuroanatomy of the voluntary and the involuntary detection of personally significant sounds. To address this issue, we applied an active and a passive listening paradigm, in which male and female human participants were presented with the SMS ringtone of their own mobile and others' ringtones, respectively. Enhanced evoked oscillatory activity in the 35-75 Hz band for one's own ringtone shows that the brain distinguishes complex personally significant and nonsignificant sounds, starting as early as 40 ms after sound onset. While in animals it has been reported that the primary auditory cortex accounts for acoustic experience-based memory matching processes, results from the present study suggest that in humans these processes are not confined to sensory processing areas. In particular, we found a coactivation of left auditory areas and left frontal gyri during passive listening. Active listening evoked additional involvement of sensory processing areas in the right hemisphere. This supports the idea that top-down mechanisms affect stimulus representations even at the level of sensory cortices. Furthermore, active detection of sounds additionally activated the superior parietal lobe, supporting the existence of a frontoparietal network of selective attention.
35
Bubic A, von Cramon DY, Schubotz RI. Prediction, cognition and the brain. Front Hum Neurosci 2010; 4:25. [PMID: 20631856 PMCID: PMC2904053 DOI: 10.3389/fnhum.2010.00025] [Citation(s) in RCA: 205] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2009] [Accepted: 03/07/2010] [Indexed: 12/03/2022] Open
Abstract
The term “predictive brain” depicts one of the most relevant concepts in cognitive neuroscience which emphasizes the importance of “looking into the future”, namely prediction, preparation, anticipation, prospection or expectations in various cognitive domains. Analogously, it has been suggested that predictive processing represents one of the fundamental principles of neural computations and that errors of prediction may be crucial for driving neural and cognitive processes as well as behavior. This review discusses research areas which have recognized the importance of prediction and introduces the relevant terminology and leading theories in the field in an attempt to abstract some generative mechanisms of predictive processing. Furthermore, we discuss the process of testing the validity of postulated expectations by matching these to the realized events, and compare the subsequent processing of events that confirm the initial predictions with that of events that violate them. We conclude by suggesting that, although a lot is known about this type of processing, there are still many open issues which need to be resolved before a unified theory of predictive processing can be postulated with regard to both cognitive and neural functioning.
Affiliation(s)
- Andreja Bubic
- Department of Cognitive Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
36
Abstract
In reverberant environments, the brain can suppress echoes so that auditory perception is dominated by the primary or leading sounds. Echo suppression comprises at least two distinct phenomena whose neural bases are unknown: spatial translocation of an echo toward the primary sound, and object capture to combine echo and primary sounds into a single event. In an electroencephalography study, we presented subjects with primary-echo (leading-lagging) click pairs in virtual acoustic space, with interclick delay at the individual's 50% suppression threshold. On each trial, subjects reported both click location (one or both hemifields) and the number of clicks they heard (one or two). Thus, the threshold stimulus led to two common percepts: Suppressed and Not Suppressed. On some trials, a subset of subjects reported an intermediate percept, in which two clicks were perceived in the same hemifield as the leading click, providing a dissociation between spatial translocation and object capture. We conducted time-frequency and event-related potential analyses to examine the time course of the neural mechanisms mediating echo suppression. Enhanced gamma band phase synchronization (peaking at approximately 40 Hz) specific to successful echo suppression was evident from 20 to 60 ms after stimulus onset. N1 latency provided a categorical neural marker of spatial translocation, whereas N1 amplitude still reflected the physical presence of a second (lagging) click. These results provide evidence that (1) echo suppression begins early, at the latest when the acoustic signal first reaches cortex, and (2) the brain spatially translocates a perceived echo before the primary sound captures it.
37
Campo P, Poch C, Parmentier FB, Moratti S, Elsley JV, Castellanos NP, Ruiz-Vargas JM, del Pozo F, Maestú F. Oscillatory activity in prefrontal and posterior regions during implicit letter-location binding. Neuroimage 2010; 49:2807-15. [DOI: 10.1016/j.neuroimage.2009.10.024] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2009] [Revised: 10/07/2009] [Accepted: 10/09/2009] [Indexed: 10/20/2022] Open
38
Schadow J, Lenz D, Dettler N, Fründ I, Herrmann CS. Early gamma-band responses reflect anticipatory top-down modulation in the auditory cortex. Neuroimage 2009; 47:651-8. [DOI: 10.1016/j.neuroimage.2009.04.074] [Citation(s) in RCA: 31] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2008] [Revised: 03/06/2009] [Accepted: 04/23/2009] [Indexed: 10/20/2022] Open
39
Bendixen A, Schröger E, Winkler I. I heard that coming: event-related potential evidence for stimulus-driven prediction in the auditory system. J Neurosci 2009; 29:8447-51. [PMID: 19571135 PMCID: PMC6665649 DOI: 10.1523/jneurosci.1493-09.2009] [Citation(s) in RCA: 134] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2009] [Revised: 05/11/2009] [Accepted: 05/16/2009] [Indexed: 11/21/2022] Open
Abstract
The auditory system has been shown to detect predictability in a tone sequence, but does it use the extracted regularities for actually predicting the continuation of the sequence? The present study sought to find evidence for the generation of such predictions. Predictability was manipulated in an isochronous series of tones in which every other tone was a repetition of its predecessor. The existence of predictions was probed by occasionally omitting either the first (unpredictable) or the second (predictable) tone of a same-frequency tone pair. Event-related electrical brain activity elicited by the omission of an unpredictable tone differed from the response to the actual tone right from the tone onset. In contrast, early electrical brain activity elicited by the omission of a predictable tone was quite similar to the response to the actual tone. This suggests that the auditory system preactivates the neural circuits for expected input, using sequential predictions to specifically prepare for future acoustic events.
Affiliation(s)
- Alexandra Bendixen
- Institute for Psychology, Hungarian Academy of Sciences, H-1068 Budapest, Hungary.
|
40
|
Senkowski D, Schneider TR, Tandler F, Engel AK. Gamma-band activity reflects multisensory matching in working memory. Exp Brain Res 2009; 198:363-72. [DOI: 10.1007/s00221-009-1835-0]
|
41
|
Melloni L, Schwiedrzik CM, Wibral M, Rodriguez E, Singer W. Response to: Yuval-Greenberg et al., "Transient Induced Gamma-Band Response in EEG as a Manifestation of Miniature Saccades." Neuron 58, 429-441. Neuron 2009; 62:8-10; author reply 10-12. [PMID: 19376062] [DOI: 10.1016/j.neuron.2009.04.002]
|
42
|
Shahin AJ, Picton TW, Miller LM. Brain oscillations during semantic evaluation of speech. Brain Cogn 2009; 70:259-66. [PMID: 19324486] [DOI: 10.1016/j.bandc.2009.02.008]
Abstract
Changes in oscillatory brain activity have been related to perceptual and cognitive processes such as selective attention and memory matching. Here we examined brain oscillations, measured with electroencephalography (EEG), during a semantic speech processing task that required both lexically mediated memory matching and selective attention. Participants listened to nouns spoken in male and female voices, and detected an animate target (p=20%) in a train of inanimate standards or vice versa. For a control task, subjects listened to the same words and detected a target male voice in standards of a female voice or vice versa. The standard trials of the semantic task showed enhanced upper beta (25-30 Hz) and gamma band (GBA, 30-60 Hz) activity compared to the voice task. Upper beta and GBA enhancement was accompanied by a suppression of alpha (8-12 Hz) and lower to mid beta (13-20 Hz) activity mainly localized to posterior electrodes. Enhancement of phase-locked theta activity peaking near 275 ms also occurred over the midline electrodes. Theta, upper beta, and gamma band enhancement may reflect lexically mediated template matching in auditory memory, whereas the alpha and beta suppression likely indicates increased attentional processes and memory demands.
Affiliation(s)
- Antoine J Shahin
- UC Davis Center for Mind and Brain, 267 Cousteau Place, Davis, CA 95618, USA.
|
43
|
Baess P, Widmann A, Roye A, Schröger E, Jacobsen T. Attenuated human auditory middle latency response and evoked 40-Hz response to self-initiated sounds. Eur J Neurosci 2009; 29:1514-21. [PMID: 19323693] [DOI: 10.1111/j.1460-9568.2009.06683.x]
Abstract
For several modalities, it has been shown that the processing of sensory information generated by our own actions is attenuated relative to the processing of sensory information of externally generated stimuli. It has been proposed that the underlying mechanism builds predictions about the forthcoming sensory input and forwards them to the respective sensory processing levels. The present study investigated whether early auditory processing is suppressed by the top-down influences of such an internal forward model mechanism. To this end, we compared auditory middle latency responses (MLRs) and evoked 40-Hz responses elicited by self-initiated sounds with those elicited by externally initiated but otherwise identical sounds. In the self-initiated condition, the amplitudes of the Pa (27-33 ms relative to sound onset) and Nb (40-46 ms) components of the MLRs were significantly attenuated when compared to the responses elicited by click sounds presented in the externally initiated condition. Similarly, the evoked activity in the 40-Hz and adjacent frequency bands was attenuated. Considering that previous research revealed subcortical and auditory cortex contributions to MLRs and 40-Hz responses, our results support the existence of auditory suppression effects with self-initiated sounds on temporally and structurally early auditory processing levels. This attenuation in the processing of self-initiated sounds most probably contributes to the optimal processing of concurrent external acoustic events.
Affiliation(s)
- Pamela Baess
- Department of Psychology, Max-Planck-Institute for Human Cognitive and Brain Science, Leipzig, Germany.
|
44
|
Kanayama N, Sato A, Ohira H. The role of gamma band oscillations and synchrony on rubber hand illusion and crossmodal integration. Brain Cogn 2009; 69:19-29. [DOI: 10.1016/j.bandc.2008.05.001]
|
45
|
Articulatory mediation of speech perception: a causal analysis of multi-modal imaging data. Cognition 2008; 110:222-36. [PMID: 19110238] [DOI: 10.1016/j.cognition.2008.11.011]
Abstract
The inherent confound between the organization of articulation and the acoustic-phonetic structure of the speech signal makes it exceptionally difficult to evaluate the competing claims of motor and acoustic-phonetic accounts of how listeners recognize coarticulated speech. Here we use Granger causality analyses of high spatiotemporal resolution neural activation data derived from the integration of magnetic resonance imaging, magnetoencephalography and electroencephalography, to examine the role of lexical and articulatory mediation in listeners' ability to use phonetic context to compensate for place assimilation. Listeners heard two-word phrases such as pen pad and then saw two pictures, from which they had to select the one that depicted the phrase. Assimilation, lexical competitor environment and the phonological validity of assimilation context were all manipulated. Behavioral data showed an effect of context on the interpretation of assimilated segments. Analysis of 40 Hz gamma phase locking patterns identified a large distributed neural network including 16 distinct regions of interest (ROIs) spanning portions of both hemispheres in the first 200 ms of post-assimilation context. Granger analyses of individual conditions showed differing patterns of causal interaction between ROIs during this interval, with hypothesized lexical and articulatory structures and pathways driving phonetic activation in the posterior superior temporal gyrus in assimilation conditions, but not in phonetically unambiguous conditions. These results lend strong support to the motor theory of speech perception, and clarify the role of lexical mediation in the phonetic processing of assimilated speech.
|
46
|
Enhanced EEG gamma-band activity reflects multisensory semantic matching in visual-to-auditory object priming. Neuroimage 2008; 42:1244-54. [DOI: 10.1016/j.neuroimage.2008.05.033]
|
47
|
Senkowski D, Schneider TR, Foxe JJ, Engel AK. Crossmodal binding through neural coherence: implications for multisensory processing. Trends Neurosci 2008; 31:401-9. [PMID: 18602171] [DOI: 10.1016/j.tins.2008.05.002]
Affiliation(s)
- Daniel Senkowski
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
|