1
Naeije G, Niesen M, Vander Ghinst M, Bourguignon M. Simultaneous EEG recording of cortical tracking of speech and movement kinematics. Neuroscience 2024;561:1-10. PMID: 39395635. DOI: 10.1016/j.neuroscience.2024.10.013.
Abstract
Rationale: Cortical activity is coupled with streams of sensory stimulation. The coupling with the temporal envelope of heard speech is known as the cortical tracking of speech (CTS), and that with movement kinematics as corticokinematic coupling (CKC). Simultaneous measurement of both couplings is desirable in clinical settings, but it is unknown whether the inherent dual-tasking condition has an impact on CTS or CKC.
Aim: To determine whether and how CTS and CKC levels are affected when recorded simultaneously.
Methods: Twenty-three healthy young adults underwent 64-channel EEG recordings while listening to stories and while performing repetitive finger-tapping movements in three conditions: separately (audio- or tapping-only) or simultaneously (audio-tapping). CTS and CKC values were estimated using coherence analysis between each EEG signal and the speech temporal envelope (CTS) or finger acceleration (CKC). CTS was also estimated as the reconstruction accuracy of a decoding model.
Results: Across recordings, CTS assessed with reconstruction accuracy was significant in 85% of the subjects at the phrasal frequency (0.5 Hz) and in 68% at syllabic frequencies (4-8 Hz), and CKC was significant in over 85% of the subjects at the movement frequency and its first harmonic. Comparing CTS and CKC values from separate recordings with those from simultaneous recordings revealed no significant differences and moderate-to-high levels of correlation.
Conclusion: Despite subtle behavioral effects, CTS and CKC are not evidently altered by the dual-task setting inherent to recording them simultaneously, and both can be evaluated simultaneously using EEG in clinical settings.
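The coherence analysis named in the Methods can be illustrated in a few lines: magnitude-squared coherence between one EEG channel and a stimulus signal such as the speech envelope. This is a minimal sketch with synthetic signals; the sampling rate, neural delay, and noise levels are illustrative assumptions, not values from the study.

```python
# Sketch of the coherence measure described above (CTS/CKC): magnitude-squared
# coherence between one EEG channel and a stimulus signal (speech envelope or
# finger acceleration). All signals are synthetic; parameters are illustrative.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 256                       # sampling rate in Hz (assumed)
t = np.arange(60 * fs) / fs    # 60 s of data

# Hypothetical "speech envelope": a slow 0.5 Hz rhythm (the phrasal rate
# analysed in the study) plus noise.
envelope = np.sin(2 * np.pi * 0.5 * t) + 0.5 * rng.standard_normal(t.size)

# Hypothetical EEG channel: a delayed, attenuated copy of the envelope
# buried in much larger background activity (100 ms delay, assumed).
lag = int(0.1 * fs)
eeg = 0.3 * np.roll(envelope, lag) + rng.standard_normal(t.size)

f, coh = coherence(eeg, envelope, fs=fs, nperseg=4 * fs)
peak_f = f[np.argmax(coh)]
print(f"peak coherence {coh.max():.2f} at {peak_f:.2f} Hz")
```

Coherence is bounded between 0 and 1 and, for a stimulus-locked response, peaks at the stimulation frequency.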
Affiliation(s)
- Gilles Naeije: Laboratoire de Neuroanatomie et Neuroimagerie Translationnelles, UNI - ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium; Centre de Référence Neuromusculaire, Department of Neurology, HUB Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
- Maxime Niesen: Laboratoire de Neuroanatomie et Neuroimagerie Translationnelles, UNI - ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium; Service d'ORL et de chirurgie cervico-faciale, HUB Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
- Marc Vander Ghinst: Laboratoire de Neuroanatomie et Neuroimagerie Translationnelles, UNI - ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium; Service d'ORL et de chirurgie cervico-faciale, HUB Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
- Mathieu Bourguignon: Laboratoire de Neuroanatomie et Neuroimagerie Translationnelles, UNI - ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium; Laboratory of Neurophysiology and Movement Biomechanics, UNI - ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
2
Xie Z, Brodbeck C, Chandrasekaran B. Cortical Tracking of Continuous Speech Under Bimodal Divided Attention. Neurobiology of Language 2023;4:318-343. PMID: 37229509. PMCID: PMC10205152. DOI: 10.1162/nol_a_00100.
Abstract
Speech processing often occurs amid competing inputs from other modalities, for example, listening to the radio while driving. We examined the extent to which dividing attention between auditory and visual modalities (bimodal divided attention) impacts neural processing of natural continuous speech from acoustic to linguistic levels of representation. We recorded electroencephalographic (EEG) responses when human participants performed a challenging primary visual task, imposing low or high cognitive load while listening to audiobook stories as a secondary task. The two dual-task conditions were contrasted with an auditory single-task condition in which participants attended to stories while ignoring visual stimuli. Behaviorally, the high load dual-task condition was associated with lower speech comprehension accuracy relative to the other two conditions. We fitted multivariate temporal response function encoding models to predict EEG responses from acoustic and linguistic speech features at different representation levels, including auditory spectrograms and information-theoretic models of sublexical-, word-form-, and sentence-level representations. Neural tracking of most acoustic and linguistic features remained unchanged with increasing dual-task load, despite unambiguous behavioral and neural evidence of the high load dual-task condition being more demanding. Compared to the auditory single-task condition, dual-task conditions selectively reduced neural tracking of only some acoustic and linguistic features, mainly at latencies >200 ms, while earlier latencies were surprisingly unaffected. These findings indicate that behavioral effects of bimodal divided attention on continuous speech processing occur not because of impaired early sensory representations but likely at later cognitive processing stages. Crossmodal attention-related mechanisms may not be uniform across different speech processing levels.
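The multivariate temporal response function (mTRF) encoding approach described above amounts to regularised linear regression from time-lagged stimulus features to the EEG. Below is a minimal single-feature sketch on synthetic data; all parameters (sampling rate, lag range, ridge penalty) are illustrative assumptions, and real analyses use dedicated toolboxes.

```python
# Bare-bones sketch of a temporal response function (TRF) encoding model:
# ridge regression from time-lagged stimulus samples to a simulated EEG
# channel, then comparison against the ground-truth kernel. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
fs = 64                                  # Hz, assumed
n = 30 * fs                              # 30 s of data
stim = rng.standard_normal(n)            # one stimulus feature (e.g. envelope)

# Ground-truth TRF: a damped oscillation over 0-250 ms, used to simulate EEG.
lags = np.arange(int(0.25 * fs))
true_trf = np.exp(-lags / 8.0) * np.sin(2 * np.pi * lags / 16.0)

# EEG = stimulus convolved with the TRF, plus noise.
eeg = np.convolve(stim, true_trf)[:n] + 0.5 * rng.standard_normal(n)

# Lagged design matrix: X[t, k] = stim[t - k]; zero out wrap-around rows.
X = np.stack([np.roll(stim, k) for k in lags], axis=1)
X[: lags.size] = 0

# Ridge solution: w = (X'X + lambda*I)^-1 X'y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(lags.size), X.T @ eeg)

r = np.corrcoef(w, true_trf)[0, 1]
print(f"correlation between estimated and true TRF: {r:.2f}")
```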
Affiliation(s)
- Zilong Xie: School of Communication Science and Disorders, Florida State University, Tallahassee, FL, USA
- Christian Brodbeck: Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA
- Bharath Chandrasekaran: Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
3
Kritzman L, Eidelman-Rothman M, Keil A, Freche D, Sheppes G, Levit-Binnun N. Steady-state visual evoked potentials differentiate between internally and externally directed attention. Neuroimage 2022;254:119133. PMID: 35339684. DOI: 10.1016/j.neuroimage.2022.119133.
Abstract
While attention to external visual stimuli has been extensively studied, attention directed internally towards mental contents (e.g., thoughts, memories) or bodily signals (e.g., breathing, heartbeat) has only recently become a subject of increased interest, due to its relation to interoception, contemplative practices and mental health. The present study aimed at expanding the methodological toolbox for studying internal attention, by examining for the first time whether the steady-state visual evoked potential (ssVEP), a well-established measure of attention, can differentiate between internally and externally directed attention. To this end, we designed a task in which flickering dots were used to generate ssVEPs, and instructed participants to count visual targets (external attention condition) or their heartbeats (internal attention condition). We compared the ssVEP responses between conditions, along with alpha-band activity and the heartbeat evoked potential (HEP), two electrophysiological measures associated with internally directed attention. Consistent with our hypotheses, we found that both the magnitude and the phase synchronization of the ssVEP decreased when attention was directed internally, suggesting that ssVEP measures are able to differentiate between internal and external attention. Additionally, and in line with previous findings, we found larger suppression of parieto-occipital alpha-band activity and an increase in HEP amplitude in the internal attention condition. Furthermore, we found a trade-off between changes in ssVEP response and changes in HEP and alpha-band activity: when shifting from internal to external attention, an increase in ssVEP response was related to a decrease in parieto-occipital alpha-band activity and HEP amplitudes. These findings suggest that shifting between externally and internally directed attention prompts a re-allocation of limited processing resources that are shared between external sensory and interoceptive processing.
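The two ssVEP measures compared above, response magnitude and phase synchronization, can both be read off the trial spectra at the flicker frequency. Below is a minimal sketch with synthetic, phase-locked trials; the tag frequency and trial layout are assumptions for illustration.

```python
# Sketch of the two ssVEP measures contrasted above: spectral amplitude at
# the flicker ("tag") frequency and inter-trial phase coherence (ITC).
# Single-channel synthetic trials; 7.5 Hz tag and trial counts are assumed.
import numpy as np

rng = np.random.default_rng(2)
fs, f_tag, dur, n_trials = 250, 7.5, 2.0, 40
t = np.arange(int(dur * fs)) / fs

# Trials phase-locked to the flicker, plus independent noise per trial.
trials = (np.sin(2 * np.pi * f_tag * t)
          + rng.standard_normal((n_trials, t.size)))

spectra = np.fft.rfft(trials, axis=1)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = np.argmin(np.abs(freqs - f_tag))      # FFT bin of the tag frequency

# Magnitude: amplitude of the trial-averaged spectrum at the tag bin,
# scaled back to signal units.
amplitude = np.abs(spectra.mean(axis=0))[k] / t.size * 2

# Phase synchronization: length of the mean unit phase vector, in [0, 1].
itc = np.abs(np.exp(1j * np.angle(spectra[:, k])).mean())
print(f"amplitude {amplitude:.2f}, ITC {itc:.2f}")
```

Internally directed attention would show up in such an analysis as a drop in both `amplitude` and `itc` relative to the external condition.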
Affiliation(s)
- Lior Kritzman: School of Psychological Sciences, Tel Aviv University, Israel; Sagol Center for Brain and Mind, Reichman University, Israel
- Andreas Keil: Center for the Study of Emotion & Attention, University of Florida, USA
- Dominik Freche: Sagol Center for Brain and Mind, Reichman University, Israel; Physics of Complex Systems, Weizmann Institute of Science, Israel
- Gal Sheppes: School of Psychological Sciences, Tel Aviv University, Israel
4
Stroh AL, Grin K, Rösler F, Bottari D, Ossandón J, Rossion B, Röder B. Developmental experiences alter the temporal processing characteristics of the visual cortex: Evidence from deaf and hearing native signers. Eur J Neurosci 2022;55:1629-1644. PMID: 35193156. DOI: 10.1111/ejn.15629.
Abstract
To date, the extent to which early experience shapes the functional characteristics of neural circuits is still a matter of debate. In the present study, we tested whether congenital deafness and/or the acquisition of a sign language alter the temporal processing characteristics of the visual system. Moreover, we investigated whether, assuming cross-modal plasticity in deaf individuals, the temporal processing characteristics of possibly reorganised auditory areas resemble those of the visual cortex. Steady-state visual evoked potentials (SSVEPs) were recorded in congenitally deaf native signers, hearing native signers, and hearing nonsigners. The luminance of the visual stimuli was periodically modulated at 12, 21, and 40 Hz. For hearing nonsigners, the optimal driving rate was 12 Hz. By contrast, for the group of hearing signers the optimal driving rate was 12 and 21 Hz, whereas for the group of deaf signers the optimal driving rate was 21 Hz. We did not observe evidence for cross-modal recruitment of auditory cortex in the group of deaf signers. These results suggest a higher preferred neural processing rate as a consequence of the acquisition of a sign language.
Affiliation(s)
- Anna-Lena Stroh: Biological Psychology and Neuropsychology, University of Hamburg, Germany; Institute of Psychology, Jagiellonian University, Kraków, Poland
- Konstantin Grin: Biological Psychology and Neuropsychology, University of Hamburg, Germany
- Frank Rösler: Biological Psychology and Neuropsychology, University of Hamburg, Germany
- Davide Bottari: Biological Psychology and Neuropsychology, University of Hamburg, Germany; IMT School for Advanced Studies Lucca, Italy
- José Ossandón: Biological Psychology and Neuropsychology, University of Hamburg, Germany
- Bruno Rossion: Université de Lorraine, CNRS, CRAN, Nancy, France; Université de Lorraine, CHRU-Nancy, Service de Neurochirurgie, Nancy, France
- Brigitte Röder: Biological Psychology and Neuropsychology, University of Hamburg, Germany
5
Kern L, Niedeggen M. ERP signatures of auditory awareness in cross-modal distractor-induced deafness. Conscious Cogn 2021;96:103241. PMID: 34823076. DOI: 10.1016/j.concog.2021.103241.
Abstract
Previous research showed that dual-task processes such as the attentional blink are not always transferable from unimodal to cross-modal settings. This study investigated whether such a transfer can be stated for a distractor-induced impairment of target detection established in vision (distractor-induced blindness, DIB) and recently observed in the auditory modality (distractor-induced deafness, DID). A cross-modal DID effect was confirmed: the detection of an auditory target indicated by a visual cue was impaired if multiple auditory distractors preceded the target. Event-related potentials (ERPs) were used to identify psychophysiological correlates of target detection. A frontal negativity at about 200 ms, followed by a sustained, widespread negativity, was associated with auditory target awareness. In contrast to unimodal findings, P3 amplitude was not enhanced for hits. The results support the notion that early frontal attentional processes are linked to auditory awareness, whereas the P3 does not seem to be a reliable indicator of target access.
Affiliation(s)
- Lea Kern: Freie Universität Berlin, Department of Education and Psychology, Division General Psychology and Neuropsychology, Habelschwerdter Allee 45, 14195 Berlin, Germany
- Michael Niedeggen: Freie Universität Berlin, Department of Education and Psychology, Division General Psychology and Neuropsychology, Habelschwerdter Allee 45, 14195 Berlin, Germany
6
Gouraud J, Delorme A, Berberian B. Mind Wandering Influences EEG Signal in Complex Multimodal Environments. Frontiers in Neuroergonomics 2021;2:625343. PMID: 38236482. PMCID: PMC10790857. DOI: 10.3389/fnrgo.2021.625343.
Abstract
The phenomenon of mind wandering (MW), as a family of experiences related to internally directed cognition, heavily influences vigilance evolution. In particular, operators in teleoperation settings, who monitor a partially automated fleet before assuming manual control whenever necessary, may see their attention drift due to internal sources; as such, MW could play an important role in the emergence of out-of-the-loop (OOTL) situations and associated performance problems. To follow, quantify, and mitigate this phenomenon, electroencephalogram (EEG) systems have already demonstrated robust results. As MW creates an attentional decoupling, both ERPs and brain oscillations are impacted. However, the factors influencing these markers in complex environments are still not fully understood. In this paper, we specifically addressed the possibility of gradual emergence of attentional decoupling and the differences created by the sensory modality used to convey targets. Eighteen participants were asked to (1) supervise an automated drone performing an obstacle avoidance task (visual task) and (2) respond to infrequent beeps as fast as possible (auditory task). We measured event-related potentials and alpha waves through EEG. We also added a 40-Hz amplitude-modulated brown noise to evoke a steady-state auditory response (ASSR). Reported MW episodes were categorized as task-related or task-unrelated. We found that the N1 ERP component elicited by beeps had lower amplitude during task-unrelated MW, whereas the P3 component had higher amplitude during task-related MW, compared with other attentional states. Focusing on parieto-occipital regions, alpha-wave activity was higher during task-unrelated MW compared with other states. These results support the decoupling hypothesis for task-unrelated MW but not task-related MW, highlighting possible variations in the "depth" of decoupling depending on MW episodes. Finally, we found no influence of attentional states on ASSR amplitude, and we discuss possible reasons for this null result. The results underline both the ability of EEG to track and study MW in laboratory tasks mimicking ecological environments, and the complex influence of perceptual decoupling on operators' behavior and, in particular, EEG measures.
Affiliation(s)
- Jonas Gouraud: Systems Control and Flight Dynamics Department, Office National d'Etudes et de Recherche Aérospatiales, Salon de Provence, France
- Arnaud Delorme: Center of Research on Brain and Cognition (UMR 5549), Centre National de Recherche Scientifique, Toulouse, France
- Bruno Berberian: Systems Control and Flight Dynamics Department, Office National d'Etudes et de Recherche Aérospatiales, Salon de Provence, France
7
Visual load effects on the auditory steady-state responses to 20-, 40-, and 80-Hz amplitude-modulated tones. Physiol Behav 2021;228:113240. PMID: 33188789. DOI: 10.1016/j.physbeh.2020.113240.
Abstract
Ignoring background sounds while focusing on a visual task is a necessary ability in everyday life. If attentional resources are shared between modalities, processing of task-irrelevant auditory information should become attenuated when attentional capacity is expended by visual demands. According to the early-filter model, top-down attenuation of auditory responses is possible at various stages of the auditory pathway through multiple recurrent loops. Furthermore, the adaptive filtering model of selective attention suggests that filtering occurs early when concurrent visual tasks are demanding (e.g., high load) and late when tasks are easy (e.g., low load). To test these models, this study examined the effects of three levels of visual load on auditory steady-state responses (ASSRs) at three modulation frequencies. Subjects performed a visual task with no, low, and high visual load while ignoring task-irrelevant sounds. The auditory stimuli were 500-Hz tones amplitude-modulated at 20, 40, or 80 Hz to target different processing stages of the auditory pathway. Results from Bayesian analyses suggest that ASSRs are unaffected by visual load. These findings imply that attentional resources are modality specific and that the attentional filter of auditory processing does not vary with visual task demands.
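ASSRs such as those analysed above are commonly quantified as the spectral amplitude at the modulation frequency relative to neighbouring frequency bins. The sketch below shows one such signal-to-noise measure on synthetic data; the SNR convention and all parameters are illustrative assumptions, not the study's exact pipeline.

```python
# Sketch of a common ASSR quantification: amplitude at the modulation
# frequency divided by the mean amplitude of neighbouring bins. Synthetic
# EEG with a weak 40 Hz steady-state component; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)
fs, dur = 1000, 10.0
t = np.arange(int(fs * dur)) / fs

def assr_snr(eeg, f_mod, fs, n_neighbors=10):
    """Amplitude at f_mod relative to the mean of neighbouring FFT bins."""
    amp = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(eeg.size, 1 / fs)
    k = np.argmin(np.abs(freqs - f_mod))
    # neighbouring bins on both sides, excluding the signal bin itself
    idx = np.r_[k - n_neighbors:k, k + 1:k + n_neighbors + 1]
    return amp[k] / amp[idx].mean()

# Simulated EEG: small 40 Hz steady-state response buried in noise.
eeg = 0.2 * np.sin(2 * np.pi * 40 * t) + rng.standard_normal(t.size)
for f in (20, 40, 80):
    print(f"{f} Hz SNR: {assr_snr(eeg, f, fs):.1f}")
```

Only the 40 Hz bin stands out here; in the study, comparable per-frequency measures were compared across visual-load conditions with Bayesian statistics.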
8
No intermodal interference effects of threatening information during concurrent audiovisual stimulation. Neuropsychologia 2019;136:107283. PMID: 31783079. DOI: 10.1016/j.neuropsychologia.2019.107283.
Abstract
Changes in attention can result in sensory processing trade-off effects, in which sensory cortical responses to attended stimuli are heightened and responses to competing distractors are attenuated. However, it is unclear if competition or facilitation effects will be observed at the level of sensory cortex when attending to competing stimuli in two modalities. The present study used electroencephalogram (EEG) and frequency-tagging to quantitatively assess auditory-visual interactions during sustained multimodal sensory stimulation. The emotional content of a 6.66 Hz rapid serial visual presentation (RSVP) was manipulated to elicit well-established emotional attention effects, while a constant 63 dB tone with a 40.8 Hz modulation served as a concurrent auditory stimulus in two experiments. As a directed attention manipulation, participants were instructed to detect transient sound level events in the auditory stream in Experiment 1. To manipulate attention through threat anticipation, participants were instructed to expect an aversive noise burst after a higher 40.8 Hz modulated tone in Experiment 2. Each stimulus evoked reliable steady-state sensory cortical responses in all participants (n = 30) in both experiments. The visual cortical responses were modulated by the auditory detection task, but not by threat anticipation: Visual responses were smaller during auditory streams with a transient target as compared to uninterrupted auditory streams. Conversely, visual stimulus condition had no significant effects on auditory sensory cortical responses in either experiment. These results indicate that there is neither a competition nor facilitation effect of visual content on concurrent auditory sensory cortical processing. They further indicate that competition effects of auditory stream content on sustained visuocortical responses are limited to auditory target processing.
9
Fisher JT, Huskey R, Keene JR, Weber R. The limited capacity model of motivated mediated message processing: looking to the future. Annals of the International Communication Association 2018. DOI: 10.1080/23808985.2018.1534551.
Affiliation(s)
- Jacob T. Fisher: Media Neuroscience Lab, Department of Communication, UC Santa Barbara, Santa Barbara, CA, USA
- Richard Huskey: Cognitive Communication Science Lab, School of Communication, Ohio State University, Columbus, OH, USA
- Justin Robert Keene: Department of Journalism and Creative Media Industries, Cognition & Emotion Lab, College of Media & Communication, Texas Tech University, Lubbock, TX, USA
- René Weber: Media Neuroscience Lab, Department of Communication, UC Santa Barbara, Santa Barbara, CA, USA
10
Tan X, Fu Q, Yuan H, Ding L, Wang T. Improved Transient Response Estimations in Predicting 40 Hz Auditory Steady-State Response Using Deconvolution Methods. Front Neurosci 2018;11:697. PMID: 29311778. PMCID: PMC5732975. DOI: 10.3389/fnins.2017.00697.
Abstract
The auditory steady-state response (ASSR) is one of the main clinical approaches for health screening and frequency-specific hearing assessment, yet its generation mechanism remains controversial. In the present study, the linear superposition hypothesis for the generation of ASSRs was investigated by comparing the classical 40 Hz ASSR with three synthetic ASSRs obtained from three different templates for the transient auditory evoked potential (AEP). These three AEPs are the traditional AEP at 5 Hz and two 40 Hz AEPs derived from two deconvolution algorithms using stimulus sequences: continuous loop averaging deconvolution (CLAD) and multi-rate steady-state average deconvolution (MSAD). CLAD requires irregular inter-stimulus intervals (ISIs) in the sequence, whereas MSAD uses evenly spaced stimulus sequences with identical ISIs, mimicking the classical 40 Hz ASSR. These reconstructed templates have been reported to show similar patterns but significant differences in morphology and distinct frequency characteristics in the synthetic ASSRs. The prediction accuracies of the three templates differed significantly (p < 0.05) from the classical 40 Hz ASSR at 45.95%, 36.28%, and 10.84% of the time points within four cycles of the ASSR for the traditional, CLAD, and MSAD templates, respectively, with the ASSR synthesized from the MSAD transient AEP showing the best similarity. This similarity also held at the individual level, where only MSAD showed no statistically significant difference (Hotelling's T2 test, T2 = 6.96, F = 0.80, p = 0.592) compared with the classical 40 Hz ASSR. The present results indicate that both stimulation rate and sequencing factors (ISI variation) affect transient AEP reconstructions from steady-state stimulation protocols. Furthermore, both the auditory brainstem response (ABR) and the middle latency response (MLR) are observed to contribute to the composition of the ASSR, but with variable weights across the three templates. The significantly improved prediction accuracy achieved by MSAD strongly supports the linear superposition mechanism of the ASSR, provided an accurate template of the transient AEP can be reconstructed. The capacity to obtain both the ASSR and its underlying transient components accurately and simultaneously has the potential to contribute significantly to the diagnosis of patients with neuropsychiatric disorders.
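The linear superposition hypothesis tested above can be made concrete in a few lines: convolving a transient AEP template with a 40 Hz stimulus train yields a synthetic steady-state response. The template below is a toy stand-in, not a reconstructed AEP from CLAD or MSAD.

```python
# Sketch of the linear-superposition idea: a synthetic ASSR built by
# convolving a transient AEP template with a 40 Hz impulse train. The
# template shape is a toy stand-in for a reconstructed AEP.
import numpy as np

fs = 1000
t = np.arange(int(0.1 * fs)) / fs                        # 100 ms template
template = np.exp(-t / 0.03) * np.sin(2 * np.pi * 30 * t)  # toy transient AEP

# 40 Hz stimulus train: one impulse every 25 ms for 1 s.
train = np.zeros(fs)
train[:: fs // 40] = 1.0

synthetic_assr = np.convolve(train, template)[: train.size]

# After onset transients settle, the superposed response repeats every 25 ms.
steady = synthetic_assr[200:]
period = fs // 40
drift = np.abs(steady[:-period] - steady[period:]).max()
print(f"max deviation from 25 ms periodicity: {drift:.2e}")
```

Because the template (100 ms) outlasts the stimulus period (25 ms), successive transient responses overlap, which is exactly what makes the 40 Hz ASSR a superposition rather than a train of isolated AEPs.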
Affiliation(s)
- Xiaodan Tan: School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Qiuyang Fu: Department of Otolaryngology, Guangdong Second Provincial General Hospital, Guangzhou, China
- Han Yuan: Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK, United States
- Lei Ding: Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK, United States
- Tao Wang: College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
11
Timora J, Budd T. Steady-State EEG and Psychophysical Measures of Multisensory Integration to Cross-Modally Synchronous and Asynchronous Acoustic and Vibrotactile Amplitude Modulation Rate. Multisens Res 2018;31:391-418. DOI: 10.1163/22134808-00002549.
Abstract
According to the temporal principle of multisensory integration, cross-modal synchronisation of stimulus onset facilitates multisensory integration. This is typically observed as a greater response to multisensory stimulation relative to the sum of the constituent unisensory responses (i.e., superadditivity). The aim of the present study was to examine whether the temporal principle extends to the cross-modal synchrony of amplitude-modulation (AM) rate. It is well established that psychophysical sensitivity to AM stimulation is strongly influenced by AM rate, where the optimum rate differs according to sensory modality. This rate-dependent sensitivity is also apparent from EEG steady-state response (SSR) activity, which becomes entrained to the stimulation rate and is thought to reflect neural processing of the temporal characteristics of AM stimulation. In this study we investigated whether cross-modal congruence of AM rate reveals both psychophysical and EEG evidence of enhanced multisensory integration. To achieve this, EEG SSR and psychophysical sensitivity to simultaneous acoustic and/or vibrotactile AM stimuli were measured at cross-modally congruent and incongruent AM rates. While the results provided no evidence of superadditive multisensory SSR activity or psychophysical sensitivity, the complex pattern of results did reveal a consistent correspondence between SSR activity and psychophysical sensitivity to AM stimulation. This indicates that entrained EEG activity may provide a direct measure of cortical activity underlying multisensory integration. Consistent with the temporal principle of multisensory integration, increased vibrotactile SSR responses and psychophysical sensitivity were found for cross-modally congruent relative to incongruent AM rates. However, no corresponding increase in auditory SSR or psychophysical sensitivity was observed for cross-modally congruent AM rates. This complex pattern of results can be understood in terms of the likely influence of the principle of inverse effectiveness, where the temporal principle of multisensory integration was only evident in the context of reduced perceptual sensitivity for the vibrotactile but not the auditory modality.
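The superadditivity criterion used above has a simple numeric form: a multisensory response is superadditive when it exceeds the sum of the constituent unisensory responses. The values below are hypothetical, for illustration only.

```python
# The superadditivity criterion in its simplest numeric form. The response
# values are hypothetical, purely for illustration.
def is_superadditive(multi, uni_a, uni_b):
    """True if the multisensory response exceeds the unisensory sum."""
    return multi > uni_a + uni_b

# Hypothetical SSR amplitudes (arbitrary units)
auditory_only = 1.2
tactile_only = 0.8
audio_tactile = 1.9   # below the unisensory sum of 2.0: not superadditive

print(is_superadditive(audio_tactile, auditory_only, tactile_only))
```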
Affiliation(s)
- Justin R. Timora: Brain Imaging Lab, School of Psychology, University of Newcastle, Ourimbah, NSW, Australia
- Timothy W. Budd: Brain Imaging Lab, School of Psychology, University of Newcastle, Ourimbah, NSW, Australia
12
Covic A, Keitel C, Porcu E, Schröger E, Müller MM. Audio-visual synchrony and spatial attention enhance processing of dynamic visual stimulation independently and in parallel: A frequency-tagging study. Neuroimage 2017;161:32-42. PMID: 28802870. DOI: 10.1016/j.neuroimage.2017.08.022.
Abstract
The neural processing of a visual stimulus can be facilitated by attending to its position or by a co-occurring auditory tone. Using frequency-tagging, we investigated whether facilitation by spatial attention and audio-visual synchrony rely on similar neural processes. Participants attended to one of two flickering Gabor patches (14.17 and 17 Hz) located in opposite lower visual fields. Gabor patches further "pulsed" (i.e., showed smooth spatial frequency variations) at distinct rates (3.14 and 3.63 Hz). Frequency-modulating an auditory stimulus at the pulse rate of one of the visual stimuli established audio-visual synchrony. Flicker and pulsed stimulation elicited stimulus-locked rhythmic electrophysiological brain responses that allowed us to track the neural processing of simultaneously presented Gabor patches. These steady-state responses (SSRs) were quantified in the spectral domain to examine visual stimulus processing under conditions of synchronous vs. asynchronous tone presentation and when the respective stimulus positions were attended vs. unattended. Strikingly, unique patterns of effects on pulse- and flicker-driven SSRs indicated that spatial attention and audio-visual synchrony facilitated early visual processing in parallel and via different cortical processes. Attention effects resembled the classical top-down gain effect, facilitating both flicker- and pulse-driven SSRs. Audio-visual synchrony, in turn, only amplified synchrony-producing stimulus aspects (i.e., pulse-driven SSRs), possibly highlighting the role of temporally co-occurring sights and sounds in bottom-up multisensory integration.
Affiliation(s)
- Amra Covic: Institut für Psychologie, Universität Leipzig, Neumarkt 9-19, 04109 Leipzig, Germany; Institut für Medizinische Psychologie und Medizinische Soziologie, Universitätsmedizin Göttingen, Georg-August-Universität, 37973 Göttingen, Germany
- Christian Keitel: Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, G12 8QB Glasgow, UK
- Emanuele Porcu: Institut für Psychologie, Otto-von-Guericke-Universität Magdeburg, Universitätsplatz 2, Gebäude 23, 39106 Magdeburg, Germany
- Erich Schröger: Institut für Psychologie, Universität Leipzig, Neumarkt 9-19, 04109 Leipzig, Germany
- Matthias M Müller: Institut für Psychologie, Universität Leipzig, Neumarkt 9-19, 04109 Leipzig, Germany
13
Wahn B, König P. Is Attentional Resource Allocation Across Sensory Modalities Task-Dependent? Adv Cogn Psychol 2017;13:83-96. PMID: 28450975. PMCID: PMC5405449. DOI: 10.5709/acp-0209-2.
Abstract
Human information processing is limited by attentional resources. That is, via attentional mechanisms, humans select a limited amount of sensory input to process while other sensory input is neglected. In multisensory research, a matter of ongoing debate is whether there are distinct pools of attentional resources for each sensory modality or whether attentional resources are shared across sensory modalities. Recent studies have suggested that attentional resource allocation across sensory modalities is in part task-dependent. That is, the recruitment of attentional resources across the sensory modalities depends on whether processing involves object-based attention (e.g., the discrimination of stimulus attributes) or spatial attention (e.g., the localization of stimuli). In the present paper, we review findings in multisensory research related to this view. For the visual and auditory sensory modalities, findings suggest that distinct resources are recruited when humans perform object-based attention tasks, whereas for the visual and tactile sensory modalities, partially shared resources are recruited. If object-based attention tasks are time-critical, shared resources are recruited across the sensory modalities. When humans perform an object-based attention task in combination with a spatial attention task, partly shared resources are recruited across the sensory modalities as well. Conversely, for spatial attention tasks, attentional processing consistently involves shared attentional resources across the sensory modalities. Generally, findings suggest that the attentional system flexibly allocates attentional resources depending on task demands. We propose that such flexibility reflects a large-scale optimization strategy that minimizes the brain's costly resource expenditures and simultaneously maximizes the capability to process currently relevant information.
Affiliation(s)
- Basil Wahn
- Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany
- Peter König
- Institut für Neurophysiologie und Pathophysiologie, Universitätsklinikum Hamburg-Eppendorf, Hamburg, Germany
14. Zhang D, Hong B, Gao S, Röder B. Exploring the temporal dynamics of sustained and transient spatial attention using steady-state visual evoked potentials. Exp Brain Res 2017; 235:1575-1591. [PMID: 28258437] [DOI: 10.1007/s00221-017-4907-6]
Abstract
While the behavioral dynamics as well as the functional network of sustained and transient attention have been studied extensively, their underlying neural mechanisms have most often been investigated in separate experiments. In the present study, participants were instructed to perform an audio-visual spatial attention task. They were asked to attend to either the left or the right hemifield and to respond to transient deviant stimuli, either auditory or visual. Steady-state visual evoked potentials (SSVEPs) elicited by two task-irrelevant pattern-reversing checkerboards, flickering at 10 and 15 Hz in the left and the right hemifields, respectively, were used to continuously monitor the locus of spatial attention. The amplitude and phase of the SSVEPs were extracted for single trials and were analyzed separately. Sustained attention to one hemifield (spatial attention) as well as to the auditory modality (intermodal attention) increased the inter-trial phase locking of the SSVEP responses, whereas briefly presented visual and auditory stimuli decreased the single-trial SSVEP amplitude between 200 and 500 ms post-stimulus. This transient change of the single-trial amplitude was restricted to the SSVEPs elicited by the reversing checkerboard in the spatially attended hemifield and thus might reflect a transient re-orienting of attention towards the brief stimuli. Thus, the present results demonstrate independent but interacting neural mechanisms of sustained and transient attentional orienting.
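The inter-trial phase-locking measure used here (sometimes called the phase-locking value or inter-trial coherence) can be sketched on simulated data. This is an illustrative sketch, not the study's code: each trial is projected onto a complex sinusoid at the SSVEP frequency, and locking is the magnitude of the mean unit phase vector across trials (1 = identical phase every trial, near 0 = random phase). All parameter values are assumptions.

```python
import numpy as np

def inter_trial_phase_locking(trials, fs, freq):
    """Magnitude of the mean unit phase vector across trials at `freq`."""
    n = trials.shape[1]
    t = np.arange(n) / fs
    # Complex projection of each trial onto the target frequency
    phasors = trials @ np.exp(-2j * np.pi * freq * t)
    return np.abs(np.mean(phasors / np.abs(phasors)))

fs, dur, n_trials = 250, 2.0, 50
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(1)

# Stimulus-locked trials: identical 10-Hz phase on every trial, plus noise
locked = np.array([np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
                   for _ in range(n_trials)])
# Control trials: random 10-Hz phase on every trial
jittered = np.array([np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))
                     + rng.normal(0, 1, t.size) for _ in range(n_trials)])

plv_locked = inter_trial_phase_locking(locked, fs, 10.0)      # near 1
plv_jittered = inter_trial_phase_locking(jittered, fs, 10.0)  # near 0
```

Because the measure discards amplitude, it can dissociate the phase-locking increase reported for sustained attention from the transient single-trial amplitude decrease reported after brief stimuli.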
Affiliation(s)
- Dan Zhang
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146, Hamburg, Germany; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, China; Department of Psychology, School of Social Sciences, Tsinghua University, Beijing, 100084, China
- Bo Hong
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, China
- Shangkai Gao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, China
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146, Hamburg, Germany
15.
16. Visual cortex responses reflect temporal structure of continuous quasi-rhythmic sensory stimulation. Neuroimage 2016; 146:58-70. [PMID: 27867090] [PMCID: PMC5312821] [DOI: 10.1016/j.neuroimage.2016.11.043]
Abstract
Neural processing of dynamic continuous visual input, and cognitive influences thereon, are frequently studied in paradigms employing strictly rhythmic stimulation. However, the temporal structure of natural stimuli is hardly ever fully rhythmic but possesses certain spectral bandwidths (e.g. lip movements in speech, gestures). Examining periodic brain responses elicited by strictly rhythmic stimulation might thus represent ideal, yet isolated cases. Here, we tested how the visual system reflects quasi-rhythmic stimulation with frequencies continuously varying within ranges of classical theta (4–7 Hz), alpha (8–13 Hz) and beta bands (14–20 Hz) using EEG. Our findings substantiate a systematic and sustained neural phase-locking to stimulation in all three frequency ranges. Further, we found that allocation of spatial attention enhances EEG-stimulus locking to theta- and alpha-band stimulation. Our results bridge recent findings regarding phase locking (“entrainment”) to quasi-rhythmic visual input and “frequency-tagging” experiments employing strictly rhythmic stimulation. We propose that sustained EEG-stimulus locking can be considered as a continuous neural signature of processing dynamic sensory input in early visual cortices. Accordingly, EEG-stimulus locking serves to trace the temporal evolution of rhythmic as well as quasi-rhythmic visual input and is subject to attentional bias.
Highlights:
- Dynamic visual stimuli constitute large parts of our perceptual experience.
- Strictly rhythmic dynamics condense in EEG-recorded mass-neural activity.
- We tested how stimuli with fluctuating rhythms reflect in the EEG.
- We found that the EEG allows tracing two quasi-rhythmic stimuli in parallel.
- Dynamics of attended stimuli may be tracked with greater temporal precision.
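EEG-stimulus locking to a quasi-rhythmic input is commonly assessed with spectral coherence between the recorded signal and the stimulus time course. The sketch below is an assumption-laden illustration, not the study's analysis: it simulates a stimulus whose instantaneous frequency drifts within the theta band (4–7 Hz), builds a noisy "EEG" as a lagged copy of it, and computes magnitude-squared coherence with `scipy.signal.coherence` (assumed available).

```python
import numpy as np
from scipy.signal import coherence

fs, dur = 250, 60
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(3)

# Quasi-rhythmic stimulus: instantaneous frequency drifts within 4-7 Hz
inst_freq = 5.5 + 1.5 * np.sin(2 * np.pi * 0.1 * t)
stimulus = np.sin(2 * np.pi * np.cumsum(inst_freq) / fs)

# "EEG": lagged, attenuated copy of the stimulus buried in noise
lag = int(0.07 * fs)                                  # ~70 ms neural delay
eeg = 0.4 * np.roll(stimulus, lag) + rng.normal(0, 1, t.size)

# Welch-style magnitude-squared coherence, 2-s segments (0.5-Hz resolution)
f, cxy = coherence(eeg, stimulus, fs=fs, nperseg=2 * fs)
theta = (f >= 4) & (f <= 7)      # band containing the drifting stimulus
outside = (f >= 14) & (f <= 20)  # control band with no stimulus energy
```

With this construction, coherence is elevated across the whole theta band rather than at a single tagged line, which is the signature distinguishing quasi-rhythmic locking from the strictly rhythmic "frequency-tagging" case.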
17. Ruhnau P, Keitel C, Lithari C, Weisz N, Neuling T. Flicker-Driven Responses in Visual Cortex Change during Matched-Frequency Transcranial Alternating Current Stimulation. Front Hum Neurosci 2016; 10:184. [PMID: 27199707] [PMCID: PMC4844646] [DOI: 10.3389/fnhum.2016.00184]
Abstract
We tested a novel combination of two neuro-stimulation techniques, transcranial alternating current stimulation (tACS) and frequency tagging, that promises powerful paradigms to study the causal role of rhythmic brain activity in perception and cognition. Participants viewed a stimulus flickering at 7 or 11 Hz that elicited periodic brain activity, termed steady-state responses (SSRs), at the same temporal frequency and its higher order harmonics. Further, they received simultaneous tACS at 7 or 11 Hz that either matched or differed from the flicker frequency. Sham tACS served as a control condition. Recent advances in reconstructing cortical sources of oscillatory activity allowed us to measure SSRs during concurrent tACS, which is known to impose strong artifacts in magnetoencephalographic (MEG) recordings. For the first time, we were thus able to demonstrate immediate effects of tACS on SSR-indexed early visual processing. Our data suggest that tACS effects are largely frequency-specific and reveal a characteristic pattern of differential influences on the harmonic constituents of SSRs.
Affiliation(s)
- Philipp Ruhnau
- Centre for Cognitive Neuroscience, University of Salzburg, Salzburg, Austria; Center for Mind/Brain Science, University of Trento, Mattarello, Italy
- Christian Keitel
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
- Chrysa Lithari
- Centre for Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Nathan Weisz
- Centre for Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Toralf Neuling
- Centre for Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
18. Large-scale network-level processes during entrainment. Brain Res 2016; 1635:143-52. [PMID: 26835557] [PMCID: PMC4786120] [DOI: 10.1016/j.brainres.2016.01.043]
Abstract
Visual rhythmic stimulation evokes a robust power increase exactly at the stimulation frequency, the so-called steady-state response (SSR). Localization of visual SSRs normally shows a very focal modulation of power in visual cortex and led to the treatment and interpretation of SSRs as a local phenomenon. Given the brain network dynamics, we hypothesized that SSRs have additional large-scale effects on the brain functional network that can be revealed by means of graph theory. We used rhythmic visual stimulation at a range of frequencies (4–30 Hz), recorded MEG and investigated source-level connectivity across the whole brain. Using graph theoretical measures we observed a frequency-unspecific reduction of global density in the alpha band, “disconnecting” visual cortex from the rest of the network. Also, a frequency-specific increase of connectivity between occipital cortex and precuneus was found at the stimulation frequency that exhibited the highest resonance (30 Hz). In conclusion, we showed that SSRs dynamically re-organized the brain functional network. These large-scale effects should be taken into account not only when attempting to explain the nature of SSRs, but also when they are used in various experimental designs.
Highlights:
- Visual entrainment is considered mostly to modulate cortical power locally.
- Instead, we hypothesized large-scale effects in the brain functional network.
- Graph theoretical analysis was combined with MEG source localization.
- Visual entrainment indeed yielded network-level effects.
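The "global density" measure from graph theory mentioned here has a simple definition: the fraction of possible connections whose strength exceeds a threshold. The sketch below is a minimal illustration on a simulated connectivity matrix, not the study's MEG pipeline; the matrix size and threshold are arbitrary assumptions.

```python
import numpy as np

def global_density(conn, threshold):
    """Fraction of unique node pairs whose connectivity exceeds `threshold`
    (undirected graph; diagonal ignored)."""
    n = conn.shape[0]
    iu = np.triu_indices(n, k=1)          # upper triangle = unique pairs
    return np.mean(conn[iu] > threshold)

def node_degree(conn, threshold):
    """Number of supra-threshold connections per node (diagonal ignored)."""
    adj = (conn > threshold) & ~np.eye(conn.shape[0], dtype=bool)
    return adj.sum(axis=1)

# Simulated symmetric connectivity (e.g. coherence) matrix for 8 sources
rng = np.random.default_rng(2)
raw = rng.uniform(0, 1, (8, 8))
conn = (raw + raw.T) / 2                  # enforce symmetry
np.fill_diagonal(conn, 1.0)               # self-connections, excluded below

dens = global_density(conn, threshold=0.5)
```

A band-specific drop in `dens` during stimulation, relative to baseline, is the kind of effect the study reports for the alpha band; per-node degrees then localize which regions lose connections.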
19. Schettino A, Rossi V, Pourtois G, Müller MM. Involuntary attentional orienting in the absence of awareness speeds up early sensory processing. Cortex 2015; 74:107-17. [PMID: 26673944] [DOI: 10.1016/j.cortex.2015.10.016]
Abstract
A long-standing controversy in human neuroscience has revolved around the question of whether attended stimuli are processed more rapidly than unattended stimuli. We conducted two event-related potential (ERP) experiments employing a temporal order judgment procedure to assess whether involuntary attention accelerates sensory processing, as indicated by latency modulations of early visual ERP components. A non-reportable exogenous cue could precede the first target with equal probability at the same (compatible) or opposite (incompatible) location. The use of non-reportable cues promoted automatic, bottom-up attentional capture and ensured the elimination of any confounds related to the use of stimulus features that are common to both cue and target. Behavioral results confirmed involuntary exogenous orienting towards the unaware cue. ERP results showed that the N1pc, an electrophysiological measure of attentional orienting, was smaller and peaked earlier in compatible as opposed to incompatible trials, indicating cue-dependent changes in the magnitude and speed of first-target processing in extrastriate visual areas. Complementary Bayesian analysis confirmed the presence of this effect regardless of whether participants were actively looking for the cue (Experiment 1) or were not informed of it (Experiment 2), indicating purely automatic, stimulus-driven orienting mechanisms.
Affiliation(s)
- Valentina Rossi
- Department of Experimental-Clinical and Health Psychology, Ghent University, Ghent, Belgium
- Gilles Pourtois
- Department of Experimental-Clinical and Health Psychology, Ghent University, Ghent, Belgium
20. Keitel C, Müller MM. Audio-visual synchrony and feature-selective attention co-amplify early visual processing. Exp Brain Res 2015; 234:1221-31. [PMID: 26226930] [DOI: 10.1007/s00221-015-4392-8]
Abstract
Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
Affiliation(s)
- Christian Keitel
- Institute of Neuroscience and Psychology, University of Glasgow, Hillhead Street 58, Glasgow, G12 8QB, UK
- Matthias M Müller
- Institut für Psychologie, Universität Leipzig, Neumarkt 9-19, 04109, Leipzig, Germany
21. Markkula G. Answering questions about consciousness by modeling perception as covert behavior. Front Psychol 2015; 6:803. [PMID: 26136704] [PMCID: PMC4468364] [DOI: 10.3389/fpsyg.2015.00803]
Abstract
Two main open questions in current consciousness research concern (i) the neural correlates of consciousness (NCC) and (ii) the relationship between neural activity and first-person, subjective experience. Here, possible answers are sketched for both of these, by means of a model-based analysis of what is required for one to admit having a conscious experience. To this end, a model is proposed that allows reasoning, albeit necessarily in a simplistic manner, about all of the so called “easy problems” of consciousness, from discrimination of stimuli to control of behavior and language. First, it is argued that current neuroscientific knowledge supports the view of perception and action selection as two examples of the same basic phenomenon, such that one can meaningfully refer to neuronal activations involved in perception as covert behavior. Building on existing neuroscientific and psychological models, a narrative behavior model is proposed, outlining how the brain selects covert (and sometimes overt) behaviors to construct a complex, multi-level narrative about what it is like to be the individual in question. It is hypothesized that we tend to admit a conscious experience of X if, at the time of judging consciousness, we find ourselves acceptably capable of performing narrative behavior describing X. It is argued that the proposed account reconciles seemingly conflicting empirical results, previously presented as evidence for competing theories of consciousness, and suggests that well-defined, experiment-independent NCCs are unlikely to exist. Finally, an analysis is made of what the modeled narrative behavior machinery is and is not capable of. It is discussed how an organism endowed with such a machinery could, from its first-person perspective, come to adopt notions such as “subjective experience,” and of there being “hard problems,” and “explanatory gaps” to be addressed in order to understand consciousness.
Affiliation(s)
- Gustav Markkula
- Adaptive Systems Group, Division of Vehicle Engineering and Autonomous Systems, Department of Applied Mechanics, Chalmers University of Technology, Gothenburg, Sweden
22. Melcher T, Pfister R, Busmann M, Schlüter MC, Leyhe T, Gruber O. Functional characteristics of control adaptation in intermodal sensory processing. Brain Cogn 2015; 96:43-55. [PMID: 25917247] [DOI: 10.1016/j.bandc.2015.03.003]
Abstract
The present work investigated functional characteristics of control adjustments in intermodal sensory processing. Subjects performed an interference task that involved simultaneously presented visual and auditory stimuli which were either congruent or incongruent with respect to their response mappings. In two experiments, trial-by-trial sequential congruency effects were analysed for specific conditions that allowed ruling out "non-executive" contributions of stimulus or response priming to the respective RT fluctuations. In Experiment 1, conflict adaptation was observed in an oddball condition in which interference emanates from a task-irrelevant and response-neutral low-frequency stimulus. This finding characterizes intermodal control adjustments to be based - at least partly - on increased sensory selectivity, which is able to improve performance in any kind of interference condition which shares the same or overlapping attentional requirements. In order to further specify this attentional mechanism, Experiment 2 defined analogous conflict adaptation effects in non-interference unimodal trials in which just one of the two stimulus modalities was presented. Conflict adaptation effects in unimodal trials exclusively occurred for unimodal task-switch trials but not for otherwise equivalent task repetition trials, which suggests that the observed conflict-triggered control adjustments mainly consist of increased distractor inhibition (i.e., down-regulation of task-irrelevant information), while attributing a negligible role to target amplification (i.e., enhancement of task-relevant information) in this setup. This behavioral study yields a promising operational basis for subsequent neuroimaging investigations to define brain activations and connectivities which underlie the adaptive control of attentional selection.
Affiliation(s)
- Tobias Melcher
- Center of Old Age Psychiatry, Psychiatric University Hospital, Basel, Switzerland; Centre for Translational Research in Systems Neuroscience and Clinical Psychiatry, Department of Psychiatry and Psychotherapy, Georg-August-University, Goettingen, Germany
- Roland Pfister
- Department of Cognitive Psychology, University of Wuerzburg, Germany
- Mareike Busmann
- Centre for Translational Research in Systems Neuroscience and Clinical Psychiatry, Department of Psychiatry and Psychotherapy, Georg-August-University, Goettingen, Germany; Department of Psychosomatic Medicine and Psychotherapy, Curtius Hospital Luebeck, Germany
- Thomas Leyhe
- Center of Old Age Psychiatry, Psychiatric University Hospital, Basel, Switzerland
- Oliver Gruber
- Centre for Translational Research in Systems Neuroscience and Clinical Psychiatry, Department of Psychiatry and Psychotherapy, Georg-August-University, Goettingen, Germany
23. Stimulus-driven brain oscillations in the alpha range: entrainment of intrinsic rhythms or frequency-following response? J Neurosci 2014; 34:10137-40. [PMID: 25080577] [DOI: 10.1523/jneurosci.1904-14.2014]