1. Wang J, Wang J, Hu J, Tong S, Hong X, Sun J. Willed Attentional Selection of Visual Features: An EEG Study. IEEE Trans Neural Syst Rehabil Eng 2024; 32:1586-1595. PMID: 38557619. DOI: 10.1109/tnsre.2024.3383669.
Abstract
Visual selective attention studies generally apply cuing paradigms that instructively direct observers' attention to certain locations, features, or objects. However, in real situations, human attention often flows spontaneously without specific instructions. Recently, the concept of "willed attention" was introduced in visuospatial attention research, in which participants are free to make volitional attention decisions. Several ERP components during willed attention were identified, along with the view that ongoing alpha activity may bias the subsequent attentional choice. However, it remains unclear whether similar neural mechanisms exist in feature- or object-based willed attention. Here, we included choice cues and instruct cues in a feature-based selective attention paradigm, allowing participants to freely choose, or to be instructed, to attend to a color for a subsequent target detection task. Pre-cue ongoing alpha oscillations, cue-evoked potentials, and target-related steady-state visual evoked potentials (SSVEPs) were simultaneously measured as markers of attentional processing. As expected, SSVEP responses were similarly modulated by attention in choice and instruct cue trials. As in spatial attention, a willed-attention component (WAC) was isolated during the cue-related choice period by comparing choice and instruct cues. However, pre-cue ongoing alpha oscillations did not predict the color choice (yellow vs. blue), as indicated by chance-level decoding accuracy (50%). Overall, our results reveal both similarities and differences between spatial and feature-based willed attention, extending our understanding of the neural mechanisms of volitional attention.
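The reported chance-level decoding accuracy (50%) can be put in context with a simple binomial null model. The sketch below is illustrative only and not part of the study's analysis; the trial counts are assumed:

```python
# Illustrative check of whether an observed decoding accuracy is
# consistent with chance (p = 0.5), using an exact binomial tail.
# Trial counts here are assumptions, not values from the paper.
from math import comb

def binom_p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# e.g., 55 correct out of 100 decoded trials: well within chance
print(round(binom_p_at_least(55, 100), 3))
```

A tail probability this large gives no evidence that the decoder performs above chance, which is the sense in which "50% decoding accuracy" supports the paper's null result.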
2. Wan Z, Cheng W, Li M, Zhu R, Duan W. GDNet-EEG: An attention-aware deep neural network based on group depth-wise convolution for SSVEP stimulation frequency recognition. Front Neurosci 2023; 17:1160040. PMID: 37123356. PMCID: PMC10133471. DOI: 10.3389/fnins.2023.1160040.
Abstract
Background: Steady-state visual evoked potential (SSVEP)-based early glaucoma diagnosis requires effective data processing (e.g., deep learning) to provide accurate stimulation frequency recognition. We therefore propose GDNet-EEG, a novel electroencephalography (EEG)-oriented deep learning model based on group depth-wise convolution, tailored to learn the regional and network characteristics of EEG-based brain activity for SSVEP stimulation frequency recognition. Method: Group depth-wise convolution is proposed to extract temporal and spectral features from the EEG signal of each brain region and represent regional characteristics as diversely as possible. Furthermore, EEG attention, consisting of EEG channel-wise attention and specialized network-wise attention, is designed to identify essential brain regions and form significant feature maps as specialized brain functional networks. Two public SSVEP datasets (a large-scale benchmark and the BETA dataset) and their combination are used to validate the classification performance of our model. Results: For input samples with a signal length of 1 s, GDNet-EEG achieves average classification accuracies of 84.11%, 85.93%, and 93.35% on the benchmark, BETA, and combined datasets, respectively. Compared with the comparison baselines, GDNet-EEG trained on the combined dataset improved average classification accuracy by 1.96% to 18.2%. Conclusion: Our approach is potentially suitable for accurate SSVEP stimulation frequency recognition and for use in early glaucoma diagnosis.
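As a rough illustration of the depth-wise convolution idea in this abstract, where each EEG channel is filtered with its own temporal kernel rather than mixing channels, here is a minimal pure-Python sketch. The signals and kernels are invented, and the paper's channel grouping and attention modules are not reproduced:

```python
# Minimal depth-wise 1-D convolution sketch: each input channel is
# convolved with its own kernel, so temporal features are extracted
# per channel (per region) instead of being mixed across channels.
# Data and kernels below are illustrative assumptions.

def depthwise_conv1d(signals, kernels):
    """signals: list of per-channel sample lists; kernels: one kernel per channel."""
    out = []
    for sig, ker in zip(signals, kernels):
        k = len(ker)
        # 'valid' convolution: slide the channel's own kernel along time
        out.append([sum(sig[t + i] * ker[i] for i in range(k))
                    for t in range(len(sig) - k + 1)])
    return out

# Two EEG channels, each with its own 3-tap kernel
eeg = [[1.0, 2.0, 3.0, 4.0], [0.0, 1.0, 0.0, 1.0]]
kernels = [[1.0, 0.0, -1.0], [0.5, 0.5, 0.5]]
print(depthwise_conv1d(eeg, kernels))
```

In deep learning frameworks the same effect is obtained by setting the number of convolution groups equal to the number of channels; the per-channel kernels here play that role.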
Affiliation(s)
- Zhijiang Wan
- The First Affiliated Hospital of Nanchang University, Nanchang University, Nanchang, Jiangxi, China
- School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China
- Industrial Institute of Artificial Intelligence, Nanchang University, Nanchang, Jiangxi, China
- Wangxinjun Cheng
- Queen Mary College of Nanchang University, Nanchang University, Nanchang, Jiangxi, China
- Manyu Li
- School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China
- Renping Zhu
- School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China
- Industrial Institute of Artificial Intelligence, Nanchang University, Nanchang, Jiangxi, China
- School of Information Management, Wuhan University, Wuhan, China
- Wenfeng Duan
- The First Affiliated Hospital of Nanchang University, Nanchang University, Nanchang, Jiangxi, China
3. Zhang R, Xu Z, Zhang L, Cao L, Hu Y, Lu B, Shi L, Yao D, Zhao X. The effect of stimulus number on the recognition accuracy and information transfer rate of SSVEP-BCI in augmented reality. J Neural Eng 2022; 19. PMID: 35477130. DOI: 10.1088/1741-2552/ac6ae5.
Abstract
OBJECTIVE The biggest advantage of steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) lies in their large command set and high information transfer rate (ITR). Almost all current SSVEP-BCIs use a computer screen (CS) to present flickering visual stimuli, which limits their flexible use in real-world scenes. Augmented reality (AR) technology provides the ability to superimpose visual stimuli on the real world, considerably expanding the application scenarios of SSVEP-BCI. However, whether the advantages of SSVEP-BCI are maintained when the visual stimuli are moved to AR glasses is not known. This study investigated the effect of stimulus number on SSVEP-BCI in an AR context. APPROACH We designed SSVEP flickering stimulation interfaces with four different numbers of stimulus targets and displayed them in AR glasses and on a CS. Three common recognition algorithms were used to analyze the influence of stimulus number and stimulation time on the recognition accuracy and ITR of AR-SSVEP and CS-SSVEP. MAIN RESULTS The amplitude spectrum and signal-to-noise ratio of AR-SSVEP did not differ significantly from CS-SSVEP at the fundamental frequency but were significantly lower at the second harmonic. Recognition accuracy decreased as the stimulus number increased in AR-SSVEP but not in CS-SSVEP. As the stimulus number increased, the maximum ITR of CS-SSVEP also increased, but that of AR-SSVEP did not. When the stimulus number was 25, the maximum ITR (142.05 bits/min) was reached at 400 ms. The importance of stimulation time in SSVEP was confirmed: as stimulation time lengthened, the recognition accuracy of both AR-SSVEP and CS-SSVEP increased, peaking at 3 s, while the ITR first increased and then slowly decreased after reaching its peak.
SIGNIFICANCE Our study indicates that the conclusions based on CS-SSVEP cannot be simply applied to AR-SSVEP, and it is not advisable to set too many stimulus targets in the AR display device.
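ITR values such as those quoted above are conventionally computed with the Wolpaw formula. The sketch below uses assumed accuracy and selection-time values for illustration, not figures taken from the paper:

```python
# Wolpaw ITR for an N-target BCI, in bits/min.
# bits/selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
# The example values (accuracy, selection time) are assumptions.
import math

def itr_bits_per_min(n_targets, accuracy, seconds_per_selection):
    """Wolpaw ITR in bits/min; assumes 0 < accuracy <= 1."""
    n, p = n_targets, accuracy
    bits = math.log2(n)
    if p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / seconds_per_selection

# e.g., 25 targets, 90% accuracy, 2 s per selection (stimulation + gaze shift)
print(round(itr_bits_per_min(25, 0.90, 2.0), 2))
```

Note that the seconds-per-selection term usually includes gaze-shift and inter-stimulus time, not just the stimulation window, which is why a 400 ms stimulation time does not translate directly into 150 selections per minute.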
Affiliation(s)
- Rui Zhang
- School of Electrical Engineering, Zhengzhou University, Zhengzhou, 450001, China
- Zongxin Xu
- School of Electrical Engineering, Zhengzhou University, Zhengzhou, Henan, 450001, China
- Lipeng Zhang
- Zhengzhou University, Zhengzhou, 450001, China
- Lijun Cao
- Zhengzhou University, Zhengzhou, 450000, China
- Yuxia Hu
- Zhengzhou University, Zhengzhou, 450001, China
- Beihan Lu
- Zhengzhou University, Zhengzhou, 450001, China
- Li Shi
- Department of Automation, Tsinghua University, Beijing, 100084, China
- Dezhong Yao
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China
- Xincan Zhao
- Zhengzhou University, Zhengzhou, 450001, China
4. Kritzman L, Eidelman-Rothman M, Keil A, Freche D, Sheppes G, Levit-Binnun N. Steady-state visual evoked potentials differentiate between internally and externally directed attention. Neuroimage 2022; 254:119133. PMID: 35339684. DOI: 10.1016/j.neuroimage.2022.119133.
Abstract
While attention to external visual stimuli has been extensively studied, attention directed internally towards mental contents (e.g., thoughts, memories) or bodily signals (e.g., breathing, heartbeat) has only recently become a subject of increased interest, due to its relation to interoception, contemplative practices, and mental health. The present study aimed to expand the methodological toolbox for studying internal attention by examining, for the first time, whether the steady-state visual evoked potential (ssVEP), a well-established measure of attention, can differentiate between internally and externally directed attention. To this end, we designed a task in which flickering dots were used to generate ssVEPs, and instructed participants to count visual targets (external attention condition) or their heartbeats (internal attention condition). We compared the ssVEP responses between conditions, along with alpha-band activity and the heartbeat evoked potential (HEP), two electrophysiological measures associated with internally directed attention. Consistent with our hypotheses, we found that both the magnitude and the phase synchronization of the ssVEP decreased when attention was directed internally, suggesting that ssVEP measures can differentiate between internal and external attention. Additionally, and in line with previous findings, we found greater suppression of parieto-occipital alpha-band activity and an increase in HEP amplitude in the internal attention condition. Furthermore, we found a trade-off between changes in ssVEP response and changes in HEP and alpha-band activity: when shifting from internal to external attention, an increase in ssVEP response was related to a decrease in parieto-occipital alpha-band activity and HEP amplitude. These findings suggest that shifting between externally and internally directed attention prompts a re-allocation of limited processing resources shared between external sensory and interoceptive processing.
Affiliation(s)
- Lior Kritzman
- School of Psychological Sciences, Tel Aviv University, Israel; Sagol Center for Brain and Mind, Reichman University, Israel.
- Andreas Keil
- Center for the Study of Emotion & Attention, University of Florida, USA
- Dominik Freche
- Sagol Center for Brain and Mind, Reichman University, Israel; Physics of Complex Systems, Weizmann Institute of Science, Israel
- Gal Sheppes
- School of Psychological Sciences, Tel Aviv University, Israel
5. Shioiri S, Sasada T, Nishikawa R. Visual attention around a hand location localized by proprioceptive information. Cereb Cortex Commun 2022; 3:tgac005. PMID: 35224493. PMCID: PMC8867302. DOI: 10.1093/texcom/tgac005.
Abstract
Facilitation of visual processing has been reported in the space near the hand. To understand the mechanism underlying hand proximity attention, we conducted experiments that isolated hand-related effects from top–down attention, proprioceptive information from visual information, the position effect from the influence of action, and the distance effect from the peripersonal effect. The flash-lag effect was used as an index of attentional modulation. Because the results showed that the flash-lag effect was smaller at locations near the hand, we concluded that processing of visual stimuli is facilitated around the hand location identified through proprioceptive information. This was confirmed by conventional reaction time measures. We also measured steady-state visual evoked potentials (SSVEPs) to investigate the spatial properties of hand proximity attention and top–down attention. The results showed that the SSVEP reflects the effect of top–down attention but not that of hand proximity attention. This suggests that hand proximity attention operates at a later stage of visual processing, assuming that the SSVEP reflects neural activity at early stages. The results of left-handers differed from those of right-handers, which is discussed in relation to handedness variation.
Affiliation(s)
- Satoshi Shioiri
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
- Graduate School of Information Sciences, Tohoku University, Sendai, Japan
- Takumi Sasada
- Graduate School of Information Sciences, Tohoku University, Sendai, Japan
- Ryota Nishikawa
- Graduate School of Information Sciences, Tohoku University, Sendai, Japan
6. Wang MY, Yuan Z. EEG Decoding of Dynamic Facial Expressions of Emotion: Evidence from SSVEP and Causal Cortical Network Dynamics. Neuroscience 2021; 459:50-58. PMID: 33556458. DOI: 10.1016/j.neuroscience.2021.01.040.
Abstract
The neural cognitive mechanism of processing static facial expressions (FEs) has been well documented, whereas the one underlying the perception of dynamic faces remains unclear. In this study, Fourier transformation and time-frequency analysis of electroencephalography (EEG) data were carried out to detect brain activation underlying dynamic or static FEs while twenty-one participants viewed dynamic or static faces flickering at 10 Hz. In particular, steady-state visual evoked potentials (SSVEPs) were quantified through spectral power analysis of the EEG recordings. In addition, Granger causality analysis (GCA) was performed to capture the causal cortical network dynamics during dynamic or static FEs of emotion. Dynamic FEs (changing from neutral to happy, N2H, or vice versa, H2N) elicited larger SSVEPs than static ones. Additionally, GCA demonstrated that the H2N case, in which happy FEs gradually changed into neutral ones, exhibited a larger Granger causality measure during the late processing stage than during the early stage. The enhanced SSVEPs and effective brain connectivity for dynamic FEs thus suggest that participants may need to devote more attentional resources to processing dynamic faces, particularly the change from happy to neutral. This new neural index may help us better understand the cognitive processing of dynamic and static FEs.
Affiliation(s)
- Meng-Yun Wang
- Faculty of Health Sciences, University of Macau, Taipa, Macau SAR, China; Centre for Cognitive and Brain Sciences, University of Macau, Taipa, Macau SAR, China
- Zhen Yuan
- Faculty of Health Sciences, University of Macau, Taipa, Macau SAR, China; Centre for Cognitive and Brain Sciences, University of Macau, Taipa, Macau SAR, China
7. Davidson MJ, Mithen W, Hogendoorn H, van Boxtel JJA, Tsuchiya N. The SSVEP tracks attention, not consciousness, during perceptual filling-in. eLife 2020; 9:e60031. PMID: 33170121. PMCID: PMC7682990. DOI: 10.7554/elife.60031.
Abstract
Research on the neural basis of conscious perception has almost exclusively shown that becoming aware of a stimulus leads to increased neural responses. By designing a novel form of perceptual filling-in (PFI) overlaid with a dynamic texture display, we frequency-tagged multiple disappearing targets as well as their surroundings. We show that in a PFI paradigm, the disappearance of a stimulus and its subjective invisibility are associated with increases in neural activity, as measured with steady-state visually evoked potentials (SSVEPs) in electroencephalography (EEG). We also find that this increase correlates with alpha-band activity, a well-established neural measure of attention. These findings cast doubt on the direct relationship previously reported between the strength of neural activity and conscious perception, at least when measured with current tools such as the SSVEP. Instead, we conclude that SSVEP strength more closely tracks changes in attention.
Affiliation(s)
- Matthew J Davidson
- School of Psychological Sciences, Faculty of Medicine, Nursing and Health Science, Monash University, Melbourne, Australia
- Department of Experimental Psychology, Faculty of Medicine, University of Oxford, Oxford, United Kingdom
- Will Mithen
- School of Psychological Sciences, Faculty of Medicine, Nursing and Health Science, Monash University, Melbourne, Australia
- Hinze Hogendoorn
- Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, Australia
- Jeroen JA van Boxtel
- Discipline of Psychology, Faculty of Health, University of Canberra, Canberra, Australia
- Naotsugu Tsuchiya
- School of Psychological Sciences, Faculty of Medicine, Nursing and Health Science, Monash University, Melbourne, Australia
- Turner Institute for Brain and Mental Health, Faculty of Medicine, Nursing and Health Science, Monash University, Melbourne, Australia
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT), Suita, Japan
- Advanced Telecommunications Research Computational Neuroscience Laboratories, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto, Japan
8. Retinotopic and topographic analyses with gaze restriction for steady-state visual evoked potentials. Sci Rep 2019; 9:4472. PMID: 30872723. PMCID: PMC6418283. DOI: 10.1038/s41598-019-41158-5.
Abstract
Although the mechanisms of steady-state visual evoked potentials (SSVEPs) have been well studied, they have not been examined under strictly controlled experimental conditions. Our objective was to create an ideal observer condition to exploit the features of SSVEPs. We present an electroencephalographic (EEG) eye-tracking experimental paradigm that provides biofeedback for gaze restriction during visual stimulation. Specifically, we designed a synchronous EEG and eye-tracking data recording system for successful trial selection. Forty-six periodic flickers within a visual field of 11.5° were successively presented to evoke SSVEP responses, and online biofeedback based on an eye tracker was provided for gaze restriction. For eight participants, SSVEP responses across the visual field and topographic maps from full-brain EEG were plotted and analyzed. The results indicated that the optimal flicker arrangement to boost SSVEPs includes circular stimuli within a 4–6° spatial distance and an increased stimulus area below the fixation point. These findings provide a basis for determining stimulus parameters in neural engineering studies, e.g., SSVEP-based brain-computer interface (BCI) designs. The proposed experimental paradigm could also provide a precise framework for future SSVEP-related studies.
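SSVEP responses of the kind analyzed here are commonly quantified as the spectral amplitude at the flicker (tag) frequency. Below is a minimal single-bin DFT sketch on a synthetic 10 Hz signal; the sampling rate and amplitude are assumed, not taken from the study:

```python
# Single-bin DFT: estimate the amplitude of the EEG spectrum at one
# flicker frequency. Synthetic signal; parameters are assumptions.
import math

def amplitude_at(signal, fs, freq):
    """Amplitude of the `freq`-Hz component of `signal` (sampled at fs Hz)."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * t / fs) for t, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * t / fs) for t, x in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n  # peak amplitude of that component

fs = 250  # assumed sampling rate (Hz)
sig = [1.5 * math.sin(2 * math.pi * 10 * i / fs) for i in range(fs)]  # 1 s at 10 Hz
print(round(amplitude_at(sig, fs, 10), 3))  # recovers the 1.5 amplitude
```

Repeating this per flicker frequency and per electrode yields the kind of response maps over the visual field and scalp topographies described in the abstract.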