1
Si Y, Wang Z, Xu G, Wang Z, Xu T, Zhou T, Hu H. Group-member selection for RSVP-based collaborative brain-computer interfaces. Front Neurosci 2024; 18:1402154. PMID: 39234182; PMCID: PMC11371794; DOI: 10.3389/fnins.2024.1402154.
Abstract
Objective: Brain-computer interface (BCI) systems based on rapid serial visual presentation (RSVP) have been widely utilized for the detection of target and non-target images. Collaborative brain-computer interfaces (cBCIs) effectively fuse electroencephalogram (EEG) data from multiple users to overcome the low single-user performance of single-trial event-related potential (ERP) detection in RSVP-based BCI systems. In a multi-user cBCI system, a superior group mode may lead to better collaborative performance and lower system cost. However, the key factors that enhance the collaboration capabilities of multiple users, and how to use these factors to optimize the group mode, remain unclear. Approach: This study proposed a group-member selection strategy to optimize the group mode and improve system performance for RSVP-based cBCI. In contrast to the conventional random grouping of collaborators, the group-member selection strategy paired each user with a better collaborator and allowed tasks to be completed with fewer collaborators. First, we introduced the maximum individual capability and maximum collaborative capability (MIMC) criterion to select optimal pairs and improve the system classification performance. Sequential forward floating selection (SFFS) combined with MIMC then selected a sub-group, aiming to reduce the hardware and labor expenses of the cBCI system. Moreover, hierarchical discriminant component analysis (HDCA) was used as the classifier for within-session conditions, and Euclidean space data alignment (EA) was used to overcome inter-trial variability in the cross-session analysis. Main results: We verified the effectiveness of the proposed group-member selection strategy on a public RSVP-based cBCI dataset. For the two-user matching task, the proposed MIMC achieved a significantly higher AUC and TPR, and a lower FPR, than the common random grouping mode and a potential alternative group-member selection method. Moreover, SFFS with MIMC enabled a trade-off between maintaining performance and reducing the number of system users. Significance: The results showed that the proposed MIMC effectively optimized the group mode, enhanced classification performance in the two-user matching task, and reduced redundant information by selecting a sub-group in RSVP-based multi-user cBCI systems.
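The abstract above leans on Euclidean space data alignment (EA) to tame inter-trial and cross-session variability. As a point of reference, here is a minimal NumPy sketch of the standard EA recipe, whitening every trial by the inverse square root of the average spatial covariance; the array shapes and toy usage are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.linalg import sqrtm, inv

def euclidean_alignment(trials):
    """Euclidean-space alignment (EA) of EEG trials.

    trials: array of shape (n_trials, n_channels, n_samples).
    Each trial is whitened by the inverse square root of the average
    spatial covariance, so the aligned trials have identity mean covariance.
    """
    # Average spatial covariance across trials
    R = np.mean([X @ X.T / X.shape[1] for X in trials], axis=0)
    # Inverse matrix square root of the reference covariance
    R_inv_sqrt = inv(sqrtm(R)).real
    # Apply the same linear transform to every trial
    return np.stack([R_inv_sqrt @ X for X in trials])

# Toy usage: 100 trials, 64 channels, 250 samples of random data
rng = np.random.default_rng(0)
aligned = euclidean_alignment(rng.standard_normal((100, 64, 250)))
print(aligned.shape)  # (100, 64, 250)
```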
Affiliation(s)
- Yuan Si: Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China
- Zhenyu Wang: Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai, China
- Guiying Xu: Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China
- Zikai Wang: Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China
- Tianheng Xu: Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai, China; Shanghai Frontier Innovation Research Institute, Shanghai, China
- Ting Zhou: Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai, China; Shanghai Frontier Innovation Research Institute, Shanghai, China; School of Microelectronics, Shanghai University, Shanghai, China
- Honglin Hu: Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China
2
Sadras N, Sani OG, Ahmadipour P, Shanechi MM. Post-stimulus encoding of decision confidence in EEG: toward a brain-computer interface for decision making. J Neural Eng 2023; 20:056012. PMID: 37524073; DOI: 10.1088/1741-2552/acec14.
Abstract
Objective: When making decisions, humans can evaluate how likely they are to be correct. If this subjective confidence could be reliably decoded from brain activity, it would be possible to build a brain-computer interface (BCI) that improves decision performance by automatically providing more information to the user if needed based on their confidence. But this possibility depends on whether confidence can be decoded right after stimulus presentation and before the response so that a corrective action can be taken in time. Although prior work has shown that decision confidence is represented in brain signals, it is unclear if the representation is stimulus-locked or response-locked, and whether stimulus-locked pre-response decoding is sufficiently accurate for enabling such a BCI. Approach: We investigate the neural correlates of confidence by collecting high-density electroencephalography (EEG) during a perceptual decision task with realistic stimuli. Importantly, we design our task to include a post-stimulus gap that prevents the confounding of stimulus-locked activity by response-locked activity and vice versa, and then compare with a task without this gap. Main results: We perform event-related potential and source-localization analyses. Our analyses suggest that the neural correlates of confidence are stimulus-locked, and that an absence of a post-stimulus gap could cause these correlates to incorrectly appear as response-locked. By preventing response-locked activity from confounding stimulus-locked activity, we then show that confidence can be reliably decoded from single-trial stimulus-locked pre-response EEG alone. We also identify a high-performance classification algorithm by comparing a battery of algorithms. Lastly, we design a simulated BCI framework to show that the EEG classification is accurate enough to build a BCI and that the decoded confidence could be used to improve decision making performance particularly when the task difficulty and cost of errors are high. Significance: Our results show the feasibility of non-invasive EEG-based BCIs to improve human decision making.
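For readers who want a concrete picture of the decoding step described above, the sketch below treats the out-of-fold class probability of a shrinkage LDA trained on stimulus-locked, pre-response epoch features as a graded confidence estimate. The windowed-mean features, classifier choice and cross-validation scheme are illustrative assumptions; the paper compares a battery of algorithms rather than prescribing this one.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict

def decode_confidence(epochs, correct):
    """Estimate decision confidence from stimulus-locked EEG epochs.

    epochs:  (n_trials, n_channels, n_samples) pre-response EEG
    correct: (n_trials,) binary labels, 1 = correct decision
    Returns per-trial confidence scores in [0, 1].
    """
    # Simple features: mean amplitude in consecutive 50-sample windows per channel
    n_trials, n_channels, n_samples = epochs.shape
    win = 50
    feats = epochs[:, :, : (n_samples // win) * win]
    feats = feats.reshape(n_trials, n_channels, -1, win).mean(axis=3)
    feats = feats.reshape(n_trials, -1)

    # Shrinkage LDA; out-of-fold probability of class 1 used as confidence
    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    proba = cross_val_predict(clf, feats, correct, cv=5, method="predict_proba")
    return proba[:, 1]
```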
Affiliation(s)
- Nitin Sadras: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States
- Omid G Sani: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States
- Parima Ahmadipour: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States
- Maryam M Shanechi: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States; Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States; Department of Computer Science, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States; Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States
3
Tran XT, Do TTT, Lin CT. Early Detection of Human Decision-Making in Concealed Object Visual Searching Tasks: An EEG-BiLSTM Study. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38082585; DOI: 10.1109/embc40787.2023.10340547.
Abstract
Detecting concealed objects presents a significant challenge for both human and artificial intelligence systems, as the task demands a high level of attention and cognitive effort to complete successfully. In this study, we therefore used concealed objects as stimuli in our decision-making experimental paradigm to quantify participants' decision-making performance. We applied a deep learning model, the bi-directional long short-term memory (BiLSTM) network, to predict participants' decision accuracy using their electroencephalogram (EEG) signals as input. The classifier demonstrated high accuracy, reaching 96.1% with an epoching time range of 500 ms following stimulus onset. The results revealed that the parietal-occipital brain region provides highly informative signals for the classifier in concealed visual-search tasks. Furthermore, the neural mechanism underlying the concealed visual-search and decision-making process was explained by analyzing serial EEG components. The findings of this study could contribute to the development of a fault-alert system, which has the potential to improve human decision-making performance.
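To make the BiLSTM decoder concrete, here is a minimal PyTorch sketch of a bidirectional LSTM classifying fixed-length EEG epochs; the layer sizes, single recurrent layer and last-time-step readout are illustrative assumptions rather than the architecture and hyperparameters used in the paper.

```python
import torch
import torch.nn as nn

class EEGBiLSTM(nn.Module):
    """Bidirectional LSTM classifier for EEG epochs.

    Input:  (batch, n_samples, n_channels) time series
    Output: (batch, n_classes) logits
    """
    def __init__(self, n_channels=32, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)  # 2x hidden: both directions

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, n_samples, 2*hidden)
        return self.head(out[:, -1])   # logits from the last time step

# Illustrative forward pass on a fake 500 ms epoch (125 samples at 250 Hz)
model = EEGBiLSTM(n_channels=32)
logits = model(torch.randn(8, 125, 32))
print(logits.shape)  # torch.Size([8, 2])
```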
4
Wu Y, Mao Y, Feng K, Wei D, Song L. Decoding of the neural representation of the visual RGB color model. PeerJ Comput Sci 2023; 9:e1376. PMID: 37346564; PMCID: PMC10280385; DOI: 10.7717/peerj-cs.1376.
Abstract
RGB color is a basic visual feature. Here we use machine learning and visual evoked potential (VEP) EEG data to investigate the temporal and spatial features from which it can be decoded, and whether these depend on a common cortical channel. We show that RGB color information can be decoded from EEG data and that, under a task-irrelevant paradigm, features can be decoded across fast changes in the VEP stimuli. These results are consistent with both event-related potential (ERP) and P300 mechanisms. The decoding latency is shorter and more temporally precise for RGB color stimuli than for the P300, a result that does not depend on a task-relevant paradigm, suggesting that RGB color is an updating signal that separates visual events. Meanwhile, spatial distribution features are evident in the EEG signal over the cortex, providing a spatial correlate of RGB color in terms of classification accuracy and channel location. Finally, spatial decoding of RGB color depends on the channel classification accuracies and locations obtained from training and testing on the EEG data. This result is consistent with the channel power distributions produced by both VEP and electrophysiological stimulation mechanisms.
Affiliation(s)
- Yijia Wu: Fudan University, Yangpu, Shanghai, China; Shanghai Key Research Laboratory, Pudong, Shanghai, China
- Yanjing Mao: Fudan University, Yangpu, Shanghai, China
- Kaiqiang Feng: Fudan University, Yangpu, Shanghai, China
- Donglai Wei: Fudan University, Yangpu, Shanghai, China
- Liang Song: Fudan University, Yangpu, Shanghai, China; Shanghai Key Research Laboratory, Pudong, Shanghai, China
5
Valeriani D, Santoro F, Ienca M. The present and future of neural interfaces. Front Neurorobot 2022; 16:953968. PMID: 36304780; PMCID: PMC9592849; DOI: 10.3389/fnbot.2022.953968.
Abstract
The 2020s will likely witness an unprecedented development and deployment of neurotechnologies for human rehabilitation, personalized use, and cognitive or other enhancement. New materials and algorithms are already enabling active brain monitoring and are allowing the development of biohybrid and neuromorphic systems that can adapt to the brain. Novel brain-computer interfaces (BCIs) have been proposed to tackle a variety of enhancement and therapeutic challenges, from improving decision-making to modulating mood disorders. While these BCIs have generally been developed in an open-loop modality to optimize their internal neural decoders, this decade will increasingly witness their validation in closed-loop systems that are able to continuously adapt to the user's mental states. Therefore, a proactive ethical approach is needed to ensure that these new technological developments go hand in hand with the development of a sound ethical framework. In this perspective article, we summarize recent developments in neural interfaces, ranging from neurohybrid synapses to closed-loop BCIs, and thereby identify the most promising macro-trends in BCI research, such as simulating vs. interfacing the brain, brain recording vs. brain stimulation, and hardware vs. software technology. Particular attention is devoted to central nervous system interfaces, especially those with applications in healthcare and human enhancement. Finally, we critically assess the possible futures of neural interfacing and analyze the short- and long-term implications of such neurotechnologies.
Affiliation(s)
| | - Francesca Santoro
- Institute for Biological Information Processing - Bioelectronics, IBI-3, Forschungszentrum Juelich, Juelich, Germany
- Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
| | - Marcello Ienca
- College of Humanities, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
- *Correspondence: Marcello Ienca
| |
6
Cinel C, Fernandez-Vargas J, Tremmel C, Citi L, Poli R. Enhancing performance with multisensory cues in a realistic target discrimination task. PLoS One 2022; 17:e0272320. PMID: 35930533; PMCID: PMC9355224; DOI: 10.1371/journal.pone.0272320.
Abstract
Making decisions is an important aspect of people's lives. Decisions can be highly critical in nature, with mistakes possibly resulting in extremely adverse consequences. Yet, such decisions often have to be made within a very short period of time and with limited information, which can result in decreased accuracy and efficiency. In this paper, we explore the possibility of increasing the speed and accuracy of users engaged in the discrimination of realistic targets presented for a very short time, in the presence of unimodal or bimodal cues. More specifically, we present results from an experiment where users were asked to discriminate between targets rapidly appearing in an indoor environment. Unimodal (auditory) or bimodal (audio-visual) cues could shortly precede the target stimulus, warning the users about its location. Our findings show that, when used to facilitate perceptual decisions under time pressure and with limited information in real-world scenarios, spoken cues can be effective in boosting performance (accuracy, reaction times, or both), and even more so when presented in bimodal form. However, we also found that cue timing plays a critical role and, if the cue-stimulus interval is too short, cues may offer no advantage. In a post-hoc analysis of our data, we also show that congruency between the response location and both the target location and the cues can interfere with speed and accuracy in the task. These effects should be taken into consideration, particularly when investigating performance in realistic tasks.
Affiliation(s)
- Caterina Cinel: Brain Computer Interface and Neural Engineering Lab, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
- Jacobo Fernandez-Vargas: Brain Computer Interface and Neural Engineering Lab, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
- Christoph Tremmel: Brain Computer Interface and Neural Engineering Lab, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom; WellthLab, Electronics and Computer Science, University of Southampton, Southampton, United Kingdom
- Luca Citi: Brain Computer Interface and Neural Engineering Lab, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
- Riccardo Poli: Brain Computer Interface and Neural Engineering Lab, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
7
Tremmel C, Fernandez-Vargas J, Stamos D, Cinel C, Pontil M, Citi L, Poli R. A meta-learning BCI for estimating decision confidence. J Neural Eng 2022; 19. PMID: 35738232; DOI: 10.1088/1741-2552/ac7ba8.
Abstract
Objective: We investigated whether a recently introduced transfer-learning technique based on meta-learning could improve the performance of brain-computer interfaces (BCIs) for decision-confidence prediction with respect to more traditional machine learning methods. Approach: We adapted the meta-learning by biased regularisation algorithm to the problem of predicting decision confidence from EEG and EOG data on a decision-by-decision basis in a difficult target discrimination task based on video feeds. The method exploits previous participants' data to produce a prediction algorithm that is then quickly tuned to new participants. We compared it with the traditional single-subject training almost universally adopted in BCIs, a state-of-the-art transfer learning technique called Domain Adversarial Neural Networks (DANN), a transfer-learning adaptation of a zero-training method we used recently for a similar task, and a simple baseline algorithm. Main results: The meta-learning approach was significantly better than the other approaches in most conditions, and much better in situations where limited data from a new participant are available for training/tuning. Meta-learning by biased regularisation allowed our BCI to seamlessly integrate information from past participants with data from a specific user to produce high-performance predictors. Its robustness in the presence of small training sets is a real plus in BCI applications, as new users need to train the BCI for a much shorter period. Significance: Due to the variability and noise of EEG/EOG data, BCIs normally need to be trained with data from a specific participant. This work shows that even better performance can be obtained using our version of meta-learning by biased regularisation.
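The core of meta-learning by biased regularisation can be stated in a few lines: ridge regression is regularised towards a bias vector h learned from previous participants instead of towards zero. The NumPy sketch below reflects that reading; treating h as the average of per-participant ridge solutions is an illustrative simplification, and the authors' adaptation (operating on EEG/EOG features for confidence prediction) is not reproduced here.

```python
import numpy as np

def biased_ridge(X, y, h, lam=1.0):
    """Ridge regression biased towards h: argmin_w ||Xw - y||^2 + lam * ||w - h||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * h)

def meta_train_bias(tasks, lam=1.0):
    """Learn a common bias h from previous participants' (X, y) data.

    Here h is simply the average of per-participant ridge solutions;
    the original algorithm uses an online, gradient-based estimate.
    """
    ws = [biased_ridge(X, y, np.zeros(X.shape[1]), lam) for X, y in tasks]
    return np.mean(ws, axis=0)

def adapt_to_new_user(X_few, y_few, h, lam=1.0):
    """Tune quickly to a new participant using only a few labelled trials."""
    return biased_ridge(X_few, y_few, h, lam)
```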
Affiliation(s)
- Christoph Tremmel: School of Computer Science and Electronic Engineering, University of Essex, Wivenhoe Park, Colchester, Essex, CO4 3SQ, United Kingdom
- Jacobo Fernandez-Vargas: School of Computer Science and Electronic Engineering, University of Essex, Wivenhoe Park, Colchester, Essex, CO4 3SQ, United Kingdom
- Dimitrios Stamos: Department of Computer Science, University College London, Malet Place, London, WC1E 6BT, United Kingdom
- Caterina Cinel: School of Computer Science and Electronic Engineering, University of Essex, Wivenhoe Park, Colchester, Essex, CO4 3SQ, United Kingdom
- Massimiliano Pontil: University College London, Malet Place, London, WC1E 6BT, United Kingdom
- Luca Citi: School of Computer Science and Electronic Engineering, University of Essex, Wivenhoe Park, Colchester, Essex, CO4 3SQ, United Kingdom
- Riccardo Poli: School of Computer Science and Electronic Engineering, University of Essex, Wivenhoe Park, Colchester, Essex, CO4 3SQ, United Kingdom
8
Huggins JE, Krusienski D, Vansteensel MJ, Valeriani D, Thelen A, Stavisky S, Norton JJS, Nijholt A, Müller-Putz G, Kosmyna N, Korczowski L, Kapeller C, Herff C, Halder S, Guger C, Grosse-Wentrup M, Gaunt R, Dusang AN, Clisson P, Chavarriaga R, Anderson CW, Allison BZ, Aksenova T, Aarnoutse E. Workshops of the Eighth International Brain-Computer Interface Meeting: BCIs: The Next Frontier. Brain-Computer Interfaces 2022; 9:69-101. PMID: 36908334; PMCID: PMC9997957; DOI: 10.1080/2326263x.2021.2009654.
Abstract
The Eighth International Brain-Computer Interface (BCI) Meeting was held June 7-9th, 2021 in a virtual format. The conference continued the BCI Meeting series' interactive nature with 21 workshops covering topics in BCI (also called brain-machine interface) research. As in the past, workshops covered the breadth of topics in BCI. Some workshops provided detailed examinations of specific methods, hardware, or processes. Others focused on specific BCI applications or user groups. Several workshops continued consensus-building efforts designed to create BCI standards and increase the ease of comparisons between studies and the potential for meta-analysis and large multi-site clinical trials. Ethical and translational considerations were either the primary topic of some workshops or an important secondary consideration for others. The range of BCI applications continues to expand, with more workshops focusing on approaches that can extend beyond the needs of those with physical impairments. This paper summarizes each workshop, provides background information and references for further study, presents an overview of the discussion topics, and describes the conclusions, challenges, or initiatives that resulted from the interactions and discussion at the workshop.
Affiliation(s)
- Jane E Huggins: Department of Physical Medicine and Rehabilitation, Department of Biomedical Engineering, Neuroscience Graduate Program, University of Michigan, Ann Arbor, Michigan, United States
- Dean Krusienski: Department of Biomedical Engineering, Virginia Commonwealth University, Richmond, VA 23219
- Mariska J Vansteensel: UMC Utrecht Brain Center, Dept of Neurosurgery, University Medical Center Utrecht, The Netherlands
- Antonia Thelen: eemagine Medical Imaging Solutions GmbH, Berlin, Germany
- James J S Norton: National Center for Adaptive Neurotechnologies, US Department of Veterans Affairs, Albany, NY 12208
- Anton Nijholt: Faculty EEMCS, University of Twente, Enschede, The Netherlands
- Gernot Müller-Putz: Institute of Neural Engineering, GrazBCI Lab, Graz University of Technology, Stremayrgasse 16/4, 8010 Graz, Austria
- Nataliya Kosmyna: Massachusetts Institute of Technology (MIT), Media Lab, Cambridge, MA 02139, United States
- Christian Herff: School of Mental Health and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Christoph Guger: g.tec medical engineering GmbH/Guger Technologies OG, Sierningstrasse 14, 4521 Schiedlberg, Austria
- Moritz Grosse-Wentrup: Research Group Neuroinformatics, Faculty of Computer Science, Vienna Cognitive Science Hub, Data Science @ Uni Vienna, University of Vienna
- Robert Gaunt: Rehab Neural Engineering Labs, Department of Physical Medicine and Rehabilitation, Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Aliceson Nicole Dusang: Department of Electrical and Computer Engineering, School of Engineering, Brown University, and Carney Institute for Brain Science, Brown University, Providence, RI; Department of Veterans Affairs Medical Center, Center for Neurorestoration and Neurotechnology, Rehabilitation R&D Service, Providence, RI; Center for Neurotechnology and Neurorecovery, Neurology, Massachusetts General Hospital, Boston, MA
- Ricardo Chavarriaga: IEEE Standards Association Industry Connections group on neurotechnologies for brain-machine interface; Center for Artificial Intelligence, School of Engineering, ZHAW-Zurich University of Applied Sciences, Switzerland
- Charles W Anderson: Department of Computer Science, Molecular, Cellular and Integrative Neuroscience Program, Colorado State University, Fort Collins, CO 80523
- Brendan Z Allison: Dept. of Cognitive Science, University of California at San Diego, La Jolla, United States
- Tetiana Aksenova: University Grenoble Alpes, CEA, LETI, Clinatec, Grenoble 38000, France
- Erik Aarnoutse: UMC Utrecht Brain Center, Department of Neurology & Neurosurgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht, The Netherlands
9
A Collaborative Brain-Computer Interface Framework for Enhancing Group Detection Performance of Dynamic Visual Targets. Comput Intell Neurosci 2022; 2022:4752450. PMID: 35087580; PMCID: PMC8789438; DOI: 10.1155/2022/4752450.
Abstract
The superiority of collaborative brain-computer interfaces (cBCIs) in performance enhancement makes them an effective way to break through the performance bottleneck of BCI-based dynamic visual target detection. However, existing cBCIs focus on multi-mind information fusion in a static and unidirectional mode, lacking information interaction and learning guidance among multiple agents. Here, we propose a novel cBCI framework to enhance the group detection performance of dynamic visual targets. Specifically, a mutual learning domain adaptation network (MLDANet) with information interaction, dynamic learning, and individual transferring abilities is developed as the core of the cBCI framework. MLDANet takes the P3-sSDA network as its individual network unit, introduces a mutual learning strategy, and establishes a dynamic interactive learning mechanism between individual networks and collaborative decision-making at the neural decision level. The results indicate that the proposed MLDANet-cBCI framework can achieve the best group detection performance, and that the mutual learning strategy can improve the detection ability of the individual networks. In MLDANet-cBCI, the F1 scores of collaborative detection and the individual networks are 0.12 and 0.19 higher, respectively, than those in a multi-classifier cBCI when three minds collaborate. Thus, the proposed framework breaks through the traditional multi-mind collaborative mode and exhibits superior group detection performance for dynamic visual targets, which is also of great significance for the practical application of multi-mind collaboration.
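The "mutual learning strategy" between individual networks is close in spirit to deep mutual learning, where each network is trained on its own labels plus a KL term pulling its predictions towards its peer's. The PyTorch fragment below sketches that per-network loss for two peers; it is an interpretation for illustration, not the MLDANet training code.

```python
import torch
import torch.nn.functional as F

def mutual_learning_losses(logits_a, logits_b, labels):
    """Per-network losses for two peer networks trained with mutual learning.

    Each network minimises cross-entropy on the labels plus a KL term
    towards the (detached) predictions of its peer.
    """
    p_a = F.softmax(logits_a, dim=1)
    p_b = F.softmax(logits_b, dim=1)

    loss_a = F.cross_entropy(logits_a, labels) + \
             F.kl_div(F.log_softmax(logits_a, dim=1), p_b.detach(),
                      reduction="batchmean")
    loss_b = F.cross_entropy(logits_b, labels) + \
             F.kl_div(F.log_softmax(logits_b, dim=1), p_a.detach(),
                      reduction="batchmean")
    return loss_a, loss_b
```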
10
Zhang H, Zhu L, Xu S, Cao J, Kong W. Two brains, one target: Design of a multi-level information fusion model based on dual-subject RSVP. J Neurosci Methods 2021; 363:109346. PMID: 34474046; DOI: 10.1016/j.jneumeth.2021.109346.
Abstract
Background: Rapid serial visual presentation (RSVP)-based brain-computer interfaces (BCIs) are widely used to categorize target and non-target images. The information available from single-subject electroencephalography (EEG) signals limits single-trial prediction accuracy. New method: Hyperscanning is a new way to record signals from two or more subjects simultaneously. We therefore designed a multi-level information fusion model for target image detection based on dual-subject RSVP, named HyperscanNet. The two modules of this model fuse the data and features of the two subjects at the data and feature layers. A chunked long short-term memory (LSTM) network was used in the time dimension to extract features from different periods separately, completing fine-grained low-level feature extraction. While the feature layer is fused, some plain operations are used to complete the fusion of the data layer to ensure that important information is not missed. Results: Experimental results show that the F1-score (the harmonic mean of precision and recall) of this method with the best group of channels and segment length is 82.76%. Comparison with existing methods: This method improves the F1-score by at least 5% compared to single-subject target detection. Conclusions: Target detection can be accomplished through the two subjects' collaboration to achieve a higher and more stable F1-score than a single subject.
Affiliation(s)
- Hangkui Zhang: College of Computer Science, Hangzhou Dianzi University, Hangzhou 310018, China
- Li Zhu: College of Computer Science, Hangzhou Dianzi University, Hangzhou 310018, China
- Senwei Xu: College of Computer Science, Hangzhou Dianzi University, Hangzhou 310018, China
- Jianting Cao: Graduate School of Engineering, Saitama Institute of Technology, 369-0293, Japan
- Wanzeng Kong: College of Computer Science, Hangzhou Dianzi University, Hangzhou 310018, China; Key Laboratory of Brain Machine Collaborative Intelligence of Zhejiang Province, Hangzhou 310018, Zhejiang, China
11
Bhattacharyya S, Hayashibe M. An Optimal Transport Based Transferable System for Detection of Erroneous Somato-Sensory Feedback from Neural Signals. Brain Sci 2021; 11:1393. PMID: 34827392; PMCID: PMC8615878; DOI: 10.3390/brainsci11111393.
Abstract
This study is aimed at the detection of single-trial feedback, perceived as erroneous by the user, using a transferable classification system while the user performs a motor imagery brain-computer interfacing (BCI) task. The feedback received by the users is relayed from a functional electrical stimulation (FES) device and is hence somato-sensory in nature. The BCI system designed for this study activates an electrical stimulator placed on the left hand, right hand, left foot, and right foot of the user. Trials containing erroneous feedback can be detected from the neural signals in the form of the error-related potential (ErrP). The inclusion of neuro-feedback during the experiments indicated the possibility that ErrP signals can be evoked when the participant perceives an error in the feedback. Hence, to detect such feedback using the ErrP, a transferable (offline) decoder based on optimal transport theory is introduced herein. The offline system detects single-trial erroneous trials from the feedback period of an online neuro-feedback BCI system. The results of the FES-based feedback BCI system were compared to a similar visual-based (VIS) feedback system. Using our framework, the error detectors for the FES and VIS feedback paradigms achieved F1-scores of 92.66% and 83.10%, respectively, and were significantly superior to a comparative system in which optimal transport was not used. It is expected that this form of transferable and automated error detection, combined with a motor imagery system, will augment the performance of a BCI and provide a better BCI-based neuro-rehabilitation protocol with an error control mechanism embedded into it.
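To make the optimal-transport idea concrete, the sketch below computes an entropic (Sinkhorn) transport plan between source-subject and target-subject feature sets and maps the source features onto the target domain by barycentric projection, after which an ordinary ErrP classifier could be trained on the mapped data. This is a generic OT domain-adaptation recipe written from scratch under assumed uniform sample weights, not the authors' implementation.

```python
import numpy as np

def sinkhorn_plan(Xs, Xt, reg=1e-1, n_iter=200):
    """Entropic OT plan between uniform distributions on the rows of Xs and Xt.

    Features should be standardised beforehand so the cost matrix stays well scaled.
    """
    C = ((Xs[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)   # squared Euclidean costs
    K = np.exp(-C / reg)                                    # Gibbs kernel
    a = np.full(len(Xs), 1.0 / len(Xs))                     # uniform source weights
    b = np.full(len(Xt), 1.0 / len(Xt))                     # uniform target weights
    u = np.ones_like(a)
    for _ in range(n_iter):                                 # Sinkhorn iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]                      # transport plan

def transport_source_to_target(Xs, Xt, reg=1e-1):
    """Barycentric mapping of source features into the target domain."""
    T = sinkhorn_plan(Xs, Xt, reg)
    return (T / T.sum(axis=1, keepdims=True)) @ Xt
```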
Affiliation(s)
- Saugat Bhattacharyya: School of Computing, Engineering and Intelligent Systems, Ulster University, Magee Campus, Londonderry BT48 7JL, UK
- Mitsuhiro Hayashibe: Department of Robotics, Tohoku University, Sendai 980-8579, Japan; Department of Biomedical Engineering, Tohoku University, Sendai 980-8579, Japan
12
Anytime collaborative brain-computer interfaces for enhancing perceptual group decision-making. Sci Rep 2021; 11:17008. PMID: 34417494; PMCID: PMC8379268; DOI: 10.1038/s41598-021-96434-0.
Abstract
In this paper we present, and test in two realistic environments, collaborative Brain-Computer Interfaces (cBCIs) that can significantly increase both the speed and the accuracy of perceptual group decision-making. The key distinguishing features of this work are: (1) our cBCIs combine behavioural, physiological and neural data in such a way as to be able to provide a group decision at any time after the quickest team member casts their vote, but the quality of a cBCI-assisted decision improves monotonically the longer the group decision can wait; (2) we apply our cBCIs to two realistic scenarios of military relevance (patrolling a dark corridor and manning an outpost at night where users need to identify any unidentified characters that appear) in which decisions are based on information conveyed through video feeds; and (3) our cBCIs exploit Event-Related Potentials (ERPs) elicited in brain activity by the appearance of potential threats but, uniquely, the appearance time is estimated automatically by the system (rather than being unrealistically provided to it). As a result of these elements, in the two test environments, groups assisted by our cBCIs make both more accurate and faster decisions than when individual decisions are integrated in more traditional manners.
13
Gao X, Wang Y, Chen X, Gao S. Interface, interaction, and intelligence in generalized brain-computer interfaces. Trends Cogn Sci 2021; 25:671-684. PMID: 34116918; DOI: 10.1016/j.tics.2021.04.003.
Abstract
A brain-computer interface (BCI) establishes a direct communication channel between a brain and an external device. With recent advances in neurotechnology and artificial intelligence (AI), the brain signals in BCI communication have been advanced from sensation and perception to higher-level cognition activities. While the field of BCI has grown rapidly in the past decades, the core technologies and innovative ideas behind seemingly unrelated BCI systems have never been summarized from an evolutionary point of view. Here, we review various BCI paradigms and present an evolutionary model of generalized BCI technology which comprises three stages: interface, interaction, and intelligence (I3). We also highlight challenges, opportunities, and future perspectives in the development of new BCI technology.
Affiliation(s)
- Xiaorong Gao: Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Yijun Wang: Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China
- Xiaogang Chen: Institute of Biomedical Engineering, Chinese Academy of Medical Sciences, Tianjin, China
- Shangkai Gao: Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
14
Fernandez-Vargas J, Tremmel C, Valeriani D, Bhattacharyya S, Cinel C, Citi L, Poli R. Subject- and task-independent neural correlates and prediction of decision confidence in perceptual decision making. J Neural Eng 2021; 18. PMID: 33780913; DOI: 10.1088/1741-2552/abf2e4.
Abstract
Objective: In many real-world decision tasks, the information available to the decision maker is incomplete. To account for this uncertainty, we associate a degree of confidence with every decision, representing the likelihood of that decision being correct. In this study, we analyse electroencephalography (EEG) data from 68 participants undertaking eight different perceptual decision-making experiments. Our goals are to investigate (1) whether subject- and task-independent neural correlates of decision confidence exist, and (2) to what degree it is possible to build brain-computer interfaces that can estimate confidence on a trial-by-trial basis. The experiments cover a wide range of perceptual tasks, which allowed us to separate the task-related decision-making features from the task-independent ones. Approach: Our systems train artificial neural networks to predict the confidence in each decision from EEG data and response times. We compare the decoding performance of three training approaches: (1) single-subject, where both training and testing data were acquired from the same person; (2) multi-subject, where all the data pertained to the same task, but the training and testing data came from different users; and (3) multi-task, where the training and testing data came from different tasks and subjects. Finally, we validated our multi-task approach using data from two additional experiments in which confidence was not reported. Main results: We found significant differences in the EEG data for different confidence levels in both stimulus-locked and response-locked epochs. All our approaches were able to predict the confidence between 15% and 35% better than the corresponding reference baselines. Significance: Our results suggest that confidence in perceptual decision-making tasks could be reconstructed from neural signals even when using transfer learning approaches. These confidence estimates are based on the decision-making process rather than just the confidence-reporting process.
Affiliation(s)
- Jacobo Fernandez-Vargas: Brain-Computer Interfaces and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Essex, United Kingdom
- Christoph Tremmel: Brain-Computer Interfaces and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Essex, United Kingdom
- Davide Valeriani: Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear, Boston, MA, United States; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, MA, United States
- Saugat Bhattacharyya: Brain-Computer Interfaces and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Essex, United Kingdom; School of Computing, Engineering & Intelligent Systems, Ulster University, Londonderry, United Kingdom
- Caterina Cinel: Brain-Computer Interfaces and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Essex, United Kingdom
- Luca Citi: Brain-Computer Interfaces and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Essex, United Kingdom
- Riccardo Poli: Brain-Computer Interfaces and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Essex, United Kingdom
15
Zheng L, Sun S, Zhao H, Pei W, Chen H, Gao X, Zhang L, Wang Y. A Cross-Session Dataset for Collaborative Brain-Computer Interfaces Based on Rapid Serial Visual Presentation. Front Neurosci 2020; 14:579469. PMID: 33192265; PMCID: PMC7642747; DOI: 10.3389/fnins.2020.579469.
Abstract
Brain-computer interfaces (BCIs) based on rapid serial visual presentation (RSVP) have been widely used to categorize target and non-target images. However, it is still a challenge to detect single-trial event-related potentials (ERPs) from electroencephalography (EEG) signals. Besides, the variability of EEG signals over time may cause calibration difficulties in long-term system use. Recently, collaborative BCIs have been proposed to improve overall BCI performance by fusing brain activities acquired from multiple subjects. For both individual and collaborative BCIs, feature extraction and classification algorithms that can be transferred across sessions can significantly facilitate system calibration. Although open datasets are highly efficient for developing algorithms, there is still a lack of datasets for collaborative RSVP-based BCIs. This paper presents a cross-session EEG dataset of a collaborative RSVP-based BCI system from 14 subjects, who were divided into seven groups. In the collaborative BCI experiments, two subjects performed the same target image detection tasks synchronously. All subjects participated in the same experiment twice, with an average interval of ∼23 days. The data evaluation results indicate that adequate signal processing algorithms can greatly enhance cross-session BCI performance in both individual and collaborative conditions. Besides, compared with individual BCIs, the collaborative methods that fuse information from multiple subjects obtain significantly improved BCI performance. This dataset can be used to develop more efficient algorithms to enhance the performance and practicality of collaborative RSVP-based BCI systems.
Affiliation(s)
- Li Zheng: State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China; School of Future Technology, University of Chinese Academy of Sciences, Beijing, China
- Sen Sun: Department of Control Engineering, School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China
- Hongze Zhao: State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China
- Weihua Pei: State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China; School of Future Technology, University of Chinese Academy of Sciences, Beijing, China
- Hongda Chen: State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China
- Xiaorong Gao: Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Lijian Zhang: Beijing Machine and Equipment Institute, Beijing, China
- Yijun Wang: State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China; School of Future Technology, University of Chinese Academy of Sciences, Beijing, China
16
Bhattacharyya S, Valeriani D, Cinel C, Citi L, Poli R. Collaborative Brain-Computer Interfaces to Enhance Group Decisions in an Outpost Surveillance Task. Annu Int Conf IEEE Eng Med Biol Soc 2019; 2019:3099-3102. PMID: 31946543; DOI: 10.1109/embc.2019.8856309.
Abstract
We present a two-layered collaborative brain-computer interface (cBCI) to aid groups making decisions under time constraints in a realistic video surveillance setting, the very first cBCI application of this type. The cBCI first uses response times (RTs) to estimate the decision confidence the user would report after each decision. Such an estimate is then used, together with neural features extracted from the EEG, to refine the decision confidence so that it better correlates with the correctness of the decision. The refined confidence is then used to weight individual responses and obtain group decisions. Results obtained with 10 participants indicate that cBCI-assisted groups are significantly more accurate than groups using a standard majority vote or weighting decisions by reported confidence values. This two-layer architecture allows the cBCI not only to further enhance group performance but also to speed up the decision process, as the cBCI does not have to wait for all users to report their confidence after each decision.
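The fusion step described here, weighting each member's response by a confidence estimate, reduces to a weighted vote. Below is a minimal sketch under that reading, assuming responses coded as -1/+1 and confidences in [0, 1]; the log-odds weighting is an illustrative choice, not the paper's exact rule.

```python
import numpy as np

def group_decision(responses, confidences, eps=1e-6):
    """Confidence-weighted group decision.

    responses:   (n_members,) individual decisions coded as -1 or +1
    confidences: (n_members,) refined confidence estimates in [0, 1]
    Returns the group decision (-1 or +1).
    """
    responses = np.asarray(responses, dtype=float)
    # Log-odds style weights: confident members count more, 0.5 counts ~0
    w = np.log((confidences + eps) / (1.0 - confidences + eps))
    return 1 if np.dot(w, responses) >= 0 else -1

# Example: one highly confident member outweighs two uncertain ones
print(group_decision([+1, -1, -1], [0.95, 0.55, 0.60]))  # prints 1
```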
17
Valeriani D, Poli R. Cyborg groups enhance face recognition in crowded environments. PLoS One 2019; 14:e0212935. PMID: 30840663; PMCID: PMC6402761; DOI: 10.1371/journal.pone.0212935.
Abstract
Recognizing a person in a crowded environment is a challenging, yet critical, visual-search task for both humans and machine-vision algorithms. This paper explores the possibility of combining a residual neural network (ResNet), brain-computer interfaces (BCIs) and human participants to create "cyborgs" that improve decision making. Human participants and a ResNet undertook the same face-recognition experiment. BCIs were used to decode the decision confidence of the humans from their EEG signals. Different types of cyborg groups were created, comprising either only humans (with or without the BCI) or groups of humans and the ResNet. Cyborg group decisions were obtained by weighting individual decisions by their confidence estimates. Results show that groups of cyborgs are significantly more accurate (by up to 35%) than the ResNet, the average participant, and equally-sized groups of humans not assisted by technology. These results suggest that melding humans, BCIs, and machine-vision technology could significantly improve decision-making in realistic scenarios.
Affiliation(s)
- Davide Valeriani: Brain Computer Interfaces and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom; Department of Otolaryngology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, United States
- Riccardo Poli: Brain Computer Interfaces and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
18
Cinel C, Valeriani D, Poli R. Neurotechnologies for Human Cognitive Augmentation: Current State of the Art and Future Prospects. Front Hum Neurosci 2019; 13:13. PMID: 30766483; PMCID: PMC6365771; DOI: 10.3389/fnhum.2019.00013.
Abstract
Recent advances in neuroscience have paved the way to innovative applications that cognitively augment and enhance humans in a variety of contexts. This paper aims at providing a snapshot of the current state of the art and a motivated forecast of the most likely developments in the next two decades. Firstly, we survey the main neuroscience technologies for both observing and influencing brain activity, which are necessary ingredients for human cognitive augmentation. We also compare and contrast such technologies, as their individual characteristics (e.g., spatio-temporal resolution, invasiveness, portability, energy requirements, and cost) influence their current and future role in human cognitive augmentation. Secondly, we chart the state of the art on neurotechnologies for human cognitive augmentation, keeping an eye both on the applications that already exist and those that are emerging or are likely to emerge in the next two decades. Particularly, we consider applications in the areas of communication, cognitive enhancement, memory, attention monitoring/enhancement, situation awareness and complex problem solving, and we look at what fraction of the population might benefit from such technologies and at the demands they impose in terms of user training. Thirdly, we briefly review the ethical issues associated with current neuroscience technologies. These are important because they may differentially influence both present and future research on (and adoption of) neurotechnologies for human cognitive augmentation: an inferior technology with no significant ethical issues may thrive while a superior technology causing widespread ethical concerns may end up being outlawed. Finally, based on the lessons learned in our analysis, using past trends and considering other related forecasts, we attempt to forecast the most likely future developments of neuroscience technology for human cognitive augmentation and provide informed recommendations for promising future research and exploitation avenues.
Affiliation(s)
- Caterina Cinel: Brain Computer Interfaces and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
- Davide Valeriani: Brain Computer Interfaces and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom; Department of Otolaryngology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, United States
- Riccardo Poli: Brain Computer Interfaces and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
19