1. Shi N, Miao Y, Huang C, Li X, Song Y, Chen X, Wang Y, Gao X. Estimating and approaching the maximum information rate of noninvasive visual brain-computer interface. Neuroimage 2024;289:120548. [PMID: 38382863] [DOI: 10.1016/j.neuroimage.2024.120548]
Abstract
An essential priority of visual brain-computer interfaces (BCIs) is to enhance the information transfer rate (ITR) to achieve high-speed communication. Despite notable progress, noninvasive visual BCIs have encountered a plateau in ITRs, leaving it uncertain whether higher ITRs are achievable. In this study, we used information theory to characterize the capacity of the visual-evoked channel, which led us to investigate whether and how higher information rates can be decoded in a visual BCI system. Using information theory, we estimated the upper and lower bounds of the information rate with a white noise (WN) stimulus. We found that the information rate is determined by the signal-to-noise ratio (SNR) in the frequency domain, which reflects the spectral resources of the channel. Based on this finding, we propose a broadband WN BCI that presents stimuli over a broader frequency band than steady-state visual evoked potential (SSVEP)-based BCIs. In validation, the broadband BCI outperformed the SSVEP BCI by 7 bps, setting a record of 50 bps. The integration of information theory and the decoding analysis presented in this study offers insights applicable to general sensory-evoked BCIs and a potential direction for next-generation human-machine interaction systems.
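The capacity argument sketched in this abstract can be made concrete with two standard information-theoretic expressions; the following is a hedged illustration (textbook formulas, not the paper's exact estimator):

```latex
% Capacity of a band-limited Gaussian channel with frequency-dependent SNR
C = \int_{B} \log_2\!\left(1 + \mathrm{SNR}(f)\right)\,\mathrm{d}f \quad \text{(bits/s)}

% Wolpaw ITR for an N-class selection with accuracy P and selection time T (seconds)
\mathrm{ITR} = \frac{60}{T}\left[\log_2 N + P\log_2 P + (1-P)\log_2\frac{1-P}{N-1}\right] \quad \text{(bits/min)}
```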
Affiliation(s)
- Nanlin Shi
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Yining Miao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Changxing Huang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Xiang Li
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Yonghao Song
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Xiaogang Chen
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin 300192, China
- Yijun Wang
- Key Laboratory of Solid-State Optoelectronics Information Technology, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China
- Xiaorong Gao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
2. Li R, Hu H, Zhao X, Wang Z, Xu G. A static paradigm based on illusion-induced VEP for brain-computer interfaces. J Neural Eng 2023;20:026006. [PMID: 36808912] [DOI: 10.1088/1741-2552/acbdc0]
Abstract
OBJECTIVE Visual evoked potentials (VEPs) have recently been widely applied in brain-computer interfaces (BCIs) because of their strong classification performance. However, most existing methods rely on flickering or oscillating stimuli that induce visual fatigue during long-term use, which restricts the practical deployment of VEP-based BCIs. To address this issue, a novel paradigm adopting static motion illusion, based on illusion-induced visual evoked potentials (IVEPs), is proposed to enhance visual experience and practicality. APPROACH This study explored responses to baseline and illusion tasks, including the Rotating-Tilted-Lines (RTL) illusion and the Rotating-Snakes (RS) illusion. Distinguishable features between the different illusions were examined by analyzing event-related potentials (ERPs) and the amplitude modulation of evoked oscillatory responses. MAIN RESULTS The illusion stimuli elicited VEPs in an early time window, encompassing a negative component (N1) from 110 to 200 ms and a positive component (P2) between 210 and 300 ms. Based on the feature analysis, a filter bank was designed to extract discriminative signals. Task-related component analysis (TRCA) was used to evaluate binary classification performance, and the highest accuracy of 86.67% was achieved with a data length of 0.6 s. SIGNIFICANCE The results demonstrate that the static motion illusion paradigm is feasible to implement and promising for VEP-based BCI applications.
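Where the abstract mentions a filter bank designed around the discriminative features, a minimal sketch of such a band-pass filter bank is shown below; the pass-bands, filter order, and the downstream TRCA classifier are illustrative assumptions, not the authors' implementation.

```python
# Minimal band-pass filter bank sketch (pass-bands are illustrative assumptions).
import numpy as np
from scipy.signal import butter, filtfilt

def filter_bank(eeg, fs, bands=((1, 8), (8, 15), (15, 30))):
    """eeg: (n_channels, n_samples) -> (n_bands, n_channels, n_samples)."""
    sub_bands = []
    for low, high in bands:
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
        sub_bands.append(filtfilt(b, a, eeg, axis=-1))
    return np.stack(sub_bands)
```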
Affiliation(s)
- Ruxue Li
- Intelligent Information and Communication Technology Research and Development Center, Shanghai Advanced Research Institute, Chinese Academy of Sciences, 99 Haike Road, Pudong New Area, Shanghai 201210, China
- Honglin Hu
- Intelligent Information and Communication Technology Research and Development Center, Shanghai Advanced Research Institute, Chinese Academy of Sciences, 99 Haike Road, Pudong New Area, Shanghai 201210, China
- Xi Zhao
- Intelligent Information and Communication Technology Research and Development Center, Shanghai Advanced Research Institute, 99 Haike Road, Pudong New Area, Shanghai 201210, China
- Zhenyu Wang
- Intelligent Information and Communication Technology Research and Development Center, Shanghai Advanced Research Institute, Chinese Academy of Sciences, 99 Haike Road, Pudong New Area, Shanghai 201210, China
- Guiying Xu
- Intelligent Information and Communication Technology Research and Development Center, Shanghai Advanced Research Institute, 99 Haike Road, Pudong New Area, Shanghai 201210, China
3. Velasco-Álvarez F, Fernández-Rodríguez Á, Medina-Juliá MT, Ron-Angevin R. Speech stream segregation to control an ERP-based auditory BCI. J Neural Eng 2021;18. [PMID: 33470970] [DOI: 10.1088/1741-2552/abdd44]
Abstract
OBJECTIVE The use of natural sounds in auditory Brain-Computer Interfaces (BCI) has been shown to improve classification results and usability. Some auditory BCIs are based on stream segregation, in which the subjects must attend one audio stream and ignore the other(s); these streams include some kind of stimuli to be detected. In this work we focus on Event-Related Potentials (ERP) and study whether providing intelligible content to each audio stream could help the users to better concentrate on the desired stream and so to better attend the target stimuli and to ignore the non-target ones. APPROACH In addition to a control condition, two experimental conditions, based on the selective attention and the cocktail party effect, were tested using two simultaneous and spatialized audio streams: i) the condition A2 consisted of an overlap of auditory stimuli (single syllables) on a background consisting of natural speech for each stream, ii) in condition A3, brief alterations of the natural flow of each speech were used as stimuli. MAIN RESULTS The two experimental proposals improved the results of the control condition (single words as stimuli without a speech background) both in a cross validation analysis of the calibration part and in the online test. The analysis of the ERP responses also presented better discriminability for the two proposals in comparison to the control condition. The results of subjective questionnaires support the better usability of the first experimental condition. SIGNIFICANCE The use of natural speech as background improves the stream segregation in an ERP-based auditory BCI (with significant results in the performance metrics, the ERP waveforms, and in the preference parameter in subjective questionnaires). Future work in the field of ERP-based stream segregation should study the use of natural speech in combination with easily perceived but not distracting stimuli.
Affiliation(s)
- Francisco Velasco-Álvarez
- Department of Electronic Technology, Universidad de Málaga, E.T.S.I. Telecomunicación, Campus de Teatinos s/n, Málaga 29071, Spain
- Álvaro Fernández-Rodríguez
- Department of Electronic Technology, Universidad de Málaga, E.T.S.I. Telecomunicación, Campus de Teatinos s/n, Málaga 29071, Spain
- M Teresa Medina-Juliá
- Department of Electronic Technology, Universidad de Málaga, E.T.S.I. Telecomunicación, Campus de Teatinos s/n, Málaga 29071, Spain
- Ricardo Ron-Angevin
- Department of Electronic Technology, Universidad de Málaga, E.T.S.I. Telecomunicación, Campus de Teatinos s/n, Málaga 29071, Spain
4. Ogino M, Kanoga S, Muto M, Mitsukura Y. Analysis of Prefrontal Single-Channel EEG Data for Portable Auditory ERP-Based Brain-Computer Interfaces. Front Hum Neurosci 2019;13:250. [PMID: 31404255] [PMCID: PMC6669913] [DOI: 10.3389/fnhum.2019.00250]
Abstract
An electroencephalogram (EEG)-based brain-computer interface (BCI) is a tool to non-invasively control computers by translating the electrical activity of the brain. This technology has the potential to provide patients who have severe generalized myopathy, such as those suffering from amyotrophic lateral sclerosis (ALS), with the ability to communicate. Recently, auditory oddball paradigms have been developed to implement more practical event-related potential (ERP)-based BCIs because they can operate without ocular activities. These paradigms generally make use of clinical (over 16-channel) EEG devices and natural sound stimuli to maintain the user's motivation during BCI operation; however, most ALS patients who have taken part in auditory ERP-based BCIs tend to complain about two factors: (i) total device cost and (ii) setup time. The development of a portable auditory ERP-based BCI could overcome considerable obstacles that prevent the use of this technology for communication in everyday life. To address this issue, we analyzed prefrontal single-channel EEG data acquired from a consumer-grade single-channel EEG device using a natural sound-based auditory oddball paradigm. In our experiments, EEG data were gathered from nine healthy subjects and one ALS patient. The performance of the auditory ERP-based BCI was quantified under one offline condition and two online conditions. The offline analysis indicated that our paradigm maintained high detection accuracy (%) and ITR (bits/min) across all subjects in a cross-validation procedure (for five commands: 70.0 ± 16.1 and 1.29 ± 0.93; for four commands: 73.8 ± 14.2 and 1.16 ± 0.78; for three commands: 78.7 ± 11.8 and 0.95 ± 0.61; and for two commands: 85.7 ± 8.6 and 0.63 ± 0.38). Furthermore, the first online analysis demonstrated that our paradigm also achieved high performance on new data in an online acquisition stream (for three commands: 80.0 ± 19.4 and 1.16 ± 0.83). The second online analysis measured online performance on a different day from the offline and first online analyses (for three commands: 62.5 ± 14.3 and 0.43 ± 0.36). These results indicate that prefrontal single-channel EEG has the potential to contribute to the development of a user-friendly portable auditory ERP-based BCI.
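The accuracy and ITR (bits/min) pairs quoted above follow the standard Wolpaw definition of ITR; a hedged sketch of that computation is below (the trial duration is a placeholder, since the abstract does not state it).

```python
# Standard Wolpaw ITR in bits/min; n_classes = number of commands,
# accuracy = classification accuracy, trial_seconds = time per selection (placeholder here).
import math

def itr_bits_per_min(n_classes, accuracy, trial_seconds):
    n, p = n_classes, accuracy
    if p <= 1.0 / n:
        bits = 0.0
    elif p >= 1.0:
        bits = math.log2(n)
    else:
        bits = math.log2(n) + p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_seconds

print(itr_bits_per_min(n_classes=3, accuracy=0.787, trial_seconds=30.0))  # illustrative values only
```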
Affiliation(s)
- Suguru Kanoga
- National Institute of Advanced Industrial Science and Technology, Tokyo, Japan
- Masatane Muto
- WITH ALS General Incorporated Foundation, Tokyo, Japan
- Yasue Mitsukura
- School of Integrated Design Engineering, Keio University, Kanagawa, Japan
5. Gao ZK, Guo W, Cai Q, Ma C, Zhang YB, Kurths J. Characterization of SSMVEP-based EEG signals using multiplex limited penetrable horizontal visibility graph. Chaos 2019;29:073119. [PMID: 31370406] [DOI: 10.1063/1.5108606]
Abstract
The steady-state motion visual evoked potential (SSMVEP)-based brain-computer interface (BCI), which incorporates the motion perception capabilities of the human visual system to alleviate the negative effects caused by strong visual stimulation in steady-state VEP paradigms, has attracted a great deal of attention. In this paper, we design an SSMVEP-based experiment using the Newton's ring paradigm. We then use canonical correlation analysis (CCA) and support vector machines (SVMs) to classify SSMVEP signals for SSMVEP-based electroencephalography (EEG) signal detection. We find that the classification accuracy of different subjects in the fatigue state is much lower than in the normal state. To probe into this, we develop a multiplex limited penetrable horizontal visibility graph method, which enables us to infer a brain network from 62-channel EEG signals. We then analyze the variation of the average weighted clustering coefficient and the weighted global efficiency corresponding to these two brain states and find that both network measures are lower in the fatigue state. The results suggest that the associations and information transfer efficiency among different brain regions become weaker when the brain state changes from normal to fatigue, providing new insight into the reduced classification accuracy. The promising classification results and these findings render the proposed methods particularly useful for analyzing EEG recordings from SSMVEP-based BCI systems.
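A hedged sketch of the standard CCA step for deciding which stimulus a trial corresponds to is given below; the candidate frequencies, harmonic count, and the omission of the SVM stage are simplifying assumptions, not the authors' pipeline.

```python
# CCA-based stimulus detection sketch: correlate each trial with sine/cosine references.
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_score(eeg, fs, freq, n_harmonics=2):
    """eeg: (n_samples, n_channels); returns the first canonical correlation."""
    t = np.arange(eeg.shape[0]) / fs
    ref = np.column_stack([fn(2 * np.pi * h * freq * t)
                           for h in range(1, n_harmonics + 1)
                           for fn in (np.sin, np.cos)])
    u, v = CCA(n_components=1).fit_transform(eeg, ref)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

def classify_trial(eeg, fs, candidate_freqs=(8.0, 10.0, 12.0, 15.0)):
    # Pick the candidate stimulus frequency with the highest canonical correlation.
    return max(candidate_freqs, key=lambda f: cca_score(eeg, fs, f))
```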
Affiliation(s)
- Zhong-Ke Gao
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Wei Guo
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Qing Cai
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Chao Ma
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Yuan-Bo Zhang
- School of Civil Engineering, Tianjin University, Tianjin 300072, China
- Jürgen Kurths
- Potsdam Institute for Climate Impact Research, Telegraphenberg A31, 14473 Potsdam, Germany
6. Baek HJ, Chang MH, Heo J, Park KS. Enhancing the Usability of Brain-Computer Interface Systems. Comput Intell Neurosci 2019;2019:5427154. [PMID: 31316556] [PMCID: PMC6604478] [DOI: 10.1155/2019/5427154]
Abstract
Brain-computer interfaces (BCIs) aim to enable people to interact with the external world through an alternative, nonmuscular communication channel that uses brain signal responses to complete specific cognitive tasks. BCIs have been growing rapidly during the past few years, with most of the BCI research focusing on system performance, such as improving accuracy or information transfer rate. Despite these advances, BCI research and development is still in its infancy and requires further consideration to significantly affect human experience in most real-world environments. This paper reviews the most recent studies and findings about ergonomic issues in BCIs. We review dry electrodes that can be used to detect brain signals with high enough quality to apply in BCIs and discuss their advantages, disadvantages, and performance. Also, an overview is provided of the wide range of recent efforts to create new interface designs that do not induce fatigue or discomfort during everyday, long-term use. The basic principles of each technique are described, along with examples of current applications in BCI research. Finally, we demonstrate a user-friendly interface paradigm that uses dry capacitive electrodes that do not require any preparation procedure for EEG signal acquisition. We explore the capacitively measured steady-state visual evoked potential (SSVEP) response to an amplitude-modulated visual stimulus and the auditory steady-state response (ASSR) to an auditory stimulus modulated by familiar natural sounds to verify their availability for BCI. We report the first results of an online demonstration that adopted this ergonomic approach to evaluating BCI applications. We expect BCI to become a routine clinical, assistive, and commercial tool through advanced EEG monitoring techniques and innovative interface designs.
Affiliation(s)
- Hyun Jae Baek
- Department of Medical and Mechatronics Engineering, Soonchunhyang University, Asan, Republic of Korea
- Min Hye Chang
- Korea Electrotechnology Research Institute (KERI), Ansan, Republic of Korea
- Jeong Heo
- Artificial Intelligence Laboratory, Software Center, LG Electronics, Seoul, Republic of Korea
- Kwang Suk Park
- Department of Biomedical Engineering, College of Medicine, Seoul National University, Seoul, Republic of Korea
7. Hübner D, Schall A, Prange N, Tangermann M. Eyes-Closed Increases the Usability of Brain-Computer Interfaces Based on Auditory Event-Related Potentials. Front Hum Neurosci 2018;12:391. [PMID: 30323749] [PMCID: PMC6172854] [DOI: 10.3389/fnhum.2018.00391]
Abstract
Recent research has demonstrated how brain-computer interfaces (BCIs) based on auditory stimuli can be used for communication and rehabilitation. In these applications, users are commonly instructed to avoid eye movements while keeping their eyes open. This secondary task can lead to exhaustion, and subjects may not succeed in suppressing eye movements. In this work, we investigate the option to use a BCI with eyes closed. Twelve healthy subjects participated in a single electroencephalography (EEG) session in which they listened to a rapid stream of bisyllabic words while alternately having their eyes open or closed. In addition, we assessed usability aspects of the two conditions with a questionnaire. Our analysis shows that closing the eyes does not reduce the number of eye artifacts and that event-related potential (ERP) responses and classification accuracies are comparable between both conditions. Importantly, we found that subjects expressed a significant general preference for the eyes-closed condition and were also less tense in that condition. Furthermore, switching between eyes-closed and eyes-open and vice versa is possible without a severe drop in classification accuracy. These findings suggest that eyes-closed operation should be considered a viable alternative in auditory BCIs and might be especially useful for subjects with limited control over their eye movements.
Affiliation(s)
- David Hübner
- Brain State Decoding Lab, Department of Computer Science, University of Freiburg, Freiburg, Germany
- Cluster of Excellence, BrainLinks-BrainTools, Freiburg, Germany
- Albrecht Schall
- Brain State Decoding Lab, Department of Computer Science, University of Freiburg, Freiburg, Germany
- Natalie Prange
- Brain State Decoding Lab, Department of Computer Science, University of Freiburg, Freiburg, Germany
- Michael Tangermann
- Brain State Decoding Lab, Department of Computer Science, University of Freiburg, Freiburg, Germany
- Cluster of Excellence, BrainLinks-BrainTools, Freiburg, Germany
8. Huang M, Jin J, Zhang Y, Hu D, Wang X. Usage of drip drops as stimuli in an auditory P300 BCI paradigm. Cogn Neurodyn 2017;12:85-94. [PMID: 29435089] [DOI: 10.1007/s11571-017-9456-y]
Abstract
Recently, many auditory BCIs have used beeps as auditory stimuli, although beeps sound unnatural and unpleasant to some people. It has been shown that natural sounds make people feel comfortable, decrease fatigue, and improve the performance of auditory BCI systems. The sound of dripping water (drip drop) is a natural sound that makes people feel relaxed and comfortable. In this work, three kinds of drip-drop sounds were used as stimuli in an auditory BCI system to improve its user-friendliness. This study explored whether drip drops could be used as stimuli in the auditory BCI system. The auditory BCI paradigm with drip-drop stimuli, called the drip-drop paradigm (DP), was compared with the auditory paradigm with beep stimuli, known as the beep paradigm (BP), in terms of event-related potential amplitudes, online accuracies, and scores on likability and difficulty, to demonstrate the advantages of DP. DP obtained significantly higher online accuracy and information transfer rate than BP (both p < 0.05, Wilcoxon signed-rank test). In addition, DP obtained higher likability scores (p < 0.05, Wilcoxon signed-rank test), with no significant difference in difficulty. The results showed that drip-drop sounds are reliable acoustic stimuli for an auditory BCI system.
Affiliation(s)
- Minqiang Huang
- Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai, People's Republic of China
- Jing Jin
- Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai, People's Republic of China
- Yu Zhang
- Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai, People's Republic of China
- Dewen Hu
- College of Mechatronics and Automation, National University of Defense Technology, Changsha, Hunan 410073, People's Republic of China
- Xingyu Wang
- Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai, People's Republic of China
9. Heo J, Baek HJ, Hong S, Chang MH, Lee JS, Park KS. Music and natural sounds in an auditory steady-state response based brain-computer interface to increase user acceptance. Comput Biol Med 2017;84:45-52. [PMID: 28342407] [DOI: 10.1016/j.compbiomed.2017.03.011]
Abstract
Patients with total locked-in syndrome are conscious; however, they cannot express themselves because most of their voluntary muscles are paralyzed, and many of these patients have lost their eyesight. To improve the quality of life of these patients, there is an increasing need for communication-supporting technologies that leverage the remaining senses of the patient along with physiological signals. The auditory steady-state response (ASSR) is an electrophysiologic response to auditory stimulation that is amplitude-modulated at a specific frequency. Leveraging the phenomenon whereby the ASSR is modulated by the listener's attentional concentration, a brain-computer interface paradigm was proposed to classify the selective attention of the patient. In this paper, we propose an auditory stimulation method that minimizes auditory stress by replacing the monotone carrier with familiar music and natural sounds for an ergonomic system. Piano and violin instrumentals were employed in the music sessions; the sounds of water streaming and cicadas singing were used in the natural sound sessions. Six healthy subjects participated in the experiment. Electroencephalograms were recorded using four electrodes (Cz, Oz, T7 and T8). Seven sessions were performed using different stimuli. The spectral power at 38 and 42 Hz and their ratio for each electrode were extracted as features. Linear discriminant analysis was used to classify the selections for each subject. In offline analysis, the average classification accuracies with a modulation index of 1.0 were 89.67% and 87.67% using music and natural sounds, respectively. In online experiments, the average classification accuracies were 88.3% and 80.0% using music and natural sounds, respectively. Using the proposed method, we obtained significantly higher user-acceptance scores while maintaining high average classification accuracy.
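A hedged sketch of the feature extraction and classification steps the abstract describes (spectral power at the two modulation frequencies plus their ratio per electrode, then linear discriminant analysis) is shown below; the PSD window length and epoching are assumptions, not the study's exact settings.

```python
# ASSR feature sketch: Welch PSD power at 38 Hz and 42 Hz plus their ratio, per channel.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def assr_features(epoch, fs, f1=38.0, f2=42.0):
    """epoch: (n_channels, n_samples) -> 1-D feature vector."""
    freqs, psd = welch(epoch, fs=fs, nperseg=int(2 * fs), axis=-1)
    p1 = psd[:, np.argmin(np.abs(freqs - f1))]
    p2 = psd[:, np.argmin(np.abs(freqs - f2))]
    return np.concatenate([p1, p2, p1 / p2])

# Usage sketch: X = np.array([assr_features(e, fs=512) for e in epochs]); y = attended side per epoch
# clf = LinearDiscriminantAnalysis().fit(X, y)
```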
Affiliation(s)
- Jeong Heo
- Interdisciplinary Program of Bioengineering, Seoul National University, Seoul, Republic of Korea
- Hyun Jae Baek
- Mobile Communication Business, Samsung Electronics Co., Ltd., Suwon, Republic of Korea
- Seunghyeok Hong
- Interdisciplinary Program of Bioengineering, Seoul National University, Seoul, Republic of Korea
- Min Hye Chang
- Advanced Medical Device Research Division, Korea Electro-Technology Research Institute, Ansan, Republic of Korea
- Jeong Su Lee
- Mobile Communication Business, Samsung Electronics Co., Ltd., Suwon, Republic of Korea
- Kwang Suk Park
- Department of Biomedical Engineering, College of Medicine, Seoul National University, Seoul, Republic of Korea
10. Minguillon J, Lopez-Gordo MA, Pelayo F. Trends in EEG-BCI for daily-life: Requirements for artifact removal. Biomed Signal Process Control 2017. [DOI: 10.1016/j.bspc.2016.09.005]
11. Lopez-Gordo MA, Grima Murcia MD, Padilla P, Pelayo F, Fernandez E. Asynchronous Detection of Trials Onset from Raw EEG Signals. Int J Neural Syst 2016;26:1650034. [PMID: 27377663] [DOI: 10.1142/s0129065716500349]
Abstract
Clinical processing of event-related potentials (ERPs) requires precise synchrony between the stimulation and acquisition units, which is guaranteed by means of a physical link between them. This precise synchrony is needed because temporal misalignments during trial averaging can lead to large deviations of peak times, causing errors in diagnosis or inefficient classification in brain-computer interfaces (BCIs). Outside the laboratory, mobile EEG systems and BCI headsets are not provided with such a physical link and are therefore inadequate for acquisition of ERPs. In this study, we propose a method for the asynchronous detection of trial onsets from raw EEG without physical links. We validate it with a BCI application based on the dichotic listening task. The user's goal was to attend the cued auditory message and report three keywords contained in it while ignoring the other message. The BCI's goal was to detect the attended message from the analysis of auditory ERPs. The rate of successful onset detection in both the synchronous condition (using the real onset) and the asynchronous condition (blind detection of trial onset from raw EEG) was 73%, with a synchronization error of less than 1 ms. The level of synchronization provided by this proposal would allow home-based acquisition of ERPs with low-cost BCI headsets and any media player unit without physical links between them.
Affiliation(s)
- M. A. Lopez-Gordo
- Department of Signal Theory, Telematics and Communications, University of Granada, Spain
- Nicolo Association, Churriana de la Vega, Granada, Spain
- M. D. Grima Murcia
- Institute of Bioengineering, University Miguel Hernández and CIBER BBN, Av. de la Universidad, 03202 Elche, Spain
- Pablo Padilla
- Department of Signal Theory, Communications and Networking, University of Granada, 18071 Granada, Spain
- F. Pelayo
- Department of Computer Architecture and Technology, University of Granada, c/ Periodista Daniel Saucedo, 18071 Granada, Spain
- E. Fernandez
- Institute of Bioengineering, University Miguel Hernández and CIBER BBN, Av. de la Universidad, 03202 Elche, Spain
12. Yoshimura N, Nishimoto A, Belkacem AN, Shin D, Kambara H, Hanakawa T, Koike Y. Decoding of Covert Vowel Articulation Using Electroencephalography Cortical Currents. Front Neurosci 2016;10:175. [PMID: 27199638] [PMCID: PMC4853397] [DOI: 10.3389/fnins.2016.00175]
Abstract
With the goal of providing assistive technology for the communication impaired, we proposed electroencephalography (EEG) cortical currents as a new approach for EEG-based brain-computer interface spellers. EEG cortical currents were estimated with a variational Bayesian method that uses functional magnetic resonance imaging (fMRI) data as a hierarchical prior. EEG and fMRI data were recorded from ten healthy participants during covert articulation of Japanese vowels /a/ and /i/, as well as during a no-imagery control task. Applying a sparse logistic regression (SLR) method to classify the three tasks, mean classification accuracy using EEG cortical currents was significantly higher than that using EEG sensor signals and was also comparable to accuracies in previous studies using electrocorticography. SLR weight analysis revealed vertices of EEG cortical currents that were highly contributive to classification for each participant, and the vertices showed discriminative time series signals according to the three tasks. Furthermore, functional connectivity analysis focusing on the highly contributive vertices revealed positive and negative correlations among areas related to speech processing. As the same findings were not observed using EEG sensor signals, our results demonstrate the potential utility of EEG cortical currents not only for engineering purposes such as brain-computer interfaces but also for neuroscientific purposes such as the identification of neural signaling related to language processing.
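Where the abstract mentions sparse logistic regression (SLR) picking out the most contributive cortical-current vertices, the sketch below uses scikit-learn's L1-penalised logistic regression as a rough stand-in; the paper's SLR is a Bayesian ARD variant, so this is only an approximation of the idea, and the feature matrix and regularisation strength are assumptions.

```python
# Rough stand-in for sparse decoding: L1-penalised logistic regression, where
# features with nonzero weights play the role of "contributive vertices".
import numpy as np
from sklearn.linear_model import LogisticRegression

def sparse_decode(X, y, c=0.1):
    """X: (n_trials, n_vertices) cortical-current features; y: task labels (/a/, /i/, rest)."""
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=c).fit(X, y)
    contributive = np.flatnonzero(np.any(clf.coef_ != 0, axis=0))  # indices of selected features
    return clf, contributive
```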
Affiliation(s)
- Natsue Yoshimura
- Precision and Intelligence Laboratory, Tokyo Institute of Technology, Yokohama, Japan
- Department of Functional Brain Research, National Center of Neurology and Psychiatry, National Institute of Neuroscience, Tokyo, Japan
- Department of Advanced Neuroimaging, Integrative Brain Imaging Center, National Center of Neurology and Psychiatry, Tokyo, Japan
- Atsushi Nishimoto
- Precision and Intelligence Laboratory, Tokyo Institute of Technology, Yokohama, Japan
- Department of Functional Brain Research, National Center of Neurology and Psychiatry, National Institute of Neuroscience, Tokyo, Japan
- Department of Advanced Neuroimaging, Integrative Brain Imaging Center, National Center of Neurology and Psychiatry, Tokyo, Japan
- Duk Shin
- Department of Electronics and Mechatronics, Tokyo Polytechnic University, Atsugi, Japan
- Hiroyuki Kambara
- Precision and Intelligence Laboratory, Tokyo Institute of Technology, Yokohama, Japan
- Takashi Hanakawa
- Department of Functional Brain Research, National Center of Neurology and Psychiatry, National Institute of Neuroscience, Tokyo, Japan
- Department of Advanced Neuroimaging, Integrative Brain Imaging Center, National Center of Neurology and Psychiatry, Tokyo, Japan
- Precursory Research for Embryonic Science and Technology, Japan Science and Technology Agency, Tokyo, Japan
- Yasuharu Koike
- Precision and Intelligence Laboratory, Tokyo Institute of Technology, Yokohama, Japan
- Department of Advanced Neuroimaging, Integrative Brain Imaging Center, National Center of Neurology and Psychiatry, Tokyo, Japan
- Solution Science Research Laboratory, Tokyo Institute of Technology, Yokohama, Japan
13. Dijkstra K, Brunner P, Gunduz A, Coon W, Ritaccio A, Farquhar J, Schalk G. Identifying the Attended Speaker Using Electrocorticographic (ECoG) Signals. Brain-Computer Interfaces 2015;2:161-173. [PMID: 26949710] [PMCID: PMC4776341] [DOI: 10.1080/2326263x.2015.1063363]
Abstract
People affected by severe neuro-degenerative diseases (e.g., late-stage amyotrophic lateral sclerosis (ALS) or locked-in syndrome) eventually lose all muscular control. Thus, they cannot use traditional assistive communication devices that depend on muscle control, or brain-computer interfaces (BCIs) that depend on the ability to control gaze. While auditory and tactile BCIs can provide communication to such individuals, their use typically entails an artificial mapping between the stimulus and the communication intent. This makes these BCIs difficult to learn and use. In this study, we investigated the use of selective auditory attention to natural speech as an avenue for BCI communication. In this approach, the user communicates by directing his/her attention to one of two simultaneously presented speakers. We used electrocorticographic (ECoG) signals in the gamma band (70-170 Hz) to infer the identity of the attended speaker, thereby removing the need to learn such an artificial mapping. Our results from twelve human subjects show that a single cortical location over the superior temporal gyrus or pre-motor cortex is typically sufficient to identify the attended speaker within 10 s and with 77% accuracy (chance level: 50%). These results lay the groundwork for future studies that may determine the real-time performance of BCIs based on selective auditory attention to speech.
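A hedged sketch of how a gamma-band (70-170 Hz) power envelope from a single ECoG channel could be compared against the two speakers' speech envelopes is shown below; the filtering choices and the simple correlation decision rule are generic illustrations of the idea, not the authors' pipeline.

```python
# Gamma-band envelope extraction and a simple attended-speaker decision rule.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def gamma_envelope(ecog, fs, band=(70.0, 170.0)):
    """ecog: 1-D samples from one channel -> instantaneous gamma amplitude envelope."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, ecog)))

def attended_speaker(ecog, fs, speech_env_1, speech_env_2):
    # Attribute attention to the speaker whose speech envelope correlates more with gamma power.
    g = gamma_envelope(ecog, fs)
    r1 = np.corrcoef(g, speech_env_1)[0, 1]
    r2 = np.corrcoef(g, speech_env_2)[0, 1]
    return 1 if r1 >= r2 else 2
```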
Affiliation(s)
- K. Dijkstra
- Ctr for Adapt Neurotech, Wadsworth Center, New York State Department of Health, Albany, NY
- Dept of Neurology, Albany Medical College, Albany, NY
- Donders Inst for Brain, Cognition and Behaviour, Radboud Univ Nijmegen, The Netherlands
- P. Brunner
- Ctr for Adapt Neurotech, Wadsworth Center, New York State Department of Health, Albany, NY
- Dept of Neurology, Albany Medical College, Albany, NY
- A. Gunduz
- Ctr for Adapt Neurotech, Wadsworth Center, New York State Department of Health, Albany, NY
- J. Crayton Pruitt Family Dept of Biomed Eng, Univ of Florida, Gainesville, FL
- W. Coon
- Ctr for Adapt Neurotech, Wadsworth Center, New York State Department of Health, Albany, NY
- Dept of Biomed Sci, State Univ of New York at Albany, Albany, NY
- A.L. Ritaccio
- Dept of Neurology, Albany Medical College, Albany, NY
- J. Farquhar
- Donders Inst for Brain, Cognition and Behaviour, Radboud Univ Nijmegen, The Netherlands
- G. Schalk
- Ctr for Adapt Neurotech, Wadsworth Center, New York State Department of Health, Albany, NY
- Dept of Neurology, Albany Medical College, Albany, NY
- Dept of Biomed Sci, State Univ of New York at Albany, Albany, NY
14. Simon N, Käthner I, Ruf CA, Pasqualotto E, Kübler A, Halder S. An auditory multiclass brain-computer interface with natural stimuli: Usability evaluation with healthy participants and a motor impaired end user. Front Hum Neurosci 2015;8:1039. [PMID: 25620924] [PMCID: PMC4288388] [DOI: 10.3389/fnhum.2014.01039]
Abstract
Brain-computer interfaces (BCIs) can serve as muscle-independent communication aids. Persons who are unable to control their eye muscles (e.g., in the completely locked-in state) or have severe visual impairments for other reasons need BCI systems that do not rely on the visual modality. For this reason, BCIs that employ auditory stimuli have been suggested. In this study, a multiclass BCI spelling system was implemented that uses animal voices with directional cues to code the rows and columns of a letter matrix. To reveal possible training effects with the system, 11 healthy participants performed spelling tasks on 2 consecutive days. In a second step, the system was tested by a participant with amyotrophic lateral sclerosis (ALS) in two sessions. In the first session, healthy participants spelled with an average accuracy of 76% (3.29 bits/min), which increased to 90% (4.23 bits/min) on the second day. Spelling accuracy for the participant with ALS was 20% in the first and 47% in the second session. The results indicate a strong training effect for both the healthy participants and the participant with ALS. While healthy participants reached high accuracies in both sessions, accuracies for the participant with ALS were not sufficient for satisfactory communication in either session. More training sessions might be needed to improve spelling accuracy. The study demonstrated the feasibility of the auditory BCI with healthy users and stresses the importance of training with auditory multiclass BCIs, especially for potential end users with disease.
Affiliation(s)
- Nadine Simon
- Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Tübingen, Germany
- Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Ivo Käthner
- Institute of Psychology, University of Würzburg, Würzburg, Germany
- Carolin A. Ruf
- Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Tübingen, Germany
- Emanuele Pasqualotto
- Psychological Sciences Research Institute, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
- Andrea Kübler
- Institute of Psychology, University of Würzburg, Würzburg, Germany
- Sebastian Halder
- Institute of Psychology, University of Würzburg, Würzburg, Germany
15. Höhne J, Tangermann M. Towards user-friendly spelling with an auditory brain-computer interface: the CharStreamer paradigm. PLoS One 2014;9:e98322. [PMID: 24886978] [PMCID: PMC4041754] [DOI: 10.1371/journal.pone.0098322]
Abstract
By decoding brain signals into control commands, brain-computer interfaces (BCIs) aim to establish an alternative communication pathway for locked-in patients. In contrast to most visual BCI approaches, which use event-related potentials (ERPs) of the electroencephalogram, auditory BCI systems are challenged with ERP responses that are less class-discriminant between attended and unattended stimuli. Furthermore, these auditory approaches have more complex interfaces, which impose a substantial workload on their users. Aiming for a maximally user-friendly spelling interface, this study introduces a novel auditory paradigm: "CharStreamer". The speller can be used with an instruction as simple as "please attend to what you want to spell". The stimuli of CharStreamer comprise 30 spoken sounds of letters and actions. As each of them is represented by the sound of itself and not by an artificial substitute, it can be selected in a one-step procedure. The mental mapping effort (from sound stimuli to actions) is thus minimized. Usability is further supported by an alphabetical stimulus presentation: contrary to random presentation orders, the user can foresee the presentation time of the target letter sound. Healthy, normal-hearing users (n = 10) of the CharStreamer paradigm displayed ERP responses that systematically differed between target and non-target sounds. Class-discriminant features, however, varied individually from the typical N1-P2 complex and P3 ERP components found in control conditions with random sequences. To fully exploit the sequential presentation structure of CharStreamer, novel data analysis approaches and classification methods were introduced. The results of online spelling tests showed that a competitive spelling speed can be achieved with CharStreamer. With respect to user rating, it clearly outperformed a control setup with random presentation sequences.
Affiliation(s)
- Johannes Höhne
- Machine Learning Laboratory, Berlin Institute of Technology, Berlin, Germany
- Neurotechnology Group, Berlin Institute of Technology, Berlin, Germany
- Michael Tangermann
- BrainLinks-BrainTools Excellence Cluster, University of Freiburg, Freiburg, Germany
17. Effects of augmentative visual training on audio-motor mapping. Hum Mov Sci 2014;35:145-55. [DOI: 10.1016/j.humov.2014.01.003]
Abstract
The purpose of this study was to determine the effect of augmentative visual feedback training on auditory-motor performance. Thirty-two healthy young participants used facial surface electromyography (sEMG) to control a human-machine interface (HMI) for which the output was vowel synthesis. An auditory-only (AO) group (n=16) trained with auditory feedback alone and an auditory-visual (AV) group (n=16) trained with auditory feedback and progressively-removed visual feedback. Subjects participated in three training sessions and one testing session over 3 days. During the testing session they were given novel targets to test auditory-motor generalization. We hypothesized that the auditory-visual group would perform better on the novel set of targets than the group that trained with auditory feedback only. Analysis of variance on the percentage of total targets reached indicated a significant interaction between group and session: individuals in the AV group performed significantly better than those in the AO group during early training sessions (while using visual feedback), but no difference was seen between the two groups during later sessions. Results suggest that augmentative visual feedback during training does not improve auditory-motor performance.
18. Thorp EB, Larson E, Stepp CE. Combined Auditory and Vibrotactile Feedback for Human-Machine-Interface Control. IEEE Trans Neural Syst Rehabil Eng 2013;22:62-8. [PMID: 23912500] [DOI: 10.1109/tnsre.2013.2273177]
Abstract
The purpose of this study was to determine the effect of the addition of binary vibrotactile stimulation to continuous auditory feedback (vowel synthesis) for human-machine interface (HMI) control. Sixteen healthy participants controlled facial surface electromyography to achieve 2-D targets (vowels). Eight participants used only real-time auditory feedback to locate targets whereas the other eight participants were additionally alerted to having achieved targets with confirmatory vibrotactile stimulation at the index finger. All participants trained using their assigned feedback modality (auditory alone or combined auditory and vibrotactile) over three sessions on three days and completed a fourth session on the third day using novel targets to assess generalization. Analyses of variance performed on the 1) percentage of targets reached and 2) percentage of trial time at the target revealed a main effect for feedback modality: participants using combined auditory and vibrotactile feedback performed significantly better than those using auditory feedback alone. No effect was found for session or the interaction of feedback modality and session, indicating a successful generalization to novel targets but lack of improvement over training sessions. Future research is necessary to determine the cognitive cost associated with combined auditory and vibrotactile feedback during HMI control.
19. Choi I, Rajaram S, Varghese LA, Shinn-Cunningham BG. Quantifying attentional modulation of auditory-evoked cortical responses from single-trial electroencephalography. Front Hum Neurosci 2013;7:115. [PMID: 23576968] [PMCID: PMC3616343] [DOI: 10.3389/fnhum.2013.00115]
Abstract
Selective auditory attention is essential for human listeners to be able to communicate in multi-source environments. Selective attention is known to modulate the neural representation of the auditory scene, boosting the representation of a target sound relative to the background, but the strength of this modulation, and the mechanisms contributing to it, are not well understood. Here, listeners performed a behavioral experiment demanding sustained, focused spatial auditory attention while we measured cortical responses using electroencephalography (EEG). We presented three concurrent melodic streams; listeners were asked to attend and analyze the melodic contour of one of the streams, randomly selected from trial to trial. In a control task, listeners heard the same sound mixtures, but performed the contour judgment task on a series of visual arrows, ignoring all auditory streams. We found that the cortical responses could be fit as weighted sum of event-related potentials evoked by the stimulus onsets in the competing streams. The weighting to a given stream was roughly 10 dB higher when it was attended compared to when another auditory stream was attended; during the visual task, the auditory gains were intermediate. We then used a template-matching classification scheme to classify single-trial EEG results. We found that in all subjects, we could determine which stream the subject was attending significantly better than by chance. By directly quantifying the effect of selective attention on auditory cortical responses, these results reveal that focused auditory attention both suppresses the response to an unattended stream and enhances the response to an attended stream. The single-trial classification results add to the growing body of literature suggesting that auditory attentional modulation is sufficiently robust that it could be used as a control mechanism in brain-computer interfaces (BCIs).
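The template-matching classification described above can be illustrated with a very small sketch; here the per-class templates are simply averaged training epochs, and the channel handling and epoch alignment are assumptions rather than the study's exact scheme.

```python
# Template matching: correlate a single-trial response with per-class average templates.
import numpy as np

def fit_templates(epochs, labels):
    """epochs: (n_trials, n_samples), labels: (n_trials,) -> {label: template}."""
    return {c: epochs[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify_trial(trial, templates):
    # Assign the label whose template correlates best with the single-trial response.
    scores = {c: np.corrcoef(trial, t)[0, 1] for c, t in templates.items()}
    return max(scores, key=scores.get)
```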
Affiliation(s)
- Inyong Choi
- Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA, USA
20. Categorical vowel perception enhances the effectiveness and generalization of auditory feedback in human-machine-interfaces. PLoS One 2013;8:e59860. [PMID: 23527278] [PMCID: PMC3602293] [DOI: 10.1371/journal.pone.0059860]
Abstract
Human-machine interface (HMI) designs offer the possibility of improving quality of life for patient populations as well as augmenting normal user function. Despite pragmatic benefits, auditory feedback remains underutilized for HMI control, in part due to observed limitations in effectiveness. The goal of this study was to determine the extent to which categorical speech perception could be used to improve an auditory HMI. Using surface electromyography, 24 healthy speakers of American English participated in 4 sessions to learn to control an HMI using auditory feedback (provided via vowel synthesis). Participants trained on 3 targets in sessions 1–3 and were tested on 3 novel targets in session 4. An “established categories with text cues” group of eight participants was trained and tested on auditory targets corresponding to standard American English vowels using auditory and text target cues. An “established categories without text cues” group of eight participants was trained and tested on the same targets using only auditory cuing of target vowel identity. A “new categories” group of eight participants was trained and tested on targets that corresponded to vowel-like sounds not part of American English. Analyses of user performance revealed significant effects of session and group (the established categories groups versus the new categories group), and a trend toward an interaction between session and group. Results suggest that auditory feedback can be effectively used for HMI operation when paired with established categorical (native vowel) targets with an unambiguous cue.
21. Hands GL, Larson E, Stepp CE. The role of augmentative visual training in auditory human-machine-interface performance. Annu Int Conf IEEE Eng Med Biol Soc 2013;2013:2804-2807. [PMID: 24110310] [DOI: 10.1109/embc.2013.6610123]
Abstract
The purpose of this study was to evaluate the effect of augmentative visual feedback training on performance using auditory feedback alone for human-machine interface (HMI) control. Sixteen healthy participants used bilateral facial surface electromyography to achieve two-dimensional control to reach vowel targets. Eight participants trained with combined visual and auditory feedback, while eight participants trained with real-time auditory feedback only. Each subject participated in four sessions over three days; three sessions with their designated feedback modality (auditory only or auditory with supplementary visual) and a fourth session on the third day using novel vowel targets to test generalization of auditory-motor learning. Analyses of variance performed on the percentage of total targets reached demonstrated a main effect of group and the interaction of group and session. Individuals provided with augmentative visual feedback during training outperformed individuals using auditory feedback alone in initial training sessions. However, training with augmentative visual feedback had no effect on individuals' training and generalization performance using auditory feedback alone after three days of training.
22. Hill NJ, Moinuddin A, Häuser AK, Kienzle S, Schalk G. Communication and control by listening: toward optimal design of a two-class auditory streaming brain-computer interface. Front Neurosci 2012;6:181. [PMID: 23267312] [PMCID: PMC3525941] [DOI: 10.3389/fnins.2012.00181]
Abstract
Most brain-computer interface (BCI) systems require users to modulate brain signals in response to visual stimuli. Thus, they may not be useful to people with limited vision, such as those with severe paralysis. One important approach for overcoming this issue is auditory streaming, an approach whereby a BCI system is driven by shifts of attention between two simultaneously presented auditory stimulus streams. Motivated by the long-term goal of translating such a system into a reliable, simple yes-no interface for clinical usage, we aim to answer two main questions. First, we asked which of two previously published variants provides superior performance: a fixed-phase (FP) design in which the streams have equal period and opposite phase, or a drifting-phase (DP) design where the periods are unequal. We found FP to be superior to DP (p = 0.002): average performance levels were 80 and 72% correct, respectively. We were also able to show, in a pilot with one subject, that auditory streaming can support continuous control and neurofeedback applications: by shifting attention between ongoing left and right auditory streams, the subject was able to control the position of a paddle in a computer game. Second, we examined whether the system is dependent on eye movements, since it is known that eye movements and auditory attention may influence each other, and any dependence on the ability to move one’s eyes would be a barrier to translation to paralyzed users. We discovered that, despite instructions, some subjects did make eye movements that were indicative of the direction of attention. However, there was no correlation, across subjects, between the reliability of the eye movement signal and the reliability of the BCI system, indicating that our system was configured to work independently of eye movement. Together, these findings are an encouraging step forward toward BCIs that provide practical communication and control options for the most severely paralyzed users.
Affiliation(s)
- N Jeremy Hill
- New York State Department of Health, Wadsworth Center, Albany, NY, USA
23. Maddox RK, Cheung W, Lee AKC. Selective attention in an overcrowded auditory scene: implications for auditory-based brain-computer interface design. J Acoust Soc Am 2012;132:EL385-EL390. [PMID: 23145699] [PMCID: PMC3482251] [DOI: 10.1121/1.4757696]
Abstract
Listeners are good at attending to one auditory stream in a crowded environment. However, is there an upper limit of streams present in an auditory scene at which this selective attention breaks down? Here, participants were asked to attend one stream of spoken letters amidst other letter streams. In half of the trials, an initial primer was played, cueing subjects to the sound configuration. Results indicate that performance increases with token repetitions. Priming provided a performance benefit, suggesting that stream selection, not formation, is the bottleneck associated with attention in an overcrowded scene. Results' implications for brain-computer interfaces are discussed.
Affiliation(s)
- Ross K Maddox
- Department of Speech and Hearing Sciences, and Institute for Learning and Brain Sciences, University of Washington, Seattle, Washington 98195, USA