1
Ramkumar E, Paulraj M. Optimized FFNN with multichannel CSP-ICA framework of EEG signal for BCI. Comput Methods Biomech Biomed Engin 2025;28:61-78. PMID: 38404196. DOI: 10.1080/10255842.2024.2319701.
Abstract
The electroencephalogram (EEG) of the patient is used to identify their motor intention, which is then converted into a control signal through a motor-imagery-based brain-computer interface (BCI). When extracting features from EEG signals, building a BCI is difficult partly because of the high dimensionality of the data. The suggested methodology comprises three stages: pre-processing, feature extraction and selection, and classification. To remove unwanted artifacts, the EEG signals are filtered by a fifth-order Butterworth multichannel band-pass filter, which decreases execution time and memory use and thereby improves system performance. A novel multichannel optimized CSP-ICA feature extraction technique then separates discriminative from non-discriminative information in the EEG channels. Furthermore, the CSP stage uses an Artificial Bee Colony (ABC) algorithm to automatically identify the globally optimal frequency-band and time-interval combination for extracting and classifying common spatial pattern features. Finally, a tunable optimized feed-forward neural network (FFNN) classifier, which combines an FFNN with the Tunable-Q wavelet transform, extracts and categorizes the temporal- and frequency-domain features. The proposed framework therefore optimizes signal processing, enabling enhanced EEG signal classification for BCI applications. The results show that models using the tunable optimized FFNN achieve classification accuracy more than 20% higher than existing models.
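The CSP stage described above can be sketched in a few lines. This is a generic common-spatial-patterns implementation for illustration only, not the authors' ABC-optimized CSP-ICA pipeline; the array shapes and `n_pairs` parameter are assumptions:

```python
import numpy as np

def csp_filters(X1, X2, n_pairs=2):
    """Generic Common Spatial Patterns (illustrative sketch, not the
    paper's optimized pipeline). X1, X2: trials for the two classes,
    shape (trials, channels, samples). Returns spatial filters that
    maximize variance for one class while minimizing it for the other."""
    def mean_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)

    C1, C2 = mean_cov(X1), mean_cov(X2)
    # Whiten the composite covariance, then diagonalize class 1 in
    # the whitened space (the standard two-step CSP solution).
    vals, vecs = np.linalg.eigh(C1 + C2)
    P = vecs @ np.diag(vals ** -0.5) @ vecs.T        # whitening matrix
    d, B = np.linalg.eigh(P @ C1 @ P.T)
    order = np.argsort(d)[::-1]
    W = B[:, order].T @ P                             # full filter bank
    # Filters at both ends of the eigenvalue spectrum are the most
    # discriminative; keep n_pairs from each end.
    idx = np.r_[:n_pairs, -n_pairs:0]
    return W[idx]

def log_var_features(X, W):
    """Log-variance of spatially filtered trials: the classic CSP feature."""
    Z = np.einsum('fc,tcs->tfs', W, X)
    v = Z.var(axis=2)
    return np.log(v / v.sum(axis=1, keepdims=True))
```

In a full pipeline, these log-variance features would feed the classifier; the paper additionally tunes the frequency band and time window with ABC before this step.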
Affiliation(s)
- E Ramkumar
- Sri Ramakrishna Institute of Technology, Coimbatore, India
- M Paulraj
- Sri Ramakrishna Institute of Technology, Coimbatore, India
2
Fu R, Niu S, Feng X, Shi Y, Jia C, Zhao J, Wen G. Performance investigation of MVMD-MSI algorithm in frequency recognition for SSVEP-based brain-computer interface and its application in robotic arm control. Med Biol Eng Comput 2024. PMID: 39725763. DOI: 10.1007/s11517-024-03236-3.
Abstract
This study focuses on improving the performance of steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) for robotic control systems. The challenge lies in effectively reducing the impact of artifacts on the raw data to improve both quality and reliability. The proposed MVMD-MSI algorithm combines the advantages of multivariate variational mode decomposition (MVMD) and the multivariate synchronization index (MSI). Compared to widely used algorithms, the novelty of this method is its ability to decompose nonlinear and non-stationary EEG signals into intrinsic mode functions (IMFs) across different frequency bands with the best center frequency and bandwidth. SSVEP decoding performance can therefore be improved, and the effectiveness of MVMD-MSI was evaluated on a robotic arm with six degrees of freedom. Offline experiments were conducted to optimize the algorithm's parameters, resulting in significant improvements; the algorithm also performed well with fewer channels and shorter data lengths. In online experiments, it achieved an average accuracy of 98.31% at 1.8 s, confirming its feasibility and effectiveness for real-time SSVEP-BCI robotic arm applications. The proposed MVMD-MSI algorithm represents a significant advancement in SSVEP analysis for robotic control systems: it enhances decoding performance and shows promise for practical application in this field.
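The MSI half of the method admits a compact sketch. The following implements the standard multivariate synchronization index between an EEG segment and sin/cos references at a candidate frequency; it is a sketch of the usual MSI formulation, not the authors' MVMD-MSI code, and the harmonic count is an assumption:

```python
import numpy as np

def msi(X, freq, fs, n_harmonics=2):
    """Multivariate synchronization index between EEG X (channels x samples)
    and sin/cos references at a candidate SSVEP frequency. Returns a value
    in [0, 1]: 0 for no synchronization, 1 for perfect synchronization."""
    n = X.shape[1]
    t = np.arange(n) / fs
    Y = np.vstack([g(2 * np.pi * freq * (h + 1) * t)
                   for h in range(n_harmonics) for g in (np.sin, np.cos)])
    C = np.corrcoef(np.vstack([X, Y]))
    nx = X.shape[0]

    def inv_sqrt(A):
        d, V = np.linalg.eigh(A)
        return V @ np.diag(d ** -0.5) @ V.T

    # Whiten each set separately so only cross-correlation remains.
    U = np.zeros_like(C)
    U[:nx, :nx] = inv_sqrt(C[:nx, :nx])
    U[nx:, nx:] = inv_sqrt(C[nx:, nx:])
    R = U @ C @ U.T
    lam = np.linalg.eigvalsh(R)
    lam = np.clip(lam / lam.sum(), 1e-12, None)   # normalized eigenvalues
    # MSI = 1 - normalized eigenvalue entropy.
    return 1 + np.sum(lam * np.log(lam)) / np.log(len(lam))
```

Frequency recognition then picks the stimulus frequency with the highest index, e.g. `max(freqs, key=lambda f: msi(X, f, fs))`; the paper applies this after MVMD has isolated the relevant IMFs.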
Affiliation(s)
- Rongrong Fu
- Measurement Technology and Instrumentation Key Lab of Hebei Province, Department of Electrical Engineering, Yanshan University, Qinhuangdao, China
- Shaoxiong Niu
- Measurement Technology and Instrumentation Key Lab of Hebei Province, Department of Electrical Engineering, Yanshan University, Qinhuangdao, China
- Xiaolei Feng
- Measurement Technology and Instrumentation Key Lab of Hebei Province, Department of Electrical Engineering, Yanshan University, Qinhuangdao, China
- Ye Shi
- School of Electrical Engineering and the Key Laboratory of Intelligent Rehabilitation and Neuromodulation of Hebei Province, Yanshan University, Qinhuangdao, China
- Chengcheng Jia
- Department of Electrical, Computer & Biomedical Engineering, Ryerson University, Toronto, Canada
- Jing Zhao
- School of Electrical Engineering and the Key Laboratory of Intelligent Rehabilitation and Neuromodulation of Hebei Province, Yanshan University, Qinhuangdao, China
- Guilin Wen
- School of Mechanical Engineering, Yanshan University, Qinhuangdao, China
3
Toyama H, Kawamoto H, Sankai Y. Cybernic robot hand-arm that realizes cooperative work as a new hand-arm for people with a single upper-limb dysfunction. Front Robot AI 2024;11:1455582. PMID: 39502464. PMCID: PMC11535860. DOI: 10.3389/frobt.2024.1455582.
Abstract
A robot hand-arm that can perform various tasks together with the unaffected arm could ease the daily lives of patients with a single upper-limb dysfunction. Because the patient's other arm functions normally, smooth interaction between the robot and the patient is desirable. If the robot can move in response to the user's intentions and cooperate with the unaffected arm, even without detailed operation, it can effectively assist with daily tasks. This study proposes and develops a cybernic robot hand-arm with four features: 1) input of user intention via bioelectrical signals from the paralyzed arm, the unaffected arm's motion, and voice; 2) autonomous control of support movements; 3) a control system that integrates voluntary and autonomous control by combining 1) and 2), allowing smooth work support in cooperation with the unaffected arm and reflecting intention as if the robot were part of the body; and 4) a learning function that extends work support to various tasks in daily use. We confirmed the feasibility and usefulness of the proposed system through a pilot study involving three patients. The system learns to support new tasks by working with the user through an operating function that does not require the involvement of the unaffected arm. It divides support actions into movement phases and learns the phase-shift conditions from sensor information about the user's intention. After learning, the system autonomously performs the learned support actions through voluntary phase shifts driven by the user's intention, as expressed through bioelectrical signals, the unaffected arm's motion, and voice, enabling smooth collaborative movement with the unaffected arm. Experiments with patients demonstrated that the system could learn and provide smooth work support in cooperation with the unaffected arm, successfully completing tasks the patients find difficult. Questionnaire responses subjectively confirmed that cooperative work according to the user's intention was achieved and that the work time was within a feasible range for daily life. Furthermore, participants who used bioelectrical signals from their paralyzed arm perceived the system as part of their body. We thus confirmed the feasibility and usefulness of the proposed method across various cooperative task supports.
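The phase-based support logic described above, i.e. learned phase-shift conditions evaluated over fused intention inputs, can be illustrated with a minimal state machine. The phase names and sensor keys below are invented for illustration and are not the system's actual interface:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class PhaseController:
    """Hypothetical sketch of phase-based support: the real system learns
    its shift conditions from bioelectrical, motion, and voice inputs."""
    phases: List[str]
    # One predicate per phase over the fused sensor dict: True -> advance.
    shift_conditions: Dict[str, Callable[[dict], bool]]
    current: int = 0

    def step(self, sensors: dict) -> str:
        """Advance to the next phase when the current phase's learned
        condition on the sensor inputs is satisfied; return the active phase."""
        phase = self.phases[self.current]
        if (self.current < len(self.phases) - 1
                and self.shift_conditions[phase](sensors)):
            self.current += 1
        return self.phases[self.current]
```

For example, a "reach" phase might advance when the unaffected arm nears the object, and a "grasp" phase when a voice command or bioelectrical signal is detected.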
Affiliation(s)
- Hiroaki Toyama
- Center for Cybernics Research, University of Tsukuba, Tsukuba, Japan
- CYBERDYNE, Inc., Tsukuba, Japan
- Hiroaki Kawamoto
- Center for Cybernics Research, University of Tsukuba, Tsukuba, Japan
- CYBERDYNE, Inc., Tsukuba, Japan
- Faculty of Engineering, Information and Systems, University of Tsukuba, Tsukuba, Japan
- Yoshiyuki Sankai
- Center for Cybernics Research, University of Tsukuba, Tsukuba, Japan
- CYBERDYNE, Inc., Tsukuba, Japan
- Faculty of Engineering, Information and Systems, University of Tsukuba, Tsukuba, Japan
4
Kim M, Choi MS, Jang GR, Bae JH, Park HS. EEG-controlled tele-grasping for undefined objects. Front Neurorobot 2023;17:1293878. PMID: 38186671. PMCID: PMC10770246. DOI: 10.3389/fnbot.2023.1293878.
Abstract
This paper presents a teleoperation system for robotic grasping of undefined objects based on real-time electroencephalography (EEG) measurement and shared autonomy. When grasping an undefined object in an unstructured environment, real-time human decisions are necessary, since fully autonomous grasping may not handle uncertain situations. The proposed system allows a wide range of human decisions throughout the entire grasping procedure, including 3D movement of the gripper, selection of a proper grasping posture, and adjustment of the grip force. These multiple decision-making procedures were implemented with six flickering blocks eliciting steady-state visually evoked potentials (SSVEP), by dividing the grasping task into predefined substeps: approaching the object, selecting posture and grip force, grasping, transporting to the desired position, and releasing. The graphical user interface (GUI) displays the current substep, with simple symbols beside each flickering block for quick understanding. Tele-grasping of various objects was demonstrated using real-time human selection among four possible postures and three levels of grip force. The system can be adapted to other sequential EEG-controlled teleoperation tasks that require complex human decisions.
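A common way to decide which of several flickering blocks the user is attending is canonical correlation analysis (CCA) against sin/cos reference signals. The sketch below assumes that standard approach and is not necessarily the decoder used in this paper; the frequency list and harmonic count are placeholders:

```python
import numpy as np

def cca_corr(X, Y):
    """Largest canonical correlation between two data sets whose rows are
    variables and columns are samples (standard QR/SVD formulation)."""
    Qx, _ = np.linalg.qr(X.T - X.T.mean(axis=0))
    Qy, _ = np.linalg.qr(Y.T - Y.T.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def detect_flicker(X, freqs, fs, n_harmonics=2):
    """Return the flicker frequency whose sin/cos reference set correlates
    best with the multichannel EEG segment X (channels x samples)."""
    t = np.arange(X.shape[1]) / fs

    def refs(f):
        return np.vstack([g(2 * np.pi * f * (h + 1) * t)
                          for h in range(n_harmonics)
                          for g in (np.sin, np.cos)])

    return max(freqs, key=lambda f: cca_corr(X, refs(f)))
```

Each detected frequency would then map to one command of the current substep (e.g. a movement direction, posture choice, or grip-force level).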
Affiliation(s)
- Minki Kim
- Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- Myoung-Su Choi
- Applied Robot R&D Department, Korea Institute of Industrial Technology, Ansan, Republic of Korea
- Ga-Ram Jang
- Applied Robot R&D Department, Korea Institute of Industrial Technology, Ansan, Republic of Korea
- Ji-Hun Bae
- Applied Robot R&D Department, Korea Institute of Industrial Technology, Ansan, Republic of Korea
- Hyung-Soon Park
- Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
5
Zhang Y, Qian K, Xie SQ, Shi C, Li J, Zhang ZQ. SSVEP-Based Brain-Computer Interface Controlled Robotic Platform With Velocity Modulation. IEEE Trans Neural Syst Rehabil Eng 2023;31:3448-3458. PMID: 37624718. DOI: 10.1109/tnsre.2023.3308778.
Abstract
Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) have been extensively studied owing to benefits such as non-invasiveness, a high information transfer rate, and ease of use. SSVEP-based BCIs have been investigated in various applications by mapping brain signals to robot control commands. However, the movement direction and speed are generally fixed and prescribed, neglecting the user's need to change velocity during practical use. In this study, we propose a velocity modulation method based on stimulus brightness for controlling a robotic arm in an SSVEP-based BCI system. A stimulation interface was designed incorporating flickers, a target, and a cursor workspace. Synchronizing the cursor and the robotic arm means the subject's gaze does not need to switch between the stimuli and the robot. The feature vector consists of signal characteristics and the classification result. A Gaussian mixture model (GMM) and Bayesian inference are then used to calculate the posterior probabilities that the signal came from a high- or low-brightness flicker, and a brain-actuated speed function is built from the posterior probability difference. Finally, the historical velocity is taken into account to determine the final velocity. To demonstrate the effectiveness of the proposed method, online experiments including single- and multi-target reaching tasks were conducted. The extensive experimental results validated the method's feasibility in reducing reaching time and achieving proximity to the target.
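The brightness-posterior idea can be sketched with two one-dimensional Gaussian components standing in for the fitted GMM. The component parameters, smoothing factor, and exact speed function below are illustrative assumptions, not the paper's values:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a 1-D Gaussian (stand-in for one GMM component)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def brightness_posterior(feature, params_high, params_low, prior_high=0.5):
    """Posterior probability that the SSVEP feature came from the
    high-brightness flicker, by Bayes' rule over the two components."""
    lh = gaussian_pdf(feature, *params_high) * prior_high
    ll = gaussian_pdf(feature, *params_low) * (1 - prior_high)
    return lh / (lh + ll)

def update_velocity(v_prev, feature, params_high, params_low,
                    v_max=1.0, alpha=0.7):
    """Map the posterior difference to a target speed, then blend with the
    historical velocity (alpha) so speed changes stay smooth."""
    p_high = brightness_posterior(feature, params_high, params_low)
    v_target = v_max * max(0.0, p_high - (1 - p_high))  # faster when 'high' is likely
    return alpha * v_prev + (1 - alpha) * v_target
```

In the paper's setting the GMM parameters would be fitted to each user's data rather than fixed as here.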
6
Ai J, Meng J, Mai X, Zhu X. BCI Control of a Robotic Arm Based on SSVEP With Moving Stimuli for Reach and Grasp Tasks. IEEE J Biomed Health Inform 2023;27:3818-3829. PMID: 37200132. DOI: 10.1109/jbhi.2023.3277612.
Abstract
Brain-computer interfaces (BCIs) provide a novel technology for patients and healthy human subjects to control a robotic arm. BCI control of a robotic arm for reaching and grasping tasks in an unstructured environment remains challenging because current BCI technology does not meet the requirement of manipulating a multi-degree-of-freedom robotic arm accurately and robustly. BCIs based on steady-state visual evoked potential (SSVEP) can deliver a high information transfer rate; however, the conventional SSVEP paradigm fails to support continuous and accurate robotic arm control because users have to switch their gaze frequently between the flickering stimuli and the target. This study proposes a novel SSVEP paradigm in which the flickering stimuli are attached to the robotic arm's gripper and move with it. First, an offline experiment was designed to investigate the effects of moving flickering stimuli on SSVEP responses and decoding accuracy. Then, contrast experiments were conducted in which twelve subjects participated in a robotic arm control experiment using both paradigm one (P1, moving flickering stimuli) and paradigm two (P2, conventional fixed flickering stimuli), with a block randomization design to balance their sequences. Double blinks triggered the grasping action asynchronously whenever subjects were confident that the position of the robotic arm's gripper was accurate enough. Experimental results showed that P1, with moving flickering stimuli, provided much better control performance than the conventional P2 in completing a reaching and grasping task in an unstructured environment. Subjects' subjective feedback, scored on a NASA-TLX mental workload scale, also corroborated the BCI control performance. These results suggest that the proposed SSVEP-BCI control interface offers a better solution for accurate robotic arm reaching and grasping tasks.
7
Manual 3D Control of an Assistive Robotic Manipulator Using Alpha Rhythms and an Auditory Menu: A Proof-of-Concept. Signals 2022. DOI: 10.3390/signals3020024.
Abstract
Brain-computer interfaces (BCIs) have been regarded as potential tools for individuals with severe motor disabilities, such as those with amyotrophic lateral sclerosis, which render movement-based interfaces unusable. This study aims to develop a dependent BCI system for manual end-point control of a robotic arm. A proof-of-concept system was devised using parieto-occipital alpha-wave modulation and a cyclic menu with auditory cues. Users choose a movement to be executed and stop the action asynchronously when necessary; tolerance intervals allow users to cancel or confirm actions. Eight able-bodied subjects used the system to perform a pick-and-place task. To investigate potential learning effects, the experiment was conducted twice over two consecutive days. Subjects obtained satisfactory completion rates (84.0 ± 15.0% and 74.4 ± 34.5% for the first and second day, respectively) and high path efficiency (88.9 ± 11.7% and 92.2 ± 9.6%). Subjects took on average 439.7 ± 203.3 s to complete each task, but the robot was in motion only 10% of the time. There was no significant difference in performance between the two days. The developed control scheme provided users with intuitive control, but a considerable amount of time was spent waiting for the right target (auditory cue); incorporating other brain signals may increase its speed.
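The alpha-modulation "switch" underlying this control scheme can be sketched as a relative band-power threshold. The threshold value and single-channel simplification below are assumptions for illustration, not the study's calibration:

```python
import numpy as np

def band_power(x, fs, f_lo=8.0, f_hi=13.0):
    """Relative power in a frequency band (default: the alpha band,
    8-13 Hz) of a single-channel EEG segment, via the FFT periodogram."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                       # remove DC before the FFT
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].sum() / psd.sum()

def alpha_switch(x, fs, threshold=0.4):
    """Binary 'brain switch': True when relative alpha power (e.g. during
    deliberate modulation over parieto-occipital sites) exceeds a
    per-user threshold. The 0.4 default is a placeholder."""
    return band_power(x, fs) > threshold
```

In a cyclic-menu design like this one, the switch output sampled at each auditory cue is what confirms or cancels the currently announced action.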