1. Hossain A, Khan P, Kader MF. Imagined speech classification exploiting EEG power spectrum features. Med Biol Eng Comput 2024; 62:2529-2544. [PMID: 38632207] [DOI: 10.1007/s11517-024-03083-2]
Abstract
Imagined speech recognition has developed into a significant research topic in the field of brain-computer interfaces. This innovative technique holds great promise as a communication tool, providing essential help to people with impairments. An imagined speech recognition model is proposed in this paper to identify the ten most frequently used English letters (A, D, E, H, I, N, O, R, S, T) and the numerals 0 to 9. A novel electroencephalogram (EEG) dataset was created by measuring the brain activity of 30 people while they imagined these letters and digits. As part of signal preprocessing, the EEG signals are filtered before extracting delta, theta, alpha, and beta band power features. These features are used as input for classification with support vector machine (SVM), k-nearest neighbors (kNN), and random forest (RF) classifiers. The RF classifier outperformed the others in terms of classification accuracy, achieving 99.38% at the coarse level and 95.39% at the fine level. The study also reveals that the beta frequency band and the frontal lobe of the brain play crucial roles in imagined speech recognition. Furthermore, a comparative analysis against state-of-the-art techniques demonstrates the efficacy of the proposed model.
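
As a rough sketch of the pipeline this abstract describes, the band-power extraction and random-forest step could look as follows; the sampling rate, window length, and band edges are illustrative assumptions, not the paper's values.

```python
# A minimal sketch (not the authors' code) of the band-power + random-forest
# pipeline described in the abstract. Sampling rate, window length, and band
# edges below are illustrative assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

FS = 250  # assumed sampling rate (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(trial):
    """trial: (n_channels, n_samples) EEG window -> flat band-power vector."""
    freqs, psd = welch(trial, fs=FS, nperseg=FS, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))  # mean power per channel
    return np.concatenate(feats)

# X_trials: (n_trials, n_channels, n_samples); y: imagined letter/digit labels
def fit_rf(X_trials, y):
    X = np.stack([band_powers(t) for t in X_trials])
    return RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
```
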
Affiliation(s)
- Arman Hossain
- Department of Electrical and Electronic Engineering, University of Chittagong, Chittagong 4331, Bangladesh
- Protima Khan
- Department of Electrical and Electronic Engineering, University of Chittagong, Chittagong 4331, Bangladesh
- Md Fazlul Kader
- Department of Electrical and Electronic Engineering, University of Chittagong, Chittagong 4331, Bangladesh

2. Yao Z, Sun K, He S. Synchronization in fractional-order neural networks by the energy balance strategy. Cogn Neurodyn 2024; 18:701-713. [PMID: 39554725] [PMCID: PMC11564445] [DOI: 10.1007/s11571-023-10023-7]
Abstract
To account for individual differences between neurons, a fractional-order framework is introduced, in which neurons with different orders represent the individual differences arising during cell differentiation. In this paper, the fractional-order FitzHugh–Nagumo (FHN) neural circuit is used to reproduce firing patterns. In addition, an energy balance strategy is applied to govern inter-neuronal communication: neurons with an energy imbalance exchange information, whereas the synaptic channels are blocked once energy balance is achieved. Two neurons coupled by this strategy achieve phase synchronization and phase locking, indicating that they generate spikes simultaneously or at a fixed interval. Similar synchronization results are obtained in a chain neuronal network, where the neurons exhibit the same firing patterns as the synchronization factor approaches 1. In particular, order diversity among the neurons produces heterogeneity and a gradient field in a regular network, and a target wave develops over time. As the wave spreads through the network, silent states and excited states appear across the whole network. The formation and diffusion of the target wave reveal information transmission in the neuronal network and indicate that individual differences play an essential role in the collective behavior of neurons.
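
A minimal sketch of a single fractional-order FitzHugh–Nagumo neuron follows, assuming a Grünwald–Letnikov discretization; the paper's exact solver, parameter values, and the energy-balance coupling itself are not reproduced here.

```python
# A minimal sketch (assumed Grunwald-Letnikov scheme; illustrative parameters,
# not the paper's) of a fractional-order FitzHugh-Nagumo neuron of order q.
import numpy as np

def fhn_rhs(v, w, I=0.5, a=0.7, b=0.8, eps=0.08):
    dv = v - v**3 / 3 - w + I          # fast (membrane) variable
    dw = eps * (v + a - b * w)         # slow (recovery) variable
    return dv, dw

def simulate(q=0.9, h=0.02, steps=5000):
    c = np.empty(steps + 1)            # Grunwald-Letnikov binomial weights
    c[0] = 1.0
    for j in range(1, steps + 1):
        c[j] = (1 - (1 + q) / j) * c[j - 1]
    v = np.zeros(steps + 1); w = np.zeros(steps + 1)
    v[0] = -1.0
    for n in range(1, steps + 1):
        dv, dw = fhn_rhs(v[n - 1], w[n - 1])
        # history sums: the "memory" introduced by the fractional derivative
        v[n] = dv * h**q - np.dot(c[1:n + 1], v[n - 1::-1])
        w[n] = dw * h**q - np.dot(c[1:n + 1], w[n - 1::-1])
    return v, w
```

For q = 1 the weights collapse so that the scheme reduces to the explicit Euler method, which is a quick sanity check on the discretization.
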
Affiliation(s)
- Zhao Yao
- School of Physics, Central South University, Changsha 410083, China
- Kehui Sun
- School of Physics, Central South University, Changsha 410083, China
- Shaobo He
- School of Automation and Electronic Information, Xiangtan University, Xiangtan 411105, China

3. Zhang J, Li J, Huang Z, Huang D, Yu H, Li Z. Recent Progress in Wearable Brain-Computer Interface (BCI) Devices Based on Electroencephalogram (EEG) for Medical Applications: A Review. Health Data Sci 2023; 3:0096. [PMID: 38487198] [PMCID: PMC10880169] [DOI: 10.34133/hds.0096]
Abstract
Importance: A brain-computer interface (BCI) decodes and converts brain signals into machine instructions to interoperate with the external world. However, limited by the implantation risks of invasive BCIs and the operational complexity of conventional noninvasive BCIs, BCIs are mainly applied in laboratory or clinical environments, which is not conducive to daily use of BCI devices. With the increasing demand for intelligent medical care, the development of wearable BCI systems is necessary. Highlights: State-of-the-art wearable BCI devices for disease management and patient assistance, based on scalp EEG, forehead EEG, and ear EEG, are reviewed. This paper focuses on the EEG acquisition equipment of novel wearable BCI devices and summarizes the development directions of wearable EEG-based BCI devices. Conclusions: BCI devices play an essential role in the medical field. This review briefly summarizes novel wearable EEG-based BCIs applied in the medical field and the latest progress in related technologies, emphasizing their potential to help doctors, patients, and caregivers better understand and utilize BCI devices.
Affiliation(s)
- Jiayan Zhang
- Beijing Advanced Innovation Center for Integrated Circuits, Beijing, China
- National Key Laboratory of Advanced Micro and Nano Manufacture Technology, School of Integrated Circuits, Peking University, Beijing, China
- Junshi Li
- Beijing Advanced Innovation Center for Integrated Circuits, Beijing, China
- National Key Laboratory of Advanced Micro and Nano Manufacture Technology, School of Integrated Circuits, Peking University, Beijing, China
- Zhe Huang
- Beijing Advanced Innovation Center for Integrated Circuits, Beijing, China
- National Key Laboratory of Advanced Micro and Nano Manufacture Technology, School of Integrated Circuits, Peking University, Beijing, China
- Shenzhen Graduate School, Peking University, Shenzhen, China
- Dong Huang
- Beijing Advanced Innovation Center for Integrated Circuits, Beijing, China
- National Key Laboratory of Advanced Micro and Nano Manufacture Technology, School of Integrated Circuits, Peking University, Beijing, China
- School of Electronics, Peking University, Beijing, China
- Huaiqiang Yu
- Sichuan Institute of Piezoelectric and Acousto-optic Technology, Chongqing, China
- Zhihong Li
- Beijing Advanced Innovation Center for Integrated Circuits, Beijing, China
- National Key Laboratory of Advanced Micro and Nano Manufacture Technology, School of Integrated Circuits, Peking University, Beijing, China

4. Li Y, He J, Fu C, Jiang K, Cao J, Wei B, Wang X, Luo J, Xu W, Zhu J. Children's Pain Identification Based on Skin Potential Signal. Sensors (Basel) 2023; 23:6815. [PMID: 37571601] [PMCID: PMC10422611] [DOI: 10.3390/s23156815]
Abstract
Pain management is a crucial concern in medicine, particularly in the case of children who may struggle to effectively communicate their pain. Despite the longstanding reliance on various assessment scales by medical professionals, these tools have shown limitations and subjectivity. In this paper, we present a pain assessment scheme based on skin potential signals, aiming to convert subjective pain into objective indicators for pain identification using machine learning methods. We have designed and implemented a portable non-invasive measurement device to measure skin potential signals and conducted experiments involving 623 subjects. From the experimental data, we selected 358 valid records, which were then divided into 218 silent samples and 262 pain samples. A total of 38 features were extracted from each sample, with seven features displaying superior performance in pain identification. Employing three classification algorithms, we found that the random forest algorithm achieved the highest accuracy, reaching 70.63%. While this identification rate shows promise for clinical applications, it is important to note that our results differ from state-of-the-art research, which achieved a recognition rate of 81.5%. This discrepancy arises from the fact that our pain stimuli were induced by clinical operations, making it challenging to precisely control the stimulus intensity when compared to electrical or thermal stimuli. Despite this limitation, our pain assessment scheme demonstrates significant potential in providing objective pain identification in clinical settings. Further research and refinement of the proposed approach may lead to even more accurate and reliable pain management techniques in the future.
Affiliation(s)
- Yubo Li
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
- International Joint Innovation Center, Zhejiang University, Haining 314400, China
- Jiadong He
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
- Cangcang Fu
- Children’s Hospital, Zhejiang University School of Medicine, Hangzhou 310052, China
- Ke Jiang
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
- Junjie Cao
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
- Bing Wei
- Polytechnic Institute of Zhejiang University, Hangzhou 310015, China
- Xiaozhi Wang
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
- International Joint Innovation Center, Zhejiang University, Haining 314400, China
- Jikui Luo
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
- International Joint Innovation Center, Zhejiang University, Haining 314400, China
- Weize Xu
- Children’s Hospital, Zhejiang University School of Medicine, Hangzhou 310052, China
- National Clinical Research Center for Child Health, Hangzhou 310052, China
- Jihua Zhu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
- Children’s Hospital, Zhejiang University School of Medicine, Hangzhou 310052, China

5. Xu J, Huang Z, Liu L, Li X, Wei K. Eye-Gaze Controlled Wheelchair Based on Deep Learning. Sensors (Basel) 2023; 23:6239. [PMID: 37448088] [DOI: 10.3390/s23136239]
Abstract
In this paper, we design an intelligent wheelchair with eye-movement control for patients with ALS in a natural environment. The system consists of an electric wheelchair, a vision system, a two-dimensional robotic arm, and a main control system. The smart wheelchair captures an image of the controller's eye through a monocular camera and uses deep learning with an attention mechanism to compute the eye-movement direction. In addition, starting from the relationship between the joystick trajectory and the wheelchair speed, we establish a motion acceleration model of the smart wheelchair, which reduces sudden acceleration during rapid motion and improves the smoothness of the wheelchair's movement. The lightweight eye-movement recognition model is deployed on an embedded AI controller. The test results show that the accuracy of eye-movement direction recognition is 98.49%, the wheelchair speed reaches up to 1 m/s, and the motion trajectory is smooth, without sudden changes.
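
The acceleration model the abstract mentions can be sketched as a simple rate limiter on the commanded speed; the limit and tick values below are illustrative assumptions, not the paper's.

```python
# A minimal sketch of an acceleration-limiting model of the kind described:
# the commanded speed is ramped toward the gaze-derived target speed with a
# capped rate of change. a_max and the control period are assumptions.
def ramp_speed(current, target, dt, a_max=0.5):
    """Move `current` speed (m/s) toward `target`, limiting |dv/dt| to a_max."""
    dv = target - current
    max_step = a_max * dt
    if abs(dv) > max_step:
        dv = max_step if dv > 0 else -max_step
    return current + dv

# e.g., called every 20 ms control tick:
# v = ramp_speed(v, v_target_from_gaze, dt=0.02)
```
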
Affiliation(s)
- Jun Xu
- School of Automation, Harbin University of Science and Technology, Harbin 150080, China
- Zuning Huang
- School of Electrical and Electronic Engineering, Harbin University of Science and Technology, Harbin 150080, China
- Liangyuan Liu
- School of Electrical and Electronic Engineering, Harbin University of Science and Technology, Harbin 150080, China
- Xinghua Li
- School of Electrical and Electronic Engineering, Harbin University of Science and Technology, Harbin 150080, China
- Kai Wei
- School of Electrical and Electronic Engineering, Harbin University of Science and Technology, Harbin 150080, China

6. Zhang X, Li J, Jin L, Zhao J, Huang Q, Song Z, Liu X, Luh DB. Design and Evaluation of the Extended FBS Model Based Gaze-Control Power Wheelchair for Individuals Facing Manual Control Challenges. Sensors (Basel) 2023; 23:5571. [PMID: 37420738] [PMCID: PMC10303982] [DOI: 10.3390/s23125571]
Abstract
This study addresses the challenges faced by individuals with upper-limb impairments in operating power wheelchair joysticks by utilizing the extended Function-Behavior-Structure (FBS) model to identify design requirements for an alternative wheelchair control system. A gaze-controlled wheelchair system is proposed based on the design requirements derived from the extended FBS model and prioritized using the MoSCoW method. This innovative system relies on the user's natural gaze and comprises three levels: perception, decision making, and execution. The perception layer senses and acquires information from the environment, including user eye movements and the driving context. The decision-making layer processes this information to determine the user's intended direction, while the execution layer controls the wheelchair's movement accordingly. The system's effectiveness was validated through indoor field testing, with an average driving drift of less than 20 cm for participants. Additionally, a user experience scale revealed overall positive user experiences and perceptions of the system's usability, ease of use, and satisfaction.
Affiliation(s)
- Xiaochen Zhang
- Department of Industrial Design, Guangdong University of Technology, Guangzhou 510090, China
- Guangdong International Center of Advanced Design, Guangdong University of Technology, Guangzhou 510090, China
- Jiazhen Li
- Department of Industrial Design, Guangdong University of Technology, Guangzhou 510090, China
- Lingling Jin
- Department of Industrial Design, Guangdong University of Technology, Guangzhou 510090, China
- Jie Zhao
- Department of Industrial Design, Guangdong University of Technology, Guangzhou 510090, China
- Qianbo Huang
- Department of Industrial Design, Guangdong University of Technology, Guangzhou 510090, China
- Ziyang Song
- Department of Industrial Design, Guangdong University of Technology, Guangzhou 510090, China
- Xinyu Liu
- Department of Industrial Design, Guangdong University of Technology, Guangzhou 510090, China
- Ding-Bang Luh
- Department of Industrial Design, Guangdong University of Technology, Guangzhou 510090, China
- Guangdong International Center of Advanced Design, Guangdong University of Technology, Guangzhou 510090, China

7. Higa S, Yamada K, Kamisato S. Intelligent Eye-Controlled Electric Wheelchair Based on Estimating Visual Intentions Using One-Dimensional Convolutional Neural Network and Long Short-Term Memory. Sensors (Basel) 2023; 23:4028. [PMID: 37112369] [PMCID: PMC10145036] [DOI: 10.3390/s23084028]
Abstract
When an electric wheelchair is operated using gaze motion, eye movements such as checking the environment and observing objects are also incorrectly recognized as input operations. This phenomenon is called the "Midas touch problem", and classifying visual intentions is extremely important. In this paper, we develop a deep learning model that estimates the user's visual intention in real time, together with an electric wheelchair control system that combines intention estimation with the gaze dwell time method. The proposed model is a 1DCNN-LSTM that estimates visual intention from feature vectors of 10 variables, such as eye movement, head movement, and distance to the fixation point. Evaluation experiments classifying four types of visual intentions show that the proposed model achieves the highest accuracy among the compared models. In addition, driving experiments with an electric wheelchair implementing the proposed model show that the user's effort to operate the wheelchair is reduced and that the operability of the wheelchair is improved compared to the traditional method. From these results, we conclude that visual intentions can be estimated more accurately by learning time-series patterns from eye and head movement data.
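
A minimal sketch of such a 1DCNN-LSTM is shown below, assuming PyTorch, a window length of 50 steps, and illustrative layer widths; the paper's exact architecture may differ.

```python
# A minimal sketch (assumptions: PyTorch, window length 50, four intention
# classes, illustrative layer sizes) of a 1DCNN-LSTM over 10-variable feature
# sequences, as described in the abstract.
import torch
import torch.nn as nn

class CNN1DLSTM(nn.Module):
    def __init__(self, n_features=10, n_classes=4):
        super().__init__()
        self.conv = nn.Sequential(            # temporal convolution over the window
            nn.Conv1d(n_features, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(32, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, time, features)
        z = self.conv(x.transpose(1, 2))       # -> (batch, 32, time/2)
        out, _ = self.lstm(z.transpose(1, 2))  # -> (batch, time/2, 64)
        return self.head(out[:, -1])           # logits from the last timestep

logits = CNN1DLSTM()(torch.randn(8, 50, 10))   # e.g., 8 windows of 50 steps
```
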
Affiliation(s)
- Sho Higa
- Graduate School of Engineering and Science, University of the Ryukyus, Nishihara 903-0213, Japan
- Koji Yamada
- Department of Engineering, University of the Ryukyus, Nishihara 903-0213, Japan
- Shihoko Kamisato
- Department of Information and Communication Systems Engineering, National Institute of Technology, Okinawa College, Nago 905-2171, Japan

8. Gibertoni G, Borghi G, Rovati L. Vision-Based Eye Image Classification for Ophthalmic Measurement Systems. Sensors (Basel) 2022; 23:386. [PMID: 36616983] [PMCID: PMC9823474] [DOI: 10.3390/s23010386]
Abstract
The accuracy and overall performance of ophthalmic instrumentation that relies on the analysis of eye images can be negatively influenced by invalid or incorrect frames acquired during everyday measurements of unaware or non-collaborative patients by non-technical operators. Therefore, in this paper, we investigate and compare several vision-based classification algorithms from different fields, i.e., machine learning, deep learning, and expert systems, in order to improve the performance of an ophthalmic instrument designed for pupillary light reflex measurement. To test the implemented solutions, we collected and publicly released PopEYE, one of the first datasets of its kind, consisting of 15k eye images from 22 different subjects acquired with the aforementioned specialized ophthalmic device. Finally, we discuss the experimental results in terms of classification accuracy of the eye status, as well as computational load, since the proposed solution is designed to run on embedded boards, which have limited computational power and memory.
Affiliation(s)
- Giovanni Gibertoni
- Department of Engineering “Enzo Ferrari”, University of Modena and Reggio Emilia, 41125 Modena, Italy
- Guido Borghi
- Department of Computer Science and Engineering, University of Bologna, 40126 Bologna, Italy
- Luigi Rovati
- Department of Engineering “Enzo Ferrari”, University of Modena and Reggio Emilia, 41125 Modena, Italy

9. Mattiev J, Sajovic J, Drevenšek G, Rogelj P. Assessment of Model Accuracy in Eyes Open and Closed EEG Data: Effect of Data Pre-Processing and Validation Methods. Bioengineering (Basel) 2022; 10:42. [PMID: 36671614] [PMCID: PMC9854523] [DOI: 10.3390/bioengineering10010042]
Abstract
Eyes open and eyes closed data are often used to validate novel human brain activity classification methods. Cross-validation of models trained on minimally preprocessed data is frequently utilized, even though electroencephalography data contain contributions from muscle activity and environmental noise that affect classification accuracy. Moreover, the electroencephalography data of a single subject are often divided into smaller parts, due to the limited availability of large datasets. The most frequently used method for model validation is cross-validation, even though the results may be affected by overfitting to the specifics of the brain activity of the limited set of subjects. To test the effects of preprocessing and classifier validation on classification accuracy, we tested fourteen classification algorithms implemented in WEKA and MATLAB on comprehensively and simply preprocessed electroencephalography data. Hold-out and cross-validation were used to compare the classification accuracy of eyes open and closed data. Data from 50 subjects were used, with four minutes each of eyes-closed and eyes-open recordings. The algorithms trained on simply preprocessed data were superior to those trained on comprehensively preprocessed data in cross-validation testing; the reverse was true when hold-out accuracy was examined. Significant increases in hold-out accuracy were observed if the data of different subjects were not strictly separated between the test and training datasets, showing the presence of overfitting. The results show that comprehensive data preprocessing can be advantageous for subject-invariant classification, while higher subject-specific accuracy can be attained with simple preprocessing. Researchers should thus state the final intended use of their classifier.
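
The subject-separation effect described here can be illustrated with scikit-learn by comparing pooled k-fold with subject-wise (grouped) cross-validation; the data shapes below are synthetic placeholders, not the study's dataset.

```python
# A minimal sketch of the validation issue the abstract describes: pooled
# k-fold mixes each subject's segments across train/test folds, while
# GroupKFold keeps subjects strictly separate. Data here are synthetic.
import numpy as np
from sklearn.model_selection import cross_val_score, KFold, GroupKFold
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))           # 500 EEG segments, 16 features each
y = rng.integers(0, 2, size=500)         # eyes open (0) / closed (1) labels
subjects = np.repeat(np.arange(50), 10)  # 10 segments per subject

clf = LogisticRegression(max_iter=1000)
pooled = cross_val_score(clf, X, y, cv=KFold(5, shuffle=True, random_state=0))
subjectwise = cross_val_score(clf, X, y, groups=subjects, cv=GroupKFold(5))
# On real EEG, `pooled` is typically optimistic: the model partially memorizes
# subject-specific signal, which the subject-wise split exposes.
```
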
Affiliation(s)
- Jamolbek Mattiev
- Department of Information Technologies, Urgench State University, Khamid Alimdjan 14, Urgench 220100, Uzbekistan
- Jakob Sajovic
- Department of Orthodontics, University Medical Centre Ljubljana, Hrvatski trg 6, 1000 Ljubljana, Slovenia
- Faculty of Medicine, University of Ljubljana, Korytkova 2, 1000 Ljubljana, Slovenia
- Gorazd Drevenšek
- Faculty of Medicine, University of Ljubljana, Korytkova 2, 1000 Ljubljana, Slovenia
- Faculty of Mathematics, Natural Sciences and Information Technologies, University of Primorska, Glagoljaška 8, 6000 Koper, Slovenia
- Peter Rogelj
- Faculty of Mathematics, Natural Sciences and Information Technologies, University of Primorska, Glagoljaška 8, 6000 Koper, Slovenia

10. Binary Controller Based on the Electrical Activity Related to Head Yaw Rotation. Actuators 2022. [DOI: 10.3390/act11060161]
Abstract
A human-machine interface (HMI) is presented to switch lights on/off according to left/right head yaw rotation. The HMI consists of a cap that acquires the brain’s electrical activity (i.e., an electroencephalogram, EEG) sampled at 500 Hz on 8 channels, with electrodes positioned according to the standard 10–20 system. In addition, the HMI includes a controller based on an input–output function that computes the head position (defined as left, right, or forward with respect to the yaw angle) from short intervals (10 samples) of the signals coming from three electrodes positioned at O1, O2, and Cz. An artificial neural network (ANN), trained with the Levenberg–Marquardt backpropagation algorithm, was used to identify the input–output function. The HMI controller was tested on 22 participants. The proposed classifier achieved an average accuracy of 88%, with a best value of 96.85%. After calibration for each specific subject, the HMI was used as a binary controller to verify its ability to switch lamps on/off according to head-turning movements. The correct prediction of head movements was greater than 75% in 90% of the participants when performing the test with open eyes. When the subjects carried out the experiments with closed eyes, the prediction accuracy reached 75% in 11 of the 22 participants. One participant controlled the light system in both experiments, open and closed eyes, with 100% success. The control results achieved in this work can be considered an important milestone towards humanoid neck systems.
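
A rough sketch of the controller's input-output mapping follows: 10-sample windows from O1, O2, and Cz are flattened into 30-value feature rows and fed to a small neural network. Since scikit-learn offers no Levenberg-Marquardt trainer (the paper's choice, native to MATLAB's toolbox), this stand-in substitutes L-BFGS.

```python
# A minimal sketch of the input-output mapping: 10-sample windows from O1, O2,
# Cz (30 values) -> head position class. The paper trains with
# Levenberg-Marquardt backpropagation (as in MATLAB's trainlm); scikit-learn
# has no LM solver, so this stand-in uses L-BFGS instead.
import numpy as np
from sklearn.neural_network import MLPClassifier

def make_windows(eeg, win=10):
    """eeg: (3, n_samples) from O1, O2, Cz -> (n_windows, 3*win) feature rows."""
    n = eeg.shape[1] // win
    return eeg[:, : n * win].reshape(3, n, win).transpose(1, 0, 2).reshape(n, -1)

# labels: 0 = forward, 1 = left, 2 = right (one label per window)
def fit_controller(eeg, labels):
    X = make_windows(eeg)
    return MLPClassifier(hidden_layer_sizes=(20,), solver="lbfgs",
                         max_iter=2000).fit(X, labels)
```
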

11. Human–System Interaction Based on Eye Tracking for a Virtual Workshop. Sustainability 2022. [DOI: 10.3390/su14116841]
Abstract
With the constant exploration and development of intelligent manufacturing, the concept of digital twins has been proposed and applied. In view of the complexity and intelligence of virtual workshop systems, real workshops can link with virtual workshops based on AR under the structure of digital twins, which allows users to interact with virtual information and perceive the virtual information superimposed on the real world with great immersion. However, the three-dimensionality of virtual workshops and interaction with complex workshop information can be challenging for users. Due to the limited input bandwidth and the nontraditional mode of interaction, a more natural interaction technique for virtual workshops is required. To solve such problems, this paper presents a technical framework for 3D eye-movement interaction applied to a virtual workshop. An eye-movement interaction technique, oriented to both implicit and explicit interaction, is developed by establishing behavior recognition and interaction intention understanding. An eye-movement experiment verifies that the former achieves accuracy above 90% and better recognition performance; for the latter, a better feature vector group is selected to establish a model, and its feasibility and effectiveness are verified. Finally, the feasibility of the framework is verified through the development of an application example.

12. The Application of Integration of EEG Signals for Authorial Classification Algorithms in Implementation for a Mobile Robot Control Using Movement Imagery—Pilot Study. Appl Sci (Basel) 2022. [DOI: 10.3390/app12042161]
Abstract
This paper presents a new approach to the recognition and classification of electroencephalographic (EEG) signals. The small number of investigations using the Emotiv Epoc Flex sensor set motivated the search for original solutions, including the control of robotic elements with mental commands given by a user. The signal, measured and archived with a 32-electrode device, was prepared for classification using a new solution based on EEG signal integration. The waveforms modified in this way could be recognized both by classic authorial software and by an artificial neural network. The properly classified signals made it possible to use them as control signals for the LEGO EV3 Mindstorms robot.
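
The "signal integration" preprocessing could plausibly be implemented as a running time-integral per channel, e.g. with a cumulative trapezoidal rule; whether the authors integrated in exactly this way is an assumption.

```python
# A minimal sketch of the "EEG signal integration" preprocessing idea: each
# channel is replaced by its running time-integral before classification.
# The cumulative trapezoidal rule and the sampling rate are assumptions.
import numpy as np

def integrate_eeg(eeg, fs=128.0):
    """eeg: (n_channels, n_samples) -> running integral per channel."""
    dt = 1.0 / fs
    avg = (eeg[:, 1:] + eeg[:, :-1]) / 2.0   # trapezoid midpoints
    return np.concatenate([np.zeros((eeg.shape[0], 1)),
                           np.cumsum(avg * dt, axis=1)], axis=1)
```
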

13. Domingo MC. An Overview of Machine Learning and 5G for People with Disabilities. Sensors (Basel) 2021; 21:7572. [PMID: 34833648] [PMCID: PMC8622934] [DOI: 10.3390/s21227572]
Abstract
Currently, over a billion people, including children (about 15% of the world’s population), are estimated to be living with disability, and this figure is expected to grow beyond two billion by 2050. People with disabilities generally experience poorer health, fewer achievements in education, fewer economic opportunities, and higher rates of poverty. Artificial intelligence and 5G can make major contributions towards assisting people with disabilities, so that they can achieve a good quality of life. In this paper, an overview of machine learning and 5G for people with disabilities is provided. For this purpose, a 5G network slicing architecture for disabled people is proposed. Different application scenarios and their main benefits are considered to illustrate the interaction of machine learning and 5G. Critical challenges have been identified and addressed.
Affiliation(s)
- Mari Carmen Domingo
- Department of Network Engineering, BarcelonaTech (UPC) University, 08860 Castelldefels, Spain

14. Kubacki A. Use of Force Feedback Device in a Hybrid Brain-Computer Interface Based on SSVEP, EOG and Eye Tracking for Sorting Items. Sensors (Basel) 2021; 21:7244. [PMID: 34770554] [PMCID: PMC8588340] [DOI: 10.3390/s21217244]
Abstract
Research focused on signals derived from the human organism is becoming increasingly popular. In this field, a special role is played by brain-computer interfaces based on brainwaves, which are gaining attention due to the downsizing of EEG recording devices and ever-lower prices. Unfortunately, such systems are substantially limited in terms of the number of generated commands, especially sets that are not medical devices. This article proposes a hybrid brain-computer system based on steady-state visual evoked potentials (SSVEP), EOG, eye tracking, and a force feedback system. Such an expanded system eliminates many of the shortcomings of the individual systems and provides much better results. The first part of the paper presents the methods applied in the hybrid brain-computer system. The system was tested in terms of the operator's ability to place the robot's tip at a designated position. A virtual model of an industrial robot was proposed and used in the testing, and the tests were then repeated on a real-life industrial robot. The positioning accuracy of the system was verified with force feedback both enabled and disabled. The results of tests conducted both on the model and on the real object clearly demonstrate that force feedback improves the positioning accuracy of the robot's tip when controlled by the operator. In addition, the results for the model and the real-life industrial robot are very similar. In the next stage, the possibility of sorting items using the BCI system was investigated, again on both the model and the real robot. The results show that sorting is possible using biosignals from the human body.
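
SSVEP detection in hybrid systems of this kind is commonly done with canonical correlation analysis (CCA); whether this particular system uses CCA is an assumption, and the frequencies and sizes in the sketch below are illustrative.

```python
# A minimal sketch of a common SSVEP detector (canonical correlation analysis)
# of the kind used in hybrid BCIs; whether this system uses CCA is an
# assumption, and the sampling rate and flicker frequencies are illustrative.
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 256                         # assumed sampling rate (Hz)
STIM_FREQS = [7.0, 9.0, 11.0]    # assumed flicker frequencies

def reference(freq, n_samples, n_harmonics=2):
    """Sine/cosine reference matrix for one stimulation frequency."""
    t = np.arange(n_samples) / FS
    return np.column_stack([f(2 * np.pi * h * freq * t)
                            for h in range(1, n_harmonics + 1)
                            for f in (np.sin, np.cos)])

def classify_ssvep(eeg):
    """eeg: (n_samples, n_channels) occipital window -> stimulus index."""
    scores = []
    for freq in STIM_FREQS:
        ref = reference(freq, len(eeg))
        u, v = CCA(n_components=1).fit(eeg, ref).transform(eeg, ref)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return int(np.argmax(scores))  # highest canonical correlation wins
```
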
Affiliation(s)
- Arkadiusz Kubacki
- Institute of Mechanical Technology, Poznan University of Technology, ul. Piotrowo 3, 60-965 Poznań, Poland

15. Esposito D, Centracchio J, Andreozzi E, Gargiulo GD, Naik GR, Bifulco P. Biosignal-Based Human-Machine Interfaces for Assistance and Rehabilitation: A Survey. Sensors (Basel) 2021; 21:6863. [PMID: 34696076] [PMCID: PMC8540117] [DOI: 10.3390/s21206863]
Abstract
By definition, a Human–Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring paved the way for a new class of HMIs, which take such biosignals as inputs to control various applications. The current survey aims to review the large literature of the last two decades regarding biosignal-based HMIs for assistance and rehabilitation, to outline the state of the art and identify emerging technologies and potential future research trends. PubMed and other databases were surveyed using specific keywords. The retrieved studies were further screened at three levels (title, abstract, full text), and eventually, 144 journal papers and 37 conference papers were included. Four macrocategories were considered to classify the different biosignals used for HMI control: biopotentials, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified according to their target application by considering six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever-growing number of publications has been observed over the last years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to both assistance and rehabilitation. A moderate increase can be observed in studies focusing on robotic control, prosthetic control, and gesture recognition in the last decade. In contrast, studies on the other targets experienced only a small increase. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has experienced a considerable rise, especially in prosthetic control. Hybrid technologies are promising, as they could lead to higher performances. However, they also increase HMIs’ complexity, so their usefulness should be carefully evaluated for the specific application.
Affiliation(s)
- Daniele Esposito
- Department of Electrical Engineering and Information Technologies, Polytechnic and Basic Sciences School, University of Naples “Federico II”, 80125 Naples, Italy
- Jessica Centracchio
- Department of Electrical Engineering and Information Technologies, Polytechnic and Basic Sciences School, University of Naples “Federico II”, 80125 Naples, Italy
- Emilio Andreozzi
- Department of Electrical Engineering and Information Technologies, Polytechnic and Basic Sciences School, University of Naples “Federico II”, 80125 Naples, Italy
- Gaetano D. Gargiulo
- School of Engineering, Design and Built Environment, Western Sydney University, Penrith, NSW 2747, Australia
- The MARCS Institute, Western Sydney University, Penrith, NSW 2751, Australia
- Ganesh R. Naik
- School of Engineering, Design and Built Environment, Western Sydney University, Penrith, NSW 2747, Australia
- The Adelaide Institute for Sleep Health, Flinders University, Bedford Park, SA 5042, Australia
- Paolo Bifulco
- Department of Electrical Engineering and Information Technologies, Polytechnic and Basic Sciences School, University of Naples “Federico II”, 80125 Naples, Italy

16. Ha J, Park S, Im CH, Kim L. A Hybrid Brain-Computer Interface for Real-Life Meal-Assist Robot Control. Sensors (Basel) 2021; 21:4578. [PMID: 34283122] [PMCID: PMC8271393] [DOI: 10.3390/s21134578]
Abstract
Assistive devices such as meal-assist robots aid individuals with disabilities and support the elderly in performing daily activities. However, existing meal-assist robots are inconvenient to operate due to non-intuitive user interfaces, requiring additional time and effort. Thus, we developed a hybrid brain-computer interface-based meal-assist robot system using three features that can be measured with scalp electrodes for electroencephalography. A single meal cycle comprises the following three procedures. (1) Triple eye-blinks (EBs) detected from the prefrontal channel initiate the cycle. (2) Steady-state visual evoked potentials (SSVEPs) from occipital channels are used to select the food per the user's intention. (3) Electromyograms (EMGs) recorded from temporal channels as the user chews the food mark the end of a cycle and indicate readiness for the next one. In experiments on five subjects, the accuracy was 94.67% for EBs, 83.33% for SSVEPs, and 97.33% for EMGs; the false positive rate was 0.11 times/min for EBs and 0.08 times/min for EMGs; and the information transfer rate (ITR) for SSVEPs was 20.41 bits/min. These results reveal the feasibility of the assistive system. The proposed system allows users to eat on their own more naturally. Furthermore, it can increase the self-esteem of disabled and elderly people and enhance their quality of life.
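
The reported ITR is presumably the standard Wolpaw formula, which can be computed as below; the number of targets and selection time in the example are assumptions, so the output will not reproduce the paper's 20.41 bits/min exactly.

```python
# A minimal sketch of the standard (Wolpaw) information transfer rate formula
# often used to report SSVEP performance. The target count and selection time
# in the example call are assumptions, not values from the paper.
from math import log2

def itr_bits_per_min(n_targets, accuracy, trial_seconds):
    """Wolpaw ITR: bits per selection, scaled to selections per minute."""
    p, n = accuracy, n_targets
    bits = log2(n)
    if 0 < p < 1:
        bits += p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    elif p == 0:
        bits = 0.0
    return bits * (60.0 / trial_seconds)

# e.g., 4 targets at 83.33% accuracy with a 3 s selection window:
print(round(itr_bits_per_min(4, 0.8333, 3.0), 2))
```
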
Affiliation(s)
- Jihyeon Ha
- Center for Bionics, Korea Institute of Science and Technology, Seoul 02792, Korea
- Department of Biomedical Engineering, Hanyang University, Seoul 04763, Korea
- Sangin Park
- Center for Bionics, Korea Institute of Science and Technology, Seoul 02792, Korea
- Chang-Hwan Im
- Department of Biomedical Engineering, Hanyang University, Seoul 04763, Korea
- Laehyun Kim
- Center for Bionics, Korea Institute of Science and Technology, Seoul 02792, Korea
- Department of HY-KIST Bio-Convergence, Hanyang University, Seoul 04763, Korea

17. Analysis of the Learning Process through Eye Tracking Technology and Feature Selection Techniques. Appl Sci (Basel) 2021. [DOI: 10.3390/app11136157]
Abstract
In recent decades, the use of technological resources such as eye tracking methodology has provided cognitive researchers with important tools to better understand the learning process. However, the interpretation of the metrics requires the use of supervised and unsupervised learning techniques. The main goal of this study was to analyse the results obtained with the eye tracking methodology by applying statistical tests and supervised and unsupervised machine learning techniques, and to contrast the effectiveness of each. Parameters of fixations, saccades, blinks and scan path were obtained, together with the results of a puzzle task. The statistical study concluded that no significant differences were found between participants in solving the crossword puzzle task; significant differences were only detected in the parameters saccade amplitude minimum and saccade velocity minimum. On the other hand, the supervised machine learning techniques suggested possible features for analysis, some of them different from those used in the statistical study. Regarding the clustering techniques, a good fit was found between the algorithms used (k-means++, fuzzy k-means and DBSCAN). These algorithms grouped the participants' learning profiles into three types (students over 50 years old, and students and teachers under 50 years of age). Therefore, the use of both types of data analysis is considered complementary.
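
Two of the clustering algorithms named here are available directly in scikit-learn; the sketch below runs them on placeholder eye-tracking features (fuzzy k-means is omitted because scikit-learn does not provide it).

```python
# A minimal sketch (placeholder features and participant count, not the
# study's data) of clustering eye-tracking parameters with two of the
# algorithms the abstract mentions.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans, DBSCAN

rng = np.random.default_rng(1)
# rows: participants; columns: e.g. fixation count, mean fixation duration,
# saccade amplitude minimum, saccade velocity minimum, blink rate
X = StandardScaler().fit_transform(rng.normal(size=(40, 5)))

kmeans_labels = KMeans(n_clusters=3, init="k-means++", n_init=10,
                       random_state=0).fit_predict(X)
dbscan_labels = DBSCAN(eps=1.2, min_samples=4).fit_predict(X)
# Agreement between the two labelings can then be checked, e.g. with
# sklearn.metrics.adjusted_rand_score(kmeans_labels, dbscan_labels).
```
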

18. Wang X, Xiao Y, Deng F, Chen Y, Zhang H. Eye-Movement-Controlled Wheelchair Based on Flexible Hydrogel Biosensor and WT-SVM. Biosensors (Basel) 2021; 11:198. [PMID: 34208524] [PMCID: PMC8234407] [DOI: 10.3390/bios11060198]
Abstract
To assist patients with restricted mobility in controlling a wheelchair freely, this paper presents an eye-movement-controlled wheelchair prototype based on a flexible hydrogel biosensor and a Wavelet Transform-Support Vector Machine (WT-SVM) algorithm. Considering the poor deformability and biocompatibility of rigid metal electrodes, we propose a flexible hydrogel biosensor made of conductive HPC/PVA (hydroxypropyl cellulose/polyvinyl alcohol) hydrogel on a flexible PDMS (polydimethylsiloxane) substrate. The proposed biosensor is affixed to the wheelchair user's forehead to collect electrooculogram (EOG) and strain signals, which are the basis for recognizing eye movements. The low Young's modulus (286 kPa) and exceptional breathability (water vapor transmission rate of 18 g/(m²·h)) of the biosensor ensure conformal and unobtrusive adhesion to the epidermis. To improve the recognition accuracy of eye movements (straight, upward, downward, left, and right), the WT-SVM algorithm is introduced to classify EOG and strain signals according to different features (amplitude, duration, interval). The average recognition accuracy reaches 96.3%; thus, the wheelchair can be manipulated precisely.
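
A minimal sketch of a WT-SVM pipeline is given below, assuming PyWavelets with a db4 wavelet and wavelet sub-band energies as features; the paper's actual wavelet choice and its amplitude/duration/interval features are not reproduced here.

```python
# A minimal sketch (assumptions: PyWavelets, db4 wavelet, 4 decomposition
# levels, sub-band energies as stand-in features) of a WT-SVM pipeline:
# wavelet-domain features from EOG/strain windows -> SVM classifier.
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_features(window):
    """window: 1-D EOG or strain segment -> energy per wavelet sub-band."""
    coeffs = pywt.wavedec(window, "db4", level=4)
    return np.array([np.sum(c ** 2) for c in coeffs])

# X_windows: list of 1-D signal segments; y: eye-movement labels
# (straight / up / down / left / right)
def fit_wt_svm(X_windows, y):
    X = np.stack([wavelet_features(w) for w in X_windows])
    return SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
```
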
Affiliation(s)
- Fangming Deng
- School of Electrical and Automation Engineering, East China Jiaotong University, Nanchang 330013, China

19. Identification of Brain Electrical Activity Related to Head Yaw Rotations. Sensors (Basel) 2021; 21:3345. [PMID: 34065035] [PMCID: PMC8150891] [DOI: 10.3390/s21103345]
Abstract
Automating the identification of human brain stimuli during head movements could represent a significant step forward for human-computer interaction (HCI), with important applications for severely impaired people and for robotics. In this paper, a neural network-based identification technique is presented to recognize, from EEG signals, a participant's head yaw rotations when they are subjected to a visual stimulus. The goal is to identify an input-output function between the brain's electrical activity and the head movement triggered by switching a light on/off on the participant's left/right-hand side. The identification process is based on the Levenberg–Marquardt backpropagation algorithm. The results obtained on ten participants, spanning more than two hours of experiments, show the ability of the proposed approach to identify the brain electrical stimulus associated with head turning. A first analysis is applied to the EEG signals of each experiment for each participant. The accuracy of prediction is demonstrated by a significant correlation between training and test trials of the same file, which, in the best case, reaches r = 0.98 with MSE = 0.02. In a second analysis, the input-output function trained on the EEG signals of one participant is tested on the EEG signals of other participants. In this case, the low correlation coefficient values demonstrate that classifier performance decreases when it is trained and tested on different subjects.