1
Zhang Z, Xu P, Wu C, Yu H. Smart Nursing Wheelchairs: A New Trend in Assisted Care and the Future of Multifunctional Integration. Biomimetics (Basel) 2024; 9:492. PMID: 39194471. DOI: 10.3390/biomimetics9080492.
Abstract
As a significant technological innovation in the fields of medicine and geriatric care, smart nursing wheelchairs offer a novel approach to providing high-quality care services and improving the quality of care. This review examines the development, applications, and prospects of smart nursing wheelchairs, with particular emphasis on their assistive nursing functions, multi-sensor fusion technology, and human-machine interaction interfaces. First, we describe the assistive functions of nursing wheelchairs, including position changing, transferring, bathing, and toileting, which significantly reduce the workload of nursing staff and improve the quality of care. Second, we summarize the existing multi-sensor fusion technologies for smart nursing wheelchairs, including LiDAR, RGB-D cameras, and ultrasonic sensors. These technologies give wheelchairs autonomy and safety, better meeting patients' needs. We also discuss the human-machine interaction interfaces of intelligent care wheelchairs, such as voice recognition, touch screens, and remote controls. These interfaces allow users to operate and control the wheelchair more easily, improving usability and maneuverability. Finally, we emphasize the importance of multifunctional integrated care wheelchairs that combine assistive care, navigation, and human-machine interaction into a comprehensive care solution for users. Looking ahead, we anticipate that smart nursing wheelchairs will play an increasingly important role in medicine and geriatric care; by integrating advanced technologies such as enhanced artificial intelligence, intelligent sensors, and remote monitoring, they are expected to further improve patients' quality of care and quality of life.
Affiliation(s)
- Zhewen Zhang: Rehabilitation Engineering and Technology Institute, University of Shanghai for Science and Technology, Shanghai 200093, China
- Peng Xu: Rehabilitation Engineering and Technology Institute, University of Shanghai for Science and Technology, Shanghai 200093, China
- Chengjia Wu: Rehabilitation Engineering and Technology Institute, University of Shanghai for Science and Technology, Shanghai 200093, China
- Hongliu Yu: Rehabilitation Engineering and Technology Institute, University of Shanghai for Science and Technology, Shanghai 200093, China
2
Liu X, Hu B, Si Y, Wang Q. The role of eye movement signals in non-invasive brain-computer interface typing system. Med Biol Eng Comput 2024; 62:1981-1990. PMID: 38509350. DOI: 10.1007/s11517-024-03070-7.
Abstract
Brain-computer interfaces (BCIs) have shown great potential in providing communication and control for individuals with severe motor disabilities. However, traditional BCIs that rely on electroencephalography (EEG) signals suffer from low information transfer rates and high variability across users. Recently, eye movement signals have emerged as a promising alternative due to their high accuracy and robustness. Eye movement signals are the electrical or mechanical signals generated by ocular activity, encompassing fixations, smooth pursuit, and other oculomotor behaviors such as blinking. This article reviews recent studies on BCI typing systems that incorporate eye movement signals. We first discuss the basic principles of BCIs and recent advancements in text entry. We then provide a comprehensive summary of the latest BCI typing systems that leverage eye movement signals, including an in-depth analysis of hybrid BCIs built on the integration of electrooculography (EOG) and eye-tracking technology, which aim to enhance system performance and functionality. Moreover, we highlight the advantages and limitations of the different approaches, as well as potential future directions. Overall, eye movement signals hold great potential for enhancing the usability and accessibility of BCI typing systems, and further research in this area could lead to more effective communication and control for individuals with motor disabilities.
Affiliation(s)
- Xi Liu: Key Laboratory of Spectral Imaging Technology, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, China; University of Chinese Academy of Sciences, Beijing 100049, China; Key Laboratory of Biomedical Spectroscopy of Xi'an, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, China
- Bingliang Hu: Key Laboratory of Spectral Imaging Technology, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, China; Key Laboratory of Biomedical Spectroscopy of Xi'an, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, China
- Yang Si: Department of Neurology, Sichuan Academy of Medical Science and Sichuan Provincial People's Hospital, Chengdu 611731, China; University of Electronic Science and Technology of China, Chengdu 611731, China
- Quan Wang: Key Laboratory of Spectral Imaging Technology, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, China; Key Laboratory of Biomedical Spectroscopy of Xi'an, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, China
3
Meena YK, Cecotti H, Bhushan B, Dutta A, Prasad G. Detection of Dyslexic Children Using Machine Learning and Multimodal Hindi Language Eye-Gaze-Assisted Learning System. IEEE Transactions on Human-Machine Systems 2023; 53:122-131. DOI: 10.1109/thms.2022.3221848.
Affiliation(s)
- Yogesh Kumar Meena: School of Computer Science and Electronic Engineering, University of Essex, Colchester, U.K.
- Braj Bhushan: Department of Humanities and Social Sciences, Indian Institute of Technology, Kanpur, India
- Ashish Dutta: Centre of Mechatronics, Indian Institute of Technology, Kanpur, India
- Girijesh Prasad: Intelligent Systems Research Centre, Ulster University, Londonderry, U.K.
4
Wu JY, Ching CTS, Wang HMD, Liao LD. Emerging Wearable Biosensor Technologies for Stress Monitoring and Their Real-World Applications. Biosensors 2022; 12:1097. PMID: 36551064. PMCID: PMC9776100. DOI: 10.3390/bios12121097.
Abstract
Wearable devices are being developed rapidly and applied increasingly widely. Wearables have been used to monitor movement-related physiological indices, including heartbeat, movement, and other exercise metrics, for health purposes. People are also paying more attention to mental health issues such as stress management. Wearable devices can be used to monitor emotional status and provide preliminary diagnoses and guided training functions. The nervous system responds to stress, which directly affects eye movements and sweat secretion. Therefore, changes in brain potential, eye potential, and the cortisol content of sweat can be used to interpret emotional changes, fatigue levels, and physiological and psychological stress. To better assess users, stress-sensing devices can be integrated with applications to improve cognitive function, attention, sports performance, learning ability, and stress release. Such application-oriented wearables can be used in medical diagnosis and treatment, for example for attention-deficit hyperactivity disorder (ADHD), traumatic stress syndrome, and insomnia, thus facilitating precision medicine. However, many factors contribute to data errors and incorrect assessments, including the variety of wearable devices, sensor types, data reception methods, data processing accuracy and algorithms, application reliability and validity, and actual user actions. In the future, medical platforms for wearable devices and applications should therefore be developed, and product implementations should be evaluated clinically to confirm their accuracy and support reliable research.
Affiliation(s)
- Ju-Yu Wu: Institute of Biomedical Engineering and Nanomedicine, National Health Research Institutes, Zhunan Township, Miaoli County 35053, Taiwan; Program in Tissue Engineering and Regenerative Medicine, National Chung Hsing University, South District, Taichung City 402, Taiwan
- Congo Tak-Shing Ching: Graduate Institute of Biomedical Engineering, National Chung Hsing University, South District, Taichung City 402, Taiwan; Department of Electrical Engineering, National Chi Nan University, No. 1 University Road, Puli Township, Nantou County 545301, Taiwan
- Hui-Min David Wang: Program in Tissue Engineering and Regenerative Medicine, National Chung Hsing University, South District, Taichung City 402, Taiwan; Graduate Institute of Biomedical Engineering, National Chung Hsing University, South District, Taichung City 402, Taiwan
- Lun-De Liao: Institute of Biomedical Engineering and Nanomedicine, National Health Research Institutes, Zhunan Township, Miaoli County 35053, Taiwan; Program in Tissue Engineering and Regenerative Medicine, National Chung Hsing University, South District, Taichung City 402, Taiwan
5
Ban S, Lee YJ, Kim KR, Kim JH, Yeo WH. Advances in Materials, Sensors, and Integrated Systems for Monitoring Eye Movements. Biosensors 2022; 12:1039. PMID: 36421157. PMCID: PMC9688058. DOI: 10.3390/bios12111039.
Abstract
Eye movements are primary responses that reflect humans' voluntary intention and conscious selection. Because visual perception is one of the fundamental sensory interactions in the brain, eye movements contain critical information regarding physical and psychological health, perception, intention, and preference. With the advancement of wearable device technologies, the performance of eye-tracking monitoring has improved significantly, which has led to myriad applications for assisting and augmenting human activities. Among them, electrooculograms, measured by skin-mounted electrodes, have been widely used to track eye motions accurately. In addition, eye trackers that detect reflected optical signals offer an alternative that does not require wearable sensors. This paper outlines a systematic summary of the latest research on materials, sensors, and integrated systems for monitoring eye movements and enabling human-machine interfaces. Specifically, we summarize recent developments in soft materials, biocompatible materials, manufacturing methods, sensor functions, system performance, and applications in eye tracking. Finally, we discuss the remaining challenges and suggest research directions for future studies.
Affiliation(s)
- Seunghyeb Ban: School of Engineering and Computer Science, Washington State University, Vancouver, WA 98686, USA; IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Yoon Jae Lee: IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA; School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Ka Ram Kim: IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA; George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Jong-Hoon Kim: School of Engineering and Computer Science, Washington State University, Vancouver, WA 98686, USA; Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA
- Woon-Hong Yeo: IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA; George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA; Wallace H. Coulter Department of Biomedical Engineering, Georgia Tech and Emory University School of Medicine, Atlanta, GA 30332, USA; Neural Engineering Center, Institute for Materials, Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA 30332, USA
6
Rathi N, Singla R, Tiwari S. A comparative study of classification methods for designing a pictorial P300-based authentication system. Med Biol Eng Comput 2022; 60:2899-2916. DOI: 10.1007/s11517-022-02626-9.
7
Rojas M, Ponce P, Molina A. Development of a Sensing Platform Based on Hands-Free Interfaces for Controlling Electronic Devices. Front Hum Neurosci 2022; 16:867377. PMID: 35754778. PMCID: PMC9231433. DOI: 10.3389/fnhum.2022.867377.
Abstract
Hands-free interfaces are essential for people with limited mobility to interact with biomedical or electronic devices. However, there are not enough sensing platforms that can be quickly tailored to users with disabilities. This article therefore proposes a sensing platform that patients with mobility impairments can use to manipulate electronic devices, thereby increasing their independence. A new sensing scheme is developed using three hands-free signals as inputs: voice commands, head movements, and eye gestures. These signals are obtained with non-invasive sensors: a microphone for speech commands, an accelerometer to detect inertial head movements, and infrared oculography to register eye gestures. The signals are processed and received as the user's commands by an output unit, which provides several communication ports for sending control signals to other devices. The interaction methods are intuitive and could extend the boundaries for people with disabilities to manipulate local or remote digital systems. As a case study, two volunteers with severe disabilities used the sensing platform to steer a power wheelchair. The participants performed 15 common skills for wheelchair users, and their capabilities were evaluated according to a standard test. Using head control, volunteers A and B scored 93.3% and 86.6%, respectively; using voice control, they scored 63.3% and 66.6%. These results show that the end users achieved high performance on most of the skills with the head-movement interface, whereas they could not complete most of the skills with voice control. These findings provide valuable information for tailoring the sensing platform to end-user needs.
Affiliation(s)
- Mario Rojas: Tecnologico de Monterrey, School of Engineering and Sciences, Mexico City, Mexico
- Pedro Ponce: Tecnologico de Monterrey, School of Engineering and Sciences, Mexico City, Mexico
- Arturo Molina: Tecnologico de Monterrey, School of Engineering and Sciences, Mexico City, Mexico
8
Cai X, Pan J. Toward a Brain-Computer Interface- and Internet of Things-Based Smart Ward Collaborative System Using Hybrid Signals. Journal of Healthcare Engineering 2022; 2022:6894392. PMID: 35480157. PMCID: PMC9038386. DOI: 10.1155/2022/6894392.
Abstract
This study proposes a brain-computer interface (BCI)- and Internet of Things (IoT)-based smart ward collaborative system using hybrid signals. The system comprises a hybrid asynchronous BCI control system based on electroencephalography (EEG), electrooculography (EOG), and gyro signals, and an IoT monitoring and management system. The hybrid BCI control system proposes a GUI paradigm with cursor movement: the user selects the cursor area with the gyro and clicks the cursor with blink-related EOG signals. Meanwhile, attention-related EEG signals are classified with a support-vector machine (SVM) to make the final judgment. The cursor-area judgment and the attention-state judgment are combined, thereby reducing the false-operation rate of the hybrid BCI system. The accuracy of the hybrid BCI control system was 96.65 ± 1.44%, and the false-operation rate and command response time were 0.89 ± 0.42 events/min and 2.65 ± 0.48 s, respectively. These results show the application potential of the hybrid BCI control system in daily tasks. In addition, we develop an architecture to connect intelligent things in a smart ward based on narrowband Internet of Things (NB-IoT) technology. The results demonstrate that our system provides superior communication transmission quality.
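The attention-state judgment described in this abstract is an SVM binary classification of EEG features. The sketch below is illustrative only, not the paper's implementation: it uses synthetic, hypothetical band-power features (theta/alpha/beta per epoch) and scikit-learn's `SVC` to show the general shape of such a classifier.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical band-power features (theta, alpha, beta) per EEG epoch;
# attentive epochs are assumed here to show higher beta and lower alpha power.
attentive = rng.normal([2.0, 1.0, 4.0], 0.5, size=(100, 3))
relaxed = rng.normal([2.0, 4.0, 1.0], 0.5, size=(100, 3))
X = np.vstack([attentive, relaxed])
y = np.array([1] * 100 + [0] * 100)  # 1 = attentive, 0 = relaxed

# Standardize features, then fit an RBF-kernel SVM
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
accuracy = clf.score(X, y)
```

In a system like the one described, this binary decision would gate the cursor click selected via the gyro and EOG channels, rejecting clicks issued in an inattentive state.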
Affiliation(s)
- Xugang Cai: School of Software, South China Normal University, Guangzhou 510631, China
- Jiahui Pan: School of Software, South China Normal University, Guangzhou 510631, China; Pazhou Lab, Guangzhou 510330, China
9
Steering a Robotic Wheelchair Based on Voice Recognition System Using Convolutional Neural Networks. Electronics 2022. DOI: 10.3390/electronics11010168.
Abstract
Many wheelchair users depend on others to control the movement of their wheelchairs, which significantly affects their independence and quality of life. Smart wheelchairs offer a degree of independence and the freedom to drive one's own vehicle. In this work, we designed and implemented a low-cost software and hardware method to steer a robotic wheelchair. We also developed our own Android mobile app based on Flutter software. A convolutional neural network (CNN)-based network-in-network (NIN) approach integrated with a voice recognition model was developed and configured to build the mobile app. The technique was implemented and configured over an offline Wi-Fi hotspot between the software and hardware components. Five voice commands (yes, no, left, right, and stop) guided and controlled the wheelchair through a Raspberry Pi and DC motor drives. The overall system was evaluated on an English speech corpus of isolated words, trained and validated with native Arabic speakers, to assess the performance of the Android OS application. The maneuverability of indoor and outdoor navigation was also evaluated in terms of accuracy. The results indicated an accuracy of approximately 87.2% in predicting the five voice commands. Additionally, in the real-time performance test, the root-mean-square deviation (RMSD) between the planned and actual nodes for indoor and outdoor maneuvering was 1.721 × 10−5 and 1.743 × 10−5, respectively.
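The RMSD figures quoted above compare planned and actual trajectory nodes. Under the usual definition of root-mean-square deviation over paired waypoints (the exact node representation in the paper is not specified here), the metric can be computed as follows, with hypothetical 2-D waypoints for illustration:

```python
import numpy as np

def rmsd(planned, actual):
    """Root-mean-square deviation between paired path nodes."""
    planned = np.asarray(planned, dtype=float)
    actual = np.asarray(actual, dtype=float)
    # Squared Euclidean error per node, mean over nodes, then square root
    return np.sqrt(np.mean(np.sum((planned - actual) ** 2, axis=1)))

# Hypothetical waypoints (metres), not the paper's data
planned_nodes = [(0.0, 0.0), (1.0, 0.0), (2.0, 1.0)]
actual_nodes = [(0.0, 0.1), (1.05, 0.0), (2.0, 0.9)]
error = rmsd(planned_nodes, actual_nodes)
```

A lower RMSD indicates the executed trajectory stayed closer to the planned one, which is how the indoor and outdoor runs are compared in the abstract.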
10
Orlandi S, House SC, Karlsson P, Saab R, Chau T. Brain-Computer Interfaces for Children With Complex Communication Needs and Limited Mobility: A Systematic Review. Front Hum Neurosci 2021; 15:643294. PMID: 34335203. PMCID: PMC8319030. DOI: 10.3389/fnhum.2021.643294.
Abstract
Brain-computer interfaces (BCIs) represent a new frontier in the effort to maximize the ability of individuals with profound motor impairments to interact and communicate. While much literature points to BCIs' promise as an alternative access pathway, there have historically been few applications involving children and young adults with severe physical disabilities. As research is emerging in this sphere, this article aims to evaluate the current state of translating BCIs to the pediatric population. A systematic review was conducted using the Scopus, PubMed, and Ovid Medline databases. Studies of children and adolescents that reported BCI performance published in English in peer-reviewed journals between 2008 and May 2020 were included. Twelve publications were identified, providing strong evidence for continued research in pediatric BCIs. Research evidence was generally at multiple case study or exploratory study level, with modest sample sizes. Seven studies focused on BCIs for communication and five on mobility. Articles were categorized and grouped based on type of measurement (i.e., non-invasive and invasive), and the type of brain signal (i.e., sensory evoked potentials or movement-related potentials). Strengths and limitations of studies were identified and used to provide requirements for clinical translation of pediatric BCIs. This systematic review presents the state-of-the-art of pediatric BCIs focused on developing advanced technology to support children and youth with communication disabilities or limited manual ability. Despite a few research studies addressing the application of BCIs for communication and mobility in children, results are encouraging and future works should focus on customizable pediatric access technologies based on brain activity.
Affiliation(s)
- Silvia Orlandi: Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
- Sarah C. House: Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
- Petra Karlsson: Cerebral Palsy Alliance, The University of Sydney, Sydney, NSW, Australia
- Rami Saab: Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
- Tom Chau: Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada; Institute of Biomedical Engineering (BME), University of Toronto, Toronto, ON, Canada
11
Abstract
The first generation of brain-computer interfaces (BCIs) classifies multi-channel electroencephalographic (EEG) signals, enhanced by optimized spatial filters. The second generation directly classifies covariance matrices estimated on EEG signals, using straightforward algorithms such as the minimum-distance-to-Riemannian-mean (MDRM). Classification results vary greatly depending on the chosen Riemannian distance or divergence, whose definitions and reference implementations are spread across a wide mathematical literature. This paper reviews the Riemannian distances and divergences used to process covariance matrices, with an implementation compatible with BCI constraints. The impact of using different metrics is assessed on a steady-state visually evoked potential (SSVEP) dataset, evaluating class centers and classification accuracy. Riemannian approaches embed crucial properties for processing EEG data. The Riemannian centers of classes outperform Euclidean ones in both offline and online setups. Some Riemannian distances and divergences perform better in terms of classification accuracy, while others offer appealing computational efficiency.
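The MDRM classifier mentioned in this abstract assigns a trial's covariance matrix to the class whose Riemannian mean is nearest. A minimal sketch, assuming the affine-invariant Riemannian metric (one of several metrics the paper compares), a simple fixed-point Karcher mean, and toy diagonal matrices rather than real EEG covariances, is:

```python
import numpy as np
from scipy.linalg import expm, fractional_matrix_power, logm

def airm_distance(a, b):
    """Affine-invariant Riemannian distance between SPD matrices."""
    inv_sqrt = fractional_matrix_power(a, -0.5)
    return np.linalg.norm(logm(inv_sqrt @ b @ inv_sqrt), "fro")

def riemannian_mean(covs, n_iter=20):
    """Karcher (Frechet) mean of SPD matrices via fixed-point iteration."""
    mean = np.mean(covs, axis=0)  # Euclidean mean as the starting point
    for _ in range(n_iter):
        sqrt = fractional_matrix_power(mean, 0.5)
        inv_sqrt = fractional_matrix_power(mean, -0.5)
        # Average in the tangent space at the current mean, then map back
        tangent = np.mean([logm(inv_sqrt @ c @ inv_sqrt) for c in covs], axis=0)
        mean = np.real(sqrt @ expm(tangent) @ sqrt)
    return mean

def mdrm_predict(cov, class_means):
    """Assign cov to the class with minimum Riemannian distance to its mean."""
    return min(class_means, key=lambda k: airm_distance(cov, class_means[k]))

# Toy SPD matrices standing in for per-class trial covariances
class_means = {
    "rest": riemannian_mean([np.eye(2) * s for s in (0.9, 1.0, 1.1)]),
    "ssvep": riemannian_mean([np.eye(2) * s for s in (3.8, 4.0, 4.2)]),
}
label = mdrm_predict(np.eye(2) * 1.05, class_means)
```

Swapping `airm_distance` for another distance or divergence changes both the class centers and the decision boundary, which is precisely the comparison the paper carries out.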