1
Akdağ A, Baykan ÖK. Isolated sign language recognition through integrating pose data and motion history images. PeerJ Comput Sci 2024; 10:e2054. [PMID: 38855212; PMCID: PMC11157617; DOI: 10.7717/peerj-cs.2054]
Abstract
This article presents an innovative approach to isolated sign language recognition (SLR), centered on the integration of pose data with motion history images (MHIs) derived from those data. Our research combines spatial information obtained from body, hand, and face poses with the comprehensive details provided by three-channel MHI data concerning the temporal dynamics of the sign. In particular, our finger pose-based MHI (FP-MHI) feature significantly enhances recognition success, capturing nuances of finger movements and gestures that existing SLR approaches miss. This feature improves the accuracy and reliability of SLR systems by more accurately capturing the fine details and richness of sign language. Additionally, we enhance overall model accuracy by predicting missing pose data through linear interpolation. Our approach, based on a ResNet-18 model enhanced with the randomized leaky rectified linear unit (RReLU), handles the interaction between manual and non-manual features through the fusion of extracted features and classification with a support vector machine (SVM). In our experiments, this integration demonstrates results competitive with or superior to current SLR methodologies across various datasets, including BosphorusSign22k-general, BosphorusSign22k, LSA64, and GSL.
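The motion history image idea underlying this approach can be sketched in a few lines: each cell that moves is stamped at full intensity while older motion fades linearly, so a single image encodes the temporal dynamics of the sign. The update rule below is the standard MHI formulation; the decay and intensity values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=255, decay=15):
    """One MHI step: fade previous motion by `decay`, stamp new motion at `tau`."""
    mhi = np.maximum(mhi.astype(np.int32) - decay, 0)
    mhi[motion_mask] = tau
    return mhi.astype(np.uint8)

# toy 4x4 frames: motion first in the top-left cell, then one cell lower
mhi = np.zeros((4, 4), dtype=np.uint8)

mask1 = np.zeros((4, 4), dtype=bool)
mask1[0, 0] = True
mhi = update_mhi(mhi, mask1)

mask2 = np.zeros((4, 4), dtype=bool)
mask2[1, 1] = True
mhi = update_mhi(mhi, mask2)

# the newest motion is brightest; the earlier motion has faded one decay step
```

Stacking such images for, say, body, hand, and finger poses would yield a multi-channel MHI of the kind the abstract describes.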
Affiliation(s)
- Ali Akdağ
- Department of Computer Engineering, Tokat Gaziosmanpaşa University, Tokat, Turkey
- Ömer Kaan Baykan
- Department of Computer Engineering, Konya Technical University, Konya, Turkey
2
Zhang J, Bu X, Wang Y, Dong H, Zhang Y, Wu H. Sign language recognition based on dual-path background erasure convolutional neural network. Sci Rep 2024; 14:11360. [PMID: 38762676; PMCID: PMC11102471; DOI: 10.1038/s41598-024-62008-z]
Abstract
Sign language is an important way to provide expression information to people with hearing and speaking disabilities. Therefore, sign language recognition has always been a very important research topic. However, many sign language recognition systems currently require complex deep models and rely on expensive sensors, which limits the application scenarios of sign language recognition. To address this issue, based on computer vision, this study proposed a lightweight, dual-path background erasing deep convolutional neural network (DPCNN) model for sign language recognition. The DPCNN consists of two paths: one path learns the overall features, while the other learns the background features. The background features are gradually subtracted from the overall features to obtain an effective representation of hand features. These features are then flattened into a one-dimensional vector and passed through a fully connected layer with 128 output units, followed by a fully connected output layer with 24 units. On the ASL Finger Spelling dataset, the proposed method achieves a total accuracy of 99.52% and a Macro-F1 score of 0.997. More importantly, the proposed method can be deployed on small terminals, broadening the application scenarios of sign language recognition. Experimental comparison shows that the proposed dual-path background erasure network model has better generalization ability.
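The background-erasure idea can be illustrated with a toy forward pass: features from the background path are subtracted from the overall-path features, then flattened and passed through the 128-unit and 24-unit fully connected layers the abstract describes. The feature shapes, random weights, and ReLU/softmax choices here are illustrative assumptions, not the paper's actual DPCNN.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy feature maps from the two convolutional paths (shape is illustrative)
overall_features = rng.random((8, 8, 16))     # path 1: whole frame
background_features = rng.random((8, 8, 16))  # path 2: background only

# background erasure: subtract background features from overall features
hand_features = overall_features - background_features

# flatten to one dimension, then FC(128) -> FC(24) as in the abstract
x = hand_features.reshape(1, -1)
w1 = rng.standard_normal((x.shape[1], 128)) * 0.01
b1 = np.zeros(128)
w2 = rng.standard_normal((128, 24)) * 0.01
b2 = np.zeros(24)

hidden = np.maximum(x @ w1 + b1, 0.0)  # ReLU hidden layer
logits = hidden @ w2 + b2
probs = np.exp(logits - logits.max())
probs /= probs.sum()                   # softmax over the 24 output classes
```

The 24-way output matches a static fingerspelling alphabet with 24 distinguishable handshapes.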
Affiliation(s)
- Junming Zhang
- School of Computer and Artificial Intelligence, Huanghuai University, Zhumadian, 463000, Henan Province, China
- Key Laboratory of Intelligent Lighting, Henan Province, Zhumadian, 463000, China
- Xiaolong Bu
- School of Computer and Artificial Intelligence, Huanghuai University, Zhumadian, 463000, Henan Province, China
- Key Laboratory of Intelligent Lighting, Henan Province, Zhumadian, 463000, China
- Yushuai Wang
- School of Computer and Artificial Intelligence, Huanghuai University, Zhumadian, 463000, Henan Province, China
- Key Laboratory of Intelligent Lighting, Henan Province, Zhumadian, 463000, China
- School of Computer Science, Zhongyuan University of Technology, Xinzheng, 450007, Henan, China
- Hao Dong
- School of Computer and Artificial Intelligence, Huanghuai University, Zhumadian, 463000, Henan Province, China
- Key Laboratory of Intelligent Lighting, Henan Province, Zhumadian, 463000, China
- School of Computer Science, Zhongyuan University of Technology, Xinzheng, 450007, Henan, China
- Yu Zhang
- School of Computer and Artificial Intelligence, Huanghuai University, Zhumadian, 463000, Henan Province, China
- Key Laboratory of Intelligent Lighting, Henan Province, Zhumadian, 463000, China
- Haitao Wu
- School of Computer and Artificial Intelligence, Huanghuai University, Zhumadian, 463000, Henan Province, China.
- Key Laboratory of Intelligent Lighting, Henan Province, Zhumadian, 463000, China.
3
Vihriälä TA, Raisamo R, Ihalainen T, Virkki J. Towards E-textiles in augmentative and alternative communication - user scenarios developed by speech and language therapists. Disabil Rehabil Assist Technol 2024; 19:1626-1636. [PMID: 37402238; DOI: 10.1080/17483107.2023.2225556]
Abstract
PURPOSE E-textiles have been the focus of interest in health technology, but little research has been done so far on how they could support persons with complex communication needs. A global estimate is that 97 million people may benefit from Augmentative and Alternative Communication (AAC). Unfortunately, despite the growing body of research, many persons with complex communication needs are left without functional means to communicate. This study aimed to address the lack of research in textile-based AAC and to build a picture of the issues that affect novel textile-based technology development. MATERIALS AND METHODS We arranged a focus group study with 12 speech and language therapists to elicit user scenarios to understand needs, activities, and contexts when implementing a novel, textile-based technology in a user-centred approach. RESULTS AND CONCLUSION As a result, we present six user scenarios that were created for children to enhance their social interaction in everyday life when using textile-based technology that recognizes touch or detects motion. Persistent availability and individual design matched to a person's capabilities, along with ease of use and personalization, were perceived as important requirements. Through these scenarios, we identified technological constraints regarding the development of e-textile technology and its use in the AAC field, such as issues regarding sensors and providing power supply. Resolving the design constraints will lead to a feasible and portable e-textile AAC system.
Affiliation(s)
- Tanja A Vihriälä
- Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
- Roope Raisamo
- Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
- Tiina Ihalainen
- Faculty of Social Sciences, Tampere University, Tampere, Finland
- Johanna Virkki
- Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
4
Vihriälä TA, Ihalainen T, Elo C, Lintula L, Virkki J. Possibilities of intelligent textiles in AAC - perspectives of speech and language therapists. Disabil Rehabil Assist Technol 2024; 19:1019-1031. [PMID: 36371798; DOI: 10.1080/17483107.2022.2141900]
Abstract
PURPOSE The growth of new high-technology devices in the field of augmentative and alternative communication (AAC) has been rapid. However, a vast number of individuals with complex communication needs are left without functional means to communicate in their lives. Intelligent textiles are one of the growing industries in health technologies yet to be explored for the possibility of implementation as an AAC solution. This study aimed to investigate the potential of intelligent textiles and their functions in daily life perceived by experienced speech and language therapists and to obtain data, which will offer direction on how to proceed with prototype development. MATERIALS AND METHODS Focus group discussions were conducted remotely within two groups of experienced speech and language therapists (n = 12). The data obtained from the discussions were analysed thematically. RESULTS AND CONCLUSION According to the stakeholders in question, intelligent textiles were perceived most useful for individuals with motor disabilities and those with severe intellectual disabilities. The most prominent themes for the purpose of using the intelligent textiles were social interaction and accessing meaningful activities independently. The participants also described how this technology could be used in terms of the textile, the input needed and the output the technology provides. The versatile results are discussed along with directions for future research.
Affiliation(s)
- Tanja A Vihriälä
- Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
- Tiina Ihalainen
- Faculty of Social Sciences, Tampere University, Tampere, Finland
- Charlotta Elo
- Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
- Lotta Lintula
- Faculty of Social Sciences, Tampere University, Tampere, Finland
- Johanna Virkki
- Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
5
Alsharif B, Altaher AS, Altaher A, Ilyas M, Alalwany E. Deep Learning Technology to Recognize American Sign Language Alphabet. Sensors (Basel) 2023; 23:7970. [PMID: 37766026; PMCID: PMC10535774; DOI: 10.3390/s23187970]
Abstract
Historically, individuals with hearing impairments have faced neglect, lacking the necessary tools to facilitate effective communication. However, advancements in modern technology have paved the way for the development of various tools and software aimed at improving the quality of life for hearing-disabled individuals. This research paper presents a comprehensive study employing five distinct deep learning models to recognize hand gestures for the American Sign Language (ASL) alphabet. The primary objective of this study was to leverage contemporary technology to bridge the communication gap between hearing-impaired individuals and individuals with no hearing impairment. The models utilized in this research (AlexNet, ConvNeXt, EfficientNet, ResNet-50, and VisionTransformer) were trained and tested using an extensive dataset comprising over 87,000 images of ASL alphabet hand gestures. Numerous experiments were conducted, involving modifications to the architectural design parameters of the models to obtain maximum recognition accuracy. The experimental results revealed that ResNet-50 achieved an exceptional accuracy rate of 99.98%, the highest among all models. EfficientNet attained an accuracy rate of 99.95%, ConvNeXt achieved 99.51%, AlexNet attained 99.50%, while VisionTransformer yielded the lowest accuracy of 88.59%.
Affiliation(s)
- Bader Alsharif
- Department of Electrical Engineering and Computer Science, Florida Atlantic University, 777 Glades Road, Boca Raton, FL 33431, USA
- Department of Computer Science and Engineering, College of Telecommunication and Information, Technical and Vocational Training Corporation (TVTC), Riyadh 11564, Saudi Arabia
- Ali Salem Altaher
- Department of Electrical Engineering and Computer Science, Florida Atlantic University, 777 Glades Road, Boca Raton, FL 33431, USA
- Ahmed Altaher
- Department of Electrical Engineering and Computer Science, Florida Atlantic University, 777 Glades Road, Boca Raton, FL 33431, USA
- Electronic Computer Center, Al-Nahrain University, Jadriya, Baghdad 64074, Iraq
- Mohammad Ilyas
- Department of Electrical Engineering and Computer Science, Florida Atlantic University, 777 Glades Road, Boca Raton, FL 33431, USA
- Easa Alalwany
- Department of Electrical Engineering and Computer Science, Florida Atlantic University, 777 Glades Road, Boca Raton, FL 33431, USA
- College of Computer Science and Engineering, Taibah University, Yanbu 46421, Saudi Arabia
6
Qahtan S, Alsattar HA, Zaidan AA, Deveci M, Pamucar D, Martinez L. A comparative study of evaluating and benchmarking sign language recognition system-based wearable sensory devices using a single fuzzy set. Knowl Based Syst 2023. [DOI: 10.1016/j.knosys.2023.110519]
7
Lee JW, Yu KH. Wearable Drone Controller: Machine Learning-Based Hand Gesture Recognition and Vibrotactile Feedback. Sensors (Basel) 2023; 23:2666. [PMID: 36904870; PMCID: PMC10006975; DOI: 10.3390/s23052666]
Abstract
We proposed a wearable drone controller with hand gesture recognition and vibrotactile feedback. The intended hand motions of the user are sensed by an inertial measurement unit (IMU) placed on the back of the hand, and the signals are analyzed and classified using machine learning models. The recognized hand gestures control the drone, and the obstacle information in the heading direction of the drone is fed back to the user by activating the vibration motor attached to the wrist. Simulation experiments for drone operation were performed, and the participants' subjective evaluations regarding the controller's convenience and effectiveness were investigated. Finally, experiments with a real drone were conducted and discussed to validate the proposed controller.
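A common baseline for this kind of IMU gesture pipeline is to window the signal, extract simple time-domain features per channel, and classify against stored templates. The sketch below uses nearest-centroid matching on mean and standard-deviation features; the gesture names, window size, and noise levels are illustrative assumptions, not details from the paper.

```python
import numpy as np

def imu_features(window):
    """Per-channel mean and standard deviation of a (samples x 6) IMU window
    (three accelerometer axes + three gyroscope axes)."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

rng = np.random.default_rng(0)

# toy templates: a still hand (low variance) vs. a waving hand (high variance)
centroids = {
    "hover": imu_features(rng.normal(0.0, 0.05, size=(50, 6))),
    "wave": imu_features(rng.normal(0.0, 2.0, size=(50, 6))),
}

# classify a new window by its nearest feature centroid
new_window = rng.normal(0.0, 1.9, size=(50, 6))
features = imu_features(new_window)
pred = min(centroids, key=lambda k: np.linalg.norm(features - centroids[k]))
```

A trained classifier (as used in the paper) would replace the centroid lookup, but the windowing and feature-extraction stages are typically similar.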
Affiliation(s)
- Ji-Won Lee
- KEPCO Research Institute, Daejeon 34056, Republic of Korea
- Kee-Ho Yu
- Department of Aerospace Engineering, Jeonbuk National University, Jeonju 54896, Republic of Korea
- Future Air Mobility Research Center, Jeonbuk National University, Jeonju 54896, Republic of Korea
8
Xia K, Lu W, Fan H, Zhao Q. A Sign Language Recognition System Applied to Deaf-Mute Medical Consultation. Sensors (Basel) 2022; 22:9107. [PMID: 36501809; PMCID: PMC9739223; DOI: 10.3390/s22239107]
Abstract
It is an objective reality that deaf-mute people have difficulty seeking medical treatment. Due to the lack of sign language interpreters, most hospitals in China currently do not have the ability to interpret sign language. Normal medical treatment is a luxury for deaf people. In this paper, we propose a sign language recognition system: Heart-Speaker. Heart-Speaker is applied to a deaf-mute consultation scenario. The system provides a low-cost solution for the difficult problem of treating deaf-mute patients. The doctor only needs to point the Heart-Speaker at the deaf patient and the system automatically captures the sign language movements and translates the sign language semantics. When a doctor issues a diagnosis or asks a patient a question, the system displays the corresponding sign language video and subtitles to meet the needs of two-way communication between doctors and patients. The system uses the MobileNet-YOLOv3 model to recognize sign language. It meets the needs of running on embedded terminals and provides favorable recognition accuracy. We performed experiments to verify the accuracy of the measurements. The experimental results show that the accuracy rate of Heart-Speaker in recognizing sign language can reach 90.77%.
Affiliation(s)
- Weiwei Lu
- Correspondence: Tel.: +86-13671637275
9
Kudrinko K, Flavin E, Shepertycky M, Li Q. Assessing the need for a wearable sign language recognition device for deaf individuals: Results from a national questionnaire. Assist Technol 2022; 34:684-697. [PMID: 33872548; DOI: 10.1080/10400435.2021.1913259]
Abstract
The purpose of this study was to determine how sign language users perceive the sign language recognition (SLR) field, with a focus on gaining perspectives from members of the Canadian Deaf community. A questionnaire consisting of a series of rating and open-ended questions was used to gather perspectives and insights related to a hypothetical SLR device. The survey was distributed to members of the Deaf community, family and friends of Deaf individuals, and service providers, all of whom had some proficiency in American Sign Language (ASL). The average ratings provided by Deaf participants were distributed normally with a right-modal skew in the direction of the positive ratings. Six fundamental concerns about SLR technologies were identified from participants' responses, with the most frequently cited pertaining to the technology's feasibility. In descending order, participants ranked translation accuracy, speed, and comfort as the three most important design characteristics for potential SLR devices. Respondents identified many potential situations in which SLR devices could be used. For an SLR device to be user-centric and culturally appropriate, it is essential that future work in the field integrates perspectives from members of the Deaf community.
Affiliation(s)
- Karly Kudrinko
- Department of Mechanical and Materials Engineering, Queen's University, Kingston, Ontario, Canada
- Emile Flavin
- Department of Mechanical and Materials Engineering, Queen's University, Kingston, Ontario, Canada
- Michael Shepertycky
- Department of Mechanical and Materials Engineering, Queen's University, Kingston, Ontario, Canada
- Qingguo Li
- Department of Mechanical and Materials Engineering, Queen's University, Kingston, Ontario, Canada
10
Prietch SS, Sánchez JA, García JG. A Systematic Review of User Studies as a Basis for the Design of Systems for Automatic Sign Language Processing. ACM Trans Access Comput 2022. [DOI: 10.1145/3563395]
Abstract
Deaf persons, whether or not they are sign language users, make up one of various existing marginalized populations that historically have been socially and politically underrepresented. Unfortunately, this also happens in technology design. Conducting user studies in which marginalized populations are represented is a step towards guaranteeing their right to participate in choices and decisions that are made for, with, and by them. This paper presents and discusses results from a Systematic Literature Review (SLR) of user studies in the design of systems for Automatic Sign Language Processing (ASLP). Following our SLR protocol, from 2,486 papers initially found, we applied inclusion and exclusion criteria to finally select 37 papers in our review. We excluded publications that were not full papers, were not related to our main topic of interest, or that reported results that had been updated by more recent papers. All the selected papers focus on user studies as a basis for the design of three major aspects of ASLP: generation (ASLG), recognition (ASLR) and translation (ASLT). With regard to our specific area of interest, we analyzed four areas related to our research questions: goals and research methods, types of user involvement in the interaction design life cycle, cultural and collaborative aspects, and other lessons learned from the primary studies under review. Salient findings from our analysis show that numerical scale questionnaires are the most frequently used research instruments, that co-designing ASLP systems with sign language users is not a common practice (as potential users are included mostly in the evaluation phase), and that Deaf persons who are sign language users are only seldom included as members of research teams. These findings point to the need for more inclusive and qualitative research for, with and by Deaf persons who are sign language users.
Affiliation(s)
- Soraia Silva Prietch
- Universidade Federal de Rondonópolis, Mato Grosso, Brazil
- Postdoctoral scholar at the Doctoral Program in Educational Systems and Environments (DSAE), Facultad de Ciencias de la Electrónica, Benemérita Universidad Autónoma de Puebla (BUAP), Puebla, Mexico
- J. Alfredo Sánchez
- Laboratorio Nacional de Informática Avanzada (LANIA), Xalapa, Veracruz, Mexico
- Josefina Guerrero García
- Doctoral Program in Educational Systems and Environments (DSAE), Facultad de Ciencias de la Electrónica, Benemérita Universidad Autónoma de Puebla (BUAP), Puebla, México
- Facultad de Ciencias de la Computación, Benemérita Universidad Autónoma de Puebla (BUAP), Puebla, Mexico
11
Bahia NK, Rani R. Multi-level Taxonomy Review for Sign Language Recognition: Emphasis on Indian Sign Language. ACM Trans Asian Low-Resour Lang Inf Process 2022. [DOI: 10.1145/3530259]
Abstract
With the phenomenal increase in image and video databases, there is growing interest in human-computer interaction that recognizes sign language. Sign language, a form of non-verbal communication, is the exchange of information between two people using gestures. Sign language recognition has been studied for many languages; however, comparatively little work has been done for Indian Sign Language. This paper presents a review of sign language recognition for multiple languages. Data acquisition methods are surveyed in four categories: (a) glove-based, (b) Kinect-based, (c) leap motion controller and (d) vision-based, and the pros and cons of each acquisition method are discussed. Applications of sign language recognition are also discussed.
Furthermore, this review creates a coherent taxonomy that organizes modern research into three levels: Level 1, elementary (recognition of sign characters); Level 2, advanced (recognition of sign words); and Level 3, professional (sentence interpretation). The open challenges and issues at each level are also explored to provide valuable insights into the technological landscape. Various publicly available datasets for different sign languages are discussed. The review shows that significant work on communication via sign recognition has been performed on static, dynamic, isolated and continuous gestures using various acquisition methods. The hope is that this study will enable readers to identify new pathways and gain the knowledge to carry out further research in the domain of sign language recognition.
12
Reichert C, Klemm L, Mushunuri RV, Kalyani A, Schreiber S, Kuehn E, Azañón E. Discriminating Free Hand Movements Using Support Vector Machine and Recurrent Neural Network Algorithms. Sensors (Basel) 2022; 22:6101. [PMID: 36015862; PMCID: PMC9412700; DOI: 10.3390/s22166101]
Abstract
Decoding natural hand movements is of interest for human-computer interaction and may constitute a helpful tool in the diagnosis of motor diseases and rehabilitation monitoring. However, the accurate measurement of complex hand movements and the decoding of dynamic movement data remain challenging. Here, we introduce two algorithms, one based on support vector machine (SVM) classification combined with dynamic time warping, and the other based on a long short-term memory (LSTM) neural network, which were designed to discriminate small differences in defined sequences of hand movements. We recorded hand movement data from 17 younger and 17 older adults using an exoskeletal data glove while they were performing six different movement tasks. Accuracy rates in decoding the different movement types were similarly high for SVM and LSTM in across-subject classification, but, for within-subject classification, SVM outperformed LSTM. The SVM-based approach, therefore, appears particularly promising for the development of movement decoding tools, in particular if the goal is to generalize across age groups, for example for detecting specific motor disorders or tracking their progress over time.
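The dynamic time warping component of the SVM-based approach can be sketched with the classic recurrence: two sequences are aligned by allowing local stretching before their distance is computed, which makes movement sequences performed at different speeds comparable. This is generic textbook DTW, not the authors' implementation.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences.
    D[i][j] = local cost + min(insertion, deletion, match)."""
    n, m = len(a), len(b)
    inf = float("inf")
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# the same movement performed at two speeds still aligns perfectly,
# while a genuinely different movement does not
slow = [1, 1, 2, 2, 3, 3]
fast = [1, 2, 3]
other = [3, 2, 1]
```

In the paper's setting, pairwise DTW distances between recorded glove sequences would feed the SVM, for example through a distance-based kernel.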
Affiliation(s)
- Christoph Reichert
- Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118 Magdeburg, Germany
- Center for Behavioral Brain Sciences (CBBS), Universitaetsplatz 2, 39106 Magdeburg, Germany
- Forschungscampus STIMULATE, Otto-Hahn-Str. 2, 39106 Magdeburg, Germany
- Lisa Klemm
- Department of Neurology, University Medical Center, Leipziger Str. 44, 39120 Magdeburg, Germany
- Avinash Kalyani
- Institute for Cognitive Neurology and Dementia Research (IKND), Otto-von-Guericke University, Leipziger Str. 44, 39120 Magdeburg, Germany
- German Center for Neurodegenerative Diseases (DZNE), Leipziger Str. 44, 39120 Magdeburg, Germany
- Stefanie Schreiber
- Center for Behavioral Brain Sciences (CBBS), Universitaetsplatz 2, 39106 Magdeburg, Germany
- Department of Neurology, University Medical Center, Leipziger Str. 44, 39120 Magdeburg, Germany
- Esther Kuehn
- Center for Behavioral Brain Sciences (CBBS), Universitaetsplatz 2, 39106 Magdeburg, Germany
- Institute for Cognitive Neurology and Dementia Research (IKND), Otto-von-Guericke University, Leipziger Str. 44, 39120 Magdeburg, Germany
- German Center for Neurodegenerative Diseases (DZNE), Leipziger Str. 44, 39120 Magdeburg, Germany
- Hertie Institute for Clinical Brain Research (HIH), Otfried Mueller-Str. 27, 72076 Tuebingen, Germany
- Elena Azañón
- Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118 Magdeburg, Germany
- Center for Behavioral Brain Sciences (CBBS), Universitaetsplatz 2, 39106 Magdeburg, Germany
- Department of Neurology, University Medical Center, Leipziger Str. 44, 39120 Magdeburg, Germany
13
Reducing the Number of Sensors in the Data Glove for Recognition of Static Hand Gestures. Appl Sci (Basel) 2022. [DOI: 10.3390/app12157388]
Abstract
Data glove devices, apart from being widely used in industry and entertainment, can also serve as a means for communication with the environment. This is possible thanks to advances in electronic technology and machine learning algorithms. In this paper, the results of a study using a designed data glove equipped with 10 piezoelectric sensors are reported, and the glove is validated on a recognition task of hand gestures based on 16 static signs of the Polish Sign Language (PSL) alphabet. The main result of the study is that recognition of the 16 PSL static gestures is possible with a reduced number of piezoelectric sensors. This result was achieved by applying a decision tree classifier, which can rank the importance of the sensors for recognition performance. Other machine learning algorithms were also tested, and it was shown that for the Support Vector Machine, k-NN and Bagged Trees classifiers, a sign recognition rate exceeding 90% can be achieved with just three preselected sensors. Such a result is important for reducing the design complexity and cost of the data glove while sustaining the reliability of the device.
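The sensor-reduction idea, ranking sensors by how much they contribute to class separation and keeping only the top few, can be illustrated without a full decision tree. The sketch below scores each sensor with a simple Fisher criterion on synthetic data in which only two of ten sensors are informative; the criterion and the data are illustrative stand-ins for the paper's decision-tree importance ranking.

```python
import numpy as np

def fisher_scores(X, y):
    """Score each sensor (column) by between-class variance of its class means
    divided by pooled within-class variance: higher = more discriminative."""
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / (within + 1e-12)

# synthetic readings from a 10-sensor glove for two gestures;
# only sensors 0 and 3 actually respond differently to the gestures
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 10))
X[y == 1, 0] += 3.0
X[y == 1, 3] += 3.0

ranking = np.argsort(fisher_scores(X, y))[::-1]  # best sensors first
top_two = set(ranking[:2].tolist())
```

A classifier retrained on only the top-ranked sensors would then be evaluated, mirroring the paper's three-sensor result.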
14
Wu R, Seo S, Ma L, Bae J, Kim T. Full-Fiber Auxetic-Interlaced Yarn Sensor for Sign-Language Translation Glove Assisted by Artificial Neural Network. Nano-Micro Lett 2022; 14:139. [PMID: 35776226; PMCID: PMC9249965; DOI: 10.1007/s40820-022-00887-5]
Abstract
Yarn sensors have shown promising application prospects in wearable electronics owing to their shape adaptability, good flexibility, and weavability. However, it remains a critical challenge to develop a simultaneously structure-stable, fast-response, body-conformal, mechanically robust yarn sensor using full microfibers in an industrially scalable manner. Herein, a full-fiber auxetic-interlaced yarn sensor (AIYS) with negative Poisson's ratio is designed and fabricated using a continuous, mass-producible, structure-programmable, and low-cost spinning technology. Based on the unique microfiber-interlaced architecture, AIYS simultaneously achieves a Poisson's ratio of -1.5, a robust mechanical property (0.6 cN/dtex), and a fast strain-resistance response (0.025 s), which enhances conformality with the human body and quickly transduces human joint bending and/or stretching into electrical signals. Moreover, AIYS shows good flexibility, washability, weavability, and high repeatability. Further, with the AIYS array, an ultrafast full-letter sign-language translation glove is developed using an artificial neural network. The sign-language translation glove achieves an accuracy of 99.8% for all letters of the English alphabet within a short time of 0.25 s. Furthermore, owing to its excellent full-letter recognition ability, real-time translation of daily dialogues and complex sentences is also demonstrated. The smart glove exhibits remarkable potential for eliminating communication barriers between signers and non-signers.
Affiliation(s)
- Ronghui Wu: Department of Mechanical Engineering, Ulsan National Institute of Science and Technology (UNIST), 50 UNIST-Gil, Ulsan, 44919, Republic of Korea
- Sangjin Seo: Department of Mechanical Engineering, Ulsan National Institute of Science and Technology (UNIST), 50 UNIST-Gil, Ulsan, 44919, Republic of Korea
- Liyun Ma: College of Physical Science and Technology, Xiamen University, Xiamen, 361005, People's Republic of China
- Juyeol Bae: Department of Mechanical Engineering, Ulsan National Institute of Science and Technology (UNIST), 50 UNIST-Gil, Ulsan, 44919, Republic of Korea
- Taesung Kim: Department of Mechanical Engineering, Ulsan National Institute of Science and Technology (UNIST), 50 UNIST-Gil, Ulsan, 44919, Republic of Korea
15
Abstract
Given the achievements in automatically translating text from one language to another, one would expect to see similar advancements in translating between signed and spoken languages. However, progress in this effort has lagged in comparison. Typically, machine translation consists of processing text from one language to produce text in another. Because signed languages have no generally-accepted written form, translating spoken to signed language requires the additional step of displaying the language visually as animation through the use of a three-dimensional (3D) virtual human commonly known as an avatar. Researchers have been grappling with this problem for over twenty years, and it is still an open question. With the goal of developing a deeper understanding of the challenges posed by this question, this article gives a summary overview of the unique aspects of signed languages, briefly surveys the technology underlying avatars and performs an in-depth analysis of the features in a textual representation for avatar display. It concludes with a comparison of these features and makes observations about future research directions.
16
Sensing System for Plegic or Paretic Hands Self-Training Motivation. SENSORS 2022; 22:s22062414. [PMID: 35336583 PMCID: PMC8955878 DOI: 10.3390/s22062414] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/31/2021] [Revised: 03/06/2022] [Accepted: 03/07/2022] [Indexed: 11/16/2022]
Abstract
Patients after stroke with paretic or plegic hands require frequent exercises to promote neuroplasticity and improve hand joint mobilization. Available devices for hand exercising are intended for persons with some level of hand control or provide continuous passive motion with limited patient involvement. Patients can benefit from self-exercising, where they use the other hand to exercise the plegic or paretic one. However, post-stroke neuropsychological complications, apathy, and cognitive impairments such as forgetfulness make regular self-exercising difficult. This paper describes Przypominajka v2, a system intended to support self-exercising, remind patients about it, and motivate them. We propose a glove-based device with on-device machine-learning-based exercise scoring, a tablet-based interface, and a web-based application for therapists. The feasibility of on-device inference and the accuracy of correct-exercise classification were evaluated on four healthy participants. Use of the whole system is described in a case study with a patient with a paretic hand. The anomaly classification has an accuracy of 91.3% and an F1 score of 91.6%, but achieves poorer results for new users (78% and 81%, respectively). The case study showed that the patient reacted positively to exercising with Przypominajka, but there were issues relating to the sensor glove: ease of putting it on and clarity of instructions. The paper presents a new way in which sensor systems, with on-device machine-learning-based classification that can accurately score exercises, can support the rehabilitation and motivation of post-stroke patients.
17
Al-Samarraay MS, Zaidan A, Albahri O, Pamucar D, AlSattar H, Alamoodi A, Zaidan B, Albahri A. Extension of interval-valued Pythagorean FDOSM for evaluating and benchmarking real-time SLRSs based on multidimensional criteria of hand gesture recognition and sensor glove perspectives. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2021.108284] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
18
A new extension of FDOSM based on Pythagorean fuzzy environment for evaluating and benchmarking sign language recognition systems. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06683-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
19
Guo K, Zhang S, Zhao S, Yang H. Design and Manufacture of Data Gloves for Rehabilitation Training and Gesture Recognition Based on Flexible Sensors. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:6359403. [PMID: 34917309 PMCID: PMC8670918 DOI: 10.1155/2021/6359403] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/19/2021] [Revised: 11/23/2021] [Accepted: 11/26/2021] [Indexed: 11/17/2022]
Abstract
This work studies how flexible sensors can be applied to data gloves, taking the production and usage scenarios of the data glove as its research object. Although many studies have explored transferring flexible sensors to data gloves, such research rarely demonstrates specific application scenarios such as gesture recognition or hand rehabilitation training, and the limited experimental data and theoretical analysis make it difficult to advance the design of flexible sensors and flexible data gloves. Therefore, this study uses the research group's self-made flexible sensor as the core sensing unit of a flexible data glove that monitors the bending of the knuckles, which is then used for simple gesture recognition and rehabilitation training.
Affiliation(s)
- Kai Guo: School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Senhao Zhang: School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Shasha Zhao: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Hongbo Yang: School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
20
Pan J, Li Y, Luo Y, Zhang X, Wang X, Wong DLT, Heng CH, Tham CK, Thean AVY. Hybrid-Flexible Bimodal Sensing Wearable Glove System for Complex Hand Gesture Recognition. ACS Sens 2021; 6:4156-4166. [PMID: 34726380 DOI: 10.1021/acssensors.1c01698] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
Abstract
As 5G communication technology allows for speedier access to extended information and knowledge, a more sophisticated human-machine interface beyond touchscreens and keyboards is necessary to improve communication bandwidth and overcome the interfacing barrier. However, completely replicating the full extent of human interaction, including operational dexterity, spatial awareness, sensory feedback, and collaborative capability, remains a challenge. Here, we demonstrate a hybrid-flexible wearable system, consisting of simple bimodal capacitive sensors and a customized low-power interface circuit integrated with machine learning algorithms, to accurately recognize complex gestures. The 16-channel sensor array extracts spatial and temporal information on finger movement (deformation) and hand location (proximity) simultaneously. Using machine learning, accuracies of over 99% and 91% are achieved for user-independent static and dynamic gesture recognition, respectively. Our approach proves that an extremely simple bimodal sensing platform, one that identifies local interactions and perceives spatial context concurrently, is crucial in the fields of sign communication, remote robotics, and smart manufacturing.
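The bimodal array above yields, per gesture, a short time series of 16-channel readings. A hedged sketch of one common featurization (assumed here for illustration; the paper's exact pipeline is not reproduced): per-channel mean, capturing static posture, plus per-channel range, capturing movement within the window:

```python
def window_features(samples):
    """samples: a list of equal-length channel tuples, one per time step
    (e.g., 16 capacitance readings). Returns, for each channel, its mean
    (static posture) and its max-min range (temporal movement)."""
    n = len(samples)
    feats = []
    for ch in range(len(samples[0])):
        col = [s[ch] for s in samples]
        feats.append(sum(col) / n)          # mean: static hand posture
        feats.append(max(col) - min(col))   # range: dynamic movement
    return feats

# Toy 3-step window over 16 channels: channel 0 varies, the rest are flat.
window = [tuple([v] + [0.0] * 15) for v in (1.0, 2.0, 3.0)]
f = window_features(window)
print(len(f), f[0], f[1])  # 32 features; channel 0 mean 2.0, range 2.0
```

A classifier (static or dynamic) would then consume this fixed-length feature vector regardless of the window's duration.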
Affiliation(s)
- Jieming Pan: Department of Electrical and Computer Engineering, National University of Singapore, 4 Engineering Drive 3, 117583 Singapore
- Yida Li: Department of Electrical and Computer Engineering, National University of Singapore, 4 Engineering Drive 3, 117583 Singapore; Engineering Research Center of Integrated Circuits for Next-Generation Communications, Ministry of Education, Southern University of Science and Technology, Shenzhen 518055, China
- Yuxuan Luo: Department of Electrical and Computer Engineering, National University of Singapore, 4 Engineering Drive 3, 117583 Singapore
- Xiangyu Zhang: Department of Electrical and Computer Engineering, National University of Singapore, 4 Engineering Drive 3, 117583 Singapore
- Xinghua Wang: Department of Electrical and Computer Engineering, National University of Singapore, 4 Engineering Drive 3, 117583 Singapore
- David Liang Tai Wong: Department of Electrical and Computer Engineering, National University of Singapore, 4 Engineering Drive 3, 117583 Singapore
- Chun-Huat Heng: Department of Electrical and Computer Engineering, National University of Singapore, 4 Engineering Drive 3, 117583 Singapore
- Chen-Khong Tham: Department of Electrical and Computer Engineering, National University of Singapore, 4 Engineering Drive 3, 117583 Singapore
- Aaron Voon-Yew Thean: Department of Electrical and Computer Engineering, National University of Singapore, 4 Engineering Drive 3, 117583 Singapore
21
Farooq U, Rahim MSM, Sabir N, Hussain A, Abid A. Advances in machine translation for sign language: approaches, limitations, and challenges. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06079-3] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
22
Sanjeev OP, Mishra US, Singh A. Sign language can reduce communication interference in Emergency Department. Am J Emerg Med 2021; 56:290. [PMID: 34364708 PMCID: PMC8294705 DOI: 10.1016/j.ajem.2021.07.026] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2021] [Accepted: 07/17/2021] [Indexed: 12/01/2022] Open
Affiliation(s)
- O P Sanjeev: Department of Emergency Medicine, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow, India
- U S Mishra: Department of Emergency Medicine, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow, India
- A Singh: Department of Emergency Medicine, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow, India
23
Demolder C, Molina A, Hammond FL, Yeo WH. Recent advances in wearable biosensing gloves and sensory feedback biosystems for enhancing rehabilitation, prostheses, healthcare, and virtual reality. Biosens Bioelectron 2021; 190:113443. [PMID: 34171820 DOI: 10.1016/j.bios.2021.113443] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Revised: 06/02/2021] [Accepted: 06/11/2021] [Indexed: 12/16/2022]
Abstract
Wearable sensing gloves and sensory feedback devices that record and enhance the sensations of the hand are used in healthcare, prosthetics, robotics, and virtual reality. Recent technological advancements in soft actuators, flexible bioelectronics, and wireless data acquisition systems have enabled the development of ergonomic, lightweight, and low-cost wearable devices. This review article includes the most up-to-date materials, sensors, actuators, and system-packaging technologies to develop wearable sensing gloves and sensory feedback devices. Furthermore, this review contemplates the use of wearable sensing gloves and sensory feedback devices together to advance their capabilities as assistive devices for people with prostheses and sensory impaired limbs. This review is divided into two sections: one detailing the technologies used to develop strain, pressure, and temperature sensors integrated with a multifunctional wearable sensing glove, and the other reviewing the devices and methods used for wearable sensory displays. We discuss the limitations of the current methods and technologies along with the future direction of the field. Overall, this paper presents an all-inclusive review of the technologies used to develop wearable sensing gloves and sensory feedback devices.
Affiliation(s)
- Carl Demolder: George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, 30332, USA
- Alicia Molina: George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, 30332, USA
- Frank L Hammond: George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, 30332, USA; Wallace H. Coulter Department of Biomedical Engineering, Parker H. Petit Institute for Bioengineering and Biosciences, Georgia Institute of Technology, Atlanta, GA, 30332, USA; Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA, 30332, USA
- Woon-Hong Yeo: George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, 30332, USA; Wallace H. Coulter Department of Biomedical Engineering, Parker H. Petit Institute for Bioengineering and Biosciences, Georgia Institute of Technology, Atlanta, GA, 30332, USA; Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA, 30332, USA; Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Neural Engineering Center, Institute for Materials, Georgia Institute of Technology, Atlanta, GA, 30332, USA
24
Real-time sign language framework based on wearable device: analysis of MSL, DataGlove, and gesture recognition. Soft comput 2021. [DOI: 10.1007/s00500-021-05855-6] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
25
Jiang S, Kang P, Song X, Lo B, Shull P. Emerging Wearable Interfaces and Algorithms for Hand Gesture Recognition: A Survey. IEEE Rev Biomed Eng 2021; 15:85-102. [PMID: 33961564 DOI: 10.1109/rbme.2021.3078190] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Hands are vital in a wide range of fundamental daily activities, and neurological diseases that impede hand function can significantly affect quality of life. Wearable hand gesture interfaces hold promise to restore and assist hand function and to enhance human-human and human-computer communication. The purpose of this review is to synthesize current novel sensing interfaces and algorithms for hand gesture recognition, and the scope of applications covers rehabilitation, prosthesis control, sign language recognition, and human-computer interaction. Results showed that electrical, dynamic, acoustical/vibratory, and optical sensing were the primary input modalities in gesture recognition interfaces. Two categories of algorithms were identified: 1) classification algorithms for predefined, fixed hand poses and 2) regression algorithms for continuous finger and wrist joint angles. Conventional machine learning algorithms, including linear discriminant analysis, support vector machines, random forests, and non-negative matrix factorization, have been widely used for a variety of gesture recognition applications, and deep learning algorithms have more recently been applied to further facilitate the complex relationship between sensor signals and multi-articulated hand postures. Future research should focus on increasing recognition accuracy with larger hand gesture datasets, improving reliability and robustness for daily use outside of the laboratory, and developing softer, less obtrusive interfaces.
26
Shin S, Yoon HU, Yoo B. Hand Gesture Recognition Using EGaIn-Silicone Soft Sensors. SENSORS (BASEL, SWITZERLAND) 2021; 21:3204. [PMID: 34063055 PMCID: PMC8125695 DOI: 10.3390/s21093204] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/08/2021] [Revised: 04/23/2021] [Accepted: 05/02/2021] [Indexed: 01/23/2023]
Abstract
Exploiting hand gestures for non-verbal communication has extraordinary potential in human-computer interaction (HCI). A data glove is an apparatus widely used to recognize hand gestures. To improve the functionality of the data glove, a highly stretchable sensor with a reliable signal-to-noise ratio is indispensable. To this end, this study focused on the development of soft silicone microchannel sensors using a Eutectic Gallium-Indium (EGaIn) liquid metal alloy, and on a hand gesture recognition system built around a data glove using the soft sensor. The EGaIn-silicone sensor was uniquely designed to include two sensing channels to monitor finger joint movements and to facilitate EGaIn alloy injection into the meander-type microchannels. We recruited 15 participants to collect a hand gesture dataset covering 12 static hand gestures, and used the dataset to estimate the performance of the proposed data glove in hand gesture recognition with six traditional classification algorithms. Of these, a random forest showed the highest classification accuracy (97.3%) and linear discriminant analysis (LDA) the lowest (87.4%). The non-linearity of the proposed sensor degraded the accuracy of LDA; however, the other classifiers adequately overcame it and achieved high accuracies (>90%).
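Before classification, a resistive microchannel like the one above is typically calibrated per joint. As a hedged sketch (a generic least-squares calibration, assumed rather than taken from the paper; the readings and reference angles below are synthetic), a linear map from raw sensor reading to joint angle can be fitted as:

```python
def fit_line(readings, angles):
    """Least-squares fit of angle = a * reading + b for one sensing channel."""
    n = len(readings)
    mx = sum(readings) / n
    my = sum(angles) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(readings, angles))
         / sum((x - mx) ** 2 for x in readings))
    return a, my - a * mx

# Synthetic calibration: raw readings vs. reference joint angles (degrees).
a, b = fit_line([0.0, 1.0, 2.0, 3.0], [10.0, 20.0, 30.0, 40.0])
print(a, b)          # slope 10.0, intercept 10.0
print(a * 1.5 + b)   # reading 1.5 -> 25.0 degrees
```

A non-linear sensor (as the abstract notes for LDA) would need a higher-order fit or a classifier robust to the non-linearity; the linear fit here is only the simplest baseline.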
Affiliation(s)
- Sungtae Shin: Department of Mechanical Engineering, Dong-A University, Busan 49315, Korea; Department of Mechanical Engineering, University of Maryland, College Park, MD 20742, USA
- Han Ul Yoon: Division of Computer and Telecommunication Engineering, Yonsei University, Wonju 26493, Korea
- Byungseok Yoo: Department of Aerospace Engineering, University of Maryland, College Park, MD 20742, USA
27
Caeiro-Rodríguez M, Otero-González I, Mikic-Fonte FA, Llamas-Nistal M. A Systematic Review of Commercial Smart Gloves: Current Status and Applications. SENSORS (BASEL, SWITZERLAND) 2021; 21:2667. [PMID: 33920101 PMCID: PMC8070066 DOI: 10.3390/s21082667] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Revised: 04/02/2021] [Accepted: 04/08/2021] [Indexed: 11/23/2022]
Abstract
Smart gloves have been under development for the last 40 years to support human-computer interaction based on hand and finger movement. Despite many devoted efforts and multiple advances in related areas, these devices have not yet become mainstream. Nevertheless, during recent years, new devices with improved features have appeared and are also being used for research. This paper provides a review of current commercial smart gloves, focusing on three main capabilities: (i) hand and finger pose estimation and motion tracking, (ii) kinesthetic feedback, and (iii) tactile feedback. For the first capability, a detailed reference model of the basic hand and finger movements (known as degrees of freedom) is proposed. Based on the PRISMA guidelines for systematic reviews for the period 2015-2021, 24 commercial smart gloves were identified, while many others were discarded because they did not meet the inclusion criteria: currently active, commercial, and fully portable smart gloves providing some of the three main capabilities for the whole hand. The paper reviews the technologies involved and their main applications, and discusses the current state of development. Reference models to support end users and researchers in comparing and selecting the most appropriate devices are identified as a key need.
Affiliation(s)
- Manuel Caeiro-Rodríguez (with I.O.-G., F.A.M.-F., and M.L.-N.): atlanTTic Research Center for Telecommunication Technologies, Universidade de Vigo, 36312 Vigo, Spain
28
Pallotti A, Orengo G, Saggio G. Measurements comparison of finger joint angles in hand postures between an sEMG armband and a sensory glove. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.03.003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
29
Cerro I, Latasa I, Guerra C, Pagola P, Bujanda B, Astrain JJ. Smart System with Artificial Intelligence for Sensory Gloves. SENSORS 2021; 21:s21051849. [PMID: 33800847 PMCID: PMC7961828 DOI: 10.3390/s21051849] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Revised: 03/02/2021] [Accepted: 03/03/2021] [Indexed: 11/30/2022]
Abstract
This paper presents a new sensory system, based on advanced algorithms and machine learning techniques, that enables sensory gloves to verify in real time the connection of all connectors in the cabling of a cockpit module. Besides a microphone, the sensory glove includes a gyroscope and three accelerometers, which provide valuable information for selecting the appropriate time windows from the signal recorded by the glove's microphone. These signal windows are subsequently analyzed by a convolutional neural network, which indicates whether each connection has been made correctly. The development of the system, its implementation in a production industry environment, and the results obtained are analyzed.
Affiliation(s)
- Idoia Cerro: IED Electronics, Pol. Ind. Plazaola, E6, 31195 Berrioplano, Spain
- Iban Latasa: IED Electronics, Pol. Ind. Plazaola, E6, 31195 Berrioplano, Spain; Department of Statistics, Computer Science and Mathematics, Public University of Navarre, 31006 Pamplona, Spain
- Claudio Guerra: Plant Pamplona SAS Autosystemtechnik, S.A., Faurecia, Polígono Industrial de Arazuri-Orcoyen, 31170 Arazuri, Spain
- Pedro Pagola: Department of Statistics, Computer Science and Mathematics, Public University of Navarre, 31006 Pamplona, Spain; INAMAT2-Institute for Advanced Materials and Mathematics, Public University of Navarre, 31006 Pamplona, Spain
- Blanca Bujanda: Department of Statistics, Computer Science and Mathematics, Public University of Navarre, 31006 Pamplona, Spain; INAMAT2-Institute for Advanced Materials and Mathematics, Public University of Navarre, 31006 Pamplona, Spain
- José Javier Astrain: Department of Statistics, Computer Science and Mathematics, Public University of Navarre, 31006 Pamplona, Spain; Institute of Smart Cities, Public University of Navarre, 31006 Pamplona, Spain; Correspondence: ; Tel.: +34-948-169-532
30
Henderson J, Condell J, Connolly J, Kelly D, Curran K. Review of Wearable Sensor-Based Health Monitoring Glove Devices for Rheumatoid Arthritis. SENSORS 2021; 21:s21051576. [PMID: 33668234 PMCID: PMC7956752 DOI: 10.3390/s21051576] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/31/2020] [Revised: 01/31/2021] [Accepted: 02/18/2021] [Indexed: 02/07/2023]
Abstract
Early detection of Rheumatoid Arthritis (RA) and other neurological conditions is vital for effective treatment. Existing methods of detecting RA rely on observation, questionnaires, and physical measurement, each with its own weaknesses. Pharmaceutical medications and procedures aim to reduce the debilitating effects, preventing progression of the illness and bringing the condition into remission. There is still a great deal of ambiguity around patient diagnosis, as the difficulty of measurement has reduced the importance that joint stiffness plays as an RA identifier. The research areas of medical rehabilitation and clinical assessment indicate high-impact applications for wearable sensing devices. As a result, the overall aim of this research is to review current sensor technologies that could be used to measure an individual's RA severity. Other research teams working on RA have previously developed objective measuring devices to assess physical symptoms from hand steadiness through to joint stiffness, but unfamiliar physical effects of these sensory devices restricted their introduction into clinical practice. This paper provides an updated review of the sensor and glove types proposed in the literature to assist with the diagnosis and rehabilitation activities of RA. Consequently, the main goal of this paper is to review contact systems and to outline their potentialities and limitations. Considerable attention has been paid to glove-based devices, as they have been extensively researched for medical practice in recent years. Such technologies are reviewed to determine whether they are suitable measuring tools.
Affiliation(s)
- Jeffrey Henderson: School of Computing, Engineering and Intelligent System, Ulster University Magee Campus, Northland Rd, BT48 7JL Londonderry, Ireland; Correspondence: ; Tel.: +44-79-3309-4221
- Joan Condell: School of Computing, Engineering and Intelligent System, Ulster University Magee Campus, Northland Rd, BT48 7JL Londonderry, Ireland
- James Connolly: School of Science, Letterkenny Institute of Technology, Port Rd, Gortlee, F92 FC93 Letterkenny, Ireland
- Daniel Kelly: School of Computing, Engineering and Intelligent System, Ulster University Magee Campus, Northland Rd, BT48 7JL Londonderry, Ireland
- Kevin Curran: School of Computing, Engineering and Intelligent System, Ulster University Magee Campus, Northland Rd, BT48 7JL Londonderry, Ireland
31
Sensor Fusion of Motion-Based Sign Language Interpretation with Deep Learning. SENSORS 2020; 20:s20216256. [PMID: 33147891 PMCID: PMC7663682 DOI: 10.3390/s20216256] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/26/2020] [Revised: 10/25/2020] [Accepted: 10/31/2020] [Indexed: 11/30/2022]
Abstract
Sign language was designed to allow hearing-impaired people to interact with others. Nonetheless, knowledge of sign language is uncommon in society, which leads to a communication barrier with the hearing-impaired community. Many studies of sign language recognition utilizing computer vision (CV) have been conducted worldwide to reduce such barriers. However, this approach is restricted by the visual angle and highly affected by environmental factors. In addition, CV usually involves the use of machine learning, which requires collaboration of a team of experts and utilization of high-cost hardware utilities; this increases the application cost in real-world situations. Thus, this study aims to design and implement a smart wearable American Sign Language (ASL) interpretation system using deep learning, which applies sensor fusion that “fuses” six inertial measurement units (IMUs). The IMUs are attached to all fingertips and the back of the hand to recognize sign language gestures; thus, the proposed method is not restricted by the field of view. The study reveals that this model achieves an average recognition rate of 99.81% for dynamic ASL gestures. Moreover, the proposed ASL recognition system can be further integrated with ICT and IoT technology to provide a feasible solution to assist hearing-impaired people in communicating with others and improve their quality of life.
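Each of the six IMUs above supplies gyroscope and accelerometer readings that are commonly fused into a stable orientation estimate before gesture classification. A minimal, generic sketch (the classic complementary filter, shown as an assumed illustration rather than the study's actual deep-learning fusion):

```python
def complementary_pitch(prev_pitch, gyro_rate, acc_pitch, dt, alpha=0.98):
    """Blend the smooth-but-drifting gyro integral with the noisy-but-
    drift-free accelerometer pitch estimate (angles in degrees,
    gyro_rate in degrees/second, dt in seconds)."""
    return alpha * (prev_pitch + gyro_rate * dt) + (1 - alpha) * acc_pitch

# With the hand held still (gyro reads 0), the estimate converges toward
# the accelerometer's pitch over successive 100 Hz samples.
pitch = 0.0
for _ in range(200):
    pitch = complementary_pitch(pitch, gyro_rate=0.0, acc_pitch=10.0, dt=0.01)
print(round(pitch, 2))  # close to 10.0 after 2 s
```

A learned model, as in the study, can replace this fixed blending rule, but the per-IMU inputs it consumes are of exactly this kind.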
32
A Multi-Node Magnetic Positioning System with a Distributed Data Acquisition Architecture. SENSORS 2020; 20:s20216210. [PMID: 33143366 PMCID: PMC7662761 DOI: 10.3390/s20216210] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/15/2020] [Revised: 10/27/2020] [Accepted: 10/28/2020] [Indexed: 11/24/2022]
Abstract
We present a short-range magnetic positioning system that can track in real-time both the position and attitude (i.e., the orientation of the principal axes of an object in space) of up to six moving nodes. Moving nodes are small solenoids coupled with a capacitor (resonant circuit) and supplied with an oscillating voltage. Active moving nodes are detected by measuring the voltage that they induce on a three-dimensional matrix of passive coils. Data on each receiving coil are acquired simultaneously by a distributed data-acquisition architecture. Then, they are sent to a computer that calculates the position and attitude of each moving node. The entire process is run in real-time: the system can perform 62 position and attitude measurements per second when tracking six nodes simultaneously and up to 124 measurements per second when tracking one node only. Different active nodes are identified using a frequency-division multiple access technique. The position and angular resolution of the system have been experimentally estimated by tracking active nodes along a reference trajectory traced by a robotic arm. The factors limiting the viability of upscaling the system with more than six active nodes are discussed.
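The frequency-division multiple access scheme mentioned above assigns each active node its own excitation frequency, so a node can be identified by which frequency dominates the signal induced on a receiving coil. A hedged sketch (assumed illustration; the node frequencies and sampling rate below are made up):

```python
import math

def tone_power(signal, fs, f):
    """Normalized DFT magnitude of `signal` (sampled at fs Hz) at
    frequency f: a single correlation against sine and cosine, enough
    to tell which resonant node is driving the coil."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * f * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * f * i / fs) for i, s in enumerate(signal))
    return math.hypot(re, im) * 2 / n

fs = 1000                                                   # Hz, hypothetical
sig = [math.sin(2 * math.pi * 120 * i / fs) for i in range(1000)]  # 1 s of node2's tone
node_freqs = {"node1": 80.0, "node2": 120.0, "node3": 160.0}       # assumed assignment
active = max(node_freqs, key=lambda k: tone_power(sig, fs, node_freqs[k]))
print(active)  # -> node2
```

With simultaneous nodes, the same per-frequency correlation separates their contributions, which is what makes the frequency assignment an identification scheme rather than just a filter.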
33
El-Din SAE, El-Ghany MAA. Sign Language Interpreter System: An alternative system for machine learning. 2020 2ND NOVEL INTELLIGENT AND LEADING EMERGING SCIENCES CONFERENCE (NILES) 2020. [DOI: 10.1109/niles50944.2020.9257958] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
34
Kudrinko K, Flavin E, Zhu X, Li Q. Wearable Sensor-Based Sign Language Recognition: A Comprehensive Review. IEEE Rev Biomed Eng 2020; 14:82-97. [PMID: 32845843 DOI: 10.1109/rbme.2020.3019769] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Sign language is used as a primary form of communication by many people who are Deaf, deafened, hard of hearing, and non-verbal. Communication barriers exist for members of these populations during daily interactions with those who are unable to understand or use sign language. Advancements in technology and machine learning techniques have led to the development of innovative approaches for gesture recognition. This literature review focuses on analyzing studies that use wearable sensor-based systems to classify sign language gestures. A review of 72 studies from 1991 to 2019 was performed to identify trends, best practices, and common challenges. Attributes including sign language variation, sensor configuration, classification method, study design, and performance metrics were analyzed and compared. Results from this literature review could aid in the development of user-centred and robust wearable sensor-based systems for sign language recognition.
35. A Dynamic Gesture Recognition Interface for Smart Home Control Based on Croatian Sign Language. Applied Sciences 2020. [DOI: 10.3390/app10072300]
Abstract
Deaf and hard-of-hearing people face many challenges in everyday life. Their communication is based on sign language, and the ability of the cultural/social environment to fully understand that language determines whether or not it is accessible to them. Technology is a key factor with the potential to provide solutions that achieve higher accessibility and therefore improve the quality of life of deaf and hard-of-hearing people. In this paper, we introduce a smart home automation system specifically designed to provide real-time sign language recognition. The contribution of this paper comprises several elements. A novel hierarchical architecture is presented, including resource-and-time-aware modules: a wake-up module and a high-performance sign recognition module based on the Conv3D network. To achieve high-performance classification, multi-modal fusion of the RGB and depth modalities with temporal alignment was used. A small Croatian sign language database containing 25 different signs for use in a smart home environment was then created in collaboration with the deaf community. The system was deployed on an Nvidia Jetson TX2 embedded system with a StereoLabs ZED M stereo camera for online testing. The obtained results demonstrate that the proposed practical solution is a viable approach for real-time smart home control.
36. Song W, Han Q, Lin Z, Yan N, Luo D, Liao Y, Zhang M, Wang Z, Xie X, Wang A, Chen Y, Bai S. Design of a Flexible Wearable Smart sEMG Recorder Integrated Gradient Boosting Decision Tree Based Hand Gesture Recognition. IEEE Trans Biomed Circuits Syst 2019; 13:1563-1574. [PMID: 31751286] [DOI: 10.1109/tbcas.2019.2953998]
Abstract
This paper proposes a wearable smart sEMG recorder with integrated gradient boosting decision tree (GBDT)-based hand gesture recognition. A hydrogel-silica gel flexible surface electrode band is used as the tissue interface. The sEMG signal is collected with a neural signal acquisition analog front end (AFE) chip. A quantitative analysis method is proposed to balance algorithm complexity against recognition accuracy, and a parallel GBDT implementation featuring low latency is presented. The proposed GBDT-based neural signal processing unit (NSPU) is implemented on an FPGA near the AFE, and an RF module is used for wireless communication. A hand gesture set of 12 gestures is designed for human-computer interaction. Experimental results show an overall hand gesture recognition accuracy of 91%.
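The GBDT classifier in such a recorder operates on features extracted from the sEMG stream. The abstract does not list the exact feature set, so the following is a hedged sketch using time-domain features that are standard in sEMG gesture work (root-mean-square, mean absolute value, zero crossings) computed over sliding analysis windows:

```python
def semg_features(window):
    """Time-domain features commonly used for sEMG gesture recognition:
    root-mean-square, mean absolute value, zero-crossing count."""
    n = len(window)
    rms = (sum(x * x for x in window) / n) ** 0.5
    mav = sum(abs(x) for x in window) / n
    zc = sum(1 for a, b in zip(window, window[1:]) if a * b < 0)
    return rms, mav, zc

def sliding_windows(signal, size, step):
    """Split a 1-D signal into overlapping analysis windows."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]
```

Each window's feature tuple would be fed to the (here unspecified) GBDT; computing these features incrementally is what makes low-latency FPGA implementation plausible.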
37. Non-Touch Sign Word Recognition Based on Dynamic Hand Gesture Using Hybrid Segmentation and CNN Feature Fusion. Applied Sciences 2019. [DOI: 10.3390/app9183790]
Abstract
Hand gesture-based sign language recognition is a promising application of human-computer interaction (HCI), through which members of the deaf and hard-of-hearing community and their families can communicate with the help of a computer device. To help the deaf community, this paper presents a non-touch sign word recognition system that translates the gesture of a sign word into text. However, an uncontrolled environment, lighting diversity, and partial occlusion can greatly affect the reliability of hand gesture recognition. From this point of view, a hybrid segmentation technique combining YCbCr and SkinMask segmentation is developed to identify the hand, and features are extracted using feature fusion in a convolutional neural network (CNN). The YCbCr stage performs image conversion, binarization, erosion, and finally hole filling to obtain the segmented images. SkinMask images are obtained by matching the color of the hand. Finally, a multiclass SVM classifier is used to classify the hand gestures of a sign word. Signs for twenty common words are evaluated in real time, and the test results confirm that this system not only obtains better-segmented images but also achieves a higher recognition rate than conventional ones.
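The YCbCr stage of such a pipeline can be illustrated per pixel as below. The conversion matrix is the standard BT.601 one; the chrominance thresholds are commonly quoted skin ranges and are assumptions for illustration, not the paper's exact values:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range ITU-R BT.601 RGB -> YCbCr conversion."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def skin_mask(pixels, cb_range=(77, 127), cr_range=(133, 173)):
    """Binarize pixels: 1 where chrominance falls in a typical skin range."""
    mask = []
    for r, g, b in pixels:
        _, cb, cr = rgb_to_ycbcr(r, g, b)
        mask.append(1 if cb_range[0] <= cb <= cb_range[1]
                      and cr_range[0] <= cr <= cr_range[1] else 0)
    return mask
```

The resulting binary mask is what the erosion and hole-filling steps described above would then clean up before segmentation.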
38. Flex Sensor Compensator via Hammerstein-Wiener Modeling Approach for Improved Dynamic Goniometry and Constrained Control of a Bionic Hand. Sensors 2019; 19:3896. [PMID: 31509987] [PMCID: PMC6767013] [DOI: 10.3390/s19183896]
Abstract
In this paper, a new control-centric approach is introduced to model the characteristics of flex sensors on a goniometric glove designed to capture the user's hand gestures, which can be used to wirelessly control a bionic hand. The main technique employs an inverse dynamic model strategy along with black-box identification for the compensator design, which aims to provide an approximate linear mapping between the raw sensor output and the dynamic finger goniometry. To smoothly recover the goniometry on the bionic hand's side during wireless transmission, the compensator is restructured into a Hammerstein-Wiener model, which consists of a linear dynamic system and two static nonlinearities. A series of real-time experiments involving several hand gestures was conducted to analyze the performance of the proposed method. The associated temporal and spatial gesture data from both the glove and the bionic hand were recorded, and the performance was evaluated in terms of the integral of the absolute error between the glove's and the bionic hand's dynamic goniometry. The proposed method was also compared with the raw sensor data, preliminarily calibrated against the finger goniometry, and with a Wiener model based on the initial inverse dynamic design strategy. Experimental results with several trials for each gesture show that a great improvement is obtained via the Hammerstein-Wiener compensator approach, whose resulting average errors are significantly smaller than those of the other two methods. This shows that the proposed strategy can remarkably improve the dynamic goniometry of the glove and thus provides smooth human-robot collaboration with the bionic hand.
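The Hammerstein-Wiener structure, a static input nonlinearity feeding a linear dynamic block feeding a static output nonlinearity, can be sketched as follows. The cubic input map, two-tap FIR filter, and saturation used here are illustrative placeholders, not the identified glove model:

```python
def hammerstein_wiener(u, f_in, fir, f_out):
    """Hammerstein-Wiener response: static input nonlinearity f_in,
    causal linear FIR dynamics, static output nonlinearity f_out."""
    w = [f_in(x) for x in u]                 # input nonlinearity
    y_lin = []
    for t in range(len(w)):                  # FIR convolution
        acc = sum(fir[k] * w[t - k] for k in range(len(fir)) if t - k >= 0)
        y_lin.append(acc)
    return [f_out(x) for x in y_lin]         # output nonlinearity

# Illustrative blocks: cubic input map, 2-tap smoother, angle saturation.
f_in = lambda x: x + 0.1 * x ** 3
fir = [0.6, 0.4]
f_out = lambda x: max(-90.0, min(90.0, x))   # joint-angle clamp (degrees)
```

Splitting the compensator this way lets the linear block carry the sensor dynamics while the two memoryless maps absorb the nonlinear calibration, which is exactly what makes the structure identifiable from input/output data alone.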
39.
Abstract
Sign language recognition (SLR) is a bridge linking the hearing impaired and the general public. Some SLR methods using wearable data gloves are not portable enough to provide a daily sign language translation service, while visual SLR is flexible enough to work in most scenes. This paper introduces a monocular vision-based approach to SLR. Human skeleton action recognition is used to express semantic information, including the representation of signs' gestures, through the regularization of body joint features and a deep-forest-based semantic classifier with a voting strategy. We test our approach on the public American Sign Language Lexicon Video Dataset (ASLLVD) and a private testing set; it achieves promising performance and shows high generalization capability on the testing set.
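The voting strategy mentioned above is not detailed in the abstract; a minimal sketch, assuming simple majority voting over the ensemble's per-forest class predictions, would be:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine class labels from an ensemble by majority vote;
    ties are broken by first occurrence in the prediction list."""
    return Counter(predictions).most_common(1)[0][0]
```

Each forest (or tree) in the cascade would contribute one predicted sign label, and the most frequent label becomes the final output.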
40. Shull PB, Jiang S, Zhu Y, Zhu X. Hand Gesture Recognition and Finger Angle Estimation via Wrist-Worn Modified Barometric Pressure Sensing. IEEE Trans Neural Syst Rehabil Eng 2019; 27:724-732. [PMID: 30892217] [DOI: 10.1109/tnsre.2019.2905658]
Abstract
This paper presents a new approach to wearable hand gesture recognition and finger angle estimation based on modified barometric pressure sensing. Barometric pressure sensors were encased in and injected with VytaFlex rubber such that the rubber directly contacted the sensing element, allowing pressure changes to be detected when the encasing rubber was pressed. A wearable prototype consisting of an array of ten modified barometric pressure sensors around the wrist was developed and validated in experimental testing with three different hand gesture sets and flexion/extension trials for each of the five fingers. The overall hand gesture classification accuracy was 94%. Further analysis revealed that the most important sensor location was the underside of the wrist and that, when reducing the array to only five optimally placed sensors, classification accuracy was still 90%. For continuous finger angle estimation, aggregate R² values between actual and predicted angles were thumb: 0.81 ± 0.10, index finger: 0.85 ± 0.06, middle finger: 0.77 ± 0.08, ring finger: 0.77 ± 0.12, and pinkie finger: 0.75 ± 0.10, with an overall average of 0.79 ± 0.05. These results demonstrate that a modified barometric pressure wristband can be used to classify hand gestures and to estimate individual finger joint angles. This approach could improve the clinical treatment of upper extremity deficiencies, such as in stroke rehabilitation, by providing objective motor control metrics to inform and aid physicians and therapists throughout the rehabilitation process.
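The finger-angle results above are reported as R² values between actual and predicted angles. For reference, the coefficient of determination can be computed as:

```python
def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(actual) / len(actual)
    ss_tot = sum((a - mean) ** 2 for a in actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    return 1.0 - ss_res / ss_tot
```

An R² of 1.0 means the predicted angle trajectory explains all variance in the measured one; the reported 0.75 to 0.85 range indicates most, but not all, of each finger's motion is recovered.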
41. Talal M, Zaidan AA, Zaidan BB, Albahri AS, Alamoodi AH, Albahri OS, Alsalem MA, Lim CK, Tan KL, Shir WL, Mohammed KI. Smart Home-based IoT for Real-time and Secure Remote Health Monitoring of Triage and Priority System using Body Sensors: Multi-driven Systematic Review. J Med Syst 2019; 43:42. [PMID: 30648217] [DOI: 10.1007/s10916-019-1158-z]
Abstract
The Internet of Things (IoT) has been applied across different domains, including the healthcare sector, and has been recognised for reshaping modern healthcare with wide-ranging economic, technological and social prospects. This study aims to establish IoT-based smart home security solutions for real-time health monitoring technologies in telemedicine architecture. A multilayer taxonomy is constructed. In the first layer, a comprehensive analysis of telemedicine, focusing on the client and server sides, shows that existing studies of IoT-based smart home applications have several limitations that remain unaddressed. In particular, remote patient monitoring can benefit from IoT-based smart home technologies only if the security requirements and the potentially large number of risks are not compromised. An extensive search was conducted to identify articles that handle these issues, related applications were comprehensively reviewed, and a coherent taxonomy for these articles was established. A total of 3064 articles published between 2007 and 2017 were gathered from the most reliable databases, such as ScienceDirect, Web of Science and IEEE Xplore, and the IoT studies associated with telemedicine applications were then filtered. Nine articles were selected and classified into two categories. The first category, accounting for 22.22% (n = 2/9), includes surveys on telemedicine and its applications. The second, accounting for 77.78% (n = 7/9), includes articles on the client and server sides of telemedicine architecture. The collected studies reveal the essential requirement to construct another taxonomy layer reviewing IoT-based smart home security studies.
Accordingly, IoT-based smart home security features are introduced and analysed in the second layer. The security of smart home design based on IoT applications is a crucial matter for occupants: studies are required to provide better solutions for patient security, privacy protection and protection of users' entities from being stolen or compromised, and the existing gaps and trends in this area should be investigated to provide valuable visions for technical environments and researchers. Thus, 67 articles were obtained in the second layer of our taxonomy and classified into six categories: architecture design [25.37% (n = 17/67)]; security analysis of the research status of IoT-based smart home applications [17.91% (n = 12/67)]; security schemes [10.44% (n = 7/67)]; security examination [17.91% (n = 12/67)]; security protocols [13.43% (n = 9/67)]; and security frameworks [14.92% (n = 10/67)]. The identified basic characteristics of this emerging field are then presented: open challenges in the development of IoT-based smart home security for full adoption in telemedicine applications, requirements intended to increase researchers' interest in this study area, and recommendations for the different parties on the next steps needed to enhance the security of IoT-based smart homes.
Finally, a map matching of both taxonomies is developed to determine the novel risks and benefits of IoT-based smart home security for real-time remote health monitoring on the client and server sides of telemedicine applications.
Affiliation(s)
- Mohammed Talal: Department of Communication Engineering, Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia (UTHM), Parit Raja, Malaysia
- A A Zaidan, B B Zaidan, A S Albahri, A H Alamoodi, O S Albahri, M A Alsalem, C K Lim, K L Tan, W L Shir, K I Mohammed: Department of Computing, Universiti Pendidikan Sultan Idris, Tanjong Malim, Perak, Malaysia
42. Shuwandy ML, Zaidan BB, Zaidan AA, Albahri AS. Sensor-Based mHealth Authentication for Real-Time Remote Healthcare Monitoring System: A Multilayer Systematic Review. J Med Syst 2019; 43:33. [PMID: 30612191] [DOI: 10.1007/s10916-018-1149-5]
Abstract
Sensor-based mobile health (mHealth) authentication for real-time remote healthcare monitoring in telemedicine involves considerably bounded and dispersed communication components. mHealth, an attractive part of the telemedicine architecture, plays an imperative role in patient security and privacy and adapts different sensing technologies through many built-in sensors. This study aims to improve sensor-based defence and attack mechanisms to ensure patient privacy on the client side when using mHealth. A multilayer taxonomy was therefore constructed. Within the first layer, real-time remote monitoring studies based on sensor technology for telemedicine applications were reviewed and analysed to give researchers a clear vision of security- and privacy-based sensors in the telemedicine area. An extensive search was conducted for articles on security and privacy issues, related applications were comprehensively reviewed, and a coherent taxonomy of these articles was established. ScienceDirect, IEEE Xplore and Web of Science were investigated for articles on sensor-based mHealth in telemedicine. A total of 3064 papers published from 2007 to 2017 were collected and filtered according to the security and privacy of sensor-based telemedicine applications. Nineteen articles were selected and classified into two categories. The first category, 57.89% (n = 11/19), included surveys of telemedicine and its applications. The second category, 42.1% (n = 8/19), included articles contributing to the three-tiered architecture of telemedicine. The collected studies confirmed the essential need to add another taxonomy layer reviewing sensor-based smartphone authentication studies.
A map matching of both taxonomies was developed to investigate the sensor field comprehensively and identify the novel risks and benefits of mHealth security in telemedicine applications. The literature on sensor-based smartphones in the second layer of our taxonomy was then analysed and reviewed: 599 papers from 2007 to 2017 were collected, yielding a final set of 81 articles classified into three categories. In the first category [86.41% (n = 70/81)], sensor-based smartphones were examined by utilising orientation sensors for user authentication. The second category [7.40% (n = 6/81)] included attack articles, which were not covered intensively in our literature analysis. The third category [8.64% (n = 7/81)] included 'other' articles. Several factors were considered to understand fully the contextual aspects of the field in the published studies, including the motivation and challenges of sensor-based smartphone authentication encountered by researchers and recommendations to strengthen this critical area of research. Finally, many studies in the second layer have focused on enhancing authentication accuracy, because sensor-based smartphones require sensors that can authentically secure mHealth.
Affiliation(s)
- Moceheb Lazam Shuwandy, B B Zaidan, A A Zaidan, A S Albahri: Department of Computing, Universiti Pendidikan Sultan Idris, Tanjong Malim, Perak, Malaysia