1. Guo K, Lu J, Wu Y, Hu X, Yang H. The Latest Research Progress on Bionic Artificial Hands: A Systematic Review. Micromachines 2024; 15:891. PMID: 39064402; PMCID: PMC11278702; DOI: 10.3390/mi15070891.
Abstract
Bionic prosthetic hands hold the potential to replicate the functionality of human hands. The use of bionic limbs can assist amputees in performing everyday activities. This article systematically reviews the research progress on bionic prostheses, with a focus on control mechanisms, sensory feedback integration, and mechanical design innovations. It emphasizes the use of bioelectrical signals, such as electromyography (EMG), for prosthetic control and discusses the application of machine learning algorithms to enhance the accuracy of gesture recognition. Additionally, the paper explores advancements in sensory feedback technologies, including tactile, visual, and auditory modalities, which enhance user interaction by providing essential environmental feedback. The mechanical design of prosthetic hands is also examined, with particular attention to achieving a balance between dexterity, weight, and durability. Our contribution consists of compiling current research trends and identifying key areas for future development, including the enhancement of control system integration and improving the aesthetic and functional resemblance of prostheses to natural limbs. This work aims to inform and inspire ongoing research that seeks to refine the utility and accessibility of prosthetic hands for amputees, emphasizing user-centric innovations.
Affiliation(s)
- Kai Guo: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Jingxin Lu: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; College of Mechanical and Electrical Engineering, Changchun University of Science and Technology, Changchun 130022, China
- Yuwen Wu: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Xuhui Hu: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Hongbo Yang: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; College of Mechanical and Electrical Engineering, Changchun University of Science and Technology, Changchun 130022, China
2. Huang HH, Hargrove LJ, Ortiz-Catalan M, Sensinger JW. Integrating Upper-Limb Prostheses with the Human Body: Technology Advances, Readiness, and Roles in Human-Prosthesis Interaction. Annu Rev Biomed Eng 2024; 26:503-528. PMID: 38594922; DOI: 10.1146/annurev-bioeng-110222-095816.
Abstract
Significant advances in bionic prosthetics have occurred in the past two decades. The field's rapid expansion has yielded many exciting technologies that can enhance the physical, functional, and cognitive integration of a prosthetic limb with a human. We review advances in the engineering of prosthetic devices and their interfaces with the human nervous system, as well as various surgical techniques for altering human neuromusculoskeletal systems for seamless human-prosthesis integration. We discuss significant advancements in research and clinical translation, focusing on upper limb prosthetics since they heavily rely on user intent for daily operation, although many discussed technologies have been extended to lower limb prostheses as well. In addition, our review emphasizes the roles of advanced prosthetics technologies in complex interactions with humans and the technology readiness levels (TRLs) of individual research advances. Finally, we discuss current gaps and controversies in the field and point out future research directions, guided by TRLs.
Affiliation(s)
- He Helen Huang: Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA; Joint Department of Biomedical Engineering, North Carolina State University, Raleigh, North Carolina, USA
- Levi J Hargrove: Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, Illinois, USA; Center for Bionic Medicine, Shirley Ryan AbilityLab, Chicago, Illinois, USA
- Max Ortiz-Catalan: Medical Bionics Department, University of Melbourne, Melbourne, Australia; Bionics Institute, Melbourne, Australia
- Jonathon W Sensinger: Institute of Biomedical Engineering, University of New Brunswick, Fredericton, New Brunswick, Canada
3. Rostamzadeh S, Abouhossein A, Alam K, Vosoughi S, Sattari SS. Exploratory analysis using machine learning algorithms to predict pinch strength by anthropometric and socio-demographic features. International Journal of Occupational Safety and Ergonomics 2024; 30:518-531. PMID: 38553890; DOI: 10.1080/10803548.2024.2322888.
Abstract
Objectives. This study examines the role of different machine learning (ML) algorithms to determine which socio-demographic factors and hand-forearm anthropometric dimensions can be used to accurately predict hand function. Methods. The cross-sectional study was conducted with 7119 healthy Iranian participants (3525 males and 3594 females) aged 10-89 years. Seventeen hand-forearm anthropometric dimensions were measured by JEGS digital caliper and a measuring tape. Tip-to-tip, key and three-jaw chuck pinches were measured using a calibrated pinch gauge. Subsequently, 21 features pertinent to socio-demographic factors and hand-forearm anthropometric dimensions were used for classification. Furthermore, 12 well-known classifiers were implemented and evaluated to predict pinches. Results. Among the 21 features considered in this study, hand length, stature, age, thumb length and index finger length were found to be the most relevant and effective components for each of the three pinch predictions. The k-nearest neighbor, adaptive boosting (AdaBoost) and random forest classifiers achieved the highest classification accuracy of 96.75, 86.49 and 84.66% to predict three pinches, respectively. Conclusions. Predicting pinch strength and determining the predictive hand-forearm anthropometric and socio-demographic characteristics using ML may pave the way to designing an enhanced tool handle and reduce common musculoskeletal disorders of the hand.
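For illustration only, the sketch below reproduces the modelling pattern named in this abstract (kNN, AdaBoost and random forest classifiers over anthropometric predictors) on synthetic stand-in data; the feature set and binary target are hypothetical, not the study's dataset.

```python
# Illustrative sketch (not the authors' code): comparing the classifier families named in
# the abstract on synthetic anthropometric features. Features and target are placeholders.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n = 1000
# Hypothetical predictors: hand length, stature, age, thumb length, index finger length.
X = rng.normal(size=(n, 5))
# Hypothetical target: pinch-strength class derived from a noisy linear rule.
y = (X @ np.array([0.9, 0.6, -0.4, 0.5, 0.5]) + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "AdaBoost": AdaBoostClassifier(n_estimators=100, random_state=0),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {accuracy_score(y_te, model.predict(X_te)):.3f}")
```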
Affiliation(s)
- Sajjad Rostamzadeh: Department of Ergonomics, School of Public Health and Safety, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Alireza Abouhossein: Department of Ergonomics, School of Public Health and Safety, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Khurshid Alam: Department of Mechanical and Industrial Engineering, College of Engineering, Sultan Qaboos University, Muscat, Oman
- Shahram Vosoughi: Department of Occupational Health Engineering, School of Public Health, Iran University of Medical Sciences, Tehran, Iran
4. Tully TN, Thomson CJ, Clark GA, George JA. Validity and Impact of Methods for Collecting Training Data for Myoelectric Prosthetic Control Algorithms. IEEE Trans Neural Syst Rehabil Eng 2024; 32:1974-1983. PMID: 38739519; PMCID: PMC11197051; DOI: 10.1109/tnsre.2024.3400729.
Abstract
Intuitive regression control of prostheses relies on training algorithms to correlate biological recordings to motor intent. The quality of the training dataset is critical to run-time regression performance, but accurately labeling intended hand kinematics after hand amputation is challenging. In this study, we quantified the accuracy and precision of labeling hand kinematics using two common training paradigms: 1) mimic training, where participants mimic predetermined motions of a prosthesis, and 2) mirror training, where participants mirror their contralateral intact hand during synchronized bilateral movements. We first explored this question in healthy non-amputee individuals where the ground-truth kinematics could be readily determined using motion capture. Kinematic data showed that mimic training fails to account for biomechanical coupling and temporal changes in hand posture. Additionally, mirror training exhibited significantly higher accuracy and precision in labeling hand kinematics. These findings suggest that the mirror training approach generates a more faithful, albeit more complex, dataset. Accordingly, mirror training resulted in significantly better offline regression performance when using a large amount of training data and a non-linear neural network. Next, we explored these different training paradigms online, with a cohort of unilateral transradial amputees actively controlling a prosthesis in real-time to complete a functional task. Overall, we found that mirror training resulted in significantly faster task completion speeds and similar subjective workload. These results demonstrate that mirror training can potentially provide more dexterous control through the utilization of task-specific, user-selected training data. Consequently, these findings serve as a valuable guide for the next generation of myoelectric and neuroprostheses leveraging machine learning to provide more dexterous and intuitive control.
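The abstract describes regressing continuous hand kinematics from EMG recordings with a non-linear neural network. A minimal sketch of that kind of pipeline, on synthetic stand-in data rather than the study's recordings, might look as follows.

```python
# Minimal sketch of a non-linear regression from EMG features to continuous hand
# kinematics. The synthetic "EMG features" and "joint angles" below are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_samples, n_emg_features, n_joints = 2000, 16, 6
X = rng.normal(size=(n_samples, n_emg_features))                      # stand-in EMG features
W = rng.normal(size=(n_emg_features, n_joints))
Y = np.tanh(X @ W) + 0.05 * rng.normal(size=(n_samples, n_joints))    # stand-in joint angles

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=1)
reg = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=1)
reg.fit(X_tr, Y_tr)
print("Held-out R^2:", reg.score(X_te, Y_te))
```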
5. Campbell E, Eddy E, Bateman S, Côté-Allard U, Scheme E. Context-informed incremental learning improves both the performance and resilience of myoelectric control. J Neuroeng Rehabil 2024; 21:70. PMID: 38702813; PMCID: PMC11067119; DOI: 10.1186/s12984-024-01355-4.
Abstract
Despite its rich history of success in controlling powered prostheses and emerging commercial interests in ubiquitous computing, myoelectric control continues to suffer from a lack of robustness. In particular, EMG-based systems often degrade over prolonged use resulting in tedious recalibration sessions, user frustration, and device abandonment. Unsupervised adaptation is one proposed solution that updates a model's parameters over time based on its own predictions during real-time use to maintain robustness without requiring additional user input or dedicated recalibration. However, these strategies can actually accelerate performance deterioration when they begin to classify (and thus adapt) incorrectly, defeating their own purpose. To overcome these limitations, we propose a novel adaptive learning strategy, Context-Informed Incremental Learning (CIIL), that leverages in situ context to better inform the prediction of pseudo-labels. In this work, we evaluate these CIIL strategies in an online target acquisition task for two use cases: (1) when there is a lack of training data and (2) when a drastic and enduring alteration in the input space has occurred. A total of 32 participants were evaluated across the two experiments. The results show that the CIIL strategies significantly outperform the current state-of-the-art unsupervised high-confidence adaptation and outperform models trained with the conventional screen-guided training approach, even after a 45-degree electrode shift (p < 0.05). Consequently, CIIL has substantial implications for the future of myoelectric control, potentially reducing the training burden while bolstering model robustness, and leading to improved real-time control.
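A simplified sketch of the pseudo-label adaptation idea described above, assuming a stand-in context signal; it illustrates only the principle of adapting when context corroborates the prediction, not the authors' exact CIIL algorithm.

```python
# Simplified sketch of context-informed incremental adaptation: the model keeps learning
# from its own predictions, but only when an external context signal corroborates them.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
classes = np.array([0, 1, 2])                       # e.g., three gesture classes
clf = SGDClassifier(loss="log_loss", random_state=2)

# Seed the model with a small labelled calibration set (stand-in EMG feature vectors).
X_seed = rng.normal(size=(30, 8))
y_seed = rng.integers(0, 3, size=30)
clf.partial_fit(X_seed, y_seed, classes=classes)

def context_agrees(pred_label, context_label):
    """Stand-in for in situ context, e.g., which on-screen target the user acquired."""
    return pred_label == context_label

for _ in range(200):                                # simulated real-time use
    x = rng.normal(size=(1, 8))
    context_label = rng.integers(0, 3)              # stand-in context information
    pred = clf.predict(x)[0]
    # Adapt only when context corroborates the pseudo-label, avoiding self-reinforced errors.
    if context_agrees(pred, context_label):
        clf.partial_fit(x, [pred])
```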
Affiliation(s)
- Evan Campbell: Institute of Biomedical Engineering, University of New Brunswick, Dineen Dr., Fredericton, NB, E3B 5A3, Canada
- Ethan Eddy: Institute of Biomedical Engineering, University of New Brunswick, Dineen Dr., Fredericton, NB, E3B 5A3, Canada; Spectral Lab, University of New Brunswick, Peter Kelly Dr, Fredericton, NB, E3B 5A1, Canada
- Scott Bateman: Spectral Lab, University of New Brunswick, Peter Kelly Dr, Fredericton, NB, E3B 5A1, Canada
- Ulysse Côté-Allard: Department of Technology Systems, University of Oslo, Gunnar Randers vei, Kjeller, P.O Box 70, Norway
- Erik Scheme: Institute of Biomedical Engineering, University of New Brunswick, Dineen Dr., Fredericton, NB, E3B 5A3, Canada
6. Luo S, Meng Q, Li S, Yu H. Research of intent recognition in rehabilitation robots: a systematic review. Disabil Rehabil Assist Technol 2024; 19:1307-1318. PMID: 36695473; DOI: 10.1080/17483107.2023.2170477.
Abstract
PURPOSE Rehabilitation robots with intent recognition are helping people with dysfunction to enjoy better lives. Many rehabilitation robots with intent recognition have been developed by academic institutions and commercial companies. However, there is no systematic summary about the application of intent recognition in the field of rehabilitation robots. Therefore, the purpose of this paper is to summarize the application of intent recognition in rehabilitation robots, analyze the current status of their research, and provide cutting-edge research directions for colleagues. MATERIALS AND METHODS Literature searches were conducted on Web of Science, IEEE Xplore, ScienceDirect, SpringerLink, and Medline. Search terms included "rehabilitation robot", "intent recognition", "exoskeleton", "prosthesis", "surface electromyography (sEMG)" and "electroencephalogram (EEG)". References listed in relevant literature were further screened according to inclusion and exclusion criteria. RESULTS In this field, most studies have recognized movement intent by kinematic, sEMG, and EEG signals. However, in practical studies, the development of intent recognition in rehabilitation robots is limited by the hysteresis of kinematic signals and the weak anti-interference ability of sEMG and EEG signals. CONCLUSIONS Intent recognition has achieved a lot in the field of rehabilitation robotics but the key factors limiting its development are still timeliness and accuracy. In the future, intent recognition strategy with multi-sensor information fusion may be a good solution.
Affiliation(s)
- Shengli Luo: Institute of Rehabilitation Engineering and Technology, University of Shanghai for Science and Technology, Shanghai, China
- Sujiao Li: Institute of Rehabilitation Engineering and Technology, University of Shanghai for Science and Technology, Shanghai, China
- Hongliu Yu: Institute of Rehabilitation Engineering and Technology, University of Shanghai for Science and Technology, Shanghai, China
7. Segas E, Mick S, Leconte V, Dubois O, Klotz R, Cattaert D, de Rugy A. Intuitive movement-based prosthesis control enables arm amputees to reach naturally in virtual reality. eLife 2023; 12:RP87317. PMID: 37847150; PMCID: PMC10581689; DOI: 10.7554/elife.87317.
Abstract
Impressive progress is being made in bionic limbs design and control. Yet, controlling the numerous joints of a prosthetic arm necessary to place the hand at a correct position and orientation to grasp objects remains challenging. Here, we designed an intuitive, movement-based prosthesis control that leverages natural arm coordination to predict distal joints missing in people with transhumeral limb loss based on proximal residual limb motion and knowledge of the movement goal. This control was validated on 29 participants, including seven with above-elbow limb loss, who picked and placed bottles in a wide range of locations in virtual reality, with median success rates over 99% and movement times identical to those of natural movements. This control also enabled 15 participants, including three with limb differences, to reach and grasp real objects with a robotic arm operated according to the same principle. Remarkably, this was achieved without any prior training, indicating that this control is intuitive and instantaneously usable. It could be used for phantom limb pain management in virtual reality, or to augment the reaching capabilities of invasive neural interfaces usually more focused on hand and grasp control.
Affiliation(s)
- Effie Segas: Univ. Bordeaux, CNRS, INCIA, UMR 5287, Bordeaux, France
- Sébastien Mick: Univ. Bordeaux, CNRS, INCIA, UMR 5287, Bordeaux, France; ISIR UMR 7222, Sorbonne Université, CNRS, Inserm, Paris, France
- Océane Dubois: Univ. Bordeaux, CNRS, INCIA, UMR 5287, Bordeaux, France; ISIR UMR 7222, Sorbonne Université, CNRS, Inserm, Paris, France
8. Chen Z, Min H, Wang D, Xia Z, Sun F, Fang B. A Review of Myoelectric Control for Prosthetic Hand Manipulation. Biomimetics (Basel) 2023; 8:328. PMID: 37504216; PMCID: PMC10807628; DOI: 10.3390/biomimetics8030328.
Abstract
Myoelectric control for prosthetic hands is an important topic in the field of rehabilitation. Intuitive and intelligent myoelectric control can help amputees to regain upper limb function. However, current research efforts are primarily focused on developing rich myoelectric classifiers and biomimetic control methods, limiting prosthetic hand manipulation to simple grasping and releasing tasks, while rarely exploring complex daily tasks. In this article, we conduct a systematic review of recent achievements in two areas, namely, intention recognition research and control strategy research. Specifically, we focus on advanced methods for motion intention types, discrete motion classification, continuous motion estimation, unidirectional control, feedback control, and shared control. In addition, based on the above review, we analyze the challenges and opportunities for research directions of functionality-augmented prosthetic hands and user burden reduction, which can help overcome the limitations of current myoelectric control research and provide development prospects for future research.
Affiliation(s)
- Ziming Chen: Laboratory for Embedded System and Intelligent Robot, Wuhan University of Science and Technology, Wuhan 430081, China
- Huasong Min: Laboratory for Embedded System and Intelligent Robot, Wuhan University of Science and Technology, Wuhan 430081, China
- Dong Wang: Institute for Artificial Intelligence, State Key Lab of Intelligent Technology and Systems, Department of Computer Science and Technology, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
- Ziwei Xia: School of Engineering and Technology, China University of Geosciences, Beijing 100083, China
- Fuchun Sun: Institute for Artificial Intelligence, State Key Lab of Intelligent Technology and Systems, Department of Computer Science and Technology, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
- Bin Fang: Institute for Artificial Intelligence, State Key Lab of Intelligent Technology and Systems, Department of Computer Science and Technology, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
9. Jiang N, Chen C, He J, Meng J, Pan L, Su S, Zhu X. Bio-robotics research for non-invasive myoelectric neural interfaces for upper-limb prosthetic control: a 10-year perspective review. Natl Sci Rev 2023; 10:nwad048. PMID: 37056442; PMCID: PMC10089583; DOI: 10.1093/nsr/nwad048.
Abstract
A decade ago, a group of researchers from academia and industry identified a dichotomy between the industrial and academic state-of-the-art in upper-limb prosthesis control, a widely used bio-robotics application. They proposed that four key technical challenges, if addressed, could bridge this gap and translate academic research into clinically and commercially viable products. These challenges are unintuitive control schemes, lack of sensory feedback, poor robustness and single sensor modality. Here, we provide a perspective review on the research effort that occurred in the last decade, aiming at addressing these challenges. In addition, we discuss three research areas essential to the recent development in upper-limb prosthetic control research but were not envisioned in the review 10 years ago: deep learning methods, surface electromyogram decomposition and open-source databases. To conclude the review, we provide an outlook into the near future of the research and development in upper-limb prosthetic control and beyond.
Affiliation(s)
- Chen Chen: State Key Laboratory of Mechanical System and Vibration, and Institute of Robotics, Shanghai Jiao Tong University, Shanghai 200240, China
- Jiayuan He: National Clinical Research Center for Geriatrics, West China Hospital, and Med-X Center for Manufacturing, Sichuan University, Chengdu 610041, China
- Jianjun Meng: State Key Laboratory of Mechanical System and Vibration, and Institute of Robotics, Shanghai Jiao Tong University, Shanghai 200240, China
- Lizhi Pan: Key Laboratory of Mechanism Theory and Equipment Design of Ministry of Education, School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Shiyong Su: Institute of Neuroscience, Université Catholique Louvain, Brussel B-1348, Belgium
- Xiangyang Zhu: State Key Laboratory of Mechanical System and Vibration, and Institute of Robotics, Shanghai Jiao Tong University, Shanghai 200240, China
10. Keller M, Guebeli A, Thieringer F, Honigmann P. Artificial intelligence in patient-specific hand surgery: a scoping review of literature. Int J Comput Assist Radiol Surg 2023. PMID: 36633789; PMCID: PMC10363089; DOI: 10.1007/s11548-023-02831-3.
Abstract
PURPOSE The implementation of artificial intelligence in hand surgery and rehabilitation is gaining popularity. The purpose of this scoping review was to give an overview of implementations of artificial intelligence in hand surgery and rehabilitation and their current significance in clinical practice. METHODS A systematic literature search of the MEDLINE/PubMed and Cochrane Collaboration libraries was conducted. The review was conducted according to the framework outlined by the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Extension for Scoping Reviews. A narrative summary of the papers is presented to give an orienting overview of this rapidly evolving topic. RESULTS Primary search yielded 435 articles. After application of the inclusion/exclusion criteria and addition of supplementary search, 235 articles were included in the final review. In order to facilitate navigation through this heterogenous field, the articles were clustered into four groups of thematically related publications. The most common applications of artificial intelligence in hand surgery and rehabilitation target automated image analysis of anatomic structures, fracture detection and localization and automated screening for other hand and wrist pathologies such as carpal tunnel syndrome, rheumatoid arthritis or osteoporosis. Compared to other medical subspecialties the number of applications in hand surgery is still small. CONCLUSION Although various promising applications of artificial intelligence in hand surgery and rehabilitation show strong performances, their implementation mostly takes place within the context of experimental studies. Therefore, their use in daily clinical routine is still limited.
Affiliation(s)
- Marco Keller: Hand Surgery, Department of Orthopaedic Surgery and Traumatology, Kantonsspital Baselland, 4410, Liestal, Switzerland; Medical Additive Manufacturing Research Group, Department of Biomedical Engineering, University of Basel, 4123, Allschwil, Switzerland
- Alissa Guebeli: Hand Surgery, Department of Orthopaedic Surgery and Traumatology, Kantonsspital Baselland, 4410, Liestal, Switzerland; Medical Additive Manufacturing Research Group, Department of Biomedical Engineering, University of Basel, 4123, Allschwil, Switzerland; Department of Plastic and Hand Surgery, Kantonsspital Aarau, 5001, Aarau, Switzerland
- Florian Thieringer: Medical Additive Manufacturing Research Group, Department of Biomedical Engineering, University of Basel, 4123, Allschwil, Switzerland; Department of Oral and Cranio-Maxillofacial Surgery, University Hospital Basel, Basel, Switzerland
- Philipp Honigmann: Hand Surgery, Department of Orthopaedic Surgery and Traumatology, Kantonsspital Baselland, 4410, Liestal, Switzerland; Medical Additive Manufacturing Research Group, Department of Biomedical Engineering, University of Basel, 4123, Allschwil, Switzerland; Department of Biomedical Engineering and Physics, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
11. Shi P, Fang K, Yu H. Design and control of intelligent bionic artificial hand based on image recognition. Technol Health Care 2023; 31:21-35. PMID: 35723126; DOI: 10.3233/thc-213320.
Abstract
BACKGROUND At present, the popular control method for intelligent bionic prosthetic hands is EMG control. However, the control accuracy of this method is low, and integrating computer vision into the prosthetic hand is an emerging trend. OBJECTIVE The purpose of this paper is to design an intelligent prosthetic hand based on image recognition, improving control accuracy and the quality of life of people with disabilities. METHODS A convolutional neural network is used to recognize the object to be grasped, and the recognition result serves as a trigger signal to control our intelligent prosthetic hand. We designed a four-bar linkage mechanism and a side-swing mechanism in the structure, which achieve not only flexion and extension of the fingers but also adduction and abduction of the four fingers and lateral swing of the thumb. RESULTS Using image recognition, the new intelligent bionic hand can perform five kinds of human actions: grasp, side pinch, three-finger pinch, two-finger pinch, and pinch between fingers. CONCLUSIONS The experimental results show that the precision of image-recognition-based control is excellent and that the intelligent prosthetic hand can complete the corresponding tasks.
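The control flow described above, with object recognition acting as the trigger that selects one of five grasp patterns, can be summarized by the hypothetical sketch below; the object labels, the mapping, and the helper functions are placeholders, not the authors' implementation.

```python
# Illustrative sketch of the control idea: a vision classifier's output is mapped to one
# of five grasp/gesture patterns and used to drive the hand. All names are hypothetical.
GRASPS = ("grasp", "side pinch", "three-finger pinch", "two-finger pinch", "inter-finger pinch")

OBJECT_TO_GRASP = {          # hypothetical mapping from recognized object to grasp pattern
    "bottle": "grasp",
    "key": "side pinch",
    "pen": "three-finger pinch",
    "coin": "two-finger pinch",
    "card": "inter-finger pinch",
}

def recognize_object(image) -> str:
    """Placeholder for the CNN object classifier (returns an object label)."""
    return "bottle"

def command_hand(grasp: str) -> None:
    """Placeholder for sending the selected grasp pattern to the finger motors."""
    print(f"executing: {grasp}")

def on_emg_trigger(image) -> None:
    """When the user's EMG trigger fires, classify the object and select the grasp."""
    grasp = OBJECT_TO_GRASP.get(recognize_object(image), "grasp")
    assert grasp in GRASPS
    command_hand(grasp)

on_emg_trigger(image=None)
```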
12. Parr JVV, Galpin A, Uiga L, Marshall B, Wright DJ, Franklin ZC, Wood G. A tool for measuring mental workload during prosthesis use: The Prosthesis Task Load Index (PROS-TLX). PLoS One 2023; 18:e0285382. PMID: 37141379; PMCID: PMC10159192; DOI: 10.1371/journal.pone.0285382.
Abstract
When using an upper-limb prosthesis, mental, emotional, and physical effort is often experienced. These demands have been linked to high rates of device dissatisfaction and rejection. Therefore, understanding and quantifying the complex nature of workload experienced when using, or learning to use, an upper-limb prosthesis has practical and clinical importance for researchers and applied professionals. The aim of this paper was to design and validate a self-report measure of mental workload specific to prosthesis use (the Prosthesis Task Load Index; PROS-TLX) that encapsulates the array of mental, physical, and emotional demands often experienced by users of these devices. We first surveyed upper-limb prosthesis users, who confirmed the importance of eight workload constructs taken from published literature and previous workload measures. These constructs were mental demands, physical demands, visual demands, conscious processing, frustration, situational stress, time pressure and device uncertainty. To validate the importance of these constructs during initial prosthesis learning, we then asked able-bodied participants to complete a coin-placement task using their anatomical hand and then using a myoelectric prosthesis simulator under low and high mental workload. As expected, using a prosthetic hand resulted in slower movements, more errors, and a greater tendency to visually fixate the hand (indexed using eye-tracking equipment). These changes in performance were accompanied by significant increases in PROS-TLX workload subscales. The scale was also found to have good convergent and divergent validity. Further work is required to validate whether the PROS-TLX can provide meaningful clinical insights into the workload experienced by clinical users of prosthetic devices.
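The abstract does not state how the eight subscales are combined into an overall score; assuming a simple unweighted mean of 0-100 ratings (analogous to a raw NASA-TLX), a scoring helper could look like the sketch below.

```python
# Hedged sketch of summarizing PROS-TLX-style subscale ratings. The scoring rule (an
# unweighted mean of 0-100 ratings) is an assumption for illustration only.
from statistics import mean

SUBSCALES = (
    "mental demands", "physical demands", "visual demands", "conscious processing",
    "frustration", "situational stress", "time pressure", "device uncertainty",
)

def pros_tlx_score(ratings: dict) -> float:
    """Average the eight 0-100 subscale ratings into one workload score."""
    missing = set(SUBSCALES) - set(ratings)
    if missing:
        raise ValueError(f"missing subscale ratings: {missing}")
    return mean(ratings[s] for s in SUBSCALES)

example = dict(zip(SUBSCALES, (70, 55, 80, 65, 40, 50, 45, 60)))
print(f"overall workload: {pros_tlx_score(example):.1f} / 100")
```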
Affiliation(s)
- Johnny V V Parr: Department of Sport and Exercise Sciences, Manchester Metropolitan University, Manchester, United Kingdom
- Adam Galpin: School of Health and Society, University of Salford, Manchester, United Kingdom
- Liis Uiga: Department of Sport and Exercise Sciences, Manchester Metropolitan University, Manchester, United Kingdom
- Ben Marshall: Department of Sport and Exercise Sciences, Manchester Metropolitan University, Manchester, United Kingdom
- David J Wright: Department of Psychology, Manchester Metropolitan University, Manchester, United Kingdom
- Zoe C Franklin: Department of Sport and Exercise Sciences, Manchester Metropolitan University, Manchester, United Kingdom
- Greg Wood: Department of Sport and Exercise Sciences, Manchester Metropolitan University, Manchester, United Kingdom
13. Kuroda Y, Yamanoi Y, Togo S, Jiang Y, Yokoi H. Coevolution of Myoelectric Hand Control under the Tactile Interaction among Fingers and Objects. Cyborg and Bionic Systems 2022; 2022:9861875. PMID: 36452461; PMCID: PMC9691400; DOI: 10.34133/2022/9861875.
Abstract
The usability of a prosthetic hand differs significantly from that of a real hand. Moreover, the complexity of manipulation increases as the number of degrees of freedom to be controlled increases, making manipulation with biological signals extremely difficult. To overcome this problem, users need to select a grasping posture that is adaptive to the object and a stable grasping method that prevents the object from falling. In previous studies, these have been left to the operating skills of the user, which is extremely difficult to achieve. In this study, we demonstrate how stable and adaptive grasping can be achieved according to the object regardless of the user's operation technique. The required grasping technique is achieved by determining the correlation between the motor output and each sensor through the interaction between the prosthetic hand and the surrounding stimuli, such as myoelectricity, sense of touch, and grasping objects. The agents of the 16-DOF robot hand were trained with the myoelectric signals of six participants, including one child with a congenital forearm deficiency. Consequently, each agent could open and close the hand in response to the myoelectric stimuli and could accomplish the object pickup task. For the tasks, the agents successfully identified grasping patterns suitable for practical and stable positioning of the objects. In addition, the agents were able to pick up the object in a similar posture regardless of the participant, suggesting that the hand was optimized by evolutionary computation to a posture that prevents the object from being dropped.
Affiliation(s)
- Yuki Kuroda: Joint Doctoral Program for Sustainability Research, Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Yusuke Yamanoi: Department of Mechanical and Intelligent System Engineering, Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan; Center for Neuroscience and Biomedical Engineering, The University of Electro-Communications, Tokyo, Japan
- Shunta Togo: Department of Mechanical and Intelligent System Engineering, Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan; Center for Neuroscience and Biomedical Engineering, The University of Electro-Communications, Tokyo, Japan
- Yinlai Jiang: Center for Neuroscience and Biomedical Engineering, The University of Electro-Communications, Tokyo, Japan; Beijing Innovation Center for Intelligent Robots and Systems, Beijing, China
- Hiroshi Yokoi: Joint Doctoral Program for Sustainability Research, Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan; Department of Mechanical and Intelligent System Engineering, Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan; Center for Neuroscience and Biomedical Engineering, The University of Electro-Communications, Tokyo, Japan; Beijing Innovation Center for Intelligent Robots and Systems, Beijing, China
14. Akbulut A, Gungor F, Tarakci E, Aydin MA, Zaim AH, Catal C. Identification of phantom movements with an ensemble learning approach. Comput Biol Med 2022; 150:106132. PMID: 36195047; DOI: 10.1016/j.compbiomed.2022.106132.
Abstract
Phantom limb pain after amputation is a debilitating condition that negatively affects activities of daily life and the quality of life of amputees. Most amputees are able to control the movement of the missing limb, which is called phantom limb movement. Recognition of these movements is crucial for both technology-based amputee rehabilitation and prosthetic control. The aim of the current study is to classify and recognize phantom movements at four different amputation levels of the upper and lower extremities. We utilized ensemble learning algorithms for the recognition and classification of phantom movements at these amputation levels. In this context, sEMG signals obtained from 38 amputees and 25 healthy individuals were collected and the dataset was created. Studies of processing sEMG signals in amputees are rather limited, and existing studies generally address the classification of upper extremity and hand movements. Our study demonstrated that the ensemble learning-based models resulted in higher accuracy in the detection of phantom movements. The ensemble learning-based approaches outperformed the SVM, decision tree, and kNN methods. The accuracy of movement pattern recognition was up to 96.33% in healthy people, whereas it was at most 79.16% in amputees.
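The comparison reported above, an ensemble against SVM, decision tree and kNN baselines, follows the general pattern sketched below with a soft-voting ensemble; the sEMG feature matrix is synthetic and only illustrates the modelling approach, not the study's data or results.

```python
# Sketch of an ensemble-versus-single-classifier comparison on stand-in sEMG features.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(600, 12))                    # stand-in sEMG features
y = rng.integers(0, 4, size=600)                  # stand-in phantom-movement classes

svm = SVC(probability=True, random_state=3)
tree = DecisionTreeClassifier(random_state=3)
knn = KNeighborsClassifier(n_neighbors=7)
ensemble = VotingClassifier(
    estimators=[("svm", svm), ("tree", tree), ("knn", knn)], voting="soft"
)

for name, model in [("SVM", svm), ("Tree", tree), ("kNN", knn), ("Ensemble", ensemble)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```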
Affiliation(s)
- Akhan Akbulut: Department of Computer Engineering, Istanbul Kültür University, 34536 Istanbul, Turkey
- Feray Gungor: Department of Physiotherapy and Rehabilitation, Istanbul University-Cerrahpasa, 34147, Istanbul, Turkey
- Ela Tarakci: Department of Physiotherapy and Rehabilitation, Istanbul University-Cerrahpasa, 34147, Istanbul, Turkey
- Muhammed Ali Aydin: Department of Computer Engineering, Istanbul University-Cerrahpasa, 34520 Istanbul, Turkey
- Abdul Halim Zaim: Department of Computer Engineering, Istanbul Commerce University, 34840 Istanbul, Turkey
- Cagatay Catal: Department of Computer Science and Engineering, Qatar University, Doha 2713, Qatar
15. Wang S, Zheng J, Huang Z, Zhang X, Prado da Fonseca V, Zheng B, Jiang X. Integrating computer vision to prosthetic hand control with sEMG: Preliminary results in grasp classification. Front Robot AI 2022; 9:948238. PMID: 36212614; PMCID: PMC9538562; DOI: 10.3389/frobt.2022.948238.
Abstract
The myoelectric prosthesis is a promising tool to restore the hand abilities of amputees, but the classification accuracy of surface electromyography (sEMG) is not high enough for real-time application. Researchers proposed integrating sEMG signals with another feature that is not affected by amputation. The strong coordination between vision and hand manipulation makes us consider including visual information in prosthetic hand control. In this study, we identified a sweet period during the early reaching phase in which the vision data could yield a higher accuracy in classifying the grasp patterns. Moreover, the visual classification results from the sweet period could be naturally integrated with sEMG data collected during the grasp phase. After the integration, the accuracy of grasp classification increased from 85.5% (only sEMG) to 90.06% (integrated). Knowledge gained from this study encourages us to further explore the methods for incorporating computer vision into myoelectric data to enhance the movement control of prosthetic hands.
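A minimal sketch of the late-fusion idea described in the abstract: vision-derived class probabilities from the early "sweet period" are blended with sEMG probabilities from the grasp phase. The class labels and the weighting are illustrative assumptions, not values from the study.

```python
# Minimal late-fusion sketch: weighted average of vision and sEMG class probabilities.
import numpy as np

GRASP_CLASSES = ("power", "lateral", "tripod", "pinch")   # hypothetical label set

def fuse(p_vision: np.ndarray, p_semg: np.ndarray, w_vision: float = 0.5) -> int:
    """Weighted average of the two probability vectors; returns the fused class index."""
    p = w_vision * p_vision + (1.0 - w_vision) * p_semg
    return int(np.argmax(p))

p_vision = np.array([0.70, 0.10, 0.15, 0.05])   # e.g., vision strongly favours "power"
p_semg = np.array([0.40, 0.35, 0.15, 0.10])     # sEMG alone is more ambiguous
print("fused grasp:", GRASP_CLASSES[fuse(p_vision, p_semg)])
```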
Affiliation(s)
- Shuo Wang: Department of Computer Science, Memorial University of Newfoundland, St. John's, NL, Canada
- Jingjing Zheng: Department of Computer Science, Memorial University of Newfoundland, St. John's, NL, Canada; College of Computer Science and Artificial Intelligence, Wenzhou University, Zhejiang, China
- Ziwei Huang: College of Computer Science and Artificial Intelligence, Wenzhou University, Zhejiang, China
- Xiaoqin Zhang: College of Computer Science and Artificial Intelligence, Wenzhou University, Zhejiang, China
- Bin Zheng: Department of Surgery, University of Alberta, Edmonton, AB, Canada
- Xianta Jiang: Department of Computer Science, Memorial University of Newfoundland, St. John's, NL, Canada
16. Peak counting in surface electromyography signals for quantification of muscle fatigue during dynamic contractions. Med Eng Phys 2022; 107:103844. DOI: 10.1016/j.medengphy.2022.103844.
17. Real-Time Control of Intelligent Prosthetic Hand Based on the Improved TCN. Appl Bionics Biomech 2022; 2022:6488599. PMID: 35607430; PMCID: PMC9124145; DOI: 10.1155/2022/6488599.
Abstract
The intelligent prosthetic hand is an important branch of intelligent robotics. It can remotely replace humans in completing various complex tasks and can also help humans complete rehabilitation training. In human-computer interaction technology, the prosthetic hand can be accurately controlled by surface electromyography (sEMG). This paper proposes a new multichannel fusion scheme (MSFS) to extend the virtual channels of sEMG and improve the accuracy of gesture recognition. In addition, the Temporal Convolutional Network (TCN) used in deep learning has been improved to enhance the performance of the network. Finally, the sEMG is collected by the Myo armband and the prosthetic hand is controlled in real time to validate the new method. The experimental results show that the method proposed in this paper can improve the control accuracy of the intelligent prosthetic hand, reaching an accuracy rate of 93.69%.
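For orientation, the sketch below shows a textbook temporal convolutional network block (causal, dilated 1-D convolutions with a residual connection) over sEMG windows; it is a generic illustration, not the paper's improved TCN or its multichannel fusion scheme.

```python
# Generic TCN sketch for sEMG sequence classification (illustrative layer sizes).
import torch
import torch.nn as nn

class CausalBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation          # left-pad so the output stays causal
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()
        self.down = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):                                 # x: (batch, channels, time)
        out = self.conv(nn.functional.pad(x, (self.pad, 0)))
        return self.relu(out + self.down(x))              # residual connection

class TinyTCN(nn.Module):
    def __init__(self, n_channels=8, n_classes=6):
        super().__init__()
        self.blocks = nn.Sequential(
            CausalBlock(n_channels, 32, dilation=1),
            CausalBlock(32, 32, dilation=2),
            CausalBlock(32, 32, dilation=4),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        h = self.blocks(x)                                # (batch, 32, time)
        return self.head(h[:, :, -1])                     # classify from the last time step

logits = TinyTCN()(torch.randn(4, 8, 200))                # 4 windows, 8 sEMG channels, 200 samples
print(logits.shape)                                       # torch.Size([4, 6])
```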
18. Saggi MK, Jain S. A Survey Towards Decision Support System on Smart Irrigation Scheduling Using Machine Learning approaches. Archives of Computational Methods in Engineering 2022; 29:4455-4478. PMID: 35573028; PMCID: PMC9083007; DOI: 10.1007/s11831-022-09746-3.
Abstract
Over the last decade, big data analytics and machine learning have become hotspot research areas in the domain of agriculture. Agriculture analytics is a data-intensive, multidisciplinary problem, and big data analytics has become a key technology for analyzing voluminous data. Irrigation water management is a challenging task for sustainable agriculture; it depends on various parameters related to climate, soil and weather conditions, and accurate estimation of a crop's water requirement demands strong modeling. This paper reviews the application of big data based decision support system frameworks for sustainable irrigation water management using intelligent learning approaches. We examined how such developments can be leveraged to design and implement the next generation of data, models, analytics and decision support tools for agricultural irrigation water systems. Moreover, irrigation water management needs to rapidly adopt state-of-the-art big data and ICT information technologies, with a focus on developing applications based on an analytical modeling approach. This study introduces the research area, including irrigation water management in smart agriculture, crop water model requirements, irrigation scheduling methods, decision support systems, and the research motivation.
Affiliation(s)
- Mandeep Kaur Saggi: Department of Computer Science, Thapar Institute of Engineering & Technology, Patiala, India
- Sushma Jain: Department of Computer Science, Thapar Institute of Engineering & Technology, Patiala, India
19. Castro MN, Dosen S. Continuous Semi-autonomous Prosthesis Control Using a Depth Sensor on the Hand. Front Neurorobot 2022; 16:814973. PMID: 35401136; PMCID: PMC8989737; DOI: 10.3389/fnbot.2022.814973.
Abstract
Modern myoelectric prostheses can perform multiple functions (e.g., several grasp types and wrist rotation) but their intuitive control by the user is still an open challenge. It has been recently demonstrated that semi-autonomous control can allow the subjects to operate complex prostheses effectively; however, this approach often requires placing sensors on the user. The present study proposes a system for semi-autonomous control of a myoelectric prosthesis that requires a single depth sensor placed on the dorsal side of the hand. The system automatically pre-shapes the hand (grasp type, size, and wrist rotation) and allows the user to grasp objects of different shapes, sizes and orientations, placed individually or within cluttered scenes. The system “reacts” to the side from which the object is approached, and enables the user to target not only the whole object but also an object part. Another unique aspect of the system is that it relies on online interaction between the user and the prosthesis; the system reacts continuously on the targets that are in its focus, while the user interprets the movement of the prosthesis to adjust aiming. Experimental assessment was conducted in ten able-bodied participants to evaluate the feasibility and the impact of training on prosthesis-user interaction. The subjects used the system to grasp a set of objects individually (Phase I) and in cluttered scenarios (Phase II), while the time to accomplish the task (TAT) was used as the performance metric. In both phases, the TAT improved significantly across blocks. Some targets (objects and/or their parts) were more challenging, requiring thus significantly more time to handle, but all objects and scenes were successfully accomplished by all subjects. The assessment therefore demonstrated that the system is indeed robust and effective, and that the subjects could successfully learn how to aim with the system after a brief training. This is an important step toward the development of a self-contained semi-autonomous system convenient for clinical applications.
20. Bao T, Xie SQ, Yang P, Zhou P, Zhang ZQ. Towards Robust, Adaptive and Reliable Upper-limb Motion Estimation Using Machine Learning and Deep Learning - A Survey in Myoelectric Control. IEEE J Biomed Health Inform 2022; 26:3822-3835. PMID: 35294368; DOI: 10.1109/jbhi.2022.3159792.
Abstract
To develop multi-functional human-machine interfaces that can help disabled people reconstruct lost functions of upper-limbs, machine learning (ML) and deep learning (DL) techniques have been widely implemented to decode human movement intentions from surface electromyography (sEMG) signals. However, due to the high complexity of upper-limb movements and the inherent non-stable characteristics of sEMG, the usability of ML/DL based control schemes is still greatly limited in practical scenarios. To this end, tremendous efforts have been made to improve model robustness, adaptation, and reliability. In this article, we provide a systematic review on recent achievements, mainly from three categories: multi-modal sensing fusion to gain additional information of the user, transfer learning (TL) methods to eliminate domain shift impacts on estimation models, and post-processing approaches to obtain more reliable outcomes. Special attention is given to fusion strategies, deep TL frameworks, and confidence estimation. Research challenges and emerging opportunities, with respect to hardware development, public resources, and decoding strategies, are also analysed to provide perspectives for future developments.
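One of the post-processing ideas this survey highlights, confidence-based rejection, can be illustrated with the small sketch below; the threshold and class set are arbitrary choices for illustration, not values from the article.

```python
# Confidence-based rejection: forward a decision to the prosthesis only when the
# classifier's confidence exceeds a threshold; otherwise hold the previous command.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def post_process(logits: np.ndarray, prev_command: int, threshold: float = 0.7) -> int:
    """Return the new command, or hold the previous one if confidence is too low."""
    p = softmax(logits)
    return int(np.argmax(p)) if p.max() >= threshold else prev_command

prev = 0                                   # e.g., "rest"
for logits in [np.array([0.2, 2.5, 0.1]), np.array([0.9, 1.0, 1.1])]:
    prev = post_process(logits, prev)
    print("command:", prev)
```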
21. Weiner P, Starke J, Rader S, Hundhausen F, Asfour T. Designing Prosthetic Hands With Embodied Intelligence: The KIT Prosthetic Hands. Front Neurorobot 2022; 16:815716. PMID: 35355833; PMCID: PMC8960052; DOI: 10.3389/fnbot.2022.815716.
Abstract
Hand prostheses should provide functional replacements of lost hands. Yet current prosthetic hands often are neither intuitive to control nor easy for amputees to use. Commercially available prostheses are usually controlled based on EMG signals triggered by the user to perform grasping tasks. Such EMG-based control requires long training and depends heavily on the robustness of the EMG signals. Our goal is to develop prosthetic hands with semi-autonomous grasping abilities that lead to more intuitive control by the user. In this paper, we present the development of prosthetic hands that enable such abilities as first results toward this goal. The developed prostheses provide intelligent mechatronics including adaptive actuation, multi-modal sensing and on-board computing resources to enable autonomous and intuitive control. The hands are scalable in size and based on an underactuated mechanism which allows the adaptation of grasps to the shape of arbitrary objects. They integrate a multi-modal sensor system including a camera and, in the newest version, a distance sensor and IMU. A resource-aware embedded system for in-hand processing of sensory data and control is included in the palm of each hand. We describe the design of the new version of the hands, the female hand prosthesis, with a weight of 377 g, a grasping force of 40.5 N and a closing time of 0.73 s. We evaluate the mechatronics of the hand and its grasping abilities based on the YCB Gripper Assessment Protocol, as well as a task-oriented protocol for assessing the hand performance in activities of daily living. Further, we show the suitability of the multi-modal sensor system for sensory-based, semi-autonomous grasping in daily life activities. The evaluation demonstrates the merit of the hand concept and of its sensor and in-hand computing systems.
22. Zhong B, Huang H, Lobaton E. Reliable Vision-Based Grasping Target Recognition for Upper Limb Prostheses. IEEE Transactions on Cybernetics 2022; 52:1750-1762. PMID: 32520717; DOI: 10.1109/tcyb.2020.2996960.
Abstract
Computer vision has shown promising potential in wearable robotics applications (e.g., human grasping target prediction and context understanding). However, in practice, the performance of computer vision algorithms is challenged by insufficient or biased training, observation noise, cluttered background, etc. By leveraging Bayesian deep learning (BDL), we have developed a novel, reliable vision-based framework to assist upper limb prosthesis grasping during arm reaching. This framework can measure different types of uncertainties from the model and data for grasping target recognition in realistic and challenging scenarios. A probability calibration network was developed to fuse the uncertainty measures into one calibrated probability for online decision making. We formulated the problem as the prediction of grasping target while arm reaching. Specifically, we developed a 3-D simulation platform to simulate and analyze the performance of vision algorithms under several common challenging scenarios in practice. In addition, we integrated our approach into a shared control framework of a prosthetic arm and demonstrated its potential at assisting human participants with fluent target reaching and grasping tasks.
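Monte Carlo dropout is one common way to obtain the kind of predictive uncertainty this abstract refers to; the generic sketch below is for illustration and is not the authors' specific network, uncertainty measures, or calibration step.

```python
# Monte Carlo dropout: keep dropout active at inference and run several stochastic
# forward passes to estimate a predictive distribution and its entropy (uncertainty).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Dropout(0.5), nn.Linear(128, 5))

def mc_dropout_predict(model, x, n_samples=30):
    model.train()                                   # keep dropout active at inference
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )                                           # (n_samples, batch, classes)
    mean = probs.mean(dim=0)
    entropy = -(mean * mean.clamp_min(1e-9).log()).sum(dim=-1)  # predictive uncertainty
    return mean, entropy

x = torch.randn(1, 64)                              # stand-in feature vector
mean, entropy = mc_dropout_predict(net, x)
print("predicted class:", mean.argmax(dim=-1).item(), "uncertainty:", entropy.item())
```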
23. Karrenbach M, Boe D, Sie A, Bennett R, Rombokas E. Improving automatic control of upper-limb prosthesis wrists using gaze-centered eye tracking and deep learning. IEEE Trans Neural Syst Rehabil Eng 2022; 30:340-349. PMID: 35100118; DOI: 10.1109/tnsre.2022.3147772.
Abstract
Many upper-limb prostheses lack proper wrist rotation functionality, causing users to adopt poor compensatory strategies that lead to overuse or abandonment. In this study, we investigate the validity of creating and implementing a data-driven predictive control strategy in object grasping tasks performed in virtual reality. We propose the idea of using gaze-centered vision to predict the wrist rotations of a user and implement a user study to investigate the impact of using this predictive control. We demonstrate that using this vision-based predictive system leads to a decrease in compensatory movement in the shoulder, as well as in task completion time. We discuss the cases in which the virtual prosthesis with the predictive model implemented did and did not make a physical improvement in various arm movements. We also discuss the cognitive value of implementing such predictive control strategies in prosthetic controllers. We find that gaze-centered vision provides information about the intent of the user when performing object reaching and that the performance of prosthetic hands improves greatly when wrist prediction is implemented. Lastly, we address the limitations of this study in the context of both the study itself and any future physical implementations.
24. Cognolato M, Atzori M, Gassert R, Müller H. Improving Robotic Hand Prosthesis Control With Eye Tracking and Computer Vision: A Multimodal Approach Based on the Visuomotor Behavior of Grasping. Front Artif Intell 2022; 4:744476. PMID: 35146422; PMCID: PMC8822121; DOI: 10.3389/frai.2021.744476.
Abstract
The complexity and dexterity of the human hand make the development of natural and robust control of hand prostheses challenging. Although a large number of control approaches were developed and investigated in the last decades, limited robustness in real-life conditions often prevented their application in clinical settings and in commercial products. In this paper, we investigate a multimodal approach that exploits the use of eye-hand coordination to improve the control of myoelectric hand prostheses. The analyzed data are from the publicly available MeganePro Dataset 1, which includes multimodal data from transradial amputees and able-bodied subjects while grasping numerous household objects with ten grasp types. A continuous grasp-type classification based on surface electromyography served as both intent detector and classifier. At the same time, the information provided by eye-hand coordination parameters, gaze data and object recognition in first-person videos made it possible to identify the object a person aims to grasp. The results show that the inclusion of visual information significantly increases the average offline classification accuracy by up to 15.61 ± 4.22% for the transradial amputees and by up to 7.37 ± 3.52% for the able-bodied subjects, allowing transradial amputees to reach an average classification accuracy comparable to that of intact subjects. This suggests that the robustness of hand prosthesis control based on grasp-type recognition can be significantly improved by including visual information extracted by leveraging natural eye-hand coordination behavior, without placing additional cognitive burden on the user.
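In the same spirit as the multimodal approach above, the sketch below shows one simple way visual context could constrain an sEMG grasp decision: the object identified from gaze and first-person video restricts the plausible grasp types before the sEMG decision is taken. The object-to-grasp table and label set are hypothetical, not the authors' method.

```python
# Illustrative fusion of sEMG grasp probabilities with a gaze-derived object prior.
import numpy as np

GRASPS = ("medium wrap", "lateral", "tripod", "precision pinch")
PLAUSIBLE = {                       # hypothetical grasps typically used with each object
    "mug": {"medium wrap", "lateral"},
    "key": {"lateral", "precision pinch"},
    "pen": {"tripod", "precision pinch"},
}

def fuse(p_semg: np.ndarray, fixated_object: str) -> str:
    """Zero out grasps implausible for the fixated object, renormalize, then decide."""
    mask = np.array([g in PLAUSIBLE.get(fixated_object, set(GRASPS)) for g in GRASPS])
    p = p_semg * mask
    p = p / p.sum() if p.sum() > 0 else p_semg       # fall back to sEMG alone
    return GRASPS[int(np.argmax(p))]

p_semg = np.array([0.35, 0.30, 0.20, 0.15])          # ambiguous sEMG-only decision
print(fuse(p_semg, fixated_object="pen"))            # visual context resolves it: tripod
```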
Collapse
Affiliation(s)
- Matteo Cognolato
- Institute of Information Systems, University of Applied Sciences and Arts of Western Switzerland (HES-SO Valais-Wallis), Sierre, Switzerland
- Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
| | - Manfredo Atzori
- Institute of Information Systems, University of Applied Sciences and Arts of Western Switzerland (HES-SO Valais-Wallis), Sierre, Switzerland
- Department of Neuroscience, University of Padua, Padua, Italy
- *Correspondence: Manfredo Atzori
| | - Roger Gassert
- Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
| | - Henning Müller
- Institute of Information Systems, University of Applied Sciences and Arts of Western Switzerland (HES-SO Valais-Wallis), Sierre, Switzerland
- Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Henning Müller
| |
Collapse
|
25
|
Castro MCF, Pinheiro WC, Rigolin G. A Hybrid 3D Printed Hand Prosthesis Prototype Based on sEMG and a Fully Embedded Computer Vision System. Front Neurorobot 2022; 15:751282. [PMID: 35140597 PMCID: PMC8818886 DOI: 10.3389/fnbot.2021.751282] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2021] [Accepted: 12/07/2021] [Indexed: 11/13/2022] Open
Abstract
This study presents a new approach to an sEMG hand prosthesis based on a 3D printed model with a fully embedded computer vision (CV) system in a hybrid configuration. A modified 5-layer Smaller Visual Geometry Group (VGG) convolutional neural network (CNN), running on a Raspberry Pi 3 microcomputer connected to a webcam, recognizes the shape of everyday objects and defines the prosthetic grasp/gesture pattern among five classes: Palmar Neutral, Palmar Pronated, Tripod Pinch, Key Grasp, and Index Finger Extension. Using the Myoware board and a finite state machine, the user's intention, detected from the myoelectric signal, starts the process: the object is photographed, the grasp/gesture is classified, and the prosthetic motors are commanded to execute the movement. Keras was used as the application programming interface and TensorFlow as the numerical computing software. The proposed system achieved 99% accuracy, 97% sensitivity, and 99% specificity, showing that CV is a promising technology for assisting the definition of the grasp pattern in prosthetic devices.
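Since the abstract names Keras and TensorFlow, the pipeline can be sketched as an sEMG threshold trigger followed by a small VGG-style classifier over five grasp/gesture classes. The layer sizes, image resolution, threshold, and class order below are assumptions for illustration, not the paper's values.

```python
# Minimal sketch of the pipeline shape described above: an sEMG envelope acts as a
# trigger, a photo is classified by a small VGG-style CNN into one of five grasp/
# gesture classes, and the label would then be handed to the motor controller.
import numpy as np
import tensorflow as tf

CLASSES = ["palmar_neutral", "palmar_pronated", "tripod_pinch", "key_grasp", "index_extension"]

def build_smaller_vgg(input_shape=(96, 96, 3), n_classes=5):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(128, 3, padding="same", activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

def user_intends_to_grasp(emg_envelope, threshold=0.4):
    """Finite-state trigger: start the vision step when the sEMG envelope crosses a threshold."""
    return emg_envelope > threshold

model = build_smaller_vgg()
frame = np.random.rand(1, 96, 96, 3).astype("float32")   # stand-in for a webcam photo
if user_intends_to_grasp(emg_envelope=0.7):
    print(CLASSES[int(np.argmax(model.predict(frame, verbose=0)))])
```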
Collapse
Affiliation(s)
| | - Wellington C. Pinheiro
- Mechanical Engineering Department, Centro Universitário FEI, São Bernardo do Campo, Brazil
| | - Glauco Rigolin
- Electrical Engineering Department, Centro Universitário FEI, São Bernardo do Campo, Brazil
| |
Collapse
|
26
|
Mouchoux J, Bravo-Cabrera MA, Dosen S, Schilling AF, Markovic M. Impact of Shared Control Modalities on Performance and Usability of Semi-autonomous Prostheses. Front Neurorobot 2021; 15:768619. [PMID: 34975446 PMCID: PMC8718752 DOI: 10.3389/fnbot.2021.768619] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Accepted: 11/22/2021] [Indexed: 11/13/2022] Open
Abstract
Semi-autonomous (SA) control of upper-limb prostheses can improve the performance and decrease the cognitive burden of a user. In this approach, a prosthesis is equipped with additional sensors (e.g., computer vision) that provide contextual information and enable the system to accomplish some tasks automatically. Autonomous control is fused with a volitional input of a user to compute the commands that are sent to the prosthesis. Although several promising prototypes demonstrating the potential of this approach have been presented, methods to integrate the two control streams (i.e., autonomous and volitional) have not been systematically investigated. In the present study, we implemented three shared control modalities (i.e., sequential, simultaneous, and continuous) and compared their performance, as well as the cognitive and physical burdens imposed on the user. In the sequential approach, the volitional input disabled the autonomous control. In the simultaneous approach, the volitional input to a specific degree of freedom (DoF) activated autonomous control of other DoFs, whereas in the continuous approach, autonomous control was always active except for the DoFs controlled by the user. The experiment was conducted in ten able-bodied subjects, and these subjects used an SA prosthesis to perform reach-and-grasp tasks while reacting to audio cues (dual tasking). The results demonstrated that, compared to the manual baseline (volitional control only), all three SA modalities accomplished the task in a shorter time and resulted in less volitional control input. The simultaneous SA modality performed worse than the sequential and continuous SA approaches. When systematic errors were introduced in the autonomous controller to generate a mismatch between the goals of the user and controller, the performance of SA modalities substantially decreased, even below the manual baseline. The sequential SA scheme was the least impacted one in terms of errors. The present study demonstrates that a specific approach for integrating volitional and autonomous control is indeed an important factor that significantly affects the performance and physical and cognitive load, and therefore these should be considered when designing SA prostheses.
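The three arbitration rules compared in this study can be summarized as simple per-DoF switching logic. The sketch below is schematic and assumes velocity-style commands and a small deadband; the study's actual controller details may differ.

```python
# Schematic sketch of the sequential / simultaneous / continuous shared-control
# arbitration rules described above, applied to per-DoF command arrays.
import numpy as np

def arbitrate(volitional, autonomous, modality, deadband=0.05):
    """volitional, autonomous: per-DoF commands; a DoF counts as user-driven when
    the volitional input exceeds the deadband."""
    active = np.abs(volitional) > deadband
    if modality == "sequential":
        # any volitional input disables the autonomous controller entirely
        return volitional if active.any() else autonomous
    if modality == "simultaneous":
        # autonomous control of the other DoFs only while the user drives one
        return np.where(active, volitional, autonomous if active.any() else 0.0)
    if modality == "continuous":
        # autonomous control always on, except for DoFs the user takes over
        return np.where(active, volitional, autonomous)
    raise ValueError(modality)

vol = np.array([0.0, 0.8, 0.0])    # user drives DoF 1 only
auto = np.array([0.3, -0.2, 0.5])  # autonomous controller's suggestion
print(arbitrate(vol, auto, "continuous"))  # -> [ 0.3  0.8  0.5]
```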
Collapse
Affiliation(s)
- Jérémy Mouchoux
- Applied Rehabilitation Technology Lab, Department of Trauma Surgery, Orthopedics and Plastic Surgery, University Medical Center Göttingen, Georg-August University, Göttingen, Germany
| | - Miguel A. Bravo-Cabrera
- Applied Rehabilitation Technology Lab, Department of Trauma Surgery, Orthopedics and Plastic Surgery, University Medical Center Göttingen, Georg-August University, Göttingen, Germany
| | - Strahinja Dosen
- Faculty of Medicine, Department of Health Science and Technology Center for Sensory-Motor Interaction, Aalborg University, Aalborg, Denmark
| | - Arndt F. Schilling
- Applied Rehabilitation Technology Lab, Department of Trauma Surgery, Orthopedics and Plastic Surgery, University Medical Center Göttingen, Georg-August University, Göttingen, Germany
| | - Marko Markovic
- Applied Rehabilitation Technology Lab, Department of Trauma Surgery, Orthopedics and Plastic Surgery, University Medical Center Göttingen, Georg-August University, Göttingen, Germany
| |
Collapse
|
27
|
Mouchoux J, Carisi S, Dosen S, Farina D, Schilling AF, Markovic M. Artificial Perception and Semiautonomous Control in Myoelectric Hand Prostheses Increases Performance and Decreases Effort. IEEE T ROBOT 2021. [DOI: 10.1109/tro.2020.3047013] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
28
|
Roy R, Mahadevappa M, Nazarpour K. An Electro-Oculogram Based Vision System for Grasp Assistive Devices-A Proof of Concept Study. SENSORS 2021; 21:s21134515. [PMID: 34282770 PMCID: PMC8271916 DOI: 10.3390/s21134515] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/16/2021] [Revised: 05/06/2021] [Accepted: 05/12/2021] [Indexed: 11/17/2022]
Abstract
Humans typically fixate on objects before moving their arm to grasp them. Patients with ALS can also select an object with their intact eye movement but are unable to move their limb due to the loss of voluntary muscle control. Though several research works have already achieved success in generating the correct grasp type from brain measurements, fine control over an object with a grasp assistive device (orthosis/exoskeleton/robotic arm) remains an open problem. Object orientation and object width are two important parameters for controlling the wrist angle and the grasp aperture of the assistive device to replicate a human-like stable grasp. Vision systems have already been developed to measure the geometrical attributes of an object to control grasping with a prosthetic hand. However, most existing vision systems are integrated with electromyography and require some amount of voluntary muscle movement to control the vision system. For that reason, those systems are not beneficial for users with brain-controlled assistive devices. Here, we implemented a vision system that can be controlled through human gaze. We measured the vertical and horizontal electrooculogram signals and controlled the pan and tilt of a cap-mounted webcam to keep the object of interest in focus and at the centre of the picture. A simple 'signature' extraction procedure was also utilized to reduce the algorithmic complexity and storage requirements. The developed device was tested with ten healthy participants. We approximated the object orientation and size and determined an appropriate wrist orientation angle and grasp aperture within 22 ms. The combined accuracy exceeded 75%. Integrating the proposed system with a brain-controlled grasp assistive device and increasing the number of grasps can offer more natural grasp manoeuvring for ALS patients.
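The geometric step described here, estimating object orientation and width and mapping them to a wrist angle and grasp aperture, can be sketched with a principal-axis analysis of a segmented object mask. The segmentation and the EOG-driven pan/tilt loop are omitted, and the pixel scale and aperture margin are assumed values.

```python
# Rough sketch: orientation and width from a binary object mask, mapped to a
# wrist angle and a grasp aperture. Not the authors' 'signature' procedure.
import numpy as np

def orientation_and_width(mask, mm_per_pixel=0.5):
    """mask: 2-D boolean array of object pixels. Returns (angle_deg, width_mm)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts, full_matrices=False)   # principal axes of the pixel cloud
    angle = np.degrees(np.arctan2(vt[0, 1], vt[0, 0]))   # long-axis orientation
    width = np.ptp(pts @ vt[1]) * mm_per_pixel           # extent along the short axis
    return angle, width

def grasp_parameters(angle_deg, width_mm, margin_mm=15.0):
    wrist_angle = angle_deg            # align the hand opening with the object's long axis
    aperture = width_mm + margin_mm    # open slightly wider than the object
    return wrist_angle, aperture

mask = np.zeros((120, 160), dtype=bool)
mask[40:80, 30:130] = True             # synthetic elongated object
print(grasp_parameters(*orientation_and_width(mask)))
```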
Collapse
Affiliation(s)
- Rinku Roy
- Advanced Technology and Development Centre, Indian Institute of Technology, Kharagpur 721302, India
- Correspondence:
| | - Manjunatha Mahadevappa
- Indian Institute of Technology, School of Medical Science and Technology, Kharagpur 721302, India;
| | - Kianoush Nazarpour
- Edinburgh Neuroprosthetics Laboratory, The University of Edinburgh, Edinburgh EH8 9AB, UK;
| |
Collapse
|
29
|
Ovur SE, Zhou X, Qi W, Zhang L, Hu Y, Su H, Ferrigno G, De Momi E. A novel autonomous learning framework to enhance sEMG-based hand gesture recognition using depth information. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102444] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
30
|
Liu M, Wilder S, Sanford S, Saleh S, Harel NY, Nataraj R. Training with Agency-Inspired Feedback from an Instrumented Glove to Improve Functional Grasp Performance. SENSORS (BASEL, SWITZERLAND) 2021; 21:1173. [PMID: 33562342 PMCID: PMC7915039 DOI: 10.3390/s21041173] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/31/2020] [Revised: 01/27/2021] [Accepted: 02/03/2021] [Indexed: 12/01/2022]
Abstract
Sensory feedback from wearables can be effective for learning better movement through enhanced information and engagement. Facilitating greater user cognition during movement practice is critical to accelerate gains in motor function during rehabilitation following brain or spinal cord trauma. This preliminary study presents an approach using an instrumented glove to leverage sense of agency, or perception of control, to provide training feedback for functional grasp. Seventeen able-bodied subjects underwent training and testing with a custom-built sensor glove prototype from our laboratory. The glove utilizes onboard force and flex sensors to provide inputs to an artificial neural network that predicts achievement of a "secure" grasp. Onboard visual and audio feedback was provided during training with a progressively shorter time delay to induce greater agency by intentional binding, the perceived compression in time between an action (grasp) and its sensory consequence (feedback). After training, subjects demonstrated a significant reduction (p < 0.05) in movement path length and completion time for a functional task involving grasping, moving, and placing a small object. Future work will include a model-based algorithm to compute secure grasp, virtual reality immersion, and testing with clinical populations.
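The two ingredients of this approach, a small network that labels a grasp as "secure" from glove readings and a feedback delay that shortens across training blocks, can be sketched as below. The feature layout, network size, delay schedule, and synthetic labels are assumptions for illustration only.

```python
# Conceptual sketch of (1) a neural network classifying "secure" grasp from glove
# force/flex features and (2) a progressively shorter grasp-to-feedback delay.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# 200 synthetic samples: 4 fingertip force sensors + 5 flex sensors per sample
X = rng.random((200, 9))
y = (X[:, :4].mean(axis=1) > 0.5).astype(int)   # toy stand-in for "secure grasp" labels

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

def feedback_delay(block, n_blocks=5, start_s=1.0, end_s=0.1):
    """Linearly shorten the grasp-to-feedback delay over training blocks."""
    return start_s + (end_s - start_s) * block / (n_blocks - 1)

sample = rng.random((1, 9))
if net.predict(sample)[0] == 1:
    print(f"secure grasp -> cue feedback after {feedback_delay(block=3):.2f} s")
```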
Collapse
Affiliation(s)
- Mingxiao Liu
- Department of Biomedical Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, USA; (M.L.); (S.W.); (S.S.)
- Movement Control Rehabilitation (MOCORE) Laboratory, Altorfer Complex, Stevens Institute of Technology, Hoboken, NJ 07030, USA
| | - Samuel Wilder
- Department of Biomedical Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, USA; (M.L.); (S.W.); (S.S.)
- Movement Control Rehabilitation (MOCORE) Laboratory, Altorfer Complex, Stevens Institute of Technology, Hoboken, NJ 07030, USA
| | - Sean Sanford
- Department of Biomedical Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, USA; (M.L.); (S.W.); (S.S.)
- Movement Control Rehabilitation (MOCORE) Laboratory, Altorfer Complex, Stevens Institute of Technology, Hoboken, NJ 07030, USA
| | - Soha Saleh
- Center for Mobility and Rehabilitation Engineering Research, Advanced Rehabilitation Neuroimaging Laboratory, Kessler Foundation, East Hanover, NJ 07936, USA;
| | - Noam Y. Harel
- Spinal Cord Damage Research Center, James J. Peters VA Medical Center, Bronx, NY 10468, USA;
- Departments of Neurology and Rehabilitation and Human Performance, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
| | - Raviraj Nataraj
- Department of Biomedical Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, USA; (M.L.); (S.W.); (S.S.)
- Movement Control Rehabilitation (MOCORE) Laboratory, Altorfer Complex, Stevens Institute of Technology, Hoboken, NJ 07030, USA
| |
Collapse
|
31
|
Togo S, Matsumoto K, Kimizuka S, Jiang Y, Yokoi H. Semi-Automated Control System for Reaching Movements in EMG Shoulder Disarticulation Prosthesis Based on Mixed Reality Device. IEEE OPEN JOURNAL OF ENGINEERING IN MEDICINE AND BIOLOGY 2021; 2:55-64. [PMID: 35402981 PMCID: PMC8901039 DOI: 10.1109/ojemb.2021.3058036] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2020] [Revised: 01/27/2021] [Accepted: 02/03/2021] [Indexed: 11/08/2022] Open
Abstract
Goal: To develop a control system for an electromyographic shoulder disarticulation (EMG-SD) prosthesis that achieves tasks rapidly while reducing the user's operational failures. Methods: The motion planning of the EMG-SD prosthesis was automated using visual information measured through a mixed reality device. The detection of an object to be grasped and the execution of the motion depended on the user's EMG, which preserves voluntary controllability and makes the system semi-automated. Two evaluation experiments with reaching and reach-to-grasp movements were conducted to compare performance against a conventional system operated using only the user's visual feedback control. Results: The proposed system achieved reaching movements more rapidly and accurately (32% faster) and reach-to-grasp movements more accurately (69%) than the conventional system. Conclusions: The proposed control system achieves high task performance while reducing the operational failures of the EMG-SD prosthesis user.
Collapse
Affiliation(s)
- Shunta Togo
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo 182-8585, Japan
- Center for Neuroscience and Biomedical Engineering, The University of Electro-Communications, Tokyo 182-8585, Japan
| | - Kazuaki Matsumoto
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo 182-8585, Japan
| | - Susumu Kimizuka
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo 182-8585, Japan
| | - Yinlai Jiang
- Center for Neuroscience and Biomedical Engineering, The University of Electro-Communications, Tokyo 182-8585, Japan
- Beijing Advanced Innovation Center for Intelligent Robots and Systems, Beijing 100081, China
| | - Hiroshi Yokoi
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo 182-8585, Japan
- Center for Neuroscience and Biomedical Engineering, The University of Electro-Communications, Tokyo 182-8585, Japan
- Beijing Advanced Innovation Center for Intelligent Robots and Systems, Beijing 100081, China
| |
Collapse
|
32
|
Gardner M, Mancero Castillo CS, Wilson S, Farina D, Burdet E, Khoo BC, Atashzar SF, Vaidyanathan R. A Multimodal Intention Detection Sensor Suite for Shared Autonomy of Upper-Limb Robotic Prostheses. SENSORS 2020; 20:s20216097. [PMID: 33120959 PMCID: PMC7662487 DOI: 10.3390/s20216097] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/08/2020] [Revised: 10/08/2020] [Accepted: 10/23/2020] [Indexed: 11/24/2022]
Abstract
Neurorobotic augmentation (e.g., robotic assist) is now in regular use to support individuals suffering from impaired motor functions. A major unresolved challenge, however, is the excessive cognitive load imposed by the human–machine interface (HMI). Grasp control remains one of the most challenging HMI tasks, demanding simultaneous, agile, and precise control of multiple degrees-of-freedom (DoFs) while following a specific timing pattern in the joint and human–robot task spaces. Most commercially available systems use either an indirect mode-switching configuration or a limited sequential control strategy, limiting activation to one DoF at a time. To address this challenge, we introduce a shared autonomy framework centred around a low-cost multimodal sensor suite fusing: (a) mechanomyography (MMG) to estimate the intended muscle activation, (b) camera-based visual information for integrated autonomous object recognition, and (c) inertial measurement to enhance intention prediction based on the grasping trajectory. The complete system predicts user intent for grasp based on measured dynamical features during natural motions. A total of 84 motion features were extracted from the sensor suite, and tests were conducted on 10 able-bodied participants and 1 amputee participant grasping common household objects with a robotic hand. Real-time grasp classification accuracy using visual and motion features reached 100%, 82.5%, and 88.9% across all participants for detecting and executing grasping actions for a bottle, lid, and box, respectively. The proposed multimodal sensor suite is a novel approach for predicting different grasp strategies and automating task performance with a commercial upper-limb prosthetic device. The system also shows potential to improve the usability of modern neurorobotic systems due to its intuitive control design.
Collapse
Affiliation(s)
- Marcus Gardner
- Moonshine Inc., London W12 0LN, UK;
- Department of Mechanical Engineering, UK Dementia Research Institute Care-Research and Technology Centre (DRI-CRT) Imperial College London, London SW7 2AZ, UK; (C.S.M.C.); (S.W.)
| | - C. Sebastian Mancero Castillo
- Department of Mechanical Engineering, UK Dementia Research Institute Care-Research and Technology Centre (DRI-CRT) Imperial College London, London SW7 2AZ, UK; (C.S.M.C.); (S.W.)
| | - Samuel Wilson
- Department of Mechanical Engineering, UK Dementia Research Institute Care-Research and Technology Centre (DRI-CRT) Imperial College London, London SW7 2AZ, UK; (C.S.M.C.); (S.W.)
| | - Dario Farina
- Department of Bioengineering, Imperial College London, London SW7 2AZ, UK; (D.F.); (E.B.)
| | - Etienne Burdet
- Department of Bioengineering, Imperial College London, London SW7 2AZ, UK; (D.F.); (E.B.)
| | - Boo Cheong Khoo
- Department of Mechanical Engineering, National University of Singapore, Singapore 119077, Singapore;
| | - S. Farokh Atashzar
- Department of Electrical and Computer Engineering, New York University, New York, NY 11201, USA
- Department of Mechanical and Aerospace Engineering, New York University, New York, NY 11201, USA
- NYU WIRELESS, New York University, New York, NY 11201, USA
- Correspondence: (S.F.A.); (R.V.)
| | - Ravi Vaidyanathan
- Department of Mechanical Engineering, UK Dementia Research Institute Care-Research and Technology Centre (DRI-CRT) Imperial College London, London SW7 2AZ, UK; (C.S.M.C.); (S.W.)
- Correspondence: (S.F.A.); (R.V.)
| |
Collapse
|
33
|
Coin A, Dubljević V. The Authenticity of Machine-Augmented Human Intelligence: Therapy, Enhancement, and the Extended Mind. NEUROETHICS-NETH 2020. [DOI: 10.1007/s12152-020-09453-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
|
34
|
Ceolini E, Frenkel C, Shrestha SB, Taverni G, Khacef L, Payvand M, Donati E. Hand-Gesture Recognition Based on EMG and Event-Based Camera Sensor Fusion: A Benchmark in Neuromorphic Computing. Front Neurosci 2020; 14:637. [PMID: 32903824 PMCID: PMC7438887 DOI: 10.3389/fnins.2020.00637] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2019] [Accepted: 05/22/2020] [Indexed: 12/03/2022] Open
Abstract
Hand gestures are a form of non-verbal communication used in conjunction with speech. Nowadays, with the increasing use of technology, hand-gesture recognition is considered an important aspect of Human-Machine Interaction (HMI), allowing the machine to capture and interpret the user's intent and to respond accordingly. The ability to discriminate between human gestures can help in several applications, such as assisted living, healthcare, neuro-rehabilitation, and sports. Recently, multi-sensor data fusion mechanisms have been investigated to improve discrimination accuracy. In this paper, we present a sensor fusion framework that integrates complementary systems: the electromyography (EMG) signal from muscles and visual information. This multi-sensor approach, while improving accuracy and robustness, introduces the disadvantage of high computational cost, which grows exponentially with the number of sensors and the number of measurements. Furthermore, this huge amount of data to process can affect the classification latency, which can be crucial in real-case scenarios such as prosthetic control. Neuromorphic technologies can be deployed to overcome these limitations since they allow real-time parallel processing at low power consumption. In this paper, we present a fully neuromorphic sensor fusion approach for hand-gesture recognition comprising an event-based vision sensor and three different neuromorphic processors. In particular, we used the event-based camera, called DVS, and two neuromorphic platforms, Loihi and ODIN + MorphIC. The EMG signals were recorded using traditional electrodes and then converted into spikes to be fed into the chips. We collected a dataset of five gestures from sign language in which visual and electromyography signals are synchronized. We compared the fully neuromorphic approach to a baseline implemented using traditional machine learning approaches on a portable GPU system. According to the chips' constraints, we designed specific spiking neural networks (SNNs) for sensor fusion that showed classification accuracy comparable to the software baseline. These neuromorphic alternatives have a 20-40% longer inference time than the GPU system but a significantly smaller energy-delay product (EDP), which makes them between 30× and 600× more efficient. The proposed work represents a new benchmark that moves neuromorphic computing toward a real-world scenario.
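One concrete step mentioned here is the conversion of sampled EMG into spike events so it can be fed to spiking hardware. A common way to do this is delta modulation, sketched below with an assumed threshold and synthetic data; the paper's exact encoding parameters may differ.

```python
# Small sketch: convert a sampled EMG channel into ON/OFF spike events by delta
# modulation, i.e., emit an event whenever the signal moves more than a fixed
# threshold away from the value at the last emitted event.
import numpy as np

def delta_encode(signal, threshold=0.05):
    """Return a list of (sample_index, polarity) spike events."""
    events, last = [], signal[0]
    for i, x in enumerate(signal[1:], start=1):
        while x - last >= threshold:
            last += threshold
            events.append((i, +1))   # ON spike
        while last - x >= threshold:
            last -= threshold
            events.append((i, -1))   # OFF spike
    return events

t = np.linspace(0, 1, 1000)
emg = 0.3 * np.sin(2 * np.pi * 5 * t) * np.random.default_rng(1).random(1000)
spikes = delta_encode(emg)
print(f"{len(spikes)} spike events from {len(emg)} samples")
```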
Collapse
Affiliation(s)
- Enea Ceolini
- Institute of Neuroinformatics, University of Zurich, ETH Zurich, Zurich, Switzerland
| | - Charlotte Frenkel
- Institute of Neuroinformatics, University of Zurich, ETH Zurich, Zurich, Switzerland
- ICTEAM Institute, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
| | - Sumit Bam Shrestha
- Temasek Laboratories, National University of Singapore, Singapore, Singapore
| | - Gemma Taverni
- Institute of Neuroinformatics, University of Zurich, ETH Zurich, Zurich, Switzerland
| | - Lyes Khacef
- Université Côte d'Azur, CNRS, LEAT, Nice, France
| | - Melika Payvand
- Institute of Neuroinformatics, University of Zurich, ETH Zurich, Zurich, Switzerland
| | - Elisa Donati
- Institute of Neuroinformatics, University of Zurich, ETH Zurich, Zurich, Switzerland
| |
Collapse
|
35
|
Shi C, Yang D, Zhao J, Liu H. Computer Vision-Based Grasp Pattern Recognition With Application to Myoelectric Control of Dexterous Hand Prosthesis. IEEE Trans Neural Syst Rehabil Eng 2020; 28:2090-2099. [PMID: 32746315 DOI: 10.1109/tnsre.2020.3007625] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Artificial intelligence provides new possibilities for the control of dexterous prostheses. To achieve suitable grasps of various objects, a novel computer vision-based classification method that sorts objects into different grasp patterns is proposed in this paper. This method can be applied in the autonomous control of multi-fingered prosthetic hands, as it can help users rapidly complete "reach-and-pick up" tasks on various daily objects with low demand on myoelectric control. Firstly, an RGB-D image database (121 objects) was established according to four important grasp patterns (cylindrical, spherical, tripod, and lateral). The image samples in the RGB-D dataset were acquired from a large variety of daily objects of different sizes, shapes, and postures (16), as well as different illumination conditions (4) and camera positions (4). Then, different inputs and structures of the discrimination model (multilayer CNN) were tested in terms of classification success rate through cross-validation. Our results showed that depth data play an important role in grasp pattern recognition. Bimodal data (Gray-D) integrating both grayscale and depth information about the objects can effectively improve the classification accuracy obtained from RGB images alone (> 10%). Within the database, the network achieved high classification accuracy (98%); it also has a strong generalization capability on novel samples (93.9 ± 3.0%). We finally applied the method to a dexterous prosthetic hand and tested the whole system on "reach-and-pick up" tasks. The experiments showed that the proposed computer vision-based myoelectric control method (Vision-EMG) could significantly improve control effectiveness (6.4 s), compared with the traditional coding-based myoelectric control method (Coding-EMG, 13 s).
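The "Gray-D" input described here amounts to stacking a grayscale image and a depth map as two channels before classification into the four grasp patterns. The sketch below shows that stacking with an assumed normalization and an illustrative network size; it is not the paper's architecture.

```python
# Sketch of a bimodal Gray-D input: grayscale and depth stacked as two channels,
# fed to a small CNN with four grasp-pattern outputs.
import numpy as np
import tensorflow as tf

GRASPS = ["cylindrical", "spherical", "tripod", "lateral"]

def make_gray_d(gray, depth_mm, max_depth_mm=1500.0):
    """Stack a grayscale image and a depth map into one (H, W, 2) tensor."""
    g = gray.astype("float32") / 255.0
    d = np.clip(depth_mm.astype("float32") / max_depth_mm, 0.0, 1.0)
    return np.stack([g, d], axis=-1)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 2)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(len(GRASPS), activation="softmax"),
])

gray = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
depth = np.random.randint(300, 1500, (128, 128), dtype=np.uint16)
x = make_gray_d(gray, depth)[None]          # add batch dimension
print(GRASPS[int(np.argmax(model.predict(x, verbose=0)))])
```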
Collapse
|
36
|
Sensinger JW, Dosen S. A Review of Sensory Feedback in Upper-Limb Prostheses From the Perspective of Human Motor Control. Front Neurosci 2020; 14:345. [PMID: 32655344 PMCID: PMC7324654 DOI: 10.3389/fnins.2020.00345] [Citation(s) in RCA: 60] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2019] [Accepted: 03/23/2020] [Indexed: 12/22/2022] Open
Abstract
This manuscript reviews historical and recent studies that focus on supplementary sensory feedback for use in upper limb prostheses. It shows that the inability of many studies to speak to the issue of meaningful performance improvements in real-life scenarios is caused by the complexity of the interactions of supplementary sensory feedback with other types of feedback along with other portions of the motor control process. To do this, the present manuscript frames the question of supplementary feedback from the perspective of computational motor control, providing a brief review of the main advances in that field over the last 20 years. It then separates the studies on the closed-loop prosthesis control into distinct categories, which are defined by relating the impact of feedback to the relevant components of the motor control framework, and reviews the work that has been done over the last 50+ years in each of those categories. It ends with a discussion of the studies, along with suggestions for experimental construction and connections with other areas of research, such as machine learning.
Collapse
Affiliation(s)
- Jonathon W. Sensinger
- Institute of Biomedical Engineering, University of New Brunswick, Fredericton, NB, Canada
| | - Strahinja Dosen
- Department of Health Science and Technology, The Faculty of Medicine, Integrative Neuroscience, Aalborg University, Aalborg, Denmark
| |
Collapse
|
37
|
Cognolato M, Gijsberts A, Gregori V, Saetta G, Giacomino K, Hager AGM, Gigli A, Faccio D, Tiengo C, Bassetto F, Caputo B, Brugger P, Atzori M, Müller H. Gaze, visual, myoelectric, and inertial data of grasps for intelligent prosthetics. Sci Data 2020; 7:43. [PMID: 32041965 PMCID: PMC7010656 DOI: 10.1038/s41597-020-0380-3] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2019] [Accepted: 01/16/2020] [Indexed: 11/09/2022] Open
Abstract
A hand amputation is a highly disabling event, having severe physical and psychological repercussions on a person's life. Despite extensive efforts devoted to restoring the missing functionality via dexterous myoelectric hand prostheses, natural and robust control usable in everyday life is still challenging. Novel techniques have been proposed to overcome the current limitations, among them the fusion of surface electromyography with other sources of contextual information. We present a dataset to investigate the inclusion of eye tracking and first person video to provide more stable intent recognition for prosthetic control. This multimodal dataset contains surface electromyography and accelerometry of the forearm, and gaze, first person video, and inertial measurements of the head recorded from 15 transradial amputees and 30 able-bodied subjects performing grasping tasks. Besides the intended application for upper-limb prosthetics, we also foresee uses for this dataset to study eye-hand coordination in the context of psychophysics, neuroscience, and assistive robotics.
Collapse
Affiliation(s)
- Matteo Cognolato
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland.
- Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland.
| | | | - Valentina Gregori
- Istituto Italiano di Tecnologia, Genoa, Italy
- Department of Computer, Control, and Management Engineering, University of Rome La Sapienza, Rome, Italy
| | - Gianluca Saetta
- Department of Neurology, University Hospital of Zurich, Zurich, Switzerland
| | - Katia Giacomino
- Department of Physical Therapy, University of Applied Sciences Western Switzerland (HES-SO Valais), Leukerbad, Switzerland
| | - Anne-Gabrielle Mittaz Hager
- Department of Physical Therapy, University of Applied Sciences Western Switzerland (HES-SO Valais), Leukerbad, Switzerland
| | | | - Diego Faccio
- Clinic of Plastic Surgery, Padova University Hospital, Padova, Italy
| | - Cesare Tiengo
- Clinic of Plastic Surgery, Padova University Hospital, Padova, Italy
| | - Franco Bassetto
- Clinic of Plastic Surgery, Padova University Hospital, Padova, Italy
| | - Barbara Caputo
- Istituto Italiano di Tecnologia, Genoa, Italy
- Politecnico di Torino, Turin, Italy
| | - Peter Brugger
- Department of Neurology, University Hospital of Zurich, Zurich, Switzerland
- Rehabilitation Center Valens, Valens, Switzerland
| | - Manfredo Atzori
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland.
| | - Henning Müller
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland.
- University of Geneva, Geneva, Switzerland.
| |
Collapse
|
38
|
Krausz NE, Hargrove LJ. A Survey of Teleceptive Sensing for Wearable Assistive Robotic Devices. SENSORS (BASEL, SWITZERLAND) 2019; 19:E5238. [PMID: 31795240 PMCID: PMC6928925 DOI: 10.3390/s19235238] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/16/2019] [Revised: 11/04/2019] [Accepted: 11/21/2019] [Indexed: 11/24/2022]
Abstract
Teleception is defined as sensing that occurs remotely, with no physical contact with the object being sensed. To emulate innate control systems of the human body, a control system for a semi- or fully autonomous assistive device not only requires feedforward models of desired movement, but also the environmental or contextual awareness that could be provided by teleception. Several recent publications present teleception modalities integrated into control systems and provide preliminary results, for example, for performing hand grasp prediction or endpoint control of an arm assistive device; and gait segmentation, forward prediction of desired locomotion mode, and activity-specific control of a prosthetic leg or exoskeleton. Collectively, several different approaches to incorporating teleception have been used, including sensor fusion, geometric segmentation, and machine learning. In this paper, we summarize the recent and ongoing published work in this promising new area of research.
Collapse
Affiliation(s)
- Nili E. Krausz
- Neural Engineering for Prosthetics and Orthotics Lab, Center of Bionic Medicine, Shirley Ryan AbilityLab (Formerly Rehabilitation Institute of Chicago), Chicago, IL 60611, USA;
- Biomedical Engineering Department, Northwestern University, Evanston, IL 60208, USA
| | - Levi J. Hargrove
- Neural Engineering for Prosthetics and Orthotics Lab, Center of Bionic Medicine, Shirley Ryan AbilityLab (Formerly Rehabilitation Institute of Chicago), Chicago, IL 60611, USA;
- Biomedical Engineering Department, Northwestern University, Evanston, IL 60208, USA
- Physical Medicine and Rehabilitation Department, Northwestern University, Evanston, IL 60208, USA
| |
Collapse
|
39
|
Volkmar R, Dosen S, Gonzalez-Vargas J, Baum M, Markovic M. Improving bimanual interaction with a prosthesis using semi-autonomous control. J Neuroeng Rehabil 2019; 16:140. [PMID: 31727087 PMCID: PMC6857334 DOI: 10.1186/s12984-019-0617-6] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2019] [Accepted: 10/29/2019] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND The loss of a hand is a traumatic experience that substantially compromises an individual's capability to interact with the environment. Myoelectric prostheses are state-of-the-art (SoA) functional replacements for the lost limbs. Their overall mechanical design and dexterity have improved over the last few decades, but users have not been able to fully exploit these advances because of the lack of effective and intuitive control. Bimanual tasks are particularly challenging for an amputee since prosthesis control needs to be coordinated with the movement of the sound limb. So far, bimanual activities have often been neglected by the prosthetic research community. METHODS We present a novel approach to prosthesis control, which uses a semi-autonomous scheme to simplify bimanual interactions. The approach supplements commercial SoA two-channel myoelectric control with two additional sensors. Two inertial measurement units were attached to the prosthesis and the sound hand to detect the movement of both limbs. Once a bimanual interaction is detected, the system mimics the coordination strategies of able-bodied subjects to automatically adjust the prosthesis wrist rotation (pronation, supination) and grip type (lateral, palmar) to assist the sound hand during a bimanual task. The system was evaluated in eight able-bodied subjects performing functional uni- and bimanual tasks using the novel method and SoA two-channel myocontrol. The outcome measures were time to accomplish the task, semi-autonomous system misclassification rate, subjective rating of intuitiveness, and perceived workload (NASA TLX). RESULTS The results demonstrated that the novel control interface substantially outperformed SoA myoelectric control. While using the semi-autonomous control, the time to accomplish the task and the perceived workload decreased by 25% and 27%, respectively, and the subjects rated the system as more intuitive than SoA myocontrol. CONCLUSIONS The novel system uses minimal additional hardware (two inertial sensors) and simple processing, and it is therefore convenient for practical implementation. By using the proposed control scheme, the prosthesis assists the user's sound hand in performing bimanual interactions while decreasing cognitive burden.
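A toy version of the sensing idea, detecting a bimanual interaction from the two inertial measurement units, is sketched below. The window length, thresholds, correlation test, and grip-selection rule are all assumptions; the study's detector and coordination model are not reproduced here.

```python
# Toy sketch: flag a bimanual interaction when accelerometer streams from the
# prosthesis and the sound hand show sustained, correlated movement, then issue a
# pre-shaping command for the prosthesis wrist and grip.
import numpy as np

def bimanual_detected(acc_prosthesis, acc_sound, motion_thresh=0.15, corr_thresh=0.5):
    """acc_*: (N, 3) accelerometer windows with gravity removed."""
    mag_p = np.linalg.norm(acc_prosthesis, axis=1)
    mag_s = np.linalg.norm(acc_sound, axis=1)
    both_moving = mag_p.mean() > motion_thresh and mag_s.mean() > motion_thresh
    correlated = np.corrcoef(mag_p, mag_s)[0, 1] > corr_thresh
    return both_moving and correlated

def assist_command(sound_hand_pronated):
    """Illustrative rule standing in for the able-bodied coordination strategy."""
    wrist = "pronation" if sound_hand_pronated else "supination"
    grip = "palmar" if sound_hand_pronated else "lateral"
    return wrist, grip

rng = np.random.default_rng(2)
shared = rng.random((100, 3))
if bimanual_detected(shared + 0.01 * rng.random((100, 3)), shared):
    print(assist_command(sound_hand_pronated=True))
```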
Collapse
Affiliation(s)
- Robin Volkmar
- Department of Trauma Surgery, Orthopedics and Plastic Surgery, University Medical Center Göttingen, Von-Siebold-Str. 3, 37075 Göttingen, Germany
| | - Strahinja Dosen
- Department of Health Science and Technology, Center for Sensory-Motor Interaction, Aalborg University, Aalborg, Denmark
| | | | - Marcus Baum
- Institute of Computer Science, University of Göttingen, Göttingen, Germany
| | - Marko Markovic
- Department of Trauma Surgery, Orthopedics and Plastic Surgery, University Medical Center Göttingen, Von-Siebold-Str. 3, 37075 Göttingen, Germany
| |
Collapse
|
40
|
HANDS: a multimodal dataset for modeling toward human grasp intent inference in prosthetic hands. INTEL SERV ROBOT 2019; 13:179-185. [PMID: 33312264 DOI: 10.1007/s11370-019-00293-8] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
Upper limb and hand functionality is critical to many activities of daily living, and the amputation of a hand can lead to significant functionality loss for individuals. From this perspective, advanced prosthetic hands of the future are anticipated to benefit from improved shared control between the robotic hand and its human user, but more importantly from an improved capability to infer human intent from multimodal sensor data, giving the robotic hand perception abilities regarding the operational context. Such multimodal sensor data may include various environment sensors, including vision, as well as human physiology and behavior sensors, including electromyography and inertial measurement units. A fusion methodology for environmental state and human intent estimation can combine these sources of evidence to support prosthetic hand motion planning and control. In this paper, we present a dataset of this type, gathered in anticipation of cameras being built into prosthetic hands, where computer vision methods will need to assess this hand-view visual evidence in order to estimate human intent. Specifically, paired images from the human eye-view and the hand-view of various objects placed at different orientations were captured at the initial state of grasping trials, followed by paired video, EMG, and IMU recordings from the human's arm during a grasp, lift, put-down, and retract trial structure. For each trial, based on eye-view images of the scene showing the hand and object on a table, multiple humans were asked to sort, in decreasing order of preference, five grasp types appropriate for the object in its given configuration relative to the hand. The potential utility of paired eye-view and hand-view images was illustrated by training a convolutional neural network to process hand-view images in order to predict the eye-view labels assigned by humans.
Collapse
|
41
|
Parr JVV, Vine SJ, Wilson MR, Harrison NR, Wood G. Visual attention, EEG alpha power and T7-Fz connectivity are implicated in prosthetic hand control and can be optimized through gaze training. J Neuroeng Rehabil 2019; 16:52. [PMID: 31029174 PMCID: PMC6487034 DOI: 10.1186/s12984-019-0524-x] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2018] [Accepted: 04/16/2019] [Indexed: 01/29/2023] Open
Abstract
Background Prosthetic hands impose a high cognitive burden on the user that often results in fatigue, frustration and prosthesis rejection. However, efforts to directly measure this burden are sparse and little is known about the mechanisms behind it. There is also a lack of evidence-based training interventions designed to improve prosthesis hand control and reduce the mental effort required to use them. In two experiments, we provide the first direct evaluation of this cognitive burden using measurements of EEG and eye-tracking (Experiment 1), and then explore how a novel visuomotor intervention (gaze training; GT) might alleviate it (Experiment 2). Methods In Experiment 1, able-bodied participants (n = 20) lifted and moved a jar, first using their anatomical hand and then using a myoelectric prosthetic hand simulator. In experiment 2, a GT group (n = 12) and a movement training (MT) group (n = 12) trained with the prosthetic hand simulator over three one hour sessions in a picking up coins task, before returning for retention, delayed retention and transfer tests. The GT group received instruction regarding how to use their eyes effectively, while the MT group received movement-related instruction typical in rehabilitation. Results Experiment 1 revealed that when using the prosthetic hand, participants performed worse, exhibited spatial and temporal disruptions to visual attention, and exhibited a global decrease in EEG alpha power (8-12 Hz), suggesting increased cognitive effort. Experiment 2 showed that GT was the more effective method for expediting prosthesis learning, optimising visual attention, and lowering conscious control – as indexed by reduced T7-Fz connectivity. Whilst the MT group improved performance, they did not reduce hand-focused visual attention and showed increased conscious movement control. The superior benefits of GT transferred to a more complex tea-making task. Conclusions These experiments quantify the visual and cortical mechanisms relating to the cognitive burden experienced during prosthetic hand control. They also evidence the efficacy of a GT intervention that alleviated this burden and promoted better learning and transfer, compared to typical rehabilitation instructions. These findings have theoretical and practical implications for prosthesis rehabilitation, the development of emerging prosthesis technologies and for the general understanding of human-tool interactions. Electronic supplementary material The online version of this article (10.1186/s12984-019-0524-x) contains supplementary material, which is available to authorized users.
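The two EEG measures referred to in this abstract, alpha-band (8-12 Hz) power and T7-Fz connectivity, can be approximated as below with a Welch spectrum and magnitude-squared coherence. This is only an illustrative stand-in: the sampling rate and data are synthetic, and the study may have used a different connectivity estimator.

```python
# Sketch: alpha-band power from a Welch PSD and T7-Fz connectivity approximated as
# magnitude-squared coherence in the same band.
import numpy as np
from scipy.signal import welch, coherence

fs = 250                                   # assumed sampling rate (Hz)
rng = np.random.default_rng(3)
t7 = rng.standard_normal(fs * 10)          # stand-ins for 10 s of T7 and Fz EEG
fz = 0.5 * t7 + rng.standard_normal(fs * 10)

def band_mean(freqs, values, lo=8.0, hi=12.0):
    band = (freqs >= lo) & (freqs <= hi)
    return values[band].mean()

f, pxx = welch(t7, fs=fs, nperseg=fs * 2)
alpha_power = band_mean(f, pxx)

f, cxy = coherence(t7, fz, fs=fs, nperseg=fs * 2)
t7_fz_connectivity = band_mean(f, cxy)

print(f"alpha power: {alpha_power:.3f}, T7-Fz coherence: {t7_fz_connectivity:.3f}")
```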
Collapse
Affiliation(s)
- J V V Parr
- School of Health Sciences, Liverpool Hope University, Liverpool, UK
| | - S J Vine
- College of Life & Environmental Sciences, University of Exeter, Exeter, UK
| | - M R Wilson
- College of Life & Environmental Sciences, University of Exeter, Exeter, UK
| | - N R Harrison
- Department of Psychology, Liverpool Hope University, Liverpool, UK
| | - G Wood
- Research Centre for Musculoskeletal Science and Sports Medicine Department of Sport and Exercise Science, Manchester Metropolitan University, Manchester, UK.
| |
Collapse
|
42
|
Hays M, Osborn L, Ghosh R, Iskarous M, Hunt C, Thakor NV. Neuromorphic vision and tactile fusion for upper limb prosthesis control. INTERNATIONAL IEEE/EMBS CONFERENCE ON NEURAL ENGINEERING : [PROCEEDINGS]. INTERNATIONAL IEEE EMBS CONFERENCE ON NEURAL ENGINEERING 2019; 2019:981-984. [PMID: 33875927 PMCID: PMC8053366 DOI: 10.1109/ner.2019.8716890] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/11/2023]
Abstract
A major issue with upper limb prostheses is the disconnect between sensory information perceived by the user and the information perceived by the prosthesis. Advances in prosthetic technology introduced tactile information for monitoring grasping activity, but visual information, a vital component in the human sensory system, is still not fully utilized as a form of feedback to the prosthesis. For able-bodied individuals, many of the decisions for grasping or manipulating an object, such as hand orientation and aperture, are made based on visual information before contact with the object. We show that inclusion of neuromorphic visual information, combined with tactile feedback, improves the ability and efficiency of both able-bodied and amputee subjects to pick up and manipulate everyday objects. We discovered that combining both visual and tactile information in a real-time closed loop feedback strategy generally decreased the completion time of a task involving picking up and manipulating objects compared to using a single modality for feedback. While the full benefit of the combined feedback was partially obscured by experimental inaccuracies of the visual classification system, we demonstrate that this fusion of neuromorphic signals from visual and tactile sensors can provide valuable feedback to a prosthetic arm for enhancing real-time function and usability.
Collapse
Affiliation(s)
- Mark Hays
- Department of Biomedical Engineering, Johns Hopkins School of Medicine, 720 Rutland Ave, Baltimore, MD 21205, USA
| | - Luke Osborn
- Department of Biomedical Engineering, Johns Hopkins School of Medicine, 720 Rutland Ave, Baltimore, MD 21205, USA
| | - Rohan Ghosh
- Sinapse Institute for Neurotechnology and the Department of Electrical and Computer Engineering, National University of Singapore, 28 Medical Drive, #05-02, Singapore 117456, Singapore
| | - Mark Iskarous
- Department of Biomedical Engineering, Johns Hopkins School of Medicine, 720 Rutland Ave, Baltimore, MD 21205, USA
| | - Christopher Hunt
- Department of Biomedical Engineering, Johns Hopkins School of Medicine, 720 Rutland Ave, Baltimore, MD 21205, USA
| | - Nitish V Thakor
- Department of Biomedical Engineering, Johns Hopkins School of Medicine, 720 Rutland Ave, Baltimore, MD 21205, USA
- Sinapse Institute for Neurotechnology and the Department of Electrical and Computer Engineering, National University of Singapore, 28 Medical Drive, #05-02, Singapore 117456, Singapore
| |
Collapse
|
43
|
Ozdenizci O, Gunay SY, Quivira F, Erdogmus D. Hierarchical Graphical Models for Context-Aware Hybrid Brain-Machine Interfaces. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2018; 2018:1964-1967. [PMID: 30440783 PMCID: PMC6525618 DOI: 10.1109/embc.2018.8512677] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
We present a novel hierarchical graphical model based context-aware hybrid brain-machine interface (hBMI) using probabilistic fusion of electroencephalographic (EEG) and electromyographic (EMG) activities. Based on experimental data collected during stationary executions and subsequent imageries of five different hand gestures with both limbs, we demonstrate feasibility of the proposed hBMI system through within session and online across sessions classification analyses. Furthermore, we investigate the context-aware extent of the model by a simulated probabilistic approach and highlight potential implications of our work in the field of neurophysiologically-driven robotic hand prosthetics.
Collapse
|
44
|
Batzianoulis I, Krausz NE, Simon AM, Hargrove L, Billard A. Decoding the grasping intention from electromyography during reaching motions. J Neuroeng Rehabil 2018; 15:57. [PMID: 29940991 PMCID: PMC6020187 DOI: 10.1186/s12984-018-0396-5] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2017] [Accepted: 06/11/2018] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Active upper-limb prostheses are used to restore important hand functionalities, such as grasping. In conventional approaches, a pattern recognition system is trained over a number of static grasping gestures. However, training a classifier in a static position results in lower classification accuracy when performing dynamic motions, such as reach-to-grasp. We propose an electromyography-based learning approach that decodes the grasping intention during the reaching motion, leading to a faster and more natural response of the prosthesis. METHODS AND RESULTS Eight able-bodied subjects and four individuals with transradial amputation gave informed consent and participated in our study. All subjects performed reach-to-grasp motions for five grasp types while the electromyographic (EMG) activity and the extension of the arm were recorded. We separated the reach-to-grasp motion into three phases with respect to the extension of the arm. A multivariate analysis of variance (MANOVA) on the muscular activity revealed significant differences among the motion phases. Additionally, we examined the classification performance on these phases. We compared the performance of three different pattern recognition methods: Linear Discriminant Analysis (LDA), Support Vector Machines (SVM) with linear and non-linear kernels, and an Echo State Network (ESN) approach. Our off-line analysis shows that it is possible to achieve classification performance above 80% before the end of the motion when considering three grasp types. An on-line evaluation with an upper-limb prosthesis shows that including the reaching motion in the training of the classifier considerably improves classification accuracy and enables the detection of grasp intention early in the reaching motion. CONCLUSIONS This method offers more natural and intuitive control of prosthetic devices, as it enables controlling grasp closure in synergy with the reaching motion. This work contributes to decreasing the delay between the user's intention and the device response and improves the coordination of the device with the motion of the arm.
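The phase-wise analysis described here can be pictured as binning reach samples into three phases by normalized arm extension and training one linear discriminant classifier per phase. The sketch below uses synthetic placeholder data; feature dimensions and labels are assumptions, and LDA is only one of the three classifiers compared in the paper.

```python
# Minimal sketch: phase segmentation by arm extension plus per-phase LDA on EMG
# features, with synthetic data standing in for recorded reach-to-grasp trials.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
n = 600
emg_features = rng.random((n, 8))              # e.g., one amplitude feature per channel
extension = rng.random(n)                      # normalized arm extension in [0, 1]
grasp_label = rng.integers(0, 5, n)            # five grasp types

phase = np.digitize(extension, [1 / 3, 2 / 3])  # 0, 1, 2 = early, mid, late reach

classifiers = {}
for p in range(3):
    idx = phase == p
    classifiers[p] = LinearDiscriminantAnalysis().fit(emg_features[idx], grasp_label[idx])

# At run time, pick the classifier matching the current phase of the reach.
current = int(np.digitize(0.8, [1 / 3, 2 / 3]))
print(classifiers[current].predict(emg_features[:1]))
```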
Collapse
Affiliation(s)
- Iason Batzianoulis
- Learning Algorithms and Systems Laboratory (LASA), School of Engineering, École Polytechnique Fédérale de Lausanne (EPFL), Route Cantonale, Lausanne, CH-1015 Switzerland
| | - Nili E. Krausz
- Center for Bionic Medicine, Shirley Ryan AbilityLab, E Erie St., Chicago, 60611 IL USA
- Dept. of Physical Medicine and Rehabilitation, Northwestern University, N Lake Shore, Chicago, 60611 IL USA
| | - Ann M. Simon
- Center for Bionic Medicine, Shirley Ryan AbilityLab, E Erie St., Chicago, 60611 IL USA
- Dept. of Physical Medicine and Rehabilitation, Northwestern University, N Lake Shore, Chicago, 60611 IL USA
| | - Levi Hargrove
- Center for Bionic Medicine, Shirley Ryan AbilityLab, E Erie St., Chicago, 60611 IL USA
- Dept. of Physical Medicine and Rehabilitation, Northwestern University, N Lake Shore, Chicago, 60611 IL USA
- Dept. of Biomedical Engineering, Northwestern University, Evanston, 60208 IL USA
| | - Aude Billard
- Learning Algorithms and Systems Laboratory (LASA), School of Engineering, École Polytechnique Fédérale de Lausanne (EPFL), Route Cantonale, Lausanne, CH-1015 Switzerland
| |
Collapse
|
45
|
Abstract
OBJECTIVE The objective of this study was to compare the use of muscles appropriate for partial-hand prostheses with those typically used for complete hand devices and to determine whether differences in their underlying neural substrates translate to different levels of myoelectric control. APPROACH We developed a novel abstract myoelectric decoder based on motor learning. Three muscle pairs, namely, an intrinsic and independent, an intrinsic and synergist and finally, an extrinsic and antagonist, were tested during abstract myoelectric control. Feedback conditions probed the roles of feed-forward and feedback mechanisms. RESULTS Both performance levels and rates of improvement were significantly higher for intrinsic hand muscles relative to muscles of the forearm. Intrinsic hand muscles showed considerable improvement generalising to decoder use without visual feedback. Results indicate that visual feedback from the decoder is used for transitioning between muscle activity levels, but not for maintaining state. Both individual and group performance were found to be strongly related to motor variability. SIGNIFICANCE Physiological differences inherent to the hand muscles can translate to improved prosthesis control. Our results support the use of motor learning based techniques for upper-limb myoelectric control and strongly argues for their utility in control of partial-hand prostheses. We provide evidence of myoelectric control skill acquisition and offer a formal definition for abstract decoding in the context of prosthetic control.
Collapse
Affiliation(s)
- Matthew Dyson
- Intelligent Sensing Laboratory, School of Engineering, Newcastle University, NE1 7RU, United Kingdom
| | | | | |
Collapse
|
46
|
Silveira C, Brunton E, Spendiff S, Nazarpour K. Influence of nerve cuff channel count and implantation site on the separability of afferent ENG. J Neural Eng 2018; 15:046004. [PMID: 29629880 PMCID: PMC5964361 DOI: 10.1088/1741-2552/aabca0] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
Abstract
Objective. Recording of neural signals from intact peripheral nerves in patients with spinal cord injury or stroke survivors offers the possibility for the development of closed-loop sensorimotor prostheses. Nerve cuffs have been found to provide stable recordings from peripheral nerves for prolonged periods of time. However, questions remain over the design and positioning of nerve cuffs such that the separability of neural data recorded from the peripheral nerves is improved. Approach. Afferent electroneurographic (ENG) signals were recorded with nerve cuffs placed on the sciatic nerve of rats in response to various mechanical stimuli to the hindpaw. The mean absolute value of the signal was extracted and input to a classifier. The performance of the classifier was evaluated under two conditions: (1) when information from either a 3- or 16-channel cuff was used; (2) when information was available from a cuff placed either distally or proximally along the nerve. Main results. We show that both 3- and 16-channel cuffs were able to separate afferent ENG signals with an accuracy greater than chance. The highest classification scores were achieved when the classifier was fed with information obtained from a 16-channel cuff placed distally. While the 16-channel cuff always outperformed the 3-channel cuff, the difference in performance was increased when the 16-channel cuff was placed distally rather than proximally on the nerve. Significance. The results indicate that increasing the complexity of a nerve cuff may only be advantageous if the nerve cuff is to be implanted distally, where the nerve has begun to divide into individual fascicles.
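The feature pipeline described here, the mean absolute value (MAV) of each cuff channel fed to a classifier, is easy to sketch, and the 3- versus 16-channel comparison corresponds to restricting the feature columns. Window length, data, and the specific classifier below are assumptions for illustration.

```python
# Illustrative sketch: windowed MAV features per cuff channel, then a classifier,
# with a crude comparison between a 3-channel and a 16-channel configuration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def mav_features(eng, win=200):
    """eng: (n_samples, n_channels) recording -> (n_windows, n_channels) MAV."""
    n_win = eng.shape[0] // win
    trimmed = eng[: n_win * win].reshape(n_win, win, eng.shape[1])
    return np.abs(trimmed).mean(axis=1)

rng = np.random.default_rng(5)
eng = rng.standard_normal((20000, 16))                 # synthetic 16-channel ENG
stimulus = rng.integers(0, 3, 20000 // 200)            # 3 mechanical stimulus types

X16 = mav_features(eng)
X3 = X16[:, :3]                                        # pretend 3-channel cuff

for name, X in [("3-channel", X3), ("16-channel", X16)]:
    score = LinearDiscriminantAnalysis().fit(X[::2], stimulus[::2]).score(X[1::2], stimulus[1::2])
    print(name, round(score, 2))
```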
Collapse
Affiliation(s)
- Carolina Silveira
- Intelligent Sensing Laboratory, School of Engineering, Newcastle University, NE1 7RU, United Kingdom
| | | | | | | |
Collapse
|
47
|
Hramov AE, Frolov NS, Maksimenko VA, Makarov VV, Koronovskii AA, Garcia-Prieto J, Antón-Toro LF, Maestú F, Pisarchik AN. Artificial neural network detects human uncertainty. CHAOS (WOODBURY, N.Y.) 2018; 28:033607. [PMID: 29604631 DOI: 10.1063/1.5002892] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Artificial neural networks (ANNs) are known to be a powerful tool for data analysis. They are used in social science, robotics, and neurophysiology for solving tasks of classification, forecasting, pattern recognition, etc. In neuroscience, ANNs allow the recognition of specific forms of brain activity from multichannel EEG or MEG data. This makes the ANN an efficient computational core for brain-machine systems. However, despite significant achievements of artificial intelligence in the recognition and classification of well-reproducible patterns of neural activity, the use of ANNs to recognise and classify neural activity patterns still requires additional attention, especially in ambiguous situations. Accordingly, in this study we demonstrate the efficiency of applying an ANN to the classification of human MEG trials corresponding to the perception of bistable visual stimuli with different degrees of ambiguity. We show that, along with classifying brain states associated with multistable image interpretations, in the case of significant ambiguity the ANN can detect an uncertain state in which the observer is in doubt about the image interpretation. Based on these results, we describe a possible application of ANNs for the detection of bistable brain activity associated with difficulties in the decision-making process.
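The core idea of flagging uncertainty can be sketched as follows: a small multilayer perceptron is trained on trial features, and a trial is labelled "uncertain" whenever the network's top class probability falls below a threshold. The feature dimensionality, network size, and threshold here are illustrative assumptions rather than the parameters used in the study.

```python
# Uncertainty flagging with a small ANN classifier on synthetic trial features.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X_train = rng.standard_normal((200, 64))   # e.g. 64 MEG-derived features per trial
y_train = rng.integers(0, 2, size=200)     # two possible image interpretations

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=1)
clf.fit(X_train, y_train)

def interpret_trial(features, threshold=0.7):
    """Return the predicted interpretation, or 'uncertain' if the ANN is not confident."""
    proba = clf.predict_proba(features.reshape(1, -1))[0]
    return "uncertain" if proba.max() < threshold else int(np.argmax(proba))

print(interpret_trial(rng.standard_normal(64)))
```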
Collapse
Affiliation(s)
- Alexander E Hramov
- Artificial Intelligence Systems and Neurotechnologies, Yuri Gagarin State Technical University of Saratov, Politehnicheskaya, 77, Saratov 410054, Russia
| | - Nikita S Frolov
- Artificial Intelligence Systems and Neurotechnologies, Yuri Gagarin State Technical University of Saratov, Politehnicheskaya, 77, Saratov 410054, Russia
| | - Vladimir A Maksimenko
- Artificial Intelligence Systems and Neurotechnologies, Yuri Gagarin State Technical University of Saratov, Politehnicheskaya, 77, Saratov 410054, Russia
| | - Vladimir V Makarov
- Artificial Intelligence Systems and Neurotechnologies, Yuri Gagarin State Technical University of Saratov, Politehnicheskaya, 77, Saratov 410054, Russia
| | | | - Juan Garcia-Prieto
- Center for Biomedical Technology, Technical University of Madrid, Campus Montegancedo, 28223 Pozuelo de Alarcon, Madrid, Spain
| | - Luis Fernando Antón-Toro
- Center for Biomedical Technology, Technical University of Madrid, Campus Montegancedo, 28223 Pozuelo de Alarcon, Madrid, Spain
| | - Fernando Maestú
- Center for Biomedical Technology, Technical University of Madrid, Campus Montegancedo, 28223 Pozuelo de Alarcon, Madrid, Spain
| | - Alexander N Pisarchik
- Artificial Intelligence Systems and Neurotechnologies, Yuri Gagarin State Technical University of Saratov, Politehnicheskaya, 77, Saratov 410054, Russia
| |
Collapse
|
48
|
Fu Q, Santello M. Improving Fine Control of Grasping Force during Hand-Object Interactions for a Soft Synergy-Inspired Myoelectric Prosthetic Hand. Front Neurorobot 2018; 11:71. [PMID: 29375360 PMCID: PMC5767584 DOI: 10.3389/fnbot.2017.00071] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2017] [Accepted: 12/18/2017] [Indexed: 11/29/2022] Open
Abstract
The concept of postural synergies of the human hand has been shown to potentially reduce complexity in the neuromuscular control of grasping. By merging this concept with soft robotics approaches, a multi-degree-of-freedom soft-synergy prosthetic hand [SoftHand-Pro (SHP)] was created. The mechanical innovation of the SHP enables adaptive and robust functional grasps with simple and intuitive myoelectric control from only two surface electromyogram (sEMG) channels. However, the current myoelectric controller has very limited capability for fine control of grasp forces. We addressed this challenge by designing a hybrid-gain myoelectric controller that switches control gains based on the sensorimotor state of the SHP. This controller was tested against a conventional single-gain (SG) controller, as well as against the native hand, in able-bodied subjects. We used the following tasks to evaluate the performance of grasp force control: (1) picking and placing objects of different size, weight, and fragility using power or precision grasps and (2) squeezing objects of different stiffness. Sensory feedback of the grasp forces was provided to the user through a non-invasive, mechanotactile haptic feedback device mounted on the upper arm. We demonstrated that the novel hybrid controller enabled superior task completion speed and finer force control than the SG controller in object pick-and-place tasks. We also found that the performance of the hybrid controller qualitatively agrees with that of the native human hand.
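A minimal sketch of the gain-switching idea follows, assuming a simple two-channel velocity control law and a force-based contact detector; the gain values, threshold, and control law are illustrative assumptions and not the SoftHand-Pro implementation.

```python
# Hybrid-gain myoelectric velocity control: a high gain while the hand moves
# freely, a lower gain once object contact is detected, so the same EMG input
# produces finer force modulation during the grasp.
def hybrid_gain_command(emg_open, emg_close, contact_force,
                        gain_free=1.0, gain_contact=0.25,
                        contact_threshold=0.5):
    """Return a closing-velocity command from two normalised sEMG envelopes.

    emg_open / emg_close: envelopes of the two control channels, in [0, 1].
    contact_force: grasp force estimate from the hand's force sensing (N).
    """
    gain = gain_contact if contact_force > contact_threshold else gain_free
    return gain * (emg_close - emg_open)  # positive closes, negative opens

# Example: the same EMG input yields a smaller command once contact is detected.
print(hybrid_gain_command(0.1, 0.6, contact_force=0.0))   # free motion
print(hybrid_gain_command(0.1, 0.6, contact_force=2.0))   # in contact
```

Switching on the sensorimotor state in this way is what lets a single pair of sEMG channels serve both fast pre-shaping and fine force regulation.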
Collapse
Affiliation(s)
- Qiushi Fu
- Neural Control of Movement Laboratory, School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ, United States.,Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL, United States
| | - Marco Santello
- Neural Control of Movement Laboratory, School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ, United States
| |
Collapse
|