1
Skoraczynski DJ, Chen C. Novel near E-Field Topography Sensor for Human-Machine Interfacing in Robotic Applications. Sensors (Basel) 2024; 24:1379. [PMID: 38474915] [DOI: 10.3390/s24051379]
Abstract
This work investigates a new sensing technology for use in robotic human-machine interface (HMI) applications. The proposed method uses near E-field sensing to measure small changes in limb surface topography due to muscle actuation over time. The sensors introduced in this work provide a non-contact, low-computational-cost, and low-noise method for sensing muscle activity. The sensor's performance is validated by evaluating key characteristics such as accuracy, hysteresis, and resolution. Then, to understand its potential performance in intention detection, the unmodified digital output of the sensor is analysed against movements of the hand and fingers; this demonstrates the worst-case scenario and shows that the sensor provides highly targeted and relevant data on muscle activation before any further processing. Finally, a convolutional neural network is used to perform joint angle prediction over nine degrees of freedom, achieving strong regression performance with an RMSE of less than six degrees for thumb and wrist movements and 11 degrees for finger movements. This work demonstrates the promising performance of this novel approach to sensing for use in human-machine interfaces.
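To make the final step concrete, here is a minimal sketch (in PyTorch) of the kind of CNN regressor the abstract describes: windowed sensor channels in, nine joint angles out, evaluated by RMSE. The channel count, window length, and layer sizes are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class JointAngleCNN(nn.Module):
    """Regress joint angles from a window of multi-channel sensor samples."""
    def __init__(self, n_channels=8, n_joints=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # collapse the time axis
        )
        self.head = nn.Linear(64, n_joints)   # one output per degree of freedom

    def forward(self, x):                     # x: (batch, n_channels, window)
        return self.head(self.features(x).squeeze(-1))

model = JointAngleCNN()
x = torch.randn(4, 8, 128)                    # dummy batch of sensor windows
angles_true = torch.zeros(4, 9)               # placeholder ground-truth angles
rmse = torch.sqrt(nn.functional.mse_loss(model(x), angles_true))
```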
Affiliation(s)
- Dariusz J Skoraczynski
- Laboratory of Motion Generation and Analysis (LMGA), Monash University, Clayton, VIC 3800, Australia
- Chao Chen
- Laboratory of Motion Generation and Analysis (LMGA), Monash University, Clayton, VIC 3800, Australia
2
Zhang Y, Doyle T. Integrating intention-based systems in human-robot interaction: a scoping review of sensors, algorithms, and trust. Front Robot AI 2023; 10:1233328. [PMID: 37876910] [PMCID: PMC10591094] [DOI: 10.3389/frobt.2023.1233328]
Abstract
The increasing adoption of robot systems in industrial settings and their teaming with humans have led to growing interest in human-robot interaction (HRI) research. While many robots use sensors to avoid harming humans, they cannot reason about human actions or intentions, making them passive reactors rather than interactive collaborators. Intention-based systems can determine human motives and predict future movements, but their closer interaction with humans raises concerns about trust. This scoping review provides an overview of the sensors and algorithms used in intention-based systems and examines the trust aspect of such systems in HRI scenarios. We searched the MEDLINE, Embase, and IEEE Xplore databases to identify studies related to the aforementioned topics. Results from each study were summarized and categorized according to different intention types, representing various designs. The literature shows a range of sensors and algorithms used to identify intentions, each with its own advantages and disadvantages in different scenarios. However, trust in intention-based systems is not well studied. Although some research in AI and robotics can be applied to intention-based systems, their unique characteristics warrant further study to maximize collaboration performance. This review highlights the need for more research on the trust aspects of intention-based systems to better understand and optimize their role in human-robot interactions, and it establishes a foundation for future research in sensor and algorithm design for such systems.
Affiliation(s)
- Yifei Zhang
- Department of Electrical and Computer Engineering, McMaster University, Hamilton, ON, Canada
- Thomas Doyle
- Department of Electrical and Computer Engineering, McMaster University, Hamilton, ON, Canada
- School of Biomedical Engineering, McMaster University, Hamilton, ON, Canada
- Vector Institute for Artificial Intelligence, Toronto, ON, Canada
3
Lee H, Park J, Kang BB, Cho KJ. Single-Step 3D Printing of Bio-Inspired Printable Joints Applied to a Prosthetic Hand. 3D Printing and Additive Manufacturing 2023; 10:917-929. [PMID: 37886417] [PMCID: PMC10599432] [DOI: 10.1089/3dp.2022.0120]
Abstract
Single-step 3D printing, which can manufacture complicated designs without assembly, has the potential to change our design perspective and how we 3D print products: rather than printing static components, ready-to-use movable mechanisms become a reality. Existing 3D printing solutions are challenged by precision limitations and cannot directly produce tightly mated moving surfaces. Therefore, joints must be designed with a sufficient gap between the components, resulting in joints and other mechanisms with imprecise motion. In this study, we propose a bio-inspired printable joint and apply it to a Single sTep 3D-printed Prosthetic hand (ST3P hand). We mimic the anatomical structure of the human finger joint and implement a cam effect that changes the distance between the contact surfaces through the elastic bending of the ligaments as the joint flexes. This bio-inspired design allows the joint to be 3D printed in a single step and provides precise motion. The bio-inspired printable joint makes it possible for the ST3P hand to be designed as a lightweight (∼255 g), low-cost (∼$500) monolithic structure with nine finger joints and manufactured via single-step 3D printing. The ST3P hand takes ∼6 min to assemble, approximately one-tenth the assembly time of open-source 3D-printed prostheses. The hand can perform basic tasks of activities of daily living by providing a pulling force of 48 N and a grasp strength of 20 N. The simple manufacturing of the ST3P hand could take us one step closer to realizing fully customized robotic prosthetic hands at low cost and effort.
Affiliation(s)
- Haemin Lee
- Biorobotics Laboratory, Department of Mechanical Engineering, Seoul National University, Seoul, Republic of Korea
- JongHoo Park
- Biorobotics Laboratory, Department of Mechanical Engineering, Seoul National University, Seoul, Republic of Korea
- Brian Byunghyun Kang
- Intelligent Robotics Laboratory, School of Intelligent Mechatronics Engineering, Sejong University, Seoul, Republic of Korea
- Kyu-Jin Cho
- Biorobotics Laboratory, Department of Mechanical Engineering, Seoul National University, Seoul, Republic of Korea
- Soft Robotics Research Center, Seoul National University, Seoul, Republic of Korea
4
Yang S, Garg NP, Gao R, Yuan M, Noronha B, Ang WT, Accoto D. Learning-Based Motion-Intention Prediction for End-Point Control of Upper-Limb-Assistive Robots. Sensors (Basel) 2023; 23:2998. [PMID: 36991709] [PMCID: PMC10056111] [DOI: 10.3390/s23062998]
Abstract
The lack of intuitive and active human-robot interaction makes it difficult to use upper-limb-assistive devices. In this paper, we propose a novel learning-based controller that intuitively uses onset motion to predict the desired end-point position for an assistive robot. A multi-modal sensing system comprising inertial measurement units (IMUs), electromyographic (EMG) sensors, and mechanomyography (MMG) sensors was implemented. This system was used to acquire kinematic and physiological signals during reaching and placing tasks performed by five healthy subjects. The onset motion data of each motion trial were extracted and fed into traditional regression models and deep learning models for training and testing. The models predict the position of the hand in planar space, which serves as the reference position for low-level position controllers. The results show that using IMU sensors with the proposed prediction model is sufficient for motion intention detection, providing almost the same prediction performance as adding EMG or MMG. Additionally, recurrent neural network (RNN)-based models can predict target positions over a short onset time window for reaching motions and are suitable for predicting targets over a longer horizon for placing tasks. The detailed analysis in this study can improve the usability of assistive and rehabilitation robots.
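As a rough illustration of the IMU-only result, the sketch below (PyTorch) shows an RNN that consumes a short onset-motion window and regresses the planar end-point used as the reference for a low-level position controller. The dimensions and architecture are assumptions for illustration, not the paper's models.

```python
import torch
import torch.nn as nn

class OnsetToEndpoint(nn.Module):
    """Predict a planar target position from an onset-motion window."""
    def __init__(self, n_features=6, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)    # planar end-point (x, y)

    def forward(self, x):                  # x: (batch, onset_steps, n_features)
        _, h = self.rnn(x)                 # h: (num_layers, batch, hidden)
        return self.out(h[-1])             # regress from the final hidden state

model = OnsetToEndpoint()
onset = torch.randn(4, 25, 6)              # e.g., 25 samples of 6-axis IMU data
target_xy = model(onset)                   # reference for the position controller
```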
Affiliation(s)
- Sibo Yang
- School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Neha P. Garg
- Rehabilitation Research Institute of Singapore (RRIS), Nanyang Technological University, Singapore 308232, Singapore
- Ruobin Gao
- School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Meng Yuan
- Rehabilitation Research Institute of Singapore (RRIS), Nanyang Technological University, Singapore 308232, Singapore
- Bernardo Noronha
- School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Wei Tech Ang
- School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Rehabilitation Research Institute of Singapore (RRIS), Nanyang Technological University, Singapore 308232, Singapore
- Dino Accoto
- Department of Mechanical Engineering, Robotics, Automation and Mechatronics Division, KU Leuven, 3590 Diepenbeek, Belgium
5
Castro MN, Dosen S. Continuous Semi-autonomous Prosthesis Control Using a Depth Sensor on the Hand. Front Neurorobot 2022; 16:814973. [PMID: 35401136] [PMCID: PMC8989737] [DOI: 10.3389/fnbot.2022.814973]
Abstract
Modern myoelectric prostheses can perform multiple functions (e.g., several grasp types and wrist rotation), but their intuitive control by the user is still an open challenge. It has recently been demonstrated that semi-autonomous control can allow subjects to operate complex prostheses effectively; however, this approach often requires placing sensors on the user. The present study proposes a system for semi-autonomous control of a myoelectric prosthesis that requires a single depth sensor placed on the dorsal side of the hand. The system automatically pre-shapes the hand (grasp type, size, and wrist rotation) and allows the user to grasp objects of different shapes, sizes, and orientations, placed individually or within cluttered scenes. The system “reacts” to the side from which the object is approached and enables the user to target not only the whole object but also an object part. Another unique aspect of the system is that it relies on online interaction between the user and the prosthesis: the system reacts continuously to the targets in its focus, while the user interprets the movement of the prosthesis to adjust aiming. An experimental assessment was conducted in ten able-bodied participants to evaluate the feasibility of the approach and the impact of training on prosthesis-user interaction. The subjects used the system to grasp a set of objects individually (Phase I) and in cluttered scenarios (Phase II), with the time to accomplish the task (TAT) as the performance metric. In both phases, the TAT improved significantly across blocks. Some targets (objects and/or their parts) were more challenging, thus requiring significantly more time to handle, but all objects and scenes were successfully handled by all subjects. The assessment therefore demonstrated that the system is robust and effective and that subjects could learn to aim with the system after brief training. This is an important step toward the development of a self-contained semi-autonomous system convenient for clinical applications.
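The continuous user-prosthesis interaction can be pictured as a loop in which each depth frame updates the focused target and the hand is re-pre-shaped accordingly. The sketch below is a hypothetical, heavily simplified rendering of that loop; the Target fields, thresholds, and the select_preshape mapping are placeholders, not the authors' implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Target:
    """Hypothetical summary of the object (or object part) in focus."""
    width_m: float
    orientation_rad: float

def select_preshape(t: Target):
    """Map the focused target to (grasp type, aperture, wrist rotation)."""
    grasp = "pinch" if t.width_m < 0.04 else "power"
    aperture = min(t.width_m + 0.02, 0.10)   # opening with a small safety margin
    return grasp, aperture, t.orientation_rad

# Each depth frame re-estimates the target, so the hand "reacts" continuously
# while the user adjusts aim by re-pointing the prosthesis.
for frame in range(3):                        # stand-in for the live depth stream
    target = Target(width_m=0.03 + 0.01 * frame,
                    orientation_rad=math.radians(10 * frame))
    print(select_preshape(target))
```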
6
Esposito D, Centracchio J, Andreozzi E, Gargiulo GD, Naik GR, Bifulco P. Biosignal-Based Human-Machine Interfaces for Assistance and Rehabilitation: A Survey. Sensors (Basel) 2021; 21:6863. [PMID: 34696076] [PMCID: PMC8540117] [DOI: 10.3390/s21206863]
Abstract
By definition, a Human–Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring has paved the way for a new class of HMIs, which take such biosignals as inputs to control various applications. This survey reviews the large literature of the last two decades on biosignal-based HMIs for assistance and rehabilitation, outlining the state of the art and identifying emerging technologies and potential future research trends. PubMed and other databases were searched using specific keywords. The retrieved studies were screened at three levels (title, abstract, full text), and eventually 144 journal papers and 37 conference papers were included. Four macro-categories were used to classify the different biosignals employed for HMI control: biopotentials, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified according to their target application, considering six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever-growing number of publications has been observed over recent years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to both assistance and rehabilitation. A moderate increase can be observed in studies focusing on robotic control, prosthetic control, and gesture recognition over the last decade, whereas studies on the other targets experienced only a small increase. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has risen considerably, especially in prosthetic control. Hybrid technologies are promising, as they could lead to higher performance; however, they also increase HMI complexity, so their usefulness should be carefully evaluated for each specific application.
Affiliation(s)
- Daniele Esposito
- Department of Electrical Engineering and Information Technologies, Polytechnic and Basic Sciences School, University of Naples “Federico II”, 80125 Naples, Italy
- Jessica Centracchio
- Department of Electrical Engineering and Information Technologies, Polytechnic and Basic Sciences School, University of Naples “Federico II”, 80125 Naples, Italy
- Emilio Andreozzi
- Department of Electrical Engineering and Information Technologies, Polytechnic and Basic Sciences School, University of Naples “Federico II”, 80125 Naples, Italy
- Gaetano D. Gargiulo
- School of Engineering, Design and Built Environment, Western Sydney University, Penrith, NSW 2747, Australia
- The MARCS Institute, Western Sydney University, Penrith, NSW 2751, Australia
- Ganesh R. Naik
- School of Engineering, Design and Built Environment, Western Sydney University, Penrith, NSW 2747, Australia
- The Adelaide Institute for Sleep Health, Flinders University, Bedford Park, SA 5042, Australia
- Paolo Bifulco
- Department of Electrical Engineering and Information Technologies, Polytechnic and Basic Sciences School, University of Naples “Federico II”, 80125 Naples, Italy
7
Elbow Motion Trajectory Prediction Using a Multi-Modal Wearable System: A Comparative Analysis of Machine Learning Techniques. Sensors (Basel) 2021; 21:498. [PMID: 33445601] [PMCID: PMC7827251] [DOI: 10.3390/s21020498]
Abstract
Motion intention detection is fundamental to the implementation of human-machine interfaces for assistive robots. In this paper, multiple machine learning techniques are explored for creating upper limb motion prediction models, which generally depend on three factors: the signals collected from the user (kinematic or physiological), the extracted features, and the selected algorithm. We explore the use of different features extracted from various signals to train multiple algorithms for the prediction of elbow flexion angle trajectories. Prediction accuracy was evaluated based on the mean velocity and peak amplitude of the trajectory, which together are sufficient to fully define it. The results show that prediction accuracy is low when using solely physiological signals; however, it improves substantially when kinematic signals are included. This suggests that kinematic signals provide a reliable source of information for predicting elbow trajectories. Different models were trained using 10 algorithms. Regularization algorithms performed well in all conditions, whereas neural networks performed better when the most important features were selected. The extensive analysis provided in this study can be consulted to aid the development of accurate upper limb motion intention detection models.
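Since the trajectory is fully defined by its mean velocity and peak amplitude, the prediction task reduces to a two-output regression. Below is a minimal sketch with scikit-learn using a regularized linear model (ridge regression) on synthetic stand-in data; the feature dimensions and data are illustrative, not the study's.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 12))                # e.g., 12 kinematic features per trial
W = rng.standard_normal((12, 2))
y = X @ W + 0.1 * rng.standard_normal((200, 2))   # [mean velocity, peak amplitude]

# Regularized linear regression, the family that performed well in all conditions
model = Ridge(alpha=1.0).fit(X[:150], y[:150])
rmse = np.sqrt(mean_squared_error(y[150:], model.predict(X[150:])))
```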
8
Abstract
The advent of telerobotic systems has revolutionized various aspects of industry and human life. This technology is designed to augment human sensorimotor capabilities and extend them beyond natural competence. Classic examples are space and underwater applications, where distance and access are the two major physical barriers to be overcome with this technology. In modern examples, telerobotic systems have been used in several clinical applications, including teleoperated surgery and telerehabilitation, where a significant amount of research and development has taken place owing to the major benefits in terms of medical outcomes. Recently, telerobotic systems have been combined with advanced artificial intelligence modules to better share agency with the operator and open new doors for medical automation. This review provides a comprehensive analysis of the literature on various topologies of telerobotic systems in the medical domain, while shedding light on the different levels of autonomy for this technology, from direct control up to command-tracking autonomous telerobots. Existing challenges, including instrumentation, transparency, autonomy, stochastic communication delays, and stability, are discussed, together with the current directions of research on the benefits to telemedicine and medical automation and the future vision for this technology.
9
Castillo CSM, Wilson S, Vaidyanathan R, Atashzar SF. Wearable MMG-Plus-One Armband: Evaluation of Normal Force on Mechanomyography (MMG) to Enhance Human-Machine Interfacing. IEEE Trans Neural Syst Rehabil Eng 2020; 29:196-205. [PMID: 33290226] [DOI: 10.1109/TNSRE.2020.3043368]
Abstract
In this paper, we introduce a new mode of mechanomyography (MMG) signal capture for enhancing the performance of human-machine interfaces (HMIs) through modulation of the normal pressure at the sensor location. This novel approach enables increased MMG signal resolution through a tunable degree of freedom normal to the sensor-skin contact area. We detail the mechatronic design, experimental validation, and a user study of an armband with embedded acoustic sensors demonstrating this capacity. The design is motivated by the nonlinear viscoelasticity of the tissue, which increases with normal surface pressure. This, in theory, results in higher conductivity of mechanical waves and hypothetically allows interfacing with deeper muscles, thus enhancing the discriminative information content of the signal space. Ten subjects (seven able-bodied and three trans-radial amputees) participated in a study consisting of the classification of hand gestures through MMG while increasing levels of contact force were administered. Four MMG channels were positioned around the forearm, placed over the flexor carpi radialis, brachioradialis, extensor digitorum communis, and flexor carpi ulnaris muscles. A total of 852 spectrotemporal features were extracted (213 features per channel) and passed through Neighborhood Component Analysis (NCA) to select the most informative neurophysiological subspace of the features for classification. A linear support vector machine (SVM) then classified the intended motion of the user. The results indicate that increasing the normal force between the MMG sensor and the skin can improve the discriminative power of the classifier, and that the corresponding pattern can be user-specific. These results have significant implications for embedding MMG sensors in sockets for prosthetic limb control and HMI.
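A minimal sketch of the classification pipeline described above, using scikit-learn. Here NeighborhoodComponentsAnalysis (a supervised linear transform) stands in for the paper's NCA-based feature-selection step, followed by a linear SVM; the synthetic data and the number of components are illustrative assumptions only.

```python
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 852))     # 852 spectrotemporal MMG features (4 x 213)
y = rng.integers(0, 5, size=300)        # e.g., five hand-gesture classes

clf = make_pipeline(
    StandardScaler(),
    NeighborhoodComponentsAnalysis(n_components=20, random_state=0),
    LinearSVC(),                        # linear SVM on the reduced feature space
)
clf.fit(X[:200], y[:200])
accuracy = clf.score(X[200:], y[200:])
```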