1. Uba J, Jurewicz KA. A review on development approaches for 3D gestural embodied human-computer interaction systems. Applied Ergonomics 2024;121:104359. PMID: 39067282. DOI: 10.1016/j.apergo.2024.104359.
Abstract
The integration of 3D gestural embodied human-computer interaction (eHCI) has provided an avenue for contactless interaction with systems. The design of gestural systems follows two approaches: a technology-based approach and a human-based approach. This study reviews the existing literature on development approaches for 3D gestural eHCI to characterize the current state of research under these approaches and to identify potential areas for future exploration. Articles were gathered from three databases: PsycINFO, ScienceDirect, and IEEE Xplore. A total of 35 articles were identified, of which 18 used human-based methods and 17 used technology-based methods. The findings shed light on inconsistencies between developers and users in preferred hand gesture poses and identify factors influencing users' gesture choices. Implementing the consolidated findings has the potential to improve human readiness for 3D gestural eHCI technologies.
Affiliation(s)
- Jimmy Uba
- Oklahoma State University, School of Industrial Engineering and Management, College of Engineering, Architecture, and Technology, 329 Engineering North, Stillwater, OK, 74078, USA
- Katherina A Jurewicz
- Oklahoma State University, School of Industrial Engineering and Management, College of Engineering, Architecture, and Technology, 329 Engineering North, Stillwater, OK, 74078, USA.
2. Ramasubramanian AK, Kazasidis M, Fay B, Papakostas N. On the Evaluation of Diverse Vision Systems towards Detecting Human Pose in Collaborative Robot Applications. Sensors 2024;24:578. PMID: 38257671; PMCID: PMC10818797. DOI: 10.3390/s24020578.
Abstract
Tracking human operators working in the vicinity of collaborative robots can improve the design of safety architectures, ergonomics, and the execution of assembly tasks in human-robot collaboration scenarios. Three commercial spatial computation kits were used, along with their software development kits, which provide various real-time functionalities for tracking human poses. The paper explored the possibility of combining the capabilities of different hardware systems and software frameworks to achieve better performance and accuracy in detecting human pose in collaborative robotic applications. The study assessed their performance for two different human poses at six depth levels, comparing raw data against noise-reduced, filtered data. A laser measurement device was employed as a ground-truth indicator, with the average root mean square error (RMSE) as the error metric. The results were analysed and compared in terms of positional accuracy and repeatability, indicating that the sensors' performance depends on the tracking distance. A Kalman-based filter was applied to fuse the human skeleton data and reconstruct the operator's poses, taking each sensor's performance in different distance zones into account. The results indicated that at distances below 3 m, the Microsoft Azure Kinect demonstrated the best tracking performance, followed by the Intel RealSense D455 and the Stereolabs ZED2, while at ranges beyond 3 m, the ZED2 had superior tracking performance.
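The distance-dependent fusion the abstract describes can be approximated with a per-joint Kalman update in which each sensor's measurement noise is switched by distance zone. The sketch below is a minimal illustration under assumed noise figures; the variance values, the 3 m zone boundary, and the sensor keys are stand-ins, not the authors' calibrated parameters.

```python
import numpy as np

# Assumed per-sensor measurement variance (m^2) by distance zone, mirroring the
# finding that Azure Kinect leads below 3 m and ZED2 beyond it. Illustrative only.
SENSOR_VARIANCE = {
    "azure_kinect":   {"near": 0.0004, "far": 0.0040},
    "realsense_d455": {"near": 0.0016, "far": 0.0030},
    "zed2":           {"near": 0.0025, "far": 0.0009},
}

def zone(depth_m: float) -> str:
    return "near" if depth_m < 3.0 else "far"

class JointKalman1D:
    """Constant-position Kalman filter for one coordinate of one skeleton joint."""
    def __init__(self, process_noise: float = 1e-4):
        self.x = None              # position estimate (m)
        self.p = 1.0               # estimate variance
        self.q = process_noise     # variance added per predict step

    def update(self, z: float, r: float) -> float:
        if self.x is None:         # initialise from the first measurement
            self.x, self.p = z, r
            return self.x
        self.p += self.q                    # predict (joint assumed near-static)
        k = self.p / (self.p + r)           # Kalman gain
        self.x += k * (z - self.x)          # correct with measurement z
        self.p *= 1.0 - k
        return self.x

def fuse(filter_: JointKalman1D, readings: dict, depth_m: float) -> float:
    """Fold each sensor's reading into the shared estimate, weighted by zone noise."""
    est = filter_.x
    for sensor, z in readings.items():
        est = filter_.update(z, SENSOR_VARIANCE[sensor][zone(depth_m)])
    return est

def rmse(estimates, ground_truth) -> float:
    """Average error against, e.g., a laser-measured reference position."""
    e, g = np.asarray(estimates), np.asarray(ground_truth)
    return float(np.sqrt(np.mean((e - g) ** 2)))

kf = JointKalman1D()
print(fuse(kf, {"azure_kinect": 2.51, "realsense_d455": 2.48, "zed2": 2.55}, depth_m=2.5))
```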
Affiliation(s)
- Nikolaos Papakostas
- Laboratory for Advanced Manufacturing Simulation and Robotics, School of Mechanical and Materials Engineering, University College Dublin, Belfield, D04 V1W8 Dublin, Ireland; (A.K.R.); (M.K.); (B.F.)
3. Garcia PP, Santos TG, Machado MA, Mendes N. Deep Learning Framework for Controlling Work Sequence in Collaborative Human-Robot Assembly Processes. Sensors 2023;23:553. PMID: 36617153; PMCID: PMC9823442. DOI: 10.3390/s23010553.
Abstract
The human-robot collaboration (HRC) solutions presented so far have the disadvantage that the interaction between humans and robots is based on the human's state or on specific gestures purposely performed by the human. This increases the time required to perform a task and slows the pace of human labor, making such solutions unattractive. In this study, a different HRC concept is introduced: an HRC framework for managing assembly processes that are executed simultaneously or individually by humans and robots. This framework, based on deep learning models, uses only one type of data, RGB camera data, to make predictions about the collaborative workspace and human actions, and consequently to manage the assembly process. To validate the framework, an industrial HRC demonstrator was built to assemble a mechanical component. Four variants of the framework were created based on different convolutional neural network (CNN) structures: Faster R-CNN with ResNet-50 and ResNet-101 backbones, YOLOv2, and YOLOv3. The variant with the YOLOv3 structure showed the best performance, achieving a mean average precision of 72.26% and allowing the industrial demonstrator to successfully complete all assembly tasks within the desired time window. The HRC framework has proven effective for industrial assembly applications.
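As a rough illustration of the control loop such a framework implies (detect the workspace state from RGB frames, then dispatch the next assembly action), here is a minimal sketch. It substitutes an off-the-shelf ultralytics YOLO model for the authors' trained YOLOv3; the weights file, class labels, and state-to-action table are assumptions for illustration, not the paper's artifacts.

```python
# Sketch of an HRC work-sequence controller driven by a single RGB camera.
import cv2
from ultralytics import YOLO  # pip install ultralytics

model = YOLO("yolov8n.pt")    # stand-in for a custom-trained assembly detector

# Hypothetical mapping from detected workspace states to the next task.
NEXT_STEP = {"base_placed": "robot_insert_pin", "pin_inserted": "human_fasten"}

def classify_workspace(frame):
    """Return the set of detected labels above a confidence threshold."""
    results = model(frame, verbose=False)[0]
    names = results.names
    return {names[int(b.cls)] for b in results.boxes if float(b.conf) > 0.5}

cap = cv2.VideoCapture(0)     # or a recorded RGB stream of the workstation
for _ in range(300):          # bounded demo loop
    ok, frame = cap.read()
    if not ok:
        break
    labels = classify_workspace(frame)
    for state, action in NEXT_STEP.items():
        if state in labels:
            print(f"state {state!r} detected -> dispatch {action!r}")
cap.release()
```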
Affiliation(s)
- Pedro P. Garcia
- UNIDEMI, Department of Mechanical and Industrial Engineering, NOVA School of Science and Technology, Universidade NOVA de Lisboa, 2829-516 Caparica, Portugal
| | - Telmo G. Santos
- UNIDEMI, Department of Mechanical and Industrial Engineering, NOVA School of Science and Technology, Universidade NOVA de Lisboa, 2829-516 Caparica, Portugal
- Laboratório Associado de Sistemas Inteligentes, LASI, 4800-058 Guimarães, Portugal
| | - Miguel A. Machado
- UNIDEMI, Department of Mechanical and Industrial Engineering, NOVA School of Science and Technology, Universidade NOVA de Lisboa, 2829-516 Caparica, Portugal
- Laboratório Associado de Sistemas Inteligentes, LASI, 4800-058 Guimarães, Portugal
- Nuno Mendes
- UNIDEMI, Department of Mechanical and Industrial Engineering, NOVA School of Science and Technology, Universidade NOVA de Lisboa, 2829-516 Caparica, Portugal
- Laboratório Associado de Sistemas Inteligentes, LASI, 4800-058 Guimarães, Portugal
4. Carfi A, Mastrogiovanni F. Gesture-Based Human-Machine Interaction: Taxonomy, Problem Definition, and Analysis. IEEE Transactions on Cybernetics 2023;53:497-513. PMID: 34910648. DOI: 10.1109/TCYB.2021.3129119.
Abstract
The possibility for humans to interact with physical or virtual systems using gestures has been vastly explored by researchers and designers over the last 20 years as a source of new and intuitive interaction modalities. Unfortunately, the literature on gestural interaction is not homogeneous and is characterized by a lack of shared terminology. This leads to fragmented results and makes it difficult for research activities to build on state-of-the-art results and approaches. The analysis in this article aims to create a common conceptual design framework to support development efforts in gesture-based human-machine interaction (HMI). The main contributions can be summarized as follows: 1) we provide a broad definition of the notion of functional gesture in HMI; 2) we design a flexible and expandable gesture taxonomy; and 3) we put forward a detailed problem statement for gesture-based HMI. Finally, to support our main contribution, the article presents and analyzes the 83 most pertinent articles, classified on the basis of our taxonomy and problem statement.
5. Robinson N, Tidd B, Campbell D, Kulić D, Corke P. Robotic Vision for Human-Robot Interaction and Collaboration: A Survey and Systematic Review. ACM Transactions on Human-Robot Interaction 2022. DOI: 10.1145/3570731.
Abstract
Robotic vision for human-robot interaction and collaboration is a critical process by which robots collect and interpret detailed information about human actions, goals, and preferences, enabling them to provide more useful services to people. This survey and systematic review presents a comprehensive analysis of robotic vision in human-robot interaction and collaboration over the last 10 years. From a detailed search of 3850 articles, systematic extraction and evaluation were used to identify and explore 310 papers in depth. These papers described robots with some level of autonomy using robotic vision for locomotion, manipulation, and/or visual communication to collaborate or interact with people. The paper provides an in-depth analysis of current trends, common domains, methods and procedures, technical processes, datasets and models, experimental testing, sample populations, performance metrics, and future challenges. The review found that robotic vision was often used for action and gesture recognition, robot movement in human spaces, object handover and collaborative actions, social communication, and learning from demonstration. Few high-impact and novel techniques from the computer vision field had been translated into human-robot interaction and collaboration. Overall, notable advancements have been made in how to develop and deploy robots to assist people.
Affiliation(s)
- Nicole Robinson
- Australian Research Council Centre of Excellence for Robotic Vision, School of Electrical Engineering & Robotics, QUT Centre for Robotics, Queensland University of Technology, Australia
- Turner Institute for Brain and Mental Health, Faculty of Engineering, Monash University, Australia
- Brendan Tidd
- Australian Research Council Centre of Excellence for Robotic Vision, School of Electrical Engineering & Robotics, QUT Centre for Robotics, Queensland University of Technology, Australia
- Dylan Campbell
- Visual Geometry Group, Department of Engineering Science, University of Oxford, United Kingdom
- Dana Kulić
- Australian Research Council Centre of Excellence for Robotic Vision, Faculty of Engineering, Monash University, Australia
- Peter Corke
- Australian Research Council Centre of Excellence for Robotic Vision, School of Electrical Engineering & Robotics, QUT Centre for Robotics, Queensland University of Technology, Australia
6. Slekiene J, Chidziwisano K, Morse T. Does Poor Mental Health Impair the Effectiveness of Complementary Food Hygiene Behavior Change Intervention in Rural Malawi? International Journal of Environmental Research and Public Health 2022;19:10589. PMID: 36078302; PMCID: PMC9518201. DOI: 10.3390/ijerph191710589.
Abstract
Mental disorders have the potential to affect an individual's capacity to perform daily household activities involving water, sanitation, and hygiene (including food hygiene) that require effort, time, and strong internal motivation. However, there has been little detailed assessment of the influence of mental health on food hygiene behaviors at the household level. We conducted a follow-up study to detect the effects of mental health on food hygiene behaviors after delivery of a food hygiene intervention to child caregivers in rural Malawi. Face-to-face interviews, based on the Risks, Attitudes, Norms, Abilities, and Self-regulation (RANAS) model, were conducted with 819 participants (control and intervention groups) to assess their handwashing and food hygiene-related behaviors. Mental health was assessed using the validated Self-Reporting Questionnaire. The results showed a significant negative relationship between mental health and handwashing with soap (r = -0.135) and keeping utensils in an elevated place (r = -0.093). Further, a significant difference in handwashing with soap was found between people with good versus poor mental health (p = 0.050) in the intervention group. The results showed that the influence of the intervention on handwashing with soap was mediated by mental health. Thus, integrating mental health into food hygiene interventions can improve outcomes for caregivers with poor mental health.
Affiliation(s)
- Jurgita Slekiene
- Global Health Engineering (GHE), Department of Mechanical and Process Engineering (D-MAVT), ETH Zurich, Clausiusstrasse 37, 8092 Zurich, Switzerland
- Kondwani Chidziwisano
- Centre for Water, Sanitation, Health and Appropriate Technology Development (WASHTED), Malawi University of Business and Applied Sciences (MUBAS), Private Bag 303, Chichiri, Blantyre 3, Malawi
- Department of Environmental Health, Malawi University of Business and Applied Sciences (MUBAS), Private Bag 303, Chichiri, Blantyre 3, Malawi
- Tracy Morse
- Department of Civil and Environmental Engineering, University of Strathclyde, Level 5 James Weir Building, Glasgow G1 1XQ, UK
7. Energy–Accuracy Aware Finger Gesture Recognition for Wearable IoT Devices. Sensors 2022;22:4801. PMID: 35808298; PMCID: PMC9268903. DOI: 10.3390/s22134801.
Abstract
Wearable Internet of Things (IoT) devices can be used efficiently for gesture recognition applications. Such applications require high recognition accuracy with low energy consumption, two goals that are difficult to satisfy simultaneously. In this paper, we design a finger gesture recognition system using a wearable IoT device. The proposed system uses a lightweight multi-layer perceptron (MLP) classifier, which can be implemented even on a low-end microcontroller unit (MCU), with a two-axis flex sensor. To achieve high recognition accuracy with low energy consumption, we first design a framework for the finger gesture recognition system, including its components, followed by system-level performance and energy models. We then analyze system-level accuracy and energy optimization issues and explore the numerous design choices to achieve energy-accuracy aware finger gesture recognition, targeting four commonly used low-end MCUs. Our extensive simulations and measurements using prototypes demonstrate that the proposed design achieves up to 95.5% recognition accuracy with energy consumption under 2.74 mJ per gesture on a low-end embedded wearable IoT device. We also provide the Pareto-optimal designs among a total of 159 design choices to achieve energy-accuracy aware design points under given energy or accuracy constraints.
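To make the classifier side of this pipeline concrete, the sketch below trains an intentionally small MLP of the kind that could be exported to a low-end MCU, using the parameter count as a crude proxy for per-inference energy. The layer size, window length, gesture count, and synthetic data are illustrative assumptions, not the paper's design.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

N_GESTURES = 8   # assumed gesture vocabulary size
WINDOW = 16      # assumed samples per gesture window from the two-axis flex sensor

rng = np.random.default_rng(0)
# Placeholder data: each row is one window (WINDOW samples x 2 axes, flattened).
X = rng.normal(size=(800, WINDOW * 2)).astype(np.float32)
y = rng.integers(0, N_GESTURES, size=800)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# One small hidden layer keeps the multiply-accumulate count (and thus energy) low.
clf = MLPClassifier(hidden_layer_sizes=(16,), activation="relu",
                    max_iter=500, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.3f}")

# Rough cost proxy for the energy-accuracy trade-off: parameters ~ MACs per inference.
n_params = sum(w.size for w in clf.coefs_) + sum(b.size for b in clf.intercepts_)
print(f"parameters (~ MACs per inference): {n_params}")
```

Sweeping hidden-layer width and window length in a loop of this kind yields the sort of energy-accuracy design space from which Pareto-optimal points can be selected.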
8. Surface Electromyography Signal Recognition Based on Deep Learning for Human-Robot Interaction and Collaboration. Journal of Intelligent & Robotic Systems 2022. DOI: 10.1007/s10846-022-01666-5.
9. Zhang G, Jing W, Tao H, Rahman MA, Salih SQ, Al-Saffar A, Zhang R. ADA-SR: Activity detection and analysis using security robots for reliable workplace safety. Work 2021;68:935-943. PMID: 33612535. DOI: 10.3233/WOR-203427.
Abstract
BACKGROUND: Human-Robot Interaction (HRI) has become a prominent solution for improving the robustness of real-time service provisioning through assisted functions for day-to-day activities. Applying robotic systems to security services helps improve the precision of event detection and environmental monitoring with ease.
OBJECTIVES: This paper discusses activity detection and analysis (ADA) using security robots in workplaces. The application scenario relies on processing image and sensor data for event and activity detection. Detected events are classified as normal or abnormal based on an analysis of the sensor and image data performed with a convolutional neural network. The method aims to improve detection accuracy by mitigating the deviations that are classified at different levels of the convolution process.
RESULTS: The differences are identified through independent data correlation and information processing. The performance of the proposed method is verified for three human activities (standing, walking, and running) as detected using the image and sensor datasets.
CONCLUSION: The results are compared with existing methods on the metrics of accuracy, classification time, and recall.
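For orientation, the following is a minimal sketch of a CNN classifier over RGB frames for the three activities the paper evaluates. The architecture, input resolution, and class list are illustrative assumptions; the authors' network is not reproduced here.

```python
import torch
import torch.nn as nn

ACTIVITIES = ["standing", "walking", "running"]

class ActivityCNN(nn.Module):
    """Small convolutional classifier mapping an RGB frame to an activity label."""
    def __init__(self, n_classes: int = len(ACTIVITIES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),      # global average pooling -> (batch, 64, 1, 1)
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                 # x: (batch, 3, H, W) RGB frames
        z = self.features(x).flatten(1)
        return self.head(z)

model = ActivityCNN().eval()
frame = torch.rand(1, 3, 224, 224)        # placeholder camera frame
with torch.no_grad():
    pred = model(frame).argmax(1).item()
print("detected activity:", ACTIVITIES[pred])
```

In a full ADA-style system, the per-frame logits would be fused with the robot's other sensor streams before flagging an event as abnormal.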
Affiliation(s)
- Guangnan Zhang
- School of Computer Science, Baoji University of Arts and Sciences, Baoji, China
- Wang Jing
- School of Computer Science, Baoji University of Arts and Sciences, Baoji, China
- Hai Tao
- School of Computer Science, Baoji University of Arts and Sciences, Baoji, China
- Institute for Big Data Analytics and Artificial Intelligence (IBDAAI), Universiti Teknologi MARA, Shah Alam, Malaysia
- Md Arafatur Rahman
- Faculty of Computing, IBM CoE, and Earth Resources and Sustainability Center, Universiti Malaysia Pahang, Pahang, Malaysia
- Sinan Q Salih
- Institute of Research and Development, Duy Tan University, Da Nang, Vietnam
- Ahmed Al-Saffar
- Faculty of Computing, IBM CoE, and Earth Resources and Sustainability Center, Universiti Malaysia Pahang, Pahang, Malaysia
- Renrui Zhang
- School of Electronics Engineering and Computer Science, Peking University, Beijing, China
10. A Novel Assisted Artificial Neural Network Modeling Approach for Improved Accuracy Using Small Datasets: Application in Residual Strength Evaluation of Panels with Multiple Site Damage Cracks. Applied Sciences 2020. DOI: 10.3390/app10228255.
Abstract
An artificial neural network (ANN) extracts knowledge from a training dataset and uses this acquired knowledge to forecast outputs for any new set of inputs. When the input/output relations are complex and highly non-linear, the ANN needs a relatively large training dataset (hundreds of data points) to capture these relations adequately. This paper introduces a novel assisted-ANN modeling approach that enables the development of ANNs using small datasets while maintaining high prediction accuracy. The approach uses parameters obtained from the known input/output relations (partial or full); these so-called assistance parameters are included as ANN inputs in addition to the traditional direct independent inputs. The proposed assisted approach is applied to predicting the residual strength of panels with multiple site damage (MSD) cracks. Different assistance levels (four levels) and different training dataset sizes (from 75 down to 22 data points) are investigated, and the results are compared to the traditional approach. The results show that the assisted approach helps achieve high prediction accuracy (<3% average error). The relative accuracy improvement is greater (up to 46%) for ANN learning algorithms that otherwise give lower prediction accuracy, and it becomes more significant (up to 38%) for smaller dataset sizes.
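The core idea, appending inputs derived from a partially known analytical relation to the raw inputs, can be shown in a few lines. The toy target function, the assistance formula, and the network size below are illustrative assumptions; the paper's residual-strength relations are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 30                                      # small dataset, the regime the paper targets
X = rng.uniform(0.5, 2.0, size=(n, 2))      # e.g., crack length, ligament width (assumed)

def assistance(X):
    # Partially known physics, e.g., a net-section-style strength estimate (assumed).
    return (X[:, 1] - X[:, 0]).clip(min=0.05).reshape(-1, 1)

# Toy ground truth: a non-linear function of the assistance quantity plus noise.
y = assistance(X).ravel() ** 0.5 + 0.05 * rng.normal(size=n)

X_aug = np.hstack([X, assistance(X)])       # direct inputs + assistance parameter

for name, data in [("traditional", X), ("assisted", X_aug)]:
    X_tr, X_te, y_tr, y_te = train_test_split(data, y, test_size=0.3, random_state=0)
    m = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                     random_state=0).fit(X_tr, y_tr)
    print(f"{name}: R^2 = {m.score(X_te, y_te):.3f}")
```

Because the assistance input already encodes part of the input/output relation, the network has less to learn from the few available points, which is the mechanism behind the reported accuracy gains.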
11. Górriz JM, Ramírez J, Ortíz A, Martínez-Murcia FJ, Segovia F, Suckling J, Leming M, Zhang YD, Álvarez-Sánchez JR, Bologna G, Bonomini P, Casado FE, Charte D, Charte F, Contreras R, Cuesta-Infante A, Duro RJ, Fernández-Caballero A, Fernández-Jover E, Gómez-Vilda P, Graña M, Herrera F, Iglesias R, Lekova A, de Lope J, López-Rubio E, Martínez-Tomás R, Molina-Cabello MA, Montemayor AS, Novais P, Palacios-Alonso D, Pantrigo JJ, Payne BR, de la Paz López F, Pinninghoff MA, Rincón M, Santos J, Thurnhofer-Hemsi K, Tsanas A, Varela R, Ferrández JM. Artificial intelligence within the interplay between natural and artificial computation: Advances in data science, trends and applications. Neurocomputing 2020. DOI: 10.1016/j.neucom.2020.05.078.
12. WEEE Recycling and Circular Economy Assisted by Collaborative Robots. Applied Sciences 2020. DOI: 10.3390/app10144800.
Abstract
Considering the amount of waste electrical and electronic equipment (WEEE) generated each year at an increasing rate, it is of crucial importance to develop circular economy solutions that prioritize reuse and recycling and reduce the amount of waste disposed of in landfills. This paper analyses the evolution of WEEE collection volumes and recycling rates at the national and European levels. It also describes the regulatory framework and possible future government policy measures to foster a circular economy, and identifies the different parts and materials that can be recovered from the recycling process, with special emphasis on plastics. Finally, it describes a recycling line designed for dismantling computer cathode ray tubes (CRTs) that combines an innovative participation of people and collaborative robots, leading to an effective and efficient material recovery solution. The key to this human-robot collaboration lies in assigning to operators only those tasks that require human skills and sending all other tasks to robots. First results from the model show better economic performance than current manual processes, mainly owing to a higher degree of separation of the recovered materials, plastic in particular, and thus higher revenues. The collaboration also brings considerable additional benefits for the environment, through a higher recovery rate by weight, and for workers, who can make intelligent decisions in the factory and enjoy a safer working environment by avoiding the most dangerous tasks.