1. Younis R, Yamlahi A, Bodenstedt S, Scheikl PM, Kisilenko A, Daum M, Schulze A, Wise PA, Nickel F, Mathis-Ullrich F, Maier-Hein L, Müller-Stich BP, Speidel S, Distler M, Weitz J, Wagner M. A surgical activity model of laparoscopic cholecystectomy for co-operation with collaborative robots. Surg Endosc 2024; 38:4316-4328. PMID: 38872018; PMCID: PMC11289174; DOI: 10.1007/s00464-024-10958-w.
Abstract
BACKGROUND Laparoscopic cholecystectomy is a very frequent surgical procedure. However, in an ageing society, fewer surgical staff will be available to perform surgery on patients. Collaborative surgical robots (cobots) could address surgical staff shortages and workload. To achieve context-awareness for surgeon-robot collaboration, recognition of the intraoperative action workflow is a key challenge. METHODS A surgical process model was developed for intraoperative surgical activities, including actor, instrument, action, and target, in laparoscopic cholecystectomy (excluding camera guidance). These activities, as well as instrument presence and surgical phases, were annotated in videos of laparoscopic cholecystectomy performed on human patients (n = 10) and on explanted porcine livers (n = 10). The machine learning algorithm Distilled-Swin was trained on our own annotated dataset and the CholecT45 dataset. The model was validated using a fivefold cross-validation approach. RESULTS In total, 22,351 activities were annotated, with a cumulative duration of 24.9 h of video segments. The machine learning algorithm trained and validated on our own dataset scored a mean average precision (mAP) of 25.7% and a top-5 accuracy of 85.3%. With training and validation on our dataset and CholecT45, the algorithm scored a mAP of 37.9%. CONCLUSIONS An activity model was developed and applied for the fine-granular annotation of laparoscopic cholecystectomies in two surgical settings. A recognition algorithm trained on our own annotated dataset and CholecT45 achieved a higher performance than training only on CholecT45 and can recognize frequently occurring activities well, but not infrequent activities. The analysis of the annotated dataset allowed for the quantification of the potential of collaborative surgical robots to address the workload of surgical staff: if collaborative surgical robots could grasp and hold tissue, up to 83.5% of the assistant's tissue-interacting tasks (i.e. excluding camera guidance) could be performed by robots.
Affiliation(s)
- R Younis
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- A Yamlahi
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- S Bodenstedt
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- P M Scheikl
- Surgical Planning and Robotic Cognition (SPARC), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-University Erlangen-Nürnberg, Erlangen, Germany
- A Kisilenko
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- M Daum
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307 Dresden, Germany
- A Schulze
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307 Dresden, Germany
- P A Wise
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- F Nickel
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- F Mathis-Ullrich
- Surgical Planning and Robotic Cognition (SPARC), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-University Erlangen-Nürnberg, Erlangen, Germany
- L Maier-Hein
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- B P Müller-Stich
- Department for Abdominal Surgery, University Center for Gastrointestinal and Liver Diseases, Basel, Switzerland
- S Speidel
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- M Distler
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307 Dresden, Germany
- J Weitz
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307 Dresden, Germany
- M Wagner
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307 Dresden, Germany

2. Sanders NE, Şener E, Chen KB. Robot-related injuries in the workplace: An analysis of OSHA Severe Injury Reports. Appl Ergon 2024; 121:104324. PMID: 39018706; DOI: 10.1016/j.apergo.2024.104324.
Abstract
Industrial robots are increasingly commonplace, but research on prototypical accidents and injuries has been sparse, hindering evidence-based safety strategies. Using Severe Injury Reports (SIRs) from the U.S. Occupational Safety and Health Administration (OSHA), we identified 77 robot-related accidents from 2015 to 2022. Of these, 54 involved stationary robots, resulting in 66 injuries, mainly finger amputations and fractures to the head and torso. Mobile robots caused 23 accidents, leading to 27 injuries, mainly fractures to the legs and feet. A two-stage deductive-inductive thematic analysis was performed on the text of the reports' final narratives to discover patterns in tasks, precipitating mechanisms, and contributing factors. Findings highlight the need for guards and collision-avoidance systems that detect individual extremities. Post-contact strategies should focus on mitigating finger amputations. More structured and detailed narratives in the SIRs are needed.
Affiliation(s)
- Nathan E Sanders
- North Carolina State University, Department of Industrial & Systems Engineering, United States of America.
- Elif Şener
- University of Leeds, School of Design, United Kingdom.
- Karen B Chen
- North Carolina State University, Department of Industrial & Systems Engineering, United States of America.

3. Wu XY, Shi JY, Qiao SC, Tonetti MS, Lai HC. Accuracy of robotic surgery for dental implant placement: A systematic review and meta-analysis. Clin Oral Implants Res 2024; 35:598-608. PMID: 38517053; DOI: 10.1111/clr.14255.
Abstract
OBJECTIVES To systematically analyze the accuracy of robotic surgery for dental implant placement. MATERIALS AND METHODS PubMed, Embase, and Cochrane CENTRAL were searched on October 25, 2023. Model studies or clinical studies reporting the accuracy of robotic surgery for dental implant placement among patients with missing or hopeless teeth were included. Risks of bias in clinical studies were assessed, and meta-analyses were undertaken. RESULTS Data from 8 clinical studies (109 patients, 242 implants) and 13 preclinical studies were included. Positional accuracy was measured by comparing the implant plan in pre-surgery CBCT with the actual implant position in post-surgery CBCT. For clinical studies, the pooled (95% confidence interval) platform, apex, and angular deviations were 0.68 (0.57, 0.79) mm, 0.67 (0.58, 0.75) mm, and 1.69 (1.25, 2.12)°, respectively. There was no statistically significant difference between the accuracy of implants placed in partially and fully edentulous patients. For model studies, the pooled platform, apex, and angular deviations were 0.72 (0.58, 0.86) mm, 0.90 (0.74, 1.06) mm, and 1.46 (1.22, 1.70)°, respectively. No adverse event was reported. CONCLUSION Within the limitations of the present systematic review, robotic surgery for dental implant placement showed suitable implant positional accuracy, and no obvious harm was reported. Both robotic systems and clinical studies on robotic surgery for dental implant placement should be further developed.
Affiliation(s)
- Xin-Yu Wu
- Shanghai Perio-Implant Innovation Center, Department of Oral Implantology, Shanghai Ninth People Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- College of Stomatology, Shanghai Jiao Tong University, Shanghai, China
- National Center of Stomatology, Shanghai, China
- National Clinical Research Center for Oral Diseases, Shanghai, China
- Shanghai Key Laboratory of Stomatology, Shanghai, China
- Jun-Yu Shi
- Shanghai Perio-Implant Innovation Center, Department of Oral Implantology, Shanghai Ninth People Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- College of Stomatology, Shanghai Jiao Tong University, Shanghai, China
- National Center of Stomatology, Shanghai, China
- National Clinical Research Center for Oral Diseases, Shanghai, China
- Shanghai Key Laboratory of Stomatology, Shanghai, China
- Shi-Chong Qiao
- Shanghai Perio-Implant Innovation Center, Department of Oral Implantology, Shanghai Ninth People Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- College of Stomatology, Shanghai Jiao Tong University, Shanghai, China
- National Center of Stomatology, Shanghai, China
- National Clinical Research Center for Oral Diseases, Shanghai, China
- Shanghai Key Laboratory of Stomatology, Shanghai, China
- Maurizio S Tonetti
- Shanghai Perio-Implant Innovation Center, Department of Oral Implantology, Shanghai Ninth People Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- College of Stomatology, Shanghai Jiao Tong University, Shanghai, China
- National Center of Stomatology, Shanghai, China
- National Clinical Research Center for Oral Diseases, Shanghai, China
- Shanghai Key Laboratory of Stomatology, Shanghai, China
- European Research Group on Periodontology, Genova, Italy
- Hong-Chang Lai
- Shanghai Perio-Implant Innovation Center, Department of Oral Implantology, Shanghai Ninth People Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- College of Stomatology, Shanghai Jiao Tong University, Shanghai, China
- National Center of Stomatology, Shanghai, China
- National Clinical Research Center for Oral Diseases, Shanghai, China
- Shanghai Key Laboratory of Stomatology, Shanghai, China

4. Fischer-Janzen A, Wendt TM, Van Laerhoven K. A scoping review of gaze and eye tracking-based control methods for assistive robotic arms. Front Robot AI 2024; 11:1326670. PMID: 38440775; PMCID: PMC10909843; DOI: 10.3389/frobt.2024.1326670.
Abstract
Background: Assistive Robotic Arms (ARAs) are designed to assist physically disabled people with daily activities. Existing joysticks and head controls are not applicable for severely disabled people, such as people with Locked-in Syndrome. Therefore, eye tracking control is part of ongoing research. The related literature spans many disciplines, creating a heterogeneous field that makes it difficult to gain an overview. Objectives: This work focuses on ARAs that are controlled by gaze and eye movements. By answering the research questions, this paper provides details on the design of the systems, a comparison of input modalities, methods for measuring the performance of these controls, and an outlook on research areas that gained interest in recent years. Methods: This review was conducted as outlined in the PRISMA 2020 Statement. After identifying a wide range of approaches in use, the authors decided to use the PRISMA-ScR extension for a scoping review to present the results. The identification process was carried out by screening three databases. After the screening process, a snowball search was conducted. Results: 39 articles and 6 reviews were included in this article. Characteristics related to the system and study design were extracted and presented, divided into three groups based on the use of eye tracking. Conclusion: This paper aims to provide an overview for researchers new to the field by offering insight into eye tracking-based robot controllers. We have identified open questions that need to be answered in order to provide people with severe motor function loss with systems that are highly usable and accessible.
Affiliation(s)
- Anke Fischer-Janzen
- Faculty Economy, Work-Life Robotics Institute, University of Applied Sciences Offenburg, Offenburg, Germany
- Thomas M. Wendt
- Faculty Economy, Work-Life Robotics Institute, University of Applied Sciences Offenburg, Offenburg, Germany
- Kristof Van Laerhoven
- Ubiquitous Computing, Department of Electrical Engineering and Computer Science, University of Siegen, Siegen, Germany

5. Vukelić M, Bui M, Vorreuther A, Lingelbach K. Combining brain-computer interfaces with deep reinforcement learning for robot training: a feasibility study in a simulation environment. Front Neuroergon 2023; 4:1274730. PMID: 38234482; PMCID: PMC10790930; DOI: 10.3389/fnrgo.2023.1274730.
Abstract
Deep reinforcement learning (RL) is used as a strategy to teach robot agents how to autonomously learn complex tasks. While sparsity is a natural way to define a reward in realistic robot scenarios, it provides poor learning signals for the agent, thus making the design of good reward functions challenging. To overcome this challenge, learning from human feedback through an implicit brain-computer interface (BCI) is used. We combined a BCI with deep RL for robot training in a physically realistic 3-D simulation environment. In a first study, we compared the feasibility of different electroencephalography (EEG) systems (wet vs. dry electrodes) and their application for automatic classification of perceived errors during a robot task with different machine learning models. In a second study, we compared the performance of the BCI-based deep RL training to feedback explicitly given by participants. Our findings from the first study indicate that a high-quality dry-electrode EEG system can provide a robust and fast method for automatically assessing robot behavior using a sophisticated convolutional neural network model. The results of our second study show that the implicit BCI-based deep RL version in combination with the dry EEG system can significantly accelerate the learning process in a realistic 3-D robot simulation environment. Performance of the BCI-trained deep RL model was even comparable to that achieved by the approach with explicit human feedback. Our findings emphasize BCI-based deep RL methods as a valid alternative in those human-robot applications where cognitively demanding explicit human feedback is not available.
Affiliation(s)
- Mathias Vukelić
- Applied Neurocognitive Systems, Fraunhofer Institute for Industrial Engineering (IAO), Stuttgart, Germany
- Michael Bui
- Applied Neurocognitive Systems, Fraunhofer Institute for Industrial Engineering (IAO), Stuttgart, Germany
- Anna Vorreuther
- Applied Neurocognitive Systems, Institute of Human Factors and Technology Management (IAT), University of Stuttgart, Stuttgart, Germany
- Katharina Lingelbach
- Applied Neurocognitive Systems, Fraunhofer Institute for Industrial Engineering (IAO), Stuttgart, Germany

6. Bouillet K, Lemonnier S, Clanche F, Gauchard G. Does the introduction of a cobot change the productivity and posture of the operators in a collaborative task? PLoS One 2023; 18:e0289787. PMID: 37556492; PMCID: PMC10411803; DOI: 10.1371/journal.pone.0289787.
Abstract
Musculoskeletal disorders (MSDs) are the main occupational diseases; they are pathologies of multifactorial origin, with posture being one contributing factor. The introduction of collaborative robots (cobots) creates new human-robot collaboration situations that can modify operator behavior and performance. These changes raise questions about human-robot team performance and operator health. This study aims to understand the consequences of introducing a cobot on work performance, operator posture, and the quality of interactions. It also aims to evaluate the impact of two levels of difficulty in a dual task on these measures. For this purpose, thirty-four participants performed an assembly task in collaboration with a co-worker, either a human or a cobot with two articulated arms. In addition to this motor task, the participants had to perform an auditory task with two levels of difficulty (dual task). They were equipped with seventeen motion-capture sensors. The collaborative work was filmed with a camera, and the actions of the participants and co-worker were coded based on the dichotomy of idle and activity. Interactions were coded based on time out, cooperation, and collaboration. The results showed that performance (number of products manufactured) was lower when the participant collaborated with a cobot rather than a human, with less collaboration and activity time. However, RULA scores were lower (indicating a reduced risk of musculoskeletal disorders) during collaboration with a cobot than with a human. Despite a decrease in production and a loss of fluidity, likely due to the characteristics of the cobot, working in collaboration with a cobot makes the task safer in terms of the risk of musculoskeletal disorders.
Affiliation(s)
- Kévin Bouillet
- EA 3450 DevAH “Développement, Adaptation et Handicap”, Université de Lorraine, Vandœuvre-lès-Nancy, Metz, France
- Sophie Lemonnier
- EA 7312 PErSEUs “Psychologie Ergonomique et Sociale pour l’Expérience Utilisateurs”, Université de Lorraine, Metz, France
- Fabien Clanche
- Faculté des Sciences du Sport, Université de Lorraine, Villers-lès-Nancy, Metz, France
- Gérome Gauchard
- EA 3450 DevAH “Développement, Adaptation et Handicap”, Université de Lorraine, Vandœuvre-lès-Nancy, Metz, France
- Faculté des Sciences du Sport, Université de Lorraine, Villers-lès-Nancy, Metz, France

7. Grassini S. Development and validation of the AI attitude scale (AIAS-4): a brief measure of general attitude toward artificial intelligence. Front Psychol 2023; 14:1191628. PMID: 37554139; PMCID: PMC10406504; DOI: 10.3389/fpsyg.2023.1191628.
Abstract
The rapid advancement of artificial intelligence (AI) has generated an increasing demand for tools that can assess public attitudes toward AI. This study presents the development and validation of the AI Attitude Scale (AIAS), a concise self-report instrument designed to evaluate public perceptions of AI technology. The first version of the AIAS comprises five items, including one reverse-scored item, which gauge individuals' beliefs about AI's influence on their lives, careers, and humanity overall. The scale is designed to capture attitudes toward AI, focusing on the perceived utility and potential impact of technology on society and humanity. The psychometric properties of the scale were investigated using diverse samples in two separate studies. An exploratory factor analysis was initially conducted on the preliminary 5-item version of the scale. This exploratory validation study revealed the need to divide the scale into two factors. While the results demonstrated satisfactory internal consistency for the overall scale and its correlation with related psychometric measures, separate analyses for each factor showed robust internal consistency for Factor 1 but insufficient internal consistency for Factor 2. As a result, a second version of the scale was developed and validated, omitting the item that correlated weakly with the remaining items in the questionnaire. The refined final 1-factor, 4-item AIAS demonstrated superior overall internal consistency compared to the initial 5-item scale and the proposed factors. Further confirmatory factor analyses, performed on a different sample of participants, confirmed that the 1-factor, 4-item model of the AIAS exhibited an adequate fit to the data, providing additional evidence for the scale's structural validity and generalizability across diverse populations. In conclusion, the analyses reported in this article suggest that the developed and validated 4-item AIAS can be a valuable instrument for researchers and professionals working on AI development who seek to understand and study users' general attitudes toward AI.
Affiliation(s)
- Simone Grassini
- Department of Psychosocial Science, University of Bergen, Bergen, Norway
- Cognitive and Behavioral Neuroscience Lab, University of Stavanger, Stavanger, Norway

8. Othman U, Yang E. Human-Robot Collaborations in Smart Manufacturing Environments: Review and Outlook. Sensors (Basel) 2023; 23:5663. PMID: 37420834; DOI: 10.3390/s23125663.
Abstract
The successful implementation of Human-Robot Collaboration (HRC) has become a prominent feature of smart manufacturing environments. Key industrial requirements, such as flexibility, efficiency, collaboration, consistency, and sustainability, present pressing HRC needs in the manufacturing sector. This paper provides a systematic review and an in-depth discussion of the key technologies currently being employed in smart manufacturing with HRC systems. The work presented here focuses on the design of HRC systems, with particular attention given to the various levels of Human-Robot Interaction (HRI) observed in the industry. The paper also examines the key technologies being implemented in smart manufacturing, including Artificial Intelligence (AI), Collaborative Robots (Cobots), Augmented Reality (AR), and Digital Twin (DT), and discusses their applications in HRC systems. The benefits and practical instances of deploying these technologies are showcased, emphasizing the substantial prospects for growth and improvement in sectors such as automotive and food. However, the paper also addresses the limitations of HRC utilization and implementation and provides some insights into how the design of these systems should be approached in future work and research. Overall, this paper provides new insights into the current state of HRC in smart manufacturing and serves as a useful resource for those interested in the ongoing development of HRC systems in the industry.
Affiliation(s)
- Uqba Othman
- Department of Design, Manufacturing and Engineering Management, University of Strathclyde, Glasgow G1 1XJ, UK
- Erfu Yang
- Department of Design, Manufacturing and Engineering Management, University of Strathclyde, Glasgow G1 1XJ, UK

9. Gardecki A, Rut J, Klin B, Podpora M, Beniak R. Implementation of a Hybrid Intelligence System Enabling the Effectiveness Assessment of Interaction Channels Use in HMI. Sensors (Basel) 2023; 23:3826. PMID: 37112173; PMCID: PMC10140840; DOI: 10.3390/s23083826.
Abstract
The article presents the novel idea of an Interaction Quality Sensor (IQS), introduced in the complete solution of the Hybrid INTelligence (HINT) architecture for intelligent control systems. The proposed system is designed to use and prioritize multiple information channels (speech, images, videos) in order to optimize the information flow efficiency of interaction in HMI systems. The proposed architecture is implemented and validated in a real-world application: training unskilled workers, i.e., new employees with lower competencies and/or a language barrier. With the help of the HINT system, the man-machine communication channels are deliberately chosen based on IQS readouts, enabling an untrained, inexperienced, foreign employee candidate to become a good worker without requiring the presence of either an interpreter or an expert during training. The proposed implementation is in line with a labor market trend that displays significant fluctuations. The HINT system is designed to activate human resources and support organizations/enterprises in the effective assimilation of employees to the tasks performed on the production assembly line. The market need to solve this noticeable problem was caused by a large migration of employees within (and between) enterprises. The research results presented in the work show significant benefits of the methods used, while supporting multilingualism and optimizing the preselection of information channels.
Affiliation(s)
- Arkadiusz Gardecki
- Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, 45-758 Opole, Poland
- Weegree Sp. z o.o. S.K., 45-018 Opole, Poland
- Joanna Rut
- Faculty of Production Engineering and Logistics, Opole University of Technology, 45-272 Opole, Poland
- Bartlomiej Klin
- Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, 45-758 Opole, Poland
- Weegree Sp. z o.o. S.K., 45-018 Opole, Poland
- Michal Podpora
- Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, 45-758 Opole, Poland
- Weegree Sp. z o.o. S.K., 45-018 Opole, Poland
- Ryszard Beniak
- Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, 45-758 Opole, Poland
- Weegree Sp. z o.o. S.K., 45-018 Opole, Poland

10. Paliga M. The Relationships of Human-Cobot Interaction Fluency with Job Performance and Job Satisfaction among Cobot Operators-The Moderating Role of Workload. Int J Environ Res Public Health 2023; 20:5111. PMID: 36982018; PMCID: PMC10048792; DOI: 10.3390/ijerph20065111.
Abstract
Modern factories are subject to rapid technological changes, including the advancement of robotics. A key manufacturing solution in the fourth industrial revolution is the introduction of collaborative robots (cobots), which cooperate directly with human operators while executing shared tasks. Although collaborative robotics has tangible benefits, cobots pose several challenges to human-robot interaction. Proximity, unpredictable robot behavior, and switching the operator's role from co-operant to supervisor can negatively affect the operator's cognitive, emotional, and behavioral responses, resulting in lower well-being and decreased job performance. Therefore, proper actions are necessary to improve the interaction between the robot and its human counterpart. Specifically, the concept of human-robot interaction (HRI) fluency shows promising perspectives. However, research on the conditions affecting the relationships between HRI fluency and its outcomes is still in its infancy. Therefore, the aim of this cross-sectional survey study was twofold. First, the relationships of HRI fluency with job performance (i.e., task performance, organizational citizenship behavior, and creative performance) and job satisfaction were investigated. Second, the moderating role of quantitative workload in these associations was verified. The analyses, carried out on data from 200 male and female cobot operators working on the shop floor, showed positive relationships between HRI fluency, job performance, and job satisfaction. Moreover, the study confirmed the moderating role of quantitative workload in these relations: the higher the workload, the weaker the relationships between HRI fluency and its outcomes. The study findings are discussed within the theoretical framework of the Job Demands-Control-Support model.
Affiliation(s)
- Mateusz Paliga
- Institute of Psychology, Faculty of Social Sciences, University of Silesia in Katowice, 40-007 Katowice, Poland
11
Behrens R, Pliske G, Piatek S, Walcher F, Elkmann N. A statistical model to predict the occurrence of blunt impact injuries on the human hand-arm system. J Biomech 2023; 151:111517. [PMID: 36893519 DOI: 10.1016/j.jbiomech.2023.111517]
Abstract
Biomechanical limits based on pain thresholds ensure safety in workplaces where humans and cobots (collaborative robots) work together. Standardization bodies' decision to rely on pain thresholds stems from the assumption that such limits inherently protect humans from injury. This assumption has never been verified, though. This article reports on a study with 22 human subjects in which we studied injury onset in four locations of the hand-arm system using an impact pendulum. During the tests, the impact intensity was slowly increased over several weeks until a blunt injury, i.e., bruising or swelling, appeared in the body locations under load. A statistical model, which calculates injury limits for a given percentile, was developed based on the data. A comparison of our injury limits for the 25th percentile with existing pain limits confirms that pain limits provide suitable protection against impact injuries, albeit not for all body locations.
Affiliation(s)
- R Behrens
- Fraunhofer IFF, Sandtorstr. 22, 39106 Magdeburg, Germany.
- G Pliske
- Department of Trauma Surgery, Otto von Guericke University, Leipziger Str. 44, 39120 Magdeburg, Germany
- S Piatek
- Department of Trauma Surgery, Otto von Guericke University, Leipziger Str. 44, 39120 Magdeburg, Germany
- F Walcher
- Department of Trauma Surgery, Otto von Guericke University, Leipziger Str. 44, 39120 Magdeburg, Germany
- N Elkmann
- Fraunhofer IFF, Sandtorstr. 22, 39106 Magdeburg, Germany
12
Brecelj T, Petrič T. Stable Heteroclinic Channel Networks for Physical Human-Humanoid Robot Collaboration. Sensors (Basel) 2023; 23:1396. [PMID: 36772433 PMCID: PMC9921709 DOI: 10.3390/s23031396]
Abstract
Human-robot collaboration is one of the most challenging fields in robotics, as robots must understand human intentions and suitably cooperate with them in the given circumstances. Yet although this is one of the most investigated research areas in robotics, it remains in its infancy. In this paper, human-robot collaboration is addressed by applying a phase state system, guided by stable heteroclinic channel networks, to a humanoid robot. The base mathematical model is first defined and illustrated on a simple three-state system. Further on, an eight-state system is applied to a humanoid robot to guide it and make it perform different movements according to the forces exerted on its grippers. The movements presented in this paper are squatting, standing up, and walking forward and backward, while the motion velocity depends on the magnitude of the applied forces. The method presented in this paper proves to be a suitable way of controlling robots by means of physical human-robot interaction. As the phase state system and the robot movements can both be further extended to make the robot execute many other tasks, the proposed method provides a promising basis for further investigation and realization of physical human-robot interaction.
13
Evaluation of User Experience in Human–Robot Interaction: A Systematic Literature Review. Int J Soc Robot 2023. [DOI: 10.1007/s12369-022-00957-z]
14
Chiurazzi M, Alcaide JO, Diodato A, Menciassi A, Ciuti G. Spherical Wrist Manipulator Local Planner for Redundant Tasks in Collaborative Environments. Sensors (Basel) 2023; 23:677. [PMID: 36679473 PMCID: PMC9864082 DOI: 10.3390/s23020677]
Abstract
Standard industrial robotic manipulators use well-established, high-performing technologies. However, such manipulators do not guarantee a safe Human-Robot Interaction (HRI), limiting their usage in industrial and medical applications. This paper proposes a novel local path planner for spherical wrist manipulators to control the execution of tasks for which the manipulator's number of joints is redundant. Such redundancy is used to optimize robot motion and dexterity. We present an intuitive parametrization of the end-effector (EE) angular motion, which decouples the rotation of the third joint of the wrist from the rest of the angular motions. Manipulator EE motion is controlled through a decentralized linear system with a closed-loop architecture. The local planner integrates a novel collision avoidance strategy based on a potential repulsive vector applied to the EE. Contrary to classic potential field approaches, the collision avoidance algorithm considers the entire manipulator surface, enhancing human safety. The local path planner is simulated in three generic scenarios: (i) following a periodic reference, (ii) following a random sequence of step signal references, and (iii) avoiding instantly introduced obstacles. Time- and frequency-domain analyses demonstrated that the developed planner, aside from better parametrizing redundant tasks, is capable of successfully executing the simulated paths (max error = 0.25°) and avoiding obstacles.
Affiliation(s)
- Marcello Chiurazzi
- The BioRobotics Institute, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
- Joan Ortega Alcaide
- The BioRobotics Institute, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
- Alessandro Diodato
- The BioRobotics Institute, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
- Arianna Menciassi
- The BioRobotics Institute, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
- Gastone Ciuti
- The BioRobotics Institute, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
15
Impact of Job Demands on Employee Learning: The Moderating Role of Human–Machine Cooperation Relationship. Comput Intell Neurosci 2022; 2022:7406716. [DOI: 10.1155/2022/7406716]
Abstract
New artificial intelligence (AI) technologies are applied to work scenarios, which may change job demands and affect employees' learning. Based on conservation of resources theory, the impact of job demands on employee learning was evaluated in the context of AI. The study further explores the moderating effect of the human–machine cooperation relationship on this link. Using 500 valid questionnaires, a hierarchical regression analysis was performed. Results indicate that, in the AI application scenario, a U-shaped relationship exists between job demands and employee learning. Second, the human–machine cooperation relationship moderates the U-shaped curvilinear relationship between job demands and employees' learning. In this study, AI is introduced into the field of employee psychology and behavior, enriching the research into the relationship between job demands and employee learning.
16
Robinson N, Tidd B, Campbell D, Kulić D, Corke P. Robotic Vision for Human-Robot Interaction and Collaboration: A Survey and Systematic Review. ACM Trans Hum-Robot Interact 2022. [DOI: 10.1145/3570731]
Abstract
Robotic vision for human-robot interaction and collaboration is a critical process for robots to collect and interpret detailed information related to human actions, goals, and preferences, enabling robots to provide more useful services to people. This survey and systematic review presents a comprehensive analysis of robotic vision in human-robot interaction and collaboration over the last 10 years. From a detailed search of 3850 articles, systematic extraction and evaluation were used to identify and explore 310 papers in depth. These papers described robots with some level of autonomy using robotic vision for locomotion, manipulation, and/or visual communication to collaborate or interact with people. This paper provides an in-depth analysis of current trends, common domains, methods and procedures, technical processes, data sets and models, experimental testing, sample populations, performance metrics, and future challenges. The review found that robotic vision was often used in action and gesture recognition, robot movement in human spaces, object handover and collaborative actions, social communication, and learning from demonstration. Few high-impact and novel techniques from the computer vision field had been translated into human-robot interaction and collaboration. Overall, notable advancements have been made on how to develop and deploy robots to assist people.
Affiliation(s)
- Nicole Robinson
- Australian Research Council Centre of Excellence for Robotic Vision, School of Electrical Engineering & Robotics, QUT Centre for Robotics, Queensland University of Technology. Faculty of Engineering, Turner Institute for Brain and Mental Health, Monash University, Australia
- Brendan Tidd
- Australian Research Council Centre of Excellence for Robotic Vision, School of Electrical Engineering & Robotics, QUT Centre for Robotics, Queensland University of Technology, Australia
- Dylan Campbell
- Visual Geometry Group, Department of Engineering Science, University of Oxford, United Kingdom
- Dana Kulić
- Australian Research Council Centre of Excellence for Robotic Vision, Faculty of Engineering, Monash University, Australia
- Peter Corke
- Australian Research Council Centre of Excellence for Robotic Vision, School of Electrical Engineering & Robotics, QUT Centre for Robotics, Queensland University of Technology, Australia
17
Learning from Demonstrations in Human–Robot Collaborative Scenarios: A Survey. Robotics 2022. [DOI: 10.3390/robotics11060126]
Abstract
Human–Robot Collaboration (HRC) is an interdisciplinary research area that has gained attention within the smart manufacturing context. To address changes within manufacturing processes, HRC seeks to combine the impressive physical capabilities of robots with the cognitive abilities of humans to design tasks with high efficiency, repeatability, and adaptability. During the implementation of an HRC cell, a key activity is the robot programming, which takes into account not only the robot restrictions and the working space but also human interactions. One of the most promising techniques is the so-called Learning from Demonstration (LfD); this approach is based on a collection of learning algorithms inspired by how humans imitate behaviors to learn and acquire new skills. In this way, the programming task can be simplified and performed by the shop floor operator. The aim of this work is to present a survey of this programming technique, with emphasis on collaborative scenarios rather than just isolated tasks. The literature was classified and analyzed based on the main algorithms employed for skill/task learning and the human level of participation during the whole LfD process. Our analysis shows that human intervention has been poorly explored and its implications have not been carefully considered. Among the different methods of data acquisition, the prevalent method is physical guidance. Regarding data modeling, techniques such as Dynamic Movement Primitives and Semantic Learning were the preferred methods for low-level and high-level task solving, respectively. This paper aims to provide guidance and insights for researchers looking for an introduction to LfD programming methods in a collaborative robotics context and to identify research opportunities.
18
Human mobile robot interaction in the retail environment. Sci Data 2022; 9:673. [PMCID: PMC9636163 DOI: 10.1038/s41597-022-01802-8]
Abstract
As technology advances, Human-Robot Interaction (HRI) is boosting overall system efficiency and productivity. However, allowing robots to be present closely with humans will inevitably put higher demands on precise human motion tracking and prediction. Datasets that contain both humans and robots operating in the shared space are receiving growing attention as they may facilitate a variety of robotics and human-systems research. Datasets that track HRI with rich information other than video images during daily activities are rarely seen. In this paper, we introduce a novel dataset that focuses on social navigation between humans and robots in a future-oriented Wholesale and Retail Trade (WRT) environment (https://uf-retail-cobot-dataset.github.io/). Eight participants performed the tasks that are commonly undertaken by consumers and retail workers. More than 260 minutes of data were collected, including robot and human trajectories, human full-body motion capture, eye gaze directions, and other contextual information. Comprehensive descriptions of each category of data stream, as well as potential use cases, are included. Furthermore, analysis with multiple data sources and future directions are discussed.
19
Pereira da Silva N, Eloy S, Resende R. Robotic construction analysis: simulation with virtual reality. Heliyon 2022; 8:e11039. [PMID: 36281420 PMCID: PMC9586892 DOI: 10.1016/j.heliyon.2022.e11039]
Abstract
Advances in robotic construction are evident and increasing every year, bringing present and potential improvements. However, the economic and social impacts are hard to assess and quantify without physical in situ testing, which is expensive and time-consuming. This paper presents a methodology for the simulation of robotic construction technologies, namely drones, using a virtual reality environment. Our hypothesis is that a virtual reality simulation of a robotic construction (H1) has the potential of increasing the precision of predicting the construction duration and cost and (H2) allows for the detection of construction problems. The study begins with a review of the literature on drones, robotic arms, and hybrid automatic construction solutions, as well as virtual reality construction simulations, summarising the robotic technologies currently being used, mainly in academic research, to assemble construction elements. It then proposes a construction simulation methodology applied to three architectonic elements to analyse different approaches and different scenarios for robotic construction simulation methodology. A construction simulation is tested, and the data is analysed and compared with traditional construction methods, focussing on construction time and costs.
20
Arntz A, Straßmann C, Völker S, Eimler SC. Collaborating eye to eye: Effects of workplace design on the perception of dominance of collaboration robots. Front Robot AI 2022; 9:999308. [PMID: 36237845 PMCID: PMC9551178 DOI: 10.3389/frobt.2022.999308]
Abstract
The concept of Human-Robot Collaboration (HRC) describes innovative industrial work procedures in which human staff works in close vicinity with robots on a shared task. Current HRC scenarios often deploy hand-guided robots or remote controls operated by the human collaboration partner. As HRC envisions active collaboration between both parties, ongoing research efforts aim to enhance the capabilities of industrial robots not only in the technical dimension but also in the robot's socio-interactive features. Apart from enabling the robot to autonomously complete the respective shared task in conjunction with a human partner, one essential aspect lifted from group collaboration among humans is the communication between both entities. State-of-the-art research has identified communication as a significant contributor to successful collaboration between humans and industrial robots. Non-verbal gestures have been shown to be a contributing aspect in conveying the respective state of the robot during the collaboration procedure. Research indicates that, depending on the viewing perspective, the usage of non-verbal gestures in humans can impact the interpersonal attribution of certain characteristics. Applied to collaborative robots such as the YuMi IRB 14000, which is equipped with two arms specifically to mimic human actions, the perception of the robot's non-verbal behavior can affect the collaboration. Most important in this context are dominance-emitting gestures by the robot that can reinforce negative attitudes towards robots, thus hampering the users' willingness and effectiveness to collaborate with the robot. In a 3 × 3 within-subjects online study, we investigated the effect of three dominance gestures (akimbo, crossed arms, and large arm spread) and three viewing perspectives (standing at an average male height, standing at an average female height, and seated) on the perception of dominance of the robot. Overall, 115 participants (58 female and 57 male) with an average age of 23 years evaluated nine videos of the robot. Results indicated that all presented gestures affect a person's perception of the robot with regard to its perceived characteristics and the willingness to cooperate with it. The data also showed participants' increased attribution of dominance based on the presented viewing perspective.
Affiliation(s)
- Alexander Arntz
- Institute of Computer Science, University of Applied Sciences Ruhr West, Bottrop, Germany
- Carolin Straßmann
- Institute of Computer Science, University of Applied Sciences Ruhr West, Bottrop, Germany
- Stefanie Völker
- Institute of Mechanical Engineering, University of Applied Sciences Ruhr West, Mülheim an der Ruhr, Germany
- Sabrina C. Eimler
- Institute of Computer Science, University of Applied Sciences Ruhr West, Bottrop, Germany
21
Sen W, Hong Z, Xiaomei Z. Effects of human–machine interaction on employee’s learning: A contingent perspective. Front Psychol 2022; 13:876933. [PMID: 36160504 PMCID: PMC9490363 DOI: 10.3389/fpsyg.2022.876933]
Abstract
The popularization of intelligent machines such as service robots and industrial robots will make human–machine interaction an essential work mode. This requires employees to adapt to the new work content through learning. However, research on how human–machine interaction influences employee learning is still rare. This paper aims to reveal the relationship between human–machine interaction and employee learning from the perspective of job characteristics and employees' competence perception. We sent a questionnaire to 500 employees from 100 artificial intelligence companies in China and received 319 valid and complete responses. We then adopted hierarchical regression analysis for the test. Empirical results show that human–machine interaction has a U-shaped curvilinear relationship with employee learning, and employee vitality mediates the curvilinear relationship. In addition, job characteristics (skill variety and job autonomy) moderate the U-shaped curvilinear relationship between human–machine interaction and employee vitality, with the moderating effects varying with employees' competence perception. Exploring the mechanism of the effect of human–machine interaction on employee learning enriches the socially embedded model. Moreover, it provides managerial implications for how to enhance individual adaptability with the introduction of AI into firms. However, our research focuses on the impact of human–machine interaction on employees at the initial stage of AI development, while the level of machine intelligence in various industries will reach a high degree of autonomy in the future. Future research can explore the impact of human–machine interaction on individual behavior at different stages, and the results may vary depending on the technologies mastered by different individuals. The study is of theoretical and practical significance to the human–machine interaction literature by underscoring the importance of individual behavior among individuals with different skills.
Affiliation(s)
- Wang Sen
- Department of Business Administration, School of Management, Beijing Union University, Beijing, China
- Zhao Hong
- Department of Business Administration, School of Business Administration, Nanjing University of Finance and Economics, Nanjing, China
- Zhu Xiaomei
- Department of Business Administration, School of Management, Beijing Union University, Beijing, China
22
A review of the literature on fuzzy-logic approaches for collision-free path planning of manipulator robots. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10257-7]
23
Hasan SK. Radial basis function‐based exoskeleton robot controller development. IET Cyber-Syst Robot 2022. [DOI: 10.1049/csy2.12057]
Affiliation(s)
- SK Hasan
- Department of Mechanical and Manufacturing Engineering, Miami University, Oxford, Ohio, USA
24
Inverse kinematics strategies for physical human-robot interaction using low-impedance passive link shells. Robotica 2022. [DOI: 10.1017/s0263574722001102]
Abstract
This paper presents an investigation of the effectiveness of different inverse kinematics strategies in a context of physical human-robot interaction in which passive articulated shells are mounted on the links of a serial robot for manual guidance. The concept of passive link shells is first recalled. Then, inverse kinematics strategies that are designed to plan the trajectory of the robot according to the motion sensed at the link shells are presented and formulated. The different approaches presented all aim at interpreting the motion of the shells and provide an intuitive interaction to the human user. Damped Jacobian based methods are introduced in order to alleviate singularities. A serial 5-dof robot used in previous work is briefly introduced and is used as a test case for the proposed inverse kinematics schemes. The robot includes two link shells for interaction. Simulation results based on the different inverse kinematic strategies are then presented and compared. Finally, general observations and recommendations are discussed.
25
Švejda M, Goubej M, Jáger A, Reitinger J, Severa O. Affordable Motion Tracking System for Intuitive Programming of Industrial Robots. Sensors (Basel) 2022; 22:4962. [PMID: 35808453 PMCID: PMC9269710 DOI: 10.3390/s22134962]
Abstract
The paper deals with a lead-through method of programming for industrial robots. The goal is to automatically reproduce 6DoF trajectories of a tool wielded by a human operator demonstrating a motion task. We present a novel motion-tracking system built around the HTC Vive pose estimation system. Our solution allows complete automation of the robot teaching process. Specific algorithmic issues of system calibration and motion data post-processing are also discussed, constituting the paper's theoretical contribution. The motion tracking system is successfully deployed in a pilot application of robot-assisted spray painting.
26
Laliberte T, Gosselin C. Low-Impedance Displacement Sensors for Intuitive Physical Human–Robot Interaction: Motion Guidance, Design, and Prototyping. IEEE Trans Robot 2022. [DOI: 10.1109/tro.2021.3121610]
Affiliation(s)
- Thierry Laliberte
- Department of Mechanical Engineering, Université Laval, Québec, QC, Canada
- Clement Gosselin
- Department of Mechanical Engineering, Université Laval, Québec, QC, Canada
27
Barker E, Jewitt C. Collaborative Robots and Tangled Passages of Tactile-Affects. ACM Trans Hum-Robot Interact 2022. [DOI: 10.1145/3534090]
Abstract
Collaborative robots are increasingly entering industrial contexts and workflows. These contexts are not just locations for production; they are vibrant social and sensory environments. For better or for worse, their entry brings potential to reorganize established tactile and affective dynamics that encompass production processes. There is still much to be learned about these highly contextual and complex dynamics in HRI research and the design of industrial robotics; common approaches in industrial collaborative robotics are restricted to evaluating ‘effective interface design’, whereas methods that seek to measure ‘affective touch’ have limited application to these industrial domains. This paper offers an extended analytical framework and methodological approach to deepen understandings of affect and touch beyond emotional responses to direct human-robot interactions. These distinct contributions are grounded in fieldwork in a glass factory with newly installed collaborative robots. They are illustrated through an ethnographic narrative that traces the emergence and circulation of affect across material, experiential, and social planes. Beyond this single case, ‘tangled passages of tactile-affects’ is offered as a novel and valuable concept that is distinct from the notion of ‘affective touch’ and holds potential to generate holistic and nuanced understandings of how human experiences can be affected by the introduction of new robots in ‘the wild’.
Affiliation(s)
- Carey Jewitt
- Institute of Education, University College London
28
Costa GDM, Petry MR, Moreira AP. Augmented Reality for Human-Robot Collaboration and Cooperation in Industrial Applications: A Systematic Literature Review. Sensors (Basel) 2022; 22:2725. [PMID: 35408339 PMCID: PMC9003100 DOI: 10.3390/s22072725]
Abstract
With the continuously growing usage of collaborative robots in industry, the need for achieving a seamless human-robot interaction has also increased, considering that it is a key factor towards reaching a more flexible, effective, and efficient production line. As a prominent and prospective tool to support the human operator to understand and interact with robots, Augmented Reality (AR) has been employed in numerous human-robot collaborative and cooperative industrial applications. Therefore, this systematic literature review critically appraises 32 papers published between 2016 and 2021 to identify the main employed AR technologies, outline the current state of the art of augmented reality for human-robot collaboration and cooperation, and point out future developments for this research field. Results suggest that this is still an expanding research field, especially with the advent of recent advancements regarding head-mounted displays (HMDs). Moreover, projector-based and HMD-based approaches are showing promising positive influences on operator-related aspects such as performance, task awareness, and safety feeling, even though HMDs need further maturation in ergonomic aspects. Further research should focus on large-scale assessment of the proposed solutions in industrial environments, involving the solutions’ target audience, and on establishing standards and guidelines for developing AR assistance systems.
Affiliation(s)
- Gabriel de Moura Costa
- Department of Electrical and Computer Engineering, Faculdade de Engenharia da Universidade do Porto (FEUP), 4200-465 Porto, Portugal
- INESC TEC—Institute for Systems and Computer Engineering Technology and Science, 4200-465 Porto, Portugal
- Marcelo Roberto Petry
- INESC TEC—Institute for Systems and Computer Engineering Technology and Science, 4200-465 Porto, Portugal
- António Paulo Moreira
- Department of Electrical and Computer Engineering, Faculdade de Engenharia da Universidade do Porto (FEUP), 4200-465 Porto, Portugal
- INESC TEC—Institute for Systems and Computer Engineering Technology and Science, 4200-465 Porto, Portugal
Collapse
|
29
|
Hussein A, Ghignone L, Nguyen T, Salimi N, Nguyen H, Wang M, Abbass HA. Characterization of Indicators for Adaptive Human-Swarm Teaming. Front Robot AI 2022; 9:745958. [PMID: 35252363] [PMCID: PMC8891141] [DOI: 10.3389/frobt.2022.745958]
Abstract
Swarm systems consist of large numbers of agents that collaborate autonomously. With an appropriate level of human control, swarm systems could be applied in a variety of contexts ranging from urban search and rescue situations to cyber defence. However, the successful deployment of the swarm in such applications is conditioned by the effective coupling between human and swarm. While adaptive autonomy promises to provide enhanced performance in human-machine interaction, distinct factors must be considered for its implementation within human-swarm interaction. This paper reviews the multidisciplinary literature on different aspects contributing to the facilitation of adaptive autonomy in human-swarm interaction. Specifically, five aspects that are necessary for an adaptive agent to operate properly are considered and discussed, including mission objectives, interaction, mission complexity, automation levels, and human states. We distill the corresponding indicators in each of the five aspects, and propose a framework, named MICAH (i.e., Mission-Interaction-Complexity-Automation-Human), which maps the primitive state indicators needed for adaptive human-swarm teaming.
30
Kim S. Working With Robots: Human Resource Development Considerations in Human–Robot Interaction. Human Resource Development Review 2022. [DOI: 10.1177/15344843211068810]
Abstract
Advancements in robotic technology have accelerated the adoption of collaborative robots in the workplace. The role of humans is not reduced, but robotic technology requires different high-level responsibilities in human–robot interaction (HRI). Based on a human-centered perspective, this literature review explores current knowledge on HRI through the lens of HRD and proposes the roles of HRD in this realm. The review identifies HRD considerations that help implement effective HRI in three human-centered domains: human capabilities, collaboration configuration, and attributes related to contact. The eight HRD considerations include employees' attitudes toward robots, their readiness for robot technology, communication with robots, human–robot team building, leading multiple robots, systemwide collaboration, safety interventions, and ethical issues. Theoretical implications, practical implications, and limitations are discussed. This paper contributes to HRD by introducing potential areas of multidisciplinary collaboration to help organizations implement robotic systems.
Affiliation(s)
- Sehoon Kim
- Department of Organization Leadership, Policy, and Development, University of Minnesota, Minneapolis, MN
31
Abstract
Efficiently solving the inverse kinematics (IK) of robot manipulators with offset wrists remains a challenge in robotics, because such manipulators do not satisfy the Pieper criteria. In this paper, an improved method to solve the IK of 6-DOF robot manipulators with offset wrists is proposed. The method is based on the Newton iteration technique but does not require choosing an initial estimate of the joint variables. The solution is divided into two parts: the first reconstructs a simplified structure with an analytical IK solution, and the second obtains a numerical solution by iteration. Further, a robot manipulator with an offset wrist (HSR-BR606) is used as an example to elaborate the mathematical procedure of the method and to investigate the algorithm in terms of accuracy, efficiency, and application to motion planning. A comparative experiment with a typical IK algorithm demonstrates the higher accuracy and shorter calculation time of the proposed method: the mean calculation time for a single IK solution is only 4% of that of the comparison algorithm.
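The "analytical seed plus Newton iteration" scheme summarized in this abstract can be illustrated with a minimal sketch. This is not the paper's HSR-BR606 implementation; it uses a hypothetical 2-link planar arm standing in for the offset-wrist case, where a closed-form solution of the simplified structure seeds the Newton iteration, so no hand-picked initial joint estimate is needed.

```python
import numpy as np

L1, L2 = 0.4, 0.3  # link lengths (m), assumed values

def fk(q):
    """Forward kinematics: joint angles -> end-effector (x, y)."""
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0]+q[1])])

def analytical_seed(target):
    """Closed-form elbow-down solution of the simplified (planar) structure."""
    x, y = target
    c2 = np.clip((x*x + y*y - L1*L1 - L2*L2) / (2*L1*L2), -1.0, 1.0)
    q2 = np.arccos(c2)
    q1 = np.arctan2(y, x) - np.arctan2(L2*np.sin(q2), L1 + L2*np.cos(q2))
    return np.array([q1, q2])

def newton_ik(target, tol=1e-10, max_iter=50):
    """Refine the analytical seed by Newton iteration on a numerical Jacobian."""
    q = analytical_seed(target)
    for _ in range(max_iter):
        err = target - fk(q)
        if np.linalg.norm(err) < tol:
            break
        # Finite-difference Jacobian of the forward kinematics
        J = np.zeros((2, 2))
        h = 1e-6
        for j in range(2):
            dq = np.zeros(2); dq[j] = h
            J[:, j] = (fk(q + dq) - fk(q)) / h
        q = q + np.linalg.solve(J, err)
    return q

target = np.array([0.5, 0.2])
q = newton_ik(target)
print(np.allclose(fk(q), target, atol=1e-8))  # True: pose reached
```

For a genuinely offset wrist the seed is only approximate, which is where the iteration earns its keep; on this toy arm the closed form is already exact and Newton converges immediately.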
32
Javaid M, Haleem A, Singh RP, Rab S, Suman R. Significant applications of Cobots in the field of manufacturing. Cognitive Robotics 2022. [DOI: 10.1016/j.cogr.2022.10.001]
33
Papanagiotou D, Senteri G, Manitsaris S. Egocentric Gesture Recognition Using 3D Convolutional Neural Networks for the Spatiotemporal Adaptation of Collaborative Robots. Front Neurorobot 2021; 15:703545. [PMID: 34887740] [PMCID: PMC8649894] [DOI: 10.3389/fnbot.2021.703545]
Abstract
Collaborative robots are currently deployed in professional environments, in collaboration with professional human operators, helping to strike the right balance between mechanization and manual intervention in the manufacturing processes required by Industry 4.0. In this paper, the contribution of gesture recognition and pose estimation to the smooth introduction of cobots into an industrial assembly line is described, with a view to performing actions in parallel with the human operators and enabling interaction between them. The proposed active vision system uses two RGB-D cameras that record different points of view of the operator's gestures and poses, to build an external perception layer for the robot that facilitates spatiotemporal adaptation in accordance with the human's behavior. The use case of this work concerns the LCD TV assembly of an appliance manufacturer, comprising two parts. The first part of the operation is assigned to a robot, strengthening the assembly line; the second part is assigned to a human operator. Gesture recognition, pose estimation, physical interaction, and sonic notification create a multimodal human-robot interaction system. Five experiments are performed to test whether gesture recognition and pose estimation can reduce the cycle time and the operator's range of motion, respectively. Physical interaction is achieved using the force sensor of the cobot. Pose estimation through a skeleton-tracking algorithm provides the cobot with human pose information and makes it spatially adjustable. Sonic notification is added for the case of unexpected incidents. A real-time gesture recognition module is implemented through a deep learning architecture consisting of convolutional layers, trained in an egocentric view, reducing the cycle time of the routine by almost 20%. This constitutes an added value of this work, as it affords the potential of recognizing gestures independently of anthropometric characteristics and background. Common metrics derived from the literature are used for the evaluation of the proposed system. The percentage of spatial adaptation of the cobot is proposed as a new KPI for a collaborative system, and the opinion of the human operator is measured through a questionnaire concerning the operator's affective states during the collaboration.
Affiliation(s)
- Gavriela Senteri
- Centre for Robotics, MINES ParisTech, PSL Université, Paris, France
34
A Review of 4IR/5IR Enabling Technologies and Their Linkage to Manufacturing Supply Chain. Technologies 2021. [DOI: 10.3390/technologies9040077]
Abstract
Over the last decade, manufacturing processes have undergone significant change. Most factory activities have been transformed through a set of features built into a smart manufacturing framework. The tools brought to bear by the fourth industrial revolution are critical enablers of such change and progress. This review article describes the series of industrial revolutions and explores traditional manufacturing before presenting various enabling technologies. Insights are offered regarding traditional manufacturing lines where some enabling technologies have been included. The manufacturing supply chain is envisaged as enhancing the enabling technologies of Industry 4.0 through their integration. A systematic literature review is undertaken to evaluate each enabling technology and the manufacturing supply chain and to provide some theoretical synthesis. Similarly, obstacles are listed that must be overcome before a complete shift to smart manufacturing is possible. A brief discussion maps out how the fourth industrial revolution has led to novel manufacturing technologies. Likewise, a review of the fifth industrial revolution is given, and the justification for this development is presented.
35
Škulj G, Vrabič R, Podržaj P. A Wearable IMU System for Flexible Teleoperation of a Collaborative Industrial Robot. Sensors 2021; 21:5871. [PMID: 34502761] [PMCID: PMC8434127] [DOI: 10.3390/s21175871]
Abstract
Increasing the accessibility of collaborative robotics requires interfaces that support intuitive teleoperation. One possibility for an intuitive interface is offered by wearable systems that measure the operator’s movement and use the information for robot control. Such wearable systems should preserve the operator’s movement capabilities and, thus, their ability to flexibly operate in the workspace. This paper presents a novel wireless wearable system that uses only inertial measurement units (IMUs) to determine the orientation of the operator’s upper body parts. An algorithm was developed to transform the measured orientations to movement commands for an industrial collaborative robot. The algorithm includes a calibration procedure, which aligns the coordinate systems of all IMUs, the operator, and the robot, and the transformation of the operator’s relative hand motions to the movement of the robot’s end effector, which takes into account the operator’s orientation relative to the robot. The developed system is demonstrated with an example of an industrial application in which a workpiece needs to be inserted into a fixture. The robot’s motion is compared between the developed system and a standard robot controller. The results confirm that the developed system is intuitive, allows for flexible control, and is robust enough for use in industrial collaborative robotic applications.
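The frame-alignment idea described in this abstract can be sketched minimally: a calibration rotation maps the operator's frame into the robot's base frame, and relative hand displacements are re-expressed in that frame to command the end effector. The function names and the single-yaw-rotation model are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def yaw_rotation(theta):
    """Rotation about the vertical axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def calibrate(theta_operator_vs_robot):
    """Calibration: operator stands rotated by theta relative to the robot base."""
    return yaw_rotation(theta_operator_vs_robot)

def hand_delta_to_robot(R_cal, delta_hand):
    """Re-express a relative hand displacement in the robot's base frame."""
    return R_cal @ delta_hand

# Operator faces the robot (rotated 180 deg about vertical): a hand motion
# "forward" for the operator is "backward" in the robot frame.
R = calibrate(np.pi)
delta = hand_delta_to_robot(R, np.array([0.1, 0.0, 0.05]))
print(np.allclose(delta, [-0.1, 0.0, 0.05]))  # True
```

The full system described in the paper additionally aligns the coordinate systems of all IMUs and tracks the orientation of each upper-body segment; this sketch only captures the final operator-to-robot mapping step.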
36
Lombardi M, Liuzza D, di Bernardo M. Dynamic Input Deep Learning Control of Artificial Avatars in a Multi-Agent Joint Motor Task. Front Robot AI 2021; 8:665301. [PMID: 34434967] [PMCID: PMC8381333] [DOI: 10.3389/frobt.2021.665301]
Abstract
In many real-world scenarios, humans and robots are required to coordinate their movements in joint tasks to fulfil a common goal. While several examples of dyadic human-robot interaction exist in the current literature, multi-agent scenarios in which one or more artificial agents need to interact with many humans are still seldom investigated. In this paper we address the problem of synthesizing an autonomous artificial agent to perform a paradigmatic oscillatory joint task in human ensembles while exhibiting desired human kinematic features. We propose an architecture based on deep reinforcement learning that is flexible enough to let the artificial agent interact with human groups of different sizes. As a paradigmatic coordination task we consider a multi-agent version of the mirror game, an oscillatory motor task widely used in the literature to study human motor coordination.
Affiliation(s)
- Maria Lombardi
- Department of Engineering Mathematics, University of Bristol, Bristol, United Kingdom; Department of Electrical Engineering and Information Technology, University of Naples Federico II, Naples, Italy
- Davide Liuzza
- ENEA Fusion and Nuclear Safety Department, Frascati, Italy
- Mario di Bernardo
- Department of Engineering Mathematics, University of Bristol, Bristol, United Kingdom; Scuola Superiore Meridionale, University of Naples Federico II, Naples, Italy
37
Human–Robot Collaborative Assembly Based on Eye-Hand and a Finite State Machine in a Virtual Environment. Applied Sciences 2021. [DOI: 10.3390/app11125754]
Abstract
With the development of the global economy, the demand for manufacturing is increasing. Accordingly, human–robot collaborative assembly has become a research hotspot. This paper aims to solve the efficiency problems inherent in traditional human-machine collaboration. A collaborative assembly method based on eye–hand data and a finite state machine is proposed. The method determines the human's intention by collecting posture and eye data, and uses it to control a robot to grasp an object, move it, and perform co-assembly. The robot's automatic path planning is based on a probabilistic roadmap planner. Virtual reality tests show that the proposed method is more efficient than traditional methods.
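The finite-state-machine idea in this abstract can be sketched as a small transition table: assembly proceeds through discrete states, and recognized operator intentions trigger transitions. The state and event names below are illustrative assumptions, not taken from the paper.

```python
# Transition table: (current state, event) -> next state
TRANSITIONS = {
    ("idle",        "intent_grasp"):   "grasping",
    ("grasping",    "object_secured"): "moving",
    ("moving",      "at_target"):      "co_assembly",
    ("co_assembly", "done"):           "idle",
}

def step(state, event):
    """Advance the FSM; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# Drive one full assembly cycle with a sequence of recognized intentions.
state = "idle"
for event in ["intent_grasp", "object_secured", "at_target", "done"]:
    state = step(state, event)
print(state)  # idle (full cycle completed)
```

In the paper, the events would come from the eye-hand intention recognizer and the robot's own status; the FSM's job is simply to make the hand-off between recognition and robot control explicit and predictable.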
38
Trends of Human-Robot Collaboration in Industry Contexts: Handover, Learning, and Metrics. Sensors 2021; 21:4113. [PMID: 34203766] [PMCID: PMC8232712] [DOI: 10.3390/s21124113]
Abstract
Repetitive industrial tasks can be easily performed by traditional robotic systems. However, many other tasks require cognitive knowledge that only humans can provide. Human-Robot Collaboration (HRC) emerges as an ideal concept of co-working between a human operator and a robot, representing one of the most significant subjects for human-life improvement. The ultimate goal is to achieve physical interaction, where handing over an object plays a crucial role in effective task accomplishment. Considerable research has been conducted in this particular field in recent years, and several solutions have already been proposed. Nonetheless, some particular issues regarding Human-Robot Collaboration still leave an open path to truly important research improvements. This paper provides a literature overview, defining the HRC concept, enumerating the distinct human-robot communication channels, and discussing the physical interaction that this collaboration entails. Moreover, future challenges for a natural and intuitive collaboration are exposed: the machine must behave like a human, especially in the pre-grasping/grasping phases, and the handover procedure should be fluent and bidirectional for an articulated function development. These are the focus of near-future investigation aiming to shed light on the complex combination of predictive and reactive control mechanisms promoting coordination and understanding. Following recent progress in artificial intelligence, learning exploration stands as the key element to allow the generation of coordinated actions and their shaping by experience.
39
Kaur M, Kim TH, Kim WS. New Frontiers in 3D Structural Sensing Robots. Adv Mater 2021; 33:e2002534. [PMID: 33458908] [DOI: 10.1002/adma.202002534]
Abstract
Advanced robotics is the result of contributions from complex fields of science and engineering and has tremendous value in human society. Sensing robots are highly desirable in practical settings such as the healthcare and manufacturing sectors, where they sense activities arising from human-robot interaction. However, there are still ongoing research and technical challenges in the development of ideal sensing robot systems. A sensing robot should synergically merge sensors and robotics. The geometrical difficulty of sensor positioning caused by the structural complexity of sensing robots, and the corresponding processing, have been the main challenges in the production of sensing robots. 3D electronics integrated into 3D objects prepared by the 3D printing process can be a potential solution for designing realistic sensing robot systems. 3D printing allows complex 3D electronic structures to be manufactured in a single setup, offering design flexibility and customized functions. Therefore, the platform of 3D sensing systems is investigated and its expansion into sensing robots is studied further. The progress toward sensing robots from 3D electronics integrated into 3D objects, and the advanced material strategies used to overcome the challenges, are discussed.
Affiliation(s)
- Manpreet Kaur
- Additive Manufacturing Laboratory, School of Mechatronics System Engineering, Simon Fraser University, Surrey, BC, V3T 0A3, Canada
- Tae-Ho Kim
- Additive Manufacturing Laboratory, School of Mechatronics System Engineering, Simon Fraser University, Surrey, BC, V3T 0A3, Canada
- Woo Soo Kim
- Additive Manufacturing Laboratory, School of Mechatronics System Engineering, Simon Fraser University, Surrey, BC, V3T 0A3, Canada
40
Atashzar SF, Carriere J, Tavakoli M. Review: How Can Intelligent Robots and Smart Mechatronic Modules Facilitate Remote Assessment, Assistance, and Rehabilitation for Isolated Adults With Neuro-Musculoskeletal Conditions? Front Robot AI 2021; 8:610529. [PMID: 33912593] [PMCID: PMC8072151] [DOI: 10.3389/frobt.2021.610529]
Abstract
Worldwide, at the time this article was written, there are over 127 million confirmed cases of COVID-19 and about 2.78 million deaths reported. With limited access to vaccines or strong antiviral treatment for the novel coronavirus, actions for prevention and containment of virus transmission rely mostly on social distancing among susceptible and high-risk populations. Aside from the direct challenges posed by the novel coronavirus pandemic, there are serious and growing secondary consequences caused by the physical distancing and isolation guidelines among vulnerable populations. Moreover, the healthcare system's resources and capacity have been focused on addressing the COVID-19 pandemic, causing less urgent care, such as physical neurorehabilitation and assessment, to be paused, canceled, or delayed. Overall, this has left elderly adults, in particular those with neuromusculoskeletal (NMSK) conditions, without the required service support. However, in many cases, such as stroke, the available time window of recovery through rehabilitation is limited, since neural plasticity decays quickly with time. Given that future waves of the outbreak are expected in the coming months worldwide, it is important to discuss the possibility of using available technologies to address this issue, as societies have a duty to protect the most vulnerable populations. In this perspective review article, we argue that intelligent robotics and wearable technologies can help with remote delivery of assessment, assistance, and rehabilitation services while physical distancing and isolation measures are in place to curtail the spread of the virus. By supporting patients and medical professionals during this pandemic, robots and smart digital mechatronic systems can reduce the non-COVID-19 burden on healthcare systems.
Digital health and cloud telehealth solutions that can complement remote delivery of assessment and physical rehabilitation services are also discussed, due to their potential to enable more effective and safer NMSK rehabilitation, assistance, and assessment service delivery. This article will hopefully lead to an interdisciplinary dialogue between the medical and engineering sectors, stakeholders, and policy makers for better delivery of care for those with NMSK conditions during a global health crisis, including future pandemics.
Affiliation(s)
- S. Farokh Atashzar
- Department of Electrical and Computer Engineering and Department of Mechanical and Aerospace Engineering, New York University, New York, NY, United States
- Jay Carriere
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Canada
- Mahdi Tavakoli
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Canada
41
Zhang YJ, Liu L, Huang N, Radwin R, Li J. From Manual Operation to Collaborative Robot Assembly: An Integrated Model of Productivity and Ergonomic Performance. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3052427]
42
Abstract
During a robot's operational tasks, a key issue is its reliability in ensuring human safety. Currently, a number of methods are used to detect people, and their selection most often depends on the type of process carried out by the robots. This article therefore develops a comparative analysis of selected methods of human detection in the storage area. The main aspect against which these systems were compared was the safety of robotic systems in spaces where humans are present. The main advantages and drawbacks of the methods in various applications are presented. The detailed analysis of the achievements in this area makes it possible to identify research gaps and possible future research directions for using these tools in the design of autonomous warehouses.
43
Bonci A, Cen Cheng PD, Indri M, Nabissi G, Sibona F. Human-Robot Perception in Industrial Environments: A Survey. Sensors 2021; 21:1571. [PMID: 33668162] [PMCID: PMC7956747] [DOI: 10.3390/s21051571]
Abstract
Perception capability is of significant importance for human-robot interaction. Forthcoming industrial environments will require a high level of automation, flexible and adaptive enough to comply with increasingly fast and low-cost market demands. Autonomous and collaborative robots able to adapt to varying and dynamic conditions of the environment, including the presence of human beings, will have an ever-greater role in this context. However, if the robot is not aware of the human position and intention, a shared workspace between robots and humans may decrease productivity and lead to human safety issues. This paper presents a survey on sensory equipment useful for human detection and action recognition in industrial environments. An overview of different sensors and perception techniques is presented. Various types of robotic systems commonly used in industry, such as fixed-base manipulators, collaborative robots, mobile robots and mobile manipulators, are considered, analyzing the most useful sensors and methods to perceive and react to the presence of human operators in industrial cooperative and collaborative applications. The paper also introduces two proofs of concept, developed by the authors for future collaborative robotic applications that benefit from enhanced capabilities of human perception and interaction. The first concerns fixed-base collaborative robots and proposes a solution for human safety in tasks requiring human collision avoidance or moving-obstacle detection. The second proposes a collaborative behavior implementable on autonomous mobile robots pursuing assigned tasks within an industrial space shared with human operators.
Affiliation(s)
- Andrea Bonci
- Dipartimento di Ingegneria dell’Informazione (DII), Università Politecnica delle Marche, 60131 Ancona, Italy
- Pangcheng David Cen Cheng
- Dipartimento di Elettronica e Telecomunicazioni (DET), Politecnico di Torino, 10129 Torino, Italy
- Marina Indri
- Dipartimento di Elettronica e Telecomunicazioni (DET), Politecnico di Torino, 10129 Torino, Italy
- Giacomo Nabissi
- Dipartimento di Ingegneria dell’Informazione (DII), Università Politecnica delle Marche, 60131 Ancona, Italy
- Fiorella Sibona
- Dipartimento di Elettronica e Telecomunicazioni (DET), Politecnico di Torino, 10129 Torino, Italy
44
Remmas W, Chemori A, Kruusmaa M. Diver tracking in open waters: A low-cost approach based on visual and acoustic sensor fusion. J Field Robot 2020. [DOI: 10.1002/rob.21999]
Affiliation(s)
- Walid Remmas
- Department of Computer Systems, Tallinn University of Technology, Tallinn, Estonia
- Laboratoire d'Informatique, de Robotique et de Micro-électronique de Montpellier, University of Montpellier, CNRS, Montpellier, France
- Ahmed Chemori
- Laboratoire d'Informatique, de Robotique et de Micro-électronique de Montpellier, University of Montpellier, CNRS, Montpellier, France
- Maarja Kruusmaa
- Department of Computer Systems, Tallinn University of Technology, Tallinn, Estonia
45
Zhao J, Gong S, Xie B, Duan Y, Zhang Z. Human arm motion prediction in human-robot interaction based on a modified minimum jerk model. Adv Robot 2020. [DOI: 10.1080/01691864.2020.1840432]
Affiliation(s)
- Jing Zhao
- College of Mechanical Engineering and Applied Electronics Technology, Beijing University of Technology, Beijing, People’s Republic of China
- Shiqiu Gong
- College of Mechanical Engineering and Applied Electronics Technology, Beijing University of Technology, Beijing, People’s Republic of China
- Biyun Xie
- Department of Electrical and Computer Engineering, University of Kentucky, USA
- Yaxing Duan
- College of Mechanical Engineering and Applied Electronics Technology, Beijing University of Technology, Beijing, People’s Republic of China
- Ziqiang Zhang
- College of Mechanical Engineering and Applied Electronics Technology, Beijing University of Technology, Beijing, People’s Republic of China
46
Vermesan O, Bahr R, Ottella M, Serrano M, Karlsen T, Wahlstrøm T, Sand HE, Ashwathnarayan M, Gamba MT. Internet of Robotic Things Intelligent Connectivity and Platforms. Front Robot AI 2020; 7:104. [PMID: 33501271] [PMCID: PMC7805974] [DOI: 10.3389/frobt.2020.00104]
Abstract
The Internet of Things (IoT) and Industrial IoT (IIoT) have developed rapidly in the past few years, as both the Internet and "things" have evolved significantly. "Things" now range from simple Radio Frequency Identification (RFID) devices to smart wireless sensors, intelligent wireless sensors and actuators, robotic things, and autonomous vehicles operating in consumer, business, and industrial environments. The emergence of "intelligent things" (static or mobile) in collaborative autonomous fleets requires new architectures, connectivity paradigms, trustworthiness frameworks, and platforms for the integration of applications across different business and industrial domains. These new applications accelerate the development of autonomous system design paradigms and the proliferation of the Internet of Robotic Things (IoRT). In IoRT, collaborative robotic things can communicate with other things, learn autonomously, interact safely with the environment, humans and other things, and gain qualities like self-maintenance, self-awareness, self-healing, and fail-operational behavior. IoRT applications can make use of the individual, collaborative, and collective intelligence of robotic things, as well as information from the infrastructure and operating context to plan, implement and accomplish tasks under different environmental conditions and uncertainties. The continuous, real-time interaction with the environment makes perception, location, communication, cognition, computation, connectivity, propulsion, and integration of federated IoRT and digital platforms important components of new-generation IoRT applications. This paper reviews the taxonomy of the IoRT, emphasizing the IoRT intelligent connectivity, architectures, interoperability, and trustworthiness framework, and surveys the technologies that enable the application of the IoRT across different domains to perform missions more efficiently, productively, and completely. 
The aim is to provide a novel perspective on the IoRT that involves communication among robotic things and humans and highlights the convergence of several technologies and interactions between different taxonomies used in the literature.
Affiliation(s)
- Martin Serrano
- Insight Centre for Data Analytics, National University of Ireland Galway, Galway, Ireland
47
WEEE Recycling and Circular Economy Assisted by Collaborative Robots. Applied Sciences 2020. [DOI: 10.3390/app10144800]
Abstract
Considering the amount of waste electrical and electronic equipment (WEEE) generated each year at an increasing rate, it is of crucial importance to develop circular economy solutions that prioritize reuse and recycling, as well as reducing the amount of waste disposed of in landfills. This paper analyses the evolution of WEEE collection and its recycling rate at the national and European levels. It also describes the regulatory framework and possible future government policy measures to foster a circular economy. Furthermore, it identifies the different parts and materials that can be recovered from the recycling process, with a special emphasis on plastics. Finally, it describes a recycling line designed for the dismantling of computer cathode ray tubes (CRTs) that combines an innovative participation of people and collaborative robots, which has led to an effective and efficient material recovery solution. The key to this human–robot collaboration is assigning to operators only the tasks that require human skills and sending all other tasks to robots. The first results from the model show better economic performance than current manual processes, mainly owing to the higher degree of separation of recovered materials, plastic in particular, thus reaching higher revenues. This collaboration also brings considerable additional benefits for the environment, through a higher recovery rate by weight, and for workers, who can make intelligent decisions in the factory and enjoy a safer working environment by avoiding the most dangerous tasks.
48
Xie H, Li G, Zhao X, Li F. Prediction of Limb Joint Angles Based on Multi-Source Signals by GS-GRNN for Exoskeleton Wearer. SENSORS 2020; 20:s20041104. [PMID: 32085505 PMCID: PMC7070277 DOI: 10.3390/s20041104] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/13/2020] [Revised: 02/11/2020] [Accepted: 02/12/2020] [Indexed: 12/03/2022]
Abstract
To enable exoskeleton wearers to walk on level ground, estimation of lower limb movement is essential: it allows the exoskeleton to follow the human movement in real time. In this paper, a general regression neural network optimized by the golden-section algorithm (GS-GRNN) is used to predict human lower limb joint angles. The hip joint angle and the surface electromyographic (sEMG) signals of the thigh muscles are taken as inputs to the neural network to predict the joint angles of the lower limbs. To improve prediction accuracy across different gait phases, plantar pressure signals are also added to the input, after which the error between the predicted and measured angles decreases significantly. Finally, compared with a BP neural network, the GRNN shows better prediction performance, with shorter processing time and higher prediction accuracy.
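The abstract above does not spell out the GS-GRNN, but a GRNN is equivalent to Nadaraya–Watson kernel regression with a single smoothing parameter sigma, which a golden-section search can tune against validation error. The sketch below is a minimal, self-contained illustration under that reading; the function names and synthetic data are hypothetical and are not the authors' implementation.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """General regression neural network: each prediction is a
    Gaussian-weighted average of the training targets."""
    # Squared Euclidean distances between every query and training sample
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))      # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)    # summation layer: weighted mean

def golden_section(f, lo, hi, tol=1e-4):
    """Golden-section search for the scalar minimizing f on [lo, hi],
    e.g. the sigma minimizing validation error of the GRNN."""
    phi = (np.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c                     # minimum lies in [a, d]
            c = b - phi * (b - a)
        else:
            a, c = c, d                     # minimum lies in [c, b]
            d = a + phi * (b - a)
    return (a + b) / 2
```

In the paper's setting the training inputs would be hip angle, thigh sEMG features, and plantar pressure, and the targets the lower limb joint angles; sigma would be chosen by running `golden_section` on a held-out validation error.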
Affiliation(s)
- Hualong Xie
- Department of Mechanical Engineering and Automation, Northeastern University, Shenyang 110819, China; (G.L.); (X.Z.)
- Correspondence:
- Guanchao Li
- Department of Mechanical Engineering and Automation, Northeastern University, Shenyang 110819, China; (G.L.); (X.Z.)
- Xiaofei Zhao
- Department of Mechanical Engineering and Automation, Northeastern University, Shenyang 110819, China; (G.L.); (X.Z.)
- Fei Li
- Department of Information Science and Engineering, Shenyang University of Technology, Shenyang 110870, China;