1
Oruma SO, Ayele YZ, Sechi F, Rødsethol H. Security Aspects of Social Robots in Public Spaces: A Systematic Mapping Study. Sensors (Basel, Switzerland) 2023; 23:8056. [PMID: 37836888] [PMCID: PMC10575183] [DOI: 10.3390/s23198056] [Received: 08/22/2023] [Revised: 09/11/2023] [Accepted: 09/20/2023]
Abstract
Background: As social robots increasingly integrate into public spaces, understanding their security implications becomes paramount. This study is conducted amidst the growing use of social robots in public spaces (SRPS), emphasising the need for security standards tailored to these unique robotic systems.
Methods: In this systematic mapping study (SMS), we review and analyse existing literature from the Web of Science database, following the guidelines of Petersen et al. We employ a structured approach to categorise and synthesise literature on SRPS security aspects, including physical safety, data privacy, cybersecurity, and legal/ethical considerations.
Results: Our analysis reveals a significant gap: existing safety standards, originally designed for industrial robots, need to be revised for SRPS. We propose a thematic framework consolidating essential security guidelines for SRPS, substantiated by evidence from a considerable share of the primary studies analysed.
Conclusions: The study underscores the urgent need for comprehensive, bespoke security standards and frameworks for SRPS. Such standards would ensure that SRPS operate securely and ethically, respecting individual rights and public safety, while fostering seamless integration into diverse human-centric environments. This work is poised to enhance public trust and acceptance of these robots, offering significant value to developers, policymakers, and the general public.
Affiliation(s)
- Samson Ogheneovo Oruma, Department of Computer Science and Communication, Østfold University College, 1757 Halden, Norway
- Yonas Zewdu Ayele, Department of Risk, Safety, and Security, Institute for Energy Technology, 1777 Halden, Norway
- Fabien Sechi, Department of Risk, Safety, and Security, Institute for Energy Technology, 1777 Halden, Norway
- Hanne Rødsethol, Department of Control Room and Interaction Design, Institute for Energy Technology, 1777 Halden, Norway
2
Accounting for Diversity in Robot Design, Testbeds, and Safety Standardization. Int J Soc Robot 2023. [DOI: 10.1007/s12369-023-00974-6]
Abstract
Science has started highlighting the importance of integrating diversity considerations into medicine and healthcare. However, there is little research into how these considerations apply to, affect, and should be integrated into concrete healthcare innovations such as rehabilitation robotics. Robot policy ecosystems are also oblivious to the vast landscape of gender identity, often ignoring these considerations and failing to guide developers in integrating them to ensure robots meet user needs. While this ignorance may stem from the traditionally heteronormative configuration of the medical, technical, and legal worlds, the end result is that roboticists fail to consider diversity in robot development. Missing diversity, equity, and inclusion considerations can result in robotic systems that compromise user safety, discriminate, and fail to respect users' fundamental rights. This paper explores the impact on users of overlooking gender and sex considerations in robot design. We focus on the safety standard for personal care robots, ISO 13482:2014, and zoom in on lower-limb exoskeletons. Our findings signal that ISO 13482:2014 has significant gaps concerning intersectional aspects such as sex, gender, age, and health conditions; because of that, developers are creating robot systems that, despite adherence to the standard, can still cause harm to users. In short, our observations show that robotic exoskeletons operate intimately with users' bodies, exemplifying how gender and medical conditions introduce dissimilarities in human–robot interaction that, as long as they remain ignored in regulations, may compromise user safety. We conclude by putting forward recommendations to update ISO 13482:2014 to better reflect the broad diversity of users of personal care robots.
3
An iterative regulatory process for robot governance. Data & Policy 2023. [DOI: 10.1017/dap.2023.3] [Open Access]
Abstract
There is an increasing gap between the speed of the policy cycle and that of technological and social change. This gap is becoming broader and more prominent in robotics, that is, movable machines that perform tasks either automatically or with a degree of autonomy. Current legislation was unprepared for machine learning and autonomous agents; as a result, the law often lags behind and does not adequately frame robot technologies. This state of affairs inevitably increases legal uncertainty. It is unclear which regulatory frameworks developers have to follow to comply, often resulting in technology that does not perform well in the wild, is unsafe, and can exacerbate biases and lead to discrimination. This paper explores these issues and considers the background, key findings, and lessons learned of the LIAISON project ("Liaising robot development and policymaking"), which aims to ideate an alignment model for the legal appraisal of robots, channelling robot policy development from a hybrid top-down/bottom-up perspective to resolve this mismatch. As such, LIAISON seeks to uncover to what extent compliance tools could be used as data generators for robot policy purposes, to unravel an optimal regulatory framing for existing and emerging robot technologies.
4
A Systematic Review on Social Robots in Public Spaces: Threat Landscape and Attack Surface. Computers 2022. [DOI: 10.3390/computers11120181]
Abstract
There is a growing interest in using social robots in public spaces for indoor and outdoor applications, and the associated threat landscape is an important research area being investigated and debated by various stakeholders.
Objectives: This study aims to identify and synthesize empirical research on the complete threat landscape of social robots in public spaces. Specifically, it identifies potential threat actors, their motives for attacks, vulnerabilities, attack vectors, potential impacts of attacks, possible attack scenarios, and mitigations to these threats.
Methods: This systematic literature review follows the guidelines of Kitchenham and Charters. The search was conducted in five digital databases, retrieving 1469 studies, of which 21 satisfied the selection criteria and were analyzed.
Results: The main findings reveal four threat categories: cybersecurity, social, physical, and public space.
Conclusion: This study captures the complexity of the transdisciplinary problem of social robot security and privacy while accommodating the diversity of stakeholders' perspectives. The findings give researchers and other stakeholders a comprehensive view by highlighting current developments and new research directions in this field. The study also proposes a taxonomy of threat actors and of the threat landscape of social robots in public spaces.
5
Chen Y, Luo Y, Hu B. Towards Next Generation Cleaning Tools: Factors Affecting Cleaning Robot Usage and Proxemic Behaviors Design. Frontiers in Electronics 2022. [DOI: 10.3389/felec.2022.895001] [Open Access]
Abstract
Among all healthcare sectors and working processes, the janitorial sector is a prominent source of work-related injuries due to its labor-intensive nature and the rising need for a hygienic environment, and thus requires extra attention in prevention strategies. Advances in robotic technology have made autonomous cleaning robots a viable solution to ease the burden on janitors. To evaluate the application of commercial-grade cleaning robots, a video-based survey was developed and distributed to participants. Results from 117 participants revealed that: 1) participants were less tolerant when their personal space was invaded by humans than by the cleaning robot; 2) it is better to inform surrounding humans that the cleaning robot has been sanitized, to make them feel safe and comfortable during the pandemic; and 3) to make the interaction more socially acceptable, the cleaning robot should respect human personal space, especially when there is ample room to maneuver. These findings provide insight into the usage and proxemic behavior design of future cleaning robots.
6
Stange S, Hassan T, Schröder F, Konkol J, Kopp S. Self-Explaining Social Robots: An Explainable Behavior Generation Architecture for Human-Robot Interaction. Front Artif Intell 2022; 5:866920. [PMID: 35573901] [PMCID: PMC9106388] [DOI: 10.3389/frai.2022.866920] [Received: 01/31/2022] [Accepted: 04/01/2022] [Open Access]
Abstract
In recent years, the ability of intelligent systems to be understood by developers and users has received growing attention. This holds in particular for social robots, which are supposed to act autonomously in the vicinity of human users and are known to raise peculiar, often unrealistic attributions and expectations. However, explainable models that, on the one hand, allow a robot to generate lively and autonomous behavior and, on the other, enable it to provide human-compatible explanations for this behavior are missing. To develop such a self-explaining autonomous social robot, we have equipped a robot with its own needs that autonomously trigger intentions and proactive behavior, and that form the basis for understandable self-explanations. Previous research has shown that undesirable robot behavior is rated more positively after an explanation is received. We thus aim to equip a social robot with the capability to automatically generate verbal explanations of its own behavior by tracing its internal decision-making routes. The goal is to generate social robot behavior in a way that is generally interpretable, and therefore explainable on a socio-behavioral level, increasing users' understanding of the robot's behavior. In this article, we present a social robot interaction architecture designed to autonomously generate social behavior and self-explanations. We set out requirements for explainable behavior generation architectures and propose a socio-interactive framework for behavior explanations in social human-robot interactions that enables explaining and elaborating according to users' needs for explanation as they emerge within an interaction. We then introduce an interactive explanation dialog flow concept that incorporates empirically validated explanation types. These concepts are realized within the interaction architecture of a social robot and integrated with its dialog processing modules. We present the components of this interaction architecture and explain their integration to autonomously generate social behaviors as well as verbal self-explanations. Lastly, we report results from a qualitative evaluation of a working prototype in a laboratory setting, showing that (1) the robot is able to autonomously generate naturalistic social behavior, and (2) the robot is able to verbally self-explain its behavior to the user in line with users' requests.
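The need-driven self-explanation idea in this abstract can be illustrated with a toy sketch. All names, the winner-takes-all selection rule, and the explanation template below are hypothetical simplifications for illustration, not the authors' architecture: an agent tracks internal needs, lets the most urgent one trigger a behavior, records the decision route, and verbalizes it on request.

```python
from dataclasses import dataclass


@dataclass
class Need:
    name: str        # e.g. "social contact"
    level: float     # urgency in [0, 1]
    behavior: str    # behavior triggered when this need wins


class SelfExplainingAgent:
    """Toy need-driven behavior selector that records its decision
    route so it can verbalize why it acted (illustrative only)."""

    def __init__(self, needs):
        self.needs = needs
        self.trace = []  # decision route: (winning need, behavior)

    def step(self):
        # Intention selection: the most urgent need wins.
        winner = max(self.needs, key=lambda n: n.level)
        self.trace.append((winner.name, winner.behavior))
        winner.level = 0.0  # acting on the need satisfies it
        return winner.behavior

    def explain_last(self):
        # Self-explanation generated from the recorded decision route.
        need, behavior = self.trace[-1]
        return f"I chose to {behavior} because my need for {need} was highest."


agent = SelfExplainingAgent([
    Need("social contact", 0.9, "greet the visitor"),
    Need("rest", 0.3, "return to the charging dock"),
])
agent.step()
print(agent.explain_last())
# → I chose to greet the visitor because my need for social contact was highest.
```

The point of the sketch is only that the explanation is derived from the same internal trace that produced the behavior, which is the core property the architecture above requires.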
Affiliation(s)
- Sonja Stange (corresponding author), Social Cognitive Systems Group, Faculty of Technology, Bielefeld University, Bielefeld, Germany
- Teena Hassan, Robotics Group, Faculty 3 – Mathematics and Computer Science, University of Bremen, Bremen, Germany
- Florian Schröder, Social Cognitive Systems Group, Faculty of Technology, Bielefeld University, Bielefeld, Germany
- Jacqueline Konkol, Social Cognitive Systems Group, Faculty of Technology, Bielefeld University, Bielefeld, Germany
- Stefan Kopp, Social Cognitive Systems Group, Faculty of Technology, Bielefeld University, Bielefeld, Germany
7
Paez-Granados D, Billard A. Crash test-based assessment of injury risks for adults and children when colliding with personal mobility devices and service robots. Sci Rep 2022; 12:5285. [PMID: 35347216] [PMCID: PMC8960768] [DOI: 10.1038/s41598-022-09349-9] [Received: 10/09/2021] [Accepted: 03/22/2022] [Open Access]
Abstract
Autonomous mobility devices such as transport, cleaning, and delivery robots hold massive economic and social benefits. However, their deployment should not endanger bystanders, particularly vulnerable populations such as children and older adults, who are inherently smaller and more fragile. This study compared the risks faced by different pedestrian categories, determining risks through crash testing in which a service robot hit an adult and a child dummy. Collisions at 3.1 m/s (11.1 km/h / 6.9 mph) showed risks of serious head (14%), neck (20%), and chest (50%) injuries in children, and of tibia fracture (33%) in adults. Furthermore, secondary impact analysis showed both populations at risk of severe head injuries from falling to the ground. Our data and simulations show mitigation strategies for reducing impact injury risks below 5%, either by lowering the differential speed at impact below 1.5 m/s (5.4 km/h / 3.3 mph) or through the use of absorbent materials. The results presented herein may inform the design of controllers, sensing awareness, and assessment methods for the standardization of robots and small vehicles, as well as policymaking and regulations for the speed, design, and usage of these devices in populated areas.
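The speed-mitigation finding can be related to basic impact physics: kinetic energy scales with the square of speed, so lowering the differential speed from the tested 3.1 m/s to the recommended 1.5 m/s removes roughly three quarters of the impact energy. A minimal sketch of that ratio (this is textbook physics, not the paper's injury-risk model, and says nothing about how energy maps to injury probability):

```python
def impact_energy_reduction(v_orig, v_reduced):
    """Fraction of kinetic energy removed by lowering the impact speed.

    KE = 0.5 * m * v**2, so in the ratio the mass cancels and only
    the squared speed ratio remains.
    """
    return 1.0 - (v_reduced / v_orig) ** 2


# Speeds from the study: 3.1 m/s tested, below 1.5 m/s recommended.
reduction = impact_energy_reduction(3.1, 1.5)
print(f"{reduction:.0%}")  # → 77%
```

The quadratic dependence is why even a modest speed cap is such an effective mitigation compared with, say, adding padding of fixed thickness.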
Affiliation(s)
- Diego Paez-Granados, Swiss Federal Institute of Technology in Lausanne (EPFL), Institutes of Microengineering and Mechanical Engineering, 1015 Lausanne, Switzerland
- Aude Billard, Swiss Federal Institute of Technology in Lausanne (EPFL), Institutes of Microengineering and Mechanical Engineering, 1015 Lausanne, Switzerland
8
Application of an adapted FMEA framework for robot-inclusivity of built environments. Sci Rep 2022; 12:3408. [PMID: 35233018] [PMCID: PMC8888750] [DOI: 10.1038/s41598-022-06902-4] [Received: 10/22/2021] [Accepted: 02/08/2022] [Open Access]
Abstract
Mobile robots are being deployed in the built environment at increasing rates. However, a lack of consideration for robot-inclusive planning has led to physical spaces that can pose hazards to robots and contribute to an overall productivity decline for mobile service robots. This research proposes an adapted Failure Mode and Effects Analysis (FMEA) as a structured tool to evaluate a building's level of robot-inclusivity and safety for service robot deployments. The Robot-Inclusive FMEA (RIFMEA) framework is used to identify failures in the built environment that compromise the workflow of service robots, assess their effects and causes, and provide recommended actions to alleviate these problems. The method is supported by a case study of deploying telepresence robots on a university campus. The study concluded that common failures were related to poor furniture design, a lack of clearance and hazard indicators, and sub-optimal interior planning.
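Classic FMEA ranks failure modes by a risk priority number, RPN = severity × occurrence × detection, each factor typically scored on a 1–10 scale. A minimal sketch of that ranking step with hypothetical built-environment failure modes (illustrative only; the paper's adapted RIFMEA scales and case-study data are not reproduced here):

```python
def rpn(severity, occurrence, detection):
    """Classic FMEA risk priority number: the product of the three
    1-10 ratings, used to rank failure modes for mitigation."""
    return severity * occurrence * detection


# Hypothetical failure modes a mobile service robot might meet indoors:
# (description, severity, occurrence, detection)
failure_modes = [
    ("Glass partition invisible to lidar", 8, 5, 7),
    ("Door threshold too high for wheels", 6, 7, 4),
    ("No signage reserving the robot's path", 4, 8, 3),
]

# Highest RPN first: these failure modes get mitigation priority.
ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for name, s, o, d in ranked:
    print(f"RPN {rpn(s, o, d):3d}  {name}")
```

In an RIFMEA-style assessment the recommended actions (e.g. marking glass surfaces, lowering thresholds) would then target the highest-ranked modes first.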
9
Harnessing robot experimentation to optimize the regulatory framing of emerging robot technologies. Data & Policy 2022. [DOI: 10.1017/dap.2022.12] [Open Access]
Abstract
From exoskeletons to lightweight robotic suits, wearable robots are changing dynamically and rapidly, challenging the timeliness of laws and regulatory standards that were not prepared for robots that would help wheelchair users walk again. In this context, equipping regulators with technical knowledge of these technologies could resolve information asymmetries between developers and policymakers and avoid the problem of regulatory disconnection. This article introduces Pushing Robot Development for Lawmaking (PROPELLING), a financial support to third parties of the Horizon 2020 EUROBENCH project, which explores how robot testing facilities could generate policy-relevant knowledge and support optimized regulations for robot technologies. With ISO 13482:2014 as a case study, PROPELLING investigates how robot testbeds could be used as data generators to improve the regulation of lower-limb exoskeletons. Specifically, the article discusses how robot testbeds could help regulators tackle hazards such as fear of falling and instability in collisions, and define safe scenarios that avoid adverse consequences of abrupt protective stops. The article's central point is that testbeds offer a promising setting for bringing policymakers closer to research and development, making policies more attuned to societal needs. These approximations can be harnessed to unravel an optimal regulatory framework for emerging technologies, such as robots and artificial intelligence, based on science and evidence.