1. Kokorin K, Mu J, John SE, Grayden DB. Predictive Shared Control of Robotic Arms Using Simulated Brain-Computer Interface Inputs. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-5. PMID: 38082602. DOI: 10.1109/embc40787.2023.10340222.
Abstract
Low decoding accuracy makes brain-computer interface (BCI) control of a robotic arm difficult. Shared control (SC) can overcome limitations of a BCI by leveraging external sensor data and generating commands to assist the user. Our study explored whether reaching targets with a robot end-effector was easier using SC rather than direct control (DC). We simulated a motor imagery BCI using a joystick with noise introduced to explicitly control interface accuracy to be 65% or 79%. Compared to DC, our prediction-based implementation of SC led to a significant reduction in the trajectory length of successful reaches for 4 (3) out of 5 targets using the 65% (79%) accurate interface, with failure rates being equivalent to DC for 2 (1) out of 5 targets. Therefore, this implementation of SC is likely to improve reaching efficiency but at the cost of more failures. Additionally, the NASA Task Load Index results suggest SC reduced user workload. Clinical relevance: Shared control can minimise the impact of BCI decoder errors on robot motion, making robotic arm control using noninvasive BCIs more viable.
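The trade-off between direct and shared control studied above can be illustrated with a standard linear blending rule. This is a generic sketch, not the authors' predictive controller; the function name and the fixed blend weight are assumptions.

```python
# Linear shared-control blending (generic sketch, not the paper's method):
# the executed command is a convex combination of the user's noisy BCI
# command and an autonomous command toward the predicted target.

def blend(user_cmd, auto_cmd, alpha):
    """Blend 2D commands: alpha = 0 is direct control, alpha = 1 is full autonomy."""
    return tuple(alpha * a + (1.0 - alpha) * u
                 for u, a in zip(user_cmd, auto_cmd))

# Example: the noisy user command points right; assistance points toward the target.
cmd = blend(user_cmd=(1.0, 0.0), auto_cmd=(0.0, 1.0), alpha=0.5)
```

With alpha fixed at 0.5 the robot splits the difference; a predictive implementation would instead raise alpha as confidence in the inferred target grows.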
2. Dimova-Edeleva V, Rivera OS, Laha R, Figueredo LFC, Zavaglia M, Haddadin S. Error-related Potentials in a Virtual Pick-and-Place Experiment: Toward Real-world Shared-control. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-7. PMID: 38083754. DOI: 10.1109/embc40787.2023.10340244.
Abstract
In a human-robot collaboration setting, a robot may be controlled by a user directly or through a brain-computer interface that detects user intention, or it may act as an autonomous agent. As such interaction grows in complexity, conflicts become inevitable. Goal conflicts can arise from different sources, for instance, interface mistakes related to misinterpretation of the human's intention, or failures of the autonomous system to meet the task and the human's expectations. Such conflicts evoke distinct spontaneous responses in the human brain, which could be used to regulate intrinsic task parameters and to improve the system's response to errors, leading to improved transparency, performance, and safety. To study the possibility of detecting interface and agent errors, we designed a virtual pick-and-place task with sequential human and robot responsibility and recorded the electroencephalography (EEG) activity of six participants. In the virtual environment, the robot either received a command from the participants through a computer keyboard or moved as an autonomous agent. In both cases, artificial errors were defined to occur in 20%-25% of the trials. We found differences in the responses to interface and agent errors. From the EEG data, correct trials, interface errors, and agent errors were correctly predicted for 51.62% ± 9.99% (chance level 38.21%) of the pick movements and 46.84% ± 6.62% (chance level 36.99%) of the place movements in a pseudo-asynchronous fashion. Our study suggests that, in a human-robot collaboration setting, intention detection and autonomous modes may improve the future performance of a system. Specific examples could be neural interfaces that replace and restore motor functions.
3. Xavier Macedo de Azevedo F, Heimgärtner R, Nebe K. Development of a metric to evaluate the ergonomic principles of assistive systems, based on the DIN 92419. Ergonomics 2023; 66:821-848. PMID: 36137226. DOI: 10.1080/00140139.2022.2127920.
Abstract
The DIN 92419 defines six principles for assistive systems' ergonomic design. There is, however, a lack of measurement tools to evaluate assistive systems considering these principles. Consequently, this study developed a measurement tool for the quantitative evaluation of the fulfilment of each principle for assistive systems. A systematic literature review was performed to identify dimensions belonging to the principles, identify how previous research evaluated these dimensions, and develop a measurement tool for assistive systems. Findings show that scales commonly used for evaluating assistive systems disregard several aspects highlighted as relevant by research, implying the need for considering the DIN 92419 principles. Based on established scales and theoretical findings, a questionnaire and a checklist for evaluating assistive systems were developed. The work provides a grounding for measuring relevant aspects of assistive systems. Further development is needed to substantiate the reliability and validity of the proposed questionnaire scales and items. Practitioner Summary: To address the lack of a holistic measurement tool for evaluating assistive systems, a systematic literature review was performed considering the DIN 92419 principles. This resulted in a comprehensive summary of relevant aspects of assistive systems that were made numerically measurable, providing better criteria to assess assistive systems.
Abbreviations: IoT: internet of things; RQ: research question; TAM: technology acceptance model; UTAUT: unified theory of acceptance and use of technology; AaaS: adaptivity as a service; SAR: socially assistive robots; SEEV: salience, effort, expectancy, and value; PRISMA: preferred reporting items for systematic reviews and meta-analyses; HMI: human-machine interaction; HRI: human-robot interaction; BCI: brain-computer interface; QUEST: Quebec user evaluation of satisfaction with assistive technology; SUS: system usability scale; NASA-TLX: NASA task load index; ATD PA: assistive technology device predisposition assessment; WheelCon: wheelchair use confidence scale; CATOM: caregiver assistive technology outcome measure; CBI: caregiver burden inventory; RoSAS: robotic social attributes scale; IMI: intrinsic motivation inventory; UEQ: user experience questionnaire; USEUQ: usefulness, satisfaction, and ease of use questionnaire; USPW: usability scale for power wheelchairs; UES: user engagement scale; SUTAQ: service user technology acceptability questionnaire; QUEAD: questionnaire for the evaluation of physical assistive devices; FATCAT: functional assessment tool for cognitive assistive technology; SE-HRI: human-robot interaction scale; SART: situation awareness rating technique; TSQ-WT: tele-healthcare satisfaction questionnaire for wearable technology; PAIF: participants' assessment of the intervention's feasibility; SWAT: subjective workload assessment technique; MARS-HA: measure of audiologic rehabilitation self-efficacy for hearing aids; IOI-HA: international outcome inventory for hearing aids; FMA: functional mobility assessment; FBIS: familiarity and behavioural intention survey; CSQ: client satisfaction questionnaire; COPM: Canadian occupational performance measure; ATCS: assistive technology confidence scale; ACC: acceptance; SSP: safety, security and privacy; OPT: optimisation of resultant internal load; CTRL: controllability; ADAPT: adaptability; P&I: perceptibility and identifiability; AAL: ambient assisted living; VR: virtual reality; AS: assistive system; WEIRD: Western, educated, industrialised, rich, and democratic; HEART: horizontal European activities in rehabilitation technology; AAATE: Association for the Advancement of Assistive Technology in Europe; GATE: global cooperation on assistive technology; ATA-C: assistive technology assessment toolkit.
Affiliation(s)
- Rüdiger Heimgärtner - Faculty of Communication and Environment, Rhine-Waal University of Applied Sciences, Kamp-Lintfort, Germany
- Karsten Nebe - Faculty of Communication and Environment, Rhine-Waal University of Applied Sciences, Kamp-Lintfort, Germany
4. Wang X, Chen HT, Lin CT. Error-related potential-based shared autonomy via deep recurrent reinforcement learning. J Neural Eng 2022; 19. PMID: 36541532. DOI: 10.1088/1741-2552/aca4fb.
Abstract
Objective. Error-related potential (ErrP)-based brain-computer interfaces (BCIs) have received a considerable amount of attention in the human-robot interaction community. In contrast to traditional BCI, which requires continuous and explicit commands from an operator, ErrP-based BCI leverages the ErrP, which is evoked when an operator observes unexpected behaviours from the robot counterpart. This paper proposes a novel shared autonomy model for ErrP-based human-robot interaction. Approach. We incorporate ErrP information provided by a BCI as useful observations for an agent and formulate the shared autonomy problem as a partially observable Markov decision process. A recurrent neural network-based actor-critic model is used to address the uncertainty in the ErrP signal. We evaluate the proposed framework in a simulated human-in-the-loop robot navigation task with both simulated users and real users. Main results. The results show that the proposed ErrP-based shared autonomy model enables an autonomous robot to complete navigation tasks more efficiently. In a simulation with 70% ErrP accuracy, agents completed the task 14.1% faster than in the no-ErrP condition, while with real users, agents completed the navigation task 14.9% faster. Significance. The evaluation results confirmed that shared autonomy via deep recurrent reinforcement learning is an effective way to deal with uncertain human feedback in a complex human-robot interaction task.
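The core move of treating noisy ErrP detections as observations in a partially observable setting can be illustrated with a far simpler stand-in than the paper's recurrent actor-critic: a Bayesian update of the belief that the robot's current action is wrong, given a binary ErrP detector of known accuracy. The update rule, function name, and 0.7 accuracy default are assumptions for illustration (0.7 echoes the 70% simulation setting in the abstract).

```python
# Bayesian belief update over "current action is wrong" from binary ErrP
# detections (a simplified stand-in, not the paper's deep RL model).

def update_belief(belief, errp_detected, p=0.7):
    """belief: P(action is wrong). The ErrP detector fires with accuracy p."""
    like_wrong = p if errp_detected else (1.0 - p)      # P(obs | wrong)
    like_right = (1.0 - p) if errp_detected else p      # P(obs | right)
    post = like_wrong * belief
    return post / (post + like_right * (1.0 - belief))

b = 0.5
for obs in (True, True, False):   # two ErrP detections, then none
    b = update_belief(b, obs)
```

Two detections followed by one non-detection cancel to a single detection's worth of evidence, leaving the belief at 0.7; a recurrent policy learns a richer version of this accumulation directly from reward.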
Affiliation(s)
- Xiaofei Wang - Computational Intelligence and Brain Computer Interface Lab, School of Computer Science, University of Technology Sydney, NSW 2007, Australia
- Hsiang-Ting Chen - School of Computer Science, University of Adelaide, Adelaide, SA 5005, Australia
- Chin-Teng Lin - Computational Intelligence and Brain Computer Interface Lab, School of Computer Science, University of Technology Sydney, NSW 2007, Australia
5. Dimova-Edeleva V, Ehrlich SK, Cheng G. Brain computer interface to distinguish between self and other related errors in human agent collaboration. Sci Rep 2022; 12:20764. PMID: 36456595. PMCID: PMC9715724. DOI: 10.1038/s41598-022-24899-8.
Abstract
When a human and machine collaborate on a shared task, ambiguous events might occur that could be perceived as an error by the human partner. In such events, spontaneous error-related potentials (ErrPs) are evoked in the human brain. Knowing whom the human perceived as responsible for the error would help a machine in co-adaptation and shared control paradigms to better adapt to human preferences. Therefore, we ask whether self- and agent-related errors evoke different ErrPs. Eleven subjects participated in an electroencephalography human-agent collaboration experiment with a collaborative trajectory-following task on two collaboration levels, where movement errors occurred as trajectory deviations. Independently of the collaboration level, we observed a higher amplitude of the responses on the midline central Cz electrode for self-related errors compared to observed errors made by the agent. On average, support vector machines classified self- and agent-related errors with 72.64% accuracy using subject-specific features. These results demonstrate that ErrPs can tell whether a person relates an error to themselves or to an external autonomous agent during collaboration. A collaborative machine can thus receive more informed feedback on error attribution, allowing appropriate error identification, correction, and avoidance in future actions.
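The paper classifies self- versus agent-related errors from subject-specific EEG features with a support vector machine. As a dependency-free stand-in for that idea, the sketch below uses a nearest-centroid rule on toy two-dimensional "features" (a Cz amplitude and a latency measure); the feature names, values, and classifier choice are invented for illustration only.

```python
# Nearest-centroid classification of toy "self" vs "agent" error features
# (an illustrative stand-in for the paper's subject-specific SVM).

def nearest_centroid(train, sample):
    """train: {label: [feature vectors]}; returns the label of the closest centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    centroids = {
        lab: [sum(col) / len(vs) for col in zip(*vs)]
        for lab, vs in train.items()
    }
    return min(centroids, key=lambda lab: dist2(centroids[lab], sample))

train = {
    "self":  [(8.0, 0.30), (9.0, 0.28)],   # higher Cz amplitude (toy values)
    "agent": [(4.0, 0.35), (5.0, 0.33)],
}
label = nearest_centroid(train, (8.5, 0.29))
```

The Cz-amplitude difference reported in the abstract is exactly the kind of feature separation such a classifier exploits.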
Affiliation(s)
- Viktorija Dimova-Edeleva - Munich Institute of Robotics and Machine Intelligence (MIRMI), Technical University of Munich, Munich, Germany
- Stefan K. Ehrlich - TUM School of Computation, Information and Technology, Department of Computer Engineering, Institute of Cognitive Systems, Technical University of Munich, Munich, Germany
- Gordon Cheng - TUM School of Computation, Information and Technology, Department of Computer Engineering, Institute of Cognitive Systems, Technical University of Munich, Munich, Germany
6. Manual 3D Control of an Assistive Robotic Manipulator Using Alpha Rhythms and an Auditory Menu: A Proof-of-Concept. Signals 2022. DOI: 10.3390/signals3020024.
Abstract
Brain–Computer Interfaces (BCIs) have been regarded as potential tools for individuals with severe motor disabilities, such as those with amyotrophic lateral sclerosis, that render interfaces that rely on movement unusable. This study aims to develop a dependent BCI system for manual end-point control of a robotic arm. A proof-of-concept system was devised using parieto-occipital alpha wave modulation and a cyclic menu with auditory cues. Users choose a movement to be executed and asynchronously stop said action when necessary. Tolerance intervals allowed users to cancel or confirm actions. Eight able-bodied subjects used the system to perform a pick-and-place task. To investigate the potential learning effects, the experiment was conducted twice over the course of two consecutive days. Subjects obtained satisfactory completion rates (84.0 ± 15.0% and 74.4 ± 34.5% for the first and second day, respectively) and high path efficiency (88.9 ± 11.7% and 92.2 ± 9.6%). Subjects took on average 439.7 ± 203.3 s to complete each task, but the robot was only in motion 10% of the time. There was no significant difference in performance between the two days. The developed control scheme provided users with intuitive control, but a considerable amount of time is spent waiting for the right target (auditory cue). Incorporating other brain signals may increase its speed.
7. Computer Vision-Based Adaptive Semi-Autonomous Control of an Upper Limb Exoskeleton for Individuals with Tetraplegia. Appl Sci (Basel) 2022. DOI: 10.3390/app12094374.
Abstract
We propose the use of computer vision for adaptive semi-autonomous control of an upper limb exoskeleton for assisting users with severe tetraplegia to increase independence and quality of life. A tongue-based interface was used together with the semi-autonomous control such that individuals with complete tetraplegia were able to use it despite being paralyzed from the neck down. The semi-autonomous control uses computer vision to detect nearby objects and estimate how to grasp them to assist the user in controlling the exoskeleton. Three control schemes were tested: non-autonomous (i.e., manual control using the tongue) control, semi-autonomous control with a fixed level of autonomy, and a semi-autonomous control with a confidence-based adaptive level of autonomy. Studies on experimental participants with and without tetraplegia were carried out. The control schemes were evaluated both in terms of their performance, such as the time and number of commands needed to complete a given task, as well as ratings from the users. The studies showed a clear and significant improvement in both performance and user ratings when using either of the semi-autonomous control schemes. The adaptive semi-autonomous control outperformed the fixed version in some scenarios, namely, in the more complex tasks and with users with more training in using the system.
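The confidence-based adaptive level of autonomy described above can be sketched with one plausible arbitration rule. The thresholds and the linear ramp below are assumptions, not the published controller: below a low confidence threshold the user keeps manual (tongue) control, above a high threshold the robot acts autonomously, and in between autonomy scales linearly with the vision system's confidence.

```python
# Confidence-to-autonomy arbitration (hypothetical rule; thresholds are
# assumptions, not the values used in the paper).

def autonomy_level(confidence, lo=0.4, hi=0.9):
    """Map vision confidence in [0, 1] to an autonomy level in [0, 1]."""
    if confidence <= lo:
        return 0.0                    # fully manual control
    if confidence >= hi:
        return 1.0                    # fully autonomous grasp assistance
    return (confidence - lo) / (hi - lo)  # linear ramp in between
```

A fixed level of autonomy corresponds to returning a constant instead; the paper's finding is that the adaptive variant helps most in complex tasks and with trained users.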
8. Fontaine MC, Nikolaidis S. Evaluating Human-Robot Interaction Algorithms in Shared Autonomy via Quality Diversity Scenario Generation. ACM Trans Hum-Robot Interact 2022. DOI: 10.1145/3476412.
Abstract
The growth of scale and complexity of interactions between humans and robots highlights the need for new computational methods to automatically evaluate novel algorithms and applications. Exploring diverse scenarios of humans and robots interacting in simulation can improve understanding of the robotic system and avoid potentially costly failures in real-world settings. We formulate this problem as a quality diversity (QD) problem, where the goal is to discover diverse failure scenarios by simultaneously exploring both environments and human actions. We focus on the shared autonomy domain, where the robot attempts to infer the goal of a human operator, and adopt the QD algorithms CMA-ME and MAP-Elites to generate scenarios for two published algorithms in this domain: shared autonomy via hindsight optimization and linear policy blending. Some of the generated scenarios confirm previous theoretical findings, while others are surprising and bring about a new understanding of state-of-the-art implementations. Our experiments show that the QD algorithms CMA-ME and MAP-Elites outperform Monte-Carlo simulation and optimization based methods in effectively searching the scenario space, highlighting their promise for automatic evaluation of algorithms in human-robot interaction.
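MAP-Elites, one of the QD algorithms adopted above, keeps the best solution found in each cell of a discretized behaviour space, so the archive ends up both diverse and locally optimized. The following is a generic, self-contained sketch on a toy one-dimensional problem (the objective, behaviour descriptor, and mutation scheme are assumptions), not the authors' scenario generator.

```python
# Minimal MAP-Elites loop (generic sketch): archive maps a behaviour-space
# cell to the fittest solution seen in that cell.
import random

def map_elites(evaluate, describe, n_iters=2000, n_bins=10, seed=0):
    rng = random.Random(seed)
    archive = {}                            # cell index -> (fitness, solution)
    for _ in range(n_iters):
        if archive and rng.random() < 0.8:  # mutate a stored elite
            _, parent = rng.choice(list(archive.values()))
            x = min(1.0, max(0.0, parent + rng.gauss(0, 0.1)))
        else:                               # random restart
            x = rng.random()
        fit, desc = evaluate(x), describe(x)
        cell = min(int(desc * n_bins), n_bins - 1)
        if cell not in archive or fit > archive[cell][0]:
            archive[cell] = (fit, x)
    return archive

# Toy problem: fitness peaks at x = 0.7; the descriptor is x itself, so the
# archive spreads elites across the whole [0, 1) range rather than
# collapsing onto the single optimum.
archive = map_elites(lambda x: -abs(x - 0.7), lambda x: x)
```

In the scenario-generation setting, "fitness" would measure how badly the shared-autonomy algorithm fails and the descriptor would capture scenario properties, so the archive collects diverse failure modes.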
9. Lin TC, Krishnan AU, Li Z. Intuitive, Efficient and Ergonomic Tele-Nursing Robot Interfaces: Design Evaluation and Evolution. ACM Trans Hum-Robot Interact 2022. DOI: 10.1145/3526108.
Abstract
Tele-nursing robots provide a safe approach for patient-caring in quarantine areas. For effective nurse-robot collaboration, ergonomic teleoperation and intuitive interfaces with low physical and cognitive workload must be developed. We propose a framework to evaluate the control interfaces to iteratively develop an intuitive, efficient, and ergonomic teleoperation interface. The framework is a hierarchical procedure that incorporates general to specific assessment and its role in design evolution. We first present pre-defined objective and subjective metrics used to evaluate three representative contemporary teleoperation interfaces. The results indicate that teleoperation via human motion mapping outperforms the gamepad and stylus interfaces. The trade-off with using motion mapping as a teleoperation interface is the non-trivial physical fatigue. To understand the impact of heavy physical demand during motion mapping teleoperation, we propose an objective assessment of physical workload in teleoperation using electromyography (EMG). We find that physical fatigue happens in the actions that involve precise manipulation and steady posture maintenance. We further implemented teleoperation assistance in the form of shared autonomy to eliminate the fatigue-causing component in robot teleoperation via motion mapping. The experimental results show that the autonomous feature effectively reduces the physical effort while improving the efficiency and accuracy of the teleoperation interface.
Affiliation(s)
- Tsung-Chi Lin - Worcester Polytechnic Institute, Robotics Engineering
- Zhi Li - Worcester Polytechnic Institute, Robotics Engineering
10. Tao L, Bowman M, Zhou X, Zhang J, Zhang X. Learn and Transfer Knowledge of Preferred Assistance Strategies in Semi-Autonomous Telemanipulation. J Intell Robot Syst 2022. DOI: 10.1007/s10846-022-01596-2.
11. Iregui S, De Schutter J, Aertbelien E. Reconfigurable Constraint-Based Reactive Framework for Assistive Robotics With Adaptable Levels of Autonomy. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3098950.
12. Costi L, Scimeca L, Maiolino P, Lalitharatne TD, Nanayakkara T, Hashem R, Iida F. Comparative Analysis of Model-Based Predictive Shared Control for Delayed Operation in Object Reaching and Recognition Tasks With Tactile Sensing. Front Robot AI 2021; 8:730946. PMID: 34738017. PMCID: PMC8562425. DOI: 10.3389/frobt.2021.730946.
Abstract
Communication delay represents a fundamental challenge in telerobotics: on one hand, it compromises the stability of teleoperated robots; on the other hand, it decreases the user's awareness of the designated task. In the scientific literature, this problem has been addressed both with statistical models and with neural networks (NNs) performing sensor prediction, while keeping the user in full control of the robot's motion. We propose shared control as a tool to compensate and mitigate the effects of communication delay. Shared control has been proven to enhance precision and speed in reaching and manipulation tasks, especially in the medical and surgical fields. We analyse the effects of added delay and propose a unilateral teleoperated leader-follower architecture that implements both a predictive system and shared control, in a 1-dimensional reaching and recognition task with haptic sensing. We propose four different control modalities of increasing autonomy: non-predictive human control (HC), predictive human control (PHC), (shared) predictive human-robot control (PHRC), and predictive robot control (PRC). When analyzing how the added delay affects the subjects' performance, the results show that HC is very sensitive to the delay: users are not able to stop at the desired position and trajectories exhibit wide oscillations. The degree of autonomy introduced is shown to be effective in decreasing the total time required to accomplish the task. Furthermore, we provide a detailed analysis of environmental interaction forces and executed trajectories. Overall, the shared control modality, PHRC, represents a good trade-off, having peak performance in accuracy and task time, a good reaching speed, and moderate contact with the object of interest.
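Sensor prediction to mask communication delay can be sketched, under stated assumptions, with the simplest possible predictor: linear extrapolation from the last two delayed samples to a point d steps ahead. The paper compares statistical and neural-network predictors; this stand-in only illustrates the role a predictor plays in the architecture.

```python
# Linear extrapolation as a toy delay-compensating predictor (an
# illustrative assumption, not the paper's predictive model).

def extrapolate(history, d):
    """Predict the signal value d steps after the last sample in history."""
    if len(history) < 2:
        return history[-1]              # not enough data: hold last value
    slope = history[-1] - history[-2]   # one-step finite difference
    return history[-1] + slope * d

# A perfectly linear signal is predicted exactly; real haptic signals are not,
# which is why learned predictors are compared in the paper.
pred = extrapolate([0.0, 1.0, 2.0], d=3)
```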
Affiliation(s)
- Leone Costi - Bio Inspired Robotics Laboratory, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Luca Scimeca - NAVER AI Lab, NAVER Corp, Seongnam-si, South Korea
- Perla Maiolino - Oxford Robotics Institute, University of Oxford, Oxford, United Kingdom
- Ryman Hashem - Bio Inspired Robotics Laboratory, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Fumiya Iida - Bio Inspired Robotics Laboratory, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
13.
Abstract
Assistive robot arms enable people with disabilities to conduct everyday tasks on their own. These arms are dexterous and high-dimensional; however, the interfaces people must use to control their robots are low-dimensional. Consider teleoperating a 7-DoF robot arm with a 2-DoF joystick. The robot is helping you eat dinner, and currently you want to cut a piece of tofu. Today's robots assume a pre-defined mapping between joystick inputs and robot actions: in one mode the joystick controls the robot's motion in the x-y plane, in another mode the joystick controls the robot's z-yaw motion, and so on. But this mapping misses out on the task you are trying to perform! Ideally, one joystick axis should control how the robot stabs the tofu, and the other axis should control different cutting motions. Our insight is that we can achieve intuitive, user-friendly control of assistive robots by embedding the robot's high-dimensional actions into low-dimensional and human-controllable latent actions. We divide this process into three parts. First, we explore models for learning latent actions from offline task demonstrations, and formalize the properties that latent actions should satisfy. Next, we combine learned latent actions with autonomous robot assistance to help the user reach and maintain their high-level goals. Finally, we learn a personalized alignment model between joystick inputs and latent actions. We evaluate our resulting approach in four user studies where non-disabled participants reach marshmallows, cook apple pie, cut tofu, and assemble dessert. We then test our approach with two disabled adults who leverage assistive devices on a daily basis.
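The central idea, embedding high-dimensional robot actions into a low-dimensional, human-controllable latent space, can be illustrated with a fixed linear decoder: a 2-DoF joystick input z maps to a 7-DoF joint-velocity action. The decoder matrix below is invented for illustration; in the work summarized above the mapping is learned from task demonstrations and conditioned on context.

```python
# Latent-action decoding sketch: 2-DoF joystick input -> 7-DoF robot action
# via a fixed linear decoder (hypothetical values; the real decoder is learned).

def decode(z, D):
    """Map latent input z (len 2) to a joint-velocity action (len 7)."""
    return [sum(d * zi for d, zi in zip(row, z)) for row in D]

# Hypothetical decoder: axis 0 drives a "stab" synergy, axis 1 a "cut" synergy.
D = [[0.5, 0.0], [0.0, 0.3], [0.2, 0.1], [0.0, 0.0],
     [0.1, 0.4], [0.0, 0.2], [0.3, 0.0]]
action = decode([1.0, 0.0], D)   # pure "stab" input
```

Because each joystick axis moves a whole coordinated synergy rather than a single Cartesian axis, the user never has to switch modes, which is the usability gain the studies above measure.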
14. Amanhoud W, Hernandez Sanchez J, Bouri M, Billard A. Contact-initiated shared control strategies for four-arm supernumerary manipulation with foot interfaces. Int J Rob Res 2021. DOI: 10.1177/02783649211017642.
Abstract
In industrial or surgical settings, at least two people are needed to achieve many tasks successfully. Robotic assistance could enable a single person to perform such tasks alone, with the help of robots through direct, shared, or autonomous control. We are interested in four-arm manipulation scenarios, where both feet are used to control two robotic arms via bi-pedal haptic interfaces. The robotic arms complement the tasks of the biological arms, for instance, in supporting and moving an object while working on it (using both hands). To reduce fatigue and cognitive workload, and to ease the execution of the foot manipulation, we propose two types of assistance that can be enabled upon contact with the object (i.e., based on the interaction forces): autonomous-contact force generation and auto-coordination of the robotic arms. The latter relates to controlling both arms with a single foot, once the object is grasped. We designed four (shared) control strategies derived from the combinations (absence/presence) of both assistance modalities, and we compared them through a user study (with 12 participants) on a four-arm manipulation task. The results show that force assistance positively improves human-robot fluency in the four-arm task, as well as ease of use and usefulness; it also reduces fatigue. Finally, delegating the grasping force to the robotic arms is crucial to making the dual-assistance approach the preferred and most successful of the proposed control strategies when both arms are controlled with a single foot.
Affiliation(s)
- Walid Amanhoud - Learning Algorithms and Systems Laboratory (LASA), Swiss Federal School of Technology in Lausanne (EPFL), Lausanne, Switzerland
- Jacob Hernandez Sanchez - Learning Algorithms and Systems Laboratory (LASA) and Biorobotics Laboratory (BIOROB), Swiss Federal School of Technology in Lausanne (EPFL), Lausanne, Switzerland
- Mohamed Bouri - Biorobotics Laboratory (BIOROB), Swiss Federal School of Technology in Lausanne (EPFL), Lausanne, Switzerland, and Translational Neural Engineering Laboratory (TNE), Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
- Aude Billard - Learning Algorithms and Systems Laboratory (LASA), Swiss Federal School of Technology in Lausanne (EPFL), Lausanne, Switzerland
15. Olsen S, Zhang J, Liang KF, Lam M, Riaz U, Kao JC. An artificial intelligence that increases simulated brain-computer interface performance. J Neural Eng 2021; 18. PMID: 33978599. DOI: 10.1088/1741-2552/abfaaa.
Abstract
Objective. Brain-computer interfaces (BCIs) translate neural activity into control signals for assistive devices in order to help people with motor disabilities communicate effectively. In this work, we introduce a new BCI architecture that improves control of a BCI computer cursor to type on a virtual keyboard. Approach. Our BCI architecture incorporates an external artificial intelligence (AI) that beneficially augments the movement trajectories of the BCI. This AI-BCI leverages past user actions, at both long (100s of seconds ago) and short (100s of milliseconds ago) timescales, to modify the BCI's trajectories. Main results. We tested our AI-BCI in a closed-loop BCI simulator with nine human subjects performing a typing task. We demonstrate that our AI-BCI achieves: (1) categorically higher information communication rates, (2) quicker ballistic movements between targets, (3) improved precision control to 'dial in' on targets, and (4) more efficient movement trajectories. We further show that our AI-BCI increases performance across a wide control-quality spectrum, from poor to proficient control. Significance. This AI-BCI architecture, by increasing BCI performance across all key metrics evaluated, may increase the clinical viability of BCI systems.
Collapse
Affiliation(s)
- Sebastian Olsen
- Department of Electrical and Computer Engineering, University of California, Los Angeles, CA 90024, United States of America
- Jianwei Zhang
- Department of Electrical and Computer Engineering, University of California, Los Angeles, CA 90024, United States of America
- Ken-Fu Liang
- Department of Electrical and Computer Engineering, University of California, Los Angeles, CA 90024, United States of America
- Michelle Lam
- Department of Electrical and Computer Engineering, University of California, Los Angeles, CA 90024, United States of America
- Usama Riaz
- Department of Computer Science, University of California, Los Angeles, CA 90024, United States of America
- Jonathan C Kao
- Department of Electrical and Computer Engineering, University of California, Los Angeles, CA 90024, United States of America
- Neurosciences Program, University of California, Los Angeles, CA 90024, United States of America
16
Cao L, Li G, Xu Y, Zhang H, Shu X, Zhang D. A brain-actuated robotic arm system using non-invasive hybrid brain-computer interface and shared control strategy. J Neural Eng 2021; 18. [PMID: 33862607] [DOI: 10.1088/1741-2552/abf8cb]
Abstract
Objective. Electroencephalography (EEG)-based brain-computer interfaces (BCIs) have been used to control robotic arms. The performance of non-invasive BCIs may not be satisfactory due to the poor quality of EEG signals, so shared control strategies have been tried as an alternative solution. However, most existing shared control methods set the arbitration rules manually, which depends heavily on the specific task and the developer's experience. In this study, we proposed a novel shared control model that automatically optimized the control commands in a dynamic way based on context during real-time control. In addition, we employed a hybrid BCI to better allocate commands with multiple functions. The system allowed non-invasive BCI users to manipulate a robotic arm moving in three-dimensional (3D) space and complete a pick-and-place task with multiple objects. Approach. Taking the scene information obtained by computer vision as a knowledge base, a machine agent was designed to infer the user's intention and generate automatic commands. Based on the inference confidence and the user's characteristics, the proposed shared control model fused machine autonomy and human intention dynamically to optimize robotic arm motion during online control. In addition, we introduced a hybrid BCI scheme that applied steady-state visual evoked potentials and motor imagery to separate primary and secondary BCI interfaces to better allocate BCI resources (e.g. decoding computing power, screen occupation) and realize multi-dimensional control of the robotic arm. Main results. Eleven subjects participated in online experiments, picking and placing five objects scattered at different positions in a 3D workspace. The results showed that most subjects could control the robotic arm to complete accurate and robust picking with an average success rate of approximately 85% under the shared control strategy, while the average success rate of the placing task under pure BCI control was approximately 50%. Significance. We proposed a novel shared controller for automatic motion optimization, together with a hybrid BCI control scheme that allocated paradigms according to the importance of commands, to realize multi-dimensional and effective control of a robotic arm. Our study indicated that a shared control strategy with a hybrid BCI can greatly improve the performance of a brain-actuated robotic arm system.
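The dynamic fusion of machine autonomy and human intention described above can be illustrated with a minimal sketch. The linear confidence-to-weight mapping and the parameter `alpha_max` below are hypothetical illustrations, not the paper's actual arbitration model:

```python
import numpy as np

def blend_commands(v_user, v_auto, confidence, alpha_max=0.9):
    """Blend user and autonomy velocity commands.

    The autonomy's weight grows with the intent-inference confidence,
    but is capped at `alpha_max` (an illustrative tuning parameter)
    so the user always retains some control authority.
    """
    v_user = np.asarray(v_user, dtype=float)
    v_auto = np.asarray(v_auto, dtype=float)
    alpha = alpha_max * float(np.clip(confidence, 0.0, 1.0))
    return (1.0 - alpha) * v_user + alpha * v_auto

# With zero confidence the command is purely the user's input.
v = blend_commands([0.1, 0.0, 0.0], [0.0, 0.2, 0.0], confidence=0.0)
```

Capping the autonomy's weight below 1 keeps the human in the loop even when the machine agent is fully confident, which is one common design choice in such blended-control schemes.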
Affiliation(s)
- Linfeng Cao
- State Key Laboratory of Mechanical Systems and Vibrations, Institute of Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Guangye Li
- State Key Laboratory of Mechanical Systems and Vibrations, Institute of Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Yang Xu
- State Key Laboratory of Mechanical Systems and Vibrations, Institute of Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Heng Zhang
- State Key Laboratory of Mechanical Systems and Vibrations, Institute of Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Xiaokang Shu
- State Key Laboratory of Mechanical Systems and Vibrations, Institute of Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Dingguo Zhang
- Department of Electronic and Electrical Engineering, University of Bath, Bath, United Kingdom
17
Goering S, Klein E, Specker Sullivan L, Wexler A, Agüera y Arcas B, Bi G, Carmena JM, Fins JJ, Friesen P, Gallant J, Huggins JE, Kellmeyer P, Marblestone A, Mitchell C, Parens E, Pham M, Rubel A, Sadato N, Teicher M, Wasserman D, Whittaker M, Wolpaw J, Yuste R. Recommendations for Responsible Development and Application of Neurotechnologies. Neuroethics 2021; 14:365-386. [PMID: 33942016] [PMCID: PMC8081770] [DOI: 10.1007/s12152-021-09468-6]
Abstract
Advancements in novel neurotechnologies, such as brain-computer interfaces (BCIs) and neuromodulatory devices like deep brain stimulators (DBS), will have profound implications for society and human rights. While these technologies are improving the diagnosis and treatment of mental and neurological diseases, they can also alter individual agency and estrange those using neurotechnologies from their sense of self, challenging basic notions of what it means to be human. As an international coalition of interdisciplinary scholars and practitioners, we examine these challenges and make recommendations to mitigate negative consequences that could arise from the unregulated development or application of novel neurotechnologies. We explore potential ethical challenges in four key areas: identity and agency, privacy, bias, and enhancement. To address them, we propose (1) democratic and inclusive summits to establish globally-coordinated ethical and societal guidelines for neurotechnology development and application, (2) new measures, including "Neurorights," for data privacy, security, and consent to empower neurotechnology users' control over their data, (3) new methods of identifying and preventing bias, and (4) the adoption of public guidelines for safe and equitable distribution of neurotechnological devices.
Affiliation(s)
- Eran Klein
- University of Washington, Seattle, WA USA
- Oregon Health & Science University, Portland, OR USA
- Anna Wexler
- University of Pennsylvania, Philadelphia, PA USA
- Guoqiang Bi
- University of Science and Technology of China, Hefei, China
- CAS Shenzhen Institute of Advanced Technology, Shenzhen, China
- Erik Parens
- The Hastings Center, Philipstown, Garrison, NY USA
- Alan Rubel
- University of Wisconsin-Madison, Madison, WI USA
- Norihiro Sadato
- National Institute for Physiological Sciences, Okazaki, Aichi Japan
- Meredith Whittaker
- Google, Mountain View, CA USA
- AI Now, New York City, NY USA
- New York University, New York City, NY USA
- Jonathan Wolpaw
- National Center for Adaptive Neurotechnologies, Albany, NY USA
18
Bustamante S, Quere G, Hagmann K, Wu X, Schmaus P, Vogel J, Stulp F, Leidner D. Toward Seamless Transitions Between Shared Control and Supervised Autonomy in Robotic Assistance. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3064449]
19
Wu L, Alqasemi R, Dubey R. Development of Smartphone-Based Human-Robot Interfaces for Individuals With Disabilities. IEEE Robot Autom Lett 2020. [DOI: 10.1109/lra.2020.3010453]
20
Even-Chen N, Muratore DG, Stavisky SD, Hochberg LR, Henderson JM, Murmann B, Shenoy KV. Power-saving design opportunities for wireless intracortical brain-computer interfaces. Nat Biomed Eng 2020; 4:984-996. [PMID: 32747834] [PMCID: PMC8286886] [DOI: 10.1038/s41551-020-0595-9]
Abstract
The efficacy of wireless intracortical brain-computer interfaces (iBCIs) is limited in part by the number of recording channels, which is constrained by the power budget of the implantable system. Designing wireless iBCIs that provide the high-quality recordings of today's wired neural interfaces may lead to inadvertent over-design at the expense of power consumption and scalability. Here, we report analyses of neural signals collected from experimental iBCI measurements in rhesus macaques and from a clinical-trial participant with implanted 96-channel Utah multielectrode arrays to understand the trade-offs between signal quality and decoder performance. Moreover, we propose an efficient hardware design for clinically viable iBCIs, and suggest that the circuit design parameters of current recording iBCIs can be relaxed considerably without loss of performance. The proposed design may allow for an order-of-magnitude power savings and lead to clinically viable iBCIs with a higher channel count.
Affiliation(s)
- Nir Even-Chen
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Dante G Muratore
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Sergey D Stavisky
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Leigh R Hochberg
- Department of Veterans Affairs Medical Center, Center for Neurorestoration and Neurotechnology, Providence, RI, USA
- School of Engineering and Carney Institute for Brain Science, Brown University, Providence, RI, USA
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Jaimie M Henderson
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Boris Murmann
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Krishna V Shenoy
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Neurobiology, Stanford University, Stanford, CA, USA
- Howard Hughes Medical Institute at Stanford University, Stanford, CA, USA
- The Bio-X Institute, Stanford University, Stanford, CA, USA
21
Luo J, He W, Yang C. Combined perception, control, and learning for teleoperation: key technologies, applications, and challenges. Cognitive Computation and Systems 2020. [DOI: 10.1049/ccs.2020.0005]
Affiliation(s)
- Jing Luo
- Key Laboratory of Autonomous Systems and Networked Control, School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640, People's Republic of China
- Wei He
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, People's Republic of China
- Chenguang Yang
- Key Laboratory of Autonomous Systems and Networked Control, School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640, People's Republic of China
22
Gopinath DE, Argall BD. Active Intent Disambiguation for Shared Control Robots. IEEE Trans Neural Syst Rehabil Eng 2020; 28:1497-1506. [PMID: 32305928] [DOI: 10.1109/tnsre.2020.2987878]
Abstract
Assistive shared-control robots have the potential to transform the lives of millions of people afflicted with severe motor impairments. The usefulness of shared-control robots typically relies on the underlying autonomy's ability to infer the user's needs and intentions, and the ability to do so unambiguously is often a limiting factor for providing appropriate assistance confidently and accurately. The contributions of this paper are four-fold. First, we introduce the idea of intent disambiguation via control mode selection, and present a mathematical formalism for it. Second, we develop a control mode selection algorithm which selects the control mode in which the user-initiated motion helps the autonomy to maximally disambiguate user intent. Third, we present a pilot study with eight subjects to evaluate the efficacy of the disambiguation algorithm. Our results suggest that the disambiguation system (a) helps to significantly reduce task effort, as measured by number of button presses, and (b) is of greater utility for more limited control interfaces and more complex tasks. We also observe that (c) subjects demonstrated a wide range of disambiguation request behaviors, with the common thread of concentrating requests early in the execution. As our last contribution, we introduce a novel field-theoretic approach to intent inference inspired by dynamic field theory that works in tandem with the disambiguation scheme.
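Selecting the control mode in which user motion best disambiguates intent can be sketched roughly as follows. The variance-based score and all names here are illustrative assumptions, not the paper's formalism:

```python
import numpy as np

def most_disambiguating_mode(ee_pos, goals, modes):
    """Pick the control mode in which user motion best separates goals.

    A crude proxy for the disambiguation criterion: for each mode (a set
    of controllable axes), score the spread of the goals' offsets from the
    end-effector along those axes. Motion along well-separated axes is
    more informative about which goal the user intends.
    """
    ee_pos = np.asarray(ee_pos, dtype=float)
    goals = np.asarray(goals, dtype=float)
    scores = {}
    for name, axes in modes.items():
        proj = goals[:, axes] - ee_pos[axes]       # goal offsets in this mode
        scores[name] = np.var(proj, axis=0).sum()  # spread across goals
    return max(scores, key=scores.get)

goals = [[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.1, 0.0]]
modes = {"xy": [0, 1], "z": [2]}
mode = most_disambiguating_mode([0.0, 0.0, 0.0], goals, modes)
# The goals differ mostly along x, so the planar mode is most informative.
```

In a real system the autonomy would prompt the user (or switch automatically, per the paper's pilot-study protocol) into the returned mode before observing further motion.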
23
Jain S, Argall B. Probabilistic Human Intent Recognition for Shared Autonomy in Assistive Robotics. ACM Trans Hum Robot Interact 2020; 9. [PMID: 32426695] [DOI: 10.1145/3359614]
Abstract
Effective human-robot collaboration in shared autonomy requires reasoning about the intentions of the human partner. To provide meaningful assistance, the autonomy has to first correctly predict, or infer, the intended goal of the human collaborator. In this work, we present a mathematical formulation for intent inference during assistive teleoperation under shared autonomy. Our recursive Bayesian filtering approach models and fuses multiple non-verbal observations to probabilistically reason about the intended goal of the user without explicit communication. In addition to contextual observations, we model and incorporate the human agent's behavior as goal-directed actions with adjustable rationality to inform intent recognition. Furthermore, we introduce a user-customized optimization of this adjustable rationality to achieve user personalization. We validate our approach with a human subjects study that evaluates intent inference performance under a variety of goal scenarios and tasks. Importantly, the studies are performed using multiple control interfaces that are typically available to users in the assistive domain, which differ in the continuity and dimensionality of the issued control signals. The implications of the control interface limitations on intent inference are analyzed. The study results show that our approach in many scenarios outperforms existing solutions for intent inference in assistive teleoperation, and otherwise performs comparably. Our findings demonstrate the benefit of probabilistic modeling and the incorporation of human agent behavior as goal-directed actions where the adjustable rationality model is user customized. Results further show that the underlying intent inference approach directly affects shared autonomy performance, as do control interface limitations.
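The recursive Bayesian recipe described above, fusing a prior over goals with the likelihood of each new observation under an adjustable-rationality action model, can be sketched as follows. The Boltzmann likelihood form, the coefficient `beta`, and all function names are illustrative assumptions rather than the paper's exact model:

```python
import numpy as np

def bayes_update(belief, likelihoods):
    """One recursive Bayesian step over candidate goals.

    `belief` is the prior p(goal); `likelihoods` is p(observation | goal)
    for the latest non-verbal observation.
    """
    posterior = np.asarray(belief, dtype=float) * np.asarray(likelihoods, dtype=float)
    return posterior / posterior.sum()

def command_likelihood(cmd, ee_pos, goal, beta=5.0):
    """Boltzmann-rational action model: commands aligned with the direction
    to a goal are exponentially more likely under that goal. `beta` plays
    the role of the adjustable rationality coefficient."""
    direction = np.asarray(goal, dtype=float) - np.asarray(ee_pos, dtype=float)
    d = direction / (np.linalg.norm(direction) + 1e-9)
    c = np.asarray(cmd, dtype=float)
    c = c / (np.linalg.norm(c) + 1e-9)
    return float(np.exp(beta * (d @ c)))

belief = np.array([0.5, 0.5])
goals = [np.array([1.0, 0.0]), np.array([-1.0, 0.0])]
cmd, ee = [1.0, 0.0], [0.0, 0.0]
belief = bayes_update(belief, [command_likelihood(cmd, ee, g) for g in goals])
# The belief now concentrates on the goal the command points toward.
```

Lowering `beta` models a noisier, less rational user, so each command shifts the belief more gently; user-customizing this coefficient is the personalization idea the abstract mentions.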
Affiliation(s)
- Siddarth Jain
- Northwestern University, USA and Shirley Ryan AbilityLab, USA
- Brenna Argall
- Northwestern University, USA and Shirley Ryan AbilityLab, USA
24
Zeng H, Shen Y, Hu X, Song A, Xu B, Li H, Wang Y, Wen P. Semi-Autonomous Robotic Arm Reaching With Hybrid Gaze-Brain Machine Interface. Front Neurorobot 2020; 13:111. [PMID: 32038219] [PMCID: PMC6992643] [DOI: 10.3389/fnbot.2019.00111]
Abstract
Recent developments in non-muscular human-robot interfaces (HRIs) and shared control strategies have shown potential for controlling an assistive robotic arm by people with no residual movement or muscular activity in the upper limbs. However, most non-muscular HRIs only produce discrete-valued commands, resulting in non-intuitive and less effective control of a dexterous assistive robotic arm. Furthermore, the user commands and the robot autonomy commands usually switch in the shared control strategies of such applications, a characteristic that previous user studies have found to yield a reduced sense of agency as well as frustration for the user. In this study, we first propose an intuitive and easy-to-learn-and-use hybrid HRI combining a brain-machine interface (BMI) and a gaze-tracking interface. For the proposed hybrid gaze-BMI, continuous modulation of the movement speed via motor intention occurs seamlessly and simultaneously with unconstrained movement direction control via the gaze signals. We then propose a shared control paradigm that always combines the user input and the autonomy, with dynamic regulation of the combination. The proposed hybrid gaze-BMI and shared control paradigm were validated in a robotic arm reaching task performed with healthy subjects. All users were able to employ the hybrid gaze-BMI to move the end-effector sequentially to reach targets across the horizontal plane while avoiding collisions with obstacles. The shared control paradigm maintained as much volitional control as possible while providing assistance for the most difficult parts of the task. The presented semi-autonomous robotic system yielded continuous, smooth, and collision-free motion trajectories for the end-effector approaching the target. Compared to a system without assistance from robot autonomy, it significantly reduced the failure rate as well as the time and effort spent by the user to complete the tasks.
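A blending rule of the kind described, one that always combines user input with autonomy and regulates the mix dynamically so assistance peaks during the hardest parts of the task, might be sketched as below. The obstacle-distance heuristic, the threshold `d_safe`, and all names are assumptions for illustration only:

```python
import numpy as np

def assistance_weight(ee_pos, obstacles, d_safe=0.3):
    """Dynamic regulation of the human/autonomy combination.

    The autonomy's share grows as the end-effector nears an obstacle and
    vanishes in free space; `d_safe` is an illustrative safety radius.
    """
    ee = np.asarray(ee_pos, dtype=float)
    d_min = min(np.linalg.norm(ee - np.asarray(o, dtype=float)) for o in obstacles)
    return float(np.clip(1.0 - d_min / d_safe, 0.0, 1.0))

def shared_velocity(v_user, v_avoid, ee_pos, obstacles):
    """Always-combined command: user input blended with an
    obstacle-avoidance velocity, never a hard switch."""
    w = assistance_weight(ee_pos, obstacles)
    return (1.0 - w) * np.asarray(v_user, dtype=float) + w * np.asarray(v_avoid, dtype=float)

# Far from any obstacle the user's command passes through unchanged.
v = shared_velocity([0.2, 0.0], [0.0, 0.2], ee_pos=[0.0, 0.0],
                    obstacles=[[5.0, 5.0]])
```

Because the weight varies continuously rather than switching between full manual and full automatic control, the user keeps a sense of agency in free space while still being steered away from collisions.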
Affiliation(s)
- Hong Zeng
- School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Yitao Shen
- School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Xuhui Hu
- School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Aiguo Song
- State Key Laboratory of Bioelectronics, School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Baoguo Xu
- School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Huijun Li
- School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Yanxin Wang
- School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Pengcheng Wen
- AVIC Aeronautics Computing Technique Research Institute, Xi’an, China
25
Bengtson SH, Bak T, Andreasen Struijk LNS, Moeslund TB. A review of computer vision for semi-autonomous control of assistive robotic manipulators (ARMs). Disabil Rehabil Assist Technol 2019; 15:731-745. [PMID: 31268368] [DOI: 10.1080/17483107.2019.1615998]
Abstract
Purpose: The advances in artificial intelligence have reached a level where autonomous systems are becoming increasingly popular as a way to aid people in their everyday lives. Such intelligent systems may be especially beneficial for people struggling to complete common everyday tasks, such as individuals with movement-related disabilities. The focus of this paper is hence to review recent work on using computer vision for semi-autonomous control of assistive robotic manipulators (ARMs). Methods: Four databases were searched using a block search, yielding 257 papers, which were reduced to 14 papers after applying various filtering criteria. Each paper was reviewed with a focus on the hardware used, the autonomous behaviour achieved using computer vision, and the scheme for semi-autonomous control of the system. Each of the reviewed systems was also characterized by grading its level of autonomy on a pre-defined scale. Conclusions: A recurring issue in the reviewed systems was the inability to handle arbitrary objects, which makes the systems unlikely to perform well outside a controlled environment such as a lab. This issue could be addressed by having the systems recognize good grasping points or primitive shapes instead of specific pre-defined objects. Most of the reviewed systems also used a rather simple strategy for semi-autonomous control, switching between full manual and full automatic control. An alternative could be a control scheme relying on adaptive blending, which could provide a more seamless experience for the user. Implications for rehabilitation: Assistive robotic manipulators (ARMs) have the potential to empower individuals with disabilities by enabling them to complete common everyday tasks. This potential can be further enhanced by making the ARM semi-autonomous in order to actively aid the user. The scheme used for the semi-autonomous control of the ARM is crucial, as it may be a hindrance if done incorrectly; in particular, the ability to customize the semi-autonomous behaviour of the ARM is found to be important. Further research is needed to make the final move from the lab to the homes of the users, as most of the reviewed systems suffer from a rather fixed scheme for semi-autonomous control and an inability to handle arbitrary objects.
Affiliation(s)
- Stefan Hein Bengtson
- Visual Analysis of People (VAP) Laboratory, Department of Architecture, Design, and Media Technology, Aalborg University, Aalborg, Denmark
- Thomas Bak
- Automation and Control, Department of Electronic Systems, Aalborg University, Aalborg, Denmark
- Thomas Baltzer Moeslund
- Visual Analysis of People (VAP) Laboratory, Department of Architecture, Design, and Media Technology, Aalborg University, Aalborg, Denmark
26
Jain S, Argall B. Recursive Bayesian Human Intent Recognition in Shared-Control Robotics. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2019; 2018:3905-3912. [PMID: 32300492] [DOI: 10.1109/iros.2018.8593766]
Abstract
Effective human-robot collaboration in shared control requires reasoning about the intentions of the human user. In this work, we present a mathematical formulation for human intent recognition during assistive teleoperation under shared autonomy. Our recursive Bayesian filtering approach models and fuses multiple non-verbal observations to probabilistically reason about the intended goal of the user. In addition to contextual observations, we model and incorporate the human agent's behavior as goal-directed actions with adjustable rationality to inform the underlying intent. We examine human inference on robot motion and furthermore validate our approach with a human subjects study, performed by novice subjects, that evaluates autonomy intent inference performance under a variety of goal scenarios and tasks. Results show that our approach outperforms existing solutions and demonstrates that the probabilistic fusion of multiple observations improves intent inference and performance for shared-control operation.
Affiliation(s)
- Siddarth Jain
- Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, IL 60208, USA and Shirley Ryan AbilityLab, Chicago, IL 60611, USA
- Brenna Argall
- Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, IL 60208, USA and Shirley Ryan AbilityLab, Chicago, IL 60611, USA
27
Alonso V, de la Puente P. System Transparency in Shared Autonomy: A Mini Review. Front Neurorobot 2018; 12:83. [PMID: 30555317] [PMCID: PMC6284032] [DOI: 10.3389/fnbot.2018.00083]
Abstract
What does transparency mean in a shared autonomy framework? Different ways of understanding system transparency in human-robot interaction can be found in the state of the art. In one of the most common interpretations of the term, transparency is the observability and predictability of the system behavior, the understanding of what the system is doing, why, and what it will do next. Since the main methods to improve this kind of transparency are based on interface design and training, transparency is usually considered a property of such interfaces, while natural language explanations are a popular way to achieve transparent interfaces. Mechanical transparency is the robot capacity to follow human movements without human-perceptible resistive forces. Transparency improves system performance, helping reduce human errors, and builds trust in the system. One of the principles of user-centered design is to keep the user aware of the state of the system: a transparent design is a user-centered design. This article presents a review of the definitions and methods to improve transparency for applications with different interaction requirements and autonomy degrees, in order to clarify the role of transparency in shared autonomy, as well as to identify research gaps and potential future developments.
Affiliation(s)
- Victoria Alonso
- ETSI Aeronáutica y del Espacio, Universidad Politécnica de Madrid, Madrid, Spain
- Paloma de la Puente
- ETSI Industriales, Universidad Politécnica de Madrid, Madrid, Spain
- Centre for Automation and Robotics, Universidad Politécnica de Madrid-CSIC, Madrid, Spain
28
Loeb GE. Neural Prosthetics: A Review of Empirical vs. Systems Engineering Strategies. Appl Bionics Biomech 2018; 2018:1435030. [PMID: 30532801] [PMCID: PMC6247642] [DOI: 10.1155/2018/1435030]
Abstract
Implantable electrical interfaces with the nervous system were first enabled by cardiac pacemaker technology over 50 years ago and have since diverged into almost all of the physiological functions controlled by the nervous system. There have been a few major clinical and commercial successes, many contentious claims, and some outright failures. These tend to be reviewed within each clinical subspecialty, obscuring the many commonalities of neural control, biophysics, interface materials, electronic technologies, and medical device regulation that they share. This review cites a selection of foundational and recent journal articles and reviews for all major applications of neural prosthetic interfaces in clinical use, trials, or development. The hard-won knowledge and experience across all of these fields can now be amalgamated and distilled into more systematic processes for development of clinical products instead of the often empirical (trial and error) approaches to date. These include a frank assessment of a specific clinical problem, the state of its underlying science, the identification of feasible targets, the availability of suitable technologies, and the path to regulatory and reimbursement approval. Increasing commercial interest and investment facilitates this systematic approach, but it also motivates projects and products whose claims are dubious.
Affiliation(s)
- Gerald E. Loeb
- Professor of Biomedical Engineering, University of Southern California, 1042 Downey Way (DRB-B11), Los Angeles, CA 90089, USA