1. Egocentric Computer Vision for Hands-Free Robotic Wheelchair Navigation. J Intell Robot Syst 2023. [DOI: 10.1007/s10846-023-01807-4]
Abstract
In this paper, we present an approach for navigating a robotic wheelchair that provides users with multiple levels of autonomy and navigation capabilities to fit their individual needs and preferences. We focus on three main aspects: (i) egocentric computer-vision-based motion control that provides a natural human-robot interface for wheelchair users with impaired hand usage; (ii) techniques that enable the user to initiate autonomous navigation to a location, object, or person without use of the hands; and (iii) a framework that learns to navigate the wheelchair according to its user's, often subjective, criteria and preferences. These contributions are evaluated qualitatively and quantitatively in user studies with several subjects, demonstrating their effectiveness. Although these studies were conducted with healthy subjects, they indicate that clinical tests of the proposed technology can now be initiated.
2. Lei Z, Tan BY, Garg NP, Li L, Sidarta A, Ang WT. An Intention Prediction Based Shared Control System for Point-to-Point Navigation of a Robotic Wheelchair. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3189151]
Affiliation(s)
- Zhen Lei: School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore
- Bang Yi Tan: Rehabilitation Research Institute of Singapore, Singapore
- Neha P. Garg: Rehabilitation Research Institute of Singapore, Singapore
- Lei Li: Rehabilitation Research Institute of Singapore, Singapore
- Ananda Sidarta: Rehabilitation Research Institute of Singapore, Singapore
- Wei Tech Ang: Rehabilitation Research Institute of Singapore, Singapore
3. Amanhoud W, Hernandez Sanchez J, Bouri M, Billard A. Contact-initiated shared control strategies for four-arm supernumerary manipulation with foot interfaces. Int J Rob Res 2021. [DOI: 10.1177/02783649211017642]
Abstract
In industrial or surgical settings, many tasks require at least two people to be achieved successfully. Robotic assistance could enable a single person to perform such tasks alone, with the help of robots through direct, shared, or autonomous control. We are interested in four-arm manipulation scenarios, where both feet are used to control two robotic arms via bi-pedal haptic interfaces. The robotic arms complement the tasks of the biological arms, for instance, by supporting and moving an object while working on it with both hands. To reduce fatigue and cognitive workload, and to ease foot manipulation, we propose two types of assistance that can be enabled upon contact with the object (i.e., based on the interaction forces): autonomous contact-force generation and auto-coordination of the robotic arms. The latter consists of controlling both arms with a single foot once the object is grasped. We designed four (shared) control strategies derived from the combinations (absence/presence) of the two assistance modalities, and we compared them through a user study (with 12 participants) on a four-arm manipulation task. The results show that force assistance improves human-robot fluency in the four-arm task, as well as ease of use and usefulness, and that it reduces fatigue. Finally, delegating the grasping force to the robotic arms is a crucial factor when controlling them both with a single foot; it makes the dual-assistance approach the preferred and most successful of the proposed control strategies.
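To make the two assistance modalities concrete, the following minimal sketch illustrates the idea under stated assumptions: once contact is detected from the measured interaction force, an autonomous term regulates the contact force along the contact normal, and auto-coordination maps a single foot input to both arms. The thresholds, gain, and function names are illustrative and are not taken from the paper.

```python
import numpy as np

CONTACT_THRESHOLD_N = 2.0   # assumed contact-detection threshold (illustrative)
DESIRED_CONTACT_N = 10.0    # assumed target contact force (illustrative)
K_FORCE = 0.05              # gain mapping force error to a velocity offset

def arm_command(foot_twist, contact_force, contact_normal, force_assist=True):
    """Velocity command for one arm: the foot input plus, once contact is
    detected, an autonomous offset along the contact normal that regulates
    the contact force so the user no longer has to sustain it."""
    cmd = np.asarray(foot_twist, dtype=float)
    if force_assist and np.linalg.norm(contact_force) > CONTACT_THRESHOLD_N:
        error = DESIRED_CONTACT_N - np.linalg.norm(contact_force)
        cmd = cmd + K_FORCE * error * np.asarray(contact_normal, dtype=float)
    return cmd

def coordinated_commands(foot_twist, left_force, left_normal, right_force, right_normal):
    """Auto-coordination: once the object is grasped, a single foot drives both arms."""
    return (arm_command(foot_twist, left_force, left_normal),
            arm_command(foot_twist, right_force, right_normal))
```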
Affiliation(s)
- Walid Amanhoud: Learning Algorithms and Systems Laboratory (LASA), Swiss Federal School of Technology in Lausanne (EPFL), Lausanne, Switzerland
- Jacob Hernandez Sanchez: Learning Algorithms and Systems Laboratory (LASA), Swiss Federal School of Technology in Lausanne (EPFL), Lausanne, Switzerland; Biorobotics Laboratory (BIOROB), Swiss Federal School of Technology in Lausanne (EPFL), Lausanne, Switzerland
- Mohamed Bouri: Biorobotics Laboratory (BIOROB), Swiss Federal School of Technology in Lausanne (EPFL), Lausanne, Switzerland; Translational Neural Engineering Laboratory (TNE), Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
- Aude Billard: Learning Algorithms and Systems Laboratory (LASA), Swiss Federal School of Technology in Lausanne (EPFL), Lausanne, Switzerland
4. Udupa S, Kamat VR, Menassa CC. Shared autonomy in assistive mobile robots: a review. Disabil Rehabil Assist Technol 2021:1-22. [PMID: 34133906] [DOI: 10.1080/17483107.2021.1928778]
Abstract
PURPOSE: Shared autonomy has played a major role in assistive mobile robotics, as it has the potential to balance user satisfaction and smooth functioning of systems by adapting to each user's needs and preferences. Many shared control paradigms have been developed over the years. Despite these advancements, however, shared control paradigms have not been widely adopted because several integral aspects have not fully matured. The purpose of this paper is to discuss and review various aspects of shared control and the technologies leading up to the current advancements in shared control for assistive mobile robots.
METHODS: A comprehensive review of the literature was conducted, following a dichotomy of studies from the pre-2000 and post-2000 periods to cover both the early developments and the current state of the art in this domain.
RESULTS: A systematic review of 135 research papers and 7 review papers selected from the literature was conducted. To organize the reviewed work, a 6-level ladder categorization was developed based on the extent of autonomy shared between the human and the robot in the use of assistive mobile robots. This taxonomy highlights the chronological improvements in the domain.
CONCLUSION: Most prior studies have focussed on basic functionalities, paving the way for research to now address the higher levels of the ladder taxonomy. Further research in the domain must focus on ensuring safety in mobility and adaptability to varying environments.
Implications for rehabilitation:
- Shared autonomy in assistive mobile robots plays a vital role in adapting effectively to ensure safety while also considering user comfort.
- Users' immediate desires should be considered in decision making, to ensure that users remain in control of the assistive robots.
- Research should currently focus on the successful adaptation of assistive mobile robots to varying environments to assure user safety.
Affiliation(s)
- Sumukha Udupa: Department of Civil and Environmental Engineering, University of Michigan, Ann Arbor, MI, USA; Robotics Institute, University of Michigan, Ann Arbor, MI, USA
- Vineet R Kamat: Department of Civil and Environmental Engineering, University of Michigan, Ann Arbor, MI, USA; Robotics Institute, University of Michigan, Ann Arbor, MI, USA
- Carol C Menassa: Department of Civil and Environmental Engineering, University of Michigan, Ann Arbor, MI, USA; Robotics Institute, University of Michigan, Ann Arbor, MI, USA
5. Othman KM, Rad AB. A Doorway Detection and Direction (3Ds) System for Social Robots via a Monocular Camera. Sensors 2020; 20:2477. [PMID: 32349349] [PMCID: PMC7249124] [DOI: 10.3390/s20092477]
Abstract
In this paper, we propose a novel algorithm to detect a door and its orientation in indoor settings from the view of a social robot equipped with only a monocular camera. The challenge is to achieve this goal using only a 2D image. The proposed system integrates several modules, each of which serves a specific purpose. Door detection is addressed by training a convolutional neural network (CNN) model on a new dataset for Social Robot Indoor Navigation (SRIN). The direction of the door (from the robot's observation) is estimated by three further modules: a Depth module, a Pixel-Selection module, and a Pixel2Angle module. We include simulation results and real-time experiments to demonstrate the performance of the algorithm. The outcome of this study could benefit any robotic navigation system for indoor environments.
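The Pixel2Angle step can be pictured as a pinhole-camera conversion from the selected door pixel to a bearing angle. The sketch below is a minimal illustration under an assumed centred principal point and known horizontal field of view; the function name and default FOV are illustrative, not taken from the paper.

```python
import numpy as np

def pixel_to_angle(u, image_width, horizontal_fov_deg=60.0):
    """Map a pixel column u to a horizontal bearing angle in degrees.

    Negative values mean the doorway lies to the left of the optical axis,
    positive to the right. Assumes a pinhole camera with a centred principal
    point; the focal length in pixels is derived from the horizontal FOV.
    """
    cx = (image_width - 1) / 2.0
    fx = (image_width / 2.0) / np.tan(np.radians(horizontal_fov_deg) / 2.0)
    return np.degrees(np.arctan2(u - cx, fx))

# Example: a door pixel selected at column 480 of a 640-pixel-wide frame
# lies roughly 16 degrees to the right of the robot's heading.
print(pixel_to_angle(480, 640))
```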
6. Arora AS, Arora A. The Race Between Cognitive and Artificial Intelligence. International Journal of Intelligent Information Technologies 2020. [DOI: 10.4018/ijiit.2020010101]
Abstract
Research on human-robot interaction (HRI) is growing; however, the congruent socio-behavioral HRI research fields of social cognition, socio-behavioral intentions, and codes of ethics receive little focus. Humans possess an inherent ability to integrate perception, cognition, and action, while robots may have limitations: they may not recognize an object or a being, navigate a terrain, or comprehend written or verbal language and instructions. This HRI research focuses on issues and challenges for both humans and robots from social, behavioral, technical, and ethical perspectives. The human ability to anthropomorphize robots and to adopt an 'intentional mindset' toward robots through xenocentrism has added new dimensions to HRI. Robotic anthropomorphism plays a significant role in how humans can become successful companions of robots. This research explores social cognitive intelligence versus artificial intelligence, with a focus on privacy protections and the ethical implications of HRI, while designing robots that are ethical, cognitively and artificially intelligent, and social human-like agents.
Affiliation(s)
- Amit Arora: University of the District of Columbia, Washington, D.C., USA
7. Krausz NE, Hu BH, Hargrove LJ. Subject- and Environment-Based Sensor Variability for Wearable Lower-Limb Assistive Devices. Sensors (Basel) 2019; 19:E4887. [PMID: 31717471] [PMCID: PMC6891559] [DOI: 10.3390/s19224887]
Abstract
Significant research effort has gone towards the development of powered lower-limb prostheses that control power during gait. These devices use forward prediction based on electromyography (EMG), kinetics, and kinematics to inform the prosthesis which locomotion activity is desired. Unfortunately, these predictions can have substantial errors, which can potentially lead to trips or falls. One hypothesized reason for the significant prediction errors in current control systems for powered lower-limb prostheses is the inter- and intra-subject variability of the data sources used for prediction. Environmental data, recorded from a depth sensor worn on a belt, should have less variability across trials and subjects than kinetic, kinematic, and EMG data, and thus its addition is proposed. After normalization, the variability of each data source was analyzed to determine the intra-activity and intra-subject variability of each sensor modality. Measures of separability, repeatability, clustering, and overall desirability were then computed. Results showed that combining Vision, EMG, IMU (inertial measurement unit), and Goniometer features yielded the best separability, repeatability, clustering, and desirability across subjects and activities. These findings will likely be useful for a future forward predictor that incorporates vision-based environmental data for powered lower-limb prostheses and exoskeletons.
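The exact separability, repeatability, clustering, and desirability measures are not spelled out in the abstract; as a rough illustration, the sketch below fuses normalised per-modality features by concatenation and scores them with a Fisher-ratio-style separability (between-activity variance over within-activity variance). The names and the metric itself are assumptions for illustration only.

```python
import numpy as np

def separability(features, labels):
    """Fisher-ratio-style separability: between-class variance divided by
    within-class variance, averaged over feature dimensions (higher is better).

    features: (n_samples, n_features) array, already normalised per feature
    labels:   (n_samples,) activity label for each sample
    """
    classes = np.unique(labels)
    grand_mean = features.mean(axis=0)
    between = np.zeros(features.shape[1])
    within = np.zeros(features.shape[1])
    for c in classes:
        x = features[labels == c]
        between += len(x) * (x.mean(axis=0) - grand_mean) ** 2
        within += ((x - x.mean(axis=0)) ** 2).sum(axis=0)
    return float(np.mean(between / (within + 1e-12)))

def fuse(*modalities):
    """Combine modalities (e.g. Vision, EMG, IMU, Goniometer) by concatenating
    their normalised feature matrices column-wise."""
    return np.hstack(modalities)

# score = separability(fuse(vision, emg, imu, gonio), activity_labels)
```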
Affiliation(s)
- Nili E. Krausz: Neural Engineering for Prosthetics and Orthotics Lab (NEPOL), Center of Bionic Medicine, Shirley Ryan AbilityLab (formerly RIC), Chicago, IL 60611, USA; Biomedical Engineering Department, Northwestern University, Evanston, IL 60208, USA
- Blair H. Hu: Neural Engineering for Prosthetics and Orthotics Lab (NEPOL), Center of Bionic Medicine, Shirley Ryan AbilityLab (formerly RIC), Chicago, IL 60611, USA; Biomedical Engineering Department, Northwestern University, Evanston, IL 60208, USA
- Levi J. Hargrove: Neural Engineering for Prosthetics and Orthotics Lab (NEPOL), Center of Bionic Medicine, Shirley Ryan AbilityLab (formerly RIC), Chicago, IL 60611, USA; Biomedical Engineering Department, Northwestern University, Evanston, IL 60208, USA; Physical Medicine and Rehabilitation Department, Northwestern University, Evanston, IL 60208, USA
8. Kyrarini M, Zheng Q, Haseeb MA, Graser A. Robot Learning of Assistive Manipulation Tasks by Demonstration via Head Gesture-based Interface. IEEE Int Conf Rehabil Robot 2019; 2019:1139-1146. [PMID: 31374783] [DOI: 10.1109/icorr.2019.8779379]
Abstract
Assistive robotic manipulators have the potential to support the lives of people with severe motor impairments. They can help individuals with disabilities independently perform activities of daily living, such as drinking, eating, manipulation tasks, and opening doors. An attractive solution is to enable motor-impaired users to teach a robot by providing demonstrations of daily living tasks. The user controls the robot 'manually' with an intuitive human-robot interface to provide demonstrations, after which the robot learns the performed task. However, the control of robotic manipulators by motor-impaired individuals is a challenging topic. In this paper, a novel head-gesture-based interface for hands-free robot control and a framework for robot learning from demonstration are presented. The head-gesture-based interface consists of a camera mounted on the user's hat, which records the changes in the viewed scene due to head motion. Head gesture recognition is performed using optical flow for feature extraction and a support vector machine for gesture classification. The recognized head gestures are then mapped to robot control commands to perform an object manipulation task. The robot learns the demonstrated task by generating a sequence of actions, and a Gaussian Mixture Model is used to segment the demonstrated path of the robot's end-effector. During robotic reproduction of the task, a modified Gaussian Mixture Model and Gaussian Mixture Regression are used to adapt to environmental changes. The proposed framework was evaluated in a real-world assistive robotic scenario in a small study involving 13 participants: 12 able-bodied and one tetraplegic. The presented results demonstrate the potential of the proposed framework to enable severely motor-impaired individuals to demonstrate daily living tasks to robotic manipulators.
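As a rough sketch of the recognition pipeline described above (optical flow for features, a support vector machine for classification), the code below computes a magnitude-weighted histogram of dense optical-flow directions per frame pair and feeds per-clip features to an SVM. The feature design and gesture labels are assumptions for illustration; the paper's exact features are not reproduced here.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def flow_features(prev_gray, gray, bins=8):
    """Magnitude-weighted histogram of dense optical-flow directions."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)

# Training: each row of X is a feature vector for one gesture clip (e.g. the
# mean of flow_features over its frame pairs); y holds gesture labels such as
# "up", "down", "left", "right", "neutral" (hypothetical label set).
# clf = SVC(kernel="rbf").fit(X, y)
# gesture = clf.predict(flow_features(prev_frame, frame).reshape(1, -1))[0]
```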
9. Learning from demonstration for locally assistive mobility aids. International Journal of Intelligent Robotics and Applications 2019. [DOI: 10.1007/s41315-019-00096-1]
10. Environment-adaptive interaction primitives through visual context for human–robot motor skill learning. Auton Robots 2018. [DOI: 10.1007/s10514-018-9798-2]
11. Zondervan DK, Secoli R, Darling AM, Farris J, Furumasu J, Reinkensmeyer DJ. Design and Evaluation of the Kinect-Wheelchair Interface Controlled (KWIC) Smart Wheelchair for Pediatric Powered Mobility Training. Assist Technol 2018; 27:183-92. [PMID: 26427746] [DOI: 10.1080/10400435.2015.1012607]
Abstract
BACKGROUND: Children with severe disabilities are sometimes unable to access powered mobility training. Thus, we developed the Kinect-Wheelchair Interface Controlled (KWIC) smart wheelchair trainer, which converts a manual wheelchair into a powered wheelchair. The KWIC Trainer uses computer vision to create a virtual tether, with adaptive shared control between the wheelchair and a therapist during training. It also includes a mixed-reality video game system.
METHODS: We performed a year-long usability study of the KWIC Trainer at a local clinic, soliciting qualitative and quantitative feedback on the device after extended use.
RESULTS: Eight therapists used the KWIC Trainer for over 50 hours with 8 different children. Two of the children obtained their own powered wheelchair as a result of the training. The therapists indicated that the device allowed them to provide mobility training for more children than would have been possible with a demo wheelchair, and they found use of the device to be as safe as or safer than conventional training. They viewed the shared control algorithm as counter-productive because it made it difficult for the child to discern when he or she was controlling the chair. They were enthusiastic about the video game integration for increasing motivation and engagement during training. They emphasized the need for additional access methods for controlling the device.
CONCLUSION: The therapists confirmed that the KWIC Trainer is a useful tool for increasing access to powered mobility training and for engaging children during training sessions. However, some improvements would enhance its applicability for routine clinical use.
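The virtual tether with adaptive shared control can be pictured as a distance-based scaling of the child's drive command relative to the therapist tracked by the Kinect. The sketch below only illustrates that idea under assumed radii; it is not the KWIC Trainer's actual control law.

```python
def tethered_command(child_v, child_w, therapist_distance_m,
                     tether_radius_m=2.0, stop_radius_m=3.0):
    """Scale the child's drive command (linear v, angular w) so the chair
    moves freely inside the tether radius around the therapist, slows as it
    leaves that radius, and stops entirely beyond the stop radius."""
    if therapist_distance_m <= tether_radius_m:
        scale = 1.0
    elif therapist_distance_m >= stop_radius_m:
        scale = 0.0
    else:
        scale = (stop_radius_m - therapist_distance_m) / (stop_radius_m - tether_radius_m)
    return child_v * scale, child_w * scale
```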
Affiliation(s)
- Daniel K Zondervan: Department of Mechanical and Aerospace Engineering, University of California at Irvine, Irvine, CA, USA
- Riccardo Secoli: Department of Mechanical and Aerospace Engineering, University of California at Irvine, Irvine, CA, USA
- Aurelia Mclaughlin Darling: Department of Mechanical and Aerospace Engineering, University of California at Irvine, Irvine, CA, USA
- John Farris: Department of Product Design & Manufacturing Engineering, Grand Valley State University, Grand Rapids, MI, USA
- Jan Furumasu: Rehabilitation Engineering Research Center on Technology for Children With Orthopedic Disabilities, Rancho Los Amigos National Rehabilitation Center, Downey, CA, USA
- David J Reinkensmeyer: Department of Mechanical and Aerospace Engineering, University of California at Irvine, Irvine, CA, USA; Department of Biomedical Engineering, University of California at Irvine, Irvine, CA, USA; Department of Anatomy and Neurobiology, University of California at Irvine, Irvine, CA, USA
12. Ezeh C, Trautman P, Devigne L, Bureau V, Babel M, Carlson T. Probabilistic vs linear blending approaches to shared control for wheelchair driving. IEEE Int Conf Rehabil Robot 2017; 2017:835-840. [PMID: 28813924] [DOI: 10.1109/icorr.2017.8009352]
Abstract
Some people with severe mobility impairments are unable to operate powered wheelchairs reliably and effectively using commercially available interfaces. This has sparked a body of research into "smart wheelchairs", which assist users to drive safely and create opportunities for them to use alternative interfaces. Various "shared control" techniques have been proposed to provide an appropriate level of assistance that is satisfactory and acceptable to the user. Most shared control techniques employ a traditional strategy called linear blending (LB), in which the user's commands and the wheelchair's autonomous commands are combined in some proportion. In this paper, however, we implement a more generalised form of shared control called probabilistic shared control (PSC). This probabilistic formulation improves the accuracy of modelling the interaction between the user and the wheelchair by taking uncertainty in the interaction into account. We demonstrate the practical success of PSC over LB in terms of safety, particularly for novice users.
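Schematically, the two strategies can be contrasted as follows: linear blending mixes the two command streams in a given proportion, whereas a probabilistic formulation scores candidate commands against explicit models of the user and of safe autonomous behaviour and picks the jointly most likely one. The sketch below is only a conceptual illustration with hypothetical user_model and robot_model callables; it does not reproduce the paper's PSC formulation.

```python
import numpy as np

def linear_blend(u_user, u_robot, alpha=0.5):
    """Linear blending (LB): mix user and autonomous commands in proportion alpha."""
    return alpha * np.asarray(u_user) + (1.0 - alpha) * np.asarray(u_robot)

def probabilistic_shared_control(candidates, user_model, robot_model):
    """Pick the candidate command that is jointly most likely under a model of
    the user's intended motion and a model of safe autonomous driving.

    candidates:  iterable of candidate (v, w) commands
    user_model:  callable giving p(candidate | observed user input)
    robot_model: callable giving p(candidate | obstacles, planner)
    """
    return max(candidates, key=lambda u: user_model(u) * robot_model(u))
```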
13. Abu-Alqumsan M, Ebert F, Peer A. Goal-recognition-based adaptive brain-computer interface for navigating immersive robotic systems. J Neural Eng 2017; 14:036024. [PMID: 28294109] [DOI: 10.1088/1741-2552/aa66e0]
Abstract
OBJECTIVE: This work proposes principled strategies for self-adaptation in EEG-based brain-computer interfaces (BCIs) as a way out of the bandwidth bottleneck resulting from the considerable mismatch between the low-bandwidth interface and the bandwidth-hungry application, and as a way to enable fluent and intuitive interaction in embodiment systems. The main focus is on inferring the hidden target goals of users while navigating in a remote environment, as a basis for possible adaptations.
APPROACH: To reason about possible user goals, a general user-agnostic Bayesian update rule is devised and applied recursively upon the arrival of evidence, i.e. user input and user gaze. Experiments were conducted with healthy subjects in robotic embodiment settings to evaluate the proposed method. These experiments varied along three factors: the type of robot/environment (simulated or physical), the type of interface (keyboard or BCI), and the way goal recognition (GR) is used to guide a simple shared control (SC) driving scheme.
MAIN RESULTS: Our results show that the proposed GR algorithm is able to track and infer the hidden user goals with relatively high precision and recall. Further, the realized SC driving scheme benefits from the output of the GR system and is able to reduce the user effort needed to accomplish the assigned tasks. Although the BCI requires more effort than the keyboard conditions, most subjects were able to complete the assigned tasks, and the proposed GR system is additionally shown to handle the uncertainty in user input during SSVEP-based interaction. The SC application of the belief vector indicates that the benefits of the GR module are more pronounced for BCIs than for the keyboard interface.
SIGNIFICANCE: Because it is based on intuitive heuristics that model the behavior of the general population during navigation tasks, the proposed GR method can be used without prior tuning for individual users. The proposed methods can easily be integrated when devising more advanced SC schemes and/or strategies for automatic BCI self-adaptation.
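The recursive Bayesian update over hidden goals amounts to multiplying the current belief by the likelihood of each new piece of evidence (user input or gaze) and renormalising. Below is a minimal sketch, assuming a discrete set of candidate goals; it is an illustration of the general update rule, not the paper's exact model.

```python
import numpy as np

def update_goal_belief(belief, likelihoods):
    """One recursive Bayesian update of the belief over candidate goals.

    belief:      prior P(goal), a 1-D array over the candidate goals
    likelihoods: P(evidence | goal) for the newly arrived evidence,
                 e.g. the latest user command or gaze direction
    """
    posterior = np.asarray(belief, dtype=float) * np.asarray(likelihoods, dtype=float)
    total = posterior.sum()
    if total == 0.0:                 # uninformative evidence: keep the prior
        return np.asarray(belief, dtype=float)
    return posterior / total

# belief = np.ones(n_goals) / n_goals              # uniform prior over goals
# belief = update_goal_belief(belief, p_input_given_goal)
# belief = update_goal_belief(belief, p_gaze_given_goal)
```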
Affiliation(s)
- Mohammad Abu-Alqumsan: Chair of Automatic Control Engineering, Technical University of Munich (TUM), Munich, Germany
14. Al-Halimi RK, Moussa M. Performing Complex Tasks by Users With Upper-Extremity Disabilities Using a 6-DOF Robotic Arm: A Study. IEEE Trans Neural Syst Rehabil Eng 2017; 25:686-693. [PMID: 28113593] [DOI: 10.1109/tnsre.2016.2603472]
Abstract
In this paper, we report the results of a study conducted to examine how users with severe upper-extremity disabilities can control a 6-degrees-of-freedom (DOF) robotic arm to complete complex activities of daily living. The focus of the study is not on assessing the robot arm but on examining the human-robot interaction patterns. Three participants were recruited. Each participant was asked to perform three tasks: eating three pieces of pre-cut bread from a plate, drinking three sips of soup from a bowl, and opening a right-handed door with a lever handle. Each of these tasks was repeated three times. The arm was mounted on the participant's wheelchair, and the participants were free to move the arm as they wished to complete these tasks. Each task consisted of a sequence of modes, where a mode is defined as arm movement in one DOF. Results show that participants used a total of 938 mode movements, with an average of 75.5 (std 10.2) modes for the eating task, 70 (std 8.8) modes for the soup task, and 18.7 (std 4.5) modes for the door-opening task. Tasks were then segmented into smaller subtasks, and patterns of usage per participant and per subtask were found. These patterns could potentially allow a robot to learn from the user's demonstration which task is being executed and by whom, and to respond accordingly to reduce user effort.