1
Hashemian AM, Adhikari A, Aguilar IA, Kruijff E, Heyde MVD, Riecke BE. Leaning-Based Interfaces Improve Simultaneous Locomotion and Object Interaction in VR Compared to the Handheld Controller. IEEE Transactions on Visualization and Computer Graphics 2024; 30:4665-4682. [PMID: 37200130] [DOI: 10.1109/tvcg.2023.3275111]
Abstract
Physical walking is often considered the gold standard for VR travel whenever feasible. However, limited free-space walking areas in the real world do not allow larger-scale virtual environments to be explored by actual walking. Therefore, users often rely on handheld controllers for navigation, which can reduce believability, interfere with simultaneous interaction tasks, and exacerbate adverse effects such as motion sickness and disorientation. To investigate alternative locomotion options, we compared a handheld controller (thumbstick-based) and physical walking with a seated (HeadJoystick) and a standing/stepping (NaviBoard) leaning-based locomotion interface, where seated or standing users travel by moving their head toward the target direction. Rotations were always performed physically. To compare these interfaces, we designed a novel simultaneous locomotion and object interaction task in which users had to keep touching the center of upward-moving target balloons with their virtual lightsaber while simultaneously staying inside a horizontally moving enclosure. Walking produced the best locomotion, interaction, and combined performance, while the controller performed worst. The leaning-based interfaces improved user experience and performance compared to the controller, especially when standing/stepping with NaviBoard, but did not reach walking performance. That is, the leaning-based interfaces HeadJoystick (sitting) and NaviBoard (standing), which provided additional physical self-motion cues compared to the controller, improved enjoyment, preference, spatial presence, and vection intensity, reduced motion sickness, and improved performance for locomotion, object interaction, and their combination. Our results also showed that less embodied interfaces (in particular the controller) caused a more pronounced performance deterioration as locomotion speed increased. Moreover, the observed differences between interfaces were not affected by repeated interface usage.
2
Hashemian AM, Adhikari A, Kruijff E, Heyde MVD, Riecke BE. Leaning-Based Interfaces Improve Ground-Based VR Locomotion in Reach-the-Target, Follow-the-Path, and Racing Tasks. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1748-1768. [PMID: 34847032] [DOI: 10.1109/tvcg.2021.3131422]
Abstract
Using standard handheld interfaces for VR locomotion may not provide a believable self-motion experience and can contribute to unwanted side effects such as motion sickness, disorientation, or increased cognitive load. This paper demonstrates how a seated leaning-based locomotion interface (HeadJoystick) affects user experience, usability, and performance in ground-based VR navigation. In three within-subject studies, we compared a controller (touchpad/thumbstick) with a more embodied interface (HeadJoystick) in which users moved their head and/or leaned in the direction of desired locomotion. In both conditions, users sat on a regular office chair and used it to control virtual rotations. In the first study, 24 participants used HeadJoystick versus the controller in three complementary tasks: reach-the-target, follow-the-path, and racing (dynamic obstacle avoidance). In the second study, 18 participants repeatedly used HeadJoystick versus the controller (eight one-minute trials each) in a reach-the-target task. To evaluate potential benefits of different brake mechanisms, in the third study 18 participants were asked to stop within each target area for one second. All three studies consistently showed advantages of HeadJoystick over the controller: we observed improved performance in all tasks, as well as higher user ratings for enjoyment, spatial presence, immersion, vection intensity, usability, ease of learning, ease of use, and rated potential for daily and long-term use, while motion sickness and task load were reduced. Overall, our results suggest that leaning-based interfaces such as HeadJoystick provide an interesting and more embodied alternative to handheld interfaces in driving, reach-the-target, and follow-the-path tasks, and potentially a wider range of scenarios.
3
Di Vincenzo M, Palini F, De Marsico M, Borghi AM, Baldassarre G. A Natural Human-Drone Embodied Interface: Empirical Comparison With a Traditional Interface. Front Neurorobot 2022; 16:898859. [PMID: 36310633] [PMCID: PMC9614065] [DOI: 10.3389/fnbot.2022.898859]
Abstract
Despite the importance of usability in human-machine interaction (HMI), most commonly used devices are not usable by all potential users. In particular, users with little or no technological experience, or with special needs, require carefully designed systems and easy-to-use interfaces that support recognition over recall. For this purpose, Natural User Interfaces (NUIs) represent an effective strategy, as the user's learning is facilitated by features of the interface that mimic the human “natural” sensorimotor embodied interactions with the environment. This paper compares the usability of a new NUI (based on an eye-tracker and hand gesture recognition) with a traditional interface (keyboard) for the distal control of a simulated drone flying in a virtual environment. The whole interface relies on “dAIsy”, new software allowing the flexible use of different input devices and the control of different robotic platforms. The 59 users involved in the study were required to complete two tasks with each interface, while their performance was recorded: (a) exploration: detecting trees embedded in an urban environment; (b) accuracy: guiding the drone as accurately and as fast as possible along a predefined track. They were then administered questionnaires regarding their background, the perceived embodiment of the device, and the perceived quality of the virtual experience while using either the NUI or the traditional interface. The results appear controversial and call for further investigation: (a) contrary to our hypothesis, the specific NUI used led to lower performance than the traditional interface; (b) however, the NUI was evaluated as more natural and embodied. The final part of the paper discusses the possible causes underlying these results and suggests future improvements of the NUI.
Affiliation(s)
- Maria De Marsico
- Department of Computer Science, Sapienza University of Rome, Rome, Italy
- Anna M. Borghi
- Department of Dynamic and Clinical Psychology and Health Studies, Sapienza University of Rome, Rome, Italy
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Gianluca Baldassarre
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- AI2Life Srl Innovative Startup, Spin-Off of ISTC-CNR, Rome, Italy
- Correspondence: Gianluca Baldassarre

4

5
Topographic design in wearable MXene sensors with in-sensor machine learning for full-body avatar reconstruction. Nat Commun 2022; 13:5311. [PMID: 36085341] [PMCID: PMC9461448] [DOI: 10.1038/s41467-022-33021-5]
Abstract
Wearable strain sensors that detect joint/muscle strain changes have become prevalent in human–machine interfaces for full-body motion monitoring. However, most wearable devices cannot be customized to match the sensor characteristics with the specific deformation ranges of individual joints/muscles, resulting in suboptimal performance. Careful wearable strain sensor design is required to achieve user-designated working windows without sacrificing high sensitivity, together with real-time data processing. Herein, wearable Ti3C2Tx MXene sensor modules are fabricated with in-sensor machine learning (ML) models, functioning via either wireless streaming or edge computing, for full-body motion classification and avatar reconstruction. Through topographic design of the piezoresistive nanolayers, the wearable strain sensor modules exhibit ultrahigh sensitivities within working windows that cover all joint deformation ranges. By integrating the wearable sensors with an ML chip, an edge sensor module is fabricated, enabling in-sensor reconstruction of high-precision avatar animations that mimic continuous full-body motions with an average avatar determination error of 3.5 cm, without additional computing devices. Wearable sensors with edge computing are desirable for human motion monitoring; here, the authors demonstrate a topographic design for wearable MXene sensor modules with wireless streaming or in-sensor computing models for avatar reconstruction.
6
Dell'Agnola F, Jao PK, Arza A, Chavarriaga R, Millan JDR, Floreano D, Atienza D. Machine-Learning Based Monitoring of Cognitive Workload in Rescue Missions with Drones. IEEE J Biomed Health Inform 2022; 26:4751-4762. [PMID: 35759604] [DOI: 10.1109/jbhi.2022.3186625]
Abstract
In search and rescue missions, drone operations are challenging and cognitively demanding. High levels of cognitive workload can affect rescuers' performance, leading to failure with catastrophic outcomes. To address this problem, we propose a machine learning algorithm for real-time cognitive workload monitoring to determine whether a search and rescue operator needs to be replaced or whether more resources are required. Our multimodal cognitive workload monitoring model combines the information of 25 features extracted from physiological signals, such as respiration, electrocardiogram, photoplethysmogram, and skin temperature, acquired noninvasively. To reduce both inter-subject and inter-day variability of the signals, we explore different feature normalization techniques and introduce a novel weighted-learning method based on support vector machines suitable for subject-specific optimization. On an unseen test set acquired from 34 volunteers, our proposed subject-specific model distinguishes between low and high cognitive workloads with average accuracies of 87.3% and 91.2% while controlling a drone simulator using a traditional controller and a new-generation controller, respectively.
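The pipeline the abstract describes — per-subject feature normalization followed by a binary workload classifier — can be sketched in miniature. The function names, the two toy features (e.g. heart rate and respiration rate), and the nearest-centroid stand-in for the paper's weighted SVM are illustrative assumptions, not the authors' implementation:

```python
import math

def zscore_normalize(features, baseline):
    """Normalize one sample against the same subject's baseline recordings
    (a list of feature vectors), reducing inter-subject/inter-day variability."""
    n = len(baseline)
    dims = len(features)
    means = [sum(v[i] for v in baseline) / n for i in range(dims)]
    stds = []
    for i in range(dims):
        var = sum((v[i] - means[i]) ** 2 for v in baseline) / n
        stds.append(math.sqrt(var) or 1.0)  # guard against zero spread
    return [(f - m) / s for f, m, s in zip(features, means, stds)]

def classify_workload(sample, centroids):
    """Toy stand-in for the paper's weighted SVM: label a normalized sample
    by its nearest class centroid ('low' vs 'high' workload)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))
```

Normalizing against each operator's own calm-state baseline is what makes a single decision boundary usable across subjects whose raw physiological ranges differ widely.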
7
Hashemian AM, Lotfaliei M, Adhikari A, Kruijff E, Riecke BE. HeadJoystick: Improving Flying in VR Using a Novel Leaning-Based Interface. IEEE Transactions on Visualization and Computer Graphics 2022; 28:1792-1809. [PMID: 32946395] [DOI: 10.1109/tvcg.2020.3025084]
Abstract
Flying in virtual reality (VR) using standard handheld controllers can be cumbersome and contribute to unwanted side effects such as motion sickness and disorientation. This article investigates a novel hands-free flying interface, HeadJoystick, in which the user moves their head like a joystick handle toward the target direction to control virtual translation velocity. The user sits on a regular office swivel chair and rotates it physically to control virtual rotation with a 1:1 mapping. We evaluated short-term (Study 1) and extended, repeated-usage (Study 2) effects of HeadJoystick versus handheld interfaces in two within-subject studies, where participants flew through a sequence of increasingly difficult tunnels in the sky. Using HeadJoystick instead of handheld interfaces improved both user experience and performance, in terms of accuracy, precision, ease of learning, ease of use, usability, long-term use, presence, immersion, sensation of self-motion, workload, and enjoyment, in both studies. These findings demonstrate the benefits of leaning-based interfaces for VR flying and potentially similar telepresence applications such as remote flight with quadcopter drones. From a theoretical perspective, we also show how leaning-based motion cueing interacts with full physical rotation to improve user experience and performance compared to the gamepad.
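The core HeadJoystick idea — head displacement from a calibrated rest pose mapped to virtual translation velocity — can be sketched for one axis as follows. The deadzone, gain, and saturation values are hypothetical; the paper's exact transfer function is not reproduced here:

```python
def head_joystick_velocity(offset_m, deadzone_m=0.02, gain=25.0, v_max=5.0):
    """Map the head's offset from its calibrated rest pose (metres, one axis)
    to a virtual translation speed (m/s): a small deadzone suppresses postural
    sway, then speed grows with lean amplitude up to a saturation limit."""
    magnitude = abs(offset_m)
    if magnitude <= deadzone_m:
        return 0.0                             # resting posture -> no motion
    speed = gain * (magnitude - deadzone_m)    # proportional beyond deadzone
    speed = min(speed, v_max)                  # cap the top speed
    return speed if offset_m > 0 else -speed   # preserve lean direction
```

In a full implementation the same mapping would run per axis (forward/backward, sideways, up/down), while yaw comes directly from the physical chair rotation rather than from this function.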
8
Control of a Drone in Virtual Reality Using MEMS Sensor Technology and Machine Learning. Micromachines 2022; 13:521. [PMID: 35457827] [PMCID: PMC9024457] [DOI: 10.3390/mi13040521]
Abstract
In recent years, drones have been widely used in various applications, from entertainment and agriculture to photo and video services and military applications. The risk of accidents while using a drone is quite high. The most important way to reduce this risk is to use a device that helps and simplifies the control of a drone; in addition, the training of drone pilots is very important. Pilots can be trained in both physical and virtual environments, but because the probability of an accident is higher for beginners, the safest method is to train in a virtual environment. The aim of this study is to develop a new device for controlling a drone in a virtual environment. The device is attached to the upper limb of the person controlling the drone. For precise control, the newly created device uses MEMS sensor technology and artificial intelligence methods.
9
Macchini M, De Matteis L, Schiano F, Floreano D. Personalized Human-Swarm Interaction Through Hand Motion. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3102324]
10
Safe Local Aerial Manipulation for the Installation of Devices on Power Lines: AERIAL-CORE First Year Results and Designs. Applied Sciences 2021. [DOI: 10.3390/app11136220]
Abstract
The power grid is an essential infrastructure in any country, comprising thousands of kilometers of power lines that require periodic inspection and maintenance, carried out nowadays by human operators in risky conditions. To increase safety and reduce time and cost with respect to conventional solutions involving manned helicopters and heavy vehicles, the AERIAL-CORE project proposes the development of aerial robots capable of performing aerial manipulation operations to assist human operators in power lines inspection and maintenance, allowing the installation of devices, such as bird flight diverters or electrical spacers, and the fast delivery and retrieval of tools. This manuscript describes the goals and functionalities to be developed for safe local aerial manipulation, presenting the preliminary designs and experimental results obtained in the first year of the project.
11
Garcia A DE, Sierra M SD, Gomez-Vargas D, Jiménez MF, Múnera M, Cifuentes CA. Semi-Remote Gait Assistance Interface: A Joystick with Visual Feedback Capabilities for Therapists. Sensors 2021; 21:3521. [PMID: 34069340] [PMCID: PMC8158774] [DOI: 10.3390/s21103521]
Abstract
The constant growth of pathologies affecting human mobility has led to the development of different assistive devices providing physical and cognitive assistance. Smart walkers are a particular type of these devices, since they integrate navigation systems, path-following algorithms, and user interaction modules to ensure natural and intuitive interaction. Although these functionalities are often implemented in rehabilitation scenarios, there is a need to actively involve healthcare professionals in the interaction loop while guaranteeing safety for both them and patients. This work presents the validation of two visual feedback strategies for the teleoperation of a simulated robotic walker during an assisted navigation task. For this purpose, a group of 14 clinicians from the rehabilitation area formed the validation group. A simple path-following task was proposed, and the feedback strategies were assessed through the kinematic estimation error (KTE) and a usability survey. A KTE of 0.28 m was obtained for the feedback strategy on the joystick. Additionally, significant differences were found through a Mann–Whitney–Wilcoxon test for the perception of behavior and confidence towards the joystick according to the modes of interaction (p-values of 0.04 and 0.01, respectively). Visual feedback with this tool contributes to research areas such as the remote management of therapies and the monitoring of mobility rehabilitation.
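The abstract reports a KTE of 0.28 m without spelling out the formula. One plausible reading — stated here as an assumption, not the authors' definition — is a root-mean-square positional error between the reference path and the path actually followed:

```python
import math

def kinematic_estimation_error(reference, followed):
    """A hypothetical reading of the KTE metric: root-mean-square Euclidean
    distance between a reference path and the followed path, both sampled
    pointwise as (x, y) pairs at matching instants."""
    if len(reference) != len(followed):
        raise ValueError("paths must be sampled at the same points")
    sq = [(rx - fx) ** 2 + (ry - fy) ** 2
          for (rx, ry), (fx, fy) in zip(reference, followed)]
    return math.sqrt(sum(sq) / len(sq))
```

Under this reading, a KTE of 0.28 m would mean the teleoperated walker stayed, on average, within about a third of a metre of the prescribed path.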
Affiliation(s)
- Daniel E. Garcia A.
- Department of Biomedical Engineering, Colombian School of Engineering Julio Garavito, Bogota 111166, Colombia
- Sergio D. Sierra M.
- Department of Biomedical Engineering, Colombian School of Engineering Julio Garavito, Bogota 111166, Colombia
- Daniel Gomez-Vargas
- Department of Biomedical Engineering, Colombian School of Engineering Julio Garavito, Bogota 111166, Colombia
- Mario F. Jiménez
- School of Engineering, Science and Technology, Universidad del Rosario, Bogotá 111711, Colombia
- Marcela Múnera
- Department of Biomedical Engineering, Colombian School of Engineering Julio Garavito, Bogota 111166, Colombia
- Carlos A. Cifuentes
- Department of Biomedical Engineering, Colombian School of Engineering Julio Garavito, Bogota 111166, Colombia
- Correspondence: Mario F. Jiménez and Carlos A. Cifuentes

12
Wang X, Chen G, Gong H, Jiang J. UAV swarm autonomous control based on Internet of Things and artificial intelligence algorithms. Journal of Intelligent & Fuzzy Systems 2021. [DOI: 10.3233/jifs-189541]
Abstract
At present, UAV swarm positioning solutions suffer from poor positioning accuracy and instability, so a sliding-mode formation controller is needed to realize formation flight. This study analyzes autonomous control strategies for UAV swarms, with behavior-based formation control as its main focus. The paper combines the Internet of Things and artificial intelligence algorithms to build an autonomous control model for the UAV swarm and designs the UAV formation control law for both disturbed and undisturbed conditions. With reference to the basic architecture of the Internet of Things, the study imitates ZigBee's self-organizing network to propose an adaptive networking scheme based on the AP+STA working mode of the IoT module in the node device. The experimental results show that the positioning accuracy of the UAV is high enough to meet the needs of swarm flight. For the UAV's Z-axis coordinate, the accuracy of the laser distance measurement and the barometer value is significantly improved, the root mean square error is reduced, and the positioning result is significantly better than traditional direct positioning.
Affiliation(s)
- Xinhua Wang
- College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Guanyu Chen
- College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Huajun Gong
- College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Ju Jiang
- College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China

13
13
|
Nostadt N, Abbink DA, Christ O, Beckerle P. Embodiment, Presence, and Their Intersections. ACM Transactions on Human-Robot Interaction 2020. [DOI: 10.1145/3389210]
Abstract
Subjective experience of human control over remote, artificial, or virtual limbs has traditionally been investigated from two separate angles: presence research originates from teleoperation and aims to capture to what extent the user feels like actually being in the remote or virtual environment, while embodiment captures to what extent a virtual or artificial limb is perceived as one's own. Unfortunately, the two research fields have not interacted much. This survey provides a coherent overview of the literature at the intersection of these two fields to foster that interaction. Two rounds of systematic search in topic-related databases resulted in 414 related articles, 14 of which satisfy the deliberately strict inclusion criteria: 2 theoretical frameworks that highlight intersections and 12 experimental studies that evaluate subjective measures for both concepts. Considering the surrounding literature as well, the theoretical and experimental potential of embodiment and presence are discussed, and suggestions for applying them in teleoperation research are derived. While increased publication activity is observed between 2016 and 2018, potentially driven by affordable virtual reality technologies, various open questions remain. To tackle them, human-in-the-loop experiments and three guiding principles for teleoperation system design (mechanical fidelity, spatial bodily awareness, and self-identification) are suggested.
Affiliation(s)
- David A. Abbink
- Delft Haptics Lab, Department of Cognitive Robotics, Faculty 3mE, Delft University of Technology, The Netherlands
- Oliver Christ
- Institute Humans in Complex Systems, School of Applied Psychology, University of Applied Sciences and Arts Northwestern Switzerland, Switzerland
- Philipp Beckerle
- Elastic Lightweight Robotics Group, Robotics Research Institute, Technische Universität Dortmund, Germany, and Institute for Mechatronic Systems in Mechanical Engineering, Technische Universität Darmstadt, Darmstadt, Germany

14
Macchini M, Schiano F, Floreano D. Personalized Telerobotics by Fast Machine Learning of Body-Machine Interfaces. IEEE Robot Autom Lett 2020. [DOI: 10.1109/lra.2019.2950816]
15
A portable three-degrees-of-freedom force feedback origami robot for human–robot interactions. Nat Mach Intell 2019. [DOI: 10.1038/s42256-019-0125-1]
16
Isop WA, Gebhardt C, Nägeli T, Fraundorfer F, Hilliges O, Schmalstieg D. High-Level Teleoperation System for Aerial Exploration of Indoor Environments. Front Robot AI 2019; 6:95. [PMID: 33501110] [PMCID: PMC7805862] [DOI: 10.3389/frobt.2019.00095]
Abstract
Exploration of challenging indoor environments is a demanding task. While automation with aerial robots seems a promising solution, fully autonomous systems still struggle with high-level cognitive tasks and intuitive decision making. To facilitate automation, we introduce a novel teleoperation system with an aerial telerobot that is capable of handling all demanding low-level tasks. Motivated by the typical structure of indoor environments, the system creates an interactive scene topology in real-time that reduces scene details and supports affordances. Thus, difficult high-level tasks can be effectively supervised by a human operator. To elaborate on the effectiveness of our system during a real-world exploration mission, we conducted a user study. Despite being limited by real-world constraints, results indicate that our system better supports operators with indoor exploration, compared to a baseline system with traditional joystick control.
Affiliation(s)
- Werner Alexander Isop
- Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Tobias Nägeli
- Advanced Interactive Technologies Lab, ETH Zürich, Zurich, Switzerland
- Friedrich Fraundorfer
- Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Otmar Hilliges
- Advanced Interactive Technologies Lab, ETH Zürich, Zurich, Switzerland
- Dieter Schmalstieg
- Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria; VRVis Research Center, Vienna, Austria

17
Intelligent Human–UAV Interaction System with Joint Cross-Validation over Action–Gesture Recognition and Scene Understanding. Applied Sciences 2019. [DOI: 10.3390/app9163277]
Abstract
We propose an intelligent human–unmanned aerial vehicle (UAV) interaction system, in which, instead of using the conventional remote controller, the UAV flight actions are controlled by a deep learning-based action–gesture joint detection system. The Resnet-based scene-understanding algorithm is introduced into the proposed system to enable the UAV to adjust its flight strategy automatically, according to the flying conditions. Meanwhile, both the deep learning-based action detection and multi-feature cascade gesture recognition methods are employed by a cross-validation process to create the corresponding flight action. The effectiveness and efficiency of the proposed system are confirmed by its application to controlling the flight action of a real flying UAV for more than 3 h.
18
Delmerico J, Mintchev S, Giusti A, Gromov B, Melo K, Horvat T, Cadena C, Hutter M, Ijspeert A, Floreano D, Gambardella LM, Siegwart R, Scaramuzza D. The current state and future outlook of rescue robotics. J Field Robot 2019. [DOI: 10.1002/rob.21887]
Affiliation(s)
- Jeffrey Delmerico
- Robotics and Perception Group, Department of Informatics and Neuroinformatics, University of Zurich and ETH Zurich, Zürich, Switzerland
- Stefano Mintchev
- Laboratory of Intelligent Systems, Swiss Federal Institute of Technology, Lausanne, Switzerland
- Alessandro Giusti
- Dalle Molle Institute for Artificial Intelligence (IDSIA), USI-SUPSI, Manno, Switzerland
- Boris Gromov
- Dalle Molle Institute for Artificial Intelligence (IDSIA), USI-SUPSI, Manno, Switzerland
- Kamilo Melo
- Biorobotics Laboratory, Swiss Federal Institute of Technology, Lausanne, Switzerland
- Tomislav Horvat
- Biorobotics Laboratory, Swiss Federal Institute of Technology, Lausanne, Switzerland
- Cesar Cadena
- Autonomous Systems Lab, Swiss Federal Institute of Technology, Zürich, Switzerland
- Marco Hutter
- Robotic Systems Lab, Swiss Federal Institute of Technology, Zürich, Switzerland
- Auke Ijspeert
- Biorobotics Laboratory, Swiss Federal Institute of Technology, Lausanne, Switzerland
- Dario Floreano
- Laboratory of Intelligent Systems, Swiss Federal Institute of Technology, Lausanne, Switzerland
- Luca M. Gambardella
- Dalle Molle Institute for Artificial Intelligence (IDSIA), USI-SUPSI, Manno, Switzerland
- Roland Siegwart
- Autonomous Systems Lab, Swiss Federal Institute of Technology, Zürich, Switzerland
- Davide Scaramuzza
- Robotics and Perception Group, Department of Informatics and Neuroinformatics, University of Zurich and ETH Zurich, Zürich, Switzerland

19
Tsykunov E, Agishev R, Ibrahimov R, Labazanova L, Tleugazy A, Tsetserukou D. SwarmTouch: Guiding a Swarm of Micro-Quadrotors With Impedance Control Using a Wearable Tactile Interface. IEEE Transactions on Haptics 2019; 12:363-374. [PMID: 31295120] [DOI: 10.1109/toh.2019.2927338]
Abstract
To achieve smooth and safe guidance of a drone formation by a human operator, we propose a novel interaction strategy for human-swarm communication that combines impedance control and vibrotactile feedback. The presented approach takes the human hand velocity into account and changes the formation shape and dynamics accordingly, using impedance interlinks simulated between quadrotors, which helps to achieve natural swarm behavior. Several tactile patterns representing static and dynamic parameters of the swarm are proposed. The user feels the state of the swarm at the fingertips and receives valuable information that improves the controllability of the complex formation. A user study identified the patterns with high recognition rates. A flight experiment demonstrated that the formation can be navigated accurately in a cluttered environment using only tactile feedback. Subjects stated that tactile sensation allows guiding the drone formation through obstacles and makes human-swarm communication more interactive. The proposed technology can potentially have a strong impact on human-swarm interaction, providing a higher level of awareness during swarm navigation.
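The "impedance interlinks simulated between quadrotors" amount to virtual spring-damper couplings: each follower behaves as a virtual mass pulled toward its formation slot. A minimal one-dimensional sketch follows; all constants and the function name are illustrative, not taken from the paper:

```python
def simulate_impedance_link(leader_path, x0=0.0, rest=1.0,
                            k=4.0, d=3.0, m=1.0, dt=0.02):
    """Simulate one follower drone coupled to a leader through a virtual
    mass-spring-damper (impedance) link: the spring pulls the follower toward
    a point `rest` metres behind the leader, the damper smooths the motion."""
    x, v = x0, 0.0
    trajectory = []
    for leader_x in leader_path:
        target = leader_x - rest            # follower's slot in the formation
        force = k * (target - x) - d * v    # virtual spring + damper
        v += (force / m) * dt               # explicit Euler integration
        x += v * dt
        trajectory.append(x)
    return trajectory
```

Because the link is compliant rather than rigid, abrupt operator hand motions are filtered into smooth, lagged follower motion — the "natural swarm behavior" the abstract refers to.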
20
Rognon C, Ramachandran V, Wu AR, Ijspeert AJ, Floreano D. Haptic Feedback Perception and Learning With Cable-Driven Guidance in Exosuit Teleoperation of a Simulated Drone. IEEE Transactions on Haptics 2019; 12:375-385. [PMID: 31251196] [DOI: 10.1109/toh.2019.2925612]
Abstract
Robotic teleoperation enables human operators to control the movements of distally located robots. The development of new wearable interfaces as alternatives to hand-held controllers has created new, more intuitive modalities of control. Nevertheless, such interfaces also require a period of adjustment before operators can carry out their tasks proficiently. In several fields of human-machine interaction, haptic guidance has proven to be an effective training tool for enhancing user performance. This paper presents the results of psychophysical and motor-learning studies carried out with human participants to assess the effect of cable-driven haptic guidance in an aerial robotic teleoperation task. The guidance system was integrated into an exosuit, called the FlyJacket, developed to control drones with torso movements. Results for the just-noticeable difference and the Stevens power law suggest that the perception of force on the user's torso scales linearly with the amplitude of the force exerted through the cables and that the perceived force is close to the magnitude of the stimulus. The motor-learning studies reveal that this form of haptic guidance improves user performance during training, but the improvement is not retained when participants are evaluated without guidance.
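The Stevens power law analysis mentioned above fits perceived intensity P = k * S**a to stimulus magnitude S; an exponent a near 1 corresponds to the linear scaling the authors report. A minimal log-log least-squares fit (function name and data are illustrative, not from the paper):

```python
import math

def fit_stevens_power_law(stimuli, perceived):
    """Least-squares fit of perceived = k * stimulus**a in log-log space,
    where log(P) = log(k) + a * log(S) is an ordinary linear regression."""
    xs = [math.log(s) for s in stimuli]
    ys = [math.log(p) for p in perceived]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))  # slope = Stevens exponent
    k = math.exp(mean_y - a * mean_x)           # intercept -> scale factor
    return k, a
```

Fitting in log-log space turns the power law into a straight line, so the exponent falls directly out of an ordinary linear regression on the psychophysical magnitude-estimation data.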
21
Rognon C, Koehler M, Duriez C, Floreano D, Okamura AM. Soft Haptic Device to Render the Sensation of Flying Like a Drone. IEEE Robot Autom Lett 2019. [DOI: 10.1109/lra.2019.2907432]