1. Lee JW, Yu KH. Wearable Drone Controller: Machine Learning-Based Hand Gesture Recognition and Vibrotactile Feedback. Sensors (Basel) 2023; 23:2666. PMID: 36904870; PMCID: PMC10006975; DOI: 10.3390/s23052666.
Abstract
We propose a wearable drone controller with hand gesture recognition and vibrotactile feedback. The intended hand motions of the user are sensed by an inertial measurement unit (IMU) placed on the back of the hand, and the signals are analyzed and classified using machine learning models. The recognized hand gestures control the drone, and obstacle information along the drone's heading is fed back to the user through a vibration motor attached to the wrist. Drone-operation experiments were performed in simulation, and participants' subjective evaluations of the controller's convenience and effectiveness were collected. Finally, experiments with a real drone were conducted and discussed to validate the proposed controller.
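The source does not include code; the following is a minimal, hypothetical sketch of the pipeline the abstract describes: statistical features from windowed IMU data feed a classifier, the predicted gesture maps to a flight command, and vibration intensity grows as obstacles get closer. The SVM choice, feature set, and command names are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the gesture-to-command pipeline; the authors'
# actual features, model, and command set are not published here.
import numpy as np
from sklearn.svm import SVC

def extract_features(window: np.ndarray) -> np.ndarray:
    """Reduce a (samples x 6) window of IMU data (3-axis accel + gyro)
    to simple per-axis statistics."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def train(windows, labels):
    """Fit a classifier on labeled gesture windows."""
    X = np.stack([extract_features(w) for w in windows])
    return SVC(kernel="rbf").fit(X, labels)

# Illustrative gesture-id-to-command mapping.
GESTURE_TO_COMMAND = {0: "hover", 1: "forward", 2: "back", 3: "yaw_left"}

def vibration_duty(obstacle_dist_m: float, max_dist_m: float = 3.0) -> float:
    """Drive the wrist motor harder as the obstacle gets closer."""
    return float(np.clip(1.0 - obstacle_dist_m / max_dist_m, 0.0, 1.0))

def step(clf, window, obstacle_dist_m):
    """One control cycle: classify the gesture, compute the feedback cue."""
    gesture = int(clf.predict(extract_features(window)[None, :])[0])
    return GESTURE_TO_COMMAND.get(gesture, "hover"), vibration_duty(obstacle_dist_m)
```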
Affiliation(s)
- Ji-Won Lee, KEPCO Research Institute, Daejeon 34056, Republic of Korea
- Kee-Ho Yu, Department of Aerospace Engineering, Jeonbuk National University, Jeonju 54896, Republic of Korea; Future Air Mobility Research Center, Jeonbuk National University, Jeonju 54896, Republic of Korea

2. Gu C, Sun J, Chen T, Miao W, Yang Y, Lin S, Chen J. Examining the Influence of Using First-Person View Drones as Auxiliary Devices in Matte Painting Courses on College Students' Continuous Learning Intention. J Intell 2022; 10:40. PMID: 35893271; PMCID: PMC9326556; DOI: 10.3390/jintelligence10030040.
Abstract
In teaching matte painting, it is essential for students to develop a sound understanding of the relationship between virtual and physical environments. In this study, first-person view (FPV) drones are applied to matte painting courses to evaluate their teaching effectiveness and to derive design suggestions for FPV drones better suited to instruction, providing students with a better learning environment within a digital education system. The results indicate that the flow experience, learning interest, and continuous learning intention of students who use FPV drones in matte painting are significantly greater than those of students taught only with traditional methods. Furthermore, the technology incentive model (TIM) developed in this study was verified by structural equation modeling. The results demonstrate that the second-order construct 'technology incentive', comprising perceived interactivity, perceived vividness, and novel experience, positively influences students' learning interest and continuous learning intention under the mediation of flow experience.
Affiliation(s)
- Chao Gu, Department of Culture and Arts Management, Honam University, Gwangju 62399, Korea
- Jie Sun, Department of Culture and Arts Management, Honam University, Gwangju 62399, Korea
- Tong Chen, Michael Smurfit Graduate Business School, University College Dublin, D04 V1W8 Dublin, Ireland
- Wei Miao, School of Textile Garment and Design, Changshu Institute of Technology, Changshu 215500, China
- Yunshuo Yang, College of Foreign Languages and Cultures, Xiamen University, Xiamen 361005, China
- Shuyuan Lin, Department of Media Design, Tatung University, Taipei 104, Taiwan
- Jiangjie Chen (corresponding author), School of Design, Jiangnan University, Wuxi 214122, China

3. Evangeliou N, Chaikalis D, Tsoukalas A, Tzes A. Visual Collaboration Leader-Follower UAV-Formation for Indoor Exploration. Front Robot AI 2022; 8:777535. PMID: 35059442; PMCID: PMC8764138; DOI: 10.3389/frobt.2021.777535.
Abstract
UAVs operating in a leader-follower formation require knowledge of the relative pose between the collaborating members. This information is typically exchanged over an RF link, which increases communication latency and can easily result in lost data packets. In this work, rather than relying on such autopilot data exchange, a visual scheme using passive markers is presented. Each formation member carries passive markers in a RhOct configuration. These markers are visually detected, and the relative pose of the members is determined on board, eliminating the need for RF communication. A reference path is then evaluated for each follower that tracks the leader and maintains a constant distance between the formation members. Experimental studies show a mean position detection error of 5 × 5 × 10 cm, less than 0.0031% of the available workspace [0.5 up to 5 m, 50.43° × 38.75° field of view (FoV)]. The efficiency of the suggested scheme against varying delays is also examined in these studies, which show that a delay of up to 1.25 s can be tolerated for the follower to track the leader, as long as the leader remains within its FoV.
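The paper's control law is not reproduced here; as a rough illustration of the follower side, a proportional rule that steers toward a standoff point on the line of sight to the visually detected leader might look like the following. The gains, standoff distance, and function names are assumptions.

```python
# Illustrative follower law, not the paper's controller: steer toward a
# point at a fixed standoff distance short of the detected leader.
import numpy as np

def follower_velocity(rel_pos: np.ndarray, standoff_m: float = 1.5,
                      k_p: float = 0.8, v_max: float = 1.0) -> np.ndarray:
    """rel_pos: leader position in the follower body frame (meters),
    as estimated from the passive-marker detection."""
    dist = np.linalg.norm(rel_pos)
    if dist < 1e-6:
        return np.zeros(3)
    # Position error: on the line of sight, standoff_m short of the leader.
    error = rel_pos * (1.0 - standoff_m / dist)
    v = k_p * error
    speed = np.linalg.norm(v)
    # Saturate the command so the follower never exceeds v_max.
    return v if speed <= v_max else v * (v_max / speed)
```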
Affiliation(s)
- Nikolaos Evangeliou (corresponding author), Robotics and Intelligent Systems Control (RISC) Lab, Electrical and Computer Engineering Department, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Dimitris Chaikalis, Electrical and Computer Engineering Department, New York University, Brooklyn, NY, United States
- Athanasios Tsoukalas, Robotics and Intelligent Systems Control (RISC) Lab, Electrical and Computer Engineering Department, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Anthony Tzes, Robotics and Intelligent Systems Control (RISC) Lab, Electrical and Computer Engineering Department, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates

4. Luo Y, Wang J, Shi R, Liang HN, Luo S. In-Device Feedback in Immersive Head-Mounted Displays for Distance Perception During Teleoperation of Unmanned Ground Vehicles. IEEE Trans Haptics 2022; 15:79-84. PMID: 34962877; DOI: 10.1109/toh.2021.3138590.
Abstract
In recent years, Virtual Reality (VR) Head-Mounted Displays (HMDs) have been used to provide an immersive, real-time, first-person view for the remote control of Unmanned Ground Vehicles (UGVs). One critical issue is that it is challenging to perceive the distance of obstacles surrounding the vehicle from 2D views in the HMD, which deteriorates control of the UGV. Conventional distance indicators used in HMDs take up screen space, which leads to clutter on the display and can further reduce situation awareness of the physical environment. To address this issue, in this paper we propose off-screen in-device feedback using vibro-tactile and/or light-visual cues to provide real-time distance information for the remote control of UGVs. Results from a study show significantly better performance with either feedback type, reduced workload, and improved usability in a driving task that requires continuous perception of the distance between the UGV and objects or obstacles in its environment. Our findings make a solid case for in-device vibro-tactile and/or light-visual feedback to support remote UGV operation that relies heavily on distance perception of objects.
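As a hypothetical illustration of such a cue mapping (the paper's actual thresholds and encodings are not given here), one proximity sensor's distance reading could be turned into a vibration strength and a light color like this:

```python
# Hypothetical mapping from obstacle distance to the two cue types the
# paper studies; thresholds and the linear ramp are illustrative only.
def distance_to_cues(dist_m: float, near_m: float = 0.3, far_m: float = 2.0):
    """Return (vibration strength 0..1, LED color) for one proximity reading."""
    if dist_m >= far_m:
        return 0.0, "off"          # nothing nearby: no cue, keep the view clean
    # Linear ramp: strongest vibration at or inside the near threshold.
    strength = min(1.0, (far_m - dist_m) / (far_m - near_m))
    color = "red" if dist_m <= near_m else "yellow"
    return strength, color
```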

5. Technologies for Multimodal Interaction in Extended Reality—A Scoping Review. Multimodal Technol Interact 2021. DOI: 10.3390/mti5120081.
Abstract
When designing extended reality (XR) applications, it is important to consider multimodal interaction techniques, which employ several human senses simultaneously. Multimodal interaction can transform how people communicate remotely, practice for tasks, entertain themselves, process information visualizations, and make decisions based on the provided information. This scoping review summarizes recent advances in multimodal interaction technologies for head-mounted display (HMD)-based XR systems. Our purpose was to provide a succinct yet clear, insightful, and structured overview of emerging, underused multimodal technologies beyond standard video and audio for XR interaction, and to identify research gaps. The review aims to help XR practitioners apply multimodal interaction techniques and to help interaction researchers direct future efforts toward relevant issues in multimodal XR. We conclude with our perspective on promising research avenues for multimodal interaction technologies.

6.
Abstract
The development of services and applications involving drones is promoting the growth of the unmanned-aerial-vehicle industry. Moreover, the supply of low-cost compact drones has greatly contributed to the popularization of drone flying. However, flying first-person-view (FPV) drones requires considerable experience because the remote pilot sees only the video transmitted from a camera mounted on the drone. In this paper, we propose a remote training system for FPV drone flying in mixed reality, through which beginners inexperienced in FPV flight control can practice under the guidance of remote experts.

7. Wang X, Chen G, Gong H, Jiang J. UAV swarm autonomous control based on Internet of Things and artificial intelligence algorithms. J Intell Fuzzy Syst 2021. DOI: 10.3233/jifs-189541.
Abstract
At present, UAV swarm positioning solutions suffer from poor accuracy and instability, so a sliding-mode formation controller must be designed to realize formation flight. This study analyzes autonomous control strategies for UAV swarms, taking behavior-based formation control as its main research focus. It combines the Internet of Things (IoT) with artificial intelligence algorithms to build an autonomous control model for the UAV swarm and designs UAV formation control laws for both disturbed and undisturbed conditions. With reference to the basic IoT architecture, the study imitates ZigBee's self-organizing network and proposes an adaptive IoT-based networking scheme using the AP+STA working mode of the IoT module in each node device. The experimental results show that the positioning accuracy of the UAVs is high enough to meet the needs of swarm flight. For the UAVs' Z-axis coordinates, combining the laser distance measurement with the barometer value significantly improves accuracy and reduces the root-mean-square error, yielding positioning results significantly better than those of traditional direct positioning.
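The abstract does not detail how the laser and barometer readings are combined for the Z axis; a minimal complementary-filter sketch of one plausible fusion, where a drifting barometric altitude is corrected by a noisy, dropout-prone laser range, is shown below. All gains and the bias-tracking rule are assumptions.

```python
# Illustrative complementary filter fusing barometric altitude (smooth but
# drifting) with laser range (accurate but prone to dropouts); not the
# paper's actual fusion scheme.
class AltitudeFuser:
    def __init__(self, alpha=0.95):
        self.alpha = alpha        # weight on the barometer-propagated estimate
        self.z = None             # fused Z estimate (m)
        self.baro_bias = 0.0      # slowly learned barometer offset

    def update(self, z_baro, z_laser=None):
        """z_laser is None when the laser return is missing or rejected."""
        if self.z is None:
            self.z = z_laser if z_laser is not None else z_baro
        pred = z_baro - self.baro_bias      # bias-corrected barometer altitude
        if z_laser is None:
            self.z = pred                   # laser dropout: coast on barometer
        else:
            self.z = self.alpha * pred + (1.0 - self.alpha) * z_laser
            self.baro_bias += 0.01 * (z_baro - self.z)   # track baro drift
        return self.z
```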
Affiliation(s)
- Xinhua Wang, College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Guanyu Chen, College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Huajun Gong, College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Ju Jiang, College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China

8. An Aerial Mixed-Reality Environment for First-Person-View Drone Flying. Appl Sci (Basel) 2020. DOI: 10.3390/app10165436.
Abstract
A drone must be able to fly without colliding, both to preserve its surroundings and for its own safety. It should also offer numerous features of interest to drone users. In this paper, an aerial mixed-reality environment for first-person-view drone flying is proposed that provides an immersive experience and a safe environment for drone users by creating additional virtual obstacles when flying a drone in an open area. The proposed system is effective in perceiving the depth of obstacles and enables bidirectional interaction between the real and virtual worlds using a drone equipped with a stereo camera based on human binocular vision. In addition, it synchronizes the parameters of the real and virtual cameras to create virtual objects in real space effectively and naturally. Based on user studies that included both general and expert users, we confirm that the proposed system successfully creates a mixed-reality environment using a flying drone by quickly recognizing real objects and stably combining them with virtual objects.
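The camera-synchronization idea can be illustrated with a plain pinhole projection: if the virtual renderer uses the same intrinsics and pose as the real drone camera, a virtual obstacle lands on the correct pixels, and its depth can be tested against the stereo depth of real objects. This sketch and its parameter names are illustrative, not the paper's implementation.

```python
# Minimal pinhole-projection sketch of real/virtual camera synchronization.
import numpy as np

def project(point_w, R_wc, t_wc, fx, fy, cx, cy):
    """Project a world point into the image of a camera whose pose in the
    world frame is (R_wc, t_wc). Returns ((u, v), depth) or None if the
    point is behind the camera."""
    p_c = R_wc.T @ (np.asarray(point_w) - t_wc)   # world -> camera frame
    if p_c[2] <= 0:
        return None
    u = fx * p_c[0] / p_c[2] + cx
    v = fy * p_c[1] / p_c[2] + cy
    # The returned depth lets an occlusion test hide the virtual object
    # behind nearer real obstacles measured by the stereo camera.
    return (u, v), p_c[2]
```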

9. Watanabe K, Takahashi M. Head-synced Drone Control for Reducing Virtual Reality Sickness. J Intell Robot Syst 2019. DOI: 10.1007/s10846-019-01054-6.