1
Yang K, Kim M, Jung Y, Lee S. Hand Gesture Recognition Using FSK Radar Sensors. Sensors (Basel, Switzerland) 2024; 24:349. [PMID: 38257441] [PMCID: PMC10820019] [DOI: 10.3390/s24020349] [Received: 12/08/2023] [Revised: 12/25/2023] [Accepted: 01/05/2024]
Abstract
Hand gesture recognition, one of the fields of human-computer interaction (HCI) research, extracts the user's motion patterns using sensors. Radio detection and ranging (RADAR) sensors are robust in severe environments and convenient for capturing hand gestures. Most existing studies adopted continuous-wave (CW) radar, which performs well only at a fixed distance because it cannot measure range. This paper proposes a hand gesture recognition system based on frequency-shift keying (FSK) radar, which can measure range and therefore recognize gestures at varying distances between the radar sensor and the user. The proposed system adopts a convolutional neural network (CNN) model for recognition. Experimental results show that the system covers the range from 30 cm to 180 cm with an accuracy of 93.67% over the entire range.
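The FSK ranging principle behind this distance coverage can be illustrated with a short sketch: the radar alternates between two carrier frequencies, and the phase difference between their echoes encodes the target range as R = c·Δφ/(4π·Δf). The 24 GHz carrier and 50 MHz frequency step below are illustrative values, not the paper's actual parameters.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def fsk_range(phi1, phi2, delta_f):
    """Estimate target range (m) from the echo phase difference of two FSK tones."""
    dphi = (phi2 - phi1) % (2 * np.pi)
    return C * dphi / (4 * np.pi * delta_f)

# Simulated echoes from a hand at 1.2 m; 24 GHz carrier with a 50 MHz step
# (hypothetical parameters chosen for illustration).
R_true, f1, delta_f = 1.2, 24.0e9, 50e6
phi1 = (4 * np.pi * f1 * R_true / C) % (2 * np.pi)
phi2 = (4 * np.pi * (f1 + delta_f) * R_true / C) % (2 * np.pi)
print(fsk_range(phi1, phi2, delta_f))  # ≈ 1.2
```

With a 50 MHz step, the unambiguous range c/(2Δf) is 3 m, comfortably covering a 30-180 cm operating span.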
Affiliation(s)
- Kimoon Yang
- Department of Semiconductor Systems Engineering, Sejong University, Gunja-dong, Gwangjin-gu, Seoul 05006, Republic of Korea; (K.Y.); (M.K.)
- Department of Convergence Engineering of Intelligent Drone, Sejong University, Gunja-dong, Gwangjin-gu, Seoul 05006, Republic of Korea
- Minji Kim
- Department of Semiconductor Systems Engineering, Sejong University, Gunja-dong, Gwangjin-gu, Seoul 05006, Republic of Korea; (K.Y.); (M.K.)
- Department of Convergence Engineering of Intelligent Drone, Sejong University, Gunja-dong, Gwangjin-gu, Seoul 05006, Republic of Korea
- Yunho Jung
- Department of Smart Drone Convergence, Korea Aerospace University, Goyang 10540, Gyeonggi-do, Republic of Korea;
- School of Electronics and Information Engineering, Korea Aerospace University, Goyang 10540, Gyeonggi-do, Republic of Korea
- Seongjoo Lee
- Department of Convergence Engineering of Intelligent Drone, Sejong University, Gunja-dong, Gwangjin-gu, Seoul 05006, Republic of Korea
- Department of Electrical Engineering, Sejong University, Gunja-dong, Gwangjin-gu, Seoul 05006, Republic of Korea
2
Chmurski M, Mauro G, Santra A, Zubert M, Dagasan G. Highly-Optimized Radar-Based Gesture Recognition System with Depthwise Expansion Module. Sensors (Basel, Switzerland) 2021; 21:7298. [PMID: 34770603] [PMCID: PMC8588382] [DOI: 10.3390/s21217298] [Received: 09/14/2021] [Revised: 10/08/2021] [Accepted: 10/26/2021]
Abstract
The increasing integration of technology into our daily lives demands the development of more convenient human-computer interaction (HCI) methods. Most current hand-based HCI strategies exhibit various limitations, e.g., sensitivity to variable lighting conditions and constraints on the operating environment. Furthermore, such systems are often not deployable in resource-constrained contexts. Inspired by the MobileNetV1 deep learning network, this paper presents a novel hand gesture recognition system based on frequency-modulated continuous-wave (FMCW) radar, exhibiting higher recognition accuracy than state-of-the-art systems. First, the paper introduces a method to simplify radar preprocessing while preserving the main information of the performed gestures. Then, a deep neural classifier with a novel Depthwise Expansion Module based on depthwise separable convolutions is presented. The classifier is optimized and deployed on the Coral Edge TPU board. The system defines and adopts eight different hand gestures performed by five users, achieving a classification accuracy of 98.13% while operating in a low-power, resource-constrained environment.
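The efficiency gain of depthwise separable convolutions, on which the Depthwise Expansion Module builds, comes from factoring a standard k×k convolution into a per-channel spatial filter plus a 1×1 pointwise mix. A minimal parameter count (ignoring biases, with illustrative layer sizes) makes the saving concrete:

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: every output channel filters every input channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # One k x k depthwise filter per input channel, then a 1x1 pointwise mix.
    return k * k * c_in + c_in * c_out

# Example layer: 3x3 kernel, 32 input channels, 64 output channels.
print(conv_params(3, 32, 64))                 # 18432
print(depthwise_separable_params(3, 32, 64))  # 2336
```

The separable variant costs roughly 1/c_out + 1/k² of the standard convolution, about an 8× reduction here, which is why MobileNet-style blocks suit edge hardware like the Coral TPU.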
Affiliation(s)
- Mateusz Chmurski
- Infineon Technologies AG, 85579 Neubiberg, Germany; (G.M.); (A.S.); (M.Z.); (G.D.)
- Department of Microelectronics and Computer Science, Lodz University of Technology, 90924 Lodz, Poland
- Gianfranco Mauro
- Infineon Technologies AG, 85579 Neubiberg, Germany; (G.M.); (A.S.); (M.Z.); (G.D.)
- Department of Electronic and Computer Technology, University of Granada, Avenida de Fuente Nueva s/n, 18071 Granada, Spain
- Avik Santra
- Infineon Technologies AG, 85579 Neubiberg, Germany; (G.M.); (A.S.); (M.Z.); (G.D.)
- Mariusz Zubert
- Infineon Technologies AG, 85579 Neubiberg, Germany; (G.M.); (A.S.); (M.Z.); (G.D.)
- Gökberk Dagasan
- Infineon Technologies AG, 85579 Neubiberg, Germany; (G.M.); (A.S.); (M.Z.); (G.D.)
3
Qualitative Assessment of Effective Gamification Design Processes Using Motivators to Identify Game Mechanics. Sensors (Basel, Switzerland) 2021; 21:2556. [PMID: 33917409] [PMCID: PMC8038701] [DOI: 10.3390/s21072556] [Received: 02/15/2021] [Revised: 03/31/2021] [Accepted: 03/31/2021]
Abstract
This research focuses on the qualitative assessment of the relationships between motivators and game mechanics, based on the ratings of expert gamification consultants. The aim is that, during the design phase of a gamified system, design decisions can be made from the motivators of each player profile, which can be determined from information provided by the potential players themselves. The research builds on a previous analysis in which, starting from the three most widely used gamification frameworks and using a card-sorting technique that lets users organize and classify content, a set of mechanics was determined. In the present study, each mechanic is analyzed and matched to a more precise motivator. As a result, a higher level of personalization is achieved and, consequently, a higher level of gamification effectiveness is approached. The main conclusions are implemented in the Game4City 3.0 project, which develops gamified and interactive strategies for visualizing 3D urban environments at an educational and social level.
4
Xu C, Zhou J, Cai W, Jiang Y, Li Y, Liu Y. Robust 3D Hand Detection from a Single RGB-D Image in Unconstrained Environments. Sensors (Basel, Switzerland) 2020; 20:6360. [PMID: 33171831] [PMCID: PMC7664645] [DOI: 10.3390/s20216360] [Received: 09/11/2020] [Revised: 11/04/2020] [Accepted: 11/05/2020]
Abstract
Three-dimensional hand detection from a single RGB-D image is an important technology that supports many useful applications. In practice, it is challenging to robustly detect human hands in unconstrained environments because the RGB-D channels can be affected by many uncontrollable factors, such as light changes. To tackle this problem, we propose a 3D hand detection approach that improves robustness and accuracy by adaptively fusing complementary features extracted from the RGB-D channels. Using the fused RGB-D features, the 2D bounding boxes of hands are detected first, and the 3D locations along the z-axis are then estimated through a cascaded network. Furthermore, we present a challenging RGB-D hand detection dataset collected in unconstrained environments. Unlike previous works, which rely primarily on either the RGB or the depth channel, we adaptively fuse both channels for hand detection. The evaluation shows that the depth channel is crucial for hand detection in unconstrained environments. Our RGB-D fusion-based approach improves hand detection accuracy from 69.1 to 74.1 compared to a state-of-the-art RGB-based hand detector. Existing RGB- or depth-based methods are unstable under unseen lighting: in dark conditions, the accuracy of the RGB-based method drops to 48.9, and in back-lit conditions, the accuracy of the depth-based method drops to 28.3. Compared with these methods, our RGB-D fusion-based approach is much more robust, achieving accuracies of 62.5 and 65.9, respectively, under these two extreme lighting conditions.
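The adaptive fusion idea can be sketched as a learned channel-wise gate that blends RGB and depth features, so that whichever channel is more reliable dominates. The sigmoid gate below is a hypothetical stand-in for the paper's fusion network, with random weights purely for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_fuse(f_rgb, f_depth, w_gate, b_gate):
    """Blend RGB and depth features with a per-channel sigmoid gate (sketch)."""
    g = sigmoid(np.concatenate([f_rgb, f_depth]) @ w_gate + b_gate)
    return g * f_rgb + (1.0 - g) * f_depth

rng = np.random.default_rng(0)
f_rgb = rng.normal(size=16)             # toy RGB feature vector
f_depth = rng.normal(size=16)           # toy depth feature vector
w_gate = rng.normal(size=(32, 16)) * 0.1  # illustrative gate weights
fused = adaptive_fuse(f_rgb, f_depth, w_gate, 0.0)
print(fused.shape)  # (16,)
```

Because the gate lies in (0, 1), each fused channel is a convex combination of the two inputs; in a real network the gate weights would be trained end-to-end with the detector.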
Affiliation(s)
- Chi Xu
- School of Automation, China University of Geosciences, Wuhan 430074, China; (C.X.); (W.C.); (Y.J.); (Y.L.)
- Hubei Key Laboratory of Advanced Control and Intelligent Automation for Complex Systems, Wuhan 430074, China
- Engineering Research Center of Intelligent Technology for Geo-Exploration, Ministry of Education, Wuhan 430074, China
- Jun Zhou
- School of Automation, China University of Geosciences, Wuhan 430074, China; (C.X.); (W.C.); (Y.J.); (Y.L.)
- Hubei Key Laboratory of Advanced Control and Intelligent Automation for Complex Systems, Wuhan 430074, China
- Correspondence:
- Wendi Cai
- School of Automation, China University of Geosciences, Wuhan 430074, China; (C.X.); (W.C.); (Y.J.); (Y.L.)
- Hubei Key Laboratory of Advanced Control and Intelligent Automation for Complex Systems, Wuhan 430074, China
- Yunkai Jiang
- School of Automation, China University of Geosciences, Wuhan 430074, China; (C.X.); (W.C.); (Y.J.); (Y.L.)
- Hubei Key Laboratory of Advanced Control and Intelligent Automation for Complex Systems, Wuhan 430074, China
- Yongbo Li
- School of Automation, China University of Geosciences, Wuhan 430074, China; (C.X.); (W.C.); (Y.J.); (Y.L.)
- Hubei Key Laboratory of Advanced Control and Intelligent Automation for Complex Systems, Wuhan 430074, China
- Yi Liu
- CRRC Zhuzhou Electric Locomotive Co., Ltd., Zhuzhou 412000, China;
- National Innovation Center of Advanced Rail Transit Equipment, Zhuzhou 412000, China
5
Accurate Hand Detection from Single-Color Images by Reconstructing Hand Appearances. Sensors (Basel, Switzerland) 2019; 20:192. [PMID: 31905746] [PMCID: PMC6982909] [DOI: 10.3390/s20010192] [Received: 10/21/2019] [Revised: 12/23/2019] [Accepted: 12/27/2019]
Abstract
Hand detection is a crucial pre-processing step for many hand-related computer vision tasks, such as hand pose estimation, hand gesture recognition, and human activity analysis. However, reliably detecting multiple hands in cluttered scenes remains challenging because of the complex appearance diversity of dexterous human hands (e.g., different hand shapes, skin colors, illuminations, orientations, and scales) in color images. To tackle this problem, an accurate hand detection method is proposed that reliably detects multiple hands from a single color image using a hybrid detection/reconstruction convolutional neural network (CNN) framework, in which hand regions are detected and hand appearances are reconstructed in parallel by sharing features extracted from a region proposal layer; the proposed model is trained end-to-end. Furthermore, it is observed that a generative adversarial network (GAN) can further boost detection performance by generating more realistic hand appearances. Experimental results show that the proposed approach outperforms the state of the art on challenging public hand detection benchmarks.
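The framework's core idea, one shared feature extractor feeding a detection branch and a reconstruction branch in parallel, can be sketched with a toy forward pass. All layer sizes and weights below are illustrative, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

def relu(x):
    return np.maximum(0.0, x)

# Shared backbone plus two heads (shapes chosen for illustration only).
W_shared = rng.normal(size=(64, 32)) * 0.1
W_detect = rng.normal(size=(32, 4)) * 0.1   # box regression: x, y, w, h
W_recon  = rng.normal(size=(32, 64)) * 0.1  # reconstruct the input patch

patch = rng.normal(size=64)        # flattened region-proposal patch
feat = relu(patch @ W_shared)      # features shared by both branches
box = feat @ W_detect              # detection branch output
recon = feat @ W_recon             # reconstruction branch output
print(box.shape, recon.shape)      # (4,) (64,)
```

Training both heads on the same shared features forces the backbone to encode appearance detail that pure detection losses might discard, which is the intuition behind the detection/reconstruction pairing.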
6
Zhan Z, Zhang L, Mei H, Fong PSW. Online Learners' Reading Ability Detection Based on Eye-Tracking Sensors. Sensors (Basel, Switzerland) 2016; 16:1457. [PMID: 27626418] [PMCID: PMC5038735] [DOI: 10.3390/s16091457] [Received: 07/30/2016] [Revised: 08/27/2016] [Accepted: 08/30/2016]
Abstract
The detection of university online learners' reading ability is generally problematic and time-consuming. Thus, eye-tracking sensors were employed in this study to record temporal and spatial human eye movements. Learners' pupil diameter, blinks, fixations, saccades, and regressions are recognized as primary indicators for detecting reading ability. A computational model is established from the empirical eye-tracking data by applying a multi-feature regularization machine learning mechanism based on a low-rank constraint. The model shows good generalization ability, with an error of only 4.9% over 100 random runs. It has clear advantages in saving time and improving precision, requiring only 20 min of testing to predict an individual learner's reading ability.
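The prediction step can be illustrated with a simplified sketch: a regularized linear model mapping eye-movement features to a reading-ability score. Plain ridge regression on synthetic data stands in here for the paper's low-rank-constrained multi-feature learner; the feature names and weights are hypothetical.

```python
import numpy as np

# Synthetic learners: columns stand in for pupil diameter, blink rate,
# fixation count, saccade length, and regression count (made-up data).
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
true_w = np.array([0.5, -0.3, 1.2, 0.8, -0.6])
y = X @ true_w + 0.05 * rng.normal(size=100)

# Ridge regression: closed-form minimizer of ||Xw - y||^2 + lam * ||w||^2.
lam = 0.1
w = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)
print(np.round(w, 2))  # close to true_w
```

The regularizer plays the same stabilizing role as the paper's low-rank constraint: it keeps the learned weights well-conditioned when features are correlated and training data are scarce.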
Affiliation(s)
- Zehui Zhan
- Center of Educational Information Technology, South China Normal University, Guangzhou 510631, China.
- Lei Zhang
- College of Communication Engineering, Chongqing University, Chongqing 400044, China.
- Hu Mei
- School of Economics & Management, South China Normal University, Guangzhou 510006, China.
- Patrick S W Fong
- Department of Building & Real Estate, The Hong Kong Polytechnic University, Hong Kong 999077, China.