1
Dang X, Tang Y, Hao Z, Gao Y, Fan K, Wang Y. PGGait: Gait Recognition Based on Millimeter-Wave Radar Spatio-Temporal Sensing of Multidimensional Point Clouds. Sensors (Basel). 2023;24:142. PMID: 38203004; PMCID: PMC10781080; DOI: 10.3390/s24010142.
Abstract
Gait recognition, crucial in biometrics and behavioral analytics, has applications in human-computer interaction, identity verification, and health monitoring. Traditional sensors face limitations in complex or poorly lit settings. RF-based approaches, particularly millimeter-wave technology, are gaining traction in wireless sensing for their privacy preservation, insensitivity to lighting conditions, and high resolution. In this paper, we propose a gait recognition system called Multidimensional Point Cloud Gait Recognition (PGGait). The system uses commercial millimeter-wave radar to extract high-quality point clouds through a specially designed preprocessing pipeline, followed by spatial clustering to separate users and perform target tracking. At the same time, the original point cloud data are augmented with velocity and signal-to-noise ratio to form multidimensional point cloud inputs. Finally, the system feeds the point cloud data into a neural network that extracts spatial and temporal features for user identification. We implemented PGGait on a commercially available 77 GHz millimeter-wave radar and conducted comprehensive testing to validate its performance. Experimental results demonstrate that PGGait achieves up to 96.75% accuracy in recognizing single-user radial paths and exceeds 94.30% recognition accuracy in the two-person case. This research provides an efficient and feasible solution for user gait recognition with various applications.
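As a rough sketch of the clustering stage described in this abstract (not the PGGait implementation), the following Python example separates users in a single radar frame with DBSCAN and assembles fixed-size multidimensional point samples (x, y, z, velocity, SNR). All array shapes and parameter values are illustrative assumptions.

```python
# Minimal sketch of point-cloud clustering for user separation (not the PGGait code).
# Assumes each frame is an (N, 5) array of [x, y, z, velocity, snr] points.
import numpy as np
from sklearn.cluster import DBSCAN

def separate_users(frame_points, eps=0.5, min_samples=10):
    """Cluster one radar frame in x-y-z space and return one point set per user."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(frame_points[:, :3])
    clusters = []
    for label in sorted(set(labels) - {-1}):          # -1 marks noise points
        clusters.append(frame_points[labels == label])
    return clusters

def to_fixed_size(cluster, n_points=64):
    """Pad or subsample a cluster so every sample has the same number of points."""
    idx = np.random.choice(len(cluster), n_points, replace=len(cluster) < n_points)
    return cluster[idx]                                # shape (n_points, 5): x, y, z, v, snr

if __name__ == "__main__":
    frame = np.random.rand(200, 5)                     # stand-in for one radar frame
    users = separate_users(frame, eps=0.2, min_samples=5)
    samples = [to_fixed_size(u) for u in users]
    print(len(users), [s.shape for s in samples])
```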
Affiliation(s)
- Xiaochao Dang: College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China; Gansu Province Internet of Things Engineering Research Center, Lanzhou 730070, China
- Yangyang Tang: College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
- Zhanjun Hao: College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China; Gansu Province Internet of Things Engineering Research Center, Lanzhou 730070, China
- Yifei Gao: College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
- Kai Fan: College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
- Yue Wang: College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
2
Dang X, Jin P, Hao Z, Ke W, Deng H, Wang L. Human Movement Recognition Based on 3D Point Cloud Spatiotemporal Information from Millimeter-Wave Radar. Sensors (Basel). 2023;23:9430. PMID: 38067803; PMCID: PMC10708869; DOI: 10.3390/s23239430.
Abstract
Human movement recognition uses sensing technology to capture limb or body movements and then applies wireless signal acquisition, processing, and classification to identify regular movements of the human body. It has a wide range of application prospects, including intelligent elderly care, remote health monitoring, and child supervision. Among traditional human movement recognition methods, the most widely used are video-based and Wi-Fi-based recognition. However, in dim or adverse weather conditions it is difficult to maintain high recognition performance with video images, and Wi-Fi-based recognition suffers from low accuracy in complex environments. Much of the earlier research on movement recognition relied on LiDAR, but a static three-dimensional LiDAR point cloud only captures the characteristics of stationary objects and struggles to reflect the characteristics of moving ones. Dynamic millimeter-wave radar point clouds, in contrast, address these problems: they can capture human movement characteristics even in non-line-of-sight situations while better protecting people's privacy. In this paper, we propose a human motion feature recognition system (PNHM) based on the spatiotemporal information of millimeter-wave radar 3D point clouds, design a neural network based on PointNet++ to recognize human motion features effectively, and study four human motions segmented with a threshold method. A dataset of the four movements, recorded from two angles in two experimental environments, was constructed, and four standard mainstream 3D point cloud human action recognition models were compared against the system. The experimental results show that the recognition accuracy reaches 94% for walking upright, 84% for moving from squatting to standing, 87% for moving from standing to sitting, and 93% for falling.
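A minimal sketch of the kind of point-cloud classifier this abstract builds on is shown below; it is a simplified PointNet-style network rather than the PNHM system's PointNet++ design, and the input dimensions, layer sizes, and four-class output are assumptions made for illustration.

```python
# Simplified PointNet-style classifier (illustrative; PNHM itself builds on PointNet++).
import torch
import torch.nn as nn

class SimplePointNet(nn.Module):
    def __init__(self, in_dim=4, n_classes=4):
        super().__init__()
        # Shared per-point MLP applied to (x, y, z, doppler) features.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(in_dim, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, points):                        # points: (batch, n_points, in_dim)
        x = self.point_mlp(points.transpose(1, 2))    # (batch, 256, n_points)
        x = torch.max(x, dim=2).values                # symmetric max pooling over points
        return self.head(x)                           # class logits

if __name__ == "__main__":
    model = SimplePointNet()
    logits = model(torch.randn(8, 128, 4))            # 8 samples, 128 points each
    print(logits.shape)                               # torch.Size([8, 4])
```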
Affiliation(s)
- Xiaochao Dang: College of Computer Science & Engineering, Northwest Normal University, Lanzhou 730070, China
3
He X, Zhang Y, Dong X. Extraction of Human Limbs Based on Micro-Doppler-Range Trajectories Using Wideband Interferometric Radar. Sensors (Basel). 2023;23:7544. PMID: 37688000; PMCID: PMC10490733; DOI: 10.3390/s23177544.
Abstract
In this paper, we propose to extract the motions of different human limbs using interferometric radar based on the micro-Doppler-Range signature (mDRS). Accurate extraction of human limb motion has great potential for improving radar performance in human motion detection. Because the motions of human limbs usually overlap in the time-Doppler plane, it is extremely hard to separate the limbs without additional information such as range or angle, and it is also difficult to identify which part of the body each signal component belongs to. In this work, overlaps of multiple components are resolved, and the motions of different limbs are extracted and classified based on the extracted micro-Doppler-Range trajectories (MDRTs) together with a proposed three-dimensional constant false alarm rate (3D-CFAR) detection. Three experiments on typical human motions were conducted with three people of different heights using a 77 GHz radar board with 4 GHz bandwidth to test the repeatability and robustness of the proposed approach; the results were validated against measurements from a Kinect sensor and met our expectations well.
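The CFAR detection step can be illustrated with a basic two-dimensional cell-averaging CFAR over a range-Doppler map; this is only a stand-in for the paper's 3D-CFAR over range, Doppler, and interferometric phase, and the guard/training window sizes and threshold scale are placeholders.

```python
# Basic 2D cell-averaging CFAR on a range-Doppler magnitude map
# (illustrative stand-in for the paper's 3D-CFAR detection).
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar_2d(rd_map, guard=2, train=8, scale=4.0):
    """Return a boolean detection mask for a range-Doppler map."""
    power = np.abs(rd_map) ** 2
    win = 2 * (guard + train) + 1
    inner = 2 * guard + 1
    # Sum over the full window and over the guard region, then subtract to get
    # the sum over the training cells only.
    total = uniform_filter(power, size=win, mode="nearest") * win**2
    guard_sum = uniform_filter(power, size=inner, mode="nearest") * inner**2
    n_train = win**2 - inner**2
    noise = (total - guard_sum) / n_train
    return power > scale * noise

if __name__ == "__main__":
    rd = np.random.randn(128, 64) + 1j * np.random.randn(128, 64)
    rd[40, 20] += 60                      # inject a strong target
    mask = ca_cfar_2d(rd)
    print(mask.sum(), mask[40, 20])
```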
Affiliation(s)
- Xianxian He: CAS Key Laboratory of Microwave Remote Sensing, National Space Science Center, Chinese Academy of Sciences, Beijing 100190, China; School of Electronic, Electrical, and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Yunhua Zhang: CAS Key Laboratory of Microwave Remote Sensing, National Space Science Center, Chinese Academy of Sciences, Beijing 100190, China; School of Electronic, Electrical, and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Xiao Dong: CAS Key Laboratory of Microwave Remote Sensing, National Space Science Center, Chinese Academy of Sciences, Beijing 100190, China; School of Electronic, Electrical, and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
4
Muaaz M, Waqar S, Pätzold M. Orientation-Independent Human Activity Recognition Using Complementary Radio Frequency Sensing. Sensors (Basel). 2023;23:5810. PMID: 37447660; DOI: 10.3390/s23135810.
Abstract
RF sensing offers an unobtrusive, user-friendly, and privacy-preserving method for detecting accidental falls and recognizing human activities. Contemporary RF-based HAR systems generally employ a single monostatic radar to recognize human activities. However, a single monostatic radar cannot detect the motion of a target, e.g., a moving person, orthogonal to the boresight axis of the radar. Owing to this inherent physical limitation, a single monostatic radar fails to efficiently recognize orientation-independent human activities. In this work, we present a complementary RF sensing approach that overcomes the limitation of existing single monostatic radar-based HAR systems to robustly recognize orientation-independent human activities and falls. Our approach used a distributed mmWave MIMO radar system that was set up as two separate monostatic radars placed orthogonal to each other in an indoor environment. These two radars illuminated the moving person from two different aspect angles and consequently produced two time-variant micro-Doppler signatures. We first computed the mean Doppler shifts (MDSs) from the micro-Doppler signatures and then extracted statistical and time- and frequency-domain features. We adopted feature-level fusion techniques to fuse the extracted features and a support vector machine to classify orientation-independent human activities. To evaluate our approach, we used an orientation-independent human activity dataset, which was collected from six volunteers. The dataset consisted of more than 1350 activity trials of five different activities that were performed in different orientations. The proposed complementary RF sensing approach achieved an overall classification accuracy ranging from 98.31 to 98.54%. It overcame the inherent limitations of a conventional single monostatic radar-based HAR and outperformed it by 6%.
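A compact sketch of the mean Doppler shift computation and feature-level fusion described above is given below; it assumes one micro-Doppler spectrogram per radar has already been computed and uses a deliberately small statistical feature set, so it is not the authors' exact pipeline.

```python
# Sketch of mean-Doppler-shift features and feature-level fusion for two radars
# (illustrative; not the authors' exact feature set or pipeline).
import numpy as np
from sklearn.svm import SVC

def mean_doppler_shift(spectrogram, doppler_bins):
    """Power-weighted mean Doppler per time frame; spectrogram is (n_doppler, n_frames)."""
    power = np.abs(spectrogram) ** 2
    return (doppler_bins[:, None] * power).sum(axis=0) / (power.sum(axis=0) + 1e-12)

def mds_features(mds):
    """Simple statistical features of a mean-Doppler-shift trace."""
    return np.array([mds.mean(), mds.std(), mds.min(), mds.max(),
                     np.percentile(mds, 25), np.percentile(mds, 75)])

def fused_feature(spec_a, spec_b, doppler_bins):
    """Feature-level fusion: concatenate features from the two orthogonal radars."""
    return np.concatenate([mds_features(mean_doppler_shift(spec_a, doppler_bins)),
                           mds_features(mean_doppler_shift(spec_b, doppler_bins))])

if __name__ == "__main__":
    bins = np.linspace(-200, 200, 128)              # Doppler axis in Hz (placeholder)
    X = np.stack([fused_feature(np.random.rand(128, 100),
                                np.random.rand(128, 100), bins) for _ in range(40)])
    y = np.random.randint(0, 5, size=40)            # five activity classes (dummy labels)
    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.score(X, y))
```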
Affiliation(s)
- Muhammad Muaaz: Faculty of Engineering and Science, University of Agder, 4898 Grimstad, Norway
- Sahil Waqar: Faculty of Engineering and Science, University of Agder, 4898 Grimstad, Norway
- Matthias Pätzold: Faculty of Engineering and Science, University of Agder, 4898 Grimstad, Norway
5
Zeng X, Báruson HSL, Sundvall A. Walking Step Monitoring with a Millimeter-Wave Radar in Real-Life Environment for Disease and Fall Prevention for the Elderly. Sensors (Basel). 2022;22:9901. PMID: 36560270; PMCID: PMC9784666; DOI: 10.3390/s22249901.
Abstract
We studied the use of a millimeter-wave frequency-modulated continuous wave radar for gait analysis in a real-life environment, with a focus on the measurement of the step time. A method was developed for the successful extraction of gait patterns for different test cases. The quantitative investigation carried out in a lab corridor showed the excellent reliability of the proposed method for the step time measurement, with an average accuracy of 96%. In addition, a comparison test between the millimeter-wave radar and a continuous-wave radar working at 2.45 GHz was performed, and the results suggest that the millimeter-wave radar is more capable of capturing instantaneous gait features, which enables the timely detection of small gait changes appearing at the early stage of cognitive disorders.
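The step-time measurement idea can be sketched as peak-interval estimation on an already-extracted gait signal (for example, a per-frame limb Doppler or velocity envelope); the signal model and peak-detection parameters below are placeholders, not the paper's method.

```python
# Sketch of step-time estimation from an already-extracted gait signal
# (illustrative; the paper's extraction pipeline is more involved).
import numpy as np
from scipy.signal import find_peaks

def step_times(gait_signal, frame_rate_hz, min_step_s=0.3):
    """Detect steps as peaks and return the intervals between them in seconds."""
    min_distance = int(min_step_s * frame_rate_hz)        # refuse peaks closer than one step
    peaks, _ = find_peaks(gait_signal, distance=min_distance,
                          prominence=0.5 * np.std(gait_signal))
    return np.diff(peaks) / frame_rate_hz

if __name__ == "__main__":
    fs = 20.0                                             # radar frame rate (placeholder)
    t = np.arange(0, 10, 1 / fs)
    signal = np.abs(np.sin(np.pi * 1.8 * t)) + 0.05 * np.random.randn(t.size)
    print(step_times(signal, fs))                         # ~0.56 s between simulated steps
```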
6
Activity Recognition Based on Millimeter-Wave Radar by Fusing Point Cloud and Range–Doppler Information. Signals. 2022. DOI: 10.3390/signals3020017.
Abstract
Millimeter-wave radar has demonstrated high efficiency in complex environments in recent years, outperforming LiDAR and computer vision for human activity recognition in the presence of smoke, fog, and dust. In previous studies, researchers mostly analyzed either 2D (3D) point clouds or range–Doppler information from the radar echo to extract activity features. In this paper, we propose a multi-model deep learning approach that fuses the features of both point clouds and range–Doppler maps to classify six activities, i.e., boxing, jumping, squatting, walking, circling, and high-knee lifting, based on a millimeter-wave radar. We adopt a CNN–LSTM model to extract time-serial features from the point clouds and a CNN model to obtain features from the range–Doppler maps. We then fuse the two feature sets and feed the fused feature into a fully connected layer for classification. We built a dataset from 17 volunteers using a 3D millimeter-wave radar. Evaluation on this dataset shows that the method achieves higher accuracy than utilizing either kind of information separately, reaching a recognition accuracy of 97.26%, about 1% higher than networks with only one kind of data as input.
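The two-branch fusion described in this abstract can be sketched as follows; the layer sizes, sequence length, and input shapes are assumptions for illustration rather than the authors' configuration.

```python
# Two-branch fusion sketch: CNN-LSTM over a point-cloud sequence plus CNN over range-Doppler,
# concatenated and classified (illustrative sizes only).
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        # Branch 1: per-frame point features -> CNN -> LSTM over time.
        self.frame_cnn = nn.Sequential(nn.Conv1d(4, 32, 1), nn.ReLU(),
                                       nn.Conv1d(32, 64, 1), nn.ReLU())
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        # Branch 2: CNN over a range-Doppler map.
        self.rd_cnn = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classifier = nn.Linear(64 + 32, n_classes)   # fused feature -> fully connected layer

    def forward(self, pc_seq, rd_map):
        # pc_seq: (batch, time, points, 4), rd_map: (batch, 1, doppler, range)
        b, t, n, c = pc_seq.shape
        x = self.frame_cnn(pc_seq.reshape(b * t, n, c).transpose(1, 2))  # (b*t, 64, n)
        x = x.max(dim=2).values.reshape(b, t, 64)                        # per-frame feature
        x = self.lstm(x)[0][:, -1]                                       # last time step
        y = self.rd_cnn(rd_map)
        return self.classifier(torch.cat([x, y], dim=1))

if __name__ == "__main__":
    net = FusionNet()
    out = net(torch.randn(2, 20, 64, 4), torch.randn(2, 1, 64, 64))
    print(out.shape)                                                     # torch.Size([2, 6])
```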
7
Saho K, Hayashi S, Tsuyama M, Meng L, Masugi M. Machine Learning-Based Classification of Human Behaviors and Falls in Restroom via Dual Doppler Radar Measurements. Sensors (Basel). 2022;22:1721. PMID: 35270868; PMCID: PMC8915019; DOI: 10.3390/s22051721.
Abstract
This study presents a radar-based remote measurement system for the classification of human behaviors and falls in restrooms without privacy invasion. Our system uses dual Doppler radars mounted on the restroom ceiling and wall. Machine learning methods, including the convolutional neural network (CNN), long short-term memory, support vector machine, and random forest methods, are applied to the Doppler radar data to verify the efficiency of the models and features. Experimental results from 21 participants demonstrated accurate classification of eight realistic behaviors, including falling. Using the Doppler spectrograms (time–velocity distributions) as inputs, the CNN showed the best results, with an overall classification accuracy of 95.6% and 100% fall classification accuracy. We confirmed that these accuracies were better than those achieved by conventional restroom monitoring techniques using thermal sensors and radars. Furthermore, comparisons of the machine learning methods and of cases using each radar's data show that higher-order derivative parameters (acceleration and jerk) and motion information in the horizontal direction are effective features for behavior classification in a restroom. These findings indicate that daily restroom monitoring using the proposed radar system accurately recognizes human behaviors and allows early detection of fall accidents.
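The Doppler spectrogram (time-velocity distribution) used as the classifier input can be computed with a standard short-time Fourier transform; the sketch below assumes a complex baseband signal from one radar and placeholder radar parameters, and is not the study's preprocessing code.

```python
# Computing a Doppler spectrogram (time-velocity distribution) from a complex baseband signal
# (illustrative; parameters are placeholders, not the study's radar settings).
import numpy as np
from scipy.signal import stft

def doppler_spectrogram(iq, fs_hz, wavelength_m, nperseg=256, noverlap=192):
    """Return velocity axis (m/s), time axis (s), and log-magnitude spectrogram."""
    f, t, z = stft(iq, fs=fs_hz, nperseg=nperseg, noverlap=noverlap,
                   return_onesided=False)
    f = np.fft.fftshift(f)
    z = np.fft.fftshift(z, axes=0)
    velocity = f * wavelength_m / 2.0            # Doppler frequency -> radial velocity
    return velocity, t, 20 * np.log10(np.abs(z) + 1e-12)

if __name__ == "__main__":
    fs = 2000.0                                   # slow-time sampling rate (placeholder)
    t = np.arange(0, 2, 1 / fs)
    iq = np.exp(1j * 2 * np.pi * 120 * np.sin(2 * np.pi * 1.0 * t))  # toy micro-Doppler tone
    v, tt, spec = doppler_spectrogram(iq, fs, wavelength_m=0.0125)   # ~24 GHz wavelength (assumed)
    print(spec.shape, v.min(), v.max())
```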
Affiliation(s)
- Kenshi Saho (corresponding author): Department of Intelligent Robotics, Toyama Prefectural University, Imizu 939-0398, Japan; Department of Electronic and Computer Engineering, Ritsumeikan University, Kusatsu 525-8577, Japan
- Sora Hayashi: Department of Electronic and Computer Engineering, Ritsumeikan University, Kusatsu 525-8577, Japan
- Mutsuki Tsuyama: Department of Electronic and Computer Engineering, Ritsumeikan University, Kusatsu 525-8577, Japan
- Lin Meng: Department of Electronic and Computer Engineering, Ritsumeikan University, Kusatsu 525-8577, Japan
- Masao Masugi: Department of Electronic and Computer Engineering, Ritsumeikan University, Kusatsu 525-8577, Japan
8
Saleem F, Khan MA, Alhaisoni M, Tariq U, Armghan A, Alenezi F, Choi JI, Kadry S. Human Gait Recognition: A Single Stream Optimal Deep Learning Features Fusion. Sensors (Basel). 2021;21:7584. PMID: 34833658; PMCID: PMC8625438; DOI: 10.3390/s21227584.
Abstract
Human Gait Recognition (HGR) is a biometric technique that has been utilized for security purposes over the last decade. Gait recognition performance can be influenced by various factors such as clothing, carrying a bag, and the walking surface. Furthermore, identification from differing views is a significant difficulty in HGR. Many techniques have been introduced in the literature for HGR using conventional and deep learning methods; however, the traditional methods are not suitable for large datasets. Therefore, a new framework is proposed for human gait recognition using deep learning and best feature selection. The proposed framework includes data augmentation, feature extraction, feature selection, feature fusion, and classification. In the augmentation step, three flip operations were used. In the feature extraction step, two pre-trained models were employed, Inception-ResNet-V2 and NASNet Mobile; both were fine-tuned and trained using transfer learning on the CASIA B gait dataset. The features of the selected deep models were optimized using a modified three-step whale optimization algorithm and the best features were chosen. The selected best features were fused using the modified mean absolute deviation extended serial fusion (MDeSF) approach, and the final classification was performed using several classification algorithms. The experimental process was conducted on the entire CASIA B dataset and achieved an average accuracy of 89.0%. Comparison with existing techniques showed an improvement in accuracy, recall rate, and computational time.
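The extract-fuse-classify pattern of this framework can be sketched with stand-in components: the example below uses two torchvision backbones in place of Inception-ResNet-V2 and NASNet Mobile (which are not available in torchvision), plain concatenation in place of the modified whale-optimization selection and MDeSF fusion, and a linear SVM classifier.

```python
# Sketch of the extract-fuse-classify pattern with stand-in backbones
# (torchvision models replace Inception-ResNet-V2 / NASNet Mobile; plain concatenation
# replaces the paper's whale-optimization selection and MDeSF fusion).
import torch
import torchvision.models as models
from sklearn.svm import LinearSVC

def build_extractors():
    a = models.resnet18(weights=None)
    b = models.mobilenet_v3_small(weights=None)
    a.fc = torch.nn.Identity()                # 512-d features
    b.classifier = torch.nn.Identity()        # 576-d features
    return a.eval(), b.eval()

@torch.no_grad()
def fused_features(images, model_a, model_b):
    """Serial (concatenation) fusion of the two backbones' features."""
    return torch.cat([model_a(images), model_b(images)], dim=1).numpy()

if __name__ == "__main__":
    model_a, model_b = build_extractors()
    images = torch.randn(16, 3, 224, 224)      # stand-in for gait images
    labels = torch.randint(0, 4, (16,)).numpy()
    X = fused_features(images, model_a, model_b)
    clf = LinearSVC().fit(X, labels)
    print(X.shape, clf.score(X, labels))
```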
Affiliation(s)
- Faizan Saleem: Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Muhammad Attique Khan: Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Majed Alhaisoni: College of Computer Science and Engineering, University of Ha’il, Ha’il 55211, Saudi Arabia
- Usman Tariq: College of Computer Engineering and Science, Prince Sattam Bin Abdulaziz University, Al-Kharaj 11942, Saudi Arabia
- Ammar Armghan: Department of Electrical Engineering, College of Engineering, Jouf University, Sakakah 72388, Saudi Arabia
- Fayadh Alenezi: Department of Electrical Engineering, College of Engineering, Jouf University, Sakakah 72388, Saudi Arabia
- Jung-In Choi (corresponding author): Department of Applied Artificial Intelligence, Ajou University, Suwon 16499, Korea
- Seifedine Kadry: Faculty of Applied Computing and Technology, Noroff University College, 4608 Kristiansand, Norway
9
Wang L, Li Y, Xiong F, Zhang W. Gait Recognition Using Optical Motion Capture: A Decision Fusion Based Method. Sensors (Basel). 2021;21:3496. PMID: 34067820; PMCID: PMC8156802; DOI: 10.3390/s21103496.
Abstract
Human identification based on motion capture data has received significant attention for its wide applications in authentication and surveillance systems. An optical motion capture system (OMCS) can dynamically capture the high-precision three-dimensional locations of optical trackers attached to the human body, but its potential for gait recognition has not been studied in existing works. Moreover, a typical OMCS can only support one subject at a time, which limits its capability and efficiency. In this paper, our goals are to investigate the performance of OMCS-based gait recognition and to realize gait recognition in an OMCS such that it can support multiple subjects at the same time. We develop a gait recognition method based on decision fusion that comprises four steps: feature extraction, unreliable feature calibration, classification of single motion frames, and decision fusion across multiple motion frames. We use a kernel extreme learning machine (KELM) for single-frame classification and, in particular, propose a reliability weighted sum (RWS) decision fusion method to combine the fuzzy decisions of the motion frames. We demonstrate the performance of the proposed method using walking gait data collected from 76 participants. The results show that KELM significantly outperforms the support vector machine (SVM) and random forest in the single-frame classification task, and that the proposed RWS decision fusion rule achieves better fusion accuracy than conventional fusion rules. Our results also show that, with 10 motion trackers attached to lower-body locations, the proposed method achieves 100% validation accuracy with fewer than 50 gait motion frames.
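Both the kernel extreme learning machine and the frame-level decision fusion can be sketched compactly. The example below implements a generic RBF-kernel ELM in its standard closed form and a simple softmax-confidence weighted sum as a stand-in for the proposed RWS rule; it is not the paper's implementation.

```python
# Generic RBF-kernel extreme learning machine plus a simple confidence-weighted
# decision fusion over frames (illustrative; not the paper's KELM/RWS implementation).
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    def __init__(self, C=10.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        T = np.eye(y.max() + 1)[y]                          # one-hot targets
        omega = rbf_kernel(X, X, self.gamma)
        # Standard KELM solution: beta = (I/C + Omega)^-1 T
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + omega, T)
        return self

    def decision(self, X_new):
        return rbf_kernel(X_new, self.X, self.gamma) @ self.beta   # per-class scores

def fuse_frames(frame_scores):
    """Weight each frame's decision by its softmax confidence, then sum over frames."""
    exp = np.exp(frame_scores - frame_scores.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)
    weights = probs.max(axis=1)                             # per-frame reliability proxy
    return np.argmax((weights[:, None] * probs).sum(axis=0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 8)); y = rng.integers(0, 3, size=60)
    model = KELM(C=10.0, gamma=0.5).fit(X, y)
    frames = rng.normal(size=(20, 8))                       # 20 motion frames from one walker
    print(fuse_frames(model.decision(frames)))
```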
Affiliation(s)
- Li Wang: School of Physical Education, Sichuan Normal University, Chengdu 610101, China
- Yajun Li (corresponding author): Department of Physical Education, Central South University, Changsha 410083, China
- Fei Xiong: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
- Wenyu Zhang: Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing 100083, China
10
Sunarya U, Sun Hariyani Y, Cho T, Roh J, Hyeong J, Sohn I, Kim S, Park C. Feature Analysis of Smart Shoe Sensors for Classification of Gait Patterns. Sensors (Basel). 2020;20:6253. PMID: 33147794; PMCID: PMC7662266; DOI: 10.3390/s20216253.
Abstract
Gait analysis is commonly used to detect foot disorders and abnormalities such as supination, pronation, an unstable left foot, and an unstable right foot. Early detection of these abnormalities could help correct walking posture and avoid injuries. This paper presents extensive feature analyses of smart shoe sensor data, including pressure sensor, accelerometer, and gyroscope signals, to obtain the optimum combination of sensors for gait classification, which is crucial for implementing a power-efficient mobile smart shoe system. In addition, we investigated the optimal data segmentation length based on gait cycle parameters, feature dimension reduction, and feature selection for classifying the gait patterns. Benchmark tests were conducted among several machine learning algorithms, including random forest, k-nearest neighbor (KNN), logistic regression, and support vector machine (SVM), for the classification task. Our experiments demonstrated that the combination of accelerometer and gyroscope features with an SVM achieved the best performance, with 89.36% accuracy, 89.76% precision, and 88.44% recall. This research suggests a new state-of-the-art gait classification approach, specifically for detecting human gait abnormalities.
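The segmentation-feature-classification pipeline can be sketched as below; the window length, feature set, and sensor channels are assumptions rather than the study's optimized configuration.

```python
# Sketch of window segmentation, statistical features, and SVM classification for
# accelerometer/gyroscope data (illustrative; not the study's optimized configuration).
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def windows(signal, win_len, step):
    """Slice a (n_samples, n_channels) signal into overlapping windows."""
    return np.stack([signal[i:i + win_len]
                     for i in range(0, len(signal) - win_len + 1, step)])

def window_features(win):
    """Per-channel mean, std, min, max, and RMS for one window."""
    return np.concatenate([win.mean(0), win.std(0), win.min(0), win.max(0),
                           np.sqrt((win ** 2).mean(0))])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    signal = rng.normal(size=(2000, 6))               # 3-axis accelerometer + 3-axis gyroscope
    labels_per_window = rng.integers(0, 4, size=39)   # supination/pronation/etc. (dummy labels)
    X = np.stack([window_features(w) for w in windows(signal, win_len=100, step=50)])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, labels_per_window)
    print(X.shape, clf.score(X, labels_per_window))
```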
Affiliation(s)
- Unang Sunarya: Department of Computer Engineering, Kwangwoon University, Seoul 01897, Korea; School of Applied Science, Telkom University, Bandung 40257, Indonesia
- Yuli Sun Hariyani: Department of Computer Engineering, Kwangwoon University, Seoul 01897, Korea; School of Applied Science, Telkom University, Bandung 40257, Indonesia
- Taeheum Cho: Department of Intelligent Information and Embedded Software Engineering, Kwangwoon University, Seoul 01897, Korea
- Jongryun Roh: Human Convergence Technology R&D Department, Korea Institute of Industrial Technology, Ansan 15588, Korea
- Joonho Hyeong: Human Convergence Technology R&D Department, Korea Institute of Industrial Technology, Ansan 15588, Korea
- Illsoo Sohn: Department of Computer Science and Engineering, Seoul National University of Science and Technology, Seoul 01811, Korea
- Sayup Kim (corresponding author): Human Convergence Technology R&D Department, Korea Institute of Industrial Technology, Ansan 15588, Korea
- Cheolsoo Park (corresponding author; Tel.: +82-2-940-8251): Department of Computer Engineering, Kwangwoon University, Seoul 01897, Korea