1. Munusamy V, Senthilkumar S. Emerging trends in gait recognition based on deep learning: a survey. PeerJ Comput Sci 2024; 10:e2158. PMID: 39145199; PMCID: PMC11323174; DOI: 10.7717/peerj-cs.2158.
Abstract
Gait recognition, a biometric identification method, has garnered significant attention due to its unique attributes, including non-invasiveness, long-distance capture, and resistance to impersonation. The field has been revolutionized by the remarkable capacity of deep learning to extract complex features from data. This work provides an overview of current developments in deep learning-based gait identification methods. We trace the development of gait recognition and highlight its uses in forensics, security, and criminal investigations. The article also delves into the challenges associated with gait recognition, such as variations in walking conditions, viewing angles, and clothing. We discuss the effectiveness of deep neural networks in addressing these challenges by providing a comprehensive analysis of state-of-the-art architectures, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and attention mechanisms. Diverse neural network-based gait recognition models, such as Gate Controlled and Shared Attention ICDNet (GA-ICDNet), Multi-Scale Temporal Feature Extractor (MSTFE), GaitNet, and various CNN-based approaches, demonstrate impressive accuracy across different walking conditions, showcasing the effectiveness of these models in capturing unique gait patterns. GaitNet achieved an exceptional identification accuracy of 99.7%, whereas GA-ICDNet showed high precision with an equal error rate of 0.67% in verification tasks. GaitGraph (ResGCN+2D CNN) achieved rank-1 accuracies ranging from 66.3% to 87.7%, whereas a Fully Connected Network with a Koopman Operator achieved an average rank-1 accuracy of 74.7% on OU-MVLP across various conditions. However, GCPFP (GCN with Graph Convolution-Based Part Feature Pooling), which utilizes a graph convolutional network (GCN), and GaitSet achieve the lowest average rank-1 accuracy of 62.4% on CASIA-B, while MFINet (Multiple Factor Inference Network) exhibits the lowest accuracy range of 11.72% to 19.32% under clothing-variation conditions on CASIA-B. In addition to an across-the-board analysis of recent breakthroughs in gait recognition, potential future research directions are also assessed.
Affiliation(s)
- Vaishnavi Munusamy: School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Sudha Senthilkumar: School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
2. Yousef RN, Ata MM, Rashed AEE, Badawy M, Elhosseini MA, Bahgat WM. A Novel Multi-Scaled Deep Convolutional Structure for Punctilious Human Gait Authentication. Biomimetics (Basel) 2024; 9:364. PMID: 38921244; PMCID: PMC11201791; DOI: 10.3390/biomimetics9060364.
Abstract
The need for non-interactive human recognition systems to ensure safe isolation between users and biometric equipment has been exposed by the COVID-19 pandemic. This study introduces a novel Multi-Scaled Deep Convolutional Structure for Punctilious Human Gait Authentication (MSDCS-PHGA). The proposed MSDCS-PHGA involves segmenting, preprocessing, and resizing silhouette images into three scales. Gait features are extracted from these multi-scale images using custom convolutional layers and fused to form an integrated feature set. This multi-scaled deep convolutional approach demonstrates its efficacy in gait recognition by significantly enhancing accuracy. The proposed convolutional neural network (CNN) architecture is assessed using three benchmark datasets: CASIA, OU-ISIR, and OU-MVLP. Moreover, the proposed model is evaluated against other pre-trained models using key performance metrics such as precision, accuracy, sensitivity, specificity, and training time. The results indicate that the proposed deep CNN model outperforms existing models focused on human gait. Notably, it achieves an accuracy of approximately 99.9% for both the CASIA and OU-ISIR datasets and 99.8% for the OU-MVLP dataset while maintaining a minimal training time of around 3 min.
Affiliation(s)
- Reem N. Yousef: Delta Higher Institute for Engineering and Technology, Mansoura 35681, Egypt
- Mohamed Maher Ata: School of Computational Sciences and Artificial Intelligence (CSAI), Zewail City of Science and Technology, October Gardens, 6th of October City, Giza 12578, Egypt; Department of Communications and Electronics Engineering, MISR Higher Institute for Engineering and Technology, Mansoura 35516, Egypt
- Amr E. Eldin Rashed: Department of Computer Engineering, College of Computers and Information Technology, Taif University, Taif P.O. Box 11099, Saudi Arabia
- Mahmoud Badawy: Department of Computer Science and Informatics, Taibah University, Medina 42353, Saudi Arabia; Department of Computers and Control Systems Engineering, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
- Mostafa A. Elhosseini: College of Computer Science and Engineering, Taibah University, Yanbu 46421, Saudi Arabia; Department of Computers and Control Systems Engineering, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
- Waleed M. Bahgat: Department of Computer Science and Informatics, Taibah University, Medina 42353, Saudi Arabia; Information Technology Department, Faculty of Computers and Information, Mansoura University, El Mansoura 35516, Egypt
3. Parashar A, Parashar A, Shabaz M, Gupta D, Sahu AK, Khan MA. Advancements in artificial intelligence for biometrics: A deep dive into model-based gait recognition techniques. Engineering Applications of Artificial Intelligence 2024; 130:107712. DOI: 10.1016/j.engappai.2023.107712.
4. Cao S, Ko M, Li CY, Brown D, Wang X, Hu F, Gan Y. Single-Belt Versus Split-Belt: Intelligent Treadmill Control via Microphase Gait Capture for Poststroke Rehabilitation. IEEE Transactions on Human-Machine Systems 2023; 53:1006-1016. PMID: 38601093; PMCID: PMC11006014; DOI: 10.1109/thms.2023.3327661.
Abstract
Stroke is the leading cause of long-term disability and imposes a significant financial burden associated with rehabilitation. In poststroke rehabilitation, individuals with hemiparesis have a specialized demand for coordinated movement between the paretic and nonparetic legs. A split-belt treadmill can effectively facilitate the paretic leg by slowing the belt under that leg while the patient walks. Although studies have found that split-belt treadmills can produce better gait recovery outcomes than traditional single-belt treadmills, their high cost is a significant barrier to stroke rehabilitation in clinics. In this article, we design an AI-based system for the single-belt treadmill that makes it act like a split-belt treadmill by adjusting the belt speed instantaneously according to the patient's microgait phases. The system requires only a low-cost RGB camera to capture human gait patterns. A novel microgait classification pipeline detects gait phases in real time; the pipeline is based on self-supervised learning that calibrates the anchor video against the real-time video. We then use a ResNet-LSTM module to handle temporal information and increase accuracy, and a real-time filtering algorithm to smooth the treadmill control. We have tested the developed system with 34 healthy individuals and four stroke patients. The results show that our system detects the gait microphase accurately and requires less human annotation in training than a ResNet50 classifier. Our system, "Splicer," is boosted by AI modules and performs comparably to a split-belt system in terms of timely variation of left/right foot speed, creating a hemiparetic gait in healthy individuals, and promoting paretic-side symmetry in force exertion for stroke patients. This design can potentially provide cost-effective rehabilitation treatment for hemiparetic patients.
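The abstract above mentions a real-time filtering algorithm for smoothing treadmill control but does not specify it. An exponential moving average is one common choice for damping abrupt belt-speed commands; the sketch below is a minimal illustration under that assumption (function name, smoothing factor, and speed values are made up, not the authors' implementation):

```python
def smooth_speeds(raw_speeds, alpha=0.2):
    """Exponentially smooth a stream of belt-speed commands so the
    treadmill does not jump abruptly between microgait phases.
    alpha in (0, 1]: larger values track the raw commands more closely."""
    smoothed = []
    s = raw_speeds[0]  # initialize with the first command
    for r in raw_speeds:
        s = alpha * r + (1 - alpha) * s
        smoothed.append(round(s, 4))
    return smoothed

# Hypothetical command stream: slow the belt during the paretic-side phase
commands = [1.2, 1.2, 0.6, 0.6, 0.6, 1.2]
smoothed = smooth_speeds(commands)
```

With these illustrative values, the smoothed speeds ramp down gradually instead of dropping from 1.2 to 0.6 in a single step.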
Affiliation(s)
- Shengting Cao: Electrical and Computer Engineering, University of Alabama, Tuscaloosa, AL 35487, USA
- Mansoo Ko: University of Texas Medical Branch, Galveston, TX 77555-0128, USA
- Chih-Ying Li: University of Texas Medical Branch, Galveston, TX 77555-0128, USA
- David Brown: University of Texas Medical Branch, Galveston, TX 77555-0128, USA
- Xuefeng Wang: Department of Advanced Manufacturing and Robotics, College of Engineering, Peking University, Beijing 100871, China
- Fei Hu: Electrical and Computer Engineering, University of Alabama, Tuscaloosa, AL 35487, USA
- Yu Gan: Biomedical Engineering Department, Stevens Institute of Technology, Hoboken, NJ 07030, USA
5. Khaliluzzaman M, Uddin A, Deb K, Hasan MJ. Person Recognition Based on Deep Gait: A Survey. Sensors (Basel) 2023; 23:4875. PMID: 37430786; PMCID: PMC10222012; DOI: 10.3390/s23104875.
Abstract
Gait recognition, also known as walking pattern recognition, has attracted deep interest from the computer vision and biometrics community due to its potential to identify individuals from a distance, its range of potential applications, and its non-invasive nature. Since 2014, deep learning approaches have shown promising results in gait recognition by automatically extracting features. However, recognizing gait accurately remains challenging due to covariate factors, the complexity and variability of environments, and the range of human body representations. This paper provides a comprehensive overview of the advancements made in this field, along with the challenges and limitations associated with deep learning methods. It first examines the various gait datasets used in the literature and analyzes the performance of state-of-the-art techniques. A taxonomy of deep learning methods is then presented to characterize and organize the research landscape in this field; the taxonomy also highlights the basic limitations of deep learning methods in the context of gait recognition. The paper concludes by focusing on present challenges and suggesting several research directions to improve the performance of gait recognition in the future.
Affiliation(s)
- Md. Khaliluzzaman: Department of Computer Science and Engineering, Chittagong University of Engineering & Technology, Chattogram 4349, Bangladesh; Department of Computer Science and Engineering, International Islamic University Chittagong, Chattogram 4318, Bangladesh
- Ashraf Uddin: Department of Computer Science and Engineering, Chittagong University of Engineering & Technology, Chattogram 4349, Bangladesh
- Kaushik Deb: Department of Computer Science and Engineering, Chittagong University of Engineering & Technology, Chattogram 4349, Bangladesh
- Md Junayed Hasan: National Subsea Centre, Robert Gordon University, Aberdeen AB10 7AQ, UK
6. Mogan JN, Lee CP, Lim KM, Ali M, Alqahtani A. Gait-CNN-ViT: Multi-Model Gait Recognition with Convolutional Neural Networks and Vision Transformer. Sensors (Basel) 2023; 23:3809. PMID: 37112147; PMCID: PMC10143319; DOI: 10.3390/s23083809.
Abstract
Gait recognition, the task of identifying an individual based on their unique walking style, is challenging because walking style is influenced by external factors such as clothing, viewing angle, and carrying conditions. To address these challenges, this paper proposes a multi-model gait recognition system that integrates Convolutional Neural Networks (CNNs) and a Vision Transformer (ViT). The first step is to obtain a gait energy image, achieved by averaging the silhouettes over a gait cycle. The gait energy image is then fed into three different models: DenseNet-201, VGG-16, and a Vision Transformer. These models are pre-trained and fine-tuned to encode the salient gait features specific to an individual's walking style. Each model provides prediction scores for the classes based on the encoded features, and these scores are then summed and averaged to produce the final class label. The performance of this multi-model gait recognition system was evaluated on three datasets: CASIA-B, OU-ISIR dataset D, and the OU-ISIR Large Population dataset. The experimental results showed substantial improvement over existing methods on all three datasets. The integration of CNNs and the ViT allows the system to learn both pre-defined and distinct features, providing a robust solution for gait recognition even under the influence of covariates.
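The sum-and-average score-level fusion described in this abstract can be sketched in a few lines. This is a minimal illustration with made-up three-class softmax outputs, not the authors' code:

```python
import numpy as np

def fuse_scores(model_scores):
    """Sum-then-average score-level fusion: average per-model
    class-probability vectors and return the fused scores plus
    the argmax class label."""
    stacked = np.stack(model_scores)   # shape: (n_models, n_classes)
    fused = stacked.mean(axis=0)
    return fused, int(np.argmax(fused))

# Hypothetical softmax outputs from DenseNet-201, VGG-16, and the ViT
densenet = np.array([0.6, 0.3, 0.1])
vgg16    = np.array([0.5, 0.4, 0.1])
vit      = np.array([0.2, 0.7, 0.1])

fused, label = fuse_scores([densenet, vgg16, vit])
```

Note that averaging valid probability vectors yields another valid probability vector, so the fused scores remain directly comparable across classes.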
Affiliation(s)
- Jashila Nair Mogan: Faculty of Information Science and Technology, Multimedia University, Melaka 75450, Malaysia
- Chin Poo Lee: Faculty of Information Science and Technology, Multimedia University, Melaka 75450, Malaysia
- Kian Ming Lim: Faculty of Information Science and Technology, Multimedia University, Melaka 75450, Malaysia
- Mohammed Ali: Department of Computer Science, King Khalid University, Abha 61421, Saudi Arabia
- Ali Alqahtani: Department of Computer Science, King Khalid University, Abha 61421, Saudi Arabia; Center for Artificial Intelligence (CAI), King Khalid University, Abha 61421, Saudi Arabia
7. Imoto D, Hirabayashi M, Honma M, Kurosawa K. Pre-set estimation-based in-silico silhouette-based methodology for improving the robustness to viewing direction difference for assisting forensic gait analysis. J Forensic Sci 2023; 68:470-487. PMID: 36762778; DOI: 10.1111/1556-4029.15214.
Abstract
Forensic gait analysis is used to visually and quantitatively analyze information regarding the appearance and style of walking that can be presented as evidence in court. The demand for analyzing CCTV pedestrian footage in video surveillance has been increasing. The accuracy of semiautomatic silhouette-based analysis, often used in forensic science, depends strongly on differences in viewing direction, a challenging issue that has yet to be resolved for real-case applications. Currently, differing viewing directions in comparison footage significantly decrease the accuracy of same-person analysis with the silhouette-based method often used in the Japanese forensic science domain. A calibration-based method was previously proposed to resolve this problem, but it requires an elaborate measurement procedure at the camera installation site for accurate analysis. In this study, we propose a novel in-silico silhouette-based analysis method that expands the number of viewing-direction presets from the 24 used in the previous method to 900. Several software tools have been developed to ensure that all procedures can be executed on a computer. The experimental results confirm that the accuracy of the proposed method is comparable to that of the calibration-based method. Furthermore, practical comparison results from actual consultations confirmed the effectiveness of the proposed method under existing viewing-direction differences. We therefore anticipate that the proposed method will improve analysis accuracy in real cases and serve as a substitute for the previous method.
Affiliation(s)
- Daisuke Imoto: Artificial Intelligence Section, Second Department of Forensic Science, National Research Institute of Police Science, Kashiwa, Japan
- Manato Hirabayashi: Artificial Intelligence Section, Second Department of Forensic Science, National Research Institute of Police Science, Kashiwa, Japan
- Masakatsu Honma: Artificial Intelligence Section, Second Department of Forensic Science, National Research Institute of Police Science, Kashiwa, Japan
- Kenji Kurosawa: Second Department of Forensic Science, National Research Institute of Police Science, Kashiwa, Japan
8. Song C, Huang Y, Wang W, Wang L. CASIA-E: A Large Comprehensive Dataset for Gait Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 2023; 45:2801-2815. PMID: 35704543; DOI: 10.1109/tpami.2022.3183288.
Abstract
Gait recognition plays a special role in visual surveillance due to its unique advantages, e.g., long-distance, cross-view, and non-cooperative recognition. However, it has not yet been widely applied. One reason is the lack of a truly big dataset captured in practical outdoor scenarios. Here, "big" at least means: (1) a huge number of gait videos; (2) sufficient subjects; (3) rich attributes; and (4) spatial and temporal variations. Moreover, most existing large-scale gait datasets are collected indoors, where there are few challenges from real scenes, such as dynamic and complex background clutter, illumination variations, and vertical view variations. In this article, we introduce a newly built big outdoor gait dataset, called CASIA-E. It contains more than one thousand people distributed over nearly one million videos. Each person is captured from 26 view angles with varied appearances caused by changes in bag carrying, dressing, and walking style. The videos are captured over five months and across three kinds of outdoor scenes. Soft biometric features are also recorded for all subjects, including age, gender, height, weight, and nationality. Besides, we report an experimental benchmark and examine some meaningful problems that have not been well studied previously, e.g., the influence of million-level training videos, vertical view angles, walking styles, and the thermal infrared modality. We believe that such a big outdoor dataset and the experimental benchmark will promote the development of gait recognition in both academic research and industrial applications.
9. Parashar A, Parashar A, Ding W, Shekhawat RS, Rida I. Deep learning pipelines for recognition of gait biometrics with covariates: a comprehensive review. Artif Intell Rev 2023. DOI: 10.1007/s10462-022-10365-4.
10. Biometrics recognition using deep learning: a survey. Artif Intell Rev 2023. DOI: 10.1007/s10462-022-10237-x.
11. Walking Speed Classification from Marker-Free Video Images in Two-Dimension Using Optimum Data and a Deep Learning Method. Bioengineering (Basel) 2022; 9:715. DOI: 10.3390/bioengineering9110715.
Abstract
Walking speed is considered by caregivers and clinicians to be a reliable assessment tool for the movement-related functional activities of an individual (i.e., patients and healthy controls). Traditional video-surveillance gait monitoring in clinics and aged-care homes may employ modern artificial intelligence techniques to use walking speed as a screening indicator of various physical outcomes or accidents. Specifically, ratio-based body measurements of walking individuals are extracted from marker-free, two-dimensional video images to create a walk pattern suitable for walking-speed classification using deep learning-based artificial intelligence techniques. However, developing a successful and highly predictive deep learning architecture depends on the optimal use of the extracted data, because redundant data may overburden the architecture and hinder classification performance. The aim of this study was to investigate the optimal combination of ratio-based body measurements for defining and predicting a walk pattern in terms of speed with high classification accuracy using a deep learning-based walking-speed classification model. To this end, the performance of different combinations of five ratio-based body measurements was evaluated through a correlation analysis and a deep learning-based walking-speed classification test. The results show that a combination of three ratio-based body measurements can define and predict a walk pattern in terms of speed with classification accuracies greater than 92% using a bidirectional long short-term memory deep learning method.
12. Mogan JN, Lee CP, Lim KM, Muthu KS. Gait-ViT: Gait Recognition with Vision Transformer. Sensors (Basel) 2022; 22:7362. PMID: 36236462; PMCID: PMC9572525; DOI: 10.3390/s22197362.
Abstract
Identifying an individual based on their physical or behavioral characteristics is known as biometric recognition. Gait is one of the most reliable biometrics due to its advantages, such as being perceivable at a long distance and difficult to replicate. Existing works mostly leverage Convolutional Neural Networks for gait recognition. Convolutional Neural Networks perform well in image recognition tasks; however, they lack an attention mechanism to emphasize the significant regions of the image. The attention mechanism encodes information in the image patches, which helps the model learn the substantial features in specific regions. In light of this, this work employs the Vision Transformer (ViT) with an attention mechanism for gait recognition, referred to as Gait-ViT. In the proposed Gait-ViT, the gait energy image is first obtained by averaging the series of images over the gait cycle. The image is then split into patches and transformed into a sequence by flattening and patch embedding. Position embeddings are applied along with the patch embeddings to restore the positional information of the patches. Subsequently, the sequence of vectors is fed to the Transformer encoder to produce the final gait representation. For classification, the first element of the sequence is sent to a multi-layer perceptron to predict the class label. The proposed method obtained 99.93% on CASIA-B, 100% on OU-ISIR D, and 99.51% on OU-LP, demonstrating the ability of the Vision Transformer model to outperform state-of-the-art methods.
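The patch-splitting and embedding pipeline summarized above can be sketched in numpy. The sizes here are assumptions for illustration (a 64x64 gait energy image, 8x8 patches, embedding dimension 32, random weights); the actual Gait-ViT hyperparameters are not stated in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy gait energy image: mean of binary silhouettes over one gait cycle
silhouettes = rng.integers(0, 2, size=(30, 64, 64))  # 30 frames
gei = silhouettes.mean(axis=0)                       # pixel values in [0, 1]

# Split the GEI into non-overlapping 8x8 patches and flatten each patch
p = 8
patches = (gei.reshape(64 // p, p, 64 // p, p)
              .swapaxes(1, 2)
              .reshape(-1, p * p))                   # (64 patches, 64 pixels)

# Linear patch embedding, a prepended class token, and position embeddings
d = 32
W_embed = rng.normal(size=(p * p, d))
tokens = patches @ W_embed                           # (64, d)
cls_token = rng.normal(size=(1, d))
pos_embed = rng.normal(size=(65, d))
seq = np.vstack([cls_token, tokens]) + pos_embed     # Transformer encoder input
```

After the encoder, `seq[0]` (the class-token position, the "first element of the sequence") is what would be passed to the multi-layer perceptron head.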
13. Cosma A, Radoi E. Learning Gait Representations with Noisy Multi-Task Learning. Sensors (Basel) 2022; 22:6803. PMID: 36146152; PMCID: PMC9506362; DOI: 10.3390/s22186803.
Abstract
Gait analysis has proven to be a reliable way to perform person identification without relying on subject cooperation. Walking is a biometric that does not change significantly over short periods of time and can be regarded as unique to each person. So far, the study of gait analysis has focused mostly on identification and demographics estimation, without considering many of the pedestrian attributes that appearance-based methods rely on. In this work, alongside gait-based person identification, we explore pedestrian attribute identification solely from movement patterns. We propose DenseGait, the largest dataset for pretraining gait analysis systems, containing 217K anonymized tracklets annotated automatically with 42 appearance attributes. DenseGait is constructed by automatically processing video streams and offers the full array of gait covariates present in the real world. We make the dataset available to the research community. Additionally, we propose GaitFormer, a transformer-based model that, after pretraining in a multi-task fashion on DenseGait, achieves 92.5% accuracy on CASIA-B and 85.33% on FVG without utilizing any manually annotated data. This corresponds to accuracy increases of +14.2% and +9.67% compared to similar methods. Moreover, GaitFormer is able to accurately identify gender information and a multitude of appearance attributes utilizing only movement patterns. The code to reproduce the experiments is made publicly available.
14. Parashar A, Shekhawat RS, Ding W, Rida I. Intra-class variations with deep learning-based gait analysis: A comprehensive survey of covariates and methods. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.07.002.
15.
Abstract
This study aimed to develop a vision-based gait recognition system for person identification. Gait is a soft biometric trait recognizable from low-resolution surveillance videos in which the face and other hard biometrics are not even extractable. Gait is a cyclic pattern of human body locomotion that consists of two sequential phases: swing and stance. The gait features of the complete gait cycle, referred to as the gait signature, can be used for person identification. The proposed work utilizes gait dynamics for gait feature extraction. For this purpose, spatio-temporal power spectral gait features are computed from gait dynamics captured through sub-pixel motion estimation, and they are less affected by the subject's appearance. These features are fed to a quadratic support vector machine classifier for gait recognition aimed at person identification. The representation preserves the spatio-temporal gait features and is well suited to a quadratic support vector machine classifier across different views and appearances. We evaluated the gait features and the support vector machine classifier on a locally collected gait dataset that captures the effect of view variance in high-scene-depth videos. The proposed gait recognition technique achieves significant accuracy across all appearances and views.
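A quadratic support vector machine is an SVM with a degree-2 polynomial kernel, so its decision function operates on pairwise kernel values rather than raw features. The sketch below shows only the kernel computation; the feature vectors are made-up stand-ins, not the paper's actual spatio-temporal power spectral features:

```python
import numpy as np

def quadratic_kernel(X, Y, c=1.0):
    """Degree-2 polynomial kernel k(x, y) = (x . y + c)^2, the kernel
    underlying a quadratic SVM classifier."""
    return (X @ Y.T + c) ** 2

# Hypothetical gait feature vectors: two probe samples, three gallery samples
probe = np.array([[0.2, 0.5],
                  [0.1, 0.9]])
gallery = np.array([[0.2, 0.5],
                    [0.8, 0.1],
                    [0.0, 1.0]])
K = quadratic_kernel(probe, gallery)   # (2, 3) kernel matrix
```

An SVM trained with this kernel separates classes with a quadratic surface in the original feature space while only ever computing dot products.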
16. Advances in Vision-Based Gait Recognition: From Handcrafted to Deep Learning. Sensors (Basel) 2022; 22:5682. PMID: 35957239; PMCID: PMC9371146; DOI: 10.3390/s22155682.
Abstract
Identifying people by their behavioral biometrics has attracted many researchers' attention in the biometrics industry. Gait is a behavioral trait whereby an individual is identified based on their walking style. Over the years, gait recognition has been performed using handcrafted approaches. However, the effects of several covariates have compromised the performance of these approaches. Deep learning is an emerging approach in the biometrics field with the capability to tackle the covariates and produce highly accurate results. In this paper, a comprehensive overview of existing deep learning-based gait recognition approaches is presented, along with a summary of their performance on different gait datasets.
17. VGG16-MLP: Gait Recognition with Fine-Tuned VGG-16 and Multilayer Perceptron. Applied Sciences (Basel) 2022. DOI: 10.3390/app12157639.
Abstract
Gait is the pattern of a person's walking, and the body movements involved while walking make each person's gait unique. Despite this uniqueness, the gait recognition process suffers under various factors, namely viewing angle, carrying condition, and clothing. In this paper, a pre-trained VGG-16 model is combined with a multilayer perceptron to enhance performance under these covariates. First, the gait energy image is obtained by averaging the silhouettes over a gait cycle. Transfer learning and fine-tuning techniques are then applied to the pre-trained VGG-16 model to learn the gait features of the obtained gait energy image. Subsequently, a multilayer perceptron is utilized to determine the relationship between the gait features and the corresponding subject. Lastly, the classification layer identifies the corresponding subject. Experiments are conducted to evaluate the performance of the proposed method on the CASIA-B dataset, the OU-ISIR dataset D, and the OU-ISIR Large Population dataset. The comparison shows that the proposed method outperforms state-of-the-art methods on all the datasets.
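The gait-energy-image step described above, averaging aligned silhouettes over a gait cycle, is simple to sketch. The toy 2x2 "silhouette" frames below are illustrative stand-ins for real segmented frames:

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Average a stack of aligned binary silhouettes with shape
    (frames, H, W) over one gait cycle to obtain the gait energy
    image (GEI): pixels covered in more frames come out brighter."""
    stack = np.asarray(silhouettes, dtype=float)
    return stack.mean(axis=0)

# Two toy frames standing in for a full gait cycle of silhouettes
frames = [np.array([[1, 0],
                    [1, 1]]),
          np.array([[1, 0],
                    [0, 1]])]
gei = gait_energy_image(frames)
```

The resulting GEI is a single grayscale image per cycle, which is why it can be fed directly to image models such as the VGG-16 used here.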
18. Gao Z, Wu J, Wu T, Huang R, Zhang A, Zhao J. Robust clothing-independent gait recognition using hybrid part-based gait features. PeerJ Comput Sci 2022; 8:e996. PMID: 35721406; PMCID: PMC9202625; DOI: 10.7717/peerj-cs.996.
Abstract
Recently, gait has gathered extensive interest for its irreplaceable role in applications. Although various methods have been proposed for gait recognition, most can attain excellent recognition performance only when the probe and gallery gaits are captured under similar conditions. Once external factors (e.g., clothing variations) influence people's gait and change human appearance, a significant performance degradation occurs. Hence, in this article, a robust hybrid part-based spatio-temporal feature learning method is proposed for gait recognition to handle this cloth-changing problem. First, human bodies are segmented into affected and unaffected (or less affected) parts based on anatomical studies. Then, a well-designed network is proposed to formulate the required hybrid features from the unaffected or less affected body parts. This network contains three sub-networks that generate features independently; each sub-network emphasizes an individual aspect of gait, so an effective hybrid gait feature can be created through their concatenation. In addition, since temporal information can complement and enhance recognition performance, one sub-network is specifically proposed to establish the temporal relationship between consecutive short-range frames. Also, since local features are more discriminative than global features in gait recognition, another sub-network is proposed to generate features of refined local differences. The effectiveness of the proposed method has been evaluated by experiments on the CASIA Gait Dataset B and the OU-ISIR Treadmill Gait Dataset B. These experiments illustrate that, compared with other gait recognition methods, the proposed method achieves prominent results on this cloth-changing gait recognition problem.
Affiliation(s)
- Zhipeng Gao: Xiamen Meiya Pico Information Co., Ltd., Xiamen, Fujian, China
- Junyi Wu: Xiamen Meiya Pico Information Co., Ltd., Xiamen, Fujian, China
- Tingting Wu: Xiamen Meiya Pico Information Co., Ltd., Xiamen, Fujian, China
- Renyu Huang: Xiamen Meiya Pico Information Co., Ltd., Xiamen, Fujian, China
- Anguo Zhang: College of Mathematics and Data Science, Minjiang University, Fuzhou, China; College of Physics and Information Engineering, Fuzhou University, Fuzhou, China
- Jianqiang Zhao: Xiamen Meiya Pico Information Co., Ltd., Xiamen, Fujian, China
|
19
|
Sethi D, Bharti S, Prakash C. A comprehensive survey on gait analysis: History, parameters, approaches, pose estimation, and future work. Artif Intell Med 2022; 129:102314. [DOI: 10.1016/j.artmed.2022.102314]
|
20
|
A multi-modal dataset for gait recognition under occlusion. Appl Intell 2022. [DOI: 10.1007/s10489-022-03474-8]
|
21
|
Zhang Z, Tran L, Liu F, Liu X. On Learning Disentangled Representations for Gait Recognition. IEEE Trans Pattern Anal Mach Intell 2022; 44:345-360. [PMID: 32750777 DOI: 10.1109/tpami.2020.2998790]
Abstract
Gait, the walking pattern of individuals, is one of the important biometric modalities. Most existing gait recognition methods take silhouettes or articulated body models as gait features. These methods suffer from degraded recognition performance when handling confounding variables such as clothing, carrying and viewing angle. To remedy this issue, we propose a novel autoencoder framework, GaitNet, to explicitly disentangle appearance, canonical and pose features from RGB imagery. An LSTM integrates pose features over time as a dynamic gait feature, while canonical features are averaged as a static gait feature; both are utilized as classification features. In addition, we collect a Frontal-View Gait (FVG) dataset to focus on gait recognition from frontal-view walking, which is a challenging problem since it contains minimal gait cues compared to other views. FVG also includes other important variations, e.g., walking speed, carrying, and clothing. With extensive experiments on the CASIA-B, USF, and FVG datasets, our method demonstrates superior performance to the state of the art quantitatively, the ability of feature disentanglement qualitatively, and promising computational efficiency. We further compare GaitNet with state-of-the-art face recognition to demonstrate the advantages of gait biometrics under certain scenarios, e.g., long distance/low resolution and cross viewing angles. Source code is available at http://cvlab.cse.msu.edu/project-gaitnet.html.
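GaitNet's aggregation of disentangled features (average the per-frame canonical features into a static descriptor, integrate the per-frame pose features over time into a dynamic one) can be illustrated schematically; the exponential recurrence below is only a stand-in for the paper's LSTM, and all shapes are invented:

```python
import numpy as np

def aggregate_gait(canonical, pose, alpha=0.5):
    """Toy aggregation in the spirit of GaitNet: `canonical` and `pose` are
    per-frame feature matrices (frames x dims). The static part is a temporal
    mean; the dynamic part integrates pose over time (an exponential
    recurrence standing in for the LSTM)."""
    static = canonical.mean(axis=0)
    h = np.zeros(pose.shape[1])
    for p in pose:                       # simple temporal integration
        h = alpha * h + (1 - alpha) * p
    return np.concatenate([static, h])   # joint classification feature

rng = np.random.default_rng(1)
feat = aggregate_gait(rng.random((30, 8)), rng.random((30, 4)))
```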
|
22
|
|
23
|
Iwashita Y, Sakano H, Kurazume R, Stoica A. Speed invariant gait recognition: the enhanced mutual subspace method. PLoS One 2021; 16:e0255927. [PMID: 34379692 PMCID: PMC8357177 DOI: 10.1371/journal.pone.0255927]
Abstract
This paper introduces an enhanced MSM (Mutual Subspace Method) methodology for gait recognition that provides robustness to variations in walking speed. The enhanced MSM (eMSM) methodology expands and adapts the MSM, commonly used for face recognition (a static/physiological biometric), to gait recognition (a dynamic/behavioral biometric). To address the loss of accuracy during calculation of the covariance matrix in the PCA step of MSM, we use a 2D-PCA-based mutual subspace. Furthermore, to enhance the discrimination capability, we rotate images over a number of angles, which enables us to extract richer gait features that are then fused by a boosting method. The eMSM methodology is evaluated on existing datasets that provide variable walking speed, i.e., the CASIA-C and OU-ISIR gait databases, and is shown to outperform state-of-the-art methods. While the enhancement to MSM discussed in this paper uses combinations of 2D-PCA, rotation, and boosting, other combinations of operations may also be advantageous.
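The mutual subspace comparison underlying MSM can be reproduced with standard linear algebra: each gait sequence is summarized by a low-dimensional principal subspace, and similarity is measured through the principal angles between two such subspaces. This is a minimal sketch of plain MSM; the paper's enhancements (2D-PCA, rotation, boosting) are omitted:

```python
import numpy as np

def subspace(X, k):
    """Orthonormal basis of the top-k principal subspace of column data X
    (rows = feature dimensions, columns = frames)."""
    U, _, _ = np.linalg.svd(X - X.mean(axis=1, keepdims=True),
                            full_matrices=False)
    return U[:, :k]

def msm_similarity(X, Y, k=3):
    """Mutual-subspace similarity: mean squared cosine of the principal
    angles between the two k-dimensional subspaces (1.0 = identical)."""
    s = np.linalg.svd(subspace(X, k).T @ subspace(Y, k), compute_uv=False)
    return float(np.mean(s ** 2))

rng = np.random.default_rng(0)
base = rng.random((20, 3)) @ rng.random((3, 40))   # shared 3-D gait structure
seq1 = base + 0.01 * rng.random((20, 40))          # same subject, slight noise
seq2 = base + 0.01 * rng.random((20, 40))
other = rng.random((20, 40))                       # an unrelated sequence
same = msm_similarity(seq1, seq2)
diff = msm_similarity(seq1, other)
```

Two noisy copies of the same underlying sequence score near 1, while an unrelated sequence scores clearly lower.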
Affiliation(s)
- Yumi Iwashita: Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, United States of America; Kyushu University, Fukuoka, Japan
- Adrian Stoica: Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, United States of America
|
24
|
Sikandar T, Rabbi MF, Ghazali KH, Altwijri O, Alqahtani M, Almijalli M, Altayyar S, Ahamed NU. Using a Deep Learning Method and Data from Two-Dimensional (2D) Marker-Less Video-Based Images for Walking Speed Classification. Sensors 2021; 21:2836. [PMID: 33920617 PMCID: PMC8072769 DOI: 10.3390/s21082836]
Abstract
Human body measurement data related to walking can characterize functional movement and thereby become an important tool for health assessment. Single-camera-captured two-dimensional (2D) image sequences of marker-less walking individuals might be a simple approach for estimating human body measurement data which could be used in walking speed-related health assessment. Conventional body measurement data of 2D images are dependent on body-worn garments (used as segmental markers) and are susceptible to changes in the distance between the participant and camera in indoor and outdoor settings. In this study, we propose five ratio-based body measurement data that can be extracted from 2D images and can be used to classify three walking speeds (i.e., slow, normal, and fast) using a deep learning-based bidirectional long short-term memory classification model. The results showed that average classification accuracies of 88.08% and 79.18% could be achieved in indoor and outdoor environments, respectively. Additionally, the proposed ratio-based body measurement data are independent of body-worn garments and not susceptible to changes in the distance between the walking individual and camera. As a simple but efficient technique, the proposed walking speed classification has great potential to be employed in clinics and aged care homes.
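The scale-invariance argument for ratio-based measurements is easy to verify numerically: any ratio of body dimensions measured in pixels cancels the pixel scale, so the value is unaffected by subject-to-camera distance. The specific ratios below are hypothetical illustrations, not the paper's five features:

```python
import numpy as np

def ratio_features(silhouette):
    """Hypothetical ratio-style measurements from one binary silhouette:
    aspect ratio, upper/lower body mass ratio, and bounding-box fill.
    All three are dimensionless, hence scale-invariant."""
    ys, xs = np.nonzero(silhouette)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    mid = ys.min() + h // 2
    upper = silhouette[ys.min():mid].sum()
    lower = silhouette[mid:ys.max() + 1].sum()
    return np.array([w / h, upper / (lower + 1e-9),
                     silhouette.sum() / (h * w)])

sil = np.zeros((40, 20)); sil[5:35, 6:14] = 1    # a crude "body"
big = np.kron(sil, np.ones((2, 2)))              # same body at twice the scale
```

`ratio_features(sil)` and `ratio_features(big)` agree despite the 2x change in apparent size.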
Affiliation(s)
- Tasriva Sikandar: Faculty of Electrical and Electronics Engineering, Universiti Malaysia Pahang, Pekan 26600, Malaysia
- Mohammad F. Rabbi: School of Allied Health Sciences, Griffith University, Gold Coast, QLD 4222, Australia
- Kamarul H. Ghazali: Faculty of Electrical and Electronics Engineering, Universiti Malaysia Pahang, Pekan 26600, Malaysia
- Omar Altwijri: Biomedical Technology Department, College of Applied Medical Sciences, King Saud University, Riyadh 11451, Saudi Arabia
- Mahdi Alqahtani: Biomedical Technology Department, College of Applied Medical Sciences, King Saud University, Riyadh 11451, Saudi Arabia
- Mohammed Almijalli: Biomedical Technology Department, College of Applied Medical Sciences, King Saud University, Riyadh 11451, Saudi Arabia
- Saleh Altayyar: Biomedical Technology Department, College of Applied Medical Sciences, King Saud University, Riyadh 11451, Saudi Arabia
- Nizam U. Ahamed: Neuromuscular Research Laboratory/Warrior Human Performance Research Center, Department of Sports Medicine and Nutrition, University of Pittsburgh, Pittsburgh, PA 15203, USA (corresponding author)
|
25
|
Fakoorian S, Roshanineshat A, Khalaf P, Azimi V, Simon D, Hardin E. An Extensive Set of Kinematic and Kinetic Data for Individuals with Intact Limbs and Transfemoral Prosthesis Users. Appl Bionics Biomech 2020; 2020:8864854. [PMID: 33224270 PMCID: PMC7671801 DOI: 10.1155/2020/8864854]
Abstract
This paper introduces an extensive human motion data set for typical activities of daily living. These data are crucial for the design and control of prosthetic devices for transfemoral prosthesis users. This data set was collected from seven individuals, including five individuals with intact limbs and two transfemoral prosthesis users. These data include the following types of movements: (1) walking at three different speeds; (2) walking up and down a 5-degree ramp; (3) stepping up and down; (4) sitting down and standing up. We provide full-body marker trajectories and ground reaction forces (GRFs) as well as joint angles, joint velocities, joint torques, and joint powers. This data set is publicly available at the website referenced in this paper. Data from flexion and extension of the hip, knee, and ankle are presented in this paper. However, the data accompanying this paper (available on the internet) include 46 distinct measurements and can be useful for validating or generating mathematical models to simulate the gait of both transfemoral prosthesis users and individuals with intact legs.
Affiliation(s)
- Seyed Fakoorian: Department of Electrical Engineering and Computer Science, Cleveland State University, Cleveland, Ohio 44115, USA
- Arash Roshanineshat: Department of Electrical Engineering and Computer Engineering, University of Arizona, Tucson, AZ 87721, USA
- Poya Khalaf: Department of Mechanical Engineering, Cleveland State University, Cleveland, Ohio 44115, USA
- Vahid Azimi: Department of Electrical Engineering and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30313, USA
- Dan Simon: Department of Electrical Engineering and Computer Science, Cleveland State University, Cleveland, Ohio 44115, USA
- Elizabeth Hardin: Motion Study Laboratory, Cleveland VA Medical Center, Cleveland, Ohio 44106, USA
|
26
|
Kusakunniran W. Review of gait recognition approaches and their challenges on view changes. IET Biometrics 2020. [DOI: 10.1049/iet-bmt.2020.0103]
Affiliation(s)
- Worapan Kusakunniran: Faculty of Information and Communication Technology, Mahidol University, 999 Phuttamonthon 4 Road, Salaya, Nakhon Pathom 73170, Thailand
|
27
|
Affiliation(s)
- Imad Rida: Department of Computer Science and Engineering, Qatar University, Doha, Qatar
- Noor Almaadeed: Department of Computer Science and Engineering, Qatar University, Doha, Qatar
- Somaya Almaadeed: Department of Computer Science and Engineering, Qatar University, Doha, Qatar
|
28
|
Santuz A, Ekizos A, Janshen L, Mersmann F, Bohm S, Baltzopoulos V, Arampatzis A. Modular Control of Human Movement During Running: An Open Access Data Set. Front Physiol 2018; 9:1509. [PMID: 30420812 PMCID: PMC6216155 DOI: 10.3389/fphys.2018.01509]
Abstract
The human body is an outstandingly complex machine, including around 1000 muscles and joints acting synergistically. Yet the coordination of the enormous number of degrees of freedom needed for movement is mastered by our one brain and spinal cord. The idea that some synergistic neural components of movement exist was already suggested at the beginning of the 20th century. Since then, it has been widely accepted that the central nervous system might simplify the production of movement by avoiding the control of each muscle individually. Instead, it might control muscles in common patterns that have been called muscle synergies. Only with the advent of modern computational methods and hardware has it become possible to numerically extract synergies from electromyography (EMG) signals. However, typical experimental setups do not include a large number of individuals, with common sample sizes of 5 to 20 participants. With this study, we make publicly available a set of EMG activities recorded during treadmill running from the right lower limb of 135 healthy and young adults (78 males and 57 females). Moreover, we include in this open-access data set the code used to extract synergies from EMG data using non-negative matrix factorization (NMF) and the related outcomes. Muscle synergies, containing the time-invariant muscle weightings (motor modules) and the time-dependent activation coefficients (motor primitives), were extracted from 13 ipsilateral EMG activities using NMF. Four synergies were enough to describe as many gait cycle phases during running: weight acceptance, propulsion, early swing, and late swing. We foresee many possible applications of our data, which we can summarize in three key points. First, it can be a prime source for broadening the representation of human motor control due to the large sample size. Second, it could serve as a benchmark for scientists from multiple disciplines such as musculoskeletal modeling, robotics, clinical neuroscience, sport science, etc. Third, the data set could be used both to train students and to support established scientists in perfecting current muscle synergy extraction methods. All the data are available at Zenodo (doi: 10.5281/zenodo.1254380).
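Synergy extraction with NMF, as used for this data set, can be sketched with the classic Lee-Seung multiplicative updates (a minimal implementation on synthetic data, not the authors' released code):

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    """Lee-Seung multiplicative updates minimising ||V - W @ H||_F.
    For EMG, rows of V are muscles and columns are time samples; W holds the
    motor modules (muscle weightings), H the motor primitives."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k)) + 1e-9
    H = rng.random((k, V.shape[1])) + 1e-9
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update primitives
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update modules
    return W, H

# Synthetic "EMG": 13 muscles, 200 samples, generated from 4 true synergies.
rng = np.random.default_rng(1)
V = rng.random((13, 4)) @ rng.random((4, 200))
W, H = nmf(V, 4)
err = float(np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```

Because the synthetic matrix is exactly rank 4 and non-negative, the relative reconstruction error drops close to zero; non-negativity of `W` and `H` is preserved by construction.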
Affiliation(s)
- Alessandro Santuz: Department of Training and Movement Sciences, Humboldt-Universität zu Berlin, Berlin, Germany; Berlin School of Movement Science, Humboldt-Universität zu Berlin, Berlin, Germany
- Antonis Ekizos: Department of Training and Movement Sciences, Humboldt-Universität zu Berlin, Berlin, Germany; Berlin School of Movement Science, Humboldt-Universität zu Berlin, Berlin, Germany
- Lars Janshen: Department of Training and Movement Sciences, Humboldt-Universität zu Berlin, Berlin, Germany
- Falk Mersmann: Department of Training and Movement Sciences, Humboldt-Universität zu Berlin, Berlin, Germany; Berlin School of Movement Science, Humboldt-Universität zu Berlin, Berlin, Germany
- Sebastian Bohm: Department of Training and Movement Sciences, Humboldt-Universität zu Berlin, Berlin, Germany; Berlin School of Movement Science, Humboldt-Universität zu Berlin, Berlin, Germany
- Vasilios Baltzopoulos: Research Institute for Sport and Exercise Sciences, Liverpool John Moores University, Liverpool, United Kingdom
- Adamantios Arampatzis: Department of Training and Movement Sciences, Humboldt-Universität zu Berlin, Berlin, Germany; Berlin School of Movement Science, Humboldt-Universität zu Berlin, Berlin, Germany
|
29
|
Gait Energy Response Functions for Gait Recognition against Various Clothing and Carrying Status. Applied Sciences (Basel) 2018. [DOI: 10.3390/app8081380]
Abstract
Silhouette-based gait representations are widely used in the current gait recognition community due to their effectiveness and efficiency, but they are subject to changes in covariate conditions such as clothing and carrying status. Therefore, we propose a gait energy response function (GERF) that transforms a gait energy (i.e., an intensity value) of a silhouette-based gait feature into a value more suitable for handling these covariate conditions. Additionally, since the discrimination capability of gait energies, as well as the degree to which they are affected by the covariate conditions, differs among body parts, we extend the GERF framework to spatially dependent GERF (SD-GERF) which accounts for spatial dependence. Moreover, the proposed GERFs are represented as a vector in the transformation lookup table and are optimized through an efficient generalized eigenvalue problem in a closed form. Finally, two post-processing techniques, Gabor filtering and spatial metric learning, are employed for the transformed gait features to boost the accuracy. Experimental results with three publicly available datasets including clothing and carrying status variations show the state-of-the-art performance of the proposed method compared with other state-of-the-art methods.
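Applying a GERF stored as a lookup table is a one-line per-pixel transform. The sketch below uses a hand-picked gamma curve purely for illustration, whereas the paper learns the table from data via a generalized eigenvalue problem:

```python
import numpy as np

def apply_gerf(gei, lut):
    """Apply a gait energy response function, stored as a 256-entry lookup
    table over quantised intensities, to every pixel of a gait energy
    image (GEI)."""
    idx = np.clip((np.asarray(gei) * 255).astype(int), 0, 255)
    return lut[idx]

# A hypothetical GERF: a gamma curve that suppresses the low-energy band
# (the motion region most affected by clothing and carried objects).
lut = np.linspace(0.0, 1.0, 256) ** 2
gei = np.random.default_rng(0).random((64, 44))   # toy 64x44 GEI
out = apply_gerf(gei, lut)
```

A spatially dependent variant (SD-GERF) would simply use a different table per body region.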
|
30
|
Chen X, Weng J, Lu W, Xu J. Multi-Gait Recognition Based on Attribute Discovery. IEEE Trans Pattern Anal Mach Intell 2018; 40:1697-1710. [PMID: 28708545 DOI: 10.1109/tpami.2017.2726061]
Abstract
Gait recognition is an important topic in biometrics. Current works primarily focus on recognizing a single person's walking gait. However, a person's gait will change when they walk with other people. How to recognize the gait of multiple people walking is still a challenging problem. This paper proposes an attribute discovery model in a max-margin framework to recognize a person based on gait while walking with multiple people. First, human graphlets are integrated into a tracking-by-detection method to obtain a person's complete silhouette. Then, stable and discriminative attributes are developed using a latent conditional random field (L-CRF) model. The model is trained in the latent structural support vector machine (SVM) framework, in which a new constraint is added to improve the multi-gait recognition performance. In the recognition process, the attribute set of each person is detected by inferring on the trained L-CRF model. Finally, attributes based on dense trajectories are extracted as the final gait features to complete the recognition. The experimental results demonstrate that the proposed method achieves better recognition performance than traditional gait recognition methods under the condition of multiple people walking together.
|
31
|
Aggarwal H, Vishwakarma DK. Covariate Conscious Approach for Gait Recognition Based Upon Zernike Moment Invariants. IEEE Trans Cogn Dev Syst 2018. [DOI: 10.1109/tcds.2017.2658674]
|
32
|
|
33
|
Ortells J, Herrero-Ezquerro MT, Mollineda RA. Vision-based gait impairment analysis for aided diagnosis. Med Biol Eng Comput 2018; 56:1553-1564. [PMID: 29435705 DOI: 10.1007/s11517-018-1795-2]
Abstract
Gait is a firsthand reflection of health condition. This belief has inspired recent research efforts to automate the analysis of pathological gait in order to assist physicians in decision-making. However, most of these efforts rely on gait descriptions that are difficult for humans to understand, or on sensing technologies hardly available in ambulatory services. This paper proposes a number of semantic and normalized gait features computed from a single video acquired by a low-cost sensor. Far from being conventional spatio-temporal descriptors, the features are aimed at quantifying gait impairment, such as gait asymmetry from several perspectives or falling risk. They were designed to be invariant to frame rate and image size, allowing cross-platform comparisons. Experiments were formulated in terms of two databases. A well-known general-purpose gait dataset is used to establish normal reference values for the features, while a new database, introduced in this work, provides samples under eight different walking styles: one normal and seven impaired patterns. A number of statistical studies were carried out to prove the sensitivity of the features at measuring the expected pathologies, providing enough evidence of their accuracy. Graphical abstract: at the top, a robust, semantic and easy-to-interpret feature set to describe impaired gait patterns; at the bottom, a new dataset consisting of video recordings of a number of volunteers simulating different patterns of pathological gait, where the features were statistically assessed.
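The frame-rate invariance of normalized gait features can be illustrated with a toy asymmetry measure (a hypothetical feature in the spirit of the paper, not one of its actual descriptors): a ratio of left/right step durations is dimensionless, so doubling the frame rate leaves it unchanged, while a limping pattern raises it.

```python
import numpy as np

def stride_asymmetry(left_steps, right_steps):
    """Normalised left/right asymmetry of step durations (in frames).
    The ratio cancels the time unit, so the value is frame-rate invariant."""
    l, r = np.mean(left_steps), np.mean(right_steps)
    return abs(l - r) / (l + r)   # 0 = perfectly symmetric

normal   = stride_asymmetry([30, 31, 29], [30, 30, 31])   # frames at 30 fps
impaired = stride_asymmetry([30, 31, 29], [45, 47, 44])   # limping pattern
doubled  = stride_asymmetry([60, 62, 58], [60, 60, 62])   # same gait at 60 fps
```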
Affiliation(s)
- Javier Ortells: Institute of New Imaging Technologies, Universitat Jaume I, Castellón de la Plana, Spain
- Ramón A Mollineda: Institute of New Imaging Technologies, Universitat Jaume I, Castellón de la Plana, Spain
|
34
|
Medikonda J, Madasu H, Bijaya Ketan P. Information set based features for the speed invariant gait recognition. IET Biometrics 2017. [DOI: 10.1049/iet-bmt.2016.0136]
Affiliation(s)
- Jeevan Medikonda: Department of Biomedical Engineering, Manipal Institute of Technology, Manipal University, Manipal, India
|
35
|
Connie T, Goh MKO, Teoh ABJ. A Grassmannian Approach to Address View Change Problem in Gait Recognition. IEEE Trans Cybern 2017; 47:1395-1408. [PMID: 27101628 DOI: 10.1109/tcyb.2016.2545693]
Abstract
Gait recognition appears to be a valuable asset when conventional biometrics cannot be employed. Nonetheless, recognizing humans by gait is not a trivial task due to the complex human kinematic structure and other external factors affecting human locomotion. A major challenge in gait recognition is view variation: a large difference between the views in the query and reference sets often leads to performance deterioration. In this paper, we show how to generate virtual views to compensate for the view difference between the query and reference sets, making it possible to match them using standardized views. The proposed method, which combines multiview matrix representation and a novel randomized kernel extreme learning machine, is an end-to-end solution for the view change problem under a Grassmann manifold treatment. Under the right conditions, the view-tagging problem can be eliminated. Since the recording angle and walking direction of the subject are not always available, this is particularly valuable for a practical gait recognition system. We present several working scenarios for multiview recognition that have not been considered before. Rigorous experiments have been conducted on two challenging benchmark databases containing multiview gait datasets. Experiments show that the proposed approach outperforms several state-of-the-art methods.
|
36
|
Tang J, Luo J, Tjahjadi T, Guo F. Robust Arbitrary-View Gait Recognition Based on 3D Partial Similarity Matching. IEEE Trans Image Process 2017; 26:7-22. [PMID: 28113179 DOI: 10.1109/tip.2016.2612823]
Abstract
Existing view-invariant gait recognition methods encounter difficulties due to the limited number of available gait views and varying conditions during training. This paper proposes gait partial similarity matching, which assumes that a 3D object shares common view surfaces across significantly different views. Detecting such surfaces aids the extraction of gait features from multiple views; 3D parametric body models are morphed by pose and shape deformation from a template model using 2D gait silhouettes as observations. The gait pose is estimated by a level-set energy cost function from silhouettes, including incomplete ones. Body shape deformation is achieved via a Laplacian deformation energy function associated with inpainting gait silhouettes. Partial gait silhouettes in different views are extracted by selecting gait partial region-of-interest elements and re-projected onto 2D space to construct partial gait energy images. A synthetic database with destination views and a multi-linear subspace classifier fused with majority voting are used to achieve arbitrary-view gait recognition that is robust to varying conditions. Experimental results on the CMU, CASIA B, TUM-IITKGP, AVAMVG, and KY4D datasets show the efficacy of the proposed method.
|
37
|
Recent developments in human gait research: parameters, approaches, applications, machine learning techniques, datasets and challenges. Artif Intell Rev 2016. [DOI: 10.1007/s10462-016-9514-6]
|
38
|
Das Choudhury S, Tjahjadi T. Clothing and carrying condition invariant gait recognition based on rotation forest. Pattern Recognit Lett 2016. [DOI: 10.1016/j.patrec.2016.05.009]
|
39
|
|
40
|
Nandy A, Chakraborty R, Chakraborty P. Cloth invariant gait recognition using pooled segmented statistical features. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2016.01.002]
|
41
|
|
42
|
Moore JK, Hnat SK, van den Bogert AJ. An elaborate data set on human gait and the effect of mechanical perturbations. PeerJ 2015; 3:e918. [PMID: 25945311 PMCID: PMC4419525 DOI: 10.7717/peerj.918]
Abstract
Here we share a rich gait data set collected from fifteen subjects walking at three speeds on an instrumented treadmill. Each trial consists of 120 s of normal walking and 480 s of walking while being longitudinally perturbed during each stance phase with pseudo-random fluctuations in the speed of the treadmill belt. A total of approximately 1.5 h of normal walking (>5000 gait cycles) and 6 h of perturbed walking (>20,000 gait cycles) is included in the data set. We provide full body marker trajectories and ground reaction loads in addition to a presentation of processed data that includes gait events, 2D joint angles, angular rates, and joint torques along with the open source software used for the computations. The protocol is described in detail and supported with additional elaborate meta data for each trial. This data can likely be useful for validating or generating mathematical models that are capable of simulating normal periodic gait and non-periodic, perturbed gaits.
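Joint angles such as those provided in this data set are conventionally computed from marker positions around each joint; a minimal planar sketch (marker coordinates below are invented for illustration):

```python
import numpy as np

def joint_angle(prox, joint, dist):
    """Planar angle (degrees) at `joint` between the proximal and distal
    segments, e.g. hip-knee-ankle markers giving the included knee angle."""
    u = np.asarray(prox, float) - np.asarray(joint, float)
    v = np.asarray(dist, float) - np.asarray(joint, float)
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

straight = joint_angle([0, 1.0], [0, 0.5], [0, 0.0])     # fully extended leg
flexed   = joint_angle([0, 1.0], [0, 0.5], [0.35, 0.15]) # knee flexion
```

Applying this per frame over the marker trajectories yields the angle time series; numerical differentiation then gives the angular rates reported in the data set.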
Affiliation(s)
- Jason K. Moore: Department of Mechanical Engineering, Cleveland State University, Cleveland, OH, USA
- Sandra K. Hnat: Department of Mechanical Engineering, Cleveland State University, Cleveland, OH, USA
|
43
|
Development of vision based multiview gait recognition system with MMUGait database. ScientificWorldJournal 2014; 2014:376569. [PMID: 25143972 PMCID: PMC3985318 DOI: 10.1155/2014/376569]
Abstract
This paper describes the acquisition setup and development of a new gait database, MMUGait. This database consists of 82 subjects walking under normal conditions and 19 subjects walking with 11 covariate factors, captured under two views. This paper also proposes a multiview model-based gait recognition system with a joint detection approach that performs well under different walking trajectories and covariate factors, including self-occluded or externally occluded silhouettes. In the proposed system, the process begins by enhancing the human silhouette to remove artifacts. Next, the width and height of the body are obtained. Subsequently, the joint angular trajectories are determined once the body joints are automatically detected. Lastly, the crotch height and step size of the walking subject are determined. The extracted features are smoothed by a Gaussian filter to eliminate the effect of outliers, then normalized with linear scaling, followed by feature selection prior to the classification process. The classification experiments carried out on the MMUGait database were benchmarked against the SOTON Small DB from the University of Southampton. Results showed correct classification rates above 90% for all the databases. The proposed approach is found to outperform other approaches on the SOTON Small DB in most cases.
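The Gaussian smoothing and linear scaling steps mentioned above can be sketched as follows (kernel width and the toy trajectory are illustrative, not the paper's settings):

```python
import numpy as np

def gaussian_smooth(x, sigma=2.0):
    """Smooth a 1-D joint-angle trajectory with a truncated Gaussian kernel,
    suppressing outlier spikes while preserving the trajectory length."""
    r = int(3 * sigma)
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    k /= k.sum()
    return np.convolve(np.pad(x, r, mode='edge'), k, mode='valid')

def linear_scale(x):
    """Min-max normalisation to [0, 1] before classification."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

angles = np.sin(np.linspace(0, 4 * np.pi, 100)) * 30 + 10  # toy angle series
angles[50] += 40                                           # a spurious outlier
smoothed = linear_scale(gaussian_smooth(angles))
```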
|
44
|
Guan Y, Sun Y, Li C, Tistarelli M. Human gait identification from extremely low-quality videos: an enhanced classifier ensemble method. IET Biometrics 2014. [DOI: 10.1049/iet-bmt.2013.0062]
Affiliation(s)
- Yu Guan: Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
- Yunlian Sun: Department of Sciences and Information Technology, University of Sassari, 07100 Sassari, Italy
- Chang-Tsun Li: Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
- Massimo Tistarelli: Department of Sciences and Information Technology, University of Sassari, 07100 Sassari, Italy
|
45
|
Nandy A, Chakraborty R, Chakraborty P, Nandi G. A novel Approach to Human Gait Recognition using possible Speed Invariant features. Int J Comput Intell Syst 2014. [DOI: 10.1080/18756891.2014.967004]
|
46
|
Lee CP, Tan AW, Tan SC. Gait recognition via optimally interpolated deformable contours. Pattern Recognit Lett 2013. [DOI: 10.1016/j.patrec.2013.01.013]
|
47
|
A Complexity Measure of Gait Perception. Pattern Recognition and Image Analysis 2013. [DOI: 10.1007/978-3-642-38628-2_58]
|