1
Lee P, Chen TB, Lin HY, Yeh LR, Liu CH, Chen YL. Integrating OpenPose and SVM for Quantitative Postural Analysis in Young Adults: A Temporal-Spatial Approach. Bioengineering (Basel) 2024; 11:548. PMID: 38927784; PMCID: PMC11200693; DOI: 10.3390/bioengineering11060548. Open access.
Abstract
Noninvasive tracking devices are widely used to monitor real-time posture, yet significant potential exists to enhance postural control quantification through walking videos. This study advances computational science by integrating OpenPose with a Support Vector Machine (SVM) to perform highly accurate and robust postural analysis, a substantial improvement over traditional methods, which often rely on invasive sensors. Utilizing OpenPose-based deep learning, we generated Dynamic Joint Nodes Plots (DJNP) and iso-block postural identity images for 35 young adults in controlled walking experiments. Through Temporal and Spatial Regression (TSR) models, key features were extracted for SVM classification, enabling the distinction between various walking behaviors. This approach achieved an overall accuracy of 0.990 and a Kappa index of 0.985. Cutting points for the ratio of top angles (TAR) and the ratio of bottom angles (BAR) effectively differentiated between left and right skews, with AUC values of 0.772 and 0.775, respectively. These results demonstrate the efficacy of integrating OpenPose with SVM, providing more precise, real-time analysis without invasive sensors. Future work will focus on expanding this method to a broader demographic, including individuals with gait abnormalities, to validate its effectiveness across diverse clinical conditions. Furthermore, we plan to explore alternative machine learning models, such as deep neural networks, to enhance the system's robustness and adaptability in complex dynamic environments. This research opens new avenues for clinical applications, particularly in rehabilitation and sports science, promising to revolutionize noninvasive postural analysis.
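The SVM classification step this abstract describes can be sketched in miniature. The following is an illustrative toy example, not the authors' implementation: a linear SVM trained by subgradient descent on the hinge loss, with two hypothetical ratio features standing in for TAR/BAR-style inputs and made-up data.

```python
def center(X):
    """Subtract the per-feature mean so the boundary can pass near the origin."""
    n = len(X)
    means = [sum(row[j] for row in X) / n for j in range(len(X[0]))]
    return [[v - m for v, m in zip(row, means)] for row in X]

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=300):
    """Linear SVM via subgradient descent on the L2-regularized hinge loss.
    X: centered feature vectors; y: labels in {-1, +1}."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # hinge active: push toward the correct side
                w = [wj - lr * (lam * wj - yi * xj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:           # only the regularizer shrinks w
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b

def svm_predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical 2-D gait-ratio features: -1 = left skew, +1 = right skew.
X_raw = [[0.30, 0.40], [0.32, 0.38], [0.28, 0.42],
         [0.80, 0.70], [0.78, 0.72], [0.82, 0.68]]
y = [-1, -1, -1, 1, 1, 1]
X = center(X_raw)
w, b = train_linear_svm(X, y)
preds = [svm_predict(w, b, xi) for xi in X]
```

The hinge loss only penalizes points inside the margin, which is what makes the resulting boundary depend on a few support vectors rather than on every sample.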
Affiliation(s)
- Posen Lee: Department of Occupational Therapy, College of Medicine, I-Shou University, Kaohsiung 82445, Taiwan
- Tai-Been Chen: Department of Radiological Technology, Faculty of Medical Technology, Teikyo University, Tokyo 173-8605, Japan
- Hung-Yu Lin: Department of Occupational Therapy, College of Medical and Health Science, Asia University, Taichung 41354, Taiwan
- Li-Ren Yeh: Department of Anesthesiology, E-DA Cancer Hospital, I-Shou University, Kaohsiung 82445, Taiwan
- Chin-Hsuan Liu: Department of Occupational Therapy, College of Medicine, I-Shou University, Kaohsiung 82445, Taiwan
- Yen-Lin Chen: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science, National Taipei University of Technology, Taipei 10608, Taiwan
2
Yang Z, Tsui B, Wu Z. Assessment System for Child Head Injury from Falls Based on Neural Network Learning. Sensors (Basel) 2023; 23:7896. PMID: 37765953; PMCID: PMC10534444; DOI: 10.3390/s23187896.
Abstract
Toddlers face serious injury risks if they fall from relatively high places at home during everyday activities and are not swiftly rescued, yet few effective, precise, and comprehensive solutions exist for this task. This research aims to create a real-time assessment system for head injury from falls. The framework involves two processing phases. In phase I, joint data are obtained by processing surveillance video with OpenPose; a long short-term memory (LSTM) network and a 3D transform model then integrate the spatial and temporal information of key points across frames. In phase II, the head acceleration is derived and inserted into the HIC value calculation, and a classification model is developed to assess the injury. We collected 200 RGB videos of 13- to 30-month-old toddlers playing near furniture edges and guardrails, including upside-down falls. Five hundred video clips extracted from these were split 8:2 into training and validation sets. To evaluate the framework's performance, we prepared an additional test set of 300 video clips of toddlers falling at home, provided by their parents. The experimental findings revealed a classification accuracy of 96.67%, demonstrating the feasibility of a real-time AI technique for assessing head injuries from falls through video monitoring.
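The HIC value used in phase II is the standard Head Injury Criterion: the maximum over time windows [t1, t2] of (t2 - t1) times the window-averaged acceleration (in g) raised to the power 2.5, with the window typically capped at 15 ms (HIC15). A minimal sketch of that calculation, not the authors' code:

```python
def hic(times, accel_g, max_window=0.015):
    """Head Injury Criterion: max over windows of dt * (avg accel)^2.5.
    times in seconds, accel_g in g; window capped at max_window (HIC15)."""
    n = len(times)
    # Cumulative trapezoidal integral so each window average is O(1).
    cum = [0.0]
    for i in range(1, n):
        dt = times[i] - times[i - 1]
        cum.append(cum[-1] + 0.5 * (accel_g[i] + accel_g[i - 1]) * dt)
    best = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            dt = times[j] - times[i]
            if dt <= 0 or dt > max_window:
                continue
            avg = (cum[j] - cum[i]) / dt
            best = max(best, dt * avg ** 2.5)
    return best

# Constant 100 g over the full 15 ms window -> 0.015 * 100**2.5, i.e. ~1500.
hic15 = hic([0.0, 0.005, 0.010, 0.015], [100.0, 100.0, 100.0, 100.0])
```

A HIC15 around 1000 is a commonly cited severe-injury threshold, which is why the derived head acceleration feeds a classification model rather than being reported raw.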
Affiliation(s)
- Ziqian Yang: College of Furnishings and Industrial Design, Nanjing Forestry University, Nanjing 210037, China; Jiangsu Co-Innovation Center of Efficient Processing and Utilization of Forest Resources, Nanjing 210037, China
- Baiyu Tsui: College of Furnishings and Industrial Design, Nanjing Forestry University, Nanjing 210037, China; Jiangsu Co-Innovation Center of Efficient Processing and Utilization of Forest Resources, Nanjing 210037, China
- Zhihui Wu: College of Furnishings and Industrial Design, Nanjing Forestry University, Nanjing 210037, China; Jiangsu Co-Innovation Center of Efficient Processing and Utilization of Forest Resources, Nanjing 210037, China
3
Chen J, Deng S, Wang P, Huang X, Liu Y. Lightweight Helmet Detection Algorithm Using an Improved YOLOv4. Sensors (Basel) 2023; 23:1256. PMID: 36772297; PMCID: PMC9919412; DOI: 10.3390/s23031256.
Abstract
Safety helmet wearing plays a major role in protecting workers in industry and construction, so real-time helmet-wearing detection technology is very necessary. This paper proposes an improved YOLOv4 algorithm to achieve real-time and efficient safety helmet wearing detection. The improved YOLOv4 adopts the lightweight network PP-LCNet as the backbone and uses depthwise separable convolution to decrease the model parameters. In addition, a coordinate attention mechanism module is embedded in the three output feature layers of the backbone network to enhance the feature information, and an improved feature fusion structure is designed to fuse the target information. For the loss function, we use the SIoU loss, which fuses directional information to increase detection precision. The experimental findings demonstrate that the improved YOLOv4 achieves an accuracy of 92.98%, a model size of 41.88 M, and a detection speed of 43.23 images/s. Compared with the original YOLOv4, the accuracy increases by 0.52%, the model size decreases by about 83%, and the detection speed increases by 88%. Compared with other existing methods, it performs better in terms of both precision and speed.
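The SIoU loss cited here extends the standard IoU overlap measure with direction-aware angle, distance, and shape costs. The base IoU term that all IoU-family box-regression losses build on can be sketched as follows; this is a generic illustration, not the paper's implementation:

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def iou_loss(box_pred, box_gt):
    """Plain IoU regression loss, 1 - IoU; SIoU adds angle/distance/shape costs."""
    return 1.0 - iou(box_pred, box_gt)

overlap = iou((0.0, 0.0, 2.0, 2.0), (1.0, 1.0, 3.0, 3.0))  # 1/7
```

Because plain 1 - IoU gives no gradient direction for non-overlapping boxes, variants such as SIoU add penalty terms that remain informative even when the intersection is empty.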
Affiliation(s)
- Junhua Chen: School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; Key Laboratory of Industrial Internet of Things & Networked Control, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Sihao Deng: Key Laboratory of Industrial Internet of Things & Networked Control, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Ping Wang: Key Laboratory of Industrial Internet of Things & Networked Control, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Xueda Huang: Key Laboratory of Industrial Internet of Things & Networked Control, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Yanfei Liu: School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
4
Real-Time ISR-YOLOv4 Based Small Object Detection for Safe Shop Floor in Smart Factories. Electronics 2022. DOI: 10.3390/electronics11152348.
Abstract
Wearing a hard hat can effectively improve the safety of workers on a construction site. However, workers often take off their helmets because they have a weak sense of safety or find them uncomfortable, and doing so creates a serious hazard: workers not wearing hard hats are more likely to be injured in accidents such as falls. Detecting helmet wearing is therefore an important step in the safety management of a construction site, and fast, accurate helmet detection is urgently needed. However, existing manual monitoring is labor-intensive, and mounting sensors on helmets is difficult to deploy at scale. Thus, in this paper, we propose an AI method that detects helmet wearing with satisfactory accuracy and a high detection rate. Our method is based on YOLOv4 and adds an image super-resolution (ISR) module after the input, which increases the image resolution and removes noise. Dense blocks then replace the residual blocks in the CSPDarknet53 backbone to cut unnecessary computation and reduce the number of network parameters. The neck uses a combination of SPPnet and PANnet to better capture small targets in the image. We add a foreground-background balance loss to the YOLOv4 loss function to address the imbalance between image foreground and background. Experiments performed on self-constructed datasets show that the proposed method is more effective than currently available small-target detection methods. Our model achieves an average precision of 93.3%, a 7.8% increase over the original algorithm, and takes only 3.0 ms to detect a 416 × 416 image.
5
Lee P, Chen TB, Liu CH, Wang CY, Huang GH, Lu NH. Identifying the Posture of Young Adults in Walking Videos by Using a Fusion Artificial Intelligent Method. Biosensors 2022; 12:295. PMID: 35624595; PMCID: PMC9139042; DOI: 10.3390/bios12050295.
Abstract
Many neurological and musculoskeletal disorders are associated with problems related to postural movement. Noninvasive tracking devices are used to record, analyze, measure, and detect the postural control of the body, which may indicate health problems in real time. A total of 35 young adults without any health problems were recruited for this study to participate in a walking experiment. An iso-block postural identity method was used to quantitatively analyze posture control and walking behavior. The participants who exhibited straightforward walking and skewed walking were defined as the control and experimental groups, respectively. Fusion deep learning was applied to generate dynamic joint node plots by using OpenPose-based methods, and skewness was qualitatively analyzed using convolutional neural networks. The maximum specificity and sensitivity achieved using a combination of ResNet101 and the naïve Bayes classifier were 0.84 and 0.87, respectively. The proposed approach successfully combines cell phone camera recordings, cloud storage, and fusion deep learning for posture estimation and classification.
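The naïve Bayes stage paired with ResNet101 in this abstract can be illustrated with a minimal Gaussian naïve Bayes classifier. This is a generic sketch over hypothetical 2-D feature scores, not the authors' pipeline or data:

```python
import math

def fit_gnb(X, y):
    """Gaussian naive Bayes: per-class feature means, variances, and priors."""
    stats = {}
    for c in set(y):
        rows = [xi for xi, yi in zip(X, y) if yi == c]
        n = len(rows)
        means = [sum(r[j] for r in rows) / n for j in range(len(rows[0]))]
        varis = [max(sum((r[j] - means[j]) ** 2 for r in rows) / n, 1e-9)
                 for j in range(len(rows[0]))]  # floor variance to avoid /0
        stats[c] = (means, varis, n / len(X))
    return stats

def predict_gnb(stats, x):
    """Pick the class with the highest log-posterior under independent Gaussians."""
    best_c, best_lp = None, -math.inf
    for c, (means, varis, prior) in stats.items():
        lp = math.log(prior)
        for xj, m, v in zip(x, means, varis):
            lp += -0.5 * math.log(2 * math.pi * v) - (xj - m) ** 2 / (2 * v)
        if lp > best_lp:
            best_c, best_lp = c, lp
    return best_c

# Hypothetical CNN-derived skew scores: class 0 = straight, 1 = skewed walking.
X = [[0.10, 0.20], [0.20, 0.10], [0.15, 0.15],
     [0.80, 0.90], [0.90, 0.80], [0.85, 0.85]]
y = [0, 0, 0, 1, 1, 1]
stats = fit_gnb(X, y)
preds = [predict_gnb(stats, xi) for xi in X]
```

Naïve Bayes is a natural second stage after a deep feature extractor because it fits closed-form class statistics with very little data, which suits a 35-participant study.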
Affiliation(s)
- Posen Lee: Department of Occupation Therapy, I-Shou University, No. 8, Yida Road, Jiaosu Village, Yanchao District, Kaohsiung 82445, Taiwan
- Tai-Been Chen: Department of Medical Imaging and Radiological Science, I-Shou University, No. 8, Yida Road, Jiaosu Village, Yanchao District, Kaohsiung 82445, Taiwan; Institute of Statistics, National Yang Ming Chiao Tung University, No. 1001, University Road, Hsinchu 30010, Taiwan
- Chin-Hsuan Liu: Department of Occupation Therapy, I-Shou University, No. 8, Yida Road, Jiaosu Village, Yanchao District, Kaohsiung 82445, Taiwan; Department of Occupational Therapy, Kaohsiung Municipal Kai-Syuan Psychiatric Hospital, No. 130, Kaisyuan 2nd Road, Lingya District, Kaohsiung 80276, Taiwan (corresponding author; Tel.: +886-7-6151100, ext. 7516)
- Chi-Yuan Wang: Department of Medical Imaging and Radiological Science, I-Shou University, No. 8, Yida Road, Jiaosu Village, Yanchao District, Kaohsiung 82445, Taiwan
- Guan-Hua Huang: Institute of Statistics, National Yang Ming Chiao Tung University, No. 1001, University Road, Hsinchu 30010, Taiwan
- Nan-Han Lu: Department of Medical Imaging and Radiological Science, I-Shou University, No. 8, Yida Road, Jiaosu Village, Yanchao District, Kaohsiung 82445, Taiwan; Department of Pharmacy, Tajen University, No. 20, Weixin Road, Yanpu Township, Pingtung County 90741, Taiwan; Department of Radiology, E-DA Hospital, I-Shou University, No. 1, Yida Road, Jiaosu Village, Yanchao District, Kaohsiung City 82445, Taiwan
6
Sultana A, Deb K, Dhar PK, Koshiba T. Classification of Indoor Human Fall Events Using Deep Learning. Entropy 2021; 23:328. PMID: 33802164; PMCID: PMC8000947; DOI: 10.3390/e23030328.
Abstract
Human fall identification can play a significant role in sensor-based alarm systems, helping physical therapists not only reduce post-fall effects but also save lives. Elderly people often suffer from various diseases, and falls occur frequently among them. In this regard, this paper presents an architecture to classify fall events among other natural indoor human activities. A video frame generator extracts frames from video clips. Initially, a two-dimensional convolutional neural network (2DCNN) model is proposed to extract features from the video frames. A gated recurrent unit (GRU) network then captures the temporal dependency of human movement. A binary cross-entropy loss function guides training, updating the network weights to minimize the loss. Finally, a sigmoid classifier performs the binary classification that detects human fall events. Experimental results show that the proposed model obtains an accuracy of 99%, outperforming other state-of-the-art models.
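The sigmoid classifier and binary cross-entropy loss described here can be shown with a minimal single-unit example. This is a generic illustration with hypothetical motion features, not the paper's 2DCNN-GRU pipeline:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce(p, y, eps=1e-12):
    """Binary cross-entropy for one prediction p against label y in {0, 1}."""
    p = min(max(p, eps), 1 - eps)  # clamp to keep log() finite
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def train(X, y, lr=0.5, epochs=500):
    """Per-sample gradient descent on BCE; for a sigmoid head, dL/dz = p - y."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            g = p - yi  # gradient of BCE w.r.t. the pre-activation
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

# Hypothetical 2-D motion features (e.g., peak downward velocity, posture change);
# label 1 = fall, 0 = other indoor activity.
X = [[0.90, 0.80], [0.85, 0.90], [0.10, 0.20], [0.20, 0.10]]
y = [1, 1, 0, 0]
w, b = train(X, y)
preds = [1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0
         for xi in X]
```

The combination of a sigmoid output with binary cross-entropy is the standard choice for two-class problems because the loss gradient simplifies to the prediction error p - y, which keeps training stable.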
Affiliation(s)
- Arifa Sultana: Department of Computer Science and Engineering, Chittagong University of Engineering & Technology (CUET), Chattogram 4349, Bangladesh
- Kaushik Deb: Department of Computer Science and Engineering, Chittagong University of Engineering & Technology (CUET), Chattogram 4349, Bangladesh (corresponding author)
- Pranab Kumar Dhar: Department of Computer Science and Engineering, Chittagong University of Engineering & Technology (CUET), Chattogram 4349, Bangladesh
- Takeshi Koshiba: Faculty of Education and Integrated Arts and Sciences, Waseda University, 1-6-1 Nishiwaseda, Shinjuku-ku, Tokyo 169-8050, Japan