1
Lindbeck EM, Diaz MT, Nichols JA, Harley JB. Predictions of thumb, hand, and arm muscle parameters derived using force measurements of varying complexity and neural networks. J Biomech 2023; 161:111834. [PMID: 37865980 DOI: 10.1016/j.jbiomech.2023.111834] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2023] [Revised: 09/22/2023] [Accepted: 10/09/2023] [Indexed: 10/24/2023]
Abstract
Subject-specific musculoskeletal models are a promising avenue for personalized healthcare. However, current methods for producing personalized models require dense biomechanical datasets that include expensive and time-consuming physiological measurements. For personalized models to be clinically useful, we must be able to rapidly generate models from simple, easy-to-collect data. In this context, the objective of this paper is to evaluate whether and how simple data, namely height/weight and pinch-force data, can be used to achieve model personalization via machine learning. Using simulated lateral pinch force measurements from a synthetic population of 40,000 randomly generated subjects, we train neural networks to estimate four Hill-type muscle model parameters and bone density. We compare parameter estimates to the true parameters of 10,000 additional synthetic subjects. We also generate new personalized models using the parameter estimates and perform new lateral pinch simulations to compare predicted forces using these personalized models to those generated using a baseline model. We demonstrate that increasing force measurement complexity reduces the root-mean-square error in the majority of parameter estimates. Additionally, musculoskeletal models using neural network-based parameter estimates provide up to an 80% reduction in absolute error in simulated forces when compared to a generic model. Thus, easily obtained force measurements may be suitable for personalizing models of the thumb, although extending the method to more tasks and models involving other joints likely requires additional measurements.
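The pipeline this abstract describes, training a regression network on a synthetic population and scoring it by parameter RMSE against held-out true parameters, can be sketched in miniature. Everything below (input dimensions, the invented input-to-parameter mapping, network size) is an illustrative assumption, not the paper's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic population: each subject has a few scalar inputs
# (standing in for height, weight, and pinch-force readings) and four
# targets (standing in for Hill-type muscle parameters). The mapping
# from inputs to parameters is invented for illustration.
n_subjects, n_inputs, n_params = 2000, 6, 4
X = rng.normal(size=(n_subjects, n_inputs))
true_map = rng.normal(size=(n_inputs, n_params))
y = np.tanh(X @ true_map) + 0.05 * rng.normal(size=(n_subjects, n_params))

# One-hidden-layer regression network trained by full-batch gradient descent.
n_hidden, lr = 32, 0.05
W1 = rng.normal(scale=0.1, size=(n_inputs, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_params))
for _ in range(500):
    H = np.tanh(X @ W1)                      # hidden activations
    err = H @ W2 - y                         # prediction error
    gW2 = H.T @ err / n_subjects             # mean-squared-error gradient
    gW1 = X.T @ (err @ W2.T * (1 - H**2)) / n_subjects
    W1 -= lr * gW1
    W2 -= lr * gW2

# Root-mean-square error of the parameter estimates, the paper's metric.
rmse = float(np.sqrt(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2)))
baseline = float(np.sqrt(np.mean(y ** 2)))   # error of always predicting zero
```

Even this toy network's RMSE falls below the trivial zero-predictor baseline, which is the sense in which the paper compares estimates against a generic model.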
Affiliation(s)
- Erica M Lindbeck
- University of Florida, Department of Electrical and Computer Engineering, Gainesville, FL, United States
- Maximillian T Diaz
- University of Florida, J. Crayton Pruitt Family Department of Biomedical Engineering, Gainesville, FL, United States
- Jennifer A Nichols
- University of Florida, J. Crayton Pruitt Family Department of Biomedical Engineering, Gainesville, FL, United States
- Joel B Harley
- University of Florida, Department of Electrical and Computer Engineering, Gainesville, FL, United States
2
Liu K, Wan D, Wang W, Fei C, Zhou T, Guo D, Bai L, Li Y, Ni Z, Lu J. A Time-Division Position-Sensitive Detector Image System for High-Speed Multitarget Trajectory Tracking. Adv Mater 2022; 34:e2206638. [PMID: 36114665] [DOI: 10.1002/adma.202206638]
Abstract
High-speed trajectory tracking with real-time processing capability is particularly important in the fields of pilotless automobiles, guidance systems, robotics, and filmmaking. The conventional optical approach to high-speed trajectory tracking relies on charge-coupled-device (CCD) or complementary metal-oxide-semiconductor (CMOS) image sensors, which suffer from trade-offs between resolution and frame rate, system complexity, and heavy data-analysis loads. Here, a high-speed trajectory tracking system is designed using a time-division position-sensitive detector (TD-PSD) based on a graphene-silicon Schottky heterojunction. Benefiting from the high-speed optoelectronic response and sub-micrometer positional accuracy of the TD-PSD, multitarget real-time trajectory tracking is realized, with a maximum image output frame rate of up to 62,000 frames per second. Moreover, multichannel trajectory tracking and image-distortion correction functionalities are realized by TD-PSD systems through frequency-related image preprocessing, which significantly improves the capacity of real-time information processing and image quality in complicated light environments.
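The position readout any lateral-effect PSD relies on can be stated in two lines. This is the textbook two-electrode centroid formula, not the paper's time-division scheme, and a simple 1-D detector geometry is assumed:

```python
def psd_position(i1: float, i2: float, length: float) -> float:
    """Light-spot position along a 1-D lateral-effect PSD, centred at 0.

    i1 and i2 are the photocurrents at the two end electrodes, and
    `length` is the active length of the detector; the spot's centroid
    divides the photocurrent in proportion to its distance from each end.
    """
    return 0.5 * length * (i2 - i1) / (i1 + i2)
```

A centred spot yields equal currents and position 0; a spot at the `i2` electrode yields `+length/2`.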
Affiliation(s)
- Kaiyang Liu
- School of Physics, Frontiers Science Center for Mobile Information Communication and Security, Quantum Information Research Center, Southeast University, Nanjing, 211189, China
- Dongyang Wan
- School of Physics, Frontiers Science Center for Mobile Information Communication and Security, Quantum Information Research Center, Southeast University, Nanjing, 211189, China
- Wenhui Wang
- School of Physics, Frontiers Science Center for Mobile Information Communication and Security, Quantum Information Research Center, Southeast University, Nanjing, 211189, China
- Cheng Fei
- Shandong University, Center for Optics Research and Engineering, Qingdao, Shandong, 266237, China
- Tao Zhou
- School of Physics, Frontiers Science Center for Mobile Information Communication and Security, Quantum Information Research Center, Southeast University, Nanjing, 211189, China
- Dingli Guo
- School of Physics, Frontiers Science Center for Mobile Information Communication and Security, Quantum Information Research Center, Southeast University, Nanjing, 211189, China
- Lin Bai
- School of Physics, Frontiers Science Center for Mobile Information Communication and Security, Quantum Information Research Center, Southeast University, Nanjing, 211189, China
- Yongfu Li
- Shandong University, Center for Optics Research and Engineering, Qingdao, Shandong, 266237, China
- Zhenhua Ni
- School of Physics, Frontiers Science Center for Mobile Information Communication and Security, Quantum Information Research Center, Southeast University, Nanjing, 211189, China
- Purple Mountain Laboratories, Nanjing, 211111, China
- Junpeng Lu
- School of Physics, Frontiers Science Center for Mobile Information Communication and Security, Quantum Information Research Center, Southeast University, Nanjing, 211189, China
3
Kim M, Lee S. Fusion Poser: 3D Human Pose Estimation Using Sparse IMUs and Head Trackers in Real Time. Sensors 2022; 22:4846. [PMID: 35808342] [PMCID: PMC9269439] [DOI: 10.3390/s22134846]
Abstract
Motion capture using sparse inertial sensors is an approach that avoids the occlusion and cost problems of vision-based methods; it is suitable for virtual-reality applications and works in complex environments. However, VR applications need to track the user's location in real-world space, which is hard to obtain using inertial sensors alone. In this paper, we present Fusion Poser, which combines deep-learning-based pose estimation with a location-tracking method using six inertial measurement units and the head-tracking sensor provided by a head-mounted display. To estimate human poses, we propose a bidirectional recurrent neural network with a convolutional long short-term memory layer that achieves higher accuracy and stability by preserving spatio-temporal properties. To locate the user in real-world coordinates, our method integrates the estimated joint pose with the pose of the tracker. To train the model, we gathered public motion-capture datasets with synthesized IMU measurements and also created a real-world dataset. In the evaluation, our method showed higher accuracy and more robust estimation, especially when the user adopted low poses such as a squat or a bow.
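The localization step, anchoring a root-relative skeleton to the head tracker's world-space position, reduces in its simplest form to a rigid translation. The function below is a minimal sketch of that idea only (it ignores orientation fusion and temporal filtering), and the array layout is an assumption:

```python
import numpy as np

def localize_skeleton(joints_local: np.ndarray, head_index: int,
                      head_world: np.ndarray) -> np.ndarray:
    """Translate a (n_joints, 3) root-relative pose so that its head
    joint coincides with the tracker's world-space head position."""
    offset = head_world - joints_local[head_index]
    return joints_local + offset
```

Because the whole skeleton shares one offset, all relative joint positions are preserved while the pose acquires world coordinates.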
4
Vo M, Yumer E, Sunkavalli K, Hadap S, Sheikh Y, Narasimhan SG. Self-Supervised Multi-View Person Association and its Applications. IEEE Trans Pattern Anal Mach Intell 2021; 43:2794-2808. [PMID: 32086193] [DOI: 10.1109/TPAMI.2020.2974726]
Abstract
Reliable markerless motion tracking of people participating in a complex group activity from multiple moving cameras is challenging due to frequent occlusions, strong viewpoint and appearance variations, and asynchronous video streams. To solve this problem, reliable association of the same person across distant viewpoints and temporal instances is essential. We present a self-supervised framework to adapt a generic person appearance descriptor to unlabeled videos by exploiting motion tracking, mutual exclusion constraints, and multi-view geometry. The adapted discriminative descriptor is used in a tracking-by-clustering formulation. We validate the effectiveness of our descriptor learning on WILDTRACK (T. Chavdarova et al., "WILDTRACK: A multi-camera HD dataset for dense unscripted pedestrian detection," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2018, pp. 5030-5039) and on three new complex social scenes captured by multiple cameras with up to 60 people "in the wild". We report significant improvement in association accuracy (up to 18 percent) and stable and coherent 3D human skeleton tracking (5 to 10 times) over the baseline. Using the reconstructed 3D skeletons, we cut the input videos into a multi-angle video in which the image of a specified person is shown from the best visible front-facing camera. Our algorithm detects inter-human occlusion to determine the camera-switching moment while still maintaining the flow of the action. Website: http://www.cs.cmu.edu/~ILIM/projects/IM/Association4Tracking.
5
Ababsa F, Hadj-Abdelkader H, Boui M. 3D Human Pose Estimation with a Catadioptric Sensor in Unconstrained Environments Using an Annealed Particle Filter. Sensors 2020; 20:6985. [PMID: 33297403] [PMCID: PMC7730546] [DOI: 10.3390/s20236985]
Abstract
The purpose of this paper is to investigate the problem of 3D human tracking in complex environments using a particle filter with images captured by a catadioptric vision system. This issue has been widely studied in the literature for RGB images acquired by conventional perspective cameras, whereas omnidirectional images have seldom been used, and published research in this field remains limited. In this study, Riemannian manifolds were considered in order to compute the gradient on spherical images and generate a robust descriptor used along with an SVM classifier for human detection. Original likelihood functions for the particle filter are proposed, using both geodesic distances and overlapping regions between the silhouette detected in the images and the projected 3D human model. Our approach was experimentally evaluated on real data and showed favorable results compared to machine-learning-based techniques in terms of 3D pose accuracy. The root-mean-square error (RMSE) was measured by comparing estimated 3D poses against ground-truth data, giving a mean error of 0.065 m for a walking action.
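The reported accuracy metric, the RMSE between estimated and ground-truth 3D poses, can be computed as below. The array layout (frames x joints x 3) is an assumption, and conventions differ on whether the error is taken per coordinate or per joint; this sketch uses the per-joint Euclidean distance:

```python
import numpy as np

def pose_rmse(estimated: np.ndarray, ground_truth: np.ndarray) -> float:
    """RMSE in metres over per-joint Euclidean errors.

    Both arrays have shape (n_frames, n_joints, 3).
    """
    sq_dist = np.sum((estimated - ground_truth) ** 2, axis=-1)
    return float(np.sqrt(np.mean(sq_dist)))
```

A constant 0.065 m offset applied to every joint yields an RMSE of exactly 0.065 m, matching the scale of the error reported above.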
Affiliation(s)
- Fakhreddine Ababsa
- Arts et Métiers Institute of Technology, LISPEN, HESAM University, Chalon-sur-Saône, France
- Hicham Hadj-Abdelkader
- IBISC Laboratory, University of Evry, 91000 Evry-Courcouronnes, France
- Marouane Boui
- IBISC Laboratory, University of Evry, 91000 Evry-Courcouronnes, France
6
Abstract
To reduce the damage rate when picking easily bruised fruit such as strawberries, and to improve the accuracy and efficiency of picking robots, this work applies motion-capture techniques, based on edge-feature detection and automated capture algorithms originally developed for analyzing standard badminton play, to a robot picking at night. The badminton motion-capture system can analyze game video in real time and extract, through motion capture, the accuracy rate and technical characteristics of elite badminton players. The purpose of this article is to apply this high-precision motion-capture vision control system to the design of the vision control system of a robot picking at night, so as to improve the robot's observation and recognition accuracy and thereby raise the degree of automation of the operation. This paper tests the reliability of the picking robot's vision system. Taking night-time picking as an example, image processing was performed on the edge features of the fruits to be picked. The results show that smoothing and enhancement can successfully extract the edge features of fruit images. The accuracy of target recognition and the positioning ability of the vision system were then tested using these edge features; both were well above 91%, satisfying the high-precision automation demands of the picking-robot operation.
Affiliation(s)
- Changxin Li
- College of Physical and Health Education, Mianyang Teachers’ College, Mianyang, Sichuan, China
7
Chasing Feet in the Wild: A Proposed Egocentric Motion-Aware Gait Assessment Tool. Lect Notes Comput Sci 2019. [DOI: 10.1007/978-3-030-11024-6_12]
8
Joo H, Simon T, Li X, Liu H, Tan L, Gui L, Banerjee S, Godisart T, Nabbe B, Matthews I, Kanade T, Nobuhara S, Sheikh Y. Panoptic Studio: A Massively Multiview System for Social Interaction Capture. IEEE Trans Pattern Anal Mach Intell 2019; 41:190-204. [PMID: 29990012] [DOI: 10.1109/TPAMI.2017.2782743]
Abstract
We present an approach to capture the 3D motion of a group of people engaged in a social interaction. The core challenges in capturing social interactions are: (1) occlusion is functional and frequent; (2) subtle motion needs to be measured over a space large enough to host a social group; (3) human appearance and configuration variation is immense; and (4) attaching markers to the body may prime the nature of interactions. The Panoptic Studio is a system organized around the thesis that social interactions should be measured through the integration of perceptual analyses over a large variety of viewpoints. We present a modularized system designed around this principle, consisting of integrated structural, hardware, and software innovations. The system takes, as input, 480 synchronized video streams of multiple people engaged in social activities, and produces, as output, the labeled time-varying 3D structure of anatomical landmarks on individuals in the space. Our algorithm is designed to fuse the "weak" perceptual processes in the large number of views by progressively generating skeletal proposals from low-level appearance cues, and a framework for temporal refinement is also presented by associating body parts with a reconstructed dense 3D trajectory stream. Our system and method are the first to reconstruct the full-body motion of more than five people engaged in social interactions without using markers. We also empirically demonstrate the impact of the number of views in achieving this goal.
9
Khan MH, Schneider M, Farid MS, Grzegorzek M. Detection of Infantile Movement Disorders in Video Data Using Deformable Part-Based Model. Sensors 2018; 18:3202. [PMID: 30248968] [PMCID: PMC6210538] [DOI: 10.3390/s18103202]
Abstract
Movement analysis of infants' body parts is crucial for the early detection of movement disorders such as cerebral palsy. Most existing techniques are either marker-based or use wearable sensors to analyze movement disorders. Such techniques work well for adults; however, they are not effective for infants, for whom wearing sensors or markers may cause discomfort and affect their natural movements. This paper presents a method to help clinicians with the early detection of movement disorders in infants. The proposed method is markerless and does not use any wearable sensors, which makes it well suited to analyzing body-part movement in infants. The algorithm is based on a deformable part-based model to detect the body parts and track them in subsequent frames of the video to encode the motion information. The proposed algorithm learns a model using a set of part filters and spatial relations between the body parts. In particular, it forms a mixture of part filters for each body part to determine its orientation, which is used to detect the parts and analyze their movements by tracking them in the temporal direction. The model is represented using a tree-structured graph, and the learning is carried out using a structured support vector machine. The proposed framework will assist clinicians and general practitioners in the early detection of infantile movement disorders. The performance evaluation is carried out on a large dataset, and comparisons with existing techniques demonstrate its effectiveness.
Affiliation(s)
- Muhammad Hassan Khan
- Research Group for Pattern Recognition, University of Siegen, 57076 Siegen, Germany
- Manuel Schneider
- Research Group for Pattern Recognition, University of Siegen, 57076 Siegen, Germany
- Muhammad Shahid Farid
- College of Information Technology, University of the Punjab, 54000 Lahore, Pakistan
- Marcin Grzegorzek
- Research Group for Pattern Recognition, University of Siegen, 57076 Siegen, Germany
10
Deep Convolutional Neural Networks for Classifying Body Constitution Based on Face Image. Comput Math Methods Med 2017; 2017:9846707. [PMID: 29181087] [PMCID: PMC5664380] [DOI: 10.1155/2017/9846707]
Abstract
Body constitution classification is the basis and core of constitution research in traditional Chinese medicine: the aim is to extract regularities from complex constitutional phenomena and build a constitution classification system. Traditional identification methods, such as questionnaires, are inefficient and often inaccurate. This paper proposes a body constitution recognition algorithm based on a deep convolutional neural network, which classifies individual constitution types from face images. The proposed model first uses the convolutional neural network to extract features from the face image and then combines the extracted features with color features. Finally, the fused features are input to a softmax classifier to obtain the classification result. Comparison experiments show that the proposed algorithm achieves 65.29% accuracy in constitution classification, and its performance was accepted by Chinese medicine practitioners.
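The final stage described, concatenating CNN features with colour features and scoring classes through a softmax layer, looks like the following in outline. The feature dimensions and weights are placeholders, not the paper's trained model:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classify(cnn_feat: np.ndarray, color_feat: np.ndarray,
             W: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Concatenate the two feature vectors, apply a linear layer,
    and return a probability distribution over constitution classes."""
    fused = np.concatenate([cnn_feat, color_feat], axis=-1)
    return softmax(fused @ W + b)
```

Whatever the feature sizes, the output is a proper distribution: non-negative scores summing to one, from which the top class is taken as the predicted constitution type.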
11
Alldieck T, Kassubeck M, Wandt B, Rosenhahn B, Magnor M. Optical Flow-Based 3D Human Motion Estimation from Monocular Video. Lect Notes Comput Sci 2017. [DOI: 10.1007/978-3-319-66709-6_28]