Xu Y, Gao R, Yang A, Liang K, Shi Z, Sun M, Shen T. Extreme Learning Machine/Finite Impulse Response Filter and Vision Data-Assisted Inertial Navigation System-Based Human Motion Capture. Micromachines 2023;14:2088. [PMID: 38004945] [PMCID: PMC10673149] [DOI: 10.3390/mi14112088]
[Received: 10/02/2023] [Revised: 10/29/2023] [Accepted: 11/08/2023] [Indexed: 11/26/2023]
Abstract
To obtain accurate position information, a vision-assisted fusion method combining extreme learning machine (ELM)/finite impulse response (FIR) filters with vision data is proposed for inertial navigation system (INS)-based human motion capture. In the proposed method, when vision data are available, the vision-based human position is fed to an FIR filter that outputs an accurate human position, while a second FIR filter estimates the human position from INS data; an ELM is trained to map the output of the INS-based FIR filter to its corresponding error. When vision data are unavailable, the INS-based FIR filter provides the human position and the ELM trained in the preceding stage predicts its estimation error, which serves as a correction. For the right-arm elbow, the proposed method improves the cumulative distribution function (CDF) of the position errors by about 12.71%, demonstrating its effectiveness.
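The fusion scheme the abstract describes can be sketched in a few lines. The following is a minimal, illustrative reconstruction, not the paper's implementation: the filter taps, the ELM hidden-layer size, the sliding-window features, and the synthetic 1-D track (a sine standing in for the vision-based reference, with a scaled, noisy copy standing in for the INS estimate) are all assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def fir_filter(x, taps):
    """Causal moving-average FIR filter (uniform taps; an assumed design)."""
    h = np.ones(taps) / taps
    return np.convolve(x, h, mode="full")[: len(x)]

def windows(x, w):
    """Sliding windows of length w, one row per time step from index w-1 on."""
    return np.stack([x[i - w + 1 : i + 1] for i in range(w - 1, len(x))])

class ELM:
    """Single-hidden-layer ELM: random input weights, least-squares output weights."""
    def __init__(self, n_hidden=50, seed=1):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)
    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self
    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Synthetic 1-D track: vision-based reference vs. a biased, noisy INS estimate.
t = np.linspace(0, 10, 500)
true_pos = np.sin(t)                                   # stands in for vision output
ins_pos = 1.1 * true_pos + 0.02 * rng.standard_normal(t.size)

W = 10
fir_out = fir_filter(ins_pos, taps=W)
error = fir_out - true_pos                             # training target: FIR error

X = windows(fir_out, W)                                # features: recent FIR outputs
y = error[W - 1 :]
elm = ELM().fit(X[:400], y[:400])                      # learn while vision available

# Vision outage: subtract the ELM-predicted error from the FIR output.
corrected = X[400:, -1] - elm.predict(X[400:])
rmse_raw = float(np.sqrt(np.mean((X[400:, -1] - true_pos[W - 1 + 400 :]) ** 2)))
rmse_cor = float(np.sqrt(np.mean((corrected - true_pos[W - 1 + 400 :]) ** 2)))
```

On this toy track, the ELM-predicted error correction should reduce the RMSE of the INS-based FIR output during the simulated vision outage, mirroring the role the paper assigns to the ELM when vision data drop out.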