1
Abreu AA, Rail B, Farah E, Alterio RE, Scott DJ, Sankaranarayanan G, Zeh HJ, Polanco PM. Baseline performance in a robotic virtual reality platform predicts rate of skill acquisition in a proficiency-based curriculum: a cohort study of surgical trainees. Surg Endosc 2023; 37:8804-8809. [PMID: 37603102] [DOI: 10.1007/s00464-023-10372-8]
Abstract
BACKGROUND Residency programs must prepare to train the next generation of surgeons on the robotic platform. The purpose of this study was to determine whether residents' baseline skills on a virtual reality (VR) robotic simulator before intern year predicted future performance in a proficiency-based curriculum. METHODS Across two academic years, 21 general surgery PGY-1s underwent the robotic surgery boot camp at the University of Texas Southwestern. During boot camp, subjects completed five previously validated VR tasks, and their performance metrics (score, time, and economy of motion [EOM]) were extracted retrospectively from their Intuitive learning accounts. The same metrics were assessed during residency until the residents reached previously validated proficiency benchmarks. Outcomes were defined as the score at proficiency, the number of attempts to reach proficiency, and the time to proficiency. Spearman's rho and Mann-Whitney U tests were used; medians (IQR) are reported. The significance level was set at p < 0.05. RESULTS All 21 residents completed at least three of the five boot camp tasks and achieved proficiency in those tasks during residency. The median average score at boot camp was 12.3 (IQR: 5.14-18.5), and the median average EOM was 599.58 cm (IQR: 529.64-676.60). The average score at boot camp correlated significantly with a shorter time to proficiency (p < 0.05). EOM at boot camp correlated significantly with both attempts to proficiency and time to proficiency (p < 0.01). Residents with an average baseline EOM below the median differed significantly from those at or above the median in attempts to proficiency (p < 0.05) and time to proficiency (p < 0.05). CONCLUSION Residents with an innate ability to perform tasks with better EOM may acquire robotic surgery skills faster. Future investigators could explore how these innate differences affect performance throughout residency.
Affiliation(s)
- Andres A Abreu, Benjamin Rail, Emile Farah, Rodrigo E Alterio, Daniel J Scott, Ganesh Sankaranarayanan, Herbert J Zeh, Patricio M Polanco: Division of Surgical Oncology, Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390, USA
2
Dong R, Ikuno S. Biomechanical Analysis of Golf Swing Motion Using Hilbert-Huang Transform. Sensors (Basel) 2023; 23:6698. [PMID: 37571482] [PMCID: PMC10422357] [DOI: 10.3390/s23156698]
Abstract
In golf swing analysis, high-speed cameras and TrackMan devices are traditionally used to collect data about the club, ball, and putt. However, these tools are costly and often inaccessible to golfers. This research proposes an alternative: an affordable inertial motion capture system that accurately records golf swing movements. The focus is on discerning the differences between motions that produce straight and slice trajectories. Commonly, the opening motion of the body's left half and the head-up motion are associated with a slice trajectory. We employ the Hilbert-Huang transform (HHT) to examine these motions in detail for biomechanical analysis: the gathered data are processed through the HHT to calculate their instantaneous frequency and amplitude. The research found discernible differences between straight and slice trajectories at the swing's moment of impact in the instantaneous frequency domain. An average golfer, a single-handicap golfer, and three beginners were selected as subjects and analyzed with the proposed method. For the average golfer, the head and left-leg amplitudes increase at the moment of impact, resulting in a slice trajectory. These results indicate that leg-opening and head-up movements were detected and extracted as non-linear frequency components, revealing the biomechanical meaning of slice-trajectory motion. For the single-handicap golfer, the hip and left-arm joints could be the target joints for detecting the biomechanical motion that triggers a slice trajectory. For the beginners, whose swing forms were not yet settled, the biomechanical motions associated with slice trajectories differed from swing to swing, indicating that beginners first need more practice to stabilize their swing form. These results show that the proposed framework applies to golfers of different levels and could help them improve their swings to achieve straight trajectories.
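The core HHT computation this abstract refers to can be sketched in a few lines. This is not the authors' code: it illustrates only the Hilbert step (in the full method, empirical mode decomposition would first split the motion signal into intrinsic mode functions), and the 5 Hz test signal, sampling rate, and variable names are illustrative assumptions, using NumPy/SciPy.

```python
import numpy as np
from scipy.signal import hilbert

fs = 100.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
# stand-in for one intrinsic mode function of a joint motion signal:
# a 5 Hz oscillation whose amplitude swells near "impact" (t = 1 s)
x = np.sin(2 * np.pi * 5.0 * t) * (1.0 + 0.5 * np.exp(-((t - 1.0) ** 2) / 0.02))

analytic = hilbert(x)                        # analytic signal x + j*H{x}
inst_amplitude = np.abs(analytic)            # instantaneous amplitude envelope
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2.0 * np.pi)  # instantaneous frequency (Hz)

# the envelope peaks near the impact instant; the frequency stays near 5 Hz
print(t[np.argmax(inst_amplitude)], float(np.median(inst_freq)))
```

Per-joint envelopes computed this way are what allow an amplitude increase at impact to be localized, e.g., to the head or left leg.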
Affiliation(s)
- Ran Dong: School of Engineering, Chukyo University, Toyota 470-0393, Japan
- Soichiro Ikuno: School of Computer Science, Tokyo University of Technology, Hachioji 192-0982, Japan
3
Dong A, Wang F, Shuai Z, Zhang K, Qian D, Tian Y. A new kinematic dataset of lower limbs action for balance testing. Sci Data 2023; 10:209. [PMID: 37059747] [PMCID: PMC10104813] [DOI: 10.1038/s41597-023-02105-2]
Abstract
Balance is an essential component of performance analysis in skiing, and many skiers emphasize balance training. Inertial measurement units (IMUs), a type of multiplex human motion capture system, are widely used because of their intuitive human-computer interaction, low energy consumption, and the freedom of movement they allow. The purpose of this research is to use IMU sensors to establish a kinematic dataset of balance-testing tasks derived from skiing, to help quantify skiers' balance ability. A Perception Neuron Studio motion capture device was used in the present study. The dataset contains motion and sensor data from 20 participants (half male), collected at a 100 Hz sampling frequency. To our knowledge, this dataset is the only one that uses a BOSU ball in the balance test. We hope it will contribute to multiple fields of cross-technology integration in physical training and functional testing, including big-data analysis, sports equipment design, and sports biomechanical analysis.
Affiliation(s)
- Anqi Dong: Beijing Sport University, Beijing, China
- Fei Wang: Beijing Sport University, Beijing, China
4
Wang R, Lv H, Lu Z, Huang X, Wu H, Xiong J, Yang G. A medical assistive robot for tele-healthcare during the COVID-19 pandemic: development and usability study in an isolation ward. JMIR Hum Factors 2023; 10:e42870. [PMID: 36634269] [PMCID: PMC10131661] [DOI: 10.2196/42870]
Abstract
BACKGROUND The coronavirus disease 2019 (COVID-19) pandemic has affected the mental and emotional well-being of patients, family members, and healthcare workers. Patients in isolation wards may develop psychological problems because of long-term hospitalization, the evolving epidemic, and the inability to see their families. A medical assistive robot (MAR), acting as an intermediary for communication, can be deployed to relieve these mental pressures. OBJECTIVE CareDo, a MAR with telepresence and teleoperation functions, was developed in this work for remote healthcare. This study aims to investigate its practical performance in an isolation ward during the pandemic. METHODS Two systems were integrated into the CareDo robot. The telepresence system uses a Web Real-Time Communication (WebRTC) solution for multi-user chat and a convolutional neural network for expression recognition. The teleoperation system uses an incremental motion-mapping method to operate the robot remotely. Clinical trials were conducted at the First Affiliated Hospital, Zhejiang University. RESULTS During the clinical trials, tasks such as video chatting, emotion detection, and medical supply delivery were performed through the robot. Seven voice commands were defined for system wake-up, video chatting, and system exit, and typical command durations of 1 to 3 seconds were set to improve voice command detection. For one patient, facial expressions were recorded 152 times in a single day for psychological intervention; recognition accuracy reached 95% and 92.8% for happy and neutral expressions, respectively. CONCLUSIONS Patients and healthcare workers can use this MAR in isolation wards for tele-healthcare during the COVID-19 pandemic. It can help break the chain of virus transmission and is also an effective means of remote psychological intervention.
Affiliation(s)
- Ruohan Wang, Honghao Lv, Zhangli Lu, Geng Yang: State Key Laboratory of Fluid Power & Mechatronic Systems, School of Mechanical Engineering, Zhejiang University, Hangzhou 310027, China
- Xiaoyan Huang: College of Electrical Engineering, Zhejiang University, Hangzhou 310027, China
- Haiteng Wu: Hangzhou Shenhao Technology Co., Ltd., Hangzhou, China
- Junjie Xiong: Hangzhou Shenhao Technology Co., Ltd., Hangzhou, China; School of Mechanical Engineering, Zhejiang University, Hangzhou, China
5
Nam HS, Lee WH, Seo HG, Smuck MW, Kim S. Evaluation of Motion Segment Size as a New Sensor-based Functional Outcome Measure in Stroke Rehabilitation. J Int Med Res 2022; 50:3000605221122750. [PMID: 36129970] [PMCID: PMC9511330] [DOI: 10.1177/03000605221122750]
Abstract
Objective To evaluate a novel parameter, motion segment size (MSS), in stroke patients with upper limb impairment, and to validate its clinical applicability by correlating results with a standard task-based functional evaluation tool. Methods In this cross-sectional study, patients with hemiplegia and healthy controls equipped with multiple inertial measurement unit (IMU) sensors performed the Action Research Arm Test (ARAT) and activities of daily living (ADL) tasks. Acceleration of the wrist and Euler angles of each upper limb segment were measured. The average and maximum MSS, accumulated motion, total performance time, and average motion speed (AMS) were extracted for analysis. Results Data from nine patients and 10 controls showed that the average MSS of forearm supination/pronation and elbow flexion/extension during the full ARAT tasks differed significantly between patients and controls and correlated significantly with ARAT scores. Conclusions MSS may provide clinically relevant information about upper limb functional status in stroke patients.
Affiliation(s)
- Hyung Seok Nam: Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, Korea; Department of Rehabilitation Medicine, Seoul National University Hospital, Seoul, Korea; Wearable Health Lab, Division of Physical Medicine and Rehabilitation, Stanford University, Redwood City, CA, USA; Department of Rehabilitation Medicine, Sheikh Khalifa Specialty Hospital, Ras al Khaimah, UAE
- Woo Hyung Lee: Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, Korea; Department of Rehabilitation Medicine, Seoul National University Hospital, Seoul, Korea
- Han Gil Seo: Department of Rehabilitation Medicine, Seoul National University Hospital, Seoul, Korea
- Matthew W Smuck: Wearable Health Lab, Division of Physical Medicine and Rehabilitation, Stanford University, Redwood City, CA, USA
- Sungwan Kim: Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, Korea; Institute of Bioengineering, Seoul National University, Seoul, Korea
6
Jin Y, Suzuki G, Shioya H. Detecting and Visualizing Stops in Dance Training by Neural Network Based on Velocity and Acceleration. Sensors (Basel) 2022; 22:5402. [PMID: 35891082] [PMCID: PMC9321875] [DOI: 10.3390/s22145402]
Abstract
Various genres of dance, such as Yosakoi Soran, have contributed to many people's health and to their sense of belonging to a community. However, because of COVID-19, face-to-face activities have been restricted and group dance practice has become difficult, creating a need to support remote practice. In this paper, we propose a system for detecting and visualizing stops, a class of dance motions of particular importance. We measure dance movements by motion capture and calculate features of each movement based on velocity and acceleration. Using a neural network trained on these motion features, the system detects stops and visualizes them with a human-like 3D model. In an experiment on dance data, the proposed method detected stops with high accuracy, demonstrating its effectiveness as information and communication technology support for remote group dance practice.
7
Full-Body Motion Capture-Based Virtual Reality Multi-Remote Collaboration System. Applied Sciences 2022. [DOI: 10.3390/app12125862]
Abstract
Various realistic collaboration technologies have emerged in the context of the COVID-19 pandemic. However, because existing virtual reality (VR) collaboration systems generally rely on inverse kinematics driven by a head-mounted display and controllers, the user and the character cannot be accurately matched, and the immersion of the VR experience is low. In this study, we propose a VR remote collaboration system that uses motion capture to improve immersion. The system drives a VR character that reproduces the movements of a user wearing motion capture equipment. Nevertheless, errors can occur in the virtual environment when the sizes of the motion capture user and the virtual character differ. To reduce this error, a technique for synchronizing the character's size to the user's body was implemented and tested. The experimental results show that the error between the heights of the test subject and the virtual character was 0.465 cm on average. To verify that a motion-capture-based VR remote collaboration system is feasible, we confirm that three motion capture users can collaborate remotely using a Photon server.
8
Reliability and Validity of an Inertial Measurement System to Quantify Lower Extremity Joint Angle in Functional Movements. Sensors (Basel) 2022; 22:863. [PMID: 35161609] [PMCID: PMC8838175] [DOI: 10.3390/s22030863]
Abstract
The purpose of this research was to determine whether the commercially available Perception Neuron motion capture system is valid and reliable in clinically relevant lower limb functional tasks. Twenty healthy participants performed two sessions on different days: gait, squat, single-leg squat, side lunge, forward lunge, and counter-movement jump. Seven inertial measurement units (IMUs) and an OptiTrack optical system were used to record the three-dimensional joint kinematics of the lower extremity. To evaluate performance, the coefficient of multiple correlation (CMC) and the root mean square error (RMSE) of the waveforms, as well as the difference and intraclass correlation coefficient (ICC) of discrete parameters, were calculated. In all tasks, the CMC revealed fair to excellent waveform similarity (0.47–0.99) and the RMSE was between 3.57° and 13.14°. The difference between discrete parameters was lower than 14.54°. The repeatability analysis of waveforms showed a CMC between 0.54 and 0.95 and an RMSE of less than 5° in the frontal and transverse planes. The ICC of all joint angles from the IMU system was moderate to excellent (0.57–1). Our findings show that the IMU system may be used to evaluate lower extremity 3D joint kinematics in functional motions.
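For reference, the two waveform metrics named in this abstract can be computed as follows. A minimal sketch, not the study's code: the Kadaba-style CMC formulation, the synthetic knee-flexion-like curve, and the 2° noise level are assumptions, using NumPy.

```python
import numpy as np

def waveform_rmse(a, b):
    """RMSE between two time-normalized joint-angle waveforms (degrees)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def cmc(waveforms):
    """Coefficient of multiple correlation across G waveforms of F frames
    (rows = measurement systems), one common Kadaba-style formulation."""
    y = np.asarray(waveforms, float)
    g, f = y.shape
    frame_mean = y.mean(axis=0)              # mean curve over systems
    num = ((y - frame_mean) ** 2).sum() / (f * (g - 1))
    den = ((y - y.mean()) ** 2).sum() / (g * f - 1)
    return float(np.sqrt(1.0 - num / den))

t = np.linspace(0.0, 1.0, 101)               # one normalized movement cycle
optical = 60.0 * np.sin(np.pi * t)           # reference joint-angle curve (deg)
imu = optical + np.random.default_rng(0).normal(0.0, 2.0, t.size)  # IMU estimate

print(waveform_rmse(imu, optical))           # around the noise level, in degrees
print(cmc(np.vstack([optical, imu])))        # near 1 for highly similar curves
```

With more than two rows, the same `cmc` call also covers the between-day repeatability analysis described above.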
9
Sers R, Forrester S, Zecca M, Ward S, Moss E. The ergonomic impact of patient body mass index on surgeon posture during simulated laparoscopy. Appl Ergon 2021; 97:103501. [PMID: 34167015] [DOI: 10.1016/j.apergo.2021.103501]
Abstract
Laparoscopy is a cornerstone of modern surgical care, with clear advantages for patients. However, it has also been associated with upper body musculoskeletal disorders among surgeons, owing to their propensity to assume non-neutral postures, and there is a perception that patients with high body mass indices (BMIs) exacerbate these postural demands. Surgeon upper body postures were therefore objectively quantified using inertial measurement units, and the LUBA ergonomic framework was used to assess posture during laparoscopic training on patient models simulating BMIs of 20, 30, 40 and 50 kg/m2. In all surgeons, upper body posture worsened significantly during simulated laparoscopic surgery on the 50 kg/m2 model compared with the baseline 20 kg/m2 model. These findings suggest that performing laparoscopic surgery on patients with high BMIs increases the prevalence of non-neutral posture and may further increase surgeons' risk of musculoskeletal disorders.
Affiliation(s)
- Ryan Sers, Steph Forrester, Massimiliano Zecca, Stephen Ward: Wolfson School of Mechanical, Electrical and Manufacturing Engineering, Loughborough University, UK
- Esther Moss: Leicester Cancer Research Centre, University of Leicester, UK
10
Qin Z, Stapornchaisit S, He Z, Yoshimura N, Koike Y. Multi-Joint Angles Estimation of Forearm Motion Using a Regression Model. Front Neurorobot 2021; 15:685961. [PMID: 34408635] [PMCID: PMC8366416] [DOI: 10.3389/fnbot.2021.685961]
Abstract
To improve the quality of life of forearm amputees, prosthetic hands with high accuracy and robustness are necessary, and using surface electromyography (sEMG) signals to control a prosthetic hand is challenging. In this study, we proposed a time-domain CNN model for regression prediction of joint angles in three degrees of freedom (3 DOFs: two wrist motions and one finger motion), and five-fold cross-validation was used to evaluate the correlation coefficient (CC). Across 10 participants, the CC was 0.87–0.92 for wrist flexion/extension, 0.72–0.95 for pronation/supination, and 0.75–0.94 for hand grip/open. We backtracked the fully connected layer weights to create a geometry plot of the motion pattern and investigate what the proposed model had learned. To assess whether the model can be updated day to day by transfer learning, we performed a second experiment with five of the participants on another day and conducted transfer learning on a smaller dataset. The CC results improved (wrist flexion/extension 0.90–0.97, pronation/supination 0.84–0.96, hand grip/open 0.85–0.92), suggesting that transfer learning incorporating small amounts of sEMG data acquired on different days is effective. We also compared our CNN-based model with four conventional regression models; the results show that the proposed model significantly outperforms all four, with and without transfer learning. These offline results suggest that the proposed model is reliable enough for real-time control across days and could be applied to real-time prosthetic control in the future.
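The evaluation metric this abstract reports (per-fold correlation coefficient under five-fold cross-validation) is straightforward to reproduce. A sketch only: the sinusoidal "wrist angle", the noise level, and the helper names are illustrative assumptions, not the authors' pipeline, using NumPy.

```python
import numpy as np

def corr_coef(y_true, y_pred):
    """Pearson correlation coefficient (CC) between target and predicted angles."""
    return float(np.corrcoef(np.asarray(y_true, float),
                             np.asarray(y_pred, float))[0, 1])

def kfold_indices(n, k=5, seed=0):
    """Shuffled index splits for k-fold cross-validation."""
    return np.array_split(np.random.default_rng(seed).permutation(n), k)

rng = np.random.default_rng(1)
angle = 40.0 * np.sin(np.linspace(0.0, 8.0 * np.pi, 1000))  # true wrist angle (deg)
pred = angle + rng.normal(0.0, 8.0, angle.size)             # hypothetical model output

fold_cc = [corr_coef(angle[idx], pred[idx]) for idx in kfold_indices(angle.size)]
print(min(fold_cc), max(fold_cc))            # per-fold CC range for this toy model
```

Reporting the per-fold range, as the study does for each DOF, exposes how stable the regression is across data splits.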
Affiliation(s)
- Zixuan Qin, Sorawit Stapornchaisit, Zixun He: Department of Information and Communications Engineering, Tokyo Institute of Technology, Yokohama, Japan
- Natsue Yoshimura: Institute of Innovative Research, Tokyo Institute of Technology, Yokohama, Japan; Precursory Research for Embryonic Science and Technology (PRESTO), Japan Science and Technology Agency (JST), Saitama, Japan
- Yasuharu Koike: Institute of Innovative Research, Tokyo Institute of Technology, Yokohama, Japan
11
Design of a Multifunctional Operating Station Based on Augmented Reality (MOSAR). Cybernetics and Information Technologies 2021. [DOI: 10.2478/cait-2021-0009]
Abstract
Design principles of a novel Multifunctional Operating Station (MOS) using Augmented Reality (AR) technology (MOSAR) are proposed in this paper. The AR-based design allows more ergonomic remote instrument control in real time, in contrast to classical instrument-centered interfaces. Another advantage is its hierarchical software structure, which includes multiple programming interpreters. The MOSAR approach is illustrated with a remote surgical operating station that controls intelligent surgical instruments. The implementation of the operating station is based on the multiplatform open-source Tcl/Tk library, and an AR extension has been developed on the Unity platform using the Vuforia SDK.
12
Prochazka A, Dostal O, Cejnar P, Mohamed HI, Pavelek Z, Valis M, Vysata O. Deep Learning for Accelerometric Data Assessment and Ataxic Gait Monitoring. IEEE Trans Neural Syst Rehabil Eng 2021; 29:360-367. [PMID: 33434133] [DOI: 10.1109/tnsre.2021.3051093]
Abstract
Ataxic gait monitoring and the assessment of neurological disorders are important multidisciplinary areas supported by digital signal processing methods and machine learning tools. This paper examines the use of accelerometric data to optimize deep convolutional neural network systems for distinguishing ataxic from normal gait. The experimental dataset includes 860 signal segments from 16 ataxic patients and 19 control individuals, with mean ages of 38.6 and 39.6 years, respectively. The proposed methodology analyzes the frequency components of accelerometric signals recorded simultaneously at specific body positions with a sampling frequency of 60 Hz; the deep learning system uses all frequency components in the range of 0-30 Hz. Our classification results are compared with those of standard methods, including the support vector machine, Bayesian methods, and a two-layer neural network with features estimated as the relative power in selected frequency bands. The results show that appropriate selection of sensor positions increases the accuracy from 81.2% for the foot position to 91.7% for the spine position, and combining the input data with a five-layer deep learning model increases the accuracy to 95.8%. Our methodology suggests that artificial intelligence and deep learning are efficient tools for assessing motion disorders, with a wide range of further applications.
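The baseline features this abstract mentions (relative power in selected frequency bands) can be sketched as follows. This is an illustration, not the paper's implementation: the band edges, the 2 Hz/6 Hz toy signal, and the function name are assumptions; only the 60 Hz sampling rate and the 0-30 Hz range come from the abstract. Uses NumPy.

```python
import numpy as np

def relative_band_power(x, fs, bands, fmax=30.0):
    """Relative spectral power of x in each (lo, hi] band, from the FFT
    periodogram, normalized by the total power up to fmax."""
    x = np.asarray(x, float) - np.mean(x)    # remove DC before the FFT
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    total = psd[(freqs > 0.0) & (freqs <= fmax)].sum()
    return [float(psd[(freqs > lo) & (freqs <= hi)].sum() / total)
            for lo, hi in bands]

fs = 60.0                                    # accelerometer sampling rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)
# toy gait-like signal: strong 2 Hz step rhythm plus a weaker 6 Hz component
gait = np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.sin(2 * np.pi * 6.0 * t)

features = relative_band_power(gait, fs, [(0.0, 3.0), (3.0, 10.0), (10.0, 30.0)])
print([round(f, 2) for f in features])       # most power falls in the 0-3 Hz band
```

Feature vectors like this, one per body-position signal segment, are what the shallow baseline classifiers consume; the deep model instead ingests all 0-30 Hz components directly.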
13
Zhang M, Yu L, Zhang K, Du B, Zhan B, Chen S, Jiang X, Guo S, Zhao J, Wang Y, Wang B, Liu S, Luo W. Kinematic dataset of actors expressing emotions. Sci Data 2020; 7:292. [PMID: 32901035] [PMCID: PMC7478954] [DOI: 10.1038/s41597-020-00635-7]
Abstract
Human body movements can convey a variety of emotions and can even confer advantages in certain situations. However, how emotion is encoded in body movements has remained unclear, partly because of the lack of public human-body kinematic datasets covering the expression of various emotions. We therefore aimed to produce a comprehensive dataset to assist in recognizing cues from all parts of the body that indicate six basic emotions (happiness, sadness, anger, fear, disgust, surprise) and neutral expression. The dataset was created with a portable wireless motion capture system: 22 semi-professional actors (half male) completed performances according to standardized guidance and their preferred daily events. A total of 1402 recordings at 125 Hz were collected, consisting of position and rotation data for 72 anatomical nodes. To our knowledge, this is currently the largest emotional kinematic dataset of the human body. We hope it will contribute to multiple fields of research and practice, including social neuroscience, psychiatry, computer vision, and biometric and information forensics.
Affiliation(s)
- Mingming Zhang, Lu Yu, Bixuan Du, Bin Zhan, Shaohua Chen, Wenbo Luo: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, Liaoning, China
- Keye Zhang: School of Social and Behavioral Sciences, Nanjing University, Nanjing, Jiangsu, China
- Xiuhao Jiang, Shuai Guo, Jiafeng Zhao, Yang Wang, Bin Wang, Shenglan Liu: School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian, Liaoning, China; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, Liaoning, China