1
Hong R, Wu Z, Peng K, Zhang J, He Y, Zhang Z, Gao Y, Jin Y, Su X, Zhi H, Guan Q, Pan L, Jin L. Kinect-based objective assessment of the acute levodopa challenge test in parkinsonism: a feasibility study. Neurol Sci 2024; 45:2661-2670. [PMID: 38183553 DOI: 10.1007/s10072-023-07296-5]
Abstract
INTRODUCTION The acute levodopa challenge test (ALCT) is an important and valuable examination, but it still has some shortcomings. We aimed to objectively assess ALCT based on a depth camera and to identify the best indicators. METHODS Fifty-nine individuals with parkinsonism completed ALCT, and the improvement rate (IR, the change in value before and after levodopa administration) of the Movement Disorder Society-Sponsored Revision of the Unified Parkinson's Disease Rating Scale part III (MDS-UPDRS III) was calculated. The kinematic features of the patients' movements in both the OFF and ON states were collected with an Azure Kinect depth camera. RESULTS The IR of MDS-UPDRS III was significantly correlated with the IRs of many kinematic features for arising from a chair, pronation-supination movements of the hand, finger tapping, toe tapping, leg agility, and gait (rs = -0.277 to -0.672, P < 0.05). The selected features showed moderate to high discriminative value in identifying a clinically significant response to levodopa, with sensitivity, specificity, and area under the curve (AUC) in the ranges of 50-100%, 47.22-97.22%, and 0.673-0.915, respectively. A classifier combining kinematic features of toe tapping showed excellent performance, with an AUC of 0.966 (95% CI = 0.922-1.000, P < 0.001); the optimal cut-off value was 21.24%, with sensitivity and specificity of 94.44% and 87.18%, respectively. CONCLUSION This study demonstrated the feasibility of measuring the effect of levodopa and objectively assessing ALCT based on kinematic data derived from an Azure Kinect-based system.
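The cutoff-selection step described above (an optimal threshold with its sensitivity and specificity read off an ROC curve) can be sketched as follows. The improvement-rate data, group sizes, and distributions here are simulated for illustration, not the study's.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical improvement rates (%) of one kinematic feature:
# responders to levodopa tend to improve more than non-responders.
ir_responders = rng.normal(35, 10, 40)
ir_nonresponders = rng.normal(10, 10, 40)
y_true = np.r_[np.ones(40), np.zeros(40)]
scores = np.r_[ir_responders, ir_nonresponders]

auc = roc_auc_score(y_true, scores)
fpr, tpr, thresholds = roc_curve(y_true, scores)
# Youden's J picks the cutoff maximising sensitivity + specificity - 1.
j = tpr - fpr
best = np.argmax(j)
cutoff = thresholds[best]
sensitivity, specificity = tpr[best], 1 - fpr[best]
print(f"AUC={auc:.3f}, cutoff={cutoff:.2f}%, "
      f"sens={sensitivity:.2%}, spec={specificity:.2%}")
```

The same recipe, applied to a combined toe-tapping feature, would yield a single operating point like the 21.24% cutoff the paper reports.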
Affiliation(s)
- Ronghua Hong
- Department of Neurology and Neurological Rehabilitation, Shanghai Disabled Persons' Federation Key Laboratory of Intelligent Rehabilitation Assistive Devices and Technologies, School of Medicine, Shanghai Yangzhi Rehabilitation Hospital (Shanghai Sunshine Rehabilitation Center), Tongji University, Shanghai, China
- Key Laboratory of Spine and Spinal Cord Injury Repair and Regeneration of Ministry of Education, Department of Neurology, School of Medicine, Neurotoxin Research Center, Tongji Hospital, Tongji University, Shanghai, China
- Zhuang Wu
- Department of Neurology and Neurological Rehabilitation, Shanghai Disabled Persons' Federation Key Laboratory of Intelligent Rehabilitation Assistive Devices and Technologies, School of Medicine, Shanghai Yangzhi Rehabilitation Hospital (Shanghai Sunshine Rehabilitation Center), Tongji University, Shanghai, China
- Key Laboratory of Spine and Spinal Cord Injury Repair and Regeneration of Ministry of Education, Department of Neurology, School of Medicine, Neurotoxin Research Center, Tongji Hospital, Tongji University, Shanghai, China
- Kangwen Peng
- Key Laboratory of Spine and Spinal Cord Injury Repair and Regeneration of Ministry of Education, Department of Neurology, School of Medicine, Neurotoxin Research Center, Tongji Hospital, Tongji University, Shanghai, China
- Jingxing Zhang
- Key Laboratory of Spine and Spinal Cord Injury Repair and Regeneration of Ministry of Education, Department of Neurology, School of Medicine, Neurotoxin Research Center, Tongji Hospital, Tongji University, Shanghai, China
- Yijing He
- Key Laboratory of Spine and Spinal Cord Injury Repair and Regeneration of Ministry of Education, Department of Neurology, School of Medicine, Neurotoxin Research Center, Tongji Hospital, Tongji University, Shanghai, China
- Zhuoyu Zhang
- Key Laboratory of Spine and Spinal Cord Injury Repair and Regeneration of Ministry of Education, Department of Neurology, School of Medicine, Neurotoxin Research Center, Tongji Hospital, Tongji University, Shanghai, China
- Yichen Gao
- IFLYTEK Suzhou Research Institute, Suzhou, China
- Yue Jin
- IFLYTEK Suzhou Research Institute, Suzhou, China
- Xiaoyun Su
- IFLYTEK Suzhou Research Institute, Suzhou, China
- Hongping Zhi
- IFLYTEK Suzhou Research Institute, Suzhou, China
- Qiang Guan
- Key Laboratory of Spine and Spinal Cord Injury Repair and Regeneration of Ministry of Education, Department of Neurology, School of Medicine, Neurotoxin Research Center, Tongji Hospital, Tongji University, Shanghai, China
- Lizhen Pan
- Key Laboratory of Spine and Spinal Cord Injury Repair and Regeneration of Ministry of Education, Department of Neurology, School of Medicine, Neurotoxin Research Center, Tongji Hospital, Tongji University, Shanghai, China
- Lingjing Jin
- Department of Neurology and Neurological Rehabilitation, Shanghai Disabled Persons' Federation Key Laboratory of Intelligent Rehabilitation Assistive Devices and Technologies, School of Medicine, Shanghai Yangzhi Rehabilitation Hospital (Shanghai Sunshine Rehabilitation Center), Tongji University, Shanghai, China
- Key Laboratory of Spine and Spinal Cord Injury Repair and Regeneration of Ministry of Education, Department of Neurology, School of Medicine, Neurotoxin Research Center, Tongji Hospital, Tongji University, Shanghai, China
- Collaborative Innovation Center for Brain Science, Tongji University, Shanghai, China
2
Lim D, Pei W, Lee JW, Musselman KE, Masani K. Feasibility of using a depth camera or pressure mat for visual feedback balance training with functional electrical stimulation. Biomed Eng Online 2024; 23:19. [PMID: 38347584 PMCID: PMC10863251 DOI: 10.1186/s12938-023-01191-y]
Abstract
Individuals with incomplete spinal cord injury/disease are at an increased risk of falling due to their impaired ability to maintain balance. Our research group has developed a closed-loop visual-feedback balance training (VFBT) system coupled with functional electrical stimulation (FES) for rehabilitation of standing balance (FES + VFBT system); however, clinical usage of this system is limited by its reliance on force plates, which are expensive and not easily accessible. This study investigated the feasibility of replacing the force plate with a more affordable and accessible sensor such as a depth camera or pressure mat. Ten able-bodied participants (7 males, 3 females) performed three sets of four different standing balance exercises using the FES + VFBT system with the force plate, while a depth camera and a pressure mat passively collected centre-of-mass and centre-of-pressure data, respectively. Overall, compared with the force plate, the depth camera showed a higher Pearson's correlation (r > 0.98) and lower root mean squared error (RMSE < 4.5 mm) than the pressure mat (r > 0.82; RMSE < 10 mm). Stimulation computed from the depth-camera data also showed lower RMSE, relative to the output of the force-plate-based FES + VFBT system, than stimulation computed from the pressure-mat data. The depth camera thus shows potential as a replacement for the force plate in providing feedback to the FES + VFBT system.
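The agreement metrics used above (Pearson's r and RMSE of each candidate sensor against the force-plate reference) can be sketched like this; the sway signals and noise levels are simulated stand-ins for the three devices.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
# Hypothetical 30 s of mediolateral sway (mm) from a force plate, plus two
# noisier estimates standing in for a depth camera and a pressure mat.
t = np.linspace(0, 30, 900)
force_plate = 15 * np.sin(0.5 * t) + 5 * np.sin(1.7 * t)
depth_camera = force_plate + rng.normal(0, 1.0, t.size)
pressure_mat = force_plate + rng.normal(0, 4.0, t.size)

def agreement(estimate, reference):
    """Pearson's r and RMSE of an estimate against the reference signal."""
    r, _ = pearsonr(estimate, reference)
    rmse = np.sqrt(np.mean((estimate - reference) ** 2))
    return r, rmse

r_cam, rmse_cam = agreement(depth_camera, force_plate)
r_mat, rmse_mat = agreement(pressure_mat, force_plate)
print(f"camera: r={r_cam:.3f}, RMSE={rmse_cam:.1f} mm")
print(f"mat:    r={r_mat:.3f}, RMSE={rmse_mat:.1f} mm")
```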
Affiliation(s)
- Derrick Lim
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- KITE - Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- William Pei
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- KITE - Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Jae W Lee
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- KITE - Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Kristin E Musselman
- KITE - Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Department of Physical Therapy, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Rehabilitation Science Institute, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Kei Masani
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- KITE - Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
3
Wang X, Cao J, Zhao Q, Chen M, Luo J, Wang H, Yu L, Tsui KL, Zhao Y. Identifying sensors-based parameters associated with fall risk in community-dwelling older adults: an investigation and interpretation of discriminatory parameters. BMC Geriatr 2024; 24:125. [PMID: 38302872 PMCID: PMC10836006 DOI: 10.1186/s12877-024-04723-w]
Abstract
BACKGROUND Falls pose a severe threat to the health of older adults worldwide. Determining the gait and kinematic parameters related to an increased risk of falls is essential for developing effective intervention and fall-prevention strategies. This study aimed to identify discriminatory parameters, laying an important basis for developing effective clinical screening tools for identifying high-fall-risk older adults. METHODS Forty-one community-dwelling individuals aged 65 years and above participated in this study. The older adults were classified as high-fall-risk or low-fall-risk individuals based on their Berg Balance Scale (BBS) scores. The participants wore an inertial measurement unit (IMU) while performing the Timed Up and Go (TUG) test, and a depth camera simultaneously acquired images of their movements. After segmenting the data according to subtasks, 142 parameters were extracted from the sensor-based data. A t-test or Mann-Whitney U test was performed to test each parameter's ability to distinguish older adults at high risk of falling, and logistic regression was used to further quantify the role of different parameters in identifying high-fall-risk individuals. Furthermore, we conducted an ablation experiment to explore the complementary information offered by the two sensors. RESULTS Fifteen participants were classified as high-fall-risk individuals and twenty-six as low-fall-risk individuals. Seventeen parameters were significant, with p-values less than 0.05. Some of these parameters, such as the use of a walking aid, the maximum angular velocity around the yaw axis during turn-to-sit, and step length, exhibited the greatest discriminatory ability in identifying high-fall-risk individuals. Additionally, combining features from both devices for fall-risk assessment resulted in a higher AUC of 0.882 compared with using each device separately.
CONCLUSIONS Utilizing different types of sensors can offer more comprehensive information, and relating parameters to physiology provides deeper insight into the identification of high-fall-risk individuals. High-fall-risk individuals typically exhibited a cautious gait, such as a larger step width and shorter step length during walking. In addition, we identified some abnormal gait patterns in high-fall-risk individuals compared with low-fall-risk individuals, such as less knee flexion and a tendency to tilt the pelvis forward during turning.
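The two-stage analysis above (univariate screening with a Mann-Whitney U test, then logistic regression combining both sensors' features) can be sketched as follows. The group sizes match the study, but the two features and all values are simulated, not the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n_high, n_low = 15, 26
# Hypothetical TUG-derived features: step length (m, depth camera) and peak
# yaw angular velocity during turn-to-sit (deg/s, IMU).
step_len = np.r_[rng.normal(0.45, 0.08, n_high), rng.normal(0.60, 0.08, n_low)]
yaw_vel = np.r_[rng.normal(60, 15, n_high), rng.normal(90, 15, n_low)]
y = np.r_[np.ones(n_high), np.zeros(n_low)]

# Stage 1: screen each parameter for group differences.
pvals = []
for name, feat in [("step length", step_len), ("yaw velocity", yaw_vel)]:
    u, p = mannwhitneyu(feat[y == 1], feat[y == 0])
    pvals.append(p)
    print(f"{name}: U={u:.0f}, p={p:.4f}")

# Stage 2: combine both sensors' features in one logistic model.
X = np.c_[step_len, yaw_vel]
clf = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
print(f"combined AUC={auc:.3f}")
```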
Affiliation(s)
- Xuan Wang
- Intelligent Sensing and Proactive Health Research Center, School of Public Health (Shenzhen), Sun Yat-sen University, Shenzhen, China
- Junjie Cao
- Intelligent Sensing and Proactive Health Research Center, School of Public Health (Shenzhen), Sun Yat-sen University, Shenzhen, China
- Qizheng Zhao
- Intelligent Sensing and Proactive Health Research Center, School of Public Health (Shenzhen), Sun Yat-sen University, Shenzhen, China
- Manting Chen
- Intelligent Sensing and Proactive Health Research Center, School of Public Health (Shenzhen), Sun Yat-sen University, Shenzhen, China
- Jiajia Luo
- Intelligent Sensing and Proactive Health Research Center, School of Public Health (Shenzhen), Sun Yat-sen University, Shenzhen, China
- Hailiang Wang
- School of Design, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
- Lisha Yu
- School of Design, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
- Kwok-Leung Tsui
- Grado Department of Industrial and Systems Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA
- Yang Zhao
- Intelligent Sensing and Proactive Health Research Center, School of Public Health (Shenzhen), Sun Yat-sen University, Shenzhen, China
4
Tang WR, Su W, Lien JJJ, Chang CC, Yen YT, Tseng YL. Development of a real-time RGB-D visual feedback-assisted pulmonary rehabilitation system. Heliyon 2024; 10:e23704. [PMID: 38261861 PMCID: PMC10796957 DOI: 10.1016/j.heliyon.2023.e23704]
Abstract
Background Following surgery, perioperative pulmonary rehabilitation (PR) is important for patients with early-stage lung cancer. However, current inpatient programs are often limited in time and space, and outpatient settings have access barriers. Therefore, we aimed to develop a background-free, zero-contact thoracoabdominal movement-tracking model that is easily set up and incorporated into a pre-existing PR program or extended to home-based rehabilitation and remote monitoring. We validated its effectiveness in providing preclinical real-time RGB-D (colour-depth camera) visual feedback. Methods Twelve healthy volunteers performed deep breathing exercises following audio instruction for three cycles, followed by audio instruction and real-time visual feedback for another three cycles. In the visual feedback system, we used a RealSense™ D415 camera to capture RGB and depth images for human pose-estimation with Google MediaPipe. Target-tracking regions were defined based on the relative position of detected joints. The processed depth information of the tracking regions was visualised on a screen as a motion bar to provide real-time visual feedback of breathing intensity. Pulmonary function was simultaneously recorded using spirometric measurements, and changes in pulmonary volume were derived from respiratory airflow signals. Results Our movement-tracking model showed a very strong correlation (r = 0.90 ± 0.05) between thoracic motion signals and spirometric volume, and a strong correlation (r = 0.73 ± 0.22) between abdominal signals and spirometric volume. Displacement of the chest wall was enhanced by RGB-D visual feedback (23 vs 20 mm, P = 0.034), and accompanied by an increased lung volume (2.58 vs 2.30 L, P = 0.003). 
Conclusion We developed an easily implemented thoracoabdominal movement-tracking model and reported the positive impact of real-time RGB-D visual feedback on self-promoted external chest wall expansion, accompanied by increased internal lung volumes. This system can be extended to home-based PR.
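The core of the tracking model above is a depth-image region, anchored between detected joints, whose mean depth is turned into a breathing-intensity signal. A minimal sketch, assuming simulated depth frames and fixed joint pixel coordinates rather than real MediaPipe detections:

```python
import numpy as np

rng = np.random.default_rng(3)

def region_between(shoulder, hip, width=40):
    """Bounding box between two detected joints (pixel coords), as slices."""
    (sx, sy), (hx, hy) = shoulder, hip
    x0 = min(sx, hx) - width // 2
    x1 = max(sx, hx) + width // 2
    return slice(min(sy, hy), max(sy, hy)), slice(x0, x1)

def breathing_signal(depth_frames, rows, cols):
    """Mean chest depth per frame relative to the first frame (mm)."""
    means = np.array([f[rows, cols].mean() for f in depth_frames])
    return means[0] - means  # chest moving toward the camera -> positive

# Hypothetical 480x640 depth frames: a chest plane oscillating ~10 mm
# over one breath cycle, plus sensor noise.
frames = []
for k in range(60):
    frame = np.full((480, 640), 1500.0)
    frame[150:350, 280:360] = 1500.0 - 10 * np.sin(2 * np.pi * k / 60)
    frames.append(frame + rng.normal(0, 0.5, frame.shape))

rows, cols = region_between(shoulder=(300, 150), hip=(320, 350))
signal = breathing_signal(frames, rows, cols)
print(f"peak chest excursion = {signal.max():.1f} mm")
```

In the real system this per-frame value would drive the on-screen motion bar.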
Affiliation(s)
- Wen-Ruei Tang
- Division of Thoracic Surgery, Department of Surgery, National Cheng Kung University Hospital, Tainan, Taiwan
- Wei Su
- Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan
- Jenn-Jier James Lien
- Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan
- Chao-Chun Chang
- Division of Thoracic Surgery, Department of Surgery, National Cheng Kung University Hospital, Tainan, Taiwan
- Yi-Ting Yen
- Division of Thoracic Surgery, Department of Surgery, National Cheng Kung University Hospital, Tainan, Taiwan
- Yau-Lin Tseng
- Division of Thoracic Surgery, Department of Surgery, National Cheng Kung University Hospital, Tainan, Taiwan
5
Lv L, Yang J, Gu F, Fan J, Zhu Q, Liu X. Validity and Reliability of a Depth Camera-Based Quantitative Measurement for Joint Motion of the Hand. J Hand Surg Glob Online 2022; 5:39-47. [PMID: 36704372 PMCID: PMC9870814 DOI: 10.1016/j.jhsg.2022.08.011]
Abstract
Purpose Quantitative measurement of hand motion is essential in evaluating hand function. This study aimed to investigate the validity and reliability of a novel depth camera-based contactless automatic measurement system for assessing hand range of motion and its potential benefits in clinical applications. Methods Five hand gestures were designed to evaluate hand range of motion using a depth camera-based measurement system. Seventy-one volunteers were enrolled to perform the designed hand gestures, and hand range of motion was measured with both the depth camera and manual procedures. System validity was evaluated along 3 dimensions: repeatability, within-laboratory precision, and reproducibility. For system reliability, linear evaluation, the intraclass correlation coefficient, the paired t-test, and bias were used to test the consistency of, and differences between, the depth camera and manual procedures. Results When measuring phalangeal length, repeatability, within-laboratory precision, and reproducibility were 2.63%, 12.87%, and 27.15%, respectively. When measuring angles of hand motion, the mean repeatability and within-laboratory precision were 1.2° and 3.3° for extension of the 5 digits, 2.7° and 10.2° for flexion of the 4 fingers, and 3.1° and 5.3° for abduction of the 4 metacarpophalangeal joints, respectively. For system reliability, the results showed excellent consistency (intraclass correlation coefficient = 0.823; P < .05) and good linearity with the manual procedures (r ≈ 0.909-0.982; P < .001). In addition, 78.3% of the measurements were clinically acceptable. Conclusions Our depth camera-based evaluation system provides acceptable validity and reliability in measuring hand range of motion and offers potential benefits for clinical care and research in hand surgery. However, further studies are required before clinical application.
Clinical relevance This study suggests that a depth camera-based contactless automatic measurement system holds promise for hand function evaluation, diagnosis, and rehabilitation, although it is not yet adequate for all clinical applications.
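The basic geometric step behind such a system is computing a joint angle from three 3D landmarks returned by the depth camera. A minimal sketch, with hypothetical landmark coordinates (the study's gesture set and landmark model are not reproduced here):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 3D points a-b-c."""
    u = np.asarray(a, float) - b
    v = np.asarray(c, float) - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical depth-camera landmarks (mm) for a flexed finger joint:
mcp = (0.0, 0.0, 400.0)   # metacarpophalangeal joint
pip = (0.0, 30.0, 400.0)  # proximal interphalangeal joint
tip = (0.0, 50.0, 420.0)  # fingertip
flexion = 180.0 - joint_angle(mcp, pip, tip)  # 0 deg = fully straight
print(f"PIP flexion = {flexion:.1f} deg")
```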
Affiliation(s)
- Lulu Lv
- Department of Microsurgery, Orthopaedic Trauma and Hand Surgery, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Jiantao Yang
- Department of Microsurgery, Orthopaedic Trauma and Hand Surgery, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Guangdong Province Engineering Laboratory for Soft Tissue Biofabrication, Sun Yat-sen University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory for Orthopaedics and Traumatology, Guangzhou, Guangdong, China
- Fanbin Gu
- Department of Microsurgery, Orthopaedic Trauma and Hand Surgery, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Jingyuan Fan
- Department of Microsurgery, Orthopaedic Trauma and Hand Surgery, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Qingtang Zhu
- Department of Microsurgery, Orthopaedic Trauma and Hand Surgery, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Guangdong Province Engineering Laboratory for Soft Tissue Biofabrication, Sun Yat-sen University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory for Orthopaedics and Traumatology, Guangzhou, Guangdong, China
- Xiaolin Liu
- Department of Microsurgery, Orthopaedic Trauma and Hand Surgery, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Guangdong Province Engineering Laboratory for Soft Tissue Biofabrication, Sun Yat-sen University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory for Orthopaedics and Traumatology, Guangzhou, Guangdong, China
- Corresponding author: Xiaolin Liu, MD, Department of Microsurgery, Orthopaedic Trauma and Hand Surgery, The First Affiliated Hospital, Sun Yat-sen University, No. 58, Zhong Shan Er Lu, Guangzhou, Guangdong 510080, China
6
Greco A, Percannella G, Ritrovato P, Saggese A, Vento M. A deep learning based system for handwashing procedure evaluation. Neural Comput Appl 2022; 35:1-16. [PMID: 35474686 PMCID: PMC9022899 DOI: 10.1007/s00521-022-07194-5]
Abstract
Handwashing preparation is one of the main strategies for reducing the risk of surgical site contamination and thus of infection. Within this context, we propose an embedded system able to automatically analyze, in real time, the sequence of images acquired by a depth camera to evaluate the quality of the handwashing procedure. The designed system runs on an NVIDIA Jetson Nano™ computing platform. We adopt a convolutional neural network, followed by a majority voting scheme, to classify the movement of the worker according to one of the ten gestures defined by the World Health Organization. To test the proposed system, we collected a dataset of 74 video sequences. The results achieved on this dataset confirm the effectiveness of the proposed approach.
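The majority-voting stage mentioned above smooths noisy per-frame network outputs into a stable gesture label. A minimal sketch with hypothetical per-frame predictions (the CNN itself is not reproduced):

```python
import numpy as np

def majority_vote(frame_predictions, window=15):
    """Smooth per-frame gesture labels with a sliding majority vote."""
    preds = np.asarray(frame_predictions)
    out = np.empty_like(preds)
    for i in range(preds.size):
        lo = max(0, i - window // 2)
        hi = min(preds.size, i + window // 2 + 1)
        counts = np.bincount(preds[lo:hi])  # votes per gesture id in window
        out[i] = counts.argmax()
    return out

# Hypothetical per-frame CNN outputs: gesture 3 with two spurious
# single-frame misclassifications (7 and 1).
raw = np.array([3] * 10 + [7] + [3] * 10 + [1] + [3] * 10)
smoothed = majority_vote(raw)
print(smoothed)
```

The isolated errors are outvoted by their neighbours, so the smoothed sequence reports one consistent gesture.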
Affiliation(s)
- Antonio Greco
- Department of Computer and Electrical Engineering and Applied Mathematics, University of Salerno, Fisciano (SA), Italy
- Gennaro Percannella
- Department of Computer and Electrical Engineering and Applied Mathematics, University of Salerno, Fisciano (SA), Italy
- Pierluigi Ritrovato
- Department of Computer and Electrical Engineering and Applied Mathematics, University of Salerno, Fisciano (SA), Italy
- Alessia Saggese
- Department of Computer and Electrical Engineering and Applied Mathematics, University of Salerno, Fisciano (SA), Italy
- Mario Vento
- Department of Computer and Electrical Engineering and Applied Mathematics, University of Salerno, Fisciano (SA), Italy
7
Girase H, Nyayapati P, Booker J, Lotz JC, Bailey JF, Matthew RP. Automated assessment and classification of spine, hip, and knee pathologies from sit-to-stand movements collected in clinical practice. J Biomech 2021; 128:110786. [PMID: 34656825 DOI: 10.1016/j.jbiomech.2021.110786]
Abstract
Efficient, cost-effective methods for quantifying patient biomechanics at the point of care can facilitate faster and more accurate diagnoses. This work presents a new method for diagnosing pre-surgical back, hip, and knee patients by analysing their sit-to-stand motion captured with a Kinect camera. Kinematic and dynamic time-series features were extracted from patient movements collected in the clinic and used to test a variety of machine learning methods for patient classification. The performance of models trained on time-series features was compared against models trained on domain-knowledge features, highlighting the importance of using time-series data for the classification of human movement. Additionally, the effectiveness of semi-supervised learning was tested on partially labelled datasets, providing insight into how to boost classification performance when labelled patient data are difficult to obtain. The best semi-supervised model achieved ∼73% accuracy in distinguishing individuals with low-back pain and hip and knee degeneration from control subjects.
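The semi-supervised setup described above (boosting classification when only part of the cohort is labelled) can be sketched with scikit-learn's self-training wrapper. The feature vectors here are synthetic, not the clinic data, and the label-hiding rate is an assumption for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

# Hypothetical sit-to-stand feature vectors for controls vs patients,
# with most training labels hidden (-1) as in a partially labelled set.
X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

y_partial = y_tr.copy()
rng = np.random.default_rng(0)
y_partial[rng.random(y_partial.size) < 0.7] = -1  # hide 70% of labels

# Self-training: fit on labelled samples, pseudo-label confident
# unlabelled ones, and refit iteratively.
model = SelfTrainingClassifier(SVC(probability=True, random_state=0))
model.fit(X_tr, y_partial)
acc = model.score(X_te, y_te)
print(f"semi-supervised accuracy: {acc:.2f}")
```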
Affiliation(s)
- Harshayu Girase
- Department of Electrical Engineering and Computer Science, University of California at Berkeley, Berkeley, 94720, CA, USA
- Priya Nyayapati
- Department of Orthopaedic Surgery, University of California at San Francisco, San Francisco, 94158, CA, USA
- Jacqueline Booker
- School of Medicine, University of California at San Francisco, San Francisco, 94158, CA, USA
- Jeffrey C Lotz
- Department of Orthopaedic Surgery, University of California at San Francisco, San Francisco, 94158, CA, USA
- Jeannie F Bailey
- Department of Orthopaedic Surgery, University of California at San Francisco, San Francisco, 94158, CA, USA
- Robert P Matthew
- Department of Physical Therapy and Rehabilitation Science, University of California at San Francisco, San Francisco, 94158, CA, USA
8
Ramos WC, Beange KHE, Graham RB. Concurrent validity of a custom computer vision algorithm for measuring lumbar spine motion from RGB-D camera depth data. Med Eng Phys 2021; 96:22-28. [PMID: 34565549 DOI: 10.1016/j.medengphy.2021.08.005]
Abstract
Using RGB-D cameras as an alternative motion capture device can be advantageous for biomechanical spine motion assessments of movement quality and dysfunction due to their lower cost and complexity. In this study, we evaluated RGB-D camera performance relative to gold-standard optoelectronic motion capture equipment. Twelve healthy young adults (6M, 6F) were recruited to perform repetitive spine flexion-extension, while wearing infrared reflective marker clusters placed over their T10-T12 spinous processes and sacrum, and motion capture data were recorded simultaneously by both systems. Custom computer vision algorithms were developed to extract spine angles from depth data. Root mean square error (RMSE) was calculated for continuous Euler angles, and intraclass correlation coefficients (ICC(2,1)) were calculated between minimum and maximum angles and range of motion in all movement planes. RMSE was low (RMSE ≤ 2.05°) and reliability was good to excellent (0.849 ≤ ICC(2,1) ≤ 0.979) across all movement planes. In conclusion, the proposed algorithm for tracking 3D lumbar spine motion during a sagittal movement task from one RGB-D camera is reliable in comparison to gold-standard motion tracking equipment. Future research will investigate accuracy and validity in a wider variety of movements, and will also investigate the development of novel methods to measure spine motion without using infrared reflective markers.
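The two agreement statistics used above, RMSE on continuous angles and ICC(2,1) on per-participant summary angles, can be computed from first principles. The peak-flexion values below are simulated for 12 hypothetical participants, not the study's measurements.

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two measurement vectors."""
    return np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings: n subjects x k measurement systems."""
    Y = np.asarray(ratings, float)
    n, k = Y.shape
    grand = Y.mean()
    ms_r = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
    ms_c = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)  # systems
    sse = np.sum((Y - Y.mean(axis=1, keepdims=True)
                    - Y.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_e = sse / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

rng = np.random.default_rng(4)
# Hypothetical peak flexion angles (deg): optoelectronic reference vs RGB-D.
reference = rng.normal(50, 8, 12)
rgbd = reference + rng.normal(0, 1.5, 12)

err = rmse(rgbd, reference)
icc = icc2_1(np.c_[reference, rgbd])
print(f"RMSE = {err:.2f} deg, ICC(2,1) = {icc:.3f}")
```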
Affiliation(s)
- Wantuir C Ramos
- School of Human Kinetics, Faculty of Health Sciences, University of Ottawa, 200 Lees Avenue, Ottawa, ON K1N 6N5, Canada
- Kristen H E Beange
- Department of Systems and Computer Engineering, Faculty of Engineering and Design, Carleton University, 1125 Colonel By Drive, Ottawa, ON K1S 5B6, Canada
- Ottawa-Carleton Institute for Biomedical Engineering, Ottawa, ON, Canada
- Ryan B Graham
- School of Human Kinetics, Faculty of Health Sciences, University of Ottawa, 200 Lees Avenue, Ottawa, ON K1N 6N5, Canada
- Ottawa-Carleton Institute for Biomedical Engineering, Ottawa, ON, Canada
9
Abstract
Digital signage is widely utilized in digital-out-of-home (DOOH) advertising for marketing and business. Recently, the combination of the digital camera and digital signage enables the advertiser to gather the audience demographic for audience measurement. Audience measurement is useful for the advertiser to understand the audience's behavior and improve their business strategies. When an audience is facing the digital display, the vision-based DOOH system will process the audience's face and broadcast a personalized advertisement. Most of the digital signage is available in an uncontrolled environment of public areas. Thus, it poses two main challenges for the vision-based DOOH system to track the audience's movement, which are multiple adjacent faces and occlusion by passer-by. In this paper, a new framework is proposed to combine the digital signage with a depth camera for tracking multi-face in the three-dimensional (3D) environment. The proposed framework extracts the audience's face centroid position (x, y) and depth information (z) and plots into the aerial map to simulate the audience's movement that is corresponding to the real-world environment. The advertiser can further measure the advertising effectiveness through the audience's behavior.
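Mapping a face centroid's pixel position plus depth onto a top-down aerial map is a standard pinhole-camera projection. A minimal sketch; the focal length, principal point, and detections are hypothetical, not the paper's calibration.

```python
import numpy as np

def to_aerial(u, z_mm, fx=600.0, cx=320.0):
    """Map a face centroid's pixel column u and depth z (mm) to top-down
    floor coordinates (lateral x, forward z) via the pinhole model."""
    x_mm = (u - cx) * z_mm / fx
    return x_mm, z_mm

# Hypothetical detections from one depth frame: (pixel column u, depth mm).
faces = [(320, 2000), (200, 1500), (500, 3000)]
aerial = [to_aerial(u, z) for u, z in faces]
for (u, z), (ax, az) in zip(faces, aerial):
    print(f"pixel u={u}, depth={z} mm -> aerial ({ax:.0f}, {az:.0f}) mm")
```

Plotting these (x, z) points frame by frame yields the aerial movement map described in the abstract.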
Affiliation(s)
- Chuan-Chuan Low
- Faculty of Information Science & Technology, Multimedia University, Jalan Ayer Keroh Lama, Melaka, 75450, Malaysia
- Lee-Yeng Ong
- Faculty of Information Science & Technology, Multimedia University, Jalan Ayer Keroh Lama, Melaka, 75450, Malaysia
- Corresponding author.
- Voon-Chet Koo
- Faculty of Engineering & Technology, Multimedia University, Jalan Ayer Keroh Lama, Melaka, 75450, Malaysia
- Meng-Chew Leow
- Faculty of Information Science & Technology, Multimedia University, Jalan Ayer Keroh Lama, Melaka, 75450, Malaysia
10
Qin C, Ran X, Wu Y, Chen X. The development of non-contact user interface of a surgical navigation system based on multi-LSTM and a phantom experiment for zygomatic implant placement. Int J Comput Assist Radiol Surg 2019; 14:2147-2154. [PMID: 31300964 DOI: 10.1007/s11548-019-02031-y]
Abstract
PURPOSE Image-guided surgical navigation systems (SNS) have proved to be an increasingly important assistive tool for minimally invasive surgery. However, standard input devices such as the keyboard and mouse are a latent vector for infectious agents, posing risks to patients and surgeons. To solve this human-computer interaction (HCI) problem, we proposed an optimized LSTM structure based on a depth camera to recognize gestures and applied it to an in-house oral and maxillofacial surgical navigation system (Qin et al. in Int J Comput Assist Radiol Surg 14(2):281-289, 2019). METHODS The proposed optimized LSTM structure, named multi-LSTM, allows multiple input layers and takes into account the relationships between inputs. To integrate gesture recognition with the SNS, four left-hand signs waving along four directions were designed to correspond to four mouse operations, and the motion of the right hand was used to control the movement of the cursor. Finally, a phantom study of zygomatic implant placement was conducted to evaluate the feasibility of multi-LSTM as an HCI.
RESULTS 3D hand trajectories of both the wrist and elbow from 10 participants were collected to train the recognition network. Tenfold cross-validation was then performed for sign classification, and the mean accuracy was 96% ± 3%. In the phantom study, four implants were successfully placed; the average deviations between planned and placed implants were 1.22 mm and 1.70 mm for the entry and end points, respectively, while the angular deviation ranged from 0.4° to 2.9°. CONCLUSION The results showed that this non-contact user interface based on multi-LSTM could be a promising tool to eliminate the disinfection problem in the operating room and to reduce the manipulation complexity of surgical navigation systems.
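The abstract describes a "multi-LSTM" whose key idea is accepting several input streams (here, wrist and elbow trajectories) in one recurrent model. The paper's actual architecture is not reproduced in the abstract, so the following NumPy sketch is purely illustrative: the per-stream projections, dimensions, and random trajectory data are all assumptions, and only the general shape of the idea (separate input layers feeding a shared LSTM update) is kept.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: gates computed from input x and previous hidden state h."""
    z = W @ x + U @ h + b                      # stacked gate pre-activations
    i, f, o, g = np.split(z, 4)
    i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
    g = np.tanh(g)
    c_new = f * c + i * g                      # cell-state update
    return np.tanh(c_new) * o, c_new           # new hidden state, cell state

hidden = 8
# Two input streams (wrist and elbow 3D positions), each with its own
# input projection before the shared recurrent update -- the "multiple
# input layers" idea, with hypothetical dimensions.
P_wrist = rng.normal(size=(6, 3))
P_elbow = rng.normal(size=(6, 3))
W = rng.normal(size=(4 * hidden, 12)) * 0.1    # 12 = two 6-dim projections
U = rng.normal(size=(4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)

h, c = np.zeros(hidden), np.zeros(hidden)
for t in range(20):                            # a toy 20-frame trajectory
    wrist, elbow = rng.normal(size=3), rng.normal(size=3)
    x = np.concatenate([P_wrist @ wrist, P_elbow @ elbow])
    h, c = lstm_step(x, h, c, W, U, b)

print(h.shape)  # the final hidden state would feed a classifier over the 4 signs
```

In a trained system the final hidden state would pass through a softmax over the four gesture classes; here the weights are random, so only the data flow is demonstrated.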
11
Dubois A, Mouthon A, Sivagnanaselvam RS, Bresciani JP. Fast and automatic assessment of fall risk by coupling machine learning algorithms with a depth camera to monitor simple balance tasks. J Neuroeng Rehabil 2019; 16:71. [PMID: 31186002] [PMCID: PMC6560720] [DOI: 10.1186/s12984-019-0532-x]
Abstract
BACKGROUND Falls in the elderly constitute a major health issue associated with population ageing. Current clinical tests evaluating fall risk mostly consist of assessing balance abilities. The devices used for these tests can be expensive or inconvenient to set up. We investigated whether, how, and to what extent fall risk could be assessed using a low-cost ambient sensor to monitor balance tasks. METHOD Eighty-four participants, forty of whom were 65 or older, performed eight simple balance tasks in front of a Microsoft Kinect sensor. Custom-made algorithms coupled to the Kinect sensor were used to automatically extract body-configuration parameters such as body centroid and dispersion. Participants were then classified into two groups using a clustering method, the clusters being formed from the parameters measured by the sensor for each balance task. For each participant, fall risk was independently assessed using known risk factors such as age and average physical activity, as well as the participant's performance on the Timed Up and Go clinical test. RESULTS Standing with a normal stance and the eyes closed on a foam pad, and standing with a narrow stance and the eyes closed on regular ground, were the two balance tasks for which the classification outcome best matched fall risk as assessed by the three known risk factors. Standing on a foam pad with the eyes closed was the task yielding the most robust results. CONCLUSION Our method constitutes a simple, fast, and reliable way to assess fall risk in elderly people more frequently. Importantly, this method requires very little space, time, and equipment, so it could be easily and frequently used by a large number of health professionals, in particular family physicians. Therefore, we believe that the use of this method would substantially contribute to improving fall prevention. TRIAL REGISTRATION CER-VD 2015-00035. Registered 7 December 2015.
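The abstract mentions extracting body centroid and dispersion from skeleton data and grouping participants with an unnamed clustering method. As a rough illustration only: the feature definitions, the simulated joint data, and the choice of two-cluster k-means below are all assumptions, not the authors' published algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)

def balance_features(joints):
    """joints: (frames, n_joints, 3) array of 3D positions from the sensor.
    Returns a per-trial feature vector: total centroid sway and mean body
    dispersion, in the spirit of the parameters named in the abstract."""
    centroid = joints.mean(axis=1)                          # (frames, 3)
    sway = np.linalg.norm(np.diff(centroid, axis=0), axis=1).sum()
    dispersion = np.linalg.norm(joints - centroid[:, None, :], axis=2).mean()
    return np.array([sway, dispersion])

# Two simulated groups: a steady stance and a larger-sway stance (toy data).
steady = [balance_features(rng.normal(0, 0.01, (100, 20, 3))) for _ in range(10)]
shaky = [balance_features(rng.normal(0, 0.05, (100, 20, 3))) for _ in range(10)]
X = np.array(steady + shaky)

# Minimal two-cluster k-means, standing in for the unspecified clustering method.
centers = X[[0, -1]]
for _ in range(10):
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([X[labels == k].mean(axis=0) for k in (0, 1)])

print(labels)
```

On well-separated toy data the two clusters recover the two simulated groups; in practice the cluster-versus-risk-factor agreement would be evaluated per balance task, as the study does.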
Affiliation(s)
- Amandine Dubois
- Department of Neurosciences & Movement Sciences, University of Fribourg, Fribourg, 1700, Switzerland.
- Audrey Mouthon
- Department of Neurosciences & Movement Sciences, University of Fribourg, Fribourg, 1700, Switzerland
- Jean-Pierre Bresciani
- Department of Neurosciences & Movement Sciences, University of Fribourg, Fribourg, 1700, Switzerland; Grenoble Alpes University, CNRS, LPNC UMR 5105, Grenoble, F-38000, France
12
Clark RA, Mentiplay BF, Hough E, Pua YH. Three-dimensional cameras and skeleton pose tracking for physical function assessment: A review of uses, validity, current developments and Kinect alternatives. Gait Posture 2019; 68:193-200. [PMID: 30500731] [DOI: 10.1016/j.gaitpost.2018.11.029]
Abstract
BACKGROUND Three-dimensional camera systems that integrate depth assessment with traditional two-dimensional images, such as the Microsoft Kinect, Intel Realsense, StereoLabs Zed and Orbecc, hold great promise as physical function assessment tools. When combined with point cloud and skeleton pose tracking software, they can be used to assess many different aspects of physical function and anatomy. These assessments have received great interest over the past decade and will likely receive further study as depth sensing and augmented reality become more integrated into everyday smartphone cameras. RESEARCH QUESTION The aim of this review is to discuss how these devices work, what options are available, the best methods for performing assessments, and how they can be used in the future. METHODS Firstly, a review of the Microsoft Kinect devices and their associated automated, artificial-intelligence-based skeleton tracking algorithms is provided. This includes a narrative critique of the validity and clinical utility of these devices for assessing different aspects of physical function, including spatiotemporal, kinematic and inverse dynamics data derived from gait and balance trials, and anatomical assessments performed using the depth sensor information. Methods for improving the accuracy of the data are examined, including multiple-camera systems, sensor fusion with inertial measurement units, model fitting, and marker tracking. Secondly, alternative hardware is summarised, including other structured-light and time-of-flight methods, stereoscopic cameras, and augmented reality leveraging smartphone and tablet cameras to perform measurements in three-dimensional space. Software options related to depth sensing cameras are then discussed, focussing on recent advances such as OpenPose and web-based methods such as PoseNet.
RESULTS AND SIGNIFICANCE The clinical and non-laboratory utility of these devices holds great promise for physical function assessment, and recent developments could strengthen their ability to provide important and impactful health-related data.
13
Lacher RM, Vasconcelos F, Williams NR, Rindermann G, Hipwell J, Hawkes D, Stoyanov D. Nonrigid reconstruction of 3D breast surfaces with a low-cost RGBD camera for surgical planning and aesthetic evaluation. Med Image Anal 2019; 53:11-25. [PMID: 30660103] [PMCID: PMC6854464] [DOI: 10.1016/j.media.2019.01.003]
Abstract
A nonrigid 3D breast surface reconstruction pipeline is proposed that runs on a standard PC and takes noisy RGBD input video from a Kinect-style camera. Pairwise nonrigid ICP is extended to the multi-view case, incorporating soft mobility constraints in areas of non-overlap. Shortest-distance correspondences, a new technique for data association, are shown to lead to consistently better alignment. The method is able to reconstruct clinical-quality surface models in spite of varying degrees of postural sway during data capture. Landmark-based and volumetric quantitative validation in metric units demonstrates reconstruction quality on par with the gold standard and superior to a competing method.
Accounting for 26% of all new cancer cases worldwide, breast cancer remains the most common form of cancer in women. Although early breast cancer has a favourable long-term prognosis, roughly a third of patients suffer from a suboptimal aesthetic outcome despite breast conserving cancer treatment. Clinical-quality 3D modelling of the breast surface therefore assumes an increasingly important role in advancing treatment planning, prediction and evaluation of breast cosmesis. Yet, existing 3D torso scanners are expensive and either infrastructure-heavy or subject to motion artefacts. In this paper we employ a single consumer-grade RGBD camera with an ICP-based registration approach to jointly align all points from a sequence of depth images non-rigidly. Subtle body deformation due to postural sway and respiration is successfully mitigated leading to a higher geometric accuracy through regularised locally affine transformations. We present results from 6 clinical cases where our method compares well with the gold standard and outperforms a previous approach. We show that our method produces better reconstructions qualitatively by visual assessment and quantitatively by consistently obtaining lower landmark error scores and yielding more accurate breast volume estimates.
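The pipeline above builds on pairwise nonrigid ICP extended to multiple views; the nonrigid, regularised variant is beyond a short sketch, but the underlying rigid point-to-point ICP loop (closest-point matching followed by a Kabsch best-fit transform) can be illustrated. Everything below, including the synthetic point cloud and the number of iterations, is an assumption for demonstration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(2)

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping points P onto Q (least squares)."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, Q.mean(0) - R @ P.mean(0)

def icp(src, dst, iters=20):
    """Minimal rigid point-to-point ICP with closest-point correspondences."""
    P = src.copy()
    for _ in range(iters):
        # nearest neighbour in dst for every point of P
        nn = dst[np.argmin(((P[:, None] - dst[None]) ** 2).sum(-1), axis=1)]
        R, t = kabsch(P, nn)
        P = P @ R.T + t
    return P

# Synthetic "surface": a random cloud and a slightly rotated/translated copy.
dst = rng.normal(size=(200, 3))
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.05, -0.03, 0.02])

aligned = icp(src, dst)
print(np.linalg.norm(aligned - dst, axis=1).mean())
```

The nonrigid extension replaces the single rigid transform with locally affine transformations plus a smoothness regulariser, which is what lets the paper absorb postural sway between frames.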
Affiliation(s)
- R M Lacher
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK.
- F Vasconcelos
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK.
- N R Williams
- Surgical & Interventional Trials Unit, University College London, London, UK.
- J Hipwell
- Centre for Medical Image Computing (CMIC), University College London, London, UK.
- D Hawkes
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK.
- D Stoyanov
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK.
14
Groisser B, Kimmel R, Feldman G, Rozen N, Wolf A. 3D Reconstruction of Scoliotic Spines from Stereoradiography and Depth Imaging. Ann Biomed Eng 2018; 46:1206-1215. [PMID: 29687237] [DOI: 10.1007/s10439-018-2033-7]
Abstract
Spine shape can be reconstructed from stereoradiography, but doing so often requires specialized infrastructure or fails to account for subject posture. In this paper, a protocol is presented for stereo reconstructions that integrates surface recordings with radiography and naturally accounts for variations in patient posture. Low-cost depth cameras are added to an existing radiographic system to capture patient pose. A statistical model of human body shape is learned from public datasets and registered to the depth scans, providing 3D correspondence across images for stereo reconstruction of radiographic landmarks. A radiographic phantom was used to validate these methods in vitro, with an RMS 3D landmark reconstruction error of 2.0 mm. Surfaces were automatically and reliably registered, with a translation disparity of SD 12 mm and a rotation disparity of SD 0.5°. The proposed method is suitable for 3D radiographic reconstructions and may be beneficial in compensating for involuntary patient motion.
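Once corresponding landmarks are identified in the two radiographic views, stereo reconstruction reduces to triangulation from two calibrated cameras. A standard linear (DLT) triangulation sketch follows; the intrinsics, the 90-degree biplanar geometry, and the test landmark are hypothetical and stand in for the paper's actual calibration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one landmark from two views.
    P1, P2: 3x4 projection matrices; x1, x2: 2D image points."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector = homogeneous 3D point
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point with a 3x4 projection matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical biplanar setup: two views 90 degrees apart, shared intrinsics.
K = np.array([[1000.0, 0, 256], [0, 1000.0, 256], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), [[0], [0], [10]]])
Ry = np.array([[0.0, 0, 1], [0, 1, 0], [-1, 0, 0]])
P2 = K @ np.hstack([Ry, [[0], [0], [10]]])

X_true = np.array([0.3, -0.2, 1.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.round(X_hat, 6))
```

With noise-free projections the landmark is recovered exactly; the paper's registered body-shape model is what supplies the cross-view correspondences that make this triangulation possible despite posture changes.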
Affiliation(s)
- Benjamin Groisser
- Department of Mechanical Engineering, Technion Israel Institute of Technology, 32000, Haifa, Israel.
- Ron Kimmel
- Department of Computer Science, Technion Israel Institute of Technology, 32000, Haifa, Israel
- Guy Feldman
- Department of Orthopedics, Emek Medical Center, Yitshak Rabin Boulevard 21, 1834111, Afula, Israel
- Nimrod Rozen
- Department of Orthopedics, Emek Medical Center, Yitshak Rabin Boulevard 21, 1834111, Afula, Israel
- Alon Wolf
- Department of Mechanical Engineering, Technion Israel Institute of Technology, 32000, Haifa, Israel
15
Dubois A, Bresciani JP. Validation of an ambient system for the measurement of gait parameters. J Biomech 2018; 69:175-180. [PMID: 29397110] [DOI: 10.1016/j.jbiomech.2018.01.024]
Abstract
Fall risk in elderly people is usually assessed using clinical tests. These tests consist of a subjective evaluation of gait performed by healthcare professionals, most of the time shortly after the first fall occurrence. We propose to complement this one-time, subjective evaluation with a more quantitative analysis of the gait pattern using a Microsoft Kinect. To evaluate the potential of the Kinect sensor for such a quantitative gait analysis, we benchmarked its performance against that of a gold-standard motion capture system, namely the OptiTrack. The Kinect analysis relied on a custom algorithm specifically developed for this sensor, whereas the OptiTrack analysis relied on the built-in OptiTrack algorithm. We measured gait parameters such as step length, step duration, cadence, and gait speed in twenty-five subjects, and compared the results provided by the Kinect and OptiTrack systems. These comparisons were performed using Bland-Altman plots (95% bias and limits of agreement), percentage error, Spearman's correlation coefficient, the concordance correlation coefficient, and intra-class correlation. The agreement between the measurements made with the two motion capture systems was very high, demonstrating that, associated with the right algorithm, the Kinect is a very reliable and valuable tool for gait analysis. Importantly, the measured spatio-temporal parameters varied significantly between age groups, with step length and gait speed proving the most effective discriminating parameters. Kinect monitoring and quantitative gait-pattern analysis could therefore be routinely used to complement subjective clinical evaluation and improve fall risk assessment during rehabilitation.
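The Bland-Altman statistics used above (bias and 95% limits of agreement between two measurement methods) are straightforward to compute. A sketch with simulated step-length data follows; the sample size, bias, and noise levels are hypothetical, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between paired measurements."""
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical step-length measurements (m) from two systems on 25 subjects.
optitrack = rng.normal(0.65, 0.05, 25)
kinect = optitrack + rng.normal(0.002, 0.01, 25)  # small bias plus noise

bias, (lo, hi) = bland_altman(kinect, optitrack)

# Percentage error relative to the mean of the two methods.
pct_error = 1.96 * (kinect - optitrack).std(ddof=1) \
    / np.mean((kinect + optitrack) / 2) * 100
print(round(bias, 4), round(pct_error, 1))
```

On a Bland-Altman plot these quantities appear as the horizontal bias line and the two limit lines around the per-subject differences.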
Affiliation(s)
- Amandine Dubois
- Department of Medicine, University of Fribourg, Fribourg, Switzerland.
16
Okayama T, Goto T, Toyoda A. Assessing nest-building behavior of mice using a 3D depth camera. J Neurosci Methods 2015; 251:151-7. [PMID: 26051553] [DOI: 10.1016/j.jneumeth.2015.05.019]
Abstract
We developed a novel method to evaluate the nest-building behavior of mice using an inexpensive depth camera. The depth camera clearly captured nest-building behavior. Using three-dimensional information from the depth camera, we obtained objective features for assessing nest-building behavior, including "volume," "radius," and "mean height". The "volume" represents the change in volume of the nesting material, a pressed cotton square that a mouse shreds and untangles in order to build its nest. During the nest-building process, the total volume of the cotton fragments increases. The "radius" refers to the radius of the circle enclosing the fragments of cotton; it describes the extent of nesting-material dispersion. The "radius" averaged approximately 60 mm when a nest was built. The "mean height" represents the change in the mean height of objects: if the nest walls were high, the "mean height" was also high. These features provided useful information for the assessment of nest-building behavior, similar to conventional methods. However, using the novel method, we found that JF1 mice built nests with higher walls than B6 mice, and B6 mice built nests faster than JF1 mice. Thus, our novel method can evaluate differences in nest-building behavior that cannot be detected or quantified by conventional methods. In future studies, we will evaluate the nest-building behaviors of genetically modified, as well as several inbred, strains of mice, with several nesting materials.
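The three features described above can be approximated from a single overhead depth frame. The toy sketch below makes this concrete; the thresholds, units, pixel geometry, and the simulated ring-shaped "nest" are all assumptions for illustration, not the authors' processing pipeline.

```python
import numpy as np

def nest_features(depth, floor_depth, px_area, height_thresh=5.0):
    """Approximate 'volume', 'radius' and 'mean height' of nesting material
    from one overhead depth frame (hypothetical units throughout).
    depth: 2D array of camera-to-surface distances; floor_depth: empty-cage
    depth; px_area: floor area covered by one pixel."""
    height = floor_depth - depth            # height above the cage floor
    material = height > height_thresh       # pixels occupied by material
    volume = height[material].sum() * px_area
    ys, xs = np.nonzero(material)
    centre = np.array([ys.mean(), xs.mean()])
    radius = np.hypot(ys - centre[0], xs - centre[1]).max()
    return volume, radius, height[material].mean()

# Toy frame: a raised ring of "cotton" (the nest walls) on a flat floor.
depth = np.full((100, 100), 200.0)
yy, xx = np.mgrid[:100, :100]
dist = np.hypot(yy - 50, xx - 50)
ring = (dist > 20) & (dist < 30)
depth[ring] -= 15.0                          # nest walls 15 units high

vol, rad, mean_h = nest_features(depth, 200.0, 1.0)
print(round(rad, 1), round(mean_h, 1))
```

Tracking these three numbers over successive frames gives the time courses of volume increase, material dispersion, and wall height that the study compares between strains.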
Affiliation(s)
- Tsuyoshi Okayama
- College of Agriculture, Ibaraki University, Ami 300-0393, Ibaraki, Japan; United Graduate School of Agricultural Science, Tokyo University of Agriculture and Technology, Fuchu-city 183-8509, Tokyo, Japan; Ibaraki University Cooperation between Agriculture and Medical Science (IUCAM), Ami 300-0393, Ibaraki, Japan.
- Tatsuhiko Goto
- College of Agriculture, Ibaraki University, Ami 300-0393, Ibaraki, Japan; Ibaraki University Cooperation between Agriculture and Medical Science (IUCAM), Ami 300-0393, Ibaraki, Japan.
- Atsushi Toyoda
- College of Agriculture, Ibaraki University, Ami 300-0393, Ibaraki, Japan; United Graduate School of Agricultural Science, Tokyo University of Agriculture and Technology, Fuchu-city 183-8509, Tokyo, Japan; Ibaraki University Cooperation between Agriculture and Medical Science (IUCAM), Ami 300-0393, Ibaraki, Japan.
17
Spector JT, Lieblich M, Bao S, McQuade K, Hughes M. Automation of workplace lifting hazard assessment for musculoskeletal injury prevention. Ann Occup Environ Med 2014; 26:15. [PMID: 24987523] [PMCID: PMC4076760] [DOI: 10.1186/2052-4374-26-15]
Abstract
Objectives Existing methods for practically evaluating musculoskeletal exposures such as posture and repetition in workplace settings have limitations. We aimed to automate the estimation of parameters in the revised United States National Institute for Occupational Safety and Health (NIOSH) lifting equation, a standard manual observational tool used to evaluate back injury risk related to lifting in workplace settings, using depth camera (Microsoft Kinect) and skeleton algorithm technology. Methods A large dataset (approximately 22,000 frames, derived from six subjects) of lifting and other motions recorded simultaneously in a laboratory setting using the Kinect (Microsoft Corporation, Redmond, Washington, United States) and a standard optical motion capture system (Qualysis Motion Capture Systems, Qualysis AB, Sweden) was assembled. Error-correction regression models were developed to improve the accuracy of NIOSH lifting equation parameters estimated from the Kinect skeleton. Kinect-Qualysis errors were modelled using gradient-boosted regression trees with a Huber loss function. Models were trained on data from all but one subject and tested on the excluded subject. Finally, models were tested on three lifting trials performed by subjects not involved in generating the model-building dataset. Results Error-correction appears to produce estimates of NIOSH lifting equation parameters that are more accurate than those derived from the Microsoft Kinect algorithm alone. Our error-correction models substantially decreased the variance of parameter errors. In general, the Kinect underestimated parameters, and modelling reduced this bias, particularly for the more biased estimates. Use of the raw Kinect skeleton model tended to result in falsely high recommended weight limits, whereas error-corrected models gave more conservative, protective estimates.
Conclusions Our results suggest that it may be possible to produce reasonable estimates of posture and temporal task elements such as task frequency in an automated fashion, although these findings should be confirmed in a larger study. Further work is needed to incorporate force assessments and address workplace feasibility challenges. We anticipate that this approach could ultimately be used to perform large-scale musculoskeletal exposure assessment, not only for research but also to provide real-time feedback to workers and employers during work-method improvement activities and employee training.
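The parameters being estimated feed the revised NIOSH lifting equation, which multiplies a 23 kg load constant by a series of penalty multipliers to obtain the recommended weight limit (RWL). A sketch of the metric form follows; the frequency and coupling multipliers, normally looked up in the published tables, are simplified here to direct arguments.

```python
def niosh_rwl(H, V, D, A, FM=1.0, CM=1.0):
    """Revised NIOSH lifting equation, metric form (simplified sketch:
    FM and CM are table lookups in the real procedure, passed in here).
    H: horizontal location (cm), V: vertical location (cm),
    D: vertical travel distance (cm), A: asymmetry angle (degrees)."""
    LC = 23.0                        # load constant, kg
    HM = min(1.0, 25.0 / H)          # horizontal multiplier
    VM = 1 - 0.003 * abs(V - 75)     # vertical multiplier
    DM = min(1.0, 0.82 + 4.5 / D)    # distance multiplier
    AM = 1 - 0.0032 * A              # asymmetry multiplier
    return LC * HM * VM * DM * AM * FM * CM

# Ideal lift: load 25 cm out horizontally, at knuckle height, minimal travel.
print(round(niosh_rwl(H=25, V=75, D=25, A=0), 1))  # -> 23.0
```

Under ideal conditions every multiplier equals 1 and the RWL is the full 23 kg load constant; any parameter the Kinect misestimates (H being a common culprit) shifts the corresponding multiplier and hence the limit, which is why the error-correction step matters.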
Affiliation(s)
- June T Spector
- Department of Environmental & Occupational Health Sciences, University of Washington, 4225 Roosevelt Way NE, Suite 100, Seattle, WA 98105, USA; Department of Medicine, University of Washington, 4225 Roosevelt Way NE, Suite 100, Seattle, WA 98105, USA
- Max Lieblich
- Department of Mathematics, University of Washington, Seattle, WA, USA
- Stephen Bao
- Safety and Health Assessment and Research for Prevention (SHARP) Program, Washington State Department of Labor and Industries, Olympia, WA, USA
- Kevin McQuade
- Department of Rehabilitation Medicine, University of Washington, Seattle, WA, USA
- Margaret Hughes
- Department of Environmental & Occupational Health Sciences, University of Washington, 4225 Roosevelt Way NE, Suite 100, Seattle, WA 98105, USA