1
Dick L, Boyle CP, Skipworth RJE, Smink DS, Tallentire VR, Yule S. Automated analysis of operative video in surgical training: scoping review. BJS Open 2024; 8:zrae124. PMID: 39413048; PMCID: PMC11482280; DOI: 10.1093/bjsopen/zrae124.
Abstract
BACKGROUND There is increasing availability of operative video for use in surgical training. Emerging technologies can now assess video footage and automatically generate metrics that could be harnessed to improve the assessment of operative performance. However, a comprehensive understanding of which technology features are most impactful in surgical training is lacking. The aim of this scoping review was to explore the current use of automated video analytics in surgical training. METHODS PubMed, Scopus, Web of Science, and the Cochrane database were searched to 29 September 2023, following the PRISMA extension for scoping reviews (PRISMA-ScR) guidelines. Search terms included 'trainee', 'video analytics', and 'education'. Articles were screened independently by two reviewers to identify studies that applied automated video analytics to trainee-performed operations. Data on the methods of analysis, metrics generated, and application to training were extracted. RESULTS Of the 6736 articles screened, 13 studies were identified. Computer vision tracking was the most common method of video analysis. Metrics were described for processes (for example, movement of instruments), outcomes (for example, intraoperative phase duration), and critical safety elements (for example, the critical view of safety in laparoscopic cholecystectomy). Automated metrics were able to differentiate between skill levels (for example, consultant versus trainee) and correlated with traditional methods of assessment. There was a lack of longitudinal application to training, and only one qualitative study reported the experience of trainees using automated video analytics. CONCLUSION The performance metrics generated from automated video analysis are varied and encompass several domains. Validation of the analysis techniques and the metrics generated is a priority for future research, after which evidence demonstrating the impact on training can be established.
Affiliations
- Lachlan Dick: Surgical Sabermetrics Laboratory, Usher Institute, University of Edinburgh, Edinburgh, UK; Medical Education Directorate, NHS Lothian, Edinburgh, UK
- Connor P Boyle: Department of Surgery, Royal Infirmary of Edinburgh, Edinburgh, UK
- Richard J E Skipworth: Surgical Sabermetrics Laboratory, Usher Institute, University of Edinburgh, Edinburgh, UK; Clinical Surgery, University of Edinburgh, Edinburgh, UK
- Douglas S Smink: Department of Surgery, Brigham and Women's Hospital/Harvard Medical School, Boston, Massachusetts, USA
- Steven Yule: Surgical Sabermetrics Laboratory, Usher Institute, University of Edinburgh, Edinburgh, UK; Clinical Surgery, University of Edinburgh, Edinburgh, UK
2
Radwin RG, Hu YH, Akkas O, Bao S, Harris-Adamson C, Lin JH, Meyers AR, Rempel D. Comparison of the observer, single-frame video and computer vision hand activity levels. Ergonomics 2023; 66:1132-1141. PMID: 36227226; PMCID: PMC10130228; DOI: 10.1080/00140139.2022.2136407.
Abstract
Observer, manual single-frame video, and automated computer vision measures of the hand activity level (HAL) were compared. HAL can be measured three ways: (1) observer rating (HALO), (2) calculation from single-frame multimedia video task analysis measuring frequency (F) and duty cycle (D) (HALF), or (3) automated computer vision (HALC). This study analysed videos collected from three prospective cohort studies to ascertain HALO, HALF, and HALC for 419 industrial videos. Although the differences among the three methods were relatively small on average (<1), they were statistically significant (p < .001). Agreement between the HALC and HALF ratings within ±1 point on the HAL scale was the most consistent: more than two thirds (68%) of all cases fell within that range, and a linear regression through the mean had a coefficient of 1.03 (R² = 0.89). The results suggest that the computer vision methodology yields results comparable to single-frame video analysis. Practitioner summary: The ACGIH hand activity level (HAL) was obtained for 419 industrial tasks using three methods: observation, calculation from single-frame video analysis, and computer vision. The computer vision methodology produced results comparable to single-frame video analysis.
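The agreement analysis described here is easy to reproduce in outline. Below is a minimal sketch, with synthetic ratings standing in for the 419 industrial videos, of the two reported statistics: the fraction of HALC-HALF pairs within ±1 HAL point, and a least-squares line (which passes through the means) with its R².

```python
# Hypothetical agreement check between two HAL rating methods (HALC vs HALF).
# hal_c and hal_f are illustrative synthetic arrays, not the study data.
import numpy as np

rng = np.random.default_rng(0)
hal_f = rng.uniform(1, 9, size=419)           # single-frame video ratings
hal_c = hal_f + rng.normal(0, 0.6, size=419)  # computer vision ratings with noise

within_one = np.mean(np.abs(hal_c - hal_f) <= 1.0)   # fraction within +/-1 HAL point

slope, intercept = np.polyfit(hal_f, hal_c, 1)       # least-squares line through the means
pred = slope * hal_f + intercept
r2 = 1 - np.sum((hal_c - pred) ** 2) / np.sum((hal_c - hal_c.mean()) ** 2)

print(f"within +/-1: {within_one:.0%}, slope: {slope:.2f}, R^2: {r2:.2f}")
```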
Affiliations
- Yu Hen Hu: University of Wisconsin, Madison, WI, USA
- Oguz Akkas: University of Wisconsin, Madison, WI, USA
- Stephen Bao: SHARP Program, Washington State Department of Labor and Industries, Olympia, WA, USA
- Jia-Hua Lin: SHARP Program, Washington State Department of Labor and Industries, Olympia, WA, USA
- Alysha R. Meyers: Division of Field Studies and Engineering, National Institute for Occupational Safety and Health, Cincinnati, OH, USA
- David Rempel: University of California-San Francisco, San Francisco, CA, USA
3
Li Y, Wolf MD, Kulkarni AD, Bell J, Chang JS, Nimunkar A, Radwin RG. In Situ Tremor in Vitreoretinal Surgery. Human Factors 2021; 63:1169-1181. PMID: 32286884; PMCID: PMC8521350; DOI: 10.1177/0018720820916629.
Abstract
OBJECTIVE Surgeon tremor was measured during vitreoretinal microscopic surgeries under different hand support conditions. BACKGROUND While the ophthalmic surgeon's forearm is supported using a standard symmetric wrist rest when operating on the patient's eye on the same side as the dominant hand (SSD), the surgeon's hand is placed directly on the patient's forehead when operating on the side contralateral to the dominant hand (CSD). It was hypothesized that more tremor is associated with CSD surgeries than SSD surgeries and that, using an experimental asymmetric wrist rest in which the contralateral wrist bar gradually rises and curves toward the patient's operative eye, there is no difference in tremor between CSD and SSD surgeries. METHODS Seventy-six microscope videos, recorded from three surgeons performing macular membrane peeling operations, were analyzed using marker-less motion tracking, and movement data (instrument path length and acceleration) were recorded. Tremor acceleration frequency and magnitude were measured using spectral analysis. Following 47 surgeries using a conventional symmetric wrist support, the surgeons incorporated the experimental asymmetric wrist rest into their surgical routine. RESULTS Average tremor acceleration magnitude was 0.11 mm/s² (22%) greater (p = .05) for CSD surgeries (0.62 mm/s², SD = 0.08) than for SSD surgeries (0.51 mm/s², SD = 0.09) with the symmetric wrist rest, while no significant (p > .05) differences were observed (0.57 mm/s², SD = 0.13, for SSD and 0.58 mm/s², SD = 0.11, for CSD surgeries) with the experimental asymmetric wrist rest. CONCLUSION The asymmetric wrist support reduced the difference in tremor acceleration between CSD and SSD surgeries.
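A minimal sketch of the spectral-analysis step: tremor acceleration is obtained by double-differentiating a tracked position trace and integrating the power spectral density over an assumed physiological tremor band. The frame rate, band limits, and signal below are illustrative assumptions, not the study's values.

```python
# Sketch of tremor quantification from a tracked instrument-tip trace.
import numpy as np
from scipy.signal import welch

fs = 60.0                                    # video frame rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
# Toy trace: slow voluntary motion plus a small 9 Hz tremor component (mm)
pos = 0.5 * np.sin(2 * np.pi * 0.3 * t) + 0.01 * np.sin(2 * np.pi * 9 * t)

acc = np.gradient(np.gradient(pos, 1 / fs), 1 / fs)   # mm/s^2, double differentiation

f, psd = welch(acc, fs=fs, nperseg=256)
band = (f >= 6) & (f <= 12)                  # assumed physiological tremor band
tremor_rms = np.sqrt(np.trapz(psd[band], f[band]))    # RMS acceleration in band

print(f"tremor acceleration magnitude: {tremor_rms:.2f} mm/s^2")
```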
Affiliations
- Yifan Li: University of Wisconsin-Madison, USA
- James Bell: SSM Health Dean Medical Group, Madison, WI, USA
4
Lee S, Liu L, Radwin R, Li J. Machine Learning in Manufacturing Ergonomics: Recent Advances, Challenges, and Opportunities. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3084881.
5
Seidel DH, Heinrich K, Hermanns-Truxius I, Ellegast RP, Barrero LH, Rieger MA, Steinhilber B, Weber B. Assessment of work-related hand and elbow workloads using measurement-based TLV for HAL. Applied Ergonomics 2021; 92:103310. PMID: 33352500; DOI: 10.1016/j.apergo.2020.103310.
Abstract
Direct-measurement-based methods for assessing workloads of the hand or elbow in the field are rare. The aim of the study was to develop such a method based on the Threshold Limit Value for Hand Activity Level (TLV for HAL). HAL was quantified using kinematic data (mean power frequencies, angular velocities, and micro-pauses) and combined with electromyographic data (root-mean-square values) to generate a measurement-based TLV for HAL (mTLV for HAL). The multi-sensor system CUELA, comprising inertial sensors, potentiometers, and a 4-channel surface electromyography module, was used. For the wrist and elbow regions, associations between the mTLV for HAL and disorders/complaints (quantified by odds ratios (OR [95% confidence interval])) were tested exploratively within a cross-sectional field study with 500 participants. Higher workloads were frequently and significantly associated with arthrosis of the distal joints (9.23 [3.29-25.87]), wrist complaints (2.89 [1.63-5.11]), and elbow complaints (1.99 [1.08-3.67]). The new method could extend previous application possibilities.
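For readers unfamiliar with the exposure-outcome statistic reported here, the sketch below computes an odds ratio with a 95% confidence interval from a 2x2 table; the counts are invented for illustration and do not come from the study.

```python
# Odds ratio with 95% CI from a hypothetical exposure/outcome 2x2 table.
import math

a, b = 40, 210   # exposed: with complaint, without complaint (hypothetical)
c, d = 20, 230   # unexposed: with complaint, without complaint (hypothetical)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of ln(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} [{lo:.2f}-{hi:.2f}]")    # significant if CI excludes 1
```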
Affiliations
- David H Seidel: Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA), Alte Heerstrasse 111, Sankt Augustin, 53757, Germany; University Hospital Tuebingen, Institute of Occupational and Social Medicine and Health Services Research (IASV), Wilhelmstrasse 27, Tuebingen, 72074, Germany
- Kai Heinrich: Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA), Alte Heerstrasse 111, Sankt Augustin, 53757, Germany
- Ingo Hermanns-Truxius: Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA), Alte Heerstrasse 111, Sankt Augustin, 53757, Germany
- Rolf P Ellegast: Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA), Alte Heerstrasse 111, Sankt Augustin, 53757, Germany
- Lope H Barrero: Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA), Alte Heerstrasse 111, Sankt Augustin, 53757, Germany; School of Engineering, Department of Industrial Engineering, Pontificia Universidad Javeriana, Carrera 7 No. 40 - 62, Bogotá DC, 110231, Colombia
- Monika A Rieger: University Hospital Tuebingen, Institute of Occupational and Social Medicine and Health Services Research (IASV), Wilhelmstrasse 27, Tuebingen, 72074, Germany
- Benjamin Steinhilber: University Hospital Tuebingen, Institute of Occupational and Social Medicine and Health Services Research (IASV), Wilhelmstrasse 27, Tuebingen, 72074, Germany
- Britta Weber: Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA), Alte Heerstrasse 111, Sankt Augustin, 53757, Germany
6
Thamsuwan O, Galvin K, Tchong-French M, Aulck L, Boyle LN, Ching RP, McQuade KJ, Johnson PW. Comparisons of physical exposure between workers harvesting apples on mobile orchard platforms and ladders, part 2: Repetitive upper arm motions. Applied Ergonomics 2020; 89:103192. PMID: 32738460; DOI: 10.1016/j.apergo.2020.103192.
Abstract
Farmworkers are exposed to physical risk factors including repetitive motions. Existing ergonomic assessment methods are primarily laboratory-based and thus inappropriate for use in the field. This study presents an approach to characterizing the repetitive motions of the upper arms based on direct measurement using accelerometers. Repetition rates were derived from upper arm inclination data and from video recordings in the field. This method was used to investigate whether harvesting with mobile platforms (teams harvesting apples from the platform and the ground) increased the farmworkers' exposure to repetitive upper arm motions compared with traditional harvesting using ladders. The ladder workers had higher repetition rates (13.7 cycles per minute) than the platform and ground workers (11.7 and 12.2 cycles per minute). The higher repetition rates among the ladder workers were likely due to their ability to work independently and the additional arm movements associated with ladder climbing and walking.
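A minimal sketch of the repetition-rate idea: count motion cycles in an arm inclination trace with peak detection. The sample rate, prominence threshold, and synthetic signal are assumptions for illustration.

```python
# Counting upper-arm motion cycles from a toy inclination trace.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
fs = 20.0                                   # accelerometer sample rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)                # one minute of data
incl = 45 + 30 * np.sin(2 * np.pi * (13.7 / 60) * t) + rng.normal(0, 2, t.size)

# One cycle = one raise of the arm; require minimum prominence and spacing
peaks, _ = find_peaks(incl, prominence=15, distance=fs * 1.5)
cycles_per_min = peaks.size / (t[-1] / 60)

print(f"repetition rate: {cycles_per_min:.1f} cycles per minute")
```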
Affiliations
- Ornwipa Thamsuwan: Department of Industrial and Systems Engineering, University of Washington, Seattle, WA, USA
- Kit Galvin: Department of Environmental and Occupational Health Sciences, University of Washington, Seattle, WA, USA
- Maria Tchong-French: Department of Environmental and Occupational Health Sciences, University of Washington, Seattle, WA, USA
- Lovenoor Aulck: Information School, University of Washington, Seattle, WA, USA
- Linda Ng Boyle: Department of Industrial and Systems Engineering, University of Washington, Seattle, WA, USA; Department of Civil and Environmental Engineering, University of Washington, Seattle, WA, USA
- Randal P Ching: Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Kevin J McQuade: Department of Rehabilitation Medicine, University of Washington, Seattle, WA, USA
- Peter W Johnson: Department of Environmental and Occupational Health Sciences, University of Washington, Seattle, WA, USA
7
Azari DP, Hu YH, Miller BL, Le BV, Radwin RG. Using Surgeon Hand Motions to Predict Surgical Maneuvers. Human Factors 2019; 61:1326-1339. PMID: 31013463; DOI: 10.1177/0018720819838901.
Abstract
OBJECTIVE This study explores how common machine learning techniques can predict surgical maneuvers from a continuous video record of surgical benchtop simulations. BACKGROUND Automatic computer vision recognition of surgical maneuvers (suturing, tying, and transition) could expedite video review and objective assessment of surgeries. METHOD We recorded hand movements of 37 clinicians performing simple and running subcuticular suturing benchtop simulations, and applied three machine learning techniques (decision trees, random forests, and hidden Markov models) to classify surgical maneuvers in every 2 s (60 frames) of video. RESULTS Random forest predictions of surgical video correctly classified 74% of all video segments into suturing, tying, and transition states for a randomly selected test set. Hidden Markov model adjustments improved the random forest predictions to 79% for simple interrupted suturing on a subset of randomly selected participants. CONCLUSION Random forest predictions aided by hidden Markov modeling provided the best prediction of surgical maneuvers. Training models across all users improved prediction accuracy by 10% compared with training on a random selection of participants. APPLICATION Marker-less video hand tracking can predict surgical maneuvers from a continuous video record with accuracy similar to that of robot-assisted surgical platforms, and may enable more efficient video review of surgical procedures for training and coaching.
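The two-stage idea, a per-window classifier whose label sequence is then smoothed with a Markov model, can be sketched as below. The features, labels, and transition matrix are invented, and the smoothing shown is a generic Viterbi decode over the classifier's window probabilities, not the authors' exact model.

```python
# Sketch: random forest labels each 2 s window of hand kinematics, then a
# Markov smoother discourages implausible maneuver jumps between windows.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))                 # per-window kinematic features (toy)
y = rng.integers(0, 3, size=300)              # 0=suturing, 1=tying, 2=transition
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

probs = clf.predict_proba(rng.normal(size=(50, 6))) + 1e-9   # new video, 50 windows
T = np.array([[0.90, 0.05, 0.05],             # assumed state-transition matrix:
              [0.05, 0.90, 0.05],             # maneuvers tend to persist between
              [0.10, 0.10, 0.80]])            # adjacent 2 s windows

logT = np.log(T)                              # Viterbi decode in the log domain
delta = np.log(probs[0])
back = np.zeros((len(probs), 3), dtype=int)
for i in range(1, len(probs)):
    scores = delta[:, None] + logT            # scores[prev, cur]
    back[i] = scores.argmax(axis=0)
    delta = scores.max(axis=0) + np.log(probs[i])
path = [int(delta.argmax())]
for i in range(len(probs) - 1, 0, -1):        # backtrack best state sequence
    path.append(int(back[i][path[-1]]))
states = path[::-1]                           # smoothed maneuver sequence
print(states[:10])
```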
Affiliations
- Yu Hen Hu: University of Wisconsin-Madison, USA
8
Azari DP, Frasier LL, Quamme SRP, Greenberg CC, Pugh C, Greenberg JA, Radwin RG. Modeling Surgical Technical Skill Using Expert Assessment for Automated Computer Rating. Ann Surg 2019; 269:574-581. PMID: 28885509; PMCID: PMC7412996; DOI: 10.1097/sla.0000000000002478.
Abstract
OBJECTIVE Computer vision was used to predict expert performance ratings from surgeon hand motions for tying and suturing tasks. SUMMARY BACKGROUND DATA Existing methods, including the objective structured assessment of technical skills (OSATS), have proven reliable, but do not readily discriminate at the task level. Computer vision may be used for evaluating distinct task performance throughout an operation. METHODS Open surgeries were videoed and surgeons' hands were tracked without using sensors or markers. An expert panel of 3 attending surgeons rated tying and suturing video clips on continuous scales from 0 to 10 along 3 task measures adapted from the broader OSATS: motion economy, fluidity of motion, and tissue handling. Empirical models were developed to predict the expert consensus ratings based on the hand kinematic data records. RESULTS The predicted versus panel ratings for suturing had slopes from 0.73 to 1 and intercepts from 0.36 to 1.54 (average R² = 0.81). Predicted versus panel ratings for tying had slopes from 0.39 to 0.88 and intercepts from 0.79 to 4.36 (average R² = 0.57). The mean squared error between predicted and expert ratings was consistently less than the mean squared difference between individual expert ratings and the eventual consensus ratings. CONCLUSIONS The computer algorithm consistently predicted the panel ratings of individual tasks, and was more objective and reliable than individual assessment by surgical experts.
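A minimal sketch of the modeling approach: fit a linear model from hand-kinematic features to panel consensus ratings, then report the slope, intercept, and R² of predicted versus panel ratings on held-out clips. The feature set and data are toy stand-ins, not the study's records.

```python
# Predicting expert consensus ratings (0-10) from toy hand-kinematic features.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 4))          # e.g. mean speed, accel, path length, pauses
consensus = np.clip(5 + X @ np.array([1.2, -0.8, 0.5, 0.3])
                    + rng.normal(0, 0.7, 120), 0, 10)

model = LinearRegression().fit(X[:80], consensus[:80])   # train on 80 clips
pred = model.predict(X[80:])                             # held-out clips

# Agreement of predicted vs. panel ratings, reported as slope/intercept/R^2
slope, intercept = np.polyfit(consensus[80:], pred, 1)
r2 = np.corrcoef(consensus[80:], pred)[0, 1] ** 2
print(f"slope {slope:.2f}, intercept {intercept:.2f}, R^2 {r2:.2f}")
```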
Affiliations
- David P. Azari: Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, WI
- Lane L. Frasier: Wisconsin Surgical Outcomes Research (WiSOR) Program, Department of Surgery, University of Wisconsin-Madison, Madison, WI
- Caprice C. Greenberg: Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, WI; Wisconsin Surgical Outcomes Research (WiSOR) Program, Department of Surgery, University of Wisconsin-Madison, Madison, WI
- Carla Pugh: Wisconsin Surgical Outcomes Research (WiSOR) Program, Department of Surgery, University of Wisconsin-Madison, Madison, WI
- Jacob A. Greenberg: Wisconsin Surgical Outcomes Research (WiSOR) Program, Department of Surgery, University of Wisconsin-Madison, Madison, WI
- Robert G. Radwin: Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, WI; Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI
9
Greene RL, Hu YH, Difranco N, Wang X, Lu ML, Bao S, Lin JH, Radwin RG. Predicting Sagittal Plane Lifting Postures From Image Bounding Box Dimensions. Human Factors 2019; 61:64-77. PMID: 30091947; DOI: 10.1177/0018720818791367.
Abstract
OBJECTIVE A method for automatically classifying lifting postures from simple features in video recordings was developed and tested. We explored whether an "elastic" rectangular bounding box, drawn tightly around the subject, can be used for classifying standing, stooping, and squatting at the lift origin and destination. BACKGROUND Current marker-less video tracking methods depend on a priori skeletal human models, which are prone to error from poor illumination, obstructions, and difficulty placing cameras in the field. Robust computer vision algorithms based on spatiotemporal features were previously applied for evaluating repetitive motion tasks, exertion frequency, and duty cycle. METHODS Mannequin poses were systematically generated using the Michigan 3DSSPP software for a wide range of hand locations and lifting postures. The stature-normalized height and width of a bounding box were measured in the sagittal plane and when rotated horizontally by 30°. After randomly ordering the data, a classification and regression tree algorithm was trained to classify the lifting postures. RESULTS The resulting tree had four levels and four splits, misclassifying 0.36% of training-set cases. The algorithm was tested using 30 video clips of industrial lifting tasks, misclassifying 3.33% of test-set cases. The sensitivity and specificity, respectively, were 100.0% and 100.0% for squatting, 90.0% and 100.0% for stooping, and 100.0% and 95.0% for standing. CONCLUSIONS The tree classification algorithm is capable of classifying lifting postures based only on the dimensions of bounding boxes. APPLICATIONS It is anticipated that this practical algorithm can be implemented on handheld devices such as a smartphone, making it readily accessible to practitioners.
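A minimal sketch of the classifier: a CART decision tree trained on stature-normalized bounding-box height and width in two camera views. The features, labels, and labeling rule below are toy stand-ins for the 3DSSPP-generated training data.

```python
# CART posture classifier on toy bounding-box dimensions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
n = 600
# Features: normalized box height/width in the sagittal view and a 30-degree view
X = rng.uniform(0.2, 1.1, size=(n, 4))
# Toy labels from a simple rule: a low box suggests squat, mid-height suggests stoop
y = np.where(X[:, 0] < 0.5, "squat", np.where(X[:, 0] < 0.8, "stoop", "stand"))

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
print(tree.predict([[0.45, 0.6, 0.5, 0.6]]))   # -> likely "squat"
```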
Affiliations
- Xuan Wang: University of Wisconsin-Madison, USA
- Ming-Lun Lu: National Institute for Occupational Safety and Health, Cincinnati, Ohio, USA
- Jia-Hua Lin: Washington Department of Labor and Industries, Olympia, USA
10
Greene RL, Azari DP, Hu YH, Radwin RG. Visualizing stressful aspects of repetitive motion tasks and opportunities for ergonomic improvements using computer vision. Applied Ergonomics 2017; 65:461-472. PMID: 28284701; DOI: 10.1016/j.apergo.2017.02.020.
Abstract
Patterns of physical stress exposure are often difficult to measure, and the metrics of variation and the techniques for identifying them are underdeveloped in the practice of occupational ergonomics. Computer vision has previously been used for evaluating repetitive motion tasks for hand activity level (HAL) utilizing conventional 2D videos. The approach was made practical by relaxing the need for high precision and by adopting a semi-automatic approach for measuring spatiotemporal characteristics of the repetitive task. In this paper, a new method for visualizing task factors using this computer vision approach is demonstrated. After videos are made, the analyst selects a region of interest on the hand to track, and the hand location and its associated kinematics are measured for every frame. The visualization method spatially deconstructs and displays the frequency, speed, and duty cycle components of tasks that are part of the threshold limit value for hand activity, for the purpose of identifying patterns of exposure associated with specific job factors as well as suggesting task improvements. The localized variables are plotted as a heat map superimposed over the video and displayed in the context of the task being performed. Based on the intensity of the specific variables used to calculate HAL, we can determine which task factors contribute most to HAL and readily identify those work elements in the task that contribute most to increased risk of injury. Work simulations and actual industrial examples are described. This method should help practitioners more readily measure and interpret temporal exposure patterns and identify potential task improvements.
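A minimal sketch of the overlay idea: deposit a tracked-hand variable (here, speed) along the hand's path, blur it, apply a colormap, and blend it over the scene with OpenCV. The frame, trajectory, and blending weights are synthetic assumptions.

```python
# Heat map of a tracked-hand kinematic variable blended over a scene.
import cv2
import numpy as np

h, w = 240, 320
frame = np.full((h, w, 3), 80, np.uint8)           # stand-in for a video frame

xs = (w / 2 + 60 * np.sin(np.linspace(0, 20, 500))).astype(int)
ys = (h / 2 + 40 * np.cos(np.linspace(0, 20, 500))).astype(int)
speed = np.abs(np.gradient(xs.astype(float)))      # toy per-frame hand speed

heat = np.zeros((h, w), np.float32)
for x, y, s in zip(xs, ys, speed):
    cv2.circle(heat, (x, y), 6, float(s), -1)      # deposit speed along the path

heat = cv2.GaussianBlur(heat, (0, 0), 5)
heat8 = cv2.normalize(heat, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
color = cv2.applyColorMap(heat8, cv2.COLORMAP_JET)
overlay = cv2.addWeighted(frame, 0.6, color, 0.4, 0)   # heat map over the task
cv2.imwrite("hal_heatmap.png", overlay)
```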
Affiliations
- Yu Hen Hu: University of Wisconsin-Madison, United States
11
Radwin RG, Lee JD, Akkas O. Driver Movement Patterns Indicate Distraction and Engagement. Human Factors 2017; 59:844-860. PMID: 28704631; DOI: 10.1177/0018720817696496.
Abstract
Objective This research considers how driver movements in video clips of naturalistic driving are related to observer subjective ratings of distraction and engagement behaviors. Background Naturalistic driving video provides a unique window into driver behavior unmatched by crash data, roadside observations, or driving simulator experiments. However, manually coding many thousands of hours of video is impractical. An objective method is needed to identify driver behaviors suggestive of distracted or disengaged driving for automated computer vision analysis to access this rich source of data. Method Visual analog scales ranging from 0 to 10 were created, and observers rated their perception of driver distraction and engagement behaviors from selected naturalistic driving videos. Driver kinematics time series were extracted from frame-by-frame coding of driver motions, including head rotation, head flexion/extension, and hands on/off the steering wheel. Results The ratings were consistent among participants. A statistical model predicting average ratings from the kinematic features accounted for 54% of distraction rating variance and 50% of engagement rating variance. Conclusion Rated distraction behavior was positively related to the magnitude of head rotation and fraction of time the hands were off the wheel. Rated engagement behavior was positively related to the variation of head rotation and negatively related to the fraction of time the hands were off the wheel. Application If automated computer vision can code simple kinematic features, such as driver head and hand movements, then large-volume naturalistic driving videos could be automatically analyzed to identify instances when drivers were distracted or disengaged.
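A minimal sketch of the kinematic features the rating model used: the magnitude and variation of head rotation, and the fraction of time the hands were off the wheel. The time series below are synthetic stand-ins for frame-by-frame coded video.

```python
# Driver kinematic features from toy frame-by-frame coded time series.
import numpy as np

rng = np.random.default_rng(5)
fs = 30                                       # coding rate (frames/s), assumed
head_rot = 10 * np.sin(np.linspace(0, 6, 20 * fs)) + rng.normal(0, 2, 20 * fs)
hands_off = rng.random(20 * fs) < 0.15        # True where a hand is off the wheel

features = {
    "head_rot_mag": np.mean(np.abs(head_rot)),   # degrees
    "head_rot_var": np.std(head_rot),
    "frac_hands_off": np.mean(hands_off),
}
print(features)   # inputs to a model predicting distraction/engagement ratings
```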
12
Xiao ZG, Menon C. Counting Grasping Action Using Force Myography: An Exploratory Study With Healthy Individuals. JMIR Rehabil Assist Technol 2017; 4:e5. PMID: 28582263; PMCID: PMC5460070; DOI: 10.2196/rehab.6901.
Abstract
Background Functional arm movements generally require grasping an object. The possibility of detecting and counting the action of grasping is believed to be of importance for individuals with motor function deficits of the arm, as it could indicate the number of functional arm movements performed during rehabilitation. In this exploratory work, the feasibility of using armbands that record radial displacements of forearm muscles and tendons (ie, force myography, FMG) to estimate hand grasping in healthy individuals was investigated. In contrast to previous studies, this exploratory study investigates the feasibility of (1) detecting grasping when the participants move their arms, which could introduce artifacts large enough to potentially prevent the practical use of the proposed technology, and (2) counting grasping during arm-reaching tasks. Objective The aim of this study was to determine the usefulness of FMG in the detection of functional arm movements. The use of FMG straps placed on the forearm is proposed for counting the number of grasping actions in the presence of arm movements. Methods Ten healthy volunteers participated in this study, performing a pick-and-place exercise after providing informed consent. FMG signals were simultaneously collected using 2 FMG straps worn on the wrist and at the midpoint of the forearm, respectively. Raw FMG signals and 3 additional FMG features (ie, root mean square, wavelength, and window symmetry) were extracted and fed into a linear discriminant analysis classifier to predict grasping states. The transition from nongrasping to grasping states was detected in the process of counting the number of grasping actions. Results The median accuracy for detecting grasping events using FMG recorded from the wrist was 95%, and the corresponding interquartile range (IQR) was 5%. For forearm FMG classification, the median accuracy was 92%, and the corresponding IQR was 3%. The difference between the 2 median accuracies was statistically significant (P<.001) using a paired 2-tailed sign test. The median percentage error for counting grasping events when FMG was recorded from the wrist was 1%, and the corresponding IQR was 2%. The median percentage error for FMG recorded from the forearm was 2%, and the corresponding IQR was also 2%. While the median percentage error for the wrist was lower than that for the forearm, the difference between the 2 was not statistically significant based on a paired 2-tailed sign test (P=.29). Conclusions This study reports that grasping can reliably be counted using an unobtrusive and simple FMG strap, even in the presence of arm movements. This result supports future research evaluating the feasibility of monitoring hand grasping during unsupervised activities of daily living, leading to further investigations with individuals with motor function deficits of the arm.
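A minimal sketch of the classification pipeline: windowed FMG features (two of the three named above, root mean square and wavelength), a linear discriminant analysis classifier, and counting nongrasping-to-grasping transitions. Channel count, window length, and signals are toy assumptions.

```python
# FMG grasp detection and grasp counting with toy multi-channel windows.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)

def fmg_features(win):                        # win: samples x channels
    rms = np.sqrt(np.mean(win ** 2, axis=0))
    wavelength = np.sum(np.abs(np.diff(win, axis=0)), axis=0)
    return np.concatenate([rms, wavelength])

# Toy training data: grasping windows have higher amplitude on all channels
grasp = [rng.normal(2.0, 0.5, (50, 8)) for _ in range(100)]
rest = [rng.normal(0.5, 0.5, (50, 8)) for _ in range(100)]
X = np.array([fmg_features(w) for w in grasp + rest])
y = np.array([1] * 100 + [0] * 100)
lda = LinearDiscriminantAnalysis().fit(X, y)

stream = [rng.normal(0.5, 0.5, (50, 8)), rng.normal(2.0, 0.5, (50, 8)),
          rng.normal(2.0, 0.5, (50, 8)), rng.normal(0.5, 0.5, (50, 8)),
          rng.normal(2.0, 0.5, (50, 8))]
states = lda.predict(np.array([fmg_features(w) for w in stream]))
n_grasps = np.sum(np.diff(states, prepend=0) == 1)   # rising edges = new grasps
print(f"grasps counted: {n_grasps}")                  # expect 2 here
```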
Affiliations
- Carlo Menon: Schools of Mechatronics Systems Engineering and Engineering Science, Simon Fraser University, Surrey, BC, Canada
13
Akkas O, Lee CH, Hu YH, Yen TY, Radwin RG. Measuring elemental time and duty cycle using automated video processing. Ergonomics 2016; 59:1514-1525. PMID: 26848051; PMCID: PMC5226076; DOI: 10.1080/00140139.2016.1146347.
Abstract
A marker-less 2D video algorithm measured hand kinematics (location, velocity and acceleration) in a paced repetitive laboratory task for varying hand activity levels (HAL). The decision tree (DT) algorithm identified the trajectory of the hand using spatiotemporal relationships during the exertion and rest states. The feature vector training (FVT) method utilised a k-nearest neighbour classifier, trained using a set of samples or the first cycle. The average duty cycle (DC) error using the DT algorithm was 2.7%. The FVT algorithm had an average error of 3.3% when trained using the first-cycle sample of each repetitive task, and an average error of 2.8% when trained using several representative repetitive cycles. Error for HAL was 0.1 for both algorithms, which was considered negligible. Elemental times, stratified by task and subject, were not statistically different from ground truth (p < 0.05). Both algorithms performed well for automatically measuring elapsed time, DC and HAL. Practitioner Summary: A completely automated approach for measuring elapsed time and DC was developed using marker-less video tracking and the tracked kinematic record. Such an approach is automatic, repeatable, objective and unobtrusive, and is suitable for evaluating repetitive exertions, muscle fatigue and manual tasks.
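A minimal sketch of measuring duty cycle and elemental times from a tracked kinematic record: threshold hand speed into exertion versus rest states, then take the exerting fraction and the duration of each contiguous exertion run. The threshold, frame rate, and speed trace are assumptions for illustration.

```python
# Duty cycle and elemental times from a toy tracked hand-speed record.
import numpy as np

fs = 30.0                                     # frames per second, assumed
t = np.arange(0, 30, 1 / fs)
speed = np.abs(np.sin(2 * np.pi * 0.5 * t)) * 150   # mm/s, toy repetitive task

exerting = speed > 50                         # speed threshold for an exertion
duty_cycle = exerting.mean() * 100            # percent of cycle time exerting

# Elemental (exertion) times: durations of each contiguous exertion run
edges = np.flatnonzero(np.diff(exerting.astype(int)))
runs = np.split(np.arange(t.size), edges + 1)
exertion_times = [r.size / fs for r in runs if exerting[r[0]]]

print(f"duty cycle: {duty_cycle:.1f}%, mean element: {np.mean(exertion_times):.2f} s")
```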
Affiliations
- Oguz Akkas: Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, WI 53706, USA
- Cheng-Hsien Lee: Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, WI 53706, USA
- Yu Hen Hu: Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, WI 53706, USA
- Thomas Y. Yen: Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, WI 53706, USA
- Robert G. Radwin: Department of Industrial and Systems Engineering, University of Wisconsin-Madison, 1550 Engineering Drive, Madison, WI 53706, USA (corresponding author)
14
Azari DP, Pugh CM, Laufer S, Kwan C, Chen CH, Yen TY, Hu YH, Radwin RG. Evaluation of Simulated Clinical Breast Exam Motion Patterns Using Marker-Less Video Tracking. Human Factors 2016; 58:427-440. PMID: 26546381; PMCID: PMC4924820; DOI: 10.1177/0018720815613919.
Abstract
OBJECTIVE This study investigates using marker-less video tracking to evaluate hands-on clinical skills during simulated clinical breast examinations (CBEs). BACKGROUND There are currently no standardized and widely accepted CBE screening techniques. METHODS Experienced physicians attending a national conference conducted simulated CBEs presenting different pathologies with distinct tumorous lesions. Single-hand exam motion was recorded and analyzed using marker-less video tracking. Four kinematic measures were developed to describe temporal (time pressing and time searching) and spatial (area covered and distance explored) patterns. RESULTS Mean differences in time pressing, area covered, and distance explored varied across the simulated lesions. Exams were objectively categorized as sporadic, localized, thorough, or efficient in both the temporal and spatial categories based on spatiotemporal characteristics. The majority of trials were temporally or spatially thorough (78% and 91%), exhibiting proportionally greater time pressing and time searching (temporally thorough) and greater area probed with greater distance explored (spatially thorough). More efficient exams exhibited proportionally more time pressing with less time searching (temporally efficient) and greater area probed with less distance explored (spatially efficient). Just two (5.9%) of the trials exhibited both high temporal and spatial efficiency. CONCLUSIONS Marker-less video tracking was used to discriminate different examination techniques and to measure when an exam changes from general searching to specific probing. The majority of participants exhibited more thorough than efficient patterns. APPLICATION Marker-less video kinematic tracking may be useful for quantifying clinical skills for training and assessment.
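A minimal sketch of the four kinematic measures from a tracked palpation trajectory: time pressing versus time searching, split by a speed threshold, and distance explored (path length) versus area covered (approximated here by a convex hull; the study's exact area definition may differ). The trajectory and threshold are toy assumptions.

```python
# Temporal and spatial exam measures from a toy tracked hand trajectory.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(7)
fs = 30.0
xy = np.cumsum(rng.normal(0, 2, size=(900, 2)), axis=0)   # hand path (mm), toy

step = np.linalg.norm(np.diff(xy, axis=0), axis=1)
speed = step * fs                          # mm/s
pressing = speed < 20                      # slow motion ~ pressing, assumed
time_pressing = pressing.sum() / fs
time_searching = (~pressing).sum() / fs

distance_explored = step.sum()             # mm
area_covered = ConvexHull(xy).volume       # a 2D hull's "volume" is its area, mm^2

print(f"press {time_pressing:.1f} s, search {time_searching:.1f} s, "
      f"dist {distance_explored:.0f} mm, area {area_covered:.0f} mm^2")
```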
Affiliations
- Calvin Kwan: University of Wisconsin-Madison, Madison, WI
- Yu Hen Hu: University of Wisconsin-Madison, Madison, WI
15
Azari D, Pugh C, Laufer S, Cohen E, Kwan C, Chen CHE, Yen TY, Hu YH, Radwin R. Evaluation of Hands-On Clinical Exam Performance Using Marker-less Video Tracking. Proc Hum Factors Ergon Soc Annu Meet 2015; 58:793-797. PMID: 26401124; DOI: 10.1177/1541931214581145.
Abstract
This study investigates the potential of using marker-less video tracking of the hands for evaluating hands-on clinical skills. Experienced family practitioners attending a national conference were recruited and asked to conduct a breast examination on a simulator that presents different clinical pathologies. Videos were made of the clinicians' hands during the exam, and video processing software was used to track hand motion and quantify its kinematics. Practitioner motion patterns indicated consistent behavior of participants across multiple pathologies. Different pathologies exhibited characteristic motion patterns in the aggregate at specific parts of an exam, indicating consistent inter-participant behavior. Marker-less video kinematic tracking therefore shows promise for discriminating between different examination procedures, clinicians, and pathologies.
Affiliations
- David Azari: College of Engineering, The University of Wisconsin Madison, 1513 University Ave., Madison, WI 53706
- Carla Pugh: Department of Surgery, The University of Wisconsin School of Medicine and Public Health, 1685 Highland Ave., Madison, WI 53705
- Shlomi Laufer: Department of Surgery, The University of Wisconsin School of Medicine and Public Health, 1685 Highland Ave., Madison, WI 53705
- Elaine Cohen: Department of Surgery, The University of Wisconsin School of Medicine and Public Health, 1685 Highland Ave., Madison, WI 53705
- Calvin Kwan: Department of Surgery, The University of Wisconsin School of Medicine and Public Health, 1685 Highland Ave., Madison, WI 53705
- Chia-Hsiung Eric Chen: College of Engineering, The University of Wisconsin Madison, 1513 University Ave., Madison, WI 53706
- Thomas Y Yen: College of Engineering, The University of Wisconsin Madison, 1513 University Ave., Madison, WI 53706
- Yu Hen Hu: College of Engineering, The University of Wisconsin Madison, 1513 University Ave., Madison, WI 53706
- Robert Radwin: College of Engineering, The University of Wisconsin Madison, 1513 University Ave., Madison, WI 53706
16
Radwin RG, Azari DP, Lindstrom MJ, Ulin SS, Armstrong TJ, Rempel D. A frequency-duty cycle equation for the ACGIH hand activity level. Ergonomics 2015; 58:173-183. PMID: 25343340; PMCID: PMC4302734; DOI: 10.1080/00140139.2014.966154.
Abstract
A new equation for predicting the hand activity level (HAL) used in the American Conference of Governmental Industrial Hygienists threshold limit value® (TLV®) was based on exertion frequency (F) and percentage duty cycle (D). The TLV® includes a table for estimating HAL from F and D originating from data in Latko et al. (Latko WA, Armstrong TJ, Foulke JA, Herrin GD, Rabourn RA, Ulin SS. Development and evaluation of an observational method for assessing repetition in hand tasks. American Industrial Hygiene Association Journal, 58(4):278-285, 1997) and post hoc adjustments that include extrapolations outside of the data range. Multimedia video task analysis determined D for two additional jobs from Latko's study not in the original data-set, and a new nonlinear regression equation was developed to better fit the data and create a more accurate table. The equation, HAL = 6.56 ln D [F^1.31 / (1 + 3.18 F^1.31)], generally matches the TLV® HAL lookup table and is a substantial improvement over the linear model, particularly for jobs with F > 1.25 Hz and D > 60%. The equation more closely fits the data and applies the TLV® using a continuous function.
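Transcribing the reconstructed equation into code makes it easy to spot-check values against the TLV® lookup table; F is in hertz and D in percent, and the sample inputs are arbitrary.

```python
# The frequency-duty cycle HAL equation, as reconstructed above.
import math

def hal_fd(f_hz: float, d_pct: float) -> float:
    """HAL = 6.56 ln(D) * F^1.31 / (1 + 3.18 F^1.31)."""
    return 6.56 * math.log(d_pct) * f_hz ** 1.31 / (1 + 3.18 * f_hz ** 1.31)

print(round(hal_fd(0.5, 50.0), 2))   # a mid-range frequency and duty cycle
```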
Affiliations
- Robert G. Radwin: Department of Industrial and Systems Engineering, University of Wisconsin-Madison, 1550 Engineering Drive, Madison, WI 53706-1608 (corresponding author)
- David P. Azari: Department of Industrial and Systems Engineering, University of Wisconsin-Madison
- Mary J. Lindstrom: Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison
- Sheryl S. Ulin: Department of Industrial and Operations Engineering, University of Michigan
- David Rempel: Department of Medicine, University of California, San Francisco
17
Akkas O, Azari DP, Chen CHE, Hu YH, Ulin SS, Armstrong TJ, Rempel D, Radwin RG. A hand speed-duty cycle equation for estimating the ACGIH hand activity level rating. Ergonomics 2015; 58:184-194. PMID: 25343278; PMCID: PMC4664886; DOI: 10.1080/00140139.2014.966155.
Abstract
An equation was developed for estimating hand activity level (HAL) directly from tracked root mean square (RMS) hand speed (S) and duty cycle (D). Table lookup, an equation, or marker-less video tracking can estimate HAL from motion/exertion frequency (F) and D. Since automatically estimating F is sometimes complex, HAL may be more readily assessed using S. Hands from 33 videos originally used for the HAL rating were tracked to estimate S, scaled relative to hand breadth (HB), and single-frame analysis was used to measure D. Since HBs were unknown, a Monte Carlo method was employed to iteratively estimate the regression coefficients from US Army anthropometry survey data. The equation HAL = 10 e^g / (1 + e^g), where g = -15.87 + 0.02D + 2.25 ln S, had R² = 0.97 and a residual range of ±0.5 HAL. The S equation fit the Latko et al. (1997) data better and predicted independently observed HAL values (Harris 2011) better (MSE = 0.16) than the F equation (MSE = 1.28).
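The speed-based model, transcribed from the equation above; S is the scaled RMS hand speed and D is the duty cycle in percent. The example inputs are arbitrary, not values from the paper.

```python
# The hand speed-duty cycle HAL equation, as reconstructed above.
import math

def hal_sd(s: float, d_pct: float) -> float:
    """HAL = 10 e^g / (1 + e^g), with g = -15.87 + 0.02 D + 2.25 ln S."""
    g = -15.87 + 0.02 * d_pct + 2.25 * math.log(s)
    return 10 * math.exp(g) / (1 + math.exp(g))

for s in (250.0, 500.0, 1000.0):       # arbitrary scaled speeds
    print(s, round(hal_sd(s, 50.0), 2))
```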
Affiliations
- Oguz Akkas: Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, WI, USA
18
Chen CH, Azari D, Hu YH, Lindstrom MJ, Thelen D, Yen TY, Radwin RG. The accuracy of conventional 2D video for quantifying upper limb kinematics in repetitive motion occupational tasks. Ergonomics 2015; 58:2057-2066. PMID: 25978764; PMCID: PMC4684497; DOI: 10.1080/00140139.2015.1051594.
Abstract
Marker-less 2D video tracking was studied as a practical means to measure upper limb kinematics for ergonomics evaluations. Hand activity level (HAL) can be estimated from speed and duty cycle. Accuracy was measured using a cross-correlation template-matching algorithm for tracking a region of interest on the upper extremities. Ten participants performed a paced load transfer task while varying HAL (2, 4, and 5) and load (2.2 N, 8.9 N, and 17.8 N). Speed and acceleration measured from 2D video were compared against ground truth measurements using 3D infrared motion capture. The median absolute difference between 2D video and 3D motion capture was 86.5 mm/s for speed and 591 mm/s² for acceleration, and less than 93 mm/s for speed and 656 mm/s² for acceleration when camera pan and tilt were within ±30 degrees. Single-camera 2D video had sufficient accuracy (< 100 mm/s) for evaluating HAL. Practitioner Summary: This study demonstrated that 2D video tracking had sufficient accuracy to measure HAL for ascertaining the American Conference of Governmental Industrial Hygienists Threshold Limit Value® for repetitive motion when the camera is located within ±30 degrees of the plane of motion, as compared against 3D motion capture for a simulated repetitive motion task.
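A minimal sketch of the cross-correlation template-matching step: locate a hand region-of-interest template in the next frame with OpenCV, then convert the pixel displacement to speed using an assumed calibration. The frames, motion, and mm-per-pixel scale are synthetic.

```python
# Template matching between consecutive frames to estimate hand speed.
import cv2
import numpy as np

rng = np.random.default_rng(8)
frame0 = rng.integers(0, 255, (240, 320), np.uint8)
template = frame0[100:140, 150:190].copy()     # ROI selected on the hand
frame1 = np.roll(frame0, (3, 5), axis=(0, 1))  # simulate motion: +5 px x, +3 px y

res = cv2.matchTemplate(frame1, template, cv2.TM_CCORR_NORMED)
_, _, _, max_loc = cv2.minMaxLoc(res)          # best-match top-left corner (x, y)

dx, dy = max_loc[0] - 150, max_loc[1] - 100    # displacement in pixels
mm_per_px, fps = 0.8, 30.0                     # assumed camera calibration
speed = np.hypot(dx, dy) * mm_per_px * fps     # mm/s between consecutive frames
print(f"displacement ({dx}, {dy}) px -> speed {speed:.0f} mm/s")
```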
Affiliations
- Chia-Hsiung Chen: Department of Electrical and Computer Engineering, University of Wisconsin-Madison
- David Azari: Department of Industrial and Systems Engineering, University of Wisconsin-Madison
- Yu Hen Hu: Department of Electrical and Computer Engineering, University of Wisconsin-Madison
- Mary J. Lindstrom: Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison
- Darryl Thelen: Department of Mechanical Engineering, University of Wisconsin-Madison
- Thomas Y. Yen: Department of Industrial and Systems Engineering, University of Wisconsin-Madison
- Robert G. Radwin: Department of Industrial and Systems Engineering, University of Wisconsin-Madison, 1550 Engineering Drive, Madison, WI 53706 (corresponding author)
19
Armstrong T, Cao C, Hallbeck S, Radwin R, Rempel D. Ergonomic Aspects of Clinical and Surgical Procedures. Proc Hum Factors Ergon Soc Annu Meet 2014. DOI: 10.1177/1541931214581194.
Abstract
This discussion panel aims to identify ergonomic concerns, solutions and research needs, physical stresses, and outcomes related to clinical and surgical procedures. This session will begin with formal presentations to demonstrate current ergonomic concerns and research initiatives associated with clinical and surgical procedures to frame the panel discussion for the second part of the session. Discussion of different procedures will help to identify solutions and research needs that relate to a broad range of ergonomic problems. Questions will be collected from the attendees and speakers and organized so as to guide the panel discussion and to engage all of the speakers in the discussion to achieve the symposium aims.
Affiliations
- D. Rempel: University of California at San Francisco
20
Glarner CE, Hu YY, Chen CH, Radwin RG, Zhao Q, Craven MW, Wiegmann DA, Pugh CM, Carty MJ, Greenberg CC. Quantifying technical skills during open operations using video-based motion analysis. Surgery 2014; 156:729-734. PMID: 24962187; DOI: 10.1016/j.surg.2014.04.054.
Abstract
INTRODUCTION Objective quantification of technical operative skills in surgery remains poorly defined, although the delivery of and training in these skills is essential to the profession of surgery. Attempts to measure hand kinematics to quantify operative performance have primarily relied on electromagnetic sensors attached to the surgeon's hand or instrument. We sought to determine whether a similar motion analysis could be performed with marker-less, video-based review, allowing for a scalable approach to performance evaluation. METHODS We recorded six reduction mammoplasty operations, a plastic surgery procedure in which the attending and resident surgeons operate in parallel. Segments representative of surgical tasks were identified with Multimedia Video Task Analysis software. Video digital processing was used to extract and analyze the spatiotemporal characteristics of hand movement. RESULTS Attending plastic surgeons appear to use their nondominant hand more than residents when cutting with the scalpel, suggesting greater use of countertraction. While suturing, attendings were more ambidextrous, with smaller differences in movement between their dominant and nondominant hands than residents. Attendings also showed more conservation of movement when performing instrument tying than residents, as demonstrated by less nondominant hand displacement. These observations were consistent within procedures and between the different attending plastic surgeons evaluated in this fashion. CONCLUSION Video motion analysis can be used to provide objective measurement of technical skills without the need for sensors or markers. Such data could be valuable in better understanding the acquisition and degradation of operative skills, providing enhanced feedback to shorten the learning curve.
Affiliations
- Carly E Glarner: Wisconsin Surgical Outcomes Research Program, Department of Surgery, University of Wisconsin, Madison, WI
- Yue-Yung Hu: Center for Surgery & Public Health, Brigham & Women's Hospital; Department of Surgery, Beth Israel Medical Center, Boston, MA
- Chia-Hsiung Chen: Electrical and Computer Engineering, University of Wisconsin, Madison, WI
- Robert G Radwin: Department of Industrial and Systems Engineering, University of Wisconsin, Madison, WI
- Qianqian Zhao: Department of Biostatistics & Medical Informatics, University of Wisconsin, Madison, WI
- Mark W Craven: Department of Surgery, Brigham and Women's Hospital, Boston, MA
- Douglas A Wiegmann: Department of Industrial and Systems Engineering, University of Wisconsin, Madison, WI
- Carla M Pugh: Wisconsin Surgical Outcomes Research Program, Department of Surgery, University of Wisconsin, Madison, WI; Department of Industrial and Systems Engineering, University of Wisconsin, Madison, WI
- Matthew J Carty: Department of Surgery, Brigham and Women's Hospital, Boston, MA
- Caprice C Greenberg: Wisconsin Surgical Outcomes Research Program, Department of Surgery, University of Wisconsin, Madison, WI