1. Zhang C, Hallbeck MS, Salehinejad H, Thiels C. The integration of artificial intelligence in robotic surgery: A narrative review. Surgery 2024; 176:552-557. [PMID: 38480053] [DOI: 10.1016/j.surg.2024.02.005]
Abstract
BACKGROUND The rise of high-definition imaging and robotic surgery has independently been associated with improved postoperative outcomes. However, steep learning curves and finite human cognitive ability limit the facility in imaging interpretation and interaction with the robotic surgery console interfaces. This review presents innovative ways in which artificial intelligence integrates preoperative imaging and surgery to help overcome these limitations and to further advance robotic operations. METHODS PubMed was queried for "artificial intelligence," "machine learning," and "robotic surgery." From the 182 publications in English, a further in-depth review of the cited literature was performed. RESULTS Artificial intelligence boasts efficiency and proclivity for large amounts of unwieldy and unstructured data. Its wide adoption has significant practice-changing implications throughout the perioperative period. Analysis of preoperative imaging can augment the surgeon's preoperative knowledge by providing access to pathology data that have traditionally been available only postoperatively. Intraoperatively, the interaction of artificial intelligence with augmented reality through the dynamic overlay of preoperative anatomical knowledge atop the robotic operative field can outline safe dissection planes, helping surgeons make critical real-time intraoperative decisions. Finally, semi-independent robotic operations may one day be performed by artificial intelligence with limited human intervention. CONCLUSION As artificial intelligence has allowed machines to think and problem-solve like humans, it promises further advancement of existing technologies and a revolution of individualized patient care. Further research and ethical precautions are necessary before the full implementation of artificial intelligence in robotic surgery.
Affiliation(s)
- Chi Zhang: Department of Surgery, Mayo Clinic Arizona, Phoenix, AZ; Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic Rochester, MN. https://twitter.com/ChiZhang_MD
- M Susan Hallbeck: Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic Rochester, MN; Division of Health Care Delivery Research, Mayo Clinic Rochester, MN; Department of Surgery, Mayo Clinic Rochester, MN
- Hojjat Salehinejad: Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic Rochester, MN; Division of Health Care Delivery Research, Mayo Clinic Rochester, MN. https://twitter.com/SalehinejadH
- Cornelius Thiels: Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic Rochester, MN; Department of Surgery, Mayo Clinic Rochester, MN
2. Ikeda A, Izumi K, Katori K, Nosato H, Kobayashi K, Suzuki S, Kandori S, Sanuki M, Ochiai Y, Nishiyama H. Objective Evaluation of Gaze Location Patterns Using Eye Tracking During Cystoscopy and Artificial Intelligence-Assisted Lesion Detection. J Endourol 2024; 38:865-870. [PMID: 38526374] [DOI: 10.1089/end.2023.0699]
Abstract
Background: The diagnostic accuracy of cystoscopy varies according to the knowledge and experience of the performing physician. In this study, we evaluated the difference in cystoscopic gaze location patterns between medical students and urologists and assessed the differences in their eye movements when simultaneously observing conventional cystoscopic images and images with lesions detected by artificial intelligence (AI). Methodology: Eye-tracking measurements were performed, and observation patterns of participants (24 medical students and 10 urologists) viewing images from routine cystoscopic videos were analyzed. The cystoscopic video was captured preoperatively in a case of initial-onset noninvasive bladder cancer with three low-lying papillary tumors in the posterior, anterior, and neck areas (urothelial carcinoma, high grade, and pTa). The viewpoint coordinates and stop times during observation were obtained using a noncontact, screen-based gaze-tracking and measurement system. In addition, observation patterns of medical students and urologists during parallel observation of conventional cystoscopic videos and AI-assisted lesion detection videos were compared. Results: Compared with medical students, urologists exhibited a significantly higher degree of stationary gaze entropy when viewing cystoscopic images (p < 0.05), suggesting that urologists with expertise in identifying lesions efficiently observed a broader range of bladder mucosal surfaces on the screen, presumably with the conscious intent of identifying pathologic changes. When the participants observed conventional and AI-assisted lesion detection images side by side, medical students, unlike urologists, directed a higher proportion of attention toward AI-detected lesion images. Conclusion: Eye-tracking measurements during cystoscopic image assessment revealed that experienced specialists efficiently observed a wide range of the video screen during cystoscopy. In addition, this study revealed how lesion images detected by AI are viewed. Observation patterns of observers' gaze may have implications for assessing and improving proficiency and serving educational purposes. To the best of our knowledge, this is the first study to utilize eye tracking in cystoscopy. University of Tsukuba Hospital, clinical research reference number R02-122.
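Several of the studies in this list quantify visual scanning with stationary gaze entropy. As an illustration only (not the authors' implementation), the sketch below shows one common formulation: Shannon entropy of fixation locations binned on a spatial grid. The grid size, screen resolution, and variable names are assumptions.

```python
import numpy as np

def stationary_gaze_entropy(x, y, n_bins=8, screen_w=1920, screen_h=1080):
    """Shannon entropy (bits) of the spatial distribution of fixation locations."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Bin fixations on an n_bins x n_bins grid covering the screen.
    counts, _, _ = np.histogram2d(x, y, bins=n_bins,
                                  range=[[0, screen_w], [0, screen_h]])
    p = counts.ravel() / counts.sum()
    p = p[p > 0]  # empty cells contribute nothing (0 * log 0 := 0)
    return float(-np.sum(p * np.log2(p)))

# A broad scanning pattern spreads fixations widely (higher entropy), whereas gaze
# clustered on one region (e.g., an AI-highlighted lesion) yields lower entropy.
rng = np.random.default_rng(0)
broad = stationary_gaze_entropy(rng.uniform(0, 1920, 200), rng.uniform(0, 1080, 200))
narrow = stationary_gaze_entropy(rng.normal(960, 40, 200), rng.normal(540, 40, 200))
print(f"broad scan: {broad:.2f} bits, clustered gaze: {narrow:.2f} bits")
```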
Affiliation(s)
- Atsushi Ikeda: Department of Urology, Institute of Medicine, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Kazuya Izumi: Master's Programs in Informatics, Graduate School of Comprehensive Human Sciences, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Kensuke Katori: Master's Programs in Informatics, Graduate School of Comprehensive Human Sciences, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Hirokazu Nosato: Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology, Tsukuba, Ibaraki, Japan
- Keita Kobayashi: Department of Urology, Graduate School of Medicine, Yamaguchi University, Ube, Yamaguchi, Japan
- Shuhei Suzuki: Department of Urology, Institute of Medicine, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Shuya Kandori: Department of Urology, Institute of Medicine, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Masaru Sanuki: Department of Clinical Medicine, Institute of Medicine, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Yoichi Ochiai: Research and Development Center for Digital Nature, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Hiroyuki Nishiyama: Department of Urology, Institute of Medicine, University of Tsukuba, Tsukuba, Ibaraki, Japan
3. Upasani S, Srinivasan D, Zhu Q, Du J, Leonessa A. Eye-Tracking in Physical Human-Robot Interaction: Mental Workload and Performance Prediction. Hum Factors 2024; 66:2104-2119. [PMID: 37793896] [DOI: 10.1177/00187208231204704]
Abstract
BACKGROUND In Physical Human-Robot Interaction (pHRI), the need to learn the robot's motor-control dynamics is associated with increased cognitive load. Eye-tracking metrics can help understand the dynamics of fluctuating mental workload over the course of learning. OBJECTIVE The aim of this study was to test eye-tracking measures' sensitivity and reliability to variations in task difficulty, as well as their performance-prediction capability, in physical human-robot collaboration tasks involving an industrial robot for object comanipulation. METHODS Participants (9M, 9F) learned to coperform a virtual pick-and-place task with a bimanual robot over multiple trials. Joint stiffness of the robot was manipulated to increase motor-coordination demands. The psychometric properties of eye-tracking measures and their ability to predict performance were investigated. RESULTS Stationary Gaze Entropy and pupil diameter were the most reliable and sensitive measures of workload associated with changes in task difficulty and learning. Increased task difficulty was more likely to result in a robot-monitoring strategy. Eye-tracking measures were able to predict the occurrence of success or failure in each trial with 70% sensitivity and 71% accuracy. CONCLUSION The sensitivity and reliability of eye-tracking measures were acceptable, although values were lower than those observed in cognitive domains. Measures of gaze behaviors indicative of visual monitoring strategies were most sensitive to task difficulty manipulations, and should be explored further for the pHRI domain where motor-control and internal-model formation will likely be strong contributors to workload. APPLICATION Future collaborative robots can adapt to human cognitive state and skill-level measured using eye-tracking measures of workload and visual attention.
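The trial-level prediction result quoted above (70% sensitivity, 71% accuracy) can be read directly off a confusion matrix. The sketch below illustrates that arithmetic on hypothetical per-trial labels; it is not the authors' analysis pipeline.

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical per-trial labels: y_true = 1 if the trial actually failed,
# y_pred = 1 if failure was predicted from the eye-tracking workload measures.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)               # share of true failures that were flagged
accuracy = accuracy_score(y_true, y_pred)  # share of all trials classified correctly
print(f"sensitivity = {sensitivity:.2f}, accuracy = {accuracy:.2f}")
```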
Affiliation(s)
- Qi Zhu: National Institute of Standards and Technology, Boulder, CO, USA
- Jing Du: University of Florida, Gainesville, FL, USA
4. Rahimi AM, Uluç E, Hardon SF, Bonjer HJ, van der Peet DL, Daams F. Training in robotic-assisted surgery: a systematic review of training modalities and objective and subjective assessment methods. Surg Endosc 2024; 38:3547-3555. [PMID: 38814347] [PMCID: PMC11219449] [DOI: 10.1007/s00464-024-10915-7]
Abstract
INTRODUCTION The variety of robotic surgery systems, training modalities, and assessment tools within robotic surgery training is extensive. This systematic review aimed to comprehensively overview different training modalities and assessment methods for teaching and assessing surgical skills in robotic surgery, with a specific focus on comparing objective and subjective assessment methods. METHODS A systematic review was conducted following the PRISMA guidelines. The electronic databases PubMed, EMBASE, and Cochrane were searched from inception until February 1, 2022. Included studies consisted of robotic-assisted surgery training (e.g., box training, virtual reality training, cadaver training, and animal tissue training) with an assessment method (objective or subjective), such as assessment forms, virtual reality scores, peer-to-peer feedback, or time recording. RESULTS The search identified 1591 studies. After abstract screening and full-text examination, 209 studies were identified that focused on robotic surgery training and included an assessment tool. The majority of the studies utilized the da Vinci Surgical System, with dry lab training being the most common approach, followed by the da Vinci Surgical Skills Simulator. The most frequently used assessment methods included simulator scoring systems (e.g., dVSS score) and assessment forms (e.g., GEARS and OSATS). CONCLUSION This systematic review provides an overview of training modalities and assessment methods in robotic-assisted surgery. Dry lab training on the da Vinci Surgical System and training on the da Vinci Skills Simulator are the predominant approaches. However, focused training on tissue handling, manipulation, and force interaction is lacking, particularly given the absence of haptic feedback. Future research should focus on developing universal objective assessment and feedback methods to address these limitations as the field continues to evolve.
Affiliation(s)
- A Masie Rahimi: Department of Surgery, Amsterdam UMC, Vrije Universiteit, Tafelbergweg 47, 1105 BD, Amsterdam, The Netherlands; Amsterdam Skills Centre for Health Sciences, Tafelbergweg 47, 1105 BD, Amsterdam, The Netherlands; Cancer Center Amsterdam, Amsterdam, The Netherlands
- Ezgi Uluç: Department of Surgery, Amsterdam UMC, Vrije Universiteit, Tafelbergweg 47, 1105 BD, Amsterdam, The Netherlands
- Sem F Hardon: Department of Surgery, Amsterdam UMC, Vrije Universiteit, Tafelbergweg 47, 1105 BD, Amsterdam, The Netherlands
- H Jaap Bonjer: Department of Surgery, Amsterdam UMC, Vrije Universiteit, Tafelbergweg 47, 1105 BD, Amsterdam, The Netherlands; Amsterdam Skills Centre for Health Sciences, Tafelbergweg 47, 1105 BD, Amsterdam, The Netherlands; Cancer Center Amsterdam, Amsterdam, The Netherlands
- Donald L van der Peet: Department of Surgery, Amsterdam UMC, Vrije Universiteit, Tafelbergweg 47, 1105 BD, Amsterdam, The Netherlands; Cancer Center Amsterdam, Amsterdam, The Netherlands
- Freek Daams: Department of Surgery, Amsterdam UMC, Vrije Universiteit, Tafelbergweg 47, 1105 BD, Amsterdam, The Netherlands; Cancer Center Amsterdam, Amsterdam, The Netherlands
5. Colcuc C, Miersbach M, Cienfuegos M, Grüneweller N, Vordemvenne T, Wähnert D. Comparison of virtual reality and computed tomography in the preoperative planning of complex tibial plateau fractures. Arch Orthop Trauma Surg 2024; 144:2631-2639. [PMID: 38703213] [PMCID: PMC11211142] [DOI: 10.1007/s00402-024-05348-9]
Abstract
INTRODUCTION Preoperative planning is a critical step in the success of any complex surgery. The purpose of this study is to evaluate the advantage of VR glasses in surgical planning of complex tibial plateau fractures compared to CT planning. MATERIALS AND METHODS Five orthopedic surgeons performed preoperative planning for 30 fractures using either conventional CT slices or VR visualization with a VR headset. Planning was performed in a randomized order with a 3-month interval between planning sessions. A standardized questionnaire assessed planned operative time, planning time, fracture classification and understanding, and surgeons' subjective confidence in surgical planning. RESULTS The mean planned operative time of 156 (SD 47) minutes was significantly lower (p < 0.001) in the VR group than in the CT group (172 min; SD 44). The mean planning time in the VR group was 3.48 min (SD 2.4), 17% longer than in the CT group (2.98 min, SD 1.9; p = 0.027). Relevant parameters influencing planning time were surgeon experience (-0.61 min) and estimated complexity of fracture treatment (+0.65 min). CONCLUSION The use of virtual reality for surgical planning of complex tibial plateau fractures resulted in significantly shorter planned operative time, while planning time was longer compared to CT planning. After VR planning, more surgeons felt (very) well prepared for surgery.
Affiliation(s)
- Christian Colcuc: Bielefeld University, Medical School and University Medical Center OWL, Protestant Hospital of the Bethel Foundation, Department of Trauma and Orthopaedic Surgery, Burgsteig 13, 33617, Bielefeld, Germany
- Marco Miersbach: Bielefeld University, Medical School and University Medical Center OWL, Protestant Hospital of the Bethel Foundation, Department of Trauma and Orthopaedic Surgery, Burgsteig 13, 33617, Bielefeld, Germany
- Miguel Cienfuegos: Bielefeld University, Center for Cognitive Interaction Technology CITEC, Universitätsstraße 25, 33615, Bielefeld, Germany
- Niklas Grüneweller: Bielefeld University, Medical School and University Medical Center OWL, Protestant Hospital of the Bethel Foundation, Department of Trauma and Orthopaedic Surgery, Burgsteig 13, 33617, Bielefeld, Germany
- Thomas Vordemvenne: Bielefeld University, Medical School and University Medical Center OWL, Protestant Hospital of the Bethel Foundation, Department of Trauma and Orthopaedic Surgery, Burgsteig 13, 33617, Bielefeld, Germany
- Dirk Wähnert: Bielefeld University, Medical School and University Medical Center OWL, Protestant Hospital of the Bethel Foundation, Department of Trauma and Orthopaedic Surgery, Burgsteig 13, 33617, Bielefeld, Germany
6. Büter R, Soberanis-Mukul RD, Shankar R, Ruiz Puentes P, Ghazi A, Wu JY, Unberath M. Cognitive effort detection for tele-robotic surgery via personalized pupil response modeling. Int J Comput Assist Radiol Surg 2024; 19:1113-1120. [PMID: 38589579] [DOI: 10.1007/s11548-024-03108-z]
Abstract
PURPOSE Gaze tracking and pupillometry are established proxies for cognitive load, giving insights into a user's mental effort. In tele-robotic surgery, knowing a user's cognitive load can inspire novel human-machine interaction designs, fostering contextual surgical assistance systems and personalized training programs. While pupillometry-based methods for estimating cognitive effort have been proposed, their application in surgery is limited by the pupil's sensitivity to brightness changes, which can mask the pupil's response to cognitive load. Thus, methods considering pupil and brightness conditions are essential for detecting cognitive effort in unconstrained scenarios. METHODS To contend with this challenge, we introduce a personalized pupil response model integrating pupil and brightness-based features. Discrepancies between predicted and measured pupil diameter indicate dilations due to non-brightness-related sources, i.e., cognitive effort. Combined with gaze entropy, it can detect cognitive load using a random forest classifier. To test our model, we perform a user study with the da Vinci Research Kit, where 17 users perform pick-and-place tasks in addition to auditory tasks known to generate cognitive effort responses. RESULTS We compare our method to two baselines (BCPD and CPD), demonstrating favorable performance in varying brightness conditions. Our method achieves an average true positive rate of 0.78, outperforming the baselines (0.57 and 0.64). CONCLUSION We present a personalized brightness-aware model for cognitive effort detection able to operate under unconstrained brightness conditions, comparing favorably to competing approaches, contributing to the advancement of cognitive effort detection in tele-robotic surgery. Future work will consider alternative learning strategies, handling the difficult positive-unlabeled scenario in user studies, where only some positive and no negative events are reliably known.
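The abstract describes the pipeline only at a high level, so the sketch below illustrates the general idea under stated assumptions: a per-user regression predicts pupil diameter from brightness, the residual dilation is treated as a cognitive-effort signal, and a random forest combines it with gaze entropy. The linear brightness model, the feature set, and the toy training labels are assumptions for illustration, not the published model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# 1) Personalized pupil response model: per-user calibration data relate scene
#    brightness to pupil diameter (mm) in the absence of added cognitive demand.
brightness_cal = rng.uniform(0.1, 1.0, 500)
pupil_cal = 5.0 - 2.5 * brightness_cal + rng.normal(0, 0.1, 500)
pupil_model = LinearRegression().fit(brightness_cal.reshape(-1, 1), pupil_cal)

def cognitive_dilation(brightness, pupil):
    """Residual dilation not explained by brightness (proxy for cognitive effort)."""
    predicted = pupil_model.predict(np.asarray(brightness, dtype=float).reshape(-1, 1))
    return np.asarray(pupil, dtype=float) - predicted

# 2) Classify analysis windows as high/low cognitive effort from the residual
#    dilation plus gaze entropy, using a random forest trained on toy data.
X_train = rng.normal(0, 1, (200, 2))                       # [residual, gaze entropy]
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0.5).astype(int)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

window = np.array([[cognitive_dilation([0.4], [4.6]).item(), 2.1]])
print("high cognitive effort" if clf.predict(window)[0] else "low cognitive effort")
```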
Affiliation(s)
- Regine Büter: Department of Computer Science, Johns Hopkins University, 3400 N Charles St, Baltimore, 21218, MD, USA
- Roger D Soberanis-Mukul: Department of Computer Science, Johns Hopkins University, 3400 N Charles St, Baltimore, 21218, MD, USA
- Rohit Shankar: Department of Computer Science, Johns Hopkins University, 3400 N Charles St, Baltimore, 21218, MD, USA
- Paola Ruiz Puentes: Department of Computer Science, Johns Hopkins University, 3400 N Charles St, Baltimore, 21218, MD, USA
- Ahmed Ghazi: Department of Urology, Johns Hopkins Medical Institute, 600 N Wolfe St, Baltimore, 21287, MD, USA
- Jie Ying Wu: Department of Computer Science, Vanderbilt University, 2201 West End Ave, Nashville, 37235, TN, USA
- Mathias Unberath: Department of Computer Science, Johns Hopkins University, 3400 N Charles St, Baltimore, 21218, MD, USA
7. Biondi FN, Jajo N. On the impact of on-road partially-automated driving on drivers' cognitive workload and attention allocation. Accid Anal Prev 2024; 200:107537. [PMID: 38471237] [DOI: 10.1016/j.aap.2024.107537]
Abstract
The use of partially-automated or SAE level-2 vehicles is expected to change the role of the human driver from operator to supervisor, which may have an effect on the driver's workload and visual attention. In this study, 30 Ontario drivers operated a vehicle in manual and partially-automated mode. Cognitive workload was measured by means of the Detection Response Task, and visual attention was measured by means of coding glances on and off the forward roadway. No difference in cognitive workload was found between driving modes. However, drivers spent less time glancing at the forward roadway, and more time glancing at the vehicle's touchscreen. These data add to our knowledge of how vehicle automation affects cognitive workload and attention allocation, and show potential safety risks associated with the adoption of partially-automated driving.
Affiliation(s)
- Francesco N Biondi: Human Systems Lab, University of Windsor, Windsor, Ontario, Canada; Applied Cognition Lab, University of Utah, Salt Lake City, UT, United States
- Noor Jajo: Human Systems Lab, University of Windsor, Windsor, Ontario, Canada
8. Park J, Seo B, Jeong Y, Park I. A Review of Recent Advancements in Sensor-Integrated Medical Tools. Adv Sci (Weinh) 2024; 11:e2307427. [PMID: 38460177] [PMCID: PMC11132050] [DOI: 10.1002/advs.202307427]
Abstract
A medical tool is a general instrument intended for use in the prevention, diagnosis, and treatment of diseases in humans or other animals. Nowadays, sensors are widely employed in medical tools to analyze or quantify disease-related parameters for the diagnosis and monitoring of patients' diseases. Recent explosive advancements in sensor technologies have extended the integration and application of sensors in medical tools by providing more versatile in vivo sensing capabilities. These unique sensing capabilities, especially for medical tools for surgery or medical treatment, are getting more attention owing to the rapid growth of minimally invasive surgery. In this review, recent advancements in sensor-integrated medical tools are presented, and their necessity, use, and examples are comprehensively introduced. Specifically, medical tools often utilized for medical surgery or treatment, for example, medical needles, catheters, robotic surgery, sutures, endoscopes, and tubes, are covered, and in-depth discussions about the working mechanism used for each sensor-integrated medical tool are provided.
Affiliation(s)
- Jaeho Park: Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, South Korea
- Bokyung Seo: Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, South Korea
- Yongrok Jeong: Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, South Korea; Radioisotope Research Division, Korea Atomic Energy Research Institute (KAERI), Daejeon 34057, South Korea
- Inkyu Park: Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, South Korea
9. Yang J, Barragan JA, Farrow JM, Sundaram CP, Wachs JP, Yu D. An Adaptive Human-Robotic Interaction Architecture for Augmenting Surgery Performance Using Real-Time Workload Sensing-Demonstration of a Semi-autonomous Suction Tool. Hum Factors 2024; 66:1081-1102. [PMID: 36367971] [DOI: 10.1177/00187208221129940]
Abstract
OBJECTIVE This study developed and evaluated a mental workload-based adaptive automation (MWL-AA) system that monitors surgeon cognitive load and assists during cognitively demanding tasks in robotic-assisted surgery (RAS). BACKGROUND The introduction of RAS can overwhelm operators. Precise, continuous assessment of human mental workload (MWL) states is important to identify when interventions should be delivered to moderate operators' MWL. METHOD The MWL-AA presented in this study was a semi-autonomous suction tool. The first experiment recruited ten participants to perform surgical tasks under different MWL levels. The physiological responses were captured and used to develop a real-time multi-sensing model for MWL detection. The second experiment evaluated the effectiveness of the MWL-AA, where nine brand-new surgical trainees performed the surgical task with and without the MWL-AA. Mixed-effect models were used to compare task performance and objectively and subjectively measured MWL. RESULTS The proposed system predicted high-MWL hemorrhage conditions with an accuracy of 77.9%. For the MWL-AA evaluation, the surgeons' gaze behaviors and brain activities suggested lower perceived MWL with the MWL-AA than without. This was further supported by lower self-reported MWL and better task performance in the task condition with the MWL-AA. CONCLUSION An MWL-AA system can reduce surgeons' workload and improve performance in a high-stress hemorrhaging scenario. Findings highlight the potential of utilizing MWL-AA to enhance collaboration between the autonomous system and surgeons. Developing a robust and personalized MWL-AA is the first step toward additional use cases in future studies. APPLICATION The proposed framework can be expanded and applied to more complex environments to improve human-robot collaboration.
Affiliation(s)
- Jing Yang: School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA
- Jason Michael Farrow: Department of Urology, Indiana University School of Medicine, Indianapolis, Indiana, USA
- Chandru P Sundaram: Department of Urology, Indiana University School of Medicine, Indianapolis, Indiana, USA
- Juan P Wachs: School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA
- Denny Yu: School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA
10. Wong SW, Crowe P. Cognitive ergonomics and robotic surgery. J Robot Surg 2024; 18:110. [PMID: 38441814] [PMCID: PMC10914881] [DOI: 10.1007/s11701-024-01852-7]
Abstract
Cognitive ergonomics refers to mental resources and is associated with memory, sensorimotor response, and perception. Cognitive workload (CWL) involves use of working memory (mental strain and effort) to complete a task. Cognitive load has been divided into three components: intrinsic (dependent on complexity and expertise), extraneous (the presentation of tasks), and germane (the learning process). The effect of robotic surgery on CWL is complex because the postural, visualisation, and manipulation ergonomic benefits for the surgeon may be offset by the disadvantages associated with team separation and reduced situation awareness. Physical fatigue and workflow disruptions have a negative impact on CWL. Intraoperative CWL can be measured subjectively post hoc with the use of self-reported instruments or objectively with real-time physiological response metrics. Cognitive training can play a crucial role in the process of skill acquisition during the three stages of motor learning: from cognitive to integrative and then to autonomous. Mentorship, technical practice, and watching videos are the most common traditional cognitive training methods in surgery. Cognitive training can also occur with computer-based cognitive simulation, mental rehearsal, and cognitive task analysis. Assessment of cognitive skills may offer a more effective way to differentiate robotic expertise level than automated performance (tool-based) metrics.
Affiliation(s)
- Shing Wai Wong: Department of General Surgery, Prince of Wales Hospital, Sydney, NSW, Australia; School of Clinical Medicine, The University of New South Wales, Randwick Campus, Sydney, NSW, Australia
- Philip Crowe: Department of General Surgery, Prince of Wales Hospital, Sydney, NSW, Australia; School of Clinical Medicine, The University of New South Wales, Randwick Campus, Sydney, NSW, Australia
11. Ahmadi N, Sasangohar F, Yang J, Yu D, Danesh V, Klahn S, Masud F. Quantifying Workload and Stress in Intensive Care Unit Nurses: Preliminary Evaluation Using Continuous Eye-Tracking. Hum Factors 2024; 66:714-728. [PMID: 35511206] [DOI: 10.1177/00187208221085335]
Abstract
OBJECTIVE (1) To assess mental workloads of intensive care unit (ICU) nurses in 12-hour working shifts (days and nights) using eye movement data; (2) to explore the impact of stress on the ocular metrics of nurses performing patient care in the ICU. BACKGROUND Prior studies have employed workload scoring systems or accelerometer data to assess ICU nurses' workload. This is the first naturalistic attempt to explore nurses' mental workload using eye movement data. METHODS Tobii Pro Glasses 2 eye-tracking and Empatica E4 devices were used to collect eye movement and physiological data from 15 nurses during 12-hour shifts (252 observation hours). We used mixed-effect models and an ordinal regression model with a random effect to analyze the changes in eye movement metrics during high-stress episodes. RESULTS While the cadence and characteristics of nurse workload can vary between day shift and night shift, no significant difference in eye movement values was detected. However, eye movement metrics showed that the initial handoff period of nursing shifts has a higher mental workload compared with other times. Analysis of ocular metrics showed that stress is positively associated with the number of eye fixations and gaze entropy, but negatively correlated with saccade duration and pupil diameter. CONCLUSION Eye-tracking technology can be used to assess the temporal variation of stress and associated changes in mental workload in the ICU environment. A real-time system could be developed for monitoring stress and workload for intervention development.
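A mixed-effects analysis of the kind described can be sketched as follows: each nurse contributes repeated observations, so the model includes a per-nurse random intercept while testing whether an ocular metric differs between stress and non-stress windows. The column names and toy data are hypothetical; the authors' exact model specification is not given in the abstract.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical observations: one row per nurse per time window, with an
# ocular metric and a binary flag for physiologically detected stress.
df = pd.DataFrame({
    "nurse_id":     [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "stress":       [0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0],
    "gaze_entropy": [2.4, 3.1, 3.0, 2.2, 2.3, 2.9, 2.5, 3.2, 2.4, 3.1, 3.0, 2.3],
})

# Linear mixed-effects model: does gaze entropy differ between stress and
# non-stress windows, allowing each nurse her own baseline (random intercept)?
model = smf.mixedlm("gaze_entropy ~ stress", data=df, groups=df["nurse_id"])
result = model.fit()
print(result.summary())
```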
Affiliation(s)
- Nima Ahmadi: Center for Outcomes Research, Houston Methodist, Houston, TX, USA
- Farzan Sasangohar: Center for Outcomes Research, Houston Methodist, Houston, TX, USA; Industrial and Systems Engineering, Texas A&M University, College Station, TX, USA
- Jing Yang: School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
- Denny Yu: School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
- Valerie Danesh: Baylor Scott & White Health, Center for Applied Health Research, Dallas, TX, USA; University of Texas at Austin, School of Nursing, Austin, TX, USA
- Steven Klahn: Center for Critical Care, Houston Methodist Hospital, Houston, TX, USA
- Faisal Masud: Center for Critical Care, Houston Methodist Hospital, Houston, TX, USA
12. Shafiei SB, Shadpour S, Sasangohar F, Mohler JL, Attwood K, Jing Z. Development of performance and learning rate evaluation models in robot-assisted surgery using electroencephalography and eye-tracking. NPJ Sci Learn 2024; 9:3. [PMID: 38242909] [PMCID: PMC10799032] [DOI: 10.1038/s41539-024-00216-y]
Abstract
The existing performance evaluation methods in robot-assisted surgery (RAS) are mainly subjective, costly, and affected by shortcomings such as the inconsistency of results and dependency on the raters' opinions. The aim of this study was to develop models for an objective evaluation of performance and rate of learning RAS skills while practicing surgical simulator tasks. The electroencephalogram (EEG) and eye-tracking data were recorded from 26 subjects while performing Tubes, Suture Sponge, and Dots and Needles tasks. Performance scores were generated by the simulator program. The functional brain networks were extracted using EEG data and coherence analysis. Then these networks, along with community detection analysis, facilitated the extraction of average search information and average temporal flexibility features at 21 Brodmann areas (BA) and four frequency bands. Twelve eye-tracking features were extracted and used to develop linear random intercept models for performance evaluation and multivariate linear regression models for the evaluation of the learning rate. Results showed that subject-wise standardization of features improved the R² of the models. Average pupil diameter and rate of saccade were associated with performance in the Tubes task (multivariate analysis; p-value = 0.01 and p-value = 0.04, respectively). Entropy of pupil diameter was associated with performance in the Dots and Needles task (multivariate analysis; p-value = 0.01). Average temporal flexibility and search information in several BAs and frequency bands were associated with performance and rate of learning. The models may be used to objectify performance and learning rate evaluation in RAS once validated with a broader sample size and tasks.
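Subject-wise standardization, which the authors report improved model R², amounts to z-scoring each feature within each subject before model fitting, so that models see relative within-person changes rather than between-subject baseline differences. A minimal sketch with hypothetical column names:

```python
import pandas as pd

# Hypothetical feature table: one row per trial, with the performing subject,
# an eye-tracking feature, and the simulator-generated performance score.
df = pd.DataFrame({
    "subject":        ["s1", "s1", "s1", "s2", "s2", "s2"],
    "pupil_diameter": [3.9, 4.2, 4.6, 2.8, 3.0, 3.4],
    "performance":    [61, 70, 83, 55, 64, 78],
})

def zscore_within_subject(series):
    """Standardize a feature inside each subject (mean 0, SD 1 per subject)."""
    return (series - series.mean()) / series.std(ddof=0)

df["pupil_z"] = df.groupby("subject")["pupil_diameter"].transform(zscore_within_subject)
print(df)
```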
Affiliation(s)
- Somayeh B Shafiei: Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Saeed Shadpour: Department of Animal Biosciences, University of Guelph, Guelph, Ontario, N1G 2W1, Canada
- Farzan Sasangohar: Department of Industrial and Systems Engineering, Texas A&M University, College Station, TX, 77843, USA
- James L Mohler: Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Kristopher Attwood: Department of Biostatistics and Bioinformatics, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Zhe Jing: Department of Biostatistics and Bioinformatics, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
13. Pluchino P, Pernice GFA, Nenna F, Mingardi M, Bettelli A, Bacchin D, Spagnolli A, Jacucci G, Ragazzon A, Miglioranzi L, Pettenon C, Gamberini L. Advanced workstations and collaborative robots: exploiting eye-tracking and cardiac activity indices to unveil senior workers' mental workload in assembly tasks. Front Robot AI 2023; 10:1275572. [PMID: 38149058] [PMCID: PMC10749956] [DOI: 10.3389/frobt.2023.1275572]
Abstract
Introduction: As a result of Industry 5.0's technological advancements, collaborative robots (cobots) have emerged as pivotal enablers for refining manufacturing processes while re-focusing on humans. However, the successful integration of these cutting-edge tools hinges on a better understanding of human factors when interacting with such new technologies, eventually fostering workers' trust and acceptance and promoting low-fatigue work. This study thus delves into the intricate dynamics of human-cobot interactions by adopting a human-centric view. Methods: With this intent, we targeted senior workers, who often contend with diminishing work capabilities, and we explored the nexus between various human factors and task outcomes during a joint assembly operation with a cobot on an ergonomic workstation. Exploiting a dual-task manipulation to increase the task demand, we measured performance, subjective perceptions, eye-tracking indices, and cardiac activity during the task. Firstly, we provided an overview of the senior workers' perceptions regarding their shared work with the cobot, by measuring technology acceptance, perceived wellbeing, work experience, and the estimated social impact of this technology in the industrial sector. Secondly, we asked whether the considered human factors varied significantly under dual-tasking, thus responding to a higher mental load while working alongside the cobot. Finally, we explored the predictive power of the collected measurements over the number of errors committed at the work task and the participants' perceived workload. Results: The present findings demonstrated that senior workers exhibited strong acceptance and positive experiences with our advanced workstation and the cobot, even under higher mental strain. However, their task performance showed more errors and longer durations during dual-tasking, while eye behavior partially reflected the increased mental demand. Some of the collected indices also showed predictive power over the number of errors committed in the assembly task, although the same did not apply to predicting perceived workload levels. Discussion: Overall, the paper discusses possible applications of these results in the 5.0 manufacturing sector, emphasizing the importance of adopting a holistic human-centered approach to understand the human-cobot complex better.
Affiliation(s)
- Patrik Pluchino: Department of General Psychology, University of Padova, Padova, Italy; Human Inspired Technology (HIT) Research Centre, University of Padova, Padova, Italy
- Federica Nenna: Department of General Psychology, University of Padova, Padova, Italy
- Michele Mingardi: Department of General Psychology, University of Padova, Padova, Italy
- Alice Bettelli: Department of General Psychology, University of Padova, Padova, Italy
- Davide Bacchin: Department of General Psychology, University of Padova, Padova, Italy
- Anna Spagnolli: Department of General Psychology, University of Padova, Padova, Italy; Human Inspired Technology (HIT) Research Centre, University of Padova, Padova, Italy
- Giulio Jacucci: Department of Computer Science, Helsinki Institute for Information Technology, University of Helsinki, Helsinki, Finland
- Luciano Gamberini: Department of General Psychology, University of Padova, Padova, Italy; Human Inspired Technology (HIT) Research Centre, University of Padova, Padova, Italy
14. Anton NE, Cha JS, Hernandez E, Athanasiadis D, Yang J, Zhou G, Stefanidis D, Yu D. Utilizing Eye Tracking to Assess Medical Student Non-Technical Performance During Scenario-Based Simulation: Results of a Pilot Study. Glob Surg Educ 2023; 2:49. [PMID: 38414559] [PMCID: PMC10896278] [DOI: 10.1007/s44186-023-00127-3]
Abstract
Background Non-technical skills (NTS) are essential for safe surgical patient management. However, assessing NTS involves observer-based ratings, which can introduce bias. Eye tracking (ET) has been proposed as an effective method to capture NTS. The purpose of the current study was to determine if ET metrics are associated with NTS performance. Methods Participants wore a mobile ET system and participated in two patient care simulations, where they managed a deteriorating patient. The scenarios featured several challenges to leadership, which were evaluated using a 4-point Likert scale. NTS were evaluated by trained raters using the Non-Technical Skills for Surgeons (NOTSS) scale. ET metrics included percentage of fixations and visits on areas of interest. Results Ten medical students participated. Average visit duration on the patient was negatively correlated with participants' communication and leadership. Average visit duration on the patient's intravenous access was negatively correlated with participants' decision making and situation awareness. Conclusions Our preliminary data suggest that visual attention on the patient was negatively associated with NTS and may indicate poor comprehension of the patient's status due to heightened cognitive load. In future work, researchers and educators should consider using ET to objectively evaluate and provide feedback on trainees' NTS.
Affiliation(s)
- Nicholas E Anton: Department of Surgery, Indiana University School of Medicine, Indianapolis, IN; School of Industrial Engineering, Purdue University, West Lafayette, IN
- Jackie S Cha: Department of Industrial Engineering, Clemson University, Clemson, SC
- Edward Hernandez: Department of Surgery, Indiana University School of Medicine, Indianapolis, IN
- Jing Yang: School of Industrial Engineering, Purdue University, West Lafayette, IN
- Guoyang Zhou: School of Industrial Engineering, Purdue University, West Lafayette, IN
- Denny Yu: School of Industrial Engineering, Purdue University, West Lafayette, IN
15. Bapna T, Valles J, Leng S, Pacilli M, Nataraja RM. Eye-tracking in surgery: a systematic review. ANZ J Surg 2023; 93:2600-2608. [PMID: 37668263] [DOI: 10.1111/ans.18686]
Abstract
BACKGROUND Surgery is constantly evolving with the assistance of rapidly developing novel technology. Eye-tracking devices provide opportunities to monitor the acquisition of surgical skills, gain insight into performance, and enhance surgical practice. The aim of this review was to consolidate the available evidence for the use of eye-tracking in the surgical disciplines. METHODS A systematic literature review was conducted in accordance with PRISMA guidelines. A search of OVID Medline, EMBASE, Cochrane Library, Scopus, and Science Direct was conducted from January 2000 until December 2022. Studies involving eye-tracking in surgical training, assessment, and technical innovation were included in the review. Non-surgical procedures, animal studies, and studies not involving surgical participants were excluded from the review. RESULTS The search returned a total of 12,054 articles, 80 of which were included in the final analysis and review. Seventeen studies involved eye-tracking in surgical training, 48 in surgical assessment, and 20 focused on technical aspects of this technology. Twenty-six different eye-tracking devices were used in the included studies. Metrics such as the number of fixations, duration of fixations, dwell time, and cognitive workload were able to differentiate between novice and expert performance. Eight studies demonstrated the effectiveness of gaze-training for improving surgical skill. CONCLUSION The current literature shows a broad range of utility for a variety of eye-tracking devices in surgery. There remains a lack of standardization for metric parameters and gaze analysis techniques. Further research is required to validate its use, establish reliability, and create uniform practices.
Affiliation(s)
- Tanay Bapna: Department of Paediatric Surgery & Surgical Simulation, Monash Children's Hospital, Melbourne, Victoria, Australia
- John Valles: Department of Paediatric Surgery & Surgical Simulation, Monash Children's Hospital, Melbourne, Victoria, Australia
- Samantha Leng: Department of Paediatric Surgery & Surgical Simulation, Monash Children's Hospital, Melbourne, Victoria, Australia
- Maurizio Pacilli: Department of Paediatric Surgery & Surgical Simulation, Monash Children's Hospital, Melbourne, Victoria, Australia; Departments of Paediatrics & Surgery, School of Clinical Sciences, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Victoria, Australia
- Ramesh Mark Nataraja: Department of Paediatric Surgery & Surgical Simulation, Monash Children's Hospital, Melbourne, Victoria, Australia; Departments of Paediatrics & Surgery, School of Clinical Sciences, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Victoria, Australia
16. Srinivas S, Young AJ. Machine Learning and Artificial Intelligence in Surgical Research. Surg Clin North Am 2023; 103:299-316. [PMID: 36948720] [DOI: 10.1016/j.suc.2022.11.002]
Abstract
Machine learning, a subtype of artificial intelligence, is an emerging field of surgical research dedicated to predictive modeling. From its inception, machine learning has been of interest in medical and surgical research. Built on traditional research metrics for optimal success, avenues of research include diagnostics, prognosis, operative timing, and surgical education, in a variety of surgical subspecialties. Machine learning represents an exciting and developing area of surgical research that will allow for more personalized and comprehensive medical care.
Affiliation(s)
- Shruthi Srinivas: Department of Surgery, The Ohio State University, 370 West 9th Avenue, Columbus, OH 43210, USA
- Andrew J Young: Division of Trauma, Critical Care, and Burn, The Ohio State University, 181 Taylor Avenue, Suite 1102K, Columbus, OH 43203, USA
17. Berges AJ, Vedula SS, Chara A, Hager GD, Ishii M, Malpani A. Eye Tracking and Motion Data Predict Endoscopic Sinus Surgery Skill. Laryngoscope 2023; 133:500-505. [PMID: 35357011] [PMCID: PMC9825109] [DOI: 10.1002/lary.30121]
Abstract
OBJECTIVE Endoscopic surgery has a considerable learning curve due to dissociation of the visual-motor axes, coupled with decreased tactile feedback and mobility. In particular, endoscopic sinus surgery (ESS) lacks objective skill assessment metrics to provide specific feedback to trainees. This study aims to identify summary metrics from eye tracking, endoscope motion, and tool motion to objectively assess surgeons' ESS skill. METHODS In this cross-sectional study, expert and novice surgeons performed ESS tasks of inserting an endoscope and tool into a cadaveric nose, touching an anatomical landmark, and withdrawing the endoscope and tool out of the nose. Tool and endoscope motion were collected using an electromagnetic tracker, and eye gaze was tracked using an infrared camera. Three expert surgeons provided binary assessments of low/high skill. Twenty summary statistics were calculated for eye, tool, and endoscope motion and used in logistic regression models to predict surgical skill. RESULTS Fourteen metrics (10 eye gaze, 2 tool motion, and 2 endoscope motion) were significantly different between surgeons with low and high skill. Models to predict skill for 6/9 ESS tasks had an AUC >0.95. A combined model of all tasks (AUC 0.95, PPV 0.93, NPV 0.89) included metrics from eye tracking data and endoscope motion, indicating that these metrics are transferable across tasks. CONCLUSIONS Eye gaze, endoscope, and tool motion data can provide an objective and accurate measurement of ESS surgical performance. Incorporation of these algorithmic techniques intraoperatively could allow for automated skill assessment for trainees learning endoscopic surgery. LEVEL OF EVIDENCE N/A.
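As a rough illustration of the analysis described (summary metrics fed to logistic regression and judged by AUC), the sketch below fits a standardized logistic regression on synthetic per-attempt metrics and reports a cross-validated AUC. The features, labels, and cross-validation scheme are assumptions, not the authors' protocol.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic per-attempt summary metrics (e.g., fixation count, mean fixation
# duration, endoscope path length, tool jerk); y = 1 marks a high-skill attempt.
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 4))
y = (X[:, 0] - 0.8 * X[:, 2] + rng.normal(0, 0.5, 60) > 0).astype(int)

# Standardize metrics, fit a logistic regression skill classifier, and score each
# attempt with cross-validated probabilities for an honest AUC estimate.
clf = make_pipeline(StandardScaler(), LogisticRegression())
probs = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print(f"cross-validated AUC: {roc_auc_score(y, probs):.2f}")
```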
Affiliation(s)
- Masaru Ishii: Johns Hopkins Department of Otolaryngology–Head and Neck Surgery
18. Takahashi Y, Hakamada K, Morohashi H, Akasaka H, Ebihara Y, Oki E, Hirano S, Mori M. Reappraisal of telesurgery in the era of high-speed, high-bandwidth, secure communications: Evaluation of surgical performance in local and remote environments. Ann Gastroenterol Surg 2023; 7:167-174. [PMID: 36643359] [PMCID: PMC9831893] [DOI: 10.1002/ags3.12611]
Abstract
Aim: Communication and video transmission delays negatively affect telerobotic surgery. Since latency varies with both the communication environment and the robot, both must perform well to realize remote surgery. This study aims to examine the feasibility of telerobotic surgery by validating the communication environment and local/remote robot operation, using secure commercial lines and newly developed robots. Methods: Hirosaki University and Mutsu General Hospital, 150 km apart, were connected via a Medicaroid surgical robot. Ten surgeons performed a simple task remotely using information encoding and decoding. The required bandwidth, delay time, task completion time, number of errors, and image quality were evaluated. Next, 11 surgeons performed a complex task using gallbladder and intestinal models in local/remote environments; round-trip time (RTT), packet loss, time to completion, operator fatigue, operability, and image quality were assessed locally and remotely. Results: Image quality was not so degraded as to affect remote robot operation. Median RTT was 4 msec (2-12), and the added delay was 29 msec. There was no significant difference between local and remote operation in accuracy or number of errors for cholecystectomy or intestinal suturing, or in completion time, surgeon fatigue, or image evaluation. Conclusion: Remote surgery succeeded as well as local surgery, showing that this system has the elemental technology necessary for widespread social implementation.
Affiliation(s)
- Yoshiya Takahashi: Department of Gastroenterological Surgery, Hirosaki University Graduate School of Medicine, Hirosaki, Japan; Committee for Promotion of Remote Surgery Implementation, Japan Surgical Society, Tokyo, Japan
- Kenichi Hakamada: Department of Gastroenterological Surgery, Hirosaki University Graduate School of Medicine, Hirosaki, Japan; Committee for Promotion of Remote Surgery Implementation, Japan Surgical Society, Tokyo, Japan
- Hajime Morohashi: Department of Gastroenterological Surgery, Hirosaki University Graduate School of Medicine, Hirosaki, Japan; Committee for Promotion of Remote Surgery Implementation, Japan Surgical Society, Tokyo, Japan
- Harue Akasaka: Department of Gastroenterological Surgery, Hirosaki University Graduate School of Medicine, Hirosaki, Japan; Committee for Promotion of Remote Surgery Implementation, Japan Surgical Society, Tokyo, Japan
- Yuma Ebihara: Committee for Promotion of Remote Surgery Implementation, Japan Surgical Society, Tokyo, Japan; Department of Gastroenterological Surgery II, Hokkaido University Faculty of Medicine, Sapporo, Japan
- Eiji Oki: Committee for Promotion of Remote Surgery Implementation, Japan Surgical Society, Tokyo, Japan; Department of Surgery and Science, Kyushu University, Fukuoka, Japan
- Satoshi Hirano: Committee for Promotion of Remote Surgery Implementation, Japan Surgical Society, Tokyo, Japan; Department of Gastroenterological Surgery II, Hokkaido University Faculty of Medicine, Sapporo, Japan
- Masaki Mori: Committee for Promotion of Remote Surgery Implementation, Japan Surgical Society, Tokyo, Japan; Tokai University School of Medicine, Isehara, Japan
19. Cai B, Xu N, Duan S, Yi J, Bay BH, Shen F, Hu N, Zhang P, Chen J, Chen C. Eye tracking metrics of orthopedic surgeons with different competency levels who practice simulation-based hip arthroscopic procedures. Heliyon 2022; 8:e12335. [PMID: 36582732] [PMCID: PMC9792746] [DOI: 10.1016/j.heliyon.2022.e12335]
Abstract
Objective This study aimed to investigate the feasibility of using eye tracking data to identify orthopedic trainees' technical proficiency in hip arthroscopic procedures during simulation-based training. Design A cross-sectional study. Setting A simulation-based training session for hip arthroscopy was conducted. Eye tracking devices were used to record participants' eye movements while performing simulated operations. The NASA Task Load Index survey was then used to measure subjective opinions on the perceived workload of the training. Statistical analyses were performed to determine the significance of the eye metrics and survey data. Participants A total of 12 arthroscopic trainees, including resident doctors, junior specialist surgeons, and consultant surgeons from the Department of Orthopedics in five hospitals, participated in this study. They were divided into three subgroups based on their prior clinical experience. Results Significant differences, including those for dwell time, number of fixations, number of saccades, saccade duration, peak velocity of the saccade, and pupil entropy, were observed among the three subgroups. Additionally, there were clear trends in the perceived workload of the simulation-based training based on feedback from the participants. Conclusion Based on this preliminary study, a correlation was identified between the eye tracking metrics and participants' experience levels. Hence, it is feasible to apply eye tracking data as a supplementary objective assessment tool to benchmark the technical proficiency of surgical trainees in hip arthroscopy, and enhance simulation-based training.
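The abstract does not state which statistical tests produced the reported group differences, so the sketch below shows one common way to compare an eye-tracking metric across three competency groups: a non-parametric Kruskal-Wallis test on hypothetical dwell times.

```python
from scipy import stats

# Hypothetical dwell times (seconds) on the target anatomy for each competency group.
residents   = [12.1, 10.4, 13.0, 11.7]
juniors     = [9.2, 8.8, 9.9, 8.5]
consultants = [6.3, 7.0, 5.8, 6.6]

# Non-parametric comparison across the three groups (no normality assumption).
h, p = stats.kruskal(residents, juniors, consultants)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")
```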
Affiliation(s)
- Bohong Cai
- Department of Industrial and Product Design, School of Design, Sichuan Fine Arts Institute, Chongqing, China
| | - Na Xu
- Department of Industrial and Product Design, School of Design, Sichuan Fine Arts Institute, Chongqing, China
| | - Shengfeng Duan
- Department of Industrial and Product Design, School of Design, Sichuan Fine Arts Institute, Chongqing, China
| | - Jiahui Yi
- Department of Industrial and Product Design, School of Design, Sichuan Fine Arts Institute, Chongqing, China
| | - Boon Huat Bay
- Department of Anatomy, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| | - Fangyuan Shen
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, China
| | - Ning Hu
- Department of Orthopedics, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
| | - Peng Zhang
- Department of Orthopedics, Sichuan Province Orthopedic Hospital, Chengdu, China
| | - Jie Chen
- Department of Orthopedics, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China. Corresponding author.
| | - Cheng Chen
- Department of Orthopedics, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China. Corresponding author.
| |
Collapse
|
20
|
Naik R, Kogkas A, Ashrafian H, Mylonas G, Darzi A. The Measurement of Cognitive Workload in Surgery Using Pupil Metrics: A Systematic Review and Narrative Analysis. J Surg Res 2022; 280:258-272. [PMID: 36030601 DOI: 10.1016/j.jss.2022.07.010] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2022] [Revised: 07/09/2022] [Accepted: 07/11/2022] [Indexed: 11/17/2022]
Abstract
INTRODUCTION Increased cognitive workload (CWL) is a well-established entity that can impair surgical performance and increase the likelihood of surgical error. Pupil and gaze tracking data are increasingly being used to measure CWL objectively in surgery. The aim of this review is to summarize and synthesize the existing evidence surrounding this. METHODS A systematic review was undertaken in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. A search of OVID MEDLINE, IEEE Xplore, Web of Science, Google Scholar, APA PsychINFO, and EMBASE was conducted for articles published in English between 1990 and January 2021. In total, 6791 articles were screened and 32 full-text articles were selected based on the inclusion criteria. A narrative analysis was undertaken in view of the heterogeneity of studies. RESULTS Seventy-eight percent of selected studies were deemed high quality. The most frequently studied surgical environment and task were surgical simulation (75%) and performance of laparoscopic skills (56%), respectively. The results demonstrated that the current literature can be broadly categorized into pupil, blink, and gaze metrics used in the assessment of CWL. These can be further categorized according to their use in the context of CWL: (1) direct measurement of CWL (n = 16), (2) determination of expertise level (n = 14), and (3) predictors of performance (n = 2). CONCLUSIONS Eye-tracking data provide a wealth of information; however, there is marked study heterogeneity. Pupil diameter and gaze entropy demonstrate promise in CWL assessment. Future work will entail the use of artificial intelligence in the form of deep learning and the use of a multisensor platform to accurately measure CWL.
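As a concrete illustration of the simplest pupil-based workload proxy considered in this literature, the sketch below computes a baseline-corrected pupil-dilation index. The baseline window, units, and percentage formulation are generic assumptions rather than any specific study's protocol.

import numpy as np

def pupil_workload_index(pupil_task_mm, pupil_baseline_mm):
    """Percent change in mean pupil diameter during a task relative to a resting baseline."""
    base = np.nanmean(pupil_baseline_mm)
    return 100.0 * (np.nanmean(pupil_task_mm) - base) / base

# Toy example: a task-evoked dilation of roughly 4 % over baseline.
rng = np.random.default_rng(0)
baseline = rng.normal(3.00, 0.05, 1000)   # mm, resting (hypothetical values)
task = rng.normal(3.12, 0.08, 1000)       # mm, during a laparoscopic drill (hypothetical)
print(round(pupil_workload_index(task, baseline), 1))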
Collapse
Affiliation(s)
- Ravi Naik
- Department of Surgery and Cancer, St Mary's Hospital, Imperial College London, London, UK; Hamlyn Centre for Robotic Surgery, Institute of Global Health Innovation, Imperial College London, London, UK.
| | - Alexandros Kogkas
- Department of Surgery and Cancer, St Mary's Hospital, Imperial College London, London, UK; Hamlyn Centre for Robotic Surgery, Institute of Global Health Innovation, Imperial College London, London, UK
| | - Hutan Ashrafian
- Department of Surgery and Cancer, St Mary's Hospital, Imperial College London, London, UK
| | - George Mylonas
- Department of Surgery and Cancer, St Mary's Hospital, Imperial College London, London, UK; Hamlyn Centre for Robotic Surgery, Institute of Global Health Innovation, Imperial College London, London, UK
| | - Ara Darzi
- Department of Surgery and Cancer, St Mary's Hospital, Imperial College London, London, UK; Hamlyn Centre for Robotic Surgery, Institute of Global Health Innovation, Imperial College London, London, UK
| |
Collapse
|
21
|
Eye Tracking Use in Surgical Research: A Systematic Review. J Surg Res 2022; 279:774-787. [PMID: 35944332 DOI: 10.1016/j.jss.2022.05.024] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Revised: 03/18/2022] [Accepted: 05/22/2022] [Indexed: 11/20/2022]
Abstract
INTRODUCTION Eye tracking (ET) is a popular tool to study what factors affect the visual behaviour of surgical team members. To our knowledge, there have been no reviews to date that evaluate the broad use of ET in surgical research. This review aims to identify and assess the quality of this evidence, to synthesize how ET can be used to inform surgical practice, and to provide recommendations to improve future ET surgical studies. METHODS In line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, a systematic literature review was conducted. An electronic search was performed in MEDLINE, Cochrane Central, Embase, and Web of Science databases up to September 2020. Included studies used ET to measure the visual behaviour of members of the surgical team during surgery or surgical tasks. The included studies were assessed by two independent reviewers. RESULTS A total of 7614 studies were identified, and 111 were included for data extraction. Eleven applications were identified; the four most common were skill assessment (41%), visual attention assessment (22%), workload measurement (17%), and skills training (10%). A summary was provided of the various ways ET could be used to inform surgical practice, and three areas were identified for the improvement of future ET studies in surgery. CONCLUSIONS This review provided a comprehensive summary of the various applications of ET in surgery and how ET could be used to inform surgical practice, including how to use ET to improve surgical education. The information provided in this review can also aid in the design and conduct of future ET surgical studies.
Collapse
|
22
|
A Perspective Review on Integrating VR/AR with Haptics into STEM Education for Multi-Sensory Learning. ROBOTICS 2022. [DOI: 10.3390/robotics11020041] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
As a result of several governments closing educational facilities in reaction to the COVID-19 pandemic in 2020, almost 80% of the world’s students were not in school for several weeks. Schools and universities are thus increasing their efforts to leverage educational resources and provide possibilities for remote learning. A variety of educational programs, platforms, and technologies are now accessible to support student learning; while these tools are important for society, they are primarily concerned with the dissemination of theoretical material. There is a lack of support for hands-on laboratory work and practical experience. This is particularly important for all disciplines related to science, technology, engineering, and mathematics (STEM), where labs and pedagogical assets must be continuously enhanced in order to provide effective study programs. In this study, we describe a unique perspective to achieving multi-sensory learning through the integration of virtual and augmented reality (VR/AR) with haptic wearables in STEM education. We address the implications of a novel viewpoint on established pedagogical notions. We want to encourage worldwide efforts to make fully immersive, open, and remote laboratory learning a reality.
Collapse
|
23
|
Liu X, Sanchez Perdomo YP, Zheng B, Duan X, Zhang Z, Zhang D. When medical trainees encountering a performance difficulty: evidence from pupillary responses. BMC MEDICAL EDUCATION 2022; 22:191. [PMID: 35305623 PMCID: PMC8934497 DOI: 10.1186/s12909-022-03256-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Accepted: 03/13/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND Medical trainees are required to learn many procedures following instructions to improve their skills. This study aims to investigate the pupillary response of trainees when they encounter moments of performance difficulty (MPD) during skill learning. Detecting moments of performance difficulty is essential for educators to assist trainees when they need it. METHODS Eye motions were recorded while trainees practiced the thoracostomy procedure on a simulation model. To make pupillary data comparable among trainees, we proposed the adjusted pupil size (APS), which normalizes pupil dilation for each trainee over their entire procedure. APS variables, including the APS, maxAPS, minAPS, meanAPS, medianAPS, and max interval indices, were compared between easy and difficult subtasks; the APSs were compared among three different performance situations: the moment of normal performance (MNP), MPD, and the moment of seeking help (MSH). RESULTS The mixed ANOVA revealed that the adjusted pupil size variables, such as the maxAPS, minAPS, meanAPS, and medianAPS, differed significantly between performance situations. Compared to MPD and MNP, pupil size was reduced during MSH. Trainees displayed a smaller cumulative frequency of APS during difficult subtasks compared with easy subtasks. CONCLUSIONS Results from this project suggest that pupil responses can be a good behavioral indicator. This study is part of our research aiming to create an artificial intelligence system for medical trainees that automatically detects performance difficulty and delivers instructional messages using augmented reality technology.
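Since the abstract does not spell out the APS formula, the sketch below shows one plausible per-trainee normalisation in the same spirit: each trainee's pupil trace is rescaled over the whole procedure so values are comparable across trainees, and summary indices are then taken per subtask segment. The min-max scaling and the index names mirror the abstract only loosely and are assumptions for illustration.

import numpy as np

def adjusted_pupil_size(pupil_trace):
    """Rescale one trainee's pupil trace to [0, 1] over the entire procedure (assumed scaling)."""
    pupil_trace = np.asarray(pupil_trace, dtype=float)
    lo, hi = np.nanmin(pupil_trace), np.nanmax(pupil_trace)
    return (pupil_trace - lo) / (hi - lo)

def aps_summary(aps_segment):
    """Summary indices (analogous to maxAPS, minAPS, meanAPS, medianAPS) for one subtask segment."""
    aps_segment = np.asarray(aps_segment, dtype=float)
    return {"maxAPS": float(np.nanmax(aps_segment)),
            "minAPS": float(np.nanmin(aps_segment)),
            "meanAPS": float(np.nanmean(aps_segment)),
            "medianAPS": float(np.nanmedian(aps_segment))}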
Collapse
Affiliation(s)
- Xin Liu
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, AB, T6G 2E1, Canada
- Beijing Key Laboratory of Knowledge Engineering for Materials Science, Beijing, 100083, China
| | - Yerly Paola Sanchez Perdomo
- Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, AB, T6G 2E1, Canada
| | - Bin Zheng
- Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, AB, T6G 2E1, Canada.
- Department of Surgery, Faculty of Medicine and Dentistry, 162 Heritage Medical Research Centre, University of Alberta, 8440 112 St. NW. Edmonton, Alberta, T6G 2E1, Canada.
| | - Xiaoqin Duan
- Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, AB, T6G 2E1, Canada
- Department of Rehabilitation Medicine, Second Hospital of Jilin University, Changchun, Jilin, 130041, China
| | - Zhongshi Zhang
- Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, AB, T6G 2E1, Canada
- Department of Biological Sciences, University of Alberta, Edmonton, AB, T6G 2E9, Canada
| | - Dezheng Zhang
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Beijing Key Laboratory of Knowledge Engineering for Materials Science, Beijing, 100083, China
| |
Collapse
|
24
|
Kirubarajan A, Young D, Khan S, Crasto N, Sobel M, Sussman D. Artificial Intelligence and Surgical Education: A Systematic Scoping Review of Interventions. JOURNAL OF SURGICAL EDUCATION 2022; 79:500-515. [PMID: 34756807 DOI: 10.1016/j.jsurg.2021.09.012] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/12/2021] [Revised: 07/21/2021] [Accepted: 09/16/2021] [Indexed: 06/13/2023]
Abstract
OBJECTIVE To synthesize peer-reviewed evidence related to the use of artificial intelligence (AI) in surgical education. DESIGN We conducted and reported a scoping review according to the standards outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analysis with extension for Scoping Reviews guideline and the fourth edition of the Joanna Briggs Institute Reviewer's Manual. We systematically searched eight interdisciplinary databases including MEDLINE-Ovid, ERIC, EMBASE, CINAHL, Web of Science: Core Collection, Compendex, Scopus, and IEEE Xplore. Databases were searched from inception until the date of search on April 13, 2021. SETTING/PARTICIPANTS We only examined original, peer-reviewed interventional studies that self-described as AI interventions, focused on medical education, and were relevant to surgical trainees (defined as medical or dental students, postgraduate residents, or surgical fellows) within the title and abstract (see Table 2). Animal, cadaveric, and in vivo studies were not eligible for inclusion. RESULTS After systematically searching eight databases and 4255 citations, our scoping review identified 49 studies relevant to artificial intelligence in surgical education. We found diverse interventions related to the evaluation of surgical competency, personalization of surgical education, and improvement of surgical education materials across surgical specialties. Many studies used existing surgical education materials, such as the Objective Structured Assessment of Technical Skills framework or the JHU-ISI Gesture and Skill Assessment Working Set database. Though most studies did not provide outcomes related to implementation in medical schools (such as cost-effectiveness analyses or trainee feedback), there are numerous promising interventions. In particular, many studies noted high accuracy in the objective characterization of surgical skill sets. These interventions could be further used to identify at-risk surgical trainees or evaluate teaching methods. CONCLUSIONS There are promising applications for AI in surgical education, particularly for the assessment of surgical competencies, though further evidence is needed regarding implementation and applicability.
Collapse
Affiliation(s)
| | - Dylan Young
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada
| | - Shawn Khan
- Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
| | - Noelle Crasto
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada
| | - Mara Sobel
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada; Institute for Biomedical Engineering, Science and Technology (iBEST) at Ryerson University and St. Michael's Hospital, Toronto, Ontario, Canada
| | - Dafna Sussman
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada; Institute for Biomedical Engineering, Science and Technology (iBEST) at Ryerson University and St. Michael's Hospital, Toronto, Ontario, Canada; Department of Obstetrics and Gynaecology, University of Toronto, Toronto, Ontario, Canada; The Keenan Research Centre for Biomedical Science, St. Michael's Hospital, Toronto, Ontario, Canada
| |
Collapse
|
25
|
Cha JS, Yu D. Objective Measures of Surgeon Non-Technical Skills in Surgery: A Scoping Review. HUMAN FACTORS 2022; 64:42-73. [PMID: 33682476 DOI: 10.1177/0018720821995319] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
OBJECTIVE The purpose of this study was to identify, synthesize, and discuss objective behavioral or physiological metrics of surgeons' nontechnical skills (NTS) in the literature. BACKGROUND NTS, or interpersonal or cognitive skills, have been identified to contribute to safe and efficient surgical performance; however, current assessments are subjective, checklist-based tools. Intraoperative skill evaluation, such as technical skills, has been previously utilized as an objective measure to address such limitations. METHODS Five databases in engineering, behavioral science, and medicine were searched following PRISMA reporting guidelines. Eligibility criteria included studies with NTS objective measurements, surgeons, and took place within simulated or live operations. RESULTS Twenty-three articles were included in this review. Objective metrics included communication metrics and measures from physiological responses such as changes in brain activation and motion of the eye. Frequencies of content-coded communication in surgery were utilized in 16 studies and were associated with not only the communication construct but also cognitive constructs of situation awareness and decision making. This indicates the underlying importance of communication in evaluating the NTS constructs. To synthesize the scoped literature, a framework based on the one-way communication model was used to map the objective measures to NTS constructs. CONCLUSION Objective NTS measurement of surgeons is still preliminary, and future work on leveraging objective metrics in parallel with current assessment tools is needed. APPLICATION Findings from this work identify objective NTS metrics for measurement applications in a surgical environment.
Collapse
Affiliation(s)
| | - Denny Yu
- Purdue University, Indiana, USA
| |
Collapse
|
26
|
Tolvanen O, Elomaa AP, Itkonen M, Vrzakova H, Bednarik R, Huotarinen A. Eye-Tracking Indicators of Workload in Surgery: A Systematic Review. J INVEST SURG 2022; 35:1340-1349. [PMID: 35038963 DOI: 10.1080/08941939.2021.2025282] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
Background: Eye tracking is a powerful tool for unobtrusive and real-time assessment of workload in clinical settings. Before complex eye-tracking-derived surrogates can be proactively utilized to improve surgical safety, their indications, validity, and reliability require careful evaluation. Methods: We conducted a systematic review of the literature from 2010 to 2020 according to PRISMA guidelines. A search of the PubMed, Cochrane, Scopus, Web of Science, PsycInfo, and Google Scholar databases was conducted in July 2020 using the following query: ("eye tracking" OR "gaze tracking") AND (surgery OR surgical OR operative OR intraoperative) AND (workload OR stress). Short papers, non-peer-reviewed papers, and papers in which eye-tracking methodology was not used to investigate workload or stress factors in surgery were omitted. Results: A total of 17 studies were identified as eligible for this review. Most of the studies (n = 15) measured workload in a simulated setting. Task difficulty and expertise were the most studied factors. Studies consistently showed that surgeons' eye movements, such as pupil responses, gaze patterns, and blinks, were associated with the level of perceived workload. However, differences between measurements in the operating room and in simulated environments have been found. Conclusion: Pupil responses, blink rate, and gaze indices are valid indicators of workload. However, the effect of distractions and non-technical factors on workload is an underrepresented aspect of the literature, even though these are recognized as underlying factors in successful surgery.
Collapse
Affiliation(s)
- Otto Tolvanen
- School of Medicine, University of Eastern Finland, Kuopio, Finland
| | - Antti-Pekka Elomaa
- Microsurgery Training Center, Kuopio University Hospital, Kuopio, Finland; Neurosurgery of KUH NeuroCenter, Kuopio University Hospital, Kuopio, Finland
| | - Matti Itkonen
- Center of Brain Science (CBS), CBS-TOYOTA Collaboration Center, RIKEN, Nagoya, Japan
| | - Hana Vrzakova
- Microsurgery Training Center, Kuopio University Hospital, Kuopio, Finland
| | - Roman Bednarik
- School of Computing, University of Eastern Finland, Kuopio, Finland
| | - Antti Huotarinen
- School of Computing, University of Eastern Finland, Kuopio, Finland
| |
Collapse
|
27
|
Bilgic E, Gorgy A, Yang A, Cwintal M, Ranjbar H, Kahla K, Reddy D, Li K, Ozturk H, Zimmermann E, Quaiattini A, Abbasgholizadeh-Rahimi S, Poenaru D, Harley JM. Exploring the roles of artificial intelligence in surgical education: A scoping review. Am J Surg 2021; 224:205-216. [PMID: 34865736 DOI: 10.1016/j.amjsurg.2021.11.023] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Revised: 11/19/2021] [Accepted: 11/22/2021] [Indexed: 01/02/2023]
Abstract
BACKGROUND Technology-enhanced teaching and learning, including Artificial Intelligence (AI) applications, has started to evolve in surgical education. Hence, the purpose of this scoping review is to explore the current and future roles of AI in surgical education. METHODS Nine bibliographic databases were searched from January 2010 to January 2021. Full-text articles were included if they focused on AI in surgical education. RESULTS Out of 14,008 unique sources of evidence, 93 were included. Out of 93, 84 were conducted in the simulation setting, and 89 targeted technical skills. Fifty-six studies focused on skills assessment/classification, and 36 used multiple AI techniques. Also, increasing sample size, having balanced data, and using AI to provide feedback were major future directions mentioned by authors. CONCLUSIONS AI can help optimize the education of trainees and our results can help educators and researchers identify areas that need further investigation.
Collapse
Affiliation(s)
- Elif Bilgic
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Andrew Gorgy
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Alison Yang
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Michelle Cwintal
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Hamed Ranjbar
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Kalin Kahla
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Dheeksha Reddy
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Kexin Li
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Helin Ozturk
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Eric Zimmermann
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Andrea Quaiattini
- Schulich Library of Physical Sciences, Life Sciences, and Engineering, McGill University, Canada; Institute of Health Sciences Education, McGill University, Montreal, Quebec, Canada
| | - Samira Abbasgholizadeh-Rahimi
- Department of Family Medicine, McGill University, Montreal, Quebec, Canada; Department of Electrical and Computer Engineering, McGill University, Montreal, Canada; Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Canada; Mila Quebec AI Institute, Montreal, Canada
| | - Dan Poenaru
- Institute of Health Sciences Education, McGill University, Montreal, Quebec, Canada; Department of Pediatric Surgery, McGill University, Canada
| | - Jason M Harley
- Department of Surgery, McGill University, Montreal, Quebec, Canada; Institute of Health Sciences Education, McGill University, Montreal, Quebec, Canada; Research Institute of the McGill University Health Centre, Montreal, Quebec, Canada; Steinberg Centre for Simulation and Interactive Learning, McGill University, Montreal, Quebec, Canada.
| |
Collapse
|
28
|
Czerniak JN, Schierhorst N, Brandl C, Mertens A, Schwalm M, Nitsch V. A meta-analytic review of the reliability of the Index of Cognitive Activity concerning task-evoked cognitive workload and light influences. Acta Psychol (Amst) 2021; 220:103402. [PMID: 34506977 DOI: 10.1016/j.actpsy.2021.103402] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Revised: 08/04/2021] [Accepted: 08/19/2021] [Indexed: 11/16/2022] Open
Abstract
The Index of Cognitive Activity (ICA) was introduced as a promising pupillary workload measure for field investigations since, unlike pupil dilation, it is not affected by illumination. Recent studies have investigated the ICA for task-evoked cognitive workload with contradictory findings. However, few studies investigated the influence of illumination on the ICA. Therefore, to examine inconsistencies regarding the reliability for workload measurement and the effects of light, a meta-analysis was conducted based on a structured literature review. The meta-analysis considered k = 14 studies with a total sample size of N = 751 participants. Results showed significant effects of workload (r = 0.61) and light (r = 0.45) on the ICA. Since moderating effects were found for several between-study differences, it seems likely that different cognitive processes and settings affect the indicator and should be considered in empirical investigations. According to the findings, the ICA is a reliable indicator of task-evoked workload. However, light influences were found, which indicates that evidence-based conclusions regarding the ICA's practical applicability require further research.
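To make the pooling step concrete, here is a minimal sketch of how correlation effect sizes such as those reported above can be combined under a DerSimonian-Laird random-effects model after Fisher z-transformation. This is a generic meta-analytic recipe, not the authors' code, and the example correlations and sample sizes are made up.

import numpy as np

def pooled_correlation(r, n):
    """Random-effects pooled correlation from per-study r values and sample sizes."""
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)                        # Fisher z-transform
    v = 1.0 / (n - 3.0)                      # within-study variance of z
    w = 1.0 / v
    z_fixed = np.sum(w * z) / np.sum(w)
    q = np.sum(w * (z - z_fixed) ** 2)       # heterogeneity statistic Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(r) - 1)) / c)  # between-study variance
    w_re = 1.0 / (v + tau2)
    return float(np.tanh(np.sum(w_re * z) / np.sum(w_re)))   # back-transform to r

# Hypothetical per-study correlations between the ICA and workload.
print(round(pooled_correlation([0.55, 0.70, 0.48, 0.62], [40, 60, 35, 80]), 2))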
Collapse
Affiliation(s)
- Julia N Czerniak
- Institute of Industrial Engineering and Ergonomics (IAW), RWTH Aachen University, Eilfschornsteinstraße 18, 52062 Aachen, Germany.
| | - Nikolas Schierhorst
- Institute of Industrial Engineering and Ergonomics (IAW), RWTH Aachen University, Eilfschornsteinstraße 18, 52062 Aachen, Germany.
| | - Christopher Brandl
- Institute of Industrial Engineering and Ergonomics (IAW), RWTH Aachen University, Eilfschornsteinstraße 18, 52062 Aachen, Germany.
| | - Alexander Mertens
- Institute of Industrial Engineering and Ergonomics (IAW), RWTH Aachen University, Eilfschornsteinstraße 18, 52062 Aachen, Germany.
| | - Maximilian Schwalm
- Institute of Highway Engineering (ISAC), RWTH Aachen University, Mies-van-der-Rohe-Straße 1, 52074 Aachen, Germany.
| | - Verena Nitsch
- Institute of Industrial Engineering and Ergonomics (IAW), RWTH Aachen University, Eilfschornsteinstraße 18, 52062 Aachen, Germany.
| |
Collapse
|
29
|
Chen IHA, Ghazi A, Sridhar A, Stoyanov D, Slack M, Kelly JD, Collins JW. Evolving robotic surgery training and improving patient safety, with the integration of novel technologies. World J Urol 2021; 39:2883-2893. [PMID: 33156361 PMCID: PMC8405494 DOI: 10.1007/s00345-020-03467-7] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2020] [Accepted: 09/21/2020] [Indexed: 12/18/2022] Open
Abstract
INTRODUCTION Robot-assisted surgery is becoming increasingly adopted by multiple surgical specialties. There is evidence of inherent risks of utilising new technologies that are unfamiliar early in the learning curve. The development of standardised and validated training programmes is crucial to deliver safe introduction. In this review, we aim to evaluate the current evidence and opportunities to integrate novel technologies into modern digitalised robotic training curricula. METHODS A systematic literature review of the current evidence for novel technologies in surgical training was conducted online and relevant publications and information were identified. Evaluation was made on how these technologies could further enable digitalisation of training. RESULTS Overall, the quality of available studies was found to be low with current available evidence consisting largely of expert opinion, consensus statements and small qualitative studies. The review identified that there are several novel technologies already being utilised in robotic surgery training. There is also a trend towards standardised validated robotic training curricula. Currently, the majority of the validated curricula do not incorporate novel technologies and training is delivered with more traditional methods that includes centralisation of training services with wet laboratories that have access to cadavers and dedicated training robots. CONCLUSIONS Improvements to training standards and understanding performance data have good potential to significantly lower complications in patients. Digitalisation automates data collection and brings data together for analysis. Machine learning has potential to develop automated performance feedback for trainees. Digitalised training aims to build on the current gold standards and to further improve the 'continuum of training' by integrating PBP training, 3D-printed models, telementoring, telemetry and machine learning.
Collapse
Affiliation(s)
- I-Hsuan Alan Chen
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, University College London, London, UK.
- Department of Surgery, Division of Urology, Kaohsiung Veterans General Hospital, No. 386, Dazhong 1st Rd., Zuoying District, Kaohsiung, 81362, Taiwan.
- Wellcome/ESPRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK.
| | - Ahmed Ghazi
- Department of Urology, Simulation Innovation Laboratory, University of Rochester, New York, USA
| | - Ashwin Sridhar
- Division of Uro-Oncology, University College London Hospital, London, UK
| | - Danail Stoyanov
- Wellcome/ESPRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
| | | | - John D Kelly
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, University College London, London, UK
- Wellcome/ESPRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
- Division of Uro-Oncology, University College London Hospital, London, UK
| | - Justin W Collins
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, University College London, London, UK.
- Wellcome/ESPRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK.
- Division of Uro-Oncology, University College London Hospital, London, UK.
| |
Collapse
|
30
|
Walsh GS. Visuomotor control dynamics of quiet standing under single and dual task conditions in younger and older adults. Neurosci Lett 2021; 761:136122. [PMID: 34293417 DOI: 10.1016/j.neulet.2021.136122] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2021] [Revised: 06/23/2021] [Accepted: 07/15/2021] [Indexed: 11/16/2022]
Abstract
Visual input facilitates stable postural control; however, ageing alters visual gaze strategies and visual input processing times. Understanding the complex interaction between visual gaze behaviour and the effects of age may inform future interventions to improve postural control in older adults. The purpose of this study was to determine effects of age and dual task on gaze and postural sway dynamics, and the sway-gaze complexity coupling to explore the coupling between sensory input and motor output. Ten older and 10 younger adults performed single and dual task quiet standing while gaze behaviour and centre of mass motion were recorded. The complexity and stability of postural sway, saccade characteristics, visual input duration and complexity of gaze were calculated in addition to sway-gaze coupling quantified by cross-sample entropy. Dual tasking increased complexity and decreased stability of sway with increased gaze complexity and visual input duration, suggesting greater automaticity of sway with greater exploration of the visual field but with longer visual inputs to maintain postural stability in dual task conditions. In addition, older adults had lower complexity and stability of sway than younger adults indicating less automated and stable postural control. Older adults also demonstrated lower gaze complexity, longer visual input durations and greater sway-gaze coupling. These findings suggest older adults adopted a strategy to increase the capacity for visual information input, whilst exploring less of the visual field than younger adults.
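The sway-gaze coupling described above is quantified with cross-sample entropy. The sketch below shows a straightforward implementation for two standardised time series, using the conventional parameters m = 2 and r = 0.2; these defaults, and the toy signals, are assumptions rather than the settings or data of the study.

import numpy as np

def cross_sample_entropy(u, v, m=2, r=0.2):
    """Cross-sample entropy between two series; lower values indicate stronger coupling."""
    u = (np.asarray(u, float) - np.mean(u)) / np.std(u)
    v = (np.asarray(v, float) - np.mean(v)) / np.std(v)
    n = min(len(u), len(v))

    def match_count(length):
        # Compare every template of the given length in u with every template in v.
        tu = np.array([u[i:i + length] for i in range(n - m)])
        tv = np.array([v[j:j + length] for j in range(n - m)])
        dist = np.max(np.abs(tu[:, None, :] - tv[None, :, :]), axis=2)
        return np.sum(dist <= r)

    b = match_count(m)        # matches of length m
    a = match_count(m + 1)    # matches of length m + 1
    return float(-np.log(a / b))

# Toy example with two weakly coupled signals.
rng = np.random.default_rng(1)
sway = rng.normal(size=300)
gaze = 0.5 * sway + 0.5 * rng.normal(size=300)
print(round(cross_sample_entropy(sway, gaze), 2))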
Collapse
Affiliation(s)
- Gregory S Walsh
- Department of Sport, Health Sciences and Social Work, Oxford Brookes University, Oxford, UK.
| |
Collapse
|
31
|
Nagyné Elek R, Haidegger T. Non-Technical Skill Assessment and Mental Load Evaluation in Robot-Assisted Minimally Invasive Surgery. SENSORS (BASEL, SWITZERLAND) 2021; 21:2666. [PMID: 33920087 PMCID: PMC8068868 DOI: 10.3390/s21082666] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Revised: 03/31/2021] [Accepted: 04/08/2021] [Indexed: 01/07/2023]
Abstract
BACKGROUND: Sensor technologies and data collection practices are changing and improving quality metrics across various domains. Surgical skill assessment in Robot-Assisted Minimally Invasive Surgery (RAMIS) is essential for training and quality assurance. The mental workload on the surgeon (such as time criticality, task complexity, distractions) and non-technical surgical skills (including situational awareness, decision making, stress resilience, communication, leadership) may directly influence the clinical outcome of the surgery. METHODS: A literature search in PubMed, Scopus and PsycNet databases was conducted for relevant scientific publications. The standard PRISMA method was followed to filter the search results, including non-technical skill assessment and mental/cognitive load and workload estimation in RAMIS. Publications related to traditional manual Minimally Invasive Surgery were excluded, and also the usability studies on the surgical tools were not assessed. RESULTS: 50 relevant publications were identified for non-technical skill assessment and mental load and workload estimation in the domain of RAMIS. The identified assessment techniques ranged from self-rating questionnaires and expert ratings to autonomous techniques, citing their most important benefits and disadvantages. CONCLUSIONS: Despite the systematic research, only a limited number of articles was found, indicating that non-technical skill and mental load assessment in RAMIS is not a well-studied area. Workload assessment and soft skill measurement do not constitute part of the regular clinical training and practice yet. Meanwhile, the importance of the research domain is clear based on the publicly available surgical error statistics. Questionnaires and expert-rating techniques are widely employed in traditional surgical skill assessment; nevertheless, recent technological development in sensors and Internet of Things-type devices show that skill assessment approaches in RAMIS can be much more profound employing automated solutions. Measurements and especially big data type analysis may introduce more objectivity and transparency to this critical domain as well. SIGNIFICANCE: Non-technical skill assessment and mental load evaluation in Robot-Assisted Minimally Invasive Surgery is not a well-studied area yet; while the importance of this domain from the clinical outcome's point of view is clearly indicated by the available surgical error statistics.
Collapse
Affiliation(s)
- Renáta Nagyné Elek
- Antal Bejczy Center for Intelligent Robotics, University Research and Innovation Center, Óbuda University, 1034 Budapest, Hungary;
- Doctoral School of Applied Informatics and Applied Mathematics, Óbuda University, 1034 Budapest, Hungary
| | - Tamás Haidegger
- Antal Bejczy Center for Intelligent Robotics, University Research and Innovation Center, Óbuda University, 1034 Budapest, Hungary;
- John von Neumann Faculty of Informatics, Óbuda University, 1034 Budapest, Hungary
- Austrian Center for Medical Innovation and Technology, 2700 Wiener Neustadt, Austria
| |
Collapse
|
32
|
Wang X, Blumenthal HJ, Hoffman D, Benda N, Kim T, Perry S, Franklin ES, Roth EM, Hettinger AZ, Bisantz AM. Modeling patient-related workload in the emergency department using electronic health record data. Int J Med Inform 2021; 150:104451. [PMID: 33862507 DOI: 10.1016/j.ijmedinf.2021.104451] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2020] [Revised: 03/29/2021] [Accepted: 03/30/2021] [Indexed: 11/24/2022]
Abstract
INTRODUCTION Understanding and managing clinician workload is important for clinician (nurses, physicians and advanced practice providers) occupational health as well as patient safety. Efforts have been made to develop strategies for managing clinician workload by improving patient assignment. The goal of the current study is to use electronic health record (EHR) data to predict the amount of work that individual patients contribute to clinician workload (patient-related workload). METHODS One month of EHR data was retrieved from an emergency department (ED). A list of workload indicators and five potential workload proxies were extracted from the data. Linear regression and four machine learning classification algorithms were utilized to model the relationship between the indicators and the proxies. RESULTS Linear regression showed that the indicators explained a substantial amount of variance of the proxies (four out of five proxies were modeled with R2 > 0.80). Classification algorithms also showed success in classifying a patient as having high or low task demand based on data from early in the ED visit (e.g. 80 % accurate binary classification with data from the first hour). CONCLUSION The main contribution of this study is demonstrating the potential of using EHR data to predict patient-related workload automatically in the ED. The predicted workload can potentially help in managing clinician workload by supporting decisions around the assignment of new patients to providers. Future work should focus on identifying the relationship between workload proxies and actual workload, as well as improving prediction performance of regression and multi-class classification.
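As an illustration of the kind of pipeline described (classifying a patient as high or low task demand from indicators available early in the visit), here is a hedged sketch using scikit-learn on synthetic data. The feature names, the demand rule, and the model choice are hypothetical and not taken from the study.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical first-hour indicators extracted from the EHR.
orders = rng.integers(0, 15, n)        # orders placed
meds = rng.integers(0, 10, n)          # medications administered
acuity = rng.integers(1, 6, n)         # triage acuity level
X = np.column_stack([orders, meds, acuity])
# Synthetic "high task demand" label driven mostly by orders and medications.
y = (orders + meds + rng.normal(0, 2, n) > 12).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", round(cross_val_score(clf, X, y, cv=5).mean(), 2))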
Collapse
Affiliation(s)
| | - H Joseph Blumenthal
- National Center for Human Factors in Healthcare, MedStar Institute for Innovation, United States
| | - Daniel Hoffman
- National Center for Human Factors in Healthcare, MedStar Institute for Innovation, United States
| | - Natalie Benda
- National Center for Human Factors in Healthcare, MedStar Institute for Innovation, United States
| | - Tracy Kim
- National Center for Human Factors in Healthcare, MedStar Institute for Innovation, United States
| | | | - Ella S Franklin
- National Center for Human Factors in Healthcare, MedStar Institute for Innovation, United States
| | | | - A Zachary Hettinger
- National Center for Human Factors in Healthcare, MedStar Institute for Innovation, United States; Georgetown University School of Medicine, United States
| | | |
Collapse
|
33
|
Wu C, Cha J, Sulek J, Sundaram CP, Wachs J, Proctor RW, Yu D. Sensor-based indicators of performance changes between sessions during robotic surgery training. APPLIED ERGONOMICS 2021; 90:103251. [PMID: 32961465 PMCID: PMC7606790 DOI: 10.1016/j.apergo.2020.103251] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/10/2020] [Revised: 08/04/2020] [Accepted: 08/20/2020] [Indexed: 05/27/2023]
Abstract
Training of surgeons is essential for safe and effective use of robotic surgery, yet current assessment tools for learning progression are limited. The objective of this study was to measure changes in trainees' cognitive and behavioral states as they progressed in a robotic surgeon training curriculum at a medical institution. Seven surgical trainees in urology who had no formal robotic training experience participated in the simulation curriculum. They performed 12 robotic skills exercises with varying levels of difficulty repetitively in separate sessions. EEG (electroencephalogram) activity and eye movements were measured throughout to calculate three metrics: engagement index (indicator of task engagement), pupil diameter (indicator of mental workload) and gaze entropy (indicator of randomness in gaze pattern). Performance scores (completion of task goals) and mental workload ratings (NASA-Task Load Index) were collected after each exercise. Changes in performance scores between training sessions were calculated. Analysis of variance, repeated measures correlation, and machine learning classification were used to diagnose how cognitive and behavioral states associate with performance increases or decreases between sessions. The changes in performance were correlated with changes in engagement index (r_rm = -.25, p < .001) and gaze entropy (r_rm = -.37, p < .001). Changes in cognitive and behavioral states were able to predict training outcomes with 72.5% accuracy. Findings suggest that cognitive and behavioral metrics correlate with changes in performance between sessions. These measures can complement current feedback tools used by medical educators and learners for skills assessment in robotic surgery training.
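Gaze entropy, one of the three metrics above, is commonly computed as the Shannon entropy of fixation locations binned on a spatial grid. The sketch below illustrates that calculation; the grid size and screen dimensions are arbitrary assumptions, not the study's settings.

import numpy as np

def stationary_gaze_entropy(fix_x, fix_y, n_bins=8, screen=(1920, 1080)):
    """Shannon entropy (bits) of fixation locations on an n_bins x n_bins grid over the screen."""
    h, _, _ = np.histogram2d(fix_x, fix_y, bins=n_bins,
                             range=[[0, screen[0]], [0, screen[1]]])
    p = h.ravel() / h.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Toy example: gaze concentrated near screen centre yields lower entropy
# than gaze scattered uniformly over the display.
rng = np.random.default_rng(2)
focused = stationary_gaze_entropy(rng.normal(960, 50, 500), rng.normal(540, 50, 500))
scattered = stationary_gaze_entropy(rng.uniform(0, 1920, 500), rng.uniform(0, 1080, 500))
print(round(focused, 2), "<", round(scattered, 2))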
Collapse
Affiliation(s)
- Chuhao Wu
- Purdue University, West Lafayette, IN, United States
| | - Jackie Cha
- Purdue University, West Lafayette, IN, United States
| | - Jay Sulek
- Indiana University, Indianapolis, IN, United States
| | | | - Juan Wachs
- Purdue University, West Lafayette, IN, United States
| | | | - Denny Yu
- Purdue University, West Lafayette, IN, United States.
| |
Collapse
|
34
|
Shafiei SB, Lone Z, Elsayed AS, Hussein AA, Guru KA. Identifying mental health status using deep neural network trained by visual metrics. Transl Psychiatry 2020; 10:430. [PMID: 33318471 PMCID: PMC7736364 DOI: 10.1038/s41398-020-01117-5] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/08/2020] [Revised: 09/15/2020] [Accepted: 10/20/2020] [Indexed: 11/17/2022] Open
Abstract
Mental health is an integral part of the quality of life of cancer patients. It has been found that mental health issues, such as depression and anxiety, are more common in cancer patients. They may result in catastrophic consequences, including suicide. Therefore, monitoring mental health metrics (such as hope, anxiety, and mental well-being) is recommended. Currently, there is a lack of objective methods for mental health evaluation, and most of the available methods are limited to subjective face-to-face discussions between the patient and psychotherapist. In this study we introduced an objective method for mental health evaluation using a combination of convolutional neural network and long short-term memory (CNN-LSTM) algorithms learned and validated on visual metrics time series. Data were recorded by the TobiiPro eyeglasses from 16 patients with cancer after major oncologic surgery and nine individuals without cancer while viewing 18 artworks in an in-house art gallery. Pre-study and post-study questionnaires of the Herth Hope Index (HHI; for evaluation of hope), the State-Trait Anxiety Inventory for Adults (STAI; for evaluation of anxiety), and the Warwick-Edinburgh Mental Wellbeing Scale (WEMWBS; for evaluation of mental well-being) were completed by participants. Clinical psychotherapy and statistical suggestions for cutoff scores were used to assign an individual's mental health metrics level during each session into low (class 0), intermediate (class 1), and high (class 2) levels. Our proposed model was used to objectify evaluation and categorize the HHI, STAI, and WEMWBS status of individuals. Classification accuracy of the model was 93.81%, 94.76%, and 95.00% for the HHI, STAI, and WEMWBS metrics, respectively. The proposed model can be integrated into applications for home-based mental health monitoring to be used by patients after oncologic surgery to identify patients at risk.
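For readers unfamiliar with the CNN-LSTM pattern described above, the sketch below shows a generic classifier of this kind: 1-D convolutions extract local patterns from visual-metric time series (e.g., pupil size, gaze velocity) and an LSTM summarises them into a three-class prediction (low / intermediate / high). The layer sizes, input shape, and training call are assumptions for illustration, not the authors' architecture or hyperparameters.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

timesteps, n_features, n_classes = 600, 4, 3   # hypothetical dimensions

model = keras.Sequential([
    keras.Input(shape=(timesteps, n_features)),
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy data with the assumed shapes, just to show the expected call pattern.
X = np.random.rand(32, timesteps, n_features).astype("float32")
y = np.random.randint(0, n_classes, 32)
model.fit(X, y, epochs=1, batch_size=8, verbose=0)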
Collapse
Affiliation(s)
- Somayeh B Shafiei
- Applied Technology Laboratory for Advanced Surgery (ATLAS), Roswell Park Comprehensive Cancer Center, Buffalo, NY, USA
- Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, USA
| | - Zaeem Lone
- Applied Technology Laboratory for Advanced Surgery (ATLAS), Roswell Park Comprehensive Cancer Center, Buffalo, NY, USA
- Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, USA
| | - Ahmed S Elsayed
- Applied Technology Laboratory for Advanced Surgery (ATLAS), Roswell Park Comprehensive Cancer Center, Buffalo, NY, USA
- Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, USA
| | - Ahmed A Hussein
- Applied Technology Laboratory for Advanced Surgery (ATLAS), Roswell Park Comprehensive Cancer Center, Buffalo, NY, USA
- Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, USA
| | - Khurshid A Guru
- Applied Technology Laboratory for Advanced Surgery (ATLAS), Roswell Park Comprehensive Cancer Center, Buffalo, NY, USA.
- Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, USA.
| |
Collapse
|