1. Matsumoto S, Kawahira H, Fukata K, Doi Y, Kobayashi N, Hosoya Y, Sata N. Laparoscopic distal gastrectomy skill evaluation from video: a new artificial intelligence-based instrument identification system. Sci Rep 2024; 14:12432. [PMID: 38816459] [PMCID: PMC11139867] [DOI: 10.1038/s41598-024-63388-y]
Abstract
The advent of artificial intelligence (AI)-based object detection technology has made it possible to identify the position coordinates of surgical instruments from video. This study aimed to find kinematic differences by surgical skill level. An AI algorithm was developed to accurately identify the X and Y coordinates of surgical instrument tips from video. Kinematic analysis, including fluctuation analysis, was performed on 18 laparoscopic distal gastrectomy videos from three expert and three novice surgeons (3 videos/surgeon, 11.6 h, 1,254,010 frames). The expert cohort moved more efficiently and regularly, with significantly shorter operation time and total travel distance. Instrument tip movement did not differ in velocity, acceleration, or jerk between skill levels. The fluctuation evaluation index β was significantly higher in experts; an ROC curve cutoff value of 1.4 gave a sensitivity and specificity of 77.8% for distinguishing experts from novices. Despite the small sample, this study suggests that AI-based object detection with fluctuation analysis is promising, because skill evaluation can be calculated in real time, with potential for perioperative evaluation.
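As an illustration of the kinematic and fluctuation metrics this abstract describes (total travel distance, velocity, acceleration, jerk, and a 1/f^β-type fluctuation exponent computed from detected tip coordinates), a minimal Python sketch follows. The β definition used here (negative slope of the log-log power spectrum of tip speed) and the sampling parameters are assumptions for illustration, not the authors' exact method.

```python
import numpy as np

def kinematics_and_beta(x, y, fps=30.0):
    """Sketch: basic tip kinematics plus a 1/f^beta fluctuation exponent.

    x, y : arrays of tip coordinates per frame (e.g. from an object detector).
    The beta estimate is the negative slope of the log-log power spectrum of
    tip speed -- one common definition, assumed here for illustration.
    """
    dt = 1.0 / fps
    dx, dy = np.diff(x), np.diff(y)
    step = np.hypot(dx, dy)
    path_length = step.sum()                 # total travel distance
    speed = step / dt
    accel = np.diff(speed) / dt
    jerk = np.diff(accel) / dt

    # Fluctuation analysis: fit log(PSD) ~ -beta * log(f) over mid frequencies.
    detrended = speed - speed.mean()
    psd = np.abs(np.fft.rfft(detrended)) ** 2
    freqs = np.fft.rfftfreq(detrended.size, d=dt)
    band = (freqs > 0.1) & (freqs < fps / 4)  # avoid DC and the Nyquist end
    beta = -np.polyfit(np.log(freqs[band]), np.log(psd[band]), 1)[0]

    return {"path_length": path_length,
            "mean_speed": speed.mean(),
            "mean_abs_jerk": np.abs(jerk).mean(),
            "beta": beta}
```
A long trajectory (thousands of frames, as in the study) is assumed so that the spectral fit has enough points in the chosen frequency band.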
Affiliation(s)
- Shiro Matsumoto
- Department of Surgery, Division of Gastroenterological, General and Transplant Surgery, Jichi Medical University, Tochigi, Japan
- Hiroshi Kawahira
- Medical Simulation Center, Jichi Medical University, Tochigi, Japan
- Yoshinori Hosoya
- Department of Surgery, Division of Gastroenterological, General and Transplant Surgery, Jichi Medical University, Tochigi, Japan
- Naohiro Sata
- Department of Surgery, Division of Gastroenterological, General and Transplant Surgery, Jichi Medical University, Tochigi, Japan
2. Shevlin SP, Turbitt L, Burckett-St Laurent D, Macfarlane AJ, West S, Bowness JS. Augmented Reality in Ultrasound-Guided Regional Anaesthesia: An Exploratory Study on Models With Potential Implications for Training. Cureus 2023; 15:e42346. [PMID: 37621802] [PMCID: PMC10445048] [DOI: 10.7759/cureus.42346]
Abstract
Introduction Needle tip visualisation is a key skill required for the safe practice of ultrasound-guided regional anaesthesia (UGRA). This exploratory study assesses the utility of a novel augmented reality device, NeedleTrainer™, to differentiate between anaesthetists with varying levels of UGRA experience in a simulated environment. Methods Four groups of five participants were recruited (n = 20): novice, early career, experienced anaesthetists, and UGRA experts. Each participant performed three simulated UGRA blocks using NeedleTrainer™ on healthy volunteers (n = 60). The primary aim was to determine whether there was a difference in needle tip visibility, as calculated by the device, between groups of anaesthetists with differing levels of UGRA experience. Secondary aims included the assessment of simulated block conduct by an expert assessor and subjective participant self-assessment. Results The percentage of time the simulated needle tip was maintained in view was higher in the UGRA expert group (57.1%) versus the other three groups (novice 41.8%, early career 44.5%, and experienced anaesthetists 43.6%), but did not reach statistical significance (p = 0.05). An expert assessor was able to differentiate between participants of different UGRA experience when assessing needle tip visibility (novice 3.3 out of 10, early career 5.1, experienced anaesthetists 5.9, UGRA expert group 8.7; p < 0.01) and final needle tip placement (novice 4.2 out of 10, early career 5.6, experienced anaesthetists 6.8, UGRA expert group 8.9; p < 0.01). Subjective self-assessment by participants did not differentiate UGRA experience when assessing needle tip visibility (p = 0.07) or final needle tip placement (p = 0.07). Discussion An expert assessor was able to differentiate between participants with different levels of UGRA experience in this simulated environment. Objective NeedleTrainer™ and subjective participant assessments did not reach statistical significance. The findings are novel as simulated needling using live human subjects has not been assessed before, and no previous studies have attempted to objectively quantify needle tip visibility during simulated UGRA techniques. Future research should include larger sample sizes to further assess the potential use of such technology.
Affiliation(s)
- Sean P Shevlin
- Anaesthesia, Belfast Health and Social Care Trust, Belfast, GBR
- Lloyd Turbitt
- Anaesthesia, Belfast Health and Social Care Trust, Belfast, GBR
- Simeon West
- Anaesthesia, University College London Hospital, London, GBR
- James S Bowness
- Anaesthesia, Aneurin Bevan University Health Board, Newport, GBR
3. Heiliger C, Andrade D, Geister C, Winkler A, Ahmed K, Deodati A, Treuenstätt VHEV, Werner J, Eursch A, Karcz K, Frank A. Tracking and evaluating motion skills in laparoscopy with inertial sensors. Surg Endosc 2023. [PMID: 36976421] [DOI: 10.1007/s00464-023-09983-y]
Abstract
BACKGROUND Analysis of surgical instrument motion is applicable in surgical skill assessment and monitoring of the learning progress in laparoscopy. Current commercial instrument tracking technology (optical or electromagnetic) has specific limitations and is expensive. Therefore, in this study, we apply inexpensive, off-the-shelf inertial sensors to track laparoscopic instruments in a training scenario. METHODS We calibrated two laparoscopic instruments to the inertial sensor and investigated its accuracy on a 3D-printed phantom. In a user study during a one-week laparoscopy training course with medical students and physicians, we then documented and compared the training effect in laparoscopic tasks on a commercially available laparoscopy trainer (Laparo Analytic, Laparo Medical Simulators, Wilcza, Poland) and the newly developed tracking setup. RESULTS Eighteen participants (twelve medical students and six physicians) participated in the study. The student subgroup showed significantly poorer results for the count of swings (CS) and count of rotations (CR) at the beginning of the training compared to the physician subgroup (p = 0.012 and p = 0.042). After training, the student subgroup showed significant improvements in the rotatory angle sum, CS, and CR (p = 0.025, p = 0.004 and p = 0.024). After training, there were no significant differences between medical students and physicians. There was a strong correlation between the measured learning success (LS) from the data of our inertial measurement unit system (LS-IMU) and the Laparo Analytic (LS-Lap) (Pearson's r = 0.79). CONCLUSION In the current study, we observed a good and valid performance of inertial measurement units as a possible tool for instrument tracking and surgical skill assessment. Moreover, we conclude that the sensor can meaningfully examine the learning progress of medical students in an ex-vivo setting.
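A minimal sketch of the kind of agreement analysis reported above (Pearson correlation between learning-success values from two measurement systems) is given below; the relative-improvement definition of learning success and the numbers are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy import stats

def learning_success(pre, post):
    """Relative improvement from pre- to post-training scores.

    A hypothetical definition for illustration; the paper's exact LS formula
    may differ.
    """
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return (post - pre) / pre

# Example: agreement between two measurement systems (IMU setup vs. trainer).
ls_imu = learning_success(pre=[40, 55, 40, 48], post=[60, 70, 52, 58])
ls_lap = learning_success(pre=[42, 50, 38, 45], post=[61, 66, 50, 60])
r, p = stats.pearsonr(ls_imu, ls_lap)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```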
Affiliation(s)
- Christian Heiliger
- Department of General, Visceral, and Transplant Surgery, Ludwig-Maximilians-University (LMU) Hospital, 81377, Munich, Germany
- Dorian Andrade
- Department of General, Visceral, and Transplant Surgery, Ludwig-Maximilians-University (LMU) Hospital, 81377, Munich, Germany
- Christian Geister
- Department of Mechanical, Automotive and Aeronautical Engineering, University of Applied Sciences, Munich, Germany
- Alexander Winkler
- Department of General, Visceral, and Transplant Surgery, Ludwig-Maximilians-University (LMU) Hospital, 81377, Munich, Germany
- Chair for Computer Aided Medical Procedures & Augmented Reality (CAMP), Technical University of Munich (TUM), Munich, Germany
- Khaled Ahmed
- Department of General, Visceral, and Transplant Surgery, Ludwig-Maximilians-University (LMU) Hospital, 81377, Munich, Germany
- Alessandra Deodati
- Department of General, Visceral, and Transplant Surgery, Ludwig-Maximilians-University (LMU) Hospital, 81377, Munich, Germany
- Viktor H Ehrlich V Treuenstätt
- Department of General, Visceral, and Transplant Surgery, Ludwig-Maximilians-University (LMU) Hospital, 81377, Munich, Germany
- Jens Werner
- Department of General, Visceral, and Transplant Surgery, Ludwig-Maximilians-University (LMU) Hospital, 81377, Munich, Germany
- Andreas Eursch
- Department of Mechanical, Automotive and Aeronautical Engineering, University of Applied Sciences, Munich, Germany
- Konrad Karcz
- Department of General, Visceral, and Transplant Surgery, Ludwig-Maximilians-University (LMU) Hospital, 81377, Munich, Germany
- Alexander Frank
- Department of General, Visceral, and Transplant Surgery, Ludwig-Maximilians-University (LMU) Hospital, 81377, Munich, Germany
4. Sejor E, Berthet-Rayne P, Frey S. Calling on the Next Generation of Surgeons. Surg Innov 2022. [PMID: 36039669] [DOI: 10.1177/15533506221124501]
Affiliation(s)
- Eric Sejor
- Digestive Surgery and Liver Transplantation Unit, Centre Hospitalier Universitaire de Nice, Archet
- Pierre Berthet-Rayne
- Department of Computing, The Hamlyn Centre for Robotic Surgery, Imperial College London, London
- Sébastien Frey
- Digestive Surgery and Liver Transplantation Unit, Centre Hospitalier Universitaire de Nice, Archet
- Université Côte d'Azur, Nice, France
5. Matsumoto S, Kawahira H, Oiwa K, Maeda Y, Nozawa A, Lefor AK, Hosoya Y, Sata N. Laparoscopic surgical skill evaluation with motion capture and eyeglass gaze cameras: A pilot study. Asian J Endosc Surg 2022; 15:619-628. [PMID: 35598888] [DOI: 10.1111/ases.13065]
Abstract
INTRODUCTION An eyeglass gaze camera and a skeletal coordinate camera without sensors attached to the operator's body were used to monitor gaze and movement during a simulated surgical procedure. These new devices have the potential to change skill assessment for laparoscopic surgery. The suitability of these devices for skill assessment was investigated. MATERIAL AND METHODS Six medical students, six intermediate surgeons, and four experts performed suturing tasks in a dry box. The tip positions of the instruments were identified from video recordings. Performance was evaluated based on instrument movement, gaze, and skeletal coordination. RESULTS Task performance time and skeletal coordinates were not significantly different among skill levels. The total movement distance of the right instrument was significantly different depending on the skill level. The SD of the gaze coordinates was significantly different depending on skill level and was less for experts. The expert's gaze stayed in a small area with little blurring. CONCLUSIONS The SD of gaze point coordinates correlates with laparoscopic surgical skill level. These devices may facilitate objective intraoperative skill evaluation in future studies.
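A minimal sketch of the two discriminating measures described above (the standard deviation of gaze-point coordinates and the total travel distance of an instrument tip) is shown below; the combined radial-SD summary is an assumption for illustration.

```python
import numpy as np

def gaze_dispersion(gx, gy):
    """Spread of gaze points: SD along each axis and a combined radial SD.

    gx, gy : gaze-point coordinates per sample (e.g. pixels in the monitor
    frame). The 'combined' value is the root-mean-square distance from the
    mean gaze point -- one plausible summary, assumed here for illustration.
    """
    gx, gy = np.asarray(gx, float), np.asarray(gy, float)
    sd_x, sd_y = gx.std(ddof=1), gy.std(ddof=1)
    combined = np.sqrt(((gx - gx.mean()) ** 2 + (gy - gy.mean()) ** 2).mean())
    return sd_x, sd_y, combined

def path_length(x, y):
    """Total travel distance of an instrument tip from per-frame coordinates."""
    return np.hypot(np.diff(x), np.diff(y)).sum()
```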
Affiliation(s)
- Shiro Matsumoto
- Department of Surgery, Jichi Medical University, Tochigi, Japan
- Hiroshi Kawahira
- Medical Simulation Center, Jichi Medical University, Tochigi, Japan
- Kosuke Oiwa
- Department of Electrical Engineering and Electronics, Aoyama Gakuin University, Kanagawa, Japan
- Yoshitaka Maeda
- Medical Simulation Center, Jichi Medical University, Tochigi, Japan
- Akio Nozawa
- Department of Electrical Engineering and Electronics, Aoyama Gakuin University, Kanagawa, Japan
- Naohiro Sata
- Department of Surgery, Jichi Medical University, Tochigi, Japan
6. Zenteno O, Trinh DH, Treuillet S, Lucas Y, Bazin T, Lamarque D, Daul C. Optical biopsy mapping on endoscopic image mosaics with a marker-free probe. Comput Biol Med 2022; 143:105234. [PMID: 35093845] [DOI: 10.1016/j.compbiomed.2022.105234]
Abstract
Gastric cancer is the second leading cause of cancer-related deaths worldwide. Early diagnosis significantly increases the chances of survival; therefore, improved assisted exploration and screening techniques are necessary. Previously, we made use of an augmented multi-spectral endoscope by inserting an optical probe into the instrumentation channel. However, the limited field of view and the lack of markings left by optical biopsies on the tissue complicate the navigation and revisiting of the suspect areas probed in-vivo. In this contribution, two innovative tools are introduced to significantly increase the traceability and monitoring of patients in clinical practice: (i) video mosaicing to build a more comprehensive and panoramic view of large gastric areas; (ii) optical biopsy targeting and registration with the endoscopic images. The proposed optical flow-based mosaicing technique selects images that minimize texture discontinuities and is robust despite the lack of texture and illumination variations. The optical biopsy targeting is based on automatic tracking of a marker-free probe in the endoscopic view, using deep learning to dynamically estimate its pose during exploration. The accuracy of pose estimation is sufficient to ensure precise overlapping of the standard white-light color image and the hyperspectral probe image, assuming that the small target area of the organ is almost flat. This allows the mapping of all spatio-temporally tracked biopsy sites onto the panoramic mosaic. Experimental validations are carried out on videos acquired from patients in hospital. The proposed technique is purely software-based and therefore easily integrable into clinical practice. It is also generic and compatible with any imaging modality that connects to a fiberscope.
Affiliation(s)
- Omar Zenteno
- Laboratoire PRISME, Université d'Orléans, Orléans, France
- Dinh-Hoan Trinh
- CRAN, UMR 7039 CNRS and Université de Lorraine, Vandœuvre-lès-Nancy, France
- Yves Lucas
- Laboratoire PRISME, Université d'Orléans, Orléans, France
- Thomas Bazin
- Service d'Hépato-gastroentérologie et oncologie digestive, Hôpital Ambroise Paré, Boulogne-Billancourt, France
- Dominique Lamarque
- Service d'Hépato-gastroentérologie et oncologie digestive, Hôpital Ambroise Paré, Boulogne-Billancourt, France
- Christian Daul
- CRAN, UMR 7039 CNRS and Université de Lorraine, Vandœuvre-lès-Nancy, France
7. Othman W, Lai ZHA, Abril C, Barajas-Gamboa JS, Corcelles R, Kroh M, Qasaimeh MA. Tactile Sensing for Minimally Invasive Surgery: Conventional Methods and Potential Emerging Tactile Technologies. Front Robot AI 2022; 8:705662. [PMID: 35071332] [PMCID: PMC8777132] [DOI: 10.3389/frobt.2021.705662]
Abstract
As opposed to open surgery procedures, minimally invasive surgery (MIS) utilizes small skin incisions to insert a camera and surgical instruments. MIS has numerous advantages such as reduced postoperative pain, shorter hospital stay, faster recovery time, and reduced learning curve for surgical trainees. MIS comprises surgical approaches including laparoscopic surgery, endoscopic surgery, and robotic-assisted surgery. Despite the advantages that MIS provides to patients and surgeons, it remains limited by the lost sense of touch due to the indirect contact with tissues under operation, especially in robotic-assisted surgery. Surgeons, without haptic feedback, could unintentionally apply excessive forces that may cause tissue damage. Therefore, incorporating tactile sensation into MIS tools has become an interesting research topic. The design, fabrication, and integration of force sensors onto different locations on surgical tools are currently under development by several companies and research groups. In this context, electrical force sensing modalities, including piezoelectric, resistive, and capacitive sensors, are the most conventionally considered approaches to measure grasping force, manipulation force, torque, and tissue compliance. For instance, piezoelectric sensors exhibit high sensitivity and accuracy, but the drawbacks of thermal sensitivity and the inability to detect static loads constrain their adoption in MIS tools. Optical-based tactile sensing is another conventional approach that facilitates electrically passive force sensing compatible with magnetic resonance imaging. Estimates of the applied loading are calculated from the induced changes in the intensity, wavelength, or phase of light transmitted through optical fibers. Nonetheless, emerging technologies also show high potential to contribute to the field of smart surgical tools. Flexible, highly sensitive microfluidic-based tactile sensors are a recent emerging development that has contributed to wearable electronics and smart-skin applications. Another emerging technology is imaging-based tactile sensing, which achieves superior multi-axial force measurement by implementing image sensors with high pixel densities and frame rates to track visual changes on a sensing surface. This article aims to review the literature on MIS tactile sensing technologies in terms of working principles, design requirements, and specifications. Moreover, this work highlights and discusses the promising potential of a few emerging technologies towards establishing low-cost, high-performance MIS force sensing.
Affiliation(s)
- Wael Othman
- Engineering Division, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Mechanical and Aerospace Engineering, New York University, New York, NY, United States
- Zhi-Han A. Lai
- Engineering Division, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Carlos Abril
- Digestive Disease Institute, Cleveland Clinic Abu Dhabi, Abu Dhabi, United Arab Emirates
- Juan S. Barajas-Gamboa
- Digestive Disease Institute, Cleveland Clinic Abu Dhabi, Abu Dhabi, United Arab Emirates
- Ricard Corcelles
- Digestive Disease and Surgery Institute, Cleveland Clinic Main Campus, Cleveland, OH, United States
- Cleveland Clinic Lerner College of Medicine, Cleveland, OH, United States
- Matthew Kroh
- Digestive Disease Institute, Cleveland Clinic Abu Dhabi, Abu Dhabi, United Arab Emirates
- Mohammad A. Qasaimeh
- Engineering Division, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Mechanical and Aerospace Engineering, New York University, New York, NY, United States
8. Gautier B, Tugal H, Tang B, Nabi G, Erden MS. Real-Time 3D Tracking of Laparoscopy Training Instruments for Assessment and Feedback. Front Robot AI 2021; 8:751741. [PMID: 34805292] [PMCID: PMC8600079] [DOI: 10.3389/frobt.2021.751741]
Abstract
Assessment of minimally invasive surgical skills is a non-trivial task, usually requiring the presence and time of expert observers, introducing subjectivity, and requiring special and expensive equipment and software. Although there are virtual simulators that provide self-assessment features, they are limited as the trainee loses the immediate feedback from realistic physical interaction. The physical training boxes, on the other hand, preserve the immediate physical feedback, but lack the automated self-assessment facilities. This study develops an algorithm for real-time tracking of laparoscopy instruments in the video cues of a standard physical laparoscopy training box with a single fisheye camera. The developed visual tracking algorithm recovers the 3D positions of the laparoscopic instrument tips, to which simple colored tapes (markers) are attached. With such a system, the extracted instrument trajectories can be digitally processed, and automated self-assessment feedback can be provided. In this way, the physical interaction feedback would be preserved and the need for observation by an expert would be overcome. Real-time instrument tracking with a suitable assessment criterion would constitute a significant step towards providing real-time (immediate) feedback to correct trainee actions and show them how the action should be performed. This study is a step towards achieving this with a low-cost, automated, and widely applicable laparoscopy training and assessment system using a standard physical training box equipped with a fisheye camera.
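A rough OpenCV sketch of the 2D front end of such a pipeline (segmenting a colored tape marker per frame and recording its centroid trajectory) is shown below. The HSV thresholds and camera index are placeholders, and the fisheye undistortion and 3D reconstruction steps of the actual system are omitted.

```python
import cv2
import numpy as np

# Placeholder HSV range for a green marker tape; tune for the actual tape color.
HSV_LO = np.array([40, 80, 80])
HSV_HI = np.array([80, 255, 255])

def marker_centroid(frame_bgr):
    """Return the (x, y) pixel centroid of the largest marker-colored blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, HSV_LO, HSV_HI)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)           # camera index is a placeholder
    trajectory = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        c = marker_centroid(frame)
        if c is not None:
            trajectory.append(c)
        if cv2.waitKey(1) == 27:        # Esc to stop
            break
    cap.release()
```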
Affiliation(s)
- Harun Tugal
- Heriot-Watt University, Scotland, United Kingdom
- Benjie Tang
- University of Dundee and Ninewells Hospital, Dundee, United Kingdom
- Ghulam Nabi
- University of Dundee and Ninewells Hospital, Dundee, United Kingdom
9. Shabir D, Abdurahiman N, Padhan J, Trinh M, Balakrishnan S, Kurer M, Ali O, Al-Ansari A, Yaacoub E, Deng Z, Erbad A, Mohammed A, Navkar NV. Towards development of a tele-mentoring framework for minimally invasive surgeries. Int J Med Robot 2021; 17:e2305. [PMID: 34256415] [DOI: 10.1002/rcs.2305]
Abstract
BACKGROUND Tele-mentoring facilitates the transfer of surgical knowledge. The objective of this work is to develop a tele-mentoring framework that enables a specialist surgeon to mentor an operating surgeon by transferring information in the form of the surgical instrument motion required during minimally invasive surgery. METHOD A tele-mentoring framework is developed to transfer the video stream of the surgical field, the poses of the scope, and the port placement from the operating room to a remote location. From the remote location, the motion of virtual surgical instruments augmented onto the surgical field is sent to the operating room. RESULTS The proposed framework is suitable for integration with laparoscopic as well as robotic surgeries. It takes on average 1.56 s to send information from the operating room to the remote location and 0.089 s in the reverse direction over a local area network. CONCLUSIONS The work demonstrates a tele-mentoring framework that enables a specialist surgeon to mentor an operating surgeon during minimally invasive surgery.
Affiliation(s)
- Dehlela Shabir
- Department of Surgery, Hamad Medical Corporation, Doha, Qatar
- May Trinh
- Department of Computer Science, University of Houston, Houston, Texas, USA
- Mohamed Kurer
- Department of Surgery, Hamad Medical Corporation, Doha, Qatar
- Omar Ali
- Department of Surgery, Hamad Medical Corporation, Doha, Qatar
- Elias Yaacoub
- Department of Computer Science and Engineering, Qatar University, Doha, Qatar
- Zhigang Deng
- Department of Computer Science, University of Houston, Houston, Texas, USA
- Aiman Erbad
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Amr Mohammed
- Department of Computer Science and Engineering, Qatar University, Doha, Qatar
- Nikhil V Navkar
- Department of Surgery, Hamad Medical Corporation, Doha, Qatar
10. Sestini L, Rosa B, De Momi E, Ferrigno G, Padoy N. A Kinematic Bottleneck Approach for Pose Regression of Flexible Surgical Instruments Directly From Images. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3062308]
11. Zenteno O, Treuillet S, Lucas Y. Pose estimation of a markerless fiber bundle for endoscopic optical biopsy. J Med Imaging (Bellingham) 2021; 8:025001. [PMID: 33681409] [DOI: 10.1117/1.jmi.8.2.025001]
Abstract
Purpose: We present a markerless vision-based method for on-the-fly three-dimensional (3D) pose estimation of a fiberscope instrument to target pathologic areas in the endoscopic view during exploration. Approach: A 2.5-mm-diameter fiberscope is inserted through the endoscope's operating channel and connected to an additional camera to perform complementary observation of a targeted area, acting as a multimodal magnifier. The 3D pose of the fiberscope is estimated frame-by-frame by maximizing the similarity between its silhouette (automatically detected in the endoscopic view using a deep learning neural network) and a cylindrical shape bound to a kinematic model reduced to three degrees-of-freedom. An alignment of the cylinder axis, based on Plücker coordinates from the straight edges detected in the image, makes convergence faster and more reliable. Results: Performance has been validated on simulations with a virtual trajectory mimicking endoscopic exploration and on real images of a chessboard pattern acquired with different endoscopic configurations. The experiments demonstrated good accuracy and robustness of the proposed algorithm, with errors of 0.33 ± 0.68 mm in position and 0.32 ± 0.11 deg in axis orientation for the 3D pose estimation, which reveals its superiority over previous approaches. This allows multimodal image registration with sufficient accuracy (< 3 pixels). Conclusion: Our pose estimation pipeline was executed on simulations and patterns; the results demonstrate the robustness of our method and the potential of fiber-optical instrument image-based tracking for pose estimation and multimodal registration. It can be fully implemented in software and therefore easily integrated into a routine clinical environment.
Affiliation(s)
- Omar Zenteno
- Université d'Orléans, Laboratoire PRISME, Orléans, France
- Yves Lucas
- Université d'Orléans, Laboratoire PRISME, Orléans, France
12. Horgan CC, Bergholt MS, Thin MZ, Nagelkerke A, Kennedy R, Kalber TL, Stuckey DJ, Stevens MM. Image-guided Raman spectroscopy probe-tracking for tumor margin delineation. J Biomed Opt 2021; 26:036002. [PMID: 33715315] [PMCID: PMC7960531] [DOI: 10.1117/1.jbo.26.3.036002]
Abstract
SIGNIFICANCE Tumor detection and margin delineation are essential for successful tumor resection. However, postsurgical positive margin rates remain high for many cancers. Raman spectroscopy has shown promise as a highly accurate clinical spectroscopic diagnostic modality, but its margin delineation capabilities are severely limited by the need for pointwise application. AIM We aim to extend Raman spectroscopic diagnostics and develop a multimodal computer vision-based diagnostic system capable of both the detection and identification of suspicious lesions and the precise delineation of disease margins. APPROACH We first apply visual tracking of a Raman spectroscopic probe to achieve real-time tumor margin delineation. We then combine this system with protoporphyrin IX fluorescence imaging to achieve fluorescence-guided Raman spectroscopic margin delineation. RESULTS Our system enables real-time Raman spectroscopic tumor margin delineation for both ex vivo human tumor biopsies and an in vivo tumor xenograft mouse model. We then further demonstrate that the addition of protoporphyrin IX fluorescence imaging enables fluorescence-guided Raman spectroscopic margin delineation in a tissue phantom model. CONCLUSIONS Our image-guided Raman spectroscopic probe-tracking system enables tumor margin delineation and is compatible with both white light and fluorescence image guidance, demonstrating the potential for our system to be developed toward clinical tumor resection surgeries.
Affiliation(s)
- Conor C. Horgan
- Imperial College London, Department of Materials, London, United Kingdom
- Imperial College London, Department of Bioengineering, London, United Kingdom
- Imperial College London, Institute of Biomedical Engineering, London, United Kingdom
- Mads S. Bergholt
- Imperial College London, Department of Materials, London, United Kingdom
- Imperial College London, Department of Bioengineering, London, United Kingdom
- Imperial College London, Institute of Biomedical Engineering, London, United Kingdom
- May Zaw Thin
- University College London, Centre for Advanced Biomedical Imaging, London, United Kingdom
- Anika Nagelkerke
- Imperial College London, Department of Materials, London, United Kingdom
- Imperial College London, Department of Bioengineering, London, United Kingdom
- Imperial College London, Institute of Biomedical Engineering, London, United Kingdom
- Robert Kennedy
- King's College London, Guy's and St Thomas' NHS Foundation Trust, Oral/Head and Neck Pathology Laboratory, London, United Kingdom
- Tammy L. Kalber
- University College London, Centre for Advanced Biomedical Imaging, London, United Kingdom
- Daniel J. Stuckey
- University College London, Centre for Advanced Biomedical Imaging, London, United Kingdom
- Molly M. Stevens
- Imperial College London, Department of Materials, London, United Kingdom
- Imperial College London, Department of Bioengineering, London, United Kingdom
- Imperial College London, Institute of Biomedical Engineering, London, United Kingdom
13. Kim CW, Jeon SY, Paik B, Bong JW, Kim SH, Lee SH. Resident Learning Curve for Laparoscopic Appendectomy According to Seniority. Ann Coloproctol 2020; 36:163-171. [PMID: 32054238] [PMCID: PMC7392570] [DOI: 10.3393/ac.2019.07.20]
Abstract
Purpose This study sought to delineate the learning curve (LC) for laparoscopic appendectomy (LA) in surgical residency according to seniority and experience. Methods Between October 2015 and November 2016, 150 patients underwent LA performed by one of 3 residents (who were in their first [A], second [B], or third [C] year of training) under supervision. The patients were nonrandomly assigned to each resident. Data from a prospectively collected database were reviewed and analyzed retrospectively. Perioperative outcomes including operation time, complications, and conversion were compared among the 3 residents. The LC was evaluated using the moving average method and cumulative sum control chart (CUSUM) for operation time and surgical completion. Results Baseline characteristics and perioperative outcomes were similar among the 3 groups except for age and location of the appendix. The operation time did not vary among the 3 residents (43.9, 45.3, and 48.4 minutes for A, B, and C, respectively; P=0.392). The moving average method for operation time showed a decreasing tendency for all residents. CUSUM results for operation time revealed peak points achieved at the 24th, 18th, and 31st cases for residents A, B, and C, respectively. In terms of surgical failure, residents A, B, and C reached steady states after their 35th, 11th, and 16th cases, respectively. Perforation of the appendix base was the only risk factor for surgical failure. Conclusion The resident LC for LA was 11 to 35 cases according to multidimensional statistical analyses. The accumulation of surgical experience among residents might influence the LC for surgical completion but not that for operation time.
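A minimal sketch of the CUSUM analysis used above (cumulative sum of each case's deviation from a target operation time, with the peak read as the turning point of the learning curve) is shown below; the target value and case series are illustrative assumptions.

```python
import numpy as np

def cusum_operation_time(times, target):
    """Cumulative sum of (time - target) over consecutive cases.

    A rising curve means cases slower than the target; the peak is often read
    as the point where the learning curve turns. 'target' (e.g. a reference
    mean operation time) is an assumption chosen by the analyst.
    """
    times = np.asarray(times, float)
    return np.cumsum(times - target)

# Illustrative series of operation times (minutes) for consecutive cases.
times = [70, 65, 62, 60, 55, 52, 50, 44, 43, 42, 40, 41, 39, 38]
curve = cusum_operation_time(times, target=48.0)
peak_case = int(np.argmax(curve)) + 1   # 1-indexed case at which the CUSUM peaks
print(curve.round(1), "peak at case", peak_case)
```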
Affiliation(s)
- Chang Woo Kim
- Kyung Hee University Hospital at Gangdong, Seoul, Korea
- Sook Young Jeon
- Department of General Surgery, Graduate School, Kyung Hee University, Seoul, Korea
- Bomina Paik
- Kyung Hee University Hospital at Gangdong, Seoul, Korea
- Jun Woo Bong
- Kyung Hee University Hospital at Gangdong, Seoul, Korea
- Sang Hyun Kim
- Kyung Hee University Hospital at Gangdong, Seoul, Korea
- Suk-Hwan Lee
- Kyung Hee University Hospital at Gangdong, Seoul, Korea
14. Qiu L, Li C, Ren H. Real-time surgical instrument tracking in robot-assisted surgery using multi-domain convolutional neural network. Healthc Technol Lett 2020; 6:159-164. [PMID: 32038850] [PMCID: PMC6945802] [DOI: 10.1049/htl.2019.0068]
Abstract
Image-based surgical instrument tracking in robot-assisted surgery is an active and challenging research area. Having real-time knowledge of surgical instrument location is an essential part of a computer-assisted intervention system. Tracking can be used as visual feedback for servo control of a surgical robot or transformed into haptic feedback for surgeon–robot interaction. In this Letter, the authors apply a multi-domain convolutional neural network for fast 2D surgical instrument tracking, considering the application to multiple surgical tools, and use a focal loss to decrease the effect of easy negative examples. They further introduce a new dataset based on m2cai16-tool and their cadaver experiments, owing to the lack of an established public surgical tool tracking dataset despite significant progress in this field. Their method is evaluated on the introduced dataset and outperforms state-of-the-art real-time trackers.
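The focal loss mentioned above can be sketched compactly; the PyTorch snippet below implements the standard binary focal loss with the commonly used α and γ defaults, which are not necessarily those of this Letter.

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: scales cross-entropy by (1 - p_t)**gamma so that
    well-classified (easy) examples contribute little to the gradient."""
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

# Tiny usage example with random scores and labels.
logits = torch.randn(8)
targets = torch.randint(0, 2, (8,)).float()
print(binary_focal_loss(logits, targets).item())
```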
Affiliation(s)
- Liang Qiu
- Department of Biomedical Engineering, National University of Singapore, Singapore 117575, Singapore
- Changsheng Li
- Department of Biomedical Engineering, National University of Singapore, Singapore 117575, Singapore
- Hongliang Ren
- Department of Biomedical Engineering, National University of Singapore, Singapore 117575, Singapore
15. Ganni S, Botden SMBI, Chmarra M, Li M, Goossens RHM, Jakimowicz JJ. Validation of Motion Tracking Software for Evaluation of Surgical Performance in Laparoscopic Cholecystectomy. J Med Syst 2020; 44:56. [PMID: 31980955] [PMCID: PMC6981315] [DOI: 10.1007/s10916-020-1525-9]
Abstract
Motion tracking software for assessing laparoscopic surgical proficiency has proven effective in differentiating between expert and novice performances. However, with several indices that can be generated from the software, there is no set threshold that can be used to benchmark performances. The aim of this study was to identify the best possible algorithm that can be used to benchmark expert, intermediate and novice performances for objective evaluation of psychomotor skills. Twelve video recordings of various surgeons were collected in a blinded fashion. Data from our previous study of 6 experts and 23 novices was also included in the analysis to determine thresholds for performance. Video recordings were analyzed both by the Kinovea 0.8.15 software and by a blinded expert observer using the CAT form. Multiple algorithms were tested to accurately identify expert and novice performances. A score of the form ½ L + [Formula: see text] A + [Formula: see text] J, where L, A and J are the path length, average movement and jerk index respectively, identified 23/24 performances. Comparing the algorithm to the CAT assessment yielded a linear regression coefficient R2 of 0.844. The value of motion tracking software in providing objective clinical evaluation and retrospective analysis is evident. Given the prospective use of this tool, the algorithm developed in this study proves effective in benchmarking performances for psychomotor skills evaluation.
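A sketch of how such a weighted benchmark score and its agreement with expert CAT ratings might be computed follows. Only the ½ weight on path length is given in the abstract (the remaining coefficients appear as "[Formula: see text]"), so the other weights and the min-max normalization below are placeholders, not the published formula.

```python
import numpy as np

def benchmark_score(path_length, avg_movement, jerk_index,
                    w_l=0.5, w_a=0.25, w_j=0.25):
    """Weighted combination of normalized motion indices.

    Only w_l = 1/2 is stated in the abstract; w_a and w_j are placeholders.
    Each index is min-max normalized so the weighted sum is scale-free.
    """
    def norm(v):
        v = np.asarray(v, float)
        return (v - v.min()) / (v.max() - v.min())
    return w_l * norm(path_length) + w_a * norm(avg_movement) + w_j * norm(jerk_index)

# Agreement with expert CAT ratings via simple linear regression (R^2).
scores = benchmark_score([120, 150, 180, 200], [0.30, 0.35, 0.40, 0.45], [10, 14, 20, 25])
cat = np.array([4.5, 3.8, 3.0, 2.2])           # illustrative expert ratings
slope, intercept = np.polyfit(scores, cat, 1)
pred = slope * scores + intercept
r2 = 1 - np.sum((cat - pred) ** 2) / np.sum((cat - cat.mean()) ** 2)
print(f"R^2 = {r2:.3f}")
```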
Affiliation(s)
- Sandeep Ganni
- Delft University of Technology, Industrial Design Engineering, Medisign, Delft, The Netherlands
- GSL Medical College, Department of Surgery, Rajahmundry, India
- Catharina Hospital, Research and Education, Michelangelolaan 2, 5653 EJ, Eindhoven, The Netherlands
- Sanne M B I Botden
- Department of Pediatric Surgery, Radboudumc - Amalia Children's Hospital, Nijmegen, The Netherlands
- Magdalena Chmarra
- Delft University of Technology, Industrial Design Engineering, Medisign, Delft, The Netherlands
- Meng Li
- Delft University of Technology, Industrial Design Engineering, Medisign, Delft, The Netherlands
- Catharina Hospital, Research and Education, Michelangelolaan 2, 5653 EJ, Eindhoven, The Netherlands
- Richard H M Goossens
- Delft University of Technology, Industrial Design Engineering, Medisign, Delft, The Netherlands
- Jack J Jakimowicz
- Delft University of Technology, Industrial Design Engineering, Medisign, Delft, The Netherlands
- Catharina Hospital, Research and Education, Michelangelolaan 2, 5653 EJ, Eindhoven, The Netherlands
16. Sorriento A, Porfido MB, Mazzoleni S, Calvosa G, Tenucci M, Ciuti G, Dario P. Optical and Electromagnetic Tracking Systems for Biomedical Applications: A Critical Review on Potentialities and Limitations. IEEE Rev Biomed Eng 2019; 13:212-232. [PMID: 31484133] [DOI: 10.1109/rbme.2019.2939091]
Abstract
Optical and electromagnetic tracking systems represent the two main technologies integrated into commercially available surgical navigators for computer-assisted image-guided surgery so far. Optical Tracking Systems (OTSs) work within the optical spectrum to track the position and orientation, i.e., the pose, of target surgical instruments. OTSs are characterized by high accuracy and robustness to environmental conditions. The main limitation of OTSs is the need for a direct line-of-sight between the optical markers and the camera sensor, which is rigidly fixed in the operating theatre. Electromagnetic Tracking Systems (EMTSs) use an electromagnetic field generator to detect the pose of electromagnetic sensors. EMTSs do not require such a direct line-of-sight; however, the presence of metal or ferromagnetic sources in the operating workspace can significantly affect the measurement accuracy. The aim of the proposed review is to provide a complete and detailed overview of optical and electromagnetic tracking systems, including working principles, sources of error and validation protocols. Moreover, commercial and research-oriented solutions, as well as clinical applications, are described for both technologies. Finally, a critical comparative analysis of the state of the art is provided, highlighting the potentialities and the limitations of each tracking system for medical use.
17. Funke I, Mees ST, Weitz J, Speidel S. Video-based surgical skill assessment using 3D convolutional neural networks. Int J Comput Assist Radiol Surg 2019; 14:1217-1225. [PMID: 31104257] [DOI: 10.1007/s11548-019-01995-1]
Abstract
PURPOSE A profound education of novice surgeons is crucial to ensure that surgical interventions are effective and safe. One important aspect is the teaching of technical skills for minimally invasive or robot-assisted procedures. This includes the objective and preferably automatic assessment of surgical skill. Recent studies presented good results for automatic, objective skill evaluation by collecting and analyzing motion data such as trajectories of surgical instruments. However, obtaining the motion data generally requires additional equipment for instrument tracking or the availability of a robotic surgery system to capture kinematic data. In contrast, we investigate a method for automatic, objective skill assessment that requires video data only. This has the advantage that video can be collected effortlessly during minimally invasive and robot-assisted training scenarios. METHODS Our method builds on recent advances in deep learning-based video classification. Specifically, we propose to use an inflated 3D ConvNet to classify snippets, i.e., stacks of a few consecutive frames, extracted from surgical video. The network is extended into a temporal segment network during training. RESULTS We evaluate the method on the publicly available JIGSAWS dataset, which consists of recordings of basic robot-assisted surgery tasks performed on a dry lab bench-top model. Our approach achieves high skill classification accuracies ranging from 95.1 to 100.0%. CONCLUSIONS Our results demonstrate the feasibility of deep learning-based assessment of technical skill from surgical video. Notably, the 3D ConvNet is able to learn meaningful patterns directly from the data, alleviating the need for manual feature engineering. Further evaluation will require more annotated data for training and testing.
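A toy PyTorch sketch of snippet-level video classification (stacks of consecutive frames mapped to skill-class logits by a 3D CNN) is shown below. The study itself uses an Inflated 3D ConvNet extended into a temporal segment network; this minimal model only illustrates the input/output structure of the approach.

```python
import torch
import torch.nn as nn

class TinySnippetNet(nn.Module):
    """Minimal 3D CNN: input snippet (batch, 3, T, H, W) -> skill-class logits."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.classifier(f)

# Snippet of 16 consecutive 112x112 RGB frames; a video-level prediction could
# average the logits of several snippets (the temporal-segment idea).
model = TinySnippetNet(num_classes=3)
snippet = torch.randn(2, 3, 16, 112, 112)
print(model(snippet).shape)   # torch.Size([2, 3])
```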
Affiliation(s)
- Isabel Funke
- Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT), Partner Site Dresden, Dresden, Germany
- Sören Torge Mees
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TU Dresden, Dresden, Germany
- Jürgen Weitz
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TU Dresden, Dresden, Germany
- Stefanie Speidel
- Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT), Partner Site Dresden, Dresden, Germany
18. Bilgic E, Alyafi M, Hada T, Landry T, Fried GM, Vassiliou MC. Simulation platforms to assess laparoscopic suturing skills: a scoping review. Surg Endosc 2019; 33:2742-2762. [PMID: 31089881] [DOI: 10.1007/s00464-019-06821-y]
Abstract
BACKGROUND Laparoscopic suturing (LS) has become a common technique used in a variety of advanced laparoscopic procedures. However, LS is a challenging skill to master, and many trainees may not be competent in performing LS at the end of their training. The purpose of this review is to identify simulation platforms available for assessment of LS skills, and determine the characteristics of the platforms and the LS skills that are targeted. METHODS A scoping review was conducted between January 1997 and October 2018 for full-text articles. The search was done in various databases. Only articles written in English or French were included. Additional studies were identified through reference lists. The search terms included "laparoscopic suturing" and "clinical competence." RESULTS Sixty-two studies were selected. The majority of the simulation platforms were box trainers with inanimate tissue, and targeted basic suturing and intracorporeal knot-tying techniques. Most of the validation came from internal structure (rater reliability) and relationship to other variables (comparison of training levels/case experience and various metrics). Consequences were not addressed in any of the studies. CONCLUSION We identified many types of simulation platforms that were used for assessing LS skills, with most being for assessment of basic skills. Platforms assessing the competence of trainees for advanced LS skills were limited. Therefore, future research should focus on development of LS tasks that better reflect the needs of the trainees.
Affiliation(s)
- Elif Bilgic
- Steinberg Centre for Simulation and Interactive Learning, McGill University, Montreal, QC, Canada
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, 1650 Cedar Avenue, L9.313, Montreal, QC, H3G 1A4, Canada
- Motaz Alyafi
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, 1650 Cedar Avenue, L9.313, Montreal, QC, H3G 1A4, Canada
- Tomonori Hada
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, 1650 Cedar Avenue, L9.313, Montreal, QC, H3G 1A4, Canada
- Tara Landry
- Montreal General Hospital Medical Library, McGill University Health Centre, Montreal, QC, Canada
- Gerald M Fried
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, 1650 Cedar Avenue, L9.313, Montreal, QC, H3G 1A4, Canada
- Melina C Vassiliou
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, 1650 Cedar Avenue, L9.313, Montreal, QC, H3G 1A4, Canada
19. Hagelsteen K, Johansson R, Ekelund M, Bergenfelz A, Anderberg M. Performance and perception of haptic feedback in a laparoscopic 3D virtual reality simulator. Minim Invasive Ther Allied Technol 2019; 28:309-316. [DOI: 10.1080/13645706.2018.1539012]
Affiliation(s)
- Kristine Hagelsteen
- Practicum Clinical Skills Centre, Skåne University Hospital, Lund, Sweden
- Department of Clinical Sciences, Surgery, Lund University, Skåne University Hospital, Lund, Sweden
- Richard Johansson
- Practicum Clinical Skills Centre, Skåne University Hospital, Lund, Sweden
- Mikael Ekelund
- Practicum Clinical Skills Centre, Skåne University Hospital, Lund, Sweden
- Department of Clinical Sciences, Surgery, Lund University, Skåne University Hospital, Malmö, Sweden
- Anders Bergenfelz
- Practicum Clinical Skills Centre, Skåne University Hospital, Lund, Sweden
- Department of Clinical Sciences, Surgery, Lund University, Skåne University Hospital, Lund, Sweden
- Magnus Anderberg
- Practicum Clinical Skills Centre, Skåne University Hospital, Lund, Sweden
- Department of Clinical Sciences, Paediatrics, Lund University, Skåne University Hospital, Lund, Sweden
20. Performance Assessment. In: Comprehensive Healthcare Simulation: Surgery and Surgical Subspecialties. 2019. [DOI: 10.1007/978-3-319-98276-2_9]
21. Al Hajj H, Lamard M, Conze PH, Roychowdhury S, Hu X, Maršalkaitė G, Zisimopoulos O, Dedmari MA, Zhao F, Prellberg J, Sahu M, Galdran A, Araújo T, Vo DM, Panda C, Dahiya N, Kondo S, Bian Z, Vahdat A, Bialopetravičius J, Flouty E, Qiu C, Dill S, Mukhopadhyay A, Costa P, Aresta G, Ramamurthy S, Lee SW, Campilho A, Zachow S, Xia S, Conjeti S, Stoyanov D, Armaitis J, Heng PA, Macready WG, Cochener B, Quellec G. CATARACTS: Challenge on automatic tool annotation for cataRACT surgery. Med Image Anal 2018; 52:24-41. [PMID: 30468970] [DOI: 10.1016/j.media.2018.11.008]
Abstract
Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of videos, from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared to that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect that they will guide the design of efficient surgery monitoring tools in the near future.
Affiliation(s)
- Mathieu Lamard
- Inserm, UMR 1101, Brest, F-29200, France; Univ Bretagne Occidentale, Brest, F-29200, France
- Pierre-Henri Conze
- Inserm, UMR 1101, Brest, F-29200, France; IMT Atlantique, LaTIM UMR 1101, UBL, Brest, F-29200, France
- Xiaowei Hu
- Dept. of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Muneer Ahmad Dedmari
- Chair for Computer Aided Medical Procedures, Faculty of Informatics, Technical University of Munich, Garching b. Munich, 85748, Germany
- Fenqiang Zhao
- Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, HangZhou, 310000, China
- Jonas Prellberg
- Dept. of Informatics, Carl von Ossietzky University, Oldenburg, 26129, Germany
- Manish Sahu
- Department of Visual Data Analysis, Zuse Institute Berlin, Berlin, 14195, Germany
- Adrian Galdran
- INESC TEC - Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência, Porto, 4200-465, Portugal
- Teresa Araújo
- Faculdade de Engenharia, Universidade do Porto, Porto, 4200-465, Portugal; INESC TEC - Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência, Porto, 4200-465, Portugal
- Duc My Vo
- Gachon University, 1342 Seongnamdaero, Sujeonggu, Seongnam, 13120, Korea
- Navdeep Dahiya
- Laboratory of Computational Computer Vision, Georgia Tech, Atlanta, GA, 30332, USA
- Arash Vahdat
- D-Wave Systems Inc., Burnaby, BC, V5G 4M9, Canada
- Chenhui Qiu
- Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, HangZhou, 310000, China
- Sabrina Dill
- Department of Visual Data Analysis, Zuse Institute Berlin, Berlin, 14195, Germany
- Anirban Mukhopadhyay
- Department of Computer Science, Technische Universität Darmstadt, Darmstadt, 64283, Germany
- Pedro Costa
- INESC TEC - Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência, Porto, 4200-465, Portugal
- Guilherme Aresta
- Faculdade de Engenharia, Universidade do Porto, Porto, 4200-465, Portugal; INESC TEC - Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência, Porto, 4200-465, Portugal
- Senthil Ramamurthy
- Laboratory of Computational Computer Vision, Georgia Tech, Atlanta, GA, 30332, USA
- Sang-Woong Lee
- Gachon University, 1342 Seongnamdaero, Sujeonggu, Seongnam, 13120, Korea
- Aurélio Campilho
- Faculdade de Engenharia, Universidade do Porto, Porto, 4200-465, Portugal; INESC TEC - Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência, Porto, 4200-465, Portugal
- Stefan Zachow
- Department of Visual Data Analysis, Zuse Institute Berlin, Berlin, 14195, Germany
- Shunren Xia
- Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, HangZhou, 310000, China
- Sailesh Conjeti
- Chair for Computer Aided Medical Procedures, Faculty of Informatics, Technical University of Munich, Garching b. Munich, 85748, Germany; German Center for Neurodegenerative Diseases (DZNE), Bonn, 53127, Germany
- Danail Stoyanov
- Digital Surgery Ltd, EC1V 2QY, London, UK; University College London, Gower Street, WC1E 6BT, London, UK
- Pheng-Ann Heng
- Dept. of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Béatrice Cochener
- Inserm, UMR 1101, Brest, F-29200, France; Univ Bretagne Occidentale, Brest, F-29200, France; Service d'Ophtalmologie, CHRU Brest, Brest, F-29200, France
22. Allan M, Ourselin S, Hawkes DJ, Kelly JD, Stoyanov D. 3-D Pose Estimation of Articulated Instruments in Robotic Minimally Invasive Surgery. IEEE Trans Med Imaging 2018; 37:1204-1213. [PMID: 29727283] [DOI: 10.1109/tmi.2018.2794439]
Abstract
Estimating the 3-D pose of instruments is an important part of robotic minimally invasive surgery for automation of basic procedures as well as providing safety features, such as virtual fixtures. Image-based methods of 3-D pose estimation provide a non-invasive, low-cost solution compared with methods that incorporate external tracking systems. In this paper, we extend our recent work in estimating rigid 3-D pose with silhouette and optical flow-based features to incorporate the articulated degrees-of-freedom (DOFs) of robotic instruments within a gradient-based optimization framework. Validation of the technique is provided with a calibrated ex-vivo study from the da Vinci Research Kit (DVRK) robotic system, where we perform quantitative analysis of the errors of each DOF of our tracker. Additionally, we perform several detailed comparisons with recently published techniques that combine visual methods with kinematic data acquired from the joint encoders. Our experiments demonstrate that our method is competitively accurate while relying solely on image data.
23. Cifuentes J, Pham MT, Boulanger P, Moreau R, Prieto F. Gesture segmentation and classification using affine speed and energy. Proc Inst Mech Eng H 2018; 232:588-596. [PMID: 29683373] [DOI: 10.1177/0954411918768350]
Abstract
The characterization and analysis of hand gestures are challenging tasks with an important number of applications in human-computer interaction, machine vision and control, and medical gesture recognition. Specifically, several researchers have tried to develop objective evaluation methods of surgical skills for medical training. As a result, the adequate selection and extraction of similarities and differences between experts and novices have become an important challenge in this area. In particular, some of these works have shown that human movements performed during surgery can be described as a sequence of constant affine-speed trajectories. In this article, we show that affine speed can be used to segment medical hand movements and describe how the mechanical energy computed within each segment is analyzed to compare surgical skills. The position and orientation of the instrument end effectors are determined by six video photographic cameras. In addition, two laparoscopic instruments simultaneously measure the forces and torques applied to the tool. Finally, we report the results of these experiments and present a correlation between the mechanical energy dissipated during a procedure and surgical skill.
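For intuition, here is a minimal sketch of the segmentation idea, assuming the standard planar equi-affine speed definition |x'y'' − y'x''|^(1/3) and a simple threshold rule; the paper works with recorded 3D instrument trajectories and its own segmentation criteria, so this is illustrative only.

```python
import numpy as np

def equi_affine_speed(x, y, dt):
    """Planar equi-affine speed |x'y'' - y'x''|^(1/3) from sampled coordinates."""
    vx, vy = np.gradient(x, dt), np.gradient(y, dt)
    ax, ay = np.gradient(vx, dt), np.gradient(vy, dt)
    return np.abs(vx * ay - vy * ax) ** (1.0 / 3.0)

def segment_boundaries(sigma, threshold):
    """Candidate boundaries between constant-affine-speed segments:
    indices where the affine speed drops below the threshold."""
    below = sigma < threshold
    return np.where(np.diff(below.astype(int)) == 1)[0] + 1

# toy trajectory: an ellipse with a small high-frequency perturbation
t = np.linspace(0, 2 * np.pi, 500)
x = np.cos(t) + 0.05 * np.cos(7 * t)
y = 0.5 * np.sin(t)
sigma = equi_affine_speed(x, y, dt=t[1] - t[0])
print(segment_boundaries(sigma, threshold=0.5 * np.median(sigma)))
```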
Collapse
Affiliation(s)
- Jenny Cifuentes
- Program of Electrical Engineering, Universidad de La Salle, Bogotá, Colombia
| | - Minh Tu Pham
- Department of Mechanical Engineering, INSA de Lyon, Villeurbanne, France
| | - Pierre Boulanger
- Department of Computing Science, University of Alberta, Edmonton, AB, Canada
| | - Richard Moreau
- Department of Mechanical Engineering, INSA de Lyon, Villeurbanne, France
| | - Flavio Prieto
- Department of Mechanical and Mechatronics Engineering, Universidad Nacional de Colombia, Bogotá, Colombia
| |
Collapse
|
24
|
Ganni S, Botden SMBI, Chmarra M, Goossens RHM, Jakimowicz JJ. A software-based tool for video motion tracking in the surgical skills assessment landscape. Surg Endosc 2018; 32:2994-2999. [PMID: 29340824 PMCID: PMC5956097 DOI: 10.1007/s00464-018-6023-5] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2017] [Accepted: 01/03/2018] [Indexed: 01/22/2023]
Abstract
BACKGROUND The use of motion tracking has been shown to provide an objective assessment in surgical skills training. Current systems, however, require the use of additional equipment or specialised laparoscopic instruments and cameras to extract the data. The aim of this study was to determine the possibility of using a software-based solution to extract the data. METHODS 6 expert and 23 novice participants performed a basic laparoscopic cholecystectomy procedure in the operating room. The recorded videos were analysed using Kinovea 0.8.15, and the following parameters were calculated: path length, average instrument movement, and number of sudden or extreme movements. RESULTS The analysed data showed that experts had significantly shorter path length (median 127 cm vs. 187 cm, p = 0.01), smaller average movements (median 0.40 cm vs. 0.32 cm, p = 0.002) and fewer sudden movements (median 14.00 vs. 21.61, p = 0.001) than their novice counterparts. CONCLUSION The use of software-based video motion tracking of laparoscopic cholecystectomy is a simple and viable method enabling objective assessment of surgical performance. It provides clear discrimination between expert and novice performance.
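A minimal sketch of how such frame-by-frame metrics can be computed once tip coordinates have been exported from a video-tracking tool; the threshold for "sudden" movements is an illustrative assumption, not the value used in the study.

```python
import numpy as np

def motion_metrics(xy, sudden_thresh_cm=1.0):
    """Path length, average per-frame movement, and count of 'sudden'
    movements from per-frame instrument tip coordinates (cm).

    xy: (N, 2) array of tip positions, one row per video frame.
    sudden_thresh_cm: per-frame displacement above which a movement is
        counted as sudden (illustrative threshold).
    """
    steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # per-frame displacement
    path_length = steps.sum()
    average_movement = steps.mean()
    sudden_movements = int((steps > sudden_thresh_cm).sum())
    return path_length, average_movement, sudden_movements

# usage with a synthetic random-walk trajectory
rng = np.random.default_rng(0)
trajectory = np.cumsum(rng.normal(0, 0.2, size=(1000, 2)), axis=0)
print(motion_metrics(trajectory))
```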
Collapse
Affiliation(s)
- Sandeep Ganni
- Medisign, Industrial Design Engineering, Delft University of Technology, Delft, The Netherlands; Department of Surgery, GSL Medical College, Rajahmundry, India; Research and Education, Catharina Hospital, Michelangelolaan 2, 5653 EJ, Eindhoven, The Netherlands.
| | - Sanne M B I Botden
- Department of Pediatric Surgery, Radboudumc - Amalia Children's Hospital, Nijmegen, The Netherlands
| | - Magdalena Chmarra
- Medisign, Industrial Design Engineering, Delft University of Technology, Delft, The Netherlands
| | - Richard H M Goossens
- Medisign, Industrial Design Engineering, Delft University of Technology, Delft, The Netherlands
| | - Jack J Jakimowicz
- Medisign, Industrial Design Engineering, Delft University of Technology, Delft, The Netherlands; Research and Education, Catharina Hospital, Michelangelolaan 2, 5653 EJ, Eindhoven, The Netherlands
| |
Collapse
|
25
|
Lahanas V, Loukas C, Georgiou K, Lababidi H, Al-Jaroudi D. Virtual reality-based assessment of basic laparoscopic skills using the Leap Motion controller. Surg Endosc 2017; 31:5012-5023. [PMID: 28466361 DOI: 10.1007/s00464-017-5503-3] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2016] [Accepted: 03/08/2017] [Indexed: 11/24/2022]
Abstract
BACKGROUND The majority of the current surgical simulators employ specialized sensory equipment for instrument tracking. The Leap Motion controller is a new device able to track linear objects with sub-millimeter accuracy. The aim of this study was to investigate the potential of a virtual reality (VR) simulator for assessment of basic laparoscopic skills, based on the low-cost Leap Motion controller. METHODS A simple interface was constructed to simulate the insertion point of the instruments into the abdominal cavity. The controller provided information about the position and orientation of the instruments. Custom tools were constructed to simulate the laparoscopic setup. Three basic VR tasks were developed: camera navigation (CN), instrument navigation (IN), and bimanual operation (BO). The experiments were carried out in two simulation centers: MPLSC (Athens, Greece) and CRESENT (Riyadh, Kingdom of Saudi Arabia). Two groups of surgeons (28 experts and 21 novices) participated in the study by performing the VR tasks. Skills assessment metrics included time, path length, and two task-specific errors. The face validity of the training scenarios was also investigated via a questionnaire completed by the participants. RESULTS Expert surgeons significantly outperformed novices in all assessment metrics for IN and BO (p < 0.05). For CN, a significant difference was found in one error metric (p < 0.05). The greatest difference between the performances of the two groups occurred for BO. Qualitative analysis of the instrument trajectory revealed that experts performed more delicate movements compared to novices. Subjects' ratings on the feedback questionnaire highlighted the training value of the system. CONCLUSIONS This study provides evidence regarding the potential use of the Leap Motion controller for assessment of basic laparoscopic skills. The proposed system allowed the evaluation of dexterity of the hand movements. Future work will involve comparison studies with validated simulators and development of advanced training scenarios on the current Leap Motion controller.
Collapse
Affiliation(s)
- Vasileios Lahanas
- Medical Physics Lab-Simulation Center, School of Medicine, National and Kapodistrian University of Athens, Mikras Asias 75 Str., 11527, Athens, Greece
| | - Constantinos Loukas
- Medical Physics Lab-Simulation Center, School of Medicine, National and Kapodistrian University of Athens, Mikras Asias 75 Str., 11527, Athens, Greece.
| | - Konstantinos Georgiou
- Medical Physics Lab-Simulation Center, School of Medicine, National and Kapodistrian University of Athens, Mikras Asias 75 Str., 11527, Athens, Greece
| | - Hani Lababidi
- Center for Research, Education & Simulation Enhanced Training, King Fahad Medical City, Riyadh, Saudi Arabia
| | - Dania Al-Jaroudi
- Center for Research, Education & Simulation Enhanced Training, King Fahad Medical City, Riyadh, Saudi Arabia
| |
Collapse
|
26
|
Vedula SS, Ishii M, Hager GD. Objective Assessment of Surgical Technical Skill and Competency in the Operating Room. Annu Rev Biomed Eng 2017; 19:301-325. [PMID: 28375649 DOI: 10.1146/annurev-bioeng-071516-044435] [Citation(s) in RCA: 75] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/16/2023]
Abstract
Training skillful and competent surgeons is critical to ensure high quality of care and to minimize disparities in access to effective care. Traditional models to train surgeons are being challenged by rapid advances in technology, an intensified patient-safety culture, and a need for value-driven health systems. Simultaneously, technological developments are enabling capture and analysis of large amounts of complex surgical data. These developments are motivating a "surgical data science" approach to objective computer-aided technical skill evaluation (OCASE-T) for scalable, accurate assessment; individualized feedback; and automated coaching. We define the problem space for OCASE-T and summarize 45 publications representing recent research in this domain. We find that most studies on OCASE-T are simulation based; very few are in the operating room. The algorithms and validation methodologies used for OCASE-T are highly varied; there is no uniform consensus. Future research should emphasize competency assessment in the operating room, validation against patient outcomes, and effectiveness for surgical training.
Collapse
Affiliation(s)
- S Swaroop Vedula
- Malone Center for Engineering in Healthcare, Department of Computer Science, The Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland 21218;
| | - Masaru Ishii
- Department of Otolaryngology-Head and Neck Surgery, The Johns Hopkins University School of Medicine, Baltimore, Maryland 21287
| | - Gregory D Hager
- Malone Center for Engineering in Healthcare, Department of Computer Science, The Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland 21218;
| |
Collapse
|
27
|
Broekema TH, Talsma AK, Wevers KP, Pierie JPEN. Laparoscopy Instructional Videos: The Effect of Preoperative Compared With Intraoperative Use on Learning Curves. JOURNAL OF SURGICAL EDUCATION 2017; 74:91-99. [PMID: 27553762 DOI: 10.1016/j.jsurg.2016.07.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/22/2016] [Revised: 06/30/2016] [Accepted: 07/01/2016] [Indexed: 06/06/2023]
Abstract
OBJECTIVE Previous studies have shown that the use of intraoperative instructional videos has a positive effect on learning laparoscopic procedures. This study investigated the effect of the timing of the instructional videos on learning curves in laparoscopic skills training. DESIGN After completing a basic skills course on a virtual reality simulator, medical students and residents with less than 1 hour of experience using laparoscopic instruments were randomized into 2 groups. Using an instructional video either preoperatively or intraoperatively, both groups then performed 4 repetitions of a standardized task on the TrEndo augmented reality simulator. With the TrEndo, 9 motion analysis parameters (MAPs) were recorded for each session (4 MAPs for each hand, plus time). These were the primary outcome measurements for performance. The time spent watching the instructional video was also recorded. Improvement in performance was studied within and between groups. SETTING Medical Center Leeuwarden, a secondary care hospital located in Leeuwarden, The Netherlands. PARTICIPANTS Right-hand-dominant medical students and residents with less than 1 hour of experience operating any kind of laparoscopic instrument participated. A total of 23 persons entered the study, of whom 21 completed the study course. RESULTS In both groups, at least 5 of 9 MAPs showed significant improvements between repetitions 1 and 4. When both groups were compared after completion of repetition 4, no significant differences in improvement were detected. The intraoperative group showed significant improvement in 3 MAPs of the left (nondominant) hand, compared with 1 MAP for the preoperative group. CONCLUSION No significant differences in learning curves could be detected between the subjects who used intraoperative instructional videos and those who used preoperative instructional videos. Intraoperative video instruction may result in improved dexterity of the nondominant hand.
Collapse
Affiliation(s)
- Theo H Broekema
- Department of Surgery, Medical Center Leeuwarden, Leeuwarden, The Netherlands.
| | - Aaldert K Talsma
- Department of Surgery, Medical Center Leeuwarden, Leeuwarden, The Netherlands; Postgraduate School of Medicine, University Medical Center Groningen, Groningen, The Netherlands
| | - Kevin P Wevers
- Department of Surgery, Medical Center Leeuwarden, Leeuwarden, The Netherlands
| | - Jean-Pierre E N Pierie
- Department of Surgery, Medical Center Leeuwarden, Leeuwarden, The Netherlands; Postgraduate School of Medicine, University Medical Center Groningen, Groningen, The Netherlands; University Groningen, Groningen, The Netherlands
| |
Collapse
|
28
|
Obdeijn MC, Horeman T, de Boer LL, van Baalen SJ, Liverneaux P, Tuijthof GJM. Navigation forces during wrist arthroscopy: assessment of expert levels. Knee Surg Sports Traumatol Arthrosc 2016; 24:3684-3692. [PMID: 25448136 DOI: 10.1007/s00167-014-3450-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/05/2014] [Accepted: 11/17/2014] [Indexed: 10/24/2022]
Abstract
PURPOSE To facilitate effective and efficient training in the skills laboratory, objective metrics can be used. Forces exerted on the tissues can be a measure of safe tissue manipulation. To provide feedback during training, expert threshold levels need to be determined. The purpose of this study was to define the magnitude and the direction of navigation forces used during arthroscopic inspection of the wrist. METHODS We developed a set-up to mount a cadaver wrist to a 3D force platform that allowed measurement of the forces exerted on the wrist. Six experts in wrist arthroscopy performed two tasks: (1) Introduction of the camera and visualization of the hook. (2) Navigation through the wrist with visualization of five anatomic structures. The magnitude (Fabs) and direction of force were recorded, with the direction defined as α being the angle in the vertical plane and β being the angle in the horizontal plane. The 10th-90th percentiles of the data were used to set threshold levels for training. RESULTS The results show distinct force patterns for each of the anatomic landmarks. Median Fabs of the navigation task is 3.8 N (1.8-7.3), α is 3.6° (-54 to 44) and β is 26° (0 to 72). CONCLUSION Unique expert data on navigation forces during wrist arthroscopy were determined. The defined maximum allowable navigation force of 7.3 N (90th percentile) can be used in providing feedback on performance during skills training. The clinical value is that this study contributes to objective assessment of skill levels.
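A minimal sketch of how magnitude and direction angles might be derived from 3D force-platform samples, and how expert percentile bands could be turned into feedback thresholds. The angle conventions below are assumptions, since the paper defines α and β relative to its own setup.

```python
import numpy as np

def force_direction(fx, fy, fz):
    """Magnitude and direction of a 3D force sample.

    Conventions here are illustrative: alpha is the elevation angle out of
    the horizontal (x-y) plane, beta the azimuth within it, both in degrees.
    """
    f_abs = np.sqrt(fx**2 + fy**2 + fz**2)
    alpha = np.degrees(np.arctan2(fz, np.hypot(fx, fy)))  # vertical-plane angle
    beta = np.degrees(np.arctan2(fy, fx))                 # horizontal-plane angle
    return f_abs, alpha, beta

def expert_thresholds(force_samples, lo=10, hi=90):
    """10th-90th percentile band of force magnitude, usable as a feedback band."""
    mags = np.linalg.norm(force_samples, axis=1)
    return np.percentile(mags, [lo, hi])

# usage with synthetic force-platform samples (N x 3, in newtons)
rng = np.random.default_rng(1)
samples = rng.normal([1.0, 2.0, 3.0], 0.8, size=(5000, 3))
print(expert_thresholds(samples))
```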
Collapse
Affiliation(s)
- Miryam C Obdeijn
- Department of Plastic, Reconstructive and Hand Surgery, Academic Medical Center, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, Netherlands.
| | - Tim Horeman
- Department of Biomechanical Engineering, Delft University of Technology, Delft, Netherlands
| | - Lisanne L de Boer
- Department of Technical Medicine, MIRA Institute for Biomedical Technology and Technical Medicine Enschede, University of Twente, Enschede, Netherlands
| | - Sophie J van Baalen
- Department of Technical Medicine, MIRA Institute for Biomedical Technology and Technical Medicine Enschede, University of Twente, Enschede, Netherlands
| | - Philippe Liverneaux
- Department of Hand Surgery, Strasbourg University Hospitals, Illkirch, France
| | - Gabrielle J M Tuijthof
- Department of Biomechanical Engineering, Delft University of Technology, Delft, Netherlands; Department of Orthopedic Surgery, Orthopedic Research Center Amsterdam, Academic Medical Center, University of Amsterdam, Amsterdam, Netherlands
| |
Collapse
|
29
|
Arndt S, Russell A, Tomas J, Müller P, Shekhar S, Brandstädter K, Bruns C, Wex C. Rupture probability of porcine liver under planar and point loading. Biomed Phys Eng Express 2016. [DOI: 10.1088/2057-1976/2/5/055018] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
|
30
|
Bouget D, Allan M, Stoyanov D, Jannin P. Vision-based and marker-less surgical tool detection and tracking: a review of the literature. Med Image Anal 2016; 35:633-654. [PMID: 27744253 DOI: 10.1016/j.media.2016.09.003] [Citation(s) in RCA: 104] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2016] [Revised: 06/26/2016] [Accepted: 09/05/2016] [Indexed: 11/16/2022]
Abstract
In recent years, tremendous progress has been made in surgical practice, for example with Minimally Invasive Surgery (MIS). To overcome challenges coming from deported eye-to-hand manipulation, robotic and computer-assisted systems have been developed. Having real-time knowledge of the pose of surgical tools with respect to the surgical camera and underlying anatomy is a key ingredient for such systems. In this paper, we present a review of the literature dealing with vision-based and marker-less surgical tool detection. This paper includes three primary contributions: (1) identification and analysis of data-sets used for developing and testing detection algorithms, (2) in-depth comparison of surgical tool detection methods from the feature extraction process to the model learning strategy, highlighting existing shortcomings, and (3) analysis of validation techniques employed to obtain detection performance results and establish comparison between surgical tool detectors. The papers included in the review were selected through PubMed and Google Scholar searches using the keywords: "surgical tool detection", "surgical tool tracking", "surgical instrument detection" and "surgical instrument tracking", limiting results to the year range 2000-2015. Our study shows that despite significant progress over the years, the lack of established surgical tool data-sets and of a reference format for performance assessment and method ranking is preventing faster improvement.
Collapse
Affiliation(s)
- David Bouget
- Medicis team, INSERM U1099, Université de Rennes 1 LTSI, 35000 Rennes, France.
| | - Max Allan
- Center for Medical Image Computing. University College London, WC1E 6BT London, United Kingdom.
| | - Danail Stoyanov
- Center for Medical Image Computing. University College London, WC1E 6BT London, United Kingdom.
| | - Pierre Jannin
- Medicis team, INSERM U1099, Université de Rennes 1 LTSI, 35000 Rennes, France.
| |
Collapse
|
31
|
Lorias-Espinoza D, Carranza VG, de León FCP, Escamirosa FP, Martinez AM. A Low-Cost, Passive Navigation Training System for Image-Guided Spinal Intervention. World Neurosurg 2016; 95:322-328. [PMID: 27535635 DOI: 10.1016/j.wneu.2016.08.006] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2016] [Revised: 07/31/2016] [Accepted: 08/01/2016] [Indexed: 10/21/2022]
Abstract
BACKGROUND Navigation technology is used for training in various medical specialties, not least image-guided spinal interventions. Navigation practice is an important educational component that allows residents to understand how surgical instruments interact with complex anatomy and to learn basic surgical skills such as the tridimensional mental interpretation of bidimensional data. Inexpensive surgical simulators for spinal surgery, however, are lacking. We therefore designed a low-cost spinal surgery simulator (Spine MovDigSys 01) to allow 3-dimensional navigation via 2-dimensional images without altering or limiting the surgeon's natural movement. METHODS A training system was developed with an anatomical lumbar model and 2 webcams to passively digitize surgical instruments under MATLAB software control. A proof-of-concept recognition task (vertebral body cannulation) and a pilot test of the system with 12 neuro- and orthopedic surgeons were performed to obtain feedback on the system. Position, orientation, and kinematic variables were determined and the lateral, posteroanterior, and anteroposterior views obtained. RESULTS The system was tested with a proof-of-concept experimental task. Operator metrics including time of execution (t), intracorporeal length (d), insertion angle (α), average speed (v¯), and acceleration (a) were obtained accurately. These metrics were converted into assessment metrics such as smoothness of operation and linearity of insertion. Results from initial testing are shown and the system advantages and disadvantages described. CONCLUSIONS This low-cost spinal surgery training system digitized the position and orientation of the instruments and allowed image-guided navigation, the generation of metrics, and graphic recording of the instrumental route. Spine MovDigSys 01 is useful for development of basic, noninnate skills and allows the novice apprentice to quickly and economically move beyond the basics.
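As an illustration of how the reported position data could be turned into a derived assessment metric such as "linearity of insertion", here is a minimal sketch based on a straight-line fit; the paper's exact definition is not given in the abstract, so this is an assumed formulation.

```python
import numpy as np

def insertion_linearity(positions):
    """RMS deviation of the instrument path from the best-fit straight line,
    an illustrative 'linearity of insertion' measure (smaller = straighter).

    positions: (N, 3) tracked instrument-tip positions during insertion.
    """
    centered = positions - positions.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]                       # principal direction of the path
    residual = centered - np.outer(centered @ direction, direction)
    return np.sqrt(np.mean(np.sum(residual**2, axis=1)))

# usage: a nearly straight insertion versus a wandering one
t = np.linspace(0, 50, 200)[:, None]
straight = t * np.array([0.0, 0.0, 1.0]) + np.random.normal(0, 0.1, (200, 3))
wandering = straight + np.column_stack([2 * np.sin(t[:, 0] / 5), np.zeros(200), np.zeros(200)])
print(insertion_linearity(straight), insertion_linearity(wandering))
```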
Collapse
Affiliation(s)
- Daniel Lorias-Espinoza
- Electrical Department, Research and Advanced Studies Center of the National Polytechnic Institute of Mexico (Cinvestav - IPN). Av. IPN No 2508, Col San Pedro Zacatenco, México DF, Mexico.
| | - Vicente González Carranza
- Department of Neurosurgery, Hospital Infantil de México Federico Gómez, col Doctores, México DF, Mexico
| | | | - Fernando Pérez Escamirosa
- Department of Surgery, Faculty of Medicine, Universidad Nacional Autónoma de México (UNAM), México DF, Mexico
| | - Arturo Minor Martinez
- Electrical Department, Research and Advanced Studies Center of the National Polytechnic Institute of Mexico (Cinvestav - IPN). Av. IPN No 2508, Col San Pedro Zacatenco, México DF, Mexico
| |
Collapse
|
32
|
Lu K, Song C, Yang L, Ai L, Shi Q. Measurement of the Range of Motion of Laparoscopic Instruments Based on an Optical Tracking System. J Med Device 2016. [DOI: 10.1115/1.4033168] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Affiliation(s)
- Kunyong Lu
- Shanghai Institute for Minimally Invasive Therapy, School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Yangpu, Shanghai 200093, China
| | - Chengli Song
- Shanghai Institute for Minimally Invasive Therapy, School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Yangpu, Shanghai 200093, China
| | - Lixiao Yang
- Changhai Hospital, Second Military Medical University, Yangpu, Shanghai 200093, China
| | - Liaoyuan Ai
- Shanghai Institute for Minimally Invasive Therapy, School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Yangpu, Shanghai 200093, China
| | - Qin Shi
- Shanghai Institute for Minimally Invasive Therapy, School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Yangpu, Shanghai 200093, China
| |
Collapse
|
33
|
Sánchez A, Rodríguez O, Sánchez R, Benítez G, Pena R, Salamo O, Baez V. Laparoscopic surgery skills evaluation: analysis based on accelerometers. JSLS 2016; 18:JSLS.2014.00234. [PMID: 25489218 PMCID: PMC4254482 DOI: 10.4293/jsls.2014.00234] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023] Open
Abstract
Background and Objective: Technical skills assessment is considered an important part of surgical training. Subjective assessment is not appropriate for training feedback, and there is now increased demand for objective assessment of surgical performance. Economy of movement has been proposed as an excellent alternative for this purpose. The investigators describe a readily available method to evaluate surgical skills through motion analysis using accelerometers in Apple's iPod Touch device. Methods: Two groups of individuals with different minimally invasive surgery skill levels (experts and novices) were evaluated. Each group was asked to perform a given task with an iPod Touch placed on the dominant-hand wrist. The Accelerometer Data Pro application makes it possible to obtain movement-related data detected by the accelerometers. Average acceleration and maximum acceleration for each axis (x, y, and z) were determined and compared. Results: The analysis of average acceleration and maximum acceleration showed statistically significant differences between groups on both the y (P = .04, P = .03) and z (P = .04, P = .04) axes. This demonstrates the ability to distinguish between experts and novices. The analysis of the x axis showed no significant differences between groups, which could be explained by the fact that the task involves few movements on this axis. Conclusion: Accelerometer-based motion analysis is a useful tool to evaluate laparoscopic skill development of surgeons and should be used in training programs. Validation of this device in an in vivo setting is a research goal of the investigators' team.
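A minimal sketch of the per-axis analysis described above, assuming the accelerometer samples have already been exported as an N×3 array; it is not tied to the Accelerometer Data Pro application used in the study.

```python
import numpy as np

def per_axis_acceleration_metrics(acc):
    """Average and maximum absolute acceleration per axis.

    acc: (N, 3) array of accelerometer samples (x, y, z), e.g. exported
    from a wrist-worn device during a task.
    """
    abs_acc = np.abs(acc)
    return {"average": abs_acc.mean(axis=0), "maximum": abs_acc.max(axis=0)}

# usage with synthetic samples (larger spread on y and z, as in the task)
rng = np.random.default_rng(2)
samples = rng.normal(0.0, [0.1, 0.4, 0.3], size=(3000, 3))
print(per_axis_acceleration_metrics(samples))
```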
Collapse
Affiliation(s)
- Alexis Sánchez
- Medicine Faculty, Central University of Venezuela, Caracas, Venezuela
| | - Omaira Rodríguez
- Medicine Faculty, Central University of Venezuela, Caracas, Venezuela
| | - Renata Sánchez
- Medicine Faculty, Central University of Venezuela, Caracas, Venezuela
| | - Gustavo Benítez
- Medicine Faculty, Central University of Venezuela, Caracas, Venezuela
| | - Romina Pena
- Surgery Department III, University Hospital of Caracas, Caracas, Venezuela
| | - Oriana Salamo
- Surgery Department III, University Hospital of Caracas, Caracas, Venezuela
| | - Valentina Baez
- Surgery Department III, University Hospital of Caracas, Caracas, Venezuela
| |
Collapse
|
34
|
Ciarapica FE, Bevilacqua M, Mazzuto G, Paciarotti C. Business process re-engineering of surgical instruments sterilization process: A case study. INTERNATIONAL JOURNAL OF RF TECHNOLOGIES-RESEARCH AND APPLICATIONS 2016. [DOI: 10.3233/rft-150070] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
|
35
|
Loukas C, Georgiou E. Performance comparison of various feature detector-descriptors and temporal models for video-based assessment of laparoscopic skills. Int J Med Robot 2015; 12:387-98. [PMID: 26415583 DOI: 10.1002/rcs.1702] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2013] [Revised: 07/17/2015] [Accepted: 08/21/2015] [Indexed: 11/07/2022]
Abstract
BACKGROUND Despite the significant progress in hand gesture analysis for surgical skills assessment, video-based analysis has not received much attention. In this study we investigate the application of various feature detector-descriptors and temporal modeling techniques for laparoscopic skills assessment. METHODS Two different setups were designed: static and dynamic video-histogram analysis. Four well-known feature detection-extraction methods were investigated: SIFT, SURF, STAR-BRIEF and STIP-HOG. For the dynamic setup two temporal models were employed (LDS and GMMAR model). Each method was evaluated for its ability to classify experts and novices on peg transfer and knot tying. RESULTS STIP-HOG yielded the best performance (static: 74-79%; dynamic: 80-89%). Temporal models had equivalent performance. Important differences were found between the two groups with respect to the underlying dynamics of the video-histogram sequences. CONCLUSIONS Temporal modeling of feature histograms extracted from laparoscopic training videos provides information about the skill level and motion pattern of the operator. Copyright © 2015 John Wiley & Sons, Ltd.
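A simplified sketch of the video-histogram idea, using OpenCV's SIFT detector (one of the methods compared): each frame is summarized by a spatial histogram of keypoint locations, producing a sequence to which temporal models such as an LDS or GMM-AR could then be fitted. The grid size and function names are illustrative assumptions, not the paper's pipeline.

```python
import cv2
import numpy as np

def frame_feature_histograms(video_path, grid=(8, 8), max_frames=500):
    """Per-frame spatial histograms of SIFT keypoint locations.

    Returns an array of shape (frames, grid[0] * grid[1]) suitable as a
    feature-histogram sequence for subsequent temporal modelling.
    """
    sift = cv2.SIFT_create()
    cap = cv2.VideoCapture(video_path)
    histograms = []
    while len(histograms) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        keypoints = sift.detect(gray, None)
        h, w = gray.shape
        hist = np.zeros(grid)
        for kp in keypoints:
            x, y = kp.pt
            hist[min(int(y / h * grid[0]), grid[0] - 1),
                 min(int(x / w * grid[1]), grid[1] - 1)] += 1
        histograms.append(hist.ravel())
    cap.release()
    return np.array(histograms)
```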
Collapse
Affiliation(s)
- Constantinos Loukas
- Medical Physics Lab-Simulation Center, School of Medicine, University of Athens, Greece
| | - Evangelos Georgiou
- Medical Physics Lab-Simulation Center, School of Medicine, University of Athens, Greece
| |
Collapse
|
36
|
Lahanas V, Loukas C, Georgiou E. A simple sensor calibration technique for estimating the 3D pose of endoscopic instruments. Surg Endosc 2015; 30:1198-204. [PMID: 26123335 DOI: 10.1007/s00464-015-4330-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2015] [Accepted: 06/09/2015] [Indexed: 11/29/2022]
Abstract
INTRODUCTION The aim of this study was to describe a simple and easy-to-use calibration method that is able to estimate the pose (tip position and orientation) of a rigid endoscopic instrument with respect to an electromagnetic tracking device attached to the handle. METHODS A two-step calibration protocol was developed. First, the orientation of the instrument shaft is derived by performing a 360° rotation of the instrument around its shaft using a firmly positioned surgical trocar. Second, the 3D position of the instrument tip is obtained by allowing the tip to come in contact with a planar surface. RESULTS The results indicate submillimeter accuracy in the estimation of the tooltip position, and subdegree accuracy in the estimation of the shaft orientation, both with respect to a known reference frame. The assets of the proposed method are also highlighted by illustrating an indicative application in the field of augmented reality simulation. CONCLUSIONS The proposed method is simple, inexpensive, does not require employment of special calibration frames, and has potential applications not only in training systems but also in the operating room.
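A minimal sketch of the first calibration step under a simple geometric assumption: a sensor mounted off the shaft axis traces a circle during the 360° rotation, so the normal of the plane best fitting those positions approximates the shaft direction. This is an illustration, not the authors' exact computation.

```python
import numpy as np

def shaft_axis_from_rotation(sensor_positions):
    """Estimate the shaft direction from sensor positions recorded while the
    instrument is rotated 360 degrees about its own shaft.

    The circle traced by an off-axis sensor lies in a plane perpendicular to
    the shaft; the smallest singular vector of the centered positions is the
    normal of that best-fit plane and therefore approximates the shaft axis.
    """
    centered = sensor_positions - sensor_positions.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]  # unit normal of the best-fit plane

# usage: synthetic circle of radius 30 mm around the z-axis, with noise
theta = np.linspace(0, 2 * np.pi, 360)
circle = np.column_stack([30 * np.cos(theta), 30 * np.sin(theta), np.full_like(theta, 5.0)])
circle += np.random.normal(0, 0.2, circle.shape)
print(shaft_axis_from_rotation(circle))  # expect approximately [0, 0, ±1]
```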
Collapse
Affiliation(s)
- Vasileios Lahanas
- Medical Physics Laboratory Simulation Centre, School of Medicine, University of Athens, Mikras Asias St. 75, 11527, Athens, Greece.
| | - Constantinos Loukas
- Medical Physics Laboratory Simulation Centre, School of Medicine, University of Athens, Mikras Asias St. 75, 11527, Athens, Greece
| | - Evangelos Georgiou
- Medical Physics Laboratory Simulation Centre, School of Medicine, University of Athens, Mikras Asias St. 75, 11527, Athens, Greece
| |
Collapse
|
37
|
D'Angelo ALD, Rutherford DN, Ray RD, Laufer S, Kwan C, Cohen ER, Mason A, Pugh CM. Idle time: an underdeveloped performance metric for assessing surgical skill. Am J Surg 2015; 209:645-51. [PMID: 25725505 DOI: 10.1016/j.amjsurg.2014.12.013] [Citation(s) in RCA: 46] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2014] [Revised: 12/06/2014] [Accepted: 12/17/2014] [Indexed: 10/24/2022]
Abstract
BACKGROUND The aim of this study was to evaluate validity evidence using idle time as a performance measure in open surgical skills assessment. METHODS This pilot study tested psychomotor planning skills of surgical attendings (n = 6), residents (n = 4) and medical students (n = 5) during suturing tasks of varying difficulty. Performance data were collected with a motion tracking system. Participants' hand movements were analyzed for idle time, total operative time, and path length. We hypothesized that there will be shorter idle times for more experienced individuals and on the easier tasks. RESULTS A total of 365 idle periods were identified across all participants. Attendings had fewer idle periods during 3 specific procedure steps (P < .001). All participants had longer idle time on friable tissue (P < .005). CONCLUSIONS Using an experimental model, idle time was found to correlate with experience and motor planning when operating on increasingly difficult tissue types. Further work exploring idle time as a valid psychomotor measure is warranted.
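A minimal sketch of how idle periods could be extracted from motion-tracking data, assuming a speed threshold and a minimum duration (both illustrative; the abstract does not state exact values).

```python
import numpy as np

def idle_periods(positions, dt, speed_thresh=5.0, min_duration=0.5):
    """Detect idle periods: stretches where hand speed stays below a
    threshold (mm/s) for at least min_duration seconds.

    positions: (N, 3) hand positions in mm sampled at interval dt (s).
    Returns a list of (start_time, end_time) tuples and the total idle time.
    """
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
    idle = speed < speed_thresh
    periods, start = [], None
    for i, flag in enumerate(idle):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) * dt >= min_duration:
                periods.append((start * dt, i * dt))
            start = None
    if start is not None and (len(idle) - start) * dt >= min_duration:
        periods.append((start * dt, len(idle) * dt))
    total_idle = sum(end - beg for beg, end in periods)
    return periods, total_idle

# usage with a synthetic pause in the middle of a movement
pos = np.concatenate([np.cumsum(np.random.normal(0, 1.0, (200, 3)), axis=0),
                      np.repeat([[0.0, 0.0, 0.0]], 100, axis=0),
                      np.cumsum(np.random.normal(0, 1.0, (200, 3)), axis=0)])
print(idle_periods(pos, dt=0.02)[1])  # total idle time in seconds
```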
Collapse
Affiliation(s)
- Anne-Lise D D'Angelo
- Department of Surgery, School of Medicine and Public Health, University of Wisconsin - Madison, 750 Highland Avenue, Madison, WI 53726, USA.
| | - Drew N Rutherford
- Department of Surgery, School of Medicine and Public Health, University of Wisconsin - Madison, 750 Highland Avenue, Madison, WI 53726, USA; Department of Kinesiology, School of Education, University of Wisconsin - Madison, 2000 Observatory Drive, Madison, WI 53706, USA
| | - Rebecca D Ray
- Department of Surgery, School of Medicine and Public Health, University of Wisconsin - Madison, 750 Highland Avenue, Madison, WI 53726, USA
| | - Shlomi Laufer
- Department of Surgery, School of Medicine and Public Health, University of Wisconsin - Madison, 750 Highland Avenue, Madison, WI 53726, USA; Department of Electrical and Computer Engineering, College of Engineering, University of Wisconsin - Madison, 1415 Engineering Drive, Madison, WI 53706, USA
| | - Calvin Kwan
- Department of Surgery, School of Medicine and Public Health, University of Wisconsin - Madison, 750 Highland Avenue, Madison, WI 53726, USA
| | - Elaine R Cohen
- Department of Surgery, School of Medicine and Public Health, University of Wisconsin - Madison, 750 Highland Avenue, Madison, WI 53726, USA
| | - Andrea Mason
- Department of Kinesiology, School of Education, University of Wisconsin - Madison, 2000 Observatory Drive, Madison, WI 53706, USA
| | - Carla M Pugh
- Department of Surgery, School of Medicine and Public Health, University of Wisconsin - Madison, 750 Highland Avenue, Madison, WI 53726, USA
| |
Collapse
|
38
|
Image Based Surgical Instrument Pose Estimation with Multi-class Labelling and Optical Flow. LECTURE NOTES IN COMPUTER SCIENCE 2015. [DOI: 10.1007/978-3-319-24553-9_41] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
|
39
|
Face, content, and construct validity of the EndoViS training system for objective assessment of psychomotor skills of laparoscopic surgeons. Surg Endosc 2014; 29:3392-403. [DOI: 10.1007/s00464-014-4032-6] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2014] [Accepted: 12/03/2014] [Indexed: 11/30/2022]
|
40
|
Rana J, Kowalewski T. Feasibility of a Low-Cost Instrumented Trocar for Universal Surgical Procedure Analyses. J Med Device 2014. [DOI: 10.1115/1.4027081] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Affiliation(s)
- Jalal Rana
- Department of Mechanical Engineering, University of Minnesota, 111 Church Street SE, Minneapolis, MN 55455
| | - Timothy Kowalewski
- Department of Mechanical Engineering, University of Minnesota, 111 Church Street SE, Minneapolis, MN 55455
| |
Collapse
|
41
|
Conroy E, Surender K, Geng Z, Chen T, Dailey S, Jiang J. Video-based method of quantifying performance and instrument motion during simulated phonosurgery. Laryngoscope 2014; 124:2332-7. [PMID: 24737286 DOI: 10.1002/lary.24724] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2013] [Revised: 03/11/2014] [Accepted: 04/14/2014] [Indexed: 11/06/2022]
Abstract
OBJECTIVES/HYPOTHESIS To investigate the use of the Video-Based Phonomicrosurgery Instrument Tracking System to collect instrument position data during simulated phonomicrosurgery and calculate motion metrics using these data. We used this system to determine if novice subject motion metrics improved over 1 week of training. STUDY DESIGN Prospective cohort study. METHODS Ten subjects performed simulated surgical tasks once per day for 5 days. Instrument position data were collected and used to compute motion metrics (path length, depth perception, and motion smoothness). Data were analyzed to determine if motion metrics improved with practice time. Task outcome was also determined each day, and relationships between task outcome and motion metrics were used to evaluate the validity of motion metrics as indicators of surgical performance. RESULTS Significant decreases over time were observed for path length (P < .001), depth perception (P < .001), and task outcome (P < .001). No significant change was observed for motion smoothness. Significant relationships were observed between task outcome and path length (P < .001), depth perception (P < .001), and motion smoothness (P < .001). CONCLUSIONS Our system can estimate instrument trajectory and provide quantitative descriptions of surgical performance. It may be useful for evaluating phonomicrosurgery performance. Path length and depth perception may be particularly useful indicators.
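A minimal sketch of two of the metrics named above, under common definitions from the motion-analysis literature (depth perception as distance travelled along the viewing axis; smoothness as integrated squared jerk). The study's exact formulations may differ.

```python
import numpy as np

def depth_perception(depth):
    """Total distance travelled along the viewing (depth) axis."""
    return np.abs(np.diff(depth)).sum()

def motion_smoothness(positions, dt):
    """Integrated squared jerk, a common lower-is-smoother smoothness index.

    positions: (N, D) instrument positions sampled at interval dt (seconds).
    Normalization conventions vary between papers; none is applied here.
    """
    vel = np.gradient(positions, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    jerk = np.gradient(acc, dt, axis=0)
    return np.trapz(np.sum(jerk**2, axis=1), dx=dt)

# usage: a smooth path versus the same path with simulated tremor
t = np.arange(0, 5, 0.01)
smooth = np.column_stack([np.sin(t), np.cos(t), 0.1 * t])
shaky = smooth + np.random.normal(0, 0.002, smooth.shape)
print(motion_smoothness(smooth, 0.01), motion_smoothness(shaky, 0.01))
```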
Collapse
Affiliation(s)
- Ellen Conroy
- Department of Surgery, Division of Otolaryngology-Head and Neck Surgery, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, U.S.A
| | | | | | | | | | | |
Collapse
|
42
|
Allan M, Thompson S, Clarkson MJ, Ourselin S, Hawkes DJ, Kelly J, Stoyanov D. 2D-3D Pose Tracking of Rigid Instruments in Minimally Invasive Surgery. LECTURE NOTES IN COMPUTER SCIENCE 2014. [DOI: 10.1007/978-3-319-07521-1_1] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/07/2023]
|
43
|
|
44
|
A systematic review on low-cost box models to achieve basic and advanced laparoscopic skills during modern surgical training. Surg Laparosc Endosc Percutan Tech 2013; 23:109-20. [PMID: 23579503 DOI: 10.1097/sle.0b013e3182827c29] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
INTRODUCTION Low-cost box models (BMs) are a valuable alternative to virtual-reality simulators. We aim to provide surgical trainees with a description of the most common BMs and to present their validity for achieving basic and advanced laparoscopic skills. MATERIALS AND METHODS A literature search was undertaken for all studies focusing on BMs; excluded were those presenting data on virtual-reality simulators only. Databases were screened up to December 2011. RESULTS Numerous studies focused on various BMs to improve generic tasks (ie, instrument navigation, coordination, and cutting). Fewer articles described models specific to particular operations. All studies showed a significant improvement of basic laparoscopic skills after training with BMs. Furthermore, their low costs make them easily available to most surgical trainees. CONCLUSIONS BMs should be developed by all surgical trainees during their training. Areas for future improvement include endoscopy and complex laparoscopic operations for which ad hoc BMs are not available.
Collapse
|
45
|
Ruda K, Beekman D, White LW, Lendvay TS, Kowalewski TM. SurgTrak — A Universal Platform for Quantitative Surgical Data Capture. J Med Device 2013. [DOI: 10.1115/1.4024525] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Affiliation(s)
| | - Darrin Beekman
- Mechanical Engineering Department, University of Minnesota
| | - Lee W. White
- Department of Bioengineering, University of Washington
| | - Thomas S. Lendvay
- University of Washington, Department of Urologic Surgery, Seattle Children's Hospital
| | | |
Collapse
|
46
|
Kranzfelder M, Schneider A, Fiolka A, Schwan E, Gillen S, Wilhelm D, Schirren R, Reiser S, Jensen B, Feussner H. Real-time instrument detection in minimally invasive surgery using radiofrequency identification technology. J Surg Res 2013; 185:704-10. [PMID: 23859134 DOI: 10.1016/j.jss.2013.06.022] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2013] [Revised: 04/17/2013] [Accepted: 06/07/2013] [Indexed: 11/18/2022]
Abstract
BACKGROUND A key part of surgical workflow recording is recognition of the instrument in use. We present a radiofrequency identification (RFID)-based approach for real-time tracking of laparoscopic instruments. METHODS The system consists of RFID-tagged instruments and an antenna unit positioned on the Mayo stand. For reliability analysis, RFID tracking data were compared with the assessment of the perioperative video data of instrument changes (the reference standard for instrument application detection) in 10 laparoscopic cholecystectomies. When the tagged instrument was on the Mayo stand, it was referred to as "not in use." Once it was handed to the surgeon, it was considered to be "in use." Temporal miscounts (incorrect number of instruments "in use") were analyzed. The surgeons and scrub nurses completed a questionnaire after each operation for individual system evaluation. RESULTS A total of 110 distinct instrument applications ("in use" versus "not in use") were eligible for analysis. No RFID tag failure occurred. The RFID detection rates were consistent with the period of effective instrument application. The delay in instrument detection was 4.2 ± 1.7 s. The highest percentage of temporal miscounts occurred during phases with continuous application of coagulation current. Surgeons generally rated the system better than the scrub nurses (P = 0.54). CONCLUSIONS The feasibility of RFID-based real-time instrument detection was successfully proved in our study, with reliable detection results during laparoscopic cholecystectomy. Thus, RFID technology has the potential to be a valuable additional tool for surgical workflow recognition that could enable a situation dependent assistance of the surgeon in the future.
Collapse
Affiliation(s)
- Michael Kranzfelder
- Department of Surgery, Klinikum rechts der Isar, Technische Universität München, München, Germany; Research Group, Minimally invasive Interdisciplinary Therapeutical Intervention, Klinikum rechts der Isar, Technische Universität München, München, Germany.
| | | | | | | | | | | | | | | | | | | |
Collapse
|
47
|
Learning curve on the TrEndo laparoscopic simulator compared to an expert level. Surg Endosc 2013; 27:2934-9. [DOI: 10.1007/s00464-013-2859-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2012] [Accepted: 01/28/2013] [Indexed: 12/17/2022]
|
48
|
Loukas C, Lahanas V, Georgiou E. An integrated approach to endoscopic instrument tracking for augmented reality applications in surgical simulation training. Int J Med Robot 2013; 9:e34-51. [PMID: 23355307 DOI: 10.1002/rcs.1485] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2012] [Revised: 11/01/2012] [Accepted: 12/14/2012] [Indexed: 12/17/2022]
Abstract
BACKGROUND Despite the popular use of virtual and physical reality simulators in laparoscopic training, the educational potential of augmented reality (AR) has not received much attention. A major challenge is the robust tracking and three-dimensional (3D) pose estimation of the endoscopic instrument, which are essential for achieving interaction with the virtual world and for realistic rendering when the virtual scene is occluded by the instrument. In this paper we propose a method that addresses these issues, based solely on visual information obtained from the endoscopic camera. METHODS Two different tracking algorithms are combined for estimating the 3D pose of the surgical instrument with respect to the camera. The first tracker creates an adaptive model of a colour strip attached to the distal part of the tool (close to the tip). The second algorithm tracks the endoscopic shaft, using a combined Hough-Kalman approach. The 3D pose is estimated with perspective geometry, using appropriate measurements extracted by the two trackers. RESULTS The method has been validated on several complex image sequences for its tracking efficiency, pose estimation accuracy and applicability in AR-based training. Using a standard endoscopic camera, the absolute average error of the tip position was 2.5 mm for working distances commonly found in laparoscopic training. The average error of the instrument's angle with respect to the camera plane was approximately 2°. The results are also supplemented by video segments of laparoscopic training tasks performed in a physical and an AR environment. CONCLUSIONS The experiments yielded promising results regarding the potential of applying AR technologies for laparoscopic skills training, based on a computer vision framework. The issue of occlusion handling was adequately addressed. The estimated trajectory of the instruments may also be used for surgical gesture interpretation and assessment.
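A highly simplified sketch of the first tracker's idea: segment a coloured strip near the instrument tip in each endoscopic frame and return its centroid. Fixed HSV bounds stand in for the paper's adaptive colour model, and the Hough-Kalman shaft tracking and 3D pose recovery are not shown; all thresholds are assumptions.

```python
import cv2
import numpy as np

def track_colour_strip(frame_bgr, hsv_lo=(35, 80, 80), hsv_hi=(85, 255, 255)):
    """Locate a coloured strip near the instrument tip by HSV thresholding.

    The bounds above assume a green strip and are purely illustrative.
    Returns the strip centroid (x, y) in pixels, or None if nothing is found.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    # remove small speckle before computing the centroid
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    moments = cv2.moments(mask)
    if moments["m00"] == 0:
        return None
    return (moments["m10"] / moments["m00"], moments["m01"] / moments["m00"])
```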
Collapse
Affiliation(s)
- Constantinos Loukas
- Medical Physics Laboratory Simulation Centre, School of Medicine, University of Athens, Greece
| | | | | |
Collapse
|
49
|
Analysis of laparoscopic dissection skill by instrument tip force measurement. Surg Endosc 2013; 27:2193-200. [DOI: 10.1007/s00464-012-2739-9] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2012] [Accepted: 11/12/2012] [Indexed: 11/25/2022]
|
50
|
Müller SC, Strunk T, Alken P. [Quality and objectifiability of training and advanced training in urology]. Urologe A 2013; 51:1065-73. [PMID: 22782191 DOI: 10.1007/s00120-012-2934-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
Abstract
The attractiveness of studying medicine has not changed; however, we are facing a lack of trainees, especially in surgical subspecialties such as urology. Possible explanations are the 70% proportion of female students and changing views on work-life balance. A high burden of theory and unrealistic multiple-choice examinations support those who can learn, but there are no objective and reproducible criteria to recognize the competence of a good physician early in the career. This problem continues during residency, especially in surgical subspecialties. The different medical boards in Germany responsible for the training programs have no concepts to address this. Many attempts in other countries to objectively measure surgical skills have so far been ignored. If we do not want to lose our traditionally high competence in medicine, we should join those who attempt to improve teaching and to use methods for selecting suitable candidates for surgery as early and as objectively as possible.
Collapse
Affiliation(s)
- S C Müller
- Klinik und Poliklinik für Urologie und Kinderurologie, Universitätsklinikum Bonn, Sigmund-Freud-Straße 25, 53127 Bonn, Deutschland.
| | | | | |
Collapse
|