1.
Lavanchy JL, Zindel J, Kirtac K, Twick I, Hosgor E, Candinas D, Beldi G. Surgical skill assessment using machine learning algorithms. Br J Surg 2021. DOI: 10.1093/bjs/znab202.093
Abstract
Objective
Surgical skill correlates with clinical outcomes, so its assessment is of major importance for improving clinical outcomes and increasing patient safety. However, surgical skill assessment often lacks objectivity and reproducibility, and it is time-consuming and expensive. We therefore developed an automated surgical skill assessment using machine learning algorithms.
Methods
Surgical skills were assessed in videos of laparoscopic cholecystectomy using a three-step machine learning algorithm. First, a three-dimensional convolutional neural network was trained to localize and classify the instruments within the videos. Second, movement patterns of the instruments were tracked over time and extracted as features. Third, the movement patterns were correlated with human surgical skill ratings using a linear regression model to predict surgical skill ratings automatically. Machine ratings were compared against ratings by four board-certified surgeons on a score ranging from 1 (poor skills) to 5 (excellent skills).
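The third step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the feature names, data, and the midpoint binarisation are invented here purely to show how movement features could be mapped to 1-5 skill ratings with ordinary least squares.

```python
import numpy as np

# Hypothetical stand-ins for per-video instrument-movement features
# (e.g. path length, mean speed, idle time) and human skill ratings.
rng = np.random.default_rng(0)
n_videos, n_features = 200, 6
X = rng.normal(size=(n_videos, n_features))
true_w = rng.normal(size=n_features)
ratings = np.clip(3.0 + 0.3 * (X @ true_w)
                  + rng.normal(scale=0.2, size=n_videos), 1, 5)

# Fit a linear regression (with intercept) of ratings on movement features
A = np.hstack([X, np.ones((n_videos, 1))])
coef, *_ = np.linalg.lstsq(A, ratings, rcond=None)
pred = np.clip(A @ coef, 1, 5)  # keep predictions on the 1-5 scale

# Binarise at the scale midpoint to mimic a good-vs-poor comparison
agreement = float(np.mean((pred >= 3) == (ratings >= 3)))
```

On real data the features would come from the instrument trajectories produced by the detection network, and the targets from the averaged human ratings.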
Results
Human raters and machine learning algorithms assessed surgical skills in 242 videos. Inter-rater reliability among the human raters was excellent (79%, 95% CI 72-85%). Instrument detection showed an average precision of 78% and an average recall of 82%. The machine learning algorithms achieved 87% accuracy in distinguishing good from poor surgical skills when compared with the human raters.
Conclusion
Machine learning algorithms can be trained to distinguish good and poor surgical skills with high accuracy.
This work was published in Sci Rep 11, 5197 (2021). https://doi.org/10.1038/s41598-021-84295-6
Affiliation(s)
- J L Lavanchy
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
- J Zindel
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
- K Kirtac
- Caresyntax GmbH, Berlin, Germany
- I Twick
- Caresyntax GmbH, Berlin, Germany
- E Hosgor
- Caresyntax GmbH, Berlin, Germany
- D Candinas
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
- G Beldi
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
2.
Aspart F, Lindström Bolmgren J, Lavanchy JL, Beldi G, Woods MS, Padoy N, Hosgor E. Real-time clipper tip visibility detection using computer vision. Br J Surg 2021. DOI: 10.1093/bjs/znab202.096
Abstract
Objective
Laparoscopic cholecystectomy is one of the most common laparoscopic procedures. The critical phase of this intervention consists of dissecting the hepatocystic triangle and clipping the cystic artery and duct. Poor visibility of the clipper tips can result in unintentional clipping of neighboring tissues ("past-pointing") or improper enclosing of the artery or duct, leading to hemorrhage or bile leaks. To improve patient safety during this clipping phase, we propose real-time intraoperative feedback that alerts the surgeon when departing from safe behavior, i.e., losing visibility of the clipper tip. This is achieved using a deep learning model that classifies the clipper tip visibility in each frame.
Methods
We tailored a dataset to our application by selecting frames containing a clipper from 300 laparoscopic cholecystectomy videos. These 122k frames were annotated with binary labels: clipper tip visible/invisible. A frame was labelled as tip visible when the tips of both clipper jaws were visible. Frames in which the clipper tip was occluded (e.g. by tissue) and frames with poor image quality (e.g. bad contrast, blurriness, smoke) were labelled as tip invisible. Frames from 29 videos were set aside as a test set; the remaining frames were used for training and validation (80% and 20%, respectively).
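A key detail of such a split is that it is done at the video level, so no video contributes frames to both training and evaluation. The sketch below illustrates this with invented frame counts (the per-video numbers are placeholders, not the study's data):

```python
import random

random.seed(0)
# Invented placeholder data: each of 300 videos contributes 50-100 frames
frames = [(f"video_{v:03d}", i)
          for v in range(300)
          for i in range(random.randint(50, 100))]

# Split by video, never by individual frame
videos = sorted({v for v, _ in frames})
random.shuffle(videos)
test_videos = set(videos[:29])  # 29 videos held out as the test set

test = [f for f in frames if f[0] in test_videos]
rest = [f for f in frames if f[0] not in test_videos]
random.shuffle(rest)
cut = int(0.8 * len(rest))  # 80% training, 20% validation
train, val = rest[:cut], rest[cut:]
```

Splitting at the frame level instead would leak near-identical neighboring frames of the same video across sets and inflate the measured performance.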
Using a 5-fold cross-validation scheme, convolutional neural networks (ResNet50 architecture) were trained to classify the clipper tip visibility in each frame. Finally, the five networks trained during cross-validation were ensembled into a single model by averaging their predictions.
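The ensembling step amounts to averaging the per-frame probabilities of the five fold-wise models. A minimal sketch, with random numbers standing in for the outputs of the trained ResNet50 models:

```python
import numpy as np

# Stand-in predictions: one row of tip-visible probabilities per fold model
rng = np.random.default_rng(1)
n_folds, n_frames = 5, 8
fold_probs = rng.uniform(size=(n_folds, n_frames))

# Ensemble by averaging the five models' predictions per frame
ensemble_prob = fold_probs.mean(axis=0)
tip_visible = ensemble_prob >= 0.5  # final binary decision per frame
```

Averaging probabilities reduces the variance of any single fold's model; in deployment the threshold would be chosen to hit a target sensitivity rather than fixed at 0.5.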
Results
On the test set, the ensembled model achieved an AUROC of 0.906 and a specificity of 64.5% at 95% sensitivity. On a per-video basis, the median specificity across videos rose to 76.6% (at 95% sensitivity). In other words, the model would correctly detect 95% of the cases in which the clipper tip was not visible, and in the majority of interventions more than 7 out of 10 warnings would be justified.
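Reporting "specificity at 95% sensitivity" means sweeping the decision threshold and reading off the specificity at the operating point that still catches 95% of tip-invisible frames. A sketch of that computation on synthetic scores (the score distributions below are invented, not the model's):

```python
import numpy as np

# Invented classifier scores: higher score = more likely tip invisible
rng = np.random.default_rng(2)
scores_pos = rng.normal(1.5, 1.0, 500)  # tip-invisible frames (positives)
scores_neg = rng.normal(0.0, 1.0, 500)  # tip-visible frames (negatives)

# Sweep thresholds; keep the best specificity among those that
# still achieve at least 95% sensitivity on the positive class
best_spec = 0.0
for t in np.sort(np.concatenate([scores_pos, scores_neg])):
    sensitivity = np.mean(scores_pos >= t)
    if sensitivity >= 0.95:
        best_spec = max(best_spec, float(np.mean(scores_neg < t)))
```

Fixing sensitivity first reflects the safety priority: missed unsafe frames are costlier than spurious warnings, so specificity is whatever remains once 95% of unsafe frames are caught.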
Conclusion
We propose a novel safety feedback mechanism that warns of poor clipper visibility while the cystic duct or artery is being clipped. Besides being accurate, our solution runs in real time, a requirement for intraoperative use. We believe this feedback can raise surgeons' attentiveness when departing from safe visibility during this critical phase of laparoscopic cholecystectomy.
Affiliation(s)
- F Aspart
- Caresyntax GmbH, Berlin, Germany
- J L Lavanchy
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
- G Beldi
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
- N Padoy
- CNRS, IHU Strasbourg, ICube, University of Strasbourg, Strasbourg, France
- E Hosgor
- Caresyntax GmbH, Berlin, Germany