101
Kumazu Y, Kobayashi N, Kitamura N, Rayan E, Neculoiu P, Misumi T, Hojo Y, Nakamura T, Kumamoto T, Kurahashi Y, Ishida Y, Masuda M, Shinohara H. Automated segmentation by deep learning of loose connective tissue fibers to define safe dissection planes in robot-assisted gastrectomy. Sci Rep 2021; 11:21198. [PMID: 34707141; PMCID: PMC8551298; DOI: 10.1038/s41598-021-00557-3]
Abstract
The prediction of anatomical structures within the surgical field by artificial intelligence (AI) is expected to support surgeons’ experience and cognitive skills. We aimed to develop a deep-learning model to automatically segment loose connective tissue fibers (LCTFs) that define a safe dissection plane. The annotation was performed on video frames capturing a robot-assisted gastrectomy performed by trained surgeons. A deep-learning model based on U-net was developed to output segmentation results. Twenty randomly sampled frames were provided to evaluate model performance by comparing Recall and F1/Dice scores with a ground truth and with a two-item questionnaire on sensitivity and misrecognition that was completed by 20 surgeons. The model produced high Recall scores (mean 0.606, maximum 0.861). Mean F1/Dice scores reached 0.549 (range 0.335–0.691), showing acceptable spatial overlap of the objects. Surgeon evaluators gave a mean sensitivity score of 3.52 (with 88.0% assigning the highest score of 4; range 2.45–3.95). The mean misrecognition score was a low 0.14 (range 0–0.7), indicating very few acknowledged over-detection failures. Thus, AI can be trained to predict fine, difficult-to-discern anatomical structures at a level convincing to expert surgeons. This technology may help reduce adverse events by determining safe dissection planes.
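For readers unfamiliar with the overlap metrics quoted above, Recall and F1/Dice for binary segmentation masks can be computed directly from pixel counts. The following is a minimal illustrative sketch, not the authors' code; the toy masks are invented:

```python
import numpy as np

def recall_and_dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8):
    """Compute pixel-wise Recall and F1/Dice for binary masks (1 = fiber)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()      # correctly detected fiber pixels
    fn = np.logical_and(~pred, truth).sum()     # missed fiber pixels
    fp = np.logical_and(pred, ~truth).sum()     # over-detected pixels
    recall = tp / (tp + fn + eps)
    dice = 2 * tp / (2 * tp + fp + fn + eps)    # identical to F1 for binary masks
    return recall, dice

# Illustrative 2x2 example: three of four pixels agree.
pred = np.array([[1, 0], [1, 1]])
truth = np.array([[1, 0], [0, 1]])
print(recall_and_dice(pred, truth))  # recall = 1.0, dice = 0.8
```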
Affiliation(s)
- Yuta Kumazu
- Department of Surgery, Yokohama City University, Kanagawa, Japan; Anaut Inc., Tokyo, Japan
- Toshihiro Misumi
- Department of Biostatistics, Yokohama City University School of Medicine, Kanagawa, Japan
- Yudai Hojo
- Department of Gastroenterological Surgery, Hyogo College of Medicine, 1-1 Mukogawa-cho, Nishinomiya, Hyogo, 663-8501, Japan
- Tatsuro Nakamura
- Department of Gastroenterological Surgery, Hyogo College of Medicine, 1-1 Mukogawa-cho, Nishinomiya, Hyogo, 663-8501, Japan
- Tsutomu Kumamoto
- Department of Gastroenterological Surgery, Hyogo College of Medicine, 1-1 Mukogawa-cho, Nishinomiya, Hyogo, 663-8501, Japan
- Yasunori Kurahashi
- Department of Gastroenterological Surgery, Hyogo College of Medicine, 1-1 Mukogawa-cho, Nishinomiya, Hyogo, 663-8501, Japan
- Yoshinori Ishida
- Department of Gastroenterological Surgery, Hyogo College of Medicine, 1-1 Mukogawa-cho, Nishinomiya, Hyogo, 663-8501, Japan
- Munetaka Masuda
- Department of Surgery, Yokohama City University, Kanagawa, Japan
- Hisashi Shinohara
- Department of Gastroenterological Surgery, Hyogo College of Medicine, 1-1 Mukogawa-cho, Nishinomiya, Hyogo, 663-8501, Japan
102
Williams S, Layard Horsfall H, Funnell JP, Hanrahan JG, Khan DZ, Muirhead W, Stoyanov D, Marcus HJ. Artificial Intelligence in Brain Tumour Surgery: An Emerging Paradigm. Cancers (Basel) 2021; 13:5010. [PMID: 34638495; PMCID: PMC8508169; DOI: 10.3390/cancers13195010]
Abstract
Artificial intelligence (AI) platforms have the potential to cause a paradigm shift in brain tumour surgery. Brain tumour surgery augmented with AI can result in safer and more effective treatment. In this review article, we explore the current and future role of AI in patients undergoing brain tumour surgery, including aiding diagnosis, optimising the surgical plan, providing support during the operation, and better predicting the prognosis. Finally, we discuss barriers to successful clinical implementation and the associated ethical concerns, and we provide our perspective on how the field could be advanced.
Affiliation(s)
- Simon Williams
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
- Hugo Layard Horsfall
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
- Jonathan P. Funnell
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
- John G. Hanrahan
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
- Danyal Z. Khan
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
- William Muirhead
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
- Danail Stoyanov
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
- Hani J. Marcus
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
103
Birkhoff DC, van Dalen ASH, Schijven MP. A Review on the Current Applications of Artificial Intelligence in the Operating Room. Surg Innov 2021; 28:611-619. [PMID: 33625307; PMCID: PMC8450995; DOI: 10.1177/1553350621996961]
Abstract
Background. Artificial intelligence (AI) is an upcoming era in medicine and, more recently, in the operating room (OR). Existing literature elaborates mainly on the future possibilities and expectations for AI in surgery. The aim of this study is to systematically provide an overview of the AI applications currently used to support processes inside the OR. Methods. PubMed, Embase, Cochrane Library, and IEEE Xplore were searched using inclusion criteria for relevant articles up to August 25th, 2020. No study types were excluded beforehand. Articles describing current AI applications for surgical purposes inside the OR were reviewed. Results. Nine studies were included. An overview of the researched and described applications of AI in the OR is provided, including procedure duration prediction, gesture recognition, intraoperative cancer detection, intraoperative video analysis, workflow recognition, an endoscopic guidance system, knot-tying, and automatic registration and tracking of the bone in orthopedic surgery. These technologies are compared with their, often non-AI, baseline alternatives. Conclusions. Applications of AI in the OR described to date are limited. They may, however, have a promising future in improving surgical precision, reducing manpower, supporting intraoperative decision-making, and increasing surgical safety. Nonetheless, the application and implementation of AI inside the OR still have several challenges to overcome. Clear regulatory, organizational, and clinical conditions are imperative for AI to redeem its promise. Future research on the use of AI in the OR should therefore focus on clinical validation of AI applications, the legal and ethical considerations, and evaluation of the implementation trajectory.
Affiliation(s)
- David C. Birkhoff
- Department of Surgery, Amsterdam UMC, University of Amsterdam, The Netherlands
- Marlies P. Schijven
- Department of Surgery, Amsterdam Gastroenterology and Metabolism, University of Amsterdam, The Netherlands
- Li Ka Shing Knowledge Institute, St. Michael's Hospital, Toronto, Canada
104
Assurance of surgical quality within multicenter randomized controlled trials for bariatric and metabolic surgery: a systematic review. Surg Obes Relat Dis 2021; 18:124-132. [PMID: 34602346; DOI: 10.1016/j.soard.2021.08.020]
Abstract
BACKGROUND Surgical quality assurance methods aim to ensure standardization and high quality of surgical techniques within multicenter randomized controlled trials (RCTs), thereby diminishing the heterogeneity of surgery and reducing biases due to surgical variation. This study aimed to establish the measures undertaken to ensure surgical quality within multicenter RCTs investigating bariatric and metabolic surgery, and their influence upon clinical outcomes. METHODS An electronic literature search was performed in the Embase, Medline, and Web of Science databases to identify multicenter RCTs investigating bariatric and metabolic surgery. Each RCT was evaluated against a checklist of surgical quality measures within three domains: (1) standardization of surgical techniques; (2) credentialing of surgical experience; and (3) monitoring of performance. Outcome measures were postoperative weight change and complications. RESULTS Nineteen multicenter RCTs were included in the analysis. Three studies undertook pretrial education on the surgical standard. Fourteen studies described complete standardization of surgical techniques. Four studies credentialed surgeons by case volume prior to enrollment. Two studies used intraoperative or video evaluation of surgical technique prior to enrollment. Only two studies monitored performance during the study. Although quality assurance methods were used sparingly, their utilization was associated with reduced overall complications. Standardization of surgery was associated with reduced re-operation rates but did not influence postoperative weight loss. CONCLUSION The utilization of methods for surgical quality assurance is very limited within multicenter RCTs of bariatric and metabolic surgery. Future studies must implement surgical quality assurance methods to reduce variability of surgical performance and potential bias within RCTs.
105
Zhang B, Ghanem A, Simes A, Choi H, Yoo A. Surgical workflow recognition with 3DCNN for Sleeve Gastrectomy. Int J Comput Assist Radiol Surg 2021; 16:2029-2036. [PMID: 34415503; PMCID: PMC8589754; DOI: 10.1007/s11548-021-02473-3]
Abstract
PURPOSE Surgical workflow recognition is a crucial and challenging problem when building a computer-assisted surgery system. Current techniques focus on utilizing a convolutional neural network and a recurrent neural network (CNN-RNN) to solve the surgical workflow recognition problem. In this paper, we attempt to use a deep 3DCNN to solve this problem. METHODS In order to tackle the surgical workflow recognition problem and the imbalanced data problem, we implement a 3DCNN workflow referred to as I3D-FL-PKF. We utilize focal loss (FL) to train a 3DCNN architecture known as Inflated 3D ConvNet (I3D) for surgical workflow recognition. We use prior knowledge filtering (PKF) to filter the recognition results. RESULTS We evaluate our proposed workflow on a large sleeve gastrectomy surgical video dataset. We show that focal loss can help to address the imbalanced data problem. We show that our PKF can be used to generate smoothed prediction results and improve the overall accuracy. We show that the proposed workflow achieves 84.16% frame-level accuracy and reaches a weighted Jaccard score of 0.7327 which outperforms traditional CNN-RNN design. CONCLUSION The proposed workflow can obtain consistent and smooth predictions not only within the surgical phases but also for phase transitions. By utilizing focal loss and prior knowledge filtering, our implementation of deep 3DCNN has great potential to solve surgical workflow recognition problems for clinical practice.
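Focal loss, used here to counter the imbalanced distribution of surgical phases, down-weights easily classified frames so that training concentrates on hard ones. Below is a minimal PyTorch sketch of a multi-class focal loss, assuming gamma = 2 and no class weighting; the paper's exact configuration may differ:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor, gamma: float = 2.0):
    """Multi-class focal loss: FL(p_t) = -(1 - p_t)^gamma * log(p_t).

    logits: (N, num_classes) raw scores; targets: (N,) class indices.
    """
    log_probs = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(log_probs, targets, reduction="none")  # -log(p_t) per sample
    p_t = torch.exp(-ce)                                   # confidence in the true class
    return ((1.0 - p_t) ** gamma * ce).mean()              # down-weight easy samples

# Illustrative usage with random logits for an 8-phase problem.
logits = torch.randn(4, 8)
targets = torch.tensor([0, 3, 7, 3])
print(focal_loss(logits, targets))
```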
Affiliation(s)
- Bokai Zhang
- C-SATS, Inc., Johnson & Johnson, 1100 Olive Way, Suite 1100, Seattle, WA, 98101, USA
- Amer Ghanem
- C-SATS, Inc., Johnson & Johnson, 1100 Olive Way, Suite 1100, Seattle, WA, 98101, USA
- Alexander Simes
- C-SATS, Inc., Johnson & Johnson, 1100 Olive Way, Suite 1100, Seattle, WA, 98101, USA
- Henry Choi
- C-SATS, Inc., Johnson & Johnson, 1100 Olive Way, Suite 1100, Seattle, WA, 98101, USA
- Andrew Yoo
- C-SATS, Inc., Johnson & Johnson, 1100 Olive Way, Suite 1100, Seattle, WA, 98101, USA
106
Gumbs AA, Frigerio I, Spolverato G, Croner R, Illanes A, Chouillard E, Elyan E. Artificial Intelligence Surgery: How Do We Get to Autonomous Actions in Surgery? Sensors (Basel) 2021; 21:5526. [PMID: 34450976; PMCID: PMC8400539; DOI: 10.3390/s21165526]
Abstract
Most surgeons are skeptical as to the feasibility of autonomous actions in surgery. Interestingly, many examples of autonomous actions already exist and have been around for years. Since the beginning of this millennium, the field of artificial intelligence (AI) has grown exponentially with the development of machine learning (ML), deep learning (DL), computer vision (CV) and natural language processing (NLP). All of these facets of AI will be fundamental to the development of more autonomous actions in surgery; unfortunately, only a limited number of surgeons have or seek expertise in this rapidly evolving field. As opposed to AI in medicine, AI surgery (AIS) involves autonomous movements. Fortuitously, as the field of robotics in surgery has improved, more surgeons are becoming interested in technology and the potential of autonomous actions in procedures such as interventional radiology, endoscopy and surgery. The lack of haptics, or the sensation of touch, has hindered the wider adoption of robotics by many surgeons; however, now that the true potential of robotics can be comprehended, the embracing of AI by the surgical community is more important than ever before. Although current complete surgical systems are mainly only examples of tele-manipulation, for surgeons to get to more autonomously functioning robots, haptics is perhaps not the most important aspect. If the goal is for robots to ultimately become more and more independent, perhaps research should not focus on the concept of haptics as it is perceived by humans; instead, the focus should be on haptics as it is perceived by robots/computers. This article will discuss aspects of ML, DL, CV and NLP as they pertain to the modern practice of surgery, with a focus on current AI issues and advances that will enable us to get to more autonomous actions in surgery. Ultimately, a paradigm shift may need to occur in the surgical community, as more surgeons with expertise in AI may be needed to fully unlock the potential of AIS in a safe, efficacious and timely manner.
Affiliation(s)
- Andrew A. Gumbs
- Centre Hospitalier Intercommunal de Poissy/Saint-Germain-en-Laye, 10 Rue Champ de Gaillard, 78300 Poissy, France
- Isabella Frigerio
- Department of Hepato-Pancreato-Biliary Surgery, Pederzoli Hospital, 37019 Peschiera del Garda, Italy
- Gaya Spolverato
- Department of Surgical, Oncological and Gastroenterological Sciences, University of Padova, 35122 Padova, Italy
- Roland Croner
- Department of General-, Visceral-, Vascular- and Transplantation Surgery, University of Magdeburg, Haus 60a, Leipziger Str. 44, 39120 Magdeburg, Germany
- Alfredo Illanes
- INKA–Innovation Laboratory for Image Guided Therapy, Medical Faculty, Otto-von-Guericke University Magdeburg, 39120 Magdeburg, Germany
- Elie Chouillard
- Centre Hospitalier Intercommunal de Poissy/Saint-Germain-en-Laye, 10 Rue Champ de Gaillard, 78300 Poissy, France
- Eyad Elyan
- School of Computing, Robert Gordon University, Aberdeen AB10 7JG, UK
107
Can Deep Learning Algorithms Help Identify Surgical Workflow and Techniques? J Surg Res 2021; 268:318-325. [PMID: 34399354; DOI: 10.1016/j.jss.2021.07.003]
Abstract
BACKGROUND Surgical videos are now being used for performance review and educational purposes; however, broad use is still limited due to time constraints. To make video review more efficient, we implemented Artificial Intelligence (AI) algorithms to detect surgical workflow and technical approaches. METHODS Participants (N = 200) performed a simulated open bowel repair. The operation included two major phases: (1) Injury Identification and (2) Suture Repair. Accordingly, a phase detection algorithm (MobileNetV2+GRU) was implemented to automatically detect the two phases using video data. In addition, participants were noted to use three different technical approaches when running the bowel: (1) use of both hands, (2) use of one hand and one tool, or (3) use of two tools. To discern the three technical approaches, an object detection (YOLOv3) algorithm was implemented to recognize objects that were commonly used during the Injury Identification phase (hands versus tools). RESULTS The phase detection algorithm achieved high precision (recall) when segmenting the two phases: Injury Identification (86 ± 9% [81 ± 12%]) and Suture Repair (81 ± 6% [81 ± 16%]). When evaluating the three technical approaches in running the bowel, the object detection algorithm achieved high average precisions (Hands [99.32%] and Tools [94.47%]). The three technical approaches showed no difference in execution time (Kruskal-Wallis Test: P = 0.062) or injury identification (not missing an injury) (Chi-squared: P = 0.998). CONCLUSIONS The AI algorithms showed high precision when segmenting surgical workflow and identifying technical approaches. Automation of these techniques for surgical video databases has great potential to facilitate efficient performance review.
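The phase detection design named above (a MobileNetV2 frame encoder followed by a GRU over time) can be sketched compactly in PyTorch. The hidden size, clip length, and two-phase head below are illustrative assumptions, not the authors' exact architecture:

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class PhaseDetector(nn.Module):
    """Per-frame features from MobileNetV2, temporal context from a GRU."""

    def __init__(self, num_phases: int = 2, hidden: int = 128):
        super().__init__()
        backbone = mobilenet_v2(weights=None)        # randomly initialized here
        self.encoder = backbone.features             # outputs 1280-channel feature maps
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.gru = nn.GRU(1280, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_phases)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (B, T, 3, H, W) -> per-frame phase logits (B, T, num_phases)
        b, t = clips.shape[:2]
        feats = self.pool(self.encoder(clips.flatten(0, 1))).flatten(1)
        feats = feats.view(b, t, -1)
        out, _ = self.gru(feats)
        return self.head(out)

model = PhaseDetector()
logits = model(torch.randn(1, 8, 3, 224, 224))  # one clip of 8 frames
print(logits.shape)  # torch.Size([1, 8, 2])
```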
108
Comment on "Situating Artificial Intelligence in Surgery: A Focus on Disease Severity". Ann Surg 2021; 274:e924-e925. [PMID: 34353997; DOI: 10.1097/sla.0000000000005143]
109
Kitaguchi D, Takeshita N, Matsuzaki H, Igaki T, Hasegawa H, Ito M. Development and Validation of a 3-Dimensional Convolutional Neural Network for Automatic Surgical Skill Assessment Based on Spatiotemporal Video Analysis. JAMA Netw Open 2021; 4:e2120786. [PMID: 34387676; PMCID: PMC8363914; DOI: 10.1001/jamanetworkopen.2021.20786]
Abstract
IMPORTANCE A high level of surgical skill is essential to prevent intraoperative problems. One important aspect of surgical education is surgical skill assessment, with pertinent feedback facilitating efficient skill acquisition by novices. OBJECTIVES To develop a 3-dimensional (3-D) convolutional neural network (CNN) model for automatic surgical skill assessment and to evaluate the performance of the model in classification tasks by using laparoscopic colorectal surgical videos. DESIGN, SETTING, AND PARTICIPANTS This prognostic study used surgical videos acquired prior to 2017. In total, 650 laparoscopic colorectal surgical videos were provided for study purposes by the Japan Society for Endoscopic Surgery, and 74 were randomly extracted. Every video had highly reliable scores based on the Endoscopic Surgical Skill Qualification System (ESSQS, range 1-100, with higher scores indicating greater surgical skill) established by the society. Data were analyzed June to December 2020. MAIN OUTCOMES AND MEASURES From the groups with scores less than the mean minus 2 SDs, within 1 SD of the mean, and greater than the mean plus 2 SDs, 17, 26, and 31 videos, respectively, were randomly extracted. In total, 1480 video clips with a length of 40 seconds each were extracted for each surgical step (medial mobilization, lateral mobilization, inferior mesenteric artery transection, and mesorectal transection) and separated into 1184 training sets and 296 test sets. Automatic surgical skill classification was performed based on spatiotemporal video analysis using the fully automated 3-D CNN model, and classification accuracies and screening accuracies for the groups with scores less than the mean minus 2 SDs and greater than the mean plus 2 SDs were calculated. RESULTS The mean (SD) ESSQS score of all 650 intraoperative videos was 66.2 (8.6) points and, for the 74 videos used in the study, 67.6 (16.1) points. The proposed 3-D CNN model automatically classified video clips into groups with scores less than the mean minus 2 SDs, within 1 SD of the mean, and greater than the mean plus 2 SDs with a mean (SD) accuracy of 75.0% (6.3%). The highest accuracy was 83.8% for the inferior mesenteric artery transection. The model also screened for the group with scores less than the mean minus 2 SDs with 94.1% sensitivity and 96.5% specificity and for the group with scores greater than the mean plus 2 SDs with 87.1% sensitivity and 86.0% specificity. CONCLUSIONS AND RELEVANCE The results of this prognostic study showed that the proposed 3-D CNN model classified laparoscopic colorectal surgical videos with sufficient accuracy to be used for screening groups with scores greater than the mean plus 2 SDs and less than the mean minus 2 SDs. The proposed approach was fully automatic and easy to use for various types of surgery, and no special annotations or kinetics data extraction were required, indicating that this approach warrants further development for application to automatic surgical skill assessment.
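The three skill classes in this study are bands of the ESSQS score distribution. A small sketch of that banding logic, using the mean and SD reported in the abstract and invented scores, is shown below; note that scores between 1 and 2 SDs from the mean fall outside all three bands:

```python
import numpy as np

def band(score: float, mean: float, sd: float) -> str:
    """Assign an ESSQS score to the bands described in the abstract."""
    if score < mean - 2 * sd:
        return "low (< mean - 2 SD)"
    if abs(score - mean) <= sd:
        return "middle (within mean +/- 1 SD)"
    if score > mean + 2 * sd:
        return "high (> mean + 2 SD)"
    return "unused (between 1 and 2 SD from the mean)"

scores = np.array([40.0, 66.0, 90.0, 75.0])  # made-up ESSQS scores
mean, sd = 66.2, 8.6                         # distribution reported in the abstract
for s in scores:
    print(s, "->", band(s, mean, sd))
```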
Affiliation(s)
- Daichi Kitaguchi
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Nobuyoshi Takeshita
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Hiroki Matsuzaki
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Takahiro Igaki
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Hiro Hasegawa
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Masaaki Ito
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
110
Mohamadipanah H, Wise B, Witt A, Goll C, Yang S, Perumalla C, Huemer K, Kearse L, Pugh C. Performance assessment using sensor technology. J Surg Oncol 2021; 124:200-215. [PMID: 34245582; PMCID: PMC8855881; DOI: 10.1002/jso.26519]
Abstract
Over the past 30 years, there have been numerous, noteworthy successes in the development, validation, and implementation of clinical skills assessments. Despite this progress, the medical profession has barely scratched the surface of developing assessments that capture the true complexity of hands-on skills in procedural medicine. This paper highlights the development, implementation, and new discoveries in performance metrics when using sensor technology to assess cognitive and technical aspects of hands-on skills.
Affiliation(s)
- Hossein Mohamadipanah
- Department of Surgery, Stanford University School of Medicine, Stanford, California, USA
- Brett Wise
- Department of Surgery, Stanford University School of Medicine, Stanford, California, USA
- Anna Witt
- Department of Surgery, Stanford University School of Medicine, Stanford, California, USA
- Cassidi Goll
- Department of Surgery, Stanford University School of Medicine, Stanford, California, USA
- Su Yang
- Department of Surgery, Stanford University School of Medicine, Stanford, California, USA
- Calvin Perumalla
- Department of Surgery, Stanford University School of Medicine, Stanford, California, USA
- Kayla Huemer
- Department of Surgery, Stanford University School of Medicine, Stanford, California, USA
- LaDonna Kearse
- Department of Surgery, Stanford University School of Medicine, Stanford, California, USA
- Carla Pugh
- Department of Surgery, Stanford University School of Medicine, Stanford, California, USA
111
Development of an International Standardized Curriculum for Laparoscopic Sleeve Gastrectomy Teaching Utilizing Modified Delphi Methodology. Obes Surg 2021; 31:4257-4263. [PMID: 34296371; DOI: 10.1007/s11695-021-05572-x]
Abstract
BACKGROUND Laparoscopic sleeve gastrectomy has increased markedly in frequency to become the single most performed bariatric surgical procedure globally. To date, a means of standardized trainee teaching has not been developed. The aim of this study was to design a laparoscopic sleeve gastrectomy curriculum for trainees of bariatric surgery utilizing modified Delphi consensus methodology. METHODS A panel of surgeons was assembled to devise an academic framework of the technical, non-technical, and cognitive skills utilized in the performance of laparoscopic sleeve gastrectomy. The panel invited 18 bariatric surgeons experienced in laparoscopic gastrectomy from 11 countries to rate the items for inclusion in the curriculum to a predefined level of agreement. RESULTS A consensus of experts was achieved for 24 of the 30 proposed elements within the first round of the curriculum Delphi panel. All components pertaining to anatomical knowledge, peri-operative considerations, and non-technical items were accepted. A second round further examined six statements, of which three were accepted. Agreement of the panel was reached for 27 of the cognitive, technical, and non-technical components after two rounds. Three statements did not reach consensus. CONCLUSIONS Utilizing modified Delphi methodology, a curriculum outlining the most important components of teaching laparoscopic sleeve gastrectomy has been determined by a consensus of international experts in bariatric surgery. The curriculum is suggested as a standard in proficiency-based training of this procedure. It forms a generic template that allows individual jurisdictions to perform content validation and adapt the curriculum to local requirements in teaching the next generation of bariatric surgeons.
112
Ward TM, Mascagni P, Madani A, Padoy N, Perretta S, Hashimoto DA. Surgical data science and artificial intelligence for surgical education. J Surg Oncol 2021; 124:221-230. [PMID: 34245578; DOI: 10.1002/jso.26496]
Abstract
Surgical data science (SDS) aims to improve the quality of interventional healthcare and its value through the capture, organization, analysis, and modeling of procedural data. As data capture has increased and artificial intelligence (AI) has advanced, SDS can help to unlock augmented and automated coaching, feedback, assessment, and decision support in surgery. We review major concepts in SDS and AI as applied to surgical education and surgical oncology.
Affiliation(s)
- Thomas M Ward
- Department of Surgery, Surgical AI & Innovation Laboratory, Massachusetts General Hospital, Boston, Massachusetts
- Pietro Mascagni
- ICube, University of Strasbourg, CNRS, France; Fondazione Policlinico A. Gemelli IRCCS, Rome, Italy; IHU Strasbourg, Strasbourg, France
- Amin Madani
- Department of Surgery, University Health Network, Toronto, Canada
- Nicolas Padoy
- ICube, University of Strasbourg, CNRS, France; IHU Strasbourg, Strasbourg, France
- Daniel A Hashimoto
- Department of Surgery, Surgical AI & Innovation Laboratory, Massachusetts General Hospital, Boston, Massachusetts
113
A Scoping Review of Artificial Intelligence and Machine Learning in Bariatric and Metabolic Surgery: Current Status and Future Perspectives. Obes Surg 2021; 31:4555-4563. [PMID: 34264433; DOI: 10.1007/s11695-021-05548-x]
Abstract
Artificial intelligence (AI) is a revolution in data analysis with emerging roles in various specialties and with various applications. The objective of this scoping review was to retrieve current literature on the fields of AI that have been applied to metabolic bariatric surgery (MBS) and to investigate potential applications of AI as a decision-making tool for the bariatric surgeon. The initial search yielded 3260 studies published from January 2000 until March 2021. After screening, 49 unique articles were included in the final analysis. Studies were grouped into categories, and the frequency of appearing algorithms, dataset types, and metrics was documented. The heterogeneity of current studies showed that meticulous validation, strict reporting systems, and reliable benchmarking are mandatory for ensuring the clinical validity of future research.
114
Cheng K, You J, Wu S, Chen Z, Zhou Z, Guan J, Peng B, Wang X. Artificial intelligence-based automated laparoscopic cholecystectomy surgical phase recognition and analysis. Surg Endosc 2021; 36:3160-3168. [PMID: 34231066; DOI: 10.1007/s00464-021-08619-3]
Abstract
BACKGROUND Artificial intelligence and computer vision have revolutionized laparoscopic surgical video analysis. However, no multi-center study has focused on deep learning-based recognition of laparoscopic cholecystectomy phases. This work aims to apply artificial intelligence to recognizing and analyzing phases in laparoscopic cholecystectomy videos from multiple centers. METHODS This observational cohort study included 163 laparoscopic cholecystectomy videos collected from four medical centers. Videos were labeled by surgeons, and a deep-learning model was developed based on 90 videos. Thereafter, the performance of the model was tested in ten additional videos by comparing it with the annotated ground truth of the surgeon. Deep-learning models were trained to identify laparoscopic cholecystectomy phases. The performance of the models was measured using precision, recall, F1 score, and overall accuracy. Given the high overall accuracy of the model, an additional 63 videos were analyzed by the model as an analysis set to identify the different phases. RESULTS The mean concordance correlation coefficient for annotations of the surgeons across all operative phases was 92.38%. The overall phase recognition accuracy of laparoscopic cholecystectomy by the model was 91.05%. In the analysis set, the average surgery time was 2195 ± 896 s, with large individual variance across the different surgical phases. Notably, laparoscopic cholecystectomy in acute cholecystitis cases had prolonged overall durations, and surgeons spent more time in the phase of mobilizing the hepatocystic triangle. CONCLUSION A deep-learning model based on multi-center data can identify the phases of laparoscopic cholecystectomy with a high degree of accuracy. With continued refinement, artificial intelligence could be utilized in large-scale surgical data analysis to achieve clinically relevant future applications.
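The inter-annotator agreement quoted above is a concordance correlation coefficient, which penalizes both poor correlation and systematic offset between two raters. A minimal NumPy sketch with invented per-phase durations, not the study's data:

```python
import numpy as np

def concordance_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between two raters."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Toy example: two raters' per-phase durations (seconds) for one video.
rater_a = np.array([120.0, 340.0, 95.0, 610.0])
rater_b = np.array([118.0, 355.0, 90.0, 600.0])
print(concordance_ccc(rater_a, rater_b))  # close to 1.0 for near-identical ratings
```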
Affiliation(s)
- Ke Cheng
- West China School of Medicine, West China Hospital of Sichuan University, Chengdu, China; Department of Pancreatic Surgery, West China Hospital, Sichuan University, No. 37, Guoxue Alley, Chengdu, 610041, Sichuan Province, China
- Jiaying You
- West China School of Medicine, West China Hospital of Sichuan University, Chengdu, China; Department of Pancreatic Surgery, West China Hospital, Sichuan University, No. 37, Guoxue Alley, Chengdu, 610041, Sichuan Province, China
- Shangdi Wu
- West China School of Medicine, West China Hospital of Sichuan University, Chengdu, China; Department of Pancreatic Surgery, West China Hospital, Sichuan University, No. 37, Guoxue Alley, Chengdu, 610041, Sichuan Province, China
- Zixin Chen
- West China School of Medicine, West China Hospital of Sichuan University, Chengdu, China; Department of Pancreatic Surgery, West China Hospital, Sichuan University, No. 37, Guoxue Alley, Chengdu, 610041, Sichuan Province, China
- Zijian Zhou
- West China School of Medicine, West China Hospital of Sichuan University, Chengdu, China; Department of Pancreatic Surgery, West China Hospital, Sichuan University, No. 37, Guoxue Alley, Chengdu, 610041, Sichuan Province, China
- Jingye Guan
- ChengDu Withai Innovations Technology Company, Chengdu, China
- Bing Peng
- West China School of Medicine, West China Hospital of Sichuan University, Chengdu, China; Department of Pancreatic Surgery, West China Hospital, Sichuan University, No. 37, Guoxue Alley, Chengdu, 610041, Sichuan Province, China
- Xin Wang
- West China School of Medicine, West China Hospital of Sichuan University, Chengdu, China; Department of Pancreatic Surgery, West China Hospital, Sichuan University, No. 37, Guoxue Alley, Chengdu, 610041, Sichuan Province, China
115
Meireles OR, Rosman G, Altieri MS, Carin L, Hager G, Madani A, Padoy N, Pugh CM, Sylla P, Ward TM, Hashimoto DA. SAGES consensus recommendations on an annotation framework for surgical video. Surg Endosc 2021; 35:4918-4929. [PMID: 34231065; DOI: 10.1007/s00464-021-08578-9]
Abstract
BACKGROUND The growing interest in analysis of surgical video through machine learning has led to increased research efforts; however, common methods of annotating video data are lacking. There is a need to establish recommendations on the annotation of surgical video data to enable assessment of algorithms and multi-institutional collaboration. METHODS Four working groups were formed from a pool of participants that included clinicians, engineers, and data scientists. The working groups were focused on four themes: (1) temporal models, (2) actions and tasks, (3) tissue characteristics and general anatomy, and (4) software and data structure. A modified Delphi process was utilized to create a consensus survey based on suggested recommendations from each of the working groups. RESULTS After three Delphi rounds, consensus was reached on recommendations for annotation within each of these domains. A hierarchy for annotation of temporal events in surgery was established. CONCLUSIONS While additional work remains to achieve accepted standards for video annotation in surgery, the consensus recommendations on a general framework for annotation presented here lay the foundation for standardization. This type of framework is critical to enabling diverse datasets, performance benchmarks, and collaboration.
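A hierarchical temporal annotation of the kind recommended by the consensus is naturally represented as nested labeled intervals. The schema below is a hypothetical illustration only; the SAGES recommendations define the actual hierarchy and terminology:

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One labeled temporal interval in a surgical video (times in seconds)."""
    label: str
    start: float
    end: float
    children: list["Segment"] = field(default_factory=list)  # finer-grained events

# Hypothetical annotation: a phase containing a step containing an action.
annotation = Segment(
    label="dissection phase", start=300.0, end=900.0,
    children=[Segment(
        label="expose hepatocystic triangle", start=320.0, end=540.0,
        children=[Segment(label="grasp gallbladder infundibulum", start=322.0, end=330.0)],
    )],
)
print(annotation.children[0].children[0].label)
```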
Affiliation(s)
- Ozanan R Meireles
- Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC460, Boston, MA, 02114, USA
- Guy Rosman
- Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC460, Boston, MA, 02114, USA
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, USA
- Maria S Altieri
- Department of Surgery, East Carolina University, Greenville, USA
- Lawrence Carin
- Department of Electrical and Computer Engineering, Duke University, Durham, USA
- Gregory Hager
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA
- Amin Madani
- Department of Surgery, University Health Network, Toronto, Canada
- Nicolas Padoy
- ICube, University of Strasbourg, Strasbourg, France
- IHU Strasbourg, Strasbourg, France
- Carla M Pugh
- Department of Surgery, Stanford University, Stanford, USA
- Patricia Sylla
- Department of Surgery, Mount Sinai Medical Center, New York, USA
- Thomas M Ward
- Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC460, Boston, MA, 02114, USA
- Daniel A Hashimoto
- Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC460, Boston, MA, 02114, USA
116
Mascagni P, Alapatt D, Urade T, Vardazaryan A, Mutter D, Marescaux J, Costamagna G, Dallemagne B, Padoy N. A Computer Vision Platform to Automatically Locate Critical Events in Surgical Videos: Documenting Safety in Laparoscopic Cholecystectomy. Ann Surg 2021; 274:e93-e95. [PMID: 33417329; DOI: 10.1097/sla.0000000000004736]
Abstract
OBJECTIVE The aim of this study was to develop a computer vision platform to automatically locate critical events in surgical videos and provide short video clips documenting the critical view of safety (CVS) in laparoscopic cholecystectomy (LC). BACKGROUND Intraoperative events are typically documented through operator-dictated reports that do not always translate the operative reality. Surgical videos provide complete information on surgical procedures, but the burden associated with storing and manually analyzing full-length videos has so far limited their effective use. METHODS A computer vision platform named EndoDigest was developed and used to analyze LC videos. The mean absolute error (MAE) of the platform in automatically locating the manually annotated time of the cystic duct division in full-length videos was assessed. The relevance of the automatically extracted short video clips was evaluated by calculating the percentage of video clips in which the CVS was assessable by surgeons. RESULTS A total of 155 LC videos were analyzed: 55 of these videos were used to develop EndoDigest, whereas the remaining 100 were used to test it. The time of the cystic duct division was automatically located with an MAE of 62.8 ± 130.4 seconds (1.95% of full-length video duration). CVS was assessable in 91% of the 2.5-minute-long video clips automatically extracted from the considered test procedures. CONCLUSIONS Deep learning models for workflow analysis can be used to reliably locate critical events in surgical videos and document CVS in LC. Further studies are needed to assess the clinical impact of surgical data science solutions for safer laparoscopic cholecystectomy.
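The localization error reported for EndoDigest is a mean absolute error between predicted and annotated event times, also expressed relative to video length. A small sketch with invented timestamps, not the study's data:

```python
import numpy as np

def mae_seconds(pred: np.ndarray, truth: np.ndarray) -> float:
    """Mean absolute error between predicted and annotated event times."""
    return float(np.abs(pred - truth).mean())

# Invented cystic-duct-division timestamps (seconds) for three test videos.
predicted = np.array([1810.0, 2400.0, 1555.0])
annotated = np.array([1790.0, 2460.0, 1540.0])
durations = np.array([3600.0, 4100.0, 3300.0])

mae = mae_seconds(predicted, annotated)
print(f"MAE: {mae:.1f} s")
print(f"As % of mean duration: {100 * mae / durations.mean():.2f}%")
```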
Affiliation(s)
- Pietro Mascagni
- ICube, University of Strasbourg, CNRS, IHU Strasbourg, France
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Deepak Alapatt
- ICube, University of Strasbourg, CNRS, IHU Strasbourg, France
- Takeshi Urade
- IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France
- Didier Mutter
- IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France
- Institute for Research against Digestive Cancer (IRCAD), Strasbourg, France
- Department of Digestive and Endocrine Surgery, University of Strasbourg, Strasbourg, France
- Jacques Marescaux
- Institute for Research against Digestive Cancer (IRCAD), Strasbourg, France
- Guido Costamagna
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Bernard Dallemagne
- Institute for Research against Digestive Cancer (IRCAD), Strasbourg, France
- Department of Digestive and Endocrine Surgery, University of Strasbourg, Strasbourg, France
- Nicolas Padoy
- ICube, University of Strasbourg, CNRS, IHU Strasbourg, France
117
Hardy NP, Dalli J, Mac Aonghusa P, Neary PM, Cahill RA. Biophysics inspired artificial intelligence for colorectal cancer characterization. Artif Intell Gastroenterol 2021; 2:77-84. [DOI: 10.35712/aig.v2.i3.77]
Abstract
Over the last ten years, artificial intelligence (AI) methods have begun to pervade even the most common everyday tasks, such as email filtering and mobile banking. While the necessary quality and safety standards may have understandably slowed the introduction of AI to healthcare when compared with other industries, we are now beginning to see AI methods becoming available to the clinician in select settings. In this paper we discuss current AI methods as they pertain to gastrointestinal procedures, including both gastroenterology and gastrointestinal surgery. The current state of the art for polyp detection in gastroenterology is explored, with a particular focus on deep learning, its strengths, and some of the factors that may limit its application to the field of surgery. The use of biophysics (utilizing physics to study and explain biological phenomena) in combination with more traditional machine learning is also discussed and proposed as an alternative approach that may solve some of the challenges associated with deep learning. Past and present uses of biophysics-inspired AI methods, such as the use of fluorescence-guided surgery to aid in the characterization of colorectal lesions, are used to illustrate the role biophysics-inspired AI can play in the exciting future of the gastrointestinal proceduralist.
Affiliation(s)
- Niall P Hardy
- UCD Centre for Precision Surgery, Dublin 7 D07 Y9AW, Ireland
- Jeffrey Dalli
- UCD Centre for Precision Surgery, Dublin 7 D07 Y9AW, Ireland
- Peter M Neary
- Department of Surgery, University Hospital Waterford, University College Cork, Waterford X91 ER8E, Ireland
- Ronan A Cahill
- UCD Centre for Precision Surgery, Dublin 7 D07 Y9AW, Ireland
- Department of Surgery, Mater Misericordiae University Hospital (MMUH), Dublin 7, Ireland
118
Bamba Y, Ogawa S, Itabashi M, Shindo H, Kameoka S, Okamoto T, Yamamoto M. Object and anatomical feature recognition in surgical video images based on a convolutional neural network. Int J Comput Assist Radiol Surg 2021; 16:2045-2054. [PMID: 34169465; PMCID: PMC8224261; DOI: 10.1007/s11548-021-02434-w]
Abstract
Purpose. Artificial intelligence-enabled techniques can process large amounts of surgical data and may be utilized for clinical decision support to recognize or forecast adverse events in an actual intraoperative scenario. To develop an image-guided navigation technology that will help in surgical education, we explored the performance of a convolutional neural network (CNN)-based computer vision system in detecting intraoperative objects. Methods. The surgical videos used for annotation were recorded during surgeries conducted in the Department of Surgery of Tokyo Women's Medical University from 2019 to 2020. Abdominal endoscopic images were cut out from manually captured surgical videos. An open-source programming framework for CNNs was used to design a model that could recognize and segment objects in real time through IBM Visual Insights. The model was used to detect the GI tract, blood, vessels, uterus, forceps, ports, gauze, and clips in the surgical images. Results. The accuracy, precision, and recall of the model were 83%, 80%, and 92%, respectively. The mean average precision (mAP), the calculated mean of the precision for each object, was 91%. Among surgical tools, the highest recall and precision, 96.3% and 97.9%, respectively, were achieved for forceps. Among the anatomical structures, the highest recall and precision, 92.9% and 91.3%, respectively, were achieved for the GI tract. Conclusion. The proposed model could detect objects in operative images with high accuracy, highlighting the possibility of using AI-based object recognition techniques for intraoperative navigation. Real-time object recognition will play a major role in navigation surgery and surgical education.
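The mAP metric quoted above is the mean of per-class average precision, each summarizing a precision-recall curve. Below is a minimal sketch with invented curves for two of the detected classes; this is not the IBM Visual Insights implementation:

```python
import numpy as np

def average_precision(recall, precision) -> float:
    """Trapezoidal area under a precision-recall curve (recall increasing)."""
    r, p = np.asarray(recall), np.asarray(precision)
    return float(np.sum((r[1:] - r[:-1]) * (p[1:] + p[:-1]) / 2.0))

# Invented precision-recall points for two object classes.
ap_forceps = average_precision([0.0, 0.5, 1.0], [1.0, 0.98, 0.96])
ap_gi_tract = average_precision([0.0, 0.5, 1.0], [1.0, 0.95, 0.88])
print("mAP:", np.mean([ap_forceps, ap_gi_tract]))
```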
Affiliation(s)
- Yoshiko Bamba
- Department of Surgery, Institute of Gastroenterology, Tokyo Women's Medical University, 8-1 Kawadacho, Shinjuku-ku, Tokyo, 162-8666, Japan
- Shimpei Ogawa
- Department of Surgery, Institute of Gastroenterology, Tokyo Women's Medical University, 8-1 Kawadacho, Shinjuku-ku, Tokyo, 162-8666, Japan
- Michio Itabashi
- Department of Surgery, Institute of Gastroenterology, Tokyo Women's Medical University, 8-1 Kawadacho, Shinjuku-ku, Tokyo, 162-8666, Japan
- Takahiro Okamoto
- Department of Breast Endocrinology Surgery, Tokyo Women's Medical University, Tokyo, Japan
- Masakazu Yamamoto
- Department of Surgery, Institute of Gastroenterology, Tokyo Women's Medical University, 8-1 Kawadacho, Shinjuku-ku, Tokyo, 162-8666, Japan
119
Ward TM, Fer DM, Ban Y, Rosman G, Meireles OR, Hashimoto DA. Challenges in surgical video annotation. Comput Assist Surg (Abingdon) 2021; 26:58-68. [PMID: 34126014; DOI: 10.1080/24699322.2021.1937320]
Abstract
Annotation of surgical video is important for establishing ground truth in surgical data science endeavors that involve computer vision. With the growth of the field over the last decade, several challenges have been identified in annotating spatial, temporal, and clinical elements of surgical video as well as challenges in selecting annotators. In reviewing current challenges, we provide suggestions on opportunities for improvement and possible next steps to enable translation of surgical data science efforts in surgical video analysis to clinical research and practice.
Affiliation(s)
- Thomas M Ward
- Surgical AI & Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
- Danyal M Fer
- Department of Surgery, University of California San Francisco East Bay, Hayward, CA, USA
- Yutong Ban
- Surgical AI & Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, Boston, MA, USA; Distributed Robotics Laboratory, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Guy Rosman
- Surgical AI & Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, Boston, MA, USA; Distributed Robotics Laboratory, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Ozanan R Meireles
- Surgical AI & Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
- Daniel A Hashimoto
- Surgical AI & Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
120
Yule S, Janda A, Likosky DS. Surgical Sabermetrics: Applying Athletics Data Science to Enhance Operative Performance. Ann Surg Open 2021; 2:e054. [PMID: 34179890; PMCID: PMC8221711; DOI: 10.1097/as9.0000000000000054]
Abstract
Mini-abstract: Surgical sabermetrics is advanced analytics of digitally recorded surgical training and operative procedures to enhance insight, support professional development, and optimize clinical and safety outcomes. This perspectives article illustrates how surgery can leverage data science approaches in athletics and industry to transform individual and team performance in the operating room.
Affiliation(s)
- Steven Yule
- Department of Clinical Surgery, University of Edinburgh, Edinburgh, Scotland
- Department of Surgery, Brigham & Women's Hospital/Harvard Medical School, Boston, MA
- Allison Janda
- Department of Anesthesiology, Michigan Medicine, Ann Arbor, MI
121
DeArmond M, Vidal E, Vanier C. Technology-assisted methods to assess the quality of the therapeutic alliance between health care providers and patients: a scoping review protocol. JBI Evid Synth 2021; 19:1222-1229. [PMID: 33278267; DOI: 10.11124/jbisrir-d-19-00429]
Abstract
OBJECTIVE The goal of this review is to identify and summarize technology-assisted methods that are being used in clinical, research, or educational settings to assess non-verbal behaviors that have been identified as contributors to the quality of the therapeutic alliance between health care providers and patients. INTRODUCTION A strong therapeutic alliance is a critical component of positive patient outcomes. A health care provider's non-verbal behaviors help build a strong therapeutic alliance, but practice with expert feedback is often required to develop desirable non-verbal behaviors. Advances in technology have been harnessed to assess and provide feedback to health care providers, but the technological tools can be difficult to find and compare. Technology-assisted feedback has the potential to help health care providers hone important clinical skills without requiring highly trained instructors, improving medical care overall. INCLUSION CRITERIA This review will consider quantitative and qualitative studies, as well as review articles. Participants must be health care providers (or students) who routinely conduct appointments with patients. Included studies must incorporate technology-assisted methods that are being used to collect or analyze information regarding at least one behavior associated with the therapeutic alliance in a clinical, research, or educational setting. Any type of patient encounter, whether actual, actor-based, virtual reality, or simulation-based, will be included. METHODS Five bibliographic databases will be searched, with results limited to English-language articles published from 2010 to the present. The search strategy yielded 404 results in PubMed. The proposed methodology follows the JBI methodology for scoping reviews.
Affiliation(s)
- Megan DeArmond
- Jay Sexter Library, Touro University Nevada, Henderson, NV, USA; Touro University Nevada: A JBI Affiliated Group, Henderson, NV, USA
- Evan Vidal
- College of Osteopathic Medicine, Touro University Nevada, Henderson, NV, USA
- Cheryl Vanier
- Touro University Nevada: A JBI Affiliated Group, Henderson, NV, USA; Department of Research, Touro University Nevada, Henderson, NV, USA
122
Cahill RA, Mac Aonghusa P, Mortensen N. The age of surgical operative video big data - My bicycle or our park? Surgeon 2021; 20:e7-e12. [PMID: 33962892; DOI: 10.1016/j.surge.2021.03.006]
Abstract
BACKGROUND Surgery is a major component of health-care provision. Operative intervention often employs minimally invasive approaches incorporating digital cameras, creating a 'digital twin' of both intracorporeal appearances and operative performance. Video recordings provide richer detail than the traditional operative note and can be coupled with advanced computer technology to unlock new analytic capabilities capable of driving surgical advancement via quality improvement initiatives and new technology design. Surgical video is, however, an under-utilized technology resource, in part because ownership, along with broader issues including purpose, privacy, confidentiality, copyright, and inclusion in outputs, has been poorly considered under outdated categorisation. METHOD A first-principles perspective on operative video classification as a useful public-interest resource enshrining fundamental stakeholder (patients, physicians, institutions, industry and society) rights, roles, and responsibilities. RESULT A facility of noble purpose, understandable to all, for fair, accountable, safe, and transparent access to large volumes of anonymised surgical videos of intracorporeal operations that enables advances through cross-disciplinary research is proposed. Technology can be exploited to protect all relevant parties, respecting both citizen data-rights and the special status of the doctor-patient relationship. Through general consensus, the capability can be understood, established, and iterated to perfection. CONCLUSION Overall, we argue that a new and specific classification of surgical video enables responsible curation and serves the public good better than the current model. Rather than being thought of as a bicycle, where discrete ownership is ascribed, such data are better viewed as being more like a park: a regulated amenity we should preserve for better human life.
Affiliation(s)
- Ronan A Cahill: Department of Surgery, Mater Misericordiae University Hospital, Dublin 7, Ireland; UCD Centre for Precision Surgery, School of Medicine, University College Dublin, Dublin, Ireland
- Neil Mortensen: Nuffield Department of Surgery, University of Oxford, Royal College of Surgeons of England, UK
123
Collins JW, Marcus HJ, Ghazi A, Sridhar A, Hashimoto D, Hager G, Arezzo A, Jannin P, Maier-Hein L, Marz K, Valdastri P, Mori K, Elson D, Giannarou S, Slack M, Hares L, Beaulieu Y, Levy J, Laplante G, Ramadorai A, Jarc A, Andrews B, Garcia P, Neemuchwala H, Andrusaite A, Kimpe T, Hawkes D, Kelly JD, Stoyanov D. Ethical implications of AI in robotic surgical training: A Delphi consensus statement. Eur Urol Focus 2021; 8:613-622. [PMID: 33941503 DOI: 10.1016/j.euf.2021.04.006] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Revised: 03/02/2021] [Accepted: 04/08/2021] [Indexed: 12/12/2022]
Abstract
CONTEXT As the role of AI in healthcare continues to expand, there is increasing awareness of the potential pitfalls of AI and the need for guidance to avoid them. OBJECTIVES To provide ethical guidance on developing narrow AI applications for surgical training curricula. We define standardised approaches to developing AI-driven applications in surgical training that address currently recognised ethical implications of utilising AI on surgical data. We aim to describe an ethical approach based on the current evidence, understanding of AI and available technologies, by seeking consensus from an expert committee. EVIDENCE ACQUISITION The project was carried out in 3 phases: (1) a steering group was formed to review the literature and summarise current evidence; (2) a larger expert panel convened and discussed the ethical implications of AI applications based on the current evidence, and a survey was created with input from panel members; (3) panel-based consensus findings were determined using an online Delphi process to formulate guidance. Thirty experts in AI implementation and/or training, including clinicians, academics and industry representatives, contributed. The Delphi process underwent 3 rounds. Additions to the second- and third-round surveys were formulated based on the answers and comments from previous rounds. Consensus opinion was defined as ≥80% agreement. EVIDENCE SYNTHESIS There was a 100% response rate across all 3 rounds. The resulting guidance showed good internal consistency, with a Cronbach alpha of >0.8. There was 100% consensus that there is currently a lack of guidance on the utilisation of AI in the setting of robotic surgical training. Consensus was reached in multiple areas, including: (1) data protection and privacy; (2) reproducibility and transparency; (3) predictive analytics; (4) inherent biases; and (5) areas of training most likely to benefit from AI. CONCLUSIONS Using the Delphi methodology, we achieved international consensus among experts and content validation for guidance on the ethical implications of AI in surgical training, providing an ethical foundation for launching narrow AI applications in this setting. This guidance will require further validation. PATIENT SUMMARY As the role of AI in healthcare continues to expand, there is increasing awareness of the potential pitfalls of AI and the need for guidance to avoid them. In this paper, we provide guidance on the ethical implications of AI in surgical training.
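To make the two statistical thresholds in this consensus process concrete (≥80% agreement for consensus; Cronbach's alpha >0.8 for internal consistency), here is a minimal Python sketch that computes both on an invented panel-response matrix. The scores, rating scale, and item count are illustrative assumptions, not the study's data.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_raters x n_items) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical Likert responses from 30 panellists on 5 guidance items.
rng = np.random.default_rng(0)
panel = rng.integers(3, 6, size=(30, 5))   # invented scores on a 1-5 scale

agreement = (panel >= 4).mean(axis=0)      # share of panellists agreeing per item
print("consensus per item (>= 80%):", agreement >= 0.80)
print("Cronbach's alpha:", round(cronbach_alpha(panel.astype(float)), 3))
```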
Affiliation(s)
- Justin W Collins: University College London, Division of Surgery and Interventional Science, Research Department of Targeted Intervention; Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London; University College London Hospital, Division of Uro-oncology
- Hani J Marcus: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London
- Ahmed Ghazi: Simulation Innovation Laboratory, University of Rochester, USA
- Ashwin Sridhar: University College London, Division of Surgery and Interventional Science, Research Department of Targeted Intervention; University College London Hospital, Division of Uro-oncology
- Daniel Hashimoto: Surgical Artificial Intelligence and Innovation Laboratory, Massachusetts General Hospital, USA
- Gregory Hager: Malone Center for Engineering in Healthcare, Department of Computer Science, Johns Hopkins University, Baltimore, USA
- Alberto Arezzo: Department of Surgical Sciences, University of Torino, Italy
- Lena Maier-Hein: Deutsches Krebsforschungszentrum, Division of Computer Assisted Medical Interventions, Heidelberg, Germany
- Keno Marz: Deutsches Krebsforschungszentrum, Division of Computer Assisted Medical Interventions, Heidelberg, Germany
- Pietro Valdastri: STORM Lab, School of Electronic and Electrical Engineering, University of Leeds, Leeds, UK
- Kensaku Mori: Director of Information Technology Center, Nagoya University, Japan
- Daniel Elson: Hamlyn Centre for Robotic Surgery, Department of Surgery and Cancer, Imperial College London, UK
- Stamatia Giannarou: Hamlyn Centre for Robotic Surgery, Department of Surgery and Cancer, Imperial College London, UK
- Mark Slack: Honorary Senior Lecturer, University of Cambridge, Cambridge, UK; CMO, CMR Surgical, Cambridge, UK
- Luke Hares: Chief Technology Director, CMR Surgical, Cambridge, UK
- Yanick Beaulieu: Division of Cardiology and Critical Care, Sacré-Coeur Hospital, University of Montreal, Montreal, Canada
- Jeff Levy: Institute for Surgical Excellence, Philadelphia, USA
- Guy Laplante: Director, Global Medical Affairs, Medtronic Minimally Invasive Therapies, Brampton, Canada
- Arvind Ramadorai: Director, Digital-Assisted Surgery (DAS), Medtronic Surgical Robotics, North Haven, CT, USA
- Anthony Jarc: Applied Research, Intuitive Surgical, Inc., Sunnyvale, CA, USA
- Ben Andrews: Strategy, Intuitive Surgical, Inc., Sunnyvale, CA, USA
- Tom Kimpe: BARCO NV, Healthcare Division, Kortrijk, Belgium
- David Hawkes: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London
- John D Kelly: University College London, Division of Surgery and Interventional Science, Research Department of Targeted Intervention; Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London; University College London Hospital, Division of Uro-oncology
- Danail Stoyanov: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London
124
Kitaguchi D, Takeshita N, Matsuzaki H, Hasegawa H, Igaki T, Oda T, Ito M. Deep learning-based automatic surgical step recognition in intraoperative videos for transanal total mesorectal excision. Surg Endosc 2021; 36:1143-1151. [PMID: 33825016 PMCID: PMC8758657 DOI: 10.1007/s00464-021-08381-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2020] [Accepted: 02/09/2021] [Indexed: 11/30/2022]
Abstract
BACKGROUND Dividing a surgical procedure into a sequence of identifiable and meaningful steps facilitates intraoperative video data acquisition and storage. These efforts are especially valuable for technically challenging procedures that require intraoperative video analysis, such as transanal total mesorectal excision (TaTME); however, manual video indexing is time-consuming. Thus, in this study, we constructed an annotated video dataset for TaTME with surgical step information and evaluated the performance of a deep learning model in recognizing the surgical steps in TaTME. METHODS This was a single-institutional retrospective feasibility study. All TaTME intraoperative videos were divided into frames. Each frame was manually annotated as one of the following major steps: (1) purse-string closure; (2) full-thickness transection of the rectal wall; (3) down-to-up dissection; (4) dissection after rendezvous; and (5) purse-string suture for stapled anastomosis. Steps 3 and 4 were each further classified into four sub-steps for dissection of the anterior, posterior, right, and left planes. A convolutional neural network-based deep learning model, Xception, was utilized for the surgical step classification task. RESULTS Our dataset containing 50 TaTME videos was randomly divided into two subsets for training and testing with 40 and 10 videos, respectively. The overall accuracy obtained for all classification steps was 93.2%. By contrast, when sub-step classification was included in the performance analysis, a mean accuracy (± standard deviation) of 78% (± 5%), with a maximum accuracy of 85%, was obtained. CONCLUSIONS To the best of our knowledge, this is the first study based on automatic surgical step classification for TaTME. Our deep learning model self-learned and recognized the classification steps in TaTME videos with high accuracy after training. Thus, our model can be applied to a system for intraoperative guidance or for postoperative video indexing and analysis in TaTME procedures.
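As a rough illustration of the frame-level step classification this study describes, the following Keras sketch fine-tunes an Xception backbone over the five major step labels. The directory layout, hyperparameters, and training schedule are assumptions for illustration, not the authors' published pipeline.

```python
import tensorflow as tf

NUM_STEPS = 5  # purse-string closure ... purse-string suture (major steps only)

# ImageNet-pretrained Xception as the per-frame feature extractor.
base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3), pooling="avg")
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_STEPS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical directory of extracted frames, one sub-folder per step label.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "tatme_frames/train", image_size=(299, 299), batch_size=32)
train_ds = train_ds.map(
    lambda x, y: (tf.keras.applications.xception.preprocess_input(x), y))
model.fit(train_ds, epochs=5)
```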
Affiliation(s)
- Daichi Kitaguchi: Surgical Device Innovation Office, National Cancer Center Hospital East, 6-5-1, Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan; Department of Colorectal Surgery, National Cancer Center Hospital East, 6-5-1, Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan; Department of Gastrointestinal and Hepato-Biliary-Pancreatic Surgery, Faculty of Medicine, University of Tsukuba, Tsukuba, Ibaraki, 305-8575, Japan
- Nobuyoshi Takeshita: Surgical Device Innovation Office, National Cancer Center Hospital East, 6-5-1, Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Hiroki Matsuzaki: Surgical Device Innovation Office, National Cancer Center Hospital East, 6-5-1, Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Hiro Hasegawa: Surgical Device Innovation Office, National Cancer Center Hospital East, 6-5-1, Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan; Department of Colorectal Surgery, National Cancer Center Hospital East, 6-5-1, Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Takahiro Igaki: Surgical Device Innovation Office, National Cancer Center Hospital East, 6-5-1, Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan; Department of Colorectal Surgery, National Cancer Center Hospital East, 6-5-1, Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Tatsuya Oda: Department of Gastrointestinal and Hepato-Biliary-Pancreatic Surgery, Faculty of Medicine, University of Tsukuba, Tsukuba, Ibaraki, 305-8575, Japan
- Masaaki Ito: Surgical Device Innovation Office, National Cancer Center Hospital East, 6-5-1, Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan; Department of Colorectal Surgery, National Cancer Center Hospital East, 6-5-1, Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
125
Garrow CR, Kowalewski KF, Li L, Wagner M, Schmidt MW, Engelhardt S, Hashimoto DA, Kenngott HG, Bodenstedt S, Speidel S, Müller-Stich BP, Nickel F. Machine Learning for Surgical Phase Recognition: A Systematic Review. Ann Surg 2021; 273:684-693. [PMID: 33201088 DOI: 10.1097/sla.0000000000004425] [Citation(s) in RCA: 110] [Impact Index Per Article: 36.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
Abstract
OBJECTIVE To provide an overview of ML models and data streams utilized for automated surgical phase recognition. BACKGROUND Phase recognition identifies the different steps and phases of an operation. ML is an evolving technology that allows the analysis and interpretation of huge data sets. Automation of phase recognition based on data inputs is essential for the optimization of workflow, surgical training, intraoperative assistance, patient safety, and efficiency. METHODS A systematic review was performed according to the Cochrane recommendations and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. PubMed, Web of Science, IEEE Xplore, Google Scholar, and CiteSeerX were searched. Literature describing phase recognition based on ML models and the capture of intraoperative signals during general surgery procedures was included. RESULTS A total of 2254 titles/abstracts were screened, and 35 full texts were included. The most commonly used ML models were Hidden Markov Models and Artificial Neural Networks, with a trend towards higher complexity over time. The most frequently used data types were feature learning from surgical videos and manual annotation of instrument use. Laparoscopic cholecystectomy was the most commonly studied procedure, often achieving accuracy rates over 90%, though there was no consistent standardization of defined phases. CONCLUSIONS ML for surgical phase recognition can be performed with high accuracy, depending on the model, data type, and complexity of the surgery. Different intraoperative data inputs such as video and instrument type can successfully be used. Most ML models still require significant amounts of manual expert annotation for training. ML models may drive surgical workflow towards standardization, efficiency, and objectiveness to improve patient outcomes in the future. REGISTRATION PROSPERO CRD42018108907.
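Since Hidden Markov Models recur throughout this review as a phase-recognition backbone, a minimal sketch may help: the pure-NumPy Viterbi decoder below smooths noisy per-frame phase probabilities with a sticky transition prior. The transition matrix, phase count, and frame probabilities are all invented toy values.

```python
import numpy as np

def viterbi(log_emissions, log_transition, log_prior):
    """Most likely phase sequence given per-frame log-probabilities."""
    T, K = log_emissions.shape
    dp = np.full((T, K), -np.inf)
    back = np.zeros((T, K), dtype=int)
    dp[0] = log_prior + log_emissions[0]
    for t in range(1, T):
        scores = dp[t - 1][:, None] + log_transition   # (K, K): prev -> next
        back[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0) + log_emissions[t]
    path = [int(dp[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

K = 3  # toy example with 3 phases
# Sticky transitions: phases rarely jump and never move backwards here.
A = np.array([[0.98, 0.02, 0.0], [0.0, 0.98, 0.02], [0.0, 0.0, 1.0]])
frame_probs = np.array([[0.7, 0.2, 0.1]] * 5 + [[0.3, 0.6, 0.1]] * 5 + [[0.1, 0.2, 0.7]] * 5)
with np.errstate(divide="ignore"):
    print(viterbi(np.log(frame_probs), np.log(A), np.log(np.array([1.0, 0.0, 0.0]))))
```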
Affiliation(s)
- Carly R Garrow: Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
- Karl-Friedrich Kowalewski: Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany; Department of Urology, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
- Linhong Li: Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
- Martin Wagner: Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
- Mona W Schmidt: Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
- Sandy Engelhardt: Department of Computer Science, Mannheim University of Applied Sciences, Mannheim, Germany
- Daniel A Hashimoto: Department of Surgery, Massachusetts General Hospital, Boston, Massachusetts
- Hannes G Kenngott: Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
- Sebastian Bodenstedt: Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT), Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
- Stefanie Speidel: Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT), Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
- Beat P Müller-Stich: Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
- Felix Nickel: Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
126
The potential of artificial intelligence to improve patient safety: a scoping review. NPJ Digit Med 2021; 4:54. [PMID: 33742085 PMCID: PMC7979747 DOI: 10.1038/s41746-021-00423-6] [Citation(s) in RCA: 74] [Impact Index Per Article: 24.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2020] [Accepted: 02/16/2021] [Indexed: 12/12/2022] Open
Abstract
Artificial intelligence (AI) represents a valuable tool that could be used to improve the safety of care. Major adverse events in healthcare include healthcare-associated infections, adverse drug events, venous thromboembolism, surgical complications, pressure ulcers, falls, decompensation, and diagnostic errors. The objective of this scoping review was to summarize the relevant literature and evaluate the potential of AI to improve patient safety in these eight harm domains. A structured search was used to query MEDLINE for relevant articles. The scoping review identified studies that described the application of AI for prediction, prevention, or early detection of adverse events in each of the harm domains. The AI literature was narratively synthesized for each domain, and findings were considered in the context of incidence, cost, and preventability to make projections about the likelihood of AI improving safety. Three hundred ninety-two studies were included in the scoping review. The literature provided numerous examples of how AI has been applied within each of the eight harm domains using various techniques. The most common novel data were collected using different types of sensing technologies: vital sign monitoring, wearables, pressure sensors, and computer vision. There are significant opportunities to leverage AI and novel data sources to reduce the frequency of harm across all domains. We expect AI to have the greatest impact in areas where current strategies are not effective and where integration and complex analysis of novel, unstructured data are necessary to make accurate predictions; this applies specifically to adverse drug events, decompensation, and diagnostic errors.
127
A Guide to Annotation of Neurosurgical Intraoperative Video for Machine Learning Analysis and Computer Vision. World Neurosurg 2021; 150:26-30. [PMID: 33722717 DOI: 10.1016/j.wneu.2021.03.022] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Revised: 03/02/2021] [Accepted: 03/03/2021] [Indexed: 11/21/2022]
Abstract
OBJECTIVE Computer vision (CV) is a subset of artificial intelligence that performs computations on image or video data, permitting the quantitative analysis of visual information. Common CV tasks that may be relevant to surgeons include image classification, object detection and tracking, and extraction of higher order features. Despite the potential applications of CV to intraoperative video, however, few surgeons describe the use of CV. A primary roadblock in implementing CV is the lack of a clear workflow to create an intraoperative video dataset to which CV can be applied. We report general principles for creating usable surgical video datasets and the results of their application. METHODS Video annotations from cadaveric endoscopic endonasal skull base simulations (n = 20 trials of 1-5 minutes, size = 8 GB) were reviewed by 2 researcher-annotators. An internal, retrospective analysis of the workflow for development of the intraoperative video annotations was performed to identify guiding practices. RESULTS Approximately 34,000 frames of surgical video were annotated. Key considerations in developing annotation workflows include (1) overcoming software and personnel constraints; (2) ensuring adequate storage and access infrastructure; (3) optimization and standardization of the annotation protocol; and (4) operationalizing annotated data. Potential tools for use include CVAT (Computer Vision Annotation Tool) and VoTT (Visual Object Tagging Tool): open-source annotation software allowing for local video storage, easy setup, and the use of interpolation. CONCLUSIONS CV techniques can be applied to surgical video, but challenges for novice users may limit adoption. We outline principles in annotation workflow that can mitigate the initial challenges groups may have when converting raw video into usable, annotated datasets.
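The interpolation feature the authors highlight (annotate a few keyframes and let the tool fill in the frames between) is what makes tools like CVAT labor-saving. A toy sketch of the underlying idea, under the assumption of simple linear motion and an (x1, y1, x2, y2) box format:

```python
from typing import Dict, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def interpolate_boxes(keyframes: Dict[int, Box]) -> Dict[int, Box]:
    """Linearly interpolate boxes for frames between annotated keyframes."""
    frames = sorted(keyframes)
    out: Dict[int, Box] = dict(keyframes)
    for a, b in zip(frames, frames[1:]):
        for f in range(a + 1, b):
            w = (f - a) / (b - a)  # fraction of the way from keyframe a to b
            out[f] = tuple((1 - w) * p + w * q
                           for p, q in zip(keyframes[a], keyframes[b]))
    return out

# Two manually annotated keyframes; frames 1-4 are filled in automatically.
tracks = interpolate_boxes({0: (10, 10, 50, 50), 5: (60, 20, 100, 60)})
print(tracks[2])  # -> (30.0, 14.0, 70.0, 54.0)
```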
128
OR black box and surgical control tower: Recording and streaming data and analytics to improve surgical care. J Visc Surg 2021; 158:S18-S25. [PMID: 33712411 DOI: 10.1016/j.jviscsurg.2021.01.004] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Effective and safe surgery results from a complex sociotechnical process prone to human error. Acquiring large amounts of data on surgical care and modelling the process of surgery with artificial intelligence's computational methods could shed light on system strengths and limitations and enable computer-based smart assistance. With this vision in mind, surgeons and computer scientists have joined forces in a novel discipline called Surgical Data Science. In this regard, operating room (OR) black boxes and surgical control towers are being developed to systematically capture comprehensive data on surgical procedures and to oversee and assist during operating room activities, respectively. Most early Surgical Data Science works have focused on understanding risks and resilience factors affecting surgical safety, the context and workflow of procedures, and team behaviors. These pioneering efforts in sensing and analyzing surgical activities, together with the advent of precise robotic actuators, bring surgery to the verge of a fourth revolution characterized by smart assistance in perceptual, cognitive and physical tasks. Barriers to implementing this vision exist, but the surgical-technical partnerships set up by ambitious efforts such as the OR black box and the surgical control tower are working to overcome these roadblocks and translate the vision and early works described in this manuscript into value for patients, surgeons and health systems.
129
Affiliation(s)
- E Vibert: Hepato-Biliary Center, Paul-Brousse Hospital, AP-HP, Villejuif, France; BOPA Innovation Chair, AP-HP, Institut Mines Telecom, France; Paris Saclay University, INSERM 1193, France
130
Humm G, Harries RL, Stoyanov D, Lovat LB. Supporting laparoscopic general surgery training with digital technology: The United Kingdom and Ireland paradigm. BMC Surg 2021; 21:123. [PMID: 33685437 PMCID: PMC7941971 DOI: 10.1186/s12893-021-01123-4] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Accepted: 02/25/2021] [Indexed: 12/20/2022] Open
Abstract
Surgical training in the UK and Ireland has faced challenges following the implementation of the European Working Time Directive and postgraduate training reform. The health services are undergoing a digital transformation; digital technology is remodelling the delivery of surgical care and surgical training. This review aims to critically evaluate key issues in laparoscopic general surgery training and the digital technologies addressing them, such as virtual and augmented reality, telementoring, and automated workflow analysis and surgical skill assessment. We include pre-clinical, proof-of-concept research and commercial systems that are being developed to provide solutions. Digital surgical technology is evolving through interdisciplinary collaboration to provide widespread access to high-quality laparoscopic general surgery training and assessment. In the future, this could lead to integrated, context-aware systems that support surgical teams in providing safer surgical care.
Affiliation(s)
- Gemma Humm: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Charles Bell House, 43-45 Foley Street, London, W1W 7TY, UK; Division of Surgery and Interventional Science, University College London, London, UK
- Danail Stoyanov: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Charles Bell House, 43-45 Foley Street, London, W1W 7TY, UK; Department of Computer Science, University College London, London, UK
- Laurence B Lovat: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Charles Bell House, 43-45 Foley Street, London, W1W 7TY, UK; Division of Surgery and Interventional Science, University College London, London, UK
131
Urbanski A, Babic B, Schröder W, Schiffmann L, Müller DT, Bruns CJ, Fuchs HF. [New techniques and training methods for robot-assisted surgery and cost-benefit analysis of Ivor Lewis esophagectomy]. Chirurg 2021; 92:97-101. [PMID: 33237368 DOI: 10.1007/s00104-020-01317-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
INTRODUCTION Robotic surgery was introduced into general surgery more than 20 years ago. Shortly afterwards, Horgan performed the first robot-assisted esophagectomy in 2003 in Chicago. The aim of this manuscript is to elucidate new developments and training methods in robotic surgery, with a cost-benefit analysis for robot-assisted Ivor Lewis esophagectomy. METHODS Systematic literature search regarding new technology and training methods for robotic surgery, and cost analysis of intraoperative materials for hybrid and robot-assisted Ivor Lewis esophagectomy. RESULTS Robot-assisted esophageal surgery is complex and involves an extensive learning curve, which can be shortened with modern teaching methods. New robotic systems aim at the use of image-guided surgery and artificial intelligence. Robot-assisted surgery for esophageal cancer is significantly more expensive than surgery without this technology. CONCLUSION Oncological short-term and long-term benefits need to be further evaluated to justify the higher cost of robotic esophageal cancer surgery.
Affiliation(s)
- Alexander Urbanski: Klinik und Poliklinik für Allgemein-, Viszeral-, Tumor- und Transplantationschirurgie, Uniklinikum Köln, Kerpener Straße 62, 50937, Köln, Germany
- Benjamin Babic: Klinik und Poliklinik für Allgemein-, Viszeral-, Tumor- und Transplantationschirurgie, Uniklinikum Köln, Kerpener Straße 62, 50937, Köln, Germany
- Wolfgang Schröder: Klinik und Poliklinik für Allgemein-, Viszeral-, Tumor- und Transplantationschirurgie, Uniklinikum Köln, Kerpener Straße 62, 50937, Köln, Germany
- Lars Schiffmann: Klinik und Poliklinik für Allgemein-, Viszeral-, Tumor- und Transplantationschirurgie, Uniklinikum Köln, Kerpener Straße 62, 50937, Köln, Germany
- Dolores T Müller: Klinik und Poliklinik für Allgemein-, Viszeral-, Tumor- und Transplantationschirurgie, Uniklinikum Köln, Kerpener Straße 62, 50937, Köln, Germany
- Christiane J Bruns: Klinik und Poliklinik für Allgemein-, Viszeral-, Tumor- und Transplantationschirurgie, Uniklinikum Köln, Kerpener Straße 62, 50937, Köln, Germany
- Hans F Fuchs: Klinik und Poliklinik für Allgemein-, Viszeral-, Tumor- und Transplantationschirurgie, Uniklinikum Köln, Kerpener Straße 62, 50937, Köln, Germany
132
Deep learning visual analysis in laparoscopic surgery: a systematic review and diagnostic test accuracy meta-analysis. Surg Endosc 2021; 35:1521-1533. [PMID: 33398560 DOI: 10.1007/s00464-020-08168-1] [Citation(s) in RCA: 36] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2020] [Accepted: 11/15/2020] [Indexed: 10/22/2022]
Abstract
BACKGROUND In the past decade, deep learning has revolutionized medical image processing. This technique may advance laparoscopic surgery. The study objective was to evaluate whether deep learning networks accurately analyze videos of laparoscopic procedures. METHODS Medline, Embase, IEEE Xplore, and Web of Science databases were searched from January 2012 to May 5, 2020. Selected studies tested a deep learning model, specifically convolutional neural networks, for video analysis of laparoscopic surgery. Study characteristics, including the dataset source, type of operation, number of videos, and prediction application, were compared. A random effects model was used to estimate the pooled sensitivity and specificity of the computer algorithms. Summary receiver operating characteristic curves were calculated by the bivariate model of Reitsma. RESULTS Thirty-two of the 508 identified studies met the inclusion criteria. Applications included instrument recognition and detection (45%), phase recognition (20%), anatomy recognition and detection (15%), action recognition (13%), surgery time prediction (5%), and gauze recognition (3%). The most commonly tested procedures were cholecystectomy (51%) and gynecological procedures, mainly hysterectomy and myomectomy (26%). A total of 3004 videos were analyzed. Publications in clinical journals increased in 2020 compared with bio-computational ones. Four studies provided enough data to construct 8 contingency tables, enabling calculation of test accuracy with a pooled sensitivity of 0.93 (95% CI 0.85-0.97) and specificity of 0.96 (95% CI 0.84-0.99). Yet, the majority of papers had a high risk of bias. CONCLUSIONS Deep learning research holds potential in laparoscopic surgery but remains methodologically limited. Clinicians may advance AI in surgery, specifically by offering standardized visual databases and reporting.
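The bivariate Reitsma model used here jointly pools sensitivity and specificity; as a simplified univariate stand-in, the sketch below pools logit-transformed sensitivities with a DerSimonian-Laird random-effects estimate. The study counts are invented, and this method is deliberately cruder than the paper's.

```python
import numpy as np

def pooled_logit(events: np.ndarray, totals: np.ndarray):
    """DerSimonian-Laird random-effects pool of proportions on the logit scale."""
    # 0.5 continuity correction keeps logits finite for extreme tables.
    p = (events + 0.5) / (totals + 1.0)
    y = np.log(p / (1 - p))                              # logit proportions
    v = 1 / (events + 0.5) + 1 / (totals - events + 0.5)  # approx. variances
    w = 1 / v
    y_fixed = (w * y).sum() / w.sum()
    q = (w * (y - y_fixed) ** 2).sum()
    tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - (w**2).sum() / w.sum()))
    w_re = 1 / (v + tau2)                                 # random-effects weights
    y_re = (w_re * y).sum() / w_re.sum()
    se = np.sqrt(1 / w_re.sum())
    inv = lambda z: 1 / (1 + np.exp(-z))                  # back-transform
    return inv(y_re), (inv(y_re - 1.96 * se), inv(y_re + 1.96 * se))

# Hypothetical true-positive counts and diseased totals from 4 studies.
tp = np.array([45, 88, 30, 120]); diseased = np.array([50, 95, 34, 130])
est, ci = pooled_logit(tp, diseased)
print(f"pooled sensitivity {est:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```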
133
Artificial Intelligence in Surgery. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_171-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
134
Bar O, Neimark D, Zohar M, Hager GD, Girshick R, Fried GM, Wolf T, Asselmann D. Impact of data on generalization of AI for surgical intelligence applications. Sci Rep 2020; 10:22208. [PMID: 33335191 PMCID: PMC7747564 DOI: 10.1038/s41598-020-79173-6] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2020] [Accepted: 12/04/2020] [Indexed: 12/18/2022] Open
Abstract
AI is becoming ubiquitous, revolutionizing many aspects of our lives. In surgery, it is still a promise. AI has the potential to improve surgeon performance and impact patient care, from post-operative debrief to real-time decision support. But how much data is needed by an AI-based system to learn surgical context with high fidelity? To answer this question, we leveraged a large-scale, diverse cholecystectomy video dataset. We assessed surgical workflow recognition and report a deep learning system that not only detects surgical phases but does so with high accuracy and is able to generalize to new settings and unseen medical centers. Our findings provide a solid foundation for translating AI applications from research to practice, ushering in a new era of surgical intelligence.
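The generalization question here hinges on splitting data by medical center rather than by video or frame, so the test centers are truly unseen. A small sketch of such a grouped split using scikit-learn; the metadata arrays are invented:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical metadata: one row per video, with the source medical center.
n_videos = 1000
rng = np.random.default_rng(42)
centers = rng.integers(0, 20, size=n_videos)   # 20 contributing centers
X = np.arange(n_videos)                        # stand-in for video ids

# Hold out ~25% of *centers*, not frames, to test cross-center generalization.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(X, groups=centers))
assert set(centers[train_idx]).isdisjoint(centers[test_idx])
print(len(train_idx), "train videos;", len(test_idx), "test videos from unseen centers")
```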
Affiliation(s)
- Omri Bar: theator Inc., San Mateo, CA, USA
- Gregory D Hager: theator Inc., San Mateo, CA, USA; Department of Computer Science, Johns Hopkins University, Baltimore, USA
- Gerald M Fried: theator Inc., San Mateo, CA, USA; Department of Surgery, McGill University, Montreal, QC, Canada
135
Künstliche Intelligenz in der Ausbildung [Artificial intelligence in training]. Arthroskopie 2020. [DOI: 10.1007/s00142-020-00425-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
136
Ward TM, Mascagni P, Ban Y, Rosman G, Padoy N, Meireles O, Hashimoto DA. Computer vision in surgery. Surgery 2020; 169:1253-1256. [PMID: 33272610 DOI: 10.1016/j.surg.2020.10.039] [Citation(s) in RCA: 56] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2020] [Revised: 10/09/2020] [Accepted: 10/10/2020] [Indexed: 12/17/2022]
Abstract
The fields of computer vision (CV) and artificial intelligence (AI) have undergone rapid advancements in the past decade, many of which have been applied to the analysis of intraoperative video. These advances are driven by the widespread application of deep learning, which leverages multiple layers of neural networks to teach computers complex tasks. Prior to these advances, applications of AI in the operating room were limited by our relative inability to train computers to accurately understand images with traditional machine learning (ML) techniques. The development and refinement of deep neural networks that can now accurately identify objects in images and remember past surgical events has sparked a surge in applications of CV to analyze intraoperative video and has allowed for the accurate identification of surgical phases (steps) and instruments across a variety of procedures. In some cases, CV can even identify operative phases with accuracy similar to surgeons. Future research will likely expand on this foundation of surgical knowledge using larger video datasets and improved algorithms with greater accuracy and interpretability to create clinically useful AI models that gain widespread adoption and augment the surgeon's ability to provide safer care for patients everywhere.
Affiliation(s)
- Thomas M Ward: Surgical Artificial Intelligence and Innovation Laboratory, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Pietro Mascagni: ICube, University of Strasbourg, CNRS, IHU Strasbourg, France; Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Yutong Ban: Surgical Artificial Intelligence and Innovation Laboratory, Massachusetts General Hospital, Harvard Medical School, Boston, MA; Distributed Robotics Laboratory, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA
- Guy Rosman: Surgical Artificial Intelligence and Innovation Laboratory, Massachusetts General Hospital, Harvard Medical School, Boston, MA; Distributed Robotics Laboratory, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA
- Nicolas Padoy: ICube, University of Strasbourg, CNRS, IHU Strasbourg, France
- Ozanan Meireles: Surgical Artificial Intelligence and Innovation Laboratory, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Daniel A Hashimoto: Surgical Artificial Intelligence and Innovation Laboratory, Massachusetts General Hospital, Harvard Medical School, Boston, MA
137
Artificial Intelligence for Surgical Safety: Automatic Assessment of the Critical View of Safety in Laparoscopic Cholecystectomy Using Deep Learning. Ann Surg 2020; 275:955-961. [PMID: 33201104 DOI: 10.1097/sla.0000000000004351] [Citation(s) in RCA: 75] [Impact Index Per Article: 18.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
Abstract
OBJECTIVE To develop a deep learning model to automatically segment hepatocystic anatomy and assess the criteria defining the critical view of safety (CVS) in laparoscopic cholecystectomy (LC). BACKGROUND Poor implementation and subjective interpretation of CVS contribute to the stable rates of bile duct injuries in LC. As CVS is assessed visually, this task can be automated by using computer vision, an area of artificial intelligence aimed at interpreting images. METHODS Still images from LC videos were annotated with CVS criteria and hepatocystic anatomy segmentation. A deep neural network comprising a segmentation model to highlight hepatocystic anatomy and a classification model to predict CVS criteria achievement was trained and tested using 5-fold cross-validation. Intersection over union, average precision, and balanced accuracy were computed to evaluate the model performance versus the annotated ground truth. RESULTS A total of 2854 images from 201 LC videos were annotated, and 402 images were further segmented. Mean intersection over union for segmentation was 66.6%. The model assessed the achievement of CVS criteria with a mean average precision and balanced accuracy of 71.9% and 71.4%, respectively. CONCLUSIONS Deep learning algorithms can be trained to reliably segment hepatocystic anatomy and assess CVS criteria in still laparoscopic images. Surgical-technical partnerships should be encouraged to develop and evaluate deep learning models to improve surgical safety.
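The two headline metrics here, intersection over union for the segmentation output and balanced accuracy for the CVS-criteria classifier, are straightforward to compute. A toy sketch with invented masks and labels:

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score

def binary_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union for a single binary mask pair."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

# Toy 4x4 masks standing in for a hepatocystic-structure segmentation.
pred = np.array([[0,1,1,0],[0,1,1,0],[0,0,0,0],[0,0,0,0]], dtype=bool)
gt   = np.array([[0,1,1,1],[0,1,1,1],[0,0,0,0],[0,0,0,0]], dtype=bool)
print("IoU:", binary_iou(pred, gt))               # 4 / 6 = 0.667

# Per-criterion CVS labels (achieved / not achieved) over toy images.
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1]
print("balanced accuracy:", balanced_accuracy_score(y_true, y_pred))
```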
138
Abstract
PURPOSE OF REVIEW Current bariatric surgical practice has developed from early procedures, some of which are no longer routinely performed. This review highlights how surgical practice in this area has developed over time. RECENT FINDINGS This review outlines early procedures including jejuno-colic and jejuno-ileal bypass, initial experience with gastric bypass, vertical banded gastroplasty, and biliopancreatic diversion with or without duodenal switch. The role laparoscopy has played in the widespread utilization of surgery for the treatment of obesity will be described, as will the development of the procedures that form the mainstay of current bariatric surgical practice, including gastric bypass, sleeve gastrectomy, and adjustable gastric banding. Endoscopic therapies for the treatment of obesity will also be described. By outlining how bariatric surgical practice has developed over time, this review will help practicing surgeons understand how individual procedures have evolved and also provide insight into potential future developments in this field.
Affiliation(s)
- T Wiggins: Department of Bariatric Surgery, Homerton University Hospital, Homerton Row, London, E9 6SR, UK
- M S Majid: Department of Bariatric Surgery, Homerton University Hospital, Homerton Row, London, E9 6SR, UK
- S Agrawal: Department of Bariatric Surgery, Homerton University Hospital, Homerton Row, London, E9 6SR, UK
139
Affiliation(s)
- Daniel A Hashimoto: Surgical AI & Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, Harvard Medical School, 15 Parkman Street, WAC460, Boston, MA 02114, USA
- Thomas M Ward: Surgical AI & Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, Harvard Medical School, 15 Parkman Street, WAC460, Boston, MA 02114, USA
- Ozanan R Meireles: Surgical AI & Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, Harvard Medical School, 15 Parkman Street, WAC460, Boston, MA 02114, USA
140
Loftus TJ, Filiberto AC, Li Y, Balch J, Cook AC, Tighe PJ, Efron PA, Upchurch GR, Rashidi P, Li X, Bihorac A. Decision analysis and reinforcement learning in surgical decision-making. Surgery 2020; 168:253-266. [PMID: 32540036 PMCID: PMC7390703 DOI: 10.1016/j.surg.2020.04.049] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2019] [Revised: 03/18/2020] [Accepted: 04/17/2020] [Indexed: 12/18/2022]
Abstract
BACKGROUND Surgical patients incur preventable harm from cognitive and judgment errors made under time constraints and uncertainty regarding patients' diagnoses and predicted response to treatment. Decision analysis and techniques of reinforcement learning can theoretically mitigate these challenges but are poorly understood and rarely used clinically. This review seeks to promote an understanding of decision analysis and reinforcement learning by describing their use in the context of surgical decision-making. METHODS Cochrane, EMBASE, and PubMed databases were searched from their inception to June 2019. Included were 41 articles about cognitive and diagnostic errors, decision-making, decision analysis, and machine learning. The articles were assimilated into relevant categories according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews guidelines. RESULTS Requirements for time-consuming manual data entry and crude representations of individual patients and clinical context compromise many traditional decision-support tools. Decision analysis methods for calculating probability thresholds can inform population-based recommendations that jointly consider risks, benefits, costs, and patient values but lack precision for individual patient-centered decisions. Reinforcement learning, a machine-learning method that mimics human learning, can use a large set of patient-specific input data to identify actions yielding the greatest probability of achieving a goal. This methodology follows a sequence of events with uncertain conditions, offering potential advantages for personalized, patient-centered decision-making. Clinical application would require secure integration of multiple data sources and attention to ethical considerations regarding liability for errors and individual patient preferences. CONCLUSION Traditional decision-support tools are ill-equipped to accommodate time constraints and uncertainty regarding diagnoses and the predicted response to treatment, both of which often impair surgical decision-making. Decision analysis and reinforcement learning have the potential to play complementary roles in delivering high-value surgical care through sound judgment and optimal decision-making.
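As a concrete, if contrived, illustration of the reinforcement-learning idea the review describes, the sketch below runs tabular Q-learning on a two-state "observe vs. operate" toy problem. The states, rewards, and transitions are entirely invented and carry no clinical meaning.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy MDP: states 0 = "stable", 1 = "deteriorating"; actions 0 = observe, 1 = operate.
def step(state, action):
    if state == 0:
        return (0, 1.0) if action == 0 else (0, 0.2)  # observing stable patients pays off
    # Deteriorating patients do better with operative management in this toy model.
    return (0, 1.0) if action == 1 else (1, -1.0)

Q = np.zeros((2, 2))
alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration
state = 0
for _ in range(5000):
    # Epsilon-greedy action selection.
    action = rng.integers(2) if rng.random() < eps else int(Q[state].argmax())
    nxt, reward = step(state, action)
    # Standard Q-learning update toward the bootstrapped target.
    Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
    state = nxt

print(Q)  # argmax per row recovers the sensible policy: observe when stable, operate when deteriorating
```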
Affiliation(s)
- Tyler J Loftus: Department of Surgery, University of Florida Health, Gainesville, FL
- Yanjun Li: NSF Center for Big Learning, University of Florida, Gainesville, FL
- Jeremy Balch: Department of Surgery, University of Florida Health, Gainesville, FL
- Allyson C Cook: Department of Medicine, University of California, San Francisco, CA
- Patrick J Tighe: Departments of Anesthesiology, Orthopedics, and Information Systems/Operations Management, University of Florida Health, Gainesville, FL
- Philip A Efron: Department of Surgery, University of Florida Health, Gainesville, FL
- Parisa Rashidi: Departments of Biomedical Engineering, Computer and Information Science and Engineering, and Electrical and Computer Engineering, University of Florida, Gainesville, FL; Precision and Intelligence in Medicine, Department of Medicine, University of Florida Health, Gainesville, FL
- Xiaolin Li: NSF Center for Big Learning, University of Florida, Gainesville, FL
- Azra Bihorac: Department of Medicine, University of California, San Francisco, CA; Precision and Intelligence in Medicine, Department of Medicine, University of Florida Health, Gainesville, FL
141
Affiliation(s)
- Elif Bilgic: Department of Surgery, Division of Surgical Education, McGill University, McGill University Health Centre, 1650 Cedar Avenue, #D6.136, Montreal, Quebec H3G 1A4, Canada
- Sofia Valanci-Aroesty: Department of Surgery, Division of Experimental Surgery, McGill University, McGill University Health Centre, 1650 Cedar Avenue, #D6.136, Montreal, Quebec H3G 1A4, Canada
- Gerald M Fried: Department of Surgery, McGill University, McGill University Health Centre, 1650 Cedar Avenue, #D6.136, Montreal, Quebec H3G 1A4, Canada
142
Ward TM, Hashimoto DA, Ban Y, Rattner DW, Inoue H, Lillemoe KD, Rus DL, Rosman G, Meireles OR. Automated operative phase identification in peroral endoscopic myotomy. Surg Endosc 2020; 35:4008-4015. [PMID: 32720177 DOI: 10.1007/s00464-020-07833-9] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2020] [Accepted: 07/16/2020] [Indexed: 12/23/2022]
Abstract
BACKGROUND Artificial intelligence (AI) and computer vision (CV) have revolutionized image analysis. In surgery, CV applications have focused on surgical phase identification in laparoscopic videos. We proposed to apply CV techniques to identify phases in an endoscopic procedure, peroral endoscopic myotomy (POEM). METHODS POEM videos were collected from Massachusetts General and Showa University Koto Toyosu Hospitals. Videos were labeled by surgeons with the following ground truth phases: (1) submucosal injection, (2) mucosotomy, (3) submucosal tunnel, (4) myotomy, and (5) mucosotomy closure. The deep-learning CV model, a Convolutional Neural Network (CNN) combined with Long Short-Term Memory (LSTM), was trained on 30 videos to create POEMNet. We then used POEMNet to identify operative phases in the remaining 20 videos. The model's performance was compared to surgeon-annotated ground truth. RESULTS POEMNet's overall phase identification accuracy was 87.6% (95% CI 87.4-87.9%). When evaluated on a per-phase basis, the model performed well, with mean unweighted and prevalence-weighted F1 scores of 0.766 and 0.875, respectively. The model performed best with longer phases, with 70.6% accuracy for phases that had a duration under 5 min and 88.3% accuracy for longer phases. DISCUSSION A deep-learning-based approach to CV, previously successful in laparoscopic video phase identification, translates well to endoscopic procedures. With continued refinements, AI could contribute to intra-operative decision-support systems and post-operative risk prediction.
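The CNN-plus-LSTM pattern behind POEMNet (a per-frame CNN feature extractor feeding a recurrent layer that outputs a phase per frame) can be sketched in a few lines of Keras. The backbone choice, sequence length, and layer sizes below are assumptions for illustration, not POEMNet's actual configuration.

```python
import tensorflow as tf

NUM_PHASES = 5         # POEM phases: injection, mucosotomy, tunnel, myotomy, closure
SEQ_LEN, H, W = 16, 224, 224

# Frozen ImageNet CNN as the per-frame feature extractor (backbone is assumed).
cnn = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", input_shape=(H, W, 3), pooling="avg")
cnn.trainable = False

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, H, W, 3)),
    tf.keras.layers.TimeDistributed(cnn),          # (batch, SEQ_LEN, features)
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.TimeDistributed(
        tf.keras.layers.Dense(NUM_PHASES, activation="softmax")),  # phase per frame
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```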
Affiliation(s)
- Thomas M Ward: Surgical AI and Innovation Laboratory, Massachusetts General Hospital, 15 Parkman St., WAC 460, Boston, MA, 02114, USA; Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
- Daniel A Hashimoto: Surgical AI and Innovation Laboratory, Massachusetts General Hospital, 15 Parkman St., WAC 460, Boston, MA, 02114, USA; Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
- Yutong Ban: Surgical AI and Innovation Laboratory, Massachusetts General Hospital, 15 Parkman St., WAC 460, Boston, MA, 02114, USA; Department of Surgery, Massachusetts General Hospital, Boston, MA, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- David W Rattner: Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
- Haruhiro Inoue: Digestive Disease Center, Showa University Koto Toyosu Hospital, Tokyo, Japan
- Keith D Lillemoe: Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
- Daniela L Rus: Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Guy Rosman: Surgical AI and Innovation Laboratory, Massachusetts General Hospital, 15 Parkman St., WAC 460, Boston, MA, 02114, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Ozanan R Meireles: Surgical AI and Innovation Laboratory, Massachusetts General Hospital, 15 Parkman St., WAC 460, Boston, MA, 02114, USA; Department of Surgery, Massachusetts General Hospital, Boston, MA, USA
143
Egert M, Steward JE, Sundaram CP. Machine Learning and Artificial Intelligence in Surgical Fields. Indian J Surg Oncol 2020; 11:573-577. [PMID: 33299275 DOI: 10.1007/s13193-020-01166-8] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2019] [Accepted: 07/07/2020] [Indexed: 12/17/2022] Open
Abstract
Artificial intelligence (AI) and machine learning (ML) have the potential to improve multiple facets of medical practice, including diagnosis of disease, surgical training, clinical outcomes, and access to healthcare. There have been various applications of this technology to surgical fields. AI and ML have been used to evaluate a surgeon's technical skill. These technologies can detect instrument motion, recognize patterns in video recordings, and track the physical motion, eye movements, and cognitive function of the surgeon. These modalities also aid in the advancement of robotic surgical training. The da Vinci Standard Surgical System developed a recording and playback system to help trainees receive tactical feedback to acquire more precision when operating. ML has shown promise in recognizing and classifying complex patterns on diagnostic images and within pathologic tissue analysis. This allows for more accurate and efficient diagnosis and treatment. Artificial neural networks are able to analyze sets of symptoms in conjunction with labs, imaging, and exam findings to determine the likelihood of a diagnosis or outcome. Telemedicine is another use of ML and AI that uses technology such as voice recognition to deliver health care remotely. Limitations include the need for large data sets to program computers to create the algorithms. There is also the potential for misclassification of data points that do not follow the typical patterns learned by the machine. As more applications of AI and ML are developed for the surgical field, further studies are needed to determine feasibility, efficacy, and cost.
Affiliation(s)
- Melissa Egert: Department of Urology, Indiana University School of Medicine, 535 N Barnhill Drive, Suite 150, Indianapolis, IN 46202, USA
- James E Steward: Department of Urology, Indiana University School of Medicine, 535 N Barnhill Drive, Suite 150, Indianapolis, IN 46202, USA
- Chandru P Sundaram: Department of Urology, Indiana University School of Medicine, 535 N Barnhill Drive, Suite 150, Indianapolis, IN 46202, USA
144
Navarrete-Welton AJ, Hashimoto DA. Current applications of artificial intelligence for intraoperative decision support in surgery. Front Med 2020; 14:369-381. [PMID: 32621201 DOI: 10.1007/s11684-020-0784-7] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2019] [Accepted: 03/14/2020] [Indexed: 02/06/2023]
Abstract
Research into medical artificial intelligence (AI) has made significant advances in recent years, including surgical applications. This scoping review investigated AI-based decision support systems targeted at the intraoperative phase of surgery and found a wide range of technological approaches applied across several surgical specialties. Within the 21 included papers, three main categories of motivation were identified for developing such technologies: (1) augmenting the information available to surgeons, (2) accelerating intraoperative pathology, and (3) recommending surgical steps. While many of the proposals hold promise for improving patient outcomes, important methodological shortcomings were observed in most of the reviewed papers, making it difficult to assess the clinical significance of the reported performance statistics. Despite these limitations, the current state of this field suggests that a number of opportunities exist for future researchers and clinicians to work on AI for surgical decision support, with exciting implications for improving surgical care.
Affiliation(s)
- Allison J Navarrete-Welton: Surgical Artificial Intelligence and Innovation Laboratory, Massachusetts General Hospital, Boston, MA, 02114, USA
- Daniel A Hashimoto: Surgical Artificial Intelligence and Innovation Laboratory, Massachusetts General Hospital, Boston, MA, 02114, USA; Harvard Medical School, Boston, MA, 02114, USA
145
Pugh CM, Hashimoto DA, Korndorffer JR. The what? How? And Who? Of video based assessment. Am J Surg 2020; 221:13-18. [PMID: 32665080 DOI: 10.1016/j.amjsurg.2020.06.027] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2020] [Accepted: 06/19/2020] [Indexed: 01/25/2023]
Abstract
BACKGROUND Currently, there is significant variability in the development, implementation, and overarching goals of video review for the assessment of surgical performance. METHODS This paper evaluates the current methods by which video review is used to evaluate surgical performance and identifies which processes are critical for successful, widespread implementation of video-based assessment. RESULTS Despite advances in video capture technology and growing interest in video-based assessment, there is a notable gap in the implementation and longitudinal use of formative and summative assessment using video. CONCLUSION Validity, scalability, and discoverability are current, but removable, barriers to video-based assessment.
Affiliation(s)
- Carla M Pugh: Department of Surgery, Stanford University School of Medicine, 300 Pasteur Drive, Stanford, CA, 94305, USA
- Daniel A Hashimoto: Department of Surgery, Massachusetts General Hospital, 55 Fruit Street, Boston, MA, 02114, USA
- James R Korndorffer: Department of Surgery, Stanford University School of Medicine, 300 Pasteur Drive, Stanford, CA, 94305, USA
146
Pradarelli JC, Pavuluri Quamme SR, Yee A, Faerber AE, Dombrowski JC, King C, Greenberg CC. Surgical coaching to achieve the ABMS vision for the future of continuing board certification. Am J Surg 2020; 221:4-10. [PMID: 32631596 DOI: 10.1016/j.amjsurg.2020.06.014] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Revised: 06/12/2020] [Accepted: 06/12/2020] [Indexed: 01/26/2023]
Abstract
In February 2019, the American Board of Medical Specialties (ABMS) released the final report of the Continuing Board Certification: Vision for the Future initiative, issuing strong recommendations to replace ineffective, traditional mechanisms for physicians' maintenance of certification with meaningful strategies that strengthen professional self-regulation and simultaneously engender public trust. The Vision report charges ABMS Member Boards, including the American Board of Surgery (ABS), to develop and implement a more formative, less summative approach to continuing certification. To realize the ABMS's Vision in surgery, new programs must support the assessment of surgeons' performance in practice, identification of individualized performance gaps, tailored goals to address those gaps, and execution of personalized action plans with accountability and longitudinal support. Peer surgical coaching, especially when paired with video-based assessment, provides a structured approach that can meet this need. Surgical coaching was one of the approaches to continuing professional development that was discussed at an ABS-sponsored retreat in January 2020; this commentary review provides an overview of that discussion. The professional surgical societies, in partnership with the ABS, are uniquely positioned to implement surgical coaching programs to support the continuing certification of their membership. In this article, we provide historical context for board certification in surgery, interpret how the ABMS's Vision applies to surgical performance, and highlight recent developments in video-based assessment and peer surgical coaching. We propose surgical coaching as a foundational strategy for accomplishing the ABMS's Vision for continuing board certification in surgery.
Affiliation(s)
- Jason C Pradarelli: The Academy for Surgical Coaching, Madison, WI, USA; Brigham and Women's Hospital Department of Surgery, Boston, MA, USA
- Sudha R Pavuluri Quamme: The Academy for Surgical Coaching, Madison, WI, USA; University of Wisconsin Department of Surgery, Wisconsin Surgical Outcomes Research Program, Madison, WI, USA
- Andrew Yee: The Academy for Surgical Coaching, Madison, WI, USA; Washington University Department of Surgery, St Louis, MO, USA
- Cara King: The Academy for Surgical Coaching, Madison, WI, USA; Cleveland Clinic Obstetrics, Gynecology & Women's Health Institute, Cleveland, OH, USA
- Caprice C Greenberg: The Academy for Surgical Coaching, Madison, WI, USA; University of Wisconsin Department of Surgery, Wisconsin Surgical Outcomes Research Program, Madison, WI, USA
147
Mangano A, Valle V, Dreifuss NH, Aguiluz G, Masrur MA. Role of Artificial Intelligence (AI) in Surgery: Introduction, General Principles, and Potential Applications. Surg Technol Int 2020; 38:17-21. [PMID: 33370842 DOI: 10.52198/21.sti.38.so1369] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Artificial intelligence (AI) is an interdisciplinary field aimed at developing algorithms that endow machines with the capability of executing cognitive tasks. The number of publications regarding AI and surgery has increased dramatically over the last two decades, a phenomenon partly explained by the exponential growth in the computing power available to the largest AI training runs. AI can be classified into different sub-domains with extensive potential clinical applications in the surgical setting, and it will increasingly become a major component of clinical practice in surgery. The aim of the present narrative review is to give a general introduction to and summarized overview of AI, together with remarks on potential applications and future perspectives in surgery.
Affiliation(s)
- Alberto Mangano
- Division of General, Minimally Invasive and Robotic Surgery, University of Illinois at Chicago, Chicago, IL, USA
| | - Valentina Valle
- Division of General, Minimally Invasive and Robotic Surgery, University of Illinois at Chicago, Chicago, IL, USA
| | - Nicolas H Dreifuss
- Division of General, Minimally Invasive and Robotic Surgery, University of Illinois at Chicago, Chicago, IL, USA
| | - Gabriela Aguiluz
- Division of General, Minimally Invasive and Robotic Surgery, University of Illinois at Chicago, Chicago, IL, USA
| | - Mario A Masrur
- Division of General, Minimally Invasive and Robotic Surgery, University of Illinois at Chicago, Chicago, IL, USA
| |
|
148
|
Kitaguchi D, Takeshita N, Matsuzaki H, Oda T, Watanabe M, Mori K, Kobayashi E, Ito M. Automated laparoscopic colorectal surgery workflow recognition using artificial intelligence: Experimental research. Int J Surg 2020; 79:88-94. [PMID: 32413503 DOI: 10.1016/j.ijsu.2020.05.015] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2020] [Revised: 05/01/2020] [Accepted: 05/05/2020] [Indexed: 12/13/2022]
Abstract
BACKGROUND Identifying laparoscopic surgical videos using artificial intelligence (AI) facilitates the automation of several currently time-consuming manual processes, including video analysis, indexing, and video-based skill assessment. This study aimed to construct a large annotated dataset comprising laparoscopic colorectal surgery (LCRS) videos from multiple institutions and to evaluate the accuracy of automatic recognition of surgical phase, action, and tool by combining this dataset with AI. MATERIALS AND METHODS A total of 300 intraoperative videos were collected from 19 high-volume centers. The surgical workflow was classified into 9 phases and 3 actions, and the areas of 5 tools were annotated by painting. More than 82 million frames were annotated for the phase and action classification task, and 4000 frames for the tool segmentation task. Of these frames, 80% were used for the training dataset and 20% for the test dataset. A convolutional neural network (CNN) was used to analyze the videos. Intersection over union (IoU) was used as the evaluation metric for tool recognition. RESULTS The overall accuracies for the automatic surgical phase and action classification tasks were 81.0% and 83.2%, respectively. The mean IoU for the automatic segmentation of the 5 tools was 51.2%. CONCLUSIONS A large annotated dataset of LCRS videos was constructed, and phase, action, and tool were recognized with high accuracy using AI. Our dataset has potential uses in medical applications such as automatic video indexing and surgical skill assessment, and making it openly available to the computer vision community should help improve CNN models.
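As context for the evaluation metric reported above, the following is a minimal sketch of per-class intersection over union (IoU) for segmentation masks, the metric this study used for tool recognition. The array shapes, tool class IDs, and randomly generated label maps are illustrative assumptions for the sketch, not details taken from the paper.
```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU between two binary segmentation masks: |A and B| / |A or B|."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(intersection) / float(union)

# Hypothetical usage: mean IoU over 5 tool classes (class IDs 1..5 in an
# integer label map), mirroring the paper's mean-IoU style of evaluation.
# The random label maps below stand in for model output and ground truth.
pred_map = np.random.randint(0, 6, size=(480, 640))
gt_map = np.random.randint(0, 6, size=(480, 640))
mean_iou = np.mean([mask_iou(pred_map == c, gt_map == c) for c in range(1, 6)])
print(f"mean IoU over 5 tool classes: {mean_iou:.3f}")
```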
Affiliation(s)
- Daichi Kitaguchi
- Surgical Device Innovation Office, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan; Department of Colorectal Surgery, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan; Department of Gastrointestinal and Hepato-Biliary-Pancreatic Surgery, Faculty of Medicine, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki, 305-8575, Japan
| | - Nobuyoshi Takeshita
- Surgical Device Innovation Office, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan; Department of Colorectal Surgery, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan.
| | - Hiroki Matsuzaki
- Surgical Device Innovation Office, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
| | - Tatsuya Oda
- Department of Gastrointestinal and Hepato-Biliary-Pancreatic Surgery, Faculty of Medicine, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki, 305-8575, Japan
| | - Masahiko Watanabe
- Department of Surgery, Kitasato University School of Medicine, 1-15-1 Kitasato, Minami-ku, Sagamihara, Kanagawa, 252-0374, Japan
| | - Kensaku Mori
- Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi, 464-8601, Japan
| | - Etsuko Kobayashi
- Institute of Advanced Biomedical Engineering and Science, Tokyo Women's Medical University, 8-1 Kawada-cho, Shinjuku-ku, Tokyo, 162-8666, Japan
| | - Masaaki Ito
- Surgical Device Innovation Office, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan; Department of Colorectal Surgery, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan.
| |
|
149
|
|
150
|
Chandawarkar A, Chartier C, Kanevsky J, Cress PE. A Practical Approach to Artificial Intelligence in Plastic Surgery. Aesthet Surg J Open Forum 2020; 2:ojaa001. [PMID: 33791621 PMCID: PMC7671238 DOI: 10.1093/asjof/ojaa001] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Abstract
Understanding the intersection of technology and plastic surgery has been, and will remain, essential to positioning plastic surgeons at the forefront of surgical innovation. This account of the current and future applications of artificial intelligence (AI) in reconstructive and aesthetic surgery introduces the subset of problems amenable to support from this technology. It equips plastic surgeons with the knowledge to navigate technical conversations with peers, trainees, patients, and technical partners, to collaborate, and to usher in a new era of technology in plastic surgery. From the mathematical basis of AI to its commercially viable applications, the topics introduced herein constitute a framework for the design and execution of quantitative studies that will improve outcomes and benefit patients. Finally, adherence to the principles of quality data collection will leverage and amplify plastic surgeons' creativity and undoubtedly drive the field forward.
Affiliation(s)
- Akash Chandawarkar
- Corresponding Author: Dr Akash Chandawarkar, Johns Hopkins University School of Medicine, Department of Plastic and Reconstructive Surgery, 601 N. Caroline Street, Baltimore, MD 21287. E-mail: ; Twitter: @AChandMD
| | | | | | | |
|