1.
Mita K, Kobayashi N, Takahashi K, Sakai T, Shimaguchi M, Kouno M, Toyota N, Hatano M, Toyota T, Sasaki J. Anatomical recognition of dissection layers, nerves, vas deferens, and microvessels using artificial intelligence during transabdominal preperitoneal inguinal hernia repair. Hernia 2024; 29:52. PMID: 39724499. DOI: 10.1007/s10029-024-03223-5.
Abstract
PURPOSE In laparoscopic inguinal hernia surgery, proper recognition of loose connective tissue, nerves, the vas deferens, and microvessels is important to prevent postoperative complications such as recurrence, pain, sexual dysfunction, and bleeding. EUREKA (Anaut Inc., Tokyo, Japan) is an artificial intelligence (AI)-based anatomical recognition system that can confirm these anatomical landmarks intraoperatively. In this study, we validated the accuracy of EUREKA in recognizing dissection layers, nerves, the vas deferens, and microvessels during transabdominal preperitoneal inguinal hernia repair (TAPP). METHODS We compared EUREKA's recognition of loose connective tissue, nerves, the vas deferens, and microvessels in TAPP videos with the original surgical video and examined whether EUREKA accurately identified these structures. Intersection over Union (IoU) and F1/Dice scores were calculated to quantitatively evaluate the AI-predicted images. RESULTS The mean IoU and F1/Dice scores were 0.33 and 0.50 for connective tissue, 0.24 and 0.38 for nerves, 0.50 and 0.66 for the vas deferens, and 0.30 and 0.45 for microvessels, respectively. Compared with images without EUREKA visualization, dissection layers were very clearly recognized and displayed when appropriate tension was applied.
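The IoU and F1/Dice values reported above are standard overlap metrics for segmentation masks. A minimal sketch of both, assuming boolean NumPy masks of equal shape; the array shapes and names are illustrative, not EUREKA's actual output format:

```python
# Minimal sketch: IoU and F1/Dice for a binary segmentation mask.
# Masks are boolean NumPy arrays of the same shape; names and shapes
# are illustrative, not taken from the EUREKA system.
import numpy as np

def iou_and_dice(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """Return (IoU, F1/Dice) for two boolean masks of equal shape."""
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    iou = intersection / union if union else 1.0   # two empty masks agree
    dice = 2 * intersection / total if total else 1.0
    return float(iou), float(dice)

# Toy example: a predicted mask overlapping half of the ground truth.
truth = np.zeros((4, 4), dtype=bool); truth[:, :2] = True  # 8 pixels
pred = np.zeros((4, 4), dtype=bool);  pred[:, 1:3] = True  # 8 pixels, 4 overlap
print(iou_and_dice(pred, truth))  # (0.333..., 0.5)
```

Note that Dice = 2·IoU/(1 + IoU), which is why each reported pair above (e.g., 0.33 and 0.50 for connective tissue) is internally consistent.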
Affiliation(s)
- Kazuhito Mita: Department of Surgery, Tsudanuma Central General Hospital, 1-9-17 Yatsu, Narashino, Japan
- Nao Kobayashi: Department of Surgery, Tsudanuma Central General Hospital, 1-9-17 Yatsu, Narashino, Japan; Anaut Inc., Tokyo, Japan
- Kunihiko Takahashi: Department of Surgery, Tsudanuma Central General Hospital, 1-9-17 Yatsu, Narashino, Japan
- Takashi Sakai: Department of Surgery, Tsudanuma Central General Hospital, 1-9-17 Yatsu, Narashino, Japan
- Mayu Shimaguchi: Department of Surgery, Tsudanuma Central General Hospital, 1-9-17 Yatsu, Narashino, Japan
- Michitaka Kouno: Department of Surgery, Tsudanuma Central General Hospital, 1-9-17 Yatsu, Narashino, Japan
- Naoyuki Toyota: Department of Surgery, Tsudanuma Central General Hospital, 1-9-17 Yatsu, Narashino, Japan
- Minoru Hatano: Department of Surgery, Tsudanuma Central General Hospital, 1-9-17 Yatsu, Narashino, Japan
- Tsuyoshi Toyota: Department of Surgery, Tsudanuma Central General Hospital, 1-9-17 Yatsu, Narashino, Japan
- Junichi Sasaki: Department of Surgery, Tsudanuma Central General Hospital, 1-9-17 Yatsu, Narashino, Japan
2.
Furube T, Takeuchi M, Kawakubo H, Noma K, Maeda N, Daiko H, Ishiyama K, Otsuka K, Sato Y, Koyanagi K, Tajima K, Garcia RN, Maeda Y, Matsuda S, Kitagawa Y. Usefulness of an Artificial Intelligence Model in Recognizing Recurrent Laryngeal Nerves During Robot-Assisted Minimally Invasive Esophagectomy. Ann Surg Oncol 2024; 31:9344-9351. PMID: 39266790. DOI: 10.1245/s10434-024-16157-0.
Abstract
BACKGROUND Recurrent laryngeal nerve (RLN) palsy is a common complication of esophagectomy, and its main risk factor is reportedly the intraoperative procedure, which depends on the surgeon's experience. We aimed to improve surgeons' recognition of the RLN during robot-assisted minimally invasive esophagectomy (RAMIE) by developing an artificial intelligence (AI) model. METHODS We used 120 RAMIE videos from four institutions to develop the AI model and eight surgical videos from a fifth institution to evaluate it. AI performance was measured using the Intersection over Union (IoU). Furthermore, to verify the model's clinical validity, we conducted two experiments in which eight trainee surgeons, with or without AI assistance, identified the RLN early and recognized its location. RESULTS The IoUs for AI recognition of the right and left RLNs were 0.40 ± 0.26 and 0.34 ± 0.27, respectively. Recognition of the right RLN at the beginning of right RLN lymph node dissection (LND) by surgeons with AI (81.3%) was significantly more accurate (p = 0.004) than by surgeons without AI (46.9%). The IoU for the right RLN during right RLN LND achieved by surgeons with AI (0.59 ± 0.18) was significantly higher (p = 0.010) than that by surgeons without AI (0.40 ± 0.29). CONCLUSIONS Our AI system improved surgeons' recognition of anatomical structures in RAMIE with high accuracy. In right RLN LND especially, surgeons recognized the RLN more quickly and accurately when using the AI model.
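The abstract reports group comparisons but does not state which statistical tests were used. A hedged sketch of one way to reproduce such comparisons, assuming Fisher's exact test for the recognition rates and a Mann-Whitney U test for the IoU scores; the counts and score lists are illustrative, chosen only to match the reported percentages:

```python
# Hedged sketch of the with-AI vs. without-AI comparison described above.
# The tests and counts are assumptions, not taken from the paper.
from scipy.stats import fisher_exact, mannwhitneyu

# Recognition of the right RLN at the start of dissection:
# rows = [with AI, without AI], cols = [recognized, missed].
# 26/32 = 81.3% and 15/32 = 46.9%, matching the reported percentages.
table = [[26, 6], [15, 17]]
odds_ratio, p_rate = fisher_exact(table)

# Per-trial IoU scores (placeholder lists; the paper reports
# 0.59 +/- 0.18 vs. 0.40 +/- 0.29 for the trainees' annotations).
iou_with_ai = [0.55, 0.62, 0.71, 0.48, 0.60, 0.66, 0.52, 0.58]
iou_without_ai = [0.35, 0.41, 0.22, 0.55, 0.38, 0.47, 0.30, 0.52]
_, p_iou = mannwhitneyu(iou_with_ai, iou_without_ai, alternative="two-sided")

print(f"rate comparison p = {p_rate:.3f}, IoU comparison p = {p_iou:.3f}")
```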
Affiliation(s)
- Tasuku Furube: Department of Surgery, Keio University School of Medicine, Shinjuku City, Tokyo, Japan
- Masashi Takeuchi: Department of Surgery, Keio University School of Medicine, Shinjuku City, Tokyo, Japan
- Hirofumi Kawakubo: Department of Surgery, Keio University School of Medicine, Shinjuku City, Tokyo, Japan
- Kazuhiro Noma: Department of Gastroenterological Surgery, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
- Naoaki Maeda: Department of Gastroenterological Surgery, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
- Hiroyuki Daiko: Department of Esophageal Surgery, National Cancer Center Hospital, Chuo City, Tokyo, Japan
- Koshiro Ishiyama: Department of Esophageal Surgery, National Cancer Center Hospital, Chuo City, Tokyo, Japan
- Koji Otsuka: Esophageal Cancer Center, Showa University Hospital, Shinagawa City, Tokyo, Japan
- Yoshihito Sato: Esophageal Cancer Center, Showa University Hospital, Shinagawa City, Tokyo, Japan
- Kazuo Koyanagi: Department of Gastroenterological Surgery, Tokai University School of Medicine, Isehara, Kanagawa, Japan
- Kohei Tajima: Department of Gastroenterological Surgery, Tokai University School of Medicine, Isehara, Kanagawa, Japan
- Rodrigo Nicida Garcia: Department of Gastroenterology, Digestive Surgery Division, Hospital das Clínicas HCFMUSP, Faculdade de Medicina, Universidade de São Paulo, São Paulo, Brazil
- Yusuke Maeda: Department of Surgery, Keio University School of Medicine, Shinjuku City, Tokyo, Japan
- Satoru Matsuda: Department of Surgery, Keio University School of Medicine, Shinjuku City, Tokyo, Japan
- Yuko Kitagawa: Department of Surgery, Keio University School of Medicine, Shinjuku City, Tokyo, Japan
3.
Furube T, Takeuchi M, Kawakubo H, Matsuda S, Kitagawa Y. ASO Author Reflections: Can Artificial Intelligence Assist in the Recognition of Recurrent Laryngeal Nerve During Robot-Assisted Minimally Invasive Esophagectomy? Ann Surg Oncol 2024; 31:9054-9055. PMID: 39343817. DOI: 10.1245/s10434-024-16273-x.
Affiliation(s)
- Tasuku Furube: Department of Surgery, Keio University School of Medicine, Shinjuku-ku, Tokyo, Japan
- Masashi Takeuchi: Department of Surgery, Keio University School of Medicine, Shinjuku-ku, Tokyo, Japan
- Hirofumi Kawakubo: Department of Surgery, Keio University School of Medicine, Shinjuku-ku, Tokyo, Japan
- Satoru Matsuda: Department of Surgery, Keio University School of Medicine, Shinjuku-ku, Tokyo, Japan
- Yuko Kitagawa: Department of Surgery, Keio University School of Medicine, Shinjuku-ku, Tokyo, Japan
4.
Strong JS, Furube T, Takeuchi M, Kawakubo H, Maeda Y, Matsuda S, Fukuda K, Nakamura R, Kitagawa Y. Evaluating surgical expertise with AI-based automated instrument recognition for robotic distal gastrectomy. Ann Gastroenterol Surg 2024; 8:611-619. PMID: 38957567. PMCID: PMC11216797. DOI: 10.1002/ags3.12784.
Abstract
Introduction The complexity of robotic distal gastrectomy (RDG) gives reason to assess surgeons' surgical skill, as varying skill levels affect patient outcomes. We aimed to investigate how a novel artificial intelligence (AI) model can evaluate surgical skill in RDG by recognizing surgical instruments. Methods Fifty-five consecutive robotic surgical videos of RDG for gastric cancer were analyzed. We used Deeplab, a multi-stage temporal convolutional network, trained on 1234 manually annotated images; the model was then tested on 149 annotated images for accuracy. Deep learning metrics such as Intersection over Union (IoU) and accuracy were assessed, and experienced and non-experienced surgeons were compared based on their usage of instruments during infrapyloric lymph node dissection. Results We annotated 540 Cadiere forceps, 898 fenestrated bipolars, 359 suction tubes, 307 Maryland bipolars, 688 harmonic scalpels, 400 staplers, and 59 large clips. The average IoU and accuracy were 0.82 ± 0.12 and 87.2 ± 11.9%, respectively. Moreover, each instrument's AI-predicted usage as a percentage of the overall infrapyloric lymphadenectomy duration was compared between groups; stapler and large clip usage was significantly shorter in the experienced group than in the non-experienced group. Conclusions This study is the first to report that surgical skill in RDG can be determined accurately by an AI model. Our AI automatically generates instance segmentations of the surgical instruments present in this procedure, allowing unbiased, more accessible assessment of RDG surgical skill.
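The skill comparison above rests on each instrument's usage as a share of the dissection phase. A minimal sketch of that aggregation step, assuming per-frame instrument sets from a segmentation model; the instrument names and toy frames are illustrative, not the authors' pipeline:

```python
# Hedged sketch: aggregating per-frame instrument predictions into the
# "percentage of each instrument's usage" statistic described above.
# One possible derivation from segmentation output, not the authors' code.
from collections import Counter

def usage_percentages(frame_labels: list[set[str]]) -> dict[str, float]:
    """frame_labels[i] is the set of instruments detected in frame i.
    Returns each instrument's share of the phase duration, in percent."""
    if not frame_labels:
        return {}
    counts: Counter[str] = Counter()
    for instruments in frame_labels:
        counts.update(instruments)
    return {name: 100.0 * n / len(frame_labels) for name, n in counts.items()}

# Toy phase of 6 frames during infrapyloric lymph node dissection.
frames = [
    {"cadiere_forceps", "harmonic_scalpel"},
    {"cadiere_forceps", "harmonic_scalpel"},
    {"cadiere_forceps", "suction_tube"},
    {"fenestrated_bipolar"},
    {"fenestrated_bipolar", "stapler"},
    {"stapler"},
]
print(usage_percentages(frames))  # e.g. cadiere_forceps: 50.0, stapler: 33.3
```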
Affiliation(s)
- James S. Strong: Department of Surgery, Keio University School of Medicine, Tokyo, Japan; Harvard College, Harvard University, Cambridge, Massachusetts, USA
- Tasuku Furube: Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Masashi Takeuchi: Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Hirofumi Kawakubo: Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Yusuke Maeda: Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Satoru Matsuda: Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Kazumasa Fukuda: Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Rieko Nakamura: Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Yuko Kitagawa: Department of Surgery, Keio University School of Medicine, Tokyo, Japan
5.
Horita K, Hida K, Itatani Y, Fujita H, Hidaka Y, Yamamoto G, Ito M, Obama K. Real-time detection of active bleeding in laparoscopic colectomy using artificial intelligence. Surg Endosc 2024; 38:3461-3469. PMID: 38760565. DOI: 10.1007/s00464-024-10874-z.
Abstract
BACKGROUND Most intraoperative adverse events (iAEs) result from surgeons' errors, and bleeding constitutes the majority of iAEs. Timely recognition of active bleeding is important to ensure safe surgery, and artificial intelligence (AI) has great potential for detecting active bleeding and providing real-time surgical support. This study aimed to develop a real-time AI model to detect active intraoperative bleeding. METHODS We extracted 27 surgical videos from a nationwide multi-institutional surgical video database in Japan and divided them at the patient level into three sets: training (n = 21), validation (n = 3), and testing (n = 3). We then extracted the bleeding scenes and labeled active bleeding and blood pooling distinctly, frame by frame. We used a pre-trained YOLOv7_6w model and trained it to detect both active bleeding and blood pooling. The Average Precision at an Intersection over Union threshold of 0.5 (AP.50) for active bleeding and the frames per second (FPS) were quantified. In addition, we administered two 5-point Likert-scale questionnaires (5 = Excellent, 4 = Good, 3 = Fair, 2 = Poor, 1 = Fail) on sensitivity (the sensitivity score) and the number of overdetected areas (the overdetection score) to investigate surgeons' assessment of the model. RESULTS We annotated 34,117 images covering 254 bleeding events. The AP.50 for active bleeding in the developed model was 0.574 and the FPS was 48.5. Twenty surgeons answered the two questionnaires, giving the model a sensitivity score of 4.92 and an overdetection score of 4.62. CONCLUSIONS We developed an AI model that detects active bleeding at real-time processing speed and can be used to provide real-time surgical support.
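AP.50 is built on matching predicted boxes to ground-truth boxes at an IoU threshold of 0.5. A sketch of that matching step, assuming (x1, y1, x2, y2) boxes and confidence-sorted predictions; the greedy matcher and toy boxes are illustrative, not the exact evaluation protocol used with YOLOv7_6w:

```python
# Hedged sketch of the building block behind AP@0.5: matching predicted
# boxes to ground truth at IoU >= 0.5 and counting TP/FP/FN.

def box_iou(a, b) -> float:
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def match_detections(preds, truths, thr=0.5):
    """preds sorted by confidence (descending). Returns (TP, FP, FN)."""
    unmatched = list(truths)
    tp = fp = 0
    for p in preds:
        best = max(unmatched, key=lambda t: box_iou(p, t), default=None)
        if best is not None and box_iou(p, best) >= thr:
            unmatched.remove(best)  # each ground-truth box matches once
            tp += 1
        else:
            fp += 1
    return tp, fp, len(unmatched)

# One frame: two active-bleeding boxes, three predictions.
truths = [(10, 10, 50, 50), (80, 20, 120, 60)]
preds = [(12, 8, 52, 48), (75, 25, 118, 58), (200, 200, 220, 220)]
print(match_detections(preds, truths))  # (2, 1, 0)
```

Sweeping the confidence threshold over such counts yields the precision-recall curve whose area is the reported AP.50 of 0.574.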
Affiliation(s)
- Kenta Horita: Department of Surgery, Kyoto University Graduate School of Medicine, 54 Shogoin-Kawahara-Cho, Sakyo-Ku, Kyoto, 606-8507, Japan
- Koya Hida: Department of Surgery, Kyoto University Graduate School of Medicine, 54 Shogoin-Kawahara-Cho, Sakyo-Ku, Kyoto, 606-8507, Japan
- Yoshiro Itatani: Department of Surgery, Kyoto University Graduate School of Medicine, 54 Shogoin-Kawahara-Cho, Sakyo-Ku, Kyoto, 606-8507, Japan
- Haruku Fujita: Department of Surgery, Kyoto University Graduate School of Medicine, 54 Shogoin-Kawahara-Cho, Sakyo-Ku, Kyoto, 606-8507, Japan
- Yu Hidaka: Department of Biomedical Statistics and Bioinformatics, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Goshiro Yamamoto: Division of Medical Information Technology and Administration Planning, Kyoto University, Kyoto, Japan
- Masaaki Ito: Surgical Device Innovation Office, National Cancer Center Hospital East, Chiba, Japan; Department of Colorectal Surgery, National Cancer Center Hospital East, Chiba, Japan
- Kazutaka Obama: Department of Surgery, Kyoto University Graduate School of Medicine, 54 Shogoin-Kawahara-Cho, Sakyo-Ku, Kyoto, 606-8507, Japan
6.
Furube T, Takeuchi M, Kawakubo H, Maeda Y, Matsuda S, Fukuda K, Nakamura R, Kato M, Yahagi N, Kitagawa Y. Automated artificial intelligence-based phase-recognition system for esophageal endoscopic submucosal dissection (with video). Gastrointest Endosc 2024; 99:830-838. PMID: 38185182. DOI: 10.1016/j.gie.2023.12.037.
Abstract
BACKGROUND AND AIMS Endoscopic submucosal dissection (ESD) for superficial esophageal cancer is a multistep treatment involving several endoscopic processes. Although analyzing each phase separately is worthwhile, it is not realistic in practice owing to the considerable manpower required. To solve this problem, we aimed to establish a state-of-the-art artificial intelligence (AI)-based system, specifically an automated phase-recognition system that can identify each endoscopic phase from video images. METHODS Ninety-four videos of ESD procedures for superficial esophageal cancer were evaluated in this single-center study. A deep neural network-based phase-recognition system was developed in an automated manner to recognize each of the endoscopic phases. The system was trained on videos annotated and verified by 2 GI endoscopists. RESULTS The overall accuracy of the AI model for automated phase recognition was 90%, and the average precision, recall, and F value were 91%, 90%, and 90%, respectively. Two representative ESD videos with model predictions indicated the usability of AI in clinical practice. CONCLUSIONS We demonstrated that an AI-based automated phase-recognition system for esophageal ESD can be established with high accuracy. To the best of our knowledge, this is the first report on automated recognition of ESD treatment phases. Because this system enables detailed per-phase analysis, collecting large volumes of data in the future may help identify quality indicators for treatment techniques and uncover unmet medical needs that necessitate new treatment methods and devices.
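The accuracy, precision, recall, and F values above are frame-level classification metrics. A minimal sketch of computing them per phase, assuming one phase label per frame; the phase names and label sequences are illustrative, not the paper's annotation scheme:

```python
# Hedged sketch: frame-level precision, recall, and F value per phase.
# Phase names and label sequences are placeholders, not the ESD dataset.
PHASES = ["marking", "injection", "incision", "submucosal_dissection"]

def per_phase_prf(truth: list[str], pred: list[str]) -> dict[str, tuple[float, float, float]]:
    out = {}
    for phase in PHASES:
        tp = sum(t == phase and p == phase for t, p in zip(truth, pred))
        fp = sum(t != phase and p == phase for t, p in zip(truth, pred))
        fn = sum(t == phase and p != phase for t, p in zip(truth, pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        out[phase] = (precision, recall, f)
    return out

# Ten frames with one boundary misprediction between marking and injection.
truth = ["marking"] * 3 + ["injection"] * 3 + ["incision"] * 4
pred  = ["marking"] * 2 + ["injection"] * 4 + ["incision"] * 4
for phase, (p, r, f) in per_phase_prf(truth, pred).items():
    print(f"{phase:22s} P={p:.2f} R={r:.2f} F={f:.2f}")
```

Averaging these per-phase values over all phases and videos gives the kind of summary figures reported in the abstract.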
Affiliation(s)
- Tasuku Furube: Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Masashi Takeuchi: Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Hirofumi Kawakubo: Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Yusuke Maeda: Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Satoru Matsuda: Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Kazumasa Fukuda: Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Rieko Nakamura: Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Motohiko Kato: Center for Diagnostic and Therapeutic Endoscopy, Keio University School of Medicine, Tokyo, Japan
- Naohisa Yahagi: Division of Research and Development for Minimally Invasive Treatment, Cancer Center, Keio University School of Medicine, Tokyo, Japan
- Yuko Kitagawa: Department of Surgery, Keio University School of Medicine, Tokyo, Japan
7.
Takeuchi M, Kitagawa Y. Artificial intelligence and surgery. Ann Gastroenterol Surg 2024; 8:4-5. PMID: 38250693. PMCID: PMC10797843. DOI: 10.1002/ags3.12766.
Affiliation(s)
- Masashi Takeuchi: Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Yuko Kitagawa: Department of Surgery, Keio University School of Medicine, Tokyo, Japan
8.
Ortenzi M, Rapoport Ferman J, Antolin A, Bar O, Zohar M, Perry O, Asselmann D, Wolf T. A novel high accuracy model for automatic surgical workflow recognition using artificial intelligence in laparoscopic totally extraperitoneal inguinal hernia repair (TEP). Surg Endosc 2023; 37:8818-8828. PMID: 37626236. PMCID: PMC10615930. DOI: 10.1007/s00464-023-10375-5.
Abstract
INTRODUCTION Artificial intelligence and computer vision are revolutionizing video analysis in minimally invasive surgery. This emerging technology has increasingly been leveraged for video segmentation, documentation, education, and formative assessment, and sophisticated new platforms can automatically present predetermined segments chosen by surgeons without requiring review of entire videos. This study aimed to validate and demonstrate the accuracy of the first reported AI-based computer vision algorithm that automatically recognizes surgical steps in videos of totally extraperitoneal (TEP) inguinal hernia repair. METHODS Videos of TEP procedures were manually labeled by a team of annotators trained to identify and label the surgical workflow according to six major steps; for bilateral hernias, an additional change-of-focus step was included. The videos were then used to train a computer vision AI algorithm, and performance accuracy was assessed against the manual annotations. RESULTS A total of 619 full-length TEP videos were analyzed: 371 were used to train the model, 93 for internal validation, and the remaining 155 as a test set to evaluate algorithm accuracy. The overall accuracy for the complete procedure was 88.8%. Per-step accuracy was highest for the hernia sac reduction step (94.3%) and lowest for the preperitoneal dissection step (72.2%). CONCLUSIONS These results indicate that the novel AI model provided fully automated video analysis with high accuracy. High-accuracy AI models that automate surgical video analysis allow surgical performance to be identified and monitored, providing mathematical metrics that can be stored, evaluated, and compared. As such, the proposed model can enable data-driven insights to improve surgical quality and demonstrate best practices in TEP procedures.
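Per-step accuracy for a workflow model is typically computed after smoothing noisy frame-level predictions into contiguous segments. A hedged sketch of one such post-processing step, a sliding majority filter; the step names and window size are illustrative, and the paper's actual post-processing is not described in the abstract:

```python
# Hedged sketch: smoothing per-frame step predictions with a sliding
# majority filter, then scoring per-step accuracy. Step names and the
# window size are assumptions for illustration only.
from collections import Counter

def smooth(labels: list[str], window: int = 5) -> list[str]:
    """Replace each frame's label with the majority label in a window."""
    half = window // 2
    out = []
    for i in range(len(labels)):
        lo, hi = max(0, i - half), min(len(labels), i + half + 1)
        out.append(Counter(labels[lo:hi]).most_common(1)[0][0])
    return out

def per_step_accuracy(truth: list[str], pred: list[str]) -> dict[str, float]:
    totals, hits = Counter(truth), Counter()
    for t, p in zip(truth, pred):
        if t == p:
            hits[t] += 1
    return {step: hits[step] / totals[step] for step in totals}

truth = ["access"] * 6 + ["preperitoneal_dissection"] * 6 + ["mesh_placement"] * 6
noisy = truth.copy()
noisy[3] = "mesh_placement"  # a single-frame misprediction
print(per_step_accuracy(truth, smooth(noisy)))  # the flicker is smoothed away
```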
Affiliation(s)
- Monica Ortenzi: Theator Inc., Palo Alto, CA, USA; Department of General and Emergency Surgery, Polytechnic University of Marche, Ancona, Italy
- Omri Bar: Theator Inc., Palo Alto, CA, USA