1. Yan L, Ebina K, Abe T, Kon M, Higuchi M, Hotta K, Furumido J, Iwahara N, Komizunai S, Tsujita T, Sase K, Chen X, Kurashima Y, Kikuchi H, Miyata H, Matsumoto R, Osawa T, Murai S, Shichinohe T, Murakami S, Senoo T, Watanabe M, Konno A, Shinohara N. Validation and motion analyses of laparoscopic radical nephrectomy with Thiel-embalmed cadavers. Curr Probl Surg 2024;61:101559. PMID: 39266126; DOI: 10.1016/j.cpsurg.2024.101559.
Abstract
PURPOSE Our aim was to develop practical training for laparoscopic surgery using Thiel-embalmed cadavers. Furthermore, in order to verbalize experts' motion characteristics and provide objective feedback to trainees, we initiated motion capture analyses of multiple surgical instruments simultaneously during the cadaveric training sessions. In the present study, we report our preliminary results. METHODS Participants voluntarily joined the cadaveric simulation training sessions and performed laparoscopic radical nephrectomy. After the sessions, scores for tissue similarity (face validity) and impression of educational merit (content validity) were collected from participants based on a 5-point Likert scale (tissue similarity: 5: very similar, 3: average, 1: very different; educational merit: 5: very high, 3: average, 1: very low). In addition, after additional IRB approval, we started motion capture (Mocap) analyses of 6 surgical instruments (scissors, vessel sealing system, grasping forceps, clip applier, right-angled forceps, and suction), using an infrared trinocular camera (120-Hz location recording). Mocap metrics were compared according to previous surgical experience (experts: ≥50 laparoscopic surgeries, intermediates: 10-49, novices: 0-9), using the Kruskal-Wallis test. RESULTS A total of 9 experts, 19 intermediates, and 15 novices participated in the present study. In terms of face validity, the mean scores were higher than 3, other than for the vena cava (mean score of 2.89). Participants agreed with the training value (usefulness for future skill improvement: mean score of 4.57). In terms of Mocap analysis, faster speed-related metrics (e.g., velocity, the distribution of tip velocity, acceleration, and jerk) for the scissors and vessel sealing system, a shorter path length of the grasping forceps, and lower dimensionless squared jerk values, indicating more purposeful motion of 4 surgical instruments (vessel sealing system, grasping forceps, clip applier, and suction), were observed in the more experienced group. CONCLUSIONS The Thiel-embalmed cadaver provides an excellent training opportunity for complex laparoscopic procedures with a high level of participant satisfaction, and may become a promising tool for a better objective understanding of surgical dexterity. In order to enrich formative feedback to trainees, we are now proceeding with Mocap analysis.
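For readers unfamiliar with the kinematic metrics cited above (path length, tip velocity, dimensionless squared jerk), the following Python sketch illustrates one common way to compute them from tracked instrument-tip positions. It assumes a (T, 3) array of positions sampled at 120 Hz and a standard duration/path-length normalisation for the jerk term; it is an illustrative example, not the authors' analysis pipeline.

```python
import numpy as np

def motion_metrics(positions, fs=120.0):
    """Illustrative kinematic metrics from a (T, 3) array of instrument-tip
    positions sampled at fs Hz: path length, mean tip speed, and a
    dimensionless squared jerk (lower values suggest smoother, more
    purposeful motion)."""
    positions = np.asarray(positions, dtype=float)
    dt = 1.0 / fs
    velocity = np.gradient(positions, dt, axis=0)       # (T, 3)
    speed = np.linalg.norm(velocity, axis=1)
    acceleration = np.gradient(velocity, dt, axis=0)
    jerk = np.gradient(acceleration, dt, axis=0)
    path_length = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))
    duration = (len(positions) - 1) * dt
    # Dimensionless squared jerk: integral of squared jerk magnitude,
    # scaled by duration^5 / path_length^2 (one common normalisation).
    dsj = np.sum(np.linalg.norm(jerk, axis=1) ** 2) * dt * duration ** 5 / path_length ** 2
    return {"path_length": path_length,
            "mean_speed": float(speed.mean()),
            "dimensionless_squared_jerk": float(dsj)}

# Example on a synthetic 2-second trajectory
t = np.linspace(0, 2, 240)
trajectory = np.stack([np.sin(t), np.cos(t), 0.1 * t], axis=1)
print(motion_metrics(trajectory))
```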
Affiliation(s)
- Lingbo Yan, Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Japan
- Koki Ebina, Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Japan
- Takashige Abe, Department of Renal and Genitourinary Surgery, Faculty of Medicine, Hokkaido University, Sapporo, Japan
- Masafumi Kon, Department of Renal and Genitourinary Surgery, Faculty of Medicine, Hokkaido University, Sapporo, Japan
- Madoka Higuchi, Department of Renal and Genitourinary Surgery, Faculty of Medicine, Hokkaido University, Sapporo, Japan
- Kiyohiko Hotta, Department of Renal and Genitourinary Surgery, Faculty of Medicine, Hokkaido University, Sapporo, Japan
- Jun Furumido, Department of Urology, Asahikawa Kousei Hospital, Asahikawa, Japan
- Naoya Iwahara, Department of Renal and Genitourinary Surgery, Faculty of Medicine, Hokkaido University, Sapporo, Japan
- Teppei Tsujita, Department of Mechanical Engineering, National Defense Academy of Japan, Yokosuka, Japan
- Kazuya Sase, Department of Mechanical Engineering and Intelligent Systems, Tohoku Gakuin University, Sendai, Japan
- Xiaoshuai Chen, Graduate School of Science and Technology, Hirosaki University, Hirosaki, Japan
- Yo Kurashima, Clinical Simulation Center, Faculty of Medicine, Hokkaido University, Sapporo, Japan
- Hiroshi Kikuchi, Department of Urology, Teine Keijinkai Hospital, Sapporo, Japan
- Haruka Miyata, Department of Renal and Genitourinary Surgery, Faculty of Medicine, Hokkaido University, Sapporo, Japan
- Ryuji Matsumoto, Department of Renal and Genitourinary Surgery, Faculty of Medicine, Hokkaido University, Sapporo, Japan
- Takahiro Osawa, Department of Renal and Genitourinary Surgery, Faculty of Medicine, Hokkaido University, Sapporo, Japan
- Sachiyo Murai, Department of Renal and Genitourinary Surgery, Faculty of Medicine, Hokkaido University, Sapporo, Japan
- Toshiaki Shichinohe, Department of Gastroenterological Surgery II, Faculty of Medicine, Hokkaido University, Sapporo, Japan; Center for Education Research and Innovation of Advanced Medical Technology, Hokkaido University Hospital, Sapporo, Japan
- Soichi Murakami, Center for Education Research and Innovation of Advanced Medical Technology, Hokkaido University Hospital, Sapporo, Japan
- Taku Senoo, Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Japan
- Masahiko Watanabe, Department of Anatomy, Faculty of Medicine, Hokkaido University, Sapporo, Japan
- Atsushi Konno, Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Japan
- Nobuo Shinohara, Department of Renal and Genitourinary Surgery, Faculty of Medicine, Hokkaido University, Sapporo, Japan
2. Furube T, Takeuchi M, Kawakubo H, Noma K, Maeda N, Daiko H, Ishiyama K, Otsuka K, Sato Y, Koyanagi K, Tajima K, Garcia RN, Maeda Y, Matsuda S, Kitagawa Y. Usefulness of an Artificial Intelligence Model in Recognizing Recurrent Laryngeal Nerves During Robot-Assisted Minimally Invasive Esophagectomy. Ann Surg Oncol 2024. PMID: 39266790; DOI: 10.1245/s10434-024-16157-0.
Abstract
BACKGROUND Recurrent laryngeal nerve (RLN) palsy is a common complication of esophagectomy, and its main risk factor is reportedly the intraoperative procedure, which is associated with the surgeon's experience. We aimed to improve surgeons' recognition of the RLN during robot-assisted minimally invasive esophagectomy (RAMIE) by developing an artificial intelligence (AI) model. METHODS We used 120 RAMIE videos from four institutions to develop an AI model and eight other surgical videos from another institution for AI model evaluation. AI performance was measured using the Intersection over Union (IoU). Furthermore, to verify the AI's clinical validity, we conducted two experiments in which eight trainee surgeons, with or without AI assistance, identified the RLN early and recognized its location. RESULTS The IoUs for AI recognition of the right and left RLNs were 0.40 ± 0.26 and 0.34 ± 0.27, respectively. Recognition of the presence of the right RLN at the beginning of right RLN lymph node dissection (LND) by surgeons with AI (81.3%) was significantly more accurate (p = 0.004) than that by surgeons without AI (46.9%). The IoU of the right RLN during right RLN LND as recognized by surgeons with AI (0.59 ± 0.18) was significantly higher (p = 0.010) than that by surgeons without AI (0.40 ± 0.29). CONCLUSIONS Our AI system improved surgeons' recognition of anatomical structures in RAMIE with high accuracy. Especially in right RLN LND, surgeons could recognize the RLN more quickly and accurately by using the AI model.
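The Intersection over Union (IoU) used here to score RLN recognition is a standard overlap measure between a predicted and a ground-truth region. A minimal sketch, assuming binary NumPy masks of equal shape:

```python
import numpy as np

def intersection_over_union(pred_mask, gt_mask):
    """IoU between two binary segmentation masks (e.g., predicted vs.
    ground-truth RLN regions). Returns a value in [0, 1]."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return intersection / union if union > 0 else 0.0

# Toy example on a 4x4 frame
pred = np.array([[0, 1, 1, 0]] * 4)
gt = np.array([[0, 0, 1, 1]] * 4)
print(intersection_over_union(pred, gt))  # 4 / 12 = 0.333...
```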
Affiliation(s)
- Tasuku Furube, Department of Surgery, Keio University School of Medicine, Shinjuku City, Tokyo, Japan
- Masashi Takeuchi, Department of Surgery, Keio University School of Medicine, Shinjuku City, Tokyo, Japan
- Hirofumi Kawakubo, Department of Surgery, Keio University School of Medicine, Shinjuku City, Tokyo, Japan
- Kazuhiro Noma, Department of Gastroenterological Surgery, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
- Naoaki Maeda, Department of Gastroenterological Surgery, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
- Hiroyuki Daiko, Department of Esophageal Surgery, National Cancer Center Hospital, Chuo City, Tokyo, Japan
- Koshiro Ishiyama, Department of Esophageal Surgery, National Cancer Center Hospital, Chuo City, Tokyo, Japan
- Koji Otsuka, Esophageal Cancer Center, Showa University Hospital, Shinagawa City, Tokyo, Japan
- Yoshihito Sato, Esophageal Cancer Center, Showa University Hospital, Shinagawa City, Tokyo, Japan
- Kazuo Koyanagi, Department of Gastroenterological Surgery, Tokai University School of Medicine, Isehara, Kanagawa, Japan
- Kohei Tajima, Department of Gastroenterological Surgery, Tokai University School of Medicine, Isehara, Kanagawa, Japan
- Rodrigo Nicida Garcia, Department of Gastroenterology, Digestive Surgery Division, Hospital das Clínicas HCFMUSP, Faculdade de Medicina, Universidade de São Paulo, São Paulo, Brazil
- Yusuke Maeda, Department of Surgery, Keio University School of Medicine, Shinjuku City, Tokyo, Japan
- Satoru Matsuda, Department of Surgery, Keio University School of Medicine, Shinjuku City, Tokyo, Japan
- Yuko Kitagawa, Department of Surgery, Keio University School of Medicine, Shinjuku City, Tokyo, Japan
3. You J, Cai H, Wang Y, Bian A, Cheng K, Meng L, Wang X, Gao P, Chen S, Cai Y, Peng B. Artificial intelligence automated surgical phases recognition in intraoperative videos of laparoscopic pancreatoduodenectomy. Surg Endosc 2024;38:4894-4905. PMID: 38958719; DOI: 10.1007/s00464-024-10916-6.
Abstract
BACKGROUND Laparoscopic pancreatoduodenectomy (LPD) is one of the most challenging operations and has a long learning curve. Artificial intelligence (AI)-automated surgical phase recognition in intraoperative videos has many potential applications in surgical education and could help shorten the learning curve, but no study has made this breakthrough in LPD. Herein, we aimed to build AI models to recognize the surgical phases in LPD and explore the performance characteristics of these models. METHODS Among 69 LPD videos from a single surgical team, we used 42 in the building group to establish the models and the remaining 27 videos in the analysis group to assess the models' performance characteristics. We annotated 13 surgical phases of LPD, including 4 key phases and 9 necessary phases. Two minimally invasive pancreatic surgeons annotated all the videos. We built two AI models, one for key-phase and one for necessary-phase recognition, based on convolutional neural networks. The overall performance of the AI models was determined mainly by mean average precision (mAP). RESULTS Overall mAPs of the AI models in the test set of the building group were 89.7% and 84.7% for key phases and necessary phases, respectively. In the 27-video analysis group, overall mAPs were 86.8% and 71.2%, with maximum mAPs of 98.1% and 93.9%. Model recognition errors often coincided with disagreements between the surgeons' annotations, and the AI models performed poorly in cases with anatomic variation or lesions involving adjacent organs. CONCLUSIONS AI-automated surgical phase recognition can be achieved in LPD, with outstanding performance in selected cases. This breakthrough may be the first step toward AI- and video-based surgical education in more complex surgeries.
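The abstract reports mean average precision (mAP) as the headline metric. One simple, frame-level way to compute a mAP-style score is to average per-phase average precision over all annotated phases; the sketch below uses scikit-learn and assumes one-hot phase labels and per-frame confidence scores, which may differ from the exact evaluation protocol used in the study.

```python
import numpy as np
from sklearn.metrics import average_precision_score

def mean_average_precision(y_true, y_scores):
    """Frame-level mAP: average precision per phase, averaged over phases.
    y_true: (N, C) one-hot phase labels; y_scores: (N, C) model confidences."""
    aps = [average_precision_score(y_true[:, c], y_scores[:, c])
           for c in range(y_true.shape[1])]
    return float(np.mean(aps)), aps

# Toy example with 3 phases and 6 frames
y_true = np.array([[1, 0, 0], [1, 0, 0], [0, 1, 0],
                   [0, 1, 0], [0, 0, 1], [0, 0, 1]])
y_scores = np.array([[0.9, 0.05, 0.05], [0.6, 0.3, 0.1], [0.2, 0.7, 0.1],
                     [0.3, 0.4, 0.3], [0.1, 0.2, 0.7], [0.2, 0.2, 0.6]])
m_ap, per_class = mean_average_precision(y_true, y_scores)
print(round(m_ap, 3), [round(a, 3) for a in per_class])
```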
Affiliation(s)
- Jiaying You, WestChina-California Research Center for Predictive Intervention, Sichuan University West China Hospital, Chengdu, China; Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
- He Cai, Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
- Yuxian Wang, Chengdu Withai Innovations Technology Company, Chengdu, China
- Ang Bian, College of Computer Science, Sichuan University, Chengdu, China
- Ke Cheng, Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
- Lingwei Meng, Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
- Xin Wang, Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
- Pan Gao, Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
- Sirui Chen, Mianyang Central Hospital, School of Medicine, University of Electronic Science and Technology of China, Mianyang, China
- Yunqiang Cai, Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
- Bing Peng, Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
4. Kinoshita K, Maruyama T, Kobayashi N, Imanishi S, Maruyama M, Ohira G, Endo S, Tochigi T, Kinoshita M, Fukui Y, Kumazu Y, Kita J, Shinohara H, Matsubara H. An artificial intelligence-based nerve recognition model is useful as surgical support technology and as an educational tool in laparoscopic and robot-assisted rectal cancer surgery. Surg Endosc 2024;38:5394-5404. PMID: 39073558; PMCID: PMC11362368; DOI: 10.1007/s00464-024-10939-z.
Abstract
BACKGROUND Artificial intelligence (AI) has the potential to enhance surgical practice by predicting anatomical structures within the surgical field, thereby supporting surgeons' experiences and cognitive skills. Preserving and utilising nerves as critical guiding structures is paramount in rectal cancer surgery. Hence, we developed a deep learning model based on U-Net to automatically segment nerves. METHODS The model performance was evaluated using 60 randomly selected frames, and the Dice and Intersection over Union (IoU) scores were quantitatively assessed by comparing them with ground truth data. Additionally, a questionnaire was administered to five colorectal surgeons to gauge the extent of underdetection, overdetection, and the practical utility of the model in rectal cancer surgery. Furthermore, we conducted an educational assessment of non-colorectal surgeons, trainees, physicians, and medical students. We evaluated their ability to recognise nerves in mesorectal dissection scenes, scored them on a 12-point scale, and examined the score changes before and after exposure to the AI analysis videos. RESULTS The mean Dice and IoU scores for the 60 test frames were 0.442 (range 0.0465-0.639) and 0.292 (range 0.0238-0.469), respectively. The colorectal surgeons revealed an under-detection score of 0.80 (± 0.47), an over-detection score of 0.58 (± 0.41), and a usefulness evaluation score of 3.38 (± 0.43). The nerve recognition scores of non-colorectal surgeons, rotating residents, and medical students significantly improved by simply watching the AI nerve recognition videos for 1 min. Notably, medical students showed a more substantial increase in nerve recognition scores when exposed to AI nerve analysis videos than when exposed to traditional lectures on nerves. CONCLUSIONS In laparoscopic and robot-assisted rectal cancer surgeries, the AI-based nerve recognition model achieved satisfactory recognition levels for expert surgeons and demonstrated effectiveness in educating junior surgeons and medical students on nerve recognition.
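The Dice score reported here is closely related to IoU (for binary masks, Dice = 2·IoU/(1 + IoU)). An illustrative computation on binary nerve masks:

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask, eps=1e-7):
    """Dice similarity between a predicted and a ground-truth binary mask
    (e.g., AI nerve segmentation vs. expert annotation)."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

pred = np.array([[0, 1, 1, 0]] * 4)
gt = np.array([[0, 0, 1, 1]] * 4)
print(round(float(dice_coefficient(pred, gt)), 3))  # 2*4 / (8+8) = 0.5
```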
Affiliation(s)
- Kazuya Kinoshita, Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan; Department of General Surgery, Kumagaya General Hospital, Saitama, Japan
- Tetsuro Maruyama, Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
- Shunsuke Imanishi, Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
- Michihiro Maruyama, Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
- Gaku Ohira, Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
- Satoshi Endo, Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
- Toru Tochigi, Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
- Mayuko Kinoshita, Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
- Yudai Fukui, Department of Gastroenterological Surgery, Toranomon Hospital, Tokyo, Japan
- Yuta Kumazu, Anaut Inc, Tokyo, Japan; Department of Surgery, Yokohama City University, Kanagawa, Japan
- Junji Kita, Department of General Surgery, Kumagaya General Hospital, Saitama, Japan
- Hisashi Shinohara, Department of Gastroenterological Surgery, Hyogo College of Medicine, Hyogo, Japan
- Hisahiro Matsubara, Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
5. Ichinose J, Kobayashi N, Fukata K, Kanno K, Suzuki A, Matsuura Y, Nakao M, Okumura S, Mun M. Accuracy of thoracic nerves recognition for surgical support system using artificial intelligence. Sci Rep 2024;14:18329. PMID: 39112794; PMCID: PMC11306550; DOI: 10.1038/s41598-024-69405-4.
Abstract
We developed a surgical support system that visualises important microanatomy using artificial intelligence (AI). This study evaluated its accuracy in recognising the thoracic nerves during lung cancer surgery. Recognition models were created with deep learning using images precisely annotated for nerves. Computational evaluation was performed using the Dice index and the Jaccard index. Four general thoracic surgeons evaluated the accuracy of nerve recognition. Further, the differences in time lag, image quality, and smoothness of movement between the AI system and the surgical monitor were assessed. Ratings were made using a five-point scale. The computational evaluation was relatively favourable, with a Dice index of 0.56 and a Jaccard index of 0.39. The AI system was used in 10 thoracoscopic surgeries for lung cancer. The accuracy of thoracic nerve recognition was satisfactory, with a recall score of 4.5 ± 0.4 and a precision score of 4.0 ± 0.9. Although smoothness of motion (3.2 ± 0.4) differed slightly, almost no difference in time lag (4.9 ± 0.3) or image quality (4.6 ± 0.5) was observed between the AI system and the surgical monitor. In conclusion, the AI surgical support system has satisfactory accuracy in recognising the thoracic nerves.
Affiliation(s)
- Junji Ichinose, Department of Thoracic Surgical Oncology, Cancer Institute Hospital of JFCR, 3-8-31 Ariake, Koto-ku, Tokyo, 135-8550, Japan
- Nao Kobayashi, Anaut Inc., 2-1-6 Uchisaiwaicho, Chiyoda-ku, Tokyo, 100-0011, Japan
- Kyohei Fukata, Anaut Inc., 2-1-6 Uchisaiwaicho, Chiyoda-ku, Tokyo, 100-0011, Japan
- Kenji Kanno, Anaut Inc., 2-1-6 Uchisaiwaicho, Chiyoda-ku, Tokyo, 100-0011, Japan
- Ayumi Suzuki, Department of Thoracic Surgical Oncology, Cancer Institute Hospital of JFCR, 3-8-31 Ariake, Koto-ku, Tokyo, 135-8550, Japan
- Yosuke Matsuura, Department of Thoracic Surgical Oncology, Cancer Institute Hospital of JFCR, 3-8-31 Ariake, Koto-ku, Tokyo, 135-8550, Japan
- Masayuki Nakao, Department of Thoracic Surgical Oncology, Cancer Institute Hospital of JFCR, 3-8-31 Ariake, Koto-ku, Tokyo, 135-8550, Japan
- Sakae Okumura, Department of Thoracic Surgical Oncology, Cancer Institute Hospital of JFCR, 3-8-31 Ariake, Koto-ku, Tokyo, 135-8550, Japan
- Mingyon Mun, Department of Thoracic Surgical Oncology, Cancer Institute Hospital of JFCR, 3-8-31 Ariake, Koto-ku, Tokyo, 135-8550, Japan
6. Aoyama Y, Matsunobu Y, Etoh T, Suzuki K, Fujita S, Aiba T, Fujishima H, Empuku S, Kono Y, Endo Y, Ueda Y, Shiroshita H, Kamiyama T, Sugita T, Morishima K, Ebe K, Tokuyasu T, Inomata M. Artificial intelligence for surgical safety during laparoscopic gastrectomy for gastric cancer: Indication of anatomical landmarks related to postoperative pancreatic fistula using deep learning. Surg Endosc 2024. PMID: 39093411; DOI: 10.1007/s00464-024-11117-x.
Abstract
BACKGROUND Postoperative pancreatic fistula (POPF) is a critical complication of laparoscopic gastrectomy (LG). However, there are no widely recognized anatomical landmarks to prevent POPF during LG. This study aimed to identify anatomical landmarks related to POPF occurrence during LG for gastric cancer and to develop an artificial intelligence (AI) navigation system for indicating these landmarks. METHODS Dimpling lines (DLs)-depressions formed between the pancreas and surrounding organs-were defined as anatomical landmarks related to POPF. The DLs for the mesogastrium, intestine, and transverse mesocolon were named DMP, DIP, and DTP, respectively. We included 50 LG cases to develop the AI system (45/50 were used for training and 5/50 for adjusting the hyperparameters of the employed system). Regarding the validation of the AI system, DLs were assessed by an external evaluation committee using a Likert scale, and the pancreas was assessed using the Dice coefficient, with 10 prospectively registered cases. RESULTS Six expert surgeons confirmed the efficacy of DLs as anatomical landmarks related to POPF in LG. An AI system was developed using a semantic segmentation model that indicated DLs in real-time when this system was synchronized during surgery. Additionally, the distribution of scores for DMP was significantly higher than that of the other DLs (p < 0.001), indicating the relatively high accuracy of this landmark. In addition, the Dice coefficient of the pancreas was 0.70. CONCLUSIONS The DLs may be used as anatomical landmarks related to POPF occurrence. The developed AI navigation system can help visualize the DLs in real-time during LG.
Affiliation(s)
- Yoshimasa Aoyama, Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan
- Yusuke Matsunobu, Department of Information Systems and Engineering, Faculty of Information Engineering, Fukuoka Institute of Technology, Fukuoka, Japan; Department of Healthcare AI Data Science, Faculty of Medicine, Oita University, Oita, Japan
- Tsuyoshi Etoh, Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan; Research Center for GLOBAL and LOCAL Infectious Diseases, Oita University, 1-1 Idaigaoka, Hasama-Machi, Oita, 879-5593, Japan
- Kosuke Suzuki, Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan
- Shunsuke Fujita, Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan
- Takayuki Aiba, Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan
- Hajime Fujishima, Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan
- Shinichiro Empuku, Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan
- Yohei Kono, Department of Advanced Medical Research and Development for Cancer and Hair [Aderans], Faculty of Medicine, Oita University, Oita, Japan
- Yuichi Endo, Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan
- Yoshitake Ueda, Department of Comprehensive Surgery for Community Medicine, Faculty of Medicine, Oita University, Oita, Japan
- Hidefumi Shiroshita, Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan
- Toshiya Kamiyama, Advanced AI Technology Research, Advanced Software Technology Research, Olympus Corporation, Tokyo, Japan
- Takemasa Sugita, Advanced AI Technology Research, Advanced Software Technology Research, Olympus Corporation, Tokyo, Japan
- Kenichi Morishima, Advanced AI Technology Research, Advanced Software Technology Research, Olympus Corporation, Tokyo, Japan
- Kohei Ebe, Information Aided Medical Solutions Development, Application Software Engineering, Olympus Medical Systems Corporation, Tokyo, Japan
- Tatsushi Tokuyasu, Department of Information Systems and Engineering, Faculty of Information Engineering, Fukuoka Institute of Technology, Fukuoka, Japan; Clinical Engineering Research Center, Faculty of Medicine, Oita University, Oita, Japan
- Masafumi Inomata, Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan
7. Cizmic A, Killat D, Häberle F, Schwabe N, Hackert T, Müller-Stich BP, Nickel F. Simulation training of intraoperative complication management in laparoscopic cholecystectomy for novices - A randomized controlled study. Curr Probl Surg 2024;61:101506. PMID: 39098335; DOI: 10.1016/j.cpsurg.2024.101506.
Affiliation(s)
- Amila Cizmic, Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Department of General, Visceral, and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- David Killat, Department of General, Visceral, and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Frida Häberle, Department of General, Visceral, and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Nils Schwabe, Department of General, Visceral, and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Thilo Hackert, Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Beat P Müller-Stich, Department of Digestive Surgery, University Digestive Healthcare Center Basel, Basel, Switzerland
- Felix Nickel, Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Department of General, Visceral, and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
8. Tesfai FM, Nagi J, Morrison I, Boal M, Olaitan A, Chandrasekaran D, Stoyanov D, Lanceley A, Francis N. Objective assessment tools in laparoscopic or robotic-assisted gynecological surgery: A systematic review. Acta Obstet Gynecol Scand 2024;103:1480-1497. PMID: 38610108; PMCID: PMC11266631; DOI: 10.1111/aogs.14840.
Abstract
INTRODUCTION There is a growing emphasis on proficiency-based progression within surgical training. To enable this, clearly defined metrics for newly acquired surgical skills are needed; these can be formulated as objective assessment tools. The aim of the present study was to systematically review the literature on available tools for the objective assessment of (simulated) performance in minimally invasive gynecological surgery and to evaluate their reliability and validity. MATERIAL AND METHODS A systematic search (1989-2022) was conducted in MEDLINE, Embase, PubMed, and Web of Science in accordance with PRISMA. The review was registered with the Prospective Register of Systematic Reviews (PROSPERO), ID: CRD42022376552. Randomized controlled trials, prospective comparative studies, prospective single-group studies (with pre- and post-training assessment), and consensus studies that reported on the development, validation, or usage of assessment tools for surgical performance in minimally invasive gynecological surgery were included. Three independent assessors evaluated study setting and validity evidence according to a contemporary framework of validity adapted from Messick's validity framework. Methodological quality of the included studies was assessed using the modified Medical Education Research Study Quality Instrument (MERSQI) checklist. Heterogeneity in data reporting on types of tools, data collection, study design, definition of expertise (novice vs. expert), and statistical values prevented a meaningful meta-analysis. RESULTS A total of 19,746 titles and abstracts were screened, of which 72 articles met the inclusion criteria. A total of 37 different assessment tools were identified, of which 13 were manual global assessment tools, 13 manual procedure-specific assessment tools, and 11 automated performance metrics. Only two tools showed substantive evidence of validity. Reliability and validity are reported per tool. No assessment tool showed a direct correlation between tool scores and patient-related outcomes. CONCLUSIONS Existing objective assessment tools lack evidence on predicting patient outcomes and suffer from limitations in transferability outside the research environment, particularly the automated performance metrics. Future research should prioritize filling these gaps while integrating advanced technologies such as kinematic data and AI for robust, objective surgical skill assessment within advanced gynecological surgical training programs.
Affiliation(s)
- Freweini Martha Tesfai, The Griffin Institute, Northwick Park & St Marks' Hospital, London, UK; EGA Institute for Women's Health, University College London, London, UK; Wellcome/EPSRC Center for Interventional and Surgical Sciences (WEISS), University College London, London, UK
- Iona Morrison, Yeovil District Hospital, Somerset Foundation NHS Trust, Yeovil, UK
- Matt Boal, The Griffin Institute, Northwick Park & St Marks' Hospital, London, UK; EGA Institute for Women's Health, University College London, London, UK; Wellcome/EPSRC Center for Interventional and Surgical Sciences (WEISS), University College London, London, UK
- Dhivya Chandrasekaran, EGA Institute for Women's Health, University College London, London, UK; Department of Gynecological Oncology, University College of London Hospitals, London, UK
- Danail Stoyanov, EGA Institute for Women's Health, University College London, London, UK; Wellcome/EPSRC Center for Interventional and Surgical Sciences (WEISS), University College London, London, UK
- Anne Lanceley, EGA Institute for Women's Health, University College London, London, UK
- Nader Francis, The Griffin Institute, Northwick Park & St Marks' Hospital, London, UK; EGA Institute for Women's Health, University College London, London, UK; Yeovil District Hospital, Somerset Foundation NHS Trust, Yeovil, UK
9. Grüter AAJ, Daams F, Bonjer HJ, van Duijvendijk P, Tuynman JB. Surgical quality assessment of critical view of safety in 283 laparoscopic cholecystectomy videos by surgical residents and surgeons. Surg Endosc 2024;38:3609-3614. PMID: 38769182; PMCID: PMC11219398; DOI: 10.1007/s00464-024-10873-0.
Abstract
INTRODUCTION Surgical quality assessment has improved the efficacy and efficiency of surgical training and has the potential to optimize the surgical learning curve. In laparoscopic cholecystectomy (LC), the critical view of safety (CVS) can be assessed with a 6-point competency assessment tool (CAT), a task commonly performed by experienced surgeons. The aim of this study was to determine the capability of surgical residents to perform this assessment. METHODS Both surgeons and surgical residents assessed unedited LC videos with the 6-point CVS CAT on an online video assessment platform. The CAT consists of the following three criteria: 1. clearance of the hepatocystic triangle, 2. the cystic plate, and 3. two structures connected to the gallbladder, with a maximum of 2 points available for each criterion. A higher score indicates superior surgical performance. The intraclass correlation coefficient (ICC) was employed to assess the inter-rater reliability between surgeons and surgical residents. RESULTS In total, 283 LC videos were assessed by 19 surgeons and 31 surgical residents. The overall ICC for all criteria was 0.628. Specifically, the ICC scores were 0.504 for criterion 1, 0.639 for criterion 2, and 0.719 for the criterion involving the two structures connected to the gallbladder. Consequently, only the criterion regarding clearance of the hepatocystic triangle exhibited fair agreement, whereas the other two criteria, as well as the overall scores, demonstrated good agreement. In 71% of cases, both surgeons and surgical residents gave a total score either in the 0-4 range or in the 5-6 range. CONCLUSION Compared with the gold standard, i.e., the surgeons' assessments, surgical residents are equally capable of assessing the critical view of safety (CVS) in laparoscopic cholecystectomy (LC) videos. By incorporating video-based assessments of surgical procedures into their training, residents could potentially enhance their learning pace, which may result in better clinical outcomes.
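The intraclass correlation coefficient (ICC) quantifies agreement between raters scoring the same videos. The abstract does not state which ICC variant was used, so the sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single rater, after Shrout & Fleiss) as one plausible choice, from an n-videos-by-k-raters score matrix; it is illustrative only.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1) from an (n_subjects, k_raters) array, e.g. per-video CVS
    scores given by different rater groups."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)                     # per-video means
    col_means = x.mean(axis=0)                     # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)
    sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy example: 5 videos scored 0-6 by a surgeon and a resident
scores = np.array([[6, 5], [4, 4], [2, 3], [5, 6], [1, 1]])
print(round(float(icc_2_1(scores)), 3))
```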
Affiliation(s)
- Alexander A J Grüter, Department of Surgery, Amsterdam UMC Location Vrije Universiteit Amsterdam, De Boelelaan 1117, 1081 HV, Amsterdam, The Netherlands; Cancer Center Amsterdam, Treatment and Quality of Life, Amsterdam, The Netherlands
- Freek Daams, Department of Surgery, Amsterdam UMC Location Vrije Universiteit Amsterdam, De Boelelaan 1117, 1081 HV, Amsterdam, The Netherlands
- Hendrik J Bonjer, Department of Surgery, Amsterdam UMC Location Vrije Universiteit Amsterdam, De Boelelaan 1117, 1081 HV, Amsterdam, The Netherlands
- Peter van Duijvendijk, Department of Surgery, Gelre Hospitals, Albert Schweitzerlaan 31, Apeldoorn, The Netherlands
- Jurriaan B Tuynman, Department of Surgery, Amsterdam UMC Location Vrije Universiteit Amsterdam, De Boelelaan 1117, 1081 HV, Amsterdam, The Netherlands
10. Mehta P, Owen D, Grammatikopoulou M, Culshaw L, Kerr K, Stoyanov D, Luengo I. Hierarchical segmentation of surgical scenes in laparoscopy. Int J Comput Assist Radiol Surg 2024;19:1449-1457. PMID: 38914722; DOI: 10.1007/s11548-024-03157-4.
Abstract
PURPOSE Segmentation of surgical scenes may provide valuable information for real-time guidance and post-operative analysis. However, in some surgical video frames there is unavoidable ambiguity, leading to incorrect predictions of class or missed detections. In this work, we propose a novel method that alleviates this problem by introducing a hierarchy and associated hierarchical inference scheme that allows broad anatomical structures to be predicted when fine-grained structures cannot be reliably distinguished. METHODS First, we formulate a multi-label segmentation loss informed by a hierarchy of anatomical classes and then train a network using this. Subsequently, we use a novel leaf-to-root inference scheme ("Hiera-Mix") to determine the trade-off between label confidence and granularity. This method can be applied to any segmentation model. We evaluate our method using a large laparoscopic cholecystectomy dataset with 65,000 labelled frames. RESULTS We observed an increase in per-structure detection F1 score for the critical structures, when evaluated across their sub-hierarchies, compared to the baseline method: 6.0% for the cystic artery and 2.9% for the cystic duct, driven primarily by increases in precision of 11.3% and 4.7%, respectively. This corresponded to visibly improved segmentation outputs, with better characterisation of the undissected area containing the critical structures and fewer inter-class confusions. For other anatomical classes, which did not stand to benefit from the hierarchy, performance was unimpaired. CONCLUSION Our proposed hierarchical approach improves surgical scene segmentation in frames with ambiguity, by more suitably reflecting the model's parsing of the scene. This may be beneficial in applications of surgical scene segmentation, including recent advancements towards computer-assisted intra-operative guidance.
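The leaf-to-root idea can be sketched generically: score every node of the class hierarchy (for example by summing the probabilities of its descendant leaves), then back off from the most confident leaf towards the root until a confidence threshold is met. The toy example below is only a schematic of this general principle; the class names, threshold, and aggregation rule are illustrative assumptions and not the published "Hiera-Mix" implementation.

```python
def leaf_to_root_inference(scores, parent, threshold=0.6):
    """Generic leaf-to-root scheme over a class hierarchy: start from the
    highest-scoring leaf and climb towards the root, returning the first
    (deepest) label whose aggregated confidence reaches `threshold`.
    `scores` maps each node to the summed probability of its subtree;
    `parent` maps a node to its parent (the root maps to None)."""
    leaves = [n for n in scores if n not in parent.values()]
    node = max(leaves, key=scores.get)
    while node is not None and scores[node] < threshold:
        node = parent[node]            # back off to a broader structure
    return node if node is not None else "background"

# Toy hierarchy: hepatocystic structures -> {cystic artery, cystic duct}
parent = {"cystic artery": "hepatocystic structures",
          "cystic duct": "hepatocystic structures",
          "hepatocystic structures": None}
scores = {"cystic artery": 0.45, "cystic duct": 0.30,
          "hepatocystic structures": 0.75}   # parent = sum of children here
print(leaf_to_root_inference(scores, parent))  # 'hepatocystic structures'
```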
Affiliation(s)
- David Owen, Medtronic Digital Technologies, London, UK
- Karen Kerr, Medtronic Digital Technologies, London, UK
- Danail Stoyanov, Medtronic Digital Technologies, London, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
11. Mascagni P, Alapatt D, Sestini L, Yu T, Alfieri S, Morales-Conde S, Padoy N, Perretta S. Applications of artificial intelligence in surgery: clinical, technical, and governance considerations. Cir Esp 2024;102 Suppl 1:S66-S71. PMID: 38704146; DOI: 10.1016/j.cireng.2024.04.009.
Abstract
Artificial intelligence (AI) will power many of the tools in the armamentarium of digital surgeons. AI methods and surgical proof-of-concept flourish, but we have yet to witness clinical translation and value. Here we exemplify the potential of AI in the care pathway of colorectal cancer patients and discuss clinical, technical, and governance considerations of major importance for the safe translation of surgical AI for the benefit of our patients and practices.
Affiliation(s)
- Pietro Mascagni, IHU Strasbourg, Strasbourg, France; Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy; Università Cattolica del Sacro Cuore, Rome, Italy
- Deepak Alapatt, University of Strasbourg, CNRS, INSERM, ICube, UMR7357, Strasbourg, France
- Luca Sestini, University of Strasbourg, CNRS, INSERM, ICube, UMR7357, Strasbourg, France
- Tong Yu, University of Strasbourg, CNRS, INSERM, ICube, UMR7357, Strasbourg, France
- Sergio Alfieri, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy; Università Cattolica del Sacro Cuore, Rome, Italy
- Nicolas Padoy, IHU Strasbourg, Strasbourg, France; University of Strasbourg, CNRS, INSERM, ICube, UMR7357, Strasbourg, France
- Silvana Perretta, IHU Strasbourg, Strasbourg, France; IRCAD, Research Institute Against Digestive Cancer, Strasbourg, France; Nouvel Hôpital Civil, Hôpitaux Universitaires de Strasbourg, Strasbourg, France
12. Madani A, Liu Y, Pryor A, Altieri M, Hashimoto DA, Feldman L. SAGES surgical data science task force: enhancing surgical innovation, education and quality improvement through data science. Surg Endosc 2024;38:3489-3493. PMID: 38831213; DOI: 10.1007/s00464-024-10921-9.
Affiliation(s)
- Amin Madani, Department of Surgery, University of Toronto, Toronto, ON, Canada
- Yao Liu, Department of Surgery, Brown University, Providence, RI, USA
- Aurora Pryor, Department of Surgery, Northwell Health, New York, NY, USA
- Maria Altieri, Department of Surgery, Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, USA
- Daniel A Hashimoto, Department of Surgery, Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, USA
- Liane Feldman, Department of Surgery, McGill University, Montreal, QC, Canada
13. Sengun B, Iscan Y, Yazici ZA, Sormaz IC, Aksakal N, Tunca F, Ekenel HK, Giles Senyurek Y. Utilization of artificial intelligence in minimally invasive right adrenalectomy: recognition of anatomical landmarks with deep learning. Acta Chir Belg 2024:1-7. PMID: 38841838; DOI: 10.1080/00015458.2024.2363599.
Abstract
BACKGROUND The primary surgical approach for removing adrenal masses is minimally invasive adrenalectomy. Recognition of anatomical landmarks during surgery is critical for minimizing complications. Artificial intelligence-based tools can be utilized to create real-time navigation systems during laparoscopic and robotic right adrenalectomy. In this study, we aimed to develop deep learning models that can identify critical anatomical structures during minimally invasive right adrenalectomy. METHODS In this experimental feasibility study, intraoperative videos of 20 patients who underwent minimally invasive right adrenalectomy in a tertiary care center between 2011 and 2023 were analyzed and used to develop an artificial intelligence-based anatomical landmark recognition system. Semantic segmentation of the liver, the inferior vena cava (IVC), and the right adrenal gland was performed. Fifty random images per patient from the dissection phase were extracted from the videos. Experiments on the annotated images were performed with two state-of-the-art segmentation models, SwinUNETR and MedNeXt, which are transformer- and convolutional neural network (CNN)-based segmentation architectures, respectively. Two loss function combinations, Dice-Cross-Entropy and Dice-Focal loss, were tested for both models. The dataset was split into training and validation subsets with an 80:20 distribution on a patient basis in a 5-fold cross-validation approach. To introduce sample variability into the dataset, strong augmentation techniques were applied, using intensity modifications and perspective transformations to represent different surgical environment scenarios. The models were evaluated with the Dice Similarity Coefficient (DSC) and Intersection over Union (IoU), which are widely used segmentation metrics. For pixelwise classification performance, accuracy, sensitivity, and specificity were calculated on the validation subset. RESULTS From the 20 videos, 1000 images were extracted, and the anatomical landmarks (liver, IVC, and right adrenal gland) were annotated; 800 and 200 randomly distributed images were assigned to the training and validation subsets, respectively. Our benchmark results show that Dice-Cross-Entropy loss with the transformer-based SwinUNETR model achieved a mean DSC (mDSC) of 78.37%, whereas the CNN-based MedNeXt model reached 77.09%. Conversely, MedNeXt reached a higher mean IoU (mIoU) of 63.71% than SwinUNETR (62.10%) on the three-region prediction task. CONCLUSION Artificial intelligence-based systems can predict anatomical landmarks with high performance in minimally invasive right adrenalectomy. Such tools could be used to create real-time navigation systems during surgery in the near future.
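A Dice-Cross-Entropy combination of the kind benchmarked here is typically a weighted sum of pixelwise cross-entropy and a soft multi-class Dice loss. The PyTorch sketch below is a generic version with assumed equal weights and an added smoothing epsilon; the exact formulation and hyperparameters used in the study may differ.

```python
import torch
import torch.nn.functional as F

def dice_cross_entropy_loss(logits, target, eps=1e-6, ce_weight=1.0, dice_weight=1.0):
    """Illustrative compound segmentation loss: cross-entropy plus a soft
    multi-class Dice term. logits: (B, C, H, W); target: (B, H, W) with
    integer class labels."""
    num_classes = logits.shape[1]
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)
    intersection = (probs * one_hot).sum(dims)
    cardinality = probs.sum(dims) + one_hot.sum(dims)
    dice_per_class = (2.0 * intersection + eps) / (cardinality + eps)
    dice_loss = 1.0 - dice_per_class.mean()
    return ce_weight * ce + dice_weight * dice_loss

# Toy example: batch of 2, 4 classes (background, liver, IVC, adrenal), 8x8 frames
logits = torch.randn(2, 4, 8, 8)
target = torch.randint(0, 4, (2, 8, 8))
print(dice_cross_entropy_loss(logits, target).item())
```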
Affiliation(s)
- Berke Sengun, Istanbul Faculty of Medicine, Department of General Surgery, Istanbul University, Istanbul, Turkey
- Yalin Iscan, Istanbul Faculty of Medicine, Department of General Surgery, Istanbul University, Istanbul, Turkey
- Ziya Ata Yazici, Faculty of Computer and Informatics Engineering, Istanbul Technical University, Istanbul, Turkey
- Ismail Cem Sormaz, Istanbul Faculty of Medicine, Department of General Surgery, Istanbul University, Istanbul, Turkey
- Nihat Aksakal, Istanbul Faculty of Medicine, Department of General Surgery, Istanbul University, Istanbul, Turkey
- Fatih Tunca, Istanbul Faculty of Medicine, Department of General Surgery, Istanbul University, Istanbul, Turkey
- Hazim Kemal Ekenel, Faculty of Computer and Informatics Engineering, Istanbul Technical University, Istanbul, Turkey
- Yasemin Giles Senyurek, Istanbul Faculty of Medicine, Department of General Surgery, Istanbul University, Istanbul, Turkey
14. Fernicola A, Palomba G, Capuano M, De Palma GD, Aprea G. Artificial intelligence applied to laparoscopic cholecystectomy: what is the next step? A narrative review. Updates Surg 2024. PMID: 38839723; DOI: 10.1007/s13304-024-01892-6.
Abstract
Artificial Intelligence (AI) is playing an increasing role in several fields of medicine. AI is also used during laparoscopic cholecystectomy (LC) surgeries. In the literature, there is no review that groups together the various fields of application of AI applied to LC. The aim of this review is to describe the use of AI in these contexts. We performed a narrative literature review by searching PubMed, Web of Science, Scopus and Embase for all studies on AI applied to LC, published from January 01, 2010, to December 30, 2023. Our focus was on randomized controlled trials (RCTs), meta-analysis, systematic reviews, and observational studies, dealing with large cohorts of patients. We then gathered further relevant studies from the reference list of the selected publications. Based on the studies reviewed, it emerges that AI could strongly improve surgical efficiency and accuracy during LC. Future prospects include speeding up, implementing, and improving the automaticity with which AI recognizes, differentiates and classifies the phases of the surgical intervention and the anatomic structures that are safe and those at risk.
Affiliation(s)
- Agostino Fernicola, Division of Endoscopic Surgery, Department of Clinical Medicine and Surgery, "Federico II" University of Naples, Via Pansini 5, 80131, Naples, Italy
- Giuseppe Palomba, Division of Endoscopic Surgery, Department of Clinical Medicine and Surgery, "Federico II" University of Naples, Via Pansini 5, 80131, Naples, Italy
- Marianna Capuano, Division of Endoscopic Surgery, Department of Clinical Medicine and Surgery, "Federico II" University of Naples, Via Pansini 5, 80131, Naples, Italy
- Giovanni Domenico De Palma, Division of Endoscopic Surgery, Department of Clinical Medicine and Surgery, "Federico II" University of Naples, Via Pansini 5, 80131, Naples, Italy
- Giovanni Aprea, Division of Endoscopic Surgery, Department of Clinical Medicine and Surgery, "Federico II" University of Naples, Via Pansini 5, 80131, Naples, Italy
15. Cizmic A, Häberle F, Wise PA, Müller F, Gabel F, Mascagni P, Namazi B, Wagner M, Hashimoto DA, Madani A, Alseidi A, Hackert T, Müller-Stich BP, Nickel F. Structured feedback and operative video debriefing with critical view of safety annotation in training of laparoscopic cholecystectomy: a randomized controlled study. Surg Endosc 2024;38:3241-3252. PMID: 38653899; PMCID: PMC11133174; DOI: 10.1007/s00464-024-10843-6.
Abstract
BACKGROUND The learning curve in minimally invasive surgery (MIS) is lengthened compared to open surgery. It has been reported that structured feedback and training in teams of two trainees improves MIS training and MIS performance. Annotation of surgical images and videos may prove beneficial for surgical training. This study investigated whether structured feedback and video debriefing, including annotation of critical view of safety (CVS), have beneficial learning effects in a predefined, multi-modal MIS training curriculum in teams of two trainees. METHODS This randomized-controlled single-center study included medical students without MIS experience (n = 80). The participants first completed a standardized and structured multi-modal MIS training curriculum. They were then randomly divided into two groups (n = 40 each), and four laparoscopic cholecystectomies (LCs) were performed on ex-vivo porcine livers each. Students in the intervention group received structured feedback after each LC, consisting of LC performance evaluations through tutor-trainee joint video debriefing and CVS video annotation. Performance was evaluated using global and LC-specific Objective Structured Assessments of Technical Skills (OSATS) and Global Operative Assessment of Laparoscopic Skills (GOALS) scores. RESULTS The participants in the intervention group had higher global and LC-specific OSATS as well as global and LC-specific GOALS scores than the participants in the control group (25.5 ± 7.3 vs. 23.4 ± 5.1, p = 0.003; 47.6 ± 12.9 vs. 36 ± 12.8, p < 0.001; 17.5 ± 4.4 vs. 16 ± 3.8, p < 0.001; 6.6 ± 2.3 vs. 5.9 ± 2.1, p = 0.005). The intervention group achieved CVS more often than the control group (1. LC: 20 vs. 10 participants, p = 0.037, 2. LC: 24 vs. 8, p = 0.001, 3. LC: 31 vs. 8, p < 0.001, 4. LC: 31 vs. 10, p < 0.001). CONCLUSIONS Structured feedback and video debriefing with CVS annotation improves CVS achievement and ex-vivo porcine LC training performance based on OSATS and GOALS scores.
Affiliation(s)
- Amila Cizmic, Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20251, Hamburg, Germany
- Frida Häberle, Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Philipp A Wise, Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Felix Müller, Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Felix Gabel, Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Pietro Mascagni, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy; Institute of Image-Guided Surgery, IHU-Strasbourg, Strasbourg, France
- Babak Namazi, Center for Evidence-Based Simulation, Baylor University Medical Center, Dallas, USA
- Martin Wagner, Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Daniel A Hashimoto, Penn Computer Assisted Surgery and Outcomes (PCASO) Laboratory, Department of Surgery, Department of Computer and Information Science, University of Pennsylvania, Philadelphia, USA
- Amin Madani, Surgical Artificial Intelligence Research Academy (SARA), Department of Surgery, University Health Network, Toronto, Canada
- Adnan Alseidi, Department of Surgery, University of California - San Francisco, San Francisco, USA
- Thilo Hackert, Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20251, Hamburg, Germany
- Beat P Müller-Stich, Department of Surgery, Clarunis - University Centre for Gastrointestinal and Liver Diseases, Basel, Switzerland
- Felix Nickel, Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20251, Hamburg, Germany; Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany; HIDSS4Health - Helmholtz Information and Data Science School for Health, Karlsruhe, Heidelberg, Germany
16. Satyanaik S, Murali A, Alapatt D, Wang X, Mascagni P, Padoy N. Optimizing latent graph representations of surgical scenes for unseen domain generalization. Int J Comput Assist Radiol Surg 2024;19:1243-1250. PMID: 38678488; DOI: 10.1007/s11548-024-03121-2.
Abstract
PURPOSE Advances in deep learning have resulted in effective models for surgical video analysis; however, these models often fail to generalize across medical centers due to domain shift caused by variations in surgical workflow, camera setups, and patient demographics. Recently, object-centric learning has emerged as a promising approach for improved surgical scene understanding, capturing and disentangling visual and semantic properties of surgical tools and anatomy to improve downstream task performance. In this work, we conduct a multicentric performance benchmark of object-centric approaches, focusing on critical view of safety assessment in laparoscopic cholecystectomy, then propose an improved approach for unseen domain generalization. METHODS We evaluate four object-centric approaches for domain generalization, establishing baseline performance. Next, leveraging the disentangled nature of object-centric representations, we dissect one of these methods through a series of ablations (e.g., ignoring either visual or semantic features for downstream classification). Finally, based on the results of these ablations, we develop an optimized method specifically tailored for domain generalization, LG-DG, that includes a novel disentanglement loss function. RESULTS Our optimized approach, LG-DG, achieves an improvement of 9.28% over the best baseline approach. More broadly, we show that object-centric approaches are highly effective for domain generalization thanks to their modular approach to representation learning. CONCLUSION We investigate the use of object-centric methods for unseen domain generalization, identify method-agnostic factors critical for performance, and present an optimized approach that substantially outperforms existing methods.
Affiliation(s)
- Aditya Murali, ICube, University of Strasbourg, CNRS, Strasbourg, France
- Deepak Alapatt, ICube, University of Strasbourg, CNRS, Strasbourg, France
- Xin Wang, West China Hospital of Sichuan University, Chengdu, China
- Pietro Mascagni, IHU, Strasbourg, France; Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Nicolas Padoy, ICube, University of Strasbourg, CNRS, Strasbourg, France; IHU, Strasbourg, France
Collapse
|
17
|
Dayan D, Dvir N, Agbariya H, Nizri E. Implementation of artificial intelligence-based computer vision model in laparoscopic appendectomy: validation, reliability, and clinical correlation. Surg Endosc 2024; 38:3310-3319. [PMID: 38664295 DOI: 10.1007/s00464-024-10847-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2024] [Accepted: 04/06/2024] [Indexed: 05/30/2024]
Abstract
BACKGROUND Application of artificial intelligence (AI) in general surgery is evolving. Real-world implementation of an AI-based computer-vision model in laparoscopic appendectomy (LA) is presented. We aimed to evaluate (1) its accuracy in complexity grading and safety adherence, and (2) its clinical correlation to outcomes. METHODS A retrospective single-center study was performed on 499 consecutive LA videos captured and analyzed by the 'Surgical Intelligence Platform' (Theator Inc.; 9/2020-5/2022). Two expert surgeons viewed all videos and manually graded complexity and safety adherence. Automated annotations were compared to surgeons' assessments, and inter-surgeon agreement was measured. From 7/2021, videos were linked to patients' admission numbers, and data retrieval from medical records was performed (n = 365). Outcomes were compared between high and low complexity grades. RESULTS Low and high complexity grades comprised 74.8% and 25.2% of the 499 videos. Agreement between the automated annotations and the surgeons' assessments was high (76.9-94.4%, kappa 0.77/0.91; p < 0.001) for all annotated complexity grades. Agreement was also high (96.0-99.8%, kappa 0.78/0.87; p < 0.001) for full safety adherence, whereas agreement was moderate for partial and absent safety adherence (32.8-58.8%). Inter-surgeon agreement was high for complexity grading (kappa 0.86, p < 0.001) and safety adherence (kappa 0.88, p < 0.001). Comparing high- to low-grade complexity, preoperative clinical features were similar, except for a larger appendix diameter on imaging (13.4 ± 4.4 vs. 10.5 ± 3.0 mm, p < 0.001). Intraoperative measures were significantly higher (p < 0.001), including time to achieve the critical view of safety (29.6, IQR 19.1-41.6 vs. 13.7, IQR 8.5-21.1 min), operative duration (45.3, IQR 37.7-65.2 vs. 25.0, IQR 18.3-32.7 min), and intraoperative events (39.4% vs. 5.9%). Postoperative outcomes (7.4% vs. 9.2%), including surgical complications, mortality, and readmissions, were comparable (p = 0.6), except for length of stay (4, IQR 2-5.5 vs. 1, IQR 1-2 days; p < 0.001). CONCLUSION The model accurately assesses complexity grading and full safety achievement. It can serve to predict operative time and intraoperative course, whereas no clinical correlation was found for postoperative outcomes. Further studies are needed.
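For readers who want to reproduce the agreement statistics reported above, a minimal sketch of Cohen's kappa using scikit-learn follows; the grade labels are hypothetical, not study data.

```python
# Minimal sketch (hypothetical labels): inter-rater agreement between a surgeon's
# complexity grades and the platform's automated grades, expressed as Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

surgeon_grades = ["low", "low", "high", "low", "high", "high", "low", "low"]
model_grades   = ["low", "low", "high", "high", "high", "high", "low", "low"]

kappa = cohen_kappa_score(surgeon_grades, model_grades)
print(f"Cohen's kappa: {kappa:.2f}")   # 1.0 = perfect agreement, 0 = chance-level agreement
```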
Collapse
Affiliation(s)
- Danit Dayan
- Division of General Surgery, Affiliated to Sackler Faculty of Medicine, Tel Aviv Medical Center, Tel Aviv University, 6, Weizman St., 6423906, Tel Aviv-Yafo, Israel.
| | - Nadav Dvir
- Division of General Surgery, Affiliated to Sackler Faculty of Medicine, Tel Aviv Medical Center, Tel Aviv University, 6, Weizman St., 6423906, Tel Aviv-Yafo, Israel
| | - Haneen Agbariya
- Division of General Surgery, Affiliated to Sackler Faculty of Medicine, Tel Aviv Medical Center, Tel Aviv University, 6, Weizman St., 6423906, Tel Aviv-Yafo, Israel
| | - Eran Nizri
- Division of General Surgery, Affiliated to Sackler Faculty of Medicine, Tel Aviv Medical Center, Tel Aviv University, 6, Weizman St., 6423906, Tel Aviv-Yafo, Israel
| |
Collapse
|
18
|
Wierick A, Schulze A, Bodenstedt S, Speidel S, Distler M, Weitz J, Wagner M. [The digital operating room : Chances and risks of artificial intelligence]. CHIRURGIE (HEIDELBERG, GERMANY) 2024; 95:429-435. [PMID: 38443676 DOI: 10.1007/s00104-024-02058-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 02/07/2024] [Indexed: 03/07/2024]
Abstract
The digitalization of the operating room, the surgeon's central workplace, has particular consequences for surgical work. From intraoperative cross-sectional imaging and sonography, through functional imaging and minimally invasive and robot-assisted surgery, to digital surgical and anesthesiological documentation, the vast majority of operating rooms are now at least partially digitalized. The increasing digitalization of the whole process chain enables not only the collection but also the analysis of big data. Current research focuses on artificial intelligence for the analysis of intraoperative data as the prerequisite for assistance systems that support surgical decision-making or warn of risks; however, these technologies raise new ethical questions for the surgical community that affect the core of surgical work.
Collapse
Affiliation(s)
- Ann Wierick
- Klinik und Poliklinik für Viszeral‑, Thorax- und Gefäßchirurgie, Universitätsklinikum Carl Gustav Carus, Technische Universität Dresden, Fetscherstr. 74, 01307, Dresden, Deutschland
- Nationales Centrum für Tumorerkrankungen (NCT) Dresden, Dresden, Deutschland
| | - André Schulze
- Klinik und Poliklinik für Viszeral‑, Thorax- und Gefäßchirurgie, Universitätsklinikum Carl Gustav Carus, Technische Universität Dresden, Fetscherstr. 74, 01307, Dresden, Deutschland
- Nationales Centrum für Tumorerkrankungen (NCT) Dresden, Dresden, Deutschland
- Zentrum für Taktiles Internet mit Mensch-Maschine-Interaktion (CeTI), Technische Universität Dresden, Dresden, Deutschland
| | - Sebastian Bodenstedt
- Nationales Centrum für Tumorerkrankungen (NCT) Dresden, Dresden, Deutschland
- Zentrum für Taktiles Internet mit Mensch-Maschine-Interaktion (CeTI), Technische Universität Dresden, Dresden, Deutschland
| | - Stefanie Speidel
- Nationales Centrum für Tumorerkrankungen (NCT) Dresden, Dresden, Deutschland
- Zentrum für Taktiles Internet mit Mensch-Maschine-Interaktion (CeTI), Technische Universität Dresden, Dresden, Deutschland
| | - Marius Distler
- Klinik und Poliklinik für Viszeral‑, Thorax- und Gefäßchirurgie, Universitätsklinikum Carl Gustav Carus, Technische Universität Dresden, Fetscherstr. 74, 01307, Dresden, Deutschland
- Nationales Centrum für Tumorerkrankungen (NCT) Dresden, Dresden, Deutschland
- Zentrum für Taktiles Internet mit Mensch-Maschine-Interaktion (CeTI), Technische Universität Dresden, Dresden, Deutschland
| | - Jürgen Weitz
- Klinik und Poliklinik für Viszeral‑, Thorax- und Gefäßchirurgie, Universitätsklinikum Carl Gustav Carus, Technische Universität Dresden, Fetscherstr. 74, 01307, Dresden, Deutschland
- Nationales Centrum für Tumorerkrankungen (NCT) Dresden, Dresden, Deutschland
- Zentrum für Taktiles Internet mit Mensch-Maschine-Interaktion (CeTI), Technische Universität Dresden, Dresden, Deutschland
| | - Martin Wagner
- Klinik und Poliklinik für Viszeral‑, Thorax- und Gefäßchirurgie, Universitätsklinikum Carl Gustav Carus, Technische Universität Dresden, Fetscherstr. 74, 01307, Dresden, Deutschland.
- Nationales Centrum für Tumorerkrankungen (NCT) Dresden, Dresden, Deutschland.
- Zentrum für Taktiles Internet mit Mensch-Maschine-Interaktion (CeTI), Technische Universität Dresden, Dresden, Deutschland.
| |
Collapse
|
19
|
Chen J, Li M, Han H, Zhao Z, Chen X. SurgNet: Self-Supervised Pretraining With Semantic Consistency for Vessel and Instrument Segmentation in Surgical Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2024; 43:1513-1525. [PMID: 38090838 DOI: 10.1109/tmi.2023.3341948] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/04/2024]
Abstract
Blood vessel and surgical instrument segmentation is a fundamental technique for robot-assisted surgical navigation. Despite the significant progress in natural image segmentation, surgical image-based vessel and instrument segmentation are rarely studied. In this work, we propose a novel self-supervised pretraining method (SurgNet) that can effectively learn representative vessel and instrument features from unlabeled surgical images. As a result, it allows for precise and efficient segmentation of vessels and instruments with only a small amount of labeled data. Specifically, we first construct a region adjacency graph (RAG) based on local semantic consistency in unlabeled surgical images and use it as a self-supervision signal for pseudo-mask segmentation. We then use the pseudo-mask to perform guided masked image modeling (GMIM) to learn representations that integrate structural information of intraoperative objectives more effectively. Our pretrained model, paired with various segmentation methods, can be applied to perform vessel and instrument segmentation accurately using limited labeled data for fine-tuning. We build an Intraoperative Vessel and Instrument Segmentation (IVIS) dataset, comprised of ~3 million unlabeled images and over 4,000 labeled images with manual vessel and instrument annotations to evaluate the effectiveness of our self-supervised pretraining method. We also evaluated the generalizability of our method to similar tasks using two public datasets. The results demonstrate that our approach outperforms the current state-of-the-art (SOTA) self-supervised representation learning methods in various surgical image segmentation tasks.
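A minimal sketch of the region adjacency graph (RAG) idea mentioned above is shown below; it uses scikit-image (the ≥ 0.20 API is assumed) on a stock image standing in for a surgical frame and is not the SurgNet implementation.

```python
# Sketch only: build superpixels on an unlabeled image, link adjacent regions by
# color similarity (a RAG), and merge similar neighbours into a coarse pseudo-mask
# that could serve as a self-supervision signal. Not the SurgNet code.
import numpy as np
from skimage import data, segmentation, graph

image = data.astronaut()                                    # stand-in for a surgical frame
superpixels = segmentation.slic(image, n_segments=200, compactness=10, start_label=1)
rag = graph.rag_mean_color(image, superpixels)              # nodes = regions, edge weights = color difference

pseudo_mask = graph.cut_threshold(superpixels, rag, thresh=29)   # merge visually similar neighbours
print(np.unique(pseudo_mask).size, "pseudo-regions")
```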
Collapse
|
20
|
Chen Z, Yang D, Li A, Sun L, Zhao J, Liu J, Liu L, Zhou X, Chen Y, Cai Y, Wu Z, Cheng K, Cai H, Tang M, Peng B, Wang X. Decoding surgical skill: an objective and efficient algorithm for surgical skill classification based on surgical gesture features -experimental studies. Int J Surg 2024; 110:1441-1449. [PMID: 38079605 PMCID: PMC10942222 DOI: 10.1097/js9.0000000000000975] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2023] [Accepted: 11/21/2023] [Indexed: 03/16/2024]
Abstract
BACKGROUND Variation in surgical skill leads to differences in patient outcomes, and identifying poorly skilled surgeons and giving them constructive feedback contributes to surgical quality improvement. The aim of the study was to develop an algorithm for evaluating surgical skills in laparoscopic cholecystectomy based on the features of elementary functional surgical gestures (Surgestures). MATERIALS AND METHODS Seventy-five laparoscopic cholecystectomy videos were collected from 33 surgeons in five hospitals. The phases of hepatocystic triangle mobilization and gallbladder dissection from the liver bed in each video were annotated with 14 Surgestures. The videos were grouped into competent and incompetent based on the quantiles of the modified global operative assessment of laparoscopic skills (mGOALS). Surgeon-related information, clinical data, and intraoperative events were analyzed. Sixty-three Surgesture features were extracted to develop the surgical skill classification algorithm. The area under the receiver operating characteristic curve of the classification and the top features were evaluated. RESULTS Correlation analysis revealed that most perioperative factors had no significant correlation with mGOALS scores. The incompetent group had a higher probability of cholecystic vascular injury than the competent group (30.8% vs. 6.1%, P = 0.004). The competent group demonstrated fewer inefficient Surgestures, a lower shift frequency, and a larger dissection-exposure ratio of Surgestures during the procedure. The area under the receiver operating characteristic curve of the classification algorithm reached 0.866. Different Surgesture features contributed variably to overall performance and specific skill items. CONCLUSION The computer algorithm accurately classified surgeons with different skill levels using objective Surgesture features, adding insight into designing automatic laparoscopic surgical skill assessment tools with technical feedback.
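The classification step described above can be illustrated with a short scikit-learn sketch: synthetic feature vectors stand in for the 63 Surgesture features, and the classifier choice is an assumption, not the authors' algorithm.

```python
# Sketch with synthetic data: classify competent vs. incompetent performance from
# gesture-derived features and report the area under the ROC curve.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(75, 63))         # 75 videos x 63 gesture features (synthetic)
y = rng.integers(0, 2, size=75)       # 1 = competent, 0 = incompetent (synthetic)

probs = cross_val_predict(RandomForestClassifier(random_state=0), X, y,
                          cv=5, method="predict_proba")[:, 1]
print(f"AUC: {roc_auc_score(y, probs):.3f}")
```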
Collapse
Affiliation(s)
- Zixin Chen
- Department of General Surgery, Division of Pancreatic Surgery
- West China School of Medicine, West China Hospital of Sichuan University
| | - Dewei Yang
- Chongqing University of Posts and Telecommunications, School of Advanced Manufacturing Engineering, Chongqing
| | - Ang Li
- Department of General Surgery, Division of Pancreatic Surgery
- Guang’an People’s Hospital, Guang’an
| | - Louzong Sun
- Department of Hepatobiliary Surgery, Zigong First People’s Hospital, Zigong
| | - Jifan Zhao
- Chengdu Withai Innovations Technology Company, Chengdu
| | - Jie Liu
- Chengdu Withai Innovations Technology Company, Chengdu
| | - Linxun Liu
- Department of General Surgery, Qinghai Provincial People’s Hospital, Xining, People’s Republic of China
| | - Xiaobo Zhou
- School of Biomedical Informatics, McGovern Medical School, University of Texas Health Science Center, Houston, USA
| | - Yonghua Chen
- Department of General Surgery, Division of Pancreatic Surgery
| | - Yunqiang Cai
- Department of General Surgery, Division of Pancreatic Surgery
| | - Zhong Wu
- Department of General Surgery, Division of Pancreatic Surgery
| | - Ke Cheng
- Department of General Surgery, Division of Pancreatic Surgery
| | - He Cai
- Department of General Surgery, Division of Pancreatic Surgery
| | - Ming Tang
- Department of General Surgery, Division of Pancreatic Surgery
- West China School of Medicine, West China Hospital of Sichuan University
| | - Bing Peng
- Department of General Surgery, Division of Pancreatic Surgery
| | - Xin Wang
- Department of General Surgery, Division of Pancreatic Surgery
| |
Collapse
|
21
|
Li A, Javidan AP, Namazi B, Madani A, Forbes TL. Development of an Artificial Intelligence Tool for Intraoperative Guidance During Endovascular Abdominal Aortic Aneurysm Repair. Ann Vasc Surg 2024; 99:96-104. [PMID: 37914075 DOI: 10.1016/j.avsg.2023.08.027] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2023] [Revised: 08/02/2023] [Accepted: 08/15/2023] [Indexed: 11/03/2023]
Abstract
BACKGROUND Adverse events during surgery can occur in part due to errors in visual perception and judgment. Deep learning is a branch of artificial intelligence (AI) that has shown promise in providing real-time intraoperative guidance. This study aims to train and test the performance of a deep learning model that can identify inappropriate landing zones during endovascular aneurysm repair (EVAR). METHODS A deep learning model was trained to identify a "No-Go" landing zone during EVAR, defined by coverage of the lowest renal artery by the stent graft. Fluoroscopic images from elective EVAR procedures performed at a single institution and from open-access sources were selected. Annotations of the "No-Go" zone were performed by trained annotators. A 10-fold cross-validation technique was used to evaluate the performance of the model against human annotations. Primary outcomes were intersection-over-union (IoU) and F1 score, and secondary outcomes were pixel-wise accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). RESULTS The AI model was trained using 369 images procured from 110 different patients/videos, including 18 patients/videos (44 images) from open-access sources. For the primary outcomes, IoU and F1 were 0.43 (standard deviation ± 0.29) and 0.53 (±0.32), respectively. For the secondary outcomes, accuracy, sensitivity, specificity, NPV, and PPV were 0.97 (±0.002), 0.51 (±0.34), 0.99 (±0.001), 0.99 (±0.002), and 0.62 (±0.34), respectively. CONCLUSIONS AI can effectively identify suboptimal areas of stent deployment during EVAR. Further directions include validating the model on datasets from other institutions and assessing its ability to predict optimal stent graft placement and clinical outcomes.
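A short sketch of the primary outcome metrics follows: pixel-wise IoU and F1 (Dice) between a predicted "No-Go" mask and a ground-truth annotation, computed on synthetic masks rather than study images.

```python
# Sketch: IoU and F1 between two binary masks (synthetic squares, not fluoroscopy data).
import numpy as np

def iou_and_f1(pred: np.ndarray, truth: np.ndarray):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    iou = inter / union if union else 1.0
    f1 = 2 * inter / (pred.sum() + truth.sum()) if (pred.sum() + truth.sum()) else 1.0
    return iou, f1

pred = np.zeros((100, 100)); pred[20:60, 20:60] = 1
truth = np.zeros((100, 100)); truth[30:70, 30:70] = 1
print(iou_and_f1(pred, truth))   # overlap of two offset squares
```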
Collapse
Affiliation(s)
- Allen Li
- Faculty of Medicine & The Ottawa Hospital Research Institute, University of Ottawa, Ottawa, Ontario, Canada
| | - Arshia P Javidan
- Division of Vascular Surgery, University of Toronto, Toronto, Ontario, Canada
| | - Babak Namazi
- Department of Surgery, University of Texas Southwestern Medical Center, Dallas, TX
| | - Amin Madani
- Department of Surgery, University Health Network & University of Toronto, Toronto, Ontario, Canada; Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, Ontario, Canada
| | - Thomas L Forbes
- Department of Surgery, University Health Network & University of Toronto, Toronto, Ontario, Canada.
| |
Collapse
|
22
|
Dayan D. Implementation of Artificial Intelligence-Based Computer Vision Model for Sleeve Gastrectomy: Experience in One Tertiary Center. Obes Surg 2024; 34:330-336. [PMID: 38180619 DOI: 10.1007/s11695-023-07043-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2023] [Revised: 12/23/2023] [Accepted: 12/28/2023] [Indexed: 01/06/2024]
Abstract
INTRODUCTION Sleeve gastrectomy (SG) is the most commonly performed metabolic and bariatric procedure. Leveraging artificial intelligence (AI) for automated real-time data structuring and annotation of surgical videos has immense potential for clinical applications. This study presents an initial real-world implementation of an AI-based computer vision model in SG and an external validation of the accuracy of its safety milestone annotations. METHODS In a retrospective single-center study, 49 consecutive SG videos were captured and analyzed by the AI platform (December 2020-August 2023). A bariatric surgeon viewed all videos and assessed safety milestone adherence, compared with the AI annotations. Patients' data were retrieved from the bariatric unit registry. RESULTS Total SG duration was 47.5 min (interquartile range 36-64). Main steps included preparation (12.2%), dissection of the greater curvature (30.8%), gastric transection (28.5%), specimen extraction (7.2%), and final inspection (14.4%). Out-of-body time comprised 6.9% of the total video. Safety milestone components and AI-surgeon agreement were as follows: bougie insertion (100%), distance from pylorus ≥ 2 cm (100%), parallel to lesser curvature (98%), fundus mobilization (100%), and distance from esophagus ≥ 1 cm (true-100%, false-13.6%; kappa coefficient 0.2, p = 0.006). Intraoperative complications included notable hemorrhage (n = 4) and parenchymal injury (n = 1). CONCLUSIONS The AI model provides fully automated SG video analysis. The outcomes suggest its accuracy in four of five safety milestone annotations. These data are valuable, as they reflect objective performance measures that can help improve the surgical quality and efficiency of SG. Larger cohorts will enable SG standardization and clinical correlation with outcomes, aiming to improve patient safety.
Collapse
Affiliation(s)
- Danit Dayan
- Division of General Surgery, Bariatric Unit, Tel Aviv Medical Center, Affiliated to Sackler Faculty of Medicine, Tel Aviv University, 6, Weizman St., Tel Aviv, Israel.
| |
Collapse
|
23
|
Une N, Kobayashi S, Kitaguchi D, Sunakawa T, Sasaki K, Ogane T, Hayashi K, Kosugi N, Kudo M, Sugimoto M, Hasegawa H, Takeshita N, Gotohda N, Ito M. Intraoperative artificial intelligence system identifying liver vessels in laparoscopic liver resection: a retrospective experimental study. Surg Endosc 2024; 38:1088-1095. [PMID: 38216749 DOI: 10.1007/s00464-023-10637-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2023] [Accepted: 11/29/2023] [Indexed: 01/14/2024]
Abstract
BACKGROUND The precise recognition of liver vessels during liver parenchymal dissection is a crucial technique for laparoscopic liver resection (LLR). This retrospective feasibility study aimed to develop artificial intelligence (AI) models to recognize liver vessels in LLR and to evaluate their accuracy and real-time performance. METHODS Images from LLR videos were extracted, and the hepatic veins and Glissonean pedicles were labeled separately. Two AI models were developed to recognize liver vessels: the "2-class model", which recognized both hepatic veins and Glissonean pedicles as equivalent vessels and distinguished them from the background class, and the "3-class model", which recognized them all separately. The Feature Pyramid Network was used as the neural network architecture for both models in their semantic segmentation tasks. The models were evaluated using fivefold cross-validation tests, with the Dice coefficient (DC) as the evaluation metric. Ten gastroenterological surgeons also evaluated the models qualitatively using a rubric. RESULTS In total, 2421 frames from 48 video clips were extracted. The mean DC value of the 2-class model was 0.789, with a processing speed of 0.094 s. The mean DC values for the hepatic vein and the Glissonean pedicle in the 3-class model were 0.631 and 0.482, respectively. The average processing time for the 3-class model was 0.097 s. Qualitative evaluation by surgeons revealed that false-negative and false-positive ratings in the 2-class model averaged 4.40 and 3.46, respectively, on a five-point scale, while the false-negative, false-positive, and vessel differentiation ratings in the 3-class model averaged 4.36, 3.44, and 3.28, respectively, on a five-point scale. CONCLUSION We successfully developed deep-learning models that recognize liver vessels in LLR with high accuracy and sufficient processing speed. These findings suggest the potential of a new real-time automated navigation system for LLR.
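The Dice coefficient (DC) used as the evaluation metric above can be computed as in the following sketch; the masks are synthetic, not frames from the study.

```python
# Sketch: Dice coefficient between a predicted and a ground-truth binary mask.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.zeros((64, 64)); pred[10:40, 10:40] = 1          # synthetic "hepatic vein" prediction
target = np.zeros((64, 64)); target[20:50, 20:50] = 1      # synthetic ground truth
print(f"DC = {dice_coefficient(pred, target):.3f}")
```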
Collapse
Affiliation(s)
- Norikazu Une
- Department of Hepatobiliary and Pancreatic Surgery, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Division of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
| | - Shin Kobayashi
- Department of Hepatobiliary and Pancreatic Surgery, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
| | - Daichi Kitaguchi
- Division of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
| | - Taiki Sunakawa
- Department of Hepatobiliary and Pancreatic Surgery, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Division of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
| | - Kimimasa Sasaki
- Department of Hepatobiliary and Pancreatic Surgery, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Division of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
| | - Tateo Ogane
- Division of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
| | - Kazuyuki Hayashi
- Division of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
| | - Norihito Kosugi
- Division of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
| | - Masashi Kudo
- Department of Hepatobiliary and Pancreatic Surgery, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
| | - Motokazu Sugimoto
- Department of Hepatobiliary and Pancreatic Surgery, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
| | - Hiro Hasegawa
- Division of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
| | - Nobuyoshi Takeshita
- Division of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
| | - Naoto Gotohda
- Department of Hepatobiliary and Pancreatic Surgery, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
| | - Masaaki Ito
- Division of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan.
| |
Collapse
|
24
|
Ali JT, Yang G, Green CA, Reed BL, Madani A, Ponsky TA, Hazey J, Rothenberg SS, Schlachta CM, Oleynikov D, Szoka N. Defining digital surgery: a SAGES white paper. Surg Endosc 2024; 38:475-487. [PMID: 38180541 DOI: 10.1007/s00464-023-10551-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2023] [Accepted: 10/17/2023] [Indexed: 01/06/2024]
Abstract
BACKGROUND Digital surgery is a new paradigm within the surgical innovation space that is rapidly advancing and encompasses multiple areas. METHODS This white paper from the SAGES Digital Surgery Working Group outlines the scope of digital surgery, defines key terms, and analyzes the challenges and opportunities surrounding this disruptive technology. RESULTS In its simplest form, digital surgery inserts a computer interface between surgeon and patient. We divide the digital surgery space into the following elements: advanced visualization, enhanced instrumentation, data capture, data analytics with artificial intelligence/machine learning, connectivity via telepresence, and robotic surgical platforms. We will define each area, describe specific terminology, review current advances as well as discuss limitations and opportunities for future growth. CONCLUSION Digital Surgery will continue to evolve and has great potential to bring value to all levels of the healthcare system. The surgical community has an essential role in understanding, developing, and guiding this emerging field.
Collapse
Affiliation(s)
- Jawad T Ali
- University of Texas at Austin, Austin, TX, USA
| | - Gene Yang
- University at Buffalo, Buffalo, NY, USA
| | | | | | - Amin Madani
- University of Toronto, Toronto, ON, Canada
- Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada
| | - Todd A Ponsky
- Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
| | | | | | | | - Dmitry Oleynikov
- Monmouth Medical Center, Robert Wood Johnson Barnabas Health, Rutgers School of Medicine, Long Branch, NJ, USA
| | - Nova Szoka
- Department of Surgery, West Virginia University, Suite 7500 HSS, PO Box 9238, Morgantown, WV, 26506-9238, USA.
| |
Collapse
|
25
|
Mascagni P, Alapatt D, Lapergola A, Vardazaryan A, Mazellier JP, Dallemagne B, Mutter D, Padoy N. Early-stage clinical evaluation of real-time artificial intelligence assistance for laparoscopic cholecystectomy. Br J Surg 2024; 111:znad353. [PMID: 37935636 DOI: 10.1093/bjs/znad353] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2023] [Revised: 07/24/2023] [Accepted: 08/26/2023] [Indexed: 11/09/2023]
Abstract
Lay Summary
The growing availability of surgical digital data and developments in analytics such as artificial intelligence (AI) are being harnessed to improve surgical care. However, technical and cultural barriers to real-time intraoperative AI assistance exist. This early-stage clinical evaluation shows the technical feasibility of concurrently deploying several AIs in operating rooms for real-time assistance during procedures. In addition, potentially relevant clinical applications of these AI models are explored with a multidisciplinary cohort of key stakeholders.
Collapse
Affiliation(s)
- Pietro Mascagni
- ICube, University of Strasbourg, CNRS, IHU Strasbourg, Strasbourg, France
- Department of Medical and Abdominal Surgery and Endocrine-Metabolic Science, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
| | - Deepak Alapatt
- ICube, University of Strasbourg, CNRS, IHU Strasbourg, Strasbourg, France
| | - Alfonso Lapergola
- Department of Digestive and Endocrine Surgery, Nouvel Hôpital Civil, Hôpitaux Universitaires de Strasbourg, Strasbourg, France
| | | | | | - Bernard Dallemagne
- Institute for Research against Digestive Cancer (IRCAD), Strasbourg, France
| | - Didier Mutter
- Department of Digestive and Endocrine Surgery, Nouvel Hôpital Civil, Hôpitaux Universitaires de Strasbourg, Strasbourg, France
- Institute of Image-Guided Surgery, IHU-Strasbourg, Strasbourg, France
| | - Nicolas Padoy
- ICube, University of Strasbourg, CNRS, IHU Strasbourg, Strasbourg, France
- Institute of Image-Guided Surgery, IHU-Strasbourg, Strasbourg, France
| |
Collapse
|
26
|
Takeuchi M, Kitagawa Y. Artificial intelligence and surgery. Ann Gastroenterol Surg 2024; 8:4-5. [PMID: 38250693 PMCID: PMC10797843 DOI: 10.1002/ags3.12766] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/06/2023] [Accepted: 12/06/2023] [Indexed: 01/23/2024] Open
Affiliation(s)
- Masashi Takeuchi
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
| | - Yuko Kitagawa
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
| |
Collapse
|
27
|
Mosca V, Fuschillo G, Sciaudone G, Sahnan K, Selvaggi F, Pellino G. Use of artificial intelligence in total mesorectal excision in rectal cancer surgery: State of the art and perspectives. Artif Intell Gastroenterol 2023; 4:64-71. [DOI: 10.35712/aig.v4.i3.64] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/27/2023] [Revised: 09/13/2023] [Accepted: 10/23/2023] [Indexed: 12/07/2023] Open
Abstract
BACKGROUND Colorectal cancer is a major public health problem, with 1.9 million new cases and 953,000 deaths worldwide in 2020. Total mesorectal excision (TME) is the standard of care for the treatment of rectal cancer and is crucial to prevent local recurrence, but it is a technically challenging surgery. The use of artificial intelligence (AI) could help improve the performance and safety of TME surgery.
AIM To review the literature on the use of AI and machine learning in rectal surgery and potential future developments.
METHODS Online scientific databases were searched for articles on the use of AI in rectal cancer surgery between 2020 and 2023.
RESULTS The literature search yielded 876 results, and only 13 studies were selected for review. The use of AI in rectal cancer surgery and specifically in TME is a rapidly evolving field. There are a number of different AI algorithms that have been developed for use in TME, including algorithms for instrument detection, anatomical structure identification, and image-guided navigation systems.
CONCLUSION AI has the potential to revolutionize TME surgery by providing real-time surgical guidance, preventing complications, and improving training. However, further research is needed to fully understand the benefits and risks of AI in TME surgery.
Collapse
Affiliation(s)
- Vinicio Mosca
- Department of Advanced Medical and Surgical Sciences, Università degli Studi della Campania “Luigi Vanvitelli”, Napoli 80138, Italy
| | - Giacomo Fuschillo
- Department of Advanced Medical and Surgical Sciences, Università degli Studi della Campania “Luigi Vanvitelli”, Napoli 80138, Italy
| | - Guido Sciaudone
- Department of Medicine and Health Sciences “Vincenzo Tiberio”, University of Molise, Campobasso 86100, Italy
| | - Kapil Sahnan
- Department of Colorectal Surgery, St Mark’s Hospital, London HA1 3UJ, United Kingdom
- Department of Surgery and Cancer, Imperial College London, London SW7 5NH, United Kingdom
| | - Francesco Selvaggi
- Department of Advanced Medical and Surgical Sciences, Università degli Studi della Campania “Luigi Vanvitelli”, Napoli 80138, Italy
| | - Gianluca Pellino
- Department of Advanced Medical and Surgical Sciences, Università degli Studi della Campania “Luigi Vanvitelli”, Napoli 80138, Italy
- Colorectal Surgery, Vall d’Hebron University Hospital, Barcelona 08035, Spain
| |
Collapse
|
28
|
Khalid MU, Laplante S, Masino C, Alseidi A, Jayaraman S, Zhang H, Mashouri P, Protserov S, Hunter J, Brudno M, Madani A. Use of artificial intelligence for decision-support to avoid high-risk behaviors during laparoscopic cholecystectomy. Surg Endosc 2023; 37:9467-9475. [PMID: 37697115 DOI: 10.1007/s00464-023-10403-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2023] [Accepted: 08/14/2023] [Indexed: 09/13/2023]
Abstract
INTRODUCTION Bile duct injuries (BDIs) are a significant source of morbidity among patients undergoing laparoscopic cholecystectomy (LC). GoNoGoNet is an artificial intelligence (AI) algorithm that has been developed and validated to identify safe ("Go") and dangerous ("No-Go") zones of dissection during LC, with the potential to prevent BDIs through real-time intraoperative decision-support. This study evaluates GoNoGoNet's ability to predict Go/No-Go zones during LCs with BDIs. METHODS AND PROCEDURES Eleven LC videos with BDI (BDI group) were annotated by GoNoGoNet. All tool-tissue interactions, including the one that caused the BDI, were characterized in relation to the algorithm's predicted location of Go/No-Go zones. These were compared to another 11 LC videos with cholecystitis (control group) deemed to represent "safe cholecystectomy" by experts. The probability threshold of GoNoGoNet annotations were then modulated to determine its relationship to Go/No-Go predictions. Data is shown as % difference [99% confidence interval]. RESULTS Compared to control, the BDI group showed significantly greater proportion of sharp dissection (+ 23.5% [20.0-27.0]), blunt dissection (+ 32.1% [27.2-37.0]), and total interactions (+ 33.6% [31.0-36.2]) outside of the Go zone. Among injury-causing interactions, 4 (36%) were in the No-Go zone, 2 (18%) were in the Go zone, and 5 (45%) were outside both zones, after maximizing the probability threshold of the Go algorithm. CONCLUSION AI has potential to detect unsafe dissection and prevent BDIs through real-time intraoperative decision-support. More work is needed to determine how to optimize integration of this technology into the operating room workflow and adoption by end-users.
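As a rough illustration of how per-pixel Go/No-Go predictions can be thresholded into zones, a sketch follows; GoNoGoNet itself is not reproduced, and the probability maps and threshold are hypothetical.

```python
# Sketch with random probability maps: convert per-pixel Go and No-Go probabilities
# into a zone map (1 = Go, -1 = No-Go, 0 = neither clears the threshold).
import numpy as np

def zone_map(go_prob: np.ndarray, nogo_prob: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    zones = np.zeros(go_prob.shape, dtype=int)
    zones[go_prob >= threshold] = 1
    zones[nogo_prob >= threshold] = -1     # No-Go overrides Go where both exceed the threshold
    return zones

rng = np.random.default_rng(1)
go, nogo = rng.random((64, 64)), rng.random((64, 64))
print(np.bincount(zone_map(go, nogo).ravel() + 1))   # pixel counts: No-Go, undefined, Go
```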
Collapse
Affiliation(s)
- Muhammad Uzair Khalid
- Temerty Faculty of Medicine, University of Toronto, Medical Sciences Building, 1 King's College Circle, Toronto, ON, M5S 1A8, Canada.
- Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada.
| | - Simon Laplante
- Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada
- Department of Surgery, University of Toronto, Toronto, ON, Canada
| | - Caterina Masino
- Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada
| | - Adnan Alseidi
- Department of Surgery, University of California, San Francisco, San Francisco, CA, USA
| | - Shiva Jayaraman
- Department of Surgery, University of Toronto, Toronto, ON, Canada
- Department of Surgery, St Joseph's Health Centre, Toronto, ON, Canada
| | - Haochi Zhang
- DATA Team, University Health Network, Toronto, ON, Canada
| | | | - Sergey Protserov
- DATA Team, University Health Network, Toronto, ON, Canada
- Department of Computer Science, University of Toronto, Toronto, ON, Canada
| | - Jaryd Hunter
- DATA Team, University Health Network, Toronto, ON, Canada
| | - Michael Brudno
- DATA Team, University Health Network, Toronto, ON, Canada
- Department of Computer Science, University of Toronto, Toronto, ON, Canada
| | - Amin Madani
- Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada
- Department of Surgery, University of Toronto, Toronto, ON, Canada
| |
Collapse
|
29
|
Tao H, Fang C, Yang J. ASO Author Reflections: Laparoscopic Anatomical Segment 8 Resection Using Digital Intelligent Liver Surgery Technologies: The Combination of Multiple Navigation Approaches. Ann Surg Oncol 2023; 30:7388-7390. [PMID: 37610492 DOI: 10.1245/s10434-023-14214-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2023] [Accepted: 08/10/2023] [Indexed: 08/24/2023]
Affiliation(s)
- Haisu Tao
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Pazhou Lab, Guangzhou, China
| | - Chihua Fang
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Pazhou Lab, Guangzhou, China
| | - Jian Yang
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China.
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China.
- Pazhou Lab, Guangzhou, China.
| |
Collapse
|
30
|
Kawamura M, Endo Y, Fujinaga A, Orimoto H, Amano S, Kawasaki T, Kawano Y, Masuda T, Hirashita T, Kimura M, Ejima A, Matsunobu Y, Shinozuka K, Tokuyasu T, Inomata M. Development of an artificial intelligence system for real-time intraoperative assessment of the Critical View of Safety in laparoscopic cholecystectomy. Surg Endosc 2023; 37:8755-8763. [PMID: 37567981 DOI: 10.1007/s00464-023-10328-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2023] [Accepted: 07/19/2023] [Indexed: 08/13/2023]
Abstract
BACKGROUND The Critical View of Safety (CVS) was proposed in 1995 to prevent bile duct injury during laparoscopic cholecystectomy (LC), but its achievement has so far been evaluated subjectively. This study aimed to develop an artificial intelligence (AI) system to evaluate CVS scores in LC. MATERIALS AND METHODS AI software was developed to evaluate the achievement of CVS using an algorithm for image classification based on a deep convolutional neural network. Short clips of hepatocystic triangle dissection were converted from 72 LC videos, and 23,793 images were labeled as training data. The learning models were examined using metrics commonly used in machine learning. RESULTS The mean values of precision, recall, F-measure, specificity, and overall accuracy for all the criteria of the best model were 0.971, 0.737, 0.832, 0.966, and 0.834, respectively. Scores for a single image were obtained at approximately 6 fps. CONCLUSIONS Using the AI system, we successfully evaluated the achievement of the CVS criteria in still images and videos of hepatocystic triangle dissection in LC. This encourages surgeons to be aware of CVS and is expected to improve surgical safety.
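The per-criterion metrics reported above (precision, recall, F-measure, specificity, accuracy) all derive from a binary confusion matrix, as the following sketch with hypothetical counts shows.

```python
# Sketch: classification metrics from hypothetical confusion-matrix counts.
def cvs_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                         # sensitivity
    return {
        "precision": precision,
        "recall": recall,
        "f_measure": 2 * precision * recall / (precision + recall),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

print(cvs_metrics(tp=80, fp=5, tn=100, fn=20))
```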
Collapse
Affiliation(s)
- Masahiro Kawamura
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan.
| | - Yuichi Endo
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan
| | - Atsuro Fujinaga
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan
| | - Hiroki Orimoto
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan
| | - Shota Amano
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan
| | - Takahide Kawasaki
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan
| | - Yoko Kawano
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan
| | - Takashi Masuda
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan
| | - Teijiro Hirashita
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan
| | - Misako Kimura
- Department of Information System and Engineering, Faculty of Information Engineering, Fukuoka Institute of Technology, Fukuoka, Japan
| | - Aika Ejima
- Department of Information System and Engineering, Faculty of Information Engineering, Fukuoka Institute of Technology, Fukuoka, Japan
| | - Yusuke Matsunobu
- Department of Information System and Engineering, Faculty of Information Engineering, Fukuoka Institute of Technology, Fukuoka, Japan
| | - Ken'ichi Shinozuka
- Department of Information System and Engineering, Faculty of Information Engineering, Fukuoka Institute of Technology, Fukuoka, Japan
| | - Tatsushi Tokuyasu
- Department of Information System and Engineering, Faculty of Information Engineering, Fukuoka Institute of Technology, Fukuoka, Japan
| | - Masafumi Inomata
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan
| |
Collapse
|
31
|
Shinohara H. Surgery utilizing artificial intelligence technology: why we should not rule it out. Surg Today 2023; 53:1219-1224. [PMID: 36192612 DOI: 10.1007/s00595-022-02601-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2022] [Accepted: 09/13/2022] [Indexed: 10/10/2022]
Abstract
Recent advances in optical and robotic technologies have given surgeons high-definition eyes and precision hands that perform beyond human capabilities. This has expanded the scope of minimally invasive surgery and increased opportunities for surgery in high-risk situations; however, absolute surgical safety has not yet been achieved. Deficiencies in human performance are associated with surgical adverse events, and advanced surgery places stress on surgeons and affects their concentration, causing not only novice surgeons with limited experience but even skilled surgeons to make recognition and decision-making errors. Therefore, the issue of "surgical comfort" for surgeons cannot be ignored. In recent years, artificial intelligence (AI), which is designed to mimic the function of the human brain, has been developed in various fields to assist humans. Computer vision, a visual assistive technology that uses AI, is being applied to surgery and will become available in the near future. AI-controlled robots cannot be expected to replace surgeons, because surgeons operate with higher brain functions that integrate all their abilities, including the senses of humanity, mission, and ethics. However, if there is a way to reduce the mental and physical burden on surgeons by utilizing AI technology, then it should not be ruled out.
Collapse
Affiliation(s)
- Hisashi Shinohara
- Department of Gastroenterological Surgery, Hyogo Medical University, 11 Mukogawa-cho, Nishinomiya, Hyogo, 663-8501, Japan.
| |
Collapse
|
32
|
Park JJ, Doiphode N, Zhang X, Pan L, Blue R, Shi J, Buch VP. Developing the surgeon-machine interface: using a novel instance-segmentation framework for intraoperative landmark labelling. Front Surg 2023; 10:1259756. [PMID: 37936949 PMCID: PMC10626480 DOI: 10.3389/fsurg.2023.1259756] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2023] [Accepted: 09/20/2023] [Indexed: 11/09/2023] Open
Abstract
Introduction The utilisation of artificial intelligence (AI) augments intraoperative safety, surgical training, and patient outcomes. We introduce the term Surgeon-Machine Interface (SMI) to describe this innovative intersection between surgeons and machine inference. A custom deep computer vision (CV) architecture within a sparse labelling paradigm was developed, specifically tailored to conceptualise the SMI. This platform demonstrates the ability to perform instance segmentation on anatomical landmarks and tools from a single open spinal dural arteriovenous fistula (dAVF) surgery video dataset. Methods Our custom deep convolutional neural network was based on the SOLOv2 architecture for precise, instance-level segmentation of surgical video data. The test video consisted of 8,520 frames, with sparse labelling of only 133 frames annotated for training. Accuracy and inference time, assessed using F1-score and mean Average Precision (mAP), were compared against current state-of-the-art architectures on a separate test set of 85 additionally annotated frames. Results Our SMI demonstrated superior accuracy and computing speed compared to these frameworks. The F1-score and mAP achieved by our platform were 17% and 15.2%, respectively, surpassing MaskRCNN (15.2%, 13.9%), YOLOv3 (5.4%, 11.9%), and SOLOv2 (3.1%, 10.4%). Considering detections that exceeded the Intersection over Union threshold of 50%, our platform achieved an impressive F1-score of 44.2% and mAP of 46.3%, outperforming MaskRCNN (41.3%, 43.5%), YOLOv3 (15%, 34.1%), and SOLOv2 (9%, 32.3%). Our platform demonstrated the fastest inference time (88 ms), compared to MaskRCNN (90 ms), SOLOv2 (100 ms), and YOLOv3 (106 ms). Finally, the minimal training set yielded good generalisation performance: our architecture successfully identified objects in frames that were not included in the training or validation sets, indicating its ability to handle out-of-domain scenarios. Discussion We present our development of an innovative intraoperative SMI to demonstrate the future promise of advanced CV in the surgical domain. Through successful implementation in a microscopic dAVF surgery, our framework demonstrates superior performance over current state-of-the-art segmentation architectures in intraoperative landmark guidance with high sample efficiency, representing the most advanced AI-enabled surgical inference platform to date. Our future goals include transfer learning paradigms for scaling to additional surgery types, addressing clinical and technical limitations to real-time decoding, and ultimate enablement of a real-time neurosurgical guidance platform.
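The instance-level F1 at an IoU threshold of 50% reported above can be illustrated with the following sketch, which greedily matches predicted to ground-truth masks; the masks and matching rule are illustrative assumptions, not the authors' evaluation code.

```python
# Sketch: greedy one-to-one matching of predicted to ground-truth instance masks at
# IoU >= 0.5, then F1 from the resulting true/false positives and false negatives.
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def f1_at_iou(preds, truths, thresh=0.5):
    matched, tp = set(), 0
    for p in preds:
        best_j, best_iou = None, thresh
        for j, t in enumerate(truths):
            iou = mask_iou(p, t)
            if j not in matched and iou >= best_iou:
                best_j, best_iou = j, iou
        if best_j is not None:
            matched.add(best_j)
            tp += 1
    fp, fn = len(preds) - tp, len(truths) - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

a = np.zeros((10, 10), bool); a[2:8, 2:8] = True    # predicted instance (synthetic)
b = np.zeros((10, 10), bool); b[3:8, 3:8] = True    # ground-truth instance (synthetic)
print(f1_at_iou([a], [b]))
```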
Collapse
Affiliation(s)
- Jay J. Park
- Department of Neurosurgery, The Surgical Innovation and Machine Interfacing (SIMI) Lab, Stanford University School of Medicine, Stanford, CA, United States
- Centre for Global Health, Usher Institute, Edinburgh Medical School, The University of Edinburgh, Edinburgh, United Kingdom
| | - Nehal Doiphode
- Department of Neurosurgery, The Surgical Innovation and Machine Interfacing (SIMI) Lab, Stanford University School of Medicine, Stanford, CA, United States
- Department of Computer and Information Science, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States
| | - Xiao Zhang
- Department of Computer Science, University of Chicago, Chicago, IL, United States
| | - Lishuo Pan
- Department of Computer Science, Brown University, Providence, RI, United States
| | - Rachel Blue
- Department of Neurosurgery, Perelman School of Medicine at The University of Pennsylvania, Philadelphia, PA, United States
| | - Jianbo Shi
- Department of Computer and Information Science, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States
| | - Vivek P. Buch
- Department of Neurosurgery, The Surgical Innovation and Machine Interfacing (SIMI) Lab, Stanford University School of Medicine, Stanford, CA, United States
| |
Collapse
|
33
|
Lünse S, Wisotzky EL, Beckmann S, Paasch C, Hunger R, Mantke R. Technological advancements in surgical laparoscopy considering artificial intelligence: a survey among surgeons in Germany. Langenbecks Arch Surg 2023; 408:405. [PMID: 37843584 PMCID: PMC10579134 DOI: 10.1007/s00423-023-03134-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2023] [Accepted: 10/02/2023] [Indexed: 10/17/2023]
Abstract
PURPOSE The integration of artificial intelligence (AI) into surgical laparoscopy has shown promising results in recent years. This survey aims to investigate the inconveniences of current conventional laparoscopy and to evaluate the attitudes and desires of surgeons in Germany towards new AI-based laparoscopic systems. METHODS A 12-item web-based questionnaire was distributed to 38 German university hospitals as well as to a Germany-wide voluntary hospital association (CLINOTEL) consisting of 66 hospitals between July and November 2022. RESULTS A total of 202 questionnaires were completed. The majority of respondents (88.1%) stated that they needed one assistant during laparoscopy and rated the assistant's skillfulness as "very important" (39.6%) or "important" (49.5%). The most uncomfortable aspects of conventional laparoscopy were inappropriate camera movement (73.8%) and lens condensation (73.3%). Selected features that should be included in a new laparoscopic system were simple and intuitive maneuverability (81.2%), automatic de-fogging (80.7%), and self-cleaning of the camera (77.2%). Desired AI-based features were improvement of camera positioning (71.3%), visualization of anatomical landmarks (67.3%), image stabilization (66.8%), and tissue damage protection (59.4%). The main reason for purchasing an AI-based system was to improve patient safety (86.1%); the price considered reasonable was €50,000-100,000 (34.2%), and the system was expected to replace up to 25% of the existing assistants' workflow (41.6%). CONCLUSION Simple and intuitive maneuverability with improved and image-stabilized camera guidance, in combination with a lens cleaning system as well as AI-based augmentation of anatomical landmarks and tissue damage protection, seem to be significant requirements for the further development of laparoscopic systems.
Collapse
Affiliation(s)
- Sebastian Lünse
- Department of General and Visceral Surgery, Brandenburg Medical School, University Hospital Brandenburg/Havel, Hochstrasse 29, 14770, Brandenburg, Germany.
| | - Eric L Wisotzky
- Vision and Imaging Technologies, Fraunhofer Heinrich-Hertz-Institut HHI, Einsteinufer 37, 10587, Berlin, Germany
- Department of Computer Science, Humboldt-Universität Zu Berlin, Unter Den Linden 6, 10117, Berlin, Germany
| | - Sophie Beckmann
- Vision and Imaging Technologies, Fraunhofer Heinrich-Hertz-Institut HHI, Einsteinufer 37, 10587, Berlin, Germany
- Department of Computer Science, Humboldt-Universität Zu Berlin, Unter Den Linden 6, 10117, Berlin, Germany
| | - Christoph Paasch
- Department of General and Visceral Surgery, Brandenburg Medical School, University Hospital Brandenburg/Havel, Hochstrasse 29, 14770, Brandenburg, Germany
| | - Richard Hunger
- Department of General and Visceral Surgery, Brandenburg Medical School, University Hospital Brandenburg/Havel, Hochstrasse 29, 14770, Brandenburg, Germany
| | - René Mantke
- Department of General and Visceral Surgery, Brandenburg Medical School, University Hospital Brandenburg/Havel, Hochstrasse 29, 14770, Brandenburg, Germany
- Faculty of Health Science Brandenburg, Brandenburg Medical School, University Hospital Brandenburg/Havel, 14770, Brandenburg, Germany
| |
Collapse
|
34
|
Kitaguchi D, Harai Y, Kosugi N, Hayashi K, Kojima S, Ishikawa Y, Yamada A, Hasegawa H, Takeshita N, Ito M. Artificial intelligence for the recognition of key anatomical structures in laparoscopic colorectal surgery. Br J Surg 2023; 110:1355-1358. [PMID: 37552629 DOI: 10.1093/bjs/znad249] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2023] [Revised: 07/10/2023] [Accepted: 07/10/2023] [Indexed: 08/10/2023]
Abstract
Lay Summary
To prevent intraoperative organ injury, surgeons strive to identify anatomical structures as early and accurately as possible during surgery. The objective of this prospective observational study was to develop artificial intelligence (AI)-based real-time automatic organ recognition models for laparoscopic surgery and to compare their performance with that of surgeons. The time taken to recognize the target anatomy was compared between the AI models and both expert and novice surgeons. The AI models demonstrated faster recognition of the target anatomy than surgeons, especially novice surgeons. These findings suggest that AI has the potential to compensate for the skill and experience gap between surgeons.
Collapse
Affiliation(s)
- Daichi Kitaguchi
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Centre Hospital East, Chiba, Japan
| | - Yuriko Harai
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
| | - Norihito Kosugi
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
| | - Kazuyuki Hayashi
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
| | - Shigehiro Kojima
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
| | - Yuto Ishikawa
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
| | - Atsushi Yamada
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
| | - Hiro Hasegawa
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Centre Hospital East, Chiba, Japan
| | - Nobuyoshi Takeshita
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
| | - Masaaki Ito
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Centre Hospital East, Chiba, Japan
| |
Collapse
|
35
|
Alkhamaiseh KN, Grantner JL, Shebrain S, Abdel-Qader I. Towards reliable hepatocytic anatomy segmentation in laparoscopic cholecystectomy using U-Net with Auto-Encoder. Surg Endosc 2023; 37:7358-7369. [PMID: 37491657 DOI: 10.1007/s00464-023-10306-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2023] [Accepted: 07/12/2023] [Indexed: 07/27/2023]
Abstract
BACKGROUND Most bile duct injuries (BDI) during laparoscopic cholecystectomy (LC) occur due to visual misperception leading to the misinterpretation of anatomy. Deep learning (DL) models for surgical video analysis could, therefore, support visual tasks such as identifying the critical view of safety (CVS). This study aims to develop a prediction model of CVS during LC, using a deep neural network integrated with a segmentation model capable of highlighting hepatocytic anatomy. METHODS Still images from LC videos were annotated with four hepatocystic anatomy segmentation landmarks. A deep autoencoder neural network with U-Net was trained and tested for accurate medical image segmentation using fivefold cross-validation. Accuracy, Loss, Intersection over Union (IoU), Precision, Recall, and Hausdorff Distance were computed to evaluate the model performance against the annotated ground truth. RESULTS A total of 1550 images from 200 LC videos were annotated. Mean IoU for segmentation was 74.65%. The proposed approach performed well for automatic hepatocytic landmark identification, with 92% accuracy and 93.9% precision, and can segment challenging cases. CONCLUSION DL can potentially provide an intraoperative model for surgical video analysis and can be trained to guide surgeons toward reliable hepatocytic anatomy segmentation and produce selective video documentation of this safety step of LC.
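One of the reported metrics, the Hausdorff distance, can be computed as in the following sketch with SciPy; the boundary points are synthetic, not segmentation output from the study.

```python
# Sketch: symmetric Hausdorff distance between predicted and ground-truth boundary points.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

pred_boundary = np.array([[0, 0], [0, 5], [5, 0], [5, 5]], dtype=float)   # synthetic contour points
true_boundary = np.array([[1, 1], [1, 6], [6, 1], [6, 6]], dtype=float)

hd = max(directed_hausdorff(pred_boundary, true_boundary)[0],
         directed_hausdorff(true_boundary, pred_boundary)[0])
print(f"Hausdorff distance: {hd:.2f}")
```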
Affiliation(s)
- Koloud N Alkhamaiseh
- Department of Electrical and Computer Engineering, Western Michigan University, Kalamazoo, MI, USA.
| | - Janos L Grantner
- Department of Electrical and Computer Engineering, Western Michigan University, Kalamazoo, MI, USA
| | - Saad Shebrain
- Western Michigan University Homer Stryker MD School of Medicine, Kalamazoo, MI, USA
| | - Ikhlas Abdel-Qader
- Department of Electrical and Computer Engineering, Western Michigan University, Kalamazoo, MI, USA
36
Igaki T, Kitaguchi D, Matsuzaki H, Nakajima K, Kojima S, Hasegawa H, Takeshita N, Kinugasa Y, Ito M. Automatic Surgical Skill Assessment System Based on Concordance of Standardized Surgical Field Development Using Artificial Intelligence. JAMA Surg 2023; 158:e231131. [PMID: 37285142 PMCID: PMC10248810 DOI: 10.1001/jamasurg.2023.1131] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2022] [Accepted: 01/28/2023] [Indexed: 06/08/2023]
Abstract
Importance Automatic surgical skill assessment with artificial intelligence (AI) is more objective than manual video review-based skill assessment and can reduce human burden. Standardization of surgical field development is an important aspect of this skill assessment. Objective To develop a deep learning model that can recognize the standardized surgical fields in laparoscopic sigmoid colon resection and to evaluate the feasibility of automatic surgical skill assessment based on the concordance of the standardized surgical field development using the proposed deep learning model. Design, Setting, and Participants This retrospective diagnostic study used intraoperative videos of laparoscopic colorectal surgery submitted to the Japan Society for Endoscopic Surgery between August 2016 and November 2017. Data were analyzed from April 2020 to September 2022. Interventions Videos of surgery performed by expert surgeons with Endoscopic Surgical Skill Qualification System (ESSQS) scores higher than 75 were used to construct a deep learning model able to recognize a standardized surgical field and output its similarity to standardized surgical field development as an AI confidence score (AICS). Other videos were extracted as the validation set. Main Outcomes and Measures Videos with scores less than or greater than 2 SDs from the mean were defined as the low- and high-score groups, respectively. The correlation between AICS and ESSQS score and the screening performance using AICS for low- and high-score groups were analyzed. Results The sample included 650 intraoperative videos, 60 of which were used for model construction and 60 for validation. The Spearman rank correlation coefficient between the AICS and ESSQS score was 0.81. The receiver operating characteristic (ROC) curves for the screening of the low- and high-score groups were plotted, and the areas under the ROC curve for the low- and high-score group screening were 0.93 and 0.94, respectively. Conclusions and Relevance The AICS from the developed model strongly correlated with the ESSQS score, demonstrating the model's feasibility for use as a method of automatic surgical skill assessment. The findings also suggest the feasibility of the proposed model for creating an automated screening system for surgical skills and its potential application to other types of endoscopic procedures.
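For readers wanting to reproduce this style of analysis, the sketch below shows how a Spearman rank correlation between an AI confidence score and a manual skill score, and an ROC AUC for screening a low-score group, could be computed with SciPy and scikit-learn. The variable names and simulated scores are illustrative assumptions only; they are not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
# Simulated manual skill scores; a few deliberately low values are included so
# that the "low-score" screening group (mean - 2 SD) is not empty.
essqs = np.concatenate([rng.normal(80, 8, 55), rng.normal(45, 5, 5)])
# Simulated AI confidence scores loosely tracking the manual scores.
aics = essqs / 100 + rng.normal(0, 0.08, essqs.size)

rho, p = spearmanr(aics, essqs)
print(f"Spearman rho = {rho:.2f} (p = {p:.2g})")

# Screening: flag videos more than 2 SD below the mean manual score, then ask
# how well a low AI confidence score identifies them.
low_group = essqs < essqs.mean() - 2 * essqs.std()
auc = roc_auc_score(low_group, -aics)
print(f"AUC for low-score screening = {auc:.2f}")
```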
Affiliation(s)
- Takahiro Igaki
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Department of Gastrointestinal Surgery, Tokyo Medical and Dental University Graduate School of Medicine, Yushima, Bunkyo-Ku, Tokyo, Japan
| | - Daichi Kitaguchi
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
| | - Hiroki Matsuzaki
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
| | - Kei Nakajima
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
| | - Shigehiro Kojima
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
| | - Hiro Hasegawa
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
| | - Nobuyoshi Takeshita
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
| | - Yusuke Kinugasa
- Department of Gastrointestinal Surgery, Tokyo Medical and Dental University Graduate School of Medicine, Yushima, Bunkyo-Ku, Tokyo, Japan
| | - Masaaki Ito
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
37
Sengun B, Iscan Y, Tataroglu Ozbulak GA, Kumbasar N, Egriboz E, Sormaz IC, Aksakal N, Deniz SM, Haklidir M, Tunca F, Giles Senyurek Y. Artificial Intelligence in Minimally Invasive Adrenalectomy: Using Deep Learning to Identify the Left Adrenal Vein. Surg Laparosc Endosc Percutan Tech 2023; 33:327-331. [PMID: 37311027 DOI: 10.1097/sle.0000000000001185] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2023] [Accepted: 04/18/2023] [Indexed: 06/15/2023]
Abstract
BACKGROUND Minimally invasive adrenalectomy is the main surgical treatment option for the resection of adrenal masses. Recognition and ligation of adrenal veins are critical parts of adrenal surgery. Artificial intelligence and deep learning algorithms that identify anatomic structures during laparoscopic and robot-assisted surgery can be used to provide real-time guidance. METHODS In this experimental feasibility study, intraoperative videos of patients who underwent minimally invasive transabdominal left adrenalectomy procedures between 2011 and 2022 in a tertiary endocrine referral center were retrospectively analyzed and used to develop an artificial intelligence model. Semantic segmentation of the left adrenal vein with deep learning was performed. To train a model, 50 random images per patient were captured during the identification and dissection of the left adrenal vein. A randomly selected 70% of the data was used to train the models, 15% for testing, and 15% for validation, using 3 efficient stage-wise feature pyramid networks (ESFPNet). Dice similarity coefficient (DSC) and intersection over union scores were used to evaluate segmentation accuracy. RESULTS A total of 40 videos were analyzed. Annotation of the left adrenal vein was performed in 2000 images. The segmentation network, trained on 1400 images, was used to identify the left adrenal vein in 300 test images. The mean DSC and sensitivity for the highest-scoring ESFPNet B-2 network were 0.77 (±0.16 SD) and 0.82 (±0.15 SD), respectively, while the maximum DSC was 0.93, suggesting a successful prediction of anatomy. CONCLUSIONS Deep learning algorithms can predict the left adrenal vein anatomy with high performance and can potentially be utilized to identify critical anatomy during adrenal surgery and provide real-time guidance in the near future.
Affiliation(s)
- Berke Sengun
- Department of General Surgery, Istanbul University, Istanbul Faculty of Medicine, Istanbul, Turkey
| | - Yalin Iscan
- Department of General Surgery, Istanbul University, Istanbul Faculty of Medicine, Istanbul, Turkey
| | | | | | | | - Ismail C Sormaz
- Department of General Surgery, Istanbul University, Istanbul Faculty of Medicine, Istanbul, Turkey
| | - Nihat Aksakal
- Department of General Surgery, Istanbul University, Istanbul Faculty of Medicine, Istanbul, Turkey
| | | | | | - Fatih Tunca
- Department of General Surgery, Istanbul University, Istanbul Faculty of Medicine, Istanbul, Turkey
| | - Yasemin Giles Senyurek
- Department of General Surgery, Istanbul University, Istanbul Faculty of Medicine, Istanbul, Turkey
38
Cheung HC, De Louche C, Komorowski M. Artificial Intelligence Applications in Space Medicine. Aerosp Med Hum Perform 2023; 94:610-622. [PMID: 37501303 DOI: 10.3357/amhp.6178.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/29/2023]
Abstract
INTRODUCTION: During future interplanetary space missions, a number of health conditions may arise, owing to the hostile environment of space and the myriad of stressors experienced by the crew. When managing these conditions, crews will be required to make accurate, timely clinical decisions at a high level of autonomy, as telecommunication delays and increasing distances restrict real-time support from the ground. On Earth, artificial intelligence (AI) has proven successful in healthcare, augmenting expert clinical decision-making or enhancing medical knowledge where it is lacking. Similarly, deploying AI tools in the context of a space mission could improve crew self-reliance and healthcare delivery. METHODS: We conducted a narrative review to discuss existing AI applications that could improve the prevention, recognition, evaluation, and management of the most mission-critical conditions, including psychological and mental health, acute radiation sickness, surgical emergencies, spaceflight-associated neuro-ocular syndrome, infections, and cardiovascular deconditioning. RESULTS: Some examples of the applications we identified include AI chatbots designed to prevent and mitigate psychological and mental health conditions, automated medical imaging analysis, and closed-loop systems for hemodynamic optimization. We also discuss at length gaps in current technologies, as well as the key challenges and limitations of developing and deploying AI for space medicine to inform future research and innovation. Indeed, shifts in patient cohorts, space-induced physiological changes, limited size and breadth of space biomedical datasets, and changes in disease characteristics may render the models invalid when transferred from ground settings into space. Cheung HC, De Louche C, Komorowski M. Artificial intelligence applications in space medicine. Aerosp Med Hum Perform. 2023; 94(8):610-622.
39
Sone K, Tanimoto S, Toyohara Y, Taguchi A, Miyamoto Y, Mori M, Iriyama T, Wada-Hiraike O, Osuga Y. Evolution of a surgical system using deep learning in minimally invasive surgery (Review). Biomed Rep 2023; 19:45. [PMID: 37324165 PMCID: PMC10265572 DOI: 10.3892/br.2023.1628] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2022] [Accepted: 03/31/2023] [Indexed: 06/17/2023] Open
Abstract
Recently, artificial intelligence (AI) has been applied in various fields due to the development of new learning methods, such as deep learning, and the marked progress in computational processing speed. AI is also being applied in the medical field for medical image recognition and omics analysis of genomes and other data. Recently, AI applications for videos of minimally invasive surgeries have also advanced, and studies on such applications are increasing. In the present review, studies that focused on the following topics were selected: i) Organ and anatomy identification, ii) instrument identification, iii) procedure and surgical phase recognition, iv) surgery-time prediction, v) identification of an appropriate incision line, and vi) surgical education. The development of autonomous surgical robots is also progressing, with the Smart Tissue Autonomous Robot (STAR) and RAVEN systems being the most reported developments. STAR, in particular, is currently being used in laparoscopic imaging to recognize the surgical site from laparoscopic images and is in the process of establishing an automated suturing system, albeit in animal experiments. The present review examined the possibility of fully autonomous surgical robots in the future.
Affiliation(s)
- Kenbun Sone
- Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo 113-8655, Japan
| | - Saki Tanimoto
- Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo 113-8655, Japan
| | - Yusuke Toyohara
- Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo 113-8655, Japan
| | - Ayumi Taguchi
- Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo 113-8655, Japan
| | - Yuichiro Miyamoto
- Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo 113-8655, Japan
| | - Mayuyo Mori
- Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo 113-8655, Japan
| | - Takayuki Iriyama
- Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo 113-8655, Japan
| | - Osamu Wada-Hiraike
- Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo 113-8655, Japan
| | - Yutaka Osuga
- Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo 113-8655, Japan
40
Endo Y, Tokuyasu T, Mori Y, Asai K, Umezawa A, Kawamura M, Fujinaga A, Ejima A, Kimura M, Inomata M. Impact of AI system on recognition for anatomical landmarks related to reducing bile duct injury during laparoscopic cholecystectomy. Surg Endosc 2023:10.1007/s00464-023-10224-5. [PMID: 37365396 DOI: 10.1007/s00464-023-10224-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2023] [Accepted: 06/16/2023] [Indexed: 06/28/2023]
Abstract
BACKGROUND According to the National Clinical Database of Japan, the incidence of bile duct injury (BDI) during laparoscopic cholecystectomy has hovered around 0.4% for the last 10 years and has not declined. Moreover, it has been found that about 60% of BDI occurrences are due to misidentification of anatomical landmarks. The authors therefore developed an artificial intelligence (AI) system that provides intraoperative guidance for recognizing the extrahepatic bile duct (EHBD), cystic duct (CD), inferior border of liver S4 (S4), and Rouviere sulcus (RS). The purpose of this research was to evaluate how the AI system affects landmark identification. METHODS We prepared a 20-s intraoperative video before the serosal incision of Calot's triangle dissection and created a short video with landmarks overwritten by AI. The landmarks were defined as landmark (LM)-EHBD, LM-CD, LM-RS, and LM-S4. Four beginners and four experts were recruited as subjects. After viewing the 20-s intraoperative video, subjects annotated the LM-EHBD and LM-CD. They were then shown the short video with the AI-overwritten landmark guidance and, if their interpretation changed, revised their annotations. The subjects answered a three-point scale questionnaire to clarify whether the AI teaching data increased their confidence in verifying the LM-RS and LM-S4. Four external evaluation committee members investigated the clinical importance. RESULTS In 43 of 160 (26.9%) images, the subjects changed their annotations. Annotation changes were primarily observed in the gallbladder line of the LM-EHBD and LM-CD, and 70% of these shifts were considered safer changes. The AI-based teaching data encouraged both beginners and experts to affirm the LM-RS and LM-S4. CONCLUSION The AI system provided significant awareness to beginners and experts and prompted them to identify anatomical landmarks linked to reducing BDI.
Affiliation(s)
- Yuichi Endo
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan.
| | - Tatsushi Tokuyasu
- Department of Information System and Engineering, Faculty of Information Engineering, Fukuoka Institute of Technology, Fukuoka, Japan
| | - Yasuhisa Mori
- Department of Surgery 1, School of Medicine, University of Occupational and Environmental Health, Kitakyushu, Fukuoka, Japan
| | - Koji Asai
- Department of Surgery, Toho University Ohashi Medical Center, Tokyo, Japan
| | - Akiko Umezawa
- Minimally Invasive Surgery Center, Yotsuya Medical Cube, Tokyo, Japan
| | - Masahiro Kawamura
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan
| | - Atsuro Fujinaga
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan
| | - Aika Ejima
- Department of Information System and Engineering, Faculty of Information Engineering, Fukuoka Institute of Technology, Fukuoka, Japan
| | - Misako Kimura
- Department of Information System and Engineering, Faculty of Information Engineering, Fukuoka Institute of Technology, Fukuoka, Japan
| | - Masafumi Inomata
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, Oita, Japan
41
Lavanchy JL, Vardazaryan A, Mascagni P, Mutter D, Padoy N. Preserving privacy in surgical video analysis using a deep learning classifier to identify out-of-body scenes in endoscopic videos. Sci Rep 2023; 13:9235. [PMID: 37286660 DOI: 10.1038/s41598-023-36453-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2023] [Accepted: 06/03/2023] [Indexed: 06/09/2023] Open
Abstract
Surgical video analysis facilitates education and research. However, video recordings of endoscopic surgeries can contain privacy-sensitive information, especially if the endoscopic camera is moved out of the body of patients and out-of-body scenes are recorded. Therefore, identification of out-of-body scenes in endoscopic videos is of major importance to preserve the privacy of patients and operating room staff. This study developed and validated a deep learning model for the identification of out-of-body images in endoscopic videos. The model was trained and evaluated on an internal dataset of 12 different types of laparoscopic and robotic surgeries and was externally validated on two independent multicentric test datasets of laparoscopic gastric bypass and cholecystectomy surgeries. Model performance was evaluated compared to human ground truth annotations measuring the receiver operating characteristic area under the curve (ROC AUC). The internal dataset consisting of 356,267 images from 48 videos and the two multicentric test datasets consisting of 54,385 and 58,349 images from 10 and 20 videos, respectively, were annotated. The model identified out-of-body images with 99.97% ROC AUC on the internal test dataset. Mean ± standard deviation ROC AUC on the multicentric gastric bypass dataset was 99.94 ± 0.07% and 99.71 ± 0.40% on the multicentric cholecystectomy dataset, respectively. The model can reliably identify out-of-body images in endoscopic videos and is publicly shared. This facilitates privacy preservation in surgical video analysis.
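As a sketch of the downstream privacy-preservation step such a classifier enables, the snippet below blanks every frame whose predicted out-of-body probability exceeds a threshold. The array shapes, probabilities, and the 0.5 threshold are illustrative assumptions, not outputs of the published model.

```python
import numpy as np

# Toy video: 8 RGB frames of 64x64 pixels.
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(8, 64, 64, 3), dtype=np.uint8)

# Hypothetical per-frame out-of-body probabilities (in practice these would
# come from running the classifier on each frame of the recording).
oob_prob = np.array([0.01, 0.02, 0.95, 0.98, 0.97, 0.03, 0.01, 0.99])

THRESHOLD = 0.5
censored = frames.copy()
censored[oob_prob >= THRESHOLD] = 0   # blank privacy-sensitive out-of-body frames

print(f"Blanked {int((oob_prob >= THRESHOLD).sum())} of {len(frames)} frames")
```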
Affiliation(s)
- Joël L Lavanchy
- IHU Strasbourg, 1 Place de l'Hôpital, 67091, Strasbourg Cedex, France.
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland.
- Division of Surgery, Clarunis-University Center for Gastrointestinal and Liver Diseases, St Clara and University Hospital of Basel, Basel, Switzerland.
| | - Armine Vardazaryan
- IHU Strasbourg, 1 Place de l'Hôpital, 67091, Strasbourg Cedex, France
- ICube, University of Strasbourg, CNRS, Strasbourg, France
| | - Pietro Mascagni
- IHU Strasbourg, 1 Place de l'Hôpital, 67091, Strasbourg Cedex, France
- Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
| | - Didier Mutter
- IHU Strasbourg, 1 Place de l'Hôpital, 67091, Strasbourg Cedex, France
- University Hospital of Strasbourg, Strasbourg, France
| | - Nicolas Padoy
- IHU Strasbourg, 1 Place de l'Hôpital, 67091, Strasbourg Cedex, France
- ICube, University of Strasbourg, CNRS, Strasbourg, France
42
Fujinaga A, Endo Y, Etoh T, Kawamura M, Nakanuma H, Kawasaki T, Masuda T, Hirashita T, Kimura M, Matsunobu Y, Shinozuka K, Tanaka Y, Kamiyama T, Sugita T, Morishima K, Ebe K, Tokuyasu T, Inomata M. Development of a cross-artificial intelligence system for identifying intraoperative anatomical landmarks and surgical phases during laparoscopic cholecystectomy. Surg Endosc 2023:10.1007/s00464-023-10097-8. [PMID: 37142714 DOI: 10.1007/s00464-023-10097-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2023] [Accepted: 04/19/2023] [Indexed: 05/06/2023]
Abstract
BACKGROUND Attention to anatomical landmarks in the appropriate surgical phase is important to prevent bile duct injury (BDI) during laparoscopic cholecystectomy (LC). Therefore, we created a cross-AI system that runs two different AI algorithms simultaneously: landmark detection and phase recognition. We assessed whether landmark detection was activated in the appropriate phase by phase recognition during LC, and the potential contribution of the cross-AI system to preventing BDI, through a clinical feasibility study (J-SUMMIT-C-02). METHODS A prototype was designed to display landmarks during the preparation phase and Calot's triangle dissection. A prospective clinical feasibility study using the cross-AI system was performed in 20 LC cases. The primary endpoint of this study was the appropriateness of the detection timing of landmarks, which was assessed by an external evaluation committee (EEC). The secondary endpoint was the correctness of landmark detection and the contribution of cross-AI in preventing BDI, which were assessed based on the annotations and a 4-point rubric questionnaire. RESULTS The cross-AI system detected landmarks in 92% of the phases in which the EEC considered landmarks necessary. In the questionnaire, each landmark detected by AI had high accuracy, especially the landmarks of the common bile duct and cystic duct, which were rated at 3.78 and 3.67, respectively. In addition, the contribution to preventing BDI was relatively high at 3.65. CONCLUSIONS The cross-AI system provided landmark detection in appropriate situations. The surgeons who previewed the model suggested that the landmark information provided by the cross-AI system may be effective in preventing BDI. Therefore, it is suggested that our system could help prevent BDI in practice. Trial registration University Hospital Medical Information Network Research Center Clinical Trial Registration System (UMIN000045731).
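The central idea of the cross-AI system, using the recognized surgical phase to decide when landmark detections are displayed, can be sketched in a few lines. The phase names and gating rule below are assumptions made for illustration; they do not reproduce the authors' implementation.

```python
from dataclasses import dataclass
from typing import List

# Phases during which landmark overlays are shown (assumed names).
LANDMARK_PHASES = {"preparation", "calot_triangle_dissection"}

@dataclass
class FrameResult:
    phase: str            # output of the phase-recognition model
    landmarks: List[str]  # output of the landmark-detection model

def overlay_for(frame: FrameResult) -> List[str]:
    """Return the landmarks to draw; suppress them outside the relevant phases."""
    return frame.landmarks if frame.phase in LANDMARK_PHASES else []

if __name__ == "__main__":
    stream = [
        FrameResult("port_insertion", ["RS"]),
        FrameResult("calot_triangle_dissection", ["EHBD", "CD", "S4"]),
        FrameResult("gallbladder_dissection", ["S4"]),
    ]
    for i, frame in enumerate(stream):
        print(i, frame.phase, "->", overlay_for(frame))
```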
Affiliation(s)
- Atsuro Fujinaga
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, 1-1 Idaigaoka, Hasama-Machi, Oita, 879-5593, Japan.
| | - Yuichi Endo
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, 1-1 Idaigaoka, Hasama-Machi, Oita, 879-5593, Japan
| | - Tsuyoshi Etoh
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, 1-1 Idaigaoka, Hasama-Machi, Oita, 879-5593, Japan
| | - Masahiro Kawamura
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, 1-1 Idaigaoka, Hasama-Machi, Oita, 879-5593, Japan
| | - Hiroaki Nakanuma
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, 1-1 Idaigaoka, Hasama-Machi, Oita, 879-5593, Japan
| | - Takahide Kawasaki
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, 1-1 Idaigaoka, Hasama-Machi, Oita, 879-5593, Japan
| | - Takashi Masuda
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, 1-1 Idaigaoka, Hasama-Machi, Oita, 879-5593, Japan
| | - Teijiro Hirashita
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, 1-1 Idaigaoka, Hasama-Machi, Oita, 879-5593, Japan
| | - Misako Kimura
- Department of Information Systems and Engineering, Faculty of Information Engineering, Fukuoka Institute of Technology, Fukuoka, Japan
| | - Yusuke Matsunobu
- Department of Information Systems and Engineering, Faculty of Information Engineering, Fukuoka Institute of Technology, Fukuoka, Japan
| | - Ken'ichi Shinozuka
- Department of Information Systems and Engineering, Faculty of Information Engineering, Fukuoka Institute of Technology, Fukuoka, Japan
| | - Yuki Tanaka
- Advanced AI Technology Research, Advanced Technology Research, Olympus Corporation, Tokyo, Japan
| | - Toshiya Kamiyama
- Advanced AI Technology Research, Advanced Technology Research, Olympus Corporation, Tokyo, Japan
| | - Takemasa Sugita
- Advanced AI Technology Research, Advanced Technology Research, Olympus Corporation, Tokyo, Japan
| | - Kenichi Morishima
- Advanced AI Technology Research, Advanced Technology Research, Olympus Corporation, Tokyo, Japan
| | - Kohei Ebe
- Information Aided Medical Solutions Development, Application Software Engineering, Olympus Medical Systems Corporation, Tokyo, Japan
| | - Tatsushi Tokuyasu
- Department of Information Systems and Engineering, Faculty of Information Engineering, Fukuoka Institute of Technology, Fukuoka, Japan
| | - Masafumi Inomata
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, 1-1 Idaigaoka, Hasama-Machi, Oita, 879-5593, Japan
43
Wu S, Chen Z, Liu R, Li A, Cao Y, Wei A, Liu Q, Liu J, Wang Y, Jiang J, Ying Z, An J, Peng B, Wang X. SurgSmart: an artificial intelligent system for quality control in laparoscopic cholecystectomy: an observational study. Int J Surg 2023; 109:1105-1114. [PMID: 37039533 PMCID: PMC10389595 DOI: 10.1097/js9.0000000000000329] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Accepted: 02/22/2023] [Indexed: 04/12/2023]
Abstract
BACKGROUND The rate of bile duct injury in laparoscopic cholecystectomy (LC) continues to be high due to low critical view of safety (CVS) achievement and the absence of an effective quality control system. The development of an intelligent system enables the automatic quality control of LC surgery and, eventually, the mitigation of bile duct injury. This study aims to develop an intelligent surgical quality control system for LC and to use the system to evaluate LC videos and investigate factors associated with CVS achievement. MATERIALS AND METHODS SurgSmart, an intelligent system capable of recognizing surgical phases, disease severity, critical division action, and CVS automatically, was developed using training datasets. SurgSmart was also applied to another multicenter dataset to validate its applicability and investigate factors associated with CVS achievement. RESULTS SurgSmart performed well in all models, with the critical division action model achieving the highest overall accuracy (98.49%), followed by the disease severity model (95.45%) and surgical phases model (88.61%). CVSI, CVSII, and CVSIII had accuracies of 80.64, 97.62, and 78.87%, respectively. CVS was achieved in 4.33% of cases in the system application dataset. In addition, the analysis indicated that surgeons at higher-level hospitals had a higher CVS achievement rate. However, there was still considerable variation in CVS achievement among surgeons in the same hospital. CONCLUSIONS SurgSmart, the surgical quality control system, performed admirably in our study. In addition, the system's initial application demonstrated its broad potential for use in surgical quality control.
Affiliation(s)
- Shangdi Wu
- Division of Pancreatic Surgery, Department of General Surgery
- West China School of Medicine
| | - Zixin Chen
- Division of Pancreatic Surgery, Department of General Surgery
- West China School of Medicine
| | - Runwen Liu
- ChengDu Withai Innovations Technology Company
| | - Ang Li
- Division of Pancreatic Surgery, Department of General Surgery
- Guang’an People’s Hospital, Guang’an, Sichuan Province, China
| | - Yu Cao
- Operating Room
- West China School of Nursing, Sichuan University
| | - Ailin Wei
- Guang’an People’s Hospital, Guang’an, Sichuan Province, China
| | | | - Jie Liu
- ChengDu Withai Innovations Technology Company
| | - Yuxian Wang
- ChengDu Withai Innovations Technology Company
| | - Jingwen Jiang
- West China Biomedical Big Data Center, West China Hospital of Sichuan University
- Med-X Center for Informatics, Sichuan University, Chengdu
| | - Zhiye Ying
- West China Biomedical Big Data Center, West China Hospital of Sichuan University
- Med-X Center for Informatics, Sichuan University, Chengdu
| | - Jingjing An
- Operating Room
- West China School of Nursing, Sichuan University
| | - Bing Peng
- Division of Pancreatic Surgery, Department of General Surgery
- West China School of Medicine
| | - Xin Wang
- Division of Pancreatic Surgery, Department of General Surgery
- West China School of Medicine
44
Ríos MS, Molina-Rodriguez MA, Londoño D, Guillén CA, Sierra S, Zapata F, Giraldo LF. Cholec80-CVS: An open dataset with an evaluation of Strasberg's critical view of safety for AI. Sci Data 2023; 10:194. [PMID: 37031247 PMCID: PMC10082817 DOI: 10.1038/s41597-023-02073-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2022] [Accepted: 03/15/2023] [Indexed: 04/10/2023] Open
Abstract
Strasberg's criteria for detecting the critical view of safety are a widely known strategy to reduce bile duct injuries during laparoscopic cholecystectomy. In spite of their popularity and efficiency, recent studies have shown that human misidentification errors still lead to substantial rates of bile duct injury. Developing tools based on artificial intelligence that facilitate the identification of the critical view of safety in cholecystectomy surgeries can potentially minimize the risk of such injuries. With this goal in mind, we present Cholec80-CVS, the first open dataset with video annotations of Strasberg's Critical View of Safety (CVS) criteria. Our dataset contains CVS criteria annotations provided by skilled surgeons for all videos in the well-known Cholec80 open video dataset. We consider Cholec80-CVS to be the first step towards the creation of intelligent systems that can assist humans during laparoscopic cholecystectomy.
Affiliation(s)
- Manuel Sebastián Ríos
- Department of Electric and Electronic Engineering, Universidad de Los Andes, Bogotá D.C., Colombia
| | | | - Daniella Londoño
- Department of General Surgery, Universidad CES, Medellín, Colombia
| | - Camilo Andrés Guillén
- Department of Electric and Electronic Engineering, Universidad de Los Andes, Bogotá D.C., Colombia
| | - Sebastián Sierra
- Department of General Surgery, Universidad CES, Medellín, Colombia
| | - Felipe Zapata
- Department of General Surgery, Universidad CES, Medellín, Colombia
| | - Luis Felipe Giraldo
- Department of Biomedical Engineering, Universidad de Los Andes, Bogotá D.C., Colombia.
45
Kojima S, Kitaguchi D, Igaki T, Nakajima K, Ishikawa Y, Harai Y, Yamada A, Lee Y, Hayashi K, Kosugi N, Hasegawa H, Ito M. Deep-learning-based semantic segmentation of autonomic nerves from laparoscopic images of colorectal surgery: an experimental pilot study. Int J Surg 2023; 109:813-820. [PMID: 36999784 PMCID: PMC10389575 DOI: 10.1097/js9.0000000000000317] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2022] [Accepted: 02/21/2023] [Indexed: 04/01/2023]
Abstract
BACKGROUND The preservation of autonomic nerves is the most important factor in maintaining genitourinary function in colorectal surgery; however, these nerves are not clearly recognisable, and their identification is strongly affected by the surgical ability. Therefore, this study aimed to develop a deep learning model for the semantic segmentation of autonomic nerves during laparoscopic colorectal surgery and to experimentally verify the model through intraoperative use and pathological examination. MATERIALS AND METHODS The annotation data set comprised videos of laparoscopic colorectal surgery. The images of the hypogastric nerve (HGN) and superior hypogastric plexus (SHP) were manually annotated under a surgeon's supervision. The Dice coefficient was used to quantify the model performance after five-fold cross-validation. The model was used in actual surgeries to compare the recognition timing of the model with that of surgeons, and pathological examination was performed to confirm whether the samples labelled by the model from the colorectal branches of the HGN and SHP were nerves. RESULTS The data set comprised 12 978 video frames of the HGN from 245 videos and 5198 frames of the SHP from 44 videos. The mean (±SD) Dice coefficients of the HGN and SHP were 0.56 (±0.03) and 0.49 (±0.07), respectively. The proposed model was used in 12 surgeries, and it recognised the right HGN earlier than the surgeons did in 50.0% of the cases, the left HGN earlier in 41.7% of the cases and the SHP earlier in 50.0% of the cases. Pathological examination confirmed that all 11 samples were nerve tissue. CONCLUSION An approach for the deep-learning-based semantic segmentation of autonomic nerves was developed and experimentally validated. This model may facilitate intraoperative recognition during laparoscopic colorectal surgery.
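The Dice coefficients above are obtained after five-fold cross-validation. The sketch below shows one way to organise such a loop over annotated frames with scikit-learn; train_model and dice_on are placeholders standing in for the actual segmentation training and evaluation, and the returned scores are simulated.

```python
import numpy as np
from sklearn.model_selection import KFold

def train_model(train_idx):
    """Placeholder for training a segmentation network on the given frames."""
    return {"trained_on": len(train_idx)}

def dice_on(model, val_idx, rng):
    """Placeholder for evaluating mean Dice on the held-out frames."""
    return rng.uniform(0.45, 0.60)

frame_ids = np.arange(12978)          # e.g. indices of the HGN frames
rng = np.random.default_rng(0)
scores = []
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kfold.split(frame_ids)):
    model = train_model(train_idx)
    scores.append(dice_on(model, val_idx, rng))
    print(f"fold {fold}: Dice = {scores[-1]:.2f}")
print(f"mean ± SD Dice = {np.mean(scores):.2f} ± {np.std(scores):.2f}")
```

On real data the split would normally be grouped by video (e.g. with GroupKFold) so that frames from the same operation never appear in both the training and validation folds.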
Affiliation(s)
- Shigehiro Kojima
- Surgical Device Innovation
- Department of Colorectal Surgery, National Cancer Center Hospital East, Chiba
- Division of Frontier Surgery, The Institute of Medical Science, The University of Tokyo, Tokyo, Japan
| | - Daichi Kitaguchi
- Surgical Device Innovation
- Department of Colorectal Surgery, National Cancer Center Hospital East, Chiba
| | - Takahiro Igaki
- Surgical Device Innovation
- Department of Colorectal Surgery, National Cancer Center Hospital East, Chiba
| | - Kei Nakajima
- Surgical Device Innovation
- Department of Colorectal Surgery, National Cancer Center Hospital East, Chiba
| | | | | | | | | | | | | | - Hiro Hasegawa
- Surgical Device Innovation
- Department of Colorectal Surgery, National Cancer Center Hospital East, Chiba
| | - Masaaki Ito
- Surgical Device Innovation
- Department of Colorectal Surgery, National Cancer Center Hospital East, Chiba
46
den Boer RB, Jaspers TJM, de Jongh C, Pluim JPW, van der Sommen F, Boers T, van Hillegersberg R, Van Eijnatten MAJM, Ruurda JP. Deep learning-based recognition of key anatomical structures during robot-assisted minimally invasive esophagectomy. Surg Endosc 2023:10.1007/s00464-023-09990-z. [PMID: 36947221 DOI: 10.1007/s00464-023-09990-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Accepted: 02/25/2023] [Indexed: 03/23/2023]
Abstract
OBJECTIVE To develop a deep learning algorithm for anatomy recognition in thoracoscopic video frames from robot-assisted minimally invasive esophagectomy (RAMIE) procedures. BACKGROUND RAMIE is a complex operation with substantial perioperative morbidity and a considerable learning curve. Automatic anatomy recognition may improve surgical orientation and recognition of anatomical structures and might contribute to reducing morbidity or learning curves. Studies regarding anatomy recognition in complex surgical procedures are currently lacking. METHODS Eighty-three videos of consecutive RAMIE procedures between 2018 and 2022 were retrospectively collected at University Medical Center Utrecht. A surgical PhD candidate and an expert surgeon annotated the azygos vein and vena cava, aorta, and right lung on 1050 thoracoscopic frames. Of these, 850 frames were used to train a convolutional neural network (CNN) to segment the anatomical structures. The remaining 200 frames of the dataset were used for testing the CNN. The Dice coefficient and 95% Hausdorff distance (95HD) were calculated to assess algorithm accuracy. RESULTS The median Dice of the algorithm was 0.79 (IQR = 0.20) for segmentation of the azygos vein and/or vena cava. Median Dice coefficients of 0.74 (IQR = 0.86) and 0.89 (IQR = 0.30) were obtained for segmentation of the aorta and lung, respectively. Inference time was 0.026 s (39 Hz). The prediction of the deep learning algorithm was compared with the expert surgeon annotations, showing an accuracy measured in median Dice of 0.70 (IQR = 0.19), 0.88 (IQR = 0.07), and 0.90 (IQR = 0.10) for the vena cava and/or azygos vein, aorta, and lung, respectively. CONCLUSION This study shows that deep learning-based semantic segmentation has potential for anatomy recognition in RAMIE video frames. The inference time of the algorithm facilitated real-time anatomy recognition. Clinical applicability should be assessed in prospective clinical studies.
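Besides the Dice coefficient, accuracy is reported here as the 95% Hausdorff distance (95HD). A common way to compute it is to take the 95th percentile of nearest-neighbour distances between the foreground pixels of the two masks, in both directions; the sketch below implements that definition with SciPy on toy masks and is an assumed formulation, not the authors' code.

```python
import numpy as np
from scipy.spatial import cKDTree

def hd95(mask_a, mask_b):
    """95th-percentile symmetric Hausdorff distance between two binary masks (pixels)."""
    pts_a, pts_b = np.argwhere(mask_a), np.argwhere(mask_b)
    if len(pts_a) == 0 or len(pts_b) == 0:
        return np.inf
    d_ab = cKDTree(pts_b).query(pts_a)[0]   # each A pixel to its nearest B pixel
    d_ba = cKDTree(pts_a).query(pts_b)[0]   # each B pixel to its nearest A pixel
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

if __name__ == "__main__":
    a = np.zeros((64, 64), dtype=bool); a[20:40, 20:40] = True   # toy prediction
    b = np.zeros((64, 64), dtype=bool); b[22:42, 18:38] = True   # toy ground truth
    print(f"95HD = {hd95(a, b):.2f} px")
```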
Affiliation(s)
- R B den Boer
- Department of Surgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
| | - T J M Jaspers
- Department of Biomedical Engineering, Eindhoven University of Technology, Groene Loper 3, 5612 AE, Eindhoven, The Netherlands
| | - C de Jongh
- Department of Surgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
| | - J P W Pluim
- Department of Biomedical Engineering, Eindhoven University of Technology, Groene Loper 3, 5612 AE, Eindhoven, The Netherlands
| | - F van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, Groene Loper 19, 5612 AP, Eindhoven, The Netherlands
| | - T Boers
- Department of Electrical Engineering, Eindhoven University of Technology, Groene Loper 19, 5612 AP, Eindhoven, The Netherlands
| | - R van Hillegersberg
- Department of Surgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
| | - M A J M Van Eijnatten
- Department of Biomedical Engineering, Eindhoven University of Technology, Groene Loper 3, 5612 AE, Eindhoven, The Netherlands
| | - J P Ruurda
- Department of Surgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands.
47
Nakanuma H, Endo Y, Fujinaga A, Kawamura M, Kawasaki T, Masuda T, Hirashita T, Etoh T, Shinozuka K, Matsunobu Y, Kamiyama T, Ishikake M, Ebe K, Tokuyasu T, Inomata M. An intraoperative artificial intelligence system identifying anatomical landmarks for laparoscopic cholecystectomy: a prospective clinical feasibility trial (J-SUMMIT-C-01). Surg Endosc 2023; 37:1933-1942. [PMID: 36261644 DOI: 10.1007/s00464-022-09678-w] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Accepted: 09/25/2022] [Indexed: 10/24/2022]
Abstract
BACKGROUND We have implemented Smart Endoscopic Surgery (SES), a surgical system that uses artificial intelligence (AI) to detect the anatomical landmarks that expert surgeons rely on when performing certain surgical maneuvers. No report has verified the use of AI-based support systems for surgery in clinical practice, and no evaluation method has been established. To evaluate the detection performance of SES, we developed and established a new evaluation method by conducting a clinical feasibility trial. METHODS A single-center prospective clinical feasibility trial was conducted on 10 cases of laparoscopic cholecystectomy (LC) performed at Oita University Hospital. Subsequently, an external evaluation committee (EEC) evaluated the AI detection accuracy for each landmark using a five-grade rubric evaluation and the DICE coefficient. We defined LM-CBD as the expert surgeon's judgment of the cystic bile duct in endoscopic images. RESULTS The average detection accuracy on the rubric by the EEC was 4.2 ± 0.8 for the LM-CBD. The DICE coefficient between the AI detection area of the LM-CBD and the EEC members' evaluation was similar to the mean value of the DICE coefficient between the EEC members. The DICE coefficient was high for cases that were rated highly by the EEC on the five-grade scale. CONCLUSION This is the first clinical feasibility trial of an AI system designed for intraoperative use, and the first to evaluate such a system using an EEC. In the future, this concept of evaluation for AI systems could contribute to the development of new AI navigation systems for surgery.
Affiliation(s)
- Hiroaki Nakanuma
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, 1-1 Idaigaoka, Hasama-machi, Yufu, Oita, 879-5593, Japan.
| | - Yuichi Endo
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, 1-1 Idaigaoka, Hasama-machi, Yufu, Oita, 879-5593, Japan
| | - Atsuro Fujinaga
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, 1-1 Idaigaoka, Hasama-machi, Yufu, Oita, 879-5593, Japan
| | - Masahiro Kawamura
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, 1-1 Idaigaoka, Hasama-machi, Yufu, Oita, 879-5593, Japan
| | - Takahide Kawasaki
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, 1-1 Idaigaoka, Hasama-machi, Yufu, Oita, 879-5593, Japan
| | - Takashi Masuda
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, 1-1 Idaigaoka, Hasama-machi, Yufu, Oita, 879-5593, Japan
| | - Teijiro Hirashita
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, 1-1 Idaigaoka, Hasama-machi, Yufu, Oita, 879-5593, Japan
| | - Tsuyoshi Etoh
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, 1-1 Idaigaoka, Hasama-machi, Yufu, Oita, 879-5593, Japan
| | - Ken'ichi Shinozuka
- Department of Information System and Engineering, Fukuoka Institute of Technology, Fukuoka, Japan
| | - Yusuke Matsunobu
- Department of Information System and Engineering, Fukuoka Institute of Technology, Fukuoka, Japan
| | - Toshiya Kamiyama
- R&D, Customer Solutions Development, Platform Technology, Olympus Technologies Asia, Hachioji, Japan
| | - Makoto Ishikake
- R&D, Customer Solutions Development, Platform Technology, Olympus Technologies Asia, Hachioji, Japan
| | - Kohei Ebe
- R&D, Customer Solutions Development, Platform Technology, Olympus Technologies Asia, Hachioji, Japan
| | - Tatsushi Tokuyasu
- Department of Information System and Engineering, Fukuoka Institute of Technology, Fukuoka, Japan
| | - Masafumi Inomata
- Department of Gastroenterological and Pediatric Surgery, Faculty of Medicine, Oita University, 1-1 Idaigaoka, Hasama-machi, Yufu, Oita, 879-5593, Japan
48
Takeshita N, Sakamoto S, Kitaguchi D, Takeshita N, Yajima S, Koike T, Ishikawa Y, Matsuzaki H, Mori K, Masuda H, Ichikawa T, Ito M. Deep Learning-Based Seminal Vesicle and Vas Deferens Recognition in the Posterior Approach of Robot-Assisted Radical Prostatectomy. Urology 2023; 173:98-103. [PMID: 36572225 DOI: 10.1016/j.urology.2022.12.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2022] [Revised: 11/24/2022] [Accepted: 12/04/2022] [Indexed: 12/24/2022]
Abstract
OBJECTIVE To develop a convolutional neural network to recognize the seminal vesicle and vas deferens (SV-VD) in the posterior approach of robot-assisted radical prostatectomy (RARP) and assess the performance of the convolutional neural network model under clinically relevant conditions. METHODS Intraoperative videos of robot-assisted radical prostatectomy performed by the posterior approach from 3 institutions were obtained between 2019 and 2020. Using SV-VD dissection videos, semantic segmentation of the seminal vesicle-vas deferens area was performed using a convolutional neural network-based approach. The dataset was split into training and test data in a 10:3 ratio. The average time required by 6 novice urologists to correctly recognize the SV-VD was compared using intraoperative videos with and without segmentation masks generated by the convolutional neural network model, which was evaluated with the test data using the Dice similarity coefficient. Training and test datasets were compared using the Mann-Whitney U-test and chi-square test. Time required to recognize the SV-VD was evaluated using the Mann-Whitney U-test. RESULTS From 26 patient videos, 1 040 images were created (520 SV-VD annotated images and 520 SV-VD non-displayed images). The convolutional neural network model had a Dice similarity coefficient value of 0.73 in the test data. Compared with original videos, videos with the generated segmentation mask promoted significantly faster seminal vesicle and vas deferens recognition (P < .001). CONCLUSION The convolutional neural network model provides accurate recognition of the SV-VD in the posterior approach RARP, which may be helpful, especially for novice urologists.
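The timing comparison described above relies on the Mann-Whitney U-test. The sketch below applies that test to two small groups of made-up recognition times (in seconds) with SciPy; the values are illustrative only and are not the study's measurements.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical recognition times (s) for raters viewing original videos versus
# videos overlaid with the generated segmentation mask.
original_video = np.array([38.2, 41.5, 55.0, 47.3, 60.1, 52.8])
masked_video = np.array([12.4, 15.1, 18.9, 10.7, 14.2, 16.5])

# One-sided test: are recognition times longer without the mask?
stat, p = mannwhitneyu(original_video, masked_video, alternative="greater")
print(f"U = {stat:.1f}, p = {p:.4f}")
```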
Affiliation(s)
- Nobushige Takeshita
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwa, Chiba, Japan; Department of Urology, Graduate School of Medicine, Chiba University, Chuo-ku, Chiba, Japan; Department of Urology, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
| | - Shinichi Sakamoto
- Department of Urology, Graduate School of Medicine, Chiba University, Chuo-ku, Chiba, Japan
| | - Daichi Kitaguchi
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
| | - Nobuyoshi Takeshita
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
| | - Shugo Yajima
- Department of Urology, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
| | - Tatsuki Koike
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
| | - Yuto Ishikawa
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
| | - Hiroki Matsuzaki
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
| | - Kensaku Mori
- Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Japan
| | - Hitoshi Masuda
- Department of Urology, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
| | - Tomohiko Ichikawa
- Department of Urology, Graduate School of Medicine, Chiba University, Chuo-ku, Chiba, Japan
| | - Masaaki Ito
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwa, Chiba, Japan.
49
Lavanchy JL, Gonzalez C, Kassem H, Nett PC, Mutter D, Padoy N. Proposal and multicentric validation of a laparoscopic Roux-en-Y gastric bypass surgery ontology. Surg Endosc 2023; 37:2070-2077. [PMID: 36289088 PMCID: PMC10017621 DOI: 10.1007/s00464-022-09745-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2022] [Accepted: 10/14/2022] [Indexed: 11/30/2022]
Abstract
BACKGROUND Phase and step annotation in surgical videos is a prerequisite for surgical scene understanding and for downstream tasks like intraoperative feedback or assistance. However, most ontologies are applied to small monocentric datasets and lack external validation. To overcome these limitations, an ontology for phases and steps of laparoscopic Roux-en-Y gastric bypass (LRYGB) is proposed and validated on a multicentric dataset in terms of inter- and intra-rater reliability (inter-/intra-RR). METHODS The proposed LRYGB ontology consists of 12 phase and 46 step definitions that are hierarchically structured. Two board-certified surgeons (raters) with > 10 years of clinical experience applied the proposed ontology to two datasets: (1) StraBypass40, consisting of 40 LRYGB videos from Nouvel Hôpital Civil, Strasbourg, France, and (2) BernBypass70, consisting of 70 LRYGB videos from Inselspital, Bern University Hospital, Bern, Switzerland. To assess inter-RR, the two raters' annotations of ten randomly chosen videos each from StraBypass40 and BernBypass70 were compared. To assess intra-RR, ten randomly chosen videos were annotated twice by the same rater and the annotations were compared. Inter-RR was calculated using Cohen's kappa. Additionally, for inter- and intra-RR, accuracy, precision, recall, F1-score, and application-dependent metrics were applied. RESULTS The mean ± SD video duration was 108 ± 33 min and 75 ± 21 min in StraBypass40 and BernBypass70, respectively. The proposed ontology shows an inter-RR of 96.8 ± 2.7% for phases and 85.4 ± 6.0% for steps on StraBypass40 and 94.9 ± 5.8% for phases and 76.1 ± 13.9% for steps on BernBypass70. The overall Cohen's kappa of inter-RR was 95.9 ± 4.3% for phases and 80.8 ± 10.0% for steps. Intra-RR showed an accuracy of 98.4 ± 1.1% for phases and 88.1 ± 8.1% for steps. CONCLUSION The proposed ontology shows excellent inter- and intra-RR and should therefore be implemented routinely in phase and step annotation of LRYGB.
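Inter-rater reliability here is summarised with Cohen's kappa over the raters' annotations. The sketch below computes kappa for a toy sequence of frame-level phase labels with scikit-learn; the labels are invented for illustration, and per-phase or per-step kappa on real annotations would be computed analogously.

```python
from sklearn.metrics import cohen_kappa_score

# Toy frame-level phase labels (phase indices 0-11) from two raters; in the
# study, every frame of each sampled video would carry such a label.
rater_1 = [0, 0, 1, 1, 2, 2, 2, 3, 4, 4, 5, 5]
rater_2 = [0, 0, 1, 2, 2, 2, 2, 3, 4, 5, 5, 5]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.2f}")
```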
Affiliation(s)
- Joël L Lavanchy
- IHU Strasbourg, 1 Place de l'Hôpital, 67000, Strasbourg, France.
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland.
| | - Cristians Gonzalez
- IHU Strasbourg, 1 Place de l'Hôpital, 67000, Strasbourg, France
- University Hospital of Strasbourg, Strasbourg, France
| | - Hasan Kassem
- ICube, CNRS, University of Strasbourg, Strasbourg, France
| | - Philipp C Nett
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
| | - Didier Mutter
- IHU Strasbourg, 1 Place de l'Hôpital, 67000, Strasbourg, France
- University Hospital of Strasbourg, Strasbourg, France
| | - Nicolas Padoy
- IHU Strasbourg, 1 Place de l'Hôpital, 67000, Strasbourg, France
- ICube, CNRS, University of Strasbourg, Strasbourg, France
50
Laplante S, Namazi B, Kiani P, Hashimoto DA, Alseidi A, Pasten M, Brunt LM, Gill S, Davis B, Bloom M, Pernar L, Okrainec A, Madani A. Validation of an artificial intelligence platform for the guidance of safe laparoscopic cholecystectomy. Surg Endosc 2023; 37:2260-2268. [PMID: 35918549 DOI: 10.1007/s00464-022-09439-9] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2022] [Accepted: 07/04/2022] [Indexed: 10/16/2022]
Abstract
BACKGROUND Many surgical adverse events, such as bile duct injuries during laparoscopic cholecystectomy (LC), occur due to errors in visual perception and judgment. Artificial intelligence (AI) can potentially improve the quality and safety of surgery, such as through real-time intraoperative decision support. GoNoGoNet is a novel AI model capable of identifying safe ("Go") and dangerous ("No-Go") zones of dissection on surgical videos of LC. Yet, it is unknown how GoNoGoNet performs in comparison to expert surgeons. This study aims to evaluate GoNoGoNet's ability to identify Go and No-Go zones compared to an external panel of expert surgeons. METHODS A panel of high-volume surgeons from the SAGES Safe Cholecystectomy Task Force was recruited to draw free-hand annotations on frames of prospectively collected videos of LC to identify the Go and No-Go zones. Expert consensus on the location of Go and No-Go zones was established using Visual Concordance Test pixel agreement. Identification of Go and No-Go zones by GoNoGoNet was compared to expert-derived consensus using mean F1 Dice score, pixel accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV). RESULTS A total of 47 frames from 25 LC videos, procured from 3 countries and 9 surgeons, were annotated simultaneously by an expert panel of 6 surgeons and GoNoGoNet. Mean (± standard deviation) F1 Dice scores were 0.58 (0.22) and 0.80 (0.12) for Go and No-Go zones, respectively. Mean (± standard deviation) accuracy, sensitivity, specificity, PPV and NPV for the Go zones were 0.92 (0.05), 0.52 (0.24), 0.97 (0.03), 0.70 (0.21), and 0.94 (0.04) respectively. For No-Go zones, these metrics were 0.92 (0.05), 0.80 (0.17), 0.95 (0.04), 0.84 (0.13) and 0.95 (0.05), respectively. CONCLUSIONS AI can be used to identify safe and dangerous zones of dissection within the surgical field, with high specificity/PPV for Go zones and high sensitivity/NPV for No-Go zones. Overall, model prediction was better for No-Go zones compared to Go zones. This technology may eventually be used to provide real-time guidance and minimize the risk of adverse events.
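All of the pixel-level agreement metrics quoted above (accuracy, sensitivity, specificity, PPV, NPV) derive from a per-pixel confusion matrix between the model's zone prediction and the expert consensus. The sketch below computes them for two random binary masks; the masks are placeholders, not GoNoGoNet outputs or expert annotations.

```python
import numpy as np

def pixel_metrics(pred, truth):
    """Accuracy, sensitivity, specificity, PPV and NPV for two binary masks."""
    pred, truth = pred.astype(bool).ravel(), truth.astype(bool).ravel()
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "accuracy": (tp + tn) / pred.size,
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
        "ppv": tp / (tp + fp) if tp + fp else float("nan"),
        "npv": tn / (tn + fn) if tn + fn else float("nan"),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    model_zone = rng.random((128, 128)) > 0.7    # stand-in for a predicted No-Go zone
    expert_zone = rng.random((128, 128)) > 0.7   # stand-in for the expert consensus zone
    for name, value in pixel_metrics(model_zone, expert_zone).items():
        print(f"{name}: {value:.2f}")
```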
Affiliation(s)
- Simon Laplante
- Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada.
- Department of Surgery, University of Toronto, Toronto, ON, Canada.
- MIS Fellow, Toronto Western Hospital, Division of General Surgery, 8MP-325, 399 Bathurst St, Toronto, ON, M5T 2S8, Canada.
| | - Babak Namazi
- Department of Surgery, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Parmiss Kiani
- Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada
| | | | - Adnan Alseidi
- Department of Surgery, University of California, San Francisco, CA, USA
| | - Mauricio Pasten
- Instituto de Gastroenterologia Boliviano Japones, Cochabamba, Bolivia
| | - L Michael Brunt
- Department of Surgery, Washington University School of Medicine, St. Louis, MO, USA
| | - Sujata Gill
- Department of Surgery, Northeast Georgia Medical Center, Georgia, USA
| | - Brian Davis
- Department of Surgery, Texas Tech Paul L Foster School of Medicine, El Paso, TX, USA
| | - Matthew Bloom
- Department of Surgery, Cedars-Sinai Medical Center, Los Angeles, CA, USA
| | - Luise Pernar
- Department of Surgery, Boston Medical Center, Boston, MA, USA
| | - Allan Okrainec
- Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada
- Department of Surgery, University of Toronto, Toronto, ON, Canada
| | - Amin Madani
- Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada
- Department of Surgery, University of Toronto, Toronto, ON, Canada