1
Ershad Langroodi M, Liu X, Tousignant MR, Jarc AM. Objective performance indicators versus GEARS: an opportunity for more accurate assessment of surgical skill. Int J Comput Assist Radiol Surg 2024. [PMID: 39320413 DOI: 10.1007/s11548-024-03248-2]
Abstract
PURPOSE Surgical skill evaluation that relies on subjective scoring of surgical videos can be time-consuming and inconsistent across raters. We demonstrate differentiated opportunities for objective evaluation to improve surgeon training and performance. METHODS Subjective evaluation was performed using the Global Evaluative Assessment of Robotic Skills (GEARS) by both expert and crowd raters, whereas objective evaluation used objective performance indicators (OPIs) derived from da Vinci surgical systems. Classifiers were trained for each evaluation method to distinguish between surgical expertise levels. This study includes one clinical task from a case series of robotic-assisted sleeve gastrectomy procedures performed by a single surgeon, and two training tasks performed by novice and expert surgeons, i.e., surgeons with no experience in robotic-assisted surgery (RAS) and those with more than 500 RAS procedures, respectively. RESULTS When comparing expert and novice skill levels, the OPI-based classifier showed significantly higher accuracy than the GEARS-based classifier on the more complex dissection task (OPI 0.93 ± 0.08 vs. GEARS 0.67 ± 0.18; 95% CI 0.16-0.37; p = 0.02), but no significant difference on the simpler suturing task. For the single-surgeon case series, both classifiers performed well in differentiating between early and late cases when group sizes were smaller and the interval between groups was larger (OPI 0.9 ± 0.08; GEARS 0.87 ± 0.12; 95% CI 0.02-0.04; p = 0.67). When the group size was increased to include more cases, leaving a smaller interval between groups, OPIs demonstrated significantly higher accuracy in differentiating between early and late cases (OPI 0.97 ± 0.06; GEARS 0.76 ± 0.07; 95% CI 0.12-0.28; p = 0.004). CONCLUSIONS Objective methods for skill evaluation in RAS outperform subjective methods when (1) differentiating expertise in a technically challenging training task, and (2) identifying more granular differences across the early and late phases of a surgeon's learning curve within a clinical task. Objective methods offer an opportunity for more accessible and scalable skill evaluation in RAS.
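The core comparison here is two classifiers trained on different feature sets. A minimal sketch of that setup, assuming hypothetical OPI kinematic features and GEARS subscale ratings on synthetic data (an illustration, not the authors' pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 60  # task repetitions (synthetic)
labels = rng.integers(0, 2, n)  # 0 = novice, 1 = expert

# Hypothetical OPI features: e.g., economy of motion, instrument path
# length, camera movement frequency (continuous kinematic measurements).
opi = rng.normal(size=(n, 3)) + labels[:, None] * 1.0

# Hypothetical GEARS features: six subscale ratings on a 1-5 Likert scale.
gears = np.clip(rng.normal(3, 1, size=(n, 6)) + labels[:, None] * 0.5, 1, 5)

for name, X in [("OPI", opi), ("GEARS", gears)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5)
    print(f"{name}-based classifier accuracy: {acc.mean():.2f} ± {acc.std():.2f}")
```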
Affiliation(s)
- Xi Liu
- Research and Development, Intuitive Surgical, Inc, 5655 Spalding Dr, Norcross, GA, 30092, USA
- Mark R Tousignant
- Research and Development, Intuitive Surgical, Inc, 5655 Spalding Dr, Norcross, GA, 30092, USA
- Anthony M Jarc
- Research and Development, Intuitive Surgical, Inc, 5655 Spalding Dr, Norcross, GA, 30092, USA
2
Shukla A, Chaudhary R, Nayyar N. Role of artificial intelligence in gastrointestinal surgery. Artif Intell Cancer 2024; 5:97317. [DOI: 10.35713/aic.v5.i2.97317]
Abstract
Artificial intelligence is evolving rapidly, and its applications in medicine are expanding day by day. It is also valuable in gastrointestinal disease, where it can calculate scoring systems, evaluate radiological images, assist preoperatively and intraoperatively, process pathological slides, estimate prognosis, and evaluate treatment response. The field has a promising future and could influence many management algorithms. In this minireview, we outline the basics of artificial intelligence, the roles it may play in gastrointestinal surgery and malignancies, and its limitations.
Affiliation(s)
- Ankit Shukla
- Department of Surgery, Dr Rajendra Prasad Government Medical College, Kangra 176001, Himachal Pradesh, India
- Rajesh Chaudhary
- Department of Renal Transplantation, Dr Rajendra Prasad Government Medical College, Kangra 176001, India
- Nishant Nayyar
- Department of Radiology, Dr Rajendra Prasad Government Medical College, Kangra 176001, Himachal Pradesh, India
3
You J, Cai H, Wang Y, Bian A, Cheng K, Meng L, Wang X, Gao P, Chen S, Cai Y, Peng B. Artificial intelligence automated surgical phases recognition in intraoperative videos of laparoscopic pancreatoduodenectomy. Surg Endosc 2024; 38:4894-4905. [PMID: 38958719 DOI: 10.1007/s00464-024-10916-6]
Abstract
BACKGROUND Laparoscopic pancreatoduodenectomy (LPD) is one of the most challenging operations and has a long learning curve. Artificial intelligence (AI)-based automated surgical phase recognition in intraoperative videos has many potential applications in surgical education and could help shorten the learning curve, but no study has made this breakthrough in LPD. Herein, we aimed to build AI models to recognize the surgical phases of LPD and to explore their performance characteristics. METHODS Among 69 LPD videos from a single surgical team, we used 42 videos in the building group to establish the models and the remaining 27 videos in the analysis group to assess the models' performance characteristics. We annotated 13 surgical phases of LPD, comprising 4 key phases and 9 necessary phases; two minimally invasive pancreatic surgeons annotated all the videos. We built two AI models, for key-phase and necessary-phase recognition, based on convolutional neural networks. The overall performance of the AI models was determined mainly by mean average precision (mAP). RESULTS Overall mAPs of the AI models in the test set of the building group were 89.7% and 84.7% for key phases and necessary phases, respectively. In the 27-video analysis group, overall mAPs were 86.8% and 71.2%, with maximum mAPs of 98.1% and 93.9%. The model's recognition errors often coincided with disagreements between the annotating surgeons, and the model performed poorly in cases with anatomic variation or lesions involving adjacent organs. CONCLUSIONS AI-based automated surgical phase recognition can be achieved in LPD, with outstanding performance in selected cases. This breakthrough may be the first step toward AI- and video-based surgical education in more complex surgeries.
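The headline metric is mean average precision over annotated phases. A minimal sketch of frame-level mAP, computed one-vs-rest with scikit-learn on synthetic scores (phase count reduced for brevity):

```python
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(1)
n_frames, n_phases = 1000, 4  # illustrative; the study annotated 13 phases
true_phase = rng.integers(0, n_phases, n_frames)

# Synthetic per-frame confidence scores, biased toward the true phase.
scores = rng.random((n_frames, n_phases))
scores[np.arange(n_frames), true_phase] += 1.0
scores /= scores.sum(axis=1, keepdims=True)

# Average precision per phase (one-vs-rest); their mean is the mAP.
ap = [average_precision_score(true_phase == k, scores[:, k])
      for k in range(n_phases)]
print("per-phase AP:", np.round(ap, 3), " mAP:", round(float(np.mean(ap)), 3))
```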
Affiliation(s)
- Jiaying You
- WestChina-California Research Center for Predictive Intervention, Sichuan University West China Hospital, Chengdu, China
- Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
- He Cai
- Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
- Yuxian Wang
- Chengdu Withai Innovations Technology Company, Chengdu, China
- Ang Bian
- College of Computer Science, Sichuan University, Chengdu, China
- Ke Cheng
- Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
- Lingwei Meng
- Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
- Xin Wang
- Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
- Pan Gao
- Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
- Sirui Chen
- Mianyang Central Hospital, School of Medicine University of Electronic Science and Technology of China, Mianyang, China
- Yunqiang Cai
- Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
- Bing Peng
- Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
4
Kinoshita K, Maruyama T, Kobayashi N, Imanishi S, Maruyama M, Ohira G, Endo S, Tochigi T, Kinoshita M, Fukui Y, Kumazu Y, Kita J, Shinohara H, Matsubara H. An artificial intelligence-based nerve recognition model is useful as surgical support technology and as an educational tool in laparoscopic and robot-assisted rectal cancer surgery. Surg Endosc 2024; 38:5394-5404. [PMID: 39073558 PMCID: PMC11362368 DOI: 10.1007/s00464-024-10939-z]
Abstract
BACKGROUND Artificial intelligence (AI) has the potential to enhance surgical practice by predicting anatomical structures within the surgical field, thereby supporting surgeons' experience and cognitive skills. Preserving and utilising nerves as critical guiding structures is paramount in rectal cancer surgery. Hence, we developed a deep learning model based on U-Net to automatically segment nerves. METHODS The model's performance was evaluated using 60 randomly selected frames, and the Dice and Intersection over Union (IoU) scores were quantitatively assessed by comparison with ground truth data. Additionally, a questionnaire was administered to five colorectal surgeons to gauge the extent of under-detection and over-detection and the practical utility of the model in rectal cancer surgery. Furthermore, we conducted an educational assessment of non-colorectal surgeons, trainees, physicians, and medical students: we evaluated their ability to recognise nerves in mesorectal dissection scenes, scored them on a 12-point scale, and examined the score changes before and after exposure to the AI analysis videos. RESULTS The mean Dice and IoU scores for the 60 test frames were 0.442 (range 0.0465-0.639) and 0.292 (range 0.0238-0.469), respectively. The colorectal surgeons reported an under-detection score of 0.80 (± 0.47), an over-detection score of 0.58 (± 0.41), and a usefulness evaluation score of 3.38 (± 0.43). The nerve recognition scores of non-colorectal surgeons, rotating residents, and medical students improved significantly after simply watching the AI nerve recognition videos for 1 min. Notably, medical students showed a more substantial increase in nerve recognition scores when exposed to AI nerve analysis videos than when exposed to traditional lectures on nerves. CONCLUSIONS In laparoscopic and robot-assisted rectal cancer surgery, the AI-based nerve recognition model achieved satisfactory recognition levels for expert surgeons and demonstrated effectiveness in educating junior surgeons and medical students on nerve recognition.
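Dice and IoU, the two reported segmentation metrics, reduce to simple set overlap on binary masks. A minimal sketch (toy masks, not the study's data):

```python
import numpy as np

def dice_iou(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Dice and IoU for non-empty binary masks; returns (dice, iou)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())   # 2|A∩B| / (|A| + |B|)
    iou = inter / np.logical_or(pred, gt).sum()  # |A∩B| / |A∪B|
    return float(dice), float(iou)

# Toy 4x4 example: two overlapping square masks.
gt = np.zeros((4, 4), int); gt[1:3, 1:3] = 1
pred = np.zeros((4, 4), int); pred[1:3, 0:2] = 1
print(dice_iou(pred, gt))  # Dice = 0.5, IoU ≈ 0.333
```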
Affiliation(s)
- Kazuya Kinoshita
- Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
- Department of General Surgery, Kumagaya General Hospital, Saitama, Japan
- Tetsuro Maruyama
- Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
- Shunsuke Imanishi
- Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
- Michihiro Maruyama
- Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
- Gaku Ohira
- Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
- Satoshi Endo
- Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
- Toru Tochigi
- Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
- Mayuko Kinoshita
- Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
- Yudai Fukui
- Department of Gastroenterological Surgery, Toranomon Hospital, Tokyo, Japan
- Yuta Kumazu
- Anaut Inc, Tokyo, Japan
- Department of Surgery, Yokohama City University, Kanagawa, Japan
- Junji Kita
- Department of General Surgery, Kumagaya General Hospital, Saitama, Japan
- Hisashi Shinohara
- Department of Gastroenterological Surgery, Hyogo College of Medicine, Hyogo, Japan
- Hisahiro Matsubara
- Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
5
Nakajima K, Kitaguchi D, Takenaka S, Tanaka A, Ryu K, Takeshita N, Kinugasa Y, Ito M. Automated surgical skill assessment in colorectal surgery using a deep learning-based surgical phase recognition model. Surg Endosc 2024. [PMID: 39214877 DOI: 10.1007/s00464-024-11208-9]
Abstract
BACKGROUND There is an increasing demand for automated surgical skill assessment to address issues such as the subjectivity and bias that accompany manual assessment. This study aimed to verify the feasibility of assessing surgical skills using a surgical phase recognition model. METHODS A deep learning-based model that recognizes five surgical phases of laparoscopic sigmoidectomy was constructed, and its ability to distinguish between three skill-level groups was assessed: the expert group, with a high Endoscopic Surgical Skill Qualification System (ESSQS) score (26 videos); the intermediate group, with a low ESSQS score (32 videos); and the novice group, with experience of fewer than 5 colorectal surgeries (27 videos). Furthermore, 1,272 videos were divided into three groups according to ESSQS score (ESSQS-high, ESSQS-middle, and ESSQS-low), and we evaluated whether these groups could be distinguished by a score calculated via multiple regression analysis of parameters derived from the model. RESULTS The time for mobilization of the colon; the combined time for dissection of the mesorectum, transection of the rectum, and anastomosis; and the number of phase transitions were significantly shorter or lower in the expert group than in the intermediate (p = 0.0094, 0.0028, and < 0.001, respectively) and novice groups (all p < 0.001). Mesorectal excision time was significantly shorter in the expert group than in the novice group (p = 0.0037). Groups with higher ESSQS scores also had higher AI scores. CONCLUSION This model has the potential to be applied to automated skill assessments.
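The AI score described here is a multiple regression of model-derived parameters against ESSQS ratings. A minimal sketch under assumed parameters (per-phase durations and transition counts; all data synthetic):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 85  # videos (synthetic stand-in)

# Parameters derived from the phase-recognition model (illustrative):
# per-phase durations (minutes) and number of phase transitions.
durations = rng.gamma(shape=4, scale=10, size=(n, 5))
transitions = rng.poisson(20, size=(n, 1))
X = np.hstack([durations, transitions])

# Reference ESSQS scores assigned by qualified reviewers (synthetic).
essqs = (80 - 0.3 * durations.sum(axis=1) - 0.5 * transitions[:, 0]
         + rng.normal(0, 3, n))

reg = LinearRegression().fit(X, essqs)
ai_score = reg.predict(X)  # the "AI score" used to compare skill groups
print("R^2 against ESSQS:", round(reg.score(X, essqs), 3))
```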
Affiliation(s)
- Kei Nakajima
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Department of Gastrointestinal Surgery, Graduate School of Medicine, Tokyo Medical and Dental University, 1-5-45, Yushima, Bunkyo-Ku, Tokyo, 113-8510, Japan
- Daichi Kitaguchi
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Shin Takenaka
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Atsuki Tanaka
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Kyoko Ryu
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Nobuyoshi Takeshita
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Yusuke Kinugasa
- Department of Gastrointestinal Surgery, Graduate School of Medicine, Tokyo Medical and Dental University, 1-5-45, Yushima, Bunkyo-Ku, Tokyo, 113-8510, Japan
- Masaaki Ito
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Surgical Device Innovation Office, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
6
Khan DZ, Koh CH, Das A, Valetopolou A, Hanrahan JG, Horsfall HL, Baldeweg SE, Bano S, Borg A, Dorward NL, Olukoya O, Stoyanov D, Marcus HJ. Video-Based Performance Analysis in Pituitary Surgery-Part 1: Surgical Outcomes. World Neurosurg 2024. [PMID: 39122112 DOI: 10.1016/j.wneu.2024.07.218]
Abstract
BACKGROUND Endoscopic pituitary adenoma surgery has a steep learning curve, with varying surgical techniques and outcomes across centers. In other surgeries, superior performance is linked with superior surgical outcomes. This study aimed to explore the prediction of patient-specific outcomes using surgical video analysis in pituitary surgery. METHODS Endoscopic pituitary adenoma surgery videos from a single center were annotated by experts for operative workflow (3 surgical phases and 15 surgical steps) and operative skill (using the modified Objective Structured Assessment of Technical Skills [mOSATS]). Quantitative workflow metrics were calculated, including phase duration and step transitions. Poisson or logistic regression was used to assess the association of workflow metrics and mOSATS with common inpatient surgical outcomes. RESULTS A total of 100 videos from 100 patients were included. Mean duration and mean mOSATS were 24 minutes and 21.2/30 for the nasal phase, 34 minutes and 20.9/30 for the sellar phase, and 11 minutes and 21.7/30 for the closure phase. The most common adverse outcomes were new anterior pituitary hormone deficiency (n = 26), dysnatremia (n = 24), and cerebrospinal fluid leak (n = 5). Higher mOSATS for all 3 phases and shorter operation duration were associated with decreased length of stay (P = 0.003 and P < 0.001, respectively). Superior closure-phase mOSATS were associated with reduced postoperative cerebrospinal fluid leak (P < 0.001), and superior sellar-phase mOSATS were associated with reduced postoperative visual deterioration (P = 0.041). CONCLUSIONS Superior surgical skill and shorter surgical time were associated with superior surgical outcomes, at a generic and phase-specific level. Such video-based analysis has promise for integration into data-driven training and service improvement initiatives.
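Associating workflow metrics with a count outcome such as length of stay is a standard Poisson regression. A minimal sketch with statsmodels, using hypothetical skill and duration variables on synthetic data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 100  # patients/videos, as in the study

# Illustrative workflow metrics: phase-level mOSATS and operation duration.
mosats = rng.normal(21, 2, size=n)     # e.g., sellar-phase skill score /30
duration = rng.normal(70, 15, size=n)  # total operation time (minutes)
X = sm.add_constant(np.column_stack([mosats, duration]))

# Length of stay (days): a count outcome, hence Poisson regression.
lam = np.exp(3.0 - 0.08 * mosats + 0.01 * duration)
los = rng.poisson(lam)

fit = sm.GLM(los, X, family=sm.families.Poisson()).fit()
print(fit.summary().tables[1])  # coefficients: skill/duration vs. stay
```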
Affiliation(s)
- Danyal Z Khan
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Chan Hee Koh
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Adrito Das
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Alexandra Valetopolou
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- John G Hanrahan
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Hugo Layard Horsfall
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Stephanie E Baldeweg
- Department of Diabetes & Endocrinology, University College London Hospitals NHS Foundation Trust, London, UK; Division of Medicine, Department of Experimental and Translational Medicine, Centre for Obesity and Metabolism, University College London, London, UK
- Sophia Bano
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Anouk Borg
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Neil L Dorward
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Olatomiwa Olukoya
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Danail Stoyanov
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; Digital Surgery Ltd, Medtronic, London, UK
- Hani J Marcus
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
7
Yoshida M, Kitaguchi D, Takeshita N, Matsuzaki H, Ishikawa Y, Yura M, Akimoto T, Kinoshita T, Ito M. Surgical step recognition in laparoscopic distal gastrectomy using artificial intelligence: a proof-of-concept study. Langenbecks Arch Surg 2024; 409:213. [PMID: 38995411 DOI: 10.1007/s00423-024-03411-y]
Abstract
PURPOSE Laparoscopic distal gastrectomy (LDG) is a difficult procedure for early career surgeons. Artificial intelligence (AI)-based surgical step recognition is crucial for establishing context-aware computer-aided surgery systems. In this study, we aimed to develop an automatic recognition model for LDG using AI and to evaluate its performance. METHODS Patients who underwent LDG at our institution in 2019 were included in this study. Surgical video data were classified into the following nine steps: (1) Port insertion; (2) Lymphadenectomy on the left side of the greater curvature; (3) Lymphadenectomy on the right side of the greater curvature; (4) Division of the duodenum; (5) Lymphadenectomy of the suprapancreatic area; (6) Lymphadenectomy on the lesser curvature; (7) Division of the stomach; (8) Reconstruction; and (9) From reconstruction to completion of surgery. Two gastric surgeons manually assigned all annotation labels. Convolutional neural network (CNN)-based image classification was then employed to identify the surgical steps. RESULTS The dataset comprised 40 LDG videos. Over 1,000,000 frames with annotated labels of the LDG steps were used to train the deep learning model, with 30 surgical videos for training and 10 for validation. The developed model achieved a precision of 0.88, a recall of 0.87, an F1 score of 0.88, and an overall accuracy of 0.89; its inference speed was 32 fps. CONCLUSION The developed CNN model automatically recognized the LDG surgical process with relatively high accuracy. Adding more data to this model could provide a fundamental technology for the development of future surgical instruments.
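Frame-level step classification of this kind is typically a standard CNN with a nine-way output head. A minimal PyTorch sketch; the ResNet-18 backbone is an assumption for illustration, not the paper's architecture:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_STEPS = 9  # the nine annotated LDG steps

# Frame-level classifier: a standard CNN backbone with a 9-way head.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_STEPS)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of video frames.
frames = torch.randn(8, 3, 224, 224)        # batch of RGB frames
labels = torch.randint(0, NUM_STEPS, (8,))  # annotated step per frame
optimizer.zero_grad()
loss = criterion(model(frames), labels)
loss.backward()
optimizer.step()
print(f"batch loss: {loss.item():.3f}")
```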
Affiliation(s)
- Mitsumasa Yoshida
- Gastric Surgery Division, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Course of Advanced Clinical Research of Cancer, Juntendo University Graduate School of Medicine, 2-1-1, Hongo, Bunkyo-Ward, Tokyo, 113-8421, Japan
- Daichi Kitaguchi
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Nobuyoshi Takeshita
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Hiroki Matsuzaki
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Yuto Ishikawa
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Masahiro Yura
- Gastric Surgery Division, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Tetsuo Akimoto
- Course of Advanced Clinical Research of Cancer, Juntendo University Graduate School of Medicine, 2-1-1, Hongo, Bunkyo-Ward, Tokyo, 113-8421, Japan
- Takahiro Kinoshita
- Gastric Surgery Division, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Masaaki Ito
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
8
Hamilton A. The Future of Artificial Intelligence in Surgery. Cureus 2024; 16:e63699. [PMID: 39092371 PMCID: PMC11293880 DOI: 10.7759/cureus.63699]
Abstract
Until recently, innovations in surgery were largely extensions or augmentations of the surgeon's perception, including advancements such as the operating microscope, tumor fluorescence, intraoperative ultrasound, and minimally invasive surgical instrumentation. The introduction of artificial intelligence (AI) into the surgical disciplines, however, represents a transformational event. AI contributes substantively to enhancing a surgeon's perception, with methodologies such as three-dimensional anatomic overlays in augmented reality, AI-improved visualization for tumor resection, and AI-formatted endoscopic and robotic surgery guidance; what truly sets AI apart is that it also provides ways to augment the surgeon's cognition. By analyzing enormous databases, AI can offer new insights that can transform the operative environment in several ways. It can enable preoperative risk assessment and allow better selection of candidates for procedures such as organ transplantation. AI can also increase the efficiency and throughput of operating rooms and staff and coordinate the utilization of critical resources such as intensive care unit beds and ventilators. Furthermore, AI is revolutionizing intraoperative guidance, improving the detection of cancers, permitting endovascular navigation, and reducing collateral damage to adjacent tissues during surgery (e.g., identification of the parathyroid glands during thyroidectomy). AI is also transforming how we evaluate and assess surgical proficiency and trainees in postgraduate programs, offering the potential for multiple, serial evaluations with various scoring systems while remaining free from the biases that can plague human supervisors. The future of AI-driven surgery holds promising trends, including the globalization of surgical education, the miniaturization of instrumentation, and the increasing success of autonomous surgical robots. These advancements raise the prospect of deploying fully autonomous surgical robots in the near future into challenging environments such as the battlefield, disaster areas, and even extraplanetary exploration. In light of these transformative developments, it is clear that the future of surgery will belong to those who can most readily embrace and harness the power of AI.
Affiliation(s)
- Allan Hamilton
- Artificial Intelligence Division for Simulation, Education, and Training, University of Arizona Health Sciences, Tucson, USA
9
Zhang J, Fang J, Xu Y, Si G. How AI and Robotics Will Advance Interventional Radiology: Narrative Review and Future Perspectives. Diagnostics (Basel) 2024; 14:1393. [PMID: 39001283 PMCID: PMC11241154 DOI: 10.3390/diagnostics14131393]
Abstract
The rapid advancement of artificial intelligence (AI) and robotics has led to significant progress in various medical fields, including interventional radiology (IR). This review focuses on the research progress and applications of AI and robotics in IR, including deep learning (DL), machine learning (ML), and convolutional neural networks (CNNs), across specialties such as oncology, neurology, and cardiology, and explores potential directions for future interventional treatments. To ensure breadth and depth, we implemented a systematic literature search strategy, selecting research published within the last five years and searching databases such as PubMed and Google Scholar, with special emphasis on large-scale studies to ensure comprehensive and reliable results. This review summarizes the latest research directions and developments and analyzes their potential and limitations, furnishing essential information and insights for researchers, clinicians, and policymakers that may help propel advancement and innovation in AI and IR. Finally, our findings indicate that although AI and robotics technologies are not yet widely applied in clinical settings, they are evolving across multiple fronts and are expected to significantly improve the processes and efficacy of interventional treatments.
Affiliation(s)
- Jiaming Zhang
- Department of Radiology, Clinical Medical College, Southwest Medical University, Luzhou 646699, China
- Jiayi Fang
- Department of Radiology, Clinical Medical College, Southwest Medical University, Luzhou 646699, China
- Yanneng Xu
- Department of Radiology, Affiliated Traditional Chinese Medicine Hospital, Southwest Medical University, Luzhou 646699, China
- Guangyan Si
- Department of Radiology, Affiliated Traditional Chinese Medicine Hospital, Southwest Medical University, Luzhou 646699, China
10
Lavanchy JL, Ramesh S, Dall'Alba D, Gonzalez C, Fiorini P, Müller-Stich BP, Nett PC, Marescaux J, Mutter D, Padoy N. Challenges in multi-centric generalization: phase and step recognition in Roux-en-Y gastric bypass surgery. Int J Comput Assist Radiol Surg 2024. [PMID: 38761319 DOI: 10.1007/s11548-024-03166-3]
Abstract
PURPOSE Most studies on surgical activity recognition utilizing artificial intelligence (AI) have focused mainly on recognizing one type of activity from small, mono-centric surgical video datasets, and it remains speculative whether those models would generalize to other centers. METHODS In this work, we introduce a large multi-centric multi-activity dataset of 140 surgical videos (MultiBypass140) of laparoscopic Roux-en-Y gastric bypass (LRYGB) surgeries performed at two medical centers: the University Hospital of Strasbourg, France (StrasBypass70) and Inselspital, Bern University Hospital, Switzerland (BernBypass70). The dataset has been fully annotated with phases and steps by two board-certified surgeons. Furthermore, we assess the generalizability of, and benchmark, different deep learning models on the task of phase and step recognition in 7 experimental studies: (1) training and evaluation on BernBypass70; (2) training and evaluation on StrasBypass70; (3) training and evaluation on the joint MultiBypass140 dataset; (4) training on BernBypass70, evaluation on StrasBypass70; (5) training on StrasBypass70, evaluation on BernBypass70; (6) training on MultiBypass140, evaluation on BernBypass70; and (7) training on MultiBypass140, evaluation on StrasBypass70. RESULTS Model performance was markedly influenced by the training data. The worst results were obtained in experiments (4) and (5), confirming the limited generalization capabilities of models trained on mono-centric data. The use of multi-centric training data, experiments (6) and (7), improved the generalization capabilities of the models, bringing them beyond the level of independent mono-centric training and evaluation (experiments (1) and (2)). CONCLUSION MultiBypass140 shows considerable variation in surgical technique and workflow of LRYGB procedures between centers; accordingly, the generalization experiments demonstrate a remarkable difference in model performance. These results highlight the importance of multi-centric datasets for AI model generalization to account for variance in surgical technique and workflow. The dataset and code are publicly available at https://github.com/CAMMA-public/MultiBypass140.
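The seven-way experimental design reduces to a grid of (training set, evaluation set) pairs. A minimal sketch of such a cross-center harness on synthetic features, where a per-center shift stands in for differences in technique and workflow:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)

def make_center(shift: float, n: int = 200):
    """Synthetic per-frame features for one center; `shift` mimics
    center-specific differences in workflow and technique."""
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 5)) + y[:, None] + shift
    return X, y

bern = make_center(shift=0.0)
stras = make_center(shift=1.5)
multi = (np.vstack([bern[0], stras[0]]), np.hstack([bern[1], stras[1]]))

experiments = {
    "train Bern  -> eval Stras": (bern, stras),
    "train Stras -> eval Bern":  (stras, bern),
    "train Multi -> eval Bern":  (multi, bern),
    "train Multi -> eval Stras": (multi, stras),
}
for name, ((Xtr, ytr), (Xte, yte)) in experiments.items():
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    print(f"{name}: acc = {accuracy_score(yte, clf.predict(Xte)):.2f}")
```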
Affiliation(s)
- Joël L Lavanchy
- University Digestive Health Care Center - Clarunis, 4002, Basel, Switzerland
- Department of Biomedical Engineering, University of Basel, 4123, Allschwil, Switzerland
- Institute of Image-Guided Surgery, IHU Strasbourg, 67000, Strasbourg, France
- Sanat Ramesh
- Institute of Image-Guided Surgery, IHU Strasbourg, 67000, Strasbourg, France
- ICube, University of Strasbourg, CNRS, 67000, Strasbourg, France
- Altair Robotics Lab, University of Verona, 37134, Verona, Italy
- Diego Dall'Alba
- Altair Robotics Lab, University of Verona, 37134, Verona, Italy
- Cristians Gonzalez
- Institute of Image-Guided Surgery, IHU Strasbourg, 67000, Strasbourg, France
- University Hospital of Strasbourg, 67000, Strasbourg, France
- Paolo Fiorini
- Altair Robotics Lab, University of Verona, 37134, Verona, Italy
- Beat P Müller-Stich
- University Digestive Health Care Center - Clarunis, 4002, Basel, Switzerland
- Department of Biomedical Engineering, University of Basel, 4123, Allschwil, Switzerland
- Philipp C Nett
- Department of Visceral Surgery and Medicine, Inselspital Bern University Hospital, 3010, Bern, Switzerland
- Didier Mutter
- Institute of Image-Guided Surgery, IHU Strasbourg, 67000, Strasbourg, France
- University Hospital of Strasbourg, 67000, Strasbourg, France
- Nicolas Padoy
- Institute of Image-Guided Surgery, IHU Strasbourg, 67000, Strasbourg, France
- ICube, University of Strasbourg, CNRS, 67000, Strasbourg, France
11
Morris MX, Fiocco D, Caneva T, Yiapanis P, Orgill DP. Current and future applications of artificial intelligence in surgery: implications for clinical practice and research. Front Surg 2024; 11:1393898. [PMID: 38783862 PMCID: PMC11111929 DOI: 10.3389/fsurg.2024.1393898]
Abstract
Surgeons are skilled at making complex decisions about invasive procedures that can save lives, alleviate pain, and avoid complications in patients. The knowledge needed to make these decisions is accumulated over years of schooling and practice, and their experience is in turn shared with others, notably via peer-reviewed articles, which are published in ever-growing numbers every year. In this work, we review the literature on the use of Artificial Intelligence (AI) in surgery, focusing on what is currently available and what is likely to come in the near future in both clinical care and research. We show that AI has the potential to be a key tool for elevating the effectiveness of training and decision-making in surgery and for discovering relevant and valid scientific knowledge in the surgical domain. We also address concerns about AI technology, including the inability of users to interpret algorithms and the risk of incorrect predictions. A better understanding of AI will allow surgeons to use new tools wisely for the benefit of their patients.
Affiliation(s)
- Miranda X. Morris
- Duke University School of Medicine, Duke University Hospital, Durham, NC, United States
- Davide Fiocco
- Department of Artificial Intelligence, Frontiers Media SA, Lausanne, Switzerland
- Tommaso Caneva
- Department of Artificial Intelligence, Frontiers Media SA, Lausanne, Switzerland
- Paris Yiapanis
- Department of Artificial Intelligence, Frontiers Media SA, Lausanne, Switzerland
- Dennis P. Orgill
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, United States
12
Corrêa EL, Cotian LFP, Lourenço JW, Lopes CM, Carvalho DR, Strobel R, Junior OC, Strobel KM, Schaefer JL, Nara EOB. Overview of the Last 71 Years of Metabolic and Bariatric Surgery: Content Analysis and Meta-analysis to Investigate the Topic and Scientific Evolution. Obes Surg 2024; 34:1885-1908. [PMID: 38485892 DOI: 10.1007/s11695-024-07165-w]
Abstract
Obesity is a worldwide epidemic, and bariatric surgery has become increasingly popular owing to its effectiveness in treating it; understanding this area is therefore of paramount importance. This article aims to trace the development of the field in terms of procedures, content, data, and current status. To achieve this objective, a literature review and a bibliometric analysis were conducted, providing insight into the current state of the field and into the topics that have been relevant over time. In conclusion, the article identifies the transformation of the research field from an initial focus solely on physical aspects to a more complex approach that also incorporates psychological and social aspects, as well as the correlation between obesity, bariatric surgery, and quality of life.
Affiliation(s)
- Erica L Corrêa
- Department of Production and Systems Engineering, Pontifical Catholic University of Paraná, Curitiba, 1155, Brazil
- Luís F P Cotian
- Department of Production and Systems Engineering, Pontifical Catholic University of Paraná, Curitiba, 1155, Brazil
- Jordam W Lourenço
- Department of Production and Systems Engineering, Pontifical Catholic University of Paraná, Curitiba, 1155, Brazil
- Caroline M Lopes
- Department of Production and Systems Engineering, Pontifical Catholic University of Paraná, Curitiba, 1155, Brazil
- Deborah R Carvalho
- Department of Applied Social Sciences, Pontifical Catholic University of Paraná, Curitiba, 1155, Brazil
- Rodrigo Strobel
- Gastrovida: Bariatric and Metabolic Surgical Center, Curitiba, 433, Brazil
- Osiris C Junior
- Department of Production and Systems Engineering, Pontifical Catholic University of Paraná, Curitiba, 1155, Brazil
- Kamyla M Strobel
- Gastrovida: Bariatric and Metabolic Surgical Center, Curitiba, 433, Brazil
- Jones L Schaefer
- Department of Production and Systems Engineering, Pontifical Catholic University of Paraná, Curitiba, 1155, Brazil
- Elpídio O B Nara
- Department of Production and Systems Engineering, Pontifical Catholic University of Paraná, Curitiba, 1155, Brazil
13
Al Abbas AI, Namazi B, Radi I, Alterio R, Abreu AA, Rail B, Polanco PM, Zeh HJ, Hogg ME, Zureikat AH, Sankaranarayanan G. The development of a deep learning model for automated segmentation of the robotic pancreaticojejunostomy. Surg Endosc 2024; 38:2553-2561. [PMID: 38488870 DOI: 10.1007/s00464-024-10725-x]
Abstract
BACKGROUND Minimally invasive surgery provides an unprecedented opportunity to review video for assessing surgical performance, but surgical video analysis is time-consuming and expensive. Deep learning provides an alternative for analysis. Robotic pancreaticoduodenectomy (RPD) is a complex and morbid operation, and surgeons' technical performance of the pancreaticojejunostomy (PJ) has been associated with postoperative pancreatic fistula. In this work, we aimed to utilize deep learning to automatically segment PJ RPD videos. METHODS This was a retrospective review of prospectively collected videos from 2011 to 2022 held in libraries at tertiary referral centers, including 111 PJ videos. Each frame of a robotic PJ video was categorized according to 6 tasks. A 3D convolutional neural network was trained for frame-level visual feature extraction and classification. All videos were manually annotated with the start and end of each task. RESULTS Of the 100 videos assessed, 60 were used for training the model, 10 for hyperparameter optimization, and 30 for testing performance. All frames were extracted (6 frames/second) and annotated. The accuracy and mean per-class F1 score for task classification were 88.01% and 85.34%, respectively. CONCLUSION The deep learning model performed well for automated segmentation of PJ videos. Future work will focus on skills assessment and outcome prediction.
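Once frames are classified, per-frame labels must be collapsed into task segments with start and end times. A minimal sketch of that post-processing at the study's 6 frames/second (toy predictions, not the authors' code):

```python
import numpy as np

FPS = 6  # frames extracted per second, as in the study

def frames_to_segments(pred: np.ndarray):
    """Collapse per-frame task predictions into (task, start_s, end_s)."""
    boundaries = np.flatnonzero(np.diff(pred)) + 1
    starts = np.concatenate([[0], boundaries])
    ends = np.concatenate([boundaries, [len(pred)]])
    return [(int(pred[s]), s / FPS, e / FPS) for s, e in zip(starts, ends)]

# Toy per-frame predictions for three of the six annotated PJ tasks.
pred = np.array([0] * 30 + [1] * 60 + [2] * 18)
for task, start, end in frames_to_segments(pred):
    print(f"task {task}: {start:5.1f}s -> {end:5.1f}s")
```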
Affiliation(s)
- Amr I Al Abbas
- Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390-9169, USA
- Babak Namazi
- Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390-9169, USA
- Imad Radi
- Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390-9169, USA
- Rodrigo Alterio
- Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390-9169, USA
- Andres A Abreu
- Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390-9169, USA
- Benjamin Rail
- Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390-9169, USA
- Patricio M Polanco
- Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390-9169, USA
- Herbert J Zeh
- Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390-9169, USA
- Amer H Zureikat
- University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Ganesh Sankaranarayanan
- Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390-9169, USA
14
Zhu Y, Du L, Fu PY, Geng ZH, Zhang DF, Chen WF, Li QL, Zhou PH. An Automated Video Analysis System for Retrospective Assessment and Real-Time Monitoring of Endoscopic Procedures (with Video). Bioengineering (Basel) 2024; 11:445. [PMID: 38790312 PMCID: PMC11118061 DOI: 10.3390/bioengineering11050445]
Abstract
BACKGROUND AND AIMS Accurate recognition of endoscopic instruments facilitates quantitative evaluation and quality control of endoscopic procedures. However, no relevant research has been reported. In this study, we aimed to develop a computer-assisted system, EndoAdd, for automated endoscopic surgical video analysis based on our dataset of endoscopic instrument images. METHODS Large training and validation datasets containing 45,143 images of 10 different endoscopic instruments and a test dataset of 18,375 images collected from several medical centers were used in this research. Annotated image frames were used to train the state-of-the-art object detection model, YOLO-v5, to identify the instruments. Based on the frame-level prediction results, we further developed a hidden Markov model to perform video analysis and generate heatmaps to summarize the videos. RESULTS EndoAdd achieved high accuracy (>97%) on the test dataset for all 10 endoscopic instrument types. The mean average accuracy, precision, recall, and F1-score were 99.1%, 92.0%, 88.8%, and 89.3%, respectively. The area under the curve values exceeded 0.94 for all instrument types. Heatmaps of endoscopic procedures were generated for both retrospective and real-time analyses. CONCLUSIONS We successfully developed an automated endoscopic video analysis system, EndoAdd, which supports retrospective assessment and real-time monitoring. It can be used for data analysis and quality control of endoscopic procedures in clinical practice.
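A hidden Markov model turns noisy per-frame detections into a smooth instrument timeline; the Viterbi decoding below is a minimal hand-rolled sketch of that smoothing step (two states, synthetic scores), not EndoAdd's implementation:

```python
import numpy as np

def viterbi(log_emit: np.ndarray, log_trans: np.ndarray, log_init: np.ndarray):
    """Most likely state path given per-frame log-likelihoods (T x K),
    a log transition matrix (K x K), and a log initial distribution (K,)."""
    T, K = log_emit.shape
    dp = log_init + log_emit[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = dp[:, None] + log_trans  # cand[i, j]: come from i, go to j
        back[t] = cand.argmax(axis=0)
        dp = cand.max(axis=0) + log_emit[t]
    path = [int(dp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two instrument states; noisy per-frame detector scores, sticky transitions.
rng = np.random.default_rng(5)
true = np.repeat([0, 1, 0], [20, 30, 10])
emit = np.where(np.eye(2)[true] == 1, 0.8, 0.2) + rng.normal(0, 0.1, (60, 2))
smoothed = viterbi(np.log(np.clip(emit, 1e-6, None)),
                   np.log(np.array([[0.95, 0.05], [0.05, 0.95]])),
                   np.log(np.array([0.5, 0.5])))
print("agreement with truth:", (np.array(smoothed) == true).mean())
```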
Affiliation(s)
- Yan Zhu
- Endoscopy Center and Endoscopy Research Institute, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Shanghai Collaborative Innovation Center of Endoscopy, Shanghai 200032, China
- Ling Du
- Endoscopy Center and Endoscopy Research Institute, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Shanghai Collaborative Innovation Center of Endoscopy, Shanghai 200032, China
- Pei-Yao Fu
- Endoscopy Center and Endoscopy Research Institute, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Shanghai Collaborative Innovation Center of Endoscopy, Shanghai 200032, China
- Zi-Han Geng
- Endoscopy Center and Endoscopy Research Institute, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Shanghai Collaborative Innovation Center of Endoscopy, Shanghai 200032, China
- Dan-Feng Zhang
- Endoscopy Center and Endoscopy Research Institute, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Shanghai Collaborative Innovation Center of Endoscopy, Shanghai 200032, China
- Wei-Feng Chen
- Endoscopy Center and Endoscopy Research Institute, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Shanghai Collaborative Innovation Center of Endoscopy, Shanghai 200032, China
- Quan-Lin Li
- Endoscopy Center and Endoscopy Research Institute, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Shanghai Collaborative Innovation Center of Endoscopy, Shanghai 200032, China
- Ping-Hong Zhou
- Endoscopy Center and Endoscopy Research Institute, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Shanghai Collaborative Innovation Center of Endoscopy, Shanghai 200032, China
15
Skinner G, Chen T, Jentis G, Liu Y, McCulloh C, Harzman A, Huang E, Kalady M, Kim P. Real-time near infrared artificial intelligence using scalable non-expert crowdsourcing in colorectal surgery. NPJ Digit Med 2024; 7:99. [PMID: 38649447 PMCID: PMC11035672 DOI: 10.1038/s41746-024-01095-8]
Abstract
Surgical artificial intelligence (AI) has the potential to improve patient safety and clinical outcomes. To date, training such AI models to identify tissue anatomy has required annotations by expensive and rate-limiting surgical domain experts. Herein, we demonstrate and validate a methodology for obtaining high-quality surgical tissue annotations through crowdsourcing from non-experts, and we deploy a multimodal surgical anatomy AI model in real time in colorectal surgery.
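A common way to fuse redundant non-expert annotations is a pixel-wise majority vote. A minimal sketch of that aggregation step on toy masks (the paper's actual fusion method is not specified here, so this is an illustrative assumption):

```python
import numpy as np

def majority_vote(masks: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Fuse multiple non-expert annotation masks (R x H x W, binary)
    into one consensus mask by pixel-wise vote."""
    return (masks.mean(axis=0) >= threshold).astype(np.uint8)

# Three crowd workers annotating a 4x4 tissue region (toy example):
# each worker's mask is the truth with a few random pixel flips.
rng = np.random.default_rng(6)
truth = np.zeros((4, 4), np.uint8); truth[1:3, 1:3] = 1
workers = np.array(
    [np.logical_xor(truth, rng.random(truth.shape) < 0.2).astype(np.uint8)
     for _ in range(3)])
print(majority_vote(workers))
```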
Affiliation(s)
- Garrett Skinner
- Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, Buffalo, NY, USA
- Activ Surgical, University at Buffalo, Buffalo, NY, USA
- Tina Chen
- Activ Surgical, University at Buffalo, Buffalo, NY, USA
- Yao Liu
- Activ Surgical, University at Buffalo, Buffalo, NY, USA
- Warren Alpert Medical School of Brown University, Providence, RI, USA
- Alan Harzman
- The Ohio State University Wexner Medical Center, Columbus, OH, USA
- Emily Huang
- The Ohio State University Wexner Medical Center, Columbus, OH, USA
- Matthew Kalady
- The Ohio State University Wexner Medical Center, Columbus, OH, USA
- Peter Kim
- Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, Buffalo, NY, USA
- Activ Surgical, University at Buffalo, Buffalo, NY, USA
16
Deol ES, Tollefson MK, Antolin A, Zohar M, Bar O, Ben-Ayoun D, Mynderse LA, Lomas DJ, Avant RA, Miller AR, Elliott DS, Boorjian SA, Wolf T, Asselmann D, Khanna A. Automated surgical step recognition in transurethral bladder tumor resection using artificial intelligence: transfer learning across surgical modalities. Front Artif Intell 2024; 7:1375482. [PMID: 38525302 PMCID: PMC10958784 DOI: 10.3389/frai.2024.1375482]
Abstract
Objective Automated surgical step recognition (SSR) using AI has been a catalyst in the "digitization" of surgery. However, progress has been limited to laparoscopy, with relatively few SSR tools in endoscopic surgery. This study aimed to create a SSR model for transurethral resection of bladder tumors (TURBT), leveraging a novel application of transfer learning to reduce video dataset requirements. Materials and methods Retrospective surgical videos of TURBT were manually annotated with the following steps of surgery: primary endoscopic evaluation, resection of bladder tumor, and surface coagulation. Manually annotated videos were then utilized to train a novel AI computer vision algorithm to perform automated video annotation of TURBT surgical video, utilizing a transfer-learning technique to pre-train on laparoscopic procedures. Accuracy of AI SSR was determined by comparison to human annotations as the reference standard. Results A total of 300 full-length TURBT videos (median 23.96 min; IQR 14.13-41.31 min) were manually annotated with sequential steps of surgery. One hundred and seventy-nine videos served as a training dataset for algorithm development, 44 for internal validation, and 77 as a separate test cohort for evaluating algorithm accuracy. Overall accuracy of AI video analysis was 89.6%. Model accuracy was highest for the primary endoscopic evaluation step (98.2%) and lowest for the surface coagulation step (82.7%). Conclusion We developed a fully automated computer vision algorithm for high-accuracy annotation of TURBT surgical videos. This represents the first application of transfer-learning from laparoscopy-based computer vision models into surgical endoscopy, demonstrating the promise of this approach in adapting to new procedure types.
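The transfer-learning recipe amounts to reusing a backbone trained on laparoscopy and fitting a new head for the three TURBT steps. A minimal PyTorch sketch; the backbone choice and weight path are assumptions for illustration, not the study's model:

```python
import torch
import torch.nn as nn
from torchvision import models

# Stand-in for a backbone pre-trained on laparoscopic step recognition;
# in practice you would load those weights (path below is hypothetical).
backbone = models.resnet18(weights=None)
# backbone.load_state_dict(torch.load("laparoscopy_ssr.pt"))  # hypothetical

# Freeze the transferred features and attach a new 3-class head for the
# TURBT steps (endoscopic evaluation, tumor resection, surface coagulation).
for p in backbone.parameters():
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, 3)  # new head is trainable

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
frames = torch.randn(4, 3, 224, 224)  # dummy batch of TURBT frames
logits = backbone(frames)
print(logits.shape)  # torch.Size([4, 3])
```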
Affiliation(s)
- Ekamjit S. Deol
- Department of Urology, Mayo Clinic, Rochester, MN, United States
- Maya Zohar
- theator.io, Palo Alto, CA, United States
- Omri Bar
- theator.io, Palo Alto, CA, United States
- Derek J. Lomas
- Department of Urology, Mayo Clinic, Rochester, MN, United States
- Ross A. Avant
- Department of Urology, Mayo Clinic, Rochester, MN, United States
- Adam R. Miller
- Department of Urology, Mayo Clinic, Rochester, MN, United States
- Tamir Wolf
- theator.io, Palo Alto, CA, United States
- Abhinav Khanna
- Department of Urology, Mayo Clinic, Rochester, MN, United States
17
Nikolian VC, Camacho D, Earle D, Lehmann R, Nau P, Ramshaw B, Stulberg J. Development and preliminary validation of a new task-based objective procedure-specific assessment of inguinal hernia repair procedural safety. Surg Endosc 2024; 38:1583-1591. [PMID: 38332173 DOI: 10.1007/s00464-024-10677-2]
Abstract
BACKGROUND Surgical videos coupled with structured assessments enable surgical training programs to provide independent competency evaluations and align with the American Board of Surgery's entrustable professional activities initiative. Existing assessment instruments for minimally invasive inguinal hernia repair (IHR) have limitations with regard to reliability, validity, and usability. A cross-sectional study of six surgeons using a novel objective, procedure-specific, 8-item competency assessment for minimally invasive inguinal hernia repair (IHR-OPSA) was performed to assess inter-rater reliability using a "safe" vs. "unsafe" scoring rubric. METHODS The IHR-OPSA was developed by three expert IHR surgeons, field tested with five IHR surgeons, and revised based upon feedback. The final instrument included: (1) incision/port placement; (2) dissection of the peritoneal flap (TAPP/TEP); (3) exposure; (4) reducing the sac; (5) full dissection of the myopectineal orifice; (6) mesh insertion; (7) mesh fixation; and (8) operation flow. The IHR-OPSA was applied by six expert IHR surgeons to 20 IHR surgical videos selected to include a spectrum of hernia procedures (15 laparoscopic, 5 robotic), anatomy (14 indirect, 5 direct, 1 femoral), and Global Case Difficulty (easy, average, hard). Inter-rater reliability was assessed using Gwet's AC2. RESULTS The IHR-OPSA inter-rater reliability was good to excellent, ranging from 0.65 to 0.97 across the eight items. Assessments of robotic procedures had higher reliability, with near-perfect agreement for 7 of 8 items. In general, assessments of easier cases had higher levels of agreement than harder cases. CONCLUSIONS A novel 8-item minimally invasive IHR assessment tool was developed and tested for inter-rater reliability using a "safe" vs. "unsafe" rating system, with promising results. To promote instrument validity, the IHR-OPSA was designed and evaluated within the context of its intended use, with iterative engagement of experts and testing of constructs against real-world operative videos.
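Gwet's AC2 is a chance-corrected agreement coefficient; with binary safe/unsafe ratings and identity weights it reduces to the AC1 form sketched below (synthetic ratings from six raters over 20 videos, not the study's data):

```python
import numpy as np

def gwet_ac1(ratings: np.ndarray) -> float:
    """Gwet's chance-corrected agreement for categorical ratings
    (items x raters). With binary categories and identity weights,
    the weighted AC2 coincides with this AC1 form."""
    cats = np.unique(ratings)
    counts = np.stack([(ratings == c).sum(axis=1) for c in cats], axis=1)
    r = counts.sum(axis=1)  # raters per item
    pa = ((counts * (counts - 1)).sum(axis=1) / (r * (r - 1))).mean()
    pi = (counts / r[:, None]).mean(axis=0)
    pe = (pi * (1 - pi)).sum() / (len(cats) - 1)
    return float((pa - pe) / (1 - pe))

# Six raters scoring 20 videos "safe" (1) vs "unsafe" (0), mostly agreeing.
rng = np.random.default_rng(7)
truth = rng.integers(0, 2, 20)
ratings = np.array([np.where(rng.random(6) < 0.9, t, 1 - t) for t in truth])
print(round(gwet_ac1(ratings), 2))
```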
Affiliation(s)
- Vahagn C Nikolian
- Department of Surgery, Oregon Health & Science University, 3181 S.W. Sam Jackson Park Rd., Portland, OR, 97239, USA
- Diego Camacho
- Minimally Invasive and Endoscopic Surgery at Montefiore Medical Center, New York, NY, USA
- David Earle
- New England Hernia Center, Lowell, MA, USA
- Tufts University School of Medicine, Boston, MA, USA
- Ryan Lehmann
- Department of Surgery, Section of Bariatric Surgery, University of Iowa Hospitals & Clinics, Iowa City, IA, USA
- Peter Nau
- Department of Surgery, Section of Bariatric Surgery, University of Iowa Hospitals & Clinics, Iowa City, IA, USA
- Bruce Ramshaw
- CQInsights PBC, Knoxville, TN, USA
- Caresyntax Corporation, Boston, MA, USA
- Jonah Stulberg
- Department of Surgery, McGovern Medical School University of Texas Health Science Center at Houston, Houston, TX, USA
18
|
Kawa N, Araji T, Kaafarani H, Adra SW. A Narrative Review on Intraoperative Adverse Events: Risks, Prevention, and Mitigation. J Surg Res 2024; 295:468-476. [PMID: 38070261 DOI: 10.1016/j.jss.2023.11.045]
Abstract
INTRODUCTION Adverse events from surgical interventions are common. They can occur at various stages of surgical care, and they carry a heavy burden for the different parties involved. While extensive research and effort have gone into better understanding the etiologies of postoperative complications, more research on intraoperative adverse events (iAEs) remains to be done. METHODS In this article, we reviewed the literature on iAEs to discuss their risk factors, their implications for surgical care, and the current efforts to mitigate and manage them. RESULTS Risk factors for iAEs are diverse and are dictated by patient-related factors, the nature and complexity of the procedure, the surgeon's experience, and the work environment of the operating room. The implications of iAEs vary according to their severity and include increased rates of 30-day postoperative morbidity and mortality, increased length of hospital stay and readmission, increased cost of care, and a "second victim" emotional toll on the operating surgeon. CONCLUSIONS While transparent reporting of iAEs remains a challenge, many ongoing efforts use new measures not only to report iAEs but also to provide better surveillance, prevention, and mitigation strategies that reduce their overall adverse impact.
Affiliation(s)
- Nisrine Kawa
- Department of Dermatology, New York Presbyterian Hospital, Columbia University Irving Medical Center, New York City, New York
- Tarek Araji
- Department of Surgery, Hospital of the University of Pennsylvania, Philadelphia, Pennsylvania
- Haytham Kaafarani
- Division of Trauma, Emergency Surgery and Critical Care, Department of Surgery, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Souheil W Adra
- Division of Bariatric and Minimally Invasive Surgery, Department of Surgery, Beth Israel Deaconess Medical Center, Boston, Massachusetts.

19
Li A, Javidan AP, Namazi B, Madani A, Forbes TL. Development of an Artificial Intelligence Tool for Intraoperative Guidance During Endovascular Abdominal Aortic Aneurysm Repair. Ann Vasc Surg 2024; 99:96-104. [PMID: 37914075 DOI: 10.1016/j.avsg.2023.08.027]
Abstract
BACKGROUND Adverse events during surgery can occur in part due to errors in visual perception and judgment. Deep learning is a branch of artificial intelligence (AI) that has shown promise in providing real-time intraoperative guidance. This study aims to train and test the performance of a deep learning model that can identify inappropriate landing zones during endovascular aneurysm repair (EVAR). METHODS A deep learning model was trained to identify a "No-Go" landing zone during EVAR, defined by coverage of the lowest renal artery by the stent graft. Fluoroscopic images from elective EVAR procedures performed at a single institution and from open-access sources were selected. Annotations of the "No-Go" zone were performed by trained annotators. A 10-fold cross-validation technique was used to evaluate the performance of the model against human annotations. Primary outcomes were intersection-over-union (IoU) and F1 score; secondary outcomes were pixel-wise accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). RESULTS The AI model was trained using 369 images procured from 110 different patients/videos, including 18 patients/videos (44 images) from open-access sources. For the primary outcomes, IoU and F1 were 0.43 (standard deviation ±0.29) and 0.53 (±0.32), respectively. For the secondary outcomes, accuracy, sensitivity, specificity, NPV, and PPV were 0.97 (±0.002), 0.51 (±0.34), 0.99 (±0.001), 0.99 (±0.002), and 0.62 (±0.34), respectively. CONCLUSIONS AI can effectively identify suboptimal areas of stent deployment during EVAR. Further directions include validating the model on datasets from other institutions and assessing its ability to predict optimal stent graft placement and clinical outcomes.
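The primary outcomes above (IoU and F1) are standard overlap measures for binary segmentation masks. A minimal sketch follows, assuming hypothetical NumPy masks rather than the study's fluoroscopy data; note that F1 (equivalently the Dice coefficient) relates to IoU as F1 = 2*IoU/(1 + IoU).

```python
import numpy as np

def iou_and_f1(pred, target, eps=1e-7):
    """Intersection-over-union and F1 (Dice) for binary masks.

    pred, target: boolean arrays of identical shape, True inside the
    predicted / annotated "No-Go" zone.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / (union + eps)
    # F1 == Dice == 2 * intersection / (|pred| + |target|).
    f1 = 2 * inter / (pred.sum() + target.sum() + eps)
    return iou, f1

# Hypothetical 512x512 masks with partial overlap.
pred = np.zeros((512, 512), bool); pred[100:300, 100:300] = True
gt = np.zeros((512, 512), bool);   gt[150:350, 150:350] = True
print(iou_and_f1(pred, gt))
```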
Affiliation(s)
- Allen Li
- Faculty of Medicine & The Ottawa Hospital Research Institute, University of Ottawa, Ottawa, Ontario, Canada
- Arshia P Javidan
- Division of Vascular Surgery, University of Toronto, Toronto, Ontario, Canada
- Babak Namazi
- Department of Surgery, University of Texas Southwestern Medical Center, Dallas, TX
- Amin Madani
- Department of Surgery, University Health Network & University of Toronto, Toronto, Ontario, Canada; Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, Ontario, Canada
- Thomas L Forbes
- Department of Surgery, University Health Network & University of Toronto, Toronto, Ontario, Canada.

20
Dayan D. Implementation of Artificial Intelligence-Based Computer Vision Model for Sleeve Gastrectomy: Experience in One Tertiary Center. Obes Surg 2024; 34:330-336. [PMID: 38180619 DOI: 10.1007/s11695-023-07043-x]
Abstract
INTRODUCTION Sleeve gastrectomy (SG) is the most common metabolic and bariatric procedure performed. Leveraging artificial intelligence (AI) for automated real-time data structuring and annotation of surgical videos has immense potential for clinical application. This study presents an initial real-world implementation of an AI-based computer vision model in SG, with external validation of the accuracy of its safety milestone annotations. METHODS In a retrospective single-center study, 49 consecutive SG videos (December 2020-August 2023) were captured and analyzed by the AI platform. A bariatric surgeon viewed all videos and assessed adherence to safety milestones, compared against the AI annotations. Patients' data were retrieved from the bariatric unit registry. RESULTS Median total SG duration was 47.5 min (interquartile range 36-64). Main steps included preparation (12.2%), dissection of the greater curvature (30.8%), gastric transection (28.5%), specimen extraction (7.2%), and final inspection (14.4%). Out-of-body time comprised 6.9% of the total video. Safety milestone components and AI-surgeon agreement included the following: bougie insertion (100%), distance from pylorus ≥ 2 cm (100%), parallel to lesser curvature (98%), fundus mobilization (100%), and distance from esophagus ≥ 1 cm (true-100%, false-13.6%; kappa coefficient 0.2, p = 0.006). Intraoperative complications included notable hemorrhage (n = 4) and parenchymal injury (n = 1). CONCLUSIONS The AI model provides fully automated SG video analysis. The outcomes suggest its annotations are accurate for four of five safety milestones. These data are valuable, as they reflect objective performance measures that can help improve the surgical quality and efficiency of SG. Larger cohorts will enable SG standardization and clinical correlation with outcomes, aiming to improve patient safety.
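The AI-surgeon agreement reported for the final milestone is a kappa coefficient. As an illustration only, the sketch below computes Cohen's kappa between two binary raters on hypothetical per-video milestone calls; the counts are invented and do not reproduce the study's data.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two raters (e.g., AI vs. surgeon review)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    po = sum(a == b for a, b in zip(labels_a, labels_b)) / n  # observed agreement
    ca, cb = Counter(labels_a), Counter(labels_b)
    # Expected agreement under independent marginal label frequencies.
    pe = sum((ca[k] / n) * (cb[k] / n) for k in set(labels_a) | set(labels_b))
    return (po - pe) / (1 - pe)

# Hypothetical per-video milestone calls (1 = milestone adhered to).
ai      = [1] * 40 + [0] * 9
surgeon = [1] * 38 + [0] * 2 + [0] * 6 + [1] * 3
print(f"kappa = {cohens_kappa(ai, surgeon):.2f}")
```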
Affiliation(s)
- Danit Dayan
- Division of General Surgery, Bariatric Unit, Tel Aviv Medical Center, Affiliated to Sackler Faculty of Medicine, Tel Aviv University, 6, Weizman St., Tel Aviv, Israel.

21
Abid R, Hussein AA, Guru KA. Artificial Intelligence in Urology: Current Status and Future Perspectives. Urol Clin North Am 2024; 51:117-130. [PMID: 37945097 DOI: 10.1016/j.ucl.2023.06.005]
Abstract
Surgical fields, especially urology, have shifted increasingly toward the use of artificial intelligence (AI). Advancements in AI have driven major improvements in diagnostics, outcome prediction, and robotic surgery. For robotic surgery to progress from assisting surgeons to eventually performing autonomous procedures, there must be advancements in machine learning, natural language processing, and computer vision. Moreover, barriers such as data availability, interpretability of autonomous decision-making, Internet connectivity and security, and ethical concerns must be overcome.
Affiliation(s)
- Rayyan Abid
- Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH 44106, USA
- Ahmed A Hussein
- Department of Urology, Roswell Park Comprehensive Cancer Center
- Khurshid A Guru
- Department of Urology, Roswell Park Comprehensive Cancer Center.

22
Goodman ED, Patel KK, Zhang Y, Locke W, Kennedy CJ, Mehrotra R, Ren S, Guan M, Zohar O, Downing M, Chen HW, Clark JZ, Berrigan MT, Brat GA, Yeung-Levy S. Analyzing Surgical Technique in Diverse Open Surgical Videos With Multitask Machine Learning. JAMA Surg 2024; 159:185-192. [PMID: 38055227 PMCID: PMC10701669 DOI: 10.1001/jamasurg.2023.6262]
Abstract
Objective To overcome limitations of open surgery artificial intelligence (AI) models by curating the largest collection of annotated videos and to leverage this AI-ready data set to develop a generalizable multitask AI model capable of real-time understanding of clinically significant surgical behaviors in prospectively collected real-world surgical videos. Design, Setting, and Participants The study team programmatically queried open surgery procedures on YouTube and manually annotated selected videos to create the AI-ready data set used to train a multitask AI model for 2 proof-of-concept studies, one generating surgical signatures that define the patterns of a given procedure and the other identifying kinematics of hand motion that correlate with surgeon skill level and experience. The Annotated Videos of Open Surgery (AVOS) data set includes 1997 videos from 23 open-surgical procedure types uploaded to YouTube from 50 countries over the last 15 years. Prospectively recorded surgical videos were collected from a single tertiary care academic medical center. Deidentified videos were recorded of surgeons performing open surgical procedures and analyzed for correlation with surgical training. Exposures The multitask AI model was trained on the AI-ready video data set and then retrospectively applied to the prospectively collected video data set. Main Outcomes and Measures Analysis of open surgical videos in near real time, performance on AI-ready and prospectively collected videos, and quantification of surgeon skill. Results Using the AI-ready data set, the study team developed a multitask AI model capable of real-time understanding of surgical behaviors, the building blocks of procedural flow and surgeon skill, across space and time. Through principal component analysis, a single compound skill feature was identified, composed of a linear combination of kinematic hand attributes. This feature was a significant discriminator between experienced surgeons and surgical trainees across 101 prospectively collected surgical videos of 14 operators. For each unit increase in the compound feature value, the odds of the operator being an experienced surgeon were 3.6 times higher (95% CI, 1.67-7.62; P = .001). Conclusions and Relevance In this observational study, the AVOS-trained model was applied to analyze prospectively collected open surgical videos and identify kinematic descriptors of surgical skill related to efficiency of hand motion. The ability to provide AI-deduced insights into surgical structure and skill is valuable for optimizing surgical skill acquisition and, ultimately, improving surgical care.
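The compound skill feature described above is a first principal component of kinematic hand attributes, with the effect size expressed as an odds ratio from a logistic model. The sketch below illustrates this pipeline on synthetic data with scikit-learn; the feature set, sample size, and coefficients are placeholders, not the AVOS results. (The sign of a principal component is arbitrary, so a fitted odds ratio may land on either side of 1.)

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Hypothetical per-video kinematic hand attributes (e.g., path length,
# velocity, motion smoothness); experts get slightly shifted values.
n = 101
expert = rng.integers(0, 2, n)                      # 1 = experienced surgeon
kinematics = rng.normal(size=(n, 4)) + expert[:, None] * 0.8

# Compound skill feature: first principal component of the standardized
# kinematic attributes (a single linear combination, as in the abstract).
z = StandardScaler().fit_transform(kinematics)
feature = PCA(n_components=1).fit_transform(z).ravel()

# Odds ratio per unit increase in the compound feature.
model = LogisticRegression().fit(feature.reshape(-1, 1), expert)
print(f"odds ratio = {np.exp(model.coef_[0, 0]):.2f}")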
Affiliation(s)
- Emmett D. Goodman
- Department of Computer Science, Stanford University, Stanford, California
- Department of Biomedical Data Science, Stanford University, Stanford, California
- Krishna K. Patel
- Department of Computer Science, Stanford University, Stanford, California
- Department of Biomedical Data Science, Stanford University, Stanford, California
- Yilun Zhang
- Department of Surgery, Beth Israel Deaconess Medical Center, Boston, Massachusetts
- William Locke
- Department of Computer Science, Stanford University, Stanford, California
- Department of Biomedical Data Science, Stanford University, Stanford, California
- Chris J. Kennedy
- Department of Surgery, Beth Israel Deaconess Medical Center, Boston, Massachusetts
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts
- Rohan Mehrotra
- Department of Computer Science, Stanford University, Stanford, California
- Department of Biomedical Data Science, Stanford University, Stanford, California
- Stephen Ren
- Department of Computer Science, Stanford University, Stanford, California
- Department of Biomedical Data Science, Stanford University, Stanford, California
- Melody Guan
- Department of Computer Science, Stanford University, Stanford, California
- Department of Biomedical Data Science, Stanford University, Stanford, California
- Orr Zohar
- Department of Biomedical Data Science, Stanford University, Stanford, California
- Department of Electrical Engineering, Stanford University, Stanford, California
- Maren Downing
- Department of Surgery, Beth Israel Deaconess Medical Center, Boston, Massachusetts
- Hao Wei Chen
- Department of Surgery, Beth Israel Deaconess Medical Center, Boston, Massachusetts
- Jevin Z. Clark
- Department of Surgery, Beth Israel Deaconess Medical Center, Boston, Massachusetts
- Margaret T. Berrigan
- Department of Surgery, Beth Israel Deaconess Medical Center, Boston, Massachusetts
- Gabriel A. Brat
- Department of Surgery, Beth Israel Deaconess Medical Center, Boston, Massachusetts
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts
- Serena Yeung-Levy
- Department of Computer Science, Stanford University, Stanford, California
- Department of Biomedical Data Science, Stanford University, Stanford, California
- Department of Electrical Engineering, Stanford University, Stanford, California
- Clinical Excellence Research Center, Stanford University School of Medicine, Stanford, California

23
Adrales G, Ardito F, Chowbey P, Morales-Conde S, Ferreres AR, Hensman C, Martin D, Matthaei H, Ramshaw B, Roberts JK, Schrem H, Sharma A, Tabiri S, Vibert E, Woods MS. Laparoscopic cholecystectomy critical view of safety (LC-CVS): a multi-national validation study of an objective, procedure-specific assessment using video-based assessment (VBA). Surg Endosc 2024; 38:922-930. [PMID: 37891369 DOI: 10.1007/s00464-023-10479-y]
Abstract
BACKGROUND A novel 6-item objective, procedure-specific assessment for laparoscopic cholecystectomy incorporating the critical view of safety (LC-CVS OPSA) was developed to support trainee formative and summative assessments. The LC-CVS OPSA included two retraction items (fundus and infundibulum retraction) and four CVS items (hepatocystic triangle visualization, gallbladder-liver separation, cystic artery identification, and cystic duct identification). The scoring rubric for the retraction items consisted of poor (frequently outside of defined range), adequate (minimally outside of defined range), and excellent (consistently inside defined range); ratings for the CVS items were "poor-unsafe", "adequate-safe", or "excellent-safe". METHODS A multi-national consortium of 12 expert LC surgeons applied the LC-CVS OPSA to 35 unique LC videos and one duplicate video. The primary outcome measure was inter-rater reliability as measured by Gwet's AC2, a weighted measure that adjusts for scales with a high probability of random agreement. Analysis of inter-rater reliability was conducted on a collapsed dichotomous scoring rubric of "poor-unsafe" vs. "adequate/excellent-safe". RESULTS Inter-rater reliability was high for all six items, ranging from 0.76 (hepatocystic triangle visualization) to 0.86 (cystic duct identification). Intra-rater reliability for the single duplicate video was substantially higher across the six items, ranging from 0.91 to 1.00. CONCLUSIONS The novel 6-item LC-CVS OPSA demonstrated high inter-rater reliability when tested with a multi-national consortium of expert LC surgeons. This brief instrument focused on safe surgical practice was designed to support the implementation of entrustable professional activities into busy surgical training programs. Instrument use coupled with video-based assessment creates novel datasets with the potential for artificial intelligence development, including computer vision, to drive assessment automation.
Affiliation(s)
- Gina Adrales
- Johns Hopkins University School of Medicine, 600 N. Wolfe St., Blalock 618, Baltimore, MD, 21287, USA.
- Francesco Ardito
- Hepatobiliary Surgery Unit, Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Catholic University, Rome, Italy
- Pradeep Chowbey
- Institute of Laparoscopic, Endoscopic and Bariatric Surgery, Max Super Specialty Hospital, Saket, New Delhi, India
- Salvador Morales-Conde
- Unit of Innovation in Minimally Invasive Surgery, University Hospital Virgen del Rocío, University of Sevilla, Sevilla, Spain
- Alberto R Ferreres
- Department of Surgery, University of Buenos Aires, Buenos Aires, Argentina
- Chrys Hensman
- Department of Surgery & LapSurgery, Monash University, Melbourne, Australia
- David Martin
- Division of Critical Care/Acute Care Surgery, University of Minnesota, Minneapolis, MN, USA
- Hanno Matthaei
- Department of Surgery, University Medical Center, Bonn, Germany
- Bruce Ramshaw
- CQInsights PBC, Knoxville, TN, USA
- Caresyntax Corporation, Boston, MA, USA
- J Keith Roberts
- Liver Transplant and HPB Surgery, University Hospitals Birmingham NHS Trust, Birmingham, UK
- Harald Schrem
- General, Visceral and Transplant Surgery, Medical University Graz, Graz, Austria
- Anil Sharma
- Institute of Laparoscopic, Endoscopic and Bariatric Surgery, Max Super Specialty Hospital, Saket, New Delhi, India
- Stephen Tabiri
- University for Development Studies-School of Medicine and Health Sciences, Tamale Teaching Hospital, Tamale, Ghana
- Eric Vibert
- Centre Hépato-Biliaire, Paul Brousse Hospital, AP-HP, Villejuif, France

24
Yoon D, Yoo M, Kim BS, Kim YG, Lee JH, Lee E, Min GH, Hwang DY, Baek C, Cho M, Suh YS, Kim S. Automated deep learning model for estimating intraoperative blood loss using gauze images. Sci Rep 2024; 14:2597. [PMID: 38297011 PMCID: PMC10830489 DOI: 10.1038/s41598-024-52524-3]
Abstract
The intraoperative estimated blood loss (EBL), an essential parameter for perioperative management, has traditionally been evaluated by manually weighing blood in gauze and suction bottles, a process that is both time-consuming and labor-intensive. As a novel EBL prediction platform, we developed an automated deep learning EBL prediction model utilizing the patch-wise crumpled state (P-W CS) of gauze images with texture analysis. The proposed algorithm was developed using animal data obtained from a porcine experiment and validated on human intraoperative data prospectively collected from 102 laparoscopic gastric cancer surgeries. The EBL prediction model involves gauze area detection and subsequent EBL regression based on the detected areas, with each stage optimized through comparative model performance evaluations. The selected gauze detection model demonstrated a sensitivity of 96.5% and a specificity of 98.0%. Based on this detection model, the performance of the EBL regression stage models was compared. Comparative evaluations revealed that our P-W CS-based model outperforms others, including one reliant on convolutional neural networks and another analyzing the gauze's overall crumpled state. The P-W CS-based model achieved a mean absolute error (MAE) of 0.25 g and a mean absolute percentage error (MAPE) of 7.26% in EBL regression. Additionally, per-patient assessment yielded an MAE of 0.58 g, indicating errors < 1 g/patient. In conclusion, our algorithm provides an objective standard and streamlined approach for EBL estimation during surgery without the need for perioperative approximation and additional tasks by humans. The robust performance of the model across varied surgical conditions emphasizes its clinical potential for real-world application.
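MAE and MAPE, the regression metrics quoted above, are straightforward to compute per gauze or per patient. A minimal sketch with hypothetical gram-level values:

```python
import numpy as np

def mae_mape(pred_g, true_g):
    """Mean absolute error (g) and mean absolute percentage error (%)
    for per-gauze estimated blood loss."""
    pred_g, true_g = np.asarray(pred_g, float), np.asarray(true_g, float)
    mae = np.abs(pred_g - true_g).mean()
    mape = 100.0 * (np.abs(pred_g - true_g) / true_g).mean()
    return mae, mape

# Hypothetical gauze-level blood weights in grams.
true_g = np.array([2.1, 4.8, 1.0, 6.3, 3.5])
pred_g = np.array([2.3, 4.5, 1.1, 6.8, 3.2])
print(mae_mape(pred_g, true_g))
```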
Affiliation(s)
- Dan Yoon
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, 08826, Korea
- Mira Yoo
- Department of Surgery, Seoul National University Bundang Hospital, Seongnam, 13620, Korea
- Byeong Soo Kim
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, 08826, Korea
- Young Gyun Kim
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, 08826, Korea
- Jong Hyeon Lee
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, 08826, Korea
- Eunju Lee
- Department of Surgery, Seoul National University Bundang Hospital, Seongnam, 13620, Korea
- Department of Surgery, Chung-Ang University Gwangmyeong Hospital, Gwangmyeong, 14353, Korea
- Guan Hong Min
- Department of Surgery, Seoul National University Bundang Hospital, Seongnam, 13620, Korea
- Du-Yeong Hwang
- Department of Surgery, Seoul National University Bundang Hospital, Seongnam, 13620, Korea
- Changhoon Baek
- Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, 03080, Korea
- Minwoo Cho
- Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, 03080, Korea
- Yun-Suhk Suh
- Department of Surgery, Seoul National University Bundang Hospital, Seongnam, 13620, Korea.
- Department of Surgery, Seoul National University College of Medicine, Seoul, 03080, Korea.
- Sungwan Kim
- Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, 03080, Korea.
- Institute of Bioengineering, Seoul National University, Seoul, 08826, Korea.
- Artificial Intelligence Institute, Seoul National University, Seoul, 08826, Korea.

25
Balu A, Kugener G, Pangal DJ, Lee H, Lasky S, Han J, Buchanan I, Liu J, Zada G, Donoho DA. Simulated outcomes for durotomy repair in minimally invasive spine surgery. Sci Data 2024; 11:62. [PMID: 38200013 PMCID: PMC10781746 DOI: 10.1038/s41597-023-02744-5]
Abstract
Minimally invasive spine surgery (MISS) is increasingly performed using endoscopic and microscopic visualization, and the captured video can be used for surgical education and the development of predictive artificial intelligence (AI) models. Video datasets depicting adverse event management are also valuable, as predictive models not exposed to adverse events may exhibit poor performance when these occur. Given that no dedicated spine surgery video datasets for AI model development are publicly available, we introduce Simulated Outcomes for Durotomy Repair in Minimally Invasive Spine Surgery (SOSpine). A validated MISS cadaveric dural repair simulator was used to educate neurosurgery residents, and surgical microscope video recordings were paired with outcome data. Objects including durotomy, needle, grasper, needle driver, and nerve hook were then annotated. Altogether, SOSpine contains 15,698 frames with 53,238 annotations and associated durotomy repair outcomes. For validation, an AI model was fine-tuned on SOSpine videos and detected surgical instruments with a mean average precision of 0.77. In summary, SOSpine depicts spine surgeons managing a common complication, providing opportunities to develop surgical AI models.
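Mean average precision (mAP), the detection metric quoted above, averages per-class AP values, each obtained by matching score-ranked detections to ground-truth boxes at an IoU threshold and integrating the precision-recall curve. The sketch below shows a simplified single-class AP at IoU 0.5 with step (non-interpolated) integration and hypothetical boxes; COCO-style evaluators add interpolation and multiple thresholds.

```python
import numpy as np

def box_iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def average_precision(dets, gts, iou_thr=0.5):
    """AP for one class: dets = [(score, box)], gts = [box].
    Greedy matching of score-sorted detections to unmatched ground truth."""
    dets = sorted(dets, key=lambda d: -d[0])
    matched, tps = set(), []
    for score, box in dets:
        best, best_iou = None, iou_thr
        for j, g in enumerate(gts):
            iou = box_iou(box, g)
            if j not in matched and iou >= best_iou:
                best, best_iou = j, iou
        tps.append(best is not None)
        if best is not None:
            matched.add(best)
    tp = np.cumsum(tps)
    recall = tp / max(len(gts), 1)
    precision = tp / np.arange(1, len(tps) + 1)
    # Step integration of the precision-recall curve.
    return float(np.sum(np.diff(np.concatenate(([0.0], recall))) * precision))

# Hypothetical detections (score, box) and ground-truth boxes.
dets = [(0.9, [10, 10, 50, 50]), (0.8, [12, 12, 48, 52]), (0.6, [200, 200, 240, 240])]
gts = [[11, 11, 49, 51], [205, 198, 242, 238]]
print(f"AP@0.5 = {average_precision(dets, gts):.2f}")
```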
Affiliation(s)
- Alan Balu
- Department of Neurosurgery, Georgetown University School of Medicine, 3900 Reservoir Rd NW, Washington, D.C., 20007, USA.
- Guillaume Kugener
- Department of Neurological Surgery, Keck School of Medicine of University of Southern California, 1200 North State St., Suite 3300, Los Angeles, CA, 90033, USA
- Dhiraj J Pangal
- Department of Neurological Surgery, Keck School of Medicine of University of Southern California, 1200 North State St., Suite 3300, Los Angeles, CA, 90033, USA
- Heewon Lee
- University of Southern California, 3709 Trousdale Pkwy., Los Angeles, CA, 90089, USA
- Sasha Lasky
- University of Southern California, 3709 Trousdale Pkwy., Los Angeles, CA, 90089, USA
- Jane Han
- University of Southern California, 3709 Trousdale Pkwy., Los Angeles, CA, 90089, USA
- Ian Buchanan
- Department of Neurological Surgery, Keck School of Medicine of University of Southern California, 1200 North State St., Suite 3300, Los Angeles, CA, 90033, USA
- John Liu
- Department of Neurological Surgery, Keck School of Medicine of University of Southern California, 1200 North State St., Suite 3300, Los Angeles, CA, 90033, USA
- Gabriel Zada
- Department of Neurological Surgery, Keck School of Medicine of University of Southern California, 1200 North State St., Suite 3300, Los Angeles, CA, 90033, USA
- Daniel A Donoho
- Department of Neurosurgery, Children's National Hospital, 111 Michigan Avenue NW, Washington, DC, 20010, USA

26
Komatsu M, Kitaguchi D, Yura M, Takeshita N, Yoshida M, Yamaguchi M, Kondo H, Kinoshita T, Ito M. Automatic surgical phase recognition-based skill assessment in laparoscopic distal gastrectomy using multicenter videos. Gastric Cancer 2024; 27:187-196. [PMID: 38038811 DOI: 10.1007/s10120-023-01450-w]
Abstract
BACKGROUND Gastric surgery involves numerous surgical phases; however, its steps can be clearly defined. Deep learning-based surgical phase recognition can promote standardization of gastric surgery, with applications in automatic surgical skill assessment. This study aimed to develop a deep learning-based surgical phase-recognition model using multicenter videos of laparoscopic distal gastrectomy and to examine the feasibility of automatic surgical skill assessment using the developed model. METHODS Surgical videos from 20 hospitals were used. Laparoscopic distal gastrectomy was defined and annotated into nine phases, and a deep learning-based image classification model was developed for phase recognition. We examined whether the developed model's outputs, including the number of frames in each phase and the adequacy of surgical field development during the phase of supra-pancreatic lymphadenectomy, correlated with the manually assigned skill assessment score. RESULTS The overall accuracy of phase recognition was 88.8%. Regarding skill assessment based on the number of frames during the phases of lymphadenectomy of the left greater curvature and reconstruction, the number of frames in the high-score group was significantly smaller than in the low-score group (829 vs. 1,152, P < 0.01; 1,208 vs. 1,586, P = 0.01, respectively). The model's output score for the adequacy of surgical field development was significantly higher in the high-score group than in the low-score group (0.975 vs. 0.970, P = 0.04). CONCLUSION The developed model had high accuracy in phase-recognition tasks and has potential for application in automatic surgical skill assessment systems.
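The skill analysis above compares per-phase frame counts between score groups. As a sketch of that idea: phase durations can be read off a model's per-frame predictions with a bincount, and group differences tested nonparametrically. The specific statistical test is not stated in the abstract, so the Mann-Whitney U test and all numbers here are illustrative assumptions.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def phase_frame_counts(frame_labels, n_phases):
    """Frames assigned to each phase in one video's per-frame predictions
    (integer labels 0..n_phases-1)."""
    return np.bincount(np.asarray(frame_labels), minlength=n_phases)

# Toy per-frame predictions for a 10-frame clip, 9 annotated phases.
print(phase_frame_counts([0, 0, 1, 1, 1, 1, 2, 2, 2, 2], n_phases=9))

# Hypothetical per-video frame counts for one phase, split by the
# manually assigned skill score (values invented for illustration).
high_score = [829, 790, 860, 815, 842]
low_score = [1152, 1201, 1098, 1176, 1133]
stat, p = mannwhitneyu(high_score, low_score, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```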
Affiliation(s)
- Masaru Komatsu
- Gastric Surgery Division, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Course of Advanced Clinical Research of Cancer, Juntendo University Graduate School of Medicine, 2-1-1, Hongo, Bunkyo-Ward, Tokyo, 113-8421, Japan
- Daichi Kitaguchi
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Masahiro Yura
- Gastric Surgery Division, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Nobuyoshi Takeshita
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Mitsumasa Yoshida
- Gastric Surgery Division, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Masayuki Yamaguchi
- Course of Advanced Clinical Research of Cancer, Juntendo University Graduate School of Medicine, 2-1-1, Hongo, Bunkyo-Ward, Tokyo, 113-8421, Japan
- Hibiki Kondo
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Takahiro Kinoshita
- Gastric Surgery Division, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Masaaki Ito
- Department for the Promotion of Medical Device Innovation, National Cancer Center Hospital East, 6-5-1 Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan.
- Surgical Device Innovation Office, National Cancer Center Hospital East, 6-5-1, Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan.

27
Hegde SR, Namazi B, Iyengar N, Cao S, Desir A, Marques C, Mahnken H, Dumas RP, Sankaranarayanan G. Automated segmentation of phases, steps, and tasks in laparoscopic cholecystectomy using deep learning. Surg Endosc 2024; 38:158-170. [PMID: 37945709 DOI: 10.1007/s00464-023-10482-3]
Abstract
BACKGROUND Video-based review is paramount for operative performance assessment but can be laborious when performed manually. Hierarchical Task Analysis (HTA) is a well-known method that divides any procedure into phases, steps, and tasks. HTA requires large datasets of videos with consistent definitions at each level. Our aim was to develop an AI model for automated segmentation of phases, steps, and tasks in laparoscopic cholecystectomy videos using a standardized HTA. METHODS A total of 160 laparoscopic cholecystectomy videos were collected from a publicly available dataset known as cholec80 and from our own institution. All videos were annotated with the beginning and end of a predefined set of phases, steps, and tasks. Deep learning models were then separately developed and trained for the three levels using a 3D Convolutional Neural Network architecture. RESULTS Four phases, eight steps, and nineteen tasks were defined through expert consensus. The training set for our deep learning models contained 100 videos, with an additional 20 videos for hyperparameter optimization and tuning. The remaining 40 videos were used to test performance. The overall accuracies for phases, steps, and tasks were 0.90, 0.81, and 0.65, with average F1 scores of 0.86, 0.76, and 0.48, respectively. The control of bleeding and bile spillage tasks were the most variable in definition, operative management, and clinical relevance. CONCLUSION The use of hierarchical task analysis for surgical video analysis has numerous applications in AI-based automated systems. Our results show that our tiered method of task analysis can successfully be used to train a deep learning model.
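The abstract specifies a 3D CNN trained separately per HTA level but gives no architectural details, so the following PyTorch skeleton is purely illustrative: a toy clip classifier with the input/output shapes such a model would use, not the authors' network.

```python
import torch
import torch.nn as nn

class Clip3DCNN(nn.Module):
    """Minimal 3D CNN that labels a short video clip with one of
    n_classes workflow labels (one such model per HTA level)."""
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),      # -> (B, 32, 1, 1, 1)
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                 # x: (B, 3, T, H, W)
        return self.head(self.features(x).flatten(1))

# Hypothetical usage: 16-frame 112x112 RGB clips, 4 surgical phases.
model = Clip3DCNN(n_classes=4)
clip = torch.randn(2, 3, 16, 112, 112)
print(model(clip).shape)                  # torch.Size([2, 4])
```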
Affiliation(s)
- Shruti R Hegde
- Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390-9159, USA
- Babak Namazi
- Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390-9159, USA
- Niyenth Iyengar
- Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390-9159, USA
- Sarah Cao
- Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390-9159, USA
- Alexis Desir
- Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390-9159, USA
- Carolina Marques
- Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390-9159, USA
- Heidi Mahnken
- Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390-9159, USA
- Ryan P Dumas
- Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390-9159, USA
- Ganesh Sankaranarayanan
- Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390-9159, USA.

28
Huang Y, Ding X, Zhao Y, Tian X, Feng G, Gao Z. Automatic detection and segmentation of chorda tympani under microscopic vision in otosclerosis patients via convolutional neural networks. Int J Med Robot 2023; 19:e2567. [PMID: 37634074 DOI: 10.1002/rcs.2567]
Abstract
BACKGROUND Artificial intelligence (AI) techniques, especially deep learning (DL) techniques, have shown promising results for various computer vision tasks in the field of surgery. However, AI-guided navigation during microscopic surgery for real-time surgical guidance and decision support is much more complex, and its efficacy has yet to be demonstrated. We propose a model dedicated to the evaluation of DL-based semantic segmentation of the chorda tympani (CT) during microscopic surgery. METHODS Various convolutional neural networks were constructed, trained, and validated for semantic segmentation of the CT. Our dataset comprises 5817 annotated images from 36 patients, randomly split into a training set (90%, 5236 images) and a validation set (10%, 581 images). In addition, 1500 raw images from 3 patients (500 images randomly selected per patient) were used to evaluate network performance. RESULTS When evaluated on the validation set (581 images), our proposed CT detection networks achieved strong performance, and the modified U-net performed best (mIOU = 0.892, mPA = 0.9427). Moreover, when applying the U-net to predict the test set (1500 raw images from 3 patients), our methods also showed strong overall performance (Accuracy = 0.976, Precision = 0.996, Sensitivity = 0.979, Specificity = 0.902). CONCLUSIONS This study suggests that DL can be used for automated detection and segmentation of the CT in patients with otosclerosis during microscopic surgery with a high degree of performance. Our research validates the potential feasibility of future vision-based surgical navigation assistance and autonomous surgery using AI.
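The segmentation metrics above (mIOU and mPA) average per-class intersection-over-union and per-class pixel accuracy, both of which fall out of a pixel-level confusion matrix. A minimal two-class sketch with hypothetical label maps:

```python
import numpy as np

def miou_mpa(pred, target, n_classes):
    """Mean IoU and mean pixel accuracy over classes, from a pixel-level
    confusion matrix (pred/target are integer label maps)."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (target.ravel(), pred.ravel()), 1)
    tp = np.diag(cm).astype(float)
    iou = tp / (cm.sum(0) + cm.sum(1) - tp + 1e-9)   # per-class IoU
    pa = tp / (cm.sum(1) + 1e-9)                     # per-class pixel accuracy
    return iou.mean(), pa.mean()

# Hypothetical 2-class example: background (0) vs. chorda tympani (1).
gt = np.zeros((64, 64), int); gt[20:40, 20:40] = 1
pr = np.zeros((64, 64), int); pr[22:42, 22:42] = 1
print(miou_mpa(pr, gt, n_classes=2))
```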
Affiliation(s)
- Yu Huang
- Department of Otorhinolaryngology Head and Neck Surgery, the Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xin Ding
- Department of Otorhinolaryngology Head and Neck Surgery, the Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yang Zhao
- Department of Otorhinolaryngology Head and Neck Surgery, the Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xu Tian
- Department of Otorhinolaryngology Head and Neck Surgery, the Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Guodong Feng
- Department of Otorhinolaryngology Head and Neck Surgery, the Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zhiqiang Gao
- Department of Otorhinolaryngology Head and Neck Surgery, the Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China

29
Hameed MS, Laplante S, Masino C, Khalid MU, Zhang H, Protserov S, Hunter J, Mashouri P, Fecso AB, Brudno M, Madani A. What is the educational value and clinical utility of artificial intelligence for intraoperative and postoperative video analysis? A survey of surgeons and trainees. Surg Endosc 2023; 37:9453-9460. [PMID: 37697116 DOI: 10.1007/s00464-023-10377-3]
Abstract
INTRODUCTION Surgical complications often occur due to lapses in judgment and decision-making. Advances in artificial intelligence (AI) have made it possible to train algorithms that identify anatomy and interpret the surgical field. These algorithms can potentially be used for intraoperative decision support and postoperative video analysis and feedback. Despite the very early success of proof-of-concept algorithms, it remains unknown whether this innovation meets the needs of end-users or how best to deploy it. This study explores users' opinions on the value, usability, and design considerations for adopting AI in operating rooms. METHODS A device-agnostic, web-accessible software platform was developed to provide AI inference either (1) intraoperatively on a live video stream (synchronous mode) or (2) on an uploaded video or image file (asynchronous mode) postoperatively for feedback. A validated AI model (GoNoGoNet), which identifies safe and dangerous zones of dissection during laparoscopic cholecystectomy, was used as the use case. Surgeons and trainees performing laparoscopic cholecystectomy interacted with the AI platform and completed a 5-point Likert-scale survey to evaluate the educational value, usability, and design of the platform. RESULTS Twenty participants (11 surgeons and 9 trainees) evaluated the platform intraoperatively (n = 10) and postoperatively (n = 11). The majority agreed or strongly agreed that AI is an effective adjunct to surgical training (81%; neutral = 10%), effective for providing real-time feedback (70%; neutral = 20%) and postoperative feedback (73%; neutral = 27%), and capable of improving surgeon confidence (67%; neutral = 29%). Only 40% (neutral = 50%) and 57% (neutral = 43%) believed that the tool is effective in improving intraoperative decisions and performance, or beneficial for patient care, respectively. Overall, 38% (neutral = 43%) reported they would use the platform consistently if available. The majority agreed or strongly agreed that the platform was easy to use (81%; neutral = 14%) and had acceptable resolution (62%; neutral = 24%), while 30% (neutral = 20%) reported that it disrupted the OR workflow and 20% (neutral = 0%) reported a significant time lag. All respondents reported that such a system should be available "on demand" to turn on or off at their discretion. CONCLUSIONS Most found AI to be a useful tool for providing support and feedback to surgeons, despite several implementation obstacles. The study findings will inform the future design and usability of this technology in order to optimize its clinical impact and adoption by end-users.
Affiliation(s)
- M Saif Hameed
- Surgical Artificial Intelligence Research Academy, University Health Network, 81 Baldwin Street, Toronto, ON, M5T 1L5, Canada.
- Simon Laplante
- Surgical Artificial Intelligence Research Academy, University Health Network, 81 Baldwin Street, Toronto, ON, M5T 1L5, Canada
- Department of Surgery, University of Toronto, Toronto, ON, Canada
- Caterina Masino
- Surgical Artificial Intelligence Research Academy, University Health Network, 81 Baldwin Street, Toronto, ON, M5T 1L5, Canada
- Muhammad Uzair Khalid
- Surgical Artificial Intelligence Research Academy, University Health Network, 81 Baldwin Street, Toronto, ON, M5T 1L5, Canada
- Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Haochi Zhang
- DATA Team, University Health Network, Toronto, ON, Canada
- Jaryd Hunter
- DATA Team, University Health Network, Toronto, ON, Canada
- Andras B Fecso
- Surgical Artificial Intelligence Research Academy, University Health Network, 81 Baldwin Street, Toronto, ON, M5T 1L5, Canada
- Michael Brudno
- DATA Team, University Health Network, Toronto, ON, Canada
- Amin Madani
- Surgical Artificial Intelligence Research Academy, University Health Network, 81 Baldwin Street, Toronto, ON, M5T 1L5, Canada
- Department of Surgery, University of Toronto, Toronto, ON, Canada

30
Buyck F, Vandemeulebroucke J, Ceranka J, Van Gestel F, Cornelius JF, Duerinck J, Bruneau M. Computer-vision based analysis of the neurosurgical scene - A systematic review. Brain Spine 2023; 3:102706. [PMID: 38020988 PMCID: PMC10668095 DOI: 10.1016/j.bas.2023.102706]
Abstract
Introduction With the increasing use of robotic surgical adjuncts, artificial intelligence, and augmented reality in neurosurgery, the automated analysis of digital images and videos acquired over various procedures has become a subject of increased interest. While several computer vision (CV) methods have been developed and implemented for analyzing surgical scenes, few studies have been dedicated to neurosurgery. Research question In this work, we present a systematic literature review focusing on CV methodologies specifically applied to the analysis of neurosurgical procedures based on intra-operative images and videos. Additionally, we provide recommendations for the future development of CV models in neurosurgery. Material and methods We conducted a systematic literature search in multiple databases until January 17, 2023, including Web of Science, PubMed, IEEE Xplore, Embase, and SpringerLink. Results We identified 17 studies employing CV algorithms on neurosurgical videos/images. The most common applications of CV were tool and neuroanatomical structure detection or characterization and, to a lesser extent, surgical workflow analysis. Convolutional neural networks (CNNs) were the most frequently utilized architecture for CV models (65%), demonstrating superior performance in tool detection and segmentation. In particular, Mask R-CNN showed the most robust performance across modalities. Discussion and conclusion Our systematic review demonstrates that CV models can effectively detect and differentiate tools, surgical phases, neuroanatomical structures, and critical events in complex neurosurgical scenes with accuracies above 95%. Automated tool recognition contributes to the objective characterization and assessment of surgical performance, with potential applications in neurosurgical training and intra-operative safety management.
Affiliation(s)
- Félix Buyck
- Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
- Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium
- Jef Vandemeulebroucke
- Vrije Universiteit Brussel (VUB), Department of Electronics and Informatics (ETRO), 1050, Brussels, Belgium
- Department of Radiology, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
- imec, 3001, Leuven, Belgium
- Jakub Ceranka
- Vrije Universiteit Brussel (VUB), Department of Electronics and Informatics (ETRO), 1050, Brussels, Belgium
- imec, 3001, Leuven, Belgium
- Frederick Van Gestel
- Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
- Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium
- Jan Frederick Cornelius
- Department of Neurosurgery, Medical Faculty, Heinrich-Heine-University, 40225, Düsseldorf, Germany
- Johnny Duerinck
- Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
- Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium
- Michaël Bruneau
- Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
- Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium

31
Ortenzi M, Rapoport Ferman J, Antolin A, Bar O, Zohar M, Perry O, Asselmann D, Wolf T. A novel high accuracy model for automatic surgical workflow recognition using artificial intelligence in laparoscopic totally extraperitoneal inguinal hernia repair (TEP). Surg Endosc 2023; 37:8818-8828. [PMID: 37626236 PMCID: PMC10615930 DOI: 10.1007/s00464-023-10375-5]
Abstract
INTRODUCTION Artificial intelligence and computer vision are revolutionizing the way we perceive video analysis in minimally invasive surgery. This emerging technology has increasingly been leveraged successfully for video segmentation, documentation, education, and formative assessment. New, sophisticated platforms allow pre-determined segments chosen by surgeons to be automatically presented without the need to review entire videos. This study aimed to validate and demonstrate the accuracy of the first reported AI-based computer vision algorithm that automatically recognizes surgical steps in videos of totally extraperitoneal (TEP) inguinal hernia repair. METHODS Videos of TEP procedures were manually labeled by a team of annotators trained to identify and label surgical workflow according to six major steps. For bilateral hernias, an additional "change of focus" step was also included. The videos were then used to train a computer vision AI algorithm. Performance accuracy was assessed in comparison to the manual annotations. RESULTS A total of 619 full-length TEP videos were analyzed: 371 were used to train the model, 93 for internal validation, and the remaining 155 as a test set to evaluate algorithm accuracy. The overall accuracy for the complete procedure was 88.8%. Per-step accuracy reached the highest value for the hernia sac reduction step (94.3%) and the lowest for the preperitoneal dissection step (72.2%). CONCLUSIONS These results indicate that the novel AI model provided fully automated video analysis with a high level of accuracy. High-accuracy models leveraging AI to automate surgical video analysis allow us to identify and monitor surgical performance, providing mathematical metrics that can be stored, evaluated, and compared. As such, the proposed model is capable of enabling data-driven insights to improve surgical quality and demonstrate best practices in TEP procedures.
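Per-step accuracy, as reported above, can be computed by conditioning frame-level accuracy on the ground-truth step label. The sketch below is illustrative only; the toy frame sequence does not correspond to the study's annotation scheme beyond using six step indices.

```python
import numpy as np

def per_step_accuracy(pred, truth, n_steps):
    """Overall and per-step frame accuracy for workflow recognition.

    pred, truth: integer step labels per frame for one procedure video.
    """
    pred, truth = np.asarray(pred), np.asarray(truth)
    overall = (pred == truth).mean()
    per_step = [(pred[truth == s] == s).mean() if (truth == s).any() else np.nan
                for s in range(n_steps)]
    return overall, per_step

# Hypothetical 12-frame toy sequence with 3 of 6 TEP steps present.
truth = np.array([0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2])
pred  = np.array([0, 0, 1, 1, 1, 1, 1, 2, 2, 0, 2, 2])
overall, per_step = per_step_accuracy(pred, truth, n_steps=6)
print(overall, per_step)
```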
Affiliation(s)
- Monica Ortenzi
- Theator Inc., Palo Alto, CA, USA.
- Department of General and Emergency Surgery, Polytechnic University of Marche, Ancona, Italy.
- Omri Bar
- Theator Inc., Palo Alto, CA, USA

32
Chen KA, Kirchoff KE, Butler LR, Holloway AD, Kapadia MR, Kuzmiak CM, Downs-Canner SM, Spanheimer PM, Gallagher KK, Gomez SM. Analysis of Specimen Mammography with Artificial Intelligence to Predict Margin Status. Ann Surg Oncol 2023; 30:7107-7115. [PMID: 37563337 PMCID: PMC10592216 DOI: 10.1245/s10434-023-14083-1]
Abstract
BACKGROUND Intraoperative specimen mammography is a valuable tool in breast cancer surgery, providing immediate assessment of the margins of a resected tumor. However, the accuracy of specimen mammography in detecting microscopic margin positivity is low. We sought to develop an artificial intelligence model to predict the pathologic margin status of resected breast tumors using specimen mammography. METHODS A dataset of specimen mammography images matched with pathologic margin status was collected from our institution from 2017 to 2020. The dataset was randomly split into training, validation, and test sets. Specimen mammography models pretrained on radiologic images were developed and compared with models pretrained on nonmedical images. Model performance was assessed using sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC). RESULTS The dataset included 821 images, and 53% had positive margins. For three out of four model architectures tested, models pretrained on radiologic images outperformed nonmedical models. The highest performing model, InceptionV3, showed a sensitivity of 84%, a specificity of 42%, and an AUROC of 0.71. Model performance was better among patients with invasive cancers, less dense breasts, and non-white race. CONCLUSIONS This study developed and internally validated artificial intelligence models that predict pathologic margin status for partial mastectomy from specimen mammograms. The models' accuracy compares favorably with published literature on surgeon and radiologist interpretation of specimen mammography. With further development, these models could more precisely guide the extent of resection, potentially improving cosmesis and reducing reoperations.
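Sensitivity, specificity, and AUROC, the metrics used above, can be computed directly from a model's predicted probabilities; AUROC in particular equals the probability that a randomly chosen positive case outscores a randomly chosen negative one. The scores and labels below are hypothetical.

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the rank-sum (Mann-Whitney) identity: the probability
    that a random positive scores higher than a random negative."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    # Count positive-negative pairs ordered correctly (ties count half).
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity at a fixed probability threshold."""
    labels = np.asarray(labels, bool)
    pred = np.asarray(scores) >= threshold
    sens = (pred & labels).sum() / labels.sum()
    spec = (~pred & ~labels).sum() / (~labels).sum()
    return sens, spec

# Hypothetical model probabilities of a positive margin per image.
scores = np.array([0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.3, 0.2])
labels = np.array([1, 1, 0, 1, 0, 1, 0, 0])
print(auroc(scores, labels), sens_spec(scores, labels, 0.5))
```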
Affiliation(s)
- Kevin A Chen
- Division of Surgical Oncology, Department of Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Kathryn E Kirchoff
- Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Logan R Butler
- Division of Surgical Oncology, Department of Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Alexa D Holloway
- Division of Surgical Oncology, Department of Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Muneera R Kapadia
- Division of Surgical Oncology, Department of Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Cherie M Kuzmiak
- Department of Radiology, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Stephanie M Downs-Canner
- Department of Surgery, Breast Service, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Phillip M Spanheimer
- Division of Surgical Oncology, Department of Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Kristalyn K Gallagher
- Division of Surgical Oncology, Department of Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.
- Shawn M Gomez
- Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.

33
Knoedler L, Knoedler S, Allam O, Remy K, Miragall M, Safi AF, Alfertshofer M, Pomahac B, Kauke-Navarro M. Application possibilities of artificial intelligence in facial vascularized composite allotransplantation - a narrative review. Front Surg 2023; 10:1266399. [PMID: 38026484 PMCID: PMC10646214 DOI: 10.3389/fsurg.2023.1266399]
Abstract
Facial vascularized composite allotransplantation (FVCA) is an emerging field of reconstructive surgery that represents a paradigm shift in the surgical treatment of patients with severe facial disfigurement. While conventional reconstructive strategies were previously considered the gold standard for patients with devastating facial trauma, FVCA has demonstrated promising short- and long-term outcomes. Yet, several obstacles remain that complicate the integration of FVCA procedures into the standard workflow for facial trauma patients. Artificial intelligence (AI) has been shown to provide targeted and resource-effective solutions for persisting clinical challenges in various specialties. However, there is a paucity of studies elucidating the combination of FVCA and AI to overcome such hurdles. Here, we delineate the application possibilities of AI in the field of FVCA and discuss the use of AI technology for FVCA outcome simulation, diagnosis and prediction of rejection episodes, and malignancy screening. This line of research may serve as a foundation for future studies linking these two revolutionary biotechnologies.
Collapse
Affiliation(s)
- Leonard Knoedler
- Department of Plastic, Hand- and Reconstructive Surgery, University Hospital Regensburg, Regensburg, Germany
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT, United States
| | - Samuel Knoedler
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT, United States
| | - Omar Allam
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT, United States
| | - Katya Remy
- Department of Oral and Maxillofacial Surgery, University Hospital Regensburg, Regensburg, Germany
| | - Maximilian Miragall
- Department of Oral and Maxillofacial Surgery, University Hospital Regensburg, Regensburg, Germany
| | - Ali-Farid Safi
- Craniologicum, Center for Cranio-Maxillo-Facial Surgery, Bern, Switzerland
- Faculty of Medicine, University of Bern, Bern, Switzerland
| | - Michael Alfertshofer
- Division of Hand, Plastic and Aesthetic Surgery, Ludwig-Maximilians University Munich, Munich, Germany
| | - Bohdan Pomahac
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT, United States
| | - Martin Kauke-Navarro
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT, United States
| |
|
34
|
Regazzoni P, Jupiter JB, Liu WC, Fernández dell’Oca AA. Evidence-Based Surgery: What Can Intra-Operative Images Contribute? J Clin Med 2023; 12:6809. [PMID: 37959274 PMCID: PMC10649165 DOI: 10.3390/jcm12216809] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2023] [Revised: 10/25/2023] [Accepted: 10/26/2023] [Indexed: 11/15/2023] Open
Abstract
Evidence-based medicine integrates results from randomized controlled trials (RCTs) and meta-analyses, combining the best external evidence with individual clinical expertise and patients' preferences. However, RCTs of surgery differ from those of medicine in that surgical performance is often assumed to be consistent. Yet, evaluating whether each surgery is performed to the same standard is quite challenging. The novelty of this review is to emphasize, with a focus on orthopedic trauma, the advantage of complete intra-operative image documentation, which allows direct evaluation of the quality of intra-operative technical performance. The absence of complete intra-operative image documentation leads to inhomogeneous case series, yielding inconsistent results because secondary analysis is impossible. Thus, comparisons and the reproduction of studies are difficult. Access to complete intra-operative image data in surgical RCTs allows not only secondary analysis but also comparisons with similar cases. Such complete data can be included in electronic papers. Offering these data to peers via an accessible link when presenting papers facilitates the selection process and improves publications for readers. Additionally, access to the full set of image data for all presented cases serves as a rich resource for learning: it enables readers to sift through the information, pinpoint the details most relevant to their individual needs, and potentially incorporate this knowledge into daily practice. Broad use of complete intra-operative image documentation is pivotal for bridging the gap between clinical research findings and real-world applications. Enhancing the quality of surgical RCTs would help equalize evidence acquisition between internal medicine and surgery. A joint effort by surgeons, scientific societies, publishers, and healthcare authorities is needed to support these ideas, meet the economic requirements, and overcome the mental obstacles to their realization.
Affiliation(s)
- Pietro Regazzoni
- Department of Trauma Surgery, University Hospital Basel, 4031 Basel, Switzerland
| | - Jesse B. Jupiter
- Hand and Arm Center, Department of Orthopedics, Massachusetts General Hospital, Boston, MA 02114, USA;
| | - Wen-Chih Liu
- Hand and Arm Center, Department of Orthopedics, Massachusetts General Hospital, Boston, MA 02114, USA;
- Department of Orthopedics, Kaohsiung Medical University Hospital, Kaohsiung 80756, Taiwan
- School of Medicine, College of Medicine, Kaohsiung Medical University, Kaohsiung 80756, Taiwan
| | - Alberto A. Fernández dell’Oca
- Department of Traumatology, Hospital Britanico, Montevideo 11600, Uruguay;
- Residency Program in Traumatology and Orthopedics, University of Montevideo, Montevideo 11600, Uruguay
| |
|
35
|
Park JJ, Doiphode N, Zhang X, Pan L, Blue R, Shi J, Buch VP. Developing the surgeon-machine interface: using a novel instance-segmentation framework for intraoperative landmark labelling. Front Surg 2023; 10:1259756. [PMID: 37936949 PMCID: PMC10626480 DOI: 10.3389/fsurg.2023.1259756] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2023] [Accepted: 09/20/2023] [Indexed: 11/09/2023] Open
Abstract
Introduction The utilisation of artificial intelligence (AI) augments intraoperative safety, surgical training, and patient outcomes. We introduce the term Surgeon-Machine Interface (SMI) to describe this innovative intersection between surgeons and machine inference. A custom deep computer vision (CV) architecture within a sparse labelling paradigm was developed, specifically tailored to conceptualise the SMI. This platform demonstrates the ability to perform instance segmentation on anatomical landmarks and tools from a single open spinal dural arteriovenous fistula (dAVF) surgery video dataset. Methods Our custom deep convolutional neural network was based on the SOLOv2 architecture for precise, instance-level segmentation of surgical video data. The test video consisted of 8520 frames, with sparse labelling of only 133 frames annotated for training. Accuracy and inference time, assessed using F1-score and mean Average Precision (mAP), were compared against current state-of-the-art architectures on a separate test set of 85 additionally annotated frames. Results Our SMI demonstrated superior accuracy and computing speed compared to these frameworks. The F1-score and mAP achieved by our platform were 17% and 15.2%, respectively, surpassing MaskRCNN (15.2%, 13.9%), YOLOv3 (5.4%, 11.9%), and SOLOv2 (3.1%, 10.4%). Considering detections that exceeded the Intersection over Union threshold of 50%, our platform achieved an F1-score of 44.2% and mAP of 46.3%, outperforming MaskRCNN (41.3%, 43.5%), YOLOv3 (15%, 34.1%), and SOLOv2 (9%, 32.3%). Our platform demonstrated the fastest inference time (88 ms), compared to MaskRCNN (90 ms), SOLOv2 (100 ms), and YOLOv3 (106 ms). Finally, the minimal training set yielded good generalisation performance: our architecture successfully identified objects in frames that were not included in the training or validation sets, indicating its ability to handle out-of-domain scenarios. Discussion We present our development of an innovative intraoperative SMI to demonstrate the future promise of advanced CV in the surgical domain. Through successful implementation in a microscopic dAVF surgery, our framework demonstrates superior performance over current state-of-the-art segmentation architectures in intraoperative landmark guidance with high sample efficiency, representing the most advanced AI-enabled surgical inference platform to date. Our future goals include transfer learning paradigms for scaling to additional surgery types, addressing clinical and technical limitations to perform real-time decoding, and ultimately enabling a real-time neurosurgical guidance platform.
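The instance-level F1 metric at an IoU threshold of 50% reported above can be made concrete with a short sketch. This is not the authors' evaluation code; it is a minimal illustration assuming predicted and ground-truth instance masks are boolean NumPy arrays:

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-Union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def f1_at_iou(preds, gts, thresh=0.5):
    """Greedy one-to-one matching of predicted to ground-truth masks;
    a prediction counts as a true positive if its best IoU >= thresh."""
    matched, tp = set(), 0
    for p in preds:
        best_j, best_iou = None, 0.0
        for j, g in enumerate(gts):
            if j in matched:
                continue
            iou = mask_iou(p, g)
            if iou > best_iou:
                best_j, best_iou = j, iou
        if best_j is not None and best_iou >= thresh:
            matched.add(best_j)
            tp += 1
    fp, fn = len(preds) - tp, len(gts) - tp
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if gts else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```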
Affiliation(s)
- Jay J. Park
- Department of Neurosurgery, The Surgical Innovation and Machine Interfacing (SIMI) Lab, Stanford University School of Medicine, Stanford, CA, United States
- Centre for Global Health, Usher Institute, Edinburgh Medical School, The University of Edinburgh, Edinburgh, United Kingdom
| | - Nehal Doiphode
- Department of Neurosurgery, The Surgical Innovation and Machine Interfacing (SIMI) Lab, Stanford University School of Medicine, Stanford, CA, United States
- Department of Computer and Information Science, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States
| | - Xiao Zhang
- Department of Computer Science, University of Chicago, Chicago, IL, United States
| | - Lishuo Pan
- Department of Computer Science, Brown University, Providence, RI, United States
| | - Rachel Blue
- Department of Neurosurgery, Perelman School of Medicine at The University of Pennsylvania, Philadelphia, PA, United States
| | - Jianbo Shi
- Department of Computer and Information Science, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States
| | - Vivek P. Buch
- Department of Neurosurgery, The Surgical Innovation and Machine Interfacing (SIMI) Lab, Stanford University School of Medicine, Stanford, CA, United States
| |
|
36
|
Lünse S, Wisotzky EL, Beckmann S, Paasch C, Hunger R, Mantke R. Technological advancements in surgical laparoscopy considering artificial intelligence: a survey among surgeons in Germany. Langenbecks Arch Surg 2023; 408:405. [PMID: 37843584 PMCID: PMC10579134 DOI: 10.1007/s00423-023-03134-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2023] [Accepted: 10/02/2023] [Indexed: 10/17/2023]
Abstract
PURPOSE The integration of artificial intelligence (AI) into surgical laparoscopy has shown promising results in recent years. This survey aims to investigate the inconveniences of current conventional laparoscopy and to evaluate the attitudes and desires of surgeons in Germany towards new AI-based laparoscopic systems. METHODS A 12-item web-based questionnaire was distributed between July and November 2022 to 38 German university hospitals as well as to a Germany-wide voluntary hospital association (CLINOTEL) consisting of 66 hospitals. RESULTS A total of 202 questionnaires were completed. The majority of respondents (88.1%) stated that they needed one assistant during laparoscopy and rated the assistant's skillfulness as "very important" (39.6%) or "important" (49.5%). The most uncomfortable aspects of conventional laparoscopy were inappropriate camera movement (73.8%) and lens condensation (73.3%). Selected features that should be included in a new laparoscopic system were simple and intuitive maneuverability (81.2%), automatic de-fogging (80.7%), and self-cleaning of the camera (77.2%). Desired AI-based features were improvement of camera positioning (71.3%), visualization of anatomical landmarks (67.3%), image stabilization (66.8%), and tissue damage protection (59.4%). The main reason for purchasing an AI-based system was to improve patient safety (86.1%); a price of €50,000-100,000 was considered reasonable (34.2%), and the system was expected to replace up to 25% of the existing assistant's workflow (41.6%). CONCLUSION Simple and intuitive maneuverability with improved, image-stabilized camera guidance, in combination with a lens-cleaning system as well as AI-based augmentation of anatomical landmarks and tissue damage protection, appear to be key requirements for the further development of laparoscopic systems.
Affiliation(s)
- Sebastian Lünse
- Department of General and Visceral Surgery, Brandenburg Medical School, University Hospital Brandenburg/Havel, Hochstrasse 29, 14770, Brandenburg, Germany.
| | - Eric L Wisotzky
- Vision and Imaging Technologies, Fraunhofer Heinrich-Hertz-Institut HHI, Einsteinufer 37, 10587, Berlin, Germany
- Department of Computer Science, Humboldt-Universität Zu Berlin, Unter Den Linden 6, 10117, Berlin, Germany
| | - Sophie Beckmann
- Vision and Imaging Technologies, Fraunhofer Heinrich-Hertz-Institut HHI, Einsteinufer 37, 10587, Berlin, Germany
- Department of Computer Science, Humboldt-Universität Zu Berlin, Unter Den Linden 6, 10117, Berlin, Germany
| | - Christoph Paasch
- Department of General and Visceral Surgery, Brandenburg Medical School, University Hospital Brandenburg/Havel, Hochstrasse 29, 14770, Brandenburg, Germany
| | - Richard Hunger
- Department of General and Visceral Surgery, Brandenburg Medical School, University Hospital Brandenburg/Havel, Hochstrasse 29, 14770, Brandenburg, Germany
| | - René Mantke
- Department of General and Visceral Surgery, Brandenburg Medical School, University Hospital Brandenburg/Havel, Hochstrasse 29, 14770, Brandenburg, Germany
- Faculty of Health Science Brandenburg, Brandenburg Medical School, University Hospital Brandenburg/Havel, 14770, Brandenburg, Germany
| |
|
37
|
Graëff C, Daiss A, Lampert T, Padoy N, Martins A, Sapa MC, Liverneaux P. Preliminary stage in the development of an artificial intelligence algorithm: Variations between 100 surgeons in phase annotation in a video of internal fixation of distal radius fracture. Orthop Traumatol Surg Res 2023; 109:103564. [PMID: 36702298 DOI: 10.1016/j.otsr.2023.103564] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/11/2022] [Revised: 11/16/2022] [Accepted: 12/13/2022] [Indexed: 01/25/2023]
Abstract
INTRODUCTION To be used naturally and widely, an artificial intelligence algorithm for phase detection in surgical videos presupposes an expert consensus defining the phases. OBJECTIVES The aim of the present study was to seek consensus on defining the various phases of a surgical technique in wrist traumatology. METHODS Three thousand two hundred and twenty-nine surgeons were sent a video showing anterior plate fixation of the distal radius and a questionnaire on the number of phases they distinguished and the visual cues signaling the beginning of each phase. Three experimenters predefined the number of phases (5: installation, approach, fixation, verification, closure) and sub-phases (3a: introduction of plate; 3b: positioning distal screws; 3c: positioning proximal screws) and the cues signaling the beginning of each. The number of responses per item was collected. RESULTS Only 216 surgeons (6.7%) opened the questionnaire, and 100 (3.1%) answered all questions. Most respondents claimed 5/5 expertise. The number of phases identified ranged between 3 and 10. More than two-thirds of respondents identified the same phase cue as defined by the 3 experimenters in most cases, except for "verification" and "positioning proximal screws". DISCUSSION Surgical procedures comprise a succession of phases, the beginning or end of which can be defined by a precise visual cue on video: each phase begins either with the appearance of its own cue or with the disappearance of the cue defining the preceding phase. CONCLUSION These cues need to be defined very precisely before attempting manual annotation of surgical videos in order to develop an artificial intelligence algorithm. LEVEL OF EVIDENCE II.
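Such a consensus lends itself to a machine-readable phase definition before annotation begins. The sketch below encodes the five phases and three sub-phases named in the abstract; the event format and validation helper are illustrative assumptions, not part of the study:

```python
# Hypothetical machine-readable phase definition for manual video annotation,
# using the five phases and three sub-phases named in the study.
PHASES = {
    1: {"name": "installation"},
    2: {"name": "approach"},
    3: {"name": "fixation",
        "sub_phases": {"3a": "introduction of plate",
                       "3b": "positioning distal screws",
                       "3c": "positioning proximal screws"}},
    4: {"name": "verification"},
    5: {"name": "closure"},
}

def validate_annotation(events):
    """Check that annotated phase-start events are ordered in time and refer
    to known phases. `events` is a list of (frame_index, phase_id) pairs."""
    frames = [f for f, _ in events]
    return frames == sorted(frames) and all(p in PHASES for _, p in events)
```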
Affiliation(s)
- Camille Graëff
- ICube CNRS UMR7357, Strasbourg University, 2-4, rue Boussingault, 67000 Strasbourg, France; IHU, Institute of image-guided surgery, Strasbourg, France
| | - Audrey Daiss
- Department of hand surgery, Strasbourg University Hospitals, FMTS, 1, avenue Molière, 67200 Strasbourg, France
| | - Thomas Lampert
- ICube CNRS UMR7357, Strasbourg University, 2-4, rue Boussingault, 67000 Strasbourg, France
| | - Nicolas Padoy
- ICube CNRS UMR7357, Strasbourg University, 2-4, rue Boussingault, 67000 Strasbourg, France; IHU, Institute of image-guided surgery, Strasbourg, France
| | - Antoine Martins
- Department of hand surgery, Strasbourg University Hospitals, FMTS, 1, avenue Molière, 67200 Strasbourg, France
| | - Marie-Cécile Sapa
- Department of hand surgery, Strasbourg University Hospitals, FMTS, 1, avenue Molière, 67200 Strasbourg, France
| | - Philippe Liverneaux
- ICube CNRS UMR7357, Strasbourg University, 2-4, rue Boussingault, 67000 Strasbourg, France; Department of hand surgery, Strasbourg University Hospitals, FMTS, 1, avenue Molière, 67200 Strasbourg, France.
| |
|
38
|
Khan DZ, Hanrahan JG, Baldeweg SE, Dorward NL, Stoyanov D, Marcus HJ. Current and Future Advances in Surgical Therapy for Pituitary Adenoma. Endocr Rev 2023; 44:947-959. [PMID: 37207359 PMCID: PMC10502574 DOI: 10.1210/endrev/bnad014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/30/2022] [Revised: 03/14/2023] [Accepted: 05/17/2023] [Indexed: 05/21/2023]
Abstract
The vital physiological role of the pituitary gland, alongside its proximity to critical neurovascular structures, means that pituitary adenomas can cause significant morbidity or mortality. While enormous advancements have been made in the surgical care of pituitary adenomas, numerous challenges remain, such as treatment failure and recurrence. To meet these clinical challenges, there has been an enormous expansion of novel medical technologies (eg, endoscopy, advanced imaging, artificial intelligence). These innovations have the potential to benefit each step of the patient's journey and, ultimately, drive improved outcomes. Earlier and more accurate diagnosis addresses this in part; analysis of novel patient data sets, such as automated facial analysis or natural language processing of medical records, holds potential for achieving an earlier diagnosis. After diagnosis, treatment decision-making and planning will benefit from radiomics and multimodal machine learning models. Surgical safety and effectiveness will be transformed by smart simulation methods for trainees. Next-generation imaging techniques and augmented reality will enhance surgical planning and intraoperative navigation. Similarly, surgical abilities will be augmented by the future operative armamentarium, including advanced optical devices, smart instruments, and surgical robotics. Intraoperative support to surgical team members will benefit from a data science approach, utilizing machine learning analysis of operative videos to improve patient safety and orient team members to a common workflow. Postoperatively, neural networks leveraging multimodal datasets will allow early detection of individuals at risk of complications and assist in the prediction of treatment failure, thus supporting patient-specific discharge and monitoring protocols. While these advancements in pituitary surgery hold promise to enhance the quality of care, clinicians must be the gatekeepers of the translation of such technologies, ensuring systematic assessment of risk and benefit prior to clinical implementation. In doing so, the synergy between these innovations can be leveraged to drive improved outcomes for patients of the future.
Affiliation(s)
- Danyal Z Khan
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
| | - John G Hanrahan
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
| | - Stephanie E Baldeweg
- Department of Diabetes & Endocrinology, University College London Hospitals NHS Foundation Trust, London NW1 2BU, UK
- Centre for Obesity and Metabolism, Department of Experimental and Translational Medicine, Division of Medicine, University College London, London WC1E 6BT, UK
| | - Neil L Dorward
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
| | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Digital Surgery Ltd, Medtronic, London WD18 8WW, UK
| | - Hani J Marcus
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
| |
|
39
|
Kitaguchi D, Harai Y, Kosugi N, Hayashi K, Kojima S, Ishikawa Y, Yamada A, Hasegawa H, Takeshita N, Ito M. Artificial intelligence for the recognition of key anatomical structures in laparoscopic colorectal surgery. Br J Surg 2023; 110:1355-1358. [PMID: 37552629 DOI: 10.1093/bjs/znad249] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2023] [Revised: 07/10/2023] [Accepted: 07/10/2023] [Indexed: 08/10/2023]
Abstract
Lay Summary
To prevent intraoperative organ injury, surgeons strive to identify anatomical structures as early and accurately as possible during surgery. The objective of this prospective observational study was to develop artificial intelligence (AI)-based real-time automatic organ recognition models for laparoscopic surgery and to compare their performance with that of surgeons. The time taken to recognize target anatomy was compared between the AI models and both expert and novice surgeons. The AI models demonstrated faster recognition of target anatomy than surgeons, especially novice surgeons. These findings suggest that AI has the potential to compensate for the skill and experience gap between surgeons.
Affiliation(s)
- Daichi Kitaguchi
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Centre Hospital East, Chiba, Japan
| | - Yuriko Harai
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
| | - Norihito Kosugi
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
| | - Kazuyuki Hayashi
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
| | - Shigehiro Kojima
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
| | - Yuto Ishikawa
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
| | - Atsushi Yamada
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
| | - Hiro Hasegawa
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Centre Hospital East, Chiba, Japan
| | - Nobuyoshi Takeshita
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
| | - Masaaki Ito
- Department for the Promotion of Medical Device Innovation, National Cancer Centre Hospital East, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Centre Hospital East, Chiba, Japan
| |
|
40
|
Williams E, Fernandes RD, Choi K, Fasola L, Zevin B. Learning Outcomes and Educational Effectiveness of E-Learning as a Continuing Professional Development Intervention for Practicing Surgeons and Proceduralists: A Systematic Review. J Surg Educ 2023; 80:1139-1149. [PMID: 37316431 DOI: 10.1016/j.jsurg.2023.05.017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/01/2023] [Revised: 04/21/2023] [Accepted: 05/20/2023] [Indexed: 06/16/2023]
Abstract
BACKGROUND Electronic learning (e-learning) has become a prevalent mode of delivering medical education. We aimed to determine the learning outcomes and educational effectiveness of e-learning as a continuing professional development (CPD) intervention for practicing surgeons and proceduralists. METHODS We searched the MEDLINE database and included studies reporting learning outcomes of e-learning CPD interventions for practicing surgeons and physicians performing technical procedures. We excluded articles studying only surgical trainees and those not reporting learning outcomes. Two reviewers independently screened articles, extracted data, and assessed study quality using the Critical Appraisal Skills Programme (CASP) tools. Learning outcomes and educational effectiveness were categorized using Moore's Outcomes Framework (PROSPERO: CRD42022333523). RESULTS Of 1307 identified articles, 12 were included: nine cohort studies, one randomized controlled trial, and two qualitative studies, with a total of 2158 participants. Eight studies were rated as moderate, five as strong, and two as weak in study quality. E-learning CPD interventions included web-based modules, image recognition, videos, a repository of videos and schematics, and an online journal club. Seven studies reported participants' satisfaction with the e-learning interventions (Moore's Level 2), four reported improvements in participants' declarative knowledge (Level 3a), one reported improvements in procedural knowledge (Level 3b), and five reported improvements in participants' procedural competence in an educational setting (Level 4). No studies demonstrated improvements in participants' workplace-based performance, the health of patients, or community health (Levels 5-7). CONCLUSIONS E-learning as a CPD educational intervention is associated with high satisfaction and improvements in the knowledge and procedural competencies of practicing surgeons and proceduralists in an educational setting. Future research is required to investigate whether e-learning is associated with higher-level learning outcomes.
Affiliation(s)
- Erin Williams
- Department of Surgery, Queen's University, Kingston, Canada
| | | | - Ken Choi
- The School of Medicine, Queen's University, Kingston, Canada
| | - Laurie Fasola
- Department of Surgery, Queen's University, Kingston, Canada
| | - Boris Zevin
- Department of Surgery, Queen's University, Kingston, Canada.
| |
|
41
|
Kinoshita T, Komatsu M. Artificial Intelligence in Surgery and Its Potential for Gastric Cancer. J Gastric Cancer 2023; 23:400-409. [PMID: 37553128 PMCID: PMC10412972 DOI: 10.5230/jgc.2023.23.e27] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/15/2023] [Revised: 07/19/2023] [Accepted: 07/20/2023] [Indexed: 08/10/2023] Open
Abstract
Artificial intelligence (AI) has made significant progress in recent years, and many medical fields are attempting to introduce AI technology into clinical practice. Currently, much research is being conducted to evaluate whether AI can be incorporated into surgical procedures to make them safer and more efficient and, ultimately, to obtain better outcomes for patients. In this paper, we review basic AI research regarding surgery and discuss the potential for implementing AI technology in gastric cancer surgery. At present, research and development is focused on AI technologies that assist the surgeon's understanding and judgment during surgery, such as anatomical navigation. AI systems are also being developed to recognize which surgical phase is ongoing. Such surgical phase recognition systems are being considered for the efficient storage of surgical videos and for education and, in the future, for use in systems that objectively evaluate the skill of surgeons. At this time, it is not considered practical, also from an ethical standpoint, to let AI make intraoperative decisions or move forceps automatically. AI research on surgery still has various limitations, and it is desirable to develop practical systems that will truly benefit clinical practice in the future.
Affiliation(s)
- Takahiro Kinoshita
- Gastric Surgery Division, National Cancer Center Hospital East, Kashiwa, Japan.
| | - Masaru Komatsu
- Gastric Surgery Division, National Cancer Center Hospital East, Kashiwa, Japan
| |
|
42
|
Lavanchy JL, Vardazaryan A, Mascagni P, Mutter D, Padoy N. Preserving privacy in surgical video analysis using a deep learning classifier to identify out-of-body scenes in endoscopic videos. Sci Rep 2023; 13:9235. [PMID: 37286660 PMCID: PMC10247775 DOI: 10.1038/s41598-023-36453-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2023] [Accepted: 06/03/2023] [Indexed: 06/09/2023] Open
Abstract
Surgical video analysis facilitates education and research. However, video recordings of endoscopic surgeries can contain privacy-sensitive information, especially if the endoscopic camera is moved out of the patient's body and out-of-body scenes are recorded. Identification of out-of-body scenes in endoscopic videos is therefore of major importance for preserving the privacy of patients and operating room staff. This study developed and validated a deep learning model for the identification of out-of-body images in endoscopic videos. The model was trained and evaluated on an internal dataset of 12 different types of laparoscopic and robotic surgeries and was externally validated on two independent multicentric test datasets of laparoscopic gastric bypass and cholecystectomy surgeries. Model performance was evaluated against human ground truth annotations by measuring the area under the receiver operating characteristic curve (ROC AUC). The internal dataset, consisting of 356,267 images from 48 videos, and the two multicentric test datasets, consisting of 54,385 and 58,349 images from 10 and 20 videos, respectively, were annotated. The model identified out-of-body images with 99.97% ROC AUC on the internal test dataset. Mean ± standard deviation ROC AUC was 99.94 ± 0.07% on the multicentric gastric bypass dataset and 99.71 ± 0.40% on the multicentric cholecystectomy dataset. The model can reliably identify out-of-body images in endoscopic videos and is publicly shared, facilitating privacy preservation in surgical video analysis.
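The publicly shared model itself is not reproduced here; as a rough sketch of the task, the following shows a binary out-of-body frame classifier with fine-tuning and ROC AUC evaluation of the kind reported, assuming PyTorch, torchvision, and scikit-learn, and a loader yielding (frame batch, label batch):

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_auc_score

# Frame-level binary classifier: 1 = out-of-body, 0 = in-body.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_epoch(loader, device="cpu"):
    """One fine-tuning pass over (frame, label) batches; labels are 0/1."""
    model.train().to(device)
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x.to(device)).squeeze(1), y.float().to(device))
        loss.backward()
        optimizer.step()

@torch.no_grad()
def roc_auc(loader, device="cpu"):
    """ROC AUC of sigmoid scores against ground-truth frame labels."""
    model.eval().to(device)
    scores, labels = [], []
    for x, y in loader:
        scores += torch.sigmoid(model(x.to(device)).squeeze(1)).cpu().tolist()
        labels += y.tolist()
    return roc_auc_score(labels, scores)
```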
Affiliation(s)
- Joël L Lavanchy
- IHU Strasbourg, 1 Place de l'Hôpital, 67091, Strasbourg Cedex, France.
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland.
- Division of Surgery, Clarunis-University Center for Gastrointestinal and Liver Diseases, St Clara and University Hospital of Basel, Basel, Switzerland.
| | - Armine Vardazaryan
- IHU Strasbourg, 1 Place de l'Hôpital, 67091, Strasbourg Cedex, France
- ICube, University of Strasbourg, CNRS, Strasbourg, France
| | - Pietro Mascagni
- IHU Strasbourg, 1 Place de l'Hôpital, 67091, Strasbourg Cedex, France
- Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
| | - Didier Mutter
- IHU Strasbourg, 1 Place de l'Hôpital, 67091, Strasbourg Cedex, France
- University Hospital of Strasbourg, Strasbourg, France
| | - Nicolas Padoy
- IHU Strasbourg, 1 Place de l'Hôpital, 67091, Strasbourg Cedex, France
- ICube, University of Strasbourg, CNRS, Strasbourg, France
| |
|
43
|
Zang C, Turkcan MK, Narasimhan S, Cao Y, Yarali K, Xiang Z, Szot S, Ahmad F, Choksi S, Bitner DP, Filicori F, Kostic Z. Surgical Phase Recognition in Inguinal Hernia Repair-AI-Based Confirmatory Baseline and Exploration of Competitive Models. Bioengineering (Basel) 2023; 10:654. [PMID: 37370585 DOI: 10.3390/bioengineering10060654] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2023] [Revised: 05/18/2023] [Accepted: 05/23/2023] [Indexed: 06/29/2023] Open
Abstract
Video-recorded robotic-assisted surgeries allow the use of automated computer vision and artificial intelligence/deep learning methods for quality assessment and workflow analysis in surgical phase recognition. We considered a dataset of 209 videos of robotic-assisted laparoscopic inguinal hernia repair (RALIHR) collected from 8 surgeons, defined rigorous ground-truth annotation rules, and then pre-processed and annotated the videos. We deployed seven deep learning models to establish the baseline accuracy for surgical phase recognition and explored four advanced architectures. For rapid execution of the studies, we initially engaged three dozen MS-level engineering students in a competitive classroom setting, followed by focused research. We unified the data processing pipeline in a confirmatory study and explored a number of scenarios that differed in how the deep learning networks were trained and evaluated. For the scenario with 21 validation videos from all surgeons, the Video Swin Transformer model achieved ~0.85 validation accuracy, and the Perceiver IO model achieved ~0.84. Our studies affirm the necessity of close collaborative research between medical experts and engineers for developing automated surgical phase recognition models deployable in clinical settings.
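As an illustration of clip-level phase recognition of the kind benchmarked above, here is a minimal sketch using a generic 3D CNN from torchvision rather than the Video Swin Transformer or Perceiver IO models; the phase count is a placeholder, not the paper's annotation scheme:

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

NUM_PHASES = 14  # hypothetical number of annotated RALIHR phases

# 3D CNN over short clips; the final layer maps clip features to phases.
model = r3d_18(weights=R3D_18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_PHASES)

def predict_phase(clip: torch.Tensor) -> int:
    """clip: (3, T, H, W) normalized video clip; returns a phase index."""
    model.eval()
    with torch.no_grad():
        logits = model(clip.unsqueeze(0))  # add batch dimension
    return int(logits.argmax(dim=1))
```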
Affiliation(s)
- Chengbo Zang
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
| | - Mehmet Kerem Turkcan
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
| | - Sanjeev Narasimhan
- Department of Computer Science, Columbia University, New York, NY 10027, USA
| | - Yuqing Cao
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
| | - Kaan Yarali
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
| | - Zixuan Xiang
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
| | - Skyler Szot
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
| | - Feroz Ahmad
- Department of Computer Science, Columbia University, New York, NY 10027, USA
| | - Sarah Choksi
- Intraoperative Performance Analytics Laboratory (IPAL), Lenox Hill Hospital, New York, NY 10021, USA
| | - Daniel P Bitner
- Intraoperative Performance Analytics Laboratory (IPAL), Lenox Hill Hospital, New York, NY 10021, USA
| | - Filippo Filicori
- Intraoperative Performance Analytics Laboratory (IPAL), Lenox Hill Hospital, New York, NY 10021, USA
- Zucker School of Medicine at Hofstra/Northwell Health, Hempstead, NY 11549, USA
| | - Zoran Kostic
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
| |
|
44
|
Nyangoh Timoh K, Huaulme A, Cleary K, Zaheer MA, Lavoué V, Donoho D, Jannin P. A systematic review of annotation for surgical process model analysis in minimally invasive surgery based on video. Surg Endosc 2023:10.1007/s00464-023-10041-w. [PMID: 37157035 DOI: 10.1007/s00464-023-10041-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2022] [Accepted: 03/25/2023] [Indexed: 05/10/2023]
Abstract
BACKGROUND Annotated data are foundational to applications of supervised machine learning. However, there seems to be a lack of common language used in the field of surgical data science. The aim of this study is to review the process of annotation and the semantics used in the creation of surgical process models (SPMs) for minimally invasive surgery videos. METHODS For this systematic review, we reviewed articles indexed in the MEDLINE database from January 2000 until March 2022. We selected articles using surgical video annotations to describe a surgical process model in the field of minimally invasive surgery. We excluded studies focusing only on instrument detection or recognition of anatomical areas. The risk of bias was evaluated with the Newcastle-Ottawa quality assessment tool. Data from the studies were presented visually in tables using the SPIDER tool. RESULTS Of the 2806 articles identified, 34 were selected for review. Twenty-two were in the field of digestive surgery, six in ophthalmologic surgery only, one in neurosurgery, three in gynecologic surgery, and two in mixed fields. Thirty-one studies (88.2%) were dedicated to phase, step, or action recognition and mainly relied on a very simple formalization (29, 85.2%). Clinical information was lacking in the datasets of studies using available public datasets. The process of annotation for surgical process models was poorly described, and descriptions of the surgical procedures were highly variable between studies. CONCLUSION Surgical video annotation lacks a rigorous and reproducible framework. This leads to difficulties in sharing videos between institutions and hospitals because of the different languages used. There is a need to develop and use a common ontology to improve libraries of annotated surgical videos.
Affiliation(s)
- Krystel Nyangoh Timoh
- Department of Gynecology and Obstetrics and Human Reproduction, CHU Rennes, Rennes, France.
- INSERM, LTSI - UMR 1099, University Rennes 1, Rennes, France.
- Laboratoire d'Anatomie et d'Organogenèse, Faculté de Médecine, Centre Hospitalier Universitaire de Rennes, 2 Avenue du Professeur Léon Bernard, 35043, Rennes Cedex, France.
- Department of Obstetrics and Gynecology, Rennes Hospital, Rennes, France.
| | - Arnaud Huaulme
- INSERM, LTSI - UMR 1099, University Rennes 1, Rennes, France
| | - Kevin Cleary
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, 20010, USA
| | - Myra A Zaheer
- George Washington University School of Medicine and Health Sciences, Washington, DC, USA
| | - Vincent Lavoué
- Department of Gynecology and Obstetrics and Human Reproduction, CHU Rennes, Rennes, France
| | - Dan Donoho
- Division of Neurosurgery, Center for Neuroscience, Children's National Hospital, Washington, DC, 20010, USA
| | - Pierre Jannin
- INSERM, LTSI - UMR 1099, University Rennes 1, Rennes, France
| |
|
45
|
Eckhoff JA, Ban Y, Rosman G, Müller DT, Hashimoto DA, Witkowski E, Babic B, Rus D, Bruns C, Fuchs HF, Meireles O. TEsoNet: knowledge transfer in surgical phase recognition from laparoscopic sleeve gastrectomy to the laparoscopic part of Ivor-Lewis esophagectomy. Surg Endosc 2023; 37:4040-4053. [PMID: 36932188 PMCID: PMC10156818 DOI: 10.1007/s00464-023-09971-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2022] [Accepted: 02/21/2023] [Indexed: 03/19/2023]
Abstract
BACKGROUND Surgical phase recognition using computer vision presents an essential requirement for artificial intelligence-assisted analysis of surgical workflow. Its performance is heavily dependent on large amounts of annotated video data, which remain a limited resource, especially for highly specialized procedures. Knowledge transfer from common to more complex procedures can promote data efficiency: phase recognition models trained on large, readily available datasets may be extrapolated and transferred to smaller datasets of different procedures to improve generalizability. The conditions under which transfer learning is appropriate and feasible remain to be established. METHODS We defined ten operative phases for the laparoscopic part of Ivor Lewis esophagectomy through expert consensus. A dataset of 40 videos was annotated accordingly. An established model architecture for phase recognition (CNN + LSTM) was adapted to generate a "Transferal Esophagectomy Network" (TEsoNet) for co-training and transfer learning from laparoscopic sleeve gastrectomy to the laparoscopic part of Ivor Lewis esophagectomy, exploring different training set compositions and training weights. RESULTS The explored model architecture is capable of accurate phase detection in complex procedures, such as esophagectomy, even with low quantities of training data. Knowledge transfer between two upper gastrointestinal procedures is feasible and achieves reasonable accuracy with respect to operative phases with high procedural overlap. CONCLUSION Robust phase recognition models can achieve reasonable, yet phase-specific, accuracy through transfer learning and co-training between two related procedures, even when exposed to small amounts of training data for the target procedure. Further exploration is required to determine the appropriate data amounts, the key characteristics of the training procedure, and the temporal annotation methods required for successful transferal phase recognition. Transfer learning across different procedures addressing small datasets may increase data efficiency. Finally, to enable the surgical application of AI for intraoperative risk mitigation, coverage of rare, specialized procedures needs to be explored.
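A minimal sketch of the CNN + LSTM pattern named above and of transferring source-procedure weights to a target procedure follows; the layer sizes, the source phase count, and the choice to reinitialize only the classification head are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn
from torchvision import models

class CNNLSTM(nn.Module):
    """Per-frame CNN features aggregated over time by an LSTM, then a phase head."""
    def __init__(self, num_phases: int, hidden: int = 256):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Identity()          # keep 512-d frame features
        self.cnn = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_phases)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out)                # per-frame phase logits

# Transfer: initialize the esophagectomy model from sleeve-gastrectomy weights,
# replacing only the phase head (the phase vocabularies differ between procedures).
source = CNNLSTM(num_phases=7)   # hypothetical sleeve-gastrectomy phase count
target = CNNLSTM(num_phases=10)  # ten esophagectomy phases, per the abstract
state = {k: v for k, v in source.state_dict().items() if not k.startswith("head")}
target.load_state_dict(state, strict=False)
```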
Affiliation(s)
- J A Eckhoff
- Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA.
- Department of General, Visceral, Tumor and Transplant Surgery, University Hospital Cologne, Kerpenerstrasse 62, 50937, Cologne, Germany.
| | - Y Ban
- Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, 02139, USA
| | - G Rosman
- Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, 02139, USA
| | - D T Müller
- Department of General, Visceral, Tumor and Transplant Surgery, University Hospital Cologne, Kerpenerstrasse 62, 50937, Cologne, Germany
| | - D A Hashimoto
- Department of Surgery, University Hospitals Cleveland Medical Center, Cleveland, OH, 44106, USA
- Department of Surgery, Case Western Reserve School of Medicine, Cleveland, OH, 44106, USA
| | - E Witkowski
- Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA
| | - B Babic
- Department of General, Visceral, Tumor and Transplant Surgery, University Hospital Cologne, Kerpenerstrasse 62, 50937, Cologne, Germany
| | - D Rus
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, 02139, USA
| | - C Bruns
- Department of General, Visceral, Tumor and Transplant Surgery, University Hospital Cologne, Kerpenerstrasse 62, 50937, Cologne, Germany
| | - H F Fuchs
- Department of General, Visceral, Tumor and Transplant Surgery, University Hospital Cologne, Kerpenerstrasse 62, 50937, Cologne, Germany
| | - O Meireles
- Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA
| |
|
46
|
Wu S, Chen Z, Liu R, Li A, Cao Y, Wei A, Liu Q, Liu J, Wang Y, Jiang J, Ying Z, An J, Peng B, Wang X. SurgSmart: an artificial intelligent system for quality control in laparoscopic cholecystectomy: an observational study. Int J Surg 2023; 109:1105-1114. [PMID: 37039533 PMCID: PMC10389595 DOI: 10.1097/js9.0000000000000329] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Accepted: 02/22/2023] [Indexed: 04/12/2023]
Abstract
BACKGROUND The rate of bile duct injury in laparoscopic cholecystectomy (LC) continues to be high due to low achievement of the critical view of safety (CVS) and the absence of an effective quality control system. The development of an intelligent system enables automatic quality control of LC surgery and, eventually, mitigation of bile duct injury. This study aims to develop an intelligent surgical quality control system for LC and to use the system to evaluate LC videos and investigate factors associated with CVS achievement. MATERIALS AND METHODS SurgSmart, an intelligent system capable of automatically recognizing surgical phases, disease severity, critical division action, and CVS, was developed using training datasets. SurgSmart was then applied to another multicenter dataset to validate its application and to investigate factors associated with CVS achievement. RESULTS SurgSmart performed well across all of its models, with the critical division action model achieving the highest overall accuracy (98.49%), followed by the disease severity model (95.45%) and the surgical phases model (88.61%). CVS I, CVS II, and CVS III had an accuracy of 80.64%, 97.62%, and 78.87%, respectively. CVS was achieved in 4.33% of cases in the system application dataset. In addition, the analysis indicated that surgeons at a higher hospital level had a higher CVS achievement rate, although there was still considerable variation in CVS achievement among surgeons in the same hospital. CONCLUSIONS SurgSmart, the surgical quality control system, performed admirably in our study. In addition, the system's initial application demonstrated its broad potential for use in surgical quality control.
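How per-criterion outputs might be rolled up into a single CVS-achievement flag can be sketched briefly; the probability fields and the 0.5 threshold are illustrative assumptions, not SurgSmart's internals:

```python
from dataclasses import dataclass

@dataclass
class CVSAssessment:
    """Per-video probabilities from three hypothetical per-criterion
    classifiers (CVS I-III), e.g. aggregated over pre-division frames."""
    cvs1: float
    cvs2: float
    cvs3: float

def cvs_achieved(a: CVSAssessment, threshold: float = 0.5) -> bool:
    """CVS is credited only when all three criteria are met simultaneously."""
    return all(p >= threshold for p in (a.cvs1, a.cvs2, a.cvs3))

# Example: strong criteria I and II but a missed criterion III -> not achieved.
print(cvs_achieved(CVSAssessment(cvs1=0.91, cvs2=0.88, cvs3=0.42)))  # False
```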
Affiliation(s)
- Shangdi Wu
- Division of Pancreatic Surgery, Department of General Surgery
- West China School of Medicine
| | - Zixin Chen
- Division of Pancreatic Surgery, Department of General Surgery
- West China School of Medicine
| | - Runwen Liu
- ChengDu Withai Innovations Technology Company
| | - Ang Li
- Division of Pancreatic Surgery, Department of General Surgery
- Guang’an People’s Hospital, Guang’an, Sichuan Province, China
| | - Yu Cao
- Operating Room
- West China School of Nursing, Sichuan University
| | - Ailin Wei
- Guang’an People’s Hospital, Guang’an, Sichuan Province, China
| | | | - Jie Liu
- ChengDu Withai Innovations Technology Company
| | - Yuxian Wang
- ChengDu Withai Innovations Technology Company
| | - Jingwen Jiang
- West China Biomedical Big Data Center, West China Hospital of Sichuan University
- Med-X Center for Informatics, Sichuan University, Chengdu
| | - Zhiye Ying
- West China Biomedical Big Data Center, West China Hospital of Sichuan University
- Med-X Center for Informatics, Sichuan University, Chengdu
| | - Jingjing An
- Operating Room
- West China School of Nursing, Sichuan University
| | - Bing Peng
- Division of Pancreatic Surgery, Department of General Surgery
- West China School of Medicine
| | - Xin Wang
- Division of Pancreatic Surgery, Department of General Surgery
- West China School of Medicine
| |
|
47
|
Müller DT, Schiffmann LM, Reisewitz A, Chon SH, Eckhoff JA, Babic B, Schmidt T, Schröder W, Bruns CJ, Fuchs HF. Mapping the Lymphatic Drainage Pattern of Esophageal Cancer with Near-Infrared Fluorescent Imaging during Robotic Assisted Minimally Invasive Ivor Lewis Esophagectomy (RAMIE)-First Results of the Prospective ESOMAP Feasibility Trial. Cancers (Basel) 2023; 15:cancers15082247. [PMID: 37190175 DOI: 10.3390/cancers15082247] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2023] [Revised: 04/04/2023] [Accepted: 04/09/2023] [Indexed: 05/17/2023] Open
Abstract
While the sentinel lymph node concept is routinely applied in other surgical fields, no established and validated modality for lymph node mapping in esophageal cancer surgery currently exists. Near-infrared fluorescence (NIR) imaging using indocyanine green (ICG) has recently been shown to be a safe technology for peritumoral injection and subsequent lymph node mapping in small surgical cohorts, mostly without the use of robotic technology. The aim of this study was to identify the lymphatic drainage pattern of esophageal cancer during highly standardized RAMIE and to correlate the intraoperative images with the histopathological dissemination of lymphatic metastases. Patients with clinically advanced-stage squamous cell carcinoma or adenocarcinoma of the esophagus undergoing RAMIE at our Center of Excellence for Surgery of the Upper Gastrointestinal Tract were prospectively included in this study. Patients were admitted on the day prior to surgery, and an additional esophagogastroduodenoscopy with endoscopic injection of the ICG solution around the tumor was performed. Intraoperative imaging was performed using the Stryker 1688 or the FIREFLY fluorescence imaging system, and resected lymph nodes were sent to pathology. A total of 20 patients were included in the study, and the feasibility and safety of applying NIR imaging with ICG during RAMIE were demonstrated. NIR imaging to detect lymph node metastases can be safely performed during RAMIE. Further analyses in our center will focus on pathological analyses of ICG-positive tissue and quantification using artificial intelligence tools, with correlation to long-term follow-up data.
Affiliation(s)
- Dolores T Müller
- Department of General, Visceral, Cancer and Transplant Surgery, University of Cologne, Kerpener Str. 62, D-50937 Cologne, Germany
| | - Lars M Schiffmann
- Department of General, Visceral, Cancer and Transplant Surgery, University of Cologne, Kerpener Str. 62, D-50937 Cologne, Germany
| | - Alissa Reisewitz
- Department of General, Visceral, Cancer and Transplant Surgery, University of Cologne, Kerpener Str. 62, D-50937 Cologne, Germany
| | - Seung-Hun Chon
- Department of General, Visceral, Cancer and Transplant Surgery, University of Cologne, Kerpener Str. 62, D-50937 Cologne, Germany
| | - Jennifer A Eckhoff
- Department of General, Visceral, Cancer and Transplant Surgery, University of Cologne, Kerpener Str. 62, D-50937 Cologne, Germany
| | - Benjamin Babic
- Center for Esophagogastric Cancer Surgery Frankfurt, St. Elisabethen Hospital Frankfurt, D-60487 Frankfurt am Main, Germany
| | - Thomas Schmidt
- Department of General, Visceral, Cancer and Transplant Surgery, University of Cologne, Kerpener Str. 62, D-50937 Cologne, Germany
| | - Wolfgang Schröder
- Department of General, Visceral, Cancer and Transplant Surgery, University of Cologne, Kerpener Str. 62, D-50937 Cologne, Germany
| | - Christiane J Bruns
- Department of General, Visceral, Cancer and Transplant Surgery, University of Cologne, Kerpener Str. 62, D-50937 Cologne, Germany
| | - Hans F Fuchs
- Department of General, Visceral, Cancer and Transplant Surgery, University of Cologne, Kerpener Str. 62, D-50937 Cologne, Germany
| |
|
48
|
Ríos MS, Molina-Rodriguez MA, Londoño D, Guillén CA, Sierra S, Zapata F, Giraldo LF. Cholec80-CVS: An open dataset with an evaluation of Strasberg's critical view of safety for AI. Sci Data 2023; 10:194. [PMID: 37031247 PMCID: PMC10082817 DOI: 10.1038/s41597-023-02073-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2022] [Accepted: 03/15/2023] [Indexed: 04/10/2023] Open
Abstract
Strasberg's criteria for detecting a critical view of safety are a widely known strategy to reduce bile duct injuries during laparoscopic cholecystectomy. In spite of their popularity and efficacy, recent studies have shown that human misidentification errors have led to substantial rates of bile duct injury. Developing artificial intelligence-based tools that facilitate the identification of a critical view of safety in cholecystectomy surgeries can potentially minimize the risk of such injuries. With this goal in mind, we present Cholec80-CVS, the first open dataset with video annotations of Strasberg's Critical View of Safety (CVS) criteria. Our dataset contains CVS criteria annotations provided by skilled surgeons for all videos in the well-known Cholec80 open video dataset. We consider Cholec80-CVS to be the first step towards the creation of intelligent systems that can assist humans during laparoscopic cholecystectomy.
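The dataset's exact file layout is not described in the abstract, so the reader below is purely hypothetical: it assumes a CSV with one row per annotated video segment and one 0/1 column per CVS criterion, simply to show how such annotations might be consumed downstream:

```python
import csv

def load_cvs_annotations(path: str):
    """Illustrative reader for a hypothetical CSV layout:
    video_id, start_frame, end_frame, criterion_1, criterion_2, criterion_3
    where each criterion column is 0/1 as judged by the annotating surgeon."""
    rows = []
    with open(path, newline="") as f:
        for r in csv.DictReader(f):
            rows.append({
                "video_id": r["video_id"],
                "span": (int(r["start_frame"]), int(r["end_frame"])),
                "criteria": tuple(int(r[f"criterion_{i}"]) for i in (1, 2, 3)),
            })
    return rows
```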
Affiliation(s)
- Manuel Sebastián Ríos
- Department of Electric and Electronic Engineering, Universidad de Los Andes, Bogotá D.C., Colombia
| | | | - Daniella Londoño
- Department of General Surgery, Universidad CES, Medellín, Colombia
| | - Camilo Andrés Guillén
- Department of Electric and Electronic Engineering, Universidad de Los Andes, Bogotá D.C., Colombia
| | - Sebastián Sierra
- Department of General Surgery, Universidad CES, Medellín, Colombia
| | - Felipe Zapata
- Department of General Surgery, Universidad CES, Medellín, Colombia
| | - Luis Felipe Giraldo
- Department of Biomedical Engineering, Universidad de Los Andes, Bogotá D.C., Colombia.
| |
|
49
|
Zhang B, Goel B, Sarhan MH, Goel VK, Abukhalil R, Kalesan B, Stottler N, Petculescu S. Surgical workflow recognition with temporal convolution and transformer for action segmentation. Int J Comput Assist Radiol Surg 2023; 18:785-794. [PMID: 36542253 DOI: 10.1007/s11548-022-02811-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2022] [Accepted: 12/09/2022] [Indexed: 12/24/2022]
Abstract
PURPOSE Automatic surgical workflow recognition enabled by computer vision algorithms plays a key role in enhancing the learning experience of surgeons. It also supports building context-aware systems that allow better surgical planning and decision making which may in turn improve outcomes. Utilizing temporal information is crucial for recognizing context; hence, various recent approaches use recurrent neural networks or transformers to recognize actions. METHODS We design and implement a two-stage method for surgical workflow recognition. We utilize R(2+1)D for video clip modeling in the first stage. We propose Action Segmentation Temporal Convolutional Transformer (ASTCFormer) network for full video modeling in the second stage. ASTCFormer utilizes action segmentation transformers (ASFormers) and temporal convolutional networks (TCNs) to build a temporally aware surgical workflow recognition system. RESULTS We compare the proposed ASTCFormer with recurrent neural networks, multi-stage TCN, and ASFormer approaches. The comparison is done on a dataset comprised of 207 robotic and laparoscopic cholecystectomy surgical videos annotated for 7 surgical phases. The proposed method outperforms the compared methods achieving a [Formula: see text] relative improvement in the average segmental F1-score over the state-of-the-art ASFormer method. Moreover, our proposed method achieves state-of-the-art results on the publicly available Cholec80 dataset. CONCLUSION The improvement in the results when using the proposed method suggests that temporal context could be better captured when adding information from TCN to the ASFormer paradigm. This addition leads to better surgical workflow recognition.
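The dilated temporal convolution building block that such action segmentation stacks rely on can be sketched briefly; this is a generic single-stage dilated TCN in PyTorch with illustrative sizes, not the ASTCFormer implementation:

```python
import torch
import torch.nn as nn

class DilatedResidualLayer(nn.Module):
    """One TCN layer: dilated 1-D convolution over time with a residual skip."""
    def __init__(self, channels: int, dilation: int):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.pointwise = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x):            # x: (batch, channels, time)
        out = self.pointwise(torch.relu(self.conv(x)))
        return x + out

class TemporalConvNet(nn.Module):
    """Stack of layers with exponentially growing dilation (so the receptive
    field covers long video spans), followed by per-frame phase logits."""
    def __init__(self, in_dim: int, channels: int, num_classes: int,
                 num_layers: int = 8):
        super().__init__()
        self.inp = nn.Conv1d(in_dim, channels, kernel_size=1)
        self.layers = nn.ModuleList(
            DilatedResidualLayer(channels, 2 ** i) for i in range(num_layers))
        self.out = nn.Conv1d(channels, num_classes, kernel_size=1)

    def forward(self, feats):        # feats: (batch, in_dim, time) clip features
        x = self.inp(feats)
        for layer in self.layers:
            x = layer(x)
        return self.out(x)           # (batch, num_classes, time)
```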
Affiliation(s)
- Bokai Zhang
- Johnson & Johnson MedTech, 1100 Olive Way, Suite 1100, Seattle, 98101, WA, USA.
| | - Bharti Goel
- Johnson & Johnson MedTech, 5490 Great America Pkwy, Santa Clara, CA, 95054, USA
| | - Mohammad Hasan Sarhan
- Johnson & Johnson MedTech, Robert-Koch-Straße 1, 22851, Norderstedt, Schleswig-Holstein, Germany
| | - Varun Kejriwal Goel
- Johnson & Johnson MedTech, 5490 Great America Pkwy, Santa Clara, CA, 95054, USA
| | - Rami Abukhalil
- Johnson & Johnson MedTech, 5490 Great America Pkwy, Santa Clara, CA, 95054, USA
| | - Bindu Kalesan
- Johnson & Johnson MedTech, 5490 Great America Pkwy, Santa Clara, CA, 95054, USA
| | - Natalie Stottler
- Johnson & Johnson MedTech, 1100 Olive Way, Suite 1100, Seattle, 98101, WA, USA
| | - Svetlana Petculescu
- Johnson & Johnson MedTech, 1100 Olive Way, Suite 1100, Seattle, 98101, WA, USA
| |
|
50
|
Chen KA, Kirchoff KE, Butler LR, Holloway AD, Kapadia MR, Gallagher KK, Gomez SM. Computer Vision Analysis of Specimen Mammography to Predict Margin Status. medRxiv 2023:2023.03.06.23286864. [PMID: 36945565 PMCID: PMC10029028 DOI: 10.1101/2023.03.06.23286864] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/12/2023]
Abstract
Intra-operative specimen mammography is a valuable tool in breast cancer surgery, providing immediate assessment of margins for a resected tumor. However, the accuracy of specimen mammography in detecting microscopic margin positivity is low. We sought to develop a deep learning-based model to predict the pathologic margin status of resected breast tumors using specimen mammography. A dataset of specimen mammography images matched with pathology reports describing margin status was collected. Models pre-trained on radiologic images were developed and compared with models pre-trained on non-medical images. Model performance was assessed using sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC). The dataset included 821 images and 53% had positive margins. For three out of four model architectures tested, models pre-trained on radiologic images outperformed domain-agnostic models. The highest performing model, InceptionV3, showed a sensitivity of 84%, a specificity of 42%, and AUROC of 0.71. These results compare favorably with the published literature on surgeon and radiologist interpretation of specimen mammography. With further development, these models could assist clinicians with identifying positive margins intra-operatively and decrease the rate of positive margins and re-operation in breast-conserving surgery.
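The transfer learning setup described (an InceptionV3 backbone with a binary margin-status head, evaluated by AUROC) can be sketched as follows; this uses ImageNet weights, i.e. the domain-agnostic variant in the comparison above, and the loader interface is an assumption:

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_auc_score

# Pretrained InceptionV3 with its classifier replaced by one margin logit.
model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)

@torch.no_grad()
def margin_auroc(model, loader, device="cpu"):
    """AUROC over (image, label) batches; images must be 3x299x299 tensors.
    In eval mode InceptionV3 returns plain logits (no auxiliary head)."""
    model.eval().to(device)
    scores, labels = [], []
    for x, y in loader:
        p = torch.sigmoid(model(x.to(device)).squeeze(1))
        scores.extend(p.cpu().tolist())
        labels.extend(y.tolist())
    return roc_auc_score(labels, scores)
```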
Affiliation(s)
- Kevin A Chen
- Department of Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC
| | - Kathryn E Kirchoff
- Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill, Chapel Hill, NC
| | - Logan R Butler
- Department of Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC
| | - Alexa D Holloway
- Department of Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC
| | - Muneera R Kapadia
- Department of Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC
| | | | - Shawn M Gomez
- Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill, Chapel Hill, NC
| |
|