1.
Shafiei SB, Shadpour S, Mohler JL, Kauffman EC, Holden M, Gutierrez C. Classification of subtask types and skill levels in robot-assisted surgery using EEG, eye-tracking, and machine learning. Surg Endosc 2024; 38:5137-5147. PMID: 39039296; PMCID: PMC11362185; DOI: 10.1007/s00464-024-11049-6.
Abstract
BACKGROUND Objective and standardized evaluation of surgical skills in robot-assisted surgery (RAS) is critically important for both surgical education and patient safety. This study introduces machine learning (ML) techniques using features derived from electroencephalogram (EEG) and eye-tracking data to identify surgical subtasks and classify skill levels. METHOD The efficacy of this approach was assessed using a comprehensive dataset encompassing nine distinct classes, each representing a unique combination of three surgical subtasks executed by surgeons while performing operations on pigs. Four ML models (logistic regression, random forest, gradient boosting, and extreme gradient boosting [XGB]) were used for multi-class classification. To develop the models, 20% of data samples were randomly allocated to a test set, with the remaining 80% used for training and validation. Hyperparameters were optimized through grid search, using fivefold stratified cross-validation repeated five times. Model reliability was ensured by repeating the train-test split over 30 iterations and reporting average measurements. RESULTS The findings revealed that the proposed approach outperformed existing methods for classifying RAS subtasks and skills; the XGB and random forest models yielded high accuracy rates (88.49% and 88.56%, respectively) that were not significantly different (two-sample t-test; P = 0.9). CONCLUSION These results underscore the potential of ML models to augment the objectivity and precision of RAS subtask and skill evaluation. Future research should explore ways to optimize these models, particularly for the classes identified as challenging in this study. Ultimately, this study marks a significant step towards a more refined, objective, and standardized approach to RAS training and competency assessment.
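The evaluation protocol described in this abstract (stratified 80/20 split, grid search with fivefold stratified cross-validation repeated five times, held-out accuracy) can be sketched with scikit-learn. The synthetic features below are a stand-in for the study's EEG and eye-tracking data, and the parameter grid is illustrative, not the one used in the paper:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import (GridSearchCV, RepeatedStratifiedKFold,
                                     train_test_split)

# Synthetic stand-in for the EEG/eye-tracking feature matrix:
# 9 classes, as in the study's subtask/skill combinations.
X, y = make_classification(n_samples=900, n_features=20, n_informative=10,
                           n_classes=9, random_state=0)

# 20% held out for testing, stratified by class (the paper repeats this
# split over 30 iterations and averages; a single iteration is shown here).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

# Grid search with fivefold stratified CV repeated five times.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    param_grid={"n_estimators": [50, 100]},  # illustrative grid
                    cv=cv, scoring="accuracy", n_jobs=-1)
grid.fit(X_tr, y_tr)
test_acc = grid.score(X_te, y_te)
print(f"held-out accuracy: {test_acc:.3f}")
```

The same scaffold accepts any of the four model families named in the abstract by swapping the estimator passed to `GridSearchCV`.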
Affiliation(s)
- Somayeh B Shafiei: The Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Saeed Shadpour: Department of Animal Biosciences, University of Guelph, Guelph, ON, N1G 2W1, Canada
- James L Mohler: Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Eric C Kauffman: Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Matthew Holden: School of Computer Science, Carleton University, 1125 Colonel By Drive, Ottawa, ON, K1S 5B6, Canada
- Camille Gutierrez: Obstetrics and Gynecology Residency Program, Sisters of Charity Health System, Buffalo, NY, 14214, USA
2.
Li Y, Lyu J, Cao X, Zheng M, Zhou Y, Tan J, Liu X. Development and accuracy assessment of a crown lengthening surgery robot for use in the esthetic zone: An in vitro study. J Prosthet Dent 2024:S0022-3913(24)00525-0. PMID: 39155169; DOI: 10.1016/j.prosdent.2024.07.037.
Abstract
STATEMENT OF PROBLEM Crown lengthening surgery has been widely used to enhance the health and esthetics of anterior teeth, and its accuracy significantly influences surgical outcomes. However, the feasibility and accuracy of a robotic system for crown lengthening surgery remain unknown. PURPOSE The purpose of this in vitro study was to develop a crown lengthening surgery robot and evaluate its accuracy. MATERIAL AND METHODS A robotic crown lengthening surgery system consisting of a robotic arm, a robotic software system, and an optical tracking device was designed. Intraoral scanning and cone beam computed tomography (CBCT) were performed on 18 artificial dentition models. The data were imported into the planning software program to synthesize a surgical path for gingivectomy and alveolectomy. Subsequently, a robotic arm equipped with a high-speed handpiece performed these surgical procedures. Postoperatively, the models were rescanned for evaluation, and the accuracy (trueness ± precision) of the gingivectomy and alveolectomy outcomes was assessed from the trajectories in the highest, lowest, and overall regions. Differences between groups were analyzed by using the independent-sample t test and the Levene test (α=.05). RESULTS Crown lengthening surgery was feasible in vitro using the robot developed in this study. The overall accuracy (trueness ± precision) of robot-assisted gingivectomy (0.23 ± 0.08 mm) was significantly higher than that of alveolectomy (0.33 ± 0.11 mm) (P<.05). CONCLUSIONS Robot-assisted crown lengthening surgery had generally acceptable accuracy and can be considered a feasible treatment option.
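The statistical comparison described here (Levene's test for equality of variances, then an independent-sample t test at α=.05) can be sketched with SciPy. The deviation samples below are simulated draws centred near the study's summary statistics, not its data, and reading trueness as the mean deviation and precision as its standard deviation is an assumption about the metric definitions:

```python
import numpy as np
from scipy import stats

# Synthetic deviation samples (mm) for the two procedures; NOT the
# study's measurements, just illustrative draws.
rng = np.random.default_rng(0)
gingivectomy = np.abs(rng.normal(0.23, 0.08, 18))
alveolectomy = np.abs(rng.normal(0.33, 0.11, 18))

# Trueness taken as the mean deviation, precision as its SD (assumed
# definitions, in the spirit of ISO-style accuracy reporting).
trueness_g, precision_g = gingivectomy.mean(), gingivectomy.std(ddof=1)
trueness_a, precision_a = alveolectomy.mean(), alveolectomy.std(ddof=1)

# Levene's test checks the equal-variance assumption; the independent
# t test then compares mean deviations.
lev_stat, lev_p = stats.levene(gingivectomy, alveolectomy)
t_stat, t_p = stats.ttest_ind(gingivectomy, alveolectomy,
                              equal_var=lev_p > 0.05)
print(f"gingivectomy {trueness_g:.2f} +/- {precision_g:.2f} mm; "
      f"alveolectomy {trueness_a:.2f} +/- {precision_a:.2f} mm; P = {t_p:.3f}")
```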
Affiliation(s)
- Yi Li: Graduate student, Department of Prosthodontics, Peking University School and Hospital of Stomatology & National Center for Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing, PR China
- Jizhe Lyu: Graduate student, Department of Prosthodontics, Peking University School and Hospital of Stomatology, Beijing, PR China
- Xunning Cao: Graduate student, Department of Prosthodontics, Peking University School and Hospital of Stomatology, Beijing, PR China
- Miao Zheng: Lecturer, Department of Stomatology, Peking University Third Hospital, Beijing, PR China
- Yin Zhou: Clinical Associate Professor, Department of Anaesthesiology, Peking University School and Hospital of Stomatology, Beijing, PR China
- Jianguo Tan: Professor, Department of Prosthodontics, Peking University School and Hospital of Stomatology, Beijing, PR China
- Xiaoqiang Liu: Clinical Professor, Department of Prosthodontics, Peking University School and Hospital of Stomatology & National Center for Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing, PR China
3.
Zhao H, Li C, Shi X, Zhang J, Jia X, Hu Z, Gao Y, Tian J. Near-infrared II fluorescence-guided glioblastoma surgery targeting monocarboxylate transporter 4 combined with photothermal therapy. EBioMedicine 2024; 106:105243. PMID: 39004066; PMCID: PMC11284385; DOI: 10.1016/j.ebiom.2024.105243.
Abstract
BACKGROUND Surgery is crucial for glioma treatment, but achieving complete tumour removal remains challenging. We evaluated the effectiveness of a probe targeting monocarboxylate transporter 4 (MCT4) in recognising gliomas, and of near-infrared window II (NIR-II) fluorescence molecular imaging and photothermal therapy as treatment strategies. METHODS We combined an MCT4-specific monoclonal antibody with indocyanine green to create the probe. An orthotopic mouse model and a transwell model were used to evaluate its ability to guide tumour resection using NIR-II fluorescence and to penetrate the blood-brain barrier (BBB), respectively. A subcutaneous tumour model was established to confirm photothermal therapy efficacy. Probe specificity was assessed in brain tissue from mice and humans. Finally, probe effectiveness in photothermal therapy was investigated. FINDINGS MCT4 was differentially expressed in tumour and normal brain tissue. The designed probe exhibited precise tumour targeting, with imaging achieving a signal-to-background ratio (SBR) of 2.8. Residual tumour cells were absent from brain tissue postoperatively (SBR: 6.3). The probe exhibited robust penetration of the BBB. Moreover, the probe increased the tumour temperature to 50 °C within 5 min of laser excitation. Photothermal therapy significantly reduced tumour volume and extended survival time in mice without damage to vital organs. INTERPRETATION These findings highlight the potential efficacy of our probe for fluorescence-guided surgery and therapeutic interventions. FUNDING Jilin Province Department of Science and Technology (20200403079SF), Department of Finance (2021SCZ06) and Development and Reform Commission (20200601002JC); National Natural Science Foundation of China (92059207, 92359301, 62027901, 81930053, 81227901, U21A20386); and CAS Youth Interdisciplinary Team (JCTD-2021-08).
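The signal-to-background ratio (SBR) quoted in this abstract is a simple contrast metric. A minimal sketch on a synthetic frame, assuming SBR is defined as the mean fluorescence intensity inside a tumour region of interest divided by the mean intensity of the surrounding tissue:

```python
import numpy as np

# Synthetic NIR-II frame: uniform background with a brighter tumour region.
image = np.full((64, 64), 100.0)   # background fluorescence
image[20:40, 20:40] = 280.0        # tumour fluorescence

tumour_roi = np.zeros_like(image, dtype=bool)
tumour_roi[20:40, 20:40] = True

# SBR = mean intensity inside the tumour ROI / mean background intensity.
sbr = image[tumour_roi].mean() / image[~tumour_roi].mean()
print(f"SBR = {sbr:.1f}")  # SBR = 2.8, matching the study's reported figure
```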
Affiliation(s)
- Hongyang Zhao: Department of Neurosurgery, China-Japan Union Hospital of Jilin University, Jilin University, Changchun, China; CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Jilin Province Neuro-oncology Engineering Laboratory, Changchun, China; Jilin Provincial Key Laboratory of Neuro-oncology, Changchun, China
- Chunzhao Li: CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China; China National Clinical Research Center for Neurological Diseases, Beijing, China
- Xiaojing Shi: CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Jinnan Zhang: Department of Neurosurgery, China-Japan Union Hospital of Jilin University, Jilin University, Changchun, China; Jilin Province Neuro-oncology Engineering Laboratory, Changchun, China; Jilin Provincial Key Laboratory of Neuro-oncology, Changchun, China
- Xiaohua Jia: CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Zhenhua Hu: CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; National Key Laboratory of Kidney Diseases, Beijing, China
- Yufei Gao: Department of Neurosurgery, China-Japan Union Hospital of Jilin University, Jilin University, Changchun, China; Jilin Province Neuro-oncology Engineering Laboratory, Changchun, China; Jilin Provincial Key Laboratory of Neuro-oncology, Changchun, China
- Jie Tian: CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; National Key Laboratory of Kidney Diseases, Beijing, China; Beijing Advanced Innovation Center for Big Data-based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing, China
4.
Maita KC, Avila FR, Torres-Guzman RA, Garcia JP, De Sario Velasquez GD, Borna S, Brown SA, Haider CR, Ho OS, Forte AJ. The usefulness of artificial intelligence in breast reconstruction: a systematic review. Breast Cancer 2024; 31:562-571. PMID: 38619786; DOI: 10.1007/s12282-024-01582-6.
Abstract
BACKGROUND Artificial intelligence (AI) offers an approach to predictive modeling: a model learns to recognize patterns associated with undesirable outcomes in a dataset, and a decision-making algorithm can then be built on these patterns to prevent negative results. This systematic review aimed to evaluate the usefulness of AI in breast reconstruction. METHODS A systematic review was conducted in August 2022 following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. The MEDLINE, EMBASE, SCOPUS, and Google Scholar online databases were queried to capture all publications studying the use of artificial intelligence in breast reconstruction. RESULTS A total of 23 studies were full-text screened after removing duplicates, and 12 articles fulfilled the inclusion criteria. Machine learning algorithms applied to neuropathic pain, lymphedema diagnosis, microvascular abdominal flap failure, donor-site complications associated with the muscle-sparing transverse rectus abdominis flap, surgical complications, financial toxicity, and patient-reported outcomes after breast surgery demonstrated that AI is a helpful tool for accurately predicting patient results. In addition, one study used computer vision technology to assist in deep inferior epigastric perforator artery detection for flap design, considerably reducing preoperative time compared with manual identification. CONCLUSIONS In breast reconstruction, AI can help the surgeon by optimizing perioperative patient counseling, predicting negative outcomes, allowing timely interventions, and reducing the postoperative burden, leading to more successful results and improved patient satisfaction.
Affiliation(s)
- Karla C Maita: Division of Plastic Surgery, Mayo Clinic, 4500 San Pablo Rd, Jacksonville, FL, 32224, USA
- Francisco R Avila: Division of Plastic Surgery, Mayo Clinic, 4500 San Pablo Rd, Jacksonville, FL, 32224, USA
- John P Garcia: Division of Plastic Surgery, Mayo Clinic, 4500 San Pablo Rd, Jacksonville, FL, 32224, USA
- Sahar Borna: Division of Plastic Surgery, Mayo Clinic, 4500 San Pablo Rd, Jacksonville, FL, 32224, USA
- Sally A Brown: Department of Administration, Mayo Clinic, Jacksonville, FL, USA
- Clifton R Haider: Department of Physiology and Biomedical Engineering, Mayo Clinic, Rochester, MN, USA
- Olivia S Ho: Division of Plastic Surgery, Mayo Clinic, 4500 San Pablo Rd, Jacksonville, FL, 32224, USA
- Antonio Jorge Forte: Division of Plastic Surgery, Mayo Clinic, 4500 San Pablo Rd, Jacksonville, FL, 32224, USA
5.
Hamilton A. The Future of Artificial Intelligence in Surgery. Cureus 2024; 16:e63699. PMID: 39092371; PMCID: PMC11293880; DOI: 10.7759/cureus.63699.
Abstract
Until recently, innovations in surgery were largely extensions or augmentations of the surgeon's perception, such as the operating microscope, tumor fluorescence, intraoperative ultrasound, and minimally invasive surgical instrumentation. The introduction of artificial intelligence (AI) into the surgical disciplines, however, represents a transformational event. AI contributes substantively to enhancing a surgeon's perception through methodologies such as three-dimensional anatomic overlays with augmented reality, AI-improved visualization for tumor resection, and AI-formatted endoscopic and robotic surgery guidance. What truly sets AI apart, though, is that it also provides ways to augment the surgeon's cognition. By analyzing enormous databases, AI can offer new insights that transform the operative environment in several ways. It can enable preoperative risk assessment and better selection of candidates for procedures such as organ transplantation. AI can also increase the efficiency and throughput of operating rooms and staff and coordinate the utilization of critical resources such as intensive care unit beds and ventilators. Furthermore, AI is revolutionizing intraoperative guidance: improving the detection of cancers, permitting endovascular navigation, and reducing collateral damage to adjacent tissues during surgery (e.g., identification of parathyroid glands during thyroidectomy). AI is also transforming how surgical proficiency and trainees are evaluated in postgraduate programs, offering multiple serial evaluations using various scoring systems while remaining free from the biases that can plague human supervisors. The future of AI-driven surgery holds promising trends, including the globalization of surgical education, the miniaturization of instrumentation, and the increasing success of autonomous surgical robots. These advancements raise the prospect of deploying fully autonomous surgical robots in the near future into challenging environments such as the battlefield, disaster areas, and even extraplanetary exploration. In light of these transformative developments, it is clear that the future of surgery will belong to those who can most readily embrace and harness the power of AI.
Affiliation(s)
- Allan Hamilton: Artificial Intelligence Division for Simulation, Education, and Training, University of Arizona Health Sciences, Tucson, USA
6.
Crouzet A, Lopez N, Riss Yaw B, Lepelletier Y, Demange L. The Millennia-Long Development of Drugs Associated with the 80-Year-Old Artificial Intelligence Story: The Therapeutic Big Bang? Molecules 2024; 29:2716. PMID: 38930784; PMCID: PMC11206022; DOI: 10.3390/molecules29122716.
Abstract
The journey of drug discovery (DD) has evolved from ancient practices to modern technology-driven approaches, with Artificial Intelligence (AI) emerging as a pivotal force in streamlining and accelerating the process. Despite the vital importance of DD, it faces challenges such as high costs and lengthy timelines. This review examines the historical progression and current market of DD alongside the development and integration of AI technologies. We analyse the challenges encountered in applying AI to DD, focusing on drug design and protein-protein interactions. The discussion is enriched by models that illustrate the application of AI in DD. Three case studies are highlighted to demonstrate the successful application of AI in DD, including the discovery of a novel class of antibiotics and a small-molecule inhibitor that has progressed to phase II clinical trials. These cases underscore the potential of AI to identify new drug candidates and optimise the development process. The convergence of DD and AI embodies a transformative shift in the field, offering a path to overcome traditional obstacles. By leveraging AI, the future of DD promises enhanced efficiency and novel breakthroughs, heralding a new era of medical innovation, even though there is still a long way to go.
Affiliation(s)
- Aurore Crouzet: UMR 8038 CNRS CiTCoM, Team PNAS, Faculté de Pharmacie, Université Paris Cité, 4 Avenue de l’Observatoire, 75006 Paris, France; W-MedPhys, 128 Rue la Boétie, 75008 Paris, France
- Nicolas Lopez: W-MedPhys, 128 Rue la Boétie, 75008 Paris, France; ENOES, 62 Rue de Miromesnil, 75008 Paris, France; Unité Mixte de Recherche «Institut de Physique Théorique (IPhT)» CEA-CNRS, UMR 3681, Bat 774, Route de l’Orme des Merisiers, 91191 St Aubin-Gif-sur-Yvette, France
- Benjamin Riss Yaw: UMR 8038 CNRS CiTCoM, Team PNAS, Faculté de Pharmacie, Université Paris Cité, 4 Avenue de l’Observatoire, 75006 Paris, France
- Yves Lepelletier: W-MedPhys, 128 Rue la Boétie, 75008 Paris, France; Université Paris Cité, Imagine Institute, 24 Boulevard Montparnasse, 75015 Paris, France; INSERM UMR 1163, Laboratory of Cellular and Molecular Basis of Normal Hematopoiesis and Hematological Disorders: Therapeutical Implications, 24 Boulevard Montparnasse, 75015 Paris, France
- Luc Demange: UMR 8038 CNRS CiTCoM, Team PNAS, Faculté de Pharmacie, Université Paris Cité, 4 Avenue de l’Observatoire, 75006 Paris, France
7.
Xie X, Xiao YF, Yang H, Peng X, Li JJ, Zhou YY, Fan CQ, Meng RP, Huang BB, Liao XP, Chen YY, Zhong TT, Lin H, Koulaouzidis A, Yang SM. A new artificial intelligence system for both stomach and small-bowel capsule endoscopy. Gastrointest Endosc 2024:S0016-5107(24)03259-0. PMID: 38851456; DOI: 10.1016/j.gie.2024.06.004.
Abstract
BACKGROUND AND AIMS Despite the benefits of artificial intelligence in small-bowel (SB) capsule endoscopy (CE) image reading, information on its application to CE of both the stomach and SB is lacking. METHODS In this multicenter, retrospective diagnostic study, gastric imaging data were added to the deep learning-based SmartScan (SS), which has been described previously. A total of 1069 magnetically controlled GI CE examinations (comprising 2,672,542 gastric images) were used in the training phase for recognizing gastric pathologies, producing a new artificial intelligence algorithm named SS Plus. A total of 342 fully automated, magnetically controlled CE examinations were included in the validation phase. The performance of both senior and junior endoscopists was assessed with both the SS Plus-assisted reading (SSP-AR) and conventional reading (CR) modes. RESULTS SS Plus was designed to recognize 5 types of gastric lesions and 17 types of SB lesions. SS Plus reduced the mean number of CE images required for review to 873.90 (median, 1000; interquartile range [IQR], 814.50-1000), versus 44,322.73 (median, 42,393; IQR, 31,722.75-54,971.25) for CR. Furthermore, with SSP-AR, endoscopists took a mean of 9.54 minutes (median, 8.51; IQR, 6.05-13.13) to complete the CE video reading. In the 342 CE videos, SS Plus identified 411 gastric and 422 SB lesions, whereas 400 gastric and 368 SB lesions were detected with CR. Moreover, junior endoscopists remarkably improved their CE image reading ability with SSP-AR. CONCLUSIONS Our study shows that the newly upgraded deep learning-based algorithm SS Plus can detect GI lesions and help improve the diagnostic performance of junior endoscopists in interpreting CE videos.
Affiliation(s)
- Xia Xie: Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Yu-Feng Xiao: Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Huan Yang: Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Xue Peng: Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Jian-Jun Li: Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Yuan-Yuan Zhou: Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Chao-Qiang Fan: Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Rui-Ping Meng: Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Bao-Bao Huang: Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Xi-Ping Liao: Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Yu-Yang Chen: Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Ting-Ting Zhong: Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
- Hui Lin: Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China; Department of Epidemiology, The Third Military Medical University, Chongqing, China
- Anastasios Koulaouzidis: Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Centre for Clinical Implementation of Capsule Endoscopy, Store Adenomer Tidlige Cancere Centre, Svendborg, Denmark
- Shi-Ming Yang: Department of Gastroenterology, The Second Affiliated Hospital, The Third Military Medical University, Chongqing, China
8.
Li J, Tang T, Wu E, Zhao J, Zong H, Wu R, Feng W, Zhang K, Wang D, Qin Y, Shen Z, Qin Y, Ren S, Zhan C, Yang L, Wei Q, Shen B. RARPKB: a knowledge-guide decision support platform for personalized robot-assisted surgery in prostate cancer. Int J Surg 2024; 110:3412-3424. PMID: 38498357; PMCID: PMC11175739; DOI: 10.1097/js9.0000000000001290.
Abstract
BACKGROUND Robot-assisted radical prostatectomy (RARP) has emerged as a pivotal surgical intervention for the treatment of prostate cancer (PCa). However, the complexity of clinical cases, the heterogeneity of PCa, and limitations in physician expertise pose challenges to rational decision-making in RARP. To address these challenges, the authors aimed to organize the knowledge of previously published complex cohorts and establish an online platform, the RARP knowledge base (RARPKB), to provide reference evidence for personalized treatment plans. MATERIALS AND METHODS PubMed searches over the past two decades were conducted to identify publications describing RARP. The authors collected, classified, and structured surgical details, patient information, surgical data, and various statistical results from the literature. A knowledge-guided decision-support tool was built using MySQL, DataTable, ECharts, and JavaScript. ChatGPT-4 and two assessment scales were used to validate and compare the platform. RESULTS The platform comprised 583 studies, 1589 cohorts, 1 911 968 patients, and 11 986 records, resulting in 54 834 data entries. The knowledge-guided decision-support tool provides personalized surgical plan recommendations and potential complications on the basis of patients' baseline and surgical information. Compared with ChatGPT-4, RARPKB performed better in authenticity (100% vs. 73%), matching (100% vs. 53%), personalized recommendations (100% vs. 20%), matching of patients (100% vs. 0%), and personalized recommendations for complications (100% vs. 20%). After use, the average System Usability Scale score was 88.88 ± 15.03, and the Net Promoter Score of RARPKB was 85. The knowledge base is available at: http://rarpkb.bioinf.org.cn. CONCLUSIONS The authors introduced RARPKB, the first knowledge base for robot-assisted surgery, with an emphasis on PCa. RARPKB can assist in personalized and complex surgical planning for PCa to improve its efficacy, and it provides a reference for future applications of artificial intelligence in clinical practice.
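The usability figure reported for RARPKB (System Usability Scale 88.88 ± 15.03) comes from the standard 10-item SUS instrument: odd-numbered items contribute (rating - 1), even-numbered items (5 - rating), and the sum is scaled by 2.5 onto a 0-100 scale. A minimal scoring sketch with illustrative ratings, not the study's actual responses:

```python
def sus_score(ratings):
    """Standard System Usability Scale: ten 1-5 Likert ratings, item 1 first.

    Odd-numbered items contribute (rating - 1), even-numbered items
    (5 - rating); the sum is scaled by 2.5 onto a 0-100 scale.
    """
    if len(ratings) != 10:
        raise ValueError("SUS needs exactly 10 item ratings")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(ratings))
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best possible: 100.0
print(sus_score([4, 2, 5, 1, 4, 2, 4, 1, 5, 2]))  # illustrative response: 85.0
```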
Affiliation(s)
- Jiakun Li: Department of Urology, West China Hospital, Sichuan University; Institutes for Systems Genetics, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University
- Tong Tang: Institutes for Systems Genetics, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University; Department of Computer Science and Information Technologies, Elviña Campus, University of A Coruña, A Coruña, Spain
- Erman Wu: Institutes for Systems Genetics, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University
- Jing Zhao: Institutes for Systems Genetics, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University
- Hui Zong: Institutes for Systems Genetics, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University
- Rongrong Wu: Institutes for Systems Genetics, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University
- Weizhe Feng: Institutes for Systems Genetics, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University
- Ke Zhang: Institutes for Systems Genetics, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University; Chengdu Aixam Medical Technology Co. Ltd, Chengdu
- Dongyue Wang: Department of Ophthalmology, West China Hospital, Sichuan University
- Yawen Qin: Clinical Medical College, Southwest Medical University, Luzhou, Sichuan Province
- Yi Qin: Institutes for Systems Genetics, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University
- Shumin Ren: Institutes for Systems Genetics, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University; Department of Computer Science and Information Technologies, Elviña Campus, University of A Coruña, A Coruña, Spain
- Chaoying Zhan: Institutes for Systems Genetics, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University
- Lu Yang: Department of Urology, West China Hospital, Sichuan University
- Qiang Wei: Department of Urology, West China Hospital, Sichuan University
- Bairong Shen: Institutes for Systems Genetics, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University
9.
Handa A, Gaidhane A, Choudhari SG. Role of Robotic-Assisted Surgery in Public Health: Its Advantages and Challenges. Cureus 2024; 16:e62958. PMID: 39050344; PMCID: PMC11265954; DOI: 10.7759/cureus.62958.
Abstract
The modern hospital setting is closely tied to engineering and technology: modern equipment is abundant in every department, including the operating room, intensive care unit, and laboratories, so the quality of treatment provided in hospitals is closely linked to technological advancement. Robotic systems are used to support and improve the accuracy and agility of human surgeons during medical procedures, an approach commonly referred to as robotic surgery or robotic-assisted surgery (RAS). These systems are not entirely autonomous; they are managed by skilled surgeons who carry out procedures with improved accuracy and minimized invasiveness using a console and specialized instruments. Because RAS offers increased surgical precision, less discomfort after surgery, shorter hospital stays, and faster recovery times, all of which improve patient outcomes and lessen the strain on healthcare resources, it plays a critical role in public health. Its minimally invasive technique benefits patients and the healthcare system by lowering complication rates, reducing the requirement for blood transfusions, and reducing the risk of healthcare-associated infections. Furthermore, the possibility of remote surgery via robotic systems can increase access to specialized care, reducing regional disparities and advancing fairness in public health. This review article covers the role of RAS in public health.
Affiliation(s)
- Alisha Handa
- Community Medicine, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Abhay Gaidhane
- School of Epidemiology and Public Health, Jawaharlal Nehru Medical College, Datta Meghe Institute of Medical Sciences, Wardha, IND
- Sonali G Choudhari
- School of Epidemiology and Public Health, Community Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Medical Sciences, Wardha, IND
10
Zhu Y, Du L, Fu PY, Geng ZH, Zhang DF, Chen WF, Li QL, Zhou PH. An Automated Video Analysis System for Retrospective Assessment and Real-Time Monitoring of Endoscopic Procedures (with Video). Bioengineering (Basel) 2024; 11:445. [PMID: 38790312 PMCID: PMC11118061 DOI: 10.3390/bioengineering11050445] [Received: 03/05/2024] [Revised: 04/21/2024] [Accepted: 04/22/2024] [Indexed: 05/26/2024]
Abstract
BACKGROUND AND AIMS Accurate recognition of endoscopic instruments facilitates quantitative evaluation and quality control of endoscopic procedures. However, no relevant research has been reported. In this study, we aimed to develop a computer-assisted system, EndoAdd, for automated endoscopic surgical video analysis based on our dataset of endoscopic instrument images. METHODS Large training and validation datasets containing 45,143 images of 10 different endoscopic instruments and a test dataset of 18,375 images collected from several medical centers were used in this research. Annotated image frames were used to train the state-of-the-art object detection model, YOLO-v5, to identify the instruments. Based on the frame-level prediction results, we further developed a hidden Markov model to perform video analysis and generate heatmaps to summarize the videos. RESULTS EndoAdd achieved high accuracy (>97%) on the test dataset for all 10 endoscopic instrument types. The mean average accuracy, precision, recall, and F1-score were 99.1%, 92.0%, 88.8%, and 89.3%, respectively. The area under the curve values exceeded 0.94 for all instrument types. Heatmaps of endoscopic procedures were generated for both retrospective and real-time analyses. CONCLUSIONS We successfully developed an automated endoscopic video analysis system, EndoAdd, which supports retrospective assessment and real-time monitoring. It can be used for data analysis and quality control of endoscopic procedures in clinical practice.
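The pipeline this abstract describes, frame-level instrument predictions from an object detector followed by a hidden Markov model for video-level analysis, can be sketched roughly as follows. This is a generic illustration of HMM smoothing over per-frame classifier outputs, not the authors' EndoAdd implementation: the class set, transition matrix, and the use of detector probabilities as emission likelihoods are illustrative assumptions.

```python
import numpy as np

def viterbi_smooth(frame_probs, transition, prior):
    """Smooth noisy per-frame classifier outputs with an HMM (Viterbi decoding).

    frame_probs: (T, K) per-frame class probabilities from the detector,
                 treated here as emission likelihoods.
    transition:  (K, K) state-transition matrix (rows sum to 1).
    prior:       (K,) initial state distribution.
    Returns the most likely length-T sequence of instrument states.
    """
    T, K = frame_probs.shape
    log_e = np.log(frame_probs + 1e-12)   # emission log-likelihoods
    log_t = np.log(transition + 1e-12)    # transition log-probabilities
    score = np.log(prior + 1e-12) + log_e[0]
    back = np.zeros((T, K), dtype=int)    # backpointers
    for t in range(1, T):
        cand = score[:, None] + log_t     # cand[prev, cur]: score of prev -> cur
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_e[t]
    states = np.empty(T, dtype=int)
    states[-1] = score.argmax()
    for t in range(T - 1, 0, -1):         # backtrack the best path
        states[t - 1] = back[t, states[t]]
    return states
```

With a "sticky" transition matrix, a single-frame misdetection (e.g. one noisy frame favoring the wrong instrument) is overridden by its temporal context, which is the point of adding the HMM on top of frame-level predictions.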
Affiliation(s)
- Yan Zhu
- Endoscopy Center and Endoscopy Research Institute, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Shanghai Collaborative Innovation Center of Endoscopy, Shanghai 200032, China
- Ling Du
- Endoscopy Center and Endoscopy Research Institute, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Shanghai Collaborative Innovation Center of Endoscopy, Shanghai 200032, China
- Pei-Yao Fu
- Endoscopy Center and Endoscopy Research Institute, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Shanghai Collaborative Innovation Center of Endoscopy, Shanghai 200032, China
- Zi-Han Geng
- Endoscopy Center and Endoscopy Research Institute, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Shanghai Collaborative Innovation Center of Endoscopy, Shanghai 200032, China
- Dan-Feng Zhang
- Endoscopy Center and Endoscopy Research Institute, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Shanghai Collaborative Innovation Center of Endoscopy, Shanghai 200032, China
- Wei-Feng Chen
- Endoscopy Center and Endoscopy Research Institute, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Shanghai Collaborative Innovation Center of Endoscopy, Shanghai 200032, China
- Quan-Lin Li
- Endoscopy Center and Endoscopy Research Institute, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Shanghai Collaborative Innovation Center of Endoscopy, Shanghai 200032, China
- Ping-Hong Zhou
- Endoscopy Center and Endoscopy Research Institute, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Shanghai Collaborative Innovation Center of Endoscopy, Shanghai 200032, China
11
Wiklund P, Rebuffo S, Frego N, Mottrie A. What More Can We Ask of Robotics? Eur Urol 2024; 85:315-316. [PMID: 37919191 DOI: 10.1016/j.eururo.2023.10.013] [Received: 10/10/2023] [Accepted: 10/17/2023] [Indexed: 11/04/2023]
Abstract
The future of robotics relies heavily on the ongoing synergy between robotic surgery and artificial intelligence. To unlock their full potential, we should address issues such as accessibility, education, data privacy, and ethics.
Affiliation(s)
- Peter Wiklund
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Silvia Rebuffo
- Department of Urology, Onze-Lieve-Vrouwziekenhuis, Aalst, Belgium; ORSI Academy, Ghent, Belgium; Department of Urology, Policlinico San Martino Hospital, University of Genoa, Genoa, Italy
- Nicola Frego
- Department of Urology, Onze-Lieve-Vrouwziekenhuis, Aalst, Belgium; ORSI Academy, Ghent, Belgium; Department of Urology, IRCCS Humanitas Research Hospital, Rozzano, Italy
- Alexandre Mottrie
- Department of Urology, Onze-Lieve-Vrouwziekenhuis, Aalst, Belgium; ORSI Academy, Ghent, Belgium
12
Muthupandian S, Arockiaraj J, Belete MA. A commentary on 'The use of multilayer perceptron and radial basis function: an artificial intelligence model to predict progression of oral cancer': correspondence. Int J Surg 2024; 110:2438-2439. [PMID: 38668666 PMCID: PMC11020071 DOI: 10.1097/js9.0000000000001058] [Received: 12/11/2023] [Accepted: 12/20/2023] [Indexed: 04/29/2024]
Affiliation(s)
- Saravanan Muthupandian
- AMR and Nanomedicine Laboratory, Department of Pharmacology, Saveetha Dental College & Hospitals, Saveetha Institute of Medical and Technical Sciences (SIMATS), Chennai
- Jesu Arockiaraj
- Toxicology and Pharmacology Laboratory, Department of Biotechnology, Faculty of Science and Humanities, SRM Institute of Science and Technology, Chengalpattu District, Kattankulathur, Tamil Nadu, India
- Melaku A. Belete
- Department of Medical Laboratory Science, College of Medicine and Health Sciences, Wollo University, Dessie, Ethiopia
13
Hu Y, Li X, Chen X, Wang S, Cao L, Zhang H, Zhang Y, Wang Z, Yu B, Tong P, Zhou Q, Niu F, Yang W, Zhang W, Chen S, Yang Q, Shen T, Zhang P, Zhang Y, Miao J, Lin H, Wang J, Wang L, Ma X, Liu H, Stambler I, Bai L, Liu H, Jing Y, Liu G, Wang X, Wang D, Shi Z, Zhao RC, Su J. Expert consensus on Prospective Precision Diagnosis and Treatment Strategies for Osteoporotic Fractures. Aging Dis 2024:AD.2023.1223. [PMID: 38502589 DOI: 10.14336/ad.2023.1223] [Received: 10/25/2023] [Accepted: 12/23/2023] [Indexed: 03/21/2024]
Abstract
Osteoporotic fractures are the most severe complications of osteoporosis, characterized by poor bone quality, difficult realignment and fixation, slow fracture healing, and a high risk of recurrence. Clinically managing these fractures is relatively challenging, and in the context of rapid aging, they pose significant social hazards. The rapid advancement of disciplines such as biophysics and biochemistry brings new opportunities for future medical diagnosis and treatment. However, there has been limited attention to precision diagnosis and treatment strategies for osteoporotic fractures both domestically and internationally. In response to this, the Chinese Medical Association Orthopaedic Branch Youth Osteoporosis Group, Chinese Geriatrics Society Geriatric Orthopaedics Committee, Chinese Medical Doctor Association Orthopaedic Physicians Branch Youth Committee Osteoporosis Group, and Shanghai Association of Integrated Traditional Chinese and Western Medicine Osteoporosis Professional Committee have collaborated to develop this consensus. It aims to elucidate emerging technologies that may play a pivotal role in both diagnosis and treatment, advocating for clinicians to embrace interdisciplinary approaches and incorporate these new technologies into their practice. Ultimately, the goal is to improve the prognosis and quality of life for elderly patients with osteoporotic fractures.
Affiliation(s)
- Yan Hu
- Xinhua Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Xiaoqun Li
- First Affiliated Hospital of Naval Medical University, Shanghai, China
- Xiao Chen
- Xinhua Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Liehu Cao
- Luodian Hospital, Baoshan District, Shanghai, China
- Hao Zhang
- Xinhua Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Yunfei Zhang
- Tangdu Hospital Air Force Medical University, Xi'an, China
- Zhiwei Wang
- Eastern Hepatobiliary Surgery Hospital, Shanghai, China
- Baoqing Yu
- Shanghai Pudong New Area People's Hospital, Shanghai, China
- Peijian Tong
- Zhejiang Provincial Hospital of Chinese Medicine, Hangzhou, China
- Qiang Zhou
- Third Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Feng Niu
- First Bethune Hospital of Jilin University, Changchun, China
- Weiguo Yang
- HKU Li Ka Shing Faculty of Medicine, Hongkong, China
- Wencai Zhang
- First Affiliated Hospital of Jinan University, Guangzhou, China
- Shijie Chen
- Third Xiangya Hospital of Central South University, Changsha, China
- Tao Shen
- Shengjing Hospital of Chinese Medical University, Shenyang, China
- Peng Zhang
- Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- Yong Zhang
- Tangdu Hospital Air Force Medical University, Xi'an, China
- Jun Miao
- Tianjin Hospital, Tianjin, China
- Jinwu Wang
- Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Lei Wang
- Ruijin Hospital Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xin Ma
- Sixth People's Hospital Affiliated to Shanghai Jiao Tong University, Shanghai, China
- Hongjian Liu
- First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Ilia Stambler
- Department of Science, Technology and Society, Bar Ilan University, Ramat Gan, Israel
- International Society on Aging and Disease, Bryan, TX, USA
- Long Bai
- Institute of Translational Medicine, Shanghai University, Shanghai, China
- Han Liu
- Institute of Translational Medicine, Shanghai University, Shanghai, China
- Yingying Jing
- Institute of Translational Medicine, Shanghai University, Shanghai, China
- Guohui Liu
- Union Hospital of Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Xinglong Wang
- Department of Pharmacology & Toxicology, University of Arizona, Tucson, USA
- Dongliang Wang
- Xinhua Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Zhongmin Shi
- Sixth People's Hospital Affiliated to Shanghai Jiao Tong University, Shanghai, China
- Robert Chunhua Zhao
- International Society on Aging and Disease, Bryan, TX, USA
- Institute of Basic Medical Sciences, Chinese Academy of Medical Sciences, School of Basic Medicine, Peking Union Medical College, Beijing, China
- Jiacan Su
- Xinhua Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Institute of Translational Medicine, Shanghai University, Shanghai, China
14
Selvaraj V, Sudhakar S, Sekaran S, Rajamani Sekar SK, Warrier S. Enhancing precision and efficiency: harnessing robotics and artificial intelligence for endoscopic and surgical advancements. Int J Surg 2024; 110:1315-1316. [PMID: 38016128 PMCID: PMC10871611 DOI: 10.1097/js9.0000000000000936] [Received: 11/04/2023] [Accepted: 11/09/2023] [Indexed: 11/30/2023]
Affiliation(s)
- Vimalraj Selvaraj
- Department of Applied Mechanics and Biomedical Engineering, Indian Institute of Technology – Madras
- Swathi Sudhakar
- Department of Applied Mechanics and Biomedical Engineering, Indian Institute of Technology – Madras
- Saravanan Sekaran
- Department of Prosthodontics, Saveetha Dental College and Hospital, Saveetha Institute of Medical and Technical Sciences (SIMATS), Saveetha University
- Sudha Warrier
- Department of Biotechnology, Faculty of Biomedical Sciences and Technology, Sri Ramachandra Institute of Higher Education and Research, Chennai Tamil Nadu, India
15
Kolasa K, Admassu B, Hołownia-Voloskova M, Kędzior KJ, Poirrier JE, Perni S. Systematic reviews of machine learning in healthcare: a literature review. Expert Rev Pharmacoecon Outcomes Res 2024; 24:63-115. [PMID: 37955147 DOI: 10.1080/14737167.2023.2279107] [Received: 07/17/2023] [Accepted: 10/31/2023] [Indexed: 11/14/2023]
Abstract
INTRODUCTION The increasing availability of data and computing power has made machine learning (ML) a viable approach to faster, more efficient healthcare delivery. METHODS A systematic literature review (SLR) of published SLRs evaluating ML applications in healthcare settings published between 1 January 2010 and 27 March 2023 was conducted. RESULTS In total, 220 SLRs covering 10,462 ML algorithms were reviewed. The main application of AI in medicine related to clinical prediction and disease prognosis in oncology and neurology with the use of imaging data. Accuracy, specificity, and sensitivity were provided in 56%, 28%, and 25% of SLRs, respectively. Internal and external validation was reported in 53% and less than 1% of the cases, respectively. The most common modeling approach was neural networks (2,454 ML algorithms), followed by support vector machines and random forests/decision trees (1,578 and 1,522 ML algorithms, respectively). EXPERT OPINION The review indicated considerable reporting gaps in terms of ML performance and both internal and external validation. Greater accessibility to healthcare data for developers can ensure faster adoption of ML algorithms into clinical practice.
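Since this review tallies how often accuracy, specificity, and sensitivity are reported, it is worth recalling how those metrics derive from binary confusion-matrix counts. A minimal sketch of the standard formulas (not tied to any study reviewed):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard classification metrics from binary confusion-matrix counts.

    tp/fp/tn/fn: true-positive, false-positive, true-negative,
    and false-negative counts from a held-out test set.
    """
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # fraction of correct calls
    sensitivity = tp / (tp + fn)                 # recall / true-positive rate
    specificity = tn / (tn + fp)                 # true-negative rate
    return accuracy, sensitivity, specificity
```

For example, 80 true positives, 10 false positives, 90 true negatives, and 20 false negatives give an accuracy of 0.85, a sensitivity of 0.80, and a specificity of 0.90; reporting all three (rather than accuracy alone) is exactly the gap the review highlights.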
Affiliation(s)
- Katarzyna Kolasa
- Division of Health Economics and Healthcare Management, Kozminski University, Warsaw, Poland
- Bisrat Admassu
- Division of Health Economics and Healthcare Management, Kozminski University, Warsaw, Poland
16
Wise PA, Studier-Fischer A, Nickel F, Hackert T. [Status Quo of Surgical Navigation]. Zentralbl Chir 2023. [PMID: 38056501 DOI: 10.1055/a-2211-4898] [Indexed: 12/08/2023]
Abstract
Surgical navigation, also referred to as computer-assisted or image-guided surgery, is a technique that employs a variety of methods, such as 3D imaging, tracking systems, specialised software, and robotics, to support surgeons during surgical interventions. These emerging technologies aim not only to enhance the accuracy and precision of surgical procedures, but also to enable less invasive approaches, with the objective of reducing complications and improving operative outcomes for patients. By harnessing the integration of emerging digital technologies, surgical navigation holds the promise of assisting complex procedures across various medical disciplines. In recent years, the field of surgical navigation has witnessed significant advances. Abdominal surgical navigation, particularly endoscopic, laparoscopic, and robot-assisted surgery, is currently undergoing a phase of rapid evolution. Emphases include image-guided navigation, instrument tracking, and the potential integration of augmented and mixed reality (AR, MR). This article comprehensively examines the latest developments in surgical navigation, spanning state-of-the-art intraoperative technologies such as hyperspectral and fluorescence imaging, as well as the integration of preoperative radiological imaging within the intraoperative setting.
Affiliation(s)
- Philipp Anthony Wise
- Klinik für Allgemein-, Viszeral- und Transplantationschirurgie, Universitätsklinikum Heidelberg, Heidelberg, Deutschland
- Alexander Studier-Fischer
- Klinik für Allgemein-, Viszeral- und Transplantationschirurgie, Universitätsklinikum Heidelberg, Heidelberg, Deutschland
- Felix Nickel
- Klinik für Allgemein-, Viszeral- und Thoraxchirurgie, Universitätsklinikum Hamburg-Eppendorf, Hamburg, Deutschland
- Klinik für Allgemein-, Viszeral- und Transplantationschirurgie, Universitätsklinikum Heidelberg, Heidelberg, Deutschland
- Thilo Hackert
- Klinik für Allgemein-, Viszeral- und Thoraxchirurgie, Universitätsklinikum Hamburg-Eppendorf, Hamburg, Deutschland
17
Checcucci E, Piana A, Volpi G, Piazzolla P, Amparore D, De Cillis S, Piramide F, Gatti C, Stura I, Bollito E, Massa F, Di Dio M, Fiori C, Porpiglia F. Three-dimensional automatic artificial intelligence driven augmented-reality selective biopsy during nerve-sparing robot-assisted radical prostatectomy: A feasibility and accuracy study. Asian J Urol 2023; 10:407-415. [PMID: 38024433 PMCID: PMC10659972 DOI: 10.1016/j.ajur.2023.08.001] [Received: 01/24/2023] [Revised: 05/21/2023] [Accepted: 07/06/2023] [Indexed: 12/01/2023]
Abstract
Objective To evaluate the accuracy of our new three-dimensional (3D) automatic augmented reality (AAR) system guided by artificial intelligence in identifying the tumour's location at the level of the preserved neurovascular bundle (NVB) at the end of the extirpative phase of nerve-sparing robot-assisted radical prostatectomy. Methods In this prospective study, we enrolled patients with prostate cancer (clinical stages cT1c-3, cN0, and cM0) with a positive index lesion at target biopsy, suspicious for capsular contact or extracapsular extension at preoperative multiparametric magnetic resonance imaging. Patients underwent robot-assisted radical prostatectomy at San Luigi Gonzaga Hospital (Orbassano, Turin, Italy) from December 2020 to December 2021. At the end of the extirpative phase, our new artificial-intelligence-driven AAR system and the virtual prostate 3D model allowed us to identify the tumour's location at the level of the preserved NVB and to perform a selective excisional biopsy, sparing the remaining portion of the bundle. Perioperative and postoperative data were evaluated, with particular focus on positive surgical margin (PSM) rates, potency, continence recovery, and biochemical recurrence. Results Thirty-four patients were enrolled. In 15 (44.1%) cases, the target lesion was in contact with the prostatic capsule at multiparametric magnetic resonance imaging (Wheeler grade L2), while in 19 (55.9%) cases extracapsular extension was detected (Wheeler grade L3). 3D AAR-guided biopsies were negative in all pathological tumour stage 2 (pT2) patients, while they revealed the presence of cancer in 14 cases in the pT3 cohort (14/16; 87.5%). PSM rates were 0% and 7.1% in the pathological stages pT2 and pT3 (<3 mm, Gleason score 3), respectively. Conclusion With the proposed 3D AAR system, it is possible to correctly identify the lesion's location on the NVB in 87.5% of pT3 patients and perform 3D-guided tailored nerve-sparing surgery even in locally advanced disease, without compromising oncological safety in terms of PSM rates.
Affiliation(s)
- Enrico Checcucci
- Department of Surgery, Candiolo Cancer Institute, FPO-IRCCS, Candiolo, Turin, Italy
- Alberto Piana
- Department of Oncology, Division of Urology, University of Turin, San Luigi Gonzaga Hospital, Orbassano, To, Italy
- Gabriele Volpi
- Department of Surgery, Candiolo Cancer Institute, FPO-IRCCS, Candiolo, Turin, Italy
- Pietro Piazzolla
- Department of Mechanical Engineering, Politecnico di Milano, Milan, Italy
- Daniele Amparore
- Department of Oncology, Division of Urology, University of Turin, San Luigi Gonzaga Hospital, Orbassano, To, Italy
- Sabrina De Cillis
- Department of Oncology, Division of Urology, University of Turin, San Luigi Gonzaga Hospital, Orbassano, To, Italy
- Federico Piramide
- Department of Oncology, Division of Urology, University of Turin, San Luigi Gonzaga Hospital, Orbassano, To, Italy
- Cecilia Gatti
- Department of Oncology, Division of Urology, University of Turin, San Luigi Gonzaga Hospital, Orbassano, To, Italy
- Ilaria Stura
- Department of Public Health and Pediatric Sciences, School of Medicine, University of Turin, Italy
- Enrico Bollito
- Department of Pathology, San Luigi Gonzaga Hospital, University of Turin, Orbassano, Italy
- Federica Massa
- Department of Pathology, San Luigi Gonzaga Hospital, University of Turin, Orbassano, Italy
- Michele Di Dio
- SS Annunziata Hospital, Department of Surgery, Division of Urology, Cosenza, Italy
- Cristian Fiori
- Department of Oncology, Division of Urology, University of Turin, San Luigi Gonzaga Hospital, Orbassano, To, Italy
- Francesco Porpiglia
- Department of Oncology, Division of Urology, University of Turin, San Luigi Gonzaga Hospital, Orbassano, To, Italy
18
Rodriguez Peñaranda N, Eissa A, Ferretti S, Bianchi G, Di Bari S, Farinha R, Piazza P, Checcucci E, Belenchón IR, Veccia A, Gomez Rivas J, Taratkin M, Kowalewski KF, Rodler S, De Backer P, Cacciamani GE, De Groote R, Gallagher AG, Mottrie A, Micali S, Puliatti S. Artificial Intelligence in Surgical Training for Kidney Cancer: A Systematic Review of the Literature. Diagnostics (Basel) 2023; 13:3070. [PMID: 37835812 PMCID: PMC10572445 DOI: 10.3390/diagnostics13193070] [Received: 08/22/2023] [Revised: 09/17/2023] [Accepted: 09/24/2023] [Indexed: 10/15/2023]
Abstract
The prevalence of renal cell carcinoma (RCC) is increasing due to advanced imaging techniques. Surgical resection is the standard treatment, involving complex radical and partial nephrectomy procedures that demand extensive training and planning. Artificial intelligence (AI) can potentially aid this training process. This review explores how AI can create a framework for kidney cancer surgery to address training difficulties. Following PRISMA 2020 criteria, an exhaustive search of the PubMed and SCOPUS databases was conducted without filters or restrictions. Inclusion criteria encompassed original English-language articles focusing on AI's role in kidney cancer surgical training; all non-original articles and articles published in languages other than English were excluded. Two independent reviewers assessed the articles, with a third settling any disagreement. Study specifics, AI tools, methodologies, endpoints, and outcomes were extracted by the same authors. The Oxford Centre for Evidence-Based Medicine's levels of evidence were employed to assess the studies. Out of 468 identified records, 14 eligible studies were selected. Potential AI applications in kidney cancer surgical training include analyzing surgical workflow, annotating instruments, identifying tissues, and 3D reconstruction. AI is capable of appraising surgical skills, including identifying procedural steps and tracking instruments. While AI and augmented reality (AR) enhance training, challenges persist in real-time tracking and registration. AI-driven 3D reconstruction proves beneficial for intraoperative guidance and preoperative preparation. AI shows potential for advancing surgical training by providing unbiased evaluations, personalized feedback, and enhanced learning processes. Yet challenges such as consistent metric measurement, ethical concerns, and data privacy must be addressed. The integration of AI into kidney cancer surgical training offers solutions to training difficulties and a boost to surgical education. However, to fully harness its potential, additional studies are imperative.
Affiliation(s)
- Natali Rodriguez Peñaranda
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
- Ahmed Eissa
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
- Department of Urology, Faculty of Medicine, Tanta University, Tanta 31527, Egypt
- Stefania Ferretti
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
- Giampaolo Bianchi
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
- Stefano Di Bari
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
- Rui Farinha
- Orsi Academy, 9090 Melle, Belgium
- Urology Department, Lusíadas Hospital, 1500-458 Lisbon, Portugal
- Pietro Piazza
- Division of Urology, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Enrico Checcucci
- Department of Surgery, FPO-IRCCS Candiolo Cancer Institute, 10060 Turin, Italy
- Inés Rivero Belenchón
- Urology and Nephrology Department, Virgen del Rocío University Hospital, 41013 Seville, Spain
- Alessandro Veccia
- Department of Urology, University of Verona, Azienda Ospedaliera Universitaria Integrata, 37126 Verona, Italy
- Juan Gomez Rivas
- Department of Urology, Hospital Clinico San Carlos, 28040 Madrid, Spain
- Mark Taratkin
- Institute for Urology and Reproductive Health, Sechenov University, 119435 Moscow, Russia
- Karl-Friedrich Kowalewski
- Department of Urology and Urosurgery, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, 68167 Mannheim, Germany
- Severin Rodler
- Department of Urology, University Hospital LMU Munich, 80336 Munich, Germany
- Pieter De Backer
- Orsi Academy, 9090 Melle, Belgium
- Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, 9000 Ghent, Belgium
- Giovanni Enrico Cacciamani
- USC Institute of Urology, Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90089, USA
- AI Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA 90089, USA
- Ruben De Groote
- Orsi Academy, 9090 Melle, Belgium
- Anthony G. Gallagher
- Orsi Academy, 9090 Melle, Belgium
- Faculty of Life and Health Sciences, Ulster University, Derry BT48 7JL, UK
- Alexandre Mottrie
- Orsi Academy, 9090 Melle, Belgium
- Salvatore Micali
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
- Stefano Puliatti
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
19
Longin L, Bahrami B, Deroy O. Intelligence brings responsibility - Even smart AI assistants are held responsible. iScience 2023; 26:107494. [PMID: 37609629 PMCID: PMC10440553 DOI: 10.1016/j.isci.2023.107494] [Received: 10/06/2022] [Revised: 06/07/2023] [Accepted: 07/22/2023] [Indexed: 08/24/2023]
Abstract
People will not hold cars responsible for traffic accidents, yet they do when artificial intelligence (AI) is involved. AI systems are held responsible when they act or merely advise a human agent. Does this mean that as soon as AI is involved responsibility follows? To find out, we examined whether purely instrumental AI systems stay clear of responsibility. We compared AI-powered with non-AI-powered car warning systems and measured their responsibility rating alongside their human users. Our findings show that responsibility is shared when the warning system is powered by AI but not by a purely mechanical system, even though people consider both systems as mere tools. Surprisingly, whether the warning prevents the accident introduces an outcome bias: the AI takes higher credit than blame depending on what the human manages or fails to do.
Affiliation(s)
- Louis Longin
- Faculty of Philosophy, Philosophy of Science and the Study of Religion, LMU Munich, Geschwister-Scholl-Platz 1, 80539 Munich, Germany
- Bahador Bahrami
- Crowd Cognition Group, Department of General Psychology and Education, LMU-Munich, Gabelsbergerstraße 62, 80333 Munich, Germany
- Ophelia Deroy
- Faculty of Philosophy, Philosophy of Science and the Study of Religion, LMU Munich, Geschwister-Scholl-Platz 1, 80539 Munich, Germany
- Munich Centre for Neurosciences-Brain & Mind, Großhaderner Str. 2, 82152 Munich, Germany
- Institute of Philosophy, School of Advanced Study, University of London, Senate House, Malet Street, London WC1E 7HU, UK
20
van der Meijden S, Arbous M, Geerts B. Possibilities and challenges for artificial intelligence and machine learning in perioperative care. BJA Educ 2023; 23:288-294. [PMID: 37465235 PMCID: PMC10350557 DOI: 10.1016/j.bjae.2023.04.003] [Accepted: 04/21/2023] [Indexed: 07/20/2023]
Affiliation(s)
- S.L. van der Meijden
- Healthplus.ai-R&D B.V., Amsterdam, The Netherlands
- Intensive Care Unit, Leiden University Medical Centre, Leiden, The Netherlands
- M.S. Arbous
- Intensive Care Unit, Leiden University Medical Centre, Leiden, The Netherlands
- B.F. Geerts
- Healthplus.ai-R&D B.V., Amsterdam, The Netherlands
21
Pai SN, Jeyaraman M, Jeyaraman N, Nallakumarasamy A, Yadav S. In the Hands of a Robot, From the Operating Room to the Courtroom: The Medicolegal Considerations of Robotic Surgery. Cureus 2023; 15:e43634. [PMID: 37719624 PMCID: PMC10504870 DOI: 10.7759/cureus.43634]
Abstract
Robotic surgery has rapidly evolved as a groundbreaking field in medicine, revolutionizing surgical practices across various specialties. Despite its numerous benefits, the adoption of robotic surgery faces significant medicolegal challenges. This article delves into the underexplored legal implications of robotic surgery and identifies three distinct medicolegal problems. First, the lack of standardized training and credentialing for robotic surgery poses potential risks to patient safety and surgeon competence. Second, informed consent processes require additional considerations to ensure patients are fully aware of the technology's capabilities and potential risks. Finally, the issue of legal liability becomes complex due to the involvement of multiple stakeholders in the functioning of robotic systems. The article highlights the need for comprehensive guidelines, regulations, and training programs to navigate the medicolegal aspects of robotic surgery effectively, thereby unlocking its full potential for the future.
Affiliation(s)
- Satvik N Pai: Orthopaedic Surgery, Hospital for Orthopedics, Sports Medicine, Arthritis, and Trauma (HOSMAT) Hospital, Bangalore, IND
- Madhan Jeyaraman: Orthopaedics, ACS Medical College and Hospital, Dr. MGR Educational and Research Institute, Chennai, IND
- Naveen Jeyaraman: Orthopaedics, ACS Medical College and Hospital, Dr. MGR Educational and Research Institute, Chennai, IND
- Arulkumar Nallakumarasamy: Orthopaedics, ACS Medical College and Hospital, Dr. MGR Educational and Research Institute, Chennai, IND
- Sankalp Yadav: Medicine, Shri Madan Lal Khurana Chest Clinic, New Delhi, IND

22.
Rivero-Moreno Y, Echevarria S, Vidal-Valderrama C, Pianetti L, Cordova-Guilarte J, Navarro-Gonzalez J, Acevedo-Rodríguez J, Dorado-Avila G, Osorio-Romero L, Chavez-Campos C, Acero-Alvarracín K. Robotic Surgery: A Comprehensive Review of the Literature and Current Trends. Cureus 2023; 15:e42370. [PMID: 37621804 PMCID: PMC10445506 DOI: 10.7759/cureus.42370]
Abstract
Robotic surgery (RS) is an evolution of minimally invasive surgery that combines medical science, robotics, and engineering. The first robots approved by the Food and Drug Administration (FDA) were the Da Vinci Surgical System and the ZEUS Robotic Surgical System, which have been improved over time. Over the decades, the equipment used in RS has undergone extensive transformation in response to the development of new techniques and of facilities for its assembly and implementation. RS has revolutionized the field of urology, enabling surgeons to perform complex procedures with greater precision and accuracy, as well as many other surgical specialties such as gynecology, general surgery, otolaryngology, cardiothoracic surgery, and neurosurgery. Several benefits, such as better access to the surgical site, a three-dimensional image that improves depth perception, smaller scars, an enhanced range of motion that allows the surgeon to conduct more complicated operations, and reduced postoperative complications, have made robot-assisted surgery an increasingly popular approach. However, the cost of surgical procedures, equipment, instruments, and maintenance remains an important consideration. Machine learning will likely play a role in surgical training in the near future through "automated performance metrics," in which algorithms observe and "learn" individual surgeons' techniques, assess performance, and anticipate surgical outcomes, with the potential to individualize surgical training and aid real-time decision-making.
Affiliation(s)
- Luigi Pianetti: General Surgery, Universidad Nacional del Litoral, Argentina, ARG

23.
Sone K, Tanimoto S, Toyohara Y, Taguchi A, Miyamoto Y, Mori M, Iriyama T, Wada-Hiraike O, Osuga Y. Evolution of a surgical system using deep learning in minimally invasive surgery (Review). Biomed Rep 2023; 19:45. [PMID: 37324165 PMCID: PMC10265572 DOI: 10.3892/br.2023.1628]
Abstract
Recently, artificial intelligence (AI) has been applied in various fields owing to the development of new learning methods, such as deep learning, and to the marked progress in computational processing speed. AI is also being applied in the medical field for medical image recognition and for omics analysis of genomes and other data. AI applications for videos of minimally invasive surgeries have also advanced, and studies on such applications are increasing. In the present review, studies that focused on the following topics were selected: i) Organ and anatomy identification, ii) instrument identification, iii) procedure and surgical phase recognition, iv) surgery-time prediction, v) identification of an appropriate incision line, and vi) surgical education. The development of autonomous surgical robots is also progressing, with the Smart Tissue Autonomous Robot (STAR) and RAVEN systems being the most reported developments. STAR, in particular, can recognize the surgical site from laparoscopic images and is in the process of establishing an automated suturing system, albeit in animal experiments. The present review also examined the possibility of fully autonomous surgical robots in the future.
Affiliation(s)
- Kenbun Sone, Saki Tanimoto, Yusuke Toyohara, Ayumi Taguchi, Yuichiro Miyamoto, Mayuyo Mori, Takayuki Iriyama, Osamu Wada-Hiraike, Yutaka Osuga: Department of Obstetrics and Gynecology, Faculty of Medicine, The University of Tokyo, Tokyo 113-8655, Japan

24.
Hashemi N, Svendsen MBS, Bjerrum F, Rasmussen S, Tolsgaard MG, Friis ML. Acquisition and usage of robotic surgical data for machine learning analysis. Surg Endosc 2023. [PMID: 37389741 PMCID: PMC10338401 DOI: 10.1007/s00464-023-10214-7]
Abstract
BACKGROUND The increasing use of robot-assisted surgery (RAS) has led to the need for new methods of assessing whether new surgeons are qualified to perform RAS without the resource-demanding process of having expert surgeons do the assessment. Computer-based automation and artificial intelligence (AI) are seen as promising alternatives to expert-based surgical assessment. However, no standard protocols or methods for preparing data and implementing AI are available to clinicians, which may be among the reasons for the impediment to the use of AI in the clinical setting. METHOD We tested our method on porcine models with both the da Vinci Si and the da Vinci Xi. We captured raw video data from the surgical robots and 3D movement data from the surgeons, and prepared the data for use in AI following a structured guide with these steps: 'Capturing image data from the surgical robot', 'Extracting event data', 'Capturing movement data of the surgeon', and 'Annotation of image data'. RESULTS Fifteen participants (11 novices and 4 experienced) performed 10 different intraabdominal RAS procedures. Using this method, we captured 188 videos (94 from the surgical robot and 94 corresponding movement videos of the surgeons' arms and hands). Event data, movement data, and labels were extracted from the raw material and prepared for use in AI. CONCLUSION With the described methods, we could collect, prepare, and annotate images, events, and motion data from surgical robotic systems in preparation for their use in AI.
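The acquisition steps this abstract lists ('capturing image data', 'extracting event data', 'annotation of image data') imply pairing frame intervals of each video with event labels before any model training. A minimal sketch of such an interval-based annotation record, with all field names illustrative rather than the authors' actual schema:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """One labeled interval of a surgical video, in frames (hypothetical schema)."""
    video_id: str
    start_frame: int
    end_frame: int  # exclusive
    label: str      # e.g. a surgical event or subtask name

def frames_to_labels(annotations, n_frames, default="background"):
    """Expand interval annotations into one label per frame;
    later annotations overwrite earlier ones on overlap."""
    labels = [default] * n_frames
    for ann in annotations:
        for f in range(max(0, ann.start_frame), min(ann.end_frame, n_frames)):
            labels[f] = ann.label
    return labels
```

A per-frame label vector of this kind is a common intermediate form before cutting videos into labeled clips for supervised learning.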
Affiliation(s)
- Nasseh Hashemi: Department of Clinical Medicine, Aalborg University Hospital, Aalborg, Denmark; Nordsim-Centre for Skills Training and Simulation, Aalborg, Denmark; ROCnord-Robot Centre, Aalborg University Hospital, Aalborg, Denmark; Department of Urology, Aalborg University Hospital, Aalborg, Denmark
- Morten Bo Søndergaard Svendsen: Copenhagen Academy for Medical Education and Simulation, Center for Human Resources and Education, Copenhagen, Denmark; Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Flemming Bjerrum: Copenhagen Academy for Medical Education and Simulation, Center for Human Resources and Education, Copenhagen, Denmark; Department of Gastrointestinal and Hepatic Diseases, Copenhagen University Hospital - Herlev and Gentofte, Herlev, Denmark
- Sten Rasmussen: Department of Clinical Medicine, Aalborg University Hospital, Aalborg, Denmark
- Martin G Tolsgaard: Nordsim-Centre for Skills Training and Simulation, Aalborg, Denmark; Copenhagen Academy for Medical Education and Simulation, Center for Human Resources and Education, Copenhagen, Denmark
- Mikkel Lønborg Friis: Department of Clinical Medicine, Aalborg University Hospital, Aalborg, Denmark; Nordsim-Centre for Skills Training and Simulation, Aalborg, Denmark

25.
Mittermaier M, Raza MM, Kvedar JC. Bias in AI-based models for medical applications: challenges and mitigation strategies. NPJ Digit Med 2023; 6:113. [PMID: 37311802 DOI: 10.1038/s41746-023-00858-z]
Affiliation(s)
- Mirja Mittermaier: Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Infectious Diseases, Respiratory Medicine and Critical Care, Berlin, Germany; Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Charitéplatz 1, 10117, Berlin, Germany

26.
Nyangoh Timoh K, Huaulme A, Cleary K, Zaheer MA, Lavoué V, Donoho D, Jannin P. A systematic review of annotation for surgical process model analysis in minimally invasive surgery based on video. Surg Endosc 2023. [PMID: 37157035 DOI: 10.1007/s00464-023-10041-w]
Abstract
BACKGROUND Annotated data are foundational to applications of supervised machine learning. However, there seems to be a lack of common language used in the field of surgical data science. The aim of this study is to review the process of annotation and the semantics used in the creation of surgical process models (SPM) for minimally invasive surgery videos. METHODS For this systematic review, we reviewed articles indexed in the MEDLINE database from January 2000 until March 2022. We selected articles using surgical video annotations to describe a surgical process model in the field of minimally invasive surgery. We excluded studies focusing only on instrument detection or recognition of anatomical areas. The risk of bias was evaluated with the Newcastle-Ottawa quality assessment tool. Data from the studies were presented visually in tables using the SPIDER tool. RESULTS Of the 2806 articles identified, 34 were selected for review. Twenty-two were in the field of digestive surgery, six in ophthalmologic surgery only, one in neurosurgery, three in gynecologic surgery, and two in mixed fields. Thirty-one studies (88.2%) were dedicated to phase, step, or action recognition and mainly relied on a very simple formalization (29, 85.2%). Clinical information was lacking in the datasets of studies using available public datasets. The process of annotation for surgical process models was lacking and poorly described, and descriptions of the surgical procedures were highly variable between studies. CONCLUSION Surgical video annotation lacks a rigorous and reproducible framework. This leads to difficulties in sharing videos between institutions and hospitals because of the different languages used. There is a need to develop and use a common ontology to improve libraries of annotated surgical videos.
Affiliation(s)
- Krystel Nyangoh Timoh: Department of Gynecology and Obstetrics and Human Reproduction, CHU Rennes, Rennes, France; INSERM, LTSI - UMR 1099, University Rennes 1, Rennes, France; Laboratoire d'Anatomie et d'Organogenèse, Faculté de Médecine, Centre Hospitalier Universitaire de Rennes, 2 Avenue du Professeur Léon Bernard, 35043, Rennes Cedex, France; Department of Obstetrics and Gynecology, Rennes Hospital, Rennes, France
- Arnaud Huaulme: INSERM, LTSI - UMR 1099, University Rennes 1, Rennes, France
- Kevin Cleary: Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, 20010, USA
- Myra A Zaheer: George Washington University School of Medicine and Health Sciences, Washington, DC, USA
- Vincent Lavoué: Department of Gynecology and Obstetrics and Human Reproduction, CHU Rennes, Rennes, France
- Dan Donoho: Division of Neurosurgery, Center for Neuroscience, Children's National Hospital, Washington, DC, 20010, USA
- Pierre Jannin: INSERM, LTSI - UMR 1099, University Rennes 1, Rennes, France

27.
Artificial Intelligence Splint in Orthognathic Surgery for Skeletal Class III Malocclusion: Design and Application. J Craniofac Surg 2023; 34:698-703. [PMID: 36728461 DOI: 10.1097/scs.0000000000009162]
Abstract
BACKGROUND Digital splints are indispensable in orthognathic surgery. However, the present design process for splints is time-consuming and has low reproducibility. To solve these problems, an algorithm for artificial intelligence splints was developed in this study, making the automatic design of splints accessible. METHODS First, the algorithm and program for the artificial intelligence splint were created. A total of 54 patients with skeletal class III malocclusion were then included in this study from 2018 to 2020. Pre- and postoperative radiographic examinations were performed. The cephalometric measurements were recorded, and the difference between the virtual simulation and postoperative images was measured. The time cost and the differences between artificial intelligence splints and digital splints were analyzed through both model surgery and radiographic images. RESULTS The results showed that the efficiency of designing splints was significantly improved, and the mean difference between artificial intelligence splints and digital splints was <0.15 mm in model surgery. Meanwhile, there was no significant difference between the artificial intelligence splints and digital splints in radiological image analysis. CONCLUSIONS Compared with digital splints, artificial intelligence splints can save time in preoperative design while ensuring accuracy. The authors believe this is conducive to the presurgical design of orthognathic surgery.
28.
Artificial Intelligence in Surgical Learning. Surgeries 2023. [DOI: 10.3390/surgeries4010010]
Abstract
(1) Background: Artificial intelligence (AI) is transforming healthcare on all levels. While AI shows immense potential, its clinical implementation is lagging. We present a concise review of AI in surgical learning. (2) Methods: A non-systematic review of the English-language literature on AI in surgical learning is provided. (3) Results: AI shows utility for all components of surgical competence within surgical learning; it presents particularly great potential within robotic surgery. (4) Conclusions: Technology will evolve in ways currently unimaginable, presenting us with novel applications of AI and derivatives thereof. Surgeons must be open to new modes of learning to be able to implement all evidence-based applications of AI in the future. Systematic analyses of AI in surgical learning are needed.
29.
Huang P, Feng Z, Shu X, Wu A, Wang Z, Hu T, Cao Y, Tu Y, Li Z. A bibliometric and visual analysis of publications on artificial intelligence in colorectal cancer (2002-2022). Front Oncol 2023; 13:1077539. [PMID: 36824138 PMCID: PMC9941644 DOI: 10.3389/fonc.2023.1077539]
Abstract
Background Colorectal cancer (CRC) has the third-highest incidence and second-highest mortality rate of all cancers worldwide. Early diagnosis and screening of CRC have been the focus of research in this field. With the continuous development of artificial intelligence (AI) technology, AI has advantages in many aspects of CRC, such as adenoma screening, genetic testing, and prediction of tumor metastasis. Objective This study uses bibliometrics to analyze research on AI in CRC, summarize the history and current status of research in the field, and predict future research directions. Method We searched the SCIE database for all literature on CRC and AI. The documents span the period 2002-2022. We used bibliometrics to analyze the data of these papers, such as authors, countries, institutions, and references. Co-authorship, co-citation, and co-occurrence analysis were the main methods of analysis. CiteSpace, VOSviewer, and SCImago Graphica were used to visualize the results. Result This study selected 1,531 articles on AI in CRC. China published the largest number of articles in this field (580). The U.S. had the most high-quality publications, with an average of 46.13 citations per article. Mori Y and Ding K were the two authors with the highest number of articles. Scientific Reports, Cancers, and Frontiers in Oncology are the field's most widely published journals. Institutions from China occupy the top 9 positions among the most published institutions. We found that research on AI in this field mainly focuses on colonoscopy-assisted diagnosis, imaging histology, and pathology examination. Conclusion AI in CRC is currently in the development stage with good prospects. AI is currently widely used in colonoscopy, imageomics, and pathology. However, the scope of AI applications is still limited, and there is a lack of inter-institutional collaboration. The pervasiveness of AI technology is the main direction of future development in this field.
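The co-occurrence analysis named in this abstract reduces, at its core, to counting how often pairs of keywords appear on the same article. A minimal sketch with illustrative data (VOSviewer and CiteSpace implement far richer, normalized versions of this):

```python
from collections import Counter
from itertools import combinations

def keyword_cooccurrence(papers):
    """papers: list of keyword lists, one per article.
    Returns a Counter mapping sorted keyword pairs to the number
    of articles in which both keywords appear together."""
    pairs = Counter()
    for kws in papers:
        # sort so each unordered pair is counted under one canonical key
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs
```

The resulting pair counts are what co-occurrence network tools visualize as edge weights between keyword nodes.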
Affiliation(s)
- Pan Huang, Zongfeng Feng, Xufeng Shu, Ahao Wu, Zhonghao Wang, Tengcheng Hu, Yi Cao, Zhengrong Li: Department of General Surgery, First Affiliated Hospital of Nanchang University, Nanchang, China; Department of Digestive Surgery, Digestive Disease Hospital, The First Affiliated Hospital of Nanchang University, Nanchang, China; Medical Innovation Center, The First Affiliated Hospital of Nanchang University, Nanchang, China
- Yi Tu: Department of Pathology, The First Affiliated Hospital of Nanchang University, Nanchang, China

30.
Künstliche Intelligenz in der Therapie chronischer Wunden – Konzepte und Ausblick [Artificial intelligence in the treatment of chronic wounds: concepts and outlook]. Gefässchirurgie 2023. [DOI: 10.1007/s00772-022-00964-4]
31.
Gazis A, Karaiskos P, Loukas C. Surgical Gesture Recognition in Laparoscopic Tasks Based on the Transformer Network and Self-Supervised Learning. Bioengineering (Basel) 2022; 9:737. [PMID: 36550943 PMCID: PMC9774918 DOI: 10.3390/bioengineering9120737]
Abstract
In this study, we propose a deep learning framework and a self-supervision scheme for video-based surgical gesture recognition. The proposed framework is modular. First, a 3D convolutional network extracts feature vectors from video clips, encoding spatial and short-term temporal features. Second, the feature vectors are fed into a transformer network to capture long-term temporal dependencies. Two main models are proposed based on this backbone framework: C3DTrans (supervised) and SSC3DTrans (self-supervised). The dataset consisted of 80 videos from two basic laparoscopic tasks: peg transfer (PT) and knot tying (KT). To examine the potential of self-supervision, the models were trained on 60% and 100% of the annotated dataset. In addition, the best-performing model was evaluated on the JIGSAWS robotic surgery dataset. The best model (C3DTrans) achieves accuracies of 88.0% and 95.2% (clip level) and 97.5% and 97.9% (gesture level) for PT and KT, respectively. SSC3DTrans performed similarly to C3DTrans when trained on 60% of the annotated dataset (about 84% and 93% clip-level accuracy for PT and KT, respectively). The performance of C3DTrans on JIGSAWS was close to 76% accuracy, similar to or higher than prior techniques based on a single video stream, no additional video training, and online processing.
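The abstract reports both clip-level and gesture-level accuracies. One common way to obtain gesture-level predictions from per-clip predictions is a majority vote over the clips of each gesture segment, which explains why gesture-level accuracy is typically higher: isolated clip errors get outvoted. A hedged sketch of that aggregation (the paper's actual rule may differ; all names here are illustrative):

```python
from collections import Counter

def clip_to_gesture(clip_preds, clip_gesture_ids):
    """Aggregate per-clip labels into one label per gesture segment by
    majority vote. clip_preds[i] is the predicted class of clip i;
    clip_gesture_ids[i] identifies the gesture segment clip i belongs to."""
    votes = {}
    for pred, gid in zip(clip_preds, clip_gesture_ids):
        votes.setdefault(gid, []).append(pred)
    return {gid: Counter(v).most_common(1)[0][0] for gid, v in votes.items()}

def gesture_accuracy(pred_by_gesture, true_by_gesture):
    """Fraction of gesture segments whose aggregated label is correct."""
    correct = sum(pred_by_gesture[g] == t for g, t in true_by_gesture.items())
    return correct / len(true_by_gesture)
```

For example, a segment whose clips are predicted ["grasp", "grasp", "pull"] is still labeled "grasp" at the gesture level despite one clip-level error.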
32.
Moglia A, Georgiou K, Morelli L, Toutouzas K, Satava RM, Cuschieri A. Breaking down the silos of artificial intelligence in surgery: glossary of terms. Surg Endosc 2022; 36:7986-7997. [PMID: 35729406 PMCID: PMC9613746 DOI: 10.1007/s00464-022-09371-y]
Abstract
BACKGROUND The literature on artificial intelligence (AI) in surgery has advanced rapidly during the past few years. However, the published studies on AI are mostly reported by computer scientists using their own jargon, which is unfamiliar to surgeons. METHODS A literature search was conducted in PubMed following the preferred reporting items for systematic reviews and meta-analyses (PRISMA) statement. The primary outcome of this review is to provide a glossary with definitions of the AI terms commonly used in surgery, to improve their understanding by surgeons. RESULTS One hundred ninety-five studies were included in this review, and 38 AI terms related to surgery were retrieved. Convolutional neural network was the term most frequently retrieved by the search, accounting for 74 studies on AI in surgery, followed by classification task (n = 62), artificial neural network (n = 53), and regression (n = 49). The next most frequent expressions were supervised learning (reported in 24 articles), support vector machine (SVM) (in 21), and logistic regression (in 16). The rest of the 38 terms were seldom mentioned. CONCLUSIONS The proposed glossary can be used by several stakeholders: first and foremost, residents and attending consultant surgeons, both of whom have to understand the fundamentals of AI when reading such articles; second, junior researchers at the start of their career in surgical data science; and third, experts working in the regulatory sections of companies involved in AI Software as a Medical Device (SaMD), preparing documents for submission to the Food and Drug Administration (FDA) or other agencies for approval.
Affiliation(s)
- Andrea Moglia: Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Konstantinos Georgiou: 1st Propaedeutic Surgical Unit, Hippocrateion Athens General Hospital, Athens Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Luca Morelli: Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy; Department of General Surgery, University of Pisa, Pisa, Italy
- Konstantinos Toutouzas: 1st Propaedeutic Surgical Unit, Hippocrateion Athens General Hospital, Athens Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Richard M Satava: Department of Surgery, University of Washington Medical Center, Seattle, WA, USA
- Alfred Cuschieri: Scuola Superiore Sant'Anna of Pisa, 56214, Pisa, Italy; Institute for Medical Science and Technology, University of Dundee, Dundee, DD2 1FD, UK

33.
Moglia A, Morelli L, D'Ischia R, Fatucchi LM, Pucci V, Berchiolli R, Ferrari M, Cuschieri A. Ensemble deep learning for the prediction of proficiency at a virtual simulator for robot-assisted surgery. Surg Endosc 2022; 36:6473-6479. [PMID: 35020053 PMCID: PMC9402513 DOI: 10.1007/s00464-021-08999-6]
Abstract
BACKGROUND Artificial intelligence (AI) has the potential to enhance patient safety in surgery, and all its aspects, including education and training, will derive considerable benefit from AI. In the present study, deep-learning models were used to predict the rates of proficiency acquisition in robot-assisted surgery (RAS), thereby providing surgical program directors with information on trainees' levels of innate ability to facilitate the implementation of flexible personalized training. METHODS 176 medical students, without prior experience with surgical simulators, were trained to reach proficiency in five tasks on a virtual simulator for RAS. Ensemble deep neural network (DNN) models were developed and compared with other ensemble AI algorithms, i.e., random forests and gradient boosted regression trees (GBRT). RESULTS DNN models achieved a higher accuracy than random forests and GBRT in predicting time to proficiency: 0.84 vs. 0.70 and 0.77, respectively (Peg board 2), 0.83 vs. 0.79 and 0.78 (Ring walk 2), 0.81 vs. 0.81 and 0.80 (Match board 1), 0.79 vs. 0.75 and 0.71 (Ring and rail 2), and 0.87 vs. 0.86 and 0.84 (Thread the rings 2). Ensemble DNN models outperformed random forests and GBRT in predicting number of attempts to proficiency, with an accuracy of 0.87 vs. 0.86 and 0.83, respectively (Peg board 2), 0.89 vs. 0.88 and 0.89 (Ring walk 2), 0.91 vs. 0.89 and 0.89 (Match board 1), 0.89 vs. 0.87 and 0.83 (Ring and rail 2), and 0.96 vs. 0.94 and 0.94 (Thread the rings 2). CONCLUSIONS Ensemble DNN models can identify at an early stage the rates at which trainees acquire surgical technical proficiency and can identify those struggling to reach the required proficiency level.
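Model accuracy comparisons like those reported here are typically obtained by averaging over repeated random train-test splits. A minimal sketch of that evaluation loop, using a trivial nearest-class-mean model on synthetic one-dimensional data as a stand-in for the study's DNN, random forest, and GBRT models (everything here is illustrative, not the authors' pipeline):

```python
import random

def nearest_mean_predict(train, test):
    """train/test: lists of (feature, label) with scalar features.
    Classify each test point by the closest class mean."""
    sums, counts = {}, {}
    for x, y in train:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    means = {y: sums[y] / counts[y] for y in sums}
    return [min(means, key=lambda y: abs(means[y] - x)) for x, _ in test]

def repeated_split_accuracy(data, n_iter=30, test_frac=0.2, seed=0):
    """Average accuracy over repeated random train-test splits."""
    rng = random.Random(seed)
    accs = []
    for _ in range(n_iter):
        d = data[:]
        rng.shuffle(d)
        n_test = max(1, int(len(d) * test_frac))
        test, train = d[:n_test], d[n_test:]
        preds = nearest_mean_predict(train, test)
        accs.append(sum(p == y for p, (_, y) in zip(preds, test)) / n_test)
    return sum(accs) / len(accs)
```

Running this harness once per candidate model, on identical splits, yields the paired accuracy figures that the abstract compares.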
Collapse
Affiliation(s)
- Andrea Moglia
  - EndoCAS, Center for Computer Assisted Surgery, University of Pisa, Edificio 102, via Paradisa 2, 56124, Pisa, Italy
- Luca Morelli
  - EndoCAS, Center for Computer Assisted Surgery, University of Pisa, Edificio 102, via Paradisa 2, 56124, Pisa, Italy
  - General Surgery Unit, Cisanello Teaching Hospital of Pisa, 56124, Pisa, Italy
  - Multidisciplinary Center of Robotic Surgery, University Hospital of Pisa, 56124, Pisa, Italy
- Roberto D'Ischia
  - General Surgery Unit, Cisanello Teaching Hospital of Pisa, 56124, Pisa, Italy
- Valentina Pucci
  - General Surgery Unit, Cisanello Teaching Hospital of Pisa, 56124, Pisa, Italy
- Mauro Ferrari
  - EndoCAS, Center for Computer Assisted Surgery, University of Pisa, Edificio 102, via Paradisa 2, 56124, Pisa, Italy
  - Vascular Surgery Unit, Cisanello Teaching Hospital of Pisa, 56124, Pisa, Italy
- Alfred Cuschieri
  - Scuola Superiore Sant'Anna of Pisa, 56214, Pisa, Italy
  - Institute for Medical Science and Technology, University of Dundee, Dundee, DD2 1FD, UK
34
Brenner AR, Laoveeravat P, Carey PJ, Joiner D, Mardini SH, Jovani M. Artificial intelligence using advanced imaging techniques and cholangiocarcinoma: Recent advances and future direction. Artif Intell Gastroenterol 2022; 3:88-95. [DOI: 10.35712/aig.v3.i3.88]
Abstract
While cholangiocarcinoma represents only about 3% of all gastrointestinal tumors, it has a dismal survival rate, usually because it is diagnosed at a late stage. The utilization of artificial intelligence (AI) in medicine in general, and in gastroenterology in particular, has advanced rapidly. However, the application of AI to biliary disease, and to cholangiocarcinoma in particular, has been suboptimal. The use of AI in combination with clinical data, cross-sectional imaging (computed tomography, magnetic resonance imaging), and endoscopy (endoscopic ultrasound and cholangioscopy) has the potential to significantly improve early diagnosis and the choice of optimal therapeutic options, transforming the prognosis of this feared disease. In this review we summarize the current knowledge on the use of AI for the diagnosis and management of cholangiocarcinoma and point to future directions in the field.
Affiliation(s)
- Aaron R Brenner
  - Department of Internal Medicine, University of Kentucky College of Medicine, Lexington, KY 40536, United States
- Passisd Laoveeravat
  - Division of Digestive Diseases and Nutrition, University of Kentucky College of Medicine, Lexington, KY 40536, United States
- Patrick J Carey
  - Department of Internal Medicine, University of Kentucky College of Medicine, Lexington, KY 40536, United States
- Danielle Joiner
  - Department of Internal Medicine, University of Kentucky College of Medicine, Lexington, KY 40536, United States
- Samuel H Mardini
  - Division of Digestive Diseases and Nutrition, University of Kentucky College of Medicine, Lexington, KY 40536, United States
- Manol Jovani
  - Digestive Diseases and Nutrition, University of Kentucky Albert B. Chandler Hospital, Lexington, KY 40536, United States
35
Amparore D, Pecoraro A, Piramide F, Verri P, Checcucci E, De Cillis S, Piana A, Burgio M, Di Dio M, Manfredi M, Fiori C, Porpiglia F. Three-dimensional imaging reconstruction of the kidney's anatomy for a tailored minimal invasive partial nephrectomy: A pilot study. Asian J Urol 2022; 9:263-271. [PMID: 36035345] [PMCID: PMC9399544] [DOI: 10.1016/j.ajur.2022.06.003]
Affiliation(s)
- Daniele Amparore
  - Department of Urology, San Luigi Gonzaga Hospital, University of Turin, Orbassano, Turin, Italy
  - European Association of Urology (EAU) Young Academic Urologists (YAU) Renal Cancer Working Group, Arnhem, Netherlands
  - Corresponding author. Department of Urology, San Luigi Gonzaga Hospital, University of Turin, Orbassano, Turin, Italy
- Angela Pecoraro
  - Department of Urology, San Luigi Gonzaga Hospital, University of Turin, Orbassano, Turin, Italy
  - European Association of Urology (EAU) Young Academic Urologists (YAU) Renal Cancer Working Group, Arnhem, Netherlands
- Federico Piramide
  - Department of Urology, San Luigi Gonzaga Hospital, University of Turin, Orbassano, Turin, Italy
- Paolo Verri
  - Department of Urology, San Luigi Gonzaga Hospital, University of Turin, Orbassano, Turin, Italy
- Enrico Checcucci
  - Department of Urology, San Luigi Gonzaga Hospital, University of Turin, Orbassano, Turin, Italy
  - European Association of Urology (EAU) Young Academic Urologists (YAU) Uro-technology and SoMe Working Group, Arnhem, Netherlands
- Sabrina De Cillis
  - Department of Urology, San Luigi Gonzaga Hospital, University of Turin, Orbassano, Turin, Italy
- Alberto Piana
  - Department of Urology, San Luigi Gonzaga Hospital, University of Turin, Orbassano, Turin, Italy
- Mariano Burgio
  - Department of Urology, San Luigi Gonzaga Hospital, University of Turin, Orbassano, Turin, Italy
- Michele Di Dio
  - Division of Urology, Department of Surgery, SS Annunziata Hospital, Cosenza, Italy
- Matteo Manfredi
  - Department of Urology, San Luigi Gonzaga Hospital, University of Turin, Orbassano, Turin, Italy
- Cristian Fiori
  - Department of Urology, San Luigi Gonzaga Hospital, University of Turin, Orbassano, Turin, Italy
- Francesco Porpiglia
  - Department of Urology, San Luigi Gonzaga Hospital, University of Turin, Orbassano, Turin, Italy
36
Puliatti S, Eissa A, Checcucci E, Piazza P, Amato M, Scarcella S, Rivas JG, Taratkin M, Marenco J, Rivero IB, Kowalewski KF, Cacciamani G, El-Sherbiny A, Zoeir A, El-Bahnasy AM, De Groote R, Mottrie A, Micali S. New imaging technologies for robotic kidney cancer surgery. Asian J Urol 2022; 9:253-262. [PMID: 36035346] [PMCID: PMC9399539] [DOI: 10.1016/j.ajur.2022.03.008]
Abstract
Objective Kidney cancers accounted for approximately 2% of all newly diagnosed cancers in 2020. Among the primary treatment options for kidney cancer, urologists may choose between radical nephrectomy, partial nephrectomy, and ablative therapies. Nowadays, robotic-assisted partial nephrectomy (RAPN) for the management of renal cancers has gained popularity, to the point of being considered the gold standard. However, RAPN is a challenging procedure with a steep learning curve. Methods In this narrative review, the different imaging technologies used to guide and aid RAPN are discussed. Results Three-dimensional visualization technology has been extensively discussed in RAPN, showing its value in enhancing robotic-surgery training, patient counseling, surgical planning, and intraoperative guidance. Intraoperative imaging technologies such as intracorporeal ultrasound, near-infrared fluorescence imaging, and intraoperative pathological examination can also be used to improve the outcomes following RAPN. Finally, artificial intelligence may soon play a role in RAPN. Conclusion RAPN is a complex surgery; however, many imaging technologies may play an important role in facilitating it.