1
|
Bellos T, Manolitsis I, Katsimperis S, Juliebø-Jones P, Feretzakis G, Mitsogiannis I, Varkarakis I, Somani BK, Tzelves L. Artificial Intelligence in Urologic Robotic Oncologic Surgery: A Narrative Review. Cancers (Basel) 2024; 16:1775. [PMID: 38730727 PMCID: PMC11083167 DOI: 10.3390/cancers16091775] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2024] [Revised: 04/29/2024] [Accepted: 05/02/2024] [Indexed: 05/13/2024] Open
Abstract
With the rapid increase in computer processing capacity over the past two decades, machine learning techniques have been applied in many sectors of daily life. Machine learning in therapeutic settings is also gaining popularity. We analysed current studies on machine learning in robotic urologic surgery. We searched PubMed/Medline and Google Scholar up to December 2023. Search terms included "urologic surgery", "artificial intelligence", "machine learning", "neural network", "automation", and "robotic surgery". Automatic preoperative imaging, intraoperative anatomy matching, and bleeding prediction has been a major focus. Early artificial intelligence (AI) therapeutic outcomes are promising. Robot-assisted surgery provides precise telemetry data and a cutting-edge viewing console to analyse and improve AI integration in surgery. Machine learning enhances surgical skill feedback, procedure effectiveness, surgical guidance, and postoperative prediction. Tension-sensors on robotic arms and augmented reality can improve surgery. This provides real-time organ motion monitoring, improving precision and accuracy. As datasets develop and electronic health records are used more and more, these technologies will become more effective and useful. AI in robotic surgery is intended to improve surgical training and experience. Both seek precision to improve surgical care. AI in ''master-slave'' robotic surgery offers the detailed, step-by-step examination of autonomous robotic treatments.
Collapse
Affiliation(s)
- Themistoklis Bellos
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece; (T.B.); (I.M.); (S.K.); (I.M.); (I.V.)
| | - Ioannis Manolitsis
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece; (T.B.); (I.M.); (S.K.); (I.M.); (I.V.)
| | - Stamatios Katsimperis
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece; (T.B.); (I.M.); (S.K.); (I.M.); (I.V.)
| | | | - Georgios Feretzakis
- School of Science and Technology, Hellenic Open University, 26335 Patras, Greece;
| | - Iraklis Mitsogiannis
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece; (T.B.); (I.M.); (S.K.); (I.M.); (I.V.)
| | - Ioannis Varkarakis
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece; (T.B.); (I.M.); (S.K.); (I.M.); (I.V.)
| | - Bhaskar K. Somani
- Department of Urology, University of Southampton, Southampton SO16 6YD, UK;
| | - Lazaros Tzelves
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece; (T.B.); (I.M.); (S.K.); (I.M.); (I.V.)
| |
Collapse
|
2
|
Kang YJ, Kim SJ, Seo SH, Lee S, Kim HS, Yoo JI. Assessment of Automated Identification of Phases in Videos of Total Hip Arthroplasty Using Deep Learning Techniques. Clin Orthop Surg 2024; 16:210-216. [PMID: 38562629 PMCID: PMC10973629 DOI: 10.4055/cios23280] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/12/2023] [Revised: 10/23/2023] [Accepted: 11/06/2023] [Indexed: 04/04/2024] Open
Abstract
Background As the population ages, the rates of hip diseases and fragility fractures are increasing, making total hip arthroplasty (THA) one of the best methods for treating elderly patients. With the increasing number of THA surgeries and diverse surgical methods, there is a need for standard evaluation protocols. This study aimed to use deep learning algorithms to classify THA videos and evaluate the accuracy of the labelling of these videos. Methods In our study, we manually annotated 7 phases in THA, including skin incision, broaching, exposure of acetabulum, acetabular reaming, acetabular cup positioning, femoral stem insertion, and skin closure. Within each phase, a second trained annotator marked the beginning and end of instrument usages, such as the skin blade, forceps, Bovie, suction device, suture material, retractor, rasp, femoral stem, acetabular reamer, head trial, and real head. Results In our study, we utilized YOLOv3 to collect 540 operating images of THA procedures and create a scene annotation model. The results of our study showed relatively high accuracy in the clear classification of surgical techniques such as skin incision and closure, broaching, acetabular reaming, and femoral stem insertion, with a mean average precision (mAP) of 0.75 or higher. Most of the equipment showed good accuracy of mAP 0.7 or higher, except for the suction device, suture material, and retractor. Conclusions Scene annotation for the instrument and phases in THA using deep learning techniques may provide potentially useful tools for subsequent documentation, assessment of skills, and feedback.
Collapse
Affiliation(s)
- Yang Jae Kang
- Division of Bio and Medical Big Data Department (BK4 Program) and Life Science Department, Gyeongsang National University, Jinju, Korea
| | - Shin June Kim
- Biomedical Research Institute, Inha University Hospital, Incheon, Korea
| | - Sung Hyo Seo
- Biomedical Research Institute, Gyeongsang National University Hospital, Jinju, Korea
| | - Sangyeob Lee
- Biomedical Research Institute, Gyeongsang National University Hospital, Jinju, Korea
| | - Hyeon Su Kim
- Biomedical Research Institute, Inha University Hospital, Incheon, Korea
| | - Jun-Il Yoo
- Department of Orthopedic Surgery, Inha University Hospital, Inha University College of Medicine, Incheon, Korea
| |
Collapse
|
3
|
Tsai AY, Carter SR, Greene AC. Artificial intelligence in pediatric surgery. Semin Pediatr Surg 2024; 33:151390. [PMID: 38242061 DOI: 10.1016/j.sempedsurg.2024.151390] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2024]
Abstract
Artificial intelligence (AI) is rapidly changing the landscape of medicine and is already being utilized in conjunction with medical diagnostics and imaging analysis. We hereby explore AI applications in surgery and examine its relevance to pediatric surgery, covering its evolution, current state, and promising future. The various fields of AI are explored including machine learning and applications to predictive analytics and decision support in surgery, computer vision and image analysis in preoperative planning, image segmentation, surgical navigation, and finally, natural language processing assist in expediting clinical documentation, identification of clinical indications, quality improvement, outcome research, and other types of automated data extraction. The purpose of this review is to familiarize the pediatric surgical community with the rise of AI and highlight the ongoing advancements and challenges in its adoption, including data privacy, regulatory considerations, and the imperative for interdisciplinary collaboration. We hope this review serves as a comprehensive guide to AI's transformative influence on surgery, demonstrating its potential to enhance pediatric surgical patient outcomes, improve precision, and usher in a new era of surgical excellence.
Collapse
Affiliation(s)
- Anthony Y Tsai
- Division of Pediatric Surgery, Penn State Health Children's Hospital, 500 University Drive, Hershey, PA 17033, United States.
| | - Stewart R Carter
- Division of Pediatric Surgery, University of Louisville School of Medicine, Louisville, KY, United States
| | - Alicia C Greene
- Division of Pediatric Surgery, Penn State Health Children's Hospital, 500 University Drive, Hershey, PA 17033, United States
| |
Collapse
|
4
|
Boal MWE, Anastasiou D, Tesfai F, Ghamrawi W, Mazomenos E, Curtis N, Collins JW, Sridhar A, Kelly J, Stoyanov D, Francis NK. Evaluation of objective tools and artificial intelligence in robotic surgery technical skills assessment: a systematic review. Br J Surg 2024; 111:znad331. [PMID: 37951600 PMCID: PMC10771126 DOI: 10.1093/bjs/znad331] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2023] [Revised: 09/18/2023] [Accepted: 09/19/2023] [Indexed: 11/14/2023]
Abstract
BACKGROUND There is a need to standardize training in robotic surgery, including objective assessment for accreditation. This systematic review aimed to identify objective tools for technical skills assessment, providing evaluation statuses to guide research and inform implementation into training curricula. METHODS A systematic literature search was conducted in accordance with the PRISMA guidelines. Ovid Embase/Medline, PubMed and Web of Science were searched. Inclusion criterion: robotic surgery technical skills tools. Exclusion criteria: non-technical, laparoscopy or open skills only. Manual tools and automated performance metrics (APMs) were analysed using Messick's concept of validity and the Oxford Centre of Evidence-Based Medicine (OCEBM) Levels of Evidence and Recommendation (LoR). A bespoke tool analysed artificial intelligence (AI) studies. The Modified Downs-Black checklist was used to assess risk of bias. RESULTS Two hundred and forty-seven studies were analysed, identifying: 8 global rating scales, 26 procedure-/task-specific tools, 3 main error-based methods, 10 simulators, 28 studies analysing APMs and 53 AI studies. Global Evaluative Assessment of Robotic Skills and the da Vinci Skills Simulator were the most evaluated tools at LoR 1 (OCEBM). Three procedure-specific tools, 3 error-based methods and 1 non-simulator APMs reached LoR 2. AI models estimated outcomes (skill or clinical), demonstrating superior accuracy rates in the laboratory with 60 per cent of methods reporting accuracies over 90 per cent, compared to real surgery ranging from 67 to 100 per cent. CONCLUSIONS Manual and automated assessment tools for robotic surgery are not well validated and require further evaluation before use in accreditation processes.PROSPERO: registration ID CRD42022304901.
Collapse
Affiliation(s)
- Matthew W E Boal
- The Griffin Institute, Northwick Park & St Marks’ Hospital, London, UK
- Wellcome/ESPRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
| | - Dimitrios Anastasiou
- Wellcome/ESPRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Medical Physics and Biomedical Engineering, UCL, London, UK
| | - Freweini Tesfai
- The Griffin Institute, Northwick Park & St Marks’ Hospital, London, UK
- Wellcome/ESPRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
| | - Walaa Ghamrawi
- The Griffin Institute, Northwick Park & St Marks’ Hospital, London, UK
| | - Evangelos Mazomenos
- Wellcome/ESPRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Medical Physics and Biomedical Engineering, UCL, London, UK
| | - Nathan Curtis
- Department of General Surgey, Dorset County Hospital NHS Foundation Trust, Dorchester, UK
| | - Justin W Collins
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
| | - Ashwin Sridhar
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
| | - John Kelly
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
| | - Danail Stoyanov
- Wellcome/ESPRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Computer Science, UCL, London, UK
| | - Nader K Francis
- The Griffin Institute, Northwick Park & St Marks’ Hospital, London, UK
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- Yeovil District Hospital, Somerset Foundation NHS Trust, Yeovil, Somerset, UK
| |
Collapse
|
5
|
Aghazadeh F, Zheng B, Tavakoli M, Rouhani H. Surgical tooltip motion metrics assessment using virtual marker: an objective approach to skill assessment for minimally invasive surgery. Int J Comput Assist Radiol Surg 2023; 18:2191-2202. [PMID: 37597089 DOI: 10.1007/s11548-023-03007-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2022] [Accepted: 07/19/2023] [Indexed: 08/21/2023]
Abstract
PURPOSE Surgical skill assessment has primarily been performed using checklists or rating scales, which are prone to bias and subjectivity. To tackle this shortcoming, assessment of surgical tool motion can be implemented to objectively classify skill levels. Due to the challenges involved in motion tracking of surgical tooltips in minimally invasive surgeries, formerly used assessment approaches may not be feasible for real-world skill assessment. We proposed an assessment approach based on the virtual marker on surgical tooltips to derive the tooltip's 3D position and introduced a novel metric for surgical skill assessment. METHODS We obtained the 3D tooltip position based on markers placed on the tool handle. Then, we derived tooltip motion metrics to identify the metrics differentiating the skill levels for objective surgical skill assessment. We proposed a new tooltip motion metric, i.e., motion inconsistency, that can assess the skill level, and also can evaluate the stage of skill learning. In this study, peg transfer, dual transfer, and rubber band translocation tasks were included, and nine novices, five surgical residents and five attending general surgeons participated. RESULTS Our analyses showed that tooltip path length (p [Formula: see text] 0.007) and path length along the instrument axis (p [Formula: see text] 0.014) differed across the three skill levels in all the tasks and decreased by skill level. Tooltip motion inconsistency showed significant differences among the three skill levels in the dual transfer (p [Formula: see text] 0.025) and the rubber band translocation tasks (p [Formula: see text] 0.021). Lastly, bimanual dexterity differed across the three skill levels in all the tasks (p [Formula: see text] 0.012) and increased by skill level. 
CONCLUSION Depth perception ability (indicated by shorter tooltip path lengths along the instrument axis), bimanual dexterity, tooltip motion consistency, and economical tooltip movements (shorter tooltip path lengths) are related to surgical skill. Our findings can contribute to objective surgical skill assessment, reducing subjectivity, bias, and associated costs.
Collapse
Affiliation(s)
- Farzad Aghazadeh
- Department of Mechanical Engineering, 10-390 Donadeo Innovation Centre for Engineering, University of Alberta, 9211-116 Street NW, Edmonton, AB, T6G 1H9, Canada
| | - Bin Zheng
- Department of Surgery, University of Alberta, Edmonton, AB, Canada
| | - Mahdi Tavakoli
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Canada
| | - Hossein Rouhani
- Department of Mechanical Engineering, 10-390 Donadeo Innovation Centre for Engineering, University of Alberta, 9211-116 Street NW, Edmonton, AB, T6G 1H9, Canada.
| |
Collapse
|
6
|
Abreu AA, Rail B, Farah E, Alterio RE, Scott DJ, Sankaranarayanan G, Zeh HJ, Polanco PM. Baseline performance in a robotic virtual reality platform predicts rate of skill acquisition in a proficiency-based curriculum: a cohort study of surgical trainees. Surg Endosc 2023; 37:8804-8809. [PMID: 37603102 DOI: 10.1007/s00464-023-10372-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2023] [Accepted: 07/30/2023] [Indexed: 08/22/2023]
Abstract
BACKGROUND Residency programs must prepare to train the next generation of surgeons on the robotic platform. The purpose of this study was to determine if baseline skills of residents on a virtual reality (VR) robotic simulator before intern year predicted future performance in a proficiency-based curriculum. METHODS Across two academic years, 21 general surgery PGY-1s underwent the robotic surgery boot camp at the University of Texas Southwestern. During boot camp, subjects completed five previously validated VR tasks, and their performance metrics (score, time, and economy of motion [EOM]) were extracted retrospectively from their Intuitive learning accounts. The same metrics were assessed during their residency until they reached previously validated proficiency benchmarks. Outcomes were defined as the score at proficiency, attempts to reach proficiency, and time to proficiency. Spearman's rho and Mann-Whitney U tests were used; median (IQR) was reported. Significance level was set at p < 0.05. RESULTS Twenty-one residents completed at least three out of the five boot camp tasks and achieved proficiency in the former during residency. The median average score at boot camp was 12.3 (IQR: 5.14-18.5). The median average EOM at boot camp was 599.58 cm (IQR: 529.64-676.60). The average score at boot camp significantly correlated with lower time to achieve proficiency (p < 0.05). EOM at boot camp showed a significant correlation with attempts to proficiency and time to proficiency (p < 0.01). Residents with an average baseline EOM below the median showed a significant difference in attempts to proficiency (p < 0.05) and time to proficiency (p < 0.05) compared to those with EOMs above or equal to the median. CONCLUSION Residents with an innate ability to perform tasks with better EOM may acquire robotic surgery skills faster. Future investigators could explore how these innate differences impact performance throughout residency.
Collapse
Affiliation(s)
- Andres A Abreu
- Division of Surgical Oncology, Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390, USA
| | - Benjamin Rail
- Division of Surgical Oncology, Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390, USA
| | - Emile Farah
- Division of Surgical Oncology, Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390, USA
| | - Rodrigo E Alterio
- Division of Surgical Oncology, Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390, USA
| | - Daniel J Scott
- Division of Surgical Oncology, Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390, USA
| | - Ganesh Sankaranarayanan
- Division of Surgical Oncology, Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390, USA
| | - Herbert J Zeh
- Division of Surgical Oncology, Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390, USA
| | - Patricio M Polanco
- Division of Surgical Oncology, Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390, USA.
| |
Collapse
|
7
|
Liu Z, Hitchcock DB, Singapogu RB. Cannulation Skill Assessment Using Functional Data Analysis. IEEE J Biomed Health Inform 2023; 27:4512-4523. [PMID: 37310836 PMCID: PMC10519736 DOI: 10.1109/jbhi.2023.3283188] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
OBJECTIVE A clinician's operative skill-the ability to safely and effectively perform a procedure-directly impacts patient outcomes and well-being. Therefore, it is necessary to accurately assess skill progression during medical training as well as develop methods to most efficiently train healthcare professionals. METHODS In this study, we explore whether time-series needle angle data recorded during cannulation on a simulator can be analyzed using functional data analysis methods to (1) identify skilled versus unskilled performance and (2) relate angle profiles to degree of success of the procedure. RESULTS Our methods successfully differentiated between types of needle angle profiles. In addition, the identified profile types were associated with degrees of skilled and unskilled behavior of subjects. Furthermore, the types of variability in the dataset were analyzed, providing particular insight into the overall range of needle angles used as well as the rate of change of angle as cannulation progressed in time. Finally, cannulation angle profiles also demonstrated an observable correlation with degree of cannulation success, a metric that is closely related to clinical outcome. CONCLUSION In summary, the methods presented here enable rich assessment of clinical skill since the functional (i.e., dynamic) nature of the data is duly considered.
Collapse
|
8
|
Liu Z, Bible J, Petersen L, Zhang Z, Roy-Chaudhury P, Singapogu R. Relating process and outcome metrics for meaningful and interpretable cannulation skill assessment: A machine learning paradigm. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 236:107429. [PMID: 37119772 PMCID: PMC10291517 DOI: 10.1016/j.cmpb.2023.107429] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/02/2022] [Revised: 02/06/2023] [Accepted: 02/15/2023] [Indexed: 05/21/2023]
Abstract
BACKGROUND AND OBJECTIVES The quality of healthcare delivery depends directly on the skills of clinicians. For patients on hemodialysis, medical errors or injuries caused during cannulation can lead to adverse outcomes, including potential death. To promote objective skill assessment and effective training, we present a machine learning approach, which utilizes a highly-sensorized cannulation simulator and a set of objective process and outcome metrics. METHODS In this study, 52 clinicians were recruited to perform a set of pre-defined cannulation tasks on the simulator. Based on data collected by sensors during their task performance, the feature space was then constructed based on force, motion, and infrared sensor data. Following this, three machine learning models- support vector machine (SVM), support vector regression (SVR), and elastic net (EN)- were constructed to relate the feature space to objective outcome metrics. Our models utilize classification based on the conventional skill classification labels as well as a new method that represents skill on a continuum. RESULTS With less than 5% of trials misplaced by two classes, the SVM model was effective in predicting skill based on the feature space. In addition, the SVR model effectively places both skill and outcome on a fine-grained continuum (versus discrete divisions) that is representative of reality. As importantly, the elastic net model enabled the identification of a set of process metrics that highly impact outcomes of the cannulation task, including smoothness of motion, needle angles, and pinch forces. CONCLUSIONS The proposed cannulation simulator, paired with machine learning assessment, demonstrates definite advantages over current cannulation training practices. The methods presented here can be adopted to drastically increase the effectiveness of skill assessment and training, thereby potentially improving clinical outcomes of hemodialysis treatment.
Collapse
Affiliation(s)
- Zhanhe Liu
- Department of Bioengineering, Clemson University, 301 Rhodes Research Center, Clemson, 29634, SC, USA
| | - Joe Bible
- School of Mathematical and Statistical Sciences, Clemson University, O-110 Martin Hall, Clemson, 29634, SC, USA
| | - Lydia Petersen
- Department of Bioengineering, Clemson University, 301 Rhodes Research Center, Clemson, 29634, SC, USA
| | - Ziyang Zhang
- Department of Bioengineering, Clemson University, 301 Rhodes Research Center, Clemson, 29634, SC, USA
| | - Prabir Roy-Chaudhury
- UNC Kidney Center, University of North Carolina, Chapel Hill, NC, 28144, USA; (Bill Hefner) VA Medical Center, Salisbury, NC, 28144, USA
| | - Ravikiran Singapogu
- Department of Bioengineering, Clemson University, 301 Rhodes Research Center, Clemson, 29634, SC, USA.
| |
Collapse
|
9
|
Zha Y, Xue C, Liu Y, Ni J, De La Fuente JM, Cui D. Artificial intelligence in theranostics of gastric cancer, a review. MEDICAL REVIEW (2021) 2023; 3:214-229. [PMID: 37789960 PMCID: PMC10542883 DOI: 10.1515/mr-2022-0042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/04/2022] [Accepted: 04/26/2023] [Indexed: 10/05/2023]
Abstract
Gastric cancer (GC) is one of the commonest cancers with high morbidity and mortality in the world. How to realize precise diagnosis and therapy of GC owns great clinical requirement. In recent years, artificial intelligence (AI) has been actively explored to apply to early diagnosis and treatment and prognosis of gastric carcinoma. Herein, we review recent advance of AI in early screening, diagnosis, therapy and prognosis of stomach carcinoma. Especially AI combined with breath screening early GC system improved 97.4 % of early GC diagnosis ratio, AI model on stomach cancer diagnosis system of saliva biomarkers obtained an overall accuracy of 97.18 %, specificity of 97.44 %, and sensitivity of 96.88 %. We also discuss concept, issues, approaches and challenges of AI applied in stomach cancer. This review provides a comprehensive view and roadmap for readers working in this field, with the aim of pushing application of AI in theranostics of stomach cancer to increase the early discovery ratio and curative ratio of GC patients.
Collapse
Affiliation(s)
- Yiqian Zha
- Institute of Nano Biomedicine and Engineering, Shanghai Engineering Research Center for Intelligent Diagnosis and Treatment Instrument, School of Sensing Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- National Engineering Research Center for Nanotechnology, Shanghai, China
| | - Cuili Xue
- Institute of Nano Biomedicine and Engineering, Shanghai Engineering Research Center for Intelligent Diagnosis and Treatment Instrument, School of Sensing Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- National Engineering Research Center for Nanotechnology, Shanghai, China
| | - Yanlei Liu
- Institute of Nano Biomedicine and Engineering, Shanghai Engineering Research Center for Intelligent Diagnosis and Treatment Instrument, School of Sensing Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- National Engineering Research Center for Nanotechnology, Shanghai, China
| | - Jian Ni
- Institute of Nano Biomedicine and Engineering, Shanghai Engineering Research Center for Intelligent Diagnosis and Treatment Instrument, School of Sensing Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- National Engineering Research Center for Nanotechnology, Shanghai, China
| | | | - Daxiang Cui
- Institute of Nano Biomedicine and Engineering, Shanghai Engineering Research Center for Intelligent Diagnosis and Treatment Instrument, School of Sensing Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- National Engineering Research Center for Nanotechnology, Shanghai, China
| |
Collapse
|
10
|
Aghazadeh F, Zheng B, Tavakoli M, Rouhani H. Motion Smoothness-Based Assessment of Surgical Expertise: The Importance of Selecting Proper Metrics. SENSORS (BASEL, SWITZERLAND) 2023; 23:3146. [PMID: 36991855 PMCID: PMC10057623 DOI: 10.3390/s23063146] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/02/2023] [Revised: 03/07/2023] [Accepted: 03/12/2023] [Indexed: 06/19/2023]
Abstract
The smooth movement of hand/surgical instruments is considered an indicator of skilled, coordinated surgical performance. Jerky surgical instrument movements or hand tremors can cause unwanted damages to the surgical site. Different methods have been used in previous studies for assessing motion smoothness, causing conflicting results regarding the comparison among surgical skill levels. We recruited four attending surgeons, five surgical residents, and nine novices. The participants conducted three simulated laparoscopic tasks, including peg transfer, bimanual peg transfer, and rubber band translocation. Tooltip motion smoothness was computed using the mean tooltip motion jerk, logarithmic dimensionless tooltip motion jerk, and 95% tooltip motion frequency (originally proposed in this study) to evaluate their capability of surgical skill level differentiation. The results revealed that logarithmic dimensionless motion jerk and 95% motion frequency were capable of distinguishing skill levels, indicated by smoother tooltip movements observed in high compared to low skill levels. Contrarily, mean motion jerk was not able to distinguish the skill levels. Additionally, 95% motion frequency was less affected by the measurement noise since it did not require the calculation of motion jerk, and 95% motion frequency and logarithmic dimensionless motion jerk yielded a better motion smoothness assessment outcome in distinguishing skill levels than mean motion jerk.
Collapse
Affiliation(s)
- Farzad Aghazadeh
- Department of Mechanical Engineering, University of Alberta, Edmonton, AB T6G 1H9, Canada;
| | - Bin Zheng
- Department of Surgery, University of Alberta, Edmonton, AB T6G 2B7, Canada
| | - Mahdi Tavakoli
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 2R3, Canada
| | - Hossein Rouhani
- Department of Mechanical Engineering, University of Alberta, Edmonton, AB T6G 1H9, Canada;
| |
Collapse
|
11
|
Moglia A, Georgiou K, Morelli L, Toutouzas K, Satava RM, Cuschieri A. Breaking down the silos of artificial intelligence in surgery: glossary of terms. Surg Endosc 2022; 36:7986-7997. [PMID: 35729406 PMCID: PMC9613746 DOI: 10.1007/s00464-022-09371-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Accepted: 05/28/2022] [Indexed: 01/06/2023]
Abstract
BACKGROUND The literature on artificial intelligence (AI) in surgery has advanced rapidly during the past few years. However, the published studies on AI are mostly reported by computer scientists using their own jargon which is unfamiliar to surgeons. METHODS A literature search was conducted in using PubMed following the preferred reporting items for systematic reviews and meta-analyses (PRISMA) statement. The primary outcome of this review is to provide a glossary with definitions of the commonly used AI terms in surgery to improve their understanding by surgeons. RESULTS One hundred ninety-five studies were included in this review, and 38 AI terms related to surgery were retrieved. Convolutional neural networks were the most frequently culled term by the search, accounting for 74 studies on AI in surgery, followed by classification task (n = 62), artificial neural networks (n = 53), and regression (n = 49). Then, the most frequent expressions were supervised learning (reported in 24 articles), support vector machine (SVM) in 21, and logistic regression in 16. The rest of the 38 terms was seldom mentioned. CONCLUSIONS The proposed glossary can be used by several stakeholders. First and foremost, by residents and attending consultant surgeons, both having to understand the fundamentals of AI when reading such articles. Secondly, junior researchers at the start of their career in Surgical Data Science and thirdly experts working in the regulatory sections of companies involved in the AI Business Software as a Medical Device (SaMD) preparing documents for submission to the Food and Drug Administration (FDA) or other agencies for approval.
Affiliation(s)
- Andrea Moglia
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Konstantinos Georgiou
- 1st Propaedeutic Surgical Unit, Hippocrateion Athens General Hospital, Athens Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Luca Morelli
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Department of General Surgery, University of Pisa, Pisa, Italy
- Konstantinos Toutouzas
- 1st Propaedeutic Surgical Unit, Hippocrateion Athens General Hospital, Athens Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Richard M Satava
- Department of Surgery, University of Washington Medical Center, Seattle, WA, USA
- Alfred Cuschieri
- Scuola Superiore Sant'Anna of Pisa, 56214, Pisa, Italy
- Institute for Medical Science and Technology, University of Dundee, Dundee, DD2 1FD, UK

12
An explainable machine learning method for assessing surgical skill in liposuction surgery. Int J Comput Assist Radiol Surg 2022; 17:2325-2336. [PMID: 36167953] [DOI: 10.1007/s11548-022-02739-4]
Abstract
PURPOSE Surgical skill assessment has received growing interest in surgical training and quality control due to its essential role in competency assessment and trainee feedback. However, current assessment methods rarely pair ability evaluation with corresponding guidance for improvement. We aim to validate an explainable surgical skill assessment method that automatically evaluates trainee performance in liposuction surgery and provides visual postoperative and real-time feedback. METHODS In this study, machine learning with a model-agnostic interpretable method based on stroke segmentation was introduced to objectively evaluate surgical skills. We evaluated the method on liposuction surgery datasets consisting of motion and force data for classification tasks. RESULTS Our classifier achieved promising accuracy on clinical and imitation liposuction surgery models, ranging from 89% to 94%. With the help of SHapley Additive exPlanations (SHAP), we explored the operating patterns that distinguish surgeons of varying experience and provided ML-based real-time feedback to surgeons with weaker skills. CONCLUSION Our results demonstrate the strength of explainable machine learning methods for objective surgical skill assessment. We believe that the interpretable machine learning model proposed in this article can improve the evaluation and training of liposuction surgery and provide objective assessment and training guidance for other procedures.
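The abstract above leans on SHAP's additive feature attributions, but neither the classifier nor the feature set is given here. As a minimal illustration of the underlying idea, the Python sketch below computes exact Shapley values by brute-force enumeration for a toy additive "skill score" over hypothetical stroke features (the feature names and weights are invented for illustration):

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley attribution: each feature's average marginal
    contribution over all feature orderings (tractable only for a
    handful of features; SHAP approximates this efficiently)."""
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[f] += weight * (value_fn(set(subset) | {f}) - value_fn(set(subset)))
    return phi

# Toy additive "skill score" over hypothetical stroke features.
contrib = {"stroke_speed": 0.3, "path_smoothness": 0.5, "force_variance": -0.2}

def skill_score(subset):
    return sum(contrib[f] for f in subset)

phi = shapley_values(list(contrib), skill_score)
```

For an additive score the attribution simply recovers each feature's weight; the value of SHAP is that it estimates the same quantity for real, non-additive classifiers such as the one in the paper.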
13
Moglia A, Morelli L, D'Ischia R, Fatucchi LM, Pucci V, Berchiolli R, Ferrari M, Cuschieri A. Ensemble deep learning for the prediction of proficiency at a virtual simulator for robot-assisted surgery. Surg Endosc 2022; 36:6473-6479. [PMID: 35020053] [PMCID: PMC9402513] [DOI: 10.1007/s00464-021-08999-6]
Abstract
BACKGROUND Artificial intelligence (AI) has the potential to enhance patient safety in surgery, and all its aspects, including education and training, will derive considerable benefit from AI. In the present study, deep-learning models were used to predict the rates of proficiency acquisition in robot-assisted surgery (RAS), thereby providing surgical program directors with information on trainees' innate ability levels to facilitate the implementation of flexible, personalized training. METHODS 176 medical students, without prior experience with surgical simulators, were trained to reach proficiency in five tasks on a virtual simulator for RAS. Ensemble deep neural network (DNN) models were developed and compared with other ensemble AI algorithms, i.e., random forests and gradient boosted regression trees (GBRT). RESULTS DNN models achieved a higher accuracy than random forests and GBRT in predicting time to proficiency: 0.84 vs. 0.70 and 0.77, respectively (Peg board 2), 0.83 vs. 0.79 and 0.78 (Ring walk 2), 0.81 vs. 0.81 and 0.80 (Match board 1), 0.79 vs. 0.75 and 0.71 (Ring and rail 2), and 0.87 vs. 0.86 and 0.84 (Thread the rings 2). Ensemble DNN models outperformed random forests and GBRT in predicting the number of attempts to proficiency, with an accuracy of 0.87 vs. 0.86 and 0.83, respectively (Peg board 2), 0.89 vs. 0.88 and 0.89 (Ring walk 2), 0.91 vs. 0.89 and 0.89 (Match board 1), 0.89 vs. 0.87 and 0.83 (Ring and rail 2), and 0.96 vs. 0.94 and 0.94 (Thread the rings 2). CONCLUSIONS Ensemble DNN models can identify at an early stage the acquisition rates of surgical technical proficiency of trainees and identify those struggling to reach the required proficiency level.
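The compared ensembles (DNNs, random forests, GBRT) all rest on the same core idea: aggregate many base predictors trained on resampled data. A minimal bagging sketch in pure Python, with a 1-nearest-neighbour regressor as a stand-in base learner and invented practice-hours data (these are not the study's models or data):

```python
import random

def train_1nn(xs, ys):
    """Base learner: 1-nearest-neighbour regressor (a stand-in for one
    ensemble member; the paper's base models are deep networks and trees)."""
    data = list(zip(xs, ys))
    def predict(x):
        return min(data, key=lambda p: abs(p[0] - x))[1]
    return predict

def bagged_ensemble(xs, ys, n_models=25, seed=0):
    """Train base learners on bootstrap resamples and average their
    predictions -- the aggregation step shared by all three ensembles."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        idx = [rng.randrange(len(xs)) for _ in range(len(xs))]  # bootstrap sample
        models.append(train_1nn([xs[i] for i in idx], [ys[i] for i in idx]))
    return lambda x: sum(m(x) for m in models) / len(models)

# Hypothetical data: hours of simulator practice -> attempts to proficiency.
hours = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
attempts = [40, 35, 31, 28, 25, 22, 20, 18]
predict = bagged_ensemble(hours, attempts)
```

Averaging over bootstrap resamples reduces the variance of the unstable base learner, which is why ensembles of weak models compete with single strong ones.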
Affiliation(s)
- Andrea Moglia
- EndoCAS, Center for Computer Assisted Surgery, University of Pisa, Edificio 102, via Paradisa 2, 56124, Pisa, Italy
- Luca Morelli
- EndoCAS, Center for Computer Assisted Surgery, University of Pisa, Edificio 102, via Paradisa 2, 56124, Pisa, Italy
- General Surgery Unit, Cisanello Teaching Hospital of Pisa, 56124, Pisa, Italy
- Multidisciplinary Center of Robotic Surgery, University Hospital of Pisa, 56124, Pisa, Italy
- Roberto D'Ischia
- General Surgery Unit, Cisanello Teaching Hospital of Pisa, 56124, Pisa, Italy
- Valentina Pucci
- General Surgery Unit, Cisanello Teaching Hospital of Pisa, 56124, Pisa, Italy
- Mauro Ferrari
- EndoCAS, Center for Computer Assisted Surgery, University of Pisa, Edificio 102, via Paradisa 2, 56124, Pisa, Italy
- Vascular Surgery Unit, Cisanello Teaching Hospital of Pisa, 56124, Pisa, Italy
- Alfred Cuschieri
- Scuola Superiore Sant'Anna of Pisa, 56214, Pisa, Italy
- Institute for Medical Science and Technology, University of Dundee, Dundee, DD2 1FD, UK

14
Zheng Y, Ershad M, Fey AM. Toward Correcting Anxious Movements Using Haptic Cues on the Da Vinci Surgical Robot. Proceedings of the IEEE/RAS-EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob) 2022. [PMID: 37408769] [PMCID: PMC10321328] [DOI: 10.1109/biorob52689.2022.9925380]
Abstract
Surgical movements have an important stylistic quality that individuals without formal surgical training can use to identify expertise. In our prior work, we sought to characterize quantitative metrics associated with surgical style and developed a near-real-time detection framework for stylistic deficiencies using a commercial haptic device. In this paper, we implement bimanual stylistic detection on the da Vinci Research Kit (dVRK) and focus on one stylistic deficiency, "Anxious", which may describe movements under stressful conditions. Our goal is to potentially correct these "Anxious" movements by exploring the effects of three different types of haptic cues (time-variant spring, damper, and spring-damper feedback) on performance during a basic surgical training task on the dVRK. Eight subjects were recruited to complete peg transfer tasks using a randomized order of haptic cues, with baseline trials between each task. Overall, all cues led to a significant improvement in economy of volume over baseline; time-variant spring haptic cues additionally led to significant reductions in classified "Anxious" movements and corresponded with significantly lower path length and economy of volume for the non-dominant hand. This work is a first step in evaluating our stylistic detection model on a surgical robot and could lay the groundwork for future methods to actively and adaptively reduce the negative effects of stress in the operating room.
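The paper tests time-variant spring, damper, and spring-damper haptic cues, but the abstract does not reproduce the exact control law. The sketch below uses a common impedance-style formulation; the gains, ramp time, and anchor-position convention are assumptions, not the paper's values:

```python
def haptic_cue_force(pos, vel, anchor, t, k0=40.0, ramp=0.5, b=5.0,
                     mode="spring-damper"):
    """Force cue pulling the tool tip toward a reference (anchor) position.

    mode: 'spring' (stiffness ramps up over `ramp` seconds -- the
    'time-variant spring' cue), 'damper' (opposes velocity), or
    'spring-damper' (both). All gains are illustrative.
    """
    k = k0 * min(t / ramp, 1.0)       # time-variant spring stiffness
    f_spring = -k * (pos - anchor)    # pull toward the anchor
    f_damper = -b * vel               # resist fast motion
    if mode == "spring":
        return f_spring
    if mode == "damper":
        return f_damper
    return f_spring + f_damper
```

The ramp-up avoids a sudden force step when the cue is switched on, which matters for a device physically coupled to the operator's hand.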
Affiliation(s)
- Yi Zheng
- Department of Mechanical Engineering, The University of Texas at Austin, 204 East Dean Keeton Street, Austin, TX 78712, USA
- Marzieh Ershad
- Intuitive Surgical, Inc., 1020 Kifer Road, Sunnyvale, CA 94086, USA
- Ann Majewicz Fey
- Department of Mechanical Engineering, The University of Texas at Austin, 204 East Dean Keeton Street, Austin, TX 78712, USA
- Department of Surgery, UT Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, USA

15
Lam K, Chen J, Wang Z, Iqbal FM, Darzi A, Lo B, Purkayastha S, Kinross JM. Machine learning for technical skill assessment in surgery: a systematic review. NPJ Digit Med 2022; 5:24. [PMID: 35241760] [PMCID: PMC8894462] [DOI: 10.1038/s41746-022-00566-0]
Abstract
Accurate and objective performance assessment is essential for both trainees and certified surgeons. However, existing methods can be time consuming, labor intensive, and subject to bias. Machine learning (ML) has the potential to provide rapid, automated, and reproducible feedback without the need for expert reviewers. We aimed to systematically review the literature and determine the ML techniques used for technical surgical skill assessment and identify challenges and barriers in the field. A systematic literature search, in accordance with the PRISMA statement, was performed to identify studies detailing the use of ML for technical skill assessment in surgery. Of the 1896 studies that were retrieved, 66 studies were included. The most common ML methods used were Hidden Markov Models (HMM, 14/66), Support Vector Machines (SVM, 17/66), and Artificial Neural Networks (ANN, 17/66). 40/66 studies used kinematic data, 19/66 used video or image data, and 7/66 used both. Studies assessed the performance of benchtop tasks (48/66), simulator tasks (10/66), and real-life surgery (8/66). Accuracy rates of over 80% were achieved, although tasks and participants varied between studies. Barriers to progress in the field included a focus on basic tasks, lack of standardization between studies, and lack of datasets. ML has the potential to produce accurate and objective surgical skill assessment through the use of methods including HMM, SVM, and ANN. Future ML-based assessment tools should move beyond the assessment of basic tasks and towards real-life surgery and provide interpretable feedback with clinical value for the surgeon. PROSPERO: CRD42020226071
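Of the methods surveyed, Hidden Markov Models classify a trial by comparing the likelihood of its observation sequence under per-skill-level models. A minimal sketch of the forward algorithm with two hypothetical 2-state models over a discretized motion alphabet (all probabilities are invented for illustration and are not from any reviewed study):

```python
def forward_likelihood(obs, pi, A, B):
    """HMM forward algorithm: P(obs | model), summing over hidden paths.
    pi[i]: initial state prob, A[i][j]: transition prob, B[i][o]: emission prob."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

# Hypothetical 2-state models over a discretized motion alphabet:
# symbol 0 = smooth movement segment, 1 = jerky segment.
expert = dict(pi=[0.9, 0.1], A=[[0.9, 0.1], [0.5, 0.5]],
              B=[[0.9, 0.1], [0.4, 0.6]])
novice = dict(pi=[0.5, 0.5], A=[[0.5, 0.5], [0.3, 0.7]],
              B=[[0.5, 0.5], [0.2, 0.8]])

trial = [0, 0, 1, 0, 0, 0]          # a mostly smooth trial
le = forward_likelihood(trial, **expert)
ln = forward_likelihood(trial, **novice)
label = "expert" if le > ln else "novice"
```

Real systems work in log space to avoid underflow on long kinematic sequences and fit the model parameters from labeled trials rather than hand-setting them.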
Affiliation(s)
- Kyle Lam, Junhong Chen, Zeyu Wang, Fahad M Iqbal, Ara Darzi, Benny Lo, Sanjay Purkayastha, James M Kinross
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK

16
Kirubarajan A, Young D, Khan S, Crasto N, Sobel M, Sussman D. Artificial Intelligence and Surgical Education: A Systematic Scoping Review of Interventions. J Surg Educ 2022; 79:500-515. [PMID: 34756807] [DOI: 10.1016/j.jsurg.2021.09.012]
Abstract
OBJECTIVE To synthesize peer-reviewed evidence related to the use of artificial intelligence (AI) in surgical education. DESIGN We conducted and reported a scoping review according to the standards outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews guideline and the fourth edition of the Joanna Briggs Institute Reviewer's Manual. We systematically searched eight interdisciplinary databases: MEDLINE-Ovid, ERIC, EMBASE, CINAHL, Web of Science: Core Collection, Compendex, Scopus, and IEEE Xplore. Databases were searched from inception until the date of search on April 13, 2021. SETTING/PARTICIPANTS We only examined original, peer-reviewed interventional studies that self-described as AI interventions, focused on medical education, and were relevant to surgical trainees (defined as medical or dental students, postgraduate residents, or surgical fellows) within the title and abstract (see Table 2). Animal, cadaveric, and in vivo studies were not eligible for inclusion. RESULTS After systematically searching eight databases and screening 4255 citations, our scoping review identified 49 studies relevant to artificial intelligence in surgical education. We found diverse interventions related to the evaluation of surgical competency, personalization of surgical education, and improvement of surgical education materials across surgical specialties. Many studies used existing surgical education materials, such as the Objective Structured Assessment of Technical Skills framework or the JHU-ISI Gesture and Skill Assessment Working Set database. Though most studies did not report outcomes related to implementation in medical schools (such as cost-effectiveness analyses or trainee feedback), there are numerous promising interventions. In particular, many studies noted high accuracy in the objective characterization of surgical skill sets. These interventions could be further used to identify at-risk surgical trainees or evaluate teaching methods. CONCLUSIONS There are promising applications for AI in surgical education, particularly for the assessment of surgical competencies, though further evidence is needed regarding implementation and applicability.
Affiliation(s)
- Dylan Young
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada
- Shawn Khan
- Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Noelle Crasto
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada
- Mara Sobel
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada; Institute for Biomedical Engineering, Science and Technology (iBEST) at Ryerson University and St. Michael's Hospital, Toronto, Ontario, Canada
- Dafna Sussman
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada; Institute for Biomedical Engineering, Science and Technology (iBEST) at Ryerson University and St. Michael's Hospital, Toronto, Ontario, Canada; Department of Obstetrics and Gynaecology, University of Toronto, Toronto, Ontario, Canada; The Keenan Research Centre for Biomedical Science, St. Michael's Hospital, Toronto, Ontario, Canada

17
Bellini V, Valente M, Del Rio P, Bignami E. Artificial intelligence in thoracic surgery: a narrative review. J Thorac Dis 2022; 13:6963-6975. [PMID: 35070380] [PMCID: PMC8743413] [DOI: 10.21037/jtd-21-761]
Abstract
Objective The aim of this article is to review the current applications of artificial intelligence in thoracic surgery, from diagnosis and pulmonary disease management to preoperative risk assessment, surgical planning, and outcome prediction. Background Artificial intelligence implementation in healthcare settings is growing rapidly, though its widespread use in clinical practice is still limited. The employment of machine learning algorithms in thoracic surgery is wide-ranging, covering all steps of the clinical pathway. Methods We performed a narrative review of the literature on the Scopus, PubMed, and Cochrane databases, including all relevant studies published in the last ten years, until March 2021. Conclusion Machine learning methods are showing encouraging results across the key issues of thoracic surgery, whether clinical, organizational, or educational. Artificial intelligence-based technologies showed remarkable efficacy in improving the perioperative evaluation of the patient, assisting the decision-making process, enhancing surgical performance, and optimizing operating room scheduling. Still, some concerns remain about data supply, protection, and transparency; thus, further studies and specific consensus guidelines are needed to validate these technologies for daily practice. Keywords Artificial intelligence (AI); thoracic surgery; machine learning; lung resection; perioperative medicine
Affiliation(s)
- Valentina Bellini
- Anesthesiology, Critical Care and Pain Medicine Division, Department of Medicine and Surgery, University of Parma, Parma, Italy
- Marina Valente
- General Surgery Unit, Department of Medicine and Surgery, University of Parma, Parma, Italy
- Paolo Del Rio
- General Surgery Unit, Department of Medicine and Surgery, University of Parma, Parma, Italy
- Elena Bignami
- Anesthesiology, Critical Care and Pain Medicine Division, Department of Medicine and Surgery, University of Parma, Parma, Italy

18
Bilgic E, Gorgy A, Yang A, Cwintal M, Ranjbar H, Kahla K, Reddy D, Li K, Ozturk H, Zimmermann E, Quaiattini A, Abbasgholizadeh-Rahimi S, Poenaru D, Harley JM. Exploring the roles of artificial intelligence in surgical education: A scoping review. Am J Surg 2021; 224:205-216. [PMID: 34865736] [DOI: 10.1016/j.amjsurg.2021.11.023]
Abstract
BACKGROUND Technology-enhanced teaching and learning, including Artificial Intelligence (AI) applications, has started to evolve in surgical education. Hence, the purpose of this scoping review is to explore the current and future roles of AI in surgical education. METHODS Nine bibliographic databases were searched from January 2010 to January 2021. Full-text articles were included if they focused on AI in surgical education. RESULTS Out of 14,008 unique sources of evidence, 93 were included. Out of 93, 84 were conducted in the simulation setting, and 89 targeted technical skills. Fifty-six studies focused on skills assessment/classification, and 36 used multiple AI techniques. Also, increasing sample size, having balanced data, and using AI to provide feedback were major future directions mentioned by authors. CONCLUSIONS AI can help optimize the education of trainees and our results can help educators and researchers identify areas that need further investigation.
Affiliation(s)
- Elif Bilgic, Andrew Gorgy, Alison Yang, Michelle Cwintal, Hamed Ranjbar, Kalin Kahla, Dheeksha Reddy, Kexin Li, Helin Ozturk, Eric Zimmermann
- Department of Surgery, McGill University, Montreal, Quebec, Canada
- Andrea Quaiattini
- Schulich Library of Physical Sciences, Life Sciences, and Engineering, McGill University, Canada; Institute of Health Sciences Education, McGill University, Montreal, Quebec, Canada
- Samira Abbasgholizadeh-Rahimi
- Department of Family Medicine, McGill University, Montreal, Quebec, Canada; Department of Electrical and Computer Engineering, McGill University, Montreal, Canada; Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Canada; Mila Quebec AI Institute, Montreal, Canada
- Dan Poenaru
- Institute of Health Sciences Education, McGill University, Montreal, Quebec, Canada; Department of Pediatric Surgery, McGill University, Canada
- Jason M Harley
- Department of Surgery, McGill University, Montreal, Quebec, Canada; Institute of Health Sciences Education, McGill University, Montreal, Quebec, Canada; Research Institute of the McGill University Health Centre, Montreal, Quebec, Canada; Steinberg Centre for Simulation and Interactive Learning, McGill University, Montreal, Quebec, Canada

19
Ershad M, Rege R, Majewicz Fey A. Adaptive Surgical Robotic Training Using Real-Time Stylistic Behavior Feedback Through Haptic Cues. IEEE Trans Med Robot Bionics 2021; 3:959-969. [PMID: 38250511] [PMCID: PMC10798657] [DOI: 10.1109/tmrb.2021.3124128]
Abstract
Surgical skill directly affects surgical procedure outcomes; thus, effective training is needed to ensure satisfactory results. Many objective assessment metrics have been developed that provide the trainee with descriptive feedback about their performance; however, they often lack feedback on how to improve. The most effective training method is one that is intuitive, easy to understand, personalized to the user, and provided in a timely manner. We propose a framework to enable user-adaptive training using near-real-time detection of performance, based on intuitive styles of surgical movements, and design a haptic feedback framework to assist with correcting styles of movement. We evaluate the ability of three types of force feedback (spring, damping, and spring-plus-damping feedback), computed from prior user positions, to improve different stylistic behaviors of the user during kinematically constrained reaching movement tasks. The results indicate that five of the six styles studied here were improved using at least one of the three types of force feedback. Task performance metrics were compared in the presence of the three types of feedback. Task time was statistically significantly lower when applying spring feedback, compared to the other two types. Path straightness and targeting error were statistically significantly improved when using spring-damping feedback compared to the other two types. This study presents groundwork for adaptive training in robotic surgery based on near-real-time, human-centric models of surgical behavior.
Affiliation(s)
- Marzieh Ershad
- Department of Electrical Engineering, University of Texas at Dallas, Richardson, TX 75080, USA
- Robert Rege
- Department of Surgery, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Ann Majewicz Fey
- Department of Mechanical Engineering, University of Texas at Austin, Austin, TX 78712, USA
- Department of Surgery, UT Southwestern Medical Center, Dallas, TX 75390, USA

20
Moglia A, Georgiou K, Georgiou E, Satava RM, Cuschieri A. A systematic review on artificial intelligence in robot-assisted surgery. Int J Surg 2021; 95:106151. [PMID: 34695601] [DOI: 10.1016/j.ijsu.2021.106151]
Abstract
BACKGROUND Despite the extensive published literature on the significant potential of artificial intelligence (AI), there are no reports on its efficacy in improving patient safety in robot-assisted surgery (RAS). The purposes of this work are to systematically review the published literature on AI in RAS, and to identify and discuss current limitations and challenges. MATERIALS AND METHODS A literature search was conducted on PubMed, Web of Science, Scopus, and IEEE Xplore according to the PRISMA 2020 statement. Eligible articles were peer-reviewed studies published in English from January 1, 2016 to December 31, 2020. AMSTAR 2 was used for quality assessment. Risk of bias was evaluated with the Newcastle-Ottawa quality assessment tool. Data from the studies were presented in tables using the SPIDER tool. RESULTS Thirty-five publications, representing 3436 patients, met the search criteria and were included in the analysis. The selected reports concern: motion analysis (n = 17), urology (n = 12), gynecology (n = 1), other specialties (n = 1), training (n = 3), and tissue retraction (n = 1). Precision for surgical tool detection varied from 76.0% to 90.6%. Mean absolute error in predicting urinary continence after robot-assisted radical prostatectomy (RARP) ranged from 85.9 to 134.7 days. Accuracy in predicting length of stay after RARP was 88.5%. Accuracy in recognizing the next surgical task during robot-assisted partial nephrectomy (RAPN) reached 75.7%. CONCLUSION The reviewed studies were of low quality. The findings are limited by the small size of the datasets, and comparison between studies on the same topic was restricted by the heterogeneity of algorithms and datasets. There is no proof that AI can currently identify the critical tasks of RAS operations, which determine patient outcome. There is an urgent need for studies on large datasets and for external validation of the AI algorithms used. Furthermore, the results should be transparent and meaningful to surgeons, enabling them to inform patients in layman's terms. REGISTRATION Review Registry Unique Identifying Number: reviewregistry1225.
Affiliation(s)
- Andrea Moglia
- EndoCAS, Center for Computer Assisted Surgery, University of Pisa, 56124, Pisa, Italy
- 1st Propaedeutic Surgical Unit, Hippocrateion Athens General Hospital, Athens Medical School, National and Kapodistrian University of Athens, Greece
- MPLSC, Athens Medical School, National and Kapodistrian University of Athens, Greece
- Department of Surgery, University of Washington Medical Center, Seattle, WA, United States
- Scuola Superiore Sant'Anna of Pisa, 56214, Pisa, Italy
- Institute for Medical Science and Technology, University of Dundee, Dundee, DD2 1FD, United Kingdom

21
Battaglia E, Boehm J, Zheng Y, Jamieson AR, Gahan J, Majewicz Fey A. Rethinking Autonomous Surgery: Focusing on Enhancement over Autonomy. Eur Urol Focus 2021; 7:696-705. [PMID: 34246619] [PMCID: PMC10394949] [DOI: 10.1016/j.euf.2021.06.009]
Abstract
CONTEXT As robot-assisted surgery is increasingly used in surgical care, the engineering research effort towards surgical automation has also increased significantly. Automation promises to enhance surgical outcomes, offload mundane or repetitive tasks, and improve workflow. However, we must ask an important question: should autonomous surgery be our long-term goal? OBJECTIVE To provide an overview of the engineering requirements for automating control systems, summarize technical challenges in automated robotic surgery, and review sensing and modeling techniques for capturing real-time human behaviors for integration into the robotic control loop for enhanced shared or collaborative control. EVIDENCE ACQUISITION We performed a nonsystematic search of the English-language literature up to March 25, 2021. We included original studies related to automation in robot-assisted laparoscopic surgery and human-centered sensing and modeling. EVIDENCE SYNTHESIS We identified four comprehensive review papers that present techniques for automating portions of surgical tasks. Sixteen studies relate to human-centered sensing technologies and 23 to computer vision and/or advanced artificial intelligence or machine learning methods for skill assessment. Twenty-two studies evaluate or review the role of haptic or adaptive guidance during some learning task, with only a few applied to robotic surgery. Finally, only three studies discuss the role of some form of training in patient outcomes, and none evaluated the effects of full or semi-autonomy on patient outcomes. CONCLUSIONS Rather than focusing on autonomy, which removes the surgeon from the loop, research centered on more fully understanding the surgeon's behaviors, goals, and limitations could facilitate a superior class of collaborative surgical robots that are more effective and intelligent than automation alone. PATIENT SUMMARY We reviewed the literature for studies on automation in surgical robotics and on modeling of human behavior in human-machine interaction. The main application is to enhance the ability of surgical robotic systems to collaborate more effectively and intelligently with human surgeon operators.
Affiliation(s)
- Edoardo Battaglia, Jacob Boehm, Yi Zheng
- Department of Mechanical Engineering, University of Texas at Austin, Austin, TX, USA
- Andrew R Jamieson
- Lyda Hill Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, USA
- Jeffrey Gahan
- Department of Urology, UT Southwestern Medical Center, Dallas, TX, USA
- Ann Majewicz Fey
- Department of Mechanical Engineering, University of Texas at Austin, Austin, TX, USA

22
Brodie A, Dai N, Teoh JYC, Decaestecker K, Dasgupta P, Vasdev N. Artificial intelligence in urological oncology: An update and future applications. Urol Oncol 2021; 39:379-399. [PMID: 34024704] [DOI: 10.1016/j.urolonc.2021.03.012]
Abstract
There continue to be rapid developments and research in the field of artificial intelligence (AI) in urological oncology worldwide. In this review we discuss the basics of AI, the application of AI per tumour group (renal, prostate, and bladder cancer), and the application of AI in robotic urological surgery. We also discuss future applications of AI under development and their potential benefits for patients in urological oncology.
Affiliation(s)
- Andrew Brodie
- Addenbrooke's Hospital, Cambridge University Hospitals NHS Foundation Trust, Cambridge, United Kingdom
- Nick Dai
- Addenbrooke's Hospital, Cambridge University Hospitals NHS Foundation Trust, Cambridge, United Kingdom
- Jeremy Yuen-Chun Teoh
- S.H. Ho Urology Centre, Department of Surgery, The Chinese University of Hong Kong, Hong Kong, China
- Prokar Dasgupta
- Faculty of Life Sciences and Medicine, King's College London, London, United Kingdom
- Nikhil Vasdev
- Hertfordshire and Bedfordshire Urological Cancer Centre, Department of Urology, Lister Hospital, Stevenage, United Kingdom; School of Medicine and Life Sciences, University of Hertfordshire, Hatfield, United Kingdom
23
Murali B, Belvroy VM, Pandey S, Bismuth J, Byrne MD, O'Malley MK. Velocity-Domain Motion Quality Measures for Surgical Performance Evaluation and Feedback. J Med Device 2021. [DOI: 10.1115/1.4049310] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Abstract
Endovascular navigation proficiency requires a significant amount of manual dexterity from surgeons. Objective performance measures derived from endovascular tool tip kinematics have been shown to correlate with expertise; however, such metrics have not yet been used during training as a basis for real-time performance feedback. This paper evaluates a set of velocity-based performance measures derived from guidewire motion to determine their suitability for online performance evaluation and feedback. We evaluated the endovascular navigation skill of 75 participants using three metrics (spectral arc length, average velocity, and idle time) as they steered tools to anatomical targets using a virtual reality simulator. First, we examined the effect of navigation task and experience level on performance and found that novice performance was significantly different from intermediate and expert performance. Then we computed correlations between measures calculated online and spectral arc length, our “gold standard” metric, calculated offline (at the end of the trial, using data from the entire trial). Our results suggest that average velocity and idle time calculated online are strongly and consistently correlated with spectral arc length computed offline, which was not the case when comparing spectral arc length computed online and offline. Average velocity and idle time, both time-domain based performance measures, are therefore more suitable measures than spectral arc length, a frequency-domain based metric, to use as the basis of online performance feedback. Future work is needed to determine how to best provide real-time performance feedback to endovascular surgery trainees based on these metrics.
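All three metrics can be computed from tool-tip kinematics alone. A minimal sketch, assuming position samples at a fixed rate; the 10% idle-speed threshold and 10 Hz spectral cutoff are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def motion_quality_metrics(positions, fs, idle_frac=0.1, cutoff_hz=10.0):
    """Velocity-domain metrics from a tool-tip trajectory.

    positions: (N, 3) array of tip positions sampled at fs Hz.
    Returns (spectral arc length, average velocity, idle-time fraction).
    """
    # Tip speed profile from finite differences of position.
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) * fs

    # Average velocity: mean tip speed over the trial.
    avg_velocity = speed.mean()

    # Idle time: fraction of samples where the tool is nearly stationary.
    idle_time = np.mean(speed < idle_frac * speed.max())

    # Spectral arc length (SPARC-style smoothness): arc length of the
    # normalized magnitude spectrum of the speed profile. More negative
    # values indicate a less smooth, more fragmented movement.
    n_fft = 4 * len(speed)
    spectrum = np.abs(np.fft.rfft(speed, n=n_fft))
    spectrum /= spectrum.max()
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    keep = freqs <= cutoff_hz
    mag = spectrum[keep]
    f = freqs[keep] / freqs[keep][-1]   # normalize frequency axis to [0, 1]
    sal = -np.sum(np.sqrt(np.diff(f) ** 2 + np.diff(mag) ** 2))

    return sal, avg_velocity, idle_time
```

The paper's distinction falls out of this structure: average velocity and idle time are running time-domain statistics that update cheaply sample by sample, whereas the spectral arc length needs a frequency transform over the whole trial, which is why it correlates poorly with itself when truncated to an online window.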
Affiliation(s)
- Barathwaj Murali
- Department of Mechanical Engineering, Rice University, Houston, TX 77005
- Viony M. Belvroy
- DeBakey Heart & Vascular Center, Houston Methodist Hospital, Houston, TX 77030
- Shivam Pandey
- Department of Psychological Sciences, Rice University, Houston, TX 77005
- Jean Bismuth
- DeBakey Heart & Vascular Center, Houston Methodist Hospital, Houston, TX 77030
- Michael D. Byrne
- Department of Psychological Sciences, Rice University, Houston, TX 77005
- Marcia K. O'Malley
- Department of Mechanical Engineering, Rice University, Houston, TX 77005
24
Ma R, Vanstrum EB, Lee R, Chen J, Hung AJ. Machine learning in the optimization of robotics in the operative field. Curr Opin Urol 2020; 30:808-816. [PMID: 32925312 PMCID: PMC7735438 DOI: 10.1097/mou.0000000000000816] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
Abstract
PURPOSE OF REVIEW The increasing use of robotics in urologic surgery facilitates the collection of 'big data'. Machine learning enables computers to infer patterns from large datasets. This review highlights recent findings and applications of machine learning in robot-assisted urologic surgery. RECENT FINDINGS Machine learning has been used in surgical performance assessment and skill training, surgical candidate selection, and autonomous surgery. Automated segmentation and classification of surgical data have been explored as stepping-stones toward real-time surgical assessment and, ultimately, improved surgical safety and quality. Predictive machine learning models have been created to guide appropriate surgical candidate selection, whereas intraoperative machine learning algorithms have been designed to provide 3-D augmented reality and real-time surgical margin checks. Reinforcement-learning strategies have been utilized in autonomous robotic surgery, and combining expert demonstrations with trial-and-error learning by the robot itself is a promising approach towards autonomy. SUMMARY Robot-assisted urologic surgery coupled with machine learning is a burgeoning area of study with exciting potential. However, further validation and clinical trials are required to ensure the safety and efficacy of incorporating machine learning into surgical practice.
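The "expert demonstrations plus trial-and-error" recipe mentioned in the findings can be illustrated on a toy problem: warm-start a value table from demonstrations, then refine it with reinforcement learning. This is purely a schematic sketch on a 1-D grid task, not any surgical system; all task parameters are invented for illustration:

```python
import numpy as np

# Toy 1-D reaching task: states 0..N-1, actions 0 (left) and 1 (right).
N, GOAL = 10, 9
rng = np.random.default_rng(0)

# 1) Learning from demonstrations: warm-start the Q-table from expert
#    state-action pairs (the expert always moves toward the goal).
Q = np.zeros((N, 2))
Q[:GOAL, 1] = 1.0

# 2) Trial-and-error refinement: epsilon-greedy Q-learning on top of the
#    demonstration-initialized values.
alpha, gamma, eps = 0.5, 0.9, 0.1
for _ in range(300):
    s = 0
    while s != GOAL:
        a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(max(s + (1 if a == 1 else -1), 0), N - 1)
        r = 1.0 if s2 == GOAL else -0.01  # small step cost, goal reward
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)  # greedy policy after combined learning
```

The demonstration warm-start is what makes the combination attractive: exploration begins from expert-like behavior instead of random flailing, which matters when trial-and-error on real tissue is not an option.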
Affiliation(s)
- Runzhuo Ma
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, California, USA
25
Anh NX, Nataraja RM, Chauhan S. Towards near real-time assessment of surgical skills: A comparison of feature extraction techniques. Comput Methods Programs Biomed 2020; 187:105234. [PMID: 31794913 DOI: 10.1016/j.cmpb.2019.105234] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/13/2019] [Revised: 10/31/2019] [Accepted: 11/18/2019] [Indexed: 05/22/2023]
Abstract
BACKGROUND AND OBJECTIVE Surgical skill assessment aims to objectively evaluate trainee surgeons and provide constructive feedback. Conventional methods require direct observation and assessment by surgical experts, an approach that is both unscalable and subjective. The recent introduction of surgical robotic systems into the operating room has enabled automated evaluation of trainees' expertise levels on representative maneuvers by applying machine learning to motion analysis. The feature extraction technique plays a critical role in such an automated surgical skill assessment system. METHODS We present a direct comparison of nine well-known feature extraction techniques — statistical features, principal component analysis, discrete Fourier/cosine transforms, codebooks, deep learning models, and auto-encoders — for automated surgical skills evaluation. Towards near real-time evaluation, we also investigate the effect of the time interval on classification accuracy and efficiency. RESULTS We validated the study on the benchmark JIGSAWS robotic surgical training dataset. Accuracies of 95.63%, 90.17%, and 90.26% with principal component analysis and 96.84%, 92.75%, and 95.36% with a deep convolutional neural network for suturing, knot tying, and needle passing, respectively, highlight the effectiveness of these two techniques in extracting the most discriminative features across surgical skill levels. CONCLUSIONS This study contributes toward the development of an online, automated, and efficient surgical skills assessment technique.
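Two of the compared extractors are simple enough to sketch directly. A minimal illustration, assuming each trial's kinematics have been flattened into a fixed-length window (shapes and function names are illustrative, not the paper's implementation):

```python
import numpy as np

def pca_features(windows, n_components=10):
    """Project flattened kinematic windows onto their top principal
    components -- the strongest classical extractor in the comparison.

    windows: (n_trials, window_len * n_channels) array of motion data.
    """
    X = windows - windows.mean(axis=0)           # center each column
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T               # (n_trials, n_components)

def statistical_features(windows):
    """Per-trial summary statistics, the simplest extractor compared."""
    return np.column_stack([windows.mean(axis=1), windows.std(axis=1),
                            windows.min(axis=1), windows.max(axis=1)])
```

Either feature matrix can then be fed to an off-the-shelf classifier to label trials as novice, intermediate, or expert; the paper's near real-time question amounts to how short the window can be before such features stop discriminating.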
Affiliation(s)
- Nguyen Xuan Anh
- Department of Mechanical and Aerospace Engineering, Monash University, Melbourne, Australia
- Ramesh Mark Nataraja
- Department of Surgical Simulation, Monash Children's Hospital, Melbourne, Australia
- Sunita Chauhan
- Department of Mechanical and Aerospace Engineering, Monash University, Melbourne, Australia
26
27
Andras I, Mazzone E, van Leeuwen FWB, De Naeyer G, van Oosterom MN, Beato S, Buckle T, O'Sullivan S, van Leeuwen PJ, Beulens A, Crisan N, D'Hondt F, Schatteman P, van Der Poel H, Dell'Oglio P, Mottrie A. Artificial intelligence and robotics: a combination that is changing the operating room. World J Urol 2019; 38:2359-2366. [PMID: 31776737 DOI: 10.1007/s00345-019-03037-6] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2019] [Accepted: 11/21/2019] [Indexed: 12/12/2022] Open
Abstract
PURPOSE The aim of the current narrative review was to summarize the available evidence in the literature on artificial intelligence (AI) methods that have been applied during robotic surgery. METHODS A narrative review of the literature was performed in the MEDLINE/PubMed and Scopus databases on the topics of artificial intelligence, autonomous surgery, machine learning, robotic surgery, and surgical navigation, focusing on articles published between January 2015 and June 2019. All available evidence was analyzed and summarized after an interactive peer-review process by the panel. LITERATURE REVIEW The preliminary results of implementing AI in the clinical setting are encouraging. By providing a readout of the full telemetry and a sophisticated viewing console, robot-assisted surgery can be used to study and refine the application of AI in surgical practice. Machine learning approaches strengthen feedback on surgical skills acquisition, the efficiency of the surgical process, surgical guidance, and prediction of postoperative outcomes. Tension sensors on the robotic arms and the integration of augmented reality methods can help enhance the surgical experience and monitor organ movements. CONCLUSIONS The use of AI in robotic surgery is expected to have a significant impact on future surgical training as well as to enhance the surgical experience during a procedure. Both aim to realize precision surgery and thus increase the quality of surgical care. Implementation of AI in master-slave robotic surgery may allow for the careful, step-by-step consideration of autonomous robotic surgery.
Affiliation(s)
- Iulia Andras
- ORSI Academy, Melle, Belgium
- Department of Urology, Iuliu Hatieganu University of Medicine and Pharmacy, Cluj-Napoca, Romania
- Elio Mazzone
- ORSI Academy, Melle, Belgium
- Department of Urology, Onze Lieve Vrouw Hospital, Aalst, Belgium
- Department of Urology and Division of Experimental Oncology, URI, Urological Research Institute, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Fijs W B van Leeuwen
- ORSI Academy, Melle, Belgium
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Centre, Leiden, The Netherlands
- Department of Urology, Antoni Van Leeuwenhoek Hospital, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Geert De Naeyer
- ORSI Academy, Melle, Belgium
- Department of Urology, Onze Lieve Vrouw Hospital, Aalst, Belgium
- Matthias N van Oosterom
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Centre, Leiden, The Netherlands
- Department of Urology, Antoni Van Leeuwenhoek Hospital, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Tessa Buckle
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Centre, Leiden, The Netherlands
- Shane O'Sullivan
- Department of Pathology, Faculdade de Medicina, Universidade de São Paulo, São Paulo, Brazil
- Pim J van Leeuwen
- Department of Urology, Antoni Van Leeuwenhoek Hospital, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Alexander Beulens
- Department of Urology, Catharina Hospital, Eindhoven, The Netherlands
- Netherlands Institute for Health Services (NIVEL), Utrecht, The Netherlands
- Nicolae Crisan
- Department of Urology, Iuliu Hatieganu University of Medicine and Pharmacy, Cluj-Napoca, Romania
- Frederiek D'Hondt
- ORSI Academy, Melle, Belgium
- Department of Urology, Onze Lieve Vrouw Hospital, Aalst, Belgium
- Peter Schatteman
- ORSI Academy, Melle, Belgium
- Department of Urology, Onze Lieve Vrouw Hospital, Aalst, Belgium
- Henk van Der Poel
- Department of Urology, Antoni Van Leeuwenhoek Hospital, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Paolo Dell'Oglio
- ORSI Academy, Melle, Belgium
- Department of Urology, Onze Lieve Vrouw Hospital, Aalst, Belgium
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Centre, Leiden, The Netherlands
- Department of Urology, Antoni Van Leeuwenhoek Hospital, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Alexandre Mottrie
- ORSI Academy, Melle, Belgium
- Department of Urology, Onze Lieve Vrouw Hospital, Aalst, Belgium