1. Alongi P, Arnone A, Vultaggio V, Fraternali A, Versari A, Casali C, Arnone G, DiMeco F, Vetrano IG. Artificial Intelligence Analysis Using MRI and PET Imaging in Gliomas: A Narrative Review. Cancers (Basel) 2024; 16:407. [PMID: 38254896] [PMCID: PMC10814838] [DOI: 10.3390/cancers16020407]
Abstract
Gliomas carry a very poor prognosis, driven chiefly by the lack of early detection and by high rates of recurrence or progression after surgery. Quantification systems based on artificial intelligence (AI) applied to medical images (CT, MRI, PET) are under evaluation in clinical and research settings for several applications, including image reconstruction, segmentation of the acquired tissues, feature selection, and data analysis. Different AI approaches have been proposed, such as machine learning and deep learning, the latter using artificial neural networks inspired by neuronal architectures. In addition, AI-based systems have been developed to offer suggestions or support decisions in medical diagnosis, emulating the judgment of expert radiologists. The potential clinical roles of AI include predicting progression to more aggressive glioma forms, differential diagnosis (pseudoprogression vs. true progression), and the follow-up of aggressive gliomas. This narrative review covers the available applications of AI in brain tumor diagnosis, mainly in malignant gliomas, with particular attention to the postoperative use of MRI and PET imaging, considering the current state of the technical approach and the evaluation after treatment (including surgery, radiotherapy/chemotherapy, and prognostic stratification).
Affiliation(s)
- Pierpaolo Alongi: Nuclear Medicine Unit, ARNAS Ospedali Civico, Di Cristina e Benfratelli, 90127 Palermo, Italy
- Annachiara Arnone: Nuclear Medicine Unit, Azienda Unità Sanitaria Locale IRCCS, 42122 Reggio Emilia, Italy
- Viola Vultaggio: Nuclear Medicine Unit, ARNAS Ospedali Civico, Di Cristina e Benfratelli, 90127 Palermo, Italy
- Alessandro Fraternali: Nuclear Medicine Unit, Azienda Unità Sanitaria Locale IRCCS, 42122 Reggio Emilia, Italy
- Annibale Versari: Nuclear Medicine Unit, Azienda Unità Sanitaria Locale IRCCS, 42122 Reggio Emilia, Italy
- Cecilia Casali: Department of Neurosurgery, Fondazione IRCCS Istituto Neurologico Carlo Besta, 20133 Milan, Italy
- Gaspare Arnone: Nuclear Medicine Unit, ARNAS Ospedali Civico, Di Cristina e Benfratelli, 90127 Palermo, Italy
- Francesco DiMeco: Department of Neurosurgery, Fondazione IRCCS Istituto Neurologico Carlo Besta, 20133 Milan, Italy; Department of Oncology and Onco-Hematology, Università di Milano, 20122 Milan, Italy; Department of Neurological Surgery, Johns Hopkins Medical School, Baltimore, MD 21218, USA
- Ignazio Gaspare Vetrano: Department of Neurosurgery, Fondazione IRCCS Istituto Neurologico Carlo Besta, 20133 Milan, Italy; Department of Biomedical Sciences for Health, Università di Milano, 20122 Milan, Italy
2. Sugiyama T, Sugimori H, Tang M, Ito Y, Gekka M, Uchino H, Ito M, Ogasawara K, Fujimura M. Deep learning-based video-analysis of instrument motion in microvascular anastomosis training. Acta Neurochir (Wien) 2024; 166:6. [PMID: 38214753] [DOI: 10.1007/s00701-024-05896-4]
Abstract
PURPOSE Attaining sufficient microsurgical skills is paramount for neurosurgical trainees. Kinematic analysis of surgical instruments using video offers the potential for an objective assessment of microsurgical proficiency, thereby enhancing surgical training and patient safety. The purposes of this study were to develop a deep-learning-based automated instrument tip-detection algorithm, and to validate its performance in microvascular anastomosis training. METHODS An automated instrument tip-tracking algorithm was developed and trained using YOLOv2, based on clinical microsurgical videos and microvascular anastomosis practice videos. With this model, we measured motion economy (procedural time and path distance) and motion smoothness (normalized jerk index) during the task of suturing artificial blood vessels for end-to-side anastomosis. These parameters were validated using traditional criteria-based rating scales and were compared across surgeons with varying microsurgical experience (novice, intermediate, and expert). The suturing task was deconstructed into four distinct phases, and parameters within each phase were compared between novice and expert surgeons. RESULTS The high accuracy of the developed model was indicated by a mean Dice similarity coefficient of 0.87. Deep learning-based parameters (procedural time, path distance, and normalized jerk index) exhibited correlations with traditional criteria-based rating scales and surgeons' years of experience. Experts completed the suturing task faster than novices. The total path distance for the right (dominant) side instrument movement was shorter for experts compared to novices. However, for the left (non-dominant) side, differences between the two groups were observed only in specific phases. The normalized jerk index for both the right and left sides was significantly lower in the expert than in the novice groups, and receiver operating characteristic analysis showed strong discriminative ability. 
CONCLUSION The deep learning-based kinematic analytic approach for surgical instruments proves beneficial in assessing performance in microvascular anastomosis. Moreover, this methodology can be adapted for use in clinical settings.
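The motion-economy and motion-smoothness parameters described in this abstract can be sketched as follows; this is a minimal illustration assuming a uniformly sampled 2D tooltip trajectory and one common definition of the dimensionless normalized jerk, not the authors' implementation.

```python
import numpy as np

def path_distance(xy):
    """Total distance travelled by the tooltip: sum of step lengths."""
    return float(np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1)))

def normalized_jerk(xy, dt):
    """Dimensionless (normalized) jerk index of a uniformly sampled
    trajectory: integrated squared jerk scaled by duration^5 and the
    squared path length, so values are comparable across trials.
    Lower values indicate smoother motion."""
    jerk = np.diff(xy, n=3, axis=0) / dt**3   # third derivative of position
    duration = dt * (len(xy) - 1)
    integral = float(np.sum(jerk**2)) * dt    # approximate integral of |jerk|^2
    return float(np.sqrt(0.5 * integral * duration**5 / path_distance(xy)**2))
```

A straight constant-velocity path yields a normalized jerk near zero, while a jittery path over the same distance scores markedly higher, which is the sense in which experts' smoother motion produces lower values.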
Affiliation(s)
- Taku Sugiyama: Department of Neurosurgery, Hokkaido University Graduate School of Medicine, North 15 West 7, Kita-Ku, Sapporo, 060-8638, Japan
- Hiroyuki Sugimori: Faculty of Health Sciences, Hokkaido University, Sapporo, 060-0812, Japan
- Minghui Tang: Department of Diagnostic Imaging, Hokkaido University Graduate School of Medicine, North 15 West 7, Kita-Ku, Sapporo, 060-8638, Japan
- Yasuhiro Ito: Department of Neurosurgery, Hokkaido University Graduate School of Medicine, North 15 West 7, Kita-Ku, Sapporo, 060-8638, Japan
- Masayuki Gekka: Department of Neurosurgery, Hokkaido University Graduate School of Medicine, North 15 West 7, Kita-Ku, Sapporo, 060-8638, Japan
- Haruto Uchino: Department of Neurosurgery, Hokkaido University Graduate School of Medicine, North 15 West 7, Kita-Ku, Sapporo, 060-8638, Japan
- Masaki Ito: Department of Neurosurgery, Hokkaido University Graduate School of Medicine, North 15 West 7, Kita-Ku, Sapporo, 060-8638, Japan
- Miki Fujimura: Department of Neurosurgery, Hokkaido University Graduate School of Medicine, North 15 West 7, Kita-Ku, Sapporo, 060-8638, Japan
3. Balu A, Pangal DJ, Kugener G, Donoho DA. Pilot Analysis of Surgeon Instrument Utilization Signatures Based on Shannon Entropy and Deep Learning for Surgeon Performance Assessment in a Cadaveric Carotid Artery Injury Control Simulation. Oper Neurosurg (Hagerstown) 2023; 25:e330-e337. [PMID: 37655892] [DOI: 10.1227/ons.0000000000000888]
Abstract
BACKGROUND AND OBJECTIVES Assessment and feedback are critical to surgical education, but direct observational feedback by experts is rarely provided because of time constraints and is typically only qualitative. Automated, video-based, quantitative feedback on surgical performance could address this gap, improving surgical training. The authors aim to demonstrate the ability of Shannon entropy (ShEn), an information theory metric that quantifies series diversity, to predict surgical performance using instrument detections generated through deep learning. METHODS Annotated images from a publicly available video data set of surgeons managing endoscopic endonasal carotid artery lacerations in a perfused cadaveric simulator were collected. A deep learning model was implemented to detect surgical instruments across video frames. ShEn score for the instrument sequence was calculated from each surgical trial. Logistic regression using ShEn was used to predict hemorrhage control success. RESULTS ShEn scores and instrument usage patterns differed between successful and unsuccessful trials (ShEn: 0.452 vs 0.370, P < .001). Unsuccessful hemorrhage control trials displayed lower entropy and less varied instrument use patterns. By contrast, successful trials demonstrated higher entropy with more diverse instrument usage and consistent progression in instrument utilization. A logistic regression model using ShEn scores (78% accuracy and 97% average precision) was at least as accurate as surgeons' attending/resident status and years of experience for predicting trial success and had similar accuracy as expert human observers. CONCLUSION ShEn score offers a summative signal about surgeon performance and predicted success at controlling carotid hemorrhage in a simulated cadaveric setting. Future efforts to generalize ShEn to additional surgical scenarios can further validate this metric.
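The Shannon entropy metric at the core of this study can be sketched in a few lines. This is a generic illustration of the formula; the paper's exact normalization and instrument vocabulary are not reproduced here, so the instrument names below are hypothetical.

```python
import math
from collections import Counter

def shannon_entropy(sequence, base=2):
    """Shannon entropy (bits by default) of a per-frame instrument
    sequence; higher values reflect more diverse instrument usage."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log(c / n, base) for c in counts.values())
```

A trial that cycles through many instruments (diverse usage, as in successful hemorrhage-control trials) scores higher than one that dwells on a single instrument, which is the signal the logistic regression exploits.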
Affiliation(s)
- Alan Balu: Department of Neurosurgery, Georgetown University School of Medicine, Washington, District of Columbia, USA
- Dhiraj J Pangal: Department of Neurosurgery, Keck School of Medicine of University of Southern California, Los Angeles, California, USA
- Guillaume Kugener: Department of Neurosurgery, Keck School of Medicine of University of Southern California, Los Angeles, California, USA
- Daniel A Donoho: Division of Neurosurgery, Children's National Hospital, Washington, District of Columbia, USA
4. Buyck F, Vandemeulebroucke J, Ceranka J, Van Gestel F, Cornelius JF, Duerinck J, Bruneau M. Computer-vision based analysis of the neurosurgical scene - A systematic review. Brain Spine 2023; 3:102706. [PMID: 38020988] [PMCID: PMC10668095] [DOI: 10.1016/j.bas.2023.102706]
Abstract
Introduction With the increasing use of robotic surgical adjuncts, artificial intelligence, and augmented reality in neurosurgery, the automated analysis of digital images and videos acquired during procedures is a subject of growing interest. While several computer vision (CV) methods have been developed and implemented for analyzing surgical scenes, few studies have been dedicated to neurosurgery. Research question In this work, we present a systematic literature review focusing on CV methodologies applied specifically to the analysis of neurosurgical procedures based on intra-operative images and videos. Additionally, we provide recommendations for the future development of CV models in neurosurgery. Material and methods We conducted a systematic literature search in multiple databases up to January 17, 2023, including Web of Science, PubMed, IEEE Xplore, Embase, and SpringerLink. Results We identified 17 studies employing CV algorithms on neurosurgical videos/images. The most common applications of CV were tool and neuroanatomical structure detection or characterization and, to a lesser extent, surgical workflow analysis. Convolutional neural networks (CNNs) were the most frequently utilized architecture for CV models (65%), demonstrating superior performance in tool detection and segmentation. In particular, Mask R-CNN showed the most robust performance across different modalities. Discussion and conclusion Our systematic review shows that reported CV models can effectively detect and differentiate tools, surgical phases, neuroanatomical structures, and critical events in complex neurosurgical scenes with accuracies above 95%. Automated tool recognition contributes to objective characterization and assessment of surgical performance, with potential applications in neurosurgical training and intra-operative safety management.
Affiliation(s)
- Félix Buyck: Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium; Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium
- Jef Vandemeulebroucke: Vrije Universiteit Brussel (VUB), Department of Electronics and Informatics (ETRO), 1050, Brussels, Belgium; Department of Radiology, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium; imec, 3001, Leuven, Belgium
- Jakub Ceranka: Vrije Universiteit Brussel (VUB), Department of Electronics and Informatics (ETRO), 1050, Brussels, Belgium; imec, 3001, Leuven, Belgium
- Frederick Van Gestel: Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium; Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium
- Jan Frederick Cornelius: Department of Neurosurgery, Medical Faculty, Heinrich-Heine-University, 40225, Düsseldorf, Germany
- Johnny Duerinck: Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium; Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium
- Michaël Bruneau: Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium; Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium
5. Wang T, Li H, Pu T, Yang L. Microsurgery Robots: Applications, Design, and Development. Sensors (Basel) 2023; 23:8503. [PMID: 37896597] [PMCID: PMC10611418] [DOI: 10.3390/s23208503]
Abstract
Microsurgical techniques are widely utilized in surgical specialties such as ophthalmology, neurosurgery, and otolaryngology, which require intricate and precise manipulation of surgical tools on a small scale. Microsurgical operations on delicate vessels or tissues demand exceptionally high skill, leading to a steep learning curve and lengthy training before surgeons can perform such procedures with quality outcomes. The microsurgery robot (MSR), which can augment surgeons' operative skills through various functions, has received extensive research attention over the past three decades. Many review papers have summarized MSR research for specific surgical specialties, but an in-depth review of the relevant technologies used in MSR systems is limited in the literature. This review details the technical challenges in microsurgery and systematically summarizes the key technologies in MSR from a developmental perspective: from basic structural mechanism design, to perception and human-machine interaction methods, and further to achieving a certain level of autonomy. By presenting and comparing the methods and technologies in this cutting-edge research, this paper aims to provide readers with a comprehensive understanding of the current state of MSR research and to identify potential directions for future development.
Affiliation(s)
- Tiexin Wang: ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China; School of Mechanical Engineering, Zhejiang University, Hangzhou 310058, China
- Haoyu Li: ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Tanhong Pu: ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Liangjing Yang: ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China; School of Mechanical Engineering, Zhejiang University, Hangzhou 310058, China; Department of Mechanical Engineering, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
6. Williams SC, Ahmed R, Davids JD, Funnell JP, Hanrahan JG, Layard Horsfall H, Muirhead W, Nicolosi F, Thorne L, Marcus HJ, Grover P. Benchtop simulation of the retrosigmoid approach: Validation of a surgical simulator and development of a task-specific outcome measure score. World Neurosurg X 2023; 20:100230. [PMID: 37456690] [PMCID: PMC10344945] [DOI: 10.1016/j.wnsx.2023.100230]
Abstract
Background Neurosurgical training is changing globally. Reduced working hours and training opportunities, increased patient-safety expectations, and the impact of COVID-19 have reduced operative exposure. Benchtop simulators enable trainees to develop surgical skills in a controlled environment. We aimed to validate a high-fidelity simulator model (RetrosigmoidBox, UpSurgeOn) for the retrosigmoid approach to the cerebellopontine angle (CPA). Methods Novice and expert neurosurgeons and ear, nose, and throat surgeons performed a surgical task using the model: identification of the trigeminal nerve. Experts completed a post-task questionnaire examining face and content validity. Construct validity was assessed by scoring operative videos with the Objective Structured Assessment of Technical Skills (OSATS) and a novel Task-Specific Outcome Measure score. Results Fifteen novice and five expert participants were recruited. Forty percent of experts agreed or strongly agreed that the brain tissue looked real. Experts unanimously agreed that the RetrosigmoidBox was appropriate for teaching. Statistically significant differences in task performance between novices and experts demonstrated construct validity. Median total OSATS score was 14/25 (IQR 10-19) for novices and 22/25 (IQR 20-22) for experts (p < 0.05). Median Task-Specific Outcome Measure score was 10/20 (IQR 7-17) for novices compared to 19/20 (IQR 18.5-19.5) for experts (p < 0.05). Conclusion The RetrosigmoidBox benchtop simulator has a high degree of content and construct validity and moderate face validity. The changing landscape of neurosurgical training means that simulators are likely to become increasingly important in delivering high-quality education. We demonstrate the validity of a Task-Specific Outcome Measure score for performance assessment of a simulated approach to the CPA.
Affiliation(s)
- Simon C. Williams: Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, Queen Square, London, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), London, UK
- Razna Ahmed: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), London, UK; Queen Square Institute of Neurology, University College London, London, UK
- Joseph Darlington Davids: Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, Queen Square, London, UK; Institute of Global Health Innovation and Hamlyn Centre for Robotics Surgery, Imperial College London, London, UK
- Jonathan P. Funnell: Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, Queen Square, London, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), London, UK
- John Gerrard Hanrahan: Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, Queen Square, London, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), London, UK
- Hugo Layard Horsfall: Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, Queen Square, London, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), London, UK
- William Muirhead: Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, Queen Square, London, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), London, UK
- Federico Nicolosi: School of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- Lewis Thorne: Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, Queen Square, London, UK
- Hani J. Marcus: Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, Queen Square, London, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), London, UK
- Patrick Grover: Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, Queen Square, London, UK
7. Titov O, Bykanov A, Pitskhelauri D. Neurosurgical skills analysis by machine learning models: systematic review. Neurosurg Rev 2023; 46:121. [PMID: 37191734] [DOI: 10.1007/s10143-023-02028-x]
Abstract
Machine learning (ML) models are being actively used in modern medicine, including neurosurgery. This study aimed to summarize the current applications of ML in the analysis and assessment of neurosurgical skills. We conducted this systematic review in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. We searched the PubMed and Google Scholar databases for eligible studies published until November 15, 2022, and used the Medical Education Research Study Quality Instrument (MERSQI) to assess the quality of the included articles. Of the 261 studies identified, we included 17 in the final analysis. Studies were most commonly related to oncological, spinal, and vascular neurosurgery using microsurgical and endoscopic techniques. Machine learning-evaluated tasks included subpial brain tumor resection, anterior cervical discectomy and fusion, hemostasis of the lacerated internal carotid artery, brain vessel dissection and suturing, glove microsuturing, lumbar hemilaminectomy, and bone drilling. The data sources included files extracted from VR simulators and microscopic and endoscopic videos. ML was applied to classify participants into expertise levels, analyze differences between experts and novices, recognize surgical instruments, divide operations into phases, and predict blood loss. In two articles, ML models were compared with human experts; the machines outperformed the humans in all tasks. The most popular algorithms for classifying surgeons by skill level were the support vector machine and k-nearest neighbors, with accuracies exceeding 90%. The "you only look once" detector and RetinaNet usually solved the problem of detecting surgical instruments, with accuracies of approximately 70%. Experts were distinguished by more confident contact with tissues, higher bimanuality, a smaller distance between instrument tips, and a relaxed, focused state of mind. The average MERSQI score was 13.9 (out of 18). There is growing interest in the use of ML in neurosurgical training. Most studies have focused on evaluating microsurgical skills in oncological neurosurgery and on the use of virtual simulators, but other subspecialties, skills, and simulators are being investigated. ML models effectively solve neurosurgical tasks related to skill classification, object detection, and outcome prediction, and properly trained models can outperform human raters. Further research on ML applications in neurosurgery is needed.
Affiliation(s)
- Oleg Titov: Burdenko Neurosurgery Center, Moscow, Russia; OPEN BRAIN, Laboratory of Neurosurgical Innovations, Moscow, Russia
8. Aghazadeh F, Zheng B, Tavakoli M, Rouhani H. Motion Smoothness-Based Assessment of Surgical Expertise: The Importance of Selecting Proper Metrics. Sensors (Basel) 2023; 23:3146. [PMID: 36991855] [PMCID: PMC10057623] [DOI: 10.3390/s23063146]
Abstract
The smooth movement of hands and surgical instruments is considered an indicator of skilled, coordinated surgical performance, whereas jerky instrument movements or hand tremors can cause unwanted damage to the surgical site. Previous studies have used different methods for assessing motion smoothness, yielding conflicting results when comparing surgical skill levels. We recruited four attending surgeons, five surgical residents, and nine novices. The participants conducted three simulated laparoscopic tasks: peg transfer, bimanual peg transfer, and rubber band translocation. Tooltip motion smoothness was computed using the mean tooltip motion jerk, the logarithmic dimensionless tooltip motion jerk, and the 95% tooltip motion frequency (originally proposed in this study) to evaluate their capability to differentiate surgical skill levels. The results revealed that logarithmic dimensionless motion jerk and 95% motion frequency distinguished skill levels, with smoother tooltip movements observed at higher skill levels; mean motion jerk did not. Additionally, 95% motion frequency was less affected by measurement noise because it does not require calculating motion jerk. Overall, 95% motion frequency and logarithmic dimensionless motion jerk yielded better motion smoothness assessment outcomes for distinguishing skill levels than mean motion jerk.
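The two metrics found most discriminative above can be sketched as follows, assuming a uniformly sampled tooltip speed profile. The paper's exact definitions (in particular of the 95% motion frequency) are paraphrased here as one plausible reading, not reproduced from the authors' code.

```python
import numpy as np

def log_dimensionless_jerk(speed, dt):
    """Logarithmic dimensionless jerk of a uniformly sampled speed
    profile; values closer to zero (less negative) indicate smoother
    motion."""
    jerk = np.diff(speed, n=2) / dt**2          # second derivative of speed
    duration = dt * (len(speed) - 1)
    dlj = duration**3 / np.max(speed)**2 * float(np.sum(jerk**2)) * dt
    return float(-np.log(dlj))

def motion_frequency_95(speed, dt):
    """Frequency (Hz) below which 95% of the spectral power of the
    speed signal lies; jerky, tremorous motion shifts power upward."""
    power = np.abs(np.fft.rfft(speed - np.mean(speed))) ** 2
    freqs = np.fft.rfftfreq(len(speed), d=dt)
    cumulative = np.cumsum(power) / np.sum(power)
    return float(freqs[np.searchsorted(cumulative, 0.95)])
```

Because the 95% frequency is computed from the spectrum of the speed signal rather than from a differentiated jerk signal, it amplifies measurement noise less, which mirrors the study's rationale for proposing it.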
Affiliation(s)
- Farzad Aghazadeh: Department of Mechanical Engineering, University of Alberta, Edmonton, AB T6G 1H9, Canada
- Bin Zheng: Department of Surgery, University of Alberta, Edmonton, AB T6G 2B7, Canada
- Mahdi Tavakoli: Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Hossein Rouhani: Department of Mechanical Engineering, University of Alberta, Edmonton, AB T6G 1H9, Canada
9. Autonomous sequential surgical skills assessment for the peg transfer task in a laparoscopic box-trainer system with three cameras. Robotica 2023. [DOI: 10.1017/s0263574723000218]
Abstract
In laparoscopic surgery, surgeons should develop several manual laparoscopic skills with a low-cost box trainer before carrying out real operative procedures. The Fundamentals of Laparoscopic Surgery (FLS) program was developed to assess the fundamental knowledge and surgical skills required for basic laparoscopic surgery. The peg transfer task is a hands-on exam in the FLS program that helps a trainee understand the minimum grasping force necessary to move the pegs from one place to another without dropping them. In this paper, an autonomous, sequential assessment algorithm based on deep learning, a multi-object detection method, and several sequential if-then conditional statements was developed to monitor each step of a surgeon's performance. Images from three different cameras are used to assess whether the surgeon executes the peg transfer task correctly, and any errors are displayed on the monitor immediately. The algorithm improves the performance of a laparoscopic box-trainer system using top, side, and front cameras and removes the need for human monitoring during a peg transfer task. It can detect each object and its status during a peg transfer task and notifies the resident about the correct or failed outcome. In addition, the system can correctly determine the peg transfer execution time and the move, carry, and dropped states for each object from the top, side, and front-mounted cameras. Based on the experimental results, the proposed surgical skill assessment system can identify each object with high fidelity, and the train-validation total loss for the single-shot detector (SSD) ResNet50 v1 was about 0.05. The mean average precision (mAP) and Intersection over Union (IoU) of the detection system were 0.741 and 0.75, respectively.
This project is a collaborative research effort between the Department of Electrical and Computer Engineering and the Department of Surgery, at Western Michigan University.
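The Intersection over Union metric reported above is the standard box-overlap computation; as a generic sketch (not the authors' code), for axis-aligned boxes it reduces to:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)   # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

An IoU of 0.75 between a predicted and a ground-truth box, as reported, means the overlap covers three quarters of their combined area, a common threshold for counting a detection as correct.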
10. Rashidi Fathabadi F, Grantner JL, Shebrain SA, Abdel-Qader I. 3D Autonomous Surgeon's Hand Movement Assessment Using a Cascaded Fuzzy Supervisor in Multi-Thread Video Processing. Sensors (Basel) 2023; 23:2623. [PMID: 36904830] [PMCID: PMC10007173] [DOI: 10.3390/s23052623]
Abstract
The purpose of Fundamentals of Laparoscopic Surgery (FLS) training is to develop laparoscopic surgery skills through simulation experiences. Several advanced simulation-based training methods have been created to enable training in a non-patient environment. Laparoscopic box trainers, which are inexpensive and portable devices, have long been used to offer training opportunities, competence evaluations, and performance reviews. However, trainees must be supervised by medical experts who can evaluate their abilities, which is an expensive and time-consuming process. A high level of surgical skill, verified through assessment, is therefore necessary to prevent intraoperative issues and instrument mishandling during a real laparoscopic procedure. To guarantee that laparoscopic surgical training methods result in surgical skill improvement, surgeons' skills must be measured and assessed during tests. We used our intelligent box-trainer system (IBTS) as a platform for skill training. The main aim of this study was to monitor the movement of the surgeon's hands within a predefined field of interest. To evaluate the surgeons' hand movements in 3D space, an autonomous evaluation system using two cameras and multi-threaded video processing is proposed. The method detects laparoscopic instruments and applies a cascaded fuzzy logic assessment system composed of two fuzzy logic systems executing in parallel: the first level assesses the left- and right-hand movements simultaneously, and its outputs are cascaded into a final fuzzy logic assessment at the second level. The algorithm is fully autonomous and removes the need for human monitoring or intervention. The experimental work included nine physicians (surgeons and residents) from the surgery and obstetrics/gynecology (OB/GYN) residency programs at the WMU Homer Stryker MD School of Medicine (WMed) with different levels of laparoscopic skill and experience.
They were recruited to participate in the peg-transfer task. The participants' performances were assessed, and videos were recorded throughout the exercises. The results were delivered autonomously about 10 s after each exercise concluded. In the future, we plan to increase the computing power of the IBTS to achieve real-time performance assessment.
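The two-level fuzzy assessment described in this abstract can be illustrated with a minimal sketch. This is not the authors' IBTS implementation: the membership functions, rule base, output levels, and the weighted second-level combination are all illustrative assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, with support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def hand_score(path_length, smoothness):
    """First-level fuzzy assessment of one hand.

    Inputs are assumed normalized to [0, 1]; a shorter path and smoother
    motion indicate better performance. Returns a crisp score in [0, 100]
    via centroid defuzzification over three singleton output levels.
    """
    # Fuzzify inputs into 'good' vs. 'poor' memberships (illustrative shapes).
    short = tri(path_length, -0.5, 0.0, 0.6)
    long_ = tri(path_length, 0.4, 1.0, 1.5)
    smooth = tri(smoothness, 0.4, 1.0, 1.5)
    jerky = tri(smoothness, -0.5, 0.0, 0.6)

    # Rule base: AND = min; rules for the same output level aggregate by max.
    high = min(short, smooth)                          # short AND smooth
    med = max(min(short, jerky), min(long_, smooth))   # mixed evidence
    low = min(long_, jerky)                            # long AND jerky

    # Centroid defuzzification over singleton outputs {25, 60, 95}.
    num = low * 25 + med * 60 + high * 95
    den = low + med + high
    return num / den if den else 0.0

def cascaded_assessment(left, right):
    """Second-level fuzzy combination of the two per-hand scores.

    Cascading is sketched here as a blend that penalizes the weaker hand,
    since bimanual tasks are limited by the less skilled hand.
    """
    lo, hi = sorted([left, right])
    return 0.65 * lo + 0.35 * hi
```

For example, `cascaded_assessment(hand_score(0.2, 0.9), hand_score(0.8, 0.2))` yields an overall score dominated by the weaker (jerkier) hand.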
Affiliation(s)
- Janos L. Grantner
- Electrical & Computer Engineering Department, Western Michigan University, Kalamazoo, MI 49008, USA
- Saad A. Shebrain
- Department of Surgery, Homer Stryker MD School of Medicine, Western Michigan University, Kalamazoo, MI 49008, USA
- Ikhlas Abdel-Qader
- Electrical & Computer Engineering Department, Western Michigan University, Kalamazoo, MI 49008, USA
11
Chakraborty C, Bhattacharya M, Dhama K, Roy SS, Sharma AR, Mohapatra RK, Lee SS. Deep learning research should be encouraged more and more in different domains of surgery: An open call - Correspondence. Int J Surg 2022; 104:106749. [PMID: 35803516] [DOI: 10.1016/j.ijsu.2022.106749]
Affiliation(s)
- Chiranjib Chakraborty
- Department of Biotechnology, School of Life Science and Biotechnology, Adamas University, Kolkata, West Bengal, 700126, India.
- Manojit Bhattacharya
- Department of Zoology, Fakir Mohan University, Vyasa Vihar, Balasore, 756020, Odisha, India
- Kuldeep Dhama
- Division of Pathology, ICAR-Indian Veterinary Research Institute, Izatnagar, Bareilly, 243122, Uttar Pradesh, India
- Sanjiban Sekhar Roy
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, 632014, India
- Ashish Ranjan Sharma
- Institute for Skeletal Aging & Orthopedic Surgery, Hallym University-Chuncheon Sacred Heart Hospital, Chuncheon-si, 24252, Gangwon-do, Republic of Korea
- Ranjan K Mohapatra
- Department of Chemistry, Government College of Engineering, Keonjhar, 758002, Odisha, India
- Sang-Soo Lee
- Institute for Skeletal Aging & Orthopedic Surgery, Hallym University-Chuncheon Sacred Heart Hospital, Chuncheon-si, 24252, Gangwon-do, Republic of Korea
12
Deepika P, Udupa K, Beniwal M, Uppar AM, V V, Rao M. Automated Microsurgical Tool Segmentation and Characterization in Intra-Operative Neurosurgical Videos. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:2110-2114. [PMID: 36086279] [DOI: 10.1109/embc48229.2022.9871838]
Abstract
Checklist-based routine evaluation of surgical skills in any medical school demands quality time and effort from the supervising expert and is highly influenced by assessor bias. Alternatively, automated video-based surgical skill assessment is a simple and viable method to analyse surgical dexterity offline without requiring the presence of an expert surgeon throughout the surgery. In this paper, a novel approach and results for the automated segmentation of microsurgical instruments from a real-world neurosurgical video dataset are presented. The proposed tool segmentation model achieved a mean average precision of 96.7% in detecting and localizing five surgical instruments in real-world neurosurgical videos. Accurate detection and characterization of motion features of the microsurgical tools from the novel annotated neurosurgical video dataset form the key step towards automated surgical skill evaluation. Clinical Relevance: Tool segmentation, localization, and characterization in neurosurgical video have several applications, including assessing surgeons' skills, training novice surgeons, understanding critical operating procedures after surgery, characterizing any critical anatomical response to the tool that leads to the success or failure of the surgery, and building models for autonomous robotic surgery. Semantic segmentation and characterization of microsurgical tools form the basis of modern neurosurgery.
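As a rough illustration of how motion features might be derived from per-frame tool localizations, the following sketch computes simple kinematics from bounding-box centroids. The feature set and the idle-speed threshold are assumptions for illustration, not the paper's method.

```python
import numpy as np

def motion_features(boxes, fps=30.0):
    """Characterize tool motion from per-frame bounding boxes.

    boxes: array-like of shape (n_frames, 4) as (x1, y1, x2, y2) for one
    detected instrument; fps is the video frame rate. Returns simple
    kinematic features of the kind used for skill evaluation.
    """
    boxes = np.asarray(boxes, dtype=float)
    # Tool position per frame, approximated by the box centroid.
    centers = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                        (boxes[:, 1] + boxes[:, 3]) / 2], axis=1)
    steps = np.linalg.norm(np.diff(centers, axis=0), axis=1)  # px per frame
    speed = steps * fps                                        # px per second
    return {
        "path_length": float(steps.sum()),
        "mean_speed": float(speed.mean()),
        "peak_speed": float(speed.max()),
        # Fraction of frames with near-zero motion (threshold is arbitrary).
        "idle_fraction": float((speed < 1.0 * fps).mean()),
    }
```

A tool translating 3 px per frame at 30 fps, for instance, yields a mean speed of 90 px/s and zero idle fraction under this threshold.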
13
Soangra R, Sivakumar R, Anirudh ER, Sai Viswanth Reddy Y, John EB. Evaluation of surgical skill using machine learning with optimal wearable sensor locations. PLoS One 2022; 17:e0267936. [PMID: 35657912] [PMCID: PMC9165861] [DOI: 10.1371/journal.pone.0267936]
Abstract
Evaluation of surgical skills during minimally invasive surgeries is needed when recruiting new surgeons. Although differentiating surgeons by skill level is highly complex, performance in specific clinical tasks such as pegboard transfer and knot tying can be determined using wearable EMG and accelerometer sensors. A wireless wearable platform has made it feasible to collect movement and muscle activation signals for quick skill evaluation during surgical tasks. However, this is challenging, since the placement of multiple wireless wearable sensors may interfere with their performance in the assessment. This study utilizes machine learning techniques to identify the muscles and features most critical for accurate skill evaluation. The study enrolled twenty-six surgeons of different skill levels: novices (n = 11), intermediates (n = 12), and experts (n = 3). Twelve wireless wearable sensors consisting of surface EMGs and accelerometers were placed bilaterally on the biceps brachii, triceps brachii, anterior deltoid, flexor carpi ulnaris (FCU), extensor carpi ulnaris (ECU), and thenar eminence (TE) muscles to assess muscle activation and movement variability profiles. We found that features related to movement complexity, such as approximate entropy, sample entropy, and multiscale entropy, played a critical role in skill level identification. Skill level was classified with the highest accuracy by (i) the ECU for the Random Forest Classifier (RFC), (ii) the deltoid for Support Vector Machines (SVM), and (iii) the biceps for the Naïve Bayes Classifier, with classification accuracies of 61%, 57%, and 47%, respectively. When muscles were combined, the RFC performed best, with accuracies of (i) 58% for ECU and deltoid, (ii) 53% for ECU and biceps, and (iii) 52% for ECU, biceps, and deltoid.
Our findings suggest that quick surgical skill evaluation is possible using wearable sensors, and that features from the ECU, deltoid, and biceps muscles play an important role in surgical skill evaluation.
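Sample entropy, one of the movement-complexity features highlighted above, can be computed directly from a one-dimensional signal. The following is a straightforward implementation of the standard definition; the parameter defaults m = 2 and r = 0.2·SD are conventional choices, not values taken from the paper.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D signal: -ln(A/B), where B counts template
    matches of length m and A matches of length m + 1 within tolerance r
    (Chebyshev distance, self-matches excluded). Higher values indicate
    less regular, more complex movement."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    n = len(x)

    def count_matches(mm):
        # Build overlapping templates of length mm.
        templates = np.array([x[i:i + mm] for i in range(n - mm)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance from template i to all later templates.
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= r))
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")
```

A regular signal such as a sine wave produces a much lower sample entropy than white noise of the same length, which is the property these skill-classification features exploit.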
Affiliation(s)
- Rahul Soangra
- Department of Physical Therapy, Crean College of Health and Behavioral Sciences, Chapman University, Irvine, California, United States of America
- Department of Electrical and Computer Science Engineering, Fowler School of Engineering, Chapman University, Orange, California, United States of America
- R. Sivakumar
- Department of Sensor and Biomedical Technology, School of Electronics Engineering, Vellore Institute of Technology, Vellore, India
- E. R. Anirudh
- Department of Sensor and Biomedical Technology, School of Electronics Engineering, Vellore Institute of Technology, Vellore, India
- Sai Viswanth Reddy Y.
- Department of Sensor and Biomedical Technology, School of Electronics Engineering, Vellore Institute of Technology, Vellore, India
- Emmanuel B. John
- Department of Physical Therapy, Crean College of Health and Behavioral Sciences, Chapman University, Irvine, California, United States of America
14
Continuous monitoring of surgical bimanual expertise using deep neural networks in virtual reality simulation. NPJ Digit Med 2022; 5:54. [PMID: 35473961] [PMCID: PMC9042967] [DOI: 10.1038/s41746-022-00596-8]
Abstract
In procedure-based medicine, technical ability can be a critical determinant of patient outcomes. Psychomotor performance occurs in real time, so continuous assessment is necessary to provide action-oriented feedback and error-avoidance guidance. We outline a deep learning application, the Intelligent Continuous Expertise Monitoring System (ICEMS), to assess surgical bimanual performance at 0.2-s intervals. A long short-term memory (LSTM) network was built using neurosurgeon and student performance in 156 virtually simulated tumor resection tasks. The algorithm's predictive ability was tested separately on 144 procedures by scoring the performance of neurosurgical trainees at different training stages. The ICEMS successfully differentiated between neurosurgeons, senior trainees, junior trainees, and students. Trainees' average performance scores correlated with their year of neurosurgical training. Furthermore, coaching and risk assessment for critical metrics were demonstrated. This work presents a comprehensive technical skill monitoring system with predictive validation throughout surgical residency training, with the ability to detect errors.
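A minimal sketch of window-by-window scoring with an LSTM cell, in the spirit of the continuous 0.2-s update interval described above: the weights here are random placeholders and the window size assumes a hypothetical 180 Hz kinematic sampling rate, so this only illustrates the data flow, not the trained ICEMS model.

```python
import numpy as np

rng = np.random.default_rng(42)

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step; gate pre-activations are stacked in z as
    [input, forget, cell candidate, output]."""
    z = W @ x + U @ h + b
    H = h.size
    i = 1 / (1 + np.exp(-z[:H]))          # input gate
    f = 1 / (1 + np.exp(-z[H:2 * H]))     # forget gate
    g = np.tanh(z[2 * H:3 * H])           # cell candidate
    o = 1 / (1 + np.exp(-z[3 * H:]))      # output gate
    c = f * c + i * g
    return o * np.tanh(c), c

def continuous_scores(stream, hidden=8, window=36):
    """Score a kinematic stream in consecutive windows.

    With samples at ~180 Hz, a 36-sample window covers 0.2 s. Weights are
    random stand-ins; a trained model would supply them.
    """
    n_feat = stream.shape[1]
    W = rng.standard_normal((4 * hidden, n_feat)) * 0.1
    U = rng.standard_normal((4 * hidden, hidden)) * 0.1
    b = np.zeros(4 * hidden)
    w_out = rng.standard_normal(hidden) * 0.1
    h, c = np.zeros(hidden), np.zeros(hidden)
    scores = []
    for start in range(0, len(stream) - window + 1, window):
        for x in stream[start:start + window]:
            h, c = lstm_step(x, h, c, W, U, b)
        # Map the hidden state to an expertise score in (0, 1).
        scores.append(1 / (1 + np.exp(-(w_out @ h))))
    return np.array(scores)
```

Running this over a 2-s stream of 6-dimensional kinematics produces one score per 0.2-s window, i.e., the continuous assessment pattern the abstract describes.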
15
Kirubarajan A, Young D, Khan S, Crasto N, Sobel M, Sussman D. Artificial Intelligence and Surgical Education: A Systematic Scoping Review of Interventions. J Surg Educ 2022; 79:500-515. [PMID: 34756807] [DOI: 10.1016/j.jsurg.2021.09.012]
Abstract
OBJECTIVE: To synthesize peer-reviewed evidence on the use of artificial intelligence (AI) in surgical education. DESIGN: We conducted and reported a scoping review according to the standards outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews guideline and the fourth edition of the Joanna Briggs Institute Reviewer's Manual. We systematically searched eight interdisciplinary databases: MEDLINE-Ovid, ERIC, EMBASE, CINAHL, Web of Science: Core Collection, Compendex, Scopus, and IEEE Xplore. Databases were searched from inception until the date of search on April 13, 2021. SETTING/PARTICIPANTS: We examined only original, peer-reviewed interventional studies that self-described as AI interventions, focused on medical education, and were relevant to surgical trainees (defined as medical or dental students, postgraduate residents, or surgical fellows) within the title and abstract (see Table 2). Animal, cadaveric, and in vivo studies were not eligible for inclusion. RESULTS: After systematically searching the eight databases and screening 4255 citations, our scoping review identified 49 studies relevant to artificial intelligence in surgical education. We found diverse interventions related to the evaluation of surgical competency, personalization of surgical education, and improvement of surgical education materials across surgical specialties. Many studies used existing surgical education materials, such as the Objective Structured Assessment of Technical Skills framework or the JHU-ISI Gesture and Skill Assessment Working Set database. Although most studies did not report outcomes related to implementation in medical schools (such as cost-effectiveness analyses or trainee feedback), there are numerous promising interventions. In particular, many studies noted high accuracy in the objective characterization of surgical skill sets.
These interventions could further be used to identify at-risk surgical trainees or to evaluate teaching methods. CONCLUSIONS: There are promising applications for AI in surgical education, particularly for the assessment of surgical competencies, though further evidence is needed regarding implementation and applicability.
Collapse
Affiliation(s)
- Dylan Young
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada
- Shawn Khan
- Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Noelle Crasto
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada
- Mara Sobel
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada; Institute for Biomedical Engineering, Science and Technology (iBEST) at Ryerson University and St. Michael's Hospital, Toronto, Ontario, Canada
- Dafna Sussman
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada; Institute for Biomedical Engineering, Science and Technology (iBEST) at Ryerson University and St. Michael's Hospital, Toronto, Ontario, Canada; Department of Obstetrics and Gynaecology, University of Toronto, Toronto, Ontario, Canada; The Keenan Research Centre for Biomedical Science, St. Michael's Hospital, Toronto, Ontario, Canada
16
Uncharted Waters of Machine and Deep Learning for Surgical Phase Recognition in Neurosurgery. World Neurosurg 2022; 160:4-12. [PMID: 35026457] [DOI: 10.1016/j.wneu.2022.01.020]
Abstract
Recent years have witnessed artificial intelligence (AI) make meteoric leaps in both medicine and surgery, bridging the gap between the capabilities of humans and machines. Digitization of operating rooms and the creation of massive quantities of data have paved the way for machine learning and computer vision applications in surgery. Surgical phase recognition (SPR) is a newly emerging technology that uses data derived from operative videos to train machine and deep learning algorithms to identify the phases of surgery. Advancement of this technology will be key in establishing context-aware surgical systems in the future. By automatically recognizing and evaluating the current surgical scenario, these intelligent systems are able to provide intraoperative decision support, improve operating room efficiency, assess surgical skills, and aid in surgical training and education. Still in its infancy, SPR has mainly been studied in laparoscopic surgery, with a stark lack of research in neurosurgery. Given the high-tech and rapidly advancing nature of neurosurgery, we believe SPR has tremendous untapped potential in this field. Herein, we present an overview of SPR technology, its potential applications in neurosurgery, and the challenges that lie ahead.
17
Davids J, Ashrafian H. AIM and mHealth, Smartphones and Apps. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_242]
18
Koskinen J, Torkamani-Azar M, Hussein A, Huotarinen A, Bednarik R. Automated tool detection with deep learning for monitoring kinematics and eye-hand coordination in microsurgery. Comput Biol Med 2021; 141:105121. [PMID: 34968859] [DOI: 10.1016/j.compbiomed.2021.105121]
Abstract
In microsurgical procedures, surgeons use micro-instruments under high magnification to handle delicate tissues. These procedures require highly skilled attentional and motor control for planning and implementing eye-hand coordination strategies. Eye-hand coordination in surgery has mostly been studied in open, laparoscopic, and robot-assisted surgeries, as there are no available tools to perform automatic tool detection in microsurgery. We introduce and investigate a method for simultaneous detection and processing of micro-instruments and gaze during microsurgery. We train and evaluate a convolutional neural network for detecting 17 microsurgical tools with a dataset of 7500 frames from 20 videos of simulated and real surgical procedures. Model evaluations result in mean average precision at the 0.5 threshold of 89.5-91.4% for validation and 69.7-73.2% for testing over partially unseen surgical settings, at an average inference speed of 39.90 ± 1.2 frames per second. While prior research has mostly evaluated surgical tool detection on homogeneous datasets with a limited number of tools, we demonstrate the feasibility of transfer learning and conclude that detectors that generalize reliably to new settings require data from several different surgical procedures. In a case study, we apply the detector with a microscope eye tracker to investigate tool use and eye-hand coordination during an intracranial vessel dissection task. The results show that tool kinematics differentiate microsurgical actions. The gaze-to-microscissors distances are also smaller during dissection than during other actions, when the surgeon has more space to maneuver. The presented detection pipeline provides the clinical and research communities with a valuable resource for automatic content extraction and objective skill assessment in various microsurgical environments.
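Detection quality in studies like this one is typically reported at an intersection-over-union (IoU) threshold of 0.5. The sketch below shows that criterion for a single class; the greedy matching is a common simplification, not the paper's exact evaluation protocol.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def precision_at_iou(preds, gts, thresh=0.5):
    """Fraction of predicted boxes that match a not-yet-used ground-truth
    box at IoU >= thresh (greedy matching, single class)."""
    used = set()
    tp = 0
    for p in preds:
        best, best_j = 0.0, None
        for j, g in enumerate(gts):
            if j in used:
                continue
            v = iou(p, g)
            if v > best:
                best, best_j = v, j
        if best >= thresh and best_j is not None:
            used.add(best_j)  # each ground-truth box matches at most once
            tp += 1
    return tp / len(preds) if preds else 0.0
```

For example, two unit-overlap boxes `(0, 0, 2, 2)` and `(1, 1, 3, 3)` have IoU 1/7, which fails the 0.5 criterion and would count as a false positive.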
Affiliation(s)
- Jani Koskinen
- School of Computing, University of Eastern Finland, Länsikatu 15, Joensuu, 80100, Pohjois-Karjala, Finland.
- Mastaneh Torkamani-Azar
- School of Computing, University of Eastern Finland, Länsikatu 15, Joensuu, 80100, Pohjois-Karjala, Finland
- Ahmed Hussein
- Microsurgery Center, Kuopio University Hospital, Kuopio, 70211, Pohjois-Savo, Finland; Department of Neurosurgery, Faculty of Medicine, Assiut University, Assiut, 71111, Egypt
- Antti Huotarinen
- Microsurgery Center, Kuopio University Hospital, Kuopio, 70211, Pohjois-Savo, Finland; Department of Neurosurgery, Institute of Clinical Medicine, Kuopio University Hospital, Kuopio, 70211, Pohjois-Savo, Finland
- Roman Bednarik
- School of Computing, University of Eastern Finland, Länsikatu 15, Joensuu, 80100, Pohjois-Karjala, Finland
19
Davids J, Ashrafian H. AIM and mHealth, Smartphones and Apps. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_242-1]