1. Asadi Z, Asadi M, Kazemipour N, Léger É, Kersten-Oertel M. A decade of progress: bringing mixed reality image-guided surgery systems in the operating room. Comput Assist Surg (Abingdon) 2024; 29:2355897. PMID: 38794834. DOI: 10.1080/24699322.2024.2355897.
Abstract
Advancements in mixed reality (MR) have led to innovative approaches in image-guided surgery (IGS). In this paper, we provide a comprehensive analysis of the current state of MR in image-guided procedures across various surgical domains. Using the Data Visualization View (DVV) Taxonomy, we analyze the progress made since a 2013 literature review paper on MR IGS systems. In addition to examining the current surgical domains using MR systems, we explore trends in types of MR hardware used, type of data visualized, visualizations of virtual elements, and interaction methods in use. Our analysis also covers the metrics used to evaluate these systems in the operating room (OR), both qualitative and quantitative assessments, and clinical studies that have demonstrated the potential of MR technologies to enhance surgical workflows and outcomes. We also address current challenges and future directions that would further establish the use of MR in IGS.
Affiliation(s)
- Zahra Asadi: Department of Computer Science and Software Engineering, Concordia University, Montréal, Canada
- Mehrdad Asadi: Department of Computer Science and Software Engineering, Concordia University, Montréal, Canada
- Negar Kazemipour: Department of Computer Science and Software Engineering, Concordia University, Montréal, Canada
- Étienne Léger: Montréal Neurological Institute & Hospital (MNI/H), Montréal, Canada; McGill University, Montréal, Canada
- Marta Kersten-Oertel: Department of Computer Science and Software Engineering, Concordia University, Montréal, Canada
2. Zhang J, Xu S. High aggressiveness of papillary thyroid cancer: from clinical evidence to regulatory cellular networks. Cell Death Discov 2024; 10:378. PMID: 39187514. PMCID: PMC11347646. DOI: 10.1038/s41420-024-02157-2.
Abstract
The global incidence of thyroid cancer has increased over recent decades. Papillary thyroid cancer (PTC) is the most common type of thyroid cancer and accounts for nearly 90% of all cases. Typically, PTC has a good prognosis. However, some PTC variants exhibit more aggressive behaviour, which significantly increases the risk of postoperative recurrence. Over the past decade, the high metastatic potential of PTC has drawn the attention of many researchers and these studies have provided useful molecular markers for improved diagnosis, risk stratification and clinical approaches. The aim of this review is to discuss the progress in epidemiology, metastatic features, risk factors and molecular mechanisms associated with PTC aggressiveness. We present a detailed picture showing that epithelial-to-mesenchymal transition, cancer metabolic reprogramming, alterations in important signalling pathways, epigenetic aberrations and the tumour microenvironment are crucial drivers of PTC metastasis. Further research is needed to more fully elucidate the pathogenesis and biological behaviour underlying the aggressiveness of PTC.
Affiliation(s)
- Junsi Zhang: Department of Thyroid and Breast Surgery, the First Affiliated Hospital, Fujian Medical University, Fuzhou, China
- Sunwang Xu: Department of Thyroid and Breast Surgery, the First Affiliated Hospital, Fujian Medical University, Fuzhou, China; Department of Thyroid and Breast Surgery, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou, China; Fujian Provincial Key Laboratory of Precision Medicine for Cancer, Fuzhou, China
3. Zhang X, Zhang Y, Yang J, Du H. A prostate seed implantation robot system based on human-computer interactions: augmented reality and voice control. Math Biosci Eng 2024; 21:5947-5971. PMID: 38872565. DOI: 10.3934/mbe.2024262.
Abstract
Robot-assisted prostate seed implantation has developed rapidly. However, problems remain to be solved during the procedure, such as non-intuitive visualization and complicated robot control. To improve the intelligence and visualization of the operation, a voice-control technology for a prostate seed implantation robot in an augmented reality environment was proposed. First, the MRI image of the prostate was denoised and segmented, and a three-dimensional model of the prostate and its surrounding tissues was reconstructed by surface rendering. Combined with a holographic application, an augmented reality system for prostate seed implantation was built. An improved singular value decomposition three-dimensional registration algorithm based on the iterative closest point method was proposed, and three-dimensional registration experiments verified that the algorithm effectively improves registration accuracy. A fusion algorithm based on spectral subtraction and a BP neural network was also proposed. Experimental results showed that the average delay of the fusion algorithm was 1.314 s and the overall response time of the integrated system was 1.5 s. The fusion algorithm effectively improves the reliability of the voice-control system, and the integrated system meets the responsiveness requirements of prostate seed implantation.
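The registration approach named in this abstract, an SVD-based rigid alignment iterated in an iterative-closest-point (ICP) loop, can be sketched in its textbook form. This is an illustrative reconstruction, not the authors' improved algorithm; the function names and the brute-force nearest-neighbour pairing are our own assumptions:

```python
import numpy as np

def svd_rigid_align(P, Q):
    """Kabsch step: find rotation R and translation t minimizing
    ||R @ P + t - Q|| for paired 3xN point sets."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=30):
    """Iterative closest point: re-pair each source point with its
    nearest target point, re-solve the SVD alignment, repeat."""
    R_tot, t_tot = np.eye(3), np.zeros((3, 1))
    for _ in range(iters):
        dists = np.linalg.norm(P[:, :, None] - Q[:, None, :], axis=0)
        R, t = svd_rigid_align(P, Q[:, dists.argmin(axis=1)])
        P = R @ P + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

With exact correspondences the Kabsch step recovers the rigid transform in one shot; the ICP loop is needed only because correspondences between a preoperative model and intraoperative points are unknown.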
Affiliation(s)
- Xinran Zhang: Key Laboratory of Advanced Manufacturing and Intelligent Technology, Harbin University of Science and Technology, Harbin 150080, China
- Yongde Zhang: Key Laboratory of Advanced Manufacturing and Intelligent Technology, Harbin University of Science and Technology, Harbin 150080, China; Foshan Baikang Robot Technology Co., Ltd., Foshan 528237, China
- Jianzhi Yang: Key Laboratory of Advanced Manufacturing and Intelligent Technology, Harbin University of Science and Technology, Harbin 150080, China
- Haiyan Du: Key Laboratory of Advanced Manufacturing and Intelligent Technology, Harbin University of Science and Technology, Harbin 150080, China
4. Preukschas AA, Wise PA, Bettscheider L, Pfeiffer M, Wagner M, Huber M, Golriz M, Fischer L, Mehrabi A, Rössler F, Speidel S, Hackert T, Müller-Stich BP, Nickel F, Kenngott HG. Comparing a virtual reality head-mounted display to on-screen three-dimensional visualization and two-dimensional computed tomography data for training in decision making in hepatic surgery: a randomized controlled study. Surg Endosc 2024; 38:2483-2496. PMID: 38456945. PMCID: PMC11078809. DOI: 10.1007/s00464-023-10615-8.
Abstract
OBJECTIVE: Evaluation of the benefits of a virtual reality (VR) environment with a head-mounted display (HMD) for decision-making in liver surgery. BACKGROUND: Training in liver surgery involves appraising radiologic images and considering the patient's clinical information. Accurate assessment of 2D tomography images is complex, requires considerable experience, and the images are often divorced from the clinical information. We present a comprehensive and interactive tool for visualizing operation-planning data in a VR environment using an HMD and compare it to 3D visualization and 2D tomography. METHODS: Ninety medical students were randomized into three groups (1:1:1 ratio). All participants analyzed three liver surgery patient cases of increasing difficulty using 2D tomography data (group "2D"), a 3D visualization on a 2D display (group "3D"), or a VR environment (group "VR") displayed with the "Oculus Rift" HMD. Participants answered 11 questions on anatomy, tumor involvement, and surgical decision-making, and 18 evaluative questions (Likert scale). RESULTS: The sum of correct answers was significantly higher in the 3D (7.1 ± 1.4, p < 0.001) and VR (7.1 ± 1.4, p < 0.001) groups than in the 2D group (5.4 ± 1.4), with no difference between 3D and VR (p = 0.987). Times to answer were significantly shorter in the 3D (6:44 ± 2:22 min, p < 0.001) and VR (6:24 ± 2:43 min, p < 0.001) groups than in the 2D group (9:13 ± 3:10 min), with no difference between 3D and VR (p = 0.419). In the questionnaire, the VR environment was rated most useful for identifying anatomic anomalies, risk and target structures, and for transferring anatomical and pathological information to the intraoperative situation.
CONCLUSIONS: A VR environment with 3D visualization using an HMD is useful as a surgical training tool to accurately and quickly determine liver anatomy and tumor involvement.
Affiliation(s)
- Anas Amin Preukschas: Department of General, Visceral and Transplantation Surgery, University of Heidelberg, Im Neuenheimer Feld 672, 69120 Heidelberg, Germany; Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20246 Hamburg, Germany
- Philipp Anthony Wise: Department of General, Visceral and Transplantation Surgery, University of Heidelberg, Im Neuenheimer Feld 672, 69120 Heidelberg, Germany
- Lisa Bettscheider: Department of General, Visceral and Transplantation Surgery, University of Heidelberg, Im Neuenheimer Feld 672, 69120 Heidelberg, Germany
- Micha Pfeiffer: Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Kaiserstrasse 12, 76131 Karlsruhe, Germany; Department for Translational Surgical Oncology, National Center for Tumor Diseases, Fiedlerstraße 23, 01307 Dresden, Germany
- Martin Wagner: Department of General, Visceral and Transplantation Surgery, University of Heidelberg, Im Neuenheimer Feld 672, 69120 Heidelberg, Germany
- Matthias Huber: Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Kaiserstrasse 12, 76131 Karlsruhe, Germany
- Mohammad Golriz: Department of General, Visceral and Transplantation Surgery, University of Heidelberg, Im Neuenheimer Feld 672, 69120 Heidelberg, Germany
- Lars Fischer: Department of Surgery, Hospital Mittelbaden, Balgerstrasse 50, 76532 Baden-Baden, Germany
- Arianeb Mehrabi: Department of General, Visceral and Transplantation Surgery, University of Heidelberg, Im Neuenheimer Feld 672, 69120 Heidelberg, Germany
- Fabian Rössler: Department of Surgery and Transplantation, University Hospital of Zürich, Rämistrasse 100, 8091 Zurich, Switzerland
- Stefanie Speidel: Department for Translational Surgical Oncology, National Center for Tumor Diseases, Fiedlerstraße 23, 01307 Dresden, Germany
- Thilo Hackert: Department of General, Visceral and Transplantation Surgery, University of Heidelberg, Im Neuenheimer Feld 672, 69120 Heidelberg, Germany; Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20246 Hamburg, Germany
- Beat Peter Müller-Stich: Division of Abdominal Surgery, Clarunis Academic Centre of Gastrointestinal Diseases, St. Clara and University Hospital of Basel, Petersgraben 4, 4051 Basel, Switzerland
- Felix Nickel: Department of General, Visceral and Transplantation Surgery, University of Heidelberg, Im Neuenheimer Feld 672, 69120 Heidelberg, Germany; Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20246 Hamburg, Germany
- Hannes Götz Kenngott: Department of General, Visceral and Transplantation Surgery, University of Heidelberg, Im Neuenheimer Feld 672, 69120 Heidelberg, Germany
5. Pace-Asciak P, Tufano RP. Future Directions in the Treatment of Thyroid and Parathyroid Disease. Otolaryngol Clin North Am 2024; 57:155-170. PMID: 37634983. DOI: 10.1016/j.otc.2023.07.013.
Abstract
The surgical management of thyroid and parathyroid disease has evolved considerably since the era of Theodor Kocher. We review current trends in thyroid and parathyroid surgery: robotic surgery for remote access, parathyroid autofluorescence detection to aid in preventing hypocalcemia, and thermal ablation to target thyroid nodules in a minimally invasive way. We also discuss how artificial intelligence is being used to improve preoperative workflow and diagnostics as well as intraoperative decision-making, and highlight areas where future research may enhance outcomes.
Affiliation(s)
- Pia Pace-Asciak: Department of Otolaryngology-Head and Neck Surgery, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Ralph P Tufano: Sarasota Memorial Health Care System Multidisciplinary Thyroid and Parathyroid Center, Sarasota, FL, USA
6. Li C, Ji A, Jian Z, Zheng Y, Feng X, Guo W, Lerut T, Lin J, Li H. Augmented reality navigation-guided intraoperative pulmonary nodule localization: a pilot study. Transl Lung Cancer Res 2023; 12:1728-1737. PMID: 37691871. PMCID: PMC10483087. DOI: 10.21037/tlcr-23-201.
Abstract
Background: With the increasing number of small pulmonary nodules detected, their effective localization has become an issue. The goal of this study was to determine the safety and feasibility of a newly developed augmented reality navigation technology for intraoperative localization of small pulmonary nodules. Methods: We conducted a prospective single-center feasibility study of a novel augmented reality navigation system and lung localization (LungBrella) marker in ten patients between July and October 2020. A preoperative chest computed tomography scan was used to generate three-dimensional (3D) virtual images and an individualized localization plan, which were uploaded into the HoloLens (a head-mounted augmented reality device). Under the guidance of the procedure plan displayed by the HoloLens, the localization marker was placed in the operating room, and segmentectomy or wedge resection was subsequently performed. The primary endpoint was the localization success rate; secondary endpoints were localization time, operation time, and complications. Results: Localization was successful in seven of the ten procedures. Failures were noted in three cases for different reasons, after which immediate adjustments were made. In the successful cases, the LungBrella marker was positioned at a median of 5.8 mm (range, 0-10 mm) from the edge of the nodule. Median localization time was 9.4 min (range, 5-19 min), and median operation time was 172.9 min (range, 105-200 min). There were no complications during the entire process. Conclusions: This exploratory study suggests that augmented reality navigation-guided pulmonary nodule localization is a safe and feasible technique (ClinicalTrials.gov identifier NCT04211051).
Affiliation(s)
- Chengqiang Li: Department of Thoracic Surgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Anqi Ji: Department of Thoracic Surgery, Cancer Hospital of Guangxi Medical University, Nanning, China
- Zheng Jian: Department of Surgery, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, China
- Yuyan Zheng: Department of Thoracic Surgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xijia Feng: Department of Thoracic Surgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Wei Guo: Department of Thoracic Surgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Toni Lerut: Department of Thoracic Surgery, University Hospital Leuven, Leuven, Belgium
- Jules Lin: Section of Thoracic Surgery, University of Michigan Medical Center, Ann Arbor, MI, USA
- Hecheng Li: Department of Thoracic Surgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
7. Seetohul J, Shafiee M, Sirlantzis K. Augmented Reality (AR) for Surgical Robotic and Autonomous Systems: State of the Art, Challenges, and Solutions. Sensors (Basel) 2023; 23:6202. PMID: 37448050. DOI: 10.3390/s23136202.
Abstract
Despite the substantial progress achieved in developing and integrating augmented reality (AR) into surgical robotic and autonomous systems (RAS), most devices remain focused on improving end-effector dexterity and precision and on improving access for minimally invasive surgery. This paper provides a systematic review of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology for performing complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, pose estimation, and depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline shortcomings in current optimization algorithms for surgical robots (such as YOLO and LSTM) while proposing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
Affiliation(s)
- Jenna Seetohul: Mechanical Engineering Group, School of Engineering, University of Kent, Canterbury CT2 7NT, UK
- Mahmood Shafiee: Mechanical Engineering Group, School of Engineering, University of Kent, Canterbury CT2 7NT, UK; School of Mechanical Engineering Sciences, University of Surrey, Guildford GU2 7XH, UK
- Konstantinos Sirlantzis: School of Engineering, Technology and Design, Canterbury Christ Church University, Canterbury CT1 1QU, UK; Intelligent Interactions Group, School of Engineering, University of Kent, Canterbury CT2 7NT, UK
8. Hashemi N, Svendsen MBS, Bjerrum F, Rasmussen S, Tolsgaard MG, Friis ML. Acquisition and usage of robotic surgical data for machine learning analysis. Surg Endosc 2023. PMID: 37389741. PMCID: PMC10338401. DOI: 10.1007/s00464-023-10214-7.
Abstract
BACKGROUND: The increasing use of robot-assisted surgery (RAS) has created a need for new methods of assessing whether new surgeons are qualified to perform RAS, without the resource-demanding process of having expert surgeons do the assessment. Computer-based automation and artificial intelligence (AI) are promising alternatives to expert-based surgical assessment. However, no standard protocols or methods for preparing data and implementing AI are available to clinicians, which may be among the reasons AI has not yet entered the clinical setting. METHODS: We tested our method on porcine models with both the da Vinci Si and the da Vinci Xi. We captured raw video data from the surgical robots and 3D movement data from the surgeons, and prepared the data for use in AI with a structured guide comprising the following steps: capturing image data from the surgical robot, extracting event data, capturing movement data of the surgeon, and annotating image data. RESULTS: Fifteen participants (11 novices and 4 experienced) performed 10 different intraabdominal RAS procedures. Using this method we captured 188 videos (94 from the surgical robot, and 94 corresponding videos of the surgeons' arm and hand movements). Event data, movement data, and labels were extracted from the raw material and prepared for use in AI. CONCLUSION: With the described methods, we could collect, prepare, and annotate images, events, and motion data from surgical robotic systems in preparation for use in AI.
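The data-preparation steps this abstract lists end with annotation of image data. A minimal sketch of that last step, aligning per-frame timestamps with event annotations to produce frame-level labels, might look like the following (a hypothetical illustration; the function name, the "background" label scheme, and the event-tuple format are not from the paper):

```python
def label_frames(n_frames, fps, events):
    """Assign an event label to every video frame.

    events: list of (start_s, end_s, name) annotation intervals in
    seconds; frames outside any interval are labelled 'background'.
    """
    labels = ["background"] * n_frames
    for start_s, end_s, name in events:
        first = int(start_s * fps)
        last = min(int(end_s * fps), n_frames)
        for i in range(first, last):
            labels[i] = name
    return labels

# Example: a 10-frame clip at 1 fps with one annotated suturing event
# spanning seconds 2-5, i.e. frames 2, 3 and 4.
frame_labels = label_frames(10, 1.0, [(2.0, 5.0, "suturing")])
```

Pairs of (frame, label) produced this way are the usual input for supervised training on surgical video.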
Affiliation(s)
- Nasseh Hashemi: Department of Clinical Medicine, Aalborg University Hospital, Aalborg, Denmark; Nordsim-Centre for Skills Training and Simulation, Aalborg, Denmark; ROCnord-Robot Centre, Aalborg University Hospital, Aalborg, Denmark; Department of Urology, Aalborg University Hospital, Aalborg, Denmark
- Morten Bo Søndergaard Svendsen: Copenhagen Academy for Medical Education and Simulation, Center for Human Resources and Education, Copenhagen, Denmark; Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Flemming Bjerrum: Copenhagen Academy for Medical Education and Simulation, Center for Human Resources and Education, Copenhagen, Denmark; Department of Gastrointestinal and Hepatic Diseases, Copenhagen University Hospital - Herlev and Gentofte, Herlev, Denmark
- Sten Rasmussen: Department of Clinical Medicine, Aalborg University Hospital, Aalborg, Denmark
- Martin G Tolsgaard: Nordsim-Centre for Skills Training and Simulation, Aalborg, Denmark; Copenhagen Academy for Medical Education and Simulation, Center for Human Resources and Education, Copenhagen, Denmark
- Mikkel Lønborg Friis: Department of Clinical Medicine, Aalborg University Hospital, Aalborg, Denmark; Nordsim-Centre for Skills Training and Simulation, Aalborg, Denmark
9. Ludwig B, Ludwig M, Dziekiewicz A, Mikuła A, Cisek J, Biernat S, Kaliszewski K. Modern Surgical Techniques of Thyroidectomy and Advances in the Prevention and Treatment of Perioperative Complications. Cancers (Basel) 2023; 15:2931. PMID: 37296896. DOI: 10.3390/cancers15112931.
Abstract
Thyroid cancer is the most common cancer of the endocrine system, and recent years have seen a phenomenon of overdiagnosis followed by overtreatment, resulting in an increasing number of thyroidectomy complications in clinical practice. In this paper, we present the current state of knowledge and the latest findings on modern surgical techniques, thermal ablation, identification and assessment of parathyroid function, recurrent laryngeal nerve monitoring and treatment, and perioperative bleeding. We reviewed 485 papers, from which we selected the 125 most relevant. The main merit of this article is its comprehensive view of the subject: both general, concerning selection of the appropriate surgical method, and particular, concerning prevention or treatment of selected perioperative complications.
Affiliation(s)
- Bartłomiej Ludwig: Department of General, Minimally Invasive and Endocrine Surgery, Wroclaw Medical University, Borowska Street 213, 50-556 Wroclaw, Poland
- Maksymilian Ludwig: Department of General, Minimally Invasive and Endocrine Surgery, Wroclaw Medical University, Borowska Street 213, 50-556 Wroclaw, Poland
- Anna Dziekiewicz: Department of General, Minimally Invasive and Endocrine Surgery, Wroclaw Medical University, Borowska Street 213, 50-556 Wroclaw, Poland
- Agnieszka Mikuła: Department of General, Minimally Invasive and Endocrine Surgery, Wroclaw Medical University, Borowska Street 213, 50-556 Wroclaw, Poland
- Jakub Cisek: Department of General, Minimally Invasive and Endocrine Surgery, Wroclaw Medical University, Borowska Street 213, 50-556 Wroclaw, Poland
- Szymon Biernat: Department of General, Minimally Invasive and Endocrine Surgery, Wroclaw Medical University, Borowska Street 213, 50-556 Wroclaw, Poland
- Krzysztof Kaliszewski: Department of General, Minimally Invasive and Endocrine Surgery, Wroclaw Medical University, Borowska Street 213, 50-556 Wroclaw, Poland
10. Zhou J, Muirhead W, Williams SC, Stoyanov D, Marcus HJ, Mazomenos EB. Shifted-windows transformers for the detection of cerebral aneurysms in microsurgery. Int J Comput Assist Radiol Surg 2023. DOI: 10.1007/s11548-023-02871-9.
Abstract
Purpose
Microsurgical Aneurysm Clipping Surgery (MACS) carries a high risk for intraoperative aneurysm rupture. Automated recognition of instances when the aneurysm is exposed in the surgical video would be a valuable reference point for neuronavigation, indicating phase transitioning and more importantly designating moments of high risk for rupture. This article introduces the MACS dataset containing 16 surgical videos with frame-level expert annotations and proposes a learning methodology for surgical scene understanding identifying video frames with the aneurysm present in the operating microscope’s field-of-view.
Methods
Despite the dataset imbalance (80% no presence, 20% presence), we demonstrate the applicability of Transformer-based deep learning architectures (MACSSwin-T, vidMACSSwin-T), developed without explicit aneurysm annotations, to detect the aneurysm and classify MACS frames accordingly. We evaluate the proposed models in multiple-fold cross-validation experiments with independent sets and in an unseen set of 15 images against 10 human experts (neurosurgeons).
Results
Average (across folds) accuracy of 80.8% (range 78.5-82.4%) and 87.1% (range 85.1-91.3%) was obtained for the image- and video-level approaches, respectively, demonstrating that the models effectively learn the classification task. Qualitative evaluation of the models' class activation maps shows them to be localized on the aneurysm's actual location. Depending on the decision threshold, MACSSwin-T achieves 66.7-86.7% accuracy on the unseen images, compared to 82% for the human raters, with moderate to strong correlation.
Conclusions
Proposed architectures show robust performance and, with an adjusted threshold promoting detection of the underrepresented (aneurysm presence) class, accuracy comparable to that of human experts. Our work represents a first step towards landmark detection in MACS, with the aim of informing surgical teams to attend to high-risk moments and take precautionary measures to avoid rupture.
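The threshold adjustment mentioned in these conclusions, lowering the positive-class decision threshold to promote detection of the rare "aneurysm present" class, can be illustrated with a small sketch (hypothetical scores, not the paper's data; the function name is our own):

```python
import numpy as np

def sensitivity_and_accuracy(probs, labels, thr):
    """Binarize positive-class probabilities at `thr`; return
    (sensitivity on the positive class, overall accuracy)."""
    preds = (probs >= thr).astype(int)
    tp = np.sum((preds == 1) & (labels == 1))
    sensitivity = tp / max(np.sum(labels == 1), 1)
    return sensitivity, np.mean(preds == labels)

# Illustrative scores on an 80/20 imbalanced set: lowering the
# threshold below 0.5 raises sensitivity on the rare positive class.
labels = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])
probs = np.array([.10, .20, .15, .30, .45, .20, .10, .60, .40, .70])
sens_default, acc_default = sensitivity_and_accuracy(probs, labels, 0.50)
sens_adjusted, acc_adjusted = sensitivity_and_accuracy(probs, labels, 0.35)
```

The trade-off is the usual one: extra true positives on the rare class come at the cost of extra false positives on the majority class, so overall accuracy can stay flat or drop even as sensitivity rises.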
11.
Abstract
Augmented reality (AR) is an innovative system that enhances the real world by superimposing virtual objects on reality. The aim of this study was to analyze the application of AR in medicine and which of its technical solutions are the most used. We carried out a scoping review of the articles published between 2019 and February 2022. The initial search yielded a total of 2649 articles. After applying filters, removing duplicates and screening, we included 34 articles in our analysis. The analysis of the articles highlighted that AR has been traditionally and mainly used in orthopedics in addition to maxillofacial surgery and oncology. Regarding the display application in AR, the Microsoft HoloLens Optical Viewer is the most used method. Moreover, for the tracking and registration phases, the marker-based method with a rigid registration remains the most used system. Overall, the results of this study suggested that AR is an innovative technology with numerous advantages, finding applications in several new surgery domains. Considering the available data, it is not possible to clearly identify all the fields of application and the best technologies regarding AR.
12. Youn JK, Lee D, Ko D, Yeom I, Joo HJ, Kim HC, Kong HJ, Kim HY. Augmented Reality-Based Visual Cue for Guiding Central Catheter Insertion in Pediatric Oncologic Patients. World J Surg 2022; 46:942-948. PMID: 35006323. DOI: 10.1007/s00268-021-06425-5.
Abstract
BACKGROUND: Pediatric hemato-oncologic patients require central catheters for chemotherapy, and the junction of the superior vena cava and right atrium is considered the ideal location for catheter tips. Skin landmarks and fluoroscopic support have been used to identify the cavoatrial junction, but none is recognized as the gold standard. We therefore aimed to develop a safe and accurate technique using augmented reality technology to locate the cavoatrial junction in pediatric hemato-oncologic patients. METHODS: Fifteen oncology patients who underwent chest computed tomography were enrolled for Hickman catheter or chemoport insertion. With the aid of augmented reality technology, three-dimensional models of the internal jugular veins, external jugular veins, subclavian veins, superior vena cava, and right atrium were constructed. On inserting the central vein catheters, the cavoatrial junction identified using the three-dimensional models was marked on the body surface, the tip was positioned at the corresponding location, and the actual insertion location was confirmed using a portable x-ray machine. The proposed method was evaluated by comparing the distance from the cavoatrial junction to the augmented reality location with that to the conventional location on x-ray. RESULTS: The mean distance between the cavoatrial junction and the augmented reality location on x-ray was 1.2 cm, significantly shorter than that between the cavoatrial junction and the conventional location (1.9 cm; P = 0.027). CONCLUSIONS: Central catheter insertion using augmented reality technology is safer and more accurate than conventional methods and can be performed at no additional cost in oncology patients.
Affiliation(s)
- Joong Kee Youn
- Department of Pediatric Surgery, Seoul National University Hospital, Seoul, Republic of Korea; Department of Pediatric Surgery, Seoul National University College of Medicine, 101 Daehak-ro, Jongro-gu, Seoul, 03080, Republic of Korea
- Dongheon Lee
- Department of Biomedical Engineering, Chungnam National University College of Medicine and Hospital, Daejeon, Republic of Korea
- Dayoung Ko
- Department of Pediatric Surgery, Seoul National University Hospital, Seoul, Republic of Korea
- Inhwa Yeom
- Transdisciplinary Department of Medicine and Advanced Technology, Seoul National University Hospital, 101 Daehak-ro, Jongro-gu, Seoul, 03080, Republic of Korea
- Hyun-Jin Joo
- Transdisciplinary Department of Medicine and Advanced Technology, Seoul National University Hospital, 101 Daehak-ro, Jongro-gu, Seoul, 03080, Republic of Korea
- Hee Chan Kim
- Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, Republic of Korea; Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University College of Medicine, Seoul, Republic of Korea
- Hyoun-Joong Kong
- Transdisciplinary Department of Medicine and Advanced Technology, Seoul National University Hospital, 101 Daehak-ro, Jongro-gu, Seoul, 03080, Republic of Korea
- Hyun-Young Kim
- Department of Pediatric Surgery, Seoul National University College of Medicine, 101 Daehak-ro, Jongro-gu, Seoul, 03080, Republic of Korea
13
Yu HW, Lee D, Lee K, Kim SJ, Chai YJ, Kim HC, Choi JY, Lee KE. Effect of an anti-adhesion agent on vision-based assessment of cervical adhesions after thyroid surgery: randomized, placebo-controlled trial. Sci Rep 2021; 11:19935. [PMID: 34620907 PMCID: PMC8497539 DOI: 10.1038/s41598-021-97919-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2021] [Accepted: 09/01/2021] [Indexed: 11/23/2022] Open
Abstract
Many patients experience cervical adhesions after thyroid surgery. To date, however, no studies have objectively measured the effects of anti-adhesion agents on cervical adhesion symptoms. This study evaluated the effects of an anti-adhesion agent on cervical adhesions after thyroid surgery using a system that objectively measures the extent of skin-marker movement. One hundred patients were randomized in a 1:1 ratio to undergo thyroid surgery with or without the anti-adhesion agent Collabarrier. Using specially manufactured recording equipment, the position of a marker on the neck skin was measured before surgery and 2 weeks, 3 months, and 9 months after surgery. The relative change in marker distance, calculated by subtracting the preoperative marker position from the positions 2 weeks, 3 months, and 9 months after surgery, differed significantly between the groups treated with and without the anti-adhesion agent (P < 0.05). This novel measuring system can objectively evaluate the effectiveness of a thyroid anti-adhesion agent, and the agent used significantly reduced adhesions compared with the control group. The trial is registered at www.cris.nih.go.kr (KCT0005745; date of registration, 08/01/2021).
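The outcome measure in this entry is a simple relative change: each postoperative marker position minus the preoperative baseline. A sketch of that calculation, with invented positions rather than trial data:

```python
# Hypothetical marker positions (mm) for one patient; illustrative only,
# not the trial's measurements.
baseline_mm = 10.0
followup_mm = {"2 weeks": 14.5, "3 months": 12.0, "9 months": 11.0}

# Relative change at each visit = follow-up position minus the
# preoperative baseline position.
relative_change = {visit: pos - baseline_mm for visit, pos in followup_mm.items()}

print(relative_change)
```

In the trial, these per-visit changes were then compared between the treated and control groups for significance.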
Affiliation(s)
- Hyeong Won Yu
- Department of Surgery, Seoul National University Bundang Hospital, Seongnam-si, Korea
- Dongheon Lee
- Department of Biomedical Engineering, Chungnam National University College of Medicine and Hospital, Daejeon, Korea
- Keunchul Lee
- Department of Surgery, Seoul National University Bundang Hospital, Seongnam-si, Korea
- Su-Jin Kim
- Department of Surgery, Seoul National University Hospital and College of Medicine, Seoul, Korea
- Young Jun Chai
- Department of Surgery, Seoul National University Boramae Medical Center, Seoul, Korea
- Hee Chan Kim
- Department of Biomedical Engineering and Institute of Medical & Biological Engineering, Medical Research Center, Seoul National University College of Medicine and Hospital, Seoul, Korea
- June Young Choi
- Department of Surgery, Seoul National University Bundang Hospital, Seongnam-si, Korea
- Kyu Eun Lee
- Department of Surgery, Seoul National University Hospital and College of Medicine, Seoul, Korea
14
Using deep learning to identify the recurrent laryngeal nerve during thyroidectomy. Sci Rep 2021; 11:14306. [PMID: 34253767 PMCID: PMC8275665 DOI: 10.1038/s41598-021-93202-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2021] [Accepted: 06/22/2021] [Indexed: 11/16/2022] Open
Abstract
Surgeons must visually distinguish soft tissues, such as nerves, from the surrounding anatomy to prevent complications and optimize patient outcomes. An accurate nerve segmentation and analysis tool could provide useful insight for surgical decision-making. Here, we present an end-to-end, automatic deep learning computer vision algorithm to segment and measure nerves. Unlike traditional medical imaging, our unconstrained setup with accessible handheld digital cameras, together with the unstructured open-surgery scene, makes this task uniquely challenging. We investigate one common procedure, thyroidectomy, during which surgeons must avoid damaging the recurrent laryngeal nerve (RLN), which is responsible for human speech. We evaluate our segmentation algorithm on a diverse dataset spanning varied and challenging operating-room image-capture settings and show strong segmentation performance under the optimal capture condition. This work lays the foundation for future research in real-time tissue discrimination and the integration of accessible, intelligent tools into open surgery to provide actionable insights.
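Segmentation performance in work like this is typically scored with an overlap metric between the predicted and ground-truth masks. A minimal Dice-coefficient sketch over binary masks (an illustrative metric implementation, not the paper's evaluation code):

```python
def dice(pred, truth):
    """Dice coefficient between two binary masks given as flat 0/1 lists:
    2 * |intersection| / (|pred| + |truth|)."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Two empty masks agree perfectly by convention.
    return 2.0 * intersection / total if total else 1.0

# Toy flattened "masks": 1 marks pixels labeled as nerve.
pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 0, 1, 1, 1, 0]

print(dice(pred, truth))  # 2*2 / (3+3)
```

A score of 1.0 means perfect overlap; 0.0 means no overlap at all.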
15
Meng FH, Zhu ZH, Lei ZH, Zhang XH, Shao L, Zhang HZ, Zhang T. Feasibility of the application of mixed reality in mandible reconstruction with fibula flap: A cadaveric specimen study. J Stomatol Oral Maxillofac Surg 2021; 122:e45-e49. [PMID: 33434746 DOI: 10.1016/j.jormas.2021.01.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/14/2020] [Revised: 12/02/2020] [Accepted: 01/04/2021] [Indexed: 11/16/2022]
Abstract
BACKGROUND In recent years, mixed reality (MR) has emerged as a technology that overcomes a key limitation of augmented reality (AR): the inability to interact with holograms. This study investigated the feasibility of applying MR to mandible reconstruction with a fibula flap. METHODS Computed tomography (CT) was performed on one cadaveric mandible and ten fibula bones. Using the professional software ProPlan CMF 3.0 (Materialise, Leuven, Belgium), we created a defective mandibular model and simulated the reconstruction design with the ten fibula bones. The surgical plans were transferred to a HoloLens, which was used to guide the osteotomy and shaping of the fibular bone. After the fibular segments were fixed with the titanium template, all segments underwent CT examination. The preoperative plans and postoperative results were compared by measuring the locations of the fibular osteotomies, the angular deviations of the fibular segments, and the intergonial distances. RESULTS The mean deviations in the location of the fibular osteotomies, the angulation of the fibular segments, and the intergonial distance were 2.11 ± 1.31 mm, 2.85° ± 1.97°, and 7.24 ± 3.42 mm, respectively. CONCLUSION The experimental results revealed that slight deviations remain in the accuracy of fibular osteotomy. With further development of the technology, MR has the potential to improve the efficiency and precision of reconstructive surgery.
Affiliation(s)
- F H Meng
- Chinese PLA General Hospital, Department of Oral and Maxillofacial Surgery, 100853, Beijing, China
- Z H Zhu
- Peking Union Medical College Hospital, Department of Oral and Maxillofacial Surgery, 100730, Beijing, China
- Z H Lei
- Peking Union Medical College Hospital, Department of Oral and Maxillofacial Surgery, 100730, Beijing, China
- X H Zhang
- Shenzhen Luohu Hospital Group Luohu People's Hospital, Department of Oral and Maxillofacial Surgery, 518020, Shenzhen, China
- L Shao
- Beijing Institute of Technology, Optoelectronic College, 100081, Beijing, China
- H Z Zhang
- Chinese PLA General Hospital, Department of Oral and Maxillofacial Surgery, 100853, Beijing, China
- T Zhang
- Peking Union Medical College Hospital, Department of Oral and Maxillofacial Surgery, 100730, Beijing, China
16
Evaluation of Surgical Skills during Robotic Surgery by Deep Learning-Based Multiple Surgical Instrument Tracking in Training and Actual Operations. J Clin Med 2020; 9:jcm9061964. [PMID: 32585953 PMCID: PMC7355689 DOI: 10.3390/jcm9061964] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2020] [Revised: 06/13/2020] [Accepted: 06/15/2020] [Indexed: 12/17/2022] Open
Abstract
As the number of robotic surgery procedures has increased, so has the importance of evaluating surgical skills in these techniques. It is difficult, however, to evaluate surgical skills during robotic surgery automatically and quantitatively, as these skills are primarily reflected in the movement of the surgical instruments. This study proposes a deep learning-based surgical instrument tracking algorithm for evaluating surgeons' skills in robotic surgery. The method addresses two main difficulties: occlusion and maintaining the identity of each surgical instrument. In addition, surgical skill prediction models were developed using motion metrics calculated from the instrument trajectories. The tracking method was applied to 54 video segments and evaluated by root mean squared error (RMSE), area under the curve (AUC), and Pearson correlation analysis. The RMSE was 3.52 mm; the AUCs at thresholds of 1 mm, 2 mm, and 5 mm were 0.7, 0.78, and 0.86, respectively; and Pearson's correlation coefficients were 0.9 on the x-axis and 0.87 on the y-axis. The surgical skill prediction models achieved 83% accuracy against the Objective Structured Assessment of Technical Skill (OSATS) and the Global Evaluative Assessment of Robotic Surgery (GEARS). The proposed method was able to track instruments during robotic surgery, suggesting that surgical skill assessment currently performed by surgeons could be replaced by this automatic and quantitative method.
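The tracking metrics in this abstract, RMSE over per-frame position errors and the fraction of frames within a distance threshold (from which the threshold-sweep AUC is built), can be sketched for a 2-D trajectory. The coordinates below are invented for illustration:

```python
import math

# Hypothetical tracked vs. ground-truth 2-D instrument-tip positions (mm)
# over four frames; illustrative only, not the study's data.
pred  = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.5), (4.0, 3.0)]
truth = [(0.0, 0.5), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]

# Per-frame Euclidean distance between prediction and ground truth.
errors = [math.dist(p, t) for p, t in zip(pred, truth)]

# Root mean squared error over the per-frame errors.
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))

def within(threshold):
    """Fraction of frames whose error is at most `threshold` mm; sweeping
    the threshold and integrating yields an AUC-style tracking score."""
    return sum(e <= threshold for e in errors) / len(errors)

print(f"RMSE: {rmse:.3f} mm, within 1 mm: {within(1.0):.2f}")
```

The Pearson correlations reported per axis would be computed analogously on the x and y coordinate series of the two trajectories.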