1. Schmidt A, Mohareri O, DiMaio SP, Salcudean SE. Surgical Tattoos in Infrared: A Dataset for Quantifying Tissue Tracking and Mapping. IEEE Trans Med Imaging 2024; 43:2634-2645. PMID: 38437151. DOI: 10.1109/tmi.2024.3372828.
Abstract
Quantifying the performance of methods for tracking and mapping tissue in endoscopic environments is essential for enabling image guidance and automation of medical interventions and surgery. Datasets developed so far either use rigid environments, visible markers, or require annotators to label salient points in videos after collection. These are, respectively, not general, visible to algorithms, or costly and error-prone. We introduce a novel labeling methodology, along with a dataset that uses it: Surgical Tattoos in Infrared (STIR). STIR has labels that are persistent but invisible to visible-spectrum algorithms. This is done by labeling tissue points with an IR-fluorescent dye, indocyanine green (ICG), and then collecting visible-light video clips. STIR comprises hundreds of stereo video clips of both in vivo and ex vivo scenes with start and end points labeled in the IR spectrum. With over 3,000 labeled points, STIR will help quantify and enable better analysis of tracking and mapping methods. After introducing STIR, we analyze multiple frame-based tracking methods on STIR using both 3D and 2D endpoint error and accuracy metrics. STIR is available at https://dx.doi.org/10.21227/w8g4-g548.
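The 2D/3D endpoint error and threshold-accuracy metrics used to benchmark trackers on such a dataset can be sketched as follows. This is a minimal illustration with hypothetical function names and toy data, not the authors' evaluation code:

```python
import numpy as np

def endpoint_errors(pred, gt):
    """Euclidean distance between predicted and ground-truth endpoints.

    pred, gt: (N, D) arrays of point coordinates; D = 2 for image-space
    error (pixels) or D = 3 for stereo-triangulated error (mm).
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    return np.linalg.norm(pred - gt, axis=1)

def accuracy_at(errors, thresholds):
    """Fraction of tracked points whose error falls within each threshold."""
    errors = np.asarray(errors, dtype=float)
    return {t: float(np.mean(errors <= t)) for t in thresholds}

# Toy example: three tracked points in 2D (pixel coordinates).
pred = [[10.0, 10.0], [52.0, 48.0], [100.0, 90.0]]
gt   = [[10.0, 12.0], [50.0, 50.0], [100.0, 100.0]]
errs = endpoint_errors(pred, gt)            # [2.0, 2.83, 10.0]
acc  = accuracy_at(errs, thresholds=(4, 8, 16))
```

Reporting accuracy at several thresholds, rather than a single mean error, makes the comparison robust to a few badly lost tracks.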
2. Huang Z, Li H, Shao S, Zhu H, Hu H, Cheng Z, Wang J, Kevin Zhou S. PELE scores: pelvic X-ray landmark detection with pelvis extraction and enhancement. Int J Comput Assist Radiol Surg 2024; 19:939-950. PMID: 38491244. DOI: 10.1007/s11548-024-03089-z.
Abstract
PURPOSE Pelvic X-ray (PXR) is widely utilized in clinical decision-making associated with the pelvis, the lower part of the trunk that supports and balances the upper body. In particular, PXR-based landmark detection facilitates downstream analysis and computer-assisted diagnosis and treatment of pelvic diseases. Although PXR has the advantages of low radiation and reduced cost compared to computed tomography (CT), it projects 3D structures into a 2D superposition of pelvis and soft tissue, which may affect the accuracy of landmark detection in some cases. Existing deep learning-based landmark detection methods handle this superposition only implicitly, focusing mainly on designing deep network structures for better detection performance; explicit handling of the superposition in PXR is rare. METHODS In this paper, we explicitly address the superposition in X-ray images. Specifically, we propose a pelvis extraction (PELE) module consisting of a decomposition network, a domain adaptation network, and an enhancement module, which utilizes 3D prior anatomical knowledge from CT to guide the isolation of the pelvis from the PXR, thereby eliminating the influence of soft tissue on landmark detection. The extracted pelvis image, after enhancement, is then used for landmark detection. RESULTS We conduct an extensive evaluation on two public datasets and one private dataset, totaling 850 PXRs. The experimental results show that the proposed PELE module significantly improves the accuracy of PXR landmark detection and achieves state-of-the-art performance on several benchmark metrics. CONCLUSION The PELE module improves the accuracy of different pelvic landmark detection baselines, which we believe is conducive to the localization and inspection of clinical landmarks and critical structures, thus better serving downstream tasks. Our project has been open-sourced at https://github.com/ECNUACRush/PELEscores.
Affiliation(s)
- Zhen Huang
- Computer Science Department, University of Science and Technology of China (USTC), Hefei, 230026, China
- Center for Medical Imaging, Robotics, Analytic Computing and Learning (MIRACLE), Suzhou Institute for Advanced Research, USTC, Suzhou, 215123, China
- Han Li
- School of Biomedical Engineering, Division of Life Sciences and Medicine, USTC, Hefei, 230026, China
- Center for Medical Imaging, Robotics, Analytic Computing and Learning (MIRACLE), Suzhou Institute for Advanced Research, USTC, Suzhou, 215123, China
- Heqin Zhu
- School of Biomedical Engineering, Division of Life Sciences and Medicine, USTC, Hefei, 230026, China
- Center for Medical Imaging, Robotics, Analytic Computing and Learning (MIRACLE), Suzhou Institute for Advanced Research, USTC, Suzhou, 215123, China
- Huijie Hu
- Computer Science Department, University of Science and Technology of China (USTC), Hefei, 230026, China
- Center for Medical Imaging, Robotics, Analytic Computing and Learning (MIRACLE), Suzhou Institute for Advanced Research, USTC, Suzhou, 215123, China
- Jianji Wang
- Affiliated Hospital of Guizhou Medical University, Guiyang, 550000, China
- S Kevin Zhou
- School of Biomedical Engineering, Division of Life Sciences and Medicine, USTC, Hefei, 230026, China.
- Center for Medical Imaging, Robotics, Analytic Computing and Learning (MIRACLE), Suzhou Institute for Advanced Research, USTC, Suzhou, 215123, China.
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing, 100190, China.
3. Melchior C, Isfort P, Braunschweig T, Witjes M, Van den Bosch V, Rashad A, Egger J, de la Fuente M, Röhrig R, Hölzle F, Puladi B. Development and validation of a cadaveric porcine pseudotumor model for oral cancer biopsy and resection training. BMC Med Educ 2024; 24:250. PMID: 38500112. PMCID: PMC10949621. DOI: 10.1186/s12909-024-05224-5.
Abstract
OBJECTIVE The gold standard of oral cancer (OC) treatment is diagnostic confirmation by biopsy followed by surgical treatment. However, studies have shown that dentists have difficulty performing biopsies, dental students lack knowledge about OC, and surgeons do not always maintain a safe margin during tumor resection. To address this, biopsies and resections could be trained under realistic conditions outside the patient. The aim of this study was to develop and validate a porcine pseudotumor model of the tongue. METHODS An interdisciplinary team reflecting the specialties involved in head and neck oncology developed a porcine pseudotumor model of the tongue on which biopsies and resections can be practiced. The refined model was validated in a final trial of 10 participants who each resected four pseudotumors on a tongue, yielding 40 resected pseudotumors in total. The participants (7 residents and 3 specialists) had experience in OC treatment ranging from 0.5 to 27 years. Minimum and maximum resection margins were assessed macroscopically and compared, alongside self-assessed margins and resection time, between residents and specialists. Furthermore, the model was evaluated using Likert-type questions on haptic and radiological fidelity, its usefulness as a training model, and its imageability using CT and ultrasound. RESULTS The model haptically resembles OC (3.0 ± 0.5 on a 4-point Likert scale), can be visualized with medical imaging, and can be evaluated macroscopically immediately after resection, providing feedback. Although participants tended to agree that they had resected the pseudotumor with an ideal safety margin of 10 mm (3.2 ± 0.4), the mean minimum resection margin was insufficient at 4.2 ± 1.2 mm (mean ± SD), comparable to margins reported in the literature. At the same time, a maximum resection margin of 18.4 ± 6.1 mm was measured, indicating partial over-resection. Although specialists were faster at resection (p < 0.001), this had no effect on margins (p = 0.114). Overall, the model was well received by the participants, who could see it being implemented in training (3.7 ± 0.5). CONCLUSION The model, which is cost-effective, cryopreservable, and provides a risk-free training environment, is well suited for training in OC biopsy and resection and could be incorporated into dental, medical, or surgical oncology curricula. Future studies should evaluate the long-term training effects of the model and its potential impact on patient outcomes.
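The per-specimen margin bookkeeping described above (minimum and maximum macroscopic margins against the 10 mm safety target) might be tabulated as in the sketch below; the measurements and the function name are hypothetical:

```python
import numpy as np

IDEAL_MARGIN_MM = 10.0  # safety margin targeted in the study

def margin_summary(margins_mm):
    """Summarize macroscopic margin measurements (mm) for one specimen."""
    m = np.asarray(margins_mm, dtype=float)
    return {
        "min_mm": float(m.min()),
        "max_mm": float(m.max()),
        "under_resected": bool(m.min() < IDEAL_MARGIN_MM),  # margin too thin somewhere
        "over_resected": bool(m.max() > IDEAL_MARGIN_MM),   # excess tissue removed somewhere
    }

# Hypothetical measurements around one resected pseudotumor (mm).
summary = margin_summary([4.5, 9.0, 12.0, 18.0])
```

As in the study's results, a single specimen can be simultaneously under- and over-resected, which is why both extremes are reported.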
Affiliation(s)
- Claire Melchior
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Peter Isfort
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH Aachen, 52074, Aachen, Germany
- Till Braunschweig
- Institute of Pathology, RWTH Aachen University, 52074, Aachen, Germany
- Institute of Pathology, Faculty of Medicine, Ludwig Maximilians University (LMU), 80337, Munich, Germany
- Max Witjes
- Department of Oral and Maxillofacial Surgery, UMCG Groningen, 9713, GZ, Groningen, The Netherlands
- Vincent Van den Bosch
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH Aachen, 52074, Aachen, Germany
- Ashkan Rashad
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Jan Egger
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), 45147, Essen, Germany
- Institute of Artificial Intelligence in Medicine, Essen University Hospital, 45131, Essen, Germany
- Matías de la Fuente
- Chair of Medical Engineering, RWTH Aachen University, 52074, Aachen, Germany
- Rainer Röhrig
- Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Frank Hölzle
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Behrus Puladi
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany.
- Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany.
4. Ge J, Kam M, Opfermann JD, Saeidi H, Leonard S, Mady LJ, Schnermann MJ, Krieger A. Autonomous System for Tumor Resection (ASTR): Dual-Arm Robotic Midline Partial Glossectomy. IEEE Robot Autom Lett 2024; 9:1166-1173. PMID: 38292408. PMCID: PMC10824540. DOI: 10.1109/lra.2023.3341773.
Abstract
Head and neck cancers are the seventh most common cancers worldwide, with squamous cell carcinoma being the most prevalent histologic subtype. Surgical resection is a primary treatment modality for many patients with head and neck squamous cell carcinoma, and accurately identifying tumor boundaries and ensuring sufficient resection margins are critical for optimizing oncologic outcomes. This study presents an innovative autonomous system for tumor resection (ASTR) and conducts a feasibility study by performing supervised autonomous midline partial glossectomy of a pseudotumor with millimeter accuracy. The proposed ASTR system consists of a dual-camera vision system, an electrosurgical instrument, a newly developed vacuum grasping instrument, two 6-DOF manipulators, and a novel autonomous control system. The letter introduces an ontology-based research framework for creating and implementing a complex autonomous surgical workflow, using the glossectomy as a case study. Porcine tongue tissues are used in this study and marked with color inks and near-infrared fluorescent (NIRF) markers to indicate the pseudotumor. ASTR actively monitors the NIRF markers and gathers spatial and color data from the samples, enabling planning and execution of robot trajectories in accordance with the proposed glossectomy workflow. The system successfully performs six consecutive supervised autonomous pseudotumor resections on porcine specimens. The average surface and depth resection errors measure 0.73 ± 0.60 mm and 1.89 ± 0.54 mm, respectively, with no positive tumor margins detected in any of the six resections. The resection accuracy is demonstrated to be on par with manual pseudotumor glossectomy performed by an experienced otolaryngologist.
Affiliation(s)
- Jiawei Ge
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21211 USA
- Michael Kam
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21211 USA
- Justin D Opfermann
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21211 USA
- Hamed Saeidi
- Department of Computer Science, University of North Carolina Wilmington, Wilmington, NC 28403, USA
- Simon Leonard
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21211, USA
- Leila J Mady
- Department of Otolaryngology - Head and Neck Surgery, Johns Hopkins School of Medicine, Johns Hopkins University, Baltimore, MD 21287, USA
- Martin J Schnermann
- Chemical Biology Laboratory, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Frederick, MD 21702, USA
- Axel Krieger
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21211 USA
5. Tran MQ, Do T, Tran H, Tjiputra E, Tran QD, Nguyen A. Light-Weight Deformable Registration Using Adversarial Learning With Distilling Knowledge. IEEE Trans Med Imaging 2022; 41:1443-1453. PMID: 34990354. DOI: 10.1109/tmi.2022.3141013.
Abstract
Deformable registration is a crucial step in many medical procedures such as image-guided surgery and radiation therapy. Most recent learning-based methods focus on improving accuracy by optimizing the non-linear spatial correspondence between the input images. As a result, these methods are computationally expensive and require modern graphics cards for real-time deployment. In this paper, we introduce a new light-weight deformable registration network that significantly reduces the computational cost while achieving competitive accuracy. In particular, we propose a new adversarial learning with distilling knowledge algorithm that successfully transfers meaningful information from the effective but expensive teacher network to the student network. We design the student network to be light-weight and well suited for deployment on a typical CPU. Extensive experimental results on different public datasets show that our proposed method achieves state-of-the-art accuracy while being significantly faster than recent methods. We further show that the use of our adversarial learning algorithm is essential for a time-efficient deformable registration method. Finally, our source code and trained models are available at https://github.com/aioz-ai/LDR_ALDK.
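The distillation idea above, a light student mimicking an expensive teacher's deformation field, can be sketched with a toy loss. The paper's full objective also includes an adversarial discriminator term, which is omitted here, and all names and array shapes are ours:

```python
import numpy as np

def distillation_loss(student_field, teacher_field, fixed, warped, alpha=0.5):
    """Toy objective for teacher-to-student deformable registration.

    student_field, teacher_field: (H, W, 2) dense displacement fields.
    fixed, warped: the fixed image and the moving image warped by the
    student's field, used for the image-similarity term.
    alpha trades off mimicking the teacher vs. matching the fixed image.
    """
    distill = np.mean((student_field - teacher_field) ** 2)  # follow the teacher
    similarity = np.mean((fixed - warped) ** 2)              # registration quality
    return alpha * distill + (1.0 - alpha) * similarity

# Tiny example: student disagrees with the teacher by 1 px everywhere,
# but the warped image already matches the fixed image exactly.
s = np.zeros((4, 4, 2))
t = np.ones((4, 4, 2))
img = np.ones((4, 4))
loss = distillation_loss(s, t, img, img, alpha=0.5)  # 0.5 * 1.0 + 0.5 * 0.0
```

At inference only the small student runs, which is what makes CPU deployment feasible.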
6. Wang J, Yue C, Wang G, Gong Y, Li H, Yao W, Kuang S, Liu W, Wang J, Su B. Task Autonomous Medical Robot for Both Incision Stapling and Staples Removal. IEEE Robot Autom Lett 2022. DOI: 10.1109/lra.2022.3141452.
7. Ge J, Saeidi H, Kam M, Opfermann J, Krieger A. Supervised Autonomous Electrosurgery for Soft Tissue Resection. Proceedings of the IEEE International Symposium on Bioinformatics and Bioengineering 2021. PMID: 38533465. PMCID: PMC10965307. DOI: 10.1109/bibe52308.2021.9635563.
Abstract
Surgical resection is the current clinical standard of care for treating squamous cell carcinoma. Maintaining an adequate tumor resection margin is key to a good surgical outcome, but tumor edge delineation errors are inevitable in manual surgery due to the difficulty of visualization and hand-eye coordination. Surgical automation is a growing field of robotics that aims to relieve surgeons' burden and achieve a consistent, and potentially better, surgical outcome. This paper reports a novel robotic supervised autonomous electrosurgery technique for soft tissue resection achieving millimeter accuracy. The tumor resection procedure is decomposed to the subtask level for more direct understanding and automation. A 4-DOF suction system is developed and integrated with a 6-DOF electrocautery robot to perform resection experiments. A novel near-infrared fluorescent marker is manually dispensed on cadaver samples to define a pseudotumor and tracked intraoperatively using a dual-camera system. The autonomous dual-robot resection cooperation workflow is proposed and evaluated in this study. The integrated system achieves autonomous localization of the pseudotumor by tracking the near-infrared marker and performs supervised autonomous resection in cadaveric porcine tongues (N=3). All three pseudotumors were successfully removed from the porcine samples. The average surface and depth resection errors are 1.19 mm and 1.83 mm, respectively. This work is an essential step towards autonomous tumor resections.
Affiliation(s)
- Jiawei Ge
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Hamed Saeidi
- Department of Computer Science, University of North Carolina Wilmington, Wilmington, NC, USA
- Michael Kam
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Justin Opfermann
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Axel Krieger
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA