1. Xie X, Zhu M, He B, Xu J. Image-guided navigation system for minimally invasive total hip arthroplasty (MITHA) using an improved position-sensing marker. Int J Comput Assist Radiol Surg 2023; 18:2155-2166. [PMID: 36892722] [DOI: 10.1007/s11548-023-02861-x]
Abstract
PURPOSE Minimally invasive total hip arthroplasty (MITHA) is a treatment for hip arthritis that reduces tissue trauma, blood loss, and recovery time. However, the limited incision makes it difficult for surgeons to perceive the location and orientation of their instruments. Computer-assisted navigation systems can help improve the medical outcome of MITHA, but directly applying existing navigation systems to MITHA suffers from bulky fiducial markers, severe feature loss, tracking confusion among multiple instruments, and radiation exposure. To tackle these problems, we propose an image-guided navigation system for MITHA using a novel position-sensing marker. METHODS A position-sensing marker with high-density, multi-fold ID tags is proposed to serve as the fiducial marker. It reduces the feature span and assigns an ID to each feature, overcoming the problems of bulky fiducial markers and tracking confusion among multiple instruments, and the marker can be recognized even when a large part of its locating features is obscured. To eliminate intraoperative radiation exposure, we propose a point-based method that achieves patient-image registration from anatomical landmarks. RESULTS Quantitative experiments were conducted to evaluate the feasibility of the system: the accuracy of instrument positioning was 0.33 ± 0.18 mm, and that of patient-image registration was 0.79 ± 0.15 mm. Qualitative experiments verified that the system can operate in a compact surgical volume and can handle severe feature loss and tracking confusion. In addition, the system does not require any intraoperative medical scans. CONCLUSION Experimental results indicate that the proposed system can assist surgeons without large space occupation, radiation exposure, or extra incisions, showing its potential application value in MITHA.
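The abstract does not spell out the point-based registration step; for paired anatomical landmarks, a common closed-form choice is the least-squares rigid fit of Arun et al. A minimal Python sketch under that assumption (the landmark data are synthetic, not from the paper):

```python
import numpy as np

def rigid_register(source, target):
    """Closed-form least-squares rigid transform (R, t) mapping source to target.

    source, target: (N, 3) arrays of paired 3D landmarks
    (SVD method of Arun et al., 1987).
    """
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, tgt_c - R @ src_c

# Synthetic check: recover a known pose from noisy landmark pairs
rng = np.random.default_rng(0)
pts_image = rng.uniform(-50, 50, (6, 3))              # CT-space landmarks (mm)
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
pts_patient = pts_image @ R_true.T + [10., -5., 30.]
pts_patient += rng.normal(0, 0.3, pts_patient.shape)  # digitization noise
R, t = rigid_register(pts_image, pts_patient)
residual = np.linalg.norm(pts_image @ R.T + t - pts_patient, axis=1)
print(f"mean registration residual: {residual.mean():.2f} mm")
```

The mean residual here plays the role of the patient-image registration accuracy that the authors report.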
Affiliation(s)
- Xianzhong Xie: School of Mechanical Engineering, Fuzhou University, Fuzhou, 350108, Fujian, China
- Mingzhu Zhu: School of Mechanical Engineering, Fuzhou University, Fuzhou, 350108, Fujian, China
- Bingwei He: School of Mechanical Engineering, Fuzhou University, Fuzhou, 350108, Fujian, China
- Jie Xu: Department of Orthopedic Surgery, Fujian Provincial Hospital, Fuzhou, 350013, Fujian, China
2. Ping L, Wang Z, Yao J, Gao J, Yang S, Li J, Shi J, Wu W, Hua S, Wang H. Application and evaluation of surgical tool and tool tip recognition based on Convolutional Neural Network in multiple endoscopic surgical scenarios. Surg Endosc 2023; 37:7376-7384. [PMID: 37580576] [DOI: 10.1007/s00464-023-10323-3]
Abstract
BACKGROUND In recent years, computer-assisted intervention and robot-assisted surgery have received increasing attention, and the need for real-time identification and tracking of surgical tools and tool tips keeps growing. A series of studies on surgical tool tracking and identification have been performed, but their dataset sizes, sensitivity/precision, and response times were limited. In this work, we developed an automated method based on a Convolutional Neural Network (CNN) and the You Only Look Once (YOLO) v3 algorithm to locate and identify surgical tools and tool tips across five different surgical scenarios. MATERIALS AND METHODS An object-detection algorithm was applied to identify and locate the surgical tools and tool tips. DarkNet-19 was used as the backbone network, and YOLOv3 was modified and applied for detection. We included a series of 181 endoscopy videos covering five surgical scenarios: pancreatic surgery, thyroid surgery, colon surgery, gastric surgery, and external scenes. A total of 25,333 images containing 94,463 targets were collected, and training and test sets were split in a ratio of 2.5:1. The datasets are openly available in the Kaggle database. RESULTS Under an Intersection over Union threshold of 0.5, the overall sensitivity and precision of the model were 93.02% and 89.61% for tool recognition and 87.05% and 83.57% for tool tip recognition, respectively. The model demonstrated the highest tool and tool tip recognition sensitivity and precision in external scenes. Among the four internal surgical scenes, the network performed better in pancreatic and colon surgeries and worse in gastric and thyroid surgeries. CONCLUSION We developed a surgical tool and tool tip recognition model based on CNN and YOLOv3. Validation of our model demonstrated satisfactory precision, accuracy, and robustness across different surgical scenes.
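The reported sensitivity and precision follow from matching detections to ground-truth boxes at an Intersection over Union (IoU) threshold of 0.5; a minimal sketch of how such metrics are typically computed (the greedy matching policy is an illustrative assumption, not taken from the paper):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def detection_metrics(predictions, ground_truths, thr=0.5):
    """Sensitivity (recall) and precision under greedy one-to-one matching."""
    matched, tp = set(), 0
    for p in predictions:
        best, best_iou = None, thr
        for i, g in enumerate(ground_truths):
            score = iou(p, g)
            if i not in matched and score >= best_iou:
                best, best_iou = i, score
        if best is not None:
            matched.add(best)
            tp += 1
    sensitivity = tp / len(ground_truths) if ground_truths else 0.0
    precision = tp / len(predictions) if predictions else 0.0
    return sensitivity, precision

preds = [(10, 10, 50, 50), (60, 60, 90, 90)]    # detected tool boxes
gts = [(12, 11, 52, 49), (200, 200, 220, 220)]  # annotated boxes
print(detection_metrics(preds, gts))            # (0.5, 0.5)
```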
Affiliation(s)
- Lu Ping: 8-Year MD Program, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China; Department of General Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Zhihong Wang: 8-Year MD Program, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Jingjing Yao: Department of Nursing, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Junyi Gao: Department of General Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Sen Yang: Department of General Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Jiayi Li: 8-Year MD Program, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Jile Shi: 8-Year MD Program, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Wenming Wu: Department of General Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Surong Hua: Department of General Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Huizhen Wang: Department of Nursing, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
3. Alikhani A, Osner S, Dehghani S, Busam B, Inagaki S, Maier M, Navab N, Nasseri MA. RCIT: A Robust Catadioptric-based Instrument 3D Tracking Method for Microsurgical Instruments in a Single-Camera System. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-5. [PMID: 38083453] [DOI: 10.1109/embc40787.2023.10340955]
Abstract
The field of robotic microsurgery and micro-manipulation has undergone a profound evolution in recent years, particularly with regard to accuracy, precision, versatility, and dexterity. These advancements have the potential to revolutionize high-precision biomedical procedures such as neurosurgery, vitreoretinal surgery, and cell micro-manipulation. However, a critical challenge in developing micron-precision robotic systems is accurately verifying the end-effector motion in 3D. Such verification is complicated by environmental vibrations, inaccuracy of mechanical assembly, and other physical uncertainties. To overcome these challenges, this paper proposes a novel single-camera framework that utilizes mirrors with known geometric parameters to estimate the 3D position of the microsurgical instrument. The Euclidean distance between the points reconstructed by the algorithm and the robot movement recorded by high-accuracy encoders is taken as the error. Our method exhibits accurate estimation with a mean absolute error of 0.044 mm when tested on a 23G surgical cannula with a diameter of 0.640 mm, and operates at a resolution of 4024 × 3036 at 30 frames per second.
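The stated error is the mean Euclidean distance between reconstructed tip positions and the encoder-reported ground truth; a minimal sketch of that evaluation (array names and data are illustrative):

```python
import numpy as np

def mean_absolute_error(reconstructed, encoder):
    """Mean Euclidean distance (mm) between paired (N, 3) 3D trajectories
    expressed in a common coordinate frame."""
    return np.linalg.norm(reconstructed - encoder, axis=1).mean()

# Synthetic check: a constant 0.044 mm offset yields an MAE of 0.044 mm
traj = np.cumsum(np.full((100, 3), 0.01), axis=0)  # straight-line motion
print(mean_absolute_error(traj + [0.044, 0.0, 0.0], traj))
```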
4. Li B, Lu B, Wang Z, Zhong F, Dou Q, Liu YH. Learning Laparoscope Actions via Video Features for Proactive Robotic Field-of-View Control. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3173442]
Affiliation(s)
- Bin Li: T Stone Robotics Institute, Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong
- Bo Lu: T Stone Robotics Institute, Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong
- Ziyi Wang: T Stone Robotics Institute, Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong
- Fangxun Zhong: T Stone Robotics Institute, Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong
- Qi Dou: Department of Computer Science and Engineering, and T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong
- Yun-Hui Liu: T Stone Robotics Institute, Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong
5. Ma L, Tomii N, Wang J, Kiyomatsu H, Tsukihara H, Kobayashi E, Sakuma I. Robust and fast laparoscopic vision-based ultrasound probe tracking using a binary dot array marker. Comput Biol Med 2022; 145:105406. [PMID: 35339847] [DOI: 10.1016/j.compbiomed.2022.105406]
Abstract
Laparoscopic vision-based ultrasound probe tracking systems have gained considerable attention in ultrasound-guided laparoscopic surgeries as replacements for external tracking systems (e.g., optical and electromagnetic tracking systems), which increase cost and setup time, require additional operating space, and introduce new limitations. Most existing laparoscopic ultrasound (LUS) probe tracking systems rely on fiducial markers, which cannot easily realise fast and robust vision-based tracking in laparoscopic surgery owing to their design limitations. We therefore propose a novel binary dot array marker to realise a robust and fast LUS probe tracking system. The marker comprises dots of two colours (green and blue) that form multiple unique identification dot subarrays within the binary dot array, and it can be tracked whenever any one of these identification subarrays is detected and identified; this design makes the marker-based probe tracking system robust against occlusions during surgery. The evaluation results indicate that the proposed binary dot marker outperforms state-of-the-art fiducial markers used for vision-based probe tracking in terms of robustness, computational efficiency, and tracking accuracy.
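Two-colour dot features of this kind are commonly detected with HSV thresholding and connected-component centroids; a minimal OpenCV sketch under that assumption (the colour ranges and synthetic frame are illustrative, not the paper's implementation):

```python
import cv2
import numpy as np

def detect_dots(bgr, hsv_lo, hsv_hi, min_area=10):
    """Return centroids (x, y) of blobs falling within an HSV colour range."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_lo, hsv_hi)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

# Synthetic frame with one green and one blue dot (BGR colour order)
frame = np.zeros((480, 640, 3), np.uint8)
cv2.circle(frame, (100, 100), 6, (0, 255, 0), -1)
cv2.circle(frame, (130, 100), 6, (255, 0, 0), -1)

green = detect_dots(frame, np.array([40, 80, 80]), np.array([80, 255, 255]))
blue = detect_dots(frame, np.array([100, 80, 80]), np.array([130, 255, 255]))
# Reading each dot as one bit (say green = 0, blue = 1), a local subarray's
# bit pattern can then be matched against the marker's ID table.
print(len(green), "green dot(s),", len(blue), "blue dot(s)")
```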
Affiliation(s)
- Lei Ma: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Naoki Tomii: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Junchen Wang: School of Mechanical Engineering, Beihang University, Beijing, China
- Etsuko Kobayashi: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Ichiro Sakuma: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
6. Baek D, Nho YH, Kwon DS. ViO-Com: Feed-Forward Compensation Using Vision-Based Optimization for High-Precision Surgical Manipulation. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2021.3123375]
7. Wu B, Wang L, Liu X, Wang L, Xu K. Closed-Loop Pose Control and Automated Suturing of Continuum Surgical Manipulators With Customized Wrist Markers Under Stereo Vision. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3097260]
8. Pachtrachai K, Vasconcelos F, Edwards P, Stoyanov D. Learning to Calibrate - Estimating the Hand-eye Transformation Without Calibration Objects. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3098942]
9. Azargoshasb S, van Alphen S, Slof LJ, Rosiello G, Puliatti S, van Leeuwen SI, Houwing KM, Boonekamp M, Verhart J, Dell'Oglio P, van der Hage J, van Oosterom MN, van Leeuwen FWB. The Click-On gamma probe, a second-generation tethered robotic gamma probe that improves dexterity and surgical decision-making. Eur J Nucl Med Mol Imaging 2021; 48:4142-4151. [PMID: 34031721] [PMCID: PMC8566398] [DOI: 10.1007/s00259-021-05387-z]
Abstract
Purpose Decision-making and dexterity, features that become increasingly relevant in (robot-assisted) minimally invasive surgery, are considered key components in improving surgical accuracy. Recently, DROP-IN gamma probes were introduced to facilitate radioguided robotic surgery. We studied whether robotic DROP-IN radioguidance can be further improved using tethered Click-On designs that integrate gamma detection onto the robotic instruments themselves. Methods Using computer-aided design software, 3D printing, and precision machining, we created a Click-On probe containing two press-fit connections and an additional grasping moiety for a ProGrasp instrument, combined with fiducials that could be video-tracked using the Firefly laparoscope. Using a dexterity phantom, the duration of specific tasks and the path traveled could be compared between the Click-On and DROP-IN probes. To study the impact on surgical decision-making, we performed a blinded study in porcine models wherein surgeons had to identify a hidden 57Co source using either palpation or Click-On radioguidance. Results When assembled onto a ProGrasp instrument, while preserving grasping function and rotational freedom, the fully functional prototype could be inserted through a 12-mm trocar. In dexterity assessments, the Click-On provided a 40% reduction in movements compared to the DROP-IN, which translated into reduced time and path length and an increased straightness index. Radioguidance also improved decision-making; the task-completion rate increased by 60%, procedural time was reduced, and movements became more focused. Conclusion The Click-On gamma probe provides a step toward full integration of radioguidance in minimally invasive surgery. The value of this concept was underlined by its impact on surgical dexterity and decision-making.
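Dexterity measures such as path length and straightness index are simple functions of the tracked instrument trajectory; a minimal sketch (the study's exact definitions may differ):

```python
import numpy as np

def path_length(traj):
    """Total distance travelled along an (N, 3) trajectory."""
    return np.linalg.norm(np.diff(traj, axis=0), axis=1).sum()

def straightness_index(traj):
    """Start-to-end distance divided by path length: 1.0 is perfectly
    straight, lower values indicate more tortuous movement."""
    return np.linalg.norm(traj[-1] - traj[0]) / path_length(traj)

t = np.linspace(0, 1, 200)[:, None]
wobble = 0.02 * np.sin(40 * np.pi * t)            # tremor-like deviation
traj = np.hstack([t, wobble, np.zeros_like(t)])   # mostly straight path
print(f"path length: {path_length(traj):.3f}, "
      f"straightness: {straightness_index(traj):.3f}")
```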
Affiliation(s)
- Samaneh Azargoshasb: Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands; Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, the Netherlands
- Simon van Alphen: Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Leon J Slof: Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands; Instrumentele Zaken Ontwikkeling, Facilitair Bedrijf, Leiden University Medical Center, Leiden, the Netherlands
- Giuseppe Rosiello: Department of Urology and Division of Experimental Oncology, Urological Research Institute, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Stefano Puliatti: Department of Urology, University of Modena and Reggio Emilia, Via del Pozzo 71, 41124 Modena, Italy; ORSI Academy, Melle, Belgium; Department of Urology, Onze Lieve Vrouw Hospital, Aalst, Belgium
- Sven I van Leeuwen: Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Krijn M Houwing: Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Michael Boonekamp: Instrumentele Zaken Ontwikkeling, Facilitair Bedrijf, Leiden University Medical Center, Leiden, the Netherlands
- Jeroen Verhart: Instrumentele Zaken Ontwikkeling, Facilitair Bedrijf, Leiden University Medical Center, Leiden, the Netherlands
- Paolo Dell'Oglio: Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands; Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, the Netherlands; ORSI Academy, Melle, Belgium; Department of Urology, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy
- Jos van der Hage: Department of Surgery, Leiden University Medical Center, Leiden, the Netherlands
- Matthias N van Oosterom: Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands; Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, the Netherlands
- Fijs W B van Leeuwen: Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands; Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, the Netherlands; ORSI Academy, Melle, Belgium
10. Novel Multimodal, Multiscale Imaging System with Augmented Reality. Diagnostics (Basel) 2021; 11:441. [PMID: 33806547] [PMCID: PMC7999725] [DOI: 10.3390/diagnostics11030441]
Abstract
A novel multimodal, multiscale imaging system with augmented reality capability was developed and characterized. The system offers 3D color reflectance imaging, 3D fluorescence imaging, and augmented reality in real time. Multiscale fluorescence imaging was enabled by developing and integrating an in vivo fiber-optic microscope. Real-time ultrasound-fluorescence multimodal imaging, as well as the incorporation of tomographic data, used optically tracked fiducial markers for registration. Furthermore, we characterized system performance and registration accuracy in a benchtop setting. The multiscale fluorescence imaging facilitated assessing the functional status of tissues, extending the minimal resolution of fluorescence imaging to ~17.5 µm. The system achieved a mean target registration error of less than 2 mm for registering fluorescence images to ultrasound images and to an MRI-based 3D model, which is within the clinically acceptable range. The low latency and high frame rate of the prototype system show promise for applying the reported techniques in clinically relevant settings in the future.
11. Horgan CC, Bergholt MS, Thin MZ, Nagelkerke A, Kennedy R, Kalber TL, Stuckey DJ, Stevens MM. Image-guided Raman spectroscopy probe-tracking for tumor margin delineation. J Biomed Opt 2021; 26:036002. [PMID: 33715315] [PMCID: PMC7960531] [DOI: 10.1117/1.jbo.26.3.036002]
Abstract
SIGNIFICANCE Tumor detection and margin delineation are essential for successful tumor resection. However, postsurgical positive margin rates remain high for many cancers. Raman spectroscopy has shown promise as a highly accurate clinical spectroscopic diagnostic modality, but its margin delineation capabilities are severely limited by the need for pointwise application. AIM We aim to extend Raman spectroscopic diagnostics and develop a multimodal computer vision-based diagnostic system capable of both the detection and identification of suspicious lesions and the precise delineation of disease margins. APPROACH We first apply visual tracking of a Raman spectroscopic probe to achieve real-time tumor margin delineation. We then combine this system with protoporphyrin IX fluorescence imaging to achieve fluorescence-guided Raman spectroscopic margin delineation. RESULTS Our system enables real-time Raman spectroscopic tumor margin delineation for both ex vivo human tumor biopsies and an in vivo tumor xenograft mouse model. We then further demonstrate that the addition of protoporphyrin IX fluorescence imaging enables fluorescence-guided Raman spectroscopic margin delineation in a tissue phantom model. CONCLUSIONS Our image-guided Raman spectroscopic probe-tracking system enables tumor margin delineation and is compatible with both white light and fluorescence image guidance, demonstrating the potential for our system to be developed toward clinical tumor resection surgeries.
Affiliation(s)
- Conor C. Horgan: Department of Materials, Department of Bioengineering, and Institute of Biomedical Engineering, Imperial College London, London, United Kingdom
- Mads S. Bergholt: Department of Materials, Department of Bioengineering, and Institute of Biomedical Engineering, Imperial College London, London, United Kingdom
- May Zaw Thin: Centre for Advanced Biomedical Imaging, University College London, London, United Kingdom
- Anika Nagelkerke: Department of Materials, Department of Bioengineering, and Institute of Biomedical Engineering, Imperial College London, London, United Kingdom
- Robert Kennedy: Oral/Head and Neck Pathology Laboratory, Guy's and St Thomas' NHS Foundation Trust, King's College London, London, United Kingdom
- Tammy L. Kalber: Centre for Advanced Biomedical Imaging, University College London, London, United Kingdom
- Daniel J. Stuckey: Centre for Advanced Biomedical Imaging, University College London, London, United Kingdom
- Molly M. Stevens: Department of Materials, Department of Bioengineering, and Institute of Biomedical Engineering, Imperial College London, London, United Kingdom
12. Ramalhinho J, Tregidgo HFJ, Gurusamy K, Hawkes DJ, Davidson B, Clarkson MJ. Registration of Untracked 2D Laparoscopic Ultrasound to CT Images of the Liver Using Multi-Labelled Content-Based Image Retrieval. IEEE Trans Med Imaging 2021; 40:1042-1054. [PMID: 33326379] [DOI: 10.1109/tmi.2020.3045348]
Abstract
Laparoscopic Ultrasound (LUS) is recommended as a standard of care when performing laparoscopic liver resections, as it images sub-surface structures such as tumours and major vessels. Given that LUS probes are difficult to handle and some tumours are iso-echoic, registration of LUS images to a pre-operative CT has been proposed as an image-guidance method. This registration problem is particularly challenging due to the small field of view of LUS, and it usually depends on both a manual initialisation and tracking to compose a volume, hindering clinical translation. In this paper, we extend a proposed registration approach using Content-Based Image Retrieval (CBIR), removing the requirement for tracking or manual initialisation. Pre-operatively, a set of possible LUS planes is simulated from CT and a descriptor is generated for each image. Then, a Bayesian framework is employed to estimate the most likely sequence of CT simulations that matches a series of LUS images. We extend our CBIR formulation to use multiple labelled objects and constrain the registration by separating liver vessels into portal vein and hepatic vein branches. The value of this new labelled approach is demonstrated on retrospective data from 5 patients. Results show that, by including a series of 5 untracked images in time, a single LUS image can be registered with accuracies ranging from 5.7 to 16.4 mm with a success rate of 78%. Initialisation of the LUS-to-CT registration with the proposed framework could potentially enable the clinical translation of these image fusion techniques.
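At its core, the CBIR step retrieves, for each intra-operative LUS image, the most similar pre-computed CT simulation by descriptor comparison; a minimal nearest-neighbour sketch (the descriptor, similarity measure, and data are illustrative, and the paper's Bayesian sequence model is not reproduced):

```python
import numpy as np

def retrieve(query_desc, bank_descs, k=5):
    """Indices of the k most similar simulated planes by cosine similarity.

    query_desc: (D,) descriptor of an intra-operative LUS image.
    bank_descs: (M, D) descriptors of LUS planes simulated from CT.
    """
    q = query_desc / np.linalg.norm(query_desc)
    b = bank_descs / np.linalg.norm(bank_descs, axis=1, keepdims=True)
    return np.argsort(-(b @ q))[:k]

rng = np.random.default_rng(1)
bank = rng.normal(size=(10000, 128))          # pre-operative simulation bank
query = bank[4242] + rng.normal(0, 0.1, 128)  # noisy view of plane 4242
print(retrieve(query, bank))                  # 4242 should rank first
```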
13. Liu X, Plishker W, Shekhar R. Hybrid electromagnetic-ArUco tracking of laparoscopic ultrasound transducer in laparoscopic video. J Med Imaging (Bellingham) 2021; 8:015001. [PMID: 33585664] [PMCID: PMC7857492] [DOI: 10.1117/1.jmi.8.1.015001]
Abstract
Purpose: The purpose of this work was to develop a new method of tracking a laparoscopic ultrasound (LUS) transducer in laparoscopic video by combining hardware-based [e.g., electromagnetic (EM)] and computer-vision-based (e.g., ArUco) tracking methods. Approach: We developed a special tracking mount for the imaging tip of the LUS transducer. The mount incorporated an EM sensor and an ArUco pattern registered to it. The hybrid method used ArUco tracking for ArUco-success frames (i.e., frames where ArUco succeeds in detecting the pattern) and corrected EM tracking for ArUco-failure frames. The corrected EM tracking result was obtained by applying correction matrices to the original EM tracking result; the correction matrices were calculated in previous ArUco-success frames by comparing the ArUco result with the original EM tracking result. Results: We performed phantom and animal studies to evaluate the performance of our hybrid tracking method. The corrected EM tracking results showed significant improvements over the original EM tracking results. In the animal study, 59.2% of frames were ArUco-success frames. For the ArUco-failure frames, the mean reprojection errors of the original and corrected EM tracking methods were 30.8 pixels and 10.3 pixels, respectively. Conclusions: The new hybrid method is more reliable than ArUco tracking alone and more accurate and practical than EM tracking alone for tracking the LUS transducer in the laparoscope camera image. The proposed method has the potential to significantly improve tracking performance for LUS-based augmented reality applications.
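The correction logic described above amounts to caching, on each ArUco-success frame, the transform that maps the EM estimate onto the ArUco estimate, and reusing it on failure frames; a minimal sketch with 4 × 4 homogeneous poses (the class and variable names are illustrative):

```python
import numpy as np

class HybridTracker:
    """Fuse EM and ArUco pose streams given as 4x4 camera-frame transforms."""

    def __init__(self):
        self.correction = np.eye(4)  # maps the EM pose onto the ArUco pose

    def update(self, pose_em, pose_aruco=None):
        if pose_aruco is not None:
            # ArUco-success frame: trust vision and refresh the correction.
            self.correction = pose_aruco @ np.linalg.inv(pose_em)
            return pose_aruco
        # ArUco-failure frame: apply the cached correction to the EM pose.
        return self.correction @ pose_em

# Per frame, pose_em is always available; pose_aruco only on success frames.
tracker = HybridTracker()
fused = tracker.update(pose_em=np.eye(4), pose_aruco=None)
```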
Affiliation(s)
- Xinyang Liu: Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, United States
- Raj Shekhar: Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, United States; IGI Technologies, Inc., Silver Spring, Maryland, United States
14. Ma L, Wang J, Kiyomatsu H, Tsukihara H, Sakuma I, Kobayashi E. Surgical navigation system for laparoscopic lateral pelvic lymph node dissection in rectal cancer surgery using laparoscopic-vision-tracked ultrasonic imaging. Surg Endosc 2020; 35:6556-6567. [PMID: 33185764] [DOI: 10.1007/s00464-020-08153-8]
Abstract
BACKGROUND Laparoscopic lateral pelvic lymph node dissection (LPLND) in rectal cancer surgery requires considerable skill because the pelvic arteries, which need to be located to guide the dissection, are covered by other tissues and cannot be observed in laparoscopic views. Surgeons therefore need to localize the pelvic arteries accurately before dissection to prevent injury to them. METHODS This report proposes a surgical navigation system that facilitates artery localization in laparoscopic LPLND by combining ultrasonic imaging and laparoscopy. Specifically, free-hand laparoscopic ultrasound (LUS) is employed to capture the arteries intraoperatively, and a laparoscopic vision-based tracking system is utilized to track the LUS probe. To extract artery contours from the two-dimensional ultrasound image sequences efficiently, an artery extraction framework based on local phase-based snakes was developed. After reconstructing the three-dimensional intraoperative artery model from ultrasound images, a high-resolution artery model segmented from preoperative computed tomography (CT) images was rigidly registered to the intraoperative artery model and overlaid onto the laparoscopic view to guide laparoscopic LPLND. RESULTS Experiments were conducted to evaluate the performance of the vision-based tracking system; its average reconstruction error was 2.4 mm. The proposed navigation system was then quantitatively evaluated on an artery phantom, with a reconstruction time of 8 min and an average navigation error of 2.3 mm. A navigation system was also successfully constructed to localize the pelvic arteries in laparoscopic and open surgery on a swine, demonstrating the feasibility of the proposed system in vivo; the construction times in the laparoscopic and open surgeries were 14 and 12 min, respectively. CONCLUSIONS The experimental results showed that the proposed navigation system can guide laparoscopic LPLND and requires a significantly shorter setup time than state-of-the-art navigation systems.
Affiliation(s)
- Lei Ma: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Junchen Wang: School of Mechanical Engineering, Beihang University, Beijing, China
- Ichiro Sakuma: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Etsuko Kobayashi: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
15. Application of artificial intelligence in surgery. Front Med 2020; 14:417-430. [DOI: 10.1007/s11684-020-0770-0]
16. Huang B, Tsai YY, Cartucho J, Vyas K, Tuch D, Giannarou S, Elson DS. Tracking and visualization of the sensing area for a tethered laparoscopic gamma probe. Int J Comput Assist Radiol Surg 2020; 15:1389-1397. [PMID: 32556919] [PMCID: PMC7351835] [DOI: 10.1007/s11548-020-02205-z]
Abstract
Purpose In surgical oncology, complete cancer resection and lymph node identification are challenging due to the lack of reliable intraoperative visualization. Recently, endoscopic radio-guided cancer resection has been introduced where a novel tethered laparoscopic gamma detector can be used to determine the location of tracer activity, which can complement preoperative nuclear imaging data and endoscopic imaging. However, these probes do not clearly indicate where on the tissue surface the activity originates, making localization of pathological sites difficult and increasing the mental workload of the surgeons. Therefore, a robust real-time gamma probe tracking system integrated with augmented reality is proposed. Methods A dual-pattern marker has been attached to the gamma probe, which combines chessboard vertices and circular dots for higher detection accuracy. Both patterns are detected simultaneously based on blob detection and the pixel intensity-based vertices detector and used to estimate the pose of the probe. Temporal information is incorporated into the framework to reduce tracking failure. Furthermore, we utilized the 3D point cloud generated from structure from motion to find the intersection between the probe axis and the tissue surface. When presented as an augmented image, this can provide visual feedback to the surgeons. Results The method has been validated with ground truth probe pose data generated using the OptiTrack system. When detecting the orientation of the pose using circular dots and chessboard dots alone, the mean errors obtained are 0.05° and 0.06°, respectively. As for the translation, the mean error for each pattern is 1.78 mm and 1.81 mm. The detection limits for pitch, roll and yaw are 360°, 360° and 8°–82° ∪ 188°–352°. Conclusion The performance evaluation results show that this dual-pattern marker can provide high detection rates, as well as more accurate pose estimation and a larger workspace than the previously proposed hybrid markers. The augmented reality will be used to provide visual feedback to the surgeons on the location of the affected lymph nodes or tumor.
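Pose estimation from a planar marker of known geometry, as in the dual-pattern design above, typically reduces to a Perspective-n-Point (PnP) problem once the features are detected; a minimal OpenCV sketch (the marker layout, correspondences, and camera intrinsics are illustrative, not the paper's):

```python
import cv2
import numpy as np

# Hypothetical 3D layout of marker features (mm, marker frame, z = 0 plane)
object_pts = np.array([[0, 0, 0], [10, 0, 0], [10, 10, 0], [0, 10, 0],
                       [5, 5, 0], [15, 5, 0]], dtype=np.float64)
# Their detected 2D locations in the laparoscopic image (pixels)
image_pts = np.array([[320, 240], [380, 242], [378, 300], [318, 298],
                      [350, 270], [410, 272]], dtype=np.float64)
# Illustrative pinhole intrinsics; in practice these come from calibration
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume an undistorted image

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)  # rotation marker -> camera
# A probe axis known in the marker frame can now be mapped into the camera
# frame and intersected with a reconstructed tissue surface.
print(ok, tvec.ravel())
```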
Affiliation(s)
- Baoru Huang: The Hamlyn Centre for Robotic Surgery, Department of Surgery and Cancer, Imperial College London, London, SW7 2AZ, UK
- Ya-Yen Tsai: The Hamlyn Centre for Robotic Surgery, Department of Surgery and Cancer, Imperial College London, London, SW7 2AZ, UK
- João Cartucho: The Hamlyn Centre for Robotic Surgery, Department of Surgery and Cancer, Imperial College London, London, SW7 2AZ, UK
- Stamatia Giannarou: The Hamlyn Centre for Robotic Surgery, Department of Surgery and Cancer, Imperial College London, London, SW7 2AZ, UK
- Daniel S Elson: The Hamlyn Centre for Robotic Surgery, Department of Surgery and Cancer, Imperial College London, London, SW7 2AZ, UK
17. Kügler D, Sehring J, Stefanov A, Stenin I, Kristin J, Klenzner T, Schipper J, Mukhopadhyay A. i3PosNet: instrument pose estimation from X-ray in temporal bone surgery. Int J Comput Assist Radiol Surg 2020; 15:1137-1145. [PMID: 32440956] [PMCID: PMC7316684] [DOI: 10.1007/s11548-020-02157-4]
Abstract
PURPOSE Accurate estimation of the position and orientation (pose) of surgical instruments is crucial for delicate minimally invasive temporal bone surgery. Current techniques either lack accuracy or suffer from line-of-sight constraints (conventional tracking systems), or expose the patient to prohibitive ionizing radiation (intra-operative CT). A possible solution is to capture the instrument with a C-arm at irregular intervals and recover the pose from the image. METHODS i3PosNet infers the position and orientation of instruments from images using a pose estimation network. The framework considers localized patches and outputs pseudo-landmarks; the pose is reconstructed from the pseudo-landmarks by geometric considerations. RESULTS We show that i3PosNet reaches errors of [Formula: see text] mm. It outperforms conventional image registration-based approaches, reducing average and maximum errors by at least two thirds, and i3PosNet trained on synthetic images generalizes to real X-rays without any further adaptation. CONCLUSION The translation of deep learning-based methods to surgical applications is difficult because large representative datasets for training and testing are not available. This work empirically shows sub-millimeter pose estimation trained solely on synthetic training data.
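The geometric step of turning predicted pseudo-landmarks into a pose can be illustrated with a simplified two-landmark layout (an assumption for illustration; the paper's pseudo-landmark scheme is more elaborate):

```python
import numpy as np

def pose_from_pseudo_landmarks(tip, axis_pt):
    """Recover an in-plane instrument pose from two predicted landmarks.

    tip: (x, y) predicted instrument tip in image coordinates.
    axis_pt: (x, y) predicted point further along the instrument axis.
    Returns the tip position and the forward angle in degrees.
    """
    tip = np.asarray(tip, float)
    d = np.asarray(axis_pt, float) - tip
    return tip, np.degrees(np.arctan2(d[1], d[0]))

position, angle = pose_from_pseudo_landmarks((312.4, 198.7), (355.1, 210.2))
print(position, f"{angle:.1f} deg")
```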
Affiliation(s)
- David Kügler: Department of Computer Science, Technische Universität Darmstadt, Darmstadt, Germany; German Center for Neurodegenerative Diseases (DZNE) e.V., Bonn, Germany
- Jannik Sehring: Department of Computer Science, Technische Universität Darmstadt, Darmstadt, Germany
- Andrei Stefanov: Department of Computer Science, Technische Universität Darmstadt, Darmstadt, Germany
- Igor Stenin: ENT Clinic, University Düsseldorf, Düsseldorf, Germany
- Julia Kristin: ENT Clinic, University Düsseldorf, Düsseldorf, Germany
- Jörg Schipper: ENT Clinic, University Düsseldorf, Düsseldorf, Germany
- Anirban Mukhopadhyay: Department of Computer Science, Technische Universität Darmstadt, Darmstadt, Germany
18. Qiu L, Li C, Ren H. Real-time surgical instrument tracking in robot-assisted surgery using multi-domain convolutional neural network. Healthc Technol Lett 2020; 6:159-164. [PMID: 32038850] [PMCID: PMC6945802] [DOI: 10.1049/htl.2019.0068]
Abstract
Image-based surgical instrument tracking in robot-assisted surgery is an active and challenging research area. Real-time knowledge of surgical instrument location is an essential part of a computer-assisted intervention system: tracking can be used as visual feedback for servo control of a surgical robot or transformed into haptic feedback for surgeon–robot interaction. In this Letter, the authors apply a multi-domain convolutional neural network for fast 2D surgical instrument tracking, considering the application to multiple surgical tools, and use a focal loss to decrease the effect of easy negative examples. They further introduce a new dataset based on m2cai16-tool and their cadaver experiments, given the lack of established public surgical tool tracking datasets despite significant progress in this field. Their method is evaluated on the introduced dataset and outperforms state-of-the-art real-time trackers.
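The focal loss down-weights well-classified (easy) examples so that training concentrates on hard ones; a minimal sketch of the standard binary form, FL(p_t) = -α_t (1 - p_t)^γ log(p_t) (the Letter's exact parameter settings are not reproduced):

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss of Lin et al. (2017).

    p: predicted foreground probabilities in (0, 1); y: labels in {0, 1}.
    Easy examples (p_t near 1) contribute almost nothing to the loss.
    """
    p_t = np.where(y == 1, p, 1.0 - p)
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -(alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)).mean()

p = np.array([0.9, 0.95, 0.2, 0.6])  # predictions
y = np.array([1, 0, 0, 1])           # ground-truth labels
print(f"focal loss: {focal_loss(p, y):.4f}")
```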
Affiliation(s)
- Liang Qiu: Department of Biomedical Engineering, National University of Singapore, Singapore 117575, Singapore
- Changsheng Li: Department of Biomedical Engineering, National University of Singapore, Singapore 117575, Singapore
- Hongliang Ren: Department of Biomedical Engineering, National University of Singapore, Singapore 117575, Singapore
19. Du X, Allan M, Bodenstedt S, Maier-Hein L, Speidel S, Dore A, Stoyanov D. Patch-based adaptive weighting with segmentation and scale (PAWSS) for visual tracking in surgical video. Med Image Anal 2019; 57:120-135. [PMID: 31299494] [PMCID: PMC6988132] [DOI: 10.1016/j.media.2019.07.002]
Abstract
Vision-based tracking is an important component of computer-assisted interventions in minimally invasive surgery, as it facilitates motion estimation for instruments and anatomical targets. Tracking-by-detection algorithms are widely used for visual tracking, where the problem is treated as a classification task and a tracking-target appearance model is updated over time using online learning. In challenging conditions like surgical scenes, where tracking targets deform and vary in scale, the update step is prone to including background information in the appearance model or lacks the ability to estimate scale changes, which degrades the performance of the classifier. In this paper, we propose a Patch-based Adaptive Weighting with Segmentation and Scale (PAWSS) tracking framework that tackles both the scale and background problems. A simple but effective colour-based segmentation model is used to suppress background information, and multi-scale samples are extracted to enrich the training pool, which allows the tracker to handle both incremental and abrupt scale variations between frames. Experimentally, we evaluate our approach on the Online Tracking Benchmark (OTB) dataset and on Visual Object Tracking (VOT) challenge datasets, showing that it outperforms recent state-of-the-art trackers; it especially improves the success-rate score on the OTB dataset, while on the VOT datasets PAWSS ranks among the top trackers at real-time frame rates. Focusing on the application of PAWSS to surgical scenes, we evaluate it on the MICCAI 2015 instrument tracking challenge and on in vivo datasets, showing that our approach performs best among all submitted methods and also performs promisingly on in vivo surgical instrument tracking.
Affiliation(s)
- Xiaofei Du: Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK
- Lena Maier-Hein: Division of Computer-Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Danail Stoyanov: Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK
20. Singla R, Edgcumbe P, Pratt P, Nguan C, Rohling R. Intra-operative ultrasound-based augmented reality guidance for laparoscopic surgery. Healthc Technol Lett 2017; 4:204-209. [PMID: 29184666] [PMCID: PMC5683195] [DOI: 10.1049/htl.2017.0063]
Abstract
In laparoscopic surgery, the surgeon must operate with a limited field of view and reduced depth perception. This makes spatial understanding of critical structures difficult, such as an endophytic tumour in a partial nephrectomy. Such tumours have a high complication rate of 47%, and excising them increases the risk of cutting into the kidney's collecting system. To overcome these challenges, an augmented reality guidance system is proposed. Using intra-operative ultrasound, a single navigation aid, and surgical instrument tracking, four augmentations of guidance information are provided during tumour excision. Qualitative and quantitative system benefits were measured in simulated robot-assisted partial nephrectomies. Robot-to-camera calibration achieved a total registration error of 1.0 ± 0.4 mm, while the total system error is 2.5 ± 0.5 mm. The system significantly reduced the healthy tissue excised from an average (±standard deviation) of 30.6 ± 5.5 to 17.5 ± 2.4 cm³ (p < 0.05) and reduced the depth from the tumour underside to the cut from an average of 10.2 ± 4.1 to 3.3 ± 2.3 mm (p < 0.05). Further evaluation is required in vivo, but the system has promising potential to reduce the amount of healthy parenchymal tissue excised.
Affiliation(s)
- Rohit Singla: Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada V6T 1Z4
- Philip Edgcumbe: MD/PhD Program, University of British Columbia, Vancouver, Canada V6T 1Z4
- Philip Pratt: Department of Surgery and Cancer, Imperial College London, SW7 2BX, UK
- Christopher Nguan: Department of Urological Sciences, University of British Columbia, Vancouver, Canada V6T 1Z4
- Robert Rohling: Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada V6T 1Z4; Department of Mechanical Engineering, University of British Columbia, Vancouver, Canada V6T 1Z4
21. A computationally efficient method for hand-eye calibration. Int J Comput Assist Radiol Surg 2017; 12:1775-1787. [PMID: 28726116] [PMCID: PMC5608875] [DOI: 10.1007/s11548-017-1646-x]
Abstract
Purpose Surgical robots with cooperative control and semiautonomous features have shown increasing clinical potential, particularly for repetitive tasks under imaging and vision guidance. Effective performance of an autonomous task requires accurate hand–eye calibration so that the transformation between the robot coordinate frame and the camera coordinates is well defined. In practice, due to changes in surgical instruments, online hand–eye calibration must be performed regularly, and to ensure seamless execution of the surgical procedure without affecting the normal surgical workflow, fast and efficient hand–eye calibration methods are needed. Methods We present a computationally efficient iterative method for hand–eye calibration. In this method, the dual quaternion is introduced to represent the rigid transformation, and a two-step iterative method is proposed to recover the real and dual parts of the dual quaternion simultaneously, and thus the rotation and translation of the transformation. Results The proposed method was applied to determine the rigid transformation between the stereo laparoscope and the robot manipulator. Promising experimental and simulation results show the convergence speed improving to 3 iterations from more than 30 with a standard optimization method, illustrating the effectiveness and efficiency of the proposed method.
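A rigid transform can be encoded as a dual quaternion q = r + εd, with real part r (the rotation quaternion) and dual part d = ½ t ⊗ r for translation t; a minimal sketch of that encoding (the paper's two-step iterative solver itself is not reproduced):

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def to_dual_quaternion(r, t):
    """Encode rotation quaternion r and translation t as (real, dual).

    real = r, dual = 0.5 * (0, t) ⊗ r; a rotation-only transform has a
    zero dual part, so both parts can be estimated jointly.
    """
    return r, 0.5 * quat_mul(np.array([0.0, *t]), r)

# Example: a 90-degree rotation about z plus a translation
r = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
real, dual = to_dual_quaternion(r, [10.0, 0.0, 5.0])
print(real, dual)
```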