1
Renna MS, Grzeda MT, Bailey J, Hainsworth A, Ourselin S, Ebner M, Vercauteren T, Schizas A, Shapey J. Intraoperative bowel perfusion assessment methods and their effects on anastomotic leak rates: meta-analysis. Br J Surg 2023; 110:1131-1142. PMID: 37253021; PMCID: PMC10416696; DOI: 10.1093/bjs/znad154.
Abstract
BACKGROUND Anastomotic leak is one of the most feared complications of colorectal surgery, and is probably linked to poor blood supply to the anastomotic site. Several technologies have been described for intraoperative assessment of bowel perfusion. This systematic review and meta-analysis aimed to evaluate the most frequently used bowel perfusion assessment modalities in elective colorectal procedures, and to assess their associated risk of anastomotic leak. Technologies included indocyanine green fluorescence angiography, diffuse reflectance spectroscopy, laser speckle contrast imaging, and hyperspectral imaging. METHODS The review was preregistered with PROSPERO (CRD42021297299). A comprehensive literature search was performed using Embase, MEDLINE, Cochrane Library, Scopus, and Web of Science. The final search was undertaken on 29 July 2022. Data were extracted by two reviewers and the MINORS criteria were applied to assess the risk of bias. RESULTS Some 66 eligible studies involving 11 560 participants were included. Indocyanine green fluorescence angiography was the most frequently used modality, with 10 789 participants, followed by diffuse reflectance spectroscopy with 321, hyperspectral imaging with 265, and laser speckle contrast imaging with 185. In the meta-analysis, the total pooled effect of an intervention on anastomotic leak was 0.05 (95 per cent c.i. 0.04 to 0.07), compared with 0.10 (0.08 to 0.12) without. Use of indocyanine green fluorescence angiography, hyperspectral imaging, or laser speckle contrast imaging was associated with a significant reduction in anastomotic leak. CONCLUSION Bowel perfusion assessment reduced the incidence of anastomotic leak, with intraoperative indocyanine green fluorescence angiography, hyperspectral imaging, and laser speckle contrast imaging all demonstrating comparable results.
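The headline comparison above (a pooled leak rate of 0.05 with perfusion assessment versus 0.10 without) can be illustrated with a deliberately simplified pooled-proportion calculation. The sketch below is a naive fixed-effect pooling with illustrative toy counts, not the paper's actual meta-analytic model:

```python
import math

def pooled_proportion(events, totals):
    """Naive fixed-effect pooled proportion with a normal-approximation 95% CI.
    A simplification for illustration; a formal meta-analysis would typically
    use inverse-variance weighting and a random-effects model."""
    n = sum(totals)
    p = sum(events) / n
    se = math.sqrt(p * (1 - p) / n)
    return p, (p - 1.96 * se, p + 1.96 * se)

# toy data: (leaks, anastomoses) per study -- illustrative numbers only
with_assessment = [(3, 100), (5, 150), (2, 80)]
without_assessment = [(9, 100), (14, 150), (8, 80)]
p1, ci1 = pooled_proportion(*zip(*with_assessment))
p2, ci2 = pooled_proportion(*zip(*without_assessment))
```

With these toy counts the pooled rate with assessment (about 0.03) is lower than without (about 0.09), mirroring the direction of the reported result.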
Affiliation(s)
- Maxwell S Renna
  - School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
  - Department of General Surgery, Guy’s and St Thomas’ NHS Foundation Trust, London, UK
- Mariusz T Grzeda
  - School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- James Bailey
  - Department of General Surgery, University of Nottingham, Nottingham, UK
- Alison Hainsworth
  - Department of General Surgery, Guy’s and St Thomas’ NHS Foundation Trust, London, UK
- Sebastien Ourselin
  - School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
  - Hypervision Surgical Ltd, London, UK
- Tom Vercauteren
  - School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
  - Hypervision Surgical Ltd, London, UK
- Alexis Schizas
  - Department of General Surgery, Guy’s and St Thomas’ NHS Foundation Trust, London, UK
- Jonathan Shapey
  - School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
  - Hypervision Surgical Ltd, London, UK
  - Department of Neurosurgery, King’s College Hospital, London, UK
2
Abstract
Purpose Robotic-assisted partial nephrectomy (RAPN) is a tissue-preserving approach to treating renal cancer, in which ultrasound (US) imaging is used for intra-operative identification of tumour margins and localisation of blood vessels. With the da Vinci Surgical System (Sunnyvale, CA), the US probe is inserted through an auxiliary access port, grasped by the robotic tool and moved over the surface of the kidney. Images from the US probe are displayed separately from the surgical site video within the surgical console, leaving the surgeon to interpret and co-register the information, which is challenging and complicates the procedural workflow. Methods We introduce a novel software architecture to support a hardware soft robotic rail designed to automate intra-operative US acquisition. As a preliminary step towards complete task automation, we automatically grasp the rail and position it on the tissue surface so that the surgeon can then manually manipulate the US probe along it. Results A preliminary clinical study, involving five surgeons, was carried out to evaluate the potential performance of the system. Results indicate that the proposed semi-autonomous approach reduced the time needed to complete a US scan compared with manual tele-operation. Conclusion Procedural automation can be an important workflow enhancement in future robotic surgery systems. We have presented a preliminary study on semi-autonomous US imaging, which could support more efficient data acquisition.
Affiliation(s)
- Claudia D'Ettorre
  - Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, W1W 7EJ, UK
- Agostino Stilli
  - Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, W1W 7EJ, UK
- George Dwyer
  - Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, W1W 7EJ, UK
- Maxine Tran
  - Division of Surgery and Interventional Science, Department of Nanotechnology, University College London, Royal Free Hospital, London, NW3 2QG, UK
- Danail Stoyanov
  - Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, W1W 7EJ, UK
3
Bano S, Vasconcelos F, Tella-Amo M, Dwyer G, Gruijthuijsen C, Vander Poorten E, Vercauteren T, Ourselin S, Deprest J, Stoyanov D. Deep learning-based fetoscopic mosaicking for field-of-view expansion. Int J Comput Assist Radiol Surg 2020; 15:1807-1816. PMID: 32808148; PMCID: PMC7603466; DOI: 10.1007/s11548-020-02242-8.
Abstract
PURPOSE Fetoscopic laser photocoagulation is a minimally invasive surgical procedure used to treat twin-to-twin transfusion syndrome (TTTS), which involves localization and ablation of abnormal vascular connections on the placenta to regulate the blood flow in both fetuses. This procedure is particularly challenging due to the limited field of view, poor visibility, occasional bleeding, and poor image quality. Fetoscopic mosaicking can create an image with an expanded field of view, which could assist clinicians during the TTTS procedure. METHODS We propose a deep learning-based mosaicking framework for diverse fetoscopic videos captured in different settings, including simulation, phantom, ex vivo, and in vivo environments. The proposed framework extends an existing deep image homography model to handle video data by introducing controlled data generation and consistent homography estimation modules. Training is performed on a small subset of fetoscopic images that is independent of the testing videos. RESULTS We perform both quantitative and qualitative evaluations on 5 diverse fetoscopic videos (2400 frames) captured in different environments. To demonstrate the robustness of the proposed framework, a comparison is performed with existing feature-based and deep image homography methods. CONCLUSION The proposed mosaicking framework outperformed existing methods and generated meaningful mosaics while reducing the accumulated drift, even in the presence of visual challenges such as specular highlights, reflection, texture paucity, and low video resolution.
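Mosaicking ultimately reduces to chaining the estimated pairwise homographies so that every frame is warped into a common reference frame; drift arises because estimation errors accumulate through the chain. A minimal sketch of that accumulation step (illustrative only; the paper estimates the pairwise homographies with a deep model plus a consistency module precisely to limit this drift):

```python
import numpy as np

def accumulate_homographies(pairwise):
    """Chain pairwise homographies H_{i -> i-1} so that every frame maps into
    the first frame's coordinates -- the basic operation behind mosaicking."""
    mosaic = [np.eye(3)]
    for H in pairwise:
        H_acc = mosaic[-1] @ H
        mosaic.append(H_acc / H_acc[2, 2])  # fix the homogeneous scale
    return mosaic

def warp_point(H, x, y):
    """Apply a homography to a 2D point in homogeneous coordinates."""
    v = H @ np.array([x, y, 1.0])
    return v[0] / v[2], v[1] / v[2]
```

For example, three successive unit translations compose into a translation by three in the mosaic frame; any small per-pair error would compound in exactly the same multiplicative way.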
Affiliation(s)
- Sophia Bano
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
- Francisco Vasconcelos
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
- Marcel Tella-Amo
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
- George Dwyer
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
- Tom Vercauteren
  - School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Sebastien Ourselin
  - School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Jan Deprest
  - Department of Development and Regeneration, University Hospital Leuven, Leuven, Belgium
- Danail Stoyanov
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
4
Wang C, Komninos C, Andersen S, D'Ettorre C, Dwyer G, Maneas E, Edwards P, Desjardins A, Stilli A, Stoyanov D. Ultrasound 3D reconstruction of malignant masses in robotic-assisted partial nephrectomy using the PAF rail system: a comparison study. Int J Comput Assist Radiol Surg 2020; 15:1147-1155. PMID: 32385597; PMCID: PMC7316668; DOI: 10.1007/s11548-020-02149-4.
Abstract
Purpose In robotic-assisted partial nephrectomy (RAPN), the use of intraoperative ultrasound (IOUS) helps to localise and outline the tumours as well as the blood vessels within the kidney. The aim of this work is to evaluate the use of the pneumatically attachable flexible (PAF) rail system for US 3D reconstruction of malignant masses in RAPN. The PAF rail system is a novel device developed and previously presented by the authors to enable track-guided US scanning. Methods We present a comparison study of US 3D reconstruction of masses based on the da Vinci Surgical System kinematics and on single- and stereo-camera tracking of visual markers embedded on the probe. A US-realistic kidney phantom embedding a mass is used for testing. A new design for the US probe attachment to enhance the performance of the kinematic approach is presented, and a feature extraction algorithm is proposed to detect the margins of the targeted mass in US images. Results To evaluate the performance of the investigated approaches, the resulting 3D reconstructions were compared with a CT scan of the phantom. The data collected indicate that single-camera reconstruction outperformed the other approaches, reconstructing the targeted mass with sub-millimetre accuracy. Conclusions This work demonstrates that the PAF rail system provides a reliable platform for accurate US 3D reconstruction of masses in RAPN procedures. The proposed system also has the potential to be employed in other surgical procedures such as laparoscopic liver resection.
Affiliation(s)
- Chongyun Wang
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, 43-45 Foley St., Fitzrovia, London, W1W 7EJ, UK
- Charalampos Komninos
  - Department of Electrical and Computer Engineering, University of Patras, 26504, Rio, Patras, Greece
- Stephanie Andersen
  - Department of Computer Science, Stanford University, 353 Serra Mall, Stanford, CA, 94305, USA
- Claudia D'Ettorre
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, 43-45 Foley St., Fitzrovia, London, W1W 7EJ, UK
- George Dwyer
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, 43-45 Foley St., Fitzrovia, London, W1W 7EJ, UK
- Efthymios Maneas
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, 43-45 Foley St., Fitzrovia, London, W1W 7EJ, UK
- Philip Edwards
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, 43-45 Foley St., Fitzrovia, London, W1W 7EJ, UK
- Adrien Desjardins
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, 43-45 Foley St., Fitzrovia, London, W1W 7EJ, UK
- Agostino Stilli
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, 43-45 Foley St., Fitzrovia, London, W1W 7EJ, UK
- Danail Stoyanov
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, 43-45 Foley St., Fitzrovia, London, W1W 7EJ, UK
5
Bano S, Vasconcelos F, Vander Poorten E, Vercauteren T, Ourselin S, Deprest J, Stoyanov D. FetNet: a recurrent convolutional network for occlusion identification in fetoscopic videos. Int J Comput Assist Radiol Surg 2020; 15:791-801. PMID: 32350787; PMCID: PMC7261278; DOI: 10.1007/s11548-020-02169-0.
Abstract
PURPOSE Fetoscopic laser photocoagulation is a minimally invasive surgery for the treatment of twin-to-twin transfusion syndrome (TTTS). Using a lens/fibre-optic scope inserted into the amniotic cavity, the abnormal placental vascular anastomoses are identified and ablated to regulate blood flow to both fetuses. A limited field of view, occlusions due to the fetus and low visibility make it difficult to identify all vascular anastomoses. Automatic computer-assisted techniques may provide better understanding of the anatomical structure during surgery for risk-free laser photocoagulation and may facilitate improved mosaicking of fetoscopic videos. METHODS We propose FetNet, a combined convolutional neural network (CNN) and long short-term memory (LSTM) recurrent neural network architecture for the spatio-temporal identification of fetoscopic events. We adapt an existing CNN architecture for spatial feature extraction and integrate it with the LSTM network for end-to-end spatio-temporal inference. We introduce differential learning rates during model training to utilise the pre-trained CNN weights effectively. This may support computer-assisted interventions (CAI) during fetoscopic laser photocoagulation. RESULTS We perform quantitative evaluation of our method using 7 in vivo fetoscopic videos captured from different human TTTS cases. The total duration of these videos was 5551 s (138,780 frames). To test the robustness of the proposed approach, we perform 7-fold cross-validation in which each video is treated as a hold-out or test set and training is performed using the remaining videos. CONCLUSION FetNet achieved superior performance compared with existing CNN-based methods and provided improved inference because of its spatio-temporal information modelling. Online testing of FetNet, using a Tesla V100-DGXS-32GB GPU, achieved a frame rate of 114 fps. These results show that our method could potentially provide a real-time solution for CAI and automate occlusion and photocoagulation identification during fetoscopic procedures.
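The spatial-then-temporal pattern behind FetNet (a CNN encoding each frame, a recurrent unit carrying context across frames) can be sketched in a few lines. Everything below is a toy stand-in: a single linear "encoder" and a plain tanh recurrence rather than the paper's pre-trained CNN backbone and LSTM, with randomly initialised weights purely to show the data flow:

```python
import numpy as np

rng = np.random.default_rng(0)

def frame_features(frame, W):
    """Stand-in for the CNN encoder: a linear map plus ReLU over a
    flattened frame (FetNet itself uses a pre-trained CNN backbone)."""
    return np.maximum(W @ frame.ravel(), 0.0)

def recurrent_inference(frames, W, Wx, Wh, Wo):
    """Per-frame event logits from a simple recurrent unit over frame
    features -- the same spatial-then-temporal pattern as CNN+LSTM,
    minus the gating machinery."""
    h = np.zeros(Wh.shape[0])
    logits = []
    for f in frames:
        x = frame_features(f, W)
        h = np.tanh(Wx @ x + Wh @ h)   # hidden state carries temporal context
        logits.append(Wo @ h)
    return np.array(logits)

frames = rng.normal(size=(6, 8, 8))       # 6 toy 8x8 "frames"
W  = rng.normal(size=(16, 64)) * 0.1      # encoder weights
Wx = rng.normal(size=(8, 16)) * 0.1       # input-to-hidden
Wh = rng.normal(size=(8, 8)) * 0.1        # hidden-to-hidden
Wo = rng.normal(size=(4, 8)) * 0.1        # 4 hypothetical event classes
out = recurrent_inference(frames, W, Wx, Wh, Wo)
```

Each frame yields one logit vector, and the hidden state makes the prediction for frame t depend on frames 1..t, which is what distinguishes this from per-frame CNN classification.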
Affiliation(s)
- Sophia Bano
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
- Francisco Vasconcelos
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
- Tom Vercauteren
  - School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Sebastien Ourselin
  - School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Jan Deprest
  - Department of Development and Regeneration, University Hospital Leuven, Leuven, Belgium
- Danail Stoyanov
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
6
Vakharia VN, Sparks R, Miserocchi A, Vos SB, O'Keeffe A, Rodionov R, McEvoy AW, Ourselin S, Duncan JS. Computer-Assisted Planning for Stereoelectroencephalography (SEEG). Neurotherapeutics 2019; 16:1183-1197. PMID: 31432448; PMCID: PMC6985077; DOI: 10.1007/s13311-019-00774-9.
Abstract
Stereoelectroencephalography (SEEG) is a diagnostic procedure in which multiple electrodes are stereotactically implanted within predefined areas of the brain to identify the seizure onset zone, which needs to be removed to achieve remission of focal epilepsy. Computer-assisted planning (CAP) has been shown to improve trajectory safety metrics and to generate clinically feasible trajectories in a fraction of the time needed for manual planning. We report a prospective validation study of the use of EpiNav (UCL, London, UK) as clinical decision support software for SEEG. Thirteen consecutive patients (125 electrodes) undergoing SEEG were prospectively recruited. EpiNav was used to generate 3D models of critical structures (including vasculature) and other important regions of interest. Manual planning utilizing the same 3D models was performed in advance of CAP. CAP was subsequently employed to automatically generate a plan for each patient, and the treating neurosurgeon was able to modify CAP-generated plans based on their preference. The plan with the lowest risk score was stereotactically implanted. In all cases (13/13), the final CAP-generated plan returned a lower mean risk score and was stereotactically implanted. No complication or adverse event occurred. CAP trajectories were generated in 30% of the time, with significantly lower risk scores than manually generated plans. EpiNav has successfully been integrated as clinical decision support software (CDSS) into the clinical pathway for SEEG implantations at our institution. To our knowledge, this is the first prospective study of a complex CDSS in stereotactic neurosurgery, and it provides the highest level of evidence to date.
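The core idea of a trajectory risk score (penalising electrode paths that pass close to critical structures such as vessels) can be sketched as follows. This is an illustrative metric only, not EpiNav's actual risk model; the sampling density, the 3 mm safety margin and the averaging scheme are all assumptions made for the example:

```python
import numpy as np

def trajectory_risk(entry, target, critical_pts, samples=50, margin=3.0):
    """Toy risk metric in the spirit of computer-assisted planning:
    sample points along the straight electrode trajectory and penalise
    proximity (within `margin` mm) to critical-structure points."""
    entry, target = np.asarray(entry, float), np.asarray(target, float)
    t = np.linspace(0.0, 1.0, samples)[:, None]
    pts = entry + t * (target - entry)                  # points along the path
    # distance from each sample to its nearest critical-structure point
    d = np.linalg.norm(pts[:, None, :] - critical_pts[None], axis=2).min(axis=1)
    # 0 when everything is farther than `margin`; approaches 1 on contact
    return float(np.mean(np.clip(margin - d, 0.0, margin) / margin))
```

A plan-ranking step would then simply sort candidate (entry, target) pairs by this score and surface the lowest-risk ones to the surgeon.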
Affiliation(s)
- Vejay N Vakharia
  - Department of Clinical and Experimental Epilepsy, Institute of Neurology, University College London, London, UK
  - National Hospital for Neurology and Neurosurgery, Queen Square, London, UK
  - Chalfont Centre for Epilepsy, Chalfont St Peter, UK
- Rachel Sparks
  - School of Biomedical Engineering and Imaging Sciences, St Thomas' Hospital, King's College London, London, UK
- Anna Miserocchi
  - Department of Clinical and Experimental Epilepsy, Institute of Neurology, University College London, London, UK
  - National Hospital for Neurology and Neurosurgery, Queen Square, London, UK
  - Chalfont Centre for Epilepsy, Chalfont St Peter, UK
- Sjoerd B Vos
  - Wellcome Trust EPSRC Interventional and Surgical Sciences, University College London, London, UK
- Aidan O'Keeffe
  - Department of Statistical Science, University College London, London, UK
- Roman Rodionov
  - Department of Clinical and Experimental Epilepsy, Institute of Neurology, University College London, London, UK
  - National Hospital for Neurology and Neurosurgery, Queen Square, London, UK
  - Chalfont Centre for Epilepsy, Chalfont St Peter, UK
- Andrew W McEvoy
  - Department of Clinical and Experimental Epilepsy, Institute of Neurology, University College London, London, UK
  - National Hospital for Neurology and Neurosurgery, Queen Square, London, UK
  - Chalfont Centre for Epilepsy, Chalfont St Peter, UK
- Sebastien Ourselin
  - School of Biomedical Engineering and Imaging Sciences, St Thomas' Hospital, King's College London, London, UK
- John S Duncan
  - Department of Clinical and Experimental Epilepsy, Institute of Neurology, University College London, London, UK
  - National Hospital for Neurology and Neurosurgery, Queen Square, London, UK
  - Chalfont Centre for Epilepsy, Chalfont St Peter, UK
7
Wang G, Zuluaga MA, Li W, Pratt R, Patel PA, Aertsen M, Doel T, David AL, Deprest J, Ourselin S, Vercauteren T. DeepIGeoS: A Deep Interactive Geodesic Framework for Medical Image Segmentation. IEEE Trans Pattern Anal Mach Intell 2019; 41:1559-1572. PMID: 29993532; PMCID: PMC6594450; DOI: 10.1109/tpami.2018.2840695.
Abstract
Accurate medical image segmentation is essential for diagnosis, surgical planning and many other applications. Convolutional Neural Networks (CNNs) have become the state-of-the-art automatic segmentation methods. However, fully automatic results may still need to be refined to become accurate and robust enough for clinical use. We propose a deep learning-based interactive segmentation method to improve the results obtained by an automatic CNN and to reduce user interactions during refinement for higher accuracy. We use one CNN to obtain an initial automatic segmentation, on which user interactions are added to indicate mis-segmentations. Another CNN takes the user interactions and the initial segmentation as input and gives a refined result. We propose to combine user interactions with CNNs through geodesic distance transforms, and propose a resolution-preserving network that gives a better dense prediction. In addition, we integrate user interactions as hard constraints into a back-propagatable Conditional Random Field. We validated the proposed framework in the context of 2D placenta segmentation from fetal MRI and 3D brain tumor segmentation from FLAIR images. Experimental results show that our method achieves a large improvement over automatic CNNs, and obtains comparable or even higher accuracy with fewer user interactions and less time than traditional interactive methods.
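The geodesic distance transform used to encode user interactions assigns each pixel a distance to the user scribbles that grows with both spatial steps and intensity changes along the path, so the resulting map respects image boundaries. A minimal 4-connected Dijkstra sketch of the idea (the paper's formulation differs in detail, e.g. in the exact cost definition and in using efficient approximations rather than a heap-based solver):

```python
import heapq
import numpy as np

def geodesic_distance(image, seeds, lam=1.0):
    """Geodesic distance from seed pixels (user scribbles): path cost
    combines unit spatial steps with intensity differences, weighted by lam.
    With lam=0 this reduces to an ordinary (Manhattan) distance transform."""
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = [(0.0, r, c) for r, c in seeds]
    for _, r, c in heap:
        dist[r, c] = 0.0
    heapq.heapify(heap)
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue  # stale entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                step = 1.0 + lam * abs(float(image[nr, nc]) - float(image[r, c]))
                if d + step < dist[nr, nc]:
                    dist[nr, nc] = d + step
                    heapq.heappush(heap, (d + step, nr, nc))
    return dist
```

On a uniform image this is just a distance transform; across a strong edge the distance jumps, which is exactly why a scribble's influence stays within the object it was drawn on.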
Affiliation(s)
- Guotai Wang
  - Translational Imaging Group, Wellcome EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London WC1E 6BT, UK
- Maria A. Zuluaga
  - Translational Imaging Group, Wellcome EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London WC1E 6BT, UK
- Wenqi Li
  - Translational Imaging Group, Wellcome EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London WC1E 6BT, UK
- Rosalind Pratt
  - Translational Imaging Group, Wellcome EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London WC1E 6BT, UK
  - Institute for Women's Health, University College London, London WC1E 6BT, UK
- Premal A. Patel
  - Translational Imaging Group, Wellcome EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London WC1E 6BT, UK
- Michael Aertsen
  - Department of Radiology, University Hospitals KU Leuven, Leuven 3000, Belgium
- Tom Doel
  - Translational Imaging Group, Wellcome EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London WC1E 6BT, UK
- Anna L. David
  - Institute for Women's Health, University College London, London WC1E 6BT, UK
- Jan Deprest
  - Department of Obstetrics, University Hospitals KU Leuven, Leuven 3000, Belgium
- Sébastien Ourselin
  - Translational Imaging Group, Wellcome EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London WC1E 6BT, UK
- Tom Vercauteren
  - Translational Imaging Group, Wellcome EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London WC1E 6BT, UK
8
Ramalhinho J, Robu MR, Thompson S, Gurusamy K, Davidson B, Hawkes D, Barratt D, Clarkson MJ. A pre-operative planning framework for global registration of laparoscopic ultrasound to CT images. Int J Comput Assist Radiol Surg 2018; 13:1177-1186. PMID: 29860550; PMCID: PMC6096745; DOI: 10.1007/s11548-018-1799-2.
Abstract
PURPOSE Laparoscopic ultrasound (LUS) enhances the safety of laparoscopic liver resection by enabling real-time imaging of internal structures such as vessels. However, LUS probes can be difficult to use, and many tumours are iso-echoic and hence not visible. Registration of LUS to a pre-operative CT or MR scan has been proposed as a method of image guidance. However, the field of view of the probe is very small compared with the whole liver, making the registration task challenging and dependent on a very accurate initialisation. METHODS We propose a subject-specific planning framework that identifies the anatomical liver regions from which it is possible to acquire vascular data unique enough for a globally optimal initial registration. Vessel-based rigid registration on different areas of the pre-operative CT vascular tree is used to evaluate predicted accuracy and reliability. RESULTS The planning framework is tested on one porcine subject, from which we took 5 independent sweeps of LUS data over different sections of the liver. Target registration error of vessel branching points was used to measure accuracy. Global registration based on vessel centrelines is applied to the 5 datasets. In 3 out of 5 cases registration is successful and in agreement with the planning. Further tests with a CT scan under abdominal insufflation show that the framework can provide valuable information in all 5 cases. CONCLUSIONS We have introduced a planning framework that can guide the surgeon on how much LUS data to collect in order to achieve a reliable, globally unique registration without the need for an initial manual alignment. This could potentially improve the usability of these methods in the clinic.
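Once correspondences between vessel features (e.g. branching points) are hypothesised, the rigid alignment itself is a closed-form least-squares problem. A standard Kabsch/SVD sketch of that inner step follows; note the paper's global registration additionally has to search over correspondences and sweep placements, whereas this sketch assumes matched point pairs are already given:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) aligning matched 3D point sets,
    via the Kabsch/SVD method: R @ src[i] + t ~= dst[i]."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dc - R @ sc
    return R, t
```

Applying the recovered (R, t) to the source branching points and measuring residual distances to their targets gives exactly the target-registration-error style of accuracy measure used in the evaluation.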
Affiliation(s)
- João Ramalhinho
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
  - Centre for Medical Image Computing, University College London, London, UK
- Maria R Robu
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
  - Centre for Medical Image Computing, University College London, London, UK
- Stephen Thompson
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
  - Centre for Medical Image Computing, University College London, London, UK
- Kurinchi Gurusamy
  - Division of Surgery and Interventional Science, University College London, London, UK
- Brian Davidson
  - Division of Surgery and Interventional Science, University College London, London, UK
- David Hawkes
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
  - Centre for Medical Image Computing, University College London, London, UK
- Dean Barratt
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
  - Centre for Medical Image Computing, University College London, London, UK
- Matthew J Clarkson
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
  - Centre for Medical Image Computing, University College London, London, UK
9
Robu MR, Ramalhinho J, Thompson S, Gurusamy K, Davidson B, Hawkes D, Stoyanov D, Clarkson MJ. Global rigid registration of CT to video in laparoscopic liver surgery. Int J Comput Assist Radiol Surg 2018; 13:947-956. PMID: 29736801; PMCID: PMC5974008; DOI: 10.1007/s11548-018-1781-z.
Abstract
PURPOSE Image-guidance systems have the potential to aid laparoscopic interventions by providing sub-surface structure information and tumour localisation. The registration of a preoperative 3D image with the intraoperative laparoscopic video feed is an important component of image guidance, which should be fast, robust and cause minimal disruption to the surgical procedure. Most methods for rigid and non-rigid registration require a good initial alignment. However, in most research systems for abdominal surgery, the user has to manually rotate and translate the models, which is usually difficult to perform quickly and intuitively. METHODS We propose a fast, global method for the initial rigid alignment between a 3D mesh derived from a preoperative CT of the liver and a surface reconstruction of the intraoperative scene. We formulate the shape matching problem as a quadratic assignment problem which minimises the dissimilarity between feature descriptors while enforcing geometrical consistency between all the feature points. We incorporate a novel constraint based on the liver contours which deals specifically with the challenges introduced by laparoscopic data. RESULTS We validate our proposed method on synthetic data, on a liver phantom and on retrospective clinical data acquired during a laparoscopic liver resection. We show robustness to reduced partial-view size and increasing levels of deformation. Our results on the phantom and on the real data show good initial alignment, which can successfully converge to the correct position using fine alignment techniques. Furthermore, since the CT scan can be pre-processed before surgery, the proposed method runs faster than current algorithms. CONCLUSION The proposed shape matching method can provide a fast, global initial registration, which can be further refined by fine alignment methods. This approach will lead to a more usable and intuitive image-guidance system for laparoscopic liver surgery.
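The quadratic assignment formulation combines a unary descriptor-dissimilarity term with a pairwise geometric-consistency term. The sketch below scores candidate matchings with exactly those two ingredients and brute-forces the minimum, which is only feasible for a handful of features; the weight alpha and the descriptors are illustrative assumptions, and the paper's liver-contour constraint and proper QAP solver are not shown:

```python
import itertools
import numpy as np

def assignment_cost(src_pts, dst_pts, src_desc, dst_desc, perm, alpha=1.0):
    """Cost of one candidate matching: descriptor dissimilarity (unary term)
    plus violation of pairwise inter-point distances (geometric consistency)."""
    unary = sum(np.linalg.norm(src_desc[i] - dst_desc[j])
                for i, j in enumerate(perm))
    pair = 0.0
    for (i, j), (k, l) in itertools.combinations(list(enumerate(perm)), 2):
        pair += abs(np.linalg.norm(src_pts[i] - src_pts[k])
                    - np.linalg.norm(dst_pts[j] - dst_pts[l]))
    return unary + alpha * pair

def best_assignment(src_pts, dst_pts, src_desc, dst_desc):
    """Brute-force minimiser over all permutations -- fine for toy sizes only."""
    n = len(src_pts)
    return min(itertools.permutations(range(n)),
               key=lambda p: assignment_cost(src_pts, dst_pts,
                                             src_desc, dst_desc, p))
```

The pairwise term is what makes this a quadratic (rather than linear) assignment: a match is rewarded only if it preserves the distances between whole sets of feature points, not just per-point descriptor similarity.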
Affiliation(s)
- Maria R Robu
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
  - Centre for Medical Image Computing, University College London, London, UK
- João Ramalhinho
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
  - Centre for Medical Image Computing, University College London, London, UK
- Stephen Thompson
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
  - Centre for Medical Image Computing, University College London, London, UK
- Kurinchi Gurusamy
  - Division of Surgery and Interventional Science, University College London, London, UK
- Brian Davidson
  - Division of Surgery and Interventional Science, University College London, London, UK
- David Hawkes
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
  - Centre for Medical Image Computing, University College London, London, UK
- Danail Stoyanov
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
  - Centre for Medical Image Computing, University College London, London, UK
- Matthew J Clarkson
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
  - Centre for Medical Image Computing, University College London, London, UK
|
10
|
Granados A, Vakharia V, Rodionov R, Schweiger M, Vos SB, O'Keeffe AG, Li K, Wu C, Miserocchi A, McEvoy AW, Clarkson MJ, Duncan JS, Sparks R, Ourselin S. Automatic segmentation of stereoelectroencephalography (SEEG) electrodes post-implantation considering bending. Int J Comput Assist Radiol Surg 2018; 13:935-946. [PMID: 29736800 PMCID: PMC5973981 DOI: 10.1007/s11548-018-1740-8] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2018] [Accepted: 03/15/2018] [Indexed: 11/26/2022]
Abstract
Purpose The accurate and automatic localisation of SEEG electrodes is crucial for determining the location of epileptic seizure onset. We propose an algorithm for the automatic segmentation of electrode bolts and contacts that accounts for electrode bending in relation to regional brain anatomy. Methods Co-registered post-implantation CT, pre-implantation MRI, and brain parcellation images are used to create regions of interest to automatically segment bolts and contacts. The contact search strategy follows the direction of the bolt under distance and angle constraints, with post-processing steps that assign remaining contacts and predict contact position. We measured the accuracy of contact position, bolt angle, and anatomical region at the tip of the electrode in 23 post-SEEG cases comprising two different surgical approaches: placing a guiding stylet close to, or far from, the target point. Local and global bending are computed by modelling electrodes as elastic rods. Results Our approach executed on average in 36.17 s with a sensitivity of 98.81% and a positive predictive value (PPV) of 95.01%. Compared to manual segmentation, the position of contacts had a mean absolute error of 0.38 mm, and the mean bolt angle difference of 0.59° resulted in a mean displacement error of 0.68 mm at the tip of the electrode. Anatomical regions at the tip of the electrode were in strong concordance with those selected manually by neurosurgeons (ICC(3,k) = 0.76), with an average distance between regions of 0.82 mm when in disagreement. Our approach performed equally well in both surgical approaches, regardless of the amount of electrode bending. Conclusion We present a method robust to electrode bending that can accurately segment contact positions and bolt orientation. The techniques presented in this paper will allow further characterisation of bending within different brain regions. Electronic supplementary material The online version of this article (10.1007/s11548-018-1740-8) contains supplementary material, which is available to authorized users.
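The kind of direction-guided contact search the abstract describes (walking along the bolt axis under distance and angle constraints, updating the direction so the trace can follow a bending electrode) can be sketched as follows. This is a minimal illustration under assumed parameters; the contact spacing, tolerances, and function name are not taken from the paper:

```python
import numpy as np

def trace_contacts(entry, direction, candidates, spacing=3.5,
                   dist_tol=1.0, max_angle_deg=15.0, n_contacts=8):
    """Walk from the bolt entry along its axis, greedily picking the
    candidate nearest each expected contact position, subject to a
    distance tolerance and a maximum angular deviation from the current
    search direction (so the trace can follow a gently bending electrode)."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    cands = [np.asarray(c, float) for c in candidates]
    pos = np.asarray(entry, float)
    cos_max = np.cos(np.radians(max_angle_deg))
    contacts = []
    for _ in range(n_contacts):
        expected = pos + spacing * d
        best, best_err = None, dist_tol
        for c in cands:
            step = c - pos
            n = np.linalg.norm(step)
            if n == 0.0:
                continue  # skip the contact we are standing on
            if step @ d / n < cos_max:
                continue  # reject candidates that deviate too far angularly
            err = np.linalg.norm(c - expected)  # distance constraint
            if err < best_err:
                best, best_err = c, err
        if best is None:
            break  # no admissible candidate: end of electrode
        # update the direction from the accepted step, then keep tracing
        d = (best - pos) / np.linalg.norm(best - pos)
        pos = best
        contacts.append(best)
    return contacts
```

The greedy step plus the direction update is what lets a straight-line search tolerate bending: each accepted contact re-anchors both the position and the axis for the next expected contact.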
Affiliation(s)
- Alejandro Granados
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Vejay Vakharia
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- National Hospital for Neurology and Neurosurgery, London, UK
- Department of Clinical and Experimental Epilepsy, Institute of Neurology, National Hospital for Neurology and Neurosurgery, London, UK
- Roman Rodionov
- National Hospital for Neurology and Neurosurgery, London, UK
- Department of Clinical and Experimental Epilepsy, Institute of Neurology, National Hospital for Neurology and Neurosurgery, London, UK
- Martin Schweiger
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Sjoerd B Vos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Department of Clinical and Experimental Epilepsy, Institute of Neurology, National Hospital for Neurology and Neurosurgery, London, UK
- Aidan G O'Keeffe
- Department of Statistical Science, University College London, London, UK
- Kuo Li
- Department of Clinical and Experimental Epilepsy, Institute of Neurology, National Hospital for Neurology and Neurosurgery, London, UK
- The First Affiliated Hospital of Xian Jiaotong University, Xian, People's Republic of China
- Chengyuan Wu
- Vickie and Jack Farber Inst for Neuroscience, Thomas Jefferson University, Philadelphia, USA
- Anna Miserocchi
- National Hospital for Neurology and Neurosurgery, London, UK
- Andrew W McEvoy
- National Hospital for Neurology and Neurosurgery, London, UK
- Matthew J Clarkson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- John S Duncan
- National Hospital for Neurology and Neurosurgery, London, UK
- Department of Clinical and Experimental Epilepsy, Institute of Neurology, National Hospital for Neurology and Neurosurgery, London, UK
- Rachel Sparks
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Sébastien Ourselin
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Institute of Neurology, London, UK
|