1
Le Lous M, Vasconcelos F, Di Vece C, Dromey B, Napolitano R, Yoo S, Edwards E, Huaulme A, Peebles D, Stoyanov D, Jannin P. Probe motion during mid-trimester fetal anomaly scan in the clinical setting: A prospective observational study. Eur J Obstet Gynecol Reprod Biol 2024; 298:13-17. [PMID: 38705008] [DOI: 10.1016/j.ejogrb.2024.04.042]
Abstract
INTRODUCTION This study aims to investigate probe motion during full mid-trimester anomaly scans. METHODS We undertook a prospective, observational study of obstetric sonographers at a UK university teaching hospital. We prospectively collected full-length video recordings of routine second-trimester anomaly scans, synchronized with probe trajectory tracking data. Videos were reviewed and trajectories analyzed using duration, path metrics (path length, velocity, acceleration, jerk, and volume) and angular metrics (spectral arc length, angular area, angular velocity, angular acceleration, and angular jerk). Trajectories were then compared according to participant level of expertise, fetal presentation, and patient BMI. RESULTS A total of 17 anomaly scans were recorded. The average probe velocity was 12.9 ± 3.4 mm/s for consultants versus 24.6 ± 5.7 mm/s for fellows (p = 0.02), the average acceleration 170.4 ± 26.3 mm/s² versus 328.9 ± 62.7 mm/s² (p = 0.02), the average jerk 7491.7 ± 1056.1 mm/s³ versus 14944.1 ± 3146.3 mm/s³ (p = 0.02), and the working volume 9 × 10⁶ ± 4 × 10⁶ mm³ versus 29 × 10⁶ ± 11 × 10⁶ mm³ (p = 0.03). The angular metrics did not differ significantly with participant level of expertise, fetal presentation, or patient BMI. CONCLUSION Differences in the probe path metrics (velocity, acceleration, jerk and working volume) were observed according to the operator's level of expertise.
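For readers who want to reproduce this kind of analysis, the sketch below shows how the reported path metrics can be derived from tracked probe positions. It is a minimal illustration: the uniform sampling rate and the bounding-box estimate of working volume are assumptions, not details taken from the paper.

```python
import numpy as np

def path_metrics(positions, fs=30.0):
    """Path metrics from a probe trajectory.

    positions: (N, 3) array of tracked probe coordinates in mm.
    fs: tracker sampling rate in Hz (assumed uniform).
    """
    dt = 1.0 / fs
    velocity = np.diff(positions, axis=0) / dt      # mm/s
    acceleration = np.diff(velocity, axis=0) / dt   # mm/s^2
    jerk = np.diff(acceleration, axis=0) / dt       # mm/s^3
    speed = np.linalg.norm(velocity, axis=1)
    return {
        "path_length_mm": float(np.sum(speed * dt)),
        "mean_velocity_mm_s": float(np.mean(speed)),
        "mean_acceleration_mm_s2": float(np.mean(np.linalg.norm(acceleration, axis=1))),
        "mean_jerk_mm_s3": float(np.mean(np.linalg.norm(jerk, axis=1))),
        # Axis-aligned bounding box as a simple working-volume proxy;
        # the paper does not specify its exact estimator.
        "working_volume_mm3": float(np.prod(positions.max(axis=0) - positions.min(axis=0))),
    }
```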
Affiliation(s)
- Maela Le Lous
- Department of Obstetrics and Gynecology, University Hospital of Rennes, France; Univ Rennes, INSERM, LTSI - UMR 1099, F35000 Rennes, France; CIC Inserm 1414, University Hospital of Rennes, University of Rennes 1, Rennes, France; Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom.
- Francisco Vasconcelos
- Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom
- Chiara Di Vece
- Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom
- Brian Dromey
- Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom; Elizabeth Garrett Anderson Institute for Women's Health, University College London, London, United Kingdom
- Raffaele Napolitano
- Elizabeth Garrett Anderson Institute for Women's Health, University College London, London, United Kingdom; Fetal Medicine Unit, Elizabeth Garrett Anderson Wing, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Soojeong Yoo
- Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom
- Eddie Edwards
- Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom
- Arnaud Huaulme
- Univ Rennes, INSERM, LTSI - UMR 1099, F35000 Rennes, France
- Donald Peebles
- Elizabeth Garrett Anderson Institute for Women's Health, University College London, London, United Kingdom; Fetal Medicine Unit, Elizabeth Garrett Anderson Wing, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Danail Stoyanov
- Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom
- Pierre Jannin
- Univ Rennes, INSERM, LTSI - UMR 1099, F35000 Rennes, France
2
He C, Karpavičiūtė N, Hariharan R, Lees L, Jacques C, Ferrand T, Chambost J, Wouters K, Malmsten J, Miller R, Zaninovic N, Vasconcelos F, Hickman C. Seeking arrangements: cell contact as a cleavage-stage biomarker. Reprod Biomed Online 2024; 48:103654. [PMID: 38246064] [DOI: 10.1016/j.rbmo.2023.103654]
Abstract
RESEARCH QUESTION What can three-dimensional cell contact networks tell us about the developmental potential of cleavage-stage human embryos? DESIGN This pilot study was a retrospective analysis of two Embryoscope imaging datasets from two clinics. An artificial intelligence system was used to reconstruct the three-dimensional structure of embryos from 11-plane focal stacks. Networks of cell contacts were extracted from the resulting three-dimensional embryo models, and each embryo's mean number of contacts per cell was computed. Unpaired t-tests and receiver operating characteristic curve analysis were used to statistically analyse mean cell contact outcomes. Cell contact networks from different embryos were compared to identify embryos with identical cell arrangements. RESULTS At t4, a higher mean number of contacts per cell was associated with greater rates of blastulation and higher blastocyst quality. No associations were found with biochemical pregnancy, live birth, miscarriage or ploidy. At t8, a higher mean number of contacts was associated with increased blastocyst quality, biochemical pregnancy and live birth. No associations were found with miscarriage or aneuploidy. Mean contacts at t4 correlated weakly with those at t8. Four-cell embryos fell into nine distinct cell arrangements; the five most common accounted for 97% of embryos. Eight-cell embryos displayed a greater degree of variation, with 59 distinct cell arrangements. CONCLUSIONS Evidence is provided for the clinical relevance of cleavage-stage cell arrangement in the human preimplantation embryo beyond the four-cell stage, which may improve selection techniques for day-3 transfers. This pilot study provides a strong case for further investigation into spatial biomarkers and three-dimensional morphokinetics.
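The study's central quantity, the mean number of contacts per cell, is simple to compute once a contact network is available. A minimal sketch, assuming contacts are given as index pairs extracted from the 3D models:

```python
def mean_contacts_per_cell(contacts, n_cells):
    """contacts: iterable of (i, j) index pairs, one per touching pair of cells."""
    edges = {tuple(sorted(pair)) for pair in contacts}  # undirected, de-duplicated
    degree = [0] * n_cells
    for i, j in edges:
        degree[i] += 1
        degree[j] += 1
    return sum(degree) / n_cells

# A 4-cell embryo in a tetrahedral arrangement has all 6 possible contacts,
# i.e. 3.0 mean contacts per cell; flatter planar arrangements score lower.
assert mean_contacts_per_cell([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)], 4) == 3.0
```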
Affiliation(s)
- Chloe He
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, 43-45 Foley St, London W1W 7TY, UK; Department of Computer Science, University College London, 66-72 Gower St, London WC1E 6EA, UK; AI Team, Apricity, 14 Grays Inn Rd, London WC1X 8HN, UK.
- Lilly Lees
- AI Team, Apricity, 14 Grays Inn Rd, London WC1X 8HN, UK
- Koen Wouters
- Brussels IVF, University Hospital Brussels, Laarbeeklaan 101, 1090 Jette, Brussels, Belgium
- Jonas Malmsten
- Ronald O Perelman and Claudia Cohen Center for Reproductive Medicine, Weill Cornell Medicine, 1305 York Ave 6th floor, New York, NY 10021, USA
- Ryan Miller
- Ronald O Perelman and Claudia Cohen Center for Reproductive Medicine, Weill Cornell Medicine, 1305 York Ave 6th floor, New York, NY 10021, USA
- Nikica Zaninovic
- Ronald O Perelman and Claudia Cohen Center for Reproductive Medicine, Weill Cornell Medicine, 1305 York Ave 6th floor, New York, NY 10021, USA
- Francisco Vasconcelos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, 43-45 Foley St, London W1W 7TY, UK; Department of Computer Science, University College London, 66-72 Gower St, London WC1E 6EA, UK
- Cristina Hickman
- AI Team, Apricity, 14 Grays Inn Rd, London WC1X 8HN, UK; Institute of Reproductive and Developmental Biology, Imperial College London, Hammersmith Campus, Du Cane Road, London W12 0HS, UK
3
Casella A, Bano S, Vasconcelos F, David AL, Paladini D, Deprest J, De Momi E, Mattos LS, Moccia S, Stoyanov D. Learning-based keypoint registration for fetoscopic mosaicking. Int J Comput Assist Radiol Surg 2024; 19:481-492. [PMID: 38066354] [PMCID: PMC10881678] [DOI: 10.1007/s11548-023-03025-7]
Abstract
PURPOSE In twin-to-twin transfusion syndrome (TTTS), abnormal vascular anastomoses in the monochorionic placenta can produce uneven blood flow between the two fetuses. In current practice, TTTS is treated surgically by closing the abnormal anastomoses using laser ablation. This surgery is minimally invasive and relies on fetoscopy. The limited field of view makes anastomosis identification a challenging task for the surgeon. METHODS To tackle this challenge, we propose a learning-based framework for in vivo fetoscopy frame registration for field-of-view expansion. The novelty of this framework lies in a learning-based keypoint proposal network and an encoding strategy that filters (i) irrelevant keypoints, based on fetoscopic semantic image segmentation, and (ii) inconsistent homographies. RESULTS We validate our framework on a dataset of six intraoperative sequences from six TTTS surgeries in six different women against the most recent state-of-the-art algorithm, which relies on the segmentation of placental vessels. CONCLUSION The proposed framework achieves higher performance than the state of the art, paving the way for robust mosaicking that provides surgeons with context awareness during TTTS surgery.
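The paper's keypoint proposal network is learned, but the surrounding registration logic (mask-based keypoint filtering, RANSAC homography fitting, and rejection of inconsistent homographies) can be illustrated with classical tools. The following is a rough stand-in using ORB features, not the authors' method:

```python
import cv2
import numpy as np

def register_frames(img1, img2, mask1, mask2):
    """Homography between consecutive fetoscopy frames; mask1/mask2 are
    binary segmentations used to discard keypoints on irrelevant content."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, mask1)
    k2, d2 = orb.detectAndCompute(img2, mask2)
    if d1 is None or d2 is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    if len(matches) < 4:
        return None
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    # Reject geometrically implausible homographies (an illustrative test;
    # the paper's consistency encoding is more involved).
    if H is None or abs(np.linalg.det(H[:2, :2])) < 0.1:
        return None
    return H
```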
Affiliation(s)
- Alessandro Casella
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
- Sophia Bano
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
- Francisco Vasconcelos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
- Anna L David
- Fetal Medicine Unit, Elizabeth Garrett Anderson Wing, University College London Hospital, London, UK
- EGA Institute for Women's Health, Faculty of Population Health Sciences, University College London, London, UK
- Department of Development and Regeneration, University Hospital Leuven, Leuven, Belgium
- Dario Paladini
- Department of Fetal and Perinatal Medicine, Istituto Giannina Gaslini, Genoa, Italy
- Jan Deprest
- EGA Institute for Women's Health, Faculty of Population Health Sciences, University College London, London, UK
- Department of Development and Regeneration, University Hospital Leuven, Leuven, Belgium
- Elena De Momi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Leonardo S Mattos
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
4
Bano S, Casella A, Vasconcelos F, Qayyum A, Benzinou A, Mazher M, Meriaudeau F, Lena C, Cintorrino IA, De Paolis GR, Biagioli J, Grechishnikova D, Jiao J, Bai B, Qiao Y, Bhattarai B, Gaire RR, Subedi R, Vazquez E, Płotka S, Lisowska A, Sitek A, Attilakos G, Wimalasundera R, David AL, Paladini D, Deprest J, De Momi E, Mattos LS, Moccia S, Stoyanov D. Placental vessel segmentation and registration in fetoscopy: Literature review and MICCAI FetReg2021 challenge findings. Med Image Anal 2024; 92:103066. [PMID: 38141453] [DOI: 10.1016/j.media.2023.103066]
Abstract
Fetoscopy laser photocoagulation is a widely adopted procedure for treating Twin-to-Twin Transfusion Syndrome (TTTS). The procedure involves photocoagulation of pathological anastomoses to restore a physiological blood exchange between the twins. The procedure is particularly challenging from the surgeon's side due to the limited field of view, poor manoeuvrability of the fetoscope, poor visibility caused by amniotic fluid turbidity, and variability in illumination. These challenges may lead to increased surgery time and incomplete ablation of pathological anastomoses, resulting in persistent TTTS. Computer-assisted intervention (CAI) can provide TTTS surgeons with decision support and context awareness by identifying key structures in the scene and expanding the fetoscopic field of view through video mosaicking. Research in this domain has been hampered by the lack of high-quality data to design, develop and test CAI algorithms. Through the Fetoscopic Placental Vessel Segmentation and Registration (FetReg2021) challenge, organized as part of the MICCAI2021 Endoscopic Vision (EndoVis) challenge, we released the first large-scale multi-center TTTS dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms, with a focus on creating drift-free mosaics from long-duration fetoscopy videos. For this challenge, we released a dataset of 2060 images, pixel-annotated for vessel, tool, fetus and background classes, from 18 in-vivo TTTS fetoscopy procedures, together with 18 short video clips of an average length of 411 frames, for developing placental scene segmentation and frame registration techniques for mosaicking. Seven teams participated in this challenge, and their model performance was assessed on an unseen test dataset of 658 pixel-annotated images from 6 fetoscopic procedures and 6 short clips. For the segmentation task, the baseline was the top performer overall (aggregated mIoU of 0.6763) and was best on the vessel class (mIoU of 0.5817), while team RREB was best on the tool (mIoU of 0.6335) and fetus (mIoU of 0.5178) classes. For the registration task, the baseline performed better overall than team SANO, with a mean 5-frame SSIM of 0.9348. Qualitatively, team SANO performed better in planar scenarios, while the baseline was better in non-planar scenarios. Detailed analysis showed that no single team performed best on all 6 test fetoscopic videos. The challenge provided an opportunity to create generalized solutions for fetoscopic scene understanding and mosaicking. In this paper, we present the findings of the FetReg2021 challenge, alongside a detailed literature review of CAI in TTTS fetoscopy. Through this challenge, its analysis and the release of multi-center fetoscopic data, we provide a benchmark for future research in this field.
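For orientation, the segmentation ranking above is based on mean Intersection-over-Union. A minimal per-image version is sketched below, with the class set assumed from the annotation description; the challenge's exact aggregation across images and videos may differ from this per-image average.

```python
import numpy as np

def mean_iou(pred, gt, class_ids=(0, 1, 2, 3)):
    """pred, gt: integer label maps; class_ids: e.g. background, vessel, tool, fetus."""
    ious = []
    for c in class_ids:
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue  # class absent from both maps; skip it
        ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))
```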
Affiliation(s)
- Sophia Bano
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, UK.
- Alessandro Casella
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Italy; Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy
- Francisco Vasconcelos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, UK
- Moona Mazher
- Department of Computer Engineering and Mathematics, University Rovira i Virgili, Spain
- Chiara Lena
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy
- Gaia Romana De Paolis
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy
- Jessica Biagioli
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy
- Bizhe Bai
- Medical Computer Vision and Robotics Group, Department of Mathematical and Computational Sciences, University of Toronto, Canada
- Yanyan Qiao
- Shanghai MicroPort MedBot (Group) Co., Ltd, China
- Binod Bhattarai
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, UK
- Ronast Subedi
- NepAL Applied Mathematics and Informatics Institute for Research, Nepal
- Szymon Płotka
- Sano Center for Computational Medicine, Poland; Quantitative Healthcare Analysis Group, Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
- Arkadiusz Sitek
- Sano Center for Computational Medicine, Poland; Center for Advanced Medical Computing and Simulation, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- George Attilakos
- Fetal Medicine Unit, Elizabeth Garrett Anderson Wing, University College London Hospital, UK; EGA Institute for Women's Health, Faculty of Population Health Sciences, University College London, UK
- Ruwan Wimalasundera
- Fetal Medicine Unit, Elizabeth Garrett Anderson Wing, University College London Hospital, UK; EGA Institute for Women's Health, Faculty of Population Health Sciences, University College London, UK
- Anna L David
- Fetal Medicine Unit, Elizabeth Garrett Anderson Wing, University College London Hospital, UK; EGA Institute for Women's Health, Faculty of Population Health Sciences, University College London, UK; Department of Development and Regeneration, University Hospital Leuven, Belgium
- Dario Paladini
- Department of Fetal and Perinatal Medicine, Istituto "Giannina Gaslini", Italy
- Jan Deprest
- EGA Institute for Women's Health, Faculty of Population Health Sciences, University College London, UK; Department of Development and Regeneration, University Hospital Leuven, Belgium
- Elena De Momi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy
- Leonardo S Mattos
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Italy
- Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Italy
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, UK
5
Alabi O, Bano S, Vasconcelos F, David AL, Deprest J, Stoyanov D. Correction to: Robust fetoscopic mosaicking from deep learned flow fields. Int J Comput Assist Radiol Surg 2024; 19:181. [PMID: 37787940] [PMCID: PMC10770214] [DOI: 10.1007/s11548-023-03018-6]
Affiliation(s)
- Sophia Bano
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
- Francisco Vasconcelos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
- Anna L David
- Elizabeth Garrett Anderson Institute for Women's Health, University College London, London, UK
- NIHR University College London Hospitals Biomedical Research Centre, London, UK
- Jan Deprest
- Elizabeth Garrett Anderson Institute for Women's Health, University College London, London, UK
- Department of Development and Regeneration, University Hospital KU Leuven, Leuven, Belgium
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
6
Cartucho J, Weld A, Tukra S, Xu H, Matsuzaki H, Ishikawa T, Kwon M, Jang YE, Kim KJ, Lee G, Bai B, Kahrs LA, Boecking L, Allmendinger S, Müller L, Zhang Y, Jin Y, Bano S, Vasconcelos F, Reiter W, Hajek J, Silva B, Lima E, Vilaça JL, Queirós S, Giannarou S. SurgT challenge: Benchmark of soft-tissue trackers for robotic surgery. Med Image Anal 2024; 91:102985. [PMID: 37844472] [DOI: 10.1016/j.media.2023.102985]
Abstract
This paper introduces the "SurgT: Surgical Tracking" challenge, which was organized in conjunction with the 25th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2022). There were two purposes for the creation of this challenge: (1) the establishment of the first standardized benchmark for the research community to assess soft-tissue trackers; and (2) to encourage the development of unsupervised deep learning methods, given the lack of annotated data in surgery. A dataset of 157 stereo endoscopic videos from 20 clinical cases, along with stereo camera calibration parameters, was provided. Participants were assigned the task of developing algorithms to track the movement of soft tissues, represented by bounding boxes, in stereo endoscopic videos. At the end of the challenge, the developed methods were assessed on a previously hidden test subset. This assessment uses benchmarking metrics that were purposely developed for this challenge to verify the efficacy of unsupervised deep learning algorithms in tracking soft tissue. The metric used for ranking the methods was the Expected Average Overlap (EAO) score, which measures the average overlap between a tracker's and the ground-truth bounding boxes. First in the challenge was the deep learning submission by ICVS-2Ai, with a superior EAO score of 0.617. This method employs ARFlow to estimate unsupervised dense optical flow from cropped images, using photometric and regularization losses. Second was Jmees, with an EAO of 0.583, which uses deep learning for surgical tool segmentation on top of a non-deep-learning baseline tracker: CSRT. CSRT by itself scores a similar EAO of 0.563. The results from this challenge show that non-deep-learning methods are currently still competitive. The dataset and benchmarking tool created for this challenge have been made publicly available at https://surgt.grand-challenge.org/. This challenge is expected to contribute to the development of autonomous robotic surgery and other digital surgical technologies.
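The EAO ranking rests on bounding-box overlap. A simplified sketch of the underlying computation is shown below; the full EAO protocol additionally averages accuracy curves over many sequences and sequence lengths, which is not reproduced here.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def mean_overlap(pred_boxes, gt_boxes):
    """Per-sequence average overlap between tracker and ground-truth boxes."""
    return sum(box_iou(p, g) for p, g in zip(pred_boxes, gt_boxes)) / len(gt_boxes)
```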
Affiliation(s)
- João Cartucho
- The Hamlyn Centre for Robotic Surgery, Imperial College London, United Kingdom.
- Alistair Weld
- The Hamlyn Centre for Robotic Surgery, Imperial College London, United Kingdom
- Samyakh Tukra
- The Hamlyn Centre for Robotic Surgery, Imperial College London, United Kingdom
- Haozheng Xu
- The Hamlyn Centre for Robotic Surgery, Imperial College London, United Kingdom
- Minjun Kwon
- Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea
- Yong Eun Jang
- Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea
- Kwang-Ju Kim
- Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea
- Gwang Lee
- Ajou University, Gyeonggi-do, South Korea
- Bizhe Bai
- Medical Computer Vision and Robotics Lab, University of Toronto, Canada
- Lueder A Kahrs
- Medical Computer Vision and Robotics Lab, University of Toronto, Canada
- Yitong Zhang
- Surgical Robot Vision, University College London, United Kingdom
- Yueming Jin
- Surgical Robot Vision, University College London, United Kingdom
- Sophia Bano
- Surgical Robot Vision, University College London, United Kingdom
- Bruno Silva
- Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal; 2Ai - School of Technology, IPCA, Barcelos, Portugal
- Estevão Lima
- Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal
- João L Vilaça
- 2Ai - School of Technology, IPCA, Barcelos, Portugal
- Sandro Queirós
- Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal
- Stamatia Giannarou
- The Hamlyn Centre for Robotic Surgery, Imperial College London, United Kingdom
7
Daher R, Vasconcelos F, Stoyanov D. A Temporal Learning Approach to Inpainting Endoscopic Specularities and Its Effect on Image Correspondence. Med Image Anal 2023; 90:102994. [PMID: 37812856] [PMCID: PMC10958122] [DOI: 10.1016/j.media.2023.102994]
Abstract
Video streams are utilised to guide minimally invasive surgery and diagnosis in a wide range of procedures, and many computer-assisted techniques have been developed to automatically analyse them. These approaches can provide additional information to the surgeon, such as lesion detection, instrument navigation, or 3D anatomy shape modelling. However, the image features necessary to recognise these patterns are not always reliably detected due to the presence of irregular light patterns such as specular highlight reflections. In this paper, we aim to remove specular highlights from endoscopic videos using machine learning. We propose using a temporal generative adversarial network (GAN) to inpaint the hidden anatomy under specularities, inferring its appearance spatially and from neighbouring frames in which the specularities are not present at the same location. This is achieved using in-vivo data from gastric endoscopy (Hyper Kvasir) in a fully unsupervised manner that relies on the automatic detection of specular highlights. System evaluations show significant improvements over other methods through direct comparison and ablation studies that demonstrate the importance of the network's temporal and transfer learning components. The generalisability of our system to different surgical setups and procedures was also evaluated qualitatively on in-vivo data from gastric endoscopy and ex-vivo porcine data (SERV-CT, SCARED). We also assess the effect of our method, in comparison to other methods, on computer vision tasks that underpin 3D reconstruction and camera motion estimation, namely stereo disparity, optical flow, and sparse point feature matching. These are evaluated quantitatively and qualitatively, and the results show a positive effect of our specular inpainting method on these tasks in a novel comprehensive analysis. Our code and dataset are made available at https://github.com/endomapper/Endo-STTN.
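The pipeline depends on automatically detecting specular highlights before inpainting. A crude stand-in for such a detector thresholds bright, desaturated pixels; the cutoff values are illustrative and not those used by Endo-STTN.

```python
import cv2
import numpy as np

def specular_mask(bgr, sat_max=40, val_min=220):
    """Rough specular-highlight mask: bright, low-saturation pixels in HSV."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = ((hsv[..., 1] < sat_max) & (hsv[..., 2] > val_min)).astype(np.uint8) * 255
    # Dilate so the inpainting region fully covers highlight borders.
    return cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=1)
```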
Affiliation(s)
- Rema Daher
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, Gower Street, London, WC1E 6BT, UK.
- Francisco Vasconcelos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, Gower Street, London, WC1E 6BT, UK.
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, Gower Street, London, WC1E 6BT, UK.
8
Le Lous M, Beridot C, Baxter JSH, Huaulme A, Vasconcelos F, Stoyanov D, Siassakos D, Jannin P. Physical environment of the operating room during cesarean section: A systematic review. Eur J Obstet Gynecol Reprod Biol 2023; 288:1-6. [PMID: 37406465] [DOI: 10.1016/j.ejogrb.2023.06.029]
Abstract
INTRODUCTION Environmental factors in the operating room during cesarean sections are likely important for both women/birthing people and their babies, but there is currently a lack of rigorous literature evaluating them. The principal aim of this study was to systematically examine published studies on the physical environment of the obstetric operating room during cesarean sections and its impact on maternal and neonatal outcomes. The secondary objective was to identify the sensors used to investigate the operating room environment during cesarean sections. METHODS In this literature review, we searched the MEDLINE database using the following keywords: Cesarean section AND (operating room environment OR Noise OR Music OR Video recording OR Light level OR Gentle OR Temperature OR Motion Data). Eligible studies had to be published in English or French within the past 10 years and had to investigate the operating room environment during cesarean sections in women. For each study we reported which aspects of the physical environment were investigated in the OR (i.e., noise, music, movement, light or temperature) and the sensors involved. RESULTS Of a total of 105 studies screened, we selected 8 articles by title and abstract in PubMed. This small number shows that the field is poorly investigated. The most evaluated environmental factors to date are operating room noise and temperature, and the presence of music. Few studies used advanced sensors in the operating room to evaluate environmental factors in a more nuanced and complete way. Two studies concern sound level, four concern music, one concerns temperature, and one analyzed the number of entrances/exits into the OR. No study analyzed light level or more fine-grained movement data. CONCLUSIONS Main findings include increased noise and motion at specific time points, for example during delivery or anaesthesia; the positive impact of music on parents and staff alike; and that a warmer theatre is better for babies but more uncomfortable for surgeons.
Affiliation(s)
- Maela Le Lous
- Department of Obstetrics and Gynecology, University Hospital of Rennes, France; LTSI - INSERM UMR 1099, University of Rennes 1, F35000 Rennes, France; Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom.
- Caroline Beridot
- Department of Obstetrics and Gynecology, University Hospital of Rennes, France
- John S H Baxter
- LTSI - INSERM UMR 1099, University of Rennes 1, F35000 Rennes, France
- Arnaud Huaulme
- LTSI - INSERM UMR 1099, University of Rennes 1, F35000 Rennes, France
- Francisco Vasconcelos
- Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom
- Danail Stoyanov
- Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom
- Dimitrios Siassakos
- Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom; EGA Institute for Women's Health, University College London, London, United Kingdom
- Pierre Jannin
- LTSI - INSERM UMR 1099, University of Rennes 1, F35000 Rennes, France
9
Li L, Mazomenos E, Chandler JH, Obstein KL, Valdastri P, Stoyanov D, Vasconcelos F. Robust endoscopic image mosaicking via fusion of multimodal estimation. Med Image Anal 2023; 84:102709. [PMID: 36549045] [PMCID: PMC10636739] [DOI: 10.1016/j.media.2022.102709]
Abstract
We propose an endoscopic image mosaicking algorithm that is robust to changing lighting conditions, specular reflections, and feature-less scenes. These conditions are especially common in minimally invasive surgery, where the light source moves with the camera to dynamically illuminate close-range scenes. This makes it difficult for a single image registration method to robustly track camera motion and generate consistent mosaics of the expanded surgical scene across different and heterogeneous environments. Instead of relying on one specialised feature extractor or image registration method, we propose to fuse different image registration algorithms according to their uncertainties, formulating the problem as affine pose graph optimisation. This allows us to combine landmarks, dense intensity registration, and learning-based approaches in a single framework. To demonstrate our application we consider deep learning-based optical flow, hand-crafted features, and intensity-based registration; however, the framework is general and could take as input other sources of motion estimation, including other sensor modalities. We validate the performance of our approach on three datasets with very different characteristics to highlight its generalisability, demonstrating the advantages of our proposed fusion framework. While each individual registration algorithm eventually fails drastically on certain surgical scenes, the fusion approach flexibly determines which algorithms to use and in which proportion, obtaining consistent mosaics more robustly.
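At the heart of the fusion idea is weighting each registration estimate by its uncertainty. As a single-edge stand-in for the paper's full pose-graph optimisation, assuming each method reports an affine parameter vector with a covariance:

```python
import numpy as np

def fuse_affine_estimates(thetas, covs):
    """Inverse-covariance (information) fusion of affine motion estimates.

    thetas: list of (6,) affine parameter vectors [a, b, tx, c, d, ty].
    covs:   list of (6, 6) covariance matrices, one per registration method.
    """
    infos = [np.linalg.inv(c) for c in covs]
    fused_cov = np.linalg.inv(sum(infos))
    fused_theta = fused_cov @ sum(W @ t for W, t in zip(infos, thetas))
    return fused_theta, fused_cov
```

Methods that fail on a given frame pair report large covariances and are automatically down-weighted, which matches the behaviour the abstract describes.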
Affiliation(s)
- Liang Li
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK; College of Control Science and Engineering, Zhejiang University, Hangzhou, 310027, China.
- Evangelos Mazomenos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK.
- James H Chandler
- Storm Lab UK, School of Electronic and Electrical Engineering, University of Leeds, Leeds LS2 9JT, UK.
- Keith L Obstein
- Division of Gastroenterology, Hepatology, and Nutrition, Vanderbilt University Medical Center, Nashville, TN 37232, USA; STORM Lab, Department of Mechanical Engineering, Vanderbilt University, Nashville, TN 37235, USA.
- Pietro Valdastri
- Storm Lab UK, School of Electronic and Electrical Engineering, University of Leeds, Leeds LS2 9JT, UK.
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK.
- Francisco Vasconcelos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK.
10
Birlo M, Edwards PJE, Yoo S, Dromey B, Vasconcelos F, Clarkson MJ, Stoyanov D. CAL-Tutor: A HoloLens 2 Application for Training in Obstetric Sonography and User Motion Data Recording. J Imaging 2022; 9:6. [PMID: 36662104] [PMCID: PMC9860994] [DOI: 10.3390/jimaging9010006]
Abstract
Obstetric ultrasound (US) training teaches the relationship between foetal anatomy and the viewed US slice to enable navigation to standardised anatomical planes (head, abdomen and femur) where diagnostic measurements are taken. This process is difficult to learn and results in considerable inter-operator variability. We propose the CAL-Tutor system for US training based on a US scanner and phantom, where models of both the baby and the US slice are displayed to the trainee in their physical locations using the HoloLens 2. The intention is that AR guidance will shorten the learning curve for US trainees and improve spatial awareness. In addition to the AR guidance, we also record multiple data streams to assess user motion and the learning process. The HoloLens 2 provides eye gaze and head and hand positions, ARToolkit and NDI Aurora tracking give the US probe positions, and an external camera records the overall scene. These data can provide a rich source for further analysis, such as distinguishing expert from novice motion. We have demonstrated the system on a sample of engineers. Feedback suggests that the system helps novice users navigate the US probe to the standard planes. The data capture is successful, and initial data visualisations show that meaningful information about user behaviour can be captured. Initial feedback is encouraging and shows improved user assessment where AR guidance is provided.
Affiliation(s)
- Manuel Birlo
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), Charles Bell House, 43–45 Foley Street, London W1W 7TY, UK
- Philip J. Eddie Edwards
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), Charles Bell House, 43–45 Foley Street, London W1W 7TY, UK
- Soojeong Yoo
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), Charles Bell House, 43–45 Foley Street, London W1W 7TY, UK
- UCL Interaction Centre (UCLIC), University College London, 66-72 Gower Street, London WC1E 6EA, UK
- Brian Dromey
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), Charles Bell House, 43–45 Foley Street, London W1W 7TY, UK
- UCL EGA Institute for Women's Health, Medical School Building, 74 Huntley Street, London WC1E 6AU, UK
- Francisco Vasconcelos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), Charles Bell House, 43–45 Foley Street, London W1W 7TY, UK
- Matthew J. Clarkson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), Charles Bell House, 43–45 Foley Street, London W1W 7TY, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), Charles Bell House, 43–45 Foley Street, London W1W 7TY, UK
11
Bano S, Vasconcelos F, David AL, Deprest J, Stoyanov D. Placental vessel-guided hybrid framework for fetoscopic mosaicking. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2022. [DOI: 10.1080/21681163.2022.2154278]
Affiliation(s)
- Sophia Bano
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
- Francisco Vasconcelos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
- Anna L. David
- Fetal Medicine Unit, University College London Hospital, London, UK
- Jan Deprest
- Department of Development and Regeneration, University Hospital Leuven, Leuven, Belgium
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
12
Psychogyios D, Mazomenos E, Vasconcelos F, Stoyanov D. MSDESIS: Multitask Stereo Disparity Estimation and Surgical Instrument Segmentation. IEEE Trans Med Imaging 2022; 41:3218-3230. [PMID: 35675257] [PMCID: PMC7613770] [DOI: 10.1109/tmi.2022.3181229]
Abstract
Reconstructing the 3D geometry of the surgical site and detecting instruments within it are important tasks for surgical navigation systems and robotic surgery automation. Traditional approaches treat each problem in isolation and do not account for the intrinsic relationship between segmentation and stereo matching. In this paper, we present a learning-based framework that jointly estimates disparity and binary tool segmentation masks. The core component of our architecture is a shared feature encoder which allows strong interaction between the aforementioned tasks. Experimentally, we train two variants of our network with different capacities and explore different training schemes, including both multi-task and single-task learning. Our results show that supervising the segmentation task improves our network's disparity estimation accuracy. We demonstrate a domain adaptation scheme where we supervise the segmentation task with monocular data and achieve domain adaptation of the adjacent disparity task, reducing disparity end-point error and depth mean absolute error by 77.73% and 61.73%, respectively, compared to the pre-trained baseline model. Our best overall multi-task model, trained with both disparity and segmentation data in subsequent phases, achieves 89.15% mean Intersection-over-Union on the RIS test set and 3.18 mm depth mean absolute error on the SCARED test set. Our proposed multi-task architecture runs in real time, able to process 1280×1024 stereo input and simultaneously estimate disparity maps and segmentation masks at 22 frames per second. The model code and pre-trained models are made available at https://github.com/dimitrisPs/msdesis.
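A toy version of the shared-encoder layout described above, with one trunk feeding a disparity head and a segmentation head. Layer sizes are illustrative, and the real network operates on stereo pairs with a stereo-matching backbone:

```python
import torch
import torch.nn as nn

class SharedEncoderMultiTask(nn.Module):
    def __init__(self, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(           # shared features for both tasks
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.disparity_head = nn.Conv2d(feat, 1, 3, padding=1)
        self.segmentation_head = nn.Conv2d(feat, 1, 3, padding=1)

    def forward(self, x):
        f = self.encoder(x)                     # interaction happens here
        return self.disparity_head(f), torch.sigmoid(self.segmentation_head(f))
```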
13
Colleoni E, Psychogyios D, Van Amsterdam B, Vasconcelos F, Stoyanov D. SSIS-Seg: Simulation-Supervised Image Synthesis for Surgical Instrument Segmentation. IEEE Trans Med Imaging 2022; 41:3074-3086. [PMID: 35622799] [DOI: 10.1109/tmi.2022.3178549]
Abstract
Surgical instrument segmentation can be used in a range of computer-assisted interventions and in automation for surgical robotics. While deep learning architectures have rapidly advanced the robustness and performance of segmentation models, most are still reliant on supervision and on large quantities of labelled data. In this paper, we present a novel method for surgical image generation that fuses robotic instrument simulation with recent domain adaptation techniques to synthesize artificial surgical images for training surgical instrument segmentation models. We integrate attention modules into well-established image generation pipelines and propose a novel cost function to support supervision from simulation frames during model training. We provide an extensive evaluation of our method in terms of segmentation performance, along with a validation study on image quality using established evaluation metrics. Additionally, we release a novel segmentation dataset from real surgeries that will be shared for research purposes. Both binary and semantic segmentation have been considered, and we show the capability of our synthetic images to train segmentation models, comparing against the latest methods from the literature.
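One way to read "supervision from simulation frames" is as an extra consistency term in the generator objective: the instrument mask known from simulation should remain recoverable from the synthesized image. The composite loss below is an assumed illustration, not the paper's actual cost function:

```python
import torch
import torch.nn.functional as F

def simulation_supervised_loss(fake_img, d_fake_logits, sim_mask, seg_model,
                               w_adv=1.0, w_sim=10.0):
    """Adversarial term plus simulation-mask consistency term (illustrative weights)."""
    adv = F.binary_cross_entropy_with_logits(d_fake_logits,
                                             torch.ones_like(d_fake_logits))
    seg_logits = seg_model(fake_img)  # segment the generated frame
    sim = F.binary_cross_entropy_with_logits(seg_logits, sim_mask)
    return w_adv * adv + w_sim * sim
```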
14
He P, Hariharan R, Karpavičiūtė N, Croft N, Firminger L, Chambost J, Jacques C, Saravelos S, Wouters K, Fréour T, Zaninovic N, Malmsten J, Vasconcelos F, Hickman C. O-177 Towards 3D Reconstructions of Human Preimplantation Embryo Development. Hum Reprod 2022. [DOI: 10.1093/humrep/deac105.091]
Abstract
Study question
Can we use focal stacks collected through Hoffman modulation contrast (HMC) microscopy to generate 3D reconstructions of preimplantation embryos?
Summary answer
A machine learning system was designed to generate 3D meshes that approximate the structures of embryos captured on HMC microscopes up to the 8-cell stage.
What is known already
The 3D arrangement of cells in preimplantation human embryos is a topic of clinical interest, with significant associations between the cell arrangement and blastulation potential from as early as the 4-cell stage. In basic research, the use of confocal microscopy for generating 3D reconstructions is commonplace. However, the use of confocal microscopy in the IVF clinic is often infeasible due to cost and concerns for embryos’ wellbeing. The assessment of 3D cell arrangement in clinical settings can thus prove difficult and time-consuming as many embryologists rely on focal stacks captured through the HMC microscopes widely integrated into incubators.
Study design, size, duration
The study was a retrospective analysis of 581 Embryoscope focal stacks of embryos from 4 clinics collected between 2018 and 2020. The number of planes in each stack ranged from 7-11, and cell outlines were annotated along with the depths at which they were most in-focus. A deep learning system was designed to generate 3D reconstructions of the embryos. Two clinics' data were used for training (N = 551) and the other two clinics' for evaluation (N = 30).
Participants/materials, setting, methods
The deep learning system consisted of three stages: a super-resolution module, a cell segmentation module and a depth regression module. The super-resolution stage was used to predict missing planes in focal stacks that did not contain 11 focal planes; the segmentation module identified individual cells; the depth regression module identified the focal plane at which each cell was most “in-focus”. Meshes were then generated under the assumption that blastomeres’ dimensions are similar along each axis.
Main results and the role of chance
The super-resolution module was evaluated by calculating the structural similarity index (SSIM; an image similarity measure ranging from 0-1) between predicted and true planes when tasked with predicting missing frames in focal stacks with up to 4 planes artificially removed (by uniform random sampling). The module achieved an SSIM of 0.80. The predictions were also evaluated by 2 embryologists, a clinician and a developmental biologist on a scale of 1-5 (1=very unrealistic; 3=usable; 5=very realistic), achieving a mean score of 4.11.
The segmentation module was evaluated on the proportion of cells it managed to identify (91%) as well as the mean overlap between predicted cell segmentations and the ground truth (intersection-over-union of 0.86). The depth module was evaluated on the mean deviation of predictions from the true most “in-focus” plane (0.73 planes).
3D reconstructions generated by the system were evaluated with reference to the original focal stacks by 2 embryologists on a 1-5 scale similar to before, with a mean score of 3.72. The most common issues with the reconstructions identified by the embryologists were missing cells/fragments, incorrect cell shape due to obstruction by the well’s edge and imprecise depth predictions (with the “true” depth being between focal planes).
Limitations, reasons for caution
As previously mentioned, some reconstructions had inaccuracies. These would likely be ameliorated through modifications to the system modules and more training data. Moreover, the system was not trained or evaluated on morulae/blastocysts. Finally, each focal stack was analysed independently; future work may examine enforcing temporal consistency within timelapses.
Wider implications of the findings
This work serves as a first step towards unlocking data captured in IVF clinics for research into cell arrangement in preimplantation embryos. Combined with cell tracking, the system may be useful for research into cell fate. Moreover, the work may find clinical relevance in enabling easier assessment of cell arrangement.
Trial registration number
N/A
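For context, the SSIM scoring used for the super-resolution module can be computed with standard tooling; equal-sized 2D grayscale planes are assumed here:

```python
import numpy as np
from skimage.metrics import structural_similarity

def mean_ssim(predicted_planes, true_planes):
    """Mean SSIM between predicted and held-out focal planes (0-1, higher is better)."""
    scores = [
        structural_similarity(p, t, data_range=float(t.max() - t.min()))
        for p, t in zip(predicted_planes, true_planes)
    ]
    return float(np.mean(scores))
```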
Affiliation(s)
- P He
- Apricity, AI Team, London, United Kingdom
- University College London, Department of Computer Science, London, United Kingdom
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- R Hariharan
- Apricity, AI Team, London, United Kingdom
- University Hospitals of Morecambe Bay NHS Foundation Trust, Furness General Hospital, Barrow-in-Furness, United Kingdom
- N Croft
- Apricity, AI Team, London, United Kingdom
- University of Surrey, Department of Health and Medical Sciences, Guildford, United Kingdom
- L Firminger
- Apricity, AI Team, London, United Kingdom
- Manchester Metropolitan University, Department of Life Sciences, Manchester, United Kingdom
- S Saravelos
- Apricity, Care Team, London, United Kingdom
- Imperial College London, Faculty of Medicine, London, United Kingdom
- K Wouters
- University Hospital Brussels, Centre for Reproductive Medicine, Jette, Belgium
- T Fréour
- Nantes University Hospital, ART Centre, Nantes, France
- N Zaninovic
- Weill Cornell Medical College, Department of Obstetrics and Gynecology, New York City, USA
- Weill Cornell Medical College, Department of Reproductive Medicine, New York City, USA
- J Malmsten
- Weill Cornell Medical College, Department of Reproductive Medicine, New York City, USA
- F Vasconcelos
- University College London, Department of Computer Science, London, United Kingdom
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- C Hickman
- Apricity, AI Team, London, United Kingdom
- Imperial College London, Faculty of Medicine, London, United Kingdom
15
Alabi O, Bano S, Vasconcelos F, David AL, Deprest J, Stoyanov D. Robust fetoscopic mosaicking from deep learned flow fields. Int J Comput Assist Radiol Surg 2022; 17:1125-1134. [PMID: 35503395] [PMCID: PMC9124660] [DOI: 10.1007/s11548-022-02623-1]
Abstract
PURPOSE Fetoscopic laser photocoagulation is a minimally invasive procedure to treat twin-to-twin transfusion syndrome during pregnancy by stopping irregular blood flow in the placenta. Building an image mosaic of the placenta and its network of vessels could assist surgeons in navigating the challenging fetoscopic environment during the procedure. METHODOLOGY We propose a fetoscopic mosaicking approach that combines deep learning-based optical flow with robust estimation to filter out the inconsistent motion that occurs due to floating particles and specularities. While the current state of the art for fetoscopic mosaicking relies on clearly visible vessels for registration, our approach overcomes this limitation by considering the motion of all consistent pixels within consecutive frames. We also overcome the challenges in applying off-the-shelf optical flow to fetoscopic mosaicking through the use of robust estimation and local refinement. RESULTS We compare our proposed method against state-of-the-art vessel-based and optical flow-based image registration methods, as well as against robust estimation alternatives. We also compare our proposed pipeline using different optical flow and robust estimation alternatives. CONCLUSIONS Through analysis of our results, we show that our method outperforms both the vessel-based state of the art and Lucas-Kanade (LK) registration, notably when vessels are either poorly visible or too thin to be reliably identified. Our approach is thus able to build consistent placental vessel mosaics in challenging cases where currently available alternatives fail.
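The combination of dense flow with robust estimation can be sketched as fitting a global affine motion to sparsely sampled flow vectors with RANSAC, which discards the inconsistent motion of floating particles and specularities. This is an illustration in the spirit of the pipeline, not the authors' implementation:

```python
import cv2
import numpy as np

def affine_from_flow(flow, step=8):
    """Robust frame-to-frame affine motion from a dense flow field.

    flow: (H, W, 2) optical flow, e.g. from an off-the-shelf deep model.
    """
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h:step, 0:w:step]
    src = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    dst = src + flow[ys.ravel(), xs.ravel()]
    M, inliers = cv2.estimateAffinePartial2D(
        src, dst, method=cv2.RANSAC, ransacReprojThreshold=2.0)
    return M, inliers
```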
Affiliation(s)
| | - Sophia Bano
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK.
| | - Francisco Vasconcelos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
| | - Anna L David
- Elizabeth Garrett Anderson Institute for Women's Health, University College London, London, UK
- NIHR University College London Hospitals Biomedical Research Centre, London, UK
| | - Jan Deprest
- Elizabeth Garrett Anderson Institute for Women's Health, University College London, London, UK
- Department of Development and Regeneration, University Hospital KU Leuven, Leuven, Belgium
| | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
| |
16
Das A, Bano S, Vasconcelos F, Khan DZ, Marcus HJ, Stoyanov D. Reducing prediction volatility in the surgical workflow recognition of endoscopic pituitary surgery. Int J Comput Assist Radiol Surg 2022; 17:1445-1452. [PMID: 35362848] [PMCID: PMC9307536] [DOI: 10.1007/s11548-022-02599-y]
Abstract
Purpose: Workflow recognition can aid surgeons before an operation when used as a training tool, during an operation by increasing operating room efficiency, and after an operation in the completion of operation notes. Although several methods have been applied to this task, they have been tested on few surgical datasets. Therefore, their generalisability is not well tested, particularly for surgical approaches utilising smaller working spaces which are susceptible to occlusion and necessitate frequent withdrawal of the endoscope. This leads to rapidly changing predictions, which reduces the clinical confidence of the methods, and hence limits their suitability for clinical translation. Methods: Firstly, the optimal neural network is found using established methods, using endoscopic pituitary surgery as an exemplar. Then, prediction volatility is formally defined as a new evaluation metric and proxy for uncertainty, and two temporal smoothing functions are created. The first (modal, $M_n$) mode-averages over the previous n predictions, and the second (threshold, $T_n$) ensures a class is only changed after being continuously predicted for n predictions. Both functions are independently applied to the predictions of the optimal network. Results: The methods are evaluated on a 50-video dataset using fivefold cross-validation, and the optimised evaluation metric is the weighted $F_1$ score. The optimal model is ResNet-50+LSTM, achieving 0.84 in 3-phase classification and 0.74 in 7-step classification. Applying threshold smoothing further improves these results, achieving 0.86 in 3-phase classification and 0.75 in 7-step classification, while also drastically reducing the prediction volatility. Conclusion: The results confirm that the established methods generalise to endoscopic pituitary surgery, and show that simple temporal smoothing not only reduces prediction volatility but actively improves performance.
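The two smoothing functions are simple enough to sketch directly. The following minimal Python version reflects one plausible reading of the definitions above; boundary handling for the first few frames is an assumption.

from collections import Counter

def modal_smoothing(preds, n):
    # M_n: replace each prediction with the mode of the last n predictions.
    out = []
    for i in range(len(preds)):
        window = preds[max(0, i - n + 1): i + 1]
        out.append(Counter(window).most_common(1)[0][0])
    return out

def threshold_smoothing(preds, n):
    # T_n: the emitted class only changes after a new class has been
    # predicted for n consecutive frames.
    current, candidate, run, out = preds[0], None, 0, []
    for p in preds:
        if p == current:
            candidate, run = None, 0
        elif p == candidate:
            run += 1
            if run >= n:
                current, candidate, run = p, None, 0
        else:
            candidate, run = p, 1
        out.append(current)
    return out

For example, threshold_smoothing([0, 0, 1, 0, 1, 1, 1, 1], n=3) only commits to class 1 at its third consecutive occurrence, yielding [0, 0, 0, 0, 0, 0, 1, 1].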
Collapse
Affiliation(s)
- Adrito Das
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom.
| | - Sophia Bano
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
| | - Francisco Vasconcelos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
| | - Danyal Z Khan
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, United Kingdom
| | - Hani J Marcus
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, United Kingdom
| | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
| |
Collapse
|
17
|
Zhang Y, Bano S, Page AS, Deprest J, Stoyanov D, Vasconcelos F. Large-scale surgical workflow segmentation for laparoscopic sacrocolpopexy. Int J Comput Assist Radiol Surg 2022; 17:467-477. [PMID: 35050468 PMCID: PMC8873061 DOI: 10.1007/s11548-021-02544-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Accepted: 12/07/2021] [Indexed: 12/03/2022]
Abstract
Purpose Laparoscopic sacrocolpopexy is the gold standard procedure for the management of vaginal vault prolapse. Studying surgical skills and different approaches to this procedure requires an analysis at the level of each of its individual phases, thus motivating investigation of automated surgical workflow for expediting this research. Phase durations in this procedure are significantly larger and more variable than in commonly available benchmarks such as Cholec80, and we assess these differences. Methodology We introduce sequence-to-sequence (seq2seq) models for coarse-level phase segmentation in order to deal with the highly variable phase durations in sacrocolpopexy. Multiple architectures (LSTM and transformer), configurations (time-shifted, time-synchronous), and training strategies are tested with this novel framework to explore its flexibility. Results We perform 7-fold cross-validation on a dataset with 14 complete videos of sacrocolpopexy. We perform both a frame-based (accuracy, F1-score) and an event-based (Ward metric) evaluation of our algorithms and show that different architectures present a trade-off between a higher number of accurate frames (LSTM, mode average) or more consistent ordering of phase transitions (transformer). We compare the implementations on the widely used Cholec80 dataset and verify that relative performances are different to those in sacrocolpopexy. Conclusions We show that workflow segmentation of sacrocolpopexy videos has specific challenges that are different to those of the widely used benchmark Cholec80 and require dedicated approaches to deal with the significantly larger phase durations. We demonstrate the feasibility of seq2seq models in sacrocolpopexy, a broad framework that can be further explored with new configurations. We show that an event-based evaluation metric is useful to evaluate workflow segmentation algorithms and provides complementary insight to the more commonly used metrics such as accuracy or F1-score. Supplementary Information The online version contains supplementary material available at 10.1007/s11548-021-02544-5.
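As a rough illustration of the sequence-modelling component, the sketch below labels every frame of a video from precomputed per-frame CNN features with an LSTM. It shows the general idea only; the paper's actual seq2seq configurations (time-shifted and time-synchronous variants, transformer architectures) are not reproduced, and all dimensions here are assumptions.

import torch
import torch.nn as nn

class PhaseLSTM(nn.Module):
    def __init__(self, feat_dim=512, hidden=256, num_phases=8):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_phases)

    def forward(self, feats):            # feats: (batch, time, feat_dim)
        h, _ = self.lstm(feats)          # (batch, time, hidden)
        return self.head(h)              # per-frame phase logits

model = PhaseLSTM()
logits = model(torch.randn(1, 1000, 512))   # one video, 1000 frame features
phases = logits.argmax(dim=-1)               # predicted phase per frame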
Collapse
Affiliation(s)
- Yitong Zhang
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK.
| | - Sophia Bano
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
| | - Ann-Sophie Page
- Department of Development and Regeneration, University Hospital Leuven, Leuven, Belgium
| | - Jan Deprest
- Department of Development and Regeneration, University Hospital Leuven, Leuven, Belgium
| | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
| | - Francisco Vasconcelos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
| |
Collapse
|
18
|
Li L, Bano S, Deprest J, David A, Stoyanov D, Vasconcelos F. Globally Optimal Fetoscopic Mosaicking Based on Pose Graph Optimisation With Affine Constraints. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3100938] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
19
|
Pachtrachai K, Vasconcelos F, Edwards P, Stoyanov D. Learning to Calibrate - Estimating the Hand-eye Transformation Without Calibration Objects. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3098942] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
20
|
Dromey BP, Ahmed S, Vasconcelos F, Mazomenos E, Kunpalin Y, Ourselin S, Deprest J, David AL, Stoyanov D, Peebles DM. Dimensionless squared jerk: An objective differential to assess experienced and novice probe movement in obstetric ultrasound. Prenat Diagn 2020; 41:271-277. [PMID: 33103808 PMCID: PMC7894282 DOI: 10.1002/pd.5855] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2020] [Revised: 09/10/2020] [Accepted: 10/15/2020] [Indexed: 01/13/2023]
Abstract
OBJECTIVE Widely accepted, validated and objective measures of ultrasound competency have not been established for clinical practice. Outcomes of training curricula are often based on arbitrary thresholds, such as the number of clinical cases completed. We aimed to define metrics against which competency could be measured. METHOD We undertook a prospective, observational study of obstetric sonographers at a UK University Teaching Hospital. Participants were either experienced in fetal ultrasound (n = 10, >200 ultrasound examinations) or novice operators (n = 10, <25 ultrasound examinations). We recorded probe motion data during the performance of biometry on a commercially available mid-trimester phantom. RESULTS We report that dimensionless squared jerk, an assessment of deliberate hand movements that is independent of movement duration, extent, spurious peaks and dimension, differed significantly between groups: 19.26 (SD 3.02) for experienced and 22.08 (SD 1.05, p = 0.01) for novice operators, respectively. Experienced operator performance was associated with a shorter time to task completion of 176.46 s (SD 47.31) compared to 666.94 s (SD 490.36, p = 0.0004) for novice operators. Probe travel was also shorter for experienced operators, 521.23 mm (SD 27.41) versus 2234.82 mm (SD 188.50, p = 0.007), when compared to novice operators. CONCLUSION Our results represent progress toward an objective assessment of technical skill in obstetric ultrasound. Repeating this methodology in a clinical environment may develop insight into the generalisability of these findings to ultrasound education.
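Dimensionless squared jerk has a compact implementation. The sketch below uses one common normalisation from the motor-control literature (integrated squared jerk scaled by duration to the fifth power over squared path length, log-transformed); the study's exact scaling may differ.

import numpy as np

def dimensionless_squared_jerk(positions, dt):
    # positions: (N, 3) array of probe positions sampled every dt seconds.
    vel = np.gradient(positions, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    jerk = np.gradient(acc, dt, axis=0)
    duration = dt * (len(positions) - 1)
    path_length = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))
    integrated = np.sum(jerk ** 2) * dt          # approximates the integral of ||jerk||^2 dt
    return np.log(duration ** 5 / path_length ** 2 * integrated)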
Collapse
Affiliation(s)
- Brian P Dromey
- Elizabeth Garrett Anderson Institute for Women's Health, University College London, London, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
| | - Shahanaz Ahmed
- Elizabeth Garrett Anderson Institute for Women's Health, University College London, London, UK
| | - Francisco Vasconcelos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
| | - Evangelos Mazomenos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
| | - Yada Kunpalin
- Elizabeth Garrett Anderson Institute for Women's Health, University College London, London, UK
| | - Sebastien Ourselin
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
| | - Jan Deprest
- Elizabeth Garrett Anderson Institute for Women's Health, University College London, London, UK; Department of Obstetrics and Gynecology, University Hospitals Leuven, Leuven, Belgium
| | - Anna L David
- Elizabeth Garrett Anderson Institute for Women's Health, University College London, London, UK; NIHR University College London Hospitals Biomedical Research Centre, London, UK
| | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
| | - Donald M Peebles
- Elizabeth Garrett Anderson Institute for Women's Health, University College London, London, UK; NIHR University College London Hospitals Biomedical Research Centre, London, UK
| |
Collapse
|
21
|
Chadebecq F, Vasconcelos F, Mazomenos E, Stoyanov D. Computer Vision in the Surgical Operating Room. Visc Med 2020; 36:456-462. [PMID: 33447601 DOI: 10.1159/000511934] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2020] [Accepted: 09/30/2020] [Indexed: 12/20/2022] Open
Abstract
Background Multiple types of surgical cameras are used in modern surgical practice and provide a rich visual signal that surgeons use to visualize the clinical site and make clinical decisions. This signal can also be used by artificial intelligence (AI) methods to provide support in identifying instruments, structures, or activities, both in real time during procedures and postoperatively for analytics and understanding of surgical processes. Summary In this paper, we provide a succinct perspective on the use of AI, and especially computer vision, to power solutions for the surgical operating room (OR). The synergy between data availability and technical advances in computational power and AI methodology has led to rapid developments in the field and promising advances. Key Messages With the increasing availability of surgical video sources and the convergence of technologies around video storage, processing, and understanding, we believe clinical solutions and products leveraging vision are going to become an important component of modern surgical capabilities. However, both technical and clinical challenges remain to be overcome before vision-based approaches can be efficiently translated into the clinic.
Collapse
Affiliation(s)
- François Chadebecq
- Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom
| | - Francisco Vasconcelos
- Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom
| | - Evangelos Mazomenos
- Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom
| | - Danail Stoyanov
- Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom
| |
Collapse
|
22
|
Kannan B, Campbell DL, Vasconcelos F, Winik R, Kim DK, Kjaergaard M, Krantz P, Melville A, Niedzielski BM, Yoder JL, Orlando TP, Gustavsson S, Oliver WD. Generating spatially entangled itinerant photons with waveguide quantum electrodynamics. Sci Adv 2020; 6(41):eabb8780. [PMID: 33028523 PMCID: PMC7541065 DOI: 10.1126/sciadv.abb8780] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/24/2020] [Accepted: 08/21/2020] [Indexed: 05/31/2023]
Abstract
Realizing a fully connected network of quantum processors requires the ability to distribute quantum entanglement. For distant processing nodes, this can be achieved by generating, routing, and capturing spatially entangled itinerant photons. In this work, we demonstrate the deterministic generation of such photons using superconducting transmon qubits that are directly coupled to a waveguide. In particular, we generate two-photon N00N states and show that the state and spatial entanglement of the emitted photons are tunable via the qubit frequencies. Using quadrature amplitude detection, we reconstruct the moments and correlations of the photonic modes and demonstrate state preparation fidelities of 84%. Our results provide a path toward realizing quantum communication and teleportation protocols using itinerant photons generated by quantum interference within a waveguide quantum electrodynamics architecture.
Collapse
Affiliation(s)
- B Kannan
- Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| | - D L Campbell
- Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| | - F Vasconcelos
- Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| | - R Winik
- Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| | - D K Kim
- MIT Lincoln Laboratory, 244 Wood Street, Lexington, MA 02420, USA
| | - M Kjaergaard
- Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| | - P Krantz
- Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| | - A Melville
- MIT Lincoln Laboratory, 244 Wood Street, Lexington, MA 02420, USA
| | - B M Niedzielski
- MIT Lincoln Laboratory, 244 Wood Street, Lexington, MA 02420, USA
| | - J L Yoder
- MIT Lincoln Laboratory, 244 Wood Street, Lexington, MA 02420, USA
| | - T P Orlando
- Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| | - S Gustavsson
- Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| | - W D Oliver
- Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- MIT Lincoln Laboratory, 244 Wood Street, Lexington, MA 02420, USA
- Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| |
Collapse
|
23
|
Bano S, Vasconcelos F, Tella-Amo M, Dwyer G, Gruijthuijsen C, Vander Poorten E, Vercauteren T, Ourselin S, Deprest J, Stoyanov D. Deep learning-based fetoscopic mosaicking for field-of-view expansion. Int J Comput Assist Radiol Surg 2020; 15:1807-1816. [PMID: 32808148 PMCID: PMC7603466 DOI: 10.1007/s11548-020-02242-8] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2020] [Accepted: 07/30/2020] [Indexed: 11/26/2022]
Abstract
PURPOSE Fetoscopic laser photocoagulation is a minimally invasive surgical procedure used to treat twin-to-twin transfusion syndrome (TTTS), which involves localization and ablation of abnormal vascular connections on the placenta to regulate the blood flow in both fetuses. This procedure is particularly challenging due to the limited field of view, poor visibility, occasional bleeding, and poor image quality. Fetoscopic mosaicking can help in creating an image with an expanded field of view, which could facilitate the clinicians during the TTTS procedure. METHODS We propose a deep learning-based mosaicking framework for diverse fetoscopic videos captured from different settings such as simulation, phantoms, ex vivo, and in vivo environments. The proposed mosaicking framework extends an existing deep image homography model to handle video data by introducing controlled data generation and consistent homography estimation modules. Training is performed on a small subset of fetoscopic images which are independent of the testing videos. RESULTS We perform both quantitative and qualitative evaluations on 5 diverse fetoscopic videos (2400 frames) captured in different environments. To demonstrate the robustness of the proposed framework, a comparison is performed with existing feature-based and deep image homography methods. CONCLUSION The proposed mosaicking framework outperformed existing methods and generated meaningful mosaics, while reducing the accumulated drift, even in the presence of visual challenges such as specular highlights, reflection, texture paucity, and low video resolution.
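The mosaicking step shared by homography-based pipelines can be sketched as follows. Here estimate_homography is a placeholder for the paper's deep homography model (any 3x3 homography estimator works), and the canvas handling is an illustrative assumption.

import cv2
import numpy as np

def build_mosaic(frames, estimate_homography, canvas=(2000, 2000)):
    offset = np.array([[1, 0, canvas[0] // 2],
                       [0, 1, canvas[1] // 2],
                       [0, 0, 1]], dtype=np.float64)   # keep the mosaic in view
    mosaic = np.zeros((canvas[1], canvas[0], 3), np.uint8)
    H = np.eye(3)
    for i, frame in enumerate(frames):
        if i > 0:
            # Compose pairwise homographies back to the first frame.
            H = H @ np.linalg.inv(estimate_homography(frames[i - 1], frame))
        warped = cv2.warpPerspective(frame, offset @ H, canvas)
        mask = warped.any(axis=2)
        mosaic[mask] = warped[mask]   # naive overwrite in place of blending
    return mosaic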
Collapse
Affiliation(s)
- Sophia Bano
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
| | - Francisco Vasconcelos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
| | - Marcel Tella-Amo
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
| | - George Dwyer
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
| | | | | | - Tom Vercauteren
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
| | - Sebastien Ourselin
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
| | - Jan Deprest
- Department of Development and Regeneration, University Hospital Leuven, Leuven, Belgium
| | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
| |
Collapse
|
24
|
Bano S, Vasconcelos F, Vander Poorten E, Vercauteren T, Ourselin S, Deprest J, Stoyanov D. FetNet: a recurrent convolutional network for occlusion identification in fetoscopic videos. Int J Comput Assist Radiol Surg 2020; 15:791-801. [PMID: 32350787 PMCID: PMC7261278 DOI: 10.1007/s11548-020-02169-0] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2019] [Accepted: 04/10/2020] [Indexed: 12/18/2022]
Abstract
PURPOSE Fetoscopic laser photocoagulation is a minimally invasive surgery for the treatment of twin-to-twin transfusion syndrome (TTTS). By using a lens/fibre-optic scope, inserted into the amniotic cavity, the abnormal placental vascular anastomoses are identified and ablated to regulate blood flow to both fetuses. Limited field-of-view, occlusions due to fetus presence and low visibility make it difficult to identify all vascular anastomoses. Automatic computer-assisted techniques may provide a better understanding of the anatomical structure during surgery for risk-free laser photocoagulation and may facilitate improving mosaics from fetoscopic videos. METHODS We propose FetNet, a combined convolutional neural network (CNN) and long short-term memory (LSTM) recurrent neural network architecture for the spatio-temporal identification of fetoscopic events. We adapt an existing CNN architecture for spatial feature extraction and integrate it with the LSTM network for end-to-end spatio-temporal inference. We introduce differential learning rates during model training to effectively utilise the pre-trained CNN weights. Such automated event identification may support computer-assisted interventions (CAI) during fetoscopic laser photocoagulation. RESULTS We perform a quantitative evaluation of our method using 7 in vivo fetoscopic videos captured from different human TTTS cases. The total duration of these videos was 5551 s (138,780 frames). To test the robustness of the proposed approach, we perform 7-fold cross-validation where each video is treated as a hold-out or test set and training is performed using the remaining videos. CONCLUSION FetNet achieved superior performance compared to the existing CNN-based methods and provided improved inference because of the spatio-temporal information modelling. Online testing of FetNet, using a Tesla V100-DGXS-32GB GPU, achieved a frame rate of 114 fps. These results show that our method could potentially provide a real-time solution for CAI and for automating occlusion and photocoagulation identification during fetoscopic procedures.
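The differential learning rates mentioned above map directly onto optimizer parameter groups. A hedged PyTorch sketch follows; module choices and rate values are assumptions, not FetNet's actual configuration.

import torch
import torchvision

cnn = torchvision.models.resnet18(weights="IMAGENET1K_V1")
cnn.fc = torch.nn.Identity()                    # reuse as a feature extractor
lstm = torch.nn.LSTM(512, 256, batch_first=True)
head = torch.nn.Linear(256, 4)                  # e.g. 4 fetoscopic event classes

optimizer = torch.optim.Adam([
    {"params": cnn.parameters(),  "lr": 1e-5},  # gently fine-tune pre-trained weights
    {"params": lstm.parameters(), "lr": 1e-3},  # train the new layers faster
    {"params": head.parameters(), "lr": 1e-3},
])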
Collapse
Affiliation(s)
- Sophia Bano
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
| | - Francisco Vasconcelos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
| | | | - Tom Vercauteren
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
| | - Sebastien Ourselin
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
| | - Jan Deprest
- Department of Development and Regeneration, University Hospital Leuven, Leuven, Belgium
| | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
| |
Collapse
|
25
|
Chadebecq F, Vasconcelos F, Lacher R, Maneas E, Desjardins A, Ourselin S, Vercauteren T, Stoyanov D. Refractive Two-View Reconstruction for Underwater 3D Vision. Int J Comput Vis 2019; 128:1101-1117. [PMID: 33343083 PMCID: PMC7738342 DOI: 10.1007/s11263-019-01218-9] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2018] [Accepted: 08/23/2019] [Indexed: 11/01/2022]
Abstract
Recovering 3D geometry from cameras in underwater applications involves the Refractive Structure-from-Motion problem, where the non-linear distortion of light induced by a change of medium density invalidates the single viewpoint assumption. The pinhole-plus-distortion camera projection model suffers from a systematic geometric bias, since refractive distortion depends on object distance. This leads to inaccurate camera pose and 3D shape estimation. To account for refraction, it is possible to use the axial camera model or to explicitly consider one or multiple parallel refractive interfaces whose orientations and positions with respect to the camera can be calibrated. Although it has been demonstrated that the refractive camera model is well suited for underwater imaging, Refractive Structure-from-Motion remains particularly difficult to use in practice when considering the seldom studied case of a camera with a flat refractive interface. Our method applies to the case of underwater imaging systems whose entrance lens is in direct contact with the external medium. By adopting the refractive camera model, we provide a succinct derivation and expression for the refractive fundamental matrix and use this as the basis for a novel two-view reconstruction method for underwater imaging. For validation we use synthetic data to show the numerical properties of our method, and we provide results on real data to demonstrate its practical application within laboratory settings and for medical applications in fluid-immersed endoscopy. We demonstrate that our approach outperforms the classic two-view Structure-from-Motion method relying on the pinhole-plus-distortion camera model.
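The root of the problem is standard refraction geometry: a ray bends at the interface according to Snell's law, so its back-projection no longer passes through a single optical centre. A worked vector-form example (textbook geometry, not the paper's reconstruction algorithm):

import numpy as np

def refract(d, n, n1, n2):
    # Refract unit direction d at a surface with unit normal n pointing
    # toward the incident medium, for refractive indices n1 -> n2.
    d, n = d / np.linalg.norm(d), n / np.linalg.norm(n)
    cos_i = -np.dot(n, d)
    r = n1 / n2
    k = 1.0 - r * r * (1.0 - cos_i * cos_i)
    if k < 0:
        return None                    # total internal reflection
    return r * d + (r * cos_i - np.sqrt(k)) * n

# A ray entering water (n = 1.33) from air bends toward the interface normal:
t = refract(np.array([0.5, 0.0, 1.0]), np.array([0.0, 0.0, -1.0]), 1.0, 1.33)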
Collapse
Affiliation(s)
- François Chadebecq
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), London, UK
| | | | - René Lacher
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), London, UK
| | - Efthymios Maneas
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), London, UK
| | - Adrien Desjardins
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), London, UK
| | - Sébastien Ourselin
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London, UK
| | - Tom Vercauteren
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London, UK
| | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), London, UK
| |
Collapse
|
26
|
|
27
|
Lacher RM, Vasconcelos F, Williams NR, Rindermann G, Hipwell J, Hawkes D, Stoyanov D. Nonrigid reconstruction of 3D breast surfaces with a low-cost RGBD camera for surgical planning and aesthetic evaluation. Med Image Anal 2019; 53:11-25. [PMID: 30660103 PMCID: PMC6854464 DOI: 10.1016/j.media.2019.01.003] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2018] [Revised: 01/06/2019] [Accepted: 01/10/2019] [Indexed: 12/18/2022]
Abstract
A nonrigid 3D breast surface reconstruction pipeline is proposed that runs on a standard PC and takes a noisy RGBD input video from a Kinect-style camera. Pairwise nonrigid ICP is extended to the multi-view case, incorporating soft mobility constraints in areas of non-overlap. Shortest-distance correspondences, a new technique for data association, are shown to lead to consistently better alignment. The method is able to reconstruct clinical-quality surface models in spite of varying degrees of postural sway during data capture. Landmark-based and volumetric quantitative validation in metric units demonstrates reconstruction quality on par with the gold standard and superior to a competing method.
Accounting for 26% of all new cancer cases worldwide, breast cancer remains the most common form of cancer in women. Although early breast cancer has a favourable long-term prognosis, roughly a third of patients suffer from a suboptimal aesthetic outcome despite breast-conserving cancer treatment. Clinical-quality 3D modelling of the breast surface therefore assumes an increasingly important role in advancing treatment planning, prediction and evaluation of breast cosmesis. Yet, existing 3D torso scanners are expensive and either infrastructure-heavy or subject to motion artefacts. In this paper we employ a single consumer-grade RGBD camera with an ICP-based registration approach to jointly align all points from a sequence of depth images non-rigidly. Subtle body deformation due to postural sway and respiration is successfully mitigated, leading to higher geometric accuracy through regularised locally affine transformations. We present results from 6 clinical cases where our method compares well with the gold standard and outperforms a previous approach. We show that our method produces better reconstructions qualitatively by visual assessment and quantitatively by consistently obtaining lower landmark error scores and yielding more accurate breast volume estimates.
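The rigid core of ICP with shortest-distance (nearest-neighbour) data association is easy to sketch; the paper's method is nonrigid, multi-view and regularised with locally affine transformations, none of which is reproduced in this simplified version.

import numpy as np
from scipy.spatial import cKDTree

def kabsch(src, dst):
    # Closed-form least-squares rotation and translation aligning src to dst.
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - c_src).T @ (dst - c_dst))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, c_dst - R @ c_src

def icp(src, dst, iters=30):
    tree = cKDTree(dst)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        _, idx = tree.query(moved)     # shortest-distance correspondences
        R, t = kabsch(src, dst[idx])   # re-solve against the new matches
    return R, t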
Collapse
Affiliation(s)
- R M Lacher
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK.
| | - F Vasconcelos
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK.
| | - N R Williams
- Surgical & Interventional Trials Unit, University College London, London, United Kingdom.
| | | | - J Hipwell
- Centre for Medical Image Computing (CMIC), University College London, London, United Kingdom.
| | - D Hawkes
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK.
| | - D Stoyanov
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK.
| |
Collapse
|
28
|
An VVG, Mirza Y, Mazomenos E, Vasconcelos F, Stoyanov D, Oussedik S. Arthroscopic simulation using a knee model can be used to train speed and gaze strategies in knee arthroscopy. Knee 2018; 25:1214-1221. [PMID: 29933932 DOI: 10.1016/j.knee.2018.05.019] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/06/2018] [Revised: 04/30/2018] [Accepted: 05/30/2018] [Indexed: 02/02/2023]
Abstract
PURPOSE This study aimed to determine the effect of a simulation course on the gaze fixation strategies of participants performing arthroscopy. METHODS Participants (n = 16) were recruited from two one-day simulation-based knee arthroscopy courses and were asked to complete a task before and after the course, which involved identifying a series of arthroscopic landmarks. The gaze fixation of the participants was recorded with a wearable eye-tracking system. The time taken to complete the task and the proportion of time participants spent with their gaze fixated on the arthroscopic stack, the knee model, and away from the stack or knee model were recorded. RESULTS Participants demonstrated a statistically significant decrease in completion time in their second attempt compared to the first (P = 0.001). In their second attempt, they also demonstrated improved gaze fixation strategies, with a significantly increased amount (P = 0.008) and proportion of time (P = 0.003) spent fixated on the screen versus the knee model. CONCLUSION Simulation improved arthroscopic skills in orthopaedic surgeons, specifically by improving their gaze control strategies and decreasing the time taken to identify and mark landmarks in an arthroscopic task.
Collapse
Affiliation(s)
- Vincent V G An
- School of Medicine, University of Sydney, Camperdown, NSW 2050, Australia.
| | - Yusuf Mirza
- Department of Orthopaedics, University College London Hospitals, London, United Kingdom
| | - Evangelos Mazomenos
- Department of Computer Science, University College London, London, United Kingdom
| | | | - Danail Stoyanov
- Department of Computer Science, University College London, London, United Kingdom
| | - Sam Oussedik
- Department of Orthopaedics, University College London Hospitals, London, United Kingdom
| |
Collapse
|
29
|
Pachtrachai K, Vasconcelos F, Chadebecq F, Allan M, Hailes S, Pawar V, Stoyanov D. Adjoint Transformation Algorithm for Hand-Eye Calibration with Applications in Robotic Assisted Surgery. Ann Biomed Eng 2018; 46:1606-1620. [PMID: 30051249 PMCID: PMC6154014 DOI: 10.1007/s10439-018-2097-4] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2017] [Accepted: 07/17/2018] [Indexed: 11/30/2022]
Abstract
Hand–eye calibration aims at determining the unknown rigid transformation between the coordinate systems of a robot arm and a camera. Existing hand–eye algorithms using closed-form solutions followed by iterative non-linear refinement provide accurate calibration results within a broad range of robotic applications. However, in the context of surgical robotics hand–eye calibration is still a challenging problem due to the required accuracy within the millimetre range, coupled with a large displacement between endoscopic cameras and the robot end-effector. This paper presents a new method for hand–eye calibration based on the adjoint transformation of twist motions that solves the problem iteratively through alternating estimations of rotation and translation. We show that this approach converges to a solution with a higher accuracy than closed form initializations within a broad range of synthetic and real experiments. We also propose a stereo hand–eye formulation that can be used in the context of both our proposed method and previous state-of-the-art closed form solutions. Experiments with real data are conducted with a stereo laparoscope, the KUKA robot arm manipulator, and the da Vinci surgical robot, showing that both our new alternating solution and the explicit representation of stereo camera hand–eye relations contribute to a higher calibration accuracy.
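OpenCV ships the kind of closed-form hand-eye solvers that such methods use for initialisation; the paper's alternating, adjoint-based refinement itself is not part of OpenCV. A self-contained sketch on synthetic poses, with every numerical value fabricated for illustration:

import cv2
import numpy as np
from scipy.spatial.transform import Rotation

def T(R, t):
    out = np.eye(4)
    out[:3, :3], out[:3, 3] = R, t
    return out

# Ground-truth hand-eye transform X (camera -> gripper) and a fixed
# calibration-target pose W in the robot base frame (arbitrary values).
X = T(Rotation.from_euler("xyz", [10, 20, 30], degrees=True).as_matrix(),
      [0.05, 0.02, 0.10])
W = T(np.eye(3), [0.50, 0.00, 0.30])

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for R in Rotation.random(10, random_state=0).as_matrix():
    G = T(R, np.random.uniform(-0.2, 0.2, 3))    # gripper pose in base frame
    C = np.linalg.inv(X) @ np.linalg.inv(G) @ W  # implied target-in-camera pose
    R_g2b.append(G[:3, :3]); t_g2b.append(G[:3, 3])
    R_t2c.append(C[:3, :3]); t_t2c.append(C[:3, 3])

R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
# R_est and t_est recover X up to numerical precision.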
Collapse
Affiliation(s)
- Krittin Pachtrachai
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS) and the Department of Computer Science, University College London, London, UK.
| | - Francisco Vasconcelos
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS) and the Department of Computer Science, University College London, London, UK
| | - François Chadebecq
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS) and the Department of Computer Science, University College London, London, UK
| | - Max Allan
- Intuitive Surgical, Sunnyvale, CA, USA
| | - Stephen Hailes
- Department of Computer Science, University College London, London, UK
| | - Vijay Pawar
- Department of Computer Science, University College London, London, UK
| | - Danail Stoyanov
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS) and the Department of Computer Science, University College London, London, UK
| |
Collapse
|
30
|
Pachtrachai K, Vasconcelos F, Dwyer G, Pawar V, Hailes S, Stoyanov D. CHESS—Calibrating the Hand-Eye Matrix With Screw Constraints and Synchronization. IEEE Robot Autom Lett 2018. [DOI: 10.1109/lra.2018.2800088] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
31
|
Vasconcelos F, Mazomentos E, Kelly J, Ourselin S, Stoyanov D. Relative Pose Estimation From Image Correspondences Under a Remote Center of Motion Constraint. IEEE Robot Autom Lett 2018. [DOI: 10.1109/lra.2018.2809617] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
32
|
Vasconcelos F, Barreto JP, Boyer E. Automatic Camera Calibration Using Multiple Sets of Pairwise Correspondences. IEEE Trans Pattern Anal Mach Intell 2018; 40:791-803. [PMID: 28463187 DOI: 10.1109/tpami.2017.2699648] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
We propose a new method to add an uncalibrated node into a network of calibrated cameras using only pairwise point correspondences. While previous methods perform this task using triple correspondences, these are often difficult to establish when there is limited overlap between different views. In such challenging cases we must rely on pairwise correspondences and our solution becomes more advantageous. Our method includes an 11-point minimal solution for the intrinsic and extrinsic calibration of a camera from pairwise correspondences with other two calibrated cameras, and a new inlier selection framework that extends the traditional RANSAC family of algorithms to sampling across multiple datasets. Our method is validated on different application scenarios where a lack of triple correspondences might occur: addition of a new node to a camera network; calibration and motion estimation of a moving camera inside a camera network; and addition of views with limited overlap to a Structure-from-Motion model.
Collapse
|
33
|
Vasconcelos F, Bastos-Leite S, Gomes T, Goulart C, Sousa A, Fontinele G. Organic acids, essential oils and symbiotic in semi-heavy laying hen diets: productive performance and economic analysis. AVB 2016. [DOI: 10.21708/2016.10.3.5468] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
|
34
|
Vasconcelos F, Peebles D, Ourselin S, Stoyanov D. Spatial calibration of a 2D/3D ultrasound using a tracked needle. Int J Comput Assist Radiol Surg 2016; 11:1091-9. [PMID: 27059023 PMCID: PMC4893368 DOI: 10.1007/s11548-016-1392-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2016] [Accepted: 03/17/2016] [Indexed: 11/30/2022]
Abstract
Purpose Spatial calibration between a 2D/3D ultrasound probe and a pose tracking system requires a complex and time-consuming procedure. Simplifying this procedure without compromising calibration accuracy is still a challenging problem. Method We propose a new calibration method for both 2D and 3D ultrasound probes that involves scanning an arbitrary region of a tracked needle in different poses. This approach is easier to perform than most alternative methods, which require a precise alignment between US scans and a calibration phantom. Results Our calibration method provides an average accuracy of 2.49 mm for a 2D US probe with 107 mm scanning depth, and an average accuracy of 2.39 mm for a 3D US probe with 107 mm scanning depth. Conclusion Our method provides a unified calibration framework for 2D and 3D probes using the same phantom object, workflow, and algorithm. It significantly improves the accuracy of needle-based methods for 2D US probes and extends their use to 3D US probes.
Collapse
Affiliation(s)
| | - Donald Peebles
- Department of Obstetrics and Gynecology, UCL, London, UK
| | | | | |
Collapse
|
35
|
Vasconcelos F, Barreto JP, Nunes U. A minimal solution for the extrinsic calibration of a camera and a laser-rangefinder. IEEE Trans Pattern Anal Mach Intell 2012; 34:2097-2107. [PMID: 22231591 DOI: 10.1109/tpami.2012.18] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
This paper presents a new algorithm for the extrinsic calibration of a perspective camera and an invisible 2D laser-rangefinder (LRF). The calibration is achieved by freely moving a checkerboard pattern in order to obtain plane poses in camera coordinates and depth readings in the LRF reference frame. The problem of estimating the rigid displacement between the two sensors is formulated as one of registering a set of planes and lines in 3D space. It is proven for the first time that the alignment of three plane-line correspondences has at most eight solutions, which can be determined by solving a standard p3p problem and a linear system of equations. This leads to a minimal closed-form solution for the extrinsic calibration that can be used as a hypothesis generator in a RANSAC paradigm. Our calibration approach is validated through simulation and real experiments that show its superiority with respect to the current state-of-the-art method, which requires a minimum of five input planes.
Collapse
Affiliation(s)
- Francisco Vasconcelos
- Department of Electrical and Computer Engineering, Institute for Systems and Robotics, University of Coimbra, Coimbra 3030-290, Portugal.
| | | | | |
Collapse
|
36
|
Zaghloul M, Ahmed S, Eldebaway E, Mousa A, Amin A, Elkhateeb N, Sabry M, Ogiwara H, Morota N, Sufit A, Donson A, Birks D, Patel P, Foreman N, Handler M, Massimino M, Biassoni V, Gandola L, Schiavello E, Pecori E, Potepan P, Bach F, Janssens GO, Jansen MH, Lauwers SJ, Nowak PJ, Oldenburger FR, Bouffet E, Saran F, van Ulzen KK, van Lindert EJ, Schieving JH, Boterberg T, Kaspers GJ, Span PN, Kaanders JH, Gidding CE, Hargrave D, Bailey S, Howman A, Pizer B, Harris D, Jones D, Kearns P, Picton S, Saran F, Wheatley K, Gibson M, Glaser A, Connolly D, Hargrave D, Kawamura A, Nagashima T, Yamamoto K, Sakata J, Lober R, Freret M, Fisher P, Edwards M, Yeom K, Monje M, Jansen M, Aliaga ES, Van Der Hoeven E, Van Vuurden D, Heymans M, Gidding C, De Bont E, Reddingius R, Peeters-Scholte C, van Meeteren AS, Gooskens R, Granzen B, Paardekoper G, Janssens G, Noske D, Barkhof F, Vandertop WP, Kaspers G, Saratsis A, Yadavilli S, Nazarian J, Monje M, Freret M, Mitra S, Mallick S, Kim J, Beachy P, Nobre L, Vasconcelos F, Lima F, Mattos D, Kuiven N, Lima G, Silveira J, Sevilha M, Lima MA, Ferman S, Leblond P, Lansiaux A, Rialland X, Gentet JC, Geoerger B, Frappaz D, Aerts I, Bernier-Chastagner V, Shah R, Zaky W, Grimm J, Bluml S, Wong K, Dhall G, Caretti V, Schellen P, Lagerweij T, Bugiani M, Navis A, Wesseling P, Vandertop WP, Noske DP, Kaspers G, Wurdinger T, Lee H, Ziegler D, Schroeder K, Huang E, Berlow N, Patel R, Becher O, Taylor I, Mao XG, Hutt M, Weingart M, Kahlert U, Maciacyk J, Nikkhah G, Eberhart C, Raabe E, Barton K, Misuraca K, Misuraca K, Becher O, Zhou Z, Rotman L, Ho S, Souweidane M, Hutt M, Lim KJ, Warren K, Chang H, Eberhart C, Raabe E, Lightner D, Haque S, Souweidane M, Khakoo Y, Dunkel I, Gilheeney S, Kramer K, Lyden D, Wolden S, Greenfield J, De Braganca K, Ting-Rong H, Muh-Li L, Kai-Ping C, Tai-Tong W, Hsin-Hung C, Kebudi R, Cakir FB, Agaoglu FY, Gorgun O, Dizdar Y, Ayan I, Darendeliler E, Zapotocky M, Churackova M, Malinova B, Kodet R, Kyncl M, Tichy M, Stary J, Sumerauer D, Minturn J, Shu HK, Fisher M, Patti R, Janss A, Allen J, Phillips P, Belasco J, Taylor K, Baudis M, von Beuren A, Fouladi M, Jones C. DIFFUSE INTRINSIC PONTINE GLIOMA (DIPG). Neuro Oncol 2012. [DOI: 10.1093/neuonc/nos098] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
|
37
|
|
38
|
Cusinato DAC, Souza AM, Vasconcelos F, Guimarães LFL, Leite FP, Gregório ZMO, Giglio JR, Arantes EC. Assessment of biochemical and hematological parameters in rats injected with Tityus serrulatus scorpion venom. Toxicon 2010; 56:1477-86. [PMID: 20837041 DOI: 10.1016/j.toxicon.2010.09.003] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2010] [Revised: 09/01/2010] [Accepted: 09/03/2010] [Indexed: 10/19/2022]
Abstract
The aim of this work was to evaluate the hematological changes induced by Tityus serrulatus venom (TsV). Blood of Wistar rats was collected 0.5, 2, 6 and 24 h after i.p. injection of TsV (0.5 mg/kg) or saline (controls). Two additional groups were injected with 0.67 mg/kg and 0.25 mg/kg of TsV and the blood was collected after 0.5 and 2 h, respectively. The results showed an increase in hematocrit (Ht), red blood cell (RBC) count, hemoglobin concentration (Hb), albumin and total protein, mainly 2-6 h after envenoming. Increases in the serum activities of amylase, creatine kinase and aspartate aminotransferase were also observed, indicating tissue damage. Hyperglycemia was observed at all times analyzed, as a consequence of catecholamine release. No significant changes were detected in urea, [Na(+)] and [Ca(2+)], but an increase in [Mg(2+)], [K(+)] and conductivity was observed. TsV induced a reduction of erythrocyte osmotic fragility as a consequence of dehydration and the increase in plasma electrolyte concentration, as evidenced by the higher plasma conductivity. This study demonstrated that TsV is able to induce severe hematological changes that appear within the first hours after envenoming, justifying seeking medical attention as soon as possible to avoid worsening of clinical symptoms.
Collapse
Affiliation(s)
- D A C Cusinato
- Depto. Física e Química, Faculdade de Ciências Farmacêuticas de Ribeirão Preto - USP, Av. do Café, s/n, 14040-903 Ribeirão Preto-SP, Brazil
| | | | | | | | | | | | | | | |
Collapse
|