1. Bano S, Casella A, Vasconcelos F, Qayyum A, Benzinou A, Mazher M, Meriaudeau F, Lena C, Cintorrino IA, De Paolis GR, Biagioli J, Grechishnikova D, Jiao J, Bai B, Qiao Y, Bhattarai B, Gaire RR, Subedi R, Vazquez E, Płotka S, Lisowska A, Sitek A, Attilakos G, Wimalasundera R, David AL, Paladini D, Deprest J, De Momi E, Mattos LS, Moccia S, Stoyanov D. Placental vessel segmentation and registration in fetoscopy: Literature review and MICCAI FetReg2021 challenge findings. Med Image Anal 2024; 92:103066. [PMID: 38141453] [PMCID: PMC11162867] [DOI: 10.1016/j.media.2023.103066]
Abstract
Fetoscopy laser photocoagulation is a widely adopted procedure for treating Twin-to-Twin Transfusion Syndrome (TTTS). The procedure involves photocoagulating pathological anastomoses to restore physiological blood exchange between the twins. The procedure is particularly challenging for the surgeon due to the limited field of view, poor manoeuvrability of the fetoscope, poor visibility due to amniotic fluid turbidity, and variability in illumination. These challenges may lead to increased surgery time and incomplete ablation of pathological anastomoses, resulting in persistent TTTS. Computer-assisted intervention (CAI) can provide TTTS surgeons with decision support and context awareness by identifying key structures in the scene and expanding the fetoscopic field of view through video mosaicking. Research in this domain has been hampered by the lack of high-quality data to design, develop and test CAI algorithms. Through the Fetoscopic Placental Vessel Segmentation and Registration (FetReg2021) challenge, which was organized as part of the MICCAI2021 Endoscopic Vision (EndoVis) challenge, we released the first large-scale multi-center TTTS dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms, with a focus on creating drift-free mosaics from long-duration fetoscopy videos. For this challenge, we released a dataset of 2060 images, pixel-annotated for vessel, tool, fetus and background classes, from 18 in-vivo TTTS fetoscopy procedures and 18 short video clips of an average length of 411 frames, for developing placental scene segmentation and frame registration for mosaicking techniques. Seven teams participated in this challenge and their model performance was assessed on an unseen test dataset of 658 pixel-annotated images from 6 fetoscopic procedures and 6 short clips.
For the segmentation task, the baseline was the top performer overall (aggregated mIoU of 0.6763) and the best on the vessel class (mIoU of 0.5817), while team RREB was the best on the tool (mIoU of 0.6335) and fetus (mIoU of 0.5178) classes. For the registration task, the baseline performed better overall than team SANO, with an overall mean 5-frame SSIM of 0.9348. Qualitatively, team SANO performed better in planar scenarios, while the baseline was better in non-planar scenarios. The detailed analysis showed that no single team outperformed the others on all 6 test fetoscopic videos. The challenge provided an opportunity to create generalized solutions for fetoscopic scene understanding and mosaicking. In this paper, we present the findings of the FetReg2021 challenge, alongside reporting a detailed literature review for CAI in TTTS fetoscopy. Through this challenge, its analysis and the release of multi-center fetoscopic data, we provide a benchmark for future research in this field.
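As a small illustration of the aggregated mIoU metric reported above (a sketch, not the challenge's actual evaluation code), per-class intersection-over-union can be averaged across classes; the flat label lists and class ids below are hypothetical:

```python
def class_iou(pred, gt, cls):
    """IoU for one class over flattened label maps: |P∩G| / |P∪G|."""
    inter = sum(1 for p, g in zip(pred, gt) if p == cls and g == cls)
    union = sum(1 for p, g in zip(pred, gt) if p == cls or g == cls)
    return inter / union if union else 1.0  # class absent everywhere counts as perfect

def mean_iou(pred, gt, classes):
    """Aggregate segmentation quality as the mean of per-class IoUs (mIoU)."""
    return sum(class_iou(pred, gt, c) for c in classes) / len(classes)

# Hypothetical 1-D label maps: 0=background, 1=vessel, 2=tool, 3=fetus
pred = [0, 1, 1, 2, 3, 0]
gt   = [0, 1, 2, 2, 3, 3]
```

In real evaluations the label maps are 2-D images, but the per-class averaging is the same.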
Affiliation(s)
- Sophia Bano
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, UK
- Alessandro Casella
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Italy; Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy
- Francisco Vasconcelos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, UK
- Moona Mazher
- Department of Computer Engineering and Mathematics, University Rovira i Virgili, Spain
- Chiara Lena
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy
- Gaia Romana De Paolis
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy
- Jessica Biagioli
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy
- Bizhe Bai
- Medical Computer Vision and Robotics Group, Department of Mathematical and Computational Sciences, University of Toronto, Canada
- Yanyan Qiao
- Shanghai MicroPort MedBot (Group) Co., Ltd, China
- Binod Bhattarai
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, UK
- Ronast Subedi
- NepAL Applied Mathematics and Informatics Institute for Research, Nepal
- Szymon Płotka
- Sano Center for Computational Medicine, Poland; Quantitative Healthcare Analysis Group, Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
- Arkadiusz Sitek
- Sano Center for Computational Medicine, Poland; Center for Advanced Medical Computing and Simulation, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- George Attilakos
- Fetal Medicine Unit, Elizabeth Garrett Anderson Wing, University College London Hospital, UK; EGA Institute for Women's Health, Faculty of Population Health Sciences, University College London, UK
- Ruwan Wimalasundera
- Fetal Medicine Unit, Elizabeth Garrett Anderson Wing, University College London Hospital, UK; EGA Institute for Women's Health, Faculty of Population Health Sciences, University College London, UK
- Anna L David
- Fetal Medicine Unit, Elizabeth Garrett Anderson Wing, University College London Hospital, UK; EGA Institute for Women's Health, Faculty of Population Health Sciences, University College London, UK; Department of Development and Regeneration, University Hospital Leuven, Belgium
- Dario Paladini
- Department of Fetal and Perinatal Medicine, Istituto "Giannina Gaslini", Italy
- Jan Deprest
- EGA Institute for Women's Health, Faculty of Population Health Sciences, University College London, UK; Department of Development and Regeneration, University Hospital Leuven, Belgium
- Elena De Momi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy
- Leonardo S Mattos
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Italy
- Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Italy
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, UK
2. van der Schot A, Sikkel E, Niekolaas M, Spaanderman M, de Jong G. Placental Vessel Segmentation Using Pix2pix Compared to U-Net. J Imaging 2023; 9:226. [PMID: 37888333] [PMCID: PMC10607321] [DOI: 10.3390/jimaging9100226]
Abstract
Computer-assisted technologies have made significant progress in fetoscopic laser surgery, including placental vessel segmentation. However, the intra- and inter-procedure variabilities in the state-of-the-art segmentation methods remain a significant hurdle. To address this, we investigated the use of conditional generative adversarial networks (cGANs) for fetoscopic image segmentation and compared their performance with the benchmark U-Net technique for placental vessel segmentation. Two deep-learning models, U-Net and pix2pix (a popular cGAN model), were trained and evaluated using a publicly available dataset and an internal validation set. The overall results showed that the pix2pix model outperformed the U-Net model, with a Dice score of 0.80 [0.70; 0.86] versus 0.75 [0.60; 0.84] (p-value < 0.01) and an Intersection over Union (IoU) score of 0.70 [0.61; 0.77] compared to 0.66 [0.53; 0.75] (p-value < 0.01), respectively. The internal validation dataset further validated the superiority of the pix2pix model, achieving Dice and IoU scores of 0.68 [0.53; 0.79] and 0.59 [0.49; 0.69] (p-value < 0.01), respectively, while the U-Net model obtained scores of 0.53 [0.49; 0.64] and 0.49 [0.17; 0.56], respectively. This study successfully compared U-Net and pix2pix models for placental vessel segmentation in fetoscopic images, demonstrating improved results with the cGAN-based approach. However, the challenge of achieving generalizability still needs to be addressed.
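Since the study reports both Dice and IoU, it may help to recall that for binary masks the two overlap metrics are deterministically related by Dice = 2·IoU/(1+IoU); a minimal sketch with hypothetical foreground pixel sets:

```python
def dice(a, b):
    """Dice similarity: 2|A∩B| / (|A| + |B|), masks given as pixel-index sets."""
    return 2 * len(a & b) / (len(a) + len(b))

def iou(a, b):
    """Intersection over Union (Jaccard index): |A∩B| / |A∪B|."""
    return len(a & b) / len(a | b)

# Hypothetical foreground pixels of a predicted vessel mask and its ground truth
pred = {1, 2, 3, 4}
gt = {3, 4, 5}
```

Because the two scores are monotonically related on a single mask pair, rankings can still differ once scores are averaged over many images, which is why papers often report both.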
Affiliation(s)
- Anouk van der Schot
- Obstetrics & Gynecology, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Esther Sikkel
- Obstetrics & Gynecology, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Marèll Niekolaas
- Obstetrics & Gynecology, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Marc Spaanderman
- Obstetrics & Gynecology, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Obstetrics & Gynecology, Maastricht University Medical Center, 6229 ER Maastricht, The Netherlands
- Department of GROW, School for Oncology and Reproduction, Maastricht University, 6229 ER Maastricht, The Netherlands
- Guido de Jong
- 3D Lab, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
3. Kojima S, Kitaguchi D, Igaki T, Nakajima K, Ishikawa Y, Harai Y, Yamada A, Lee Y, Hayashi K, Kosugi N, Hasegawa H, Ito M. Deep-learning-based semantic segmentation of autonomic nerves from laparoscopic images of colorectal surgery: an experimental pilot study. Int J Surg 2023; 109:813-820. [PMID: 36999784] [PMCID: PMC10389575] [DOI: 10.1097/js9.0000000000000317]
Abstract
BACKGROUND The preservation of autonomic nerves is the most important factor in maintaining genitourinary function in colorectal surgery; however, these nerves are not clearly recognisable, and their identification is strongly affected by the surgical ability. Therefore, this study aimed to develop a deep learning model for the semantic segmentation of autonomic nerves during laparoscopic colorectal surgery and to experimentally verify the model through intraoperative use and pathological examination. MATERIALS AND METHODS The annotation data set comprised videos of laparoscopic colorectal surgery. The images of the hypogastric nerve (HGN) and superior hypogastric plexus (SHP) were manually annotated under a surgeon's supervision. The Dice coefficient was used to quantify the model performance after five-fold cross-validation. The model was used in actual surgeries to compare the recognition timing of the model with that of surgeons, and pathological examination was performed to confirm whether the samples labelled by the model from the colorectal branches of the HGN and SHP were nerves. RESULTS The data set comprised 12 978 video frames of the HGN from 245 videos and 5198 frames of the SHP from 44 videos. The mean (±SD) Dice coefficients of the HGN and SHP were 0.56 (±0.03) and 0.49 (±0.07), respectively. The proposed model was used in 12 surgeries, and it recognised the right HGN earlier than the surgeons did in 50.0% of the cases, the left HGN earlier in 41.7% of the cases and the SHP earlier in 50.0% of the cases. Pathological examination confirmed that all 11 samples were nerve tissue. CONCLUSION An approach for the deep-learning-based semantic segmentation of autonomic nerves was developed and experimentally validated. This model may facilitate intraoperative recognition during laparoscopic colorectal surgery.
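As a small illustration of how fold-level scores from five-fold cross-validation are summarized as mean (±SD) above (a sketch; the fold scores below are hypothetical, not the study's data):

```python
import statistics

# Hypothetical per-fold Dice scores from five-fold cross-validation
fold_dice = [0.52, 0.55, 0.58, 0.56, 0.59]

mean_dice = statistics.mean(fold_dice)
sd_dice = statistics.stdev(fold_dice)  # sample standard deviation across folds
```

Reporting the across-fold spread alongside the mean shows how sensitive the model is to which cases land in the training split.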
Affiliation(s)
- Shigehiro Kojima
- Surgical Device Innovation
- Department of Colorectal Surgery, National Cancer Center Hospital East, Chiba
- Division of Frontier Surgery, The Institute of Medical Science, The University of Tokyo, Tokyo, Japan
- Daichi Kitaguchi
- Surgical Device Innovation
- Department of Colorectal Surgery, National Cancer Center Hospital East, Chiba
- Takahiro Igaki
- Surgical Device Innovation
- Department of Colorectal Surgery, National Cancer Center Hospital East, Chiba
- Kei Nakajima
- Surgical Device Innovation
- Department of Colorectal Surgery, National Cancer Center Hospital East, Chiba
- Hiro Hasegawa
- Surgical Device Innovation
- Department of Colorectal Surgery, National Cancer Center Hospital East, Chiba
- Masaaki Ito
- Surgical Device Innovation
- Department of Colorectal Surgery, National Cancer Center Hospital East, Chiba
4. Dhombres F, Bonnard J, Bailly K, Maurice P, Papageorghiou A, Jouannic JM. Contributions of artificial intelligence reported in Obstetrics and Gynecology journals: a systematic review. J Med Internet Res 2022; 24:e35465. [PMID: 35297766] [PMCID: PMC9069308] [DOI: 10.2196/35465]
Abstract
Background The applications of artificial intelligence (AI) processes have grown significantly in all medical disciplines during the last decades. Two main types of AI have been applied in medicine: symbolic AI (eg, knowledge base and ontologies) and nonsymbolic AI (eg, machine learning and artificial neural networks). Consequently, AI has also been applied across most obstetrics and gynecology (OB/GYN) domains, including general obstetrics, gynecology surgery, fetal ultrasound, and assisted reproductive medicine, among others. Objective The aim of this study was to provide a systematic review to establish the actual contributions of AI reported in OB/GYN discipline journals. Methods The PubMed database was searched for citations indexed with “artificial intelligence” and at least one of the following medical subject heading (MeSH) terms between January 1, 2000, and April 30, 2020: “obstetrics”; “gynecology”; “reproductive techniques, assisted”; or “pregnancy.” All publications in OB/GYN core disciplines journals were considered. The selection of journals was based on disciplines defined in Web of Science. The publications were excluded if no AI process was used in the study. Review, editorial, and commentary articles were also excluded. The study analysis comprised (1) classification of publications into OB/GYN domains, (2) description of AI methods, (3) description of AI algorithms, (4) description of data sets, (5) description of AI contributions, and (6) description of the validation of the AI process. Results The PubMed search retrieved 579 citations and 66 publications met the selection criteria. All OB/GYN subdomains were covered: obstetrics (41%, 27/66), gynecology (3%, 2/66), assisted reproductive medicine (33%, 22/66), early pregnancy (2%, 1/66), and fetal medicine (21%, 14/66). Both machine learning methods (39/66) and knowledge base methods (25/66) were represented. Machine learning used imaging, numerical, and clinical data sets. 
Knowledge base methods used mostly omics data sets. The actual contributions of AI were method/algorithm development (53%, 35/66), hypothesis generation (42%, 28/66), or software development (3%, 2/66). Validation was performed on one data set (86%, 57/66) and no external validation was reported. We observed a general rising trend in publications related to AI in OB/GYN over the last two decades. Most of these publications (82%, 54/66) remain out of the scope of the usual OB/GYN journals. Conclusions In OB/GYN discipline journals, mostly preliminary work (eg, proof-of-concept algorithm or method) in AI applied to this discipline is reported and clinical validation remains an unmet prerequisite. Improvement driven by new AI research guidelines is expected. However, these guidelines are covering only a part of AI approaches (nonsymbolic) reported in this review; hence, updates need to be considered.
Affiliation(s)
- Ferdinand Dhombres
- Sorbonne University, Armand Trousseau University Hospital, Fetal Medicine Department, APHP, 26 AV du Dr Arnold Netter, Paris, FR; INSERM, Laboratory in Medical Informatics and Knowledge Engineering in e-Health (LIMICS), Paris, FR
- Jules Bonnard
- Sorbonne University, Institute for Intelligent Systems and Robotics (ISIR), Paris, FR
- Kévin Bailly
- Sorbonne University, Institute for Intelligent Systems and Robotics (ISIR), Paris, FR
- Paul Maurice
- Sorbonne University, Armand Trousseau University Hospital, Fetal Medicine Department, APHP, Paris, FR
- Aris Papageorghiou
- Oxford Maternal & Perinatal Health Institute, Green Templeton College, Oxford, GB
- Jean-Marie Jouannic
- Sorbonne University, Armand Trousseau University Hospital, Fetal Medicine Department, APHP, Paris, FR; INSERM, Laboratory in Medical Informatics and Knowledge Engineering in e-Health (LIMICS), Paris, FR
5. Kim MS, Cha JH, Lee S, Han L, Park W, Ahn JS, Park SC. Deep-Learning-Based Cerebral Artery Semantic Segmentation in Neurosurgical Operating Microscope Vision Using Indocyanine Green Fluorescence Videoangiography. Front Neurorobot 2022; 15:735177. [PMID: 35095454] [PMCID: PMC8790180] [DOI: 10.3389/fnbot.2021.735177]
Abstract
Few anatomical structure segmentation studies have used deep learning; in those that have, the numbers of training and ground-truth images were small and the accuracies were low or inconsistent. Surgical video anatomy analysis faces various obstacles, including a variable fast-changing view, large deformations, occlusions, low illumination, and inadequate focus. In addition, it is difficult and costly to obtain a large and accurate dataset on operational video anatomical structures, including arteries. In this study, we investigated cerebral artery segmentation using an automatic ground-truth generation method. Indocyanine green (ICG) fluorescence intraoperative cerebral videoangiography was used to create a ground-truth dataset mainly for cerebral arteries and partly for cerebral blood vessels, including veins. Four different neural network models were trained using the dataset and compared. Before augmentation, 35,975 training images and 11,266 validation images were used. After augmentation, 260,499 training and 90,129 validation images were used. A Dice score of 79% for cerebral artery segmentation was achieved using the DeepLabv3+ model trained on the automatically generated dataset. Strict validation in different patient groups was conducted. Arteries were also discerned from the veins using the ICG videoangiography phase. We achieved fair accuracy, which demonstrated the appropriateness of the methodology. This study proved the feasibility of cerebral artery segmentation in the operating field view using deep learning, and the effectiveness of the automatic blood vessel ground-truth generation method using ICG fluorescence videoangiography. Using this method, computer vision can discern blood vessels and arteries from veins in a neurosurgical microscope field of view. Thus, this technique is essential for neurosurgical field vessel anatomy-based navigation. In addition, neurorobotics for surgical assistance, safety, and autonomous surgery that can detect or manipulate cerebral vessels would require computer vision to identify blood vessels and arteries.
Affiliation(s)
- Min-seok Kim
- Clinical Research Team, Deepnoid, Seoul, South Korea
- Joon Hyuk Cha
- Department of Internal Medicine, Inha University Hospital, Incheon, South Korea
- Seonhwa Lee
- Department of Bio-convergence Engineering, Korea University, Seoul, South Korea
- Lihong Han
- Clinical Research Team, Deepnoid, Seoul, South Korea
- Department of Computer Science and Engineering, Soongsil University, Seoul, South Korea
- Wonhyoung Park
- Department of Neurosurgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Jae Sung Ahn
- Department of Neurosurgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Seong-Cheol Park
- Clinical Research Team, Deepnoid, Seoul, South Korea
- Department of Neurosurgery, Gangneung Asan Hospital, University of Ulsan College of Medicine, Gangneung, South Korea
- Department of Neurosurgery, Seoul Metropolitan Government—Seoul National University Boramae Medical Center, Seoul, South Korea
- Department of Neurosurgery, Hallym Hospital, Incheon, South Korea
- *Correspondence: Seong-Cheol Park
6. Tun WM, Poologasundarampillai G, Bischof H, Nye G, King ONF, Basham M, Tokudome Y, Lewis RM, Johnstone ED, Brownbill P, Darrow M, Chernyavsky IL. A massively multi-scale approach to characterizing tissue architecture by synchrotron micro-CT applied to the human placenta. J R Soc Interface 2021; 18:20210140. [PMID: 34062108] [PMCID: PMC8169212] [DOI: 10.1098/rsif.2021.0140]
Abstract
Multi-scale structural assessment of biological soft tissue is challenging but essential to gain insight into the structure-function relationships of tissues and organs. Using the human placenta as an example, this study brings together sophisticated sample preparation protocols, advanced imaging and robust, validated machine-learning segmentation techniques to provide the first massively multi-scale and multi-domain information that enables detailed morphological and functional analyses of both maternal and fetal placental domains. Finally, we quantify the scale-dependent error in morphological metrics of heterogeneous placental tissue, estimating the minimal tissue scale needed to extract meaningful biological data. The developed protocol is beneficial for high-throughput investigation of structure-function relationships in both normal and diseased placentas, allowing us to optimize therapeutic approaches for pathological pregnancies. In addition, the methodology presented is applicable to the characterization of tissue architecture and physiological behaviours of other complex organs with similarity to the placenta, where an exchange barrier possesses circulating vascular and avascular fluid spaces.
Affiliation(s)
- W. M. Tun
- Diamond Light Source, Didcot OX11 0DE, UK
- H. Bischof
- Maternal and Fetal Health Research Centre, School of Medical Sciences, University of Manchester, Manchester, UK
- MAHSC, St Mary's Hospital, NHS MFT, Manchester M13 9WL, UK
- G. Nye
- Chester Medical School, University of Chester, Chester CH1 4BJ, UK
- M. Basham
- Diamond Light Source, Didcot OX11 0DE, UK
- Rosalind Franklin Institute, Didcot OX11 0DE, UK
- Y. Tokudome
- Department of Materials Science, Graduate School of Engineering, Osaka Prefecture University, Osaka 599-8531, Japan
- R. M. Lewis
- Faculty of Medicine, University of Southampton, Southampton SO16 6YD, UK
- E. D. Johnstone
- Maternal and Fetal Health Research Centre, School of Medical Sciences, University of Manchester, Manchester, UK
- MAHSC, St Mary's Hospital, NHS MFT, Manchester M13 9WL, UK
- P. Brownbill
- Maternal and Fetal Health Research Centre, School of Medical Sciences, University of Manchester, Manchester, UK
- MAHSC, St Mary's Hospital, NHS MFT, Manchester M13 9WL, UK
- M. Darrow
- SPT Labtech Ltd, Melbourn SG8 6HB, UK
- I. L. Chernyavsky
- Maternal and Fetal Health Research Centre, School of Medical Sciences, University of Manchester, Manchester, UK
- MAHSC, St Mary's Hospital, NHS MFT, Manchester M13 9WL, UK
- Department of Mathematics, University of Manchester, Manchester M13 9PL, UK
7. Casella A, Moccia S, Paladini D, Frontoni E, De Momi E, Mattos LS. A shape-constraint adversarial framework with instance-normalized spatio-temporal features for inter-fetal membrane segmentation. Med Image Anal 2021; 70:102008. [PMID: 33647785] [DOI: 10.1016/j.media.2021.102008]
Abstract
BACKGROUND AND OBJECTIVES During Twin-to-Twin Transfusion Syndrome (TTTS), abnormal vascular anastomoses in the monochorionic placenta can produce uneven blood flow between the fetuses. In the current practice, this syndrome is surgically treated by closing the abnormal connections using laser ablation. Surgeons commonly use the inter-fetal membrane as a reference. Limited field of view, low fetoscopic image quality and high inter-subject variability make the membrane identification a challenging task. Moreover, currently available tools are not optimal for automatic membrane segmentation in fetoscopic videos, due to membrane texture homogeneity and high illumination variability. METHODS To tackle these challenges, we present a new deep-learning framework for inter-fetal membrane segmentation on in-vivo fetoscopic videos. The framework enhances existing architectures by (i) encoding a novel (instance-normalized) dense block, invariant to illumination changes, that extracts spatio-temporal features to enforce pixel connectivity in time, and (ii) relying on an adversarial training, which constrains macro appearance. RESULTS We performed a comprehensive validation using 20 different videos (2000 frames) from 20 different surgeries, achieving a mean Dice Similarity Coefficient of 0.8780±0.1383. CONCLUSIONS The proposed framework has great potential to positively impact the actual surgical practice for TTTS treatment, allowing the implementation of surgical guidance systems that can enhance context awareness and potentially lower the duration of the surgeries.
Affiliation(s)
- Alessandro Casella
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy; Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Sara Moccia
- The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy; Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy
- Dario Paladini
- Department of Fetal and Perinatal Medicine, Istituto "Giannina Gaslini", Genoa, Italy
- Emanuele Frontoni
- Department of Information Engineering, Università Politecnica delle Marche, Ancona, Italy
- Elena De Momi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Leonardo S Mattos
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
8. Bano S, Vasconcelos F, Vander Poorten E, Vercauteren T, Ourselin S, Deprest J, Stoyanov D. FetNet: a recurrent convolutional network for occlusion identification in fetoscopic videos. Int J Comput Assist Radiol Surg 2020; 15:791-801. [PMID: 32350787] [PMCID: PMC7261278] [DOI: 10.1007/s11548-020-02169-0]
Abstract
PURPOSE Fetoscopic laser photocoagulation is a minimally invasive surgery for the treatment of twin-to-twin transfusion syndrome (TTTS). By using a lens/fibre-optic scope, inserted into the amniotic cavity, the abnormal placental vascular anastomoses are identified and ablated to regulate blood flow to both fetuses. Limited field-of-view, occlusions due to fetus presence and low visibility make it difficult to identify all vascular anastomoses. Automatic computer-assisted techniques may provide better understanding of the anatomical structure during surgery for risk-free laser photocoagulation and may facilitate improving mosaics from fetoscopic videos. METHODS We propose FetNet, a combined convolutional neural network (CNN) and long short-term memory (LSTM) recurrent neural network architecture for the spatio-temporal identification of fetoscopic events. We adapt an existing CNN architecture for spatial feature extraction and integrate it with the LSTM network for end-to-end spatio-temporal inference. We introduce differential learning rates during model training to effectively utilise the pre-trained CNN weights. This may support computer-assisted interventions (CAI) during fetoscopic laser photocoagulation. RESULTS We perform quantitative evaluation of our method using 7 in vivo fetoscopic videos captured from different human TTTS cases. The total duration of these videos was 5551 s (138,780 frames). To test the robustness of the proposed approach, we perform 7-fold cross-validation where each video is treated as a hold-out or test set and training is performed using the remaining videos. CONCLUSION FetNet achieved superior performance compared to the existing CNN-based methods and provided improved inference because of the spatio-temporal information modelling. Online testing of FetNet, using a Tesla V100-DGXS-32GB GPU, achieved a frame rate of 114 fps. These results show that our method could potentially provide a real-time solution for CAI and automating occlusion and photocoagulation identification during fetoscopic procedures.
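The 7-fold protocol described above, where each video is held out in turn, is leave-one-out cross-validation at the video level; a minimal sketch (the video ids are hypothetical):

```python
def leave_one_video_out(videos):
    """Yield (train_videos, test_video) splits where each video is held out once."""
    return [([v for v in videos if v != held], held) for held in videos]

# Seven hypothetical procedure videos, one fold per video
folds = leave_one_video_out([f"video{i}" for i in range(1, 8)])
```

Splitting by whole video (rather than by frame) keeps frames from the same procedure out of both train and test sets, which is what makes the robustness estimate meaningful.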
Affiliation(s)
- Sophia Bano
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
- Francisco Vasconcelos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
- Tom Vercauteren
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Sebastien Ourselin
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Jan Deprest
- Department of Development and Regeneration, University Hospital Leuven, Leuven, Belgium
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
9. Inter-foetus Membrane Segmentation for TTTS Using Adversarial Networks. Ann Biomed Eng 2019; 48:848-859. [DOI: 10.1007/s10439-019-02424-9]
10. Rubin DL. Artificial Intelligence in Imaging: The Radiologist's Role. J Am Coll Radiol 2019; 16:1309-1317. [PMID: 31492409] [PMCID: PMC6733578] [DOI: 10.1016/j.jacr.2019.05.036]
Abstract
Rapid technological advancements in artificial intelligence (AI) methods have fueled explosive growth in decision tools being marketed by a rapidly growing number of companies. AI developments are being driven largely by computer scientists, informaticians, engineers, and businesspeople, with much less direct participation by radiologists. Participation by radiologists in AI is largely restricted to educational efforts to familiarize them with the tools and promising results; techniques to help them decide which AI tools should be used in their practices and how to quantify their value are not being addressed. This article focuses on the role of radiologists in imaging AI and suggests specific ways they can be engaged by (1) considering the clinical need for AI tools in specific clinical use cases, (2) undertaking formal evaluation of AI tools they are considering adopting in their practices, and (3) maintaining their expertise and guarding against the pitfalls of overreliance on technology.
Affiliation(s)
- Daniel L Rubin
- Department of Biomedical Data Science, Radiology, and Medicine (Biomedical Informatics Research), Stanford University, Stanford, California