1. Matasyoh NM, Schmidt R, Zeineldin RA, Spetzger U, Mathis-Ullrich F. Interactive Surgical Training in Neuroendoscopy: Real-Time Anatomical Feature Localization Using Natural Language Expressions. IEEE Trans Biomed Eng 2024; 71:2991-2999. [PMID: 38801697] [DOI: 10.1109/tbme.2024.3405814]
Abstract
OBJECTIVE This study addresses challenges in surgical education, particularly in neuroendoscopy, where the demand for an optimized workflow conflicts with the need for trainees' active participation in surgeries. To overcome these challenges, we propose a framework that accurately identifies anatomical structures within images guided by language descriptions, facilitating authentic and interactive learning experiences in neuroendoscopy. METHODS Utilizing the encoder-decoder architecture of a conventional transformer, our framework processes multimodal inputs (images and language descriptions) to identify and localize features in neuroendoscopic images. We curate a dataset from recorded endoscopic third ventriculostomy (ETV) procedures for training and evaluation. Using evaluation metrics including "R@n", "IoU = θ", "mIoU", and top-1 accuracy, we systematically benchmark our framework against state-of-the-art methods. RESULTS The framework demonstrates excellent generalization, surpassing the compared methods with 93.67% accuracy and 76.08% mIoU on unseen data. It also runs faster than the compared methods. Qualitative results affirm the framework's effectiveness in precisely localizing referred anatomical features within neuroendoscopic images. CONCLUSION The framework's adeptness at localizing anatomical features from language descriptions positions it as a valuable tool for integration into future interactive clinical learning systems, enhancing surgical training in neuroendoscopy. SIGNIFICANCE This exemplary performance reinforces the framework's potential to enhance surgical education, leading to improved skills and outcomes for trainees in neuroendoscopy.
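For readers unfamiliar with the bounding-box metrics quoted above, the sketch below shows how IoU, mIoU, and "IoU = θ"-style accuracy are typically computed. It is a generic illustration with invented box coordinates, not the authors' evaluation code.

```python
# Minimal sketch of box IoU, mIoU, and accuracy at an IoU threshold,
# assuming axis-aligned boxes in (x1, y1, x2, y2) format.
from typing import List, Tuple

Box = Tuple[float, float, float, float]

def iou(a: Box, b: Box) -> float:
    # Intersection rectangle of the two boxes
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def mean_iou(preds: List[Box], gts: List[Box]) -> float:
    # mIoU: average IoU between paired predicted and ground-truth boxes
    return sum(iou(p, g) for p, g in zip(preds, gts)) / len(preds)

def accuracy_at_threshold(preds: List[Box], gts: List[Box], theta: float = 0.5) -> float:
    # "IoU = θ" accuracy: fraction of predictions with IoU >= θ
    return sum(iou(p, g) >= theta for p, g in zip(preds, gts)) / len(preds)

# Example: one predicted box against its ground truth
print(iou((10, 10, 50, 50), (20, 20, 60, 60)))  # ~0.39
```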
2. Lee JH, Ku E, Chung YS, Kim YJ, Kim KG. Intraoperative detection of parathyroid glands using artificial intelligence: optimizing medical image training with data augmentation methods. Surg Endosc 2024; 38:5732-5745. [PMID: 39138679] [PMCID: PMC11458679] [DOI: 10.1007/s00464-024-11115-z]
Abstract
BACKGROUND Postoperative hypoparathyroidism is a major complication of thyroidectomy, occurring when the parathyroid glands are inadvertently damaged during surgery. Although intraoperative images are rarely used to train artificial intelligence (AI) because of their complex nature, AI may be trained to detect parathyroid glands intraoperatively using various augmentation methods. The purpose of this study was to train an effective AI model to detect parathyroid glands during thyroidectomy. METHODS Video clips of the parathyroid gland were collected during thyroid lobectomy procedures. Confirmed parathyroid images were used to build three datasets according to augmentation status: baseline, geometric transformation, and generative adversarial network-based image inpainting. The primary outcome was the average precision of the AI in detecting parathyroid glands. RESULTS A total of 152 fine-needle aspiration-confirmed parathyroid gland images were acquired from 150 patients who underwent unilateral lobectomy. The average precision of the AI model in detecting parathyroid glands based on the baseline data was 77%. This performance was enhanced by applying both the geometric transformation and image inpainting augmentation methods, with the geometric transformation dataset yielding a higher average precision (79%) than the image inpainting model (78.6%). When the model was subjected to external validation using a completely different thyroidectomy approach, the image inpainting method was more effective (46%) than both the geometric transformation (37%) and baseline (33%) methods. CONCLUSION This AI model proved an effective and generalizable tool for intraoperative identification of parathyroid glands during thyroidectomy, especially when aided by appropriate augmentation methods. Additional studies comparing model performance with surgeon identification, however, are needed to assess the true clinical relevance of this AI model.
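As a point of reference for the "geometric transformation" arm described above, here is a minimal detection-augmentation pipeline in that spirit. The library choice (albumentations) and the specific transform limits are assumptions; the abstract does not state the authors' actual setup.

```python
# Hedged sketch of geometric augmentation (flips, rotations, scaling) for
# bounding-box training data; parameters below are illustrative only.
import albumentations as A
import cv2

augment = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.Rotate(limit=15, border_mode=cv2.BORDER_CONSTANT, p=0.5),
        A.RandomScale(scale_limit=0.1, p=0.5),
    ],
    # Keeps the box annotations consistent with the transformed image
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["class_labels"]),
)

# Usage (hypothetical file and box): augmenting one annotated frame
# image = cv2.imread("frame.png")
# out = augment(image=image, bboxes=[[40, 60, 180, 200]], class_labels=["parathyroid"])
```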
Affiliation(s)
- Joon-Hyop Lee
- Division of Endocrine Surgery, Department of Surgery, Samsung Medical Center, 81 Irwon-ro, Gangnam-gu, Seoul, Korea
- EunKyung Ku
- Department of Digital Media, The Catholic University of Korea, 43, Jibong-ro, Wonmi-gu, Bucheon, Gyeonggi, 14662, Korea
- Yoo Seung Chung
- Division of Endocrine Surgery, Department of Surgery, Gachon University College of Medicine, Gil Medical Center, Incheon, Korea
- Young Jae Kim
- Department of Biomedical Engineering, College of Medicine, Gachon University, Gil Medical Center, 38-13 Dokjeom-ro 3Beon-gil, Namdong-gu, Incheon, 21565, Korea
- Kwang Gi Kim
- Department of Biomedical Engineering, College of Medicine, Gachon University, Gil Medical Center, 38-13 Dokjeom-ro 3Beon-gil, Namdong-gu, Incheon, 21565, Korea
3. Sayols N, Hernansanz A, Parra J, Eixarch E, Xambó-Descamps S, Gratacós E, Casals A. Robust tracking of deformable anatomical structures with severe occlusions using deformable geometrical primitives. Comput Methods Programs Biomed 2024; 251:108201. [PMID: 38703719] [DOI: 10.1016/j.cmpb.2024.108201]
Abstract
BACKGROUND AND OBJECTIVE Surgical robotics is moving toward cognitive control architectures that provide a certain degree of autonomy to improve patient safety and surgical outcomes, while decreasing the cognitive load surgeons must dedicate to low-level decisions. Cognition requires workspace perception, an essential step toward automatic decision-making and task-planning capabilities. Robust and accurate detection and tracking in minimally invasive surgery suffers from limited visibility, occlusions, anatomy deformations, and camera movements. METHODS This paper develops a robust methodology to detect and track anatomical structures in real time for use in automatic control of robotic systems and augmented reality. The work focuses on experimental validation in a highly challenging surgery: fetoscopic repair of open spina bifida. The proposed method consists of two sequential steps: first, selection of relevant (contour) points using a convolutional neural network and, second, reconstruction of the anatomical shape by means of deformable geometric primitives. RESULTS The performance of the methodology was validated in different scenarios. Synthetic-scenario tests, designed for extreme validation conditions, demonstrate the safety margin the methodology offers with respect to nominal surgical conditions. Real-scenario experiments demonstrated the validity of the method in terms of accuracy, robustness, and computational efficiency. CONCLUSIONS This paper presents robust anatomical structure detection in the presence of abrupt camera movements, severe occlusions, and deformations. Although the paper focuses on a single case study, open spina bifida, the methodology is applicable to any anatomy whose contours can be approximated by geometric primitives. The methodology is designed to provide effective inputs to cognitive robotic control and augmented reality systems that require accurate tracking of sensitive anatomies.
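To make the two-step idea concrete, the sketch below fits a geometric primitive (an ellipse, via OpenCV) to a handful of contour points standing in for the CNN output of step one. The point coordinates are invented, and the paper's deformable primitives are more general than a plain ellipse.

```python
# Illustrative sketch only: fitting an ellipse to CNN-predicted contour
# points, the second step of a contour-then-primitive pipeline.
import numpy as np
import cv2

# Stand-in for CNN-selected points on the anatomical boundary
points = np.array(
    [[120, 80], [150, 90], [165, 120], [150, 150], [120, 160], [95, 120]],
    dtype=np.float32,
)

# cv2.fitEllipse needs at least 5 points and returns
# ((cx, cy), (major_axis, minor_axis), rotation_angle_degrees)
(cx, cy), (major, minor), angle = cv2.fitEllipse(points)
print(f"center=({cx:.1f}, {cy:.1f}) axes=({major:.1f}, {minor:.1f}) angle={angle:.1f}")
```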
Affiliation(s)
- Narcís Sayols
- Center of Research in Biomedical Engineering, Universitat Politècnica de Catalunya, Barcelona, Spain; Simulation, Imaging and Modelling for Biomedical Systems Research Group (SIMBiosys), Universitat Pompeu Fabra, Barcelona, Spain
- Albert Hernansanz
- Center of Research in Biomedical Engineering, Universitat Politècnica de Catalunya, Barcelona, Spain; SurgiTrainer SL, Barcelona, Spain; Simulation, Imaging and Modelling for Biomedical Systems Research Group (SIMBiosys), Universitat Pompeu Fabra, Barcelona, Spain
- Johanna Parra
- BCNatal, Barcelona Center for Maternal-Fetal and Neonatal Medicine, Fetal Medicine Research Center (Hospital Clínic and Hospital Sant Joan de Déu), University of Barcelona, Barcelona, Spain
- Elisenda Eixarch
- BCNatal, Barcelona Center for Maternal-Fetal and Neonatal Medicine, Fetal Medicine Research Center (Hospital Clínic and Hospital Sant Joan de Déu), University of Barcelona, Barcelona, Spain; Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain; Centre for Biomedical Research on Rare Diseases (CIBERER), Barcelona, Spain
- Sebastià Xambó-Descamps
- Department of Mathematics, Universitat Politècnica de Catalunya, Barcelona, Spain; Mathematical Institute (IMTech), Universitat Politècnica de Catalunya, Barcelona, Spain
- Eduard Gratacós
- BCNatal, Barcelona Center for Maternal-Fetal and Neonatal Medicine, Fetal Medicine Research Center (Hospital Clínic and Hospital Sant Joan de Déu), University of Barcelona, Barcelona, Spain; Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain; Centre for Biomedical Research on Rare Diseases (CIBERER), Barcelona, Spain
- Alícia Casals
- Center of Research in Biomedical Engineering, Universitat Politècnica de Catalunya, Barcelona, Spain; SurgiTrainer SL, Barcelona, Spain
4. Zhu Y, Du L, Fu PY, Geng ZH, Zhang DF, Chen WF, Li QL, Zhou PH. An Automated Video Analysis System for Retrospective Assessment and Real-Time Monitoring of Endoscopic Procedures (with Video). Bioengineering (Basel) 2024; 11:445. [PMID: 38790312] [PMCID: PMC11118061] [DOI: 10.3390/bioengineering11050445]
Abstract
BACKGROUND AND AIMS Accurate recognition of endoscopic instruments facilitates quantitative evaluation and quality control of endoscopic procedures, yet no relevant research had been reported. In this study, we aimed to develop a computer-assisted system, EndoAdd, for automated endoscopic surgical video analysis based on our dataset of endoscopic instrument images. METHODS Large training and validation datasets containing 45,143 images of 10 different endoscopic instruments, and a test dataset of 18,375 images collected from several medical centers, were used in this study. Annotated image frames were used to train a state-of-the-art object detection model, YOLO-v5, to identify the instruments. Based on the frame-level predictions, we further developed a hidden Markov model to perform video analysis and generate heatmaps summarizing the videos. RESULTS EndoAdd achieved high accuracy (>97%) on the test dataset for all 10 endoscopic instrument types. The mean average accuracy, precision, recall, and F1-score were 99.1%, 92.0%, 88.8%, and 89.3%, respectively. The area under the curve exceeded 0.94 for all instrument types. Heatmaps of endoscopic procedures were generated for both retrospective and real-time analyses. CONCLUSIONS We successfully developed an automated endoscopic video analysis system, EndoAdd, which supports retrospective assessment and real-time monitoring. It can be used for data analysis and quality control of endoscopic procedures in clinical practice.
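The pairing of per-frame detections with a hidden Markov model can be pictured with a toy Viterbi decoder that smooths flickering frame-level predictions into a stable sequence. The transition and emission probabilities below are invented for illustration; they are not EndoAdd's fitted parameters.

```python
# Minimal Viterbi sketch: HMM smoothing of noisy per-frame instrument labels.
import numpy as np

def viterbi(obs, trans, emit, prior):
    """obs: observed label per frame; returns the most likely true sequence."""
    n_states, T = trans.shape[0], len(obs)
    logp = np.log(prior) + np.log(emit[:, obs[0]])
    back = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        scores = logp[:, None] + np.log(trans)   # scores[i, j]: from i to j
        back[t] = scores.argmax(axis=0)
        logp = scores.max(axis=0) + np.log(emit[:, obs[t]])
    path = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two states (e.g., "instrument in view" vs "not"); sticky transitions
# suppress single-frame detector flickers.
trans = np.array([[0.95, 0.05], [0.05, 0.95]])
emit = np.array([[0.9, 0.1], [0.2, 0.8]])   # detector confusion rates
prior = np.array([0.5, 0.5])
frames = [0, 0, 1, 0, 0, 1, 1, 1, 0, 1]     # raw per-frame predictions
print(viterbi(frames, trans, emit, prior))  # flickers at t=2 and t=8 smoothed
```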
Affiliation(s)
- Yan Zhu, Ling Du, Pei-Yao Fu, Zi-Han Geng, Dan-Feng Zhang, Wei-Feng Chen, Quan-Lin Li, Ping-Hong Zhou (all authors)
- Endoscopy Center and Endoscopy Research Institute, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Shanghai Collaborative Innovation Center of Endoscopy, Shanghai 200032, China
5. Buyck F, Vandemeulebroucke J, Ceranka J, Van Gestel F, Cornelius JF, Duerinck J, Bruneau M. Computer-vision based analysis of the neurosurgical scene - A systematic review. Brain Spine 2023; 3:102706. [PMID: 38020988] [PMCID: PMC10668095] [DOI: 10.1016/j.bas.2023.102706]
Abstract
Introduction With the increasing use of robotic surgical adjuncts, artificial intelligence, and augmented reality in neurosurgery, the automated analysis of digital images and videos acquired across procedures has become a subject of growing interest. While several computer vision (CV) methods have been developed and implemented for analyzing surgical scenes, few studies have been dedicated to neurosurgery. Research question In this work, we present a systematic literature review of CV methodologies applied specifically to the analysis of neurosurgical procedures based on intraoperative images and videos. Additionally, we provide recommendations for the future development of CV models in neurosurgery. Material and methods We conducted a systematic literature search in multiple databases up to January 17, 2023, including Web of Science, PubMed, IEEE Xplore, Embase, and SpringerLink. Results We identified 17 studies employing CV algorithms on neurosurgical videos or images. The most common applications of CV were tool and neuroanatomical structure detection or characterization and, to a lesser extent, surgical workflow analysis. Convolutional neural networks (CNNs) were the most frequently utilized architecture for CV models (65%), demonstrating superior performance in tool detection and segmentation. In particular, Mask R-CNN showed the most robust performance across modalities. Discussion and conclusion Our systematic review shows that CV models have been reported that can effectively detect and differentiate tools, surgical phases, neuroanatomical structures, and critical events in complex neurosurgical scenes with accuracies above 95%. Automated tool recognition contributes to objective characterization and assessment of surgical performance, with potential applications in neurosurgical training and intraoperative safety management.
Affiliation(s)
- Félix Buyck
- Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
- Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium
- Jef Vandemeulebroucke
- Vrije Universiteit Brussel (VUB), Department of Electronics and Informatics (ETRO), 1050, Brussels, Belgium
- Department of Radiology, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
- imec, 3001, Leuven, Belgium
- Jakub Ceranka
- Vrije Universiteit Brussel (VUB), Department of Electronics and Informatics (ETRO), 1050, Brussels, Belgium
- imec, 3001, Leuven, Belgium
- Frederick Van Gestel
- Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
- Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium
- Jan Frederick Cornelius
- Department of Neurosurgery, Medical Faculty, Heinrich-Heine-University, 40225, Düsseldorf, Germany
- Johnny Duerinck
- Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
- Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium
- Michaël Bruneau
- Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
- Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium
6. Talaat WM, Shetty S, Al Bayatti S, Talaat S, Mourad L, Shetty S, Kaboudan A. An artificial intelligence model for the radiographic diagnosis of osteoarthritis of the temporomandibular joint. Sci Rep 2023; 13:15972. [PMID: 37749161] [PMCID: PMC10519983] [DOI: 10.1038/s41598-023-43277-6]
Abstract
The interpretation of signs of temporomandibular joint (TMJ) osteoarthritis on cone-beam computed tomography (CBCT) is highly subjective, which hinders the diagnostic process. The objectives of this study were to develop and test the performance of an artificial intelligence (AI) model for the diagnosis of TMJ osteoarthritis from CBCT. A total of 2737 CBCT images from 943 patients were used for training and validation of the AI model. The model was based on a single convolutional network, while object detection was achieved using a single regression model. Two experienced evaluators performed a Diagnostic Criteria for Temporomandibular Disorders (DC/TMD)-based assessment to generate a separate model-testing set of 350 images, in which the concluded diagnosis was considered the golden reference. The diagnostic performance of the model was then compared to that of an experienced oral radiologist. The AI diagnosis showed statistically higher agreement with the golden reference than the radiologist. Cohen's kappa showed statistically significant differences between the AI and the radiologist in agreement with the golden reference for the diagnosis of all signs collectively (P = 0.0079) and for subcortical cysts (P = 0.0214). AI is expected to eliminate the subjectivity associated with human interpretation and expedite the diagnosis of TMJ osteoarthritis.
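Cohen's kappa, the agreement statistic used in the comparison above, is straightforward to compute. This sketch uses scikit-learn with made-up labels purely to show the mechanics; it is not the study's analysis code.

```python
# Hedged sketch: chance-corrected agreement with a reference diagnosis
# via Cohen's kappa; the label lists below are invented placeholders.
from sklearn.metrics import cohen_kappa_score

reference   = ["osteoarthritis", "normal", "osteoarthritis", "normal", "normal"]
ai_model    = ["osteoarthritis", "normal", "osteoarthritis", "osteoarthritis", "normal"]
radiologist = ["normal", "normal", "osteoarthritis", "osteoarthritis", "normal"]

print("AI vs reference:", cohen_kappa_score(reference, ai_model))
print("Radiologist vs reference:", cohen_kappa_score(reference, radiologist))
```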
Affiliation(s)
- Wael M Talaat
- Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, Sharjah, 27272, UAE
- Research Institute for Medical and Health Sciences, University of Sharjah, Sharjah, 27272, UAE
- Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Suez Canal University, Ismailia, Egypt
- Chair, Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, Sharjah, UAE
- Shishir Shetty
- Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, Sharjah, 27272, UAE
- Research Institute for Medical and Health Sciences, University of Sharjah, Sharjah, 27272, UAE
- Saad Al Bayatti
- Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, Sharjah, 27272, UAE
- Sameh Talaat
- Department of Orthodontics, Future University in Egypt, Cairo, Egypt
- Department of Oral Technology, University Clinic, Bonn, Germany
- Louloua Mourad
- Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Beirut Arab University, Tripoli, Lebanon
- Sunaina Shetty
- Department of Restorative and Preventive Dentistry, College of Dental Medicine, University of Sharjah, Sharjah, 27272, UAE
- Ahmed Kaboudan
- Department of Computer Science, Shorouk Academy, El Shorouk, Egypt
- Interdisciplinary AI Hub, Future University in Egypt, Cairo, Egypt
- DigiBrain4 Inc, Chicago, USA
7. Cheikh Youssef S, Haram K, Noël J, Patel V, Porter J, Dasgupta P, Hachach-Haram N. Evolution of the digital operating room: the place of video technology in surgery. Langenbecks Arch Surg 2023; 408:95. [PMID: 36807211] [PMCID: PMC9939374] [DOI: 10.1007/s00423-023-02830-7]
Abstract
PURPOSE The aim of this review was to collate current evidence on how digitalisation, through the incorporation of video technology and artificial intelligence (AI), is being applied to the practice of surgery. Applications are vast, and the literature investigating the utility of surgical video and its synergy with AI has steadily increased over the last two decades. This type of technology is already widespread in other industries, such as autonomous transportation and manufacturing. METHODS Articles were identified primarily using the PubMed and MEDLINE databases. The MeSH terms used were "surgical education", "surgical video", "video labelling", "surgery", "surgical workflow", "telementoring", "telemedicine", "machine learning", "deep learning" and "operating room". Given the breadth of the subject and the scarcity of high-level data in certain areas, a narrative synthesis was selected over a meta-analysis or systematic review to allow for a focussed discussion of the topic. RESULTS Three main themes were identified and analysed throughout this review: (1) the multifaceted utility of surgical video recording, (2) teleconferencing/telemedicine and (3) artificial intelligence in the operating room. CONCLUSIONS Evidence suggests that the routine collection of intraoperative data will benefit the advancement of surgery by driving standardised, evidence-based surgical care and personalised training of future surgeons. However, many barriers stand in the way of widespread implementation, necessitating close collaboration between surgeons, data scientists, medicolegal personnel and hospital policy makers.
Affiliation(s)
- Jonathan Noël
- Guy's and St. Thomas' NHS Foundation Trust, Urology Centre, King's Health Partners, London, UK
- Vipul Patel
- Adventhealth Global Robotics Institute, 400 Celebration Place, Celebration, FL, USA
- James Porter
- Department of Urology, Swedish Urology Group, Seattle, WA, USA
- Prokar Dasgupta
- Guy's and St. Thomas' NHS Foundation Trust, Urology Centre, King's Health Partners, London, UK
- Nadine Hachach-Haram
- Department of Plastic Surgery, Guy's and St. Thomas' NHS Foundation Trust, King's Health Partners, London, UK
8. Kim MS, Cha JH, Lee S, Han L, Park W, Ahn JS, Park SC. Deep-Learning-Based Cerebral Artery Semantic Segmentation in Neurosurgical Operating Microscope Vision Using Indocyanine Green Fluorescence Videoangiography. Front Neurorobot 2022; 15:735177. [PMID: 35095454] [PMCID: PMC8790180] [DOI: 10.3389/fnbot.2021.735177]
Abstract
Few studies have addressed anatomical structure segmentation using deep learning, and those that exist used small numbers of training and ground-truth images and reported low or inconsistent accuracies. Surgical video anatomy analysis faces numerous obstacles, including fast-changing views, large deformations, occlusions, low illumination, and inadequate focus. In addition, it is difficult and costly to obtain large, accurate datasets of anatomical structures, including arteries, from operative video. In this study, we investigated cerebral artery segmentation using an automatic ground-truth generation method. Indocyanine green (ICG) fluorescence intraoperative cerebral videoangiography was used to create a ground-truth dataset mainly of cerebral arteries and partly of cerebral blood vessels, including veins. Four different neural network models were trained on the dataset and compared. Before augmentation, 35,975 training images and 11,266 validation images were used; after augmentation, 260,499 training and 90,129 validation images were used. A Dice score of 79% for cerebral artery segmentation was achieved using the DeepLabv3+ model trained on the automatically generated dataset. Strict validation was conducted in different patient groups. Arteries were also discerned from veins using the ICG videoangiography phase. We achieved fair accuracy, which demonstrates the appropriateness of the methodology. This study establishes the feasibility of cerebral artery segmentation in the operative field of view using deep learning, and the effectiveness of automatic blood vessel ground-truth generation from ICG fluorescence videoangiography. With this method, computer vision can discern blood vessels, and arteries from veins, in a neurosurgical microscope field of view; such capability is essential for vessel-anatomy-based navigation in the neurosurgical field. In addition, surgical assistance, safety systems, and autonomous neurosurgical robotics that detect or manipulate cerebral vessels will require computer vision to identify blood vessels and arteries.
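The Dice score reported above measures mask overlap. A minimal implementation for binary segmentation masks looks like this; it is a generic sketch, not the study's DeepLabv3+ pipeline.

```python
# Minimal Dice coefficient for binary masks: 2|P ∩ G| / (|P| + |G|).
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

# Example: a 4-pixel prediction and a 3-pixel ground truth overlapping
# on three pixels -> Dice = 6/7 ~ 0.857
p = np.zeros((4, 4)); p[1:3, 1:3] = 1
g = np.zeros((4, 4)); g[1:3, 1:2] = 1; g[1, 2] = 1
print(round(dice(p, g), 3))
```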
Affiliation(s)
- Min-seok Kim
- Clinical Research Team, Deepnoid, Seoul, South Korea
- Joon Hyuk Cha
- Department of Internal Medicine, Inha University Hospital, Incheon, South Korea
- Seonhwa Lee
- Department of Bio-convergence Engineering, Korea University, Seoul, South Korea
- Lihong Han
- Clinical Research Team, Deepnoid, Seoul, South Korea
- Department of Computer Science and Engineering, Soongsil University, Seoul, South Korea
- Wonhyoung Park
- Department of Neurosurgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Jae Sung Ahn
- Department of Neurosurgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Seong-Cheol Park
- Clinical Research Team, Deepnoid, Seoul, South Korea
- Department of Neurosurgery, Gangneung Asan Hospital, University of Ulsan College of Medicine, Gangneung, South Korea
- Department of Neurosurgery, Seoul Metropolitan Government-Seoul National University Boramae Medical Center, Seoul, South Korea
- Department of Neurosurgery, Hallym Hospital, Incheon, South Korea
- *Correspondence: Seong-Cheol Park
9. den Boer RB, de Jongh C, Huijbers WTE, Jaspers TJM, Pluim JPW, van Hillegersberg R, Van Eijnatten M, Ruurda JP. Computer-aided anatomy recognition in intrathoracic and -abdominal surgery: a systematic review. Surg Endosc 2022; 36:8737-8752. [PMID: 35927354] [PMCID: PMC9652273] [DOI: 10.1007/s00464-022-09421-5]
Abstract
BACKGROUND Minimally invasive surgery is complex and associated with substantial learning curves. Computer-aided anatomy recognition, such as artificial intelligence-based algorithms, may improve anatomical orientation, prevent tissue injury, and improve learning curves. The objective of this study was to provide a comprehensive overview of the current literature on the accuracy of anatomy recognition algorithms in intrathoracic and -abdominal surgery. METHODS This systematic review is reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline. PubMed, Embase, and IEEE Xplore were searched for original studies on computer-aided anatomy recognition published up to January 2022, without requiring intraoperative imaging or calibration equipment. Extracted features included surgical procedure, study population and design, algorithm type, pre-training methods, pre- and post-processing methods, data augmentation, anatomy annotation, training data, testing data, model validation strategy, goal of the algorithm, target anatomical structure, accuracy, and inference time. RESULTS After full-text screening, 23 of 7124 articles were included. The included studies showed wide diversity, with six possible recognition tasks across 15 different surgical procedures and 14 different accuracy measures. Risk of bias in the included studies was high, especially regarding patient selection and annotation of the reference standard. Dice and intersection-over-union (IoU) scores of the algorithms ranged from 0.50 to 0.98 and from 74 to 98%, respectively, across various anatomy recognition tasks. High-accuracy algorithms were typically trained on larger datasets annotated by expert surgeons and focused on less complex anatomy. Some of the high-accuracy algorithms were developed using pre-training and data augmentation. CONCLUSIONS The accuracy of the included anatomy recognition algorithms varied substantially, ranging from moderate to good. Solid comparison between algorithms was complicated by the wide variety of applied methodologies, target anatomical structures, and reported accuracy measures. Computer-aided intraoperative anatomy recognition is an emerging research discipline, but it is still in its infancy. Larger datasets and methodological guidelines are required to improve accuracy and clinical applicability in future research. TRIAL REGISTRATION PROSPERO registration number: CRD42021264226.
Affiliation(s)
- R. B. den Boer
- Department of Surgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht, The Netherlands
- C. de Jongh
- Department of Surgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht, The Netherlands
- W. T. E. Huijbers
- Department of Surgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht, The Netherlands
- T. J. M. Jaspers
- Department of Biomedical Engineering, Eindhoven University of Technology, Groene Loper 3, 5612 AE Eindhoven, The Netherlands
- J. P. W. Pluim
- Department of Biomedical Engineering, Eindhoven University of Technology, Groene Loper 3, 5612 AE Eindhoven, The Netherlands
- R. van Hillegersberg
- Department of Surgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht, The Netherlands
- M. Van Eijnatten
- Department of Biomedical Engineering, Eindhoven University of Technology, Groene Loper 3, 5612 AE Eindhoven, The Netherlands
- J. P. Ruurda
- Department of Surgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht, The Netherlands
10. Digital workflows for pathological assessment of rat estrous cycle stage using images of uterine horn and vaginal tissue. J Pathol Inform 2022; 13:100120. [PMID: 36268108] [PMCID: PMC9577039] [DOI: 10.1016/j.jpi.2022.100120]
Abstract
Assessment of the estrous cycle of mature female mammals is an important component of verifying the efficacy and safety of drug candidates. The common pathological approach of relying on expert observation has several drawbacks, including laborious work and inter-observer variability. The recent advent of image recognition technologies using deep learning is expected to bring substantial benefits to such pathological assessments. We herein propose two distinct deep learning-based workflows to classify the estrous cycle stage from tissue images of the uterine horn and vagina, respectively. The constructed models classified estrous cycle stages with accuracy comparable to that of expert pathologists. Our digital workflows allow efficient pathological assessment of the estrous cycle stage in rats and are thus expected to accelerate drug research and development.
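As an illustration of the kind of classification workflow described, the sketch below fine-tunes a pretrained CNN for a four-way estrous-stage label. The backbone, stage names, and hyperparameters are assumptions rather than the paper's actual configuration.

```python
# Illustrative-only sketch: fine-tuning a pretrained CNN to classify
# tissue images into the four estrous stages (assumed labels).
import torch
import torch.nn as nn
from torchvision import models

stages = ["proestrus", "estrus", "metestrus", "diestrus"]

# Pretrained backbone with a new 4-way classification head
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(stages))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB crops
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, len(stages), (8,))
loss = criterion(model(images), labels)
loss.backward(); optimizer.step(); optimizer.zero_grad()
```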