1
Buyck F, Vandemeulebroucke J, Ceranka J, Van Gestel F, Cornelius JF, Duerinck J, Bruneau M. Computer-vision based analysis of the neurosurgical scene - A systematic review. Brain Spine 2023; 3:102706. [PMID: 38020988] [PMCID: PMC10668095] [DOI: 10.1016/j.bas.2023.102706] [Received: 08/30/2023] [Revised: 10/23/2023] [Accepted: 10/29/2023] [Indexed: 12/01/2023]
Abstract
Introduction: With the increasing use of robotic surgical adjuncts, artificial intelligence, and augmented reality in neurosurgery, the automated analysis of digital images and videos acquired during procedures has become a subject of growing interest. While several computer vision (CV) methods have been developed and implemented for analyzing surgical scenes, few studies have been dedicated to neurosurgery.
Research question: In this work, we present a systematic literature review focusing on CV methodologies specifically applied to the analysis of neurosurgical procedures based on intra-operative images and videos. Additionally, we provide recommendations for the future development of CV models in neurosurgery.
Materials and methods: We conducted a systematic literature search in multiple databases up to January 17, 2023, including Web of Science, PubMed, IEEE Xplore, Embase, and SpringerLink.
Results: We identified 17 studies employing CV algorithms on neurosurgical videos/images. The most common applications of CV were tool and neuroanatomical structure detection or characterization and, to a lesser extent, surgical workflow analysis. Convolutional neural networks (CNNs) were the most frequently utilized architecture for CV models (65%), demonstrating superior performance in tool detection and segmentation. In particular, Mask R-CNN (mask region-based CNN) showed the most robust performance across different modalities.
Discussion and conclusion: Our systematic review shows that CV models have been reported that can effectively detect and differentiate tools, surgical phases, neuroanatomical structures, and critical events in complex neurosurgical scenes with accuracies above 95%. Automated tool recognition contributes to the objective characterization and assessment of surgical performance, with potential applications in neurosurgical training and intra-operative safety management.
Affiliation(s)
- Félix Buyck
  - Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
  - Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium
- Jef Vandemeulebroucke
  - Vrije Universiteit Brussel (VUB), Department of Electronics and Informatics (ETRO), 1050, Brussels, Belgium
  - Department of Radiology, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
  - imec, 3001, Leuven, Belgium
- Jakub Ceranka
  - Vrije Universiteit Brussel (VUB), Department of Electronics and Informatics (ETRO), 1050, Brussels, Belgium
  - imec, 3001, Leuven, Belgium
- Frederick Van Gestel
  - Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
  - Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium
- Jan Frederick Cornelius
  - Department of Neurosurgery, Medical Faculty, Heinrich-Heine-University, 40225, Düsseldorf, Germany
- Johnny Duerinck
  - Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
  - Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium
- Michaël Bruneau
  - Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
  - Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium
2
Su B, Li H, Xiu W, Gao Y, Gong Y, Wang Z, Hu YD, Yao W, Tang J, Liu W, Wang J, Gao L. Autonomous aspirating robot for removing saliva blood mixed liquid in oral surgery. Comput Methods Biomech Biomed Engin 2023; 26:1523-1531. [PMID: 36382359] [DOI: 10.1080/10255842.2022.2125806] [Received: 05/10/2022] [Revised: 07/23/2022] [Accepted: 08/24/2022] [Indexed: 11/17/2022]
Abstract
Saliva blood mixed liquid (SBML) appears in oral surgery, such as scaling and root planing, where it obscures the surgical view and causes discomfort to the patient. Removing SBML by frequent manual aspiration is a routine task that carries a heavy workload and interrupts the surgery, so replacing the manual procedure with an autonomous robotic technique is valuable. The robotic system designed here consists of an RGB-D camera, a manipulator, and a disposable oral aspirator. An algorithm is developed for the detection of SBML, a path-planning method is presented for the distal end of the aspirator, and a workflow for removing SBML is described. In a group of ten aspiration experiments, 95% of the SBML area in the oral cavity was removed after liquid aspiration. This study provides the first result of an autonomous aspirating robot (AAR) for removing SBML in oral surgery, demonstrating that SBML can be removed by an autonomous robot, freeing the stomatology surgeon from tedious work.
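As a hedged illustration of the area-based outcome reported above (95% of the SBML area removed), the following sketch computes the cleared-area fraction from before/after segmentation masks. The red-dominance detector and its threshold are illustrative assumptions, not the authors' detection algorithm.

```python
import numpy as np


def detect_sbml(rgb: np.ndarray, red_margin: int = 40) -> np.ndarray:
    """Crude red-dominance mask as a stand-in for an SBML detector.

    Marks a pixel as liquid when its red channel dominates both green
    and blue by more than `red_margin` (an assumed threshold).
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r - np.maximum(g, b)) > red_margin


def removed_fraction(mask_before: np.ndarray, mask_after: np.ndarray) -> float:
    """Fraction of the initially detected liquid area cleared by aspiration."""
    before = mask_before.sum()
    if before == 0:
        return 1.0  # nothing to remove counts as fully cleared
    return 1.0 - mask_after.sum() / before
```

In practice the detection would run on calibrated RGB-D frames with far more robust segmentation; this sketch only shows how the reported percentage can be derived once per-frame masks exist.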
Affiliation(s)
- Baiquan Su
  - Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing, China
- Han Li
  - Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing, China
- Wei Xiu
  - Chinese Institute of Electronics, Beijing, China
- Yang Gao
  - Chinese Institute of Electronics, Beijing, China
- Yi Gong
  - Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing, China
- Zehao Wang
  - Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing, China
- Wei Yao
  - Gastroenterology Department, Peking University Third Hospital, Beijing, China
- Jie Tang
  - Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
- Wenyong Liu
  - School of Biological Science and Medical Engineering, Beihang University, Beijing, China
- Junchen Wang
  - School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Li Gao
  - Department of Periodontology, National Stomatological Center, Peking University School and Hospital of Stomatology & National Clinical Research Center for Oral Diseases, Beijing, China
3
Su B, Zhang Q, Gong Y, Xiu W, Gao Y, Xu L, Li H, Wang Z, Yu S, Hu YD, Yao W, Wang J, Li C, Tang J, Gao L. Deep learning-based classification and segmentation for scalpels. Int J Comput Assist Radiol Surg 2023; 18:855-864. [PMID: 36602643] [DOI: 10.1007/s11548-022-02825-7] [Received: 04/26/2022] [Accepted: 12/22/2022] [Indexed: 01/06/2023]
Abstract
PURPOSE: Scalpels are typical cutting tools in surgery, and the surgical tray is one of the locations where scalpels are present during a procedure. However, no method has been reported for the classification and segmentation of multiple types of scalpels. This paper presents a dataset of multiple types of scalpels, together with a classification and segmentation method that can serve as a first step toward validating scalpel segmentation; further applications include distinguishing scalpels from other tools in different clinical scenarios.
METHODS: The proposed scalpel dataset contains 6400 images with labeled information for 10 types of scalpels, and a classification and segmentation model for multiple types of scalpels is obtained by training on the dataset with Mask R-CNN. The article concludes with an analysis and evaluation of network performance, verifying the feasibility of the approach.
RESULTS: A multi-type scalpel dataset was established, and classification and segmentation models were obtained by training Mask R-CNN. Average accuracy and average recall reached 94.19% and 96.61%, respectively, in the classification task, and 93.30% and 95.14%, respectively, in the segmentation task.
CONCLUSION: This is the first scalpel dataset covering multiple types of scalpels, and the first time classification and segmentation of multiple scalpel types have been realized. The study achieves classification and segmentation of scalpels in a surgical-tray scene, providing a potential solution for scalpel recognition, localization, and tracking.
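The precision and recall figures above are instance-level segmentation metrics. A minimal sketch of how such mask-level matching can be computed is shown below; the IoU threshold of 0.5 and the greedy matching are illustrative assumptions, not the study's evaluation protocol.

```python
import numpy as np


def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union between two boolean instance masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union else 0.0


def match_precision_recall(pred_masks, gt_masks, iou_thr: float = 0.5):
    """Greedy matching: a prediction counts as a true positive if it
    overlaps some not-yet-matched ground-truth mask with IoU >= iou_thr."""
    matched = set()
    tp = 0
    for p in pred_masks:
        for i, g in enumerate(gt_masks):
            if i not in matched and mask_iou(p, g) >= iou_thr:
                matched.add(i)
                tp += 1
                break
    precision = tp / len(pred_masks) if pred_masks else 0.0
    recall = tp / len(gt_masks) if gt_masks else 0.0
    return precision, recall
```

A full Mask R-CNN evaluation would additionally score per-class labels and confidences; this sketch isolates only the mask-matching step that underlies segmentation precision and recall.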
Affiliation(s)
- Baiquan Su
  - Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing, China
- Qingqian Zhang
  - Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing, China
- Yi Gong
  - Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing, China
- Wei Xiu
  - Chinese Institute of Electronics, Beijing, China
- Yang Gao
  - Chinese Institute of Electronics, Beijing, China
- Lixin Xu
  - Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
- Han Li
  - Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing, China
- Zehao Wang
  - Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing, China
- Shi Yu
  - Medical Robotics Laboratory, School of Automation, Beijing University of Posts and Telecommunications, Beijing, China
- Yida David Hu
  - Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Wei Yao
  - Gastroenterology Department, Peking University Third Hospital, Beijing, China
- Junchen Wang
  - School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Changsheng Li
  - School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China
- Jie Tang
  - Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
- Li Gao
  - Department of Periodontology, National Stomatological Center, Peking University School and Hospital of Stomatology, Beijing, China
  - National Clinical Research Center for Oral Diseases, Beijing, China
  - National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing, China
  - Beijing Key Laboratory of Digital Stomatology, Beijing, China