1. Kim YG, Shim JW, Gimm G, Kang S, Rhee W, Lee JH, Kim BS, Yoon D, Kim M, Cho M, Kim S. Speech-mediated manipulation of da Vinci surgical system for continuous surgical flow. Biomed Eng Lett 2025; 15:117-129. [PMID: 39781059] [PMCID: PMC11704117] [DOI: 10.1007/s13534-024-00429-5]
Abstract
With the advent of robot-assisted surgery, user-friendly technologies have been applied to the da Vinci surgical system (dVSS), and their efficacy has been validated in surgical fields worldwide. However, the traditional manipulation method, which cannot control the endoscope and the surgical instruments simultaneously, still requires improvement. This study proposes a speech recognition control interface (SRCI) that controls the endoscope via speech commands while the surgeon manipulates the surgical instruments, replacing the traditional method. Usability-focused comparisons of the proposed SRCI-based method and the traditional method were conducted based on ISO 9241-11. Twenty surgeons and 18 novices evaluated both methods through a line tracking task (LTT) and a sea spike pod task (SSPT). After the tasks, participants completed widely used questionnaires: the after-scenario questionnaire (ASQ), the system usability scale (SUS), and the NASA task load index (TLX). Completion times in the LTT and SSPT with the proposed method were 44.72% and 26.59% shorter, respectively, than with the traditional method, statistically significant differences (p < 0.001). The overall ASQ, SUS, and NASA TLX results favored the proposed method, with substantial reductions in workload components such as physical demand and effort (p < 0.05). The proposed speech-mediated method is a suitable candidate for simultaneous manipulation of the endoscope and surgical instruments in dVSS-based robotic surgery. It can therefore replace the traditional method for controlling the endoscope while manipulating the surgical instruments, contributing to a continuous surgical flow during operations. Supplementary Information The online version contains supplementary material available at 10.1007/s13534-024-00429-5.
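The headline completion-time figures are simple relative reductions of mean task time. A minimal sketch of how such percentages are computed, using hypothetical timing data rather than the study's measurements:

```python
from statistics import mean

def percent_reduction(baseline: list[float], proposed: list[float]) -> float:
    """Mean completion-time reduction of the proposed method relative to the
    baseline, in percent: (1 - mean(proposed) / mean(baseline)) * 100."""
    return (1.0 - mean(proposed) / mean(baseline)) * 100.0

# Hypothetical completion times in seconds, for illustration only.
traditional_ltt = [120.0, 110.0, 130.0, 115.0]
srci_ltt = [65.0, 60.0, 72.0, 63.0]

print(f"LTT reduction: {percent_reduction(traditional_ltt, srci_ltt):.2f}%")
```

In the study itself, the between-method differences were additionally tested for statistical significance (p < 0.001), which a percentage alone does not establish.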
Affiliation(s)
- Young Gyun Kim
- Interdisciplinary Program in Bioengineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
- Jae Woo Shim
- Interdisciplinary Program in Bioengineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
- Geunwu Gimm
- Department of Biomedical Engineering, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Seongjoon Kang
- Interdisciplinary Program in Bioengineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
- Wounsuk Rhee
- Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Jong Hyeon Lee
- Interdisciplinary Program in Bioengineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
- Byeong Soo Kim
- Interdisciplinary Program in Bioengineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
- Dan Yoon
- Interdisciplinary Program in Bioengineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
- Myungjoon Kim
- MedInTech Inc., 60 Daehak-ro, Jongno-gu, Seoul, 03100, Republic of Korea
- Minwoo Cho
- Department of Transdisciplinary Medicine, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Department of Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Sungwan Kim
- Department of Biomedical Engineering, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Artificial Intelligence Institute, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
2. Kim YG, Lee JH, Shim JW, Rhee W, Kim BS, Yoon D, Kim MJ, Park JW, Jeong CW, Yang HK, Cho M, Kim S. A multimodal virtual vision platform as a next-generation vision system for a surgical robot. Med Biol Eng Comput 2024; 62:1535-1548. [PMID: 38305815] [PMCID: PMC11021270] [DOI: 10.1007/s11517-024-03030-1]
Abstract
Robot-assisted surgery platforms are used worldwide thanks to their stereoscopic vision systems and enhanced functional assistance. However, the need for ergonomic improvement for the surgeons who use them has grown. Surgical robots cause chronic fatigue owing to the fixed posture imposed by the conventional stereo viewer (SV) vision system. To alleviate this inconvenience, a head-mounted display was adopted and a virtual vision platform (VVP) is proposed in this study. The VVP can present various critical data, including medical images, vital signs, and patient records, in a three-dimensional virtual-reality space so that users can access medical information simultaneously. The usability of the VVP was investigated through evaluations by surgeons and novices, who executed the given tasks and answered questionnaires. Task performance with the SV and the VVP did not differ significantly; however, the craniovertebral angle with the VVP was on average 16.35° higher than with the SV. Survey results regarding the VVP were positive; participants indicated that the optimal number of displays was six, preferring a 2 × 3 array. Reflecting these tendencies, the VVP is a candidate for customization in medical use and opens a new prospect for next-generation surgical robots.
Affiliation(s)
- Young Gyun Kim
- Interdisciplinary Program in Bioengineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
- Jong Hyeon Lee
- Interdisciplinary Program in Bioengineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
- Jae Woo Shim
- Interdisciplinary Program in Bioengineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
- Wounsuk Rhee
- Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Byeong Soo Kim
- Interdisciplinary Program in Bioengineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
- Dan Yoon
- Interdisciplinary Program in Bioengineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
- Min Jung Kim
- Department of Surgery, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Ji Won Park
- Department of Surgery, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Chang Wook Jeong
- Department of Urology, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Han-Kwang Yang
- Department of Surgery, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Minwoo Cho
- Department of Transdisciplinary Medicine, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Department of Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Sungwan Kim
- Department of Biomedical Engineering, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Artificial Intelligence Institute, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
3. Wang X, Nie Y, Ren W, Wei M, Zhang J. Multi-scale, multi-dimensional binocular endoscopic image depth estimation network. Comput Biol Med 2023; 164:107305. [PMID: 37597409] [DOI: 10.1016/j.compbiomed.2023.107305]
Abstract
During invasive surgery, the use of deep learning techniques to acquire depth information from lesion sites in real time is hindered by the lack of endoscopic environment datasets. This work aims to develop a high-accuracy three-dimensional (3D) simulation model for generating image datasets and acquiring depth information in real time. We propose an end-to-end multi-scale supervisory depth estimation network (MMDENet) for depth estimation from pairs of binocular images. The proposed MMDENet highlights a multi-scale feature extraction module that incorporates contextual information to enhance correspondence precision in poorly exposed regions. A multi-dimensional information-guidance refinement module is also proposed to refine the initial coarse disparity map. Experiments demonstrated a 3.14% reduction in endpoint error compared with state-of-the-art methods, and a processing speed of approximately 30 fps satisfies the requirements of real-time applications. To validate the performance of the trained MMDENet on actual endoscopic images, we conducted both qualitative and quantitative analyses, achieving 93.38% precision, which holds great promise for applications in surgical navigation.
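MMDENet predicts a disparity map; turning disparity into metric depth uses the standard rectified-stereo relation Z = f · B / d. This relation is textbook stereo geometry, not a detail taken from the paper, and the numbers below are illustrative:

```python
def depth_from_disparity(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Metric depth from the rectified-stereo relation Z = f * B / d.

    focal_px: focal length in pixels; baseline_mm: camera baseline in mm;
    disparity_px: horizontal disparity in pixels (must be positive).
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px

# A point with 20 px disparity seen by a 1000 px focal-length rig with a
# 4 mm baseline lies at 1000 * 4 / 20 = 200 mm depth.
print(depth_from_disparity(1000.0, 4.0, 20.0))  # 200.0
```

Larger disparities map to nearer points, which is why disparity precision in poorly exposed regions matters so much for depth accuracy.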
Affiliation(s)
- Xiongzhi Wang
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 100039, China; School of Aerospace Science and Technology, Xidian University, Xi'an 710071, China
- Yunfeng Nie
- Brussel Photonics, Department of Applied Physics and Photonics, Vrije Universiteit Brussel and Flanders Make, 1050 Brussels, Belgium
- Wenqi Ren
- State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China
- Min Wei
- Department of Orthopedics, the Fourth Medical Center, Chinese PLA General Hospital, Beijing 100853, China
- Jingang Zhang
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 100039, China; School of Aerospace Science and Technology, Xidian University, Xi'an 710071, China
4. Disparity-constrained stereo endoscopic image super-resolution. Int J Comput Assist Radiol Surg 2022; 17:867-875. [PMID: 35377037] [DOI: 10.1007/s11548-022-02611-5]
Abstract
PURPOSE With the increasing use of stereo cameras in computer-assisted surgery, surgeons can benefit from better 3D context of the surgical site in minimally invasive operations. However, because the stereo cameras are placed together at the confined endoscope tip, the size of the lenses and sensors is limited, resulting in low-resolution stereo endoscopic images. How to effectively exploit stereo information for stereo endoscopic super-resolution (SR) is therefore a challenging problem. METHODS In this work, we propose a disparity-constrained stereo super-resolution network (DCSSRnet) to reconstruct images from a stereo image pair. In particular, a disparity constraint mechanism is incorporated into the generation of SR images in a deep neural network framework with effective feature extractors and atrous parallax-attention modules. RESULTS Extensive experiments evaluated the performance of the proposed DCSSRnet on the da Vinci and Medtronic datasets. The results on these endoscopic image datasets demonstrate that the proposed approach improves on current SR methods in both quantitative measurements, and ablation studies further verify the effectiveness of the framework's components. CONCLUSION The proposed DCSSRnet provides a promising solution for enhancing the spatial resolution of stereo endoscopic image pairs. Specifically, the disparity consistency of the stereo image pair provides informative supervision for image reconstruction. The proposed model can serve as a tool for improving the quality of stereo images in endoscopic surgery systems.
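The disparity constraint rewards SR outputs that remain photometrically consistent across the stereo pair. A toy illustration of that idea, a 1-D integer-disparity L1 check that is far simpler than the paper's learned mechanism (all names here are ours):

```python
def disparity_consistency_l1(left_row: list[float], right_row: list[float], disparity: int) -> float:
    """Mean L1 difference between left-row pixels and the right-row pixels they
    should match under a single integer disparity; 0.0 means perfect consistency."""
    total, count = 0.0, 0
    for x, left_val in enumerate(left_row):
        xr = x - disparity  # corresponding column in the right image
        if 0 <= xr < len(right_row):
            total += abs(left_val - right_row[xr])
            count += 1
    return total / count if count else 0.0

# Rows that match exactly under disparity 1 incur zero penalty.
left = [10.0, 11.0, 12.0, 13.0]
right = [11.0, 12.0, 13.0, 0.0]
print(disparity_consistency_l1(left, right, 1))  # 0.0
```

In the actual network the disparity varies per pixel and the consistency term supervises the SR reconstruction during training rather than being checked after the fact.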
5. Zuo S, Chen T, Chen X, Chen B. A Wearable Hands-Free Human-Robot Interface for Robotized Flexible Endoscope. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3149303]
6. Fu Z, Jin Z, Zhang C, He Z, Zha Z, Hu C, Gan T, Yan Q, Wang P, Ye X. The Future of Endoscopic Navigation: A Review of Advanced Endoscopic Vision Technology. IEEE Access 2021; 9:41144-41167. [DOI: 10.1109/access.2021.3065104]
7. Ma X, Song C, Chiu PW, Li Z. Visual Servo of a 6-DOF Robotic Stereo Flexible Endoscope Based on da Vinci Research Kit (dVRK) System. IEEE Robot Autom Lett 2020. [DOI: 10.1109/lra.2020.2965863]
8. Evaluation of haptic guidance virtual fixtures and 3D visualization methods in telemanipulation—a user study. Intell Serv Robot 2019. [DOI: 10.1007/s11370-019-00283-w]
9. Ma X, Song C, Chiu PW, Li Z. Autonomous Flexible Endoscope for Minimally Invasive Surgery With Enhanced Safety. IEEE Robot Autom Lett 2019. [DOI: 10.1109/lra.2019.2895273]
10. Head-mounted interface for intuitive vision control and continuous surgical operation in a surgical robot system. Med Biol Eng Comput 2018; 57:601-614. [PMID: 30280331] [DOI: 10.1007/s11517-018-1902-4]
Abstract
Although robot-assisted surgeries offer various advantages, the discontinuous surgical flow that results from switching control between the patient-side manipulators and the endoscopic robot arm can be improved to further enhance efficiency. Therefore, this study proposes a head-mounted master interface (HMI) that can be integrated into an existing surgical robot system and allows a continuous surgical flow using head motion. The proposed system includes the HMI, a four-degrees-of-freedom endoscope control system, a simple three-dimensional endoscope, and a da Vinci Research Kit. Eight volunteers performed seven head movements, and their HMI data were collected for support vector machine (SVM) classification; ten-fold cross-validation was performed to optimize its parameters. Based on the cross-validation results, an SVM classifier with a Gaussian kernel (σ = 0.85) was chosen, which had an accuracy of 92.28%. An endoscope control algorithm was developed using the SVM classification results. A peg transfer task was conducted to assess the time-related effect of the HMI's usability on the system, and a paired t test showed that the task completion time was reduced. The time delay of the system was measured to be 0.72 s. Graphical abstract A head-mounted master interface (HMI), which can be integrated into an existing surgical robot system, was developed to allow a continuous surgical flow. The surgeon's head motion is detected through the proposed HMI and classified using a support vector machine to manipulate the endoscopic robot arm. A classification accuracy of 92.28% was achieved.
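The classifier described above relies on the Gaussian (RBF) kernel and ten-fold cross-validation; both can be sketched in a few lines of plain Python. The σ value is taken from the abstract, while everything else here is a generic illustration rather than the authors' implementation:

```python
import math

def gaussian_kernel(x: list[float], y: list[float], sigma: float = 0.85) -> float:
    """RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))

def kfold_indices(n_samples: int, k: int = 10) -> list[list[int]]:
    """Split sample indices into k contiguous, near-equal validation folds."""
    folds, start = [], 0
    for i in range(k):
        size = n_samples // k + (1 if i < n_samples % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

# Identical feature vectors give the kernel's maximum value of 1.0.
print(gaussian_kernel([0.1, 0.2], [0.1, 0.2]))  # 1.0
# 25 samples split ten ways: five folds of 3 samples and five of 2.
print([len(f) for f in kfold_indices(25)])  # [3, 3, 3, 3, 3, 2, 2, 2, 2, 2]
```

In cross-validation, each fold serves once as the held-out validation set while the remaining folds train the classifier, and the kernel parameter is chosen to maximize the averaged validation accuracy.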