1
Khan R, Akbar S, Khan A, Marwan M, Qaisar ZH, Mehmood A, Shahid F, Munir K, Zheng Z. Dental image enhancement network for early diagnosis of oral dental disease. Sci Rep 2023; 13:5312. PMID: 37002256. PMCID: PMC10066200. DOI: 10.1038/s41598-023-30548-5.
Abstract
Intelligent robotics and expert system applications in dentistry suffer from identification and detection problems due to the non-uniform brightness and low contrast of the captured images. Moreover, during the diagnostic process, exposure of sensitive facial parts to ionizing radiation (e.g., X-rays) has several disadvantages and provides only a limited viewing angle. Capturing high-quality medical images with advanced digital devices is challenging, and processing these images distorts their contrast and visual quality. This curtails the performance of potential intelligent and expert systems and discourages early diagnosis of oral and dental diseases. Traditional enhancement methods are designed for specific conditions, and network-based methods rely on large-scale datasets with limited adaptability to varying conditions. This paper proposes a novel and adaptive dental image enhancement strategy based on a small dataset and a paired-branch Denticle-Edification network (Ded-Net). The input dental images are decomposed into reflection and illumination in a multilayer Denticle network (De-Net). Subsequent enhancement operations are performed to remove the hidden degradation of reflection and illumination. Adaptive illumination consistency is maintained through the Edification network (Ed-Net). The network is regularized following the decomposition congruity of the input data and gives the user freedom to adapt the output to the desired contrast level. The experimental results demonstrate that the proposed method improves visibility and contrast and preserves the edges and boundaries of low-contrast input images, which makes it suitable for intelligent and expert system applications in future dental imaging.
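The reflection-illumination decomposition in this abstract follows the Retinex image model. As a point of comparison with the learned De-Net decomposition, a classical single-scale Retinex baseline can be sketched in plain NumPy; this is an illustrative sketch, not the paper's network, and it assumes a grayscale image with values in [0, 1], with illumination estimated by Gaussian smoothing and reflectance taken as the log-domain residual.

```python
import numpy as np

def gaussian_kernel(size=15, sigma=5.0):
    """Normalized 2D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def retinex_decompose(img, size=15, sigma=5.0, eps=1e-6):
    """Single-scale Retinex: estimate illumination as a Gaussian blur
    of the image and recover reflectance as the log-domain residual."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    H, W = img.shape
    illum = np.empty_like(img, dtype=float)
    # naive 2D convolution; fine for small images, slow for large ones
    for i in range(H):
        for j in range(W):
            illum[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    reflect = np.log(img + eps) - np.log(illum + eps)
    return illum, reflect
```

Fixed-kernel baselines of this kind are exactly the "designed for specific conditions" methods the paper contrasts with: the kernel size and sigma must be retuned per imaging condition, which a learned decomposition avoids.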
Affiliation(s)
- Rizwan Khan
- Department of Computer Science and Mathematics, Zhejiang Normal University, Jinhua, 321004, Zhejiang, China
- Saeed Akbar
- School of Computer Science, Huazhong University of Science and Technology, Wuhan, China
- Ali Khan
- Department of Computer Science and Mathematics, Zhejiang Normal University, Jinhua, 321004, Zhejiang, China
- Muhammad Marwan
- Department of Computer Science and Mathematics, Zhejiang Normal University, Jinhua, 321004, Zhejiang, China
- Zahid Hussain Qaisar
- School of Computer Science, Huazhong University of Science and Technology, Wuhan, China
- Atif Mehmood
- Department of Computer Science, National University of Modern Language, NUML, Islamabad, Pakistan
- Division of Biomedical Imaging, Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Stockholm, Sweden
- Farah Shahid
- Department of Computer Science, University of Agriculture, Sub-Campus (Burewala-Vehari), Faisalabad, Punjab, Pakistan
- Khushboo Munir
- Department of Radiology and Diagnostic Imaging, University of Alberta, Edmonton, Alberta, Canada
- Zhonglong Zheng
- Department of Computer Science and Mathematics, Zhejiang Normal University, Jinhua, 321004, Zhejiang, China
- Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, 321004, Zhejiang, China
2
Abstract
Because of the increasing use of laparoscopic surgeries, robotic technologies have been developed to overcome the challenges these surgeries impose on surgeons. This paper presents an overview of the current state of surgical robots used in laparoscopic surgeries. Four main categories were discussed: handheld laparoscopic devices, laparoscope positioning robots, master–slave teleoperated systems with dedicated consoles, and robotic training systems. A generalized control block diagram is developed to demonstrate the general control scheme for each category of surgical robots. In order to review these robotic technologies, related published works were investigated and discussed. Detailed discussions and comparison tables are presented to compare their effectiveness in laparoscopic surgeries. Each of these technologies has proved to be beneficial in laparoscopic surgeries.
3
Gruijthuijsen C, Garcia-Peraza-Herrera LC, Borghesan G, Reynaerts D, Deprest J, Ourselin S, Vercauteren T, Vander Poorten E. Robotic Endoscope Control Via Autonomous Instrument Tracking. Front Robot AI 2022; 9:832208. PMID: 35480090. PMCID: PMC9035496. DOI: 10.3389/frobt.2022.832208.
Abstract
Many keyhole interventions rely on bi-manual handling of surgical instruments, forcing the main surgeon to rely on a second surgeon to act as a camera assistant. In addition to the burden of excessively involving surgical staff, this may lead to reduced image stability, increased task completion time and sometimes errors due to the monotony of the task. Robotic endoscope holders, controlled by a set of basic instructions, have been proposed as an alternative, but their unnatural handling may increase the cognitive load of the (solo) surgeon, which hinders their clinical acceptance. More seamless integration in the surgical workflow would be achieved if robotic endoscope holders collaborated with the operating surgeon via semantically rich instructions that closely resemble instructions that would otherwise be issued to a human camera assistant, such as “focus on my right-hand instrument.” As a proof of concept, this paper presents a novel system that paves the way towards a synergistic interaction between surgeons and robotic endoscope holders. The proposed platform allows the surgeon to perform a bimanual coordination and navigation task, while a robotic arm autonomously performs the endoscope positioning tasks. Within our system, we propose a novel tooltip localization method based on surgical tool segmentation and a novel visual servoing approach that ensures smooth and appropriate motion of the endoscope camera. We validate our vision pipeline and run a user study of this system. The clinical relevance of the study is ensured through the use of a laparoscopic exercise validated by the European Academy of Gynaecological Surgery which involves bi-manual coordination and navigation. Successful application of our proposed system provides a promising starting point towards broader clinical adoption of robotic endoscope holders.
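The "smooth and appropriate motion" requirement of the endoscope-positioning step can be made concrete with a toy proportional controller. This is a hypothetical sketch, not the paper's visual servoing approach: it assumes the tooltip pixel position is already available from the segmentation stage and commands a normalized pan/tilt velocity, with a deadband that keeps the camera still for small errors (the jitter surgeons dislike).

```python
import numpy as np

def centering_velocity(tip_px, image_size, gain=0.5, deadband=0.05):
    """Proportional image-based servoing step: command a camera
    pan/tilt velocity that drives the tracked tooltip toward the
    image centre; hold still inside the deadband."""
    w, h = image_size
    # normalized error in [-1, 1], zero at the image centre
    err = np.array([(tip_px[0] - w / 2) / (w / 2),
                    (tip_px[1] - h / 2) / (h / 2)])
    if np.linalg.norm(err) < deadband:
        return np.zeros(2)          # inside deadband: do not move
    return -gain * err              # move opposite to the error

# closed-loop simulation with a crude camera-to-image model:
# the tip drifts toward the centre and then the deadband holds it
tip = np.array([600.0, 120.0])
for _ in range(50):
    v = centering_velocity(tip, (640, 480))
    tip += v * np.array([320, 240]) * 0.2
```

A real endoscope holder maps this image-space command through the camera Jacobian and robot kinematics; the deadband plus proportional gain is only the simplest way to obtain non-jittery centering behaviour.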
Affiliation(s)
- Luis C. Garcia-Peraza-Herrera
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Department of Surgical and Interventional Engineering, King’s College London, London, United Kingdom
- Correspondence: Luis C. Garcia-Peraza-Herrera
- Gianni Borghesan
- Department of Mechanical Engineering, KU Leuven, Leuven, Belgium
- Core Lab ROB, Flanders Make, Lommel, Belgium
- Jan Deprest
- Department of Development and Regeneration, Division Woman and Child, KU Leuven, Leuven, Belgium
- Sebastien Ourselin
- Department of Surgical and Interventional Engineering, King’s College London, London, United Kingdom
- Tom Vercauteren
- Department of Surgical and Interventional Engineering, King’s College London, London, United Kingdom
4
Gautier B, Tugal H, Tang B, Nabi G, Erden MS. Real-Time 3D Tracking of Laparoscopy Training Instruments for Assessment and Feedback. Front Robot AI 2021; 8:751741. PMID: 34805292. PMCID: PMC8600079. DOI: 10.3389/frobt.2021.751741.
Abstract
Assessment of minimally invasive surgical skills is a non-trivial task: it usually requires the presence and time of expert observers, involves subjectivity, and depends on special and expensive equipment and software. Although virtual simulators provide self-assessment features, they are limited because the trainee loses the immediate feedback of realistic physical interaction. Physical training boxes, on the other hand, preserve the immediate physical feedback but lack automated self-assessment facilities. This study develops an algorithm for real-time tracking of laparoscopy instruments in the video stream of a standard physical laparoscopy training box with a single fisheye camera. The visual tracking algorithm recovers the 3D positions of the laparoscopic instrument tips, to which simple colored tapes (markers) are attached. With such a system, the extracted instrument trajectories can be digitally processed and automated self-assessment feedback can be provided. In this way, the physical interaction feedback is preserved and the need for an expert observer is removed. Real-time instrument tracking with a suitable assessment criterion would be a significant step towards immediate feedback that corrects trainee actions and shows how an action should be performed. This study is a step towards a low-cost, automated, and widely applicable laparoscopy training and assessment system using a standard physical training box equipped with a fisheye camera.
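Because the instrument tips carry simple colored tape markers, the core of the detection step can be sketched as a per-channel threshold followed by a centroid. The thresholds and the pure-red marker are illustrative assumptions; the actual system must also undistort the fisheye image and triangulate to recover 3D positions, which is omitted here.

```python
import numpy as np

def marker_centroid(rgb, r_min=150, g_max=100, b_max=100):
    """Locate a red tape marker in an RGB frame by channel
    thresholding; return the pixel centroid (x, y) of the matching
    region, or None if the marker is not visible."""
    mask = ((rgb[..., 0] >= r_min) &
            (rgb[..., 1] <= g_max) &
            (rgb[..., 2] <= b_max))
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```

Per-frame centroids like this, accumulated over time, give the instrument trajectories from which motion-based assessment metrics (path length, smoothness, economy of movement) can be computed.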
Affiliation(s)
- Harun Tugal
- Heriot-Watt University, Scotland, United Kingdom
- Benjie Tang
- University of Dundee and Ninewells Hospital, Dundee, United Kingdom
- Ghulam Nabi
- University of Dundee and Ninewells Hospital, Dundee, United Kingdom
5
Choi J, Cho S, Chung JW, Kim N. Video recognition of simple mastoidectomy using convolutional neural networks: Detection and segmentation of surgical tools and anatomical regions. Comput Methods Programs Biomed 2021; 208:106251. PMID: 34271262. DOI: 10.1016/j.cmpb.2021.106251.
Abstract
A simple mastoidectomy is used to remove inflammation of the mastoid cavity and to create a route to the skull base and middle ear. However, due to the complexity and difficulty of the procedure, implementing robot vision for assisted surgery is a challenge. For a convolutional neural network to be usable in this surgical environment, each surgical instrument and anatomical region must be distinguishable in real time. To meet this condition, we used a recent instance segmentation architecture, YOLACT. In this study, a dataset comprising 5,319 frames extracted from 70 simple mastoidectomy surgery videos was used. Six surgical tools and five anatomic regions were annotated for training. The YOLACT-based model was trained and evaluated for real-time object detection and semantic segmentation in the surgical environment. Detection accuracies of surgical tools and anatomic regions were 91.2% and 56.5% in mean average precision, respectively. Additionally, the Dice similarity coefficient for segmentation of the five anatomic regions was 48.2%. The model ran at a mean of 32.3 frames per second, which is sufficient for real-time robotic applications.
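The Dice similarity coefficient reported for the anatomic-region segmentations is a standard mask-overlap metric and is straightforward to compute from binary masks. A minimal NumPy implementation of the metric (not of YOLACT itself):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |intersection| / (|pred| + |target|), in [0, 1]."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

Dice weights the overlap against the total mask area, so it penalizes both under- and over-segmentation; a score of 48.2% therefore indicates that roughly half of the predicted and reference region area agrees on average.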
Affiliation(s)
- Joonmyeong Choi
- University of Ulsan College of Medicine, Convergence Medicine, 388-1 pungnap2-dong, Radiology, East bld 2nd fl Seoul, Songpa-gu, 05505 Korea
- Sungman Cho
- University of Ulsan College of Medicine, Convergence Medicine, 388-1 pungnap2-dong, Radiology, East bld 2nd fl Seoul, Songpa-gu, 05505 Korea
- Jong Woo Chung
- University of Ulsan College of Medicine, Convergence Medicine, 388-1 pungnap2-dong, Radiology, East bld 2nd fl Seoul, Songpa-gu, 05505 Korea
- Namkug Kim
- University of Ulsan College of Medicine, Convergence Medicine, 388-1 pungnap2-dong, Radiology, East bld 2nd fl Seoul, Songpa-gu, 05505 Korea
6
He Y, Zhao B, Qi X, Li S, Yang Y, Hu Y. Automatic Surgical Field of View Control in Robot-Assisted Nasal Surgery. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2020.3039732.
7
Chu Y, Yang X, Li H, Ai D, Ding Y, Fan J, Song H, Yang J. Multi-level feature aggregation network for instrument identification of endoscopic images. Phys Med Biol 2020; 65:165004. PMID: 32344381. DOI: 10.1088/1361-6560/ab8dda.
Abstract
Identification of surgical instruments is crucial in understanding surgical scenarios and providing an assistive process in endoscopic image-guided surgery. This study proposes a novel multilevel feature-aggregated deep convolutional neural network (MLFA-Net) for identifying surgical instruments in endoscopic images. First, a global feature augmentation layer is created on the top layer of the backbone to improve the localization ability of object identification by boosting the high-level semantic information in the feature flow network. Second, a modified interaction path of cross-channel features is proposed to increase the nonlinear combination of features at the same level and improve the efficiency of information propagation. Third, a multiview fusion branch of features is built to aggregate the location-sensitive information of the same level in different views, increase the information diversity of features, and enhance the localization ability of objects. By utilizing this latent information, the proposed multilevel feature aggregation network can accomplish multitask instrument identification with a single network. Three tasks are handled: object detection, which classifies the type of instrument and locates its border; mask segmentation, which detects the instrument shape; and pose estimation, which detects the keypoints of instrument parts. The experiments are performed on laparoscopic images from the MICCAI 2017 Endoscopic Vision Challenge, and mean average precision (AP) and average recall (AR) are used to quantify the segmentation and pose estimation results. For bounding box regression, the AP and AR are 79.1% and 63.2%, respectively; for mask segmentation, 78.1% and 62.1%; and for pose estimation, 67.1% and 55.7%. The experiments demonstrate that our method efficiently improves the recognition accuracy of instruments in endoscopic images and outperforms other state-of-the-art methods.
Affiliation(s)
- Yakui Chu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China (authors contributed equally to this article)
8
Sarikaya D, Corso JJ, Guru KA. Detection and Localization of Robotic Tools in Robot-Assisted Surgery Videos Using Deep Neural Networks for Region Proposal and Detection. IEEE Trans Med Imaging 2017; 36:1542-1549. PMID: 28186883. DOI: 10.1109/tmi.2017.2665671.
Abstract
Video understanding of robot-assisted surgery (RAS) videos is an active research area. Modeling the gestures and skill level of surgeons presents an interesting problem. The insights drawn may be applied in effective skill acquisition, objective skill assessment, real-time feedback, and human-robot collaborative surgeries. We propose a solution to the tool detection and localization open problem in RAS video understanding, using a strictly computer vision approach and the recent advances of deep learning. We propose an architecture using multimodal convolutional neural networks for fast detection and localization of tools in RAS videos. To the best of our knowledge, this approach will be the first to incorporate deep neural networks for tool detection and localization in RAS videos. Our architecture applies a region proposal network (RPN) and a multimodal two stream convolutional network for object detection to jointly predict objectness and localization on a fusion of image and temporal motion cues. Our results with an average precision of 91% and a mean computation time of 0.1 s per test frame detection indicate that our study is superior to conventionally used methods for medical imaging while also emphasizing the benefits of using RPN for precision and efficiency. We also introduce a new data set, ATLAS Dione, for RAS video understanding. Our data set provides video data of ten surgeons from Roswell Park Cancer Institute, Buffalo, NY, USA, performing six different surgical tasks on the daVinci Surgical System (dVSS) with annotations of robotic tools per frame.
9
Bouget D, Allan M, Stoyanov D, Jannin P. Vision-based and marker-less surgical tool detection and tracking: a review of the literature. Med Image Anal 2016; 35:633-654. PMID: 27744253. DOI: 10.1016/j.media.2016.09.003.
Abstract
In recent years, tremendous progress has been made in surgical practice, for example with Minimally Invasive Surgery (MIS). To overcome the challenges of deported eye-to-hand manipulation, robotic and computer-assisted systems have been developed. Real-time knowledge of the pose of surgical tools with respect to the surgical camera and underlying anatomy is a key ingredient for such systems. In this paper, we present a review of the literature on vision-based and marker-less surgical tool detection. The paper makes three primary contributions: (1) identification and analysis of datasets used for developing and testing detection algorithms; (2) in-depth comparison of surgical tool detection methods, from the feature extraction process to the model learning strategy, highlighting existing shortcomings; and (3) analysis of the validation techniques employed to obtain detection performance results and to compare surgical tool detectors. The papers included in the review were selected through PubMed and Google Scholar searches using the keywords "surgical tool detection", "surgical tool tracking", "surgical instrument detection" and "surgical instrument tracking", limiting results to the years 2000-2015. Our study shows that despite significant progress over the years, the lack of established surgical tool datasets and of a reference format for performance assessment and method ranking is preventing faster improvement.
Affiliation(s)
- David Bouget
- Medicis team, INSERM U1099, Université de Rennes 1 LTSI, 35000 Rennes, France
- Max Allan
- Center for Medical Image Computing, University College London, WC1E 6BT London, United Kingdom
- Danail Stoyanov
- Center for Medical Image Computing, University College London, WC1E 6BT London, United Kingdom
- Pierre Jannin
- Medicis team, INSERM U1099, Université de Rennes 1 LTSI, 35000 Rennes, France
10
Yang Y, Song Y, Pan H, Cheng Y, Feng H, Wu H. Visual servo simulation of EAST articulated maintenance arm robot. Fusion Eng Des 2016. DOI: 10.1016/j.fusengdes.2016.01.024.
11
Abstract
In this paper, we present an appearance learning approach which is used to detect and track surgical robotic tools in laparoscopic sequences. By training a robust visual feature descriptor on low-level landmark features, we build a framework for fusing robot kinematics and 3D visual observations to track surgical tools over long periods of time across various types of environment. We demonstrate 3D tracking on multiple types of tool (with different overall appearances) as well as multiple tools simultaneously. We present experimental results using the da Vinci® surgical robot using a combination of both ex-vivo and in-vivo environments.
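The framework described above fuses robot kinematics with 3D visual observations. A minimal stand-in for such fusion (a sketch, not the paper's actual estimator) is a constant-gain blend: kinematic tool-tip estimates are smooth but biased (cable stretch, backlash), visual observations are accurate but noisy and intermittent, so a weighted average with a vision-dropout fallback already captures the structure.

```python
import numpy as np

def fuse(kinematic_xyz, visual_xyz, visual_weight=0.3):
    """Constant-gain fusion of a kinematic tool-tip estimate with a
    visual observation. If vision drops out (None), fall back to the
    kinematic estimate alone."""
    if visual_xyz is None:
        return np.asarray(kinematic_xyz, dtype=float)
    k = np.asarray(kinematic_xyz, dtype=float)
    v = np.asarray(visual_xyz, dtype=float)
    return (1 - visual_weight) * k + visual_weight * v
```

A production tracker would replace the fixed weight with a Kalman-style gain derived from the noise statistics of each source, but the fallback behaviour when the tool leaves the field of view is the same.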
Affiliation(s)
- Austin Reiter
- Department of Computer Science, Columbia University, USA
- Peter K Allen
- Department of Computer Science, Columbia University, USA
- Tao Zhao
- Intuitive Surgical, Inc., CA, USA
12
Azizian M, Khoshnam M, Najmaei N, Patel RV. Visual servoing in medical robotics: a survey. Part I: endoscopic and direct vision imaging - techniques and applications. Int J Med Robot 2013; 10:263-74. PMID: 24106103. DOI: 10.1002/rcs.1531.
Abstract
Background: Intra-operative imaging is widely used to provide visual feedback to a clinician while performing a procedure. In visual servoing, surgical instruments and parts of tissue/body are tracked by processing the acquired images; this information is then used within a control loop to manoeuvre a robotic manipulator during a procedure.
Methods: A comprehensive search of electronic databases was completed for the period 2000-2013 to survey visual servoing applications in medical robotics. The focus is on medical applications where image-based tracking is used for closed-loop control of a robotic system.
Results: A detailed classification and comparative study of contributions in visual servoing using endoscopic or direct visual images is presented and summarized in tables and diagrams.
Conclusion: The main challenges in using visual servoing for medical robotic applications are identified and potential future directions are suggested. Supervised automation of medical robotics is found to be a major trend in this field.
13
Sánchez-Margallo JA, Sánchez-Margallo FM, Oropesa I, Gómez EJ. Systems and technologies for objective evaluation of technical skills in laparoscopic surgery. Minim Invasive Ther Allied Technol 2013; 23:40-51. DOI: 10.3109/13645706.2013.827122.
14
Sánchez-Margallo JA, Sánchez-Margallo FM, Pagador JB, Gómez EJ, Sánchez-González P, Usón J, Moreno J. Video-based assistance system for training in minimally invasive surgery. Minim Invasive Ther Allied Technol 2010; 20:197-205. DOI: 10.3109/13645706.2010.534243.
15
16
Three-dimensional heart motion estimation using endoscopic monocular vision system: From artificial landmarks to texture analysis. Biomed Signal Process Control 2007. DOI: 10.1016/j.bspc.2007.07.006.
17
Voros S, Long JA, Cinquin P. Automatic localization of laparoscopic instruments for the visual servoing of an endoscopic camera holder. ACTA ACUST UNITED AC 2007; 9:535-42. PMID: 17354932. DOI: 10.1007/11866565_66.
Abstract
The use of a robotized camera holder in laparoscopic surgery allows a surgeon to control the endoscope without the intervention of an assistant. Today, the commands that a surgeon can give to robotized camera holders remain limited. To provide higher-level interaction between the surgeon and a robotized camera holder, we have developed a new method for the automatic tracking of laparoscopic instruments that works in near real time. The method is based on measuring the 3D positions of the insertion points of the instruments in the abdominal cavity and on a simple shape model of the laparoscopic instruments. We present the results of our first experiments on a cadaver.
Affiliation(s)
- Sandrine Voros
- TIMC-IMAG, UMR CNRS 5525, Université Joseph Fourier, Grenoble
18
Sauvée M, Poignet P, Triboulet J, Dombre E, Malis E, Demaria R. 3D heart motion estimation using endoscopic monocular vision system. ACTA ACUST UNITED AC 2006. DOI: 10.3182/20060920-3-fr-2912.00029.
19
Bayesian Differentiation of Multi-scale Line-Structures for Model-Free Instrument Segmentation in Thoracoscopic Images. ACTA ACUST UNITED AC 2005. DOI: 10.1007/11559573_114.
20
Jaspers JEN, Breedveld P, Herder JL, Grimbergen CA. Camera and instrument holders and their clinical value in minimally invasive surgery. Surg Laparosc Endosc Percutan Tech 2004; 14:145-52. PMID: 15471021. DOI: 10.1097/01.sle.0000129395.42501.5d.
Abstract
During minimally invasive procedures an assistant controls the laparoscope. Ideally, the surgeon should be able to manipulate all instruments, including the camera, him/herself, to avoid communication problems and disturbing camera movements. Camera holders return camera control to the surgeon and stabilize the laparoscopic image; an additional holder can be used to stabilize an extra laparoscopic instrument for retraction. A literature survey was carried out to give an overview of the existing "robotic" and passive camera and instrument holders and, where available, evidence of their clinical value. Benefits and limitations were identified. Most studies showed that camera holders, passive and active, provide the surgeon with a more stable image and enable surgeons to control their own view direction. Only the passive holders were suitable for holding instruments. Comparisons between different systems are reviewed. Both active and passive camera and instrument holders are functional and may be helpful for performing solo surgery. The benefits of active holders are questionable compared with the performance of the much simpler passive designs.
Affiliation(s)
- Joris E N Jaspers
- Medical Technological Development Department, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
21
Segmentation of Laparoscopic Images for Computer Assisted Surgery. ACTA ACUST UNITED AC 2003. DOI: 10.1007/3-540-45103-x_78.