1. Nabagło T, Tabor Z, Augustyniak P. Measurement Systems for Use in the Navigation of the Cannula-Guide Assembly within the Deep Regions of the Bronchial Tree. Sensors (Basel) 2023; 23:2306. PMID: 36850904; PMCID: PMC9967606; DOI: 10.3390/s23042306.
Abstract
BACKGROUND: This paper presents a prototype spatial navigation system for localizing the distal tip of a cannula-guide assembly. The assembly is advanced through the working channel of a bronchoscope that is held fixed relative to the patient, and navigation within the bronchial tree is carried out by maneuvering the assembly. METHODS: The system consists of three devices mounted on the guide handle and at the entrance to the bronchoscope working channel, which record cannula displacement, rotation of the guide handle, and displacement of the handle ring associated with bending of the guide's distal tip. RESULTS: In laboratory experiments, cannula displacement was monitored with an accuracy of 2 mm, and the rotation and bending angles of the guide tip with accuracies of 10 and 20 degrees, respectively, outperforming currently used methods of bronchoscopy support. CONCLUSIONS: This accuracy is crucial for collecting histopathological material from a precisely defined site, making it possible to reach cancer cells at a very early stage.

2. Khan ZA, Beghdadi A, Kaaniche M, Alaya-Cheikh F, Gharbi O. A neural network based framework for effective laparoscopic video quality assessment. Comput Med Imaging Graph 2022; 101:102121. PMID: 36174307; DOI: 10.1016/j.compmedimag.2022.102121.
Abstract
Video quality assessment is a challenging problem of critical significance in medical imaging. In laparoscopic surgery, for instance, the acquired video suffers from various kinds of distortion that not only hinder surgical performance but also affect the execution of subsequent tasks in surgical navigation and robotic surgery. We therefore propose neural-network-based approaches for both distortion classification and quality prediction. A Residual Network (ResNet)-based approach is first developed for a simultaneous ranking and classification task. This architecture is then extended for the quality prediction task by adding a fully connected neural network (FCNN). To train the overall architecture (ResNet and FCNN models), both transfer-learning and end-to-end learning approaches are investigated. Experiments on a new laparoscopic video quality database show the effectiveness of the proposed methods compared with recent conventional and deep-learning-based approaches.
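The regression head described above, an FCNN mapping pooled CNN features to a quality score, can be illustrated with a small forward-pass sketch. This is not the authors' implementation: the layer sizes, random weights, and the `fcnn_quality_head` helper are hypothetical, and a random vector stands in for pooled ResNet features.

```python
import numpy as np

def fcnn_quality_head(features, weights):
    """Forward pass of a small fully connected head mapping a pooled
    CNN feature vector to a scalar quality score in (0, 1).
    `weights` is a list of (W, b) pairs; hidden layers use ReLU,
    the output layer a sigmoid."""
    x = features
    for W, b in weights[:-1]:
        x = np.maximum(0.0, x @ W + b)             # ReLU hidden layers
    W, b = weights[-1]
    return 1.0 / (1.0 + np.exp(-(x @ W + b)))      # sigmoid output

# Toy dimensions: 2048-dim pooled features -> 64 hidden units -> 1 score
rng = np.random.default_rng(0)
weights = [(rng.standard_normal((2048, 64)) * 0.01, np.zeros(64)),
           (rng.standard_normal((64, 1)) * 0.01, np.zeros(1))]
score = fcnn_quality_head(rng.standard_normal(2048), weights)
```

In the transfer-learning variant, only weights like these would be trained while the feature extractor stays frozen; end-to-end learning updates both.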
Affiliation(s)
- Zohaib Amjad Khan, Azeddine Beghdadi, Mounir Kaaniche, Osama Gharbi: Université Sorbonne Paris Nord, L2TI, UR 3043, F-93430 Villetaneuse, France

3. Multimodal Registration for Image-Guided EBUS Bronchoscopy. J Imaging 2022; 8(7):189. PMID: 35877633; PMCID: PMC9320860; DOI: 10.3390/jimaging8070189.
Abstract
The state-of-the-art procedure for examining the lymph nodes of a lung cancer patient involves an endobronchial ultrasound (EBUS) bronchoscope, which integrates two modalities into one device: (1) videobronchoscopy, which gives video images of the airway walls; and (2) convex-probe EBUS, which gives 2D fan-shaped views of extraluminal structures outside the airways. During the procedure, the physician first uses videobronchoscopy to navigate the device through the airways. Upon reaching a given node's approximate vicinity, the physician then probes the airway walls with EBUS to localize the node. Because lymph nodes lie beyond the airways, EBUS is essential for confirming a node's location. Unfortunately, it is well documented that EBUS is difficult to use, and while new image-guided bronchoscopy systems provide effective guidance for videobronchoscopic navigation, they offer no assistance for guiding EBUS localization. We propose a method for registering a patient's chest CT scan to live surgical EBUS views, thereby facilitating accurate image-guided EBUS bronchoscopy. The method entails an optimization process that registers CT-based virtual EBUS views to live EBUS probe views. Results on lung cancer patient data show that the method correctly registered 28/28 (100%) lymph nodes scanned by EBUS, with a mean registration time of 3.4 s and mean position and direction errors of registered sites of 2.2 mm and 11.8°, respectively. Sensitivity studies show the method's robustness to parameter variations, and we demonstrate its use in an image-guided system designed to guide both phases of EBUS bronchoscopy.
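The core idea, optimizing the agreement between CT-based virtual views and the live view, can be sketched with a toy example. The paper's optimizer and similarity metric are more sophisticated; here an exhaustive integer-shift search over normalized cross-correlation (the `register_by_search` helper is invented for illustration) stands in for the pose optimization.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized images."""
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def register_by_search(live, render, max_shift=5):
    """Exhaustively search integer (dy, dx) offsets of the rendered view,
    keeping the one that maximizes NCC against the live frame."""
    best, best_shift = -2.0, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cand = np.roll(np.roll(render, dy, axis=0), dx, axis=1)
            s = ncc(live, cand)
            if s > best:
                best, best_shift = s, (dy, dx)
    return best_shift, best

# Synthetic frames: the "live" view is the render displaced by (3, -2)
rng = np.random.default_rng(1)
render = rng.random((40, 40))
live = np.roll(np.roll(render, 3, axis=0), -2, axis=1)
shift, score = register_by_search(live, render)
```

A real system searches over a continuous 6-DOF probe pose and re-renders the virtual view at each candidate, but the maximize-similarity structure is the same.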

4. Shi RB, Mirza S, Martinez D, Douglas C, Cho J, Irish JC, Jaffray DA, Weersink RA. Cost-function testing methodology for image-based registration of endoscopy to CT images in the head and neck. Phys Med Biol 2020; 65. PMID: 32702685; DOI: 10.1088/1361-6560/aba8b3.
Abstract
One of the largest geometric uncertainties in designing radiotherapy treatment plans for squamous cell cancers of the head and neck is contouring the gross tumour volume. We have previously described a method of projecting mucosal disease contours, visible on endoscopy, onto volumetrically reconstructed planning CT datasets, using electromagnetic (EM) tracking of a flexible endoscope to enable rigid registration between endoscopic and CT images. To achieve the accuracy needed for radiotherapy planning, however, we propose refining this initial registration with image-based registration methods. In this paper, several classes of cost function (cross-correlation, mutual information, and gradient methods) are evaluated for accuracy and robustness on three phantoms and eight clinical cases, with the initial endoscopy-to-CT registration provided by the endoscope pose recovered from EM tracking. For each test case, a ground-truth virtual camera pose was first defined by manual registration of anatomical features visible in both real and virtual endoscope images. A set of evenly spaced fiducial points and a sample contour were created and projected onto the CT image to assess registration quality. A set of 5000 displaced poses was then generated by randomly sampling displacements along each translational and rotational dimension; at each pose, the fiducial and contour points in the real image were again projected onto the CT image, and the cost function, fiducial registration error, and contouring error were calculated. While all cost functions performed well in select cases, only the normalized gradient field function consistently yielded registration errors below 2 mm, the accuracy needed if registration of mucosal disease identified on optical images to CT images is to be used in the clinical practice of radiation treatment planning. (Registration: ClinicalTrials.gov NCT02704169.)
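Of the cost-function classes compared, the best performer, the normalized gradient field (NGF), can be sketched compactly: it rewards locations where the edge directions of the two images agree, regardless of intensity scale. The sketch below is a generic NGF similarity on synthetic images, not the paper's implementation; `ngf_similarity` and the epsilon value are illustrative.

```python
import numpy as np

def normalized_gradient_field(img, eps=1e-3):
    """Per-pixel gradient direction, with `eps` damping flat regions."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.sqrt(gx**2 + gy**2 + eps**2)
    return gx / mag, gy / mag

def ngf_similarity(a, b):
    """Mean squared inner product of normalized gradients: near 1 on
    pixels where edges align, near 0 where there is no shared structure."""
    ax, ay = normalized_gradient_field(a)
    bx, by = normalized_gradient_field(b)
    return float(np.mean((ax * bx + ay * by) ** 2))

# A bright disc: its edges align perfectly with themselves,
# and only partially with a shifted copy of the disc.
yy, xx = np.mgrid[0:32, 0:32]
img = (((xx - 16) ** 2 + (yy - 16) ** 2) < 64).astype(float)
aligned = ngf_similarity(img, img)
misaligned = ngf_similarity(img, np.roll(img, 8, axis=1))
```

Because only gradient directions enter the measure, NGF is insensitive to the intensity differences between rendered CT surfaces and real endoscopic frames, which is one plausible reason it is robust in this setting.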
Affiliation(s)
- Souzan Mirza: Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario, Canada
- Diego Martinez, John Cho, David A Jaffray, Robert A Weersink: Radiation Medicine Program, Princess Margaret Hospital Cancer Centre, Toronto, Ontario, Canada
- Catriona Douglas, Jonathan C Irish: Surgical Oncology, Department of Surgery, University of Toronto, Toronto, Ontario, Canada

5. Sabri YY, Kamel KM, Hafez MAF, Nasef SMSS. Evaluation of the role of MSCT airway mapping in guiding trans-bronchial lung biopsy in cases of inaccessible lung lesions. Egypt J Radiol Nucl Med 2017. DOI: 10.1016/j.ejrnm.2017.06.004.

6. Pre-clinical validation of virtual bronchoscopy using 3D Slicer. Int J Comput Assist Radiol Surg 2016; 12:25-38. PMID: 27325238; DOI: 10.1007/s11548-016-1447-7.
Abstract
PURPOSE: Lung cancer remains the leading cause of cancer-related death, and the long-term survival rate remains low. Computed tomography (CT) is currently the most common imaging modality for recognizing lung disease. The purpose of this work was to develop a simple, easily accessible virtual bronchoscopy system, coupled with a customized electromagnetic (EM) tracking system for navigation in the lung, that requires as little user interaction as possible while maintaining high usability. METHODS: The proposed method is implemented as an extension to the open-source platform 3D Slicer. It creates a virtual reconstruction of the airways from CT images for virtual navigation, provides tools for pre-procedural planning and virtual navigation, and has been optimized for use with a [Formula: see text] degree-of-freedom EM tracking sensor. Performance was evaluated in ex vivo and in vivo testing. RESULTS: During ex vivo testing, nine volunteer physicians used the system to navigate to three separate targets placed inside a breathing pig-lung model. The system proved easy to use, accurately replicated the clinical setting, and helped users choose the correct path without any previous experience or image analysis. Two separate animal studies confirmed the technical feasibility and usability of the system. CONCLUSIONS: This work describes an easily accessible virtual bronchoscopy system for navigation in the lung that provides the user with a complete set of tools for navigating toward user-selected regions of interest. Results from the ex vivo and in vivo studies open the way for future work on virtual navigation for safe and reliable airway disease diagnosis.
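EM-guided virtual bronchoscopy of this kind requires registering tracker coordinates to CT coordinates. A common point-based approach (not necessarily the one used in this system) is the least-squares rigid fit of Kabsch/Umeyama over corresponding fiducial pairs:

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform (R, t) mapping points P (tracker space)
    onto corresponding points Q (CT space) via the Kabsch/SVD method."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                              # proper rotation (det = +1)
    t = cq - R @ cp
    return R, t

# Synthetic fiducials: rotate + translate tracker points into "CT" space
rng = np.random.default_rng(2)
P = rng.random((6, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([10.0, -5.0, 2.0])
Q = P @ R_true.T + t_true
R, t = rigid_register(P, Q)
fre = np.linalg.norm(P @ R.T + t - Q, axis=1).mean()  # fiducial registration error
```

On noise-free correspondences the fiducial registration error is essentially zero; with real tracker noise it becomes the quantity clinicians inspect before trusting the navigation display.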

7.
Abstract
Bronchoscopy is a commonly used minimally invasive procedure for lung-cancer staging. In standard practice, however, physicians differ greatly in their levels of performance. To address this concern, image-guided intervention (IGI) systems have been devised to improve procedure success. Current IGI bronchoscopy systems based on virtual bronchoscopic navigation (VBN), however, require involvement from the attending technician, which lessens physician control and hinders the overall acceptance of such systems. We propose a hands-free VBN system for planning and guiding bronchoscopy that introduces two major contributions. First, it incorporates a new procedure-planning method that automatically computes airway navigation plans conforming to the physician's bronchoscopy training and manual dexterity. Second, it incorporates a guidance strategy for bronchoscope navigation that enables user-friendly system control via a foot switch, coupled with a novel position-verification mechanism. Phantom studies verified that the system enables smooth operation under physician control while also enabling faster navigation than an existing technician-assisted VBN system. In a clinical human study, we noted a 97% bronchoscopy navigation success rate, in line with existing VBN systems, and a mean guidance time of 52 s per diagnostic site, often nearly 3 min faster per site than guidance times reported for other technician-assisted VBN systems. Finally, an ergonomic study further confirms the system's acceptability to physicians and its long-term potential.

8. Puerto-Souza GA, Cadeddu JA, Mariottini GL. Toward Long-Term and Accurate Augmented-Reality for Monocular Endoscopic Videos. IEEE Trans Biomed Eng 2014; 61:2609-20. DOI: 10.1109/TBME.2014.2323999.

9.

10. Cheng R, Xu S, Bokinsky A, McCreedy E, Gandler W, Wood BJ, McAuliffe MJ. GPU based multi-histogram volume navigation for virtual bronchoscopy. Annu Int Conf IEEE Eng Med Biol Soc 2014; 2014:3308-12. PMID: 25570698; PMCID: PMC5990254; DOI: 10.1109/EMBC.2014.6944330.
Abstract
An interactive navigation system for virtual bronchoscopy is presented, based solely on high-performance, GPU-based multi-histogram volume rendering.

11. Gibbs JD, Graham MW, Bascom R, Cornish DC, Khare R, Higgins WE. Optimal procedure planning and guidance system for peripheral bronchoscopy. IEEE Trans Biomed Eng 2013; 61:638-57. PMID: 24235246; DOI: 10.1109/TBME.2013.2285627.
Abstract
With the development of multidetector computed-tomography (MDCT) scanners and ultrathin bronchoscopes, bronchoscopy is becoming a viable option for diagnosing peripheral lung-cancer nodules. The workflow for assessing lung cancer consists of two phases: (1) 3D MDCT analysis and (2) live bronchoscopy. Unfortunately, yield rates for peripheral bronchoscopy have been reported to be as low as 14%, and bronchoscopy performance varies considerably between physicians. Recently proposed image-guided systems have shown promise for assisting with peripheral bronchoscopy, yet MDCT-based route planning to target sites has relied on tedious, error-prone techniques. In addition, route planning tends not to incorporate the known anatomical, device, and procedural constraints that determine a feasible route, and existing systems do not effectively integrate MDCT-derived route information into the live guidance process. We propose a system that incorporates an automatic optimal route-planning method integrating known route constraints, and that offers a natural translation of the MDCT-based route plan into the live guidance strategy via MDCT/video data fusion. An image-based study demonstrates the route-planning method's functionality. In a prospective lung-cancer patient study, our system achieved a successful navigation rate of 91% to target sites. Furthermore, compared to a competing commercial system, our system enabled bronchoscopy two airways deeper into the airway-tree periphery, with a sample time nearly 2 min shorter on average. Finally, the system's ability to almost perfectly predict in advance the navigable depth of a bronchoscope's route represents a substantial benefit of optimal route planning.
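Constraint-aware route planning of the kind described above can be approximated by a shortest-path search that rejects branches the device cannot traverse. This sketch is a simplification under assumed data structures (the airway graph, branch diameters, and node names are invented), not the paper's optimal-planning method:

```python
import heapq

def plan_route(edges, diameters, start, target, scope_diameter):
    """Shortest route from `start` to `target` through an airway graph,
    restricted to branches wide enough for the bronchoscope.
    `edges` maps node -> [(neighbor, length_mm)]; `diameters` maps
    (node, neighbor) -> branch diameter in mm."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue                                 # stale heap entry
        for v, length in edges.get(u, []):
            if diameters[(u, v)] < scope_diameter:
                continue                             # device cannot fit here
            nd = d + length
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if target not in dist:
        return None, float("inf")
    route, node = [target], target
    while node != start:
        node = prev[node]
        route.append(node)
    return route[::-1], dist[target]

# Toy airway tree: two paths to the same peripheral site, one through
# a branch too narrow for a hypothetical 2.8 mm ultrathin scope.
edges = {"trachea": [("RMB", 40.0)],
         "RMB": [("RB1", 20.0), ("RB2", 15.0)],
         "RB1": [("target", 10.0)],
         "RB2": [("target", 5.0)]}
diameters = {("trachea", "RMB"): 12.0, ("RMB", "RB1"): 4.0,
             ("RMB", "RB2"): 2.0, ("RB1", "target"): 3.0,
             ("RB2", "target"): 3.0}
route, length = plan_route(edges, diameters, "trachea", "target", 2.8)
```

Note the constraint changes the answer: the geometrically shorter path through RB2 is rejected because its 2.0 mm branch cannot admit the scope.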

12. Merritt SA, Khare R, Bascom R, Higgins WE. Interactive CT-video registration for the continuous guidance of bronchoscopy. IEEE Trans Med Imaging 2013; 32:1376-96. PMID: 23508260; PMCID: PMC3911781; DOI: 10.1109/TMI.2013.2252361.
Abstract
Bronchoscopy is a major step in lung cancer staging. To perform bronchoscopy, the physician uses a procedure plan, derived from a patient's 3D computed-tomography (CT) chest scan, to navigate the bronchoscope through the lung airways. Unfortunately, physicians vary greatly in their ability to perform bronchoscopy. As a result, image-guided bronchoscopy systems, drawing upon the concept of CT-based virtual bronchoscopy (VB), have been proposed. These systems attempt to register the bronchoscope's live position within the chest to a CT-based virtual chest space. Recent methods, which register the bronchoscopic video to CT-based endoluminal airway renderings, show promise but do not enable continuous real-time guidance. We present a CT-video registration method inspired by computer-vision innovations in the fields of image alignment and image-based rendering. In particular, motivated by the Lucas-Kanade algorithm, we propose an inverse-compositional framework built around a gradient-based optimization procedure. We next propose an implementation of the framework suitable for image-guided bronchoscopy. Laboratory tests, involving both single frames and continuous video sequences, demonstrate the robustness and accuracy of the method. Benchmark timing tests indicate that the method can run continuously at 300 frames/s, well beyond the real-time bronchoscopic video rate of 30 frames/s. This compares extremely favorably to the ≥ 1 s/frame speeds of other methods and indicates the method's potential for real-time continuous registration. A human phantom study confirms the method's efficacy for real-time guidance in a controlled setting, and, hence, points the way toward the first interactive CT-video registration approach for image-guided bronchoscopy. Along this line, we demonstrate the method's efficacy in a complete guidance system by presenting a clinical study involving lung cancer patients.
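The inverse-compositional trick, precomputing the template gradients and Hessian so that each iteration is cheap, can be shown for the simplest case of a pure-translation warp. This is a generic textbook sketch on a synthetic image, not the paper's full implementation:

```python
import numpy as np

def bilinear(img, y, x):
    """Bilinear sampling of img at (possibly fractional) coordinates."""
    y0 = np.clip(np.floor(y).astype(int), 0, img.shape[0] - 2)
    x0 = np.clip(np.floor(x).astype(int), 0, img.shape[1] - 2)
    fy, fx = y - y0, x - x0
    return (img[y0, x0] * (1 - fy) * (1 - fx) + img[y0 + 1, x0] * fy * (1 - fx)
            + img[y0, x0 + 1] * (1 - fy) * fx + img[y0 + 1, x0 + 1] * fy * fx)

def ic_align_translation(template, image, iters=50):
    """Inverse-compositional alignment for a pure-translation warp.
    Steepest-descent images and the 2x2 Hessian are computed once from the
    template; each iteration only warps the image and solves a 2x2 system."""
    gy, gx = np.gradient(template)
    J = np.stack([gx.ravel(), gy.ravel()], axis=1)   # steepest-descent images
    Hinv = np.linalg.inv(J.T @ J)                    # constant Hessian inverse
    p = np.zeros(2)                                  # (tx, ty)
    yy, xx = np.mgrid[0:template.shape[0], 0:template.shape[1]]
    for _ in range(iters):
        warped = bilinear(image, yy + p[1], xx + p[0])
        dp = Hinv @ (J.T @ (warped - template).ravel())
        p -= dp                                      # inverse-compositional update
        if np.linalg.norm(dp) < 1e-6:
            break
    return p

# Smooth synthetic frame: a Gaussian blob translated by (tx, ty) = (1.5, -2.0)
yy, xx = np.mgrid[0:64, 0:64]
template = np.exp(-(((xx - 32) ** 2 + (yy - 32) ** 2) / 50.0))
image = np.exp(-(((xx - 32 - 1.5) ** 2 + (yy - 32 + 2.0) ** 2) / 50.0))
p = ic_align_translation(template, image)
```

The speed claim in the abstract comes from exactly this structure: the expensive gradient and Hessian work happens once per reference view, leaving only a warp and a tiny linear solve per video frame.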
Affiliation(s)
- Rahul Khare: Sheikh Zayed Institute, Children's National Medical Center, Washington, DC 20010, USA
- Rebecca Bascom: Department of Medicine, Pennsylvania State Hershey Medical Center, Hershey, PA 17033, USA
- William E. Higgins: Departments of Electrical Engineering, Computer Science and Engineering, and Bioengineering, Pennsylvania State University, University Park, PA 16802, USA

13. Puerto-Souza GA, Mariottini GL. A fast and accurate feature-matching algorithm for minimally-invasive endoscopic images. IEEE Trans Med Imaging 2013; 32:1201-14. PMID: 23335663; DOI: 10.1109/TMI.2013.2239306.
Abstract
The ability to find image similarities between two distinct endoscopic views is known as feature matching and is essential in many robotic-assisted minimally-invasive surgery (MIS) applications. Unlike feature-tracking methods, feature matching makes no restrictive assumption about the chronological order of the two images or about organ motion: it first obtains a set of appearance-based image matches and subsequently removes possible outliers based on geometric constraints. As a consequence, feature-matching algorithms can recover the position of any image feature after unexpected camera events such as complete occlusions, sudden endoscopic-camera retraction, or strong illumination changes. We introduce the hierarchical multi-affine (HMA) algorithm, which improves over existing feature-matching methods in the number of image correspondences, speed, accuracy, and robustness. We tested HMA on a large annotated dataset of more than 100 MIS image pairs obtained from real interventions and containing many of the aforementioned sudden events. In all of these cases, HMA outperforms the existing state-of-the-art methods in terms of speed, accuracy, and robustness. HMA and the image database are freely available on the internet.
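The two-stage strategy, appearance-based matches followed by geometric outlier removal, can be sketched with a single global affine model. This is a deliberate simplification: HMA itself clusters matches into several local affine models, and the `filter_matches` helper and its tolerance are illustrative.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine map dst = src @ M.T + t, returned as A = [M | t] (2x3)."""
    X = np.hstack([src, np.ones((len(src), 1))])
    coeff, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return coeff.T

def filter_matches(src, dst, tol=10.0, rounds=3):
    """Alternate between fitting one affine model and discarding matches
    whose residual exceeds `tol` pixels."""
    inliers = np.ones(len(src), dtype=bool)
    for _ in range(rounds):
        A = fit_affine(src[inliers], dst[inliers])
        pred = src @ A[:, :2].T + A[:, 2]
        inliers = np.linalg.norm(pred - dst, axis=1) < tol
    return A, inliers

# Synthetic putative matches: a rotation + translation, plus two gross outliers
rng = np.random.default_rng(3)
src = rng.random((30, 2)) * 100
c, s = np.cos(0.2), np.sin(0.2)
dst = src @ np.array([[c, -s], [s, c]]).T + np.array([5.0, -3.0])
dst[:2] += 50.0                                  # corrupt the first two matches
A, inliers = filter_matches(src, dst)
```

Using several local affine models instead of one global fit, as HMA does, lets the geometric check tolerate the non-rigid deformation of tissue between views.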
Affiliation(s)
- Gustavo A Puerto-Souza: Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX 76019, USA

14. Mirota DJ, Uneri A, Schafer S, Nithiananthan S, Reh DD, Ishii M, Gallia GL, Taylor RH, Hager GD, Siewerdsen JH. Evaluation of a system for high-accuracy 3D image-based registration of endoscopic video to C-arm cone-beam CT for image-guided skull base surgery. IEEE Trans Med Imaging 2013; 32:1215-26. PMID: 23372078; PMCID: PMC4118820; DOI: 10.1109/TMI.2013.2243464.
Abstract
The safety of endoscopic skull base surgery can be enhanced by accurate navigation in preoperative computed tomography (CT) or, more recently, intraoperative cone-beam CT (CBCT). The ability to register real-time endoscopic video with CBCT offers an additional advantage by rendering information directly within the visual scene to account for intraoperative anatomical change. However, tracker localization error (∼1-2 mm) limits the accuracy with which video and tomographic images can be registered. This paper reports the first implementation of image-based video-CBCT registration, conducts a detailed quantitation of the dependence of registration accuracy on system parameters, and demonstrates improvement in registration accuracy achieved by the image-based approach. Performance was evaluated as a function of parameters intrinsic to the image-based approach, including system geometry, CBCT image quality, and computational runtime. Overall system performance was evaluated in a cadaver study simulating transsphenoidal skull base tumor excision. Results demonstrated significant improvement in registration accuracy with a mean reprojection distance error of 1.28 mm for the image-based approach versus 1.82 mm for the conventional tracker-based method. Image-based registration was highly robust against CBCT image quality factors of noise and resolution, permitting integration with low-dose intraoperative CBCT.
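A reprojection distance error of the kind used to compare the two registration approaches can be computed with a standard pinhole camera model. The intrinsics and point coordinates below are invented for illustration:

```python
import numpy as np

def project(K, R, t, pts3d):
    """Pinhole projection of 3D points into pixel coordinates."""
    cam = pts3d @ R.T + t                  # world -> camera frame
    uvw = cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def mean_reprojection_distance(K, pose_a, pose_b, pts3d):
    """Mean 2D distance between projections of the same 3D targets under
    two estimated camera poses, e.g. image-based vs tracker-based."""
    pa = project(K, *pose_a, pts3d)
    pb = project(K, *pose_b, pts3d)
    return float(np.linalg.norm(pa - pb, axis=1).mean())

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R = np.eye(3)
pts = np.array([[0.0, 0.0, 100.0], [10.0, 5.0, 120.0], [-8.0, 3.0, 90.0]])
# Identical poses project identically; a 1 mm lateral pose offset does not.
err0 = mean_reprojection_distance(K, (R, np.zeros(3)), (R, np.zeros(3)), pts)
err1 = mean_reprojection_distance(K, (R, np.zeros(3)), (R, np.array([1.0, 0.0, 0.0])), pts)
```

With an 800-pixel focal length and targets ∼100 mm away, a 1 mm pose error already projects to roughly 8 pixels, which illustrates why sub-2 mm registration matters for an overlay the surgeon must trust.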
Affiliation(s)
- Daniel J. Mirota, Ali Uneri, Russell H. Taylor, Gregory D. Hager: Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Sebastian Schafer, Jeffrey H. Siewerdsen: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Douglas D. Reh, Masaru Ishii: Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins Medical Institutions, Baltimore, MD 21218, USA
- Gary L. Gallia: Department of Neurosurgery and Oncology, Johns Hopkins Medical Institutions, Baltimore, MD 21218, USA

15. An optical flow approach to tracking colonoscopy video. Comput Med Imaging Graph 2013; 37:207-23. DOI: 10.1016/j.compmedimag.2013.01.010.

16. Adamczyk M, Tomaszewski G, Naumczyk P, Kluczewska E, Walecki J. Usefulness of computed tomography virtual bronchoscopy in the evaluation of bronchi divisions. Pol J Radiol 2013; 78:30-41. PMID: 23494710; PMCID: PMC3596143; DOI: 10.12659/PJR.883765.
Abstract
Background: Since the introduction of multislice CT scanners into clinical practice, virtual bronchoscopy has gained considerably in quality and diagnostic potential. Nevertheless, it has no established place in the diagnosis of tracheal and bronchial disorders, and its potential has not been examined sufficiently. Currently, the majority of bronchial tree variants and lesions are revealed by bronchofiberoscopy, an objective and relatively safe method that nonetheless has side effects, especially in higher-risk subjects. Noninvasive techniques for evaluating the airways should therefore be consistently developed and updated. Material/Methods: The material consisted of 100 adults (45 female, 55 male) aged 18 to 65 years (mean 40 years, median 40.5 years, SD 14.02) who underwent chest CT examination on a 16-slice scanner. Every patient had a normal appearance of the chest organs, with the exception of minor abnormalities that did not alter the airway route. Divisions of the bronchial tree down to the segmental level were evaluated and assigned to particular types using the virtual bronchoscopy projection; in difficult cases, MPR or MinIP projections were used. Results: The frequency of lobar bronchi divisions other than the typical ones was: right upper lobar bronchus 45%, left 55%; middle lobar bronchus 21%, lingula 26%; right lower lobar bronchus 28%, left 29%. A subsuperior bronchus or bronchi were found on the right side in 44% and on the left side in 37%. No dependency between the types of bronchial divisions at different levels was found.
Affiliation(s)
- Michał Adamczyk: Department of Diagnostic Radiology, Central Clinical Hospital of the Ministry of Interior in Warsaw, Warsaw, Poland

17. Liu J, Subramanian KR, Yoo TS. A robust method to track colonoscopy videos with non-informative images. Int J Comput Assist Radiol Surg 2013; 8:575-92. PMID: 23377706; DOI: 10.1007/s11548-013-0814-x.
Abstract
PURPOSE: Continuous alignment of optical and virtual images can significantly supplement the clinical value of colonoscopy. However, the co-alignment process is frequently interrupted by non-informative images. A video tracking framework to continuously track optical colonoscopy images was developed and tested. METHODS: A video tracking framework with immunity to non-informative images was developed with three essential components: temporal volume flow, region flow, and incremental egomotion estimation. Temporal volume flow selects two similar images interrupted by non-informative images; region flow measures large visual motion between the selected images; and incremental egomotion estimation recovers significant camera motion by decomposing each large visual-motion vector into a sequence of small optical-flow vectors. The framework was extensively evaluated on phantom and colonoscopy image sequences. We constructed two colon-like phantoms, one straight and one curved, to measure actual colonoscopy motion. RESULTS: In the straight phantom, after 48 frames were excluded, the tracking error was [Formula: see text]3 mm of 16 mm traveled; in the curved phantom, the error was [Formula: see text]4 mm of 23.88 mm traveled after 72 frames were excluded. The robustness of the tracking framework was demonstrated on 30 clinical colonoscopy image sequences from 22 different patients. Four of these sequences were chosen to illustrate the algorithm's decreased sensitivity to (1) fluid immersion, (2) wall contact, (3) surgery-induced colon deformation, and (4) multiple non-informative image sequences. CONCLUSION: A robust tracking framework for real-time colonoscopy was developed that facilitates continuous alignment of optical and virtual images and is immune to non-informative images entering the video stream. The system was validated in phantom testing and achieved success with clinical image sequences.
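The first component, selecting two similar images across a run of non-informative frames, can be sketched with a simple histogram similarity. This is a crude stand-in for the paper's temporal volume flow; the `hist_similarity` and `bridge_gap` helpers and the 0.8 threshold are invented for illustration.

```python
import numpy as np

def hist_similarity(a, b, bins=32):
    """Cosine similarity between intensity histograms of two frames."""
    ha, _ = np.histogram(a, bins=bins, range=(0.0, 1.0), density=True)
    hb, _ = np.histogram(b, bins=bins, range=(0.0, 1.0), density=True)
    denom = np.linalg.norm(ha) * np.linalg.norm(hb)
    return float(ha @ hb / denom) if denom > 0 else 0.0

def bridge_gap(frames, threshold=0.8):
    """Index of the first later frame similar enough to frame 0 to resume
    tracking across a run of non-informative (e.g. washed-out) frames."""
    for i in range(1, len(frames)):
        if hist_similarity(frames[0], frames[i]) >= threshold:
            return i
    return None

rng = np.random.default_rng(4)
good = rng.random((48, 48))                                   # informative frame
washed = np.full((48, 48), 0.5) + rng.normal(0, 0.01, (48, 48))  # washed-out frame
frames = [good, washed, washed,
          np.clip(good + rng.normal(0, 0.02, good.shape), 0.0, 1.0)]
idx = bridge_gap(frames)
```

Once the gap endpoints are chosen, region flow and the incremental egomotion decomposition take over to recover the large camera motion between them.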
Affiliation(s)
- Jianfei Liu: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD 20892, USA

18. Czarnecka K, Yasufuku K. Interventional pulmonology: Focus on pulmonary diagnostics. Respirology 2012; 18:47-60. DOI: 10.1111/j.1440-1843.2012.02211.x.

19. Qiu J, Hope AJ, Cho BCJ, Sharpe MB, Dickie CI, DaCosta RS, Jaffray DA, Weersink RA. Displaying 3D radiation dose on endoscopic video for therapeutic assessment and surgical guidance. Phys Med Biol 2012; 57:6601-14. PMID: 23010769; DOI: 10.1088/0031-9155/57/20/6601.
Abstract
We have developed a method to register and display 3D parametric data, in particular radiation dose, on two-dimensional endoscopic images. This registration of radiation dose to endoscopic or optical imaging may be valuable in assessment of normal tissue response to radiation, and visualization of radiated tissues in patients receiving post-radiation surgery. Electromagnetic sensors embedded in a flexible endoscope were used to track the position and orientation of the endoscope allowing registration of 2D endoscopic images to CT volumetric images and radiation doses planned with respect to these images. A surface was rendered from the CT image based on the air/tissue threshold, creating a virtual endoscopic view analogous to the real endoscopic view. Radiation dose at the surface or at known depth below the surface was assigned to each segment of the virtual surface. Dose could be displayed as either a colorwash on this surface or surface isodose lines. By assigning transparency levels to each surface segment based on dose or isoline location, the virtual dose display was overlaid onto the real endoscope image. Spatial accuracy of the dose display was tested using a cylindrical phantom with a treatment plan created for the phantom that matched dose levels with grid lines on the phantom surface. The accuracy of the dose display in these phantoms was 0.8-0.99 mm. To demonstrate clinical feasibility of this approach, the dose display was also tested on clinical data of a patient with laryngeal cancer treated with radiation therapy, with estimated display accuracy of ∼2-3 mm. The utility of the dose display for registration of radiation dose information to the surgical field was further demonstrated in a mock sarcoma case using a leg phantom. 
With direct overlay of radiation dose on endoscopic imaging, tissue toxicities and tumor response in endoluminal organs can be directly correlated with the actual tissue dose, offering a more nuanced assessment of normal tissue toxicities following radiation therapy and accurate registration of radiation dose to the surgical field.
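The transparency-weighted colorwash described above can be pictured with a minimal sketch. The blending rule and the blue-to-red ramp below are illustrative assumptions, not the authors' implementation, which renders dose onto a virtual surface registered to the endoscope via electromagnetic tracking.

```python
import numpy as np

def overlay_dose(frame, dose, dose_max, alpha_max=0.6):
    """Alpha-blend a dose colorwash onto an RGB endoscopic frame.

    frame     : (H, W, 3) uint8 endoscopic image
    dose      : (H, W) float dose map already projected into the camera view
    dose_max  : dose value mapped to the full colorwash color
    alpha_max : opacity ceiling, so the underlying tissue stays visible
    """
    d = np.clip(dose / dose_max, 0.0, 1.0)
    # Simple blue-to-red ramp as a stand-in for a clinical color lookup table.
    color = np.stack([d, np.zeros_like(d), 1.0 - d], axis=-1) * 255.0
    # Transparency scales with dose: low-dose regions stay nearly see-through.
    alpha = (d * alpha_max)[..., None]
    return (frame * (1.0 - alpha) + color * alpha).astype(np.uint8)
```

Assigning opacity per pixel (or, in the paper, per surface segment) is what lets isodose structure show through without hiding the underlying tissue.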
Affiliation(s)
- Jimmy Qiu
- Radiation Medicine Program, Princess Margaret Hospital, 610 University Ave, Toronto, ON M5G 2M9, Canada
|
20
|
Graham MW, Gibbs JD, Higgins WE. Computer-based route-definition system for peripheral bronchoscopy. J Digit Imaging 2012; 25:307-17. [PMID: 22083553] [DOI: 10.1007/s10278-011-9433-7]
Abstract
Multi-detector computed tomography (MDCT) scanners produce high-resolution images of the chest. Given a patient's MDCT scan, a physician can use an image-guided intervention system to first plan and later perform bronchoscopy to diagnostic sites situated deep in the lung periphery. An accurate definition of complete routes through the airway tree leading to the diagnostic sites, however, is vital for avoiding navigation errors during image-guided bronchoscopy. We present a system for the robust definition of complete airway routes suitable for image-guided bronchoscopy. The system incorporates both automatic and semiautomatic MDCT analysis methods for this purpose. Using an intuitive graphical user interface, the user invokes automatic analysis on a patient's MDCT scan to produce a series of preliminary routes. Next, the user visually inspects each route and quickly corrects the observed route defects using the built-in semiautomatic methods. Application of the system to a human study for the planning and guidance of peripheral bronchoscopy demonstrates the efficacy of the system.
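At its core, a bronchoscopic route is a branch sequence from the trachea to the airway segment nearest the diagnostic site. The sketch below illustrates only that data-structure idea, with hypothetical branch names; the paper's system derives such routes from MDCT airway segmentation plus semiautomatic correction.

```python
def route_to_roi(parent, target):
    """Follow parent links from the airway segment nearest the ROI back to
    the trachea, returning the branch sequence to traverse at bronchoscopy."""
    route = [target]
    while parent.get(route[-1]) is not None:
        route.append(parent[route[-1]])
    return list(reversed(route))

# Hypothetical fragment of a right-lung airway tree (child -> parent).
airway_parent = {
    "trachea": None,
    "RMB": "trachea",   # right main bronchus
    "RLL": "RMB",       # right lower lobe bronchus
    "RB10": "RLL",      # posterior basal segmental bronchus
}
```

A route is "complete" in the paper's sense only if every link in this chain exists in the segmentation, which is why route-defect correction matters.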
Affiliation(s)
- Michael W Graham
- Department of Electrical Engineering, Penn State University, University Park, PA 16802, USA
|
21
|
Abstract
The trend toward minimally invasive surgical interventions has created new challenges for visualization during surgical procedures. However, at the same time, the introduction of high-definition digital endoscopy offers the opportunity to apply methods from computer vision to provide visualization enhancements such as anatomic reconstruction, surface registration, motion tracking, and augmented reality. This review provides a perspective on this rapidly evolving field. It first introduces the clinical and technical background necessary for developing vision-based algorithms for interventional applications. It then discusses several examples of clinical interventions where computer vision can be applied, including bronchoscopy, rhinoscopy, transnasal skull-base neurosurgery, upper airway interventions, laparoscopy, robotic-assisted surgery, and Natural Orifice Transluminal Endoscopic Surgery (NOTES). It concludes that the currently reported work is only the beginning. As the demand for minimally invasive procedures rises, computer vision in surgery will continue to advance through close interdisciplinary work between interventionists and engineers.
Affiliation(s)
- Daniel J Mirota
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA.
|
22
|
Weersink RA, Qiu J, Hope AJ, Daly MJ, Cho BCJ, DaCosta RS, Sharpe MB, Breen SL, Chan H, Jaffray DA. Improving superficial target delineation in radiation therapy with endoscopic tracking and registration. Med Phys 2012; 38:6458-68. [PMID: 22149829] [DOI: 10.1118/1.3658569]
Abstract
PURPOSE Target delineation within volumetric imaging is a critical step in the planning process of intensity modulated radiation therapy. In endoluminal cancers, endoscopy often reveals superficial areas of visible disease beyond what is seen on volumetric imaging. Quantitatively relating these findings to the volumetric imaging is prone to human error during the recall and contouring of the target. We have developed a method to improve target delineation in the radiation therapy planning process by quantitatively registering endoscopic findings (contours traced on endoscopic images) to volumetric imaging. METHODS Using electromagnetic sensors embedded in an endoscope, 2D endoscopic images were registered to computed tomography (CT) volumetric images by tracking the position and orientation of the endoscope relative to a CT image set. Regions-of-interest (ROI) in the 2D endoscopic view were delineated. A mesh created within the boundary of the ROI was projected onto the 3D image data, registering the ROI with the volumetric image. This 3D ROI was exported to clinical radiation treatment planning software. The precision and accuracy of the procedure were tested on two solid phantoms with superficial markings visible on both endoscopy and CT images. The first phantom was a T-shaped tube with X-marks etched on the interior. The second phantom was an anatomically correct skull phantom with a phantom superficial lesion placed on the pharyngeal surface. Markings were contoured on the endoscope images and compared with contours delineated in the treatment planning system based on the CT images. Clinical feasibility was tested on three patients with early stage glottic cancer. Image-based rendering using manually identified landmarks was used to improve the registration. RESULTS Using the T-shaped phantom with X-markings, the 2D to 3D registration accuracy was 1.5-3.5 mm, depending on the endoscope position relative to the markings. Intraobserver standard deviation was 0.5 mm.
Rotational accuracy was within 2°. Using the skull phantom, registration accuracy was assessed by calculating the average minimum surface distance between the endoscopy and treatment planning contours. The average surface distance was 0.92 mm, with 93% of all points in the 2D-endoscopy ROI within 1.5 mm of any point within the ROI contoured in the treatment planning software. This accuracy is limited by the CT imaging resolution and the electromagnetic (EM) sensor accuracy. The clinical testing demonstrated that endoscopic contouring is feasible. With registration based on EM tracking only, accuracy was 5.6-8.4 mm. Image-based registration reduced this error to less than 3.5 mm and enabled endoscopic contouring in all cases. CONCLUSIONS Registration of contours generated on 2D endoscopic images to 3D planning space is feasible, with accuracy smaller than typical set-up margins. Used in addition to standard 3D contouring methods in radiation planning, the technology may improve gross tumour volume (GTV) delineation for superficial tumors in luminal sites that are visible only in endoscopy.
Affiliation(s)
- R A Weersink
- Radiation Medicine Program, Princess Margaret Hospital, Toronto, Ontario M5G 2M9, Canada and Ontario Cancer Institute, University Health Network, Toronto, Ontario M5G 2M9, Canada
|
23
|
Uneri A, Schafer S, Mirota DJ, Nithiananthan S, Otake Y, Taylor RH, Gallia GL, Khanna AJ, Lee S, Reh DD, Siewerdsen JH. TREK: an integrated system architecture for intraoperative cone-beam CT-guided surgery. Int J Comput Assist Radiol Surg 2011; 7:159-73. [PMID: 21744085] [DOI: 10.1007/s11548-011-0636-7]
Abstract
PURPOSE A system architecture has been developed for integration of intraoperative 3D imaging [viz., mobile C-arm cone-beam CT (CBCT)] with surgical navigation (e.g., trackers, endoscopy, and preoperative image and planning data). The goal of this paper is to describe the architecture and its handling of a broad variety of data sources in modular tool development for streamlined use of CBCT guidance in application-specific surgical scenarios. METHODS The architecture builds on two proven open-source software packages, namely the cisst package (Johns Hopkins University, Baltimore, MD) and 3D Slicer (Brigham and Women's Hospital, Boston, MA), and combines data sources common to image-guided procedures with intraoperative 3D imaging. Integration at the software component level is achieved through language bindings to a scripting language (Python) and an object-oriented approach to abstract and simplify the use of devices with varying characteristics. The platform aims to minimize offline data processing and to expose quantitative tools that analyze and communicate factors of geometric precision online. Modular tools are defined to accomplish specific surgical tasks, demonstrated in three clinical scenarios (temporal bone, skull base, and spine surgery) that involve a progressively increased level of complexity in toolset requirements. RESULTS The resulting architecture (referred to as "TREK") hosts a collection of modules developed according to application-specific surgical tasks, emphasizing streamlined integration with intraoperative CBCT. 
These include multi-modality image display; 3D-3D rigid and deformable registration to bring preoperative image and planning data to the most up-to-date CBCT; 3D-2D registration of planning and image data to real-time fluoroscopy; infrared, electromagnetic, and video-based trackers used individually or in hybrid arrangements; augmented overlay of image and planning data in endoscopic or in-room video; and real-time "virtual fluoroscopy" computed from GPU-accelerated digitally reconstructed radiographs (DRRs). Application in three preclinical scenarios (temporal bone, skull base, and spine surgery) demonstrates the utility of the modular, task-specific approach in progressively complex tasks. CONCLUSIONS The design and development of a system architecture for image-guided surgery has been reported, demonstrating enhanced utilization of intraoperative CBCT in surgical applications with vastly different requirements. The system integrates C-arm CBCT with a broad variety of data sources in a modular fashion that streamlines the interface to application-specific tools, accommodates distinct workflow scenarios, and accelerates testing and translation of novel toolsets to clinical use. The modular architecture was shown to adapt to and satisfy the requirements of distinct surgical scenarios from a common code-base, leveraging software components arising from over a decade of effort within the imaging and computer-assisted interventions community.
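The modular, task-specific assembly TREK describes can be evoked with a toy component registry in which each tool declares the data sources it needs, and an application instantiates only the tools its surgical scenario can support. All names here (`dose_overlay`, `em_tracker`, the registry API itself) are invented for illustration and bear no relation to the actual cisst or 3D Slicer interfaces.

```python
class ToolRegistry:
    """Toy registry: tools register with the data sources they require."""

    def __init__(self):
        self._tools = {}

    def register(self, name, requires):
        def wrap(cls):
            self._tools[name] = (cls, set(requires))
            return cls
        return wrap

    def assemble(self, available_sources):
        """Instantiate every tool whose required sources are all available."""
        return {name: cls() for name, (cls, req) in self._tools.items()
                if req <= set(available_sources)}

registry = ToolRegistry()

@registry.register("dose_overlay", requires={"cbct", "em_tracker", "video"})
class DoseOverlay: ...

@registry.register("virtual_fluoro", requires={"cbct"})
class VirtualFluoroscopy: ...
```

The design point is that adding a scenario means listing its sources, not rewriting the tools, which mirrors the paper's claim that distinct surgical workflows are served from a common code-base.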
Affiliation(s)
- A Uneri
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21205-2109, USA.
|
24
|
Automatic definition of the central-chest lymph-node stations. Int J Comput Assist Radiol Surg 2011; 6:539-55. [PMID: 21359877] [DOI: 10.1007/s11548-011-0547-7]
Abstract
PURPOSE Lung cancer remains the leading cause of cancer death in the United States. Central to the lung-cancer diagnosis and staging process is the assessment of the central-chest lymph nodes. This assessment requires two steps: (1) examination of the lymph-node stations and identification of diagnostically important nodes in a three-dimensional (3D) multidetector computed tomography (MDCT) chest scan; (2) tissue sampling of the identified nodes. We describe a computer-based system for automatically defining the central-chest lymph-node stations in a 3D MDCT chest scan. METHODS Automated methods first construct a 3D chest model, consisting of the airway tree, aorta, pulmonary artery, and other anatomical structures. Subsequent automated analysis then defines the 3D regional nodal stations, as specified by the internationally standardized TNM lung-cancer staging system. This analysis involves extracting over 140 pertinent anatomical landmarks from structures contained in the 3D chest model. Next, the physician uses data mining tools within the system to interactively select diagnostically important lymph nodes contained in the regional nodal stations. RESULTS Results from a ground-truth database of unlabeled lymph nodes identified in 32 MDCT scans verify the system's performance. The system automatically defined 3D regional nodal stations that correctly labeled 96% of the database's lymph nodes, with 93% of the stations correctly labeling 100% of their constituent nodes. CONCLUSIONS The system accurately defines the regional nodal stations in a given high-resolution 3D MDCT chest scan and eases a physician's burden for analyzing a given MDCT scan for lymph-node station assessment. It also shows potential as an aid for preplanning lung-cancer staging procedures.
|
25
|
Graham MW, Gibbs JD, Cornish DC, Higgins WE. Robust 3-D airway tree segmentation for image-guided peripheral bronchoscopy. IEEE Trans Med Imaging 2010; 29:982-97. [PMID: 20335095] [DOI: 10.1109/TMI.2009.2035813]
Abstract
A vital task in the planning of peripheral bronchoscopy is the segmentation of the airway tree from a 3-D multidetector computed tomography chest scan. Unfortunately, existing methods typically do not sufficiently extract the necessary peripheral airways needed to plan a procedure. We present a robust method that draws upon both local and global information. The method begins with a conservative segmentation of the major airways. Follow-on stages then exhaustively search for additional candidate airway locations. Finally, a graph-based optimization method counterbalances both the benefit and cost of retaining candidate airway locations for the final segmentation. Results demonstrate that the proposed method typically extracts 2-3 more generations of airways than several other methods, and that the extracted airway trees enable image-guided bronchoscopy deeper into the human lung periphery than past studies.
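The benefit/cost counterbalancing can be pictured as pruning the candidate tree: a candidate branch is worth keeping only if its own evidence, plus the best of its descendants, outweighs its cost, and only if its parent is also kept. The recursive sketch below is a toy stand-in for the paper's global graph-based optimization; all branch names and scores are invented.

```python
def best_subtree_score(children, node, benefit, cost):
    """Score of keeping `node`: its own benefit minus cost, plus every child
    subtree whose own best score is positive (children of dropped branches
    are unreachable, so they contribute nothing)."""
    score = benefit[node] - cost[node]
    for child in children.get(node, []):
        score += max(0.0, best_subtree_score(children, child, benefit, cost))
    return score

def prune(children, root, benefit, cost):
    """Return the set of candidate segments retained under the tree
    constraint that a segment is kept only if its parent is kept."""
    kept = {root}
    for child in children.get(root, []):
        if best_subtree_score(children, child, benefit, cost) > 0:
            kept |= prune(children, child, benefit, cost)
    return kept
```

Note how a weak intermediate branch can survive on the strength of a high-evidence descendant, which is exactly what lets global methods reach deeper peripheral generations than purely local region growing.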
|
26
|
Hergott CA, Tremblay A. Role of bronchoscopy in the evaluation of solitary pulmonary nodules. Clin Chest Med 2010; 31:49-63. [DOI: 10.1016/j.ccm.2009.08.003]
|
27
|
|
28
|
Deguchi D, Mori K, Feuerstein M, Kitasaka T, Maurer CR Jr, Suenaga Y, Takabatake H, Mori M, Natori H. Selective image similarity measure for bronchoscope tracking based on image registration. Med Image Anal 2009; 13:621-33. [DOI: 10.1016/j.media.2009.06.001]
|
29
|
Gibbs JD, Graham MW, Higgins WE. 3D MDCT-based system for planning peripheral bronchoscopic procedures. Comput Biol Med 2009; 39:266-79. [PMID: 19217089] [DOI: 10.1016/j.compbiomed.2008.12.012]
Abstract
The diagnosis and staging of lung cancer often begins with the assessment of a suspect peripheral chest site. Such suspicious peripheral sites may be solitary pulmonary nodules or other abnormally appearing regions of interest (ROIs). The state-of-the-art process for assessing such peripheral ROIs involves off-line procedure planning using a three-dimensional (3D) multidetector computed tomography (MDCT) chest scan followed by bronchoscopy with an ultrathin bronchoscope. We present an integrated computer-based system for planning peripheral bronchoscopic procedures. The system takes a 3D MDCT chest image as input and performs nearly all operations automatically. The only interaction required by the physician is the selection of ROI locations. The system is computationally efficient and fits smoothly within the clinical work flow. Integrated into the system and described in detail in the paper is a new surface-definition method, which is vital for effective analysis and planning to peripheral sites. Results demonstrate the efficacy of the system and its usage for the live guidance of ultrathin bronchoscopy to the periphery.
Affiliation(s)
- Jason D Gibbs
- Department of Electrical Engineering, Penn State University, University Park, PA 16802, USA
|
30
|
Yu KC, Gibbs JD, Graham MW, Higgins WE. Image-based reporting for bronchoscopy. J Digit Imaging 2008; 23:39-50. [PMID: 19050956] [DOI: 10.1007/s10278-008-9170-8]
Abstract
Bronchoscopy is often performed for staging lung cancer. The recent development of multidetector computed tomography (MDCT) scanners and ultrathin bronchoscopes now enables the bronchoscopic biopsy and treatment of peripheral diagnostic regions of interest (ROIs). Because these ROIs are often located several generations deep within the airway tree, careful planning and interpretation of the bronchoscopic route is required prior to a procedure. The current practice for planning bronchoscopic procedures, however, is difficult, error prone, and time consuming. To alleviate these issues, we propose a method for producing and previewing reports for bronchoscopic procedures using patient-specific MDCT chest scans. The reports provide quantitative data about the bronchoscopic routes and both static and dynamic previews of the proper airway route. The previews consist of virtual bronchoscopic endoluminal renderings along the route and three-dimensional cues for a final biopsy site. The reports require little storage space and computational resources, enabling physicians to view them on a portable tablet PC. To evaluate the efficacy of the reporting system, we generated reports for 22 patients in a human lung cancer pilot study. For 17 of these patients, we used the reports in conjunction with live image-based bronchoscopic guidance to direct physicians to central chest and peripheral ROIs for subsequent diagnostic evaluation. Our experience shows that the tool enabled useful procedure preview and effective planning of strategy prior to a live bronchoscopy.
Affiliation(s)
- Kun-Chang Yu
- Endographics Imaging Systems, State College, PA 16801, USA
|
31
|
Merritt SA, Gibbs JD, Yu KC, Patel V, Rai L, Cornish DC, Bascom R, Higgins WE. Image-guided bronchoscopy for peripheral lung lesions. Chest 2008; 134:1017-26. [DOI: 10.1378/chest.08-0603]
|
32
|
Rai L, Helferty JP, Higgins WE. Combined video tracking and image-video registration for continuous bronchoscopic guidance. Int J Comput Assist Radiol Surg 2008. [DOI: 10.1007/s11548-008-0241-6]
|
33
|
Dolina MY, Cornish DC, Merritt SA, Rai L, Mahraj R, Higgins WE, Bascom R. Interbronchoscopist variability in endobronchial path selection: a simulation study. Chest 2008; 133:897-905. [PMID: 18263679] [DOI: 10.1378/chest.07-2540]
Abstract
BACKGROUND Endobronchial path selection is important for the bronchoscopic diagnosis of focal lung lesions. Path selection typically involves mentally reconstructing a three-dimensional path by interpreting a stack of two-dimensional (2D) axial plane CT scan sections. The hypotheses of our study about path selection were as follows: (1) bronchoscopists are inaccurate and overly confident when making endobronchial path selections based on 2D CT scan analysis; and (2) path selection accuracy and confidence improve and become better aligned when bronchoscopists employ path-planning methods based on virtual bronchoscopy (VB). METHODS Studies of endobronchial path selection comparing three path-planning methods (ie, the standard 2D CT scan analysis and two new VB-based techniques) were performed. The task was to navigate to discrete lesions located between the third-order and fifth-order bronchi of the right upper and middle lobes. Outcome measures were the cumulative accuracy of making four sequential path selection decisions and self-reported confidence (1, least confident; 5, most confident). Both experienced and inexperienced bronchoscopists participated in the studies. RESULTS In the first study involving a static paper-based tool, the mean (+/- SD) cumulative accuracy was 14 +/- 3% using 2D CT scan analysis (confidence, 3.4 +/- 1.3) and 49 +/- 15% using a VB-based technique (confidence, 4.2 +/- 1.1; p = 0.0001 across all comparisons). For a second study using an interactive computer-based tool, the mean accuracy was 40 +/- 28% using 2D CT scan analysis (confidence, 3.0 +/- 0.3) and 96 +/- 3% using a dynamic VB-based technique (confidence, 4.6 +/- 0.2). Regardless of the experience level of the bronchoscopist, use of the standard 2D CT scan analysis resulted in poor path selection accuracy and misaligned confidence. Use of the VB-based techniques resulted in considerably higher accuracy and better aligned decision confidence. 
CONCLUSIONS Endobronchial path selection is a source of error in the bronchoscopy workflow. The use of VB-based path-planning techniques significantly improves path selection accuracy over use of the standard 2D CT scan section analysis in this simulation format.
Affiliation(s)
- Marina Y Dolina
- Department of Medicine, Penn State College of Medicine, Hershey, PA 17033, USA
|