1
Cai J, Zhu H, Liu S, Qi Y, Chen R. Lung image segmentation via generative adversarial networks. Front Physiol 2024; 15:1408832. PMID: 39219839; PMCID: PMC11365075; DOI: 10.3389/fphys.2024.1408832.
Abstract
Introduction Lung image segmentation plays an important role in computer-aided pulmonary disease diagnosis and treatment. Methods This paper explores lung CT image segmentation with generative adversarial networks. We employ a variety of generative adversarial networks, using their image-translation capability to perform segmentation: the network translates the original lung image into the segmented image. Results The generative adversarial network-based segmentation method is tested on a real lung image dataset. Experimental results show that the proposed method outperforms state-of-the-art methods. Discussion The generative adversarial network-based method is effective for lung image segmentation.
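The abstract frames segmentation as image-to-image translation. A minimal numpy sketch of the objective such translation GANs typically optimize follows; the function names, the pix2pix-style formulation, and the λ = 100 L1 weight are illustrative assumptions, not the paper's exact losses.

```python
import numpy as np

def generator_loss(d_fake_prob, fake_seg, target_seg, lam=100.0):
    """Generator objective: fool the discriminator on generated
    segmentations while staying close to the reference mask (L1 term)."""
    adv = -np.mean(np.log(d_fake_prob + 1e-8))   # BCE against the "real" label
    l1 = np.mean(np.abs(fake_seg - target_seg))  # pixel-wise fidelity
    return adv + lam * l1

def discriminator_loss(d_real_prob, d_fake_prob):
    """Discriminator objective: score (CT, reference mask) pairs as real
    and (CT, generated mask) pairs as fake."""
    return (-np.mean(np.log(d_real_prob + 1e-8))
            - np.mean(np.log(1.0 - d_fake_prob + 1e-8)))
```

A perfect generator (discriminator fooled, mask exact) drives both terms of `generator_loss` toward zero, while the L1 term dominates whenever the generated mask drifts from the reference.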
Affiliation(s)
- Jiaxin Cai
- School of Mathematics and Statistics, Xiamen University of Technology, Xiamen, China
- Hongfeng Zhu
- School of Mathematics and Statistics, Xiamen University of Technology, Xiamen, China
- Siyu Liu
- School of Computer and Information Engineering, Xiamen University of Technology, Xiamen, China
- Yang Qi
- School of Computer and Information Engineering, Xiamen University of Technology, Xiamen, China
- Rongshang Chen
- School of Computer and Information Engineering, Xiamen University of Technology, Xiamen, China
2
Nguyen DCT, Benameur S, Mignotte M, Lavoie F. Unsupervised registration of 3D knee implant components to biplanar X-ray images. BMC Med Imaging 2023; 23:133. PMID: 37718452; PMCID: PMC10506289; DOI: 10.1186/s12880-023-01048-9.
Abstract
BACKGROUND Registration of three-dimensional (3D) knee implant components to radiographic images provides the 3D position of the implants, which aids analysis of component alignment after total knee arthroplasty. METHODS We present an automatic 3D-to-two-dimensional (2D) registration using biplanar radiographic images, based on a hybrid similarity measure integrating region- and edge-based information. More precisely, this measure is defined as a weighted combination of an edge potential field-based similarity, which relates the external contours of the component projections to an edge potential field estimated on the two radiographic images, and an object specificity property, which is based on the distinction between region labels inside and outside the object. RESULTS The accuracy of our 3D/2D registration algorithm was assessed on a sample of 64 components (32 femoral and 32 tibial). In our tests, we obtained an average root mean square error (RMSE) of 0.18 mm, significantly lower than that of either single-similarity method, supporting our hypothesis of better stability and accuracy with the proposed approach. CONCLUSION Our method, which provides six accurate registration parameters (three rotations and three translations) without requiring fiducial markers, makes it possible to analyze the rotational alignment of the femoral and tibial components on a large number of cases. In addition, it can be extended to register other implants or bones.
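The weighted combination described in METHODS can be pictured with a schematic numpy sketch; the 0.5 weight, the array layout, and the exact term definitions below are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np

def hybrid_similarity(contour_pts, edge_potential, inside_labels, outside_labels, w=0.5):
    """Weighted hybrid measure: (i) mean edge potential sampled along the
    projected component contour, plus (ii) a region term rewarding
    label separation between inside and outside the projected silhouette."""
    edge_term = np.mean([edge_potential[y, x] for x, y in contour_pts])
    region_term = abs(np.mean(inside_labels) - np.mean(outside_labels))
    return w * edge_term + (1.0 - w) * region_term
```

A contour lying exactly on strong edges with perfectly separated region labels maximizes both terms at once, which is the intuition behind combining them.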
Affiliation(s)
- Dac Cong Tai Nguyen
- Département d'Informatique et de Recherche Opérationnelle (DIRO), Université de Montréal, Montréal, Québec, Canada
- Eiffel Medtech Inc., Montréal, Québec, Canada
- Max Mignotte
- Département d'Informatique et de Recherche Opérationnelle (DIRO), Université de Montréal, Montréal, Québec, Canada
- Frédéric Lavoie
- Eiffel Medtech Inc., Montréal, Québec, Canada
- Orthopedic Surgery Department, Centre Hospitalier de l'Université de Montréal (CHUM), Montréal, Québec, Canada
3
Huang S, Han X, Fan J, Chen J, Du L, Gao W, Liu B, Chen Y, Liu X, Wang Y, Ai D, Ma G, Yang J. Anterior Mediastinal Lesion Segmentation Based on Two-Stage 3D ResUNet With Attention Gates and Lung Segmentation. Front Oncol 2021; 10:618357. PMID: 33634027; PMCID: PMC7901488; DOI: 10.3389/fonc.2020.618357.
Abstract
OBJECTIVES Anterior mediastinal disease is a common chest condition. Computed tomography (CT), an important imaging technology, is widely used in the diagnosis of mediastinal diseases. Lesions are difficult to distinguish in CT images because of image artifacts, intensity inhomogeneity, and their similarity to other tissues. Direct segmentation of lesions gives doctors a way to better extract lesion features, thereby improving diagnostic accuracy. METHOD Deep learning is more accurate for image segmentation than traditional methods. We employ a two-stage 3D ResUNet combined with lung segmentation to segment CT images. Given that the mediastinum lies between the two lungs, the original image is cropped with the lung mask to remove noise that may affect lesion segmentation. To capture lesion features, we design a two-stage network structure. In the first stage, lesion features are learned from a low-resolution downsampled image, yielding segmentation results at a coarse scale. These results are concatenated with the original image and fed into the second stage to capture more accurate segmentation information. In addition, attention gates are introduced in the upsampling path of the network; these gates focus on the lesion and act as feature filters. RESULTS The proposed method was verified on 230 patients, and the anterior mediastinal lesions were well segmented. The average Dice coefficient reached 87.73%. Compared with the model without lung segmentation, the model with lung segmentation greatly improved lesion segmentation accuracy, by approximately 9%. The addition of attention gates slightly improved segmentation accuracy.
CONCLUSION The proposed automatic segmentation method achieved good results on clinical data. In clinical application, automatic segmentation of lesions can assist doctors in diagnosis and may facilitate automated diagnosis in the future.
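The attention gates described in METHOD rescale skip-connection features by a coefficient map computed from the gating signal of the coarser decoder level. A small numpy sketch of an additive attention gate follows; the shapes, weight names, and the 2D feature layout are illustrative assumptions (the paper's network is 3D).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(skip, gating, W_x, W_g, psi):
    """Additive attention gate: project skip features and the gating
    signal, combine them, and squash to per-position coefficients in
    (0, 1) that suppress background and keep lesion features."""
    alpha = sigmoid(np.maximum(skip @ W_x + gating @ W_g, 0.0) @ psi)
    return skip * alpha  # filtered skip-connection features
```

With strongly positive final weights the gate passes features through almost unchanged; with strongly negative weights it suppresses them, which is the "feature filtering" role the abstract describes.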
Affiliation(s)
- Su Huang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Xiaowei Han
- Department of Radiology, The Affiliated Drum Tower Hospital of Nanjing University Medical School, Nanjing, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Jing Chen
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Lei Du
- Department of Radiology, China-Japan Friendship Hospital, Beijing, China
- Wenwen Gao
- Department of Radiology, China-Japan Friendship Hospital, Beijing, China
- Bing Liu
- Department of Radiology, China-Japan Friendship Hospital, Beijing, China
- Yue Chen
- Department of Radiology, China-Japan Friendship Hospital, Beijing, China
- Xiuxiu Liu
- Department of Radiology, China-Japan Friendship Hospital, Beijing, China
- Yige Wang
- Department of Radiology, China-Japan Friendship Hospital, Beijing, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Guolin Ma
- Department of Radiology, China-Japan Friendship Hospital, Beijing, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
4
Tan J, Jing L, Huo Y, Li L, Akin O, Tian Y. LGAN: Lung segmentation in CT scans using generative adversarial network. Comput Med Imaging Graph 2021; 87:101817. PMID: 33278767; PMCID: PMC8477299; DOI: 10.1016/j.compmedimag.2020.101817.
Abstract
Lung segmentation in Computerized Tomography (CT) images plays an important role in the diagnosis of various lung diseases. Most current lung segmentation approaches proceed through a series of procedures with manual, empirical parameter adjustment at each step. Pursuing an automatic segmentation method with fewer steps, we propose a novel deep learning Generative Adversarial Network (GAN)-based lung segmentation schema, which we denote LGAN. The proposed schema can be generalized to different kinds of neural networks for lung segmentation in CT images. We evaluated LGAN on the Lung Image Database Consortium image collection (LIDC-IDRI) and the Quantitative Imaging Network (QIN) collection using two metrics: segmentation quality and shape similarity. We also compared our work with current state-of-the-art methods. The experimental results demonstrate that the proposed LGAN schema is a promising tool for automatic lung segmentation owing to its simplified procedure as well as its improved performance and efficiency.
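The two evaluation axes mentioned, segmentation quality and shape similarity, are commonly instantiated as Dice overlap and a boundary distance. A small numpy sketch follows; the symmetric Hausdorff distance is a stand-in assumption, not necessarily the paper's exact shape metric.

```python
import numpy as np

def dice(a, b):
    """Overlap-based segmentation quality: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two boundary point sets:
    the worst-case nearest-neighbor distance in either direction."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Dice rewards volumetric agreement, while the boundary distance catches shape errors (e.g., a leaked contour) that a volume-overlap score can hide.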
Affiliation(s)
- Jiaxing Tan
- The City University of New York, New York 10016, USA
- Longlong Jing
- The City University of New York, New York 10016, USA
- Yumei Huo
- The City University of New York, New York 10016, USA
- Lihong Li
- The City University of New York, New York 10016, USA
- Oguz Akin
- Memorial Sloan Kettering Cancer Center, New York 10065, USA
- Yingli Tian
- The City University of New York, New York 10016, USA
5
Zhao Z, Jordan S, Tse ZTH. Devices for image-guided lung interventions: State-of-the-art review. Proc Inst Mech Eng H 2019; 233:444-463. DOI: 10.1177/0954411919832042.
Abstract
Lung cancer is the leading cause of cancer-related death. According to the American Cancer Society, there were an estimated 222,500 new cases of lung cancer and 155,870 deaths from lung cancer in the United States in 2017. Accurate localization in lung interventions is one of the keys to reducing the death rate from lung cancer. In this study, a total of 217 publications from 2006 to 2017 about designs of medical devices for localization in lung interventions were screened, shortlisted, and categorized by localization principle and reviewed for functionality. Each study was analyzed for engineering characteristics and clinical significance. Research regarding interventional imaging equipment, navigation systems, and surgical devices was reviewed, and both research prototypes and commercial products were discussed. Finally, the future directions and existing challenges were summarized, including real-time intra-procedure guidance, accuracy of localization, clinical application, clinical adoptability, and clinical regulatory issues.
Affiliation(s)
- Zhuo Zhao
- School of Electrical and Computer Engineering, College of Engineering, University of Georgia, Athens, GA, USA
- Sophie Jordan
- School of Electrical and Computer Engineering, College of Engineering, University of Georgia, Athens, GA, USA
- Zion Tsz Ho Tse
- School of Electrical and Computer Engineering, College of Engineering, University of Georgia, Athens, GA, USA
- 3T Technologies LLC, Atlanta, GA, USA
6
Abstract
Lung cancer is the leading cause of cancer-related deaths. Many methods and devices help acquire more accurate clinical and localization information during lung interventions and may reduce the death rate from lung cancer. However, operating these tools has a learning curve due to the complex structure of the airway. In this study, we first discuss the creation of a lung phantom model from medical images, followed by a comparison of 3D printing approaches in terms of quality and consistency. Two tests were conducted to assess the performance of the developed phantom, which was designed for training simulations of targeting and ablation in endobronchial interventions. The targeting test used an electromagnetically tracked catheter with navigation software; the ablation test used an ablation catheter with a recently developed thermochromic ablation gel. The results of both tests show that the phantom is very useful for targeting and ablation simulation, and the thermochromic gel allowed doctors to visualize the ablation zone. Many lung interventions may benefit from customized training and improved accuracy with the proposed low-cost, patient-specific phantom.
7
Comparison of image registration methods for composing spectral retinal images. Biomed Signal Process Control 2017. DOI: 10.1016/j.bspc.2017.03.003.
8
Swierczynski P, Papież BW, Schnabel JA, Macdonald C. A level-set approach to joint image segmentation and registration with application to CT lung imaging. Comput Med Imaging Graph 2017; 65:58-68. PMID: 28705410; PMCID: PMC5885990; DOI: 10.1016/j.compmedimag.2017.06.003.
Abstract
Automated analysis of structural imaging such as lung Computed Tomography (CT) plays an increasingly important role in medical imaging applications. Despite significant progress in the development of image registration and segmentation methods, lung registration and segmentation remain challenging tasks. In this paper, we present a novel image registration and segmentation approach, for which we develop a new mathematical formulation to jointly segment and register three-dimensional lung CT volumes. The new algorithm is based on a level-set formulation that merges a classic Chan–Vese segmentation with active dense displacement field estimation. Combining registration with segmentation has two key advantages: it eliminates the problem of initializing surface-based segmentation methods, and it incorporates prior knowledge into the registration in a mathematically justified manner while remaining computationally attractive. We evaluate our framework on a publicly available lung CT data set to demonstrate the properties of the new formulation. The presented results show improved accuracy for our joint segmentation and registration algorithm compared with registration and segmentation performed separately.
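A minimal numpy sketch of the Chan–Vese region energy that the joint formulation builds on: it penalizes intensity variance inside and outside the segmented region. The coupling with the dense displacement field, which is the paper's actual contribution, is omitted here.

```python
import numpy as np

def chan_vese_energy(img, mask, lam1=1.0, lam2=1.0):
    """Chan-Vese data term for a binary mask: squared deviation of each
    region from its mean intensity, summed inside and outside."""
    inside, outside = img[mask], img[~mask]
    c1 = inside.mean() if inside.size else 0.0
    c2 = outside.mean() if outside.size else 0.0
    return lam1 * ((inside - c1) ** 2).sum() + lam2 * ((outside - c2) ** 2).sum()
```

A mask that splits the image into two homogeneous regions has zero energy; a misaligned mask mixes intensities and raises the energy, which is what drives the joint optimization toward consistent segmentation and registration.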
Affiliation(s)
- Piotr Swierczynski
- Institute for Numerical Mathematics, Technische Universität München, Germany
- Bartłomiej W Papież
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, UK
- Julia A Schnabel
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, UK; Division of Imaging Sciences & Biomedical Engineering, King's College London, UK
- Colin Macdonald
- Department of Mathematics, The University of British Columbia, Canada
9
Markel D, Levesque I, Larkin J, Léger P, El Naqa I. A 4D biomechanical lung phantom for joint segmentation/registration evaluation. Phys Med Biol 2016; 61:7012-7030. DOI: 10.1088/0031-9155/61/19/7012.
10
11
Bayesian longitudinal segmentation of hippocampal substructures in brain MRI using subject-specific atlases. Neuroimage 2016; 141:542-555. PMID: 27426838; DOI: 10.1016/j.neuroimage.2016.07.020.
Abstract
The hippocampal formation is a complex, heterogeneous structure consisting of a number of distinct, interacting subregions. Atrophy of these subregions is implicated in a variety of neurodegenerative diseases, most prominently Alzheimer's disease (AD). Thanks to the increasing resolution of MR images and computational atlases, automatic segmentation of hippocampal subregions is becoming feasible in MRI scans. Here we introduce a generative model for dedicated longitudinal segmentation that relies on subject-specific atlases. The segmentations of the scans at the different time points are jointly computed using Bayesian inference. All time points are treated the same to avoid processing bias. We evaluate this approach using over 4,700 scans from two publicly available datasets (ADNI and MIRIAD). In test-retest reliability experiments, the proposed method yielded significantly lower volume differences and significantly higher Dice overlaps than the cross-sectional approach for nearly every subregion (average volume difference across subregions: 4.5% vs. 6.5%; Dice overlap: 81.8% vs. 75.4%). The longitudinal algorithm also demonstrated increased sensitivity to group differences: in MIRIAD (69 subjects: 46 with AD and 23 controls), it found differences in atrophy rates between AD and controls that the cross-sectional method could not detect in a number of subregions: right parasubiculum, left and right presubiculum, right subiculum, left dentate gyrus, left CA4, left HATA, and right tail. In ADNI (836 subjects: 369 with AD, 215 with early mild cognitive impairment (eMCI), and 252 controls), all methods found significant differences between AD and controls, but the proposed longitudinal algorithm detected differences between controls and eMCI and between eMCI and AD that the cross-sectional method could not find: left presubiculum, right subiculum, left and right parasubiculum, left and right HATA. Moreover, many of the differences that the cross-sectional method already found were detected with higher significance. The presented algorithm will be made available as part of the open-source neuroimaging package FreeSurfer.
12
He T, Xue Z, Teh BS, Wong ST. Reconstruction of four-dimensional computed tomography lung images by applying spatial and temporal anatomical constraints using a Bayesian model. J Med Imaging (Bellingham) 2015; 2:024004. PMID: 26158099; DOI: 10.1117/1.jmi.2.2.024004.
Abstract
Current four-dimensional computed tomography (4-D CT) lung image reconstruction methods rely on respiratory gating, such as surrogate signals, to sort the large number of axial images captured over multiple breathing cycles into serial three-dimensional CT images of different respiratory phases. Such sorting methods may be subject to external surrogate signal noise due to poor reproducibility of breathing cycles. Newer image-matching-based reconstruction algorithms refine the 4-D CT reconstruction by matching neighboring image slices; they generally work better for the cine mode of 4-D CT acquisition than for the helical mode, because of the different table positions of axial images in the helical mode. We propose a Bayesian model (BM)-based automated 4-D CT lung image reconstruction for helical-mode scans. The BM allows new spatial and temporal anatomical constraints to be applied in the optimization procedure. Using an iterative optimization procedure, each axial image is assigned to a respiratory phase so that the anatomical structures are spatially and temporally smooth under the BM framework. In experiments, we visually and quantitatively compared the results of the proposed BM-based 4-D CT reconstruction with the respiratory surrogate and with a normalized cross-correlation-based image matching method, using both simulated and actual 4-D patient scans. The results indicated that the proposed algorithm yielded more accurate reconstruction and fewer artifacts in the 4-D CT image series.
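One way to picture the phase-assignment idea with smoothness constraints: a cost that trades a data term (how well each slice matches its assigned phase) against a penalty on phase jumps between spatially adjacent slices. This toy cost, including the β weight and the feature-vector representation, is an illustrative assumption and not the paper's actual Bayesian model.

```python
import numpy as np

def assignment_cost(slice_feats, phase_feats, assign, beta=1.0):
    """Cost of assigning each axial slice to a respiratory phase:
    feature mismatch to the assigned phase plus a smoothness penalty
    on phase changes between neighboring slices."""
    data = sum(np.linalg.norm(slice_feats[i] - phase_feats[p])
               for i, p in enumerate(assign))
    smooth = sum(abs(assign[i] - assign[i - 1]) for i in range(1, len(assign)))
    return data + beta * smooth
```

Minimizing such a cost over all assignments favors phase labelings that fit the image data while staying spatially coherent, which is the role the spatial/temporal constraints play in the optimization.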
Affiliation(s)
- Tiancheng He
- Weill Cornell Medical College, Houston Methodist Research Institute, Department of Systems Medicine and Bioengineering, Houston, Texas 77030, United States
- Zhong Xue
- Weill Cornell Medical College, Houston Methodist Research Institute, Department of Systems Medicine and Bioengineering, Houston, Texas 77030, United States
- Bin S Teh
- Weill Cornell Medical College, Houston Methodist Hospital, Department of Radiation Oncology, Houston, Texas 77030, United States
- Stephen T Wong
- Weill Cornell Medical College, Houston Methodist Research Institute, Department of Systems Medicine and Bioengineering, Houston, Texas 77030, United States
13
Estimating dynamic lung images from high-dimension chest surface motion using 4D statistical model. ACTA ACUST UNITED AC 2015; 17:138-45. PMID: 25485372; DOI: 10.1007/978-3-319-10470-6_18.
Abstract
Computed Tomography (CT) has been widely used in image-guided procedures such as intervention and radiotherapy for lung cancer. However, due to poor reproducibility of breath holds and respiratory cycles, discrepancies between static images and the patient's current lung shape and tumor location can reduce the accuracy of image guidance. Current methods either use multiple intra-procedural scans or monitor respiratory motion with tracking sensors. Although intra-procedural scanning provides more accurate information, it increases the radiation dose and still provides only snapshots of the patient's chest. Tracking-based breath monitoring techniques can effectively detect respiratory phases but do not yet provide accurate tumor shape and location, because the signals are low-dimensional. Therefore, estimating lung motion and generating dynamic CT images from high-dimensional sensor signals captured in real time is a key component of image-guided procedures. This paper applies a principal component analysis (PCA)-based statistical model, built on a template space, to establish the relationship between lung motion and chest surface motion from training samples, and then uses this model to estimate dynamic images for a new patient from the chest surface motion. Qualitative and quantitative results showed that the proposed high-dimensional estimation algorithm yielded more accurate 4D-CT than fiducial-marker-based estimation.
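The PCA-based surface-to-lung mapping can be sketched in a few lines of numpy: learn the principal modes of the training lung motions, then fit a least-squares map from surface signals to the mode coefficients. The linear-regression choice, the centering scheme, and the function names are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def fit_surface_to_lung_model(surface_train, lung_train, k=3):
    """Learn (i) the top-k PCA modes of the training lung-motion fields
    and (ii) a least-squares linear map from chest-surface signals to
    the corresponding PCA coefficients."""
    s_mean = surface_train.mean(axis=0)
    l_mean = lung_train.mean(axis=0)
    _, _, vt = np.linalg.svd(lung_train - l_mean, full_matrices=False)
    modes = vt[:k]                            # (k, D) principal modes
    coeffs = (lung_train - l_mean) @ modes.T  # (N, k) training coefficients
    W, *_ = np.linalg.lstsq(surface_train - s_mean, coeffs, rcond=None)
    return s_mean, l_mean, modes, W

def estimate_lung_motion(surface, s_mean, l_mean, modes, W):
    """Map a new chest-surface measurement to an estimated lung-motion field."""
    return l_mean + ((surface - s_mean) @ W) @ modes
```

For a new patient, only the cheap surface measurement is needed at runtime; the expensive lung-motion statistics are baked into the modes and the regression weights.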
14
Hodneland E, Hanson EA, Lundervold A, Modersitzki J, Eikefjord E, Munthe-Kaas AZ. Segmentation-driven image registration: application to 4D DCE-MRI recordings of the moving kidneys. IEEE Trans Image Process 2014; 23:2392-2404. PMID: 24710831; DOI: 10.1109/tip.2014.2315155.
Abstract
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) of the kidneys requires proper motion correction and segmentation to enable estimation of the glomerular filtration rate through pharmacokinetic modeling. Traditionally, co-registration, segmentation, and pharmacokinetic modeling have been applied sequentially as separate processing steps. In this paper, a combined 4D model for simultaneous registration and segmentation of the whole kidney is presented. To demonstrate the model in numerical experiments, we used normalized gradients as the data term in the registration and, for supervised segmentation, a Mahalanobis distance from the time courses of the segmented regions to a training set. By applying this framework to 4D image time series, we conduct simultaneous motion correction and two-region segmentation into kidney and background. The potential of the new approach is demonstrated on real DCE-MRI data from ten healthy volunteers.
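The supervised segmentation term named in the abstract, a Mahalanobis distance from a time course to a training set of time courses, can be written as a short numpy sketch; the feature layout (one time course per row) is an assumption.

```python
import numpy as np

def mahalanobis(x, train):
    """Mahalanobis distance of a time course x to a training set of
    time courses (rows of `train`); a small distance means the voxel's
    enhancement curve behaves like the kidney class."""
    mu = train.mean(axis=0)
    cov = np.cov(train, rowvar=False)
    diff = x - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))
```

Unlike a plain Euclidean distance, this accounts for the covariance of the training time courses, so directions of naturally high variability are penalized less.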
15
Su P, Yang J, Lu K, Yu N, Wong ST, Xue Z. A Fast CT and CT-Fluoroscopy Registration Algorithm With Respiratory Motion Compensation for Image-Guided Lung Intervention. IEEE Trans Biomed Eng 2013; 60:2034-41. DOI: 10.1109/tbme.2013.2245895.
16
He T, Xue Z, Lu K, Valdivia y Alvarado M, Wong KK, Xie W, Wong ST. A minimally invasive multimodality image-guided (MIMIG) system for peripheral lung cancer intervention and diagnosis. Comput Med Imaging Graph 2012; 36:345-55. PMID: 22483054; DOI: 10.1016/j.compmedimag.2012.03.002.
Abstract
BACKGROUND Lung cancer is the leading cause of cancer-related death in the United States, with more than half of cancers located peripherally. Computed tomography (CT) has been utilized in the last decade to detect early peripheral lung cancer. However, due to the high false-diagnosis rate of CT, further biopsy is often necessary to confirm cancerous cases. This renders intervention for peripheral lung nodules (especially small peripheral lung cancers) difficult and time-consuming, and it is highly desirable to develop new strategies for on-the-spot, early lung cancer diagnosis and treatment. PURPOSE The objective of this study is to develop a minimally invasive multimodality image-guided (MIMIG) intervention system to detect lesions, confirm small peripheral lung cancer, and potentially guide on-the-spot treatment at an early stage. Accurate image guidance and real-time optical imaging of nodules are thus the key techniques explored in this work. METHODS The MIMIG system uses CT images and electromagnetic (EM) tracking to help interventional radiologists target the lesion efficiently. After targeting the lesion, a fiber-optic probe coupled with optical molecular imaging contrast agents is used to confirm the existence of cancerous tissue on-site at microscopic resolution. Using the software developed, pulmonary vessels, airways, and nodules can be segmented and visualized for surgical planning; the segmented results are then transformed onto the intra-procedural CT for interventional guidance using EM tracking. Endomicroscopy through a fiber-optic probe is then performed to visualize tumor tissue. Experiments using the IntegriSense 680 fluorescent contrast agent, which labels αvβ3 integrin, were carried out in rabbit lung cancer models. Confirmed cancers could then be treated on the spot using radio-frequency ablation (RFA).
RESULTS The prototype system was evaluated using the rabbit VX2 lung cancer model to assess targeting accuracy, guidance efficiency, and molecular imaging performance. Using this system, we achieved an average targeting accuracy of 3.04 mm, and the IntegriSense signals within the VX2 tumors were at least two-fold higher than those of normal tissues. The results demonstrate great potential for applying the system in human trials in the future if an optical molecular imaging agent is approved by the Food and Drug Administration (FDA). CONCLUSIONS The MIMIG system was developed for on-the-spot interventional diagnosis of peripheral lung tumors by combining image guidance and molecular imaging. The system could potentially be applied in human trials for diagnosing and treating early-stage lung cancer. For current clinical applications, where a biopsy is unavoidable, the MIMIG system without contrast agents could be used for biopsy guidance to improve accuracy and efficiency.
Affiliation(s)
- Tiancheng He
- Department of Systems Medicine and Bioengineering, The Methodist Hospital Research Institute, Weill Cornell Medical College, Houston, TX, United States
17
Valdivia Y Alvarado M, Wong K, He TC, Xue Z, Wong ST. Image-guided fiberoptic molecular imaging in a VX2 rabbit lung tumor model. J Vasc Interv Radiol 2011; 22:1758-64. PMID: 22019854; DOI: 10.1016/j.jvir.2011.08.025.
Abstract
PURPOSE To show the feasibility of computed tomography (CT) image-guided fiberoptic confocal fluorescence molecular imaging in a rabbit lung tumor model. MATERIALS AND METHODS Eight lung tumor models were created by injection of a VX2 cell suspension. The fluorescent imaging agent IntegriSense 680 was given to the animals 3.5-4 hours before the procedure. CT images were obtained and transferred to the minimally invasive multimodality image-guided (MIMIG) system as a guidance map. A real-time electromagnetically tracked needle was inserted under the visual guidance of the MIMIG system. A second CT image was obtained to confirm the location of the needle tip. Next, fiberoptic fluorescence imaging was acquired along the needle track. Finally, tumor samples were obtained for histopathologic confirmation. RESULTS All cases were performed during breath-hold. Tumor size was 12.5 mm ± 1.6; the distance from the chest wall was 2.1 mm ± 0.5. The needle tip reached the tumor in all cases with an accuracy of 3.3 mm ± 1.6. Only one skin entry point was necessary, and no needle adjustments were required. No pneumothorax was observed. At least two-fold α(v)β(3) integrin image contrast was detected in the tumor compared with normal lung tissue. Tumor samples were confirmed to have viable VX2 cells and contrast uptake. CONCLUSIONS The MIMIG system enables effective in situ fluorescence molecular imaging in a needle biopsy lung procedure. In situ α(v)β(3) integrin molecular imaging allows molecular characterization of lung tumors at multiple regions and can be used to guide biopsy procedures.
Collapse
Affiliation(s)
- Miguel Valdivia Y Alvarado
- Department of Systems Medicine and Bioengineering, Weill Cornell Medical College of Cornell University, Houston, TX 77030, USA
| |
Collapse
|
18
|
Gao X, Xue Z, Xing J, Lee DY, Gottschalk SM, Heslop HH, Bollard CM, Wong ST. Computer-assisted quantitative evaluation of therapeutic responses for lymphoma using serial PET/CT imaging. Acad Radiol 2010; 17:479-88. [PMID: 20060747 PMCID: PMC2846835 DOI: 10.1016/j.acra.2009.10.026] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2009] [Revised: 10/26/2009] [Accepted: 10/27/2009] [Indexed: 10/20/2022]
Abstract
RATIONALE AND OBJECTIVES Molecular imaging modalities such as positron emission tomography (PET)/computed tomography (CT) have emerged as essential diagnostic tools for monitoring treatment response in lymphoma patients. However, quantitative assessment of treatment outcomes from serial scans is often difficult, laborious, and time consuming. Automatic quantification of longitudinal PET/CT scans provides a more efficient and comprehensive evaluation of cancer therapeutic responses. This study develops and validates a Longitudinal Image Navigation and Analysis (LINA) system for this quantitative imaging application. MATERIALS AND METHODS LINA automatically constructs longitudinal correspondences across the serial images of an individual patient: regions of interest (ROIs) segmented from an image at a given time point are propagated into the space of all follow-up PET/CT images to track changes in tumor volume and metabolic activity. We applied LINA retrospectively to nine lymphoma patients enrolled in an immunotherapy clinical trial conducted at the Center for Cell and Gene Therapy, Baylor College of Medicine. The results were compared with the readout of a diagnostic radiologist, who manually measured ROI metabolic activity as defined by the maximum standardized uptake value (SUVmax). RESULTS Quantitative results showed that the SUVs obtained from automatic mapping are as accurate as those from semiautomatic segmentation and consistent with clinical examination findings. The average relative squared difference of SUVmax between automatic and semiautomatic segmentation was found to be 0.02. CONCLUSIONS These data support a role for LINA in facilitating quantitative analysis of serial PET/CT images to efficiently assess cancer treatment responses in a comprehensive and intuitive software platform.
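The agreement metric quoted in this abstract (average relative squared difference of SUVmax, reported as 0.02) can be sketched as below. The SUVmax values are hypothetical, and normalizing by the semiautomatic reading is an assumption, since the abstract does not give the exact normalization.

```python
import numpy as np

# Hypothetical SUVmax readings for the same ROIs, obtained by the
# automatic mapping and by semiautomatic segmentation.
suv_auto = np.array([8.2, 5.1, 12.4, 3.3])
suv_semi = np.array([8.0, 5.4, 12.0, 3.5])

# Relative squared difference per ROI, normalized by the semiautomatic
# reading (assumed to be the reference), then averaged across ROIs.
rel_sq_diff = ((suv_auto - suv_semi) / suv_semi) ** 2
avg = rel_sq_diff.mean()
print(f"average relative squared difference: {avg:.4f}")
```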
Collapse
Affiliation(s)
- Xin Gao
- Center for Bioengineering and Informatics, The Methodist Hospital and The Methodist Hospital Research Institute, Weill Medical College of Cornell University, Houston, TX
| | - Zhong Xue
- Center for Bioengineering and Informatics, The Methodist Hospital and The Methodist Hospital Research Institute, Weill Medical College of Cornell University, Houston, TX
| | - Jiong Xing
- Center for Bioengineering and Informatics, The Methodist Hospital and The Methodist Hospital Research Institute, Weill Medical College of Cornell University, Houston, TX
| | - Daniel Y. Lee
- Department of Radiology, The Methodist Hospital and The Methodist Hospital Research Institute, Weill Medical College of Cornell University, Houston, TX
| | - Stephen M. Gottschalk
- Department of Pediatrics, Section of Hematology/Oncology, Texas Children’s Hospital, Baylor College of Medicine, Houston, TX
| | - Helen H. Heslop
- Department of Pediatrics, Section of Hematology/Oncology, Texas Children’s Hospital, Baylor College of Medicine, Houston, TX
| | - Catherine M. Bollard
- Department of Pediatrics, Section of Hematology/Oncology, Texas Children’s Hospital, Baylor College of Medicine, Houston, TX
| | - Stephen T.C. Wong
- Center for Bioengineering and Informatics, The Methodist Hospital and The Methodist Hospital Research Institute, Weill Medical College of Cornell University, Houston, TX
| |
Collapse
|
19
|
Online 4-D CT Estimation for Patient-Specific Respiratory Motion Based on Real-Time Breathing Signals. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2010. 2010; 13:392-9. [DOI: 10.1007/978-3-642-15711-0_49] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
20
|
A Minimally Invasive Multimodality Image-Guided (MIMIG) Molecular Imaging System for Peripheral Lung Cancer Intervention and Diagnosis. 2010. [DOI: 10.1007/978-3-642-13711-2_10] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register]
|