1. Ouyang B, Yang Q, Wang X, He H, Ma L, Yang Q, Zhou Z, Cai S, Chen Z, Wu Z, Zhong J, Cai C. Single-shot T2 mapping via multi-echo-train multiple overlapping-echo detachment planar imaging and multitask deep learning. Med Phys 2022;49:7095-7107. [PMID: 35765150] [DOI: 10.1002/mp.15820]
Abstract
BACKGROUND Quantitative magnetic resonance imaging provides robust biomarkers in clinics. Nevertheless, the lengthy scan time reduces imaging throughput and increases the susceptibility of imaging results to motion. In this context, a single-shot T2 mapping method based on multiple overlapping-echo detachment (MOLED) planar imaging was previously presented, but its relatively small echo time range limits its accuracy, especially in tissues with large T2. PURPOSE In this work we proposed a novel single-shot method, Multi-Echo-Train Multiple OverLapping-Echo Detachment (METMOLED) planar imaging, to accommodate a large range of T2 quantification without additional measurements to rectify signal degeneration arising from refocusing-pulse imperfection. METHODS Multiple echo-train techniques were integrated into the MOLED sequence to capture larger TE information. Maps of T2, B1, and spin density were reconstructed synchronously from the acquired METMOLED data via multitask deep learning. A typical U-Net was trained with 3000/600 synthetic data with geometric/brain patterns to learn the mapping relationship between METMOLED signals and quantitative maps. The refocusing-pulse imperfection was addressed through the inherent information of the METMOLED data and auxiliary tasks. RESULTS Experimental results on the digital brain (structural similarity (SSIM) index = 0.975/0.991/0.988 for MOLED/METMOLED-2/METMOLED-3, where the hyphenated number denotes the number of echo trains), physical phantom (slope of linear fitting with the reference T2 map = 1.047/1.017/1.006 for MOLED/METMOLED-2/METMOLED-3), and human brain (Pearson's correlation coefficient (PCC) = 0.9581/0.9760/0.9900 for MOLED/METMOLED-2/METMOLED-3) demonstrated that METMOLED improved the quantitative accuracy and tissue details compared with MOLED. These improvements were more pronounced in tissues with large T2 and in application scenarios with high temporal resolution (PCC = 0.8692/0.9465/0.9743 for MOLED/METMOLED-2/METMOLED-3). Moreover, METMOLED could rectify the signal deviations induced by the non-ideal slice profiles of refocusing pulses without additional measurements. A preliminary measurement also demonstrated that METMOLED is highly repeatable (mean coefficient of variation (CV) = 1.65%). CONCLUSIONS METMOLED breaks the restriction of echo-train length on TE and yields unbiased T2 estimates over a wide range. Furthermore, it corrects the effect of refocusing-pulse inaccuracy without additional measurements or signal post-processing, thus retaining its single-shot characteristic. This technique would be beneficial for accurate T2 quantification.
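A minimal sketch of the mono-exponential relaxation model that underlies T2 mapping, S(TE) = S0 * exp(-TE/T2), fitted voxel-wise by log-linear least squares. This is not the MOLED/METMOLED reconstruction itself (which relies on a trained U-Net); the echo times and synthetic voxels below are illustrative assumptions.

```python
# Conventional voxel-wise T2 estimation from multi-echo magnitude signals via a
# log-linear least-squares fit of S(TE) = S0 * exp(-TE / T2). Not the METMOLED
# pipeline; echo times and map size are illustrative.
import numpy as np

def fit_t2_loglinear(signals, tes):
    """signals: (n_echoes, n_voxels) magnitude data; tes: (n_echoes,) in ms."""
    logs = np.log(np.clip(signals, 1e-8, None))        # avoid log(0)
    A = np.stack([np.ones_like(tes), -tes], axis=1)    # columns: [ln S0, 1/T2]
    coef, *_ = np.linalg.lstsq(A, logs, rcond=None)
    s0 = np.exp(coef[0])
    t2 = 1.0 / np.clip(coef[1], 1e-6, None)            # ms
    return s0, t2

# Synthetic example: 3 voxels with known T2 values (ms)
tes = np.array([20.0, 50.0, 90.0, 140.0])
true_t2 = np.array([60.0, 110.0, 250.0])
signals = np.exp(-tes[:, None] / true_t2[None, :])     # unit proton density
signals += 0.002 * np.random.default_rng(0).standard_normal(signals.shape)
_, t2_hat = fit_t2_loglinear(signals, tes)
print(np.round(t2_hat, 1))  # close to [60, 110, 250]
```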
Affiliation(s)
- Binyu Ouyang, Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, Fujian, 361005, China
- Qizhi Yang, Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, Fujian, 361005, China
- Xiaoyin Wang, Center for Brain Imaging Science and Technology, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, 310058, China
- Hongjian He, Center for Brain Imaging Science and Technology, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, 310058, China
- Lingceng Ma, Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, Fujian, 361005, China
- Qinqin Yang, Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, Fujian, 361005, China
- Zihan Zhou, Center for Brain Imaging Science and Technology, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, 310058, China
- Shuhui Cai, Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, Fujian, 361005, China
- Zhong Chen, Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, Fujian, 361005, China
- Zhigang Wu, MSC Clinical and Technical Solutions, Philips Healthcare, Shenzhen, Guangdong, 518005, China
- Jianhui Zhong, Center for Brain Imaging Science and Technology, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, 310058, China; Department of Imaging Sciences, University of Rochester, Rochester, New York, 14642, USA
- Congbo Cai, Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, Fujian, 361005, China
2. Lei Y, Momin S, Tian Z, Roper J, Lin J, Kahn S, Shu HK, Bradley JD, Liu T, Yang X. Brain multi-parametric MRI tumor subregion segmentation via hierarchical substructural activation network. Medical Imaging 2022: Biomedical Applications in Molecular, Structural, and Functional Imaging 2022:25. [DOI: 10.1117/12.2611809]
3. Mamatha SK, Krishnappa HK, Shalini N. Graph Theory Based Segmentation of Magnetic Resonance Images for Brain Tumor Detection. Pattern Recognition and Image Analysis 2022. [DOI: 10.1134/s1054661821040167]
4. Siriapisith T, Kusakunniran W, Haddawy P. Pyramid graph cut: Integrating intensity and gradient information for grayscale medical image segmentation. Comput Biol Med 2020;126:103997. [PMID: 32987203] [DOI: 10.1016/j.compbiomed.2020.103997]
Abstract
Segmentation of grayscale medical images is challenging because of the similarity of pixel intensities and poor gradient strength between adjacent regions. The existing image segmentation approaches based on either intensity or gradient information alone often fail to produce accurate segmentation results. Previous work in the literature has approached the problem by embedded or sequential integration of different information types to improve the performance of the image segmentation on specific tasks. However, an effective combination or integration of such information is difficult to implement and not sufficiently generic for closely related tasks. Integration of the two information sources in a single graph structure is a potentially more effective way to solve the problem. In this paper, we introduce a novel technique for grayscale medical image segmentation called pyramid graph cut, which combines intensity and gradient sources of information in a pyramid-shaped graph structure using a single source node and multiple sink nodes. The source node, which is the top of the pyramid graph, embeds intensity information into its linked edges. The sink nodes, which are the base of the pyramid graph, embed gradient information into their linked edges. The min-cut uses intensity information and gradient information, depending on which one is more useful or has a higher influence in each cutting location of each iteration. The experimental results demonstrate the effectiveness of the proposed method over intensity-based segmentation alone (i.e. Gaussian mixture model) and gradient-based segmentation alone (i.e. distance regularized level set evolution) on grayscale medical image datasets, including the public 3DIRCADb-01 dataset. The proposed method achieves excellent segmentation results on sample CT of abdominal aortic aneurysm, MRI of liver tumor, and US of liver tumor, with Dice scores of 90.49±5.23%, 88.86±11.77%, and 90.68±2.45%, respectively.
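As a point of reference for the comparison above, the following sketch shows the intensity-only Gaussian mixture model baseline the paper compares against, together with the Dice score used for evaluation. The pyramid graph cut itself is not reproduced; the synthetic image and threshold-derived ground truth are illustrative assumptions.

```python
# Intensity-only GMM segmentation baseline plus Dice evaluation (a sketch, not
# the pyramid graph cut); synthetic 2D image stands in for a grayscale scan.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_segment(image, n_classes=2, seed=0):
    """Cluster pixel intensities with a Gaussian mixture model."""
    gmm = GaussianMixture(n_components=n_classes, random_state=seed)
    labels = gmm.fit_predict(image.reshape(-1, 1)).reshape(image.shape)
    # Make the brighter component the foreground so the mask is comparable.
    return (labels == np.argmax(gmm.means_.ravel())).astype(np.uint8)

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

rng = np.random.default_rng(0)
truth = np.zeros((64, 64), dtype=np.uint8)
truth[16:48, 16:48] = 1                                    # bright square "lesion"
image = truth * 1.0 + rng.normal(0.0, 0.2, truth.shape)    # noisy grayscale image

mask = gmm_segment(image)
print(f"Dice vs. ground truth: {dice(mask, truth):.3f}")
```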
Affiliation(s)
- Thanongchai Siriapisith, Department of Radiology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, 10700, Thailand
- Worapan Kusakunniran, Faculty of Information and Communication Technology, Mahidol University, Nakhonpathom, 73170, Thailand
- Peter Haddawy, Faculty of Information and Communication Technology, Mahidol University, Nakhonpathom, 73170, Thailand; Bremen Spatial Cognition Center, University of Bremen, Bremen, Germany
5. Li X, Li B, Liu F, Yin H, Zhou F. Segmentation of Pulmonary Nodules Using a GMM Fuzzy C-Means Algorithm. IEEE Access 2020;8:37541-37556. [DOI: 10.1109/access.2020.2968936]
6. Lei Y, Shu HK, Tian S, Wang T, Liu T, Mao H, Shim H, Curran WJ, Yang X. Pseudo CT Estimation using Patch-based Joint Dictionary Learning. Annu Int Conf IEEE Eng Med Biol Soc 2019;2018:5150-5153. [PMID: 30441499] [DOI: 10.1109/embc.2018.8513475]
Abstract
Magnetic resonance (MR) simulators have recently gained popularity because they avoid the unnecessary radiation exposure associated with computed tomography (CT) when used for radiation therapy planning. We propose a method for pseudo CT estimation from MR images based on joint dictionary learning. Patient-specific anatomical features were extracted from the aligned training images and adopted as signatures for each voxel. The most relevant and informative features were identified to train the joint dictionary learning-based model. The well-trained dictionary was used to predict the pseudo CT of a new patient. This prediction technique was validated with a clinical study of 12 patients with MR and CT images of the brain. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross correlation (NCC) indexes were used to quantify the prediction accuracy. We compared our proposed method with a state-of-the-art dictionary learning method. Overall, our proposed method significantly improves the prediction accuracy over the state-of-the-art dictionary learning method. We have investigated a novel joint dictionary learning-based approach to predict CT images from routine MRIs and demonstrated its reliability. This CT prediction technique could be a useful tool for MRI-based radiation treatment planning or for attenuation correction to quantify PET images in PET/MR imaging.
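The evaluation metrics named above (MAE, PSNR, NCC) can be computed as in the following sketch; the random volumes stand in for a reference CT and a predicted pseudo CT and are illustrative, not data from the study.

```python
# Prediction-accuracy metrics for pseudo-CT evaluation: mean absolute error,
# peak signal-to-noise ratio, and normalized cross correlation.
import numpy as np

def mae(pred, ref):
    return np.mean(np.abs(pred - ref))

def psnr(pred, ref, data_range=None):
    data_range = data_range or (ref.max() - ref.min())
    mse = np.mean((pred - ref) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ncc(pred, ref):
    p = (pred - pred.mean()) / pred.std()
    r = (ref - ref.mean()) / ref.std()
    return np.mean(p * r)

rng = np.random.default_rng(0)
ct = rng.uniform(-1000, 1500, size=(64, 64, 32))           # Hounsfield-like volume
pseudo_ct = ct + rng.normal(0, 60, size=ct.shape)           # imperfect prediction
print(f"MAE  = {mae(pseudo_ct, ct):.1f} HU")
print(f"PSNR = {psnr(pseudo_ct, ct):.1f} dB")
print(f"NCC  = {ncc(pseudo_ct, ct):.3f}")
```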
7. Nayak DR, Dash R, Majhi B, Zhang Y. A hybrid regularized extreme learning machine for automated detection of pathological brain. Biocybern Biomed Eng 2019. [DOI: 10.1016/j.bbe.2019.08.005]
9. Kong Y, Chen X, Wu J, Zhang P, Chen Y, Shu H. Automatic brain tissue segmentation based on graph filter. BMC Med Imaging 2018;18:9. [PMID: 29739350] [PMCID: PMC5941431] [DOI: 10.1186/s12880-018-0252-x]
Abstract
Background Accurate segmentation of brain tissues from magnetic resonance imaging (MRI) is of significant importance in clinical applications and neuroscience research. Accurate segmentation is challenging due to tissue heterogeneity, which is caused by noise, bias field, and partial volume effects. Methods To address this challenge, this paper presents a novel algorithm for brain tissue segmentation based on supervoxels and a graph filter. Firstly, a supervoxel method is employed to generate effective supervoxels for the 3D MRI image. Secondly, the supervoxels are classified into different types of tissues based on filtering of graph signals. Results The performance is evaluated on the BrainWeb 18 dataset and the Internet Brain Segmentation Repository (IBSR) 18 dataset. The proposed method achieves a mean Dice similarity coefficient (DSC) of 0.94, 0.92 and 0.90 for the segmentation of white matter (WM), grey matter (GM) and cerebrospinal fluid (CSF) for the BrainWeb 18 dataset, and a mean DSC of 0.85, 0.87 and 0.57 for the segmentation of WM, GM and CSF for the IBSR 18 dataset. Conclusions The proposed approach can discriminate different types of brain tissue in brain MRI and has high potential for clinical applications.
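A minimal sketch of the first stage described above: generating supervoxels for a 3D grayscale volume with SLIC and summarizing each supervoxel by a simple feature. The graph-filter classification is not reproduced; the synthetic volume and parameters are illustrative, and the snippet assumes scikit-image >= 0.19 for the channel_axis argument.

```python
# SLIC supervoxel generation on a 3D volume, followed by a per-supervoxel mean
# intensity feature (a stand-in; real features could include texture, etc.).
import numpy as np
from skimage.segmentation import slic

rng = np.random.default_rng(0)
volume = rng.normal(0.0, 0.1, size=(32, 64, 64))
volume[:, 20:44, 20:44] += 1.0                  # brighter block mimicking tissue

labels = slic(volume, n_segments=200, compactness=0.1, channel_axis=None)

sv_ids = np.unique(labels)
mean_intensity = np.array([volume[labels == i].mean() for i in sv_ids])
print(f"{sv_ids.size} supervoxels; mean intensities range "
      f"{mean_intensity.min():.2f} to {mean_intensity.max():.2f}")
```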
Affiliation(s)
- Youyong Kong, Laboratory of Image Science and Technology, Key Laboratory of Computer Network and Information Integration, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China; International Joint Laboratory of Information Display and Visualization, Nanjing, People's Republic of China
- Xiaopeng Chen, Laboratory of Image Science and Technology, Key Laboratory of Computer Network and Information Integration, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China; International Joint Laboratory of Information Display and Visualization, Nanjing, People's Republic of China
- Jiasong Wu, Laboratory of Image Science and Technology, Key Laboratory of Computer Network and Information Integration, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China; International Joint Laboratory of Information Display and Visualization, Nanjing, People's Republic of China
- Pinzheng Zhang, Laboratory of Image Science and Technology, Key Laboratory of Computer Network and Information Integration, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China; International Joint Laboratory of Information Display and Visualization, Nanjing, People's Republic of China
- Yang Chen, Laboratory of Image Science and Technology, Key Laboratory of Computer Network and Information Integration, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China; International Joint Laboratory of Information Display and Visualization, Nanjing, People's Republic of China
- Huazhong Shu, Laboratory of Image Science and Technology, Key Laboratory of Computer Network and Information Integration, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China; International Joint Laboratory of Information Display and Visualization, Nanjing, People's Republic of China
10. Lei Y, Xu D, Zhou Z, Wang T, Dong X, Liu T, Dhabaan A, Curran WJ, Yang X. A Denoising Algorithm for CT Image Using Low-rank Sparse Coding. Proc SPIE Int Soc Opt Eng 2018;10574:105741P. [PMID: 31551644] [PMCID: PMC6759222] [DOI: 10.1117/12.2292890]
Abstract
We propose a denoising method for CT images based on low-rank sparse coding. The proposed method constructs an adaptive dictionary of image patches and estimates the sparse coding regularization parameters using a Bayesian interpretation. A low-rank approximation approach is used to simultaneously construct the dictionary and achieve sparse representation through clustering similar image patches. A variable-splitting scheme and a quadratic optimization are used to reconstruct the CT image from the obtained sparse coefficients. We tested this denoising technology using phantom, brain, and abdominal CT images. The experimental results showed that the proposed method delivers state-of-the-art denoising performance, both in terms of objective criteria and visual quality.
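In the spirit of the method above, the following sketch denoises an image by a low-rank (truncated SVD) approximation of its patch matrix. The paper's clustering of similar patches, adaptive dictionary, and Bayesian parameter estimation are not reproduced; the patch size, rank, and toy image are illustrative assumptions.

```python
# Global low-rank approximation of the overlapping-patch matrix as a simplified
# patch-based denoiser; overlapping patches are averaged back into the image.
import numpy as np
from sklearn.feature_extraction.image import (extract_patches_2d,
                                               reconstruct_from_patches_2d)

def lowrank_denoise(image, patch_size=(8, 8), rank=8):
    patches = extract_patches_2d(image, patch_size)        # (n, 8, 8)
    X = patches.reshape(patches.shape[0], -1)               # patch matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[rank:] = 0.0                                           # keep top components
    X_lr = (U * s) @ Vt
    return reconstruct_from_patches_2d(X_lr.reshape(patches.shape), image.shape)

rng = np.random.default_rng(0)
clean = np.zeros((48, 48)); clean[12:36, 12:36] = 1.0
noisy = clean + rng.normal(0, 0.3, clean.shape)
denoised = lowrank_denoise(noisy)
print(f"RMSE noisy {np.sqrt(np.mean((noisy - clean) ** 2)):.3f} -> "
      f"denoised {np.sqrt(np.mean((denoised - clean) ** 2)):.3f}")
```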
Affiliation(s)
- Yang Lei, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Dong Xu, Department of Ultrasound Imaging, Zhejiang Cancer Hospital, Hangzhou, China 310022
- Zhengyang Zhou, Department of Radiology, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, China 210008
- Tonghe Wang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Xue Dong, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Tian Liu, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Anees Dhabaan, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Walter J. Curran, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Xiaofeng Yang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
11. MRI Brain Images Classification: A Multi-Level Threshold Based Region Optimization Technique. J Med Syst 2018;42:62. [DOI: 10.1007/s10916-018-0915-8]
12. Lei Y, Tang X, Higgins K, Wang T, Liu T, Dhabaan A, Shim H, Curran WJ, Yang X. Improving Image Quality of Cone-Beam CT Using Alternating Regression Forest. Proc SPIE Int Soc Opt Eng 2018;10573:1057345. [PMID: 31456600] [PMCID: PMC6711599] [DOI: 10.1117/12.2292886]
Abstract
We propose a CBCT image quality improvement method based on anatomic signatures and an auto-context alternating regression forest. Patient-specific anatomical features are extracted from the aligned training images and serve as signatures for each voxel. The most relevant and informative features are identified to train the regression forest. The well-trained regression forest is used to correct the CBCT of a new patient. This proposed algorithm was evaluated using 10 patients' data with CBCT and CT images. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR) and normalized cross correlation (NCC) indexes were used to quantify the correction accuracy of the proposed algorithm. The mean MAE, PSNR and NCC between corrected CBCT and ground truth CT were 16.66 HU, 37.28 dB, and 0.98, which demonstrated the CBCT correction accuracy of the proposed learning-based method. We have developed a learning-based method and demonstrated that this method could significantly improve CBCT image quality. The proposed method has great potential in improving CBCT image quality to a level close to planning CT, therefore allowing its quantitative use in CBCT-guided adaptive radiotherapy.
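A hedged sketch of the learning step, using a plain random forest regressor rather than the paper's auto-context alternating regression forest: neighborhood patches from a synthetic "CBCT-like" image are regressed onto co-registered "CT-like" intensities, and the MAE before and after correction is reported on the training image purely for illustration.

```python
# Patch-feature regression from a shaded/noisy image onto reference intensities
# with a standard random forest (sketch only; data and parameters are synthetic).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def patch_features(image, half=2):
    """Flattened (2*half+1)^2 neighborhood around every interior pixel."""
    h, w = image.shape
    feats, centers = [], []
    for i in range(half, h - half):
        for j in range(half, w - half):
            feats.append(image[i - half:i + half + 1, j - half:j + half + 1].ravel())
            centers.append((i, j))
    return np.array(feats), centers

rng = np.random.default_rng(0)
ct = rng.uniform(-500, 1000, size=(40, 40))                 # reference "CT"
cbct = ct * 0.8 + 80 + rng.normal(0, 40, ct.shape)           # shaded/noisy "CBCT"

X, centers = patch_features(cbct)
y = np.array([ct[i, j] for i, j in centers])
forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

corrected = forest.predict(X)                                # training image, for illustration
cbct_center = np.array([cbct[i, j] for i, j in centers])
print(f"MAE before: {np.mean(np.abs(cbct_center - y)):.1f} HU, "
      f"after: {np.mean(np.abs(corrected - y)):.1f} HU")
```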
Affiliation(s)
- Yang Lei, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Xiangyang Tang, Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Kristin Higgins, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Tonghe Wang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Tian Liu, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Anees Dhabaan, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Hyunsuk Shim, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322; Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Walter J. Curran, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Xiaofeng Yang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
13. Lei Y, Xu D, Zhou Z, Higgins K, Dong X, Liu T, Shim H, Mao H, Curran WJ, Yang X. High-resolution CT Image Retrieval Using Sparse Convolutional Neural Network. Proc SPIE Int Soc Opt Eng 2018;10573:105733F. [PMID: 31456601] [PMCID: PMC6711608] [DOI: 10.1117/12.2292891]
Abstract
We propose a high-resolution CT image retrieval method based on a sparse convolutional neural network. The proposed framework is used to train the end-to-end mapping from low-resolution to high-resolution images. The patch-wise feature of low-resolution CT is extracted and sparsely represented by a convolutional layer and a learned iterative shrinkage threshold framework, respectively. A rectified linear unit is utilized to non-linearly map the low-resolution sparse coefficients to the high-resolution ones. An adaptive high-resolution dictionary is applied to construct the informative signature which is highly connected to a high-resolution patch. Finally, we feed the signature to a convolutional layer to reconstruct the predicted high-resolution patches and average these overlapping patches to generate high-resolution CT. The loss function between reconstructed images and the corresponding ground truth high-resolution images is applied to optimize the parameters of the end-to-end neural network. The well-trained map is used to generate the high-resolution CT from a new low-resolution input. This technique was tested with brain and lung CT images and the image quality was assessed using the corresponding CT images. Peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) and mean absolute error (MAE) indexes were used to quantify the differences between the generated high-resolution and corresponding ground truth CT images. The experimental results showed the proposed method could enhance image resolution from low-resolution inputs. The proposed method has great potential in improving radiation dose calculation and delivery accuracy and decreasing CT radiation exposure of patients.
Affiliation(s)
- Yang Lei, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Dong Xu, Department of Ultrasound Imaging, Zhejiang Cancer Hospital, Hangzhou, China 310022
- Zhengyang Zhou, Department of Radiology, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, China 210008
- Kristin Higgins, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Xue Dong, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Tian Liu, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Hyunsuk Shim, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322; Department of Radiation Oncology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Hui Mao, Department of Radiation Oncology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Walter J. Curran, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Xiaofeng Yang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
14. Chang HH, Chang YN. CUDA-based acceleration and BPN-assisted automation of bilateral filtering for brain MR image restoration. Med Phys 2017;44:1420-1436. [PMID: 28196280] [DOI: 10.1002/mp.12157]
Abstract
PURPOSE Bilateral filters have been substantially exploited in numerous magnetic resonance (MR) image restoration applications for decades. Due to the lack of a theoretical basis for setting the filter parameters, empirical manipulation with fixed values and noise variance-related adjustments has generally been employed. The outcome of these strategies is usually sensitive to the variation of the brain structures and not all three parameter values are optimal. This article investigates the optimal setting of the bilateral filter, from which an accelerated and automated restoration framework is developed. METHODS To reduce the computational burden of the bilateral filter, parallel computing with the graphics processing unit (GPU) architecture is first introduced. The NVIDIA Tesla K40c GPU with the compute unified device architecture (CUDA) functionality is specifically utilized to emphasize thread usages and memory resources. To correlate the filter parameters with image characteristics for automation, optimal image texture features are subsequently acquired based on the sequential forward floating selection (SFFS) scheme. Next, the selected features are introduced into the back propagation network (BPN) model for filter parameter estimation. Finally, the k-fold cross validation method is adopted to evaluate the accuracy of the proposed filter parameter prediction framework. RESULTS A wide variety of T1-weighted brain MR images with various scenarios of noise levels and anatomic structures were utilized to train and validate this new parameter decision system with CUDA-based bilateral filtering. For a common brain MR image volume of 256 × 256 × 256 voxels, the speed-up gain reached 284. Six optimal texture features were acquired and associated with the BPN to establish a "high accuracy" parameter prediction system, which achieved a mean absolute percentage error (MAPE) of 5.6%. Automatic restoration results on 2460 brain MR images achieved an average relative error in terms of peak signal-to-noise ratio (PSNR) of less than 0.1%. In comparison with many state-of-the-art filters, the proposed automation framework with CUDA-based bilateral filtering provided more favorable results both quantitatively and qualitatively. CONCLUSIONS Possessing unique characteristics and demonstrating exceptional performance, the proposed CUDA-based bilateral filter adequately removed random noise in multifarious brain MR images for further study in neurosciences and radiological sciences. It requires no prior knowledge of the noise variance and automatically restores MR images while preserving fine details. The strategy of exploiting CUDA to accelerate the computation and incorporating texture features into the BPN to fully automate the bilateral filtering process is achievable and validated, and yields the best performance.
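For reference, a plain NumPy implementation of the bilateral filter being tuned above, exposing the three parameters (window radius, spatial sigma, range sigma). The CUDA acceleration and the BPN-based parameter prediction are not reproduced, and all parameter values and the toy image are illustrative.

```python
# Naive bilateral filter: each output pixel is a weighted mean of its neighbors,
# with spatial (sigma_s) and range (sigma_r) Gaussian weights.
import numpy as np

def bilateral_filter(image, radius=3, sigma_s=2.0, sigma_r=0.1):
    h, w = image.shape
    padded = np.pad(image, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    out = np.empty_like(image)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-((window - image[i, j]) ** 2) / (2 * sigma_r ** 2))
            weights = spatial * range_w
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out

rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0             # step edge
noisy = clean + rng.normal(0, 0.1, clean.shape)
smoothed = bilateral_filter(noisy)
print(f"RMSE {np.sqrt(np.mean((noisy - clean) ** 2)):.3f} -> "
      f"{np.sqrt(np.mean((smoothed - clean) ** 2)):.3f}")
```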
Affiliation(s)
- Herng-Hua Chang, Computational Biomedical Engineering Laboratory (CBEL), Department of Engineering Science and Ocean Engineering, National Taiwan University, Taipei, 10617, Taiwan
- Yu-Ning Chang, Computational Biomedical Engineering Laboratory (CBEL), Department of Engineering Science and Ocean Engineering, National Taiwan University, Taipei, 10617, Taiwan
15. Objective Ventricle Segmentation in Brain CT with Ischemic Stroke Based on Anatomical Knowledge. Biomed Res Int 2017;2017:8690892. [PMID: 28271071] [PMCID: PMC5320078] [DOI: 10.1155/2017/8690892]
Abstract
Ventricle segmentation is a challenging step in the development of detection systems for ischemic stroke in computed tomography (CT), as ischemic stroke regions are adjacent to the brain ventricle and have similar intensity. To address this problem, we developed an objective segmentation system for the brain ventricle in CT. The intensity distribution of the ventricle was estimated based on a clustering technique, connectivity, and domain knowledge, and the initial ventricle segmentation results were then obtained. To exclude the stroke regions from the initial segmentation, a combined segmentation strategy was proposed, which is composed of three different schemes: (1) the largest three-dimensional (3D) connected component was considered as the ventricular region; (2) large stroke areas were removed by image difference methods based on searching for optimal threshold values; (3) small stroke regions were excluded by an adaptive template algorithm. The proposed method was evaluated on 50 cases of patients with ischemic stroke. The mean Dice, sensitivity, specificity, and root mean squared error were 0.9447, 0.969, 0.998, and 0.219 mm, respectively. The system offers desirable performance. Therefore, the proposed system is expected to bring insights into clinical research and the development of detection systems for ischemic stroke in CT.
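One rule from the combined strategy, keeping only the largest 3D connected component, can be sketched as follows; the threshold-difference and adaptive-template steps are not reproduced, and the toy mask is illustrative.

```python
# Keep only the largest 3D connected component of a binary candidate mask.
import numpy as np
from scipy import ndimage

def largest_component(mask):
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.zeros_like(mask, dtype=bool)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)

mask = np.zeros((20, 40, 40), dtype=bool)
mask[5:15, 10:25, 10:25] = True        # ventricle-like blob (large)
mask[2:4, 30:34, 30:34] = True         # small spurious region (e.g., stroke area)
kept = largest_component(mask)
print(f"components: {ndimage.label(mask)[1]} -> 1 kept; "
      f"voxels {int(mask.sum())} -> {int(kept.sum())}")
```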
16. Yang X, Lei Y, Shu HK, Rossi P, Mao H, Shim H, Curran WJ, Liu T. Pseudo CT Estimation from MRI Using Patch-based Random Forest. Proc SPIE Int Soc Opt Eng 2017;10133:101332Q. [PMID: 31607771] [PMCID: PMC6788808] [DOI: 10.1117/12.2253936]
Abstract
Recently, MR simulators have gained popularity because they avoid the unnecessary radiation exposure associated with the CT simulators used in radiation therapy planning. We propose a method for pseudo CT estimation from MR images based on a patch-based random forest. Patient-specific anatomical features are extracted from the aligned training images and adopted as signatures for each voxel. The most robust and informative features are identified using feature selection to train the random forest. The well-trained random forest is used to predict the pseudo CT of a new patient. This prediction technique was tested with human brain images and the prediction accuracy was assessed using the original CT images. Peak signal-to-noise ratio (PSNR) and feature similarity (FSIM) indexes were used to quantify the differences between the pseudo and original CT images. The experimental results showed the proposed method could accurately generate pseudo CT images from MR images. In summary, we have developed a new pseudo CT prediction method based on a patch-based random forest, demonstrated its clinical feasibility, and validated its prediction accuracy. This pseudo CT prediction technique could be a useful tool for MRI-based radiation treatment planning and attenuation correction in a PET/MRI scanner.
Affiliation(s)
- Xiaofeng Yang, Department of Radiation Oncology, Winship Cancer Institute
- Yang Lei, Department of Radiation Oncology, Winship Cancer Institute
- Hui-Kuo Shu, Department of Radiation Oncology, Winship Cancer Institute
- Peter Rossi, Department of Radiation Oncology, Winship Cancer Institute
- Hui Mao, Department of Radiology and Imaging Sciences, Winship Cancer Institute, Emory University, Atlanta, GA
- Hyunsuk Shim, Department of Radiation Oncology, Winship Cancer Institute; Department of Radiology and Imaging Sciences, Winship Cancer Institute, Emory University, Atlanta, GA
- Tian Liu, Department of Radiation Oncology, Winship Cancer Institute
17. Ugarte V, Sinha U, Malis V, Csapo R, Sinha S. 3D multimodal spatial fuzzy segmentation of intramuscular connective and adipose tissue from ultrashort TE MR images of calf muscle. Magn Reson Med 2016;77:870-883. [PMID: 26892499] [DOI: 10.1002/mrm.26156]
Abstract
PURPOSE To develop and evaluate an automated algorithm to segment intramuscular adipose (IMAT) and connective (IMCT) tissue from musculoskeletal MR images acquired with a dual-echo ultrashort TE (UTE) sequence. THEORY AND METHODS The dual echo images and calculated structure tensor images are the inputs to the multichannel fuzzy cluster mean (MCFCM) algorithm. Modifications to the basic multichannel fuzzy cluster mean include an adaptive spatial term and bias shading correction. The algorithm was tested on digital phantoms simulating IMAT/IMCT tissue under varying conditions of image noise and bias and on ten subjects with varying amounts of IMAT/IMCT. RESULTS The MCFCM including the adaptive spatial term and bias shading correction performed better than the original MCFCM and adaptive spatial MCFCM algorithms. IMAT/IMCT was segmented from the unsmoothed simulated phantom data with a mean Dice coefficient of 0.933 ± 0.001 when the contrast-to-noise ratio (CNR) was 140 and bias was varied between 30% and 65%. The algorithm yielded accurate in vivo segmentations of IMAT/IMCT with a mean Dice coefficient of 0.977 ± 0.066. CONCLUSION The proposed algorithm is completely automated and yielded accurate segmentation of intramuscular adipose and connective tissue in the digital phantom and in human calf data. Magn Reson Med 77:870-883, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
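A minimal sketch of plain fuzzy c-means clustering on intensities, which the MCFCM described above extends; the multichannel inputs, adaptive spatial term, and bias-shading correction are not reproduced, and the synthetic intensities and cluster count are illustrative.

```python
# Standard fuzzy c-means: alternate between fuzzy membership updates and
# membership-weighted centroid updates.
import numpy as np

def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=100, eps=1e-9, seed=0):
    """x: (n_samples, n_features). Returns cluster centers and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, x.shape[0]))
    u /= u.sum(axis=0, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1, keepdims=True)
        dist = np.linalg.norm(x[None, :, :] - centers[:, None, :], axis=2) + eps
        inv = dist ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0, keepdims=True)
    return centers, u

rng = np.random.default_rng(1)
# Two intensity populations standing in for muscle vs. fat/connective tissue.
x = np.concatenate([rng.normal(0.2, 0.05, 300), rng.normal(0.8, 0.05, 100)])[:, None]
centers, u = fuzzy_c_means(x, n_clusters=2)
labels = np.argmax(u, axis=0)
print("centers:", np.round(centers.ravel(), 2), "cluster sizes:", np.bincount(labels))
```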
Affiliation(s)
- Vincent Ugarte, Department of Physics, San Diego State University, San Diego, California, USA
- Usha Sinha, Department of Physics, San Diego State University, San Diego, California, USA
- Vadim Malis, Muscle Imaging and Modeling Lab, Department of Radiology, University of California, San Diego, California, USA
- Robert Csapo, Muscle Imaging and Modeling Lab, Department of Radiology, University of California, San Diego, California, USA
- Shantanu Sinha, Muscle Imaging and Modeling Lab, Department of Radiology, University of California, San Diego, California, USA
18. A Unified Framework for Brain Segmentation in MR Images. Comput Math Methods Med 2015;2015:829893. [PMID: 26089978] [PMCID: PMC4450290] [DOI: 10.1155/2015/829893]
Abstract
Brain MRI segmentation is important for studying brain structure and diagnosing subtle anatomical changes in different brain diseases. However, due to several artifacts, brain tissue segmentation remains a challenging task. The aim of this paper is to improve the automatic segmentation of the brain into gray matter, white matter, and cerebrospinal fluid in magnetic resonance images (MRI). We proposed an automatic hybrid image segmentation method that integrates the modified statistical expectation-maximization (EM) method and the spatial information combined with support vector machine (SVM). The combined method gives more accurate results than can be achieved with its individual techniques, which is demonstrated through experiments carried out on both synthetic and real MRI. The results of the proposed technique are evaluated against manual segmentation results and other methods on real T1-weighted scans from the Internet Brain Segmentation Repository (IBSR) and simulated images from BrainWeb. The Kappa index is calculated to assess the performance of the proposed framework relative to the ground truth and expert segmentations. The results demonstrate that the proposed combined method produces satisfactory results on both simulated MRI and real brain datasets.
19. Yazdani S, Yusof R, Riazi A, Karimian A. Magnetic resonance image tissue classification using an automatic method. Diagn Pathol 2014;9:207. [PMID: 25540017] [PMCID: PMC4300026] [DOI: 10.1186/s13000-014-0207-7]
Abstract
Background Brain segmentation in magnetic resonance images (MRI) is an important stage in clinical studies for purposes such as diagnosis, analysis, and 3-D visualization for treatment and surgical planning. MR image segmentation remains a challenging problem due to different existing artifacts such as noise, bias field, partial volume effects and complexity of the images. Some of the automatic brain segmentation techniques are complex and some of them are not sufficiently accurate for certain applications. The goal of this paper is to propose an algorithm that is more accurate and less complex. Methods In this paper, we present a simple and more accurate automated technique for brain segmentation into white matter, gray matter and cerebrospinal fluid (CSF) in three-dimensional MR images. The algorithm's three steps are histogram-based segmentation, feature extraction, and final classification using SVM. The integrated algorithm gives more accurate results than can be obtained with its individual components. To produce a more efficient segmentation method, our framework captures different types of features in each step that are of special importance for MRI, i.e., distributions of tissue intensities, textural features, and relationship with neighboring voxels or spatial features. Results Our method has been validated on real images and simulated data, with desirable performance in the presence of noise and intensity inhomogeneities. Conclusions The experimental results demonstrate that our proposed method is a simple and accurate technique to define brain tissues with high reproducibility in comparison with other techniques. Virtual Slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/13000_2014_207
Affiliation(s)
- Sepideh Yazdani, Malaysia-Japan International Institute of Technology (MJIIT), Universiti Teknologi Malaysia, Jalan Semarak, Kuala Lumpur, 54100, Malaysia
- Rubiyah Yusof, Malaysia-Japan International Institute of Technology (MJIIT), Universiti Teknologi Malaysia, Jalan Semarak, Kuala Lumpur, 54100, Malaysia
- Amirhosein Riazi, Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, University College of Engineering, University of Tehran, Tehran, Iran
- Alireza Karimian, Department of Biomedical Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran
20. Yang X, Wu N, Cheng G, Zhou Z, Yu DS, Beitler JJ, Curran WJ, Liu T. Automated segmentation of the parotid gland based on atlas registration and machine learning: a longitudinal MRI study in head-and-neck radiation therapy. Int J Radiat Oncol Biol Phys 2014;90:1225-33. [PMID: 25442347] [DOI: 10.1016/j.ijrobp.2014.08.350]
Abstract
PURPOSE To develop an automated magnetic resonance imaging (MRI) parotid segmentation method to monitor radiation-induced parotid gland changes in patients after head and neck radiation therapy (RT). METHODS AND MATERIALS The proposed method combines the atlas registration method, which captures the global variation of anatomy, with a machine learning technology, which captures the local statistical features, to automatically segment the parotid glands from the MRIs. The segmentation method consists of 3 major steps. First, an atlas (pre-RT MRI and manually contoured parotid gland mask) is built for each patient. A hybrid deformable image registration is used to map the pre-RT MRI to the post-RT MRI, and the transformation is applied to the pre-RT parotid volume. Second, the kernel support vector machine (SVM) is trained with the subject-specific atlas pair consisting of multiple features (intensity, gradient, and others) from the aligned pre-RT MRI and the transformed parotid volume. Third, the well-trained kernel SVM is used to differentiate the parotid from surrounding tissues in the post-RT MRIs by statistically matching multiple texture features. A longitudinal study of 15 patients undergoing head and neck RT was conducted: baseline MRI was acquired prior to RT, and the post-RT MRIs were acquired at 3-, 6-, and 12-month follow-up examinations. The resulting segmentations were compared with the physicians' manual contours. RESULTS Successful parotid segmentation was achieved for all 15 patients (42 post-RT MRIs). The average percentage volume difference between the automated segmentations and the physicians' manual contours was 7.98% for the left parotid and 8.12% for the right parotid. The average volume overlap was 91.1% ± 1.6% for the left parotid and 90.5% ± 2.4% for the right parotid. The parotid gland volume reduction at follow-up was 25% at 3 months, 27% at 6 months, and 16% at 12 months. CONCLUSIONS We have validated our automated parotid segmentation algorithm in a longitudinal study. This segmentation method may be useful in future studies to address radiation-induced xerostomia in head and neck radiation therapy.
Affiliation(s)
- Xiaofeng Yang, Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia
- Ning Wu, Radiation Oncology, Jilin University, Changchun, Jilin, China
- Guanghui Cheng, Radiation Oncology, Jilin University, Changchun, Jilin, China
- Zhengyang Zhou, Department of Radiology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, China
- David S Yu, Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia
- Jonathan J Beitler, Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia
- Walter J Curran, Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia
- Tian Liu, Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia
21. Spijkerman J, Fontanarosa D, Das M, Van Elmpt W. Validation of nonrigid registration in pretreatment and follow-up PET/CT scans for quantification of tumor residue in lung cancer patients. J Appl Clin Med Phys 2014;15:4847. [PMID: 25207414] [PMCID: PMC5875523] [DOI: 10.1120/jacmp.v15i4.4847]
Abstract
Nonrigid registrations of pre- and postradiotherapy (RT) PET/CT scans of NSCLC patients were performed with different algorithms and validated by tracking internal landmarks. Dice overlap ratios (DR) of high FDG-uptake areas in registered PET/CT scans were then calculated to study patterns of relapse. For 22 patients, pre- and post-RT PET/CT scans were registered first rigidly and then nonrigidly. For three patients, two types of nonrigid registration algorithms (based on Demons or Morphons), each with four different parameter settings, were applied and assessed using landmark validation. The two best performing methods were tested on all patients, who were then classified into three groups: large (Group 1), minor (Group 2), or insufficient (Group 3) improvement of registration accuracy. For Groups 1 and 2, DRs between high FDG-uptake areas in pre- and post-RT PET scans were determined. Distances between corresponding landmarks on deformed pre-RT and post-RT scans decreased for all registration methods. Differences between Demons and Morphons methods were smaller than 1 mm. For Group 1, landmark distance decreased from 9.5 ± 2.1 mm to 3.8 ± 1.2 mm (mean ± 1 SD, p < 0.001), and for Group 3 from 13.6 ± 3.2 mm to 8.0 ± 2.2 mm (p=0.02). No significant change was observed for Group 2, where distances decreased from 5.6 ± 1.3 mm to 4.5 ± 1.1 mm (p=0.02). DRs of high FDG-uptake areas improved significantly after nonrigid registration for most patients in Group 1. Landmark validation of nonrigid registration methods for follow-up CT imaging in NSCLC is necessary. Nonrigid registration significantly improves matching between pre- and post-RT CT scans for a subset of patients, although not in all patients. Hence, the quality of the registration needs to be assessed for each patient individually. Successful nonrigid registration increased the overlap between pre- and post-RT high FDG-uptake regions. PACS number: 87.57.Q-, 87.57.C-, 87.57.N-, 87.57.-s, 87.55.-x, 87.55.D-, 87.55.dh, 87.57.uk, 87.57.nj
22. Pike R, Patton SK, Lu G, Halig LV, Wang D, Chen ZG, Fei B. A Minimum Spanning Forest Based Hyperspectral Image Classification Method for Cancerous Tissue Detection. Proc SPIE Int Soc Opt Eng 2014;9034:90341W. [PMID: 25426272] [PMCID: PMC4241346] [DOI: 10.1117/12.2043848]
Abstract
Hyperspectral imaging is a developing modality for cancer detection. The rich information associated with hyperspectral images allows cancerous and healthy tissue to be distinguished. This study focuses on a new method that incorporates support vector machines into a minimum spanning forest algorithm for differentiating cancerous tissue from normal tissue. Spectral information was gathered to test the algorithm. Animal experiments were performed and hyperspectral images were acquired from tumor-bearing mice. In vivo imaging results demonstrate the applicability of the proposed classification method for cancer tissue classification on hyperspectral images.
Affiliation(s)
- Robert Pike, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Samuel K. Patton, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Guolan Lu, The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA
- Luma V. Halig, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Dongsheng Wang, Department of Hematology and Medical Oncology, Emory University, Atlanta, GA
- Zhuo Georgia Chen, Department of Hematology and Medical Oncology, Emory University, Atlanta, GA
- Baowei Fei, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA; The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA; Department of Mathematics & Computer Science, Emory University, Atlanta, GA; Winship Cancer Institute of Emory University, Atlanta, GA
23. Yang X, Fei B. Multiscale segmentation of the skull in MR images for MRI-based attenuation correction of combined MR/PET. J Am Med Inform Assoc 2013;20:1037-45. [PMID: 23761683] [PMCID: PMC3822115] [DOI: 10.1136/amiajnl-2012-001544]
Abstract
BACKGROUND AND OBJECTIVE Combined magnetic resonance/positron emission tomography (MR/PET) is a relatively new, hybrid imaging modality. MR-based attenuation correction often requires segmentation of the bone on MR images. In this study, we present an automatic segmentation method for the skull on MR images for attenuation correction in brain MR/PET applications. MATERIALS AND METHODS Our method transforms T1-weighted MR images to the Radon domain and then detects the features of the skull image. In the Radon domain, we use a bilateral filter to construct a multiscale image series. For the repeated convolution, we increase the spatial smoothing at each scale and double the width of the spatial and range Gaussian functions at each scale. Two filters with different kernels along the vertical direction are applied across the scales from coarse to fine levels. The results from a coarse scale provide a mask that supervises the segmentation at the next finer scale. The multiscale bilateral filtering scheme is used to improve the robustness of the method on noisy MR images. After combining the two filtered sinograms, the reciprocal binary sinogram of the skull is obtained for the reconstruction of the skull image. RESULTS This method has been tested with brain phantom data, simulated brain data, and real MRI data. For real MRI data, the Dice overlap ratio between our segmentation and manual segmentation is 92.2% ± 1.9%. CONCLUSIONS The multiscale segmentation method is robust and accurate and can be used for MRI-based attenuation correction in combined MR/PET.
Affiliation(s)
- Xiaofeng Yang, Department of Radiology and Imaging Sciences, Center for Systems Imaging, Emory University, Atlanta, Georgia, USA
- Baowei Fei, Department of Radiology and Imaging Sciences, Center for Systems Imaging, Emory University, Atlanta, Georgia, USA; Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia, USA; Winship Cancer Institute of Emory University, Atlanta, Georgia, USA; Department of Mathematics and Computer Science, Emory University, Atlanta, Georgia, USA
24. Qin X, Cong Z, Jiang R, Shen M, Wagner MB, Kishbom P, Fei B. Extracting Cardiac Myofiber Orientations from High Frequency Ultrasound Images. Proc SPIE Int Soc Opt Eng 2013;8675. [PMID: 24392208] [DOI: 10.1117/12.2006494]
Abstract
Cardiac myofibers play an important role in the stress mechanics of the beating heart. The orientation of the myofibers determines the stress distribution and the deformation of the whole heart. It is important to image and quantitatively extract these orientations for understanding cardiac physiological and pathological mechanisms and for the diagnosis of chronic diseases. Ultrasound has been widely used in cardiac diagnosis because of its ability to perform dynamic, noninvasive imaging and its low cost. An extraction method is proposed to automatically detect the cardiac myofiber orientations from high frequency ultrasound images. First, heart walls containing myofibers are imaged by B-mode high-frequency (>20 MHz) ultrasound imaging. Second, myofiber orientations are extracted from ultrasound images using the proposed method, which combines a nonlinear anisotropic diffusion filter, a Canny edge detector, the Hough transform, and K-means clustering. The method is validated on ultrasound data from phantoms and pig hearts.
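A minimal sketch of the detection chain named above (edge detection, Hough line transform, and k-means clustering of the detected line angles) on a synthetic image containing two line families; the anisotropic diffusion pre-filtering and the ultrasound data handling are not reproduced, and the image and parameters are illustrative.

```python
# Canny edges -> Hough line peaks -> k-means clustering of line angles.
import numpy as np
from skimage.draw import line
from skimage.feature import canny
from skimage.transform import hough_line, hough_line_peaks
from sklearn.cluster import KMeans

# Synthetic image with two "fiber" families: 45-degree diagonals and horizontals.
img = np.zeros((100, 100))
for offset in range(-60, 60, 15):
    rr, cc = line(0, offset, 99, offset + 99)
    keep = (cc >= 0) & (cc < 100)
    img[rr[keep], cc[keep]] = 1.0
for row in range(10, 100, 20):
    img[row, :] = 1.0

edges = canny(img, sigma=1.0)
hspace, angles, dists = hough_line(edges)
_, peak_angles, _ = hough_line_peaks(hspace, angles, dists, num_peaks=20)

# Cluster the Hough angles (line-normal convention) into two orientation groups.
km = KMeans(n_clusters=2, n_init=10, random_state=0)
km.fit(np.degrees(peak_angles)[:, None])
print("dominant line-normal angles (degrees):",
      np.round(np.sort(km.cluster_centers_.ravel()), 1))
```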
Affiliation(s)
- Xulei Qin, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Zhibin Cong, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Rong Jiang, Department of Pediatrics, Emory University School of Medicine, Atlanta, GA
- Ming Shen, Department of Pediatrics, Emory University School of Medicine, Atlanta, GA
- Mary B Wagner, Department of Pediatrics, Emory University School of Medicine, Atlanta, GA
- Paul Kishbom, Department of Surgery, Emory University School of Medicine, Atlanta, GA
- Baowei Fei, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA; Department of Biomedical Engineering, Emory University and Georgia Institute of Technology; Department of Mathematics & Computer Science, Emory University, Atlanta, GA
25. Qin X, Cong Z, Halig LV, Fei B. Automatic Segmentation of Right Ventricle on Ultrasound Images Using Sparse Matrix Transform and Level Set. Proc SPIE Int Soc Opt Eng 2013;8669. [PMID: 24236228] [DOI: 10.1117/12.2006490]
Abstract
An automatic framework is proposed to segment right ventricle on ultrasound images. This method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining sparse matrix transform (SMT), a training model, and a localized region based level set. First, the sparse matrix transform extracts main motion regions of myocardium as eigenimages by analyzing statistical information of these images. Second, a training model of right ventricle is registered to the extracted eigenimages in order to automatically detect the main location of the right ventricle and the corresponding transform relationship between the training model and the SMT-extracted results in the series. Third, the training model is then adjusted as an adapted initialization for the segmentation of each image in the series. Finally, based on the adapted initializations, a localized region based level set algorithm is applied to segment both epicardial and endocardial boundaries of the right ventricle from the whole series. Experimental results from real subject data validated the performance of the proposed framework in segmenting right ventricle from echocardiography. The mean Dice scores for both epicardial and endocardial boundaries are 89.1%±2.3% and 83.6±7.3%, respectively. The automatic segmentation method based on sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.
Affiliation(s)
- Xulei Qin, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
26. Cheng G, Yang X, Wu N, Xu Z, Zhao H, Wang Y, Liu T. Multi-atlas-based Segmentation of the Parotid Glands of MR Images in Patients Following Head-and-neck Cancer Radiotherapy. Proc SPIE Int Soc Opt Eng 2013;8670:86702Q. [PMID: 25914491] [PMCID: PMC4405673] [DOI: 10.1117/12.2007783]
Abstract
Xerostomia (dry mouth), resulting from radiation damage to the parotid glands, is one of the most common and distressing side effects of head-and-neck cancer radiotherapy. Recent MRI studies have demonstrated that volume reduction of the parotid glands is an important indicator of radiation damage and xerostomia. In the clinic, parotid-volume evaluation is based exclusively on physicians' manual contours; however, manual contouring is time-consuming and prone to inter-observer and intra-observer variability. Here, we report a fully automated multi-atlas-based registration method for parotid-gland delineation in 3D head-and-neck MR images. The multi-atlas segmentation uses a hybrid deformable image registration to map the target subject to multiple patients' images, applies the resulting transformations to the corresponding segmented parotid glands, and then uses the multiple patient-specific pairs (head-and-neck MR image and transformed parotid-gland mask) to train a support vector machine (SVM) that reaches a consensus segmentation of the target subject's parotid gland. The algorithm was tested on head-and-neck MRIs of 5 patients following radiotherapy for nasopharyngeal cancer. The average parotid-gland volume overlap between the automatic segmentations and the physicians' manual contours was 85%. In conclusion, we have demonstrated the feasibility of an automatic multi-atlas-based segmentation algorithm for the parotid glands in head-and-neck MR images.
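The label-fusion step can be sketched very simply. The snippet below assumes the atlas masks have already been warped into the target space and uses an SVM trained on intensity plus atlas agreement as a simplified stand-in for the authors' training scheme; all names and the majority-vote training labels are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def fuse_atlas_labels(target_img, propagated_masks, n_train=5000, seed=0):
    """Consensus parotid segmentation from multiple registered atlas masks.

    target_img: (H, W, D) MR volume; propagated_masks: list of (H, W, D) binary masks
    already deformably registered into the target space.
    """
    votes = np.mean(propagated_masks, axis=0)                 # per-voxel atlas agreement in [0, 1]
    feats = np.stack([target_img.ravel(), votes.ravel()], 1)  # intensity + vote as voxel features
    labels = (votes.ravel() > 0.5).astype(int)                # majority vote as (noisy) training labels
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(labels == 1); neg = np.flatnonzero(labels == 0)
    idx = np.concatenate([rng.choice(pos, n_train // 2), rng.choice(neg, n_train // 2)])
    clf = SVC(kernel="rbf", C=1.0).fit(feats[idx], labels[idx])
    return clf.predict(feats).reshape(target_img.shape)
```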
Affiliation(s)
- Guanghui Cheng, Radiation Oncology, China-Japan Union Hospital of Jilin University, Changchun, China
- Xiaofeng Yang, Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Ning Wu, Radiation Oncology, China-Japan Union Hospital of Jilin University, Changchun, China
- Zhijian Xu, Radiation Oncology, China-Japan Union Hospital of Jilin University, Changchun, China
- Hongfu Zhao, Radiation Oncology, China-Japan Union Hospital of Jilin University, Changchun, China
- Yuefeng Wang, Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu, Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA

27
Akbari H, Fei B. Automatic 3D Segmentation of the Kidney in MR Images Using Wavelet Feature Extraction and Probability Shape Model. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2013; 8314:83143D. [PMID: 24027620 PMCID: PMC3766988 DOI: 10.1117/12.912028] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Numerical estimation of kidney size is useful in evaluating kidney conditions, especially when serial MR imaging is performed to assess kidney function. This paper presents a new method for automatic segmentation of the kidney in three-dimensional (3D) MR images by extracting texture features and statistically matching the geometric shape of the kidney. A set of wavelet-based support vector machines (W-SVMs) is trained on the MR images. The W-SVMs capture texture priors of MRI for classifying kidney and non-kidney tissues in different zones around the kidney boundary. In the segmentation procedure, these W-SVMs tentatively label each voxel around the kidney model as a kidney or non-kidney voxel by texture matching. A probabilistic kidney model is created from 10 segmented MRI datasets and is initially localized based on intensity profiles in three directions. Weight functions are defined for each labeled voxel, so that each voxel carries three labels and three weights corresponding to the wavelet features, intensity, and probability model. Using a 3D edge detection method, the model is re-localized and the segmented kidney is refined with a region-growing method within the model region. The probability model is then re-localized based on these results, and the loop continues until the segmentation converges. Experimental results with mouse MRI data show the good performance of the proposed method in segmenting the kidney in MR images.
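The texture-prior component can be illustrated with a minimal wavelet-feature sketch. The patch size, wavelet, and subband-energy summary below are assumptions used only to show the idea of wavelet texture features feeding an SVM, not the paper's W-SVM design.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_patch_features(mr_slice, centers, size=8, wavelet="db2"):
    """Texture features for square patches around candidate boundary voxels.

    Each patch is decomposed with a 2D discrete wavelet transform and summarized
    by the energy of its four subbands (approximation plus three detail bands).
    """
    feats = []
    half = size // 2
    for (r, c) in centers:
        patch = mr_slice[r - half:r + half, c - half:c + half]
        cA, (cH, cV, cD) = pywt.dwt2(patch, wavelet)
        feats.append([np.mean(cA**2), np.mean(cH**2), np.mean(cV**2), np.mean(cD**2)])
    return np.asarray(feats)

# One SVM per boundary zone, trained on labeled patches (kidney vs. non-kidney):
# X = wavelet_patch_features(mr_slice, training_centers); y = training_labels
# zone_svm = SVC(kernel="rbf").fit(X, y)
```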
Affiliation(s)
- Hamed Akbari, Department of Radiology and Imaging Sciences, Emory University and Georgia Institute of Technology, Atlanta, GA
- Baowei Fei, Department of Radiology and Imaging Sciences, Emory University and Georgia Institute of Technology, Atlanta, GA; Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA

28
Qian X, Wang J, Guo S, Li Q. An active contour model for medical image segmentation with application to brain CT image. Med Phys 2013; 40:021911. [PMID: 23387759 PMCID: PMC4108712 DOI: 10.1118/1.4774359] [Citation(s) in RCA: 43] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2012] [Revised: 12/19/2012] [Accepted: 12/20/2012] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Cerebrospinal fluid (CSF) segmentation in computed tomography (CT) is a key step in computer-aided detection (CAD) of acute ischemic stroke. Because of image noise, low contrast, and intensity inhomogeneity, CSF segmentation has been a challenging task. A region-based active contour model, which is insensitive to contour initialization and robust to intensity inhomogeneity, was developed for segmenting CSF in brain CT images. METHODS The energy function of the region-based active contour model is composed of a range-domain kernel function, a space-domain kernel function, and an edge indicator function. By minimizing this energy function, the edge elements of the target region can be identified automatically with little dependence on the initial contours. The energy function was optimized by steepest descent within a level set framework. An overlap rate between the segmentation results and the reference standard was used to assess segmentation accuracy. The authors evaluated the performance of the proposed method on both synthetic data and real brain CT images, and compared it with the region-scalable fitting (RSF) and global convex segmentation (GCS) models. RESULTS For CSF segmentation in 67 brain CT images, the proposed method achieved an average overlap rate of 66%, compared with average overlap rates of 16% and 46% for the RSF and GCS models, respectively. CONCLUSIONS The region-based active contour model achieves accurate segmentation in images with high noise levels and intensity inhomogeneity. Therefore, the method has great potential for medical image segmentation and would be useful for developing CAD schemes for acute ischemic stroke in brain CT images.
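To make "minimize a region energy by steepest descent on a level set" concrete, here is a generic textbook Chan-Vese-style update step. It is not the paper's kernel-weighted energy; the Gaussian smoothing at the end stands in for the usual curvature/length regularization, and all parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def region_level_set_step(phi, image, lam1=1.0, lam2=1.0, dt=0.5, eps=1.0):
    """One steepest-descent update of a simple region-based level set energy."""
    H = 0.5 * (1 + (2 / np.pi) * np.arctan(phi / eps))      # smoothed Heaviside of phi
    c_in = (image * H).sum() / (H.sum() + 1e-8)             # mean intensity inside the contour
    c_out = (image * (1 - H)).sum() / ((1 - H).sum() + 1e-8)
    delta = (eps / np.pi) / (eps**2 + phi**2)               # smoothed Dirac of phi
    force = -lam1 * (image - c_in) ** 2 + lam2 * (image - c_out) ** 2
    phi = phi + dt * delta * force                          # gradient-descent step on the energy
    return gaussian_filter(phi, 1.0)                        # mild regularization of phi

# Iterate phi = region_level_set_step(phi, ct_slice) for a few hundred steps;
# the CSF segmentation is then taken as phi > 0.
```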
Affiliation(s)
- Xiaohua Qian, Department of Radiology, Duke University, Durham, NC 27705, USA

29
Yang X, Wu S, Sechopoulos I, Fei B. Cupping artifact correction and automated classification for high-resolution dedicated breast CT images. Med Phys 2012; 39:6397-406. [PMID: 23039675 DOI: 10.1118/1.4754654] [Citation(s) in RCA: 49] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022] Open
Abstract
PURPOSE To develop and test an automated algorithm to classify the different tissues present in dedicated breast CT images. METHODS The original CT images are first corrected for cupping artifacts, and a multiscale bilateral filter is then used to reduce noise while preserving edge information. Because skin and glandular tissue have similar CT values in breast CT images, morphologic processing is used to identify the skin mask based on its position. A modified fuzzy C-means (FCM) classification method is then used to classify breast tissue as fat or glandular tissue. By combining the skin mask with the FCM results, the breast tissue is classified as skin, fat, and glandular tissue. To evaluate the classification method, Dice overlap ratios were used to compare the automated classification with manual segmentation on eight patient images. RESULTS The correction method removed the cupping artifacts and improved the quality of the breast CT images. For glandular tissue, the overlap ratio between the automatic classification and manual segmentation was 91.6% ± 2.0%. CONCLUSIONS A cupping artifact correction method and an automatic classification method were applied and evaluated on high-resolution dedicated breast CT images. Breast tissue classification can provide quantitative measurements of breast composition, density, and tissue distribution.
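A minimal sketch of the denoise-then-cluster part of this pipeline is given below. A plain intensity-based FCM and scikit-image's single-scale bilateral filter are stand-ins for the modified FCM and the multiscale bilateral filter in the paper; the masks in the commented usage are hypothetical.

```python
import numpy as np
from skimage.restoration import denoise_bilateral

def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Plain FCM on intensity values; returns memberships and cluster centers."""
    rng = np.random.default_rng(seed)
    x = x.reshape(-1, 1).astype(float)
    centers = rng.choice(x.ravel(), n_clusters).reshape(-1, 1)
    for _ in range(n_iter):
        d = np.abs(x - centers.T) + 1e-9                                  # (N, C) distances
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        centers = (u.T ** m @ x) / np.sum(u.T ** m, axis=1, keepdims=True)
    return u, centers.ravel()

# Pipeline sketch: denoise the cupping-corrected slice, then classify the
# non-skin breast voxels as fat vs. glandular tissue.
# filtered = denoise_bilateral(ct_slice)
# u, centers = fuzzy_c_means(filtered[breast_mask & ~skin_mask], n_clusters=2)
```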
Affiliation(s)
- Xiaofeng Yang, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA 30329, USA

30
Fei B, Yang X, Nye JA, Aarsvold JN, Raghunath N, Cervo M, Stark R, Meltzer CC, Votaw JR. MR∕PET quantification tools: registration, segmentation, classification, and MR-based attenuation correction. Med Phys 2012; 39:6443-54. [PMID: 23039679 PMCID: PMC3477199 DOI: 10.1118/1.4754796] [Citation(s) in RCA: 43] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2012] [Revised: 07/27/2012] [Accepted: 09/11/2012] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Combined MR/PET is a relatively new hybrid imaging modality. A human MR/PET prototype system consisting of a Siemens 3T Trio MR scanner and a brain PET insert was installed and tested at our institution. Its present design does not offer measured attenuation correction (AC) using traditional transmission imaging. This study describes the development of quantification tools, including MR-based AC, for combined MR/PET brain imaging. METHODS The quantification tools include image registration, segmentation, classification, and MR-based AC, integrated into a single scheme for processing MR/PET data. The segmentation method is multiscale and based on the Radon transform of brain MR images; it was developed to segment the skull on T1-weighted MR images. A modified fuzzy C-means classification scheme was developed to classify brain tissue into gray matter, white matter, and cerebrospinal fluid. Each classified tissue is assigned an attenuation coefficient so that AC factors can be generated. PET emission data are then reconstructed using a three-dimensional ordered-subsets expectation maximization method with the MR-based AC map. Ten subjects had separate MR and PET scans; the PET data with [(11)C]PIB were acquired on a high-resolution research tomograph (HRRT). MR-based AC was compared with transmission (TX)-based AC on the HRRT. Seventeen volumes of interest were drawn manually on each subject's image to compare PET activities between the MR-based and TX-based AC methods. RESULTS For skull segmentation, the overlap ratio between the segmented results and the ground truth is 85.2% ± 2.6%. Attenuation correction results from the ten subjects show that the difference between the MR-based and TX-based methods was <6.5%. CONCLUSIONS MR-based AC compared favorably with conventional transmission-based AC. Quantitative tools including registration, segmentation, classification, and MR-based AC have been developed for use in combined MR/PET.
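The label-to-attenuation step of MR-based AC can be sketched in a few lines. The coefficients below are representative 511 keV values from the literature, not the values used in the paper, and the label coding is hypothetical.

```python
import numpy as np

# Approximate linear attenuation coefficients at 511 keV (cm^-1); assumed values.
MU_511KEV = {"air": 0.0, "csf": 0.096, "gray": 0.099, "white": 0.099, "skull": 0.151}

def mu_map_from_labels(label_volume, label_names):
    """Build an attenuation (mu) map from a classified MR volume.

    label_volume: integer-coded tissue classes; label_names: index -> tissue name.
    AC factors for PET reconstruction are then derived from line integrals of mu.
    """
    mu = np.zeros(label_volume.shape, dtype=float)
    for idx, name in label_names.items():
        mu[label_volume == idx] = MU_511KEV[name]
    return mu

# labels = {0: "air", 1: "csf", 2: "gray", 3: "white", 4: "skull"}
# mu = mu_map_from_labels(classified_mr, labels)
```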
Affiliation(s)
- Baowei Fei, Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA

31
Akbari H, Halig LV, Schuster DM, Osunkoya A, Master V, Nieh PT, Chen GZ, Fei B. Hyperspectral imaging and quantitative analysis for prostate cancer detection. JOURNAL OF BIOMEDICAL OPTICS 2012; 17:076005. [PMID: 22894488 PMCID: PMC3608529 DOI: 10.1117/1.jbo.17.7.076005] [Citation(s) in RCA: 76] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
Hyperspectral imaging (HSI) is an emerging modality for various medical applications. Its spectroscopic data may enable noninvasive detection of cancer, and quantitative analysis is often necessary to differentiate healthy from diseased tissue. We propose an advanced image processing and classification method to analyze hyperspectral image data for prostate cancer detection. Spectral signatures were extracted and evaluated in both cancerous and normal tissue. Least-squares support vector machines were developed and evaluated for classifying the hyperspectral data in order to enhance the detection of cancer tissue. This method was used to detect prostate cancer in tumor-bearing mice and on pathology slides. Spatially resolved images were created to highlight the differences between the reflectance properties of cancer and those of normal tissue. Preliminary results with 11 mice showed that the sensitivity and specificity of the hyperspectral image classification method are 92.8% ± 2.0% and 96.9% ± 1.3%, respectively. Therefore, this imaging method may help physicians dissect malignant regions with a safe margin and evaluate the tumor bed after resection. This pilot study may lead to advances in the optical diagnosis of prostate cancer using HSI technology.
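A minimal classification sketch for spectral signatures is given below, assuming pixel spectra and tumor/normal labels are already available. scikit-learn's kernel SVC is used as a stand-in for the least-squares SVM in the paper (scikit-learn does not provide an LS-SVM), and the split and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def train_spectral_classifier(spectra, labels, seed=0):
    """spectra: (n_pixels, n_bands) reflectance signatures; labels: 1 = tumor, 0 = normal."""
    Xtr, Xte, ytr, yte = train_test_split(spectra, labels, test_size=0.3, random_state=seed)
    clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(Xtr, ytr)
    pred = clf.predict(Xte)
    tp = np.sum((pred == 1) & (yte == 1)); fn = np.sum((pred == 0) & (yte == 1))
    tn = np.sum((pred == 0) & (yte == 0)); fp = np.sum((pred == 1) & (yte == 0))
    sensitivity = tp / (tp + fn)        # fraction of tumor pixels detected
    specificity = tn / (tn + fp)        # fraction of normal pixels correctly rejected
    return clf, sensitivity, specificity
```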
Affiliation(s)
- Hamed Akbari, Emory University, Department of Radiology and Imaging Sciences, Atlanta, 30329 Georgia
- Luma V. Halig, Emory University, Department of Radiology and Imaging Sciences, Atlanta, 30329 Georgia
- David M. Schuster, Emory University, Department of Radiology and Imaging Sciences, Atlanta, 30329 Georgia
- Adeboye Osunkoya, Emory University, Department of Pathology, Atlanta, 30329 Georgia; Emory University, Department of Urology, Atlanta, 30329 Georgia; Emory University, Winship Cancer Institute, Atlanta, 30329 Georgia
- Viraj Master, Emory University, Department of Urology, Atlanta, 30329 Georgia
- Peter T. Nieh, Emory University, Department of Urology, Atlanta, 30329 Georgia
- Georgia Z. Chen, Emory University, Winship Cancer Institute, Atlanta, 30329 Georgia
- Baowei Fei, Emory University, Department of Radiology and Imaging Sciences, Atlanta, 30329 Georgia; Emory University and Georgia Institute of Technology, Department of Biomedical Engineering, Atlanta, 30329 Georgia; Emory University, Winship Cancer Institute, Atlanta, 30329 Georgia. Address all correspondence to: Baowei Fei, Emory University, Center for Systems Imaging, Department of Radiology and Imaging Sciences, 1841 Clifton Road NE, Atlanta, GA 30329. Tel: (404) 712-5649; Fax: (404) 712-5689; E-mail: , http://feilab.org

32
Topology-based nonlocal fuzzy segmentation of brain MR image with inhomogeneous and partial volume intensity. J Clin Neurophysiol 2012; 29:278-86. [PMID: 22659725 DOI: 10.1097/wnp.0b013e3182570f94] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
PURPOSE The aim was to automatically segment brain magnetic resonance (MR) images with inhomogeneous and partial volume (PV) intensity for brain and neurophysiology analysis. METHODS Rather than assuming a single bias field over the image data, we first apply a local model to MR image analysis. Using brain topology knowledge, several specific local regions are selected and typical brain tissues are extracted for prior estimation of the fuzzy clustering centers and membership functions. A new nonlocal fuzzy labeling scheme, based on block comparison and distance weighting, is then applied for globally optimized segmentation; it is robust to noise and inhomogeneous intensity. The nonlocal labeling provides optimized fuzzy membership values and local intensity estimates of brain tissues such as cerebrospinal fluid (CSF), white matter (WM), and gray matter (GM). In addition to inhomogeneous intensity, PV effects may lead to segmentation errors, so this article also provides two correction schemes. The first extracts CSF in deep sulci, capturing more CSF candidates by intensity and topological shape comparison. The second estimates local pure CSF, WM, and GM to correct the CSF/GM and WM/GM interfaces. RESULTS Segmentation experiments were performed on both BrainWeb-simulated images and real images from the Internet Brain Segmentation Repository (IBSR). The experimental results demonstrate the robust and efficient performance of our approach. CONCLUSIONS Our approach can be applied to automatic segmentation of brain MR images.
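The nonlocal weighting idea (block comparison times a spatial distance weight) can be sketched for a single voxel as below. This is a minimal 2D illustration only; the Gaussian forms, parameters, and the assumption that the center lies away from the image border are not the paper's exact formulation.

```python
import numpy as np

def nonlocal_weights(image, center, radius=5, patch=3, h=0.1):
    """Nonlocal weights for one voxel: patch (block) similarity times a distance weight."""
    r0, c0 = center
    p = patch // 2
    ref = image[r0 - p:r0 + p + 1, c0 - p:c0 + p + 1]      # reference block
    weights = {}
    for r in range(r0 - radius, r0 + radius + 1):
        for c in range(c0 - radius, c0 + radius + 1):
            blk = image[r - p:r + p + 1, c - p:c + p + 1]
            if blk.shape != ref.shape:
                continue                                    # skip truncated border blocks
            block_dist = np.mean((blk - ref) ** 2)          # block comparison term
            space_dist = (r - r0) ** 2 + (c - c0) ** 2      # spatial distance term
            weights[(r, c)] = np.exp(-block_dist / h**2) * np.exp(-space_dist / (2 * radius**2))
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

# A voxel's fuzzy membership is then the weighted average of its nonlocal
# neighbours' memberships, which is what makes the labeling robust to noise.
```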
33
Akbari H, Halig LV, Zhang H, Wang D, Chen ZG, Fei B. Detection of Cancer Metastasis Using a Novel Macroscopic Hyperspectral Method. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2012; 8317:831711. [PMID: 23336061 DOI: 10.1117/12.912026] [Citation(s) in RCA: 40] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Abstract
The proposed macroscopic optical histopathology system includes a broadband light source, selected to illuminate the glass slide of the suspicious pathology specimen, and a hyperspectral camera that captures all wavelength bands from 450 to 950 nm. The system was trained to classify each histologic slide, based on predetermined pathology, using light within a predetermined range of wavelengths; this technology captures both the spatial and spectral data of the tissue. Highly metastatic human head and neck cancer cells were transplanted into nude mice. After 2-3 weeks, the mice were euthanized and the lymph nodes and lung tissues were sent to pathology, where metastatic cancer was studied in the lymph nodes and lungs. The pathological slides were imaged with the hyperspectral camera, and the results of the proposed method were compared to the pathology report. Using the hyperspectral images, a library of spectral signatures for different tissues was created, and the high-dimensional data were classified with a support vector machine (SVM). Spectra were extracted from cancerous and non-cancerous tissue in the lymph node and lung slides, and the spectral dimension was used as the input to the SVM. Twelve glass slides were employed for training and evaluation, with leave-one-out cross-validation. After training, the proposed SVM method detected metastatic cancer in lung histologic slides with a specificity of 97.7% and a sensitivity of 92.6%, and in lymph node slides with a specificity of 98.3% and a sensitivity of 96.2%. This method may help pathologists evaluate many histologic slides in a short time.
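The leave-one-out evaluation over slides can be illustrated as grouped cross-validation. The snippet below assumes each spectrum is tagged with the slide it came from and that slides, not individual spectra, are held out; that grouping is an assumption about the protocol, and the SVC settings are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut

def leave_one_slide_out(spectra, labels, slide_ids):
    """Leave-one-slide-out evaluation of an SVM on hyperspectral signatures.

    spectra: (n_spectra, n_bands); labels: 1 = metastatic, 0 = normal;
    slide_ids: the histologic slide each spectrum came from.
    """
    logo = LeaveOneGroupOut()
    preds = np.empty_like(labels)
    for train_idx, test_idx in logo.split(spectra, labels, groups=slide_ids):
        clf = SVC(kernel="rbf").fit(spectra[train_idx], labels[train_idx])
        preds[test_idx] = clf.predict(spectra[test_idx])
    return preds   # compare against labels to compute sensitivity and specificity
```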
Affiliation(s)
- Hamed Akbari, Department of Radiology and Imaging Sciences, Emory University and Georgia Institute of Technology, Atlanta, GA

34
Yang X, Fei B. 3D Prostate Segmentation of Ultrasound Images Combining Longitudinal Image Registration and Machine Learning. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2012; 8316:83162O. [PMID: 24027622 DOI: 10.1117/12.912188] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
We developed a three-dimensional (3D) segmentation method for transrectal ultrasound (TRUS) images that is based on longitudinal image registration and machine learning. Using longitudinal images of each individual patient, we register previously acquired images to the new images of the same subject. Three orthogonal Gabor filter banks are used to extract texture features from each registered image. Patient-specific Gabor features from the registered images are used to train kernel support vector machines (KSVMs), which then segment the newly acquired prostate image. The segmentation method was tested on TRUS data from five patients. The average surface distance between our segmentation and manual segmentation was 1.18 ± 0.31 mm, indicating that the automatic segmentation method based on longitudinal image registration is feasible for segmenting the prostate in TRUS images.
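The Gabor-feature component can be illustrated with a small 2D filter bank; the frequencies and angles below are illustrative choices, and a 2D bank is a simplification of the three orthogonal 3D banks described in the paper. The commented usage lines assume hypothetical variable names for the registered prior scan and its prostate mask.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_feature_stack(image, frequencies=(0.1, 0.2, 0.4), n_angles=4):
    """Per-pixel Gabor magnitude features from a small 2D filter bank."""
    feats = []
    for f in frequencies:
        for k in range(n_angles):
            real, imag = gabor(image, frequency=f, theta=k * np.pi / n_angles)
            feats.append(np.hypot(real, imag))          # magnitude response of each filter
    return np.stack(feats, axis=-1)                     # (H, W, n_features)

# Patient-specific training: features from the registered previous scan,
# labels from its known prostate mask, then predict voxels of the new scan.
# X = gabor_feature_stack(prev_scan)[train_mask]; y = prev_prostate_mask[train_mask]
# ksvm = SVC(kernel="rbf").fit(X, y)
```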
Affiliation(s)
- Xiaofeng Yang, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA

35
Fei B, Schuster DM, Master V, Akbari H, Fenster A, Nieh P. A Molecular Image-directed, 3D Ultrasound-guided Biopsy System for the Prostate. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2012; 2012. [PMID: 22708023 DOI: 10.1117/12.912182] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Systematic transrectal ultrasound (TRUS)-guided biopsy is the standard method for a definitive diagnosis of prostate cancer. However, this biopsy approach uses two-dimensional (2D) ultrasound images to guide the biopsy and can miss up to 30% of prostate cancers. We are developing a molecular image-directed, three-dimensional (3D) ultrasound image-guided biopsy system for improved detection of prostate cancer. The system consists of a 3D mechanical localization system and a software workstation for image segmentation, registration, and biopsy planning. To plan biopsies in the 3D prostate, we developed an automatic segmentation method based on the wavelet transform. To incorporate PET/CT images into ultrasound-guided biopsy, we developed image registration methods to fuse TRUS and PET/CT images. The segmentation method was tested in ten patients with a Dice overlap ratio of 92.4% ± 1.1%, and the registration method has been tested in phantoms. The biopsy system was tested in prostate phantoms, and 3D ultrasound images were acquired from two human patients. We are integrating the system for PET/CT-directed, 3D ultrasound-guided, targeted biopsy in human patients.
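The multimodal fusion step can be sketched with SimpleITK. A mutual-information rigid registration is used here purely as a hypothetical stand-in; the abstract does not specify the registration model actually used, and the parameter values are illustrative.

```python
import SimpleITK as sitk

def fuse_petct_with_trus(fixed_trus, moving_petct):
    """Rigid, mutual-information registration of PET/CT into the 3D TRUS frame (sketch)."""
    fixed = sitk.Cast(fixed_trus, sitk.sitkFloat32)
    moving = sitk.Cast(moving_petct, sitk.sitkFloat32)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    init = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(init, inPlace=False)
    tx = reg.Execute(fixed, moving)
    # Resample PET/CT into the TRUS space so suspicious targets can be mapped for biopsy planning.
    return sitk.Resample(moving_petct, fixed_trus, tx, sitk.sitkLinear, 0.0)
```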
Affiliation(s)
- Baowei Fei, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA 30329

36
Yang X, Ghafourian P, Sharma P, Salman K, Martin D, Fei B. Nonrigid Registration and Classification of the Kidneys in 3D Dynamic Contrast Enhanced (DCE) MR Images. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2012; 8314:83140B. [PMID: 22468206 DOI: 10.1117/12.912190] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
We have applied image analysis methods to the assessment of human kidney perfusion based on 3D dynamic contrast-enhanced (DCE) MRI data. The approach consists of 3D non-rigid image registration of the kidneys and fuzzy C-means classification of kidney tissues. The registration method reduced motion artifacts in the dynamic images and improved the analysis of the kidney compartments (cortex, medulla, and cavities). The dynamic intensity curves show the successive transit of the contrast agent through the kidney compartments. The proposed method for motion correction and kidney compartment classification may improve the validity and usefulness of subsequent model-based pharmacokinetic analysis of kidney function.
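The compartment classification can be illustrated by clustering the motion-corrected time-intensity curves. K-means on normalized enhancement curves is used below as a simplified stand-in for the fuzzy C-means step, and the baseline definition and array shapes are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_kidney_compartments(dce_series, kidney_mask, n_classes=3, seed=0):
    """Cluster registered DCE-MRI time-intensity curves into cortex/medulla/cavities.

    dce_series: (T, H, W, D) motion-corrected dynamic volumes; kidney_mask: (H, W, D) bool.
    """
    curves = dce_series[:, kidney_mask].T                  # (n_voxels, T) time curves
    base = curves[:, :3].mean(axis=1, keepdims=True)       # pre-contrast baseline (first frames)
    enhancement = (curves - base) / (base + 1e-6)          # relative enhancement over time
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit_predict(enhancement)
    out = np.full(kidney_mask.shape, -1, dtype=int)
    out[kidney_mask] = labels                              # -1 marks voxels outside the kidney
    return out
```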
Affiliation(s)
- Xiaofeng Yang, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA