1

2
Deep Multi-Objective Learning from Low-Dose CT for Automatic Lung-RADS Report Generation. J Pers Med 2022; 12:417. [PMID: 35330417] [PMCID: PMC8951579] [DOI: 10.3390/jpm12030417]
Abstract
Radiology report generation through chest radiography interpretation is a time-consuming task that requires expert radiologists. Fatigue-induced diagnostic errors are common, and the problem is especially acute in regions where radiologists are unavailable or lack diagnostic expertise. In this research, we propose a multi-objective deep learning model, CT2Rep (Computed Tomography to Report), that generates lung radiology reports by extracting semantic features from lung CT scans. A total of 458 CT scans were used, from which 107 radiomics features and 6 slices of segmentation-related nodule features were extracted as model input. CT2Rep simultaneously predicts position, margin, and texture, three important indicators of lung cancer, and achieves remarkable performance with an F1-score of 87.29%. A satisfaction survey estimating the practicality of CT2Rep showed that 95% of the generated reports received satisfactory ratings. These results demonstrate the model's potential for producing robust and reliable quantitative lung diagnosis reports: medical personnel can obtain the key indicators simply by providing a lung CT scan to the system, enabling widespread application of the proposed framework.
3
Peng T, Xiao J, Li L, Pu B, Niu X, Zeng X, Wang Z, Gao C, Li C, Chen L, Yang J. Can machine learning-based analysis of multiparameter MRI and clinical parameters improve the performance of clinically significant prostate cancer diagnosis? Int J Comput Assist Radiol Surg 2021; 16:2235-2249. [PMID: 34677748] [PMCID: PMC8616865] [DOI: 10.1007/s11548-021-02507-w]
Abstract
Purpose: To establish machine learning (ML) models for the diagnosis of clinically significant prostate cancer (csPC) using multiparameter magnetic resonance imaging (mpMRI), texture analysis (TA), dynamic contrast-enhanced MRI (DCE-MRI) quantitative analysis, and clinical parameters, and to evaluate the stability of these models under internal and temporal validation. Methods: A dataset of 194 men was split into training (n = 135) and internal validation (n = 59) cohorts, and a temporal dataset (n = 58) was used for evaluation. Lesions with a Gleason score ≥ 7 were defined as csPC. Logistic regression (LR), stepwise regression (SR), classical decision tree (cDT), conditional inference tree (CIT), random forest (RF), and support vector machine (SVM) models were established by combining mpMRI-TA, DCE-MRI, and clinical parameters, and were assessed by internal and temporal validation using the receiver operating characteristic (ROC) curve and DeLong's method. Results: Eight variables were identified as important predictors of csPC, the first three being texture features derived from apparent diffusion coefficient (ADC) mapping. The RF, LR, and SR models yielded larger and more stable areas under the ROC curve (AUCs) than the other models. In temporal validation, sensitivity was lower than in internal validation (p < 0.05), while specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), and AUC showed no significant differences (p > 0.05). Conclusions: Each ML model in this study has good classification ability for csPC. Compared with internal validation, the sensitivity of each model dropped in temporal validation, but specificity, accuracy, PPV, NPV, and AUC remained stable at a good level. The RF, LR, and SR models have the best classification performance for imaging-based csPC diagnosis, and ADC texture-related parameters are of the highest importance.
Supplementary Information: The online version contains supplementary material available at 10.1007/s11548-021-02507-w.
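The temporal-validation comparison above rests on standard confusion-matrix statistics. As a minimal numpy-only sketch of how sensitivity, specificity, accuracy, PPV and NPV are derived from binary predictions (not the paper's code; the function name and toy labels are illustrative):

```python
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity, accuracy, PPV and NPV from binary labels."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_pred & y_true)    # true positives
    tn = np.sum(~y_pred & ~y_true)  # true negatives
    fp = np.sum(y_pred & ~y_true)   # false positives
    fn = np.sum(~y_pred & y_true)   # false negatives
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / len(y_true),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# toy example: 4 csPC-positive and 4 negative lesions
m = diagnostic_metrics([1, 1, 1, 1, 0, 0, 0, 0],
                       [1, 1, 1, 0, 0, 0, 0, 1])
print(m["sensitivity"])  # 0.75
```

The temporal-validation finding above is exactly a drop in the first of these quantities while the other four stay stable.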
Affiliation(s)
- Tao Peng, JianMing Xiao, Lin Li, BingJie Pu, XiangKe Niu, XiaoHui Zeng, ZongYong Wang: Department of Radiology, Affiliated Hospital of Chengdu University, 82 2nd N Section of Second Ring Rd, Chengdu, 610081, Sichuan Province, China
- ChaoBang Gao: College of Information Science and Technology, Chengdu University, 1 Shiling Shang Street, Chengdu, 610106, Sichuan Province, China
- Ci Li: Department of Pathology, Affiliated Hospital of Chengdu University, 82 2nd N Section of Second Ring Rd, Chengdu, 610081, Sichuan Province, China
- Lin Chen, Jin Yang: Department of Urology Surgery, Affiliated Hospital of Chengdu University, 82 2nd N Section of Second Ring Rd, Chengdu, 610081, Sichuan Province, China
4
Amniotic fluid segmentation based on pixel classification using local window information and distance angle pixel. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2021.107196]
5
Wang S, Liu M, Lian J, Shen D. Boundary Coding Representation for Organ Segmentation in Prostate Cancer Radiotherapy. IEEE Trans Med Imaging 2021; 40:310-320. [PMID: 32956051] [PMCID: PMC8202780] [DOI: 10.1109/tmi.2020.3025517]
Abstract
Accurate segmentation of the prostate and organs at risk (OARs, e.g., bladder and rectum) in male pelvic CT images is a critical step in prostate cancer radiotherapy. Unfortunately, unclear organ boundaries and large shape variation make the segmentation task very challenging. Previous studies usually used representations defined directly on the unclear boundaries as context information to guide segmentation; such boundary representations may not be discriminative enough, limiting the performance improvement. To this end, we propose a novel boundary coding network (BCnet) that learns a discriminative representation of the organ boundary and uses it as context information to guide segmentation. Specifically, BCnet follows a two-stage learning strategy. 1) Boundary coding representation learning: two sub-networks, supervised by dilation and erosion masks derived from the manually delineated organ mask, are first trained separately to learn the spatial-semantic context near the organ boundary. The organ boundary is then encoded from the predictions of these two sub-networks, with a multi-atlas-based refinement strategy transferring knowledge from the training data to inference. 2) Organ segmentation: the boundary coding representation, together with the image patches, is used as context information to train the final segmentation network. Experimental results on a large and diverse male pelvic CT dataset show that our method achieves superior performance compared with several state-of-the-art methods.
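The dilation/erosion supervision masks at the heart of the boundary coding step can be illustrated with plain numpy. This is a toy 2D sketch of the general idea (dilation minus erosion yields a band straddling the organ contour), not BCnet itself; the function names and the 4-neighbourhood structuring element are illustrative choices:

```python
import numpy as np

def binary_dilate(mask, it=1):
    """One-pixel 4-neighbourhood dilation, repeated `it` times (numpy only).
    np.roll wraps at image borders, which is harmless when the mask has an
    empty margin, as in this toy example."""
    m = mask.astype(bool)
    for _ in range(it):
        m = (m
             | np.roll(m, -1, axis=0) | np.roll(m, 1, axis=0)
             | np.roll(m, -1, axis=1) | np.roll(m, 1, axis=1))
    return m

def binary_erode(mask, it=1):
    # erosion is dilation of the complement, complemented again
    return ~binary_dilate(~mask.astype(bool), it)

def boundary_band(mask, width=1):
    """Dilation minus erosion: a thin band straddling the organ contour."""
    return binary_dilate(mask, width) & ~binary_erode(mask, width)

organ = np.zeros((7, 7), dtype=bool)
organ[2:5, 2:5] = True                  # a 3x3 toy "organ"
band = boundary_band(organ, width=1)    # contour band; interior pixel (3,3) excluded
```

In the paper's setting the two masks supervise separate sub-networks rather than being differenced directly, but the band shows what spatial region the learned boundary representation targets.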
6
A tree ensemble-based two-stage model for advanced-stage colorectal cancer survival prediction. Inf Sci (N Y) 2019. [DOI: 10.1016/j.ins.2018.09.046]
7
Zhang K, Liu X, Jiang J, Li W, Wang S, Liu L, Zhou X, Wang L. Prediction of postoperative complications of pediatric cataract patients using data mining. J Transl Med 2019; 17:2. [PMID: 30602368] [PMCID: PMC6317183] [DOI: 10.1186/s12967-018-1758-2]
Abstract
BACKGROUND The common treatment for pediatric cataracts is to replace the cloudy lens with an artificial one. However, patients may suffer complications within one year after surgery, namely severe lens proliferation into the visual axis (SLPVA) and abnormally high intraocular pressure (AHIP), and the factors causing these complications are unknown. METHODS The Apriori algorithm was employed to find association rules related to the complications. Random forest (RF) and naïve Bayes (NB) classifiers were used to predict the complications, with datasets preprocessed by the synthetic minority oversampling technique (SMOTE). Genetic feature selection was used to identify the features truly related to the complications. RESULTS Average classification accuracies in the three binary classification problems exceeded 75%, and the relationship between classification performance and the number of random forest trees was studied. All attributes except gender and age at surgery (AS) were related to the overall complications; all except secondary IOL placement, operation mode, AS, and area of cataract were related to SLPVA; and all except gender, operation mode, and laterality were related to AHIP. The association rules related to the complications were then mined. An additional 50 cases were used to test RF and NB, both of which achieved accuracies above 65% on the three classification problems. Finally, a web server was developed to assist doctors. CONCLUSIONS The postoperative complications of pediatric cataract patients can be predicted, the factors related to the complications can be identified, and the mined association rules can serve as a reference for doctors.
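SMOTE, used here to preprocess the imbalanced datasets, synthesizes minority-class samples by interpolating between a minority sample and one of its k nearest minority neighbours. A minimal numpy sketch of that core idea, not the reference implementation; `smote_like` and its parameters are illustrative names:

```python
import numpy as np

def smote_like(X_min, n_new, k=3, rng=None):
    """Generate synthetic minority samples by interpolating toward a random
    one of the k nearest minority neighbours (the core SMOTE idea)."""
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, dtype=float)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]          # k nearest neighbours per sample
    out = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        j = rng.integers(len(X_min))           # pick a random minority sample
        nb = X_min[rng.choice(nn[j])]          # and one of its neighbours
        lam = rng.random()                     # interpolation factor in [0, 1)
        out[i] = X_min[j] + lam * (nb - X_min[j])
    return out

# toy 2D minority class of 4 samples, oversampled to 6 synthetic points
minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
synth = smote_like(minority, n_new=6, k=2, rng=0)
```

Because each synthetic point lies on a segment between two minority samples, it stays inside the minority class's convex hull, which is what makes the oversampling plausible rather than pure noise.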
Affiliation(s)
- Kai Zhang, Jiewei Jiang: School of Computer Science and Technology, Xidian University, No. 2 South Taibai Rd, Xi'an, 710071, China; State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China
- Xiyang Liu, Liming Wang: School of Computer Science and Technology, Xidian University, Xi'an, 710071, China; Institute of Software Engineering, Xidian University, Xi'an, 710071, China; School of Software, Xidian University, Xi'an, 710071, China
- Wangting Li: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China
- Shuai Wang: School of Software, Xidian University, Xi'an, 710071, China
- Lin Liu: School of Computer Science and Technology, Xidian University, Xi'an, 710071, China
- Xiaojing Zhou: School of Computer Science, Northwestern Polytechnical University, Xi'an, 710072, China
8
Amiri S, Ali Mahjoub M, Rekik I. Tree-based Ensemble Classifier Learning for Automatic Brain Glioma Segmentation. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.05.112]
9
Kamiya N, Li J, Kume M, Fujita H, Shen D, Zheng G. Fully automatic segmentation of paraspinal muscles from 3D torso CT images via multi-scale iterative random forest classifications. Int J Comput Assist Radiol Surg 2018; 13:1697-1706. [PMID: 30173335] [DOI: 10.1007/s11548-018-1852-1]
Abstract
PURPOSE To develop and validate a fully automatic method for segmenting the paraspinal muscles from 3D torso CT images. METHODS We propose a novel learning-based method to address this challenging problem. Multi-scale iterative random forest classifications with multi-source information are employed to speed up segmentation and improve accuracy; the multi-source input includes the original torso CT images and, in later iterations, the iteratively estimated and refined probability maps of the paraspinal muscles. We validated the method on 20 torso CT scans with associated manual segmentations, randomly partitioned into two evenly distributed groups, one for training and one for testing. RESULTS The proposed method achieved a mean Dice coefficient of 93.0% and took on average 46.5 s to segment a 3D torso CT image, with image sizes ranging from [Formula: see text] voxels to [Formula: see text] voxels. CONCLUSIONS Our fully automatic, learning-based method accurately segments the paraspinal muscles from 3D torso CT images, generating segmentation results better than those achieved by state-of-the-art methods.
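The Dice coefficient reported above measures the voxel overlap between the automatic and manual masks. A minimal numpy sketch (the function name and toy masks are illustrative):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:          # both masks empty: define as perfect agreement
        return 1.0
    return 2.0 * np.sum(a & b) / denom

# toy 2D "segmentations": automatic covers 4 pixels, manual covers 6
auto = np.zeros((4, 4), dtype=bool); auto[1:3, 1:3] = True
manual = np.zeros((4, 4), dtype=bool); manual[1:3, 1:4] = True
print(dice(auto, manual))  # 0.8
```

A mean Dice of 93.0%, as above, thus means the automatic masks overlap the manual ones almost completely, weighted by mask size.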
Affiliation(s)
- Naoki Kamiya: School of Information Science and Technology, Aichi Prefectural University, Nagakute, Japan
- Jing Li, Guoyan Zheng: Institute for Surgical Technology and Biomechanics, University of Bern, Bern, Switzerland
- Masanori Kume, Hiroshi Fujita: Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University, Gifu, Japan
- Dinggang Shen: Department of Brain and Cognitive Engineering, Korea University, Seoul, 02841, Republic of Korea
10
Rohrbach D, Wodlinger B, Wen J, Mamou J, Feleppa E. High-Frequency Quantitative Ultrasound for Imaging Prostate Cancer Using a Novel Micro-Ultrasound Scanner. Ultrasound Med Biol 2018; 44:1341-1354. [PMID: 29627083] [DOI: 10.1016/j.ultrasmedbio.2018.02.014]
Abstract
Currently, biopsies guided by transrectal ultrasound (TRUS) are the only method for definitive diagnosis of prostate cancer. Studies by our group suggest that quantitative ultrasound (QUS) could provide a more sensitive means of targeting biopsies and directing focal treatments to cancer-suspicious regions in the prostate. Previous studies used ultrasound signals at typical clinical frequencies, i.e., in the 6-MHz range. In the present study, a 29-MHz TRUS micro-ultrasound system and transducer (ExactVu micro-ultrasound, Exact Imaging, Markham, Canada) was used to acquire radio-frequency data from 163 patients immediately before 12-core biopsy procedures, comprising 1956 cores. These retrospective data are a subset of data acquired in an ongoing, multisite, 2000-patient, randomized clinical trial (clinicaltrials.gov NCT02079025). Spectrum-based QUS estimates of effective scatterer diameter (ESD), effective acoustic concentration (EAC), midband (M), intercept (I) and slope (S), as well as envelope statistics employing a Nakagami distribution, were used to train linear discriminant classifiers (LDCs) and support vector machines (SVMs). Classifier performance was assessed using area-under-the-curve (AUC) values obtained from receiver operating characteristic (ROC) analyses with 10-fold cross-validation. A combination of ESD and EAC parameters resulted in an AUC value of 0.77 using an LDC; adding Nakagami-µ or prostate-specific antigen (PSA) values as features increased the AUC to 0.79. An SVM produced an AUC value of 0.77 using a combination of envelope and spectral QUS estimates. The best classification, an AUC value of 0.81, was produced by an LDC combining envelope statistics, PSA, ESD and EAC. In a previous study, B-mode-based scoring using the PRI-MUS protocol produced a maximal AUC value of 0.74 for higher Gleason scores (GS > 7) when read by an expert. Our initial results, with AUC values of 0.81, are very encouraging for developing a new, predominantly user-independent prostate-cancer risk-assessment tool.
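The AUC values compared above can be computed without an explicit ROC sweep, using the Mann-Whitney interpretation of AUC: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A numpy-only sketch under that interpretation (the function name and toy scores are illustrative, not the study's data):

```python
import numpy as np

def auc_rank(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    random positive case scores higher than a random negative one."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # count pairwise wins; ties count half
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# toy risk scores: higher = more cancer-suspicious
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
print(auc_rank(scores, labels))  # 0.9375
```

On this toy data one positive case (0.4) scores below one negative case (0.6), so 15 of the 16 positive-negative pairs are ordered correctly, giving AUC 15/16.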
Affiliation(s)
- Daniel Rohrbach, Jonathan Mamou, Ernest Feleppa: Lizzi Center for Biomedical Engineering, Riverside Research, New York, NY 10038, USA
11
Panda R, Puhan N, Rao A, Padhy D, Panda G. Automated retinal nerve fiber layer defect detection using fundus imaging in glaucoma. Comput Med Imaging Graph 2018; 66:56-65. [DOI: 10.1016/j.compmedimag.2018.02.006]
12
Ren X, Xiang L, Nie D, Shao Y, Zhang H, Shen D, Wang Q. Interleaved 3D-CNNs for joint segmentation of small-volume structures in head and neck CT images. Med Phys 2018; 45:2063-2075. [PMID: 29480928] [DOI: 10.1002/mp.12837]
Abstract
PURPOSE Accurate 3D image segmentation is a crucial step in radiation therapy planning for head and neck tumors. Segmentations are currently obtained by manual outlining of tissues, a tedious and time-consuming procedure. Automatic segmentation provides an alternative, but is often difficult for small tissues (i.e., the chiasm and optic nerves in head and neck CT images) because of their small volumes and highly diverse appearance and shape. In this work, we propose to interleave multiple 3D convolutional neural networks (3D-CNNs) to automatically segment small tissues in head and neck CT images. METHODS A 3D-CNN was designed to segment each structure of interest. To make full use of the image appearance information, multiscale patches are extracted to describe the center voxel under consideration and then input to the CNN architecture. Next, as neighboring tissues are often highly related physiologically and anatomically, we interleave the CNNs designated for the individual tissues, so that the tentative segmentation of a specific tissue can help refine the segmentations of its neighbors. Finally, as more CNNs are interleaved and cascaded, a complex network of CNNs is derived in which all tissues are jointly segmented and iteratively refined. RESULTS Our method was validated on a set of 48 CT images from the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2015 Challenge. The Dice coefficient (DC) and the 95% Hausdorff distance (95HD) were computed to measure segmentation accuracy. The proposed method achieves higher accuracy (average DC: 0.58 ± 0.17 for the optic chiasm and 0.71 ± 0.08 for the optic nerve; 95HD: 2.81 ± 1.56 mm for the optic chiasm and 2.23 ± 0.90 mm for the optic nerve) than the MICCAI challenge winner (average DC: 0.38 for the optic chiasm and 0.68 for the optic nerve; 95HD: 3.48 mm for the optic chiasm and 2.48 mm for the optic nerve). CONCLUSION An accurate and automatic segmentation method has been proposed for small tissues in head and neck CT images, which is important for radiotherapy planning.
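The 95% Hausdorff distance (95HD) used above is the 95th percentile of surface-to-surface distances, which makes it robust to a few outlier points compared with the maximum (classical) Hausdorff distance. A minimal numpy sketch over point sets (the function name and toy contours are illustrative):

```python
import numpy as np

def hd95(pts_a, pts_b):
    """95th-percentile symmetric Hausdorff distance between two point sets."""
    a = np.asarray(pts_a, dtype=float)
    b = np.asarray(pts_b, dtype=float)
    # full pairwise distance matrix between the two surfaces
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)   # each point in A to its nearest point in B
    d_ba = d.min(axis=0)   # and vice versa
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)

# toy contours: two parallel rows of points, 1 mm apart
a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = a + np.array([0.0, 1.0])
print(hd95(a, b))  # 1.0
```

In practice the point sets would be the surface voxels of the automatic and manual segmentations, with coordinates scaled by voxel spacing so the result is in mm, as in the table above.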
Affiliation(s)
- Xuhua Ren, Lei Xiang, Qian Wang: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
- Dong Nie: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Yeqin Shao: Nantong University, Nantong, Jiangsu, 226019, China
- Huan Zhang: Department of Radiology, Ruijin Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Dinggang Shen: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, 02841, Korea
13
Qian C, Yang X. An integrated method for atherosclerotic carotid plaque segmentation in ultrasound image. Comput Methods Programs Biomed 2018; 153:19-32. [PMID: 29157451] [DOI: 10.1016/j.cmpb.2017.10.002]
Abstract
BACKGROUND AND OBJECTIVE Carotid artery atherosclerosis is an important cause of stroke, and ultrasound imaging is widely used in its diagnosis. Segmenting atherosclerotic carotid plaque in ultrasound images is therefore an important task, and accurate plaque segmentation helps measure carotid plaque burden. In this paper, we propose and evaluate a novel learning-based integrated framework for plaque segmentation. METHODS Four classification algorithms (support vector machine with a linear kernel, support vector machine with a radial basis function kernel, AdaBoost, and random forest), combined with the auto-context iterative algorithm, were employed to integrate features from the ultrasound images with the iteratively estimated and refined probability maps for pixel-wise classification. Plaque segmentation was then performed on the generated probability map. The performance of the four learning-based segmentation methods was tested on 29 B-mode ultrasound images and evaluated by sensitivity, specificity, Dice similarity coefficient, overlap index, error of area, absolute error of area, point-to-point distance, Hausdorff point-to-point distance, and the area under the ROC curve. RESULTS The method integrating random forest with an auto-context model obtained the best results (sensitivity 80.4 ± 8.4%, specificity 96.5 ± 2.0%, Dice similarity coefficient 81.0 ± 4.1%, overlap index 68.3 ± 5.8%, error of area -1.02 ± 18.3%, absolute error of area 14.7 ± 10.9%, point-to-point distance 0.34 ± 0.10 mm, Hausdorff point-to-point distance 1.75 ± 1.02 mm, and area under the ROC curve 0.897), comparable to or better than existing methods. CONCLUSIONS The proposed learning-based integrated framework could be useful for atherosclerotic carotid plaque segmentation and hence for measuring carotid plaque burden.
Affiliation(s)
- Chunjun Qian: School of Science, Nanjing University of Science and Technology, Jiangsu, China
- Xiaoping Yang: School of Science, Nanjing University of Science and Technology, Jiangsu, China; Department of Mathematics, Nanjing University, Jiangsu, China
14
Segmenting hippocampal subfields from 3T MRI with multi-modality images. Med Image Anal 2017; 43:10-22. [PMID: 28961451] [DOI: 10.1016/j.media.2017.09.006]
Abstract
Hippocampal subfields play important roles in many brain activities. However, due to their small structural size and the low signal contrast and limited image resolution of 3T MR, automatic hippocampal subfield segmentation has been little explored. In this paper, we propose an automatic learning-based hippocampal subfield segmentation method using 3T multi-modality MR images, including structural MRI (T1, T2) and resting-state fMRI (rs-fMRI). Appearance features and relationship features are extracted to capture the appearance patterns in the structural MR images and the connectivity patterns in rs-fMRI, respectively. In the training stage, the extracted features are used to train a structured random forest classifier, which is further iteratively refined in an auto-context model using context features and the updated relationship features. In the testing stage, the extracted features are fed into the trained classifiers to predict the segmentation of each hippocampal subfield, and the predicted segmentation is iteratively refined by the trained auto-context model. To the best of our knowledge, this is the first work to address automatic hippocampal subfield segmentation using relationship features from rs-fMRI, designed to capture the connectivity patterns of the different subfields. The proposed method is validated on two datasets, with segmentation results quantitatively compared against manual labels using a leave-one-out strategy, demonstrating its effectiveness. From the experiments we find that (a) multi-modality features significantly improve subfield segmentation compared with any single modality, and (b) automatic segmentation results using 3T multi-modality MR images can be partially comparable to those using 7T T1 MRI.
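The auto-context loop described above (retraining a classifier with the previous round's probability map appended as an extra feature) can be sketched generically. This toy, numpy-only version substitutes a nearest-centroid rule for the paper's structured random forest; all names and the toy data are illustrative, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_stub(X, y):
    """Stand-in for the structured random forest: a nearest-centroid rule
    returning a probability-like score for class 1."""
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    def predict_proba(Xq):
        d0 = np.linalg.norm(Xq - c0, axis=1)
        d1 = np.linalg.norm(Xq - c1, axis=1)
        return d0 / (d0 + d1 + 1e-12)   # closer to c1 -> score nearer 1
    return predict_proba

def auto_context(X, y, n_iters=3):
    """Auto-context: each round appends the previous probability map as an
    extra feature and retrains the classifier."""
    feats, models = X, []
    for _ in range(n_iters):
        model = train_stub(feats, y)
        models.append(model)
        prob = model(feats)
        feats = np.column_stack([X, prob])   # context feature for next round
    return models

# toy two-class "voxel" features and the cascade replayed at test time
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(1.0, 0.1, (20, 2))])
y = np.r_[np.zeros(20), np.ones(20)]
models = auto_context(X, y)
feats = X
for m in models:                             # replay the trained cascade
    prob = m(feats)
    feats = np.column_stack([X, prob])
```

The same cascade structure applies whether the base learner is this stub or a structured random forest over image patches; the essential point is that later rounds see the earlier rounds' predictions as context.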