1. Stefano A. Challenges and limitations in applying radiomics to PET imaging: Possible opportunities and avenues for research. Comput Biol Med 2024; 179:108827. PMID: 38964244. DOI: 10.1016/j.compbiomed.2024.108827.
Abstract
Radiomics, the high-throughput extraction of quantitative imaging features from medical images, holds immense potential for advancing precision medicine in oncology and beyond. While radiomics applied to positron emission tomography (PET) imaging offers unique insights into tumor biology and treatment response, it is imperative to elucidate the challenges and constraints inherent in this domain to facilitate its translation into clinical practice. This review examines the challenges and limitations of applying radiomics to PET imaging, synthesizing findings from the last five years (2019-2023), and highlights the significance of addressing these challenges to realize the full clinical potential of radiomics in oncology and molecular imaging. A comprehensive search was conducted across multiple electronic databases, including PubMed, Scopus, and Web of Science, using keywords relevant to radiomics issues in PET imaging. Only studies published in peer-reviewed journals were eligible for inclusion in this review. Although many studies have highlighted the potential of radiomics in predicting treatment response, assessing tumor heterogeneity, enabling risk stratification, and supporting personalized therapy selection, various challenges regarding the practical implementation of the proposed models still need to be addressed. This review illustrates the challenges and limitations of radiomics in PET imaging across various cancer types, encompassing both phantom and clinical investigations. The analyzed studies highlight the importance of reproducible segmentation methods, standardized pre-processing and post-processing methodologies, and the need to create large multicenter studies registered in a centralized database to promote the continuous validation and clinical integration of radiomics into PET imaging.
Affiliation(s)
- Alessandro Stefano
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, Italy.
2. Wyatt JJ, Petrides G, Pearson RA, McCallum HM, Maxwell RJ. Impact of attenuation correction of radiotherapy hardware for positron emission tomography-magnetic resonance in ano-rectal radiotherapy patients. J Appl Clin Med Phys 2024; 25:e14193. PMID: 37922377. PMCID: PMC10962489. DOI: 10.1002/acm2.14193.
Abstract
BACKGROUND Positron Emission Tomography-Magnetic Resonance (PET-MR) scanners could improve ano-rectal radiotherapy planning through improved Gross Tumour Volume (GTV) delineation and by enabling dose painting strategies using metabolic measurements. This requires accurate quantitative PET images acquired in the radiotherapy treatment position. PURPOSE This study aimed to evaluate the impact on GTV delineation and metabolic parameter measurement of using novel Attenuation Correction (AC) maps that included the radiotherapy flat couch, coil bridge and anterior coil, to see if they were necessary. METHODS Seventeen ano-rectal radiotherapy patients received an 18F-FluoroDeoxyGlucose (FDG) PET-MR scan in the radiotherapy position. PET images were reconstructed without (CTACstd) and with (CTACcba) the radiotherapy hardware included. Both AC maps used the same Computed Tomography image for patient AC. Semi-manual and threshold GTVs were delineated on both PET images, the volumes compared and the Dice coefficient calculated. Metabolic parameters (Standardized Uptake Values SUVmax and SUVmean, and Total Lesion Glycolysis (TLG)) were compared using paired t-tests with a Bonferroni-corrected significance level of p = 0.05/8 = 0.006. RESULTS Differences in semi-manual GTV volumes between CTACcba and CTACstd approached statistical significance (difference -15.9% ± 1.6%, p = 0.007), with larger differences in low FDG-avid tumours (SUVmean < 8.5 g/mL). The CTACcba and CTACstd GTVs were concordant, with Dice coefficients of 0.89 ± 0.01 (manual) and 0.98 ± 0.00 (threshold). Metabolic parameters were significantly different, with SUVmax, SUVmean and TLG differences of -11.5% ± 0.3% (p < 0.001), -11.6% ± 0.3% (p < 0.001) and -13.7% ± 0.6% (p = 0.003), respectively. The TLG difference resulted in 1/8 rectal cancer patients changing prognosis group, based on literature TLG cut-offs, when using CTACcba rather than CTACstd. CONCLUSIONS This study suggests that using AC maps with the radiotherapy hardware included is feasible for patient imaging. The impact on tumour delineation was mixed and needs to be evaluated in larger cohorts. However, using AC of the radiotherapy hardware is important for situations where accurate metabolic measurements are required, such as dose painting and treatment prognostication.
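For readers who want to reproduce this type of comparison, the sketch below shows how the Dice coefficient, SUVmax, SUVmean and TLG could be computed from binary GTV masks and an SUV image, and how a Bonferroni-corrected paired t-test could be applied; the arrays, voxel volume and per-patient values are illustrative assumptions, not data from the study.

```python
import numpy as np
from scipy import stats

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def metabolic_parameters(suv, mask, voxel_volume_ml):
    """Return SUVmax, SUVmean and TLG (SUVmean x volume in mL) inside a mask."""
    vals = suv[mask]
    volume_ml = mask.sum() * voxel_volume_ml
    return vals.max(), vals.mean(), vals.mean() * volume_ml

# Toy SUV volume and two hypothetical GTVs (e.g. from CTACstd and CTACcba images)
suv_img = np.random.default_rng(0).gamma(2.0, 1.5, size=(16, 16, 16))
gtv_std = suv_img > 4.0
gtv_cba = suv_img > 4.5
print(dice(gtv_std, gtv_cba), metabolic_parameters(suv_img, gtv_cba, 0.2))

# Paired comparison of a metric across patients with the Bonferroni-corrected
# significance level used in the study (0.05 / 8 = 0.006)
suvmax_std = np.array([10.2, 7.8, 12.5, 9.1, 11.0, 8.4])  # hypothetical values
suvmax_cba = np.array([9.0, 6.9, 11.1, 8.0, 9.8, 7.5])
t_stat, p_value = stats.ttest_rel(suvmax_std, suvmax_cba)
print(p_value < 0.05 / 8)
```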
Affiliation(s)
- Jonathan J. Wyatt
- Translational and Clinical Research Institute, Newcastle University, Newcastle, UK
- Northern Centre for Cancer Care, Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle, UK
- George Petrides
- Translational and Clinical Research Institute, Newcastle University, Newcastle, UK
- Nuclear Medicine Department, Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle, UK
- Rachel A. Pearson
- Translational and Clinical Research Institute, Newcastle University, Newcastle, UK
- Northern Centre for Cancer Care, Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle, UK
- Hazel M. McCallum
- Translational and Clinical Research Institute, Newcastle University, Newcastle, UK
- Northern Centre for Cancer Care, Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle, UK
- Ross J. Maxwell
- Translational and Clinical Research Institute, Newcastle University, Newcastle, UK
3. Wyatt JJ, Kaushik S, Cozzini C, Pearson RA, Petrides G, Wiesinger F, McCallum HM, Maxwell RJ. Evaluating a radiotherapy deep learning synthetic CT algorithm for PET-MR attenuation correction in the pelvis. EJNMMI Phys 2024; 11:10. PMID: 38282050. PMCID: PMC11266329. DOI: 10.1186/s40658-024-00617-3.
Abstract
BACKGROUND Positron emission tomography-magnetic resonance (PET-MR) attenuation correction is challenging because the MR signal does not represent tissue density and conventional MR sequences cannot image bone. A novel zero echo time (ZTE) MR sequence has been previously developed which generates signal from cortical bone with images acquired in 65 s. This has been combined with a deep learning model to generate a synthetic computed tomography (sCT) for MR-only radiotherapy. This study aimed to evaluate this algorithm for PET-MR attenuation correction in the pelvis. METHODS Ten patients being treated with ano-rectal radiotherapy received an 18F-FDG PET-MR scan in the radiotherapy position. Attenuation maps were generated from the ZTE-based sCT (sCTAC) and the standard vendor-supplied MRAC. The radiotherapy planning CT scan was rigidly registered and cropped to generate a gold standard attenuation map (CTAC). PET images were reconstructed using each attenuation map and compared for standard uptake value (SUV) measurement, automatic thresholded gross tumour volume (GTV) delineation and GTV metabolic parameter measurement. The last was assessed for clinical equivalence to CTAC using two one-sided paired t tests with a significance level corrected for multiple testing of [Formula: see text]. Equivalence margins of [Formula: see text] were used. RESULTS Mean whole-image SUV differences were -0.02% (sCTAC) compared to -3.0% (MRAC), with larger differences in the bone regions (-0.5% to -16.3%). There was no difference in thresholded GTVs, with Dice similarity coefficients [Formula: see text]. However, there were larger differences in GTV metabolic parameters. Mean differences to CTAC in [Formula: see text] were [Formula: see text] (± standard error, sCTAC) and [Formula: see text] (MRAC), and [Formula: see text] (sCTAC) and [Formula: see text] (MRAC) in [Formula: see text]. The sCTAC was statistically equivalent to CTAC within a [Formula: see text] equivalence margin for [Formula: see text] and [Formula: see text] ([Formula: see text] and [Formula: see text]), whereas the MRAC was not ([Formula: see text] and [Formula: see text]). CONCLUSION Attenuation correction using this radiotherapy ZTE-based sCT algorithm was substantially more accurate than current MRAC methods with only a 40 s increase in MR acquisition time. This did not impact tumour delineation but did significantly improve the accuracy of whole-image and tumour SUV measurements, which were clinically equivalent to CTAC. This suggests PET images reconstructed with sCTAC would enable accurate quantitative PET images to be acquired on a PET-MR scanner.
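As a rough illustration of the "two one-sided tests" (TOST) equivalence check mentioned above, the sketch below tests whether paired percentage differences fall within a symmetric equivalence margin; the margin, significance level and data are placeholders, since the exact values appear only as formulas in the original article.

```python
import numpy as np
from scipy import stats

def tost_paired(differences, margin, alpha):
    """Two one-sided tests: is the mean paired difference within +/- margin?"""
    diffs = np.asarray(differences, dtype=float)
    # Reject H0: mean(diff) <= -margin  (i.e. show mean(diff) > -margin)
    _, p_lower = stats.ttest_1samp(diffs + margin, 0.0, alternative="greater")
    # Reject H0: mean(diff) >= +margin  (i.e. show mean(diff) < +margin)
    _, p_upper = stats.ttest_1samp(diffs - margin, 0.0, alternative="less")
    return max(p_lower, p_upper) < alpha  # equivalent only if both tests reject

# Hypothetical per-patient % differences in a SUV metric (sCTAC vs CTAC reference)
percent_diff = np.array([-0.4, 0.8, -1.2, 0.3, -0.6, 1.0, -0.2, 0.5, -0.9, 0.1])
print(tost_paired(percent_diff, margin=5.0, alpha=0.05))
```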
Affiliation(s)
- Jonathan J Wyatt
- Translation and Clinical Research Institute, Newcastle University, Newcastle upon Tyne, UK.
- Northern Centre for Cancer Care, Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, UK.
- Sandeep Kaushik
- GE Healthcare, Munich, Germany
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Rachel A Pearson
- Translation and Clinical Research Institute, Newcastle University, Newcastle upon Tyne, UK
- Northern Centre for Cancer Care, Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, UK
- George Petrides
- Nuclear Medicine Department, Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, UK
- Hazel M McCallum
- Translation and Clinical Research Institute, Newcastle University, Newcastle upon Tyne, UK
- Northern Centre for Cancer Care, Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, UK
- Ross J Maxwell
- Translation and Clinical Research Institute, Newcastle University, Newcastle upon Tyne, UK
4. Li L, Jiang C, Wang PSP, Zheng S. 3D PET/CT Tumor Co-Segmentation Based on Background Subtraction Hybrid Active Contour Model. Int J Pattern Recogn 2023; 37. DOI: 10.1142/s0218001423570069.
Abstract
Accurate tumor segmentation in medical images plays an important role in clinical diagnosis and disease analysis. However, medical images are usually complex, for example because of the low contrast of computed tomography (CT) or the low spatial resolution of positron emission tomography (PET). In practical radiotherapy planning, multimodal imaging technology such as PET/CT is often used: PET images provide basic metabolic information and CT images provide anatomical details. In this paper, we propose a 3D PET/CT tumor co-segmentation framework based on an active contour model. First, a new edge stop function (ESF) based on the PET and CT images is defined, which incorporates the grayscale standard deviation information of the image and is more effective for blurry medical image edges. Second, we propose a background subtraction model to address the problem of uneven grayscale levels in medical images. In addition, the level set evolution is computed with an additive operator splitting (AOS) scheme, whose solution is unconditionally stable and eliminates the dependence on the time step size. Experimental results on a dataset of 50 pairs of PET/CT images of non-small cell lung cancer patients show that the proposed method performs well for tumor segmentation.
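For orientation, a classical intensity-based edge stop function has the form g = 1 / (1 + |grad(G_sigma * I)|^2), approaching zero at strong edges; the paper's ESF additionally mixes PET and CT information with local grayscale standard deviation. The sketch below shows only the classical form plus a naive weighted PET/CT combination, which is an assumption for illustration and not the authors' formulation.

```python
import numpy as np
from scipy import ndimage

def edge_stop(image, sigma=1.0):
    """Classical edge stop function: close to 1 in flat regions, near 0 at edges."""
    grad = ndimage.gaussian_gradient_magnitude(image.astype(float), sigma)
    return 1.0 / (1.0 + grad ** 2)

def combined_edge_stop(pet, ct, w_pet=0.5, sigma=1.0):
    """Naive weighted combination of PET- and CT-derived edge stop maps."""
    return w_pet * edge_stop(pet, sigma) + (1.0 - w_pet) * edge_stop(ct, sigma)

# Tiny synthetic example: a bright square present on both modalities
pet = np.zeros((64, 64)); pet[20:40, 20:40] = 8.0
ct = np.zeros((64, 64)); ct[20:40, 20:40] = 60.0
g = combined_edge_stop(pet, ct)
print(g.min(), g.max())  # small values along the square boundary, ~1 elsewhere
```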
Affiliation(s)
- Laquan Li
- School of Science, Chongqing University of Posts and Telecommunications, Chongqing 400065, P. R. China
- College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, P. R. China
| | - Chuangbo Jiang
- School of Science, Chongqing University of Posts and Telecommunications, Chongqing 400065, P. R. China
| | - Patrick Shen-Pei Wang
- College of Computer and Information Science, Northeastern University, Boston 02115, USA
| | - Shenhai Zheng
- College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, P. R. China
- College of Computer Science, Chongqing University, Chongqing 400044, P. R. China
5. Nigam R, Field M, Harris G, Barton M, Carolan M, Metcalfe P, Holloway L. Automated detection, delineation and quantification of whole-body bone metastasis using FDG-PET/CT images. Phys Eng Sci Med 2023; 46:851-863. PMID: 37126152. DOI: 10.1007/s13246-023-01258-z.
Abstract
Non-small cell lung cancer (NSCLC) patients with metastatic spread of disease to the bone have high morbidity and mortality. Stereotactic ablative body radiotherapy increases the progression-free survival and overall survival of these patients with oligometastases. FDG-PET/CT, a functional imaging technique combining positron emission tomography (PET) with 18F-fluorodeoxyglucose (FDG) and computed tomography (CT), provides improved staging and identification of treatment response. It is also associated with a reduction in the size of the delineated radiotherapy tumour volume compared with CT-based contouring, thus allowing dose escalation to the target volume with lower doses to the surrounding organs at risk. FDG-PET/CT is increasingly being used for the clinical management of NSCLC patients undergoing radiotherapy and has shown high sensitivity and specificity for the detection of bone metastases in these patients. Here, we present a software tool for detection, delineation and quantification of bone metastases using FDG-PET/CT images. The tool extracts standardised uptake values (SUV) from FDG-PET images for auto-segmentation of bone lesions and calculates the volume of each lesion and the associated mean and maximum SUV. The tool also allows automatic statistical validation of the auto-segmented bone lesions against the manual contours of a radiation oncologist. A retrospective review of FDG-PET/CT scans of more than 30 candidate NSCLC patients was performed, and nine patients with one or more metastatic bone lesions were selected for the present study. The SUV threshold prediction model was designed by splitting the cohort of patients into 'development' and 'validation' subsets. The development cohort yielded an optimum SUV threshold of 3.0 for automatic detection of bone metastases using FDG-PET/CT images. Applying the derived optimum SUV threshold to the validation cohort demonstrated that auto-segmented and manually contoured bone lesions showed strong concordance for the volume of bone lesions (r = 0.993) and the number of detected lesions (r = 0.996). The tool has various applications in radiotherapy, including but not limited to studies determining the optimum SUV threshold for accurate and standardised delineation of bone lesions, and scientific studies of large patient populations, for instance to investigate the number of metastatic lesions that can be treated safely with an ablative dose of radiotherapy without exceeding normal tissue toxicity.
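A minimal sketch of the kind of SUV-thresholded auto-segmentation described above: the SUV volume is thresholded, connected components are labelled, and per-lesion volume, mean SUV and maximum SUV are reported. The threshold of 3.0 is taken from the abstract; the array names, voxel volume and the omission of any anatomical restriction to bone are simplifying assumptions.

```python
import numpy as np
from scipy import ndimage

def segment_lesions(suv_volume, suv_threshold=3.0, voxel_volume_ml=0.1):
    """Label connected SUV >= threshold regions and report per-lesion statistics."""
    labels, n_lesions = ndimage.label(suv_volume >= suv_threshold)
    lesions = []
    for lesion_id in range(1, n_lesions + 1):
        mask = labels == lesion_id
        vals = suv_volume[mask]
        lesions.append({
            "volume_ml": mask.sum() * voxel_volume_ml,
            "suv_mean": float(vals.mean()),
            "suv_max": float(vals.max()),
        })
    return labels, lesions

# Hypothetical toy volume containing two "lesions"
suv = np.zeros((32, 32, 32))
suv[5:8, 5:8, 5:8] = 4.5
suv[20:24, 20:24, 20:24] = 6.0
_, found = segment_lesions(suv)
print(len(found), found[0]["suv_max"])
```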
Affiliation(s)
- R Nigam
- Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW, 2522, Australia.
- Ingham Institute for Applied Medical Research, Liverpool, NSW, 2170, Australia.
- Illawarra Cancer Care Centre, Wollongong Hospital, Wollongong, NSW, 2500, Australia.
- M Field
- Ingham Institute for Applied Medical Research, Liverpool, NSW, 2170, Australia
- Liverpool and Macarthur Cancer Therapy Centre, Liverpool, NSW, 2170, Australia
- South Western Sydney Clinical Campus, School of Clinical Medicine, University of New South Wales, Sydney, NSW, Australia
- G Harris
- Chris O'Brien Lifehouse, Camperdown, NSW, 2050, Australia
- M Barton
- Ingham Institute for Applied Medical Research, Liverpool, NSW, 2170, Australia
- Liverpool and Macarthur Cancer Therapy Centre, Liverpool, NSW, 2170, Australia
- South Western Sydney Clinical Campus, School of Clinical Medicine, University of New South Wales, Sydney, NSW, Australia
- M Carolan
- Illawarra Cancer Care Centre, Wollongong Hospital, Wollongong, NSW, 2500, Australia
- P Metcalfe
- Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW, 2522, Australia
- Ingham Institute for Applied Medical Research, Liverpool, NSW, 2170, Australia
- L Holloway
- Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW, 2522, Australia
- Ingham Institute for Applied Medical Research, Liverpool, NSW, 2170, Australia
- Liverpool and Macarthur Cancer Therapy Centre, Liverpool, NSW, 2170, Australia
- South Western Sydney Clinical Campus, School of Clinical Medicine, University of New South Wales, Sydney, NSW, Australia
- Institute of Medical Physics, University of Sydney, Camperdown, NSW, 2505, Australia
6. Liu Y, Wei X, Feng X, Liu Y, Feng G, Du Y. Repeatability of radiomics studies in colorectal cancer: a systematic review. BMC Gastroenterol 2023; 23:125. PMID: 37059990. PMCID: PMC10105401. DOI: 10.1186/s12876-023-02743-1.
Abstract
BACKGROUND Recently, radiomics has been widely used in colorectal cancer, but many variable factors affect the repeatability of radiomics research. This review aims to analyze the repeatability of radiomics studies in colorectal cancer and to evaluate the current status of radiomics in the field of colorectal cancer. METHODS The studies included in this review were identified by searching the PubMed and Embase databases. Each included study was then evaluated using the Radiomics Quality Score (RQS). We analyzed the factors that may affect repeatability in the radiomics workflow and discussed the repeatability of the included studies. RESULTS A total of 188 studies were included in this review, of which only two (2/188, 1.06%) controlled for the influence of individual factors. In addition, the median RQS was 11 (out of 36; range -1 to 27). CONCLUSIONS The RQS scores were moderately low, and most studies did not consider the repeatability of radiomics features, especially with respect to intra-individual factors, scanners, and scanning parameters. To improve the generalization of radiomics models, it is necessary to further control the variable factors affecting repeatability.
Affiliation(s)
- Ying Liu
- School of Medical Imaging, North Sichuan Medical College, Sichuan Province, Nanchong City, 637000, China
- Xiaoqin Wei
- School of Medical Imaging, North Sichuan Medical College, Sichuan Province, Nanchong City, 637000, China
- Yan Liu
- Department of Radiology, the Affiliated Hospital of North Sichuan Medical College, 1 Maoyuannan Road, Sichuan Province, 637000, Nanchong City, China
- Guiling Feng
- Department of Radiology, the Affiliated Hospital of North Sichuan Medical College, 1 Maoyuannan Road, Sichuan Province, 637000, Nanchong City, China
- Yong Du
- Department of Radiology, the Affiliated Hospital of North Sichuan Medical College, 1 Maoyuannan Road, Sichuan Province, 637000, Nanchong City, China.
7. Zhang J, Jiang H, Shi T. ASE-Net: A tumor segmentation method based on image pseudo enhancement and adaptive-scale attention supervision module. Comput Biol Med 2023; 152:106363. PMID: 36516579. DOI: 10.1016/j.compbiomed.2022.106363.
Abstract
Fluorine-18 (18F) fluorodeoxyglucose positron emission tomography/computed tomography (PET/CT) is the imaging method of choice for the diagnosis and treatment of many cancers. However, factors such as the low contrast of organ and tissue images and the varying scale of tumors pose major obstacles to accurate tumor segmentation. In this work, we propose ASE-Net, a novel model for multimodality tumor segmentation. First, we propose a pseudo-enhanced CT image generation method based on metabolic intensity; the generated pseudo-enhanced CT images serve as an additional input, which reduces the network's burden of learning the spatial correspondence between PET and CT and increases the discriminability of the structural positions corresponding to high- and low-metabolism regions. Second, unlike previous networks that directly segment tumors of all scales, we propose an Adaptive-Scale Attention Supervision Module at the skip connections; after combining the results of all paths, tumors of different scales are given different receptive fields. Finally, a Dual Path Block is used as the backbone of our network to leverage residual learning for feature reuse and dense connections for exploring new features. Experimental results on two clinical PET/CT datasets demonstrate the effectiveness of the proposed network, which achieves Dice Similarity Coefficients of 78.56% and 72.57%, respectively, outperforming state-of-the-art network models for both large and small tumors. The proposed model will help pathologists formulate more accurate diagnoses by providing reference opinions during diagnosis, consequently improving patient survival rates.
Affiliation(s)
- Junzhi Zhang
- Software College, Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, 110169, Liaoning, China
- Huiyan Jiang
- Software College, Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, 110169, Liaoning, China; Key Laboratory of Intelligent Computing in Biomedical Image, Ministry of Education, Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, 110169, Liaoning, China.
- Tianyu Shi
- Software College, Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, 110169, Liaoning, China
8. Huang Z, Zou S, Wang G, Chen Z, Shen H, Wang H, Zhang N, Zhang L, Yang F, Wang H, Liang D, Niu T, Zhu X, Hu Z. ISA-Net: Improved spatial attention network for PET-CT tumor segmentation. Comput Methods Programs Biomed 2022; 226:107129. PMID: 36156438. DOI: 10.1016/j.cmpb.2022.107129.
Abstract
BACKGROUND AND OBJECTIVE Achieving accurate and automated tumor segmentation plays an important role in both clinical practice and radiomics research. Segmentation in medicine is currently often performed manually by experts, which is a laborious, expensive and error-prone task. Manual annotation relies heavily on the experience and knowledge of these experts, and there is considerable intra- and interobserver variation. Therefore, it is of great significance to develop a method that can automatically segment tumor target regions. METHODS In this paper, we propose a deep learning segmentation method based on multimodal positron emission tomography-computed tomography (PET-CT), which combines the high sensitivity of PET and the precise anatomical information of CT. We design an improved spatial attention network (ISA-Net) to increase the accuracy of PET or CT in detecting tumors, which uses multi-scale convolution operations to extract feature information and can highlight tumor-region location information while suppressing non-tumor-region location information. In addition, our network uses dual-channel inputs in the encoding stage and fuses them in the decoding stage, which can take advantage of the differences and complementarities between PET and CT. RESULTS We validated the proposed ISA-Net method on two clinical datasets, a soft tissue sarcoma (STS) dataset and a head and neck tumor (HECKTOR) dataset, and compared it with other attention methods for tumor segmentation. The Dice Similarity Coefficient (DSC) scores of 0.8378 on the STS dataset and 0.8076 on the HECKTOR dataset show that the ISA-Net method achieves better segmentation performance and better generalization. CONCLUSIONS The method proposed in this paper is based on multi-modal medical image tumor segmentation and can effectively utilize the differences and complementarities of different modalities. The method can also be applied to other multi-modal or single-modal data with proper adjustment.
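To make the idea of a spatial attention block concrete, here is a generic PyTorch sketch that weights each spatial location of a fused PET/CT feature map using channel-wise average and max pooling followed by a convolution and a sigmoid. This is a standard construction shown for illustration only; it is not the ISA-Net architecture itself.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Generic spatial attention: learn a per-location weight in [0, 1]."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_pool = x.mean(dim=1, keepdim=True)        # channel-wise average
        max_pool = x.max(dim=1, keepdim=True).values  # channel-wise maximum
        attention = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * attention                          # re-weight the feature map

# Toy usage: a feature map extracted from a dual-channel (PET, CT) input
features = torch.randn(1, 32, 64, 64)
print(SpatialAttention()(features).shape)  # torch.Size([1, 32, 64, 64])
```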
Affiliation(s)
- Zhengyong Huang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, 101408, China
- Sijuan Zou
- Department of Nuclear Medicine and PET, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430000, China
- Guoshuai Wang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, 101408, China
- Zixiang Chen
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, 518055, China
- Hao Shen
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, 101408, China
- Haiyan Wang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, 101408, China
- Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, 518055, China
- Lu Zhang
- Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions,Shenzhen, 518055, China
- Fan Yang
- Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions,Shenzhen, 518055, China
- Haining Wang
- United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, 518045, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, 518055, China
- Tianye Niu
- Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, 518118, China
- Xiaohua Zhu
- Department of Nuclear Medicine and PET, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430000, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, 518055, China.
9. Positron Emission Tomography Image Segmentation Based on Atanassov’s Intuitionistic Fuzzy Sets. Appl Sci (Basel) 2022. DOI: 10.3390/app12104865.
Abstract
In this paper, we present an approach to fully automate tumor delineation in positron emission tomography (PET) images. PET images play a major role in medicine for in vivo imaging in oncology (PET images are used to evaluate oncology patients, detecting emitted photons from a radiotracer localized in abnormal cells). PET image tumor delineation plays a vital role both in pre- and post-treatment stages. The low spatial resolution and high noise characteristics of PET images increase the challenge in PET image segmentation. Despite the difficulties and known limitations, several image segmentation approaches have been proposed. This paper introduces a new unsupervised approach to perform tumor delineation in PET images using Atanassov’s intuitionistic fuzzy sets (A-IFSs) and restricted dissimilarity functions. Moreover, the implementation of this methodology is presented and tested against other existing methodologies. The proposed algorithm increases the accuracy of tumor delineation in PET images, and the experimental results show that the proposed method outperformed all methods tested.
10. Wang X, Jemaa S, Fredrickson J, Coimbra AF, Nielsen T, De Crespigny A, Bengtsson T, Carano RAD. Heart and bladder detection and segmentation on FDG PET/CT by deep learning. BMC Med Imaging 2022; 22:58. PMID: 35354384. PMCID: PMC8977865. DOI: 10.1186/s12880-022-00785-7.
Abstract
Purpose Positron emission tomography (PET)/computed tomography (CT) has been extensively used to quantify metabolically active tumors in various oncology indications. However, FDG-PET/CT often encounters false positives in tumor detection due to 18F-fluorodeoxyglucose (FDG) accumulation in the heart and bladder, which often exhibit FDG uptake similar to that of tumors. Thus, it is necessary to eliminate this source of physiological noise. Major challenges for this task include: (1) large inter-patient variability in the appearance of the heart and bladder; (2) the size and shape of the bladder or heart may appear different on PET and CT; (3) tumors can be very close or connected to the heart or bladder. Approach A deep learning based approach is proposed to segment the heart and bladder on whole-body PET/CT automatically. Two 3D U-Nets were developed separately to segment the heart and bladder, where each network receives the PET and CT as a multi-modal input. Data sets were obtained from retrospective clinical trials and include 575 PET/CT scans for heart segmentation and 538 for bladder segmentation. Results The models were evaluated on a test set from an independent trial and achieved a Dice Similarity Coefficient (DSC) of 0.96 for heart segmentation and 0.95 for bladder segmentation, and an Average Surface Distance (ASD) of 0.44 mm for the heart and 0.90 mm for the bladder. Conclusions This methodology could be a valuable component of the FDG-PET/CT data processing chain, removing physiological FDG noise associated with heart and/or bladder accumulation prior to image analysis by manual, semi-automated or automated tumor analysis methods.
11. Jiang Y, Xu S, Fan H, Qian J, Luo W, Zhen S, Tao Y, Sun J, Lin H. ALA-Net: Adaptive Lesion-Aware Attention Network for 3D Colorectal Tumor Segmentation. IEEE Trans Med Imaging 2021; 40:3627-3640. PMID: 34197319. DOI: 10.1109/tmi.2021.3093982.
Abstract
Accurate and reliable segmentation of colorectal tumors and surrounding colorectal tissues on 3D magnetic resonance images has critical importance in preoperative prediction, staging, and radiotherapy. Previous works simply combine multilevel features without aggregating representative semantic information and without compensating for the loss of spatial information caused by down-sampling. Therefore, they are vulnerable to noise from complex backgrounds and suffer from misclassification and target incompleteness-related failures. In this paper, we address these limitations with a novel adaptive lesion-aware attention network (ALA-Net) which explicitly integrates useful contextual information with spatial details and captures richer feature dependencies based on 3D attention mechanisms. The model comprises two parallel encoding paths. One of these is designed to explore global contextual features and enlarge the receptive field using a recurrent strategy. The other captures sharper object boundaries and the details of small objects that are lost in repeated down-sampling layers. Our lesion-aware attention module adaptively captures long-range semantic dependencies and highlights the most discriminative features, improving semantic consistency and completeness. Furthermore, we introduce a prediction aggregation module to combine multiscale feature maps and to further filter out irrelevant information for precise voxel-wise prediction. Experimental results show that ALA-Net outperforms state-of-the-art methods and inherently generalizes well to other 3D medical images segmentation tasks, providing multiple benefits in terms of target completeness, reduction of false positives, and accurate detection of ambiguous lesion regions.
12. Shiri I, Arabi H, Sanaat A, Jenabi E, Becker M, Zaidi H. Fully Automated Gross Tumor Volume Delineation From PET in Head and Neck Cancer Using Deep Learning Algorithms. Clin Nucl Med 2021; 46:872-883. PMID: 34238799. DOI: 10.1097/rlu.0000000000003789.
Abstract
PURPOSE The availability of automated, accurate, and robust gross tumor volume (GTV) segmentation algorithms is critical for the management of head and neck cancer (HNC) patients. In this work, we evaluated 3 state-of-the-art deep learning algorithms combined with 8 different loss functions for PET image segmentation using a comprehensive training set and evaluated its performance on an external validation set of HNC patients. PATIENTS AND METHODS 18F-FDG PET/CT images of 470 patients presenting with HNC on which manually defined GTVs serving as standard of reference were used for training (340 patients), evaluation (30 patients), and testing (100 patients from different centers) of these algorithms. PET image intensity was converted to SUVs and normalized in the range (0-1) using the SUVmax of the whole data set. PET images were cropped to 12 × 12 × 12 cm3 subvolumes using isotropic voxel spacing of 3 × 3 × 3 mm3 containing the whole tumor and neighboring background including lymph nodes. We used different approaches for data augmentation, including rotation (-15 degrees, +15 degrees), scaling (-20%, 20%), random flipping (3 axes), and elastic deformation (sigma = 1 and proportion to deform = 0.7) to increase the number of training sets. Three state-of-the-art networks, including Dense-VNet, NN-UNet, and Res-Net, with 8 different loss functions, including Dice, generalized Wasserstein Dice loss, Dice plus XEnt loss, generalized Dice loss, cross-entropy, sensitivity-specificity, and Tversky, were used. Overall, 28 different networks were built. Standard image segmentation metrics, including Dice similarity, image-derived PET metrics, first-order, and shape radiomic features, were used for performance assessment of these algorithms. RESULTS The best results in terms of Dice coefficient (mean ± SD) were achieved by cross-entropy for Res-Net (0.86 ± 0.05; 95% confidence interval [CI], 0.85-0.87), Dense-VNet (0.85 ± 0.058; 95% CI, 0.84-0.86), and Dice plus XEnt for NN-UNet (0.87 ± 0.05; 95% CI, 0.86-0.88). The difference between the 3 networks was not statistically significant (P > 0.05). The percent relative error (RE%) of SUVmax quantification was less than 5% in networks with a Dice coefficient more than 0.84, whereas a lower RE% (0.41%) was achieved by Res-Net with cross-entropy loss. For maximum 3-dimensional diameter and sphericity shape features, all networks achieved a RE ≤ 5% and ≤10%, respectively, reflecting a small variability. CONCLUSIONS Deep learning algorithms exhibited promising performance for automated GTV delineation on HNC PET images. Different loss functions performed competitively when using different networks and cross-entropy for Res-Net, Dense-VNet, and Dice plus XEnt for NN-UNet emerged as reliable networks for GTV delineation. Caution should be exercised for clinical deployment owing to the occurrence of outliers in deep learning-based algorithms.
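The loss functions compared in this study are standard in the segmentation literature; as a reference point, the sketch below gives a minimal NumPy formulation of the soft Dice and Tversky losses on probability maps (Tversky reduces to Dice when alpha = beta = 0.5). This is a generic formulation, not the authors' exact implementation.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """1 - soft Dice between a probability map and a binary target."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def tversky_loss(pred, target, alpha=0.5, beta=0.5, eps=1e-6):
    """Tversky loss: alpha penalises false positives, beta false negatives."""
    tp = (pred * target).sum()
    fp = (pred * (1 - target)).sum()
    fn = ((1 - pred) * target).sum()
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

# Quick check: Tversky with alpha = beta = 0.5 matches the Dice loss
pred = np.array([0.9, 0.8, 0.1, 0.2])
target = np.array([1.0, 1.0, 0.0, 1.0])
print(soft_dice_loss(pred, target), tversky_loss(pred, target))  # both ~0.24
```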
Affiliation(s)
- Isaac Shiri
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Hossein Arabi
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Amirhossein Sanaat
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Elnaz Jenabi
- Research Centre for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
13. Barone S, Cannella R, Comelli A, Pellegrino A, Salvaggio G, Stefano A, Vernuccio F. Hybrid descriptive-inferential method for key feature selection in prostate cancer radiomics. Appl Stoch Models Bus Ind 2021; 37:961-972. DOI: 10.1002/asmb.2642.
Abstract
In healthcare industry 4.0, a major role is played by radiomics. Radiomics concerns the extraction and analysis, from biomedical images, of quantitative information not visible to the naked eye, even for expert operators. Radiomics involves the management of digital images as data matrices, with the aim of extracting a number of morphological and predictive variables, named features, using automatic or semi-automatic methods. Multidisciplinary methods such as machine learning and deep learning are fully involved in this field. However, the large number of features requires efficient and effective core methods for their selection, in order to avoid bias or misinterpretation. In this work, the authors propose a novel method for feature selection in radiomics. The proposed method is based on an original combination of descriptive and inferential statistics. Its validity is illustrated through a case study on prostate cancer analysis, conducted at the University Hospital of Palermo, Italy.
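As a loose illustration of combining a descriptive screen with an inferential confirmation for radiomic feature selection, the sketch below first ranks features by a standardized group difference and then keeps those passing a corrected Mann-Whitney test. The two-stage structure echoes the abstract, but the specific statistics, thresholds and data are assumptions, not the authors' method.

```python
import numpy as np
from scipy import stats

def select_features(features, labels, alpha=0.05, min_effect=0.5):
    """features: (n_samples, n_features) array; labels: binary 0/1 array."""
    selected = []
    n_features = features.shape[1]
    for j in range(n_features):
        a, b = features[labels == 0, j], features[labels == 1, j]
        # Descriptive screen: standardized mean difference must be non-trivial
        pooled_sd = np.sqrt(0.5 * (a.var(ddof=1) + b.var(ddof=1)))
        if pooled_sd == 0 or abs(a.mean() - b.mean()) / pooled_sd < min_effect:
            continue
        # Inferential confirmation: Bonferroni-corrected Mann-Whitney test
        _, p = stats.mannwhitneyu(a, b, alternative="two-sided")
        if p < alpha / n_features:
            selected.append(j)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = rng.integers(0, 2, size=40)
X[y == 1, 0] += 2.0  # make feature 0 informative
print(select_features(X, y))  # expected to keep feature 0
```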
Affiliation(s)
- Stefano Barone
- Dipartimento di Scienze Agrarie, Alimentari e Forestali, Università degli Studi di Palermo, Palermo, Italy
- Roberto Cannella
- Dipartimento di Biomedicina, Neuroscienze e Diagnostica Avanzata, Università degli Studi di Palermo, Palermo, Italy
- Albert Comelli
- Fondazione Ri.MED, Palermo, Italy
- Istituto di Bioimmagini e Fisiologia Molecolare, Consiglio Nazionale delle Ricerche (IBFM-CNR), Cefalù, Italy
- Arianna Pellegrino
- Dipartimento di Ingegneria Meccanica e Aerospaziale, Politecnico di Torino, Turin, Italy
- Giuseppe Salvaggio
- Dipartimento di Biomedicina, Neuroscienze e Diagnostica Avanzata, Università degli Studi di Palermo, Palermo, Italy
- Alessandro Stefano
- Istituto di Bioimmagini e Fisiologia Molecolare, Consiglio Nazionale delle Ricerche (IBFM-CNR), Cefalù, Italy
- Federica Vernuccio
- Dipartimento di Biomedicina, Neuroscienze e Diagnostica Avanzata, Università degli Studi di Palermo, Palermo, Italy
14. Cui R, Chen Z, Wu J, Tan Y, Yu G. A Multiprocessing Scheme for PET Image Pre-Screening, Noise Reduction, Segmentation and Lesion Partitioning. IEEE J Biomed Health Inform 2021; 25:1699-1711. PMID: 32946400. DOI: 10.1109/jbhi.2020.3024563.
Abstract
OBJECTIVE Accurate segmentation and partitioning of lesions in PET images provide computer-aided procedures and doctors with parameters for tumour diagnosis, staging and prognosis. Currently, PET segmentation and lesion partitioning are performed manually by radiologists, which is time-consuming and laborious, and tedious manual procedures may lead to inaccurate measurement results. Therefore, we designed a new automatic multiprocessing scheme for PET image pre-screening, noise reduction, segmentation and lesion partitioning in this study. PET image pre-screening can reduce the time cost of the noise reduction, segmentation and lesion partitioning methods, and denoising can enhance both quantitative metrics and visual quality for better segmentation accuracy. For pre-screening, we propose a new differential activation filter (DAF) to screen the lesion images from whole-body scanning. For noise reduction, neural network inverse (NN inverse), an inverse transformation of the generalized Anscombe transformation (GAT) that does not depend on the distribution of residual noise, is presented to improve the SNR of images. For segmentation and lesion partitioning, definition density peak clustering (DDPC) is proposed to realize instance segmentation of lesions and normal tissue on unsupervised images, which helps reduce the cost of density calculation and completely removes the cluster halo. Experimental results on clinical data demonstrate that our proposed methods perform well and compare favourably with state-of-the-art methods in noise reduction, segmentation and lesion partitioning.
15. Leung KH, Marashdeh W, Wray R, Ashrafinia S, Pomper MG, Rahmim A, Jha AK. A physics-guided modular deep-learning based automated framework for tumor segmentation in PET. Phys Med Biol 2020; 65:245032. PMID: 32235059. DOI: 10.1088/1361-6560/ab8535.
Abstract
An important need exists for reliable positron emission tomography (PET) tumor-segmentation methods for tasks such as PET-based radiation-therapy planning and reliable quantification of volumetric and radiomic features. To address this need, we propose an automated physics-guided deep-learning-based three-module framework to segment PET images on a per-slice basis. The framework is designed to help address the challenges of limited spatial resolution and lack of clinical training data with known ground-truth tumor boundaries in PET. The first module generates PET images containing highly realistic tumors with known ground-truth using a new stochastic and physics-based approach, addressing lack of training data. The second module trains a modified U-net using these images, helping it learn the tumor-segmentation task. The third module fine-tunes this network using a small-sized clinical dataset with radiologist-defined delineations as surrogate ground-truth, helping the framework learn features potentially missed in simulated tumors. The framework was evaluated in the context of segmenting primary tumors in 18F-fluorodeoxyglucose (FDG)-PET images of patients with lung cancer. The framework's accuracy, generalizability to different scanners, sensitivity to partial volume effects (PVEs) and efficacy in reducing the number of training images were quantitatively evaluated using Dice similarity coefficient (DSC) and several other metrics. The framework yielded reliable performance in both simulated (DSC: 0.87 (95% confidence interval (CI): 0.86, 0.88)) and patient images (DSC: 0.73 (95% CI: 0.71, 0.76)), outperformed several widely used semi-automated approaches, accurately segmented relatively small tumors (smallest segmented cross-section was 1.83 cm2), generalized across five PET scanners (DSC: 0.74 (95% CI: 0.71, 0.76)), was relatively unaffected by PVEs, and required low training data (training with data from even 30 patients yielded DSC of 0.70 (95% CI: 0.68, 0.71)). In conclusion, the proposed automated physics-guided deep-learning-based PET-segmentation framework yielded reliable performance in delineating tumors in FDG-PET images of patients with lung cancer.
Affiliation(s)
- Kevin H Leung
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- The Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America
- Wael Marashdeh
- Department of Radiology and Nuclear Medicine, Jordan University of Science and Technology, Ar Ramtha, Jordan
- Rick Wray
- Memorial Sloan Kettering Cancer Center, Greater New York City Area, NY, United States of America
- Saeed Ashrafinia
- The Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- Martin G Pomper
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- The Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America
- Arman Rahmim
- The Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America
- Departments of Radiology and Physics, University of British Columbia, Vancouver, BC, Canada
- Abhinav K Jha
- Department of Biomedical Engineering and Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, MO, United States of America
16. Tamal M. Intensity threshold based solid tumour segmentation method for Positron Emission Tomography (PET) images: A review. Heliyon 2020; 6:e05267. PMID: 33163642. PMCID: PMC7610228. DOI: 10.1016/j.heliyon.2020.e05267.
Abstract
Accurate, robust and reproducible delineation of tumour in Positron Emission Tomography (PET) is essential for diagnosis, treatment planning and response assessment. Since the standardized uptake value (SUV), a normalized semiquantitative parameter used in PET, is represented by the intensity of the PET images and is related to radiotracer uptake, an SUV-based threshold method is a natural choice to delineate the tumour. However, determining an optimum threshold value is a challenging task due to low spatial resolution and signal-to-noise ratio (SNR), along with the finite image sampling constraint. The aim of this review is to summarize different fixed and adaptive threshold-based PET image segmentation approaches under a common mathematical framework. Advantages and disadvantages of different threshold-based methods are also highlighted from the perspectives of diagnosis, treatment planning and response assessment. Several fixed threshold values (30%–70% of the maximum SUV of the tumour (SUVmaxT)) have been investigated. It has been reported that fixed threshold-based methods are highly dependent on the SNR, the tumour-to-background ratio (TBR) and the size of the tumour. Adaptive threshold-based methods, an alternative to fixed thresholds, can minimize these dependencies by accounting for TBR and tumour size. However, the parameters of adaptive methods need to be calibrated for each PET camera system (e.g., scanner geometry, image acquisition protocol, reconstruction algorithm, etc.), and it is not straightforward to transfer the same procedure to other PET systems and obtain similar results. It has also been reported that the performance of adaptive methods is not optimum for smaller volumes with lower TBR and SNR. Statistical analysis carried out on NEMA thorax phantom images also indicates that regions segmented by the fixed threshold method are significantly different for all cases. On the other hand, the adaptive method provides significantly different segmented regions only for low TBR with different SNR. From this viewpoint, a robust threshold-based segmentation method that is less sensitive to SUVmaxT, SNR, TBR and volume needs to be developed. Comparing the performance of different threshold-based methods is challenging because each method has been tested on dissimilar data sets with different acquisition and reconstruction protocols, as well as different TBR, SNR and volumes. To avoid such difficulties, it would be desirable to have a common database of clinical PET images acquired with different acquisition protocols and different PET cameras to compare the performance of automatic segmentation methods. It is also suggested to report changes in SNR and TBR when reporting response using threshold-based methods.
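To make the fixed versus adaptive distinction concrete, the sketch below contrasts a fixed-percentage threshold with a simple background-adaptive threshold of the generic form T = a*SUVmax + b*SUVbackground; the 40% value mirrors the fixed thresholds discussed above, while the adaptive coefficients are placeholders that would require the per-scanner calibration the review describes.

```python
import numpy as np

def fixed_threshold_mask(suv, fraction=0.40):
    """Fixed threshold: keep voxels above a fraction of the tumour SUVmax."""
    return suv >= fraction * suv.max()

def adaptive_threshold_mask(suv, background_mask, a=0.3, b=1.0):
    """Generic adaptive threshold T = a*SUVmax + b*SUVbackground.
    The coefficients a and b are illustrative and must be calibrated per PET system."""
    suv_background = suv[background_mask].mean()
    return suv >= a * suv.max() + b * suv_background

# Toy 1D profile: a hot lesion on a warm background
suv = np.full(100, 2.0)
suv[45:55] = 10.0
background = np.ones_like(suv, dtype=bool)
background[40:60] = False
print(fixed_threshold_mask(suv).sum(), adaptive_threshold_mask(suv, background).sum())
```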
Affiliation(s)
- Mahbubunnabi Tamal
- Department of Biomedical Engineering, Imam Abdulrahman Bin Faisal University, PO Box 1982, Dammam, 31441, Saudi Arabia
17. Accuracy of target delineation by positron emission tomography-based auto-segmentation methods after deformable image registration: A phantom study. Phys Med 2020; 76:194-201. DOI: 10.1016/j.ejmp.2020.07.015.
18. Comelli A, Bignardi S, Stefano A, Russo G, Sabini MG, Ippolito M, Yezzi A. Development of a new fully three-dimensional methodology for tumours delineation in functional images. Comput Biol Med 2020; 120:103701. PMID: 32217282. PMCID: PMC7237290. DOI: 10.1016/j.compbiomed.2020.103701.
Abstract
Delineation of tumours in Positron Emission Tomography (PET) plays a crucial role in accurate diagnosis and radiotherapy treatment planning. In this context, it is of utmost importance to devise efficient and operator-independent segmentation algorithms capable of reconstructing the tumour's three-dimensional (3D) shape. In previous work, we proposed a system for 3D tumour delineation on PET data (expressed in terms of Standardized Uptake Value - SUV), based on a two-step approach. Step 1 identified the slice enclosing the maximum SUV and generated a rough contour surrounding it. Such contour was then used to initialize step 2, where the 3D shape of the tumour was obtained by separately segmenting 2D PET slices, leveraging the slice-by-slice marching approach. Additionally, we combined active contours and machine learning components to improve performance. Despite its success, the slice marching approach poses unnecessary limitations that are naturally removed by performing the segmentation directly in 3D. In this paper, we migrate our system into 3D. In particular, the segmentation in step 2 is now performed by evolving an active surface directly in the 3D space. The key points of such an advancement are that it performs the shape reconstruction on the whole stack of slices simultaneously, naturally leveraging cross-slice information that could not be exploited before. Additionally, it does not require any specific stopping condition, as the active surface naturally reaches a stable topology once convergence is achieved. Performance of this fully 3D approach is evaluated on the same dataset discussed in our previous work, which comprises fifty PET scans of lung, head and neck, and brain tumours. The results have confirmed that a benefit is indeed achieved in practice for all investigated anatomical districts, both quantitatively, through a set of commonly used quality indicators (Dice similarity coefficient >87.66%, Hausdorff distance <1.48 voxels and Mahalanobis distance <0.82 voxels), and qualitatively in terms of Likert score (>3 in 54% of the tumours).
Affiliation(s)
- Albert Comelli
- Ri.MED Foundation, via Bandiera 11, 90133, Palermo, Italy
- Samuel Bignardi
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, 30332, USA
| | - Alessandro Stefano
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, Italy.
- Giorgio Russo
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, Italy; Medical Physics Unit, Cannizzaro Hospital, Catania, Italy
- Massimo Ippolito
- Nuclear Medicine Department, Cannizzaro Hospital, Catania, Italy
- Anthony Yezzi
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, 30332, USA
19. Comelli A, Stefano A, Bignardi S, Coronnello C, Russo G, Sabini MG, Ippolito M, Yezzi A. Tissue Classification to Support Local Active Delineation of Brain Tumors. Commun Comput Inf Sci 2020. DOI: 10.1007/978-3-030-39343-4_1.
20. Tamal M. A hybrid region growing tumour segmentation method for low contrast and high noise Nuclear Medicine (NM) images by combining a novel non-linear diffusion filter and global gradient measure (HNDF-GGM-RG). Heliyon 2019; 5:e02993. PMID: 31879709. PMCID: PMC6920261. DOI: 10.1016/j.heliyon.2019.e02993.
Abstract
Poor spatial resolution and low signal-to-noise ratio (SNR), along with the finite image sampling constraint, make lesion segmentation on Nuclear Medicine (NM) images (e.g., PET - Positron Emission Tomography) a challenging task. Since the size, signal-to-background ratio (SBR) and SNR of lesions vary within and between patients, the performance of conventional segmentation methods is not consistent against statistical fluctuations. To overcome these limitations, a hybrid region growing segmentation method is proposed that combines a non-linear diffusion filter and a global gradient measure (HNDF-GGM-RG). The performance of the algorithm is validated on PET images and compared with a 40% fixed-threshold (40T) method and a state-of-the-art active contour (AC) method. Segmented volume, Dice similarity coefficient (DSC) and percentage classification error (%CE) were used as quantitative figures of merit (FOM) on the torso NEMA phantom, which contains spheres of six different sizes. A 2:1 SBR was created between the spheres and the background, and the phantom was scanned with a Siemens TrueV PET-CT scanner. The 40T method is SNR dependent and overestimates the volumes (≈4.5 times). AC volumes match the true volumes only for the largest three spheres. On the other hand, the proposed HNDF-GGM-RG volumes match the true volumes closely irrespective of size and SNR. Average DSCs of 0.32 and 0.66 and %CEs of 700% and 160% were achieved by the 40T and AC methods, respectively. Conversely, the average DSC and %CE are 0.70 and 60% for HNDF-GGM-RG and are less dependent on SNR. Since a two-sample t-test indicates that the performances of AC and HNDF-GGM-RG differ significantly for the smallest three spheres and are similar for the rest, HNDF-GGM-RG can be applied where the size, SBR and SNR are subject to change, either due to alterations in radiotracer uptake because of treatment or due to uptake variability of different radiotracers because of differences in their molecular pathways.
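A bare-bones sketch of seeded region growing with intensity and gradient stopping criteria is given below for orientation; the authors' HNDF-GGM-RG additionally uses a novel non-linear diffusion filter and a global gradient measure, which are not reproduced here, and all parameters shown are illustrative.

```python
import numpy as np
from scipy import ndimage
from collections import deque

def region_grow(image, seed, intensity_tol=0.4, grad_stop=2.0):
    """Grow a region from `seed` while voxels stay close to the seed intensity
    and the local gradient magnitude stays below `grad_stop`."""
    grad = ndimage.gaussian_gradient_magnitude(image.astype(float), sigma=1.0)
    seed_value = image[seed]
    mask = np.zeros(image.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        voxel = queue.popleft()
        if mask[voxel]:
            continue
        if abs(image[voxel] - seed_value) > intensity_tol * seed_value:
            continue
        if grad[voxel] > grad_stop:
            continue
        mask[voxel] = True
        for axis in range(image.ndim):       # visit 6-connected neighbours
            for step in (-1, 1):
                neighbour = list(voxel)
                neighbour[axis] += step
                if 0 <= neighbour[axis] < image.shape[axis]:
                    queue.append(tuple(neighbour))
    return mask

img = np.zeros((32, 32, 32))
img[10:20, 10:20, 10:20] = 5.0  # synthetic hot cube standing in for a lesion
print(region_grow(img, seed=(15, 15, 15)).sum())
```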
Collapse
Affiliation(s)
- Mahbubunnabi Tamal
- Department of Biomedical Engineering, Imam Abdulrahman Bin Faisal University, PO Box 1982, Dammam, 31441, Saudi Arabia
| |
Collapse
|
21
|
New Computerized Method in Measuring the Sagittal Bowing of Femur from Plain Radiograph—A Validation Study. J Clin Med 2019; 8:jcm8101598. [PMID: 31623300 PMCID: PMC6832379 DOI: 10.3390/jcm8101598] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2019] [Revised: 09/20/2019] [Accepted: 10/01/2019] [Indexed: 11/17/2022] Open
Abstract
Background: Mismatch of intramedullary nails with the bowing of the femur is a frequent clinical finding, and previous studies have shown inconsistent results. Methods: We present a region-growing territory algorithm for obtaining the radii of the anterior bowing of the femur and test it on ten radiographs. Plain radiographs of the lateral view of the femur from five men and five women, taken between January and August 2014 in Taipei Hospital, were chosen randomly. The curvature of the femur outline and of the medullary canal was measured three times each. Radii of curvature of the whole femur and of the proximal, middle and distal parts were calculated and analyzed. Results: The coefficient of variation of the 240 measurements ranged from 0.007 to 0.295 and averaged 0.088. The average radii of curvature of the whole, proximal, middle, and distal femur were 1318 mm, 752 mm, 1379 mm, and 599 mm, respectively. At the distal part of the femur, the radius of curvature of the femur outline (452 mm) was smaller than that of the medullary canal (746 mm) (p < 0.05). Women’s femurs were straighter than men’s when the whole length was compared (1435 mm vs. 1201 mm, p < 0.05). The radii we calculated were smaller than those of current intramedullary nails. Conclusion: The results show that the inter-observer and intra-observer differences are acceptable, support the impression that bowing differs between Asians and Caucasians, and indicate a mismatch between current instruments and the curvature of the femur.
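The paper's own region-growing territory method is not detailed in the abstract; as a generic illustration of how a radius of curvature can be recovered from points digitized along a bone outline, the sketch below uses an algebraic (Kåsa) least-squares circle fit. All names and values are illustrative assumptions, not from the study.

```python
import numpy as np

def fit_circle_radius(x, y):
    """Algebraic (Kasa) least-squares circle fit.
    Solves x^2 + y^2 + D*x + E*y + F = 0 for D, E, F and
    returns (center_x, center_y, radius)."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    radius = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, radius

# Synthetic test: noisy points on a shallow arc with a 1300 mm radius,
# roughly the order of magnitude of the femoral bowing reported above.
rng = np.random.default_rng(0)
true_r = 1300.0
theta = np.linspace(-0.15, 0.15, 40)
x = true_r * np.sin(theta) + rng.normal(0, 0.5, theta.size)
y = true_r * (1 - np.cos(theta)) + rng.normal(0, 0.5, theta.size)
_, _, r_est = fit_circle_radius(x, y)
print(f"estimated radius of curvature: {r_est:.0f} mm (true {true_r:.0f} mm)")
```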
Collapse
|
22
|
A phantom study to assess the reproducibility, robustness and accuracy of PET image segmentation methods against statistical fluctuations. PLoS One 2019; 14:e0219127. [PMID: 31283779 PMCID: PMC6613706 DOI: 10.1371/journal.pone.0219127] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2018] [Accepted: 06/17/2019] [Indexed: 01/21/2023] Open
Abstract
Background: Automatic and semi-automatic segmentation methods for PET serve as alternatives to manual delineation and eliminate observer variability. The robustness of these segmentation methods against statistical fluctuations arising from variable size, contrast and noise is vital for providing reliable clinical outcomes for diagnosis and treatment response assessment. In this study, the performance of several segmentation methods against statistical fluctuations was investigated using the torso NEMA phantom. Methods: The six hot spheres (0.5-27 ml) and the background of the phantom were filled with different activities of 18F to yield 2:1 and 4:1 contrast ratios. The phantom was scanned on a TrueV PET-CT scanner for 120 minutes. The images were reconstructed using OSEM (4 iterations, 21 subsets) for different durations (15, 20, 34 and 67 minutes) to represent different noise levels and smoothed with a 4-mm Gaussian filter. Each sphere, for each setting, was delineated using a fixed 40% threshold (40T), fuzzy c-means clustering (FCM), adaptive threshold and region-based variational (C-V) segmentation methods and compared with the gold-standard volume, which was estimated from the known diameter and position of each sphere. Results: The smallest three spheres at the 2:1 contrast level are not evaluable for the 40T method. For the other spheres, the 40T method grossly overestimates the volumes, and the segmented volumes are highly dependent on statistical variations. These volumes are the least reproducible (80%), with a mean Dice Similarity Coefficient (DSC) of 0.67 and 90% classification error (CE). The other three methods reduce the dependency on noise and contrast in a similar manner by providing low bias (<10%) and CE (<25%) as well as a high DSC (0.88) and reproducibility (30%) for objects >17 mm in diameter. However, for the smallest three spheres at the 2:1 contrast level, all three methods performed poorly, with the adaptive method being superior to FCM and C-V (mean bias 168% vs. 350%, mean DSC 0.65 vs. 0.50, mean CE 227% vs. 454% for the adaptive method versus the other two methods, which performed approximately similarly). Conclusions: The segmentation accuracy of the fixed threshold-based method depends on size, contrast and noise. The intensity thresholds determined by the adaptive threshold method are less sensitive to noise and, therefore, the segmented volumes are more reproducible across different acquisition durations. A similar performance can be achieved with the FCM and C-V methods. However, for small lesions (<2 cm diameter) with low counts and contrast, the adaptive threshold-based method outperforms the FCM and C-V methods, and none of these methods performs optimally for volumes <2 cm in diameter. These three methods can only reliably be used to delineate tumours for diagnostic and monitoring purposes provided that the contrast between the tumour and the background does not fall below a 2:1 ratio and the size of the tumour does not fall below 2 cm in diameter in response to treatment. They can also be used for different radiotracers with variable uptake. However, the FCM and C-V methods have the advantage of not requiring calibration for different scanners and settings.
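As a concrete illustration of why the fixed-percentage approach breaks down at low contrast, the sketch below applies a 40%-of-maximum threshold to a synthetic hot sphere with a 2:1 sphere-to-background ratio; the cut-off ends up close to the background level and the segmented volume balloons, in line with the overestimation reported above. The code is a toy example under assumed values, not the study's implementation.

```python
import numpy as np

def fixed_threshold_segmentation(image, roi_mask, fraction=0.40):
    """Fixed-percentage thresholding (the '40T' idea): keep voxels inside the
    ROI whose intensity exceeds `fraction` of the maximum intensity in the ROI."""
    peak = image[roi_mask].max()
    return roi_mask & (image >= fraction * peak)

# Toy PET-like volume: hot sphere (value 4) in a warm background (value 2),
# mimicking a 2:1 sphere-to-background ratio, plus Gaussian noise.
rng = np.random.default_rng(1)
zz, yy, xx = np.mgrid[-30:31, -30:31, -30:31]
r = np.sqrt(xx**2 + yy**2 + zz**2)
volume = np.where(r <= 8, 4.0, 2.0) + rng.normal(0, 0.2, r.shape)
roi = r <= 20                                  # user-drawn ROI around the lesion
seg = fixed_threshold_segmentation(volume, roi, fraction=0.40)
print("true volume (voxels):", int((r <= 8).sum()))
print("40% threshold volume:", int(seg.sum()))
```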
Collapse
|
23
|
Parkinson C, Evans M, Guerrero-Urbano T, Michaelidou A, Pike L, Barrington S, Jayaprakasam V, Rackley T, Palaniappan N, Staffurth J, Marshall C, Spezi E. Machine-learned target volume delineation of 18F-FDG PET images after one cycle of induction chemotherapy. Phys Med 2019; 61:85-93. [PMID: 31151585 DOI: 10.1016/j.ejmp.2019.04.020] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/21/2018] [Revised: 04/04/2019] [Accepted: 04/23/2019] [Indexed: 12/18/2022] Open
Abstract
Biological tumour volume (GTVPET) delineation on 18F-FDG PET acquired during induction chemotherapy (ICT) is challenging due to the reduced metabolic uptake and volume of the GTVPET. Automatic segmentation algorithms applied to 18F-FDG PET (PET-AS) imaging have been used for GTVPET delineation on 18F-FDG PET imaging acquired before ICT; however, their role has not been investigated in 18F-FDG PET imaging acquired after ICT. In this study we investigate PET-AS techniques, including ATLAAS, a machine-learned method, for accurate delineation of the GTVPET after ICT. Twenty patients were enrolled onto a prospective phase I study (FiGaRO). PET/CT imaging was acquired at baseline and 3 weeks following one cycle of induction chemotherapy. The GTVPET was manually delineated by a nuclear medicine physician and a clinical oncologist, and the resulting GTVPET was used as the reference contour. The original ATLAAS statistical model was expanded to include images of reduced metabolic activity, and the ATLAAS algorithm was re-trained on the new reference dataset. Estimated GTVPET contours were derived using sixteen PET-AS methods and compared to the reference GTVPET using the Dice Similarity Coefficient (DSC). The mean DSCs for ATLAAS, 60% Peak Thresholding (PT60), Adaptive Thresholding (AT) and Watershed Thresholding (WT) were 0.72, 0.61, 0.63 and 0.60, respectively. The GTVPET generated by ATLAAS compared favourably with manually delineated volumes and, in comparison to other PET-AS methods, was more accurate for GTVPET delineation after ICT. ATLAAS would be a feasible method to reduce inter-observer variability in multi-centre trials.
Collapse
Affiliation(s)
- Craig Parkinson
- School of Engineering, Cardiff University, Queen's Buildings, 14-17 The Parade, Cardiff CF24 3AA, UK.
| | - Mererid Evans
- Velindre Cancer Centre, Velindre Rd, Cardiff CF14 2TL, UK
| | | | | | - Lucy Pike
- King's College London and Guy's and St Thomas' PET Centre, School of Biomedical Engineering and Imaging Sciences, King's College London, King's Health Partners, London, UK
| | - Sally Barrington
- King's College London and Guy's and St Thomas' PET Centre, School of Biomedical Engineering and Imaging Sciences, King's College London, King's Health Partners, London, UK
| | | | - Thomas Rackley
- Velindre Cancer Centre, Velindre Rd, Cardiff CF14 2TL, UK
| | | | - John Staffurth
- Velindre Cancer Centre, Velindre Rd, Cardiff CF14 2TL, UK; School of Medicine, UHW Main Building, Heath Park, Cardiff CF14 4XN, UK
| | - Christopher Marshall
- Wales Research & Diagnostic PET Imaging Centre, Cardiff University, School of Medicine, Ground Floor, C Block, UHW Main Building, Heath Park, Cardiff CF14 4XN, UK
| | - Emiliano Spezi
- School of Engineering, Cardiff University, Queen's Buildings, 14-17 The Parade, Cardiff CF24 3AA, UK; Velindre Cancer Centre, Velindre Rd, Cardiff CF14 2TL, UK
| |
Collapse
|
24
|
Comelli A, Stefano A, Bignardi S, Russo G, Sabini MG, Ippolito M, Barone S, Yezzi A. Active contour algorithm with discriminant analysis for delineating tumors in positron emission tomography. Artif Intell Med 2019; 94:67-78. [PMID: 30871684 DOI: 10.1016/j.artmed.2019.01.002] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2018] [Revised: 10/18/2018] [Accepted: 01/07/2019] [Indexed: 12/19/2022]
Abstract
In the context of cancer delineation using positron emission tomography datasets, we present an innovative approach whose purpose is to tackle the real-time, three-dimensional segmentation task in a fully, or at least nearly fully, automated way. The approach comprises a preliminary initialization phase in which the user highlights a region of interest around the cancer on just one slice of the tomographic dataset. The algorithm then identifies an optimal, user-independent region of interest around the anomalous tissue, located on the slice containing the highest standardized uptake value, from which the subsequent segmentation task starts. The three-dimensional volume is then reconstructed using a slice-by-slice marching approach until a suitable automatic stop condition is met. On each slice, the segmentation is performed using an enhanced local active contour based on the minimization of a novel energy functional which combines the information provided by a machine learning component (discriminant analysis in the present study). As a result, the whole algorithm is almost completely automatic and the output segmentation is independent of the input provided by the user. Phantom experiments comprising spheres and zeolites, and clinical cases comprising various body districts (lung, brain, and head and neck) and two different radiotracers (18F-fluoro-2-deoxy-D-glucose and 11C-labeled methionine), were used to assess the algorithm's performance. Phantom experiments with spheres and with zeolites showed Dice similarity coefficients above 90% and 80%, respectively. Clinical cases showed high agreement with the gold standard (R2 = 0.98). These results indicate that the proposed method can be efficiently applied in the clinical routine, with potential benefit for treatment response assessment and targeting in radiotherapy.
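The slice-by-slice marching idea with an automatic stop condition can be illustrated with a much simpler stand-in for the per-slice segmentation step. The sketch below replaces the paper's enhanced local active contour with plain thresholding purely to show the marching and stopping logic; all parameters, function names and data are illustrative assumptions.

```python
import numpy as np

def marching_slice_segmentation(volume, roi, fraction=0.5, min_voxels=5):
    """Simplified slice-by-slice marching segmentation (sketch).
    Starts from the axial slice with the highest uptake inside the ROI and
    marches up and down, segmenting each slice by thresholding; marching
    stops when a slice yields fewer than `min_voxels` voxels ('cancer-free')."""
    peak = volume[roi].max()
    start = int(np.argmax((volume * roi).max(axis=(1, 2))))  # slice with highest SUV
    seg = np.zeros_like(roi, dtype=bool)
    for direction in (+1, -1):
        z = start if direction == +1 else start - 1
        while 0 <= z < volume.shape[0]:
            slice_seg = roi[z] & (volume[z] >= fraction * peak)
            if slice_seg.sum() < min_voxels:      # automatic stop condition
                break
            seg[z] = slice_seg
            z += direction
    return seg

# Toy example: ellipsoidal lesion (uptake 8) in a noisy background (uptake 1.5).
rng = np.random.default_rng(2)
zz, yy, xx = np.mgrid[-20:21, -40:41, -40:41]
lesion = (zz / 6.0) ** 2 + (yy / 10.0) ** 2 + (xx / 10.0) ** 2 <= 1.0
volume = np.where(lesion, 8.0, 1.5) + rng.normal(0, 0.3, lesion.shape)
roi = np.zeros_like(lesion, dtype=bool)
roi[:, 20:61, 20:61] = True                      # rough user ROI on every slice
seg = marching_slice_segmentation(volume, roi)
print("segmented voxels:", int(seg.sum()), " true voxels:", int(lesion.sum()))
```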
Collapse
Affiliation(s)
- Albert Comelli
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta GA, 30332, USA; Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, PA, Italy; Department of Industrial and Digital Innovation (DIID) - University of Palermo, PA, Italy
| | - Alessandro Stefano
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, PA, Italy.
| | - Samuel Bignardi
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta GA, 30332, USA
| | - Giorgio Russo
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, PA, Italy; Medical Physics Unit, Cannizzaro Hospital, Catania, Italy
| | | | - Massimo Ippolito
- Nuclear Medicine Department, Cannizzaro Hospital, Catania, Italy
| | - Stefano Barone
- Department of Industrial and Digital Innovation (DIID) - University of Palermo, PA, Italy
| | - Anthony Yezzi
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta GA, 30332, USA
| |
Collapse
|
25
|
A smart and operator independent system to delineate tumours in Positron Emission Tomography scans. Comput Biol Med 2018; 102:1-15. [PMID: 30219733 DOI: 10.1016/j.compbiomed.2018.09.002] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2018] [Revised: 08/20/2018] [Accepted: 09/06/2018] [Indexed: 12/30/2022]
Abstract
Positron Emission Tomography (PET) imaging has an enormous potential to improve radiation therapy treatment planning, offering complementary functional information with respect to other anatomical imaging approaches. The aim of this study is to develop an operator-independent, reliable, and clinically feasible system for biological tumour volume delineation from PET images. Under this design hypothesis, we combine several known approaches in an original way to deploy a system with a high level of automation. The proposed system automatically identifies the optimal region of interest around the tumour and performs a slice-by-slice marching local active contour segmentation, stopping automatically when a "cancer-free" slice is identified. User intervention is limited to drawing an initial rough contour around the cancer region. By design, the algorithm performs the segmentation minimizing any dependence on the initial input, so that the final result is highly repeatable. To assess its performance under different conditions, our system is evaluated on a dataset comprising five synthetic experiments and fifty oncological lesions located in different anatomical regions (i.e. lung, head and neck, and brain) using PET studies with 18F-fluoro-2-deoxy-D-glucose and 11C-labeled methionine radiotracers. Results on synthetic lesions demonstrate enhanced performance when compared against the most common PET segmentation methods. In clinical cases, the proposed system produces accurate segmentations (average Dice similarity coefficient: 85.36 ± 2.94%, 85.98 ± 3.40%, 88.02 ± 2.75% in the lung, head and neck, and brain districts, respectively) with high agreement with the gold standard (determination coefficient R2 = 0.98). We believe that the proposed system could be efficiently used in everyday clinical routine as a medical decision tool and could provide clinicians with additional PET-derived information of use in radiation therapy treatment planning.
Collapse
|
26
|
Liang F, Qian P, Su KH, Baydoun A, Leisser A, Van Hedent S, Kuo JW, Zhao K, Parikh P, Lu Y, Traughber BJ, Muzic RF. Abdominal, multi-organ, auto-contouring method for online adaptive magnetic resonance guided radiotherapy: An intelligent, multi-level fusion approach. Artif Intell Med 2018; 90:34-41. [PMID: 30054121 DOI: 10.1016/j.artmed.2018.07.001] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2017] [Revised: 06/06/2018] [Accepted: 07/06/2018] [Indexed: 01/30/2023]
Abstract
BACKGROUND Manual contouring remains the most laborious task in radiation therapy planning and is a major barrier to implementing routine Magnetic Resonance Imaging (MRI) Guided Adaptive Radiation Therapy (MR-ART). To address this, we propose a new artificial intelligence-based, auto-contouring method for abdominal MR-ART modeled after human brain cognition for manual contouring. METHODS/MATERIALS Our algorithm is based on two types of information flow, i.e. top-down and bottom-up. Top-down information is derived from simulation MR images. It grossly delineates the object based on its high-level information class by transferring the initial planning contours onto daily images. Bottom-up information is derived from pixel data by a supervised, self-adaptive, active learning based support vector machine. It uses low-level pixel features, such as intensity and location, to distinguish each target boundary from the background. The final result is obtained by fusing top-down and bottom-up outputs in a unified framework through artificial intelligence fusion. For evaluation, we used a dataset of four patients with locally advanced pancreatic cancer treated with MR-ART using a clinical system (MRIdian, Viewray, Oakwood Village, OH, USA). Each set included the simulation MRI and onboard T1 MRI corresponding to a randomly selected treatment session. Each MRI had 144 axial slices of 266 × 266 pixels. Using the Dice Similarity Index (DSI) and the Hausdorff Distance Index (HDI), we compared the manual and automated contours for the liver, left and right kidneys, and the spinal cord. RESULTS The average auto-segmentation time was two minutes per set. Visually, the automatic and manual contours were similar. Fused results achieved better accuracy than either the bottom-up or top-down method alone. The DSI values were above 0.86. The spinal canal contours yielded a low HDI value. CONCLUSION With a DSI significantly higher than the usually reported 0.7, our novel algorithm yields a high segmentation accuracy. To our knowledge, this is the first fully automated contouring approach using T1 MRI images for adaptive radiotherapy.
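The study above reports agreement between manual and automatic contours with the Dice Similarity Index and the Hausdorff Distance Index. A minimal sketch of the symmetric Hausdorff distance between two contour point sets is shown below, using SciPy's directed Hausdorff routine; the toy contours are illustrative, not data from the study.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between two contour point sets
    (each an (N, 2) or (N, 3) array of coordinates, e.g. in mm)."""
    d_ab = directed_hausdorff(points_a, points_b)[0]
    d_ba = directed_hausdorff(points_b, points_a)[0]
    return max(d_ab, d_ba)

# Toy example: a manual contour (circle) and an automatic contour that is
# slightly larger and shifted, standing in for an auto-segmentation result.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
manual = np.column_stack([30 * np.cos(theta), 30 * np.sin(theta)])
auto = np.column_stack([32 * np.cos(theta) + 1.0, 32 * np.sin(theta)])
print(f"Hausdorff distance: {hausdorff_distance(manual, auto):.1f} mm")
```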
Collapse
Affiliation(s)
- Fan Liang
- Department of Radiology, Case Western Reserve University School of Medicine, Cleveland, OH, USA; Case Center for Imaging Research, University Hospitals Case Medical Center, Case Western Reserve University, Cleveland, OH, USA; Tianjin Key Laboratory of Information Sensing & Intelligent Control, Tianjin University of Technology and Education, Tianjin, China.
| | - Pengjiang Qian
- School of Digital Media, Jiangnan University, Wuxi, Jiangsu, China.
| | - Kuan-Hao Su
- Department of Radiology, Case Western Reserve University School of Medicine, Cleveland, OH, USA; Case Center for Imaging Research, University Hospitals Case Medical Center, Case Western Reserve University, Cleveland, OH, USA.
| | - Atallah Baydoun
- Department of Internal Medicine, Case Western Reserve University School of Medicine, Cleveland, OH, USA; Department of Internal Medicine, Louis Stokes VA Medical Center, Cleveland, OH, USA; Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA.
| | - Asha Leisser
- Department of Radiology, Case Western Reserve University School of Medicine, Cleveland, OH, USA; Case Center for Imaging Research, University Hospitals Case Medical Center, Case Western Reserve University, Cleveland, OH, USA; Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria.
| | - Steven Van Hedent
- Department of Radiology, Case Western Reserve University School of Medicine, Cleveland, OH, USA; Case Center for Imaging Research, University Hospitals Case Medical Center, Case Western Reserve University, Cleveland, OH, USA; Department of Radiology, UZ Brussel (VUB), Brussels, Belgium.
| | - Jung-Wen Kuo
- Department of Radiology, Case Western Reserve University School of Medicine, Cleveland, OH, USA; Case Center for Imaging Research, University Hospitals Case Medical Center, Case Western Reserve University, Cleveland, OH, USA.
| | - Kaifa Zhao
- School of Digital Media, Jiangnan University, Wuxi, Jiangsu, China.
| | - Parag Parikh
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO, USA.
| | - Yonggang Lu
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO, USA.
| | - Bryan J Traughber
- Case Center for Imaging Research, University Hospitals Case Medical Center, Case Western Reserve University, Cleveland, OH, USA; Department of Radiation Oncology, Case Western Reserve University School of Medicine, Cleveland, OH, USA; Department of Radiation Oncology, University Hospitals Seidman Cancer Center, Cleveland, OH, USA.
| | - Raymond F Muzic
- Department of Radiology, Case Western Reserve University School of Medicine, Cleveland, OH, USA; Case Center for Imaging Research, University Hospitals Case Medical Center, Case Western Reserve University, Cleveland, OH, USA; Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA; Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, OH, USA.
| |
Collapse
|
27
|
Chen L, Shen C, Zhou Z, Maquilan G, Thomas K, Folkert MR, Albuquerque K, Wang J. Accurate segmenting of cervical tumors in PET imaging based on similarity between adjacent slices. Comput Biol Med 2018; 97:30-36. [PMID: 29684783 DOI: 10.1016/j.compbiomed.2018.04.009] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2018] [Revised: 04/12/2018] [Accepted: 04/13/2018] [Indexed: 11/18/2022]
Abstract
Because cervical tumors on PET imaging lie close to the bladder, which collects large amounts of the excreted 18F-FDG tracer, conventional intensity-based segmentation methods often misclassify the bladder as a tumor. Based on the observation that tumor position and area do not change dramatically from slice to slice, we propose a two-stage scheme that facilitates segmentation. In the first stage, we used a graph-cut based algorithm to obtain an initial contouring of the tumor from local similarity information between voxels; this was achieved through manual contouring of the cervical tumor on one slice. In the second stage, the initial tumor contours were fine-tuned into a more accurate segmentation by incorporating similarity information on tumor shape and position among adjacent slices, according to an intensity-spatial-distance map. Experimental results illustrate that the proposed two-stage algorithm provides a more effective approach to segmenting cervical tumors in 3D 18F-FDG PET images than the benchmarks used for comparison.
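The second-stage idea of combining intensity with spatial distance from the adjacent slice's tumour mask can be sketched very simply: score each pixel by its intensity minus a distance penalty, so that a hot bladder far from the propagated contour is rejected. The code below is a toy illustration of that intensity-spatial-distance idea, not the paper's graph-cut implementation; the weighting and threshold values are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def refine_with_previous_slice(slice_img, prev_mask, alpha=0.3, threshold=2.0):
    """Sketch of an intensity-spatial-distance criterion: score each pixel by
    its intensity minus a penalty proportional to its distance (in pixels)
    from the tumour mask of the adjacent slice, then threshold the score.
    `alpha` and `threshold` are illustrative parameters, not from the paper."""
    dist_from_prev = distance_transform_edt(~prev_mask)   # 0 inside previous mask
    score = slice_img - alpha * dist_from_prev
    return score >= threshold

# Toy slice: tumour blob (uptake 4) near a hotter bladder blob (uptake 6).
yy, xx = np.mgrid[0:100, 0:100]
tumour = (yy - 40) ** 2 + (xx - 40) ** 2 <= 10 ** 2
bladder = (yy - 40) ** 2 + (xx - 75) ** 2 <= 12 ** 2
img = 1.0 + 3.0 * tumour + 5.0 * bladder
prev_mask = (yy - 40) ** 2 + (xx - 40) ** 2 <= 9 ** 2     # mask from adjacent slice
seg = refine_with_previous_slice(img, prev_mask)
print("tumour pixels kept:", int((seg & tumour).sum()),
      " bladder pixels kept:", int((seg & bladder).sum()))
```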
Collapse
Affiliation(s)
- Liyuan Chen
- University of Texas Southwestern Medical Center, Department of Radiation Oncology, 2280 Inwood Rd., Dallas, TX, 75214, United States.
| | - Chenyang Shen
- University of Texas Southwestern Medical Center, Department of Radiation Oncology, 2280 Inwood Rd., Dallas, TX, 75214, United States
| | - Zhiguo Zhou
- University of Texas Southwestern Medical Center, Department of Radiation Oncology, 2280 Inwood Rd., Dallas, TX, 75214, United States
| | - Genevieve Maquilan
- University of Texas Southwestern Medical Center, Department of Radiation Oncology, 2280 Inwood Rd., Dallas, TX, 75214, United States
| | - Kimberly Thomas
- University of Texas Southwestern Medical Center, Department of Radiation Oncology, 2280 Inwood Rd., Dallas, TX, 75214, United States
| | - Michael R Folkert
- University of Texas Southwestern Medical Center, Department of Radiation Oncology, 2280 Inwood Rd., Dallas, TX, 75214, United States
| | - Kevin Albuquerque
- University of Texas Southwestern Medical Center, Department of Radiation Oncology, 2280 Inwood Rd., Dallas, TX, 75214, United States
| | - Jing Wang
- University of Texas Southwestern Medical Center, Department of Radiation Oncology, 2280 Inwood Rd., Dallas, TX, 75214, United States.
| |
Collapse
|
28
|
Fully convolutional networks (FCNs)-based segmentation method for colorectal tumors on T2-weighted magnetic resonance images. AUSTRALASIAN PHYSICAL & ENGINEERING SCIENCES IN MEDICINE 2018; 41:393-401. [DOI: 10.1007/s13246-018-0636-9] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/08/2018] [Accepted: 04/04/2018] [Indexed: 12/14/2022]
|
29
|
Parkinson C, Foley K, Whybra P, Hills R, Roberts A, Marshall C, Staffurth J, Spezi E. Evaluation of prognostic models developed using standardised image features from different PET automated segmentation methods. EJNMMI Res 2018; 8:29. [PMID: 29644499 PMCID: PMC5895559 DOI: 10.1186/s13550-018-0379-3] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2017] [Accepted: 03/23/2018] [Indexed: 12/25/2022] Open
Abstract
Background: Prognosis in oesophageal cancer (OC) is poor; the 5-year overall survival (OS) rate is approximately 15%. It is hoped that personalised medicine will increase the 5- and 10-year OS rates. Quantitative analysis of PET is gaining substantial interest in prognostic research but requires accurate definition of the metabolic tumour volume. This study compares prognostic models developed in the same patient cohort using individual PET segmentation algorithms and assesses the impact on patient risk stratification. Consecutive patients (n = 427) with biopsy-proven OC were included in the final analysis. All patients were staged with PET/CT between September 2010 and July 2016. Nine automatic PET segmentation methods were studied. All tumour contours were subjectively analysed for accuracy, and segmentation methods with <90% accuracy were excluded. Standardised image features were calculated, and a series of prognostic models was developed using identical clinical data. The proportion of patients changing risk classification group was calculated. Results: Of the nine PET segmentation methods studied, the clustering means (KM2), general clustering means (GCM3), adaptive thresholding (AT) and watershed thresholding (WT) methods were included for analysis. Known clinical prognostic factors (age, treatment and staging) were significant in all of the developed prognostic models. The AT and KM2 segmentation methods produced identical prognostic models. Patient risk stratification was dependent on the segmentation method used to develop the prognostic model, with up to 73 patients (17.1%) changing risk stratification group. Conclusion: Prognostic models incorporating quantitative image features are dependent on the method used to delineate the primary tumour. This has a subsequent effect on risk stratification, with patients changing groups depending on the image segmentation method used. Electronic supplementary material: The online version of this article (10.1186/s13550-018-0379-3) contains supplementary material, which is available to authorized users.
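The headline result, that up to 17.1% of patients change risk group depending on the segmentation method, boils down to comparing the group assignments produced by two models patient by patient. The sketch below shows that comparison on made-up labels; no values are taken from the study.

```python
import numpy as np

def risk_group_change(groups_a, groups_b):
    """Number and proportion of patients whose risk group differs between two
    prognostic models (e.g. built on different segmentation methods)."""
    groups_a, groups_b = np.asarray(groups_a), np.asarray(groups_b)
    changed = int((groups_a != groups_b).sum())
    return changed, changed / groups_a.size

# Illustrative example with made-up risk groups for 10 patients
# (0 = low risk, 1 = high risk); not data from the study.
model_at = [0, 1, 0, 0, 1, 1, 0, 1, 0, 0]   # e.g. model from adaptive thresholding
model_wt = [0, 1, 1, 0, 1, 0, 0, 1, 0, 1]   # e.g. model from watershed thresholding
n, frac = risk_group_change(model_at, model_wt)
print(f"{n} of {len(model_at)} patients ({frac:.0%}) change risk group")
```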
Collapse
Affiliation(s)
- Craig Parkinson
- School of Engineering, Cardiff University, Queen's Buildings, 14-17 The Parade, Cardiff, CF24 3AA, UK
| | - Kieran Foley
- Division of Cancer and Genetics, School of Medicine, UHW Main Building, Heath Park, Cardiff, CF14 4XN, UK.
| | - Philip Whybra
- School of Engineering, Cardiff University, Queen's Buildings, 14-17 The Parade, Cardiff, CF24 3AA, UK
| | - Robert Hills
- Clinical Trials Unit, Cardiff University, Cardiff, CF10 3AT, UK
| | - Ashley Roberts
- Clinical Radiology, University Hospital of Wales, Heath Park, Cardiff, CF14 4XW, UK
| | - Chris Marshall
- Wales Research and Diagnostic PET Imaging Centre, Cardiff University, School of Medicine, Ground Floor, C Block, UHW Main Building, Heath Park, Cardiff, CF14 4XN, UK
| | - John Staffurth
- Division of Cancer and Genetics, School of Medicine, UHW Main Building, Heath Park, Cardiff, CF14 4XN, UK.,Velindre Cancer Centre, Velindre Rd, Cardiff, CF14 2TL, UK
| | - Emiliano Spezi
- School of Engineering, Cardiff University, Queen's Buildings, 14-17 The Parade, Cardiff, CF24 3AA, UK.,Velindre Cancer Centre, Velindre Rd, Cardiff, CF14 2TL, UK
| |
Collapse
|
30
|
Sbei A, ElBedoui K, Barhoumi W, Maksud P, Maktouf C. Hybrid PET/MRI co-segmentation based on joint fuzzy connectedness and graph cut. Computer Methods and Programs in Biomedicine 2017; 149:29-41. [PMID: 28802328 DOI: 10.1016/j.cmpb.2017.07.006] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/10/2017] [Revised: 06/03/2017] [Accepted: 07/18/2017] [Indexed: 06/07/2023]
Abstract
BACKGROUND AND OBJECTIVE: Tumor segmentation from hybrid PET/MRI scans may be highly beneficial in radiotherapy treatment planning. Indeed, it provides, for both modalities, information that can make tumor delineation more accurate than using either modality alone. In this work we propose a co-segmentation method that deals with several challenges, notably the lack of one-to-one correspondence between tumors in the two modalities and the smoothing of boundaries. METHODS: To overcome these limits, we propose a segmentation method based on the GCsummax technique. The method takes advantage of Iterative Relative Fuzzy Connectedness (IRFC) for seed initialization and of the standard min-cut/max-flow technique for boundary smoothing. Seed initialization was performed accurately thanks to high-uptake regions on PET. In addition, a visibility weighting scheme was adapted to achieve the co-segmentation task using the IRFC algorithm. Then, given the co-segmented regions, we introduce a morphological technique that provides object seeds to a standard Graph Cut (GC), allowing it to avoid the shrinking problem. Finally, for each modality, the segmentation task is formulated as an energy minimization problem, which is resolved by a min-cut/max-flow technique. RESULTS: The overlap ratio (DSC) between our segmentation results and the ground truth is 92.63 ± 1.03 for PET images and 90.61 ± 3.70 for MRI images. CONCLUSIONS: The proposed method was tested on different types of disease and outperformed state-of-the-art methods. We show its superiority in handling the asymmetric relation between PET and MRI and tumor heterogeneity.
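The min-cut/max-flow step used for boundary smoothing can be illustrated on a toy problem: terminal edge capacities encode how well each sample fits the object or background model, neighbouring samples share a smoothness capacity, and the minimum s-t cut gives a spatially coherent binary labelling. The sketch below does this for a 1-D signal with networkx; it is a generic graph-cut illustration under assumed parameters, not the paper's GCsummax formulation.

```python
import networkx as nx
import numpy as np

def graph_cut_1d(signal, mu_obj, mu_bkg, smoothness=2.0):
    """Minimal min-cut/max-flow binary labelling of a 1-D signal (sketch)."""
    G = nx.DiGraph()
    for i, v in enumerate(signal):
        G.add_edge("s", i, capacity=(v - mu_bkg) ** 2)   # cost of labelling i background
        G.add_edge(i, "t", capacity=(v - mu_obj) ** 2)   # cost of labelling i object
        if i > 0:
            G.add_edge(i - 1, i, capacity=smoothness)    # pairwise smoothness terms
            G.add_edge(i, i - 1, capacity=smoothness)
    _, (source_side, _) = nx.minimum_cut(G, "s", "t")
    # Samples left on the source side of the cut are labelled as object.
    return np.array([1 if i in source_side else 0 for i in range(len(signal))])

# Noisy step signal: background around 1, object around 4 in the middle.
rng = np.random.default_rng(3)
signal = np.concatenate([rng.normal(1, 0.4, 30), rng.normal(4, 0.4, 20),
                         rng.normal(1, 0.4, 30)])
labels = graph_cut_1d(signal, mu_obj=4.0, mu_bkg=1.0)
print("object samples found at indices", np.flatnonzero(labels))
```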
Collapse
Affiliation(s)
- Arafet Sbei
- Université de Tunis El Manar, Institut Supérieur d'Informatique, Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), LR16ES06 Laboratoire de recherche en Informatique, Modélisation et Traitement de l'Information et de la Connaissance (LIMTIC), Tunisia; Nuclear Medicine Department, Pasteur Institute of Tunis, Tunis, Tunisia
| | - Khaoula ElBedoui
- Université de Tunis El Manar, Institut Supérieur d'Informatique, Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), LR16ES06 Laboratoire de recherche en Informatique, Modélisation et Traitement de l'Information et de la Connaissance (LIMTIC), Tunisia; Université de Carthage, Ecole Nationale d'Ingénieurs de Carthage, Tunisia
| | - Walid Barhoumi
- Université de Tunis El Manar, Institut Supérieur d'Informatique, Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), LR16ES06 Laboratoire de recherche en Informatique, Modélisation et Traitement de l'Information et de la Connaissance (LIMTIC), Tunisia; Université de Carthage, Ecole Nationale d'Ingénieurs de Carthage, Tunisia.
| | - Philippe Maksud
- Nuclear Medicine Department, Pitié-Salpêtrière Hospital, AP-HP, Paris, France
| | - Chokri Maktouf
- Nuclear Medicine Department, Pasteur Institute of Tunis, Tunis, Tunisia
| |
Collapse
|
31
|
Kawata Y, Arimura H, Ikushima K, Jin Z, Morita K, Tokunaga C, Yabu-Uchi H, Shioyama Y, Sasaki T, Honda H, Sasaki M. Impact of pixel-based machine-learning techniques on automated frameworks for delineation of gross tumor volume regions for stereotactic body radiation therapy. Phys Med 2017; 42:141-149. [PMID: 29173908 DOI: 10.1016/j.ejmp.2017.08.012] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/25/2017] [Revised: 08/21/2017] [Accepted: 08/26/2017] [Indexed: 01/03/2023] Open
Abstract
The aim of this study was to investigate the impact of pixel-based machine learning (ML) techniques, i.e., the fuzzy c-means clustering method (FCM), artificial neural network (ANN) and support vector machine (SVM), on an automated framework for delineation of gross tumor volume (GTV) regions of lung cancer for stereotactic body radiation therapy. The morphological and metabolic features of GTV regions, which were determined based on the knowledge of radiation oncologists, were fed on a pixel-by-pixel basis into the respective FCM, ANN, and SVM ML techniques. The ML techniques were then incorporated into the automated GTV delineation framework, followed by an optimum contour selection (OCS) method that we proposed in a previous study. The three ML-based frameworks were evaluated on 16 lung cancer cases (six solid, four ground-glass opacity (GGO), six part-solid GGO) with datasets of planning computed tomography (CT) and 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET)/CT images, using the three-dimensional Dice similarity coefficient (DSC). The DSC denotes the degree of region similarity between the GTVs contoured by radiation oncologists and those estimated using the automated framework. The FCM-based framework achieved the highest DSC of 0.79 ± 0.06, whereas the DSCs of the ANN-based and SVM-based frameworks were 0.76 ± 0.14 and 0.73 ± 0.14, respectively. The FCM-based framework provided the highest segmentation accuracy and precision without a learning process (lowest calculation cost). Therefore, the FCM-based framework can be useful for delineation of tumor regions in practical treatment planning.
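Of the three pixel-based ML techniques compared above, fuzzy c-means is the simplest to write down: memberships and cluster centres are updated alternately until convergence. The sketch below is a minimal from-scratch FCM run on made-up per-pixel features, intended only to show the update rules; it is not the study's framework and all feature values are assumptions.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means clustering (sketch).
    X: (n_samples, n_features) array of per-pixel features.
    Returns (memberships of shape (n_samples, n_clusters), cluster centres)."""
    rng = np.random.default_rng(seed)
    u = rng.random((X.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)                 # membership rows sum to 1
    for _ in range(n_iter):
        um = u ** m
        centres = (um.T @ X) / um.sum(axis=0)[:, None]
        # squared distances of every sample to every centre
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2) + 1e-12
        u_new = 1.0 / (d2 ** (1.0 / (m - 1)))
        u_new /= u_new.sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return u, centres

# Toy per-pixel features (intensity, distance-to-ROI-centre) for two populations.
rng = np.random.default_rng(4)
tumour_px = np.column_stack([rng.normal(8, 1, 200), rng.normal(3, 1, 200)])
background_px = np.column_stack([rng.normal(2, 1, 400), rng.normal(12, 3, 400)])
X = np.vstack([tumour_px, background_px])
u, centres = fuzzy_c_means(X, n_clusters=2)
labels = u.argmax(axis=1)
print("cluster centres:\n", centres.round(1))
print("pixels per cluster:", np.bincount(labels))
```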
Collapse
Affiliation(s)
- Yasuo Kawata
- Graduate School of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| | - Hidetaka Arimura
- Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan.
| | - Koujirou Ikushima
- Graduate School of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| | - Ze Jin
- Graduate School of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| | - Kento Morita
- Department of Health Sciences, School of Medicine, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| | - Chiaki Tokunaga
- Department of Medical Technology, Kyushu University Hospital, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| | - Hidetake Yabu-Uchi
- Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| | - Yoshiyuki Shioyama
- Saga Heavy Ion Medical Accelerator in Tosu, 415, Harakoga-cho, Tosu 841-0071, Japan
| | - Tomonari Sasaki
- Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| | - Hiroshi Honda
- Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| | - Masayuki Sasaki
- Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| |
Collapse
|
32
|
Trebeschi S, van Griethuysen JJM, Lambregts DMJ, Lahaye MJ, Parmar C, Bakers FCH, Peters NHGM, Beets-Tan RGH, Aerts HJWL. Deep Learning for Fully-Automated Localization and Segmentation of Rectal Cancer on Multiparametric MR. Sci Rep 2017; 7:5301. [PMID: 28706185 PMCID: PMC5509680 DOI: 10.1038/s41598-017-05728-9] [Citation(s) in RCA: 168] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2017] [Accepted: 06/01/2017] [Indexed: 11/24/2022] Open
Abstract
Multiparametric Magnetic Resonance Imaging (MRI) can provide detailed information about the physical characteristics of rectal tumours. Several investigations suggest that volumetric analyses of anatomical and functional MRI contain clinically valuable information. However, manual delineation of tumours is a time-consuming procedure, as it requires a high level of expertise. Here, we evaluate deep learning methods for automatic localization and segmentation of rectal cancers on multiparametric MR imaging. MRI scans (1.5 T, T2-weighted, and DWI) of 140 patients with locally advanced rectal cancer were included in our analysis, equally divided between discovery and validation datasets. Two expert radiologists segmented each tumour. A convolutional neural network (CNN) was trained on the multiparametric MRIs of the discovery set to classify each voxel as tumour or non-tumour. On the independent validation dataset, the CNN showed high segmentation accuracy for reader 1 (Dice Similarity Coefficient, DSC = 0.68) and reader 2 (DSC = 0.70). The area under the curve (AUC) of the resulting probability maps was very high for both readers, AUC = 0.99 (SD = 0.05). Our results demonstrate that deep learning can perform accurate localization and segmentation of rectal cancer in MR imaging in the majority of patients. Deep learning technologies have the potential to improve the speed and accuracy of MRI-based rectum segmentations.
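A voxel-wise CNN classifier of the kind described above maps multi-channel MR slices to a per-pixel tumour probability map. The sketch below is a deliberately tiny fully convolutional network in PyTorch run on random tensors; the architecture, channel count and all shapes are illustrative assumptions, not the network used in the paper.

```python
import torch
import torch.nn as nn

class TinyVoxelClassifier(nn.Module):
    """Illustrative fully convolutional network mapping a multiparametric 2-D
    input (e.g. T2-weighted + DWI channels) to a per-pixel tumour probability."""
    def __init__(self, in_channels=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),           # per-pixel logit
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))              # per-pixel tumour probability

# Toy forward/backward pass on random data standing in for MRI slices and masks.
model = TinyVoxelClassifier(in_channels=2)
images = torch.randn(4, 2, 64, 64)                     # batch of 2-channel slices
masks = (torch.rand(4, 1, 64, 64) > 0.9).float()       # fake expert delineations
probs = model(images)
loss = nn.functional.binary_cross_entropy(probs, masks)
loss.backward()
print("probability map shape:", tuple(probs.shape), " loss:", float(loss))
```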
Collapse
Affiliation(s)
- Stefano Trebeschi
- Department of Radiology, the Netherlands Cancer Institute, Amsterdam, The Netherlands
- GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht, The Netherlands
| | - Joost J M van Griethuysen
- Department of Radiology, the Netherlands Cancer Institute, Amsterdam, The Netherlands
- GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht, The Netherlands
| | - Doenja M J Lambregts
- Department of Radiology, the Netherlands Cancer Institute, Amsterdam, The Netherlands
| | - Max J Lahaye
- Department of Radiology, the Netherlands Cancer Institute, Amsterdam, The Netherlands
| | - Chintan Parmar
- Department of Radiation Oncology and Radiology, Dana-Farber Cancer Institute, Brigham and Women's Hospital, Harvard Medical School, Boston, USA
| | - Frans C H Bakers
- Department of Radiology, Maastricht University Medical Centre, Maastricht, The Netherlands
| | - Nicky H G M Peters
- Department of Radiology, Zuyderland Medical Center, location Heerlen, Heerlen, The Netherlands
| | - Regina G H Beets-Tan
- Department of Radiology, the Netherlands Cancer Institute, Amsterdam, The Netherlands
- GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht, The Netherlands
| | - Hugo J W L Aerts
- Department of Radiology, the Netherlands Cancer Institute, Amsterdam, The Netherlands.
- Department of Radiation Oncology and Radiology, Dana-Farber Cancer Institute, Brigham and Women's Hospital, Harvard Medical School, Boston, USA.
| |
Collapse
|
33
|
Mohammed MA, Abd Ghani MK, Hamed RI, Ibrahim DA, Abdullah MK. Artificial neural networks for automatic segmentation and identification of nasopharyngeal carcinoma. Journal of Computational Science 2017; 21:263-274. [DOI: 10.1016/j.jocs.2017.03.026] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/02/2023]
|
34
|
Chang CC, Chen HH, Chang YC, Yang MY, Lo CM, Ko WC, Lee YF, Liu KL, Chang RF. Computer-aided diagnosis of liver tumors on computed tomography images. Computer Methods and Programs in Biomedicine 2017; 145:45-51. [PMID: 28552125 DOI: 10.1016/j.cmpb.2017.04.008] [Citation(s) in RCA: 35] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/28/2016] [Revised: 02/19/2017] [Accepted: 04/12/2017] [Indexed: 06/07/2023]
Abstract
BACKGROUND AND OBJECTIVE: Liver cancer is the tenth most common cancer in the USA, and its incidence has been increasing for several decades. Early detection, diagnosis, and treatment of the disease are very important. Computed tomography (CT) is one of the most common and robust imaging techniques for the detection of liver cancer. CT scanners can provide multiple-phase sequential scans of the whole liver. In this study, we proposed a computer-aided diagnosis (CAD) system to diagnose liver cancer using features of tumors obtained from multiphase CT images. METHODS: A total of 71 histologically proven liver tumors, including 49 benign and 22 malignant lesions, were evaluated with the proposed CAD system to assess its performance. Tumors were identified by the user and then segmented using a region-growing algorithm. After tumor segmentation, three kinds of features were obtained for each tumor: texture, shape, and kinetic curve. Texture was quantified from the 3-dimensional (3-D) texture data of the tumor based on the grey-level co-occurrence matrix (GLCM). Compactness, margin, and an elliptic model were used to describe the 3-D shape of the tumor. The kinetic curve was established from each phase of the tumor and represented as the variation in density across phases. Backward elimination was used to select the best combination of features, and binary logistic regression analysis was used to classify the tumors with leave-one-out cross-validation. RESULTS: The accuracy and sensitivity for texture were 71.82% and 68.18%, respectively, which were better than those for shape and kinetic curve at comparable specificity. Combining all of the features achieved the highest accuracy (58/71, 81.69%), sensitivity (18/22, 81.82%), and specificity (40/49, 81.63%). The Az value for the combination of all features was 0.8713. CONCLUSIONS: Combining texture, shape, and kinetic curve features may differentiate benign from malignant liver tumors using the proposed CAD system.
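GLCM texture features of the kind used above are readily computed with scikit-image (assuming a recent version that exposes graycomatrix/graycoprops). The sketch below computes a few common GLCM properties on a 2-D patch for brevity, whereas the study used 3-D texture data; the patches are synthetic and the quantisation level is an assumption.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(tumour_patch, levels=32):
    """Grey-level co-occurrence matrix (GLCM) texture features for a 2-D
    tumour patch (sketch). The patch is re-quantised to `levels` grey levels."""
    patch = tumour_patch.astype(float)
    span = patch.max() - patch.min() + 1e-12
    patch = ((patch - patch.min()) / span * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(patch, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

# Toy example: a smooth patch vs. a noisy (heterogeneous) patch.
rng = np.random.default_rng(5)
smooth = np.tile(np.linspace(50, 80, 32), (32, 1))
noisy = rng.integers(0, 255, size=(32, 32)).astype(float)
print("smooth patch:", glcm_features(smooth))
print("noisy patch: ", glcm_features(noisy))
```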
Collapse
Affiliation(s)
- Chin-Chen Chang
- Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
| | - Hong-Hao Chen
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
| | - Yeun-Chung Chang
- Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
| | - Ming-Yang Yang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
| | - Chung-Ming Lo
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan; Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan
| | - Wei-Chun Ko
- Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
| | - Yee-Fan Lee
- Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
| | - Kao-Lang Liu
- Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan.
| | - Ruey-Feng Chang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan; Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan.
| |
Collapse
|
35
|
Tan S, Li L, Choi W, Kang MK, D'Souza WD, Lu W. Adaptive region-growing with maximum curvature strategy for tumor segmentation in 18F-FDG PET. Phys Med Biol 2017; 62:5383-5402. [PMID: 28604372 DOI: 10.1088/1361-6560/aa6e20] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
Accurate tumor segmentation in PET is crucial in many oncology applications. We developed an adaptive region-growing (ARG) algorithm with a maximum curvature strategy (ARG_MC) for tumor segmentation in PET. The ARG_MC repeatedly applies a confidence-connected region-growing algorithm with an increasing relaxing factor f. The optimal relaxing factor (ORF) is then determined at the transition point on the f-volume curve, where the volume has just grown from the tumor into the surrounding normal tissue. The ARG_MC, along with five widely used algorithms, was tested on a phantom with six spheres at different signal-to-background ratios and on two clinical datasets, including 20 patients with esophageal cancer and 11 patients with non-Hodgkin lymphoma (NHL). The ARG_MC did not require any phantom calibration or any a priori knowledge of the tumor or PET scanner. The identified ORF varied with tumor type (mean ORF = 9.61, 3.78 and 2.55 for the phantom, esophageal cancer, and NHL datasets, respectively) and varied from one tumor to another. For the phantom, the ARG_MC ranked second in segmentation accuracy with an average Dice similarity index (DSI) of 0.86, only slightly worse than Daisne's adaptive thresholding method (DSI = 0.87), which required phantom calibration. For both the esophageal cancer dataset and the NHL dataset, the ARG_MC had the highest accuracy, with average DSIs of 0.87 and 0.84, respectively. The ARG_MC was robust to parameter settings and region-of-interest selection, and it did not depend on scanners, imaging protocols, or tumor types. Furthermore, the ARG_MC makes no assumption about tumor size or uptake distribution, making it suitable for segmenting tumors with heterogeneous FDG uptake. In conclusion, the ARG_MC is accurate, robust and easy to use, and it provides a promising tool for PET tumor segmentation in the clinic.
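The core mechanism, growing a confidence-connected region for increasing relaxing factors and reading the optimal factor off the f-volume curve, can be sketched compactly. The code below is a simplified stand-in: it selects the factor just before the largest jump in segmented volume rather than the paper's maximum-curvature criterion, and all data, parameter values and function names are synthetic assumptions.

```python
import numpy as np
from scipy import ndimage

def confidence_connected(volume, seed, f, n_passes=15):
    """Simple confidence-connected region growing: starting from a small seed
    neighbourhood, iteratively add connected voxels whose intensity lies
    within mean +/- f * std of the current region."""
    region = np.zeros(volume.shape, dtype=bool)
    z, y, x = seed
    region[z-1:z+2, y-1:y+2, x-1:x+2] = True          # small seed neighbourhood
    structure = ndimage.generate_binary_structure(3, 1)
    for _ in range(n_passes):
        mu, sigma = volume[region].mean(), volume[region].std() + 1e-6
        candidates = ndimage.binary_dilation(region, structure) & ~region
        accepted = candidates & (np.abs(volume - mu) <= f * sigma)
        if not accepted.any():
            break
        region |= accepted
    return region

def optimal_relaxing_factor(volume, seed, f_values):
    """Sketch of the ARG idea: grow the region for increasing relaxing factors
    and pick the factor just before the largest jump in segmented volume
    (a crude stand-in for the maximum-curvature criterion)."""
    volumes = np.array([confidence_connected(volume, seed, f).sum() for f in f_values])
    jump = np.diff(volumes)
    return f_values[int(np.argmax(jump))], volumes

# Toy volume: hot lesion (uptake 4) on background (uptake 2) with noise.
rng = np.random.default_rng(6)
zz, yy, xx = np.mgrid[-15:16, -15:16, -15:16]
lesion = zz**2 + yy**2 + xx**2 <= 6**2
vol = np.where(lesion, 4.0, 2.0) + rng.normal(0, 0.3, lesion.shape)
f_values = np.arange(1.0, 12.0, 1.0)
orf, vols = optimal_relaxing_factor(vol, seed=(15, 15, 15), f_values=f_values)
print("volumes vs f:", vols)
print("selected relaxing factor:", orf)
```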
Collapse
Affiliation(s)
- Shan Tan
- Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, People's Republic of China. Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, Maryland 21201, United States of America
| | | | | | | | | | | |
Collapse
|
36
|
Hatt M, Lee JA, Schmidtlein CR, Naqa IE, Caldwell C, De Bernardi E, Lu W, Das S, Geets X, Gregoire V, Jeraj R, MacManus MP, Mawlawi OR, Nestle U, Pugachev AB, Schöder H, Shepherd T, Spezi E, Visvikis D, Zaidi H, Kirov AS. Classification and evaluation strategies of auto-segmentation approaches for PET: Report of AAPM task group No. 211. Med Phys 2017; 44:e1-e42. [PMID: 28120467 DOI: 10.1002/mp.12124] [Citation(s) in RCA: 142] [Impact Index Per Article: 17.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2016] [Revised: 12/09/2016] [Accepted: 01/04/2017] [Indexed: 12/14/2022] Open
Abstract
PURPOSE The purpose of this educational report is to provide an overview of the present state-of-the-art PET auto-segmentation (PET-AS) algorithms and their respective validation, with an emphasis on providing the user with help in understanding the challenges and pitfalls associated with selecting and implementing a PET-AS algorithm for a particular application. APPROACH A brief description of the different types of PET-AS algorithms is provided using a classification based on method complexity and type. The advantages and the limitations of the current PET-AS algorithms are highlighted based on current publications and existing comparison studies. A review of the available image datasets and contour evaluation metrics in terms of their applicability for establishing a standardized evaluation of PET-AS algorithms is provided. The performance requirements for the algorithms and their dependence on the application, the radiotracer used and the evaluation criteria are described and discussed. Finally, a procedure for algorithm acceptance and implementation, as well as the complementary role of manual and auto-segmentation are addressed. FINDINGS A large number of PET-AS algorithms have been developed within the last 20 years. Many of the proposed algorithms are based on either fixed or adaptively selected thresholds. More recently, numerous papers have proposed the use of more advanced image analysis paradigms to perform semi-automated delineation of the PET images. However, the level of algorithm validation is variable and for most published algorithms is either insufficient or inconsistent which prevents recommending a single algorithm. This is compounded by the fact that realistic image configurations with low signal-to-noise ratios (SNR) and heterogeneous tracer distributions have rarely been used. Large variations in the evaluation methods used in the literature point to the need for a standardized evaluation protocol. CONCLUSIONS Available comparison studies suggest that PET-AS algorithms relying on advanced image analysis paradigms provide generally more accurate segmentation than approaches based on PET activity thresholds, particularly for realistic configurations. However, this may not be the case for simple shape lesions in situations with a narrower range of parameters, where simpler methods may also perform well. Recent algorithms which employ some type of consensus or automatic selection between several PET-AS methods have potential to overcome the limitations of the individual methods when appropriately trained. In either case, accuracy evaluation is required for each different PET scanner and scanning and image reconstruction protocol. For the simpler, less robust approaches, adaptation to scanning conditions, tumor type, and tumor location by optimization of parameters is necessary. The results from the method evaluation stage can be used to estimate the contouring uncertainty. All PET-AS contours should be critically verified by a physician. A standard test, i.e., a benchmark dedicated to evaluating both existing and future PET-AS algorithms needs to be designed, to aid clinicians in evaluating and selecting PET-AS algorithms and to establish performance limits for their acceptance for clinical use. The initial steps toward designing and building such a standard are undertaken by the task group members.
Collapse
Affiliation(s)
- Mathieu Hatt
- INSERM, UMR 1101, LaTIM, University of Brest, IBSAM, Brest, France
| | - John A Lee
- Université catholique de Louvain (IREC/MIRO) & FNRS, Brussels, 1200, Belgium
| | | | | | - Curtis Caldwell
- Sunnybrook Health Sciences Center, Toronto, ON, M4N 3M5, Canada
| | | | - Wei Lu
- Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
| | - Shiva Das
- University of North Carolina, Chapel Hill, NC, 27599, USA
| | - Xavier Geets
- Université catholique de Louvain (IREC/MIRO) & FNRS, Brussels, 1200, Belgium
| | - Vincent Gregoire
- Université catholique de Louvain (IREC/MIRO) & FNRS, Brussels, 1200, Belgium
| | - Robert Jeraj
- University of Wisconsin, Madison, WI, 53705, USA
| | | | | | - Ursula Nestle
- Universitätsklinikum Freiburg, Freiburg, 79106, Germany
| | - Andrei B Pugachev
- University of Texas Southwestern Medical Center, Dallas, TX, 75390, USA
| | - Heiko Schöder
- Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
| | | | - Emiliano Spezi
- School of Engineering, Cardiff University, Cardiff, Wales, United Kingdom
| | | | - Habib Zaidi
- Geneva University Hospital, Geneva, CH-1211, Switzerland
| | - Assen S Kirov
- Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
| |
Collapse
|
37
|
Beichel RR, Van Tol M, Ulrich EJ, Bauer C, Chang T, Plichta KA, Smith BJ, Sunderland JJ, Graham MM, Sonka M, Buatti JM. Semiautomated segmentation of head and neck cancers in 18F-FDG PET scans: A just-enough-interaction approach. Med Phys 2017; 43:2948-2964. [PMID: 27277044 PMCID: PMC4874930 DOI: 10.1118/1.4948679] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023] Open
Abstract
Purpose: The purpose of this work was to develop, validate, and compare a highly computer-aided method for the segmentation of hot lesions in head and neck 18F-FDG PET scans. Methods: A semiautomated segmentation method was developed which transforms the segmentation problem into a graph-based optimization problem. For this purpose, a graph structure is constructed around a user-provided approximate lesion centerpoint and a suitable cost function is derived from local image statistics. To handle frequently occurring ambiguous situations (e.g., lesions adjacent to each other versus a lesion with inhomogeneous uptake), several segmentation modes are introduced that adapt the behavior of the base algorithm accordingly. In addition, the authors present approaches for the efficient interactive local and global refinement of initial segmentations based on the “just-enough-interaction” principle. For method validation, 60 PET/CT scans from 59 different subjects with 230 head and neck lesions were utilized. All patients had squamous cell carcinoma of the head and neck. A detailed comparison with the current clinically relevant standard manual segmentation approach was performed based on 2760 segmentations produced by three experts. Results: The segmentation accuracy, measured by the Dice coefficient, of the proposed semiautomated and the standard manual segmentation approaches was 0.766 and 0.764, respectively. This difference was not statistically significant (p = 0.2145). However, the intra- and interoperator standard deviations were significantly lower for the semiautomated method. In addition, the proposed method was found to be significantly faster and resulted in significantly higher intra- and interoperator segmentation agreement when compared to the manual segmentation approach. Conclusions: Lack of consistency in tumor definition is a critical barrier for radiation treatment targeting as well as for response assessment in clinical trials and in clinical oncology decision-making. The properties of the authors' approach make it well suited for applications in image-guided radiation oncology, response assessment, or treatment outcome prediction.
Collapse
Affiliation(s)
- Reinhard R Beichel
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, Iowa 52242; The Iowa Institute for Biomedical Imaging, The University of Iowa, Iowa City, Iowa 52242; and Department of Internal Medicine, The University of Iowa, Iowa City, Iowa 52242
| | - Markus Van Tol
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, Iowa 52242 and The Iowa Institute for Biomedical Imaging, The University of Iowa, Iowa City, Iowa 52242
| | - Ethan J Ulrich
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, Iowa 52242 and The Iowa Institute for Biomedical Imaging, The University of Iowa, Iowa City, Iowa 52242
| | - Christian Bauer
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, Iowa 52242 and The Iowa Institute for Biomedical Imaging, The University of Iowa, Iowa City, Iowa 52242
| | - Tangel Chang
- Department of Radiation Oncology, The University of Iowa, Iowa City, Iowa 52242
| | - Kristin A Plichta
- Department of Radiation Oncology, The University of Iowa, Iowa City, Iowa 52242
| | - Brian J Smith
- Department of Biostatistics, The University of Iowa, Iowa City, Iowa 52242
| | - John J Sunderland
- Department of Radiology, The University of Iowa, Iowa City, Iowa 52242
| | - Michael M Graham
- Department of Radiology, The University of Iowa, Iowa City, Iowa 52242
| | - Milan Sonka
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, Iowa 52242; Department of Radiation Oncology, The University of Iowa, Iowa City, Iowa 52242; and The Iowa Institute for Biomedical Imaging, The University of Iowa, Iowa City, Iowa 52242
| | - John M Buatti
- Department of Radiation Oncology, The University of Iowa, Iowa City, Iowa 52242 and The Iowa Institute for Biomedical Imaging, The University of Iowa, Iowa City, Iowa 52242
38
Giri MG, Cavedon C, Mazzarotto R, Ferdeghini M. A Dirichlet process mixture model for automatic (18)F-FDG PET image segmentation: Validation study on phantoms and on lung and esophageal lesions. Med Phys 2017; 43:2491. [PMID: 27147360 DOI: 10.1118/1.4947123] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023] Open
Abstract
PURPOSE The aim of this study was to implement a Dirichlet process mixture (DPM) model for automatic tumor edge identification on (18)F-fluorodeoxyglucose positron emission tomography ((18)F-FDG PET) images by optimizing the parameters on which the algorithm depends, to validate it experimentally, and to test its robustness. METHODS The DPM model belongs to the class of the Bayesian nonparametric models and uses the Dirichlet process prior for flexible nonparametric mixture modeling, without any preliminary choice of the number of mixture components. The DPM algorithm implemented in the statistical software package R was used in this work. The contouring accuracy was evaluated on several image data sets: on an IEC phantom (spherical inserts with diameter in the range 10-37 mm) acquired by a Philips Gemini Big Bore PET-CT scanner, using 9 different target-to-background ratios (TBRs) from 2.5 to 70; on a digital phantom simulating spherical/uniform lesions and tumors, irregular in shape and activity; and on 20 clinical cases (10 lung and 10 esophageal cancer patients). The influence of the DPM parameters on contour generation was studied in two steps. In the first one, only the IEC spheres having diameters of 22 and 37 mm and a sphere of the digital phantom (41.6 mm diameter) were studied by varying the main parameters until the diameter of the spheres was obtained within 0.2% of the true value. In the second step, the results obtained for this training set were applied to the entire data set to determine DPM based volumes of all available lesions. These volumes were compared to those obtained by applying already known algorithms (Gaussian mixture model and gradient-based) and to true values, when available. RESULTS Only one parameter was found able to significantly influence segmentation accuracy (ANOVA test). This parameter was linearly connected to the uptake variance of the tested region of interest (ROI). In the first step of the study, a calibration curve was determined to automatically generate the optimal parameter from the variance of the ROI. This "calibration curve" was then applied to contour the whole data set. The accuracy (mean discrepancy between DPM model-based contours and reference contours) of volume estimation was below (1 ± 7)% on the whole data set (1 SD). The overlap between true and automatically segmented contours, measured by the Dice similarity coefficient, was 0.93 with a SD of 0.03. CONCLUSIONS The proposed DPM model was able to accurately reproduce known volumes of FDG concentration, with high overlap between segmented and true volumes. For all the analyzed inserts of the IEC phantom, the algorithm proved to be robust to variations in radius and in TBR. The main advantage of this algorithm was that no setting of DPM parameters was required in advance, since the proper setting of the only parameter that could significantly influence the segmentation results was automatically related to the uptake variance of the chosen ROI. Furthermore, the algorithm did not need any preliminary choice of the optimum number of classes to describe the ROIs within PET images and no assumption about the shape of the lesion and the uptake heterogeneity of the tracer was required.
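A minimal Python sketch of the same modelling idea—fitting a mixture with an effectively unbounded number of components to the uptake values of an ROI and keeping the highest-uptake component as tumour—is given below, using scikit-learn's truncated Dirichlet-process mixture. The original work was implemented in R and ties its key parameter to the uptake variance of the ROI; the component-selection rule and all settings here are simplifying assumptions.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def dpm_segment(roi_suv, max_components=10, alpha=1.0):
    """Cluster the uptake values of an ROI with a (truncated) Dirichlet
    process mixture and return a mask of the highest-mean component.

    Sketch of the general idea only: the published model was implemented in R
    and relates its key parameter to the uptake variance of the chosen ROI."""
    x = roi_suv.reshape(-1, 1)
    dpm = BayesianGaussianMixture(
        n_components=max_components,                   # truncation level
        weight_concentration_prior_type="dirichlet_process",
        weight_concentration_prior=alpha,
        max_iter=500,
        random_state=0,
    ).fit(x)
    labels = dpm.predict(x).reshape(roi_suv.shape)
    tumour_component = int(np.argmax(dpm.means_.ravel()))
    return labels == tumour_component

# toy ROI: warm background with a hot 6 x 6 x 6 voxel insert
rng = np.random.default_rng(0)
roi = rng.normal(1.0, 0.2, size=(20, 20, 20))
roi[8:14, 8:14, 8:14] = rng.normal(6.0, 0.5, size=(6, 6, 6))
mask = dpm_segment(roi)
print("segmented voxels:", int(mask.sum()), "true voxels:", 6 * 6 * 6)
```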
Affiliation(s)
- Maria Grazia Giri
- Medical Physics Unit, University Hospital of Verona, P.le Stefani 1, Verona 37126, Italy
| | - Carlo Cavedon
- Medical Physics Unit, University Hospital of Verona, P.le Stefani 1, Verona 37126, Italy
| | - Renzo Mazzarotto
- Radiation Oncology Unit, University Hospital of Verona, P.le Stefani 1, Verona 37126, Italy
| | - Marco Ferdeghini
- Nuclear Medicine Unit, University Hospital of Verona, P.le Stefani 1, Verona 37126, Italy
39
Ikushima K, Arimura H, Jin Z, Yabu-Uchi H, Kuwazuru J, Shioyama Y, Sasaki T, Honda H, Sasaki M. Computer-assisted framework for machine-learning-based delineation of GTV regions on datasets of planning CT and PET/CT images. JOURNAL OF RADIATION RESEARCH 2017; 58:123-134. [PMID: 27609193 PMCID: PMC5321188 DOI: 10.1093/jrr/rrw082] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/08/2016] [Revised: 05/14/2016] [Accepted: 07/03/2016] [Indexed: 06/06/2023]
Abstract
We have proposed a computer-assisted framework for machine-learning-based delineation of gross tumor volumes (GTVs) following an optimum contour selection (OCS) method. The key idea of the proposed framework was to feed image features around GTV contours (determined based on the knowledge of radiation oncologists) into a machine-learning classifier during the training step, after which the classifier produces the 'degree of GTV' for each voxel in the testing step. Initial GTV regions were extracted using a support vector machine (SVM) that learned the image features inside and outside each tumor region (determined by radiation oncologists). The leave-one-out-by-patient test was employed for training and testing the steps of the proposed framework. The final GTV regions were determined using the OCS method, which can be used to select a global optimum object contour based on multiple active delineations with a level set method (LSM) around the GTV. The efficacy of the proposed framework was evaluated in 14 lung cancer cases [solid: 6, ground-glass opacity (GGO): 4, mixed GGO: 4] using the 3D Dice similarity coefficient (DSC), which denotes the degree of region similarity between the GTVs contoured by radiation oncologists and those determined using the proposed framework. The proposed framework achieved an average DSC of 0.777 for 14 cases, whereas the OCS-based framework produced an average DSC of 0.507. The average DSCs for GGO and mixed GGO were 0.763 and 0.701, respectively, obtained by the proposed framework. The proposed framework can be employed as a tool to assist radiation oncologists in delineating various GTV regions.
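The training step described above—learning image features inside and outside oncologist-drawn contours and producing a per-voxel "degree of GTV"—can be sketched with a probabilistic SVM. The four hand-picked features, the toy data, and the fact that the sketch predicts on its own training case (the paper uses a leave-one-out-by-patient test and an OCS refinement) are all simplifying assumptions.

```python
import numpy as np
from scipy import ndimage
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def voxel_features(ct, pet):
    """Per-voxel feature vectors: CT value, PET uptake and their local means.
    The published framework uses a richer, oncologist-informed feature set;
    these four features are illustrative assumptions."""
    feats = [ct, pet, ndimage.uniform_filter(ct, 3), ndimage.uniform_filter(pet, 3)]
    return np.stack([f.ravel() for f in feats], axis=1)

def degree_of_gtv(ct, pet, gtv_mask):
    """Train an SVM on voxels inside/outside an oncologist-drawn GTV and
    return a per-voxel 'degree of GTV' (probability) map.  For brevity the
    sketch predicts on its own training case; the paper evaluates with a
    leave-one-out-by-patient test and refines the result with the OCS method."""
    X = voxel_features(ct, pet)
    y = gtv_mask.ravel().astype(int)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf.fit(X, y)
    return clf.predict_proba(X)[:, 1].reshape(ct.shape)

# toy planning-CT / PET pair with a cubic "tumour"
rng = np.random.default_rng(1)
ct = rng.normal(0.0, 1.0, (16, 16, 16))
pet = rng.normal(1.0, 0.2, (16, 16, 16))
gtv = np.zeros_like(pet, dtype=bool)
gtv[6:11, 6:11, 6:11] = True
pet[gtv] += 4.0
print("mean degree of GTV inside the contour:",
      round(float(degree_of_gtv(ct, pet, gtv)[gtv].mean()), 3))
```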
Affiliation(s)
- Koujiro Ikushima
- Graduate School of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| | - Hidetaka Arimura
- Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| | - Ze Jin
- Graduate School of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Research Fellow of the Japan Society for the Promotion of Science
| | - Hidetake Yabu-Uchi
- Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| | - Jumpei Kuwazuru
- Saiseikai Fukuoka General Hospital, 1-3-46, Tenjin, Chuo-ku, Fukuoka 810-0001, Japan
| | - Yoshiyuki Shioyama
- Saga Heavy Ion Medical Accelerator in Tosu, 415, Harakoga-cho, Tosu 841-0071, Japan
| | - Tomonari Sasaki
- Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| | - Hiroshi Honda
- Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| | - Masayuki Sasaki
- Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
40
Wang H, Udupa JK, Odhner D, Tong Y, Zhao L, Torigian DA. Automatic anatomy recognition in whole-body PET/CT images. Med Phys 2016; 43:613. [PMID: 26745953 DOI: 10.1118/1.4939127] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Whole-body positron emission tomography/computed tomography (PET/CT) has become a standard method of imaging patients with various disease conditions, especially cancer. Body-wide accurate quantification of disease burden in PET/CT images is important for characterizing lesions, staging disease, prognosticating patient outcome, planning treatment, and evaluating disease response to therapeutic interventions. However, body-wide anatomy recognition in PET/CT is a critical first step for accurately and automatically quantifying disease body-wide, body-region-wise, and organwise. This latter process, however, has remained a challenge due to the lower quality of the anatomic information portrayed in the CT component of this imaging modality and the paucity of anatomic details in the PET component. In this paper, the authors demonstrate the adaptation of a recently developed automatic anatomy recognition (AAR) methodology [Udupa et al., "Body-wide hierarchical fuzzy modeling, recognition, and delineation of anatomy in medical images," Med. Image Anal. 18, 752-771 (2014)] to PET/CT images. Their goal was to test what level of object localization accuracy can be achieved on PET/CT compared to that achieved on diagnostic CT images. METHODS The authors advance the AAR approach in this work in three fronts: (i) from body-region-wise treatment in the work of Udupa et al. to whole body; (ii) from the use of image intensity in optimal object recognition in the work of Udupa et al. to intensity plus object-specific texture properties, and (iii) from the intramodality model-building-recognition strategy to the intermodality approach. The whole-body approach allows consideration of relationships among objects in different body regions, which was previously not possible. Consideration of object texture allows generalizing the previous optimal threshold-based fuzzy model recognition method from intensity images to any derived fuzzy membership image, and in the process, to bring performance to the level achieved on diagnostic CT and MR images in body-region-wise approaches. The intermodality approach fosters the use of already existing fuzzy models, previously created from diagnostic CT images, on PET/CT and other derived images, thus truly separating the modality-independent object assembly anatomy from modality-specific tissue property portrayal in the image. RESULTS Key ways of combining the above three basic ideas lead them to 15 different strategies for recognizing objects in PET/CT images. Utilizing 50 diagnostic CT image data sets from the thoracic and abdominal body regions and 16 whole-body PET/CT image data sets, the authors compare the recognition performance among these 15 strategies on 18 objects from the thorax, abdomen, and pelvis in object localization error and size estimation error. Particularly on texture membership images, object localization is within three voxels on whole-body low-dose CT images and 2 voxels on body-region-wise low-dose images of known true locations. Surprisingly, even on direct body-region-wise PET images, localization error within 3 voxels seems possible. CONCLUSIONS The previous body-region-wise approach can be extended to whole-body torso with similar object localization performance. Combined use of image texture and intensity property yields the best object localization accuracy. In both body-region-wise and whole-body approaches, recognition performance on low-dose CT images reaches levels previously achieved on diagnostic CT images. 
The best object recognition strategy varies among objects; however, the proposed framework allows employing a strategy that is optimal for each object.
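The "intensity plus texture" recognition idea—scoring each voxel by how well it matches an object's expected intensity and local texture, then localizing the object on the resulting fuzzy membership image—can be sketched as follows. The Gaussian membership model, the local standard deviation as texture measure, and the threshold are illustrative stand-ins for the AAR fuzzy models, not the published formulation.

```python
import numpy as np
from scipy import ndimage

def membership_image(img, mu_int, sd_int, mu_tex, sd_tex, size=3):
    """Fuzzy membership combining an intensity model with a local texture
    model (local standard deviation).  Gaussian memberships and this simple
    texture measure are illustrative stand-ins for the AAR fuzzy models."""
    local_mean = ndimage.uniform_filter(img, size)
    local_sq = ndimage.uniform_filter(img ** 2, size)
    texture = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0.0))
    m_int = np.exp(-0.5 * ((img - mu_int) / sd_int) ** 2)
    m_tex = np.exp(-0.5 * ((texture - mu_tex) / sd_tex) ** 2)
    return m_int * m_tex

def localize(membership, level=0.5):
    """Recognised object location = centroid of the thresholded membership."""
    return np.array(ndimage.center_of_mass(membership > level))

# toy low-dose CT: a textured organ embedded in a smooth background
rng = np.random.default_rng(2)
img = rng.normal(0.0, 0.05, (40, 40, 40))
img[15:25, 15:25, 15:25] += rng.normal(1.0, 0.3, (10, 10, 10))
m = membership_image(img, mu_int=1.0, sd_int=0.4, mu_tex=0.3, sd_tex=0.2)
true_centre = np.array([19.5, 19.5, 19.5])
print("localisation error (voxels):", np.round(np.abs(localize(m) - true_centre), 2))
```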
Affiliation(s)
- Huiqian Wang
- College of Optoelectronic Engineering, Chongqing University, Chongqing 400044, China and Medical Image Processing Group Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
| | - Jayaram K Udupa
- Medical Image Processing Group Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
| | - Dewey Odhner
- Medical Image Processing Group Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
| | - Yubing Tong
- Medical Image Processing Group Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
| | - Liming Zhao
- Medical Image Processing Group Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 and Research Center of Intelligent System and Robotics, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
| | - Drew A Torigian
- Medical Image Processing Group Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
41
An enhanced random walk algorithm for delineation of head and neck cancers in PET studies. Med Biol Eng Comput 2016; 55:897-908. [PMID: 27638108 DOI: 10.1007/s11517-016-1571-0] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2015] [Accepted: 09/07/2016] [Indexed: 01/09/2023]
Abstract
An algorithm for delineating complex head and neck cancers in positron emission tomography (PET) images is presented in this article. An enhanced random walk (RW) algorithm with automatic seed detection is proposed and used to make the segmentation process feasible in the event of inhomogeneous lesions with bifurcations. In addition, an adaptive probability threshold and a k-means based clustering technique have been integrated in the proposed enhanced RW algorithm. The new threshold is capable of following the intensity changes between adjacent slices along the whole cancer volume, leading to an operator-independent algorithm. Validation experiments were first conducted on phantom studies: High Dice similarity coefficients, high true positive volume fractions, and low Hausdorff distance confirm the accuracy of the proposed method. Subsequently, forty head and neck lesions were segmented in order to evaluate the clinical feasibility of the proposed approach against the most common segmentation algorithms. Experimental results show that the proposed algorithm is more accurate and robust than the most common algorithms in the literature. Finally, the proposed method also shows real-time performance, addressing the physician's requirements in a radiotherapy environment.
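A compact sketch of the automatic seeding plus random-walk idea is shown below: k-means on the slice intensities selects hot (tumour) and cold (background) seed pixels, and a standard random walker propagates the labels. The three-cluster rule, the fixed beta, and the use of scikit-image's random walker in place of the enhanced algorithm (with its adaptive inter-slice probability threshold) are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from skimage.segmentation import random_walker

def rw_segment_slice(pet_slice, beta=130):
    """Segment a hot lesion on one PET slice: k-means on the intensities picks
    tumour and background seed pixels automatically, then a random walker
    propagates the labels.  Sketch of the general scheme; the published
    algorithm adds an adaptive inter-slice probability threshold and other
    enhancements."""
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pet_slice.reshape(-1, 1))
    order = np.argsort(km.cluster_centers_.ravel())
    lab = km.labels_.reshape(pet_slice.shape)
    seeds = np.zeros(pet_slice.shape, dtype=int)
    seeds[lab == order[-1]] = 1   # hottest cluster -> tumour seeds
    seeds[lab == order[0]] = 2    # coldest cluster -> background seeds
    # the middle cluster stays unlabelled and is decided by the random walk
    return random_walker(pet_slice, seeds, beta=beta) == 1

# toy slice: an inhomogeneous lesion with a lower-uptake extension
rng = np.random.default_rng(3)
sl = rng.normal(1.0, 0.15, (64, 64))
rr, cc = np.ogrid[:64, :64]
sl[(rr - 32) ** 2 + (cc - 30) ** 2 < 120] += 4.0
sl[(rr - 32) ** 2 + (cc - 44) ** 2 < 40] += 2.0
print("delineated lesion area (pixels):", int(rw_segment_slice(sl).sum()))
```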
42
Berthon B, Marshall C, Evans M, Spezi E. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography. Phys Med Biol 2016; 61:4855-69. [PMID: 27273293 DOI: 10.1088/0031-9155/61/13/4855] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Abstract
Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method, according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contours. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak-to-background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contour, to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology.
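The ATLAAS principle—one regression tree per candidate PET-AS method, each predicting the DSC that method would achieve from tumour volume, peak-to-background SUV ratio and a texture metric, with the highest-predicted method then applied—can be sketched as follows. The three candidate methods, the synthetic training table and the tree depth are invented placeholders; the real model is trained on phantom and simulated scans with known true contours.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy training table: one row per training image holding the tumour volume
# (ml), peak-to-background SUV ratio and a texture metric, plus the DSC each
# segmentation method achieved against the known true contour.  Real ATLAAS
# training data come from phantom and simulated scans; these rows are invented.
methods = ["40% of max", "adaptive threshold", "region growing"]
rng = np.random.default_rng(4)
X_rows, dsc = [], {m: [] for m in methods}
for _ in range(100):
    vol, tbr, tex = rng.uniform(1, 50), rng.uniform(2, 20), rng.uniform(0, 1)
    X_rows.append([vol, tbr, tex])
    dsc["40% of max"].append(0.60 + 0.015 * tbr)
    dsc["adaptive threshold"].append(0.70 + 0.003 * vol)
    dsc["region growing"].append(0.85 - 0.20 * tex)

# One regression tree per segmentation method predicts its expected DSC.
trees = {m: DecisionTreeRegressor(max_depth=4, random_state=0).fit(X_rows, dsc[m])
         for m in methods}

def select_method(volume_ml, tbr, texture):
    """Predict each method's DSC for a new tumour and return the best one."""
    feats = [[volume_ml, tbr, texture]]
    scores = {m: float(t.predict(feats)[0]) for m, t in trees.items()}
    return max(scores, key=scores.get), scores

best, scores = select_method(volume_ml=12.0, tbr=6.0, texture=0.4)
print("selected PET-AS method:", best)
```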
Affiliation(s)
- Beatrice Berthon
- Wales Research & Diagnostic PET Imaging Centre, Cardiff University, CF14 4XN, Cardiff, UK
43
Schreibmann E, Schuster DM, Rossi PJ, Shelton J, Cooper S, Jani AB. Image Guided Planning for Prostate Carcinomas With Incorporation of Anti-3-[18F]FACBC (Fluciclovine) Positron Emission Tomography: Workflow and Initial Findings From a Randomized Trial. Int J Radiat Oncol Biol Phys 2016; 96:206-13. [PMID: 27511856 DOI: 10.1016/j.ijrobp.2016.04.023] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2015] [Revised: 04/08/2016] [Accepted: 04/17/2016] [Indexed: 01/14/2023]
Abstract
PURPOSE (18)F-Fluciclovine (anti-1-amino-3-[(18)F]fluorocyclobutane-1-carboxylic acid) is a novel positron emission tomography (PET)/computed tomography (CT) radiotracer that has demonstrated utility for detection of prostate cancer. Our goal is to report the initial results from a randomized controlled trial of the integration of (18)F-fluciclovine PET-CT into treatment planning for defining prostate bed and lymph node target volumes. METHODS AND MATERIALS We report our initial findings from a cohort of 41 patients, the first enrolled in a randomized controlled trial, who were randomized to the (18)F-fluciclovine arm. All patients underwent (18)F-fluciclovine PET-CT for the detection of metabolic abnormalities and high-resolution CT for treatment planning. The two datasets were first registered using a rigid registration. If soft tissue displacement was observable, the rigid registration was improved with a deformable registration. Each (18)F-fluciclovine abnormality was segmented as a percentage of the maximum standard uptake value (SUV) within a small region of interest around the lesion. The percentage best describing the SUV falloff was integrated into planning by expanding standard target volumes with the PET abnormality. RESULTS In 21 of 55 abnormalities, a deformable registration was needed to map the (18)F-fluciclovine activity into the simulation CT. The most commonly selected percentage was 50% of maximum SUV, although values ranging from 15% to 70% were used for specific patients, illustrating the need for a per-patient selection of a threshold SUV value. The inclusion of (18)F-fluciclovine changed the planning volumes for 46 abnormalities (83%) of the total 55, with 28 (51%) located in the lymph nodes, 11 (20%) in the prostate bed, 10 (18%) in the prostate, and 6 (11%) in the seminal vesicles. Only 9 PET abnormalities were fully contained in the standard target volumes based on the CT-based segmentations and did not necessitate expansion. CONCLUSIONS The use of (18)F-fluciclovine in postprostatectomy radiation therapy planning was feasible and led to augmentation of the target volumes in the majority (30 of 41) of the patients studied.
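The per-lesion thresholding step can be sketched in a few lines: all connected voxels inside a small ROI that exceed a planner-chosen percentage of the lesion's maximum SUV form the PET abnormality, which is then united with the standard CT-based target volume. The connected-component rule and the toy volumes are illustrative assumptions, not the clinical planning software used in the trial.

```python
import numpy as np
from scipy import ndimage

def pet_abnormality(suv, roi_mask, percent=50.0):
    """Delineate a PET abnormality as the connected voxels inside a small ROI
    that exceed `percent`% of the ROI's maximum SUV (the planner-chosen
    threshold, 15-70% in the study)."""
    thresh = (percent / 100.0) * suv[roi_mask].max()
    candidate = (suv >= thresh) & roi_mask
    labels, _ = ndimage.label(candidate)
    peak = np.unravel_index(np.argmax(np.where(roi_mask, suv, -np.inf)), suv.shape)
    return labels == labels[peak]

def augment_target(ct_target, pet_volume):
    """Expand the CT-based target volume with the PET abnormality."""
    return ct_target | pet_volume

# toy example: uptake extending partly outside the CT-defined volume
rng = np.random.default_rng(5)
suv = rng.normal(1.0, 0.1, (30, 30, 30))
suv[12:18, 12:18, 12:18] += 5.0
roi = np.zeros_like(suv, dtype=bool)
roi[8:22, 8:22, 8:22] = True
ct_target = np.zeros_like(suv, dtype=bool)
ct_target[10:16, 10:16, 10:16] = True
grown = augment_target(ct_target, pet_abnormality(suv, roi, percent=50.0))
print("target grew by", int(grown.sum() - ct_target.sum()), "voxels")
```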
Affiliation(s)
- Eduard Schreibmann
- Department of Radiation Oncology and Winship Cancer Institute of Emory University, Emory University, Atlanta, Georgia.
| | - David M Schuster
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, Georgia
| | - Peter J Rossi
- Department of Radiation Oncology and Winship Cancer Institute of Emory University, Emory University, Atlanta, Georgia
| | - Joseph Shelton
- Department of Radiation Oncology and Winship Cancer Institute of Emory University, Emory University, Atlanta, Georgia
| | - Sherrie Cooper
- Department of Radiation Oncology and Winship Cancer Institute of Emory University, Emory University, Atlanta, Georgia
| | - Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute of Emory University, Emory University, Atlanta, Georgia
44
Larsson E, Tromba G, Uvdal K, Accardo A, Monego SD, Biffi S, Garrovo C, Lorenzon A, Dullin C. Quantification of structural alterations in lung disease—a proposed analysis methodology of CT scans of preclinical mouse models and patients. Biomed Phys Eng Express 2015. [DOI: 10.1088/2057-1976/1/3/035201] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
45
Jiang J, Wu H, Huang M, Wu Y, Wang Q, Zhao J, Yang W, Chen W, Feng Q. Variability of Gross Tumor Volume in Nasopharyngeal Carcinoma Using 11C-Choline and 18F-FDG PET/CT. PLoS One 2015; 10:e0131801. [PMID: 26161910 PMCID: PMC4498791 DOI: 10.1371/journal.pone.0131801] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2014] [Accepted: 06/05/2015] [Indexed: 11/19/2022] Open
Abstract
This study was conducted to evaluate the variability of gross tumor volume (GTV) definition using 11C-Choline and 18F-FDG PET/CT images for nasopharyngeal carcinoma boundary definition. Assessment consisted of inter-observer and inter-modality variation analysis. Four radiation oncologists were invited to manually contour the GTV on PET/CT fusion images obtained from a cohort of 12 patients with nasopharyngeal carcinoma (NPC) who underwent both 11C-Choline and 18F-FDG scans. Student's paired-sample t-test was performed to analyze inter-observer and inter-modality variability. Semi-automatic segmentation methods, including thresholding and region growing, were also validated against the manual contouring of the two types of PET images. We observed no significant variation among the contours obtained by different oncologists for the same type of PET/CT volumes. Choline fusion volumes were significantly larger than the FDG volumes (p < 0.0001, mean ± SD = 18.21 ± 8.19). Significantly more consistent results were obtained between the oncologists and the standard references for Choline volumes than for FDG volumes (p = 0.0025). Simple semi-automatic delineation methods indicated that 11C-Choline PET images could provide better results than FDG volumes (p = 0.076, CI = [–0.29, 0.025]). 11C-Choline PET/CT may be more advantageous than 18F-FDG in GTV delineation for the radiotherapy of NPC. Phantom simulations and clinical trials should be conducted to prove the possible improvement of the treatment outcome.
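The statistical comparisons reported above rest on a paired t-test across patients and on overlap measures between observers; a minimal sketch is given below. The volume values are invented placeholders purely to show the form of the analysis and do not reproduce the study data.

```python
import numpy as np
from scipy import stats

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Inter-modality comparison: per-patient GTVs (ml) contoured on 11C-Choline and
# 18F-FDG fusions by the same observer.  The numbers are invented placeholders
# that only illustrate the paired test; they are not the published data.
choline_ml = np.array([52.1, 40.3, 61.7, 35.2, 48.9, 55.0,
                       42.8, 38.6, 60.2, 45.5, 50.3, 44.1])
fdg_ml = np.array([31.0, 25.4, 44.2, 20.1, 30.8, 36.9,
                   27.5, 22.3, 41.0, 28.7, 33.2, 26.6])
t, p = stats.ttest_rel(choline_ml, fdg_ml)
print(f"paired t-test: t = {t:.2f}, p = {p:.2g}, "
      f"mean difference = {np.mean(choline_ml - fdg_ml):.1f} ml")

# Inter-observer agreement on one modality can be summarised with Dice overlap.
obs1 = np.zeros((20, 20, 20), dtype=bool)
obs1[5:15, 5:15, 5:15] = True
obs2 = np.zeros_like(obs1)
obs2[6:16, 5:15, 5:15] = True
print("inter-observer Dice:", round(dice(obs1, obs2), 3))
```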
Affiliation(s)
- Jun Jiang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | - Hubing Wu
- Department of PET Center, Nanfang Hospital, Guangzhou, China
| | - Meiyan Huang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | - Yao Wu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | - Quanshi Wang
- Department of PET Center, Nanfang Hospital, Guangzhou, China
| | - Jianqi Zhao
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | - Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | - Wufan Chen
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | - Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
46
An Adaptive Thresholding Method for BTV Estimation Incorporating PET Reconstruction Parameters: A Multicenter Study of the Robustness and the Reliability. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2015; 2015:571473. [PMID: 26078777 PMCID: PMC4452364 DOI: 10.1155/2015/571473] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/12/2014] [Accepted: 12/25/2014] [Indexed: 12/20/2022]
Abstract
OBJECTIVE The aim of this work was to assess the robustness and reliability of an adaptive thresholding algorithm for biological target volume (BTV) estimation incorporating reconstruction parameters. METHOD In a multicenter study, a phantom with spheres of different diameters (6.5-57.4 mm) was filled with (18)F-FDG at different target-to-background ratios (TBR: 2.5-70) and scanned for different acquisition periods (2-5 min). Image reconstruction algorithms were applied with varying numbers of iterations and varying post-reconstruction transaxial smoothing. Optimal thresholds (TS) for volume estimation were determined as a percentage of the maximum intensity in the cross-sectional area of the spheres. Multiple regression techniques were used to identify relevant predictors of TS. RESULTS The goodness of the model fit was high (R(2): 0.74-0.92). TBR was the most significant predictor of TS. For all scanners, except the Gemini scanners, the full width at half maximum (FWHM) of the post-reconstruction filter was an independent predictor of TS. Significant differences were observed between scanners of different models, but not between different scanners of the same model. The shrinkage on cross validation was small and indicative of excellent reliability of model estimation. CONCLUSIONS Incorporation of the post-reconstruction filtering FWHM in an adaptive thresholding algorithm for BTV estimation yields a robust and reliable method that can be applied to a variety of different scanners, without scanner-specific individual calibration.
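A minimal sketch of such a calibration is shown below: optimal thresholds measured on phantom spheres are regressed on the target-to-background ratio and the post-reconstruction filter FWHM, and the fitted model then returns a scanner-adapted threshold for new data. The calibration numbers and the 1/TBR parameterisation are assumptions for illustration; the published model and its coefficients differ.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy calibration table: optimal threshold TS (% of maximum intensity) measured
# on phantom spheres for different target-to-background ratios (TBR) and
# post-reconstruction filter FWHM values (mm).  The numbers are invented
# placeholders; a real calibration uses the multicenter phantom measurements.
TBR = np.array([2.5, 5, 10, 20, 40, 70, 2.5, 5, 10, 20, 40, 70], dtype=float)
FWHM = np.array([4, 4, 4, 4, 4, 4, 8, 8, 8, 8, 8, 8], dtype=float)
TS = np.array([62, 55, 48, 43, 40, 38, 70, 63, 56, 51, 47, 45], dtype=float)

# Regress TS on 1/TBR and FWHM (a common parameterisation for adaptive
# thresholding; the published model may use a different functional form).
X = np.column_stack([1.0 / TBR, FWHM])
model = LinearRegression().fit(X, TS)

def adaptive_threshold(tbr, fwhm_mm):
    """Scanner-adapted threshold (% of maximum) for BTV estimation."""
    return float(model.predict([[1.0 / tbr, fwhm_mm]])[0])

print("R^2 on the calibration data:", round(model.score(X, TS), 3))
print("TS for TBR = 8, FWHM = 6 mm:", round(adaptive_threshold(8, 6), 1), "%")
```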
47
Mu W, Chen Z, Shen W, Yang F, Liang Y, Dai R, Wu N, Tian J. A Segmentation Algorithm for Quantitative Analysis of Heterogeneous Tumors of the Cervix With ¹⁸F-FDG PET/CT. IEEE Trans Biomed Eng 2015; 62:2465-79. [PMID: 25993699 DOI: 10.1109/tbme.2015.2433397] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
As positron-emission tomography (PET) images have low spatial resolution and much noise, accurate image segmentation is one of the most challenging issues in tumor quantification. Tumors of the uterine cervix present a particular challenge because of urine activity in the adjacent bladder. Here, we propose and validate an automatic segmentation method adapted to cervical tumors. Our proposed methodology combined the gradient field information of both the filtered PET image and the level set function into a level set framework by constructing a new evolution equation. Furthermore, we also constructed a new hyperimage to recognize a rough tumor region using the fuzzy c-means algorithm according to the tissue specificity as defined by both PET (uptake) and computed tomography (attenuation) to provide the initial zero level set, which could make the segmentation process fully automatic. The proposed method was verified based on simulation and clinical studies. For simulation studies, seven different phantoms, representing tumors with homogenous/heterogeneous-low/high uptake patterns and different volumes, were simulated with five different noise levels. Twenty-seven cervical cancer patients at different stages were enrolled for clinical evaluation of the method. Dice similarity coefficients (DSC) and Hausdorff distance (HD) were used to evaluate the accuracy of the segmentation method, while a Bland-Altman analysis of the mean standardized uptake value (SUVmean) and metabolic tumor volume (MTV) was used to evaluate the accuracy of the quantification. Using this method, the DSCs and HDs of the homogenous and heterogeneous phantoms under clinical noise level were 93.39 ±1.09% and 6.02 ±1.09 mm, 93.59 ±1.63% and 8.92 ±2.57 mm, respectively. The DSCs and HDs in patients measured 91.80 ±2.46% and 7.79 ±2.18 mm. Through Bland-Altman analysis, the SUVmean and the MTV using our method showed high correlation with the clinical gold standard. The results of both simulation and clinical studies demonstrated the accuracy, effectiveness, and robustness of the proposed method. Further assessment of the quantitative indices indicates the feasibility of this algorithm in accurate quantitative analysis of cervical tumors in clinical practice.
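The overall pipeline—cluster a PET/CT "hyperimage" to obtain a rough tumour region, then use it to initialise a level-set evolution—can be sketched with off-the-shelf components. Here k-means stands in for the fuzzy c-means step and scikit-image's morphological Chan-Vese stands in for the authors' gradient-driven evolution equation, so this is a structural sketch only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from skimage.segmentation import morphological_chan_vese

def segment_from_hyperimage(pet, ct, n_clusters=3, iterations=150):
    """Rough tumour region from clustering a PET/CT 'hyperimage', refined by a
    level set.  k-means replaces the fuzzy c-means step and scikit-image's
    morphological Chan-Vese replaces the custom gradient-driven evolution; in
    the real application the CT channel helps exclude adjacent bladder
    activity.  Structural sketch only."""
    hyper = StandardScaler().fit_transform(
        np.stack([pet.ravel(), ct.ravel()], axis=1))    # (uptake, attenuation)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(hyper)
    hot = max(range(n_clusters),
              key=lambda k: pet.ravel()[km.labels_ == k].mean())
    init = (km.labels_ == hot).reshape(pet.shape)        # initial zero level set
    return morphological_chan_vese(pet, iterations, init_level_set=init,
                                   smoothing=2).astype(bool)

# toy cervix-like slice: hot tumour on a warm background
rng = np.random.default_rng(6)
pet = rng.normal(1.0, 0.2, (64, 64))
ct = rng.normal(40.0, 5.0, (64, 64))
rr, cc = np.ogrid[:64, :64]
tumour = (rr - 32) ** 2 + (cc - 32) ** 2 < 150
pet[tumour] += 6.0
ct[tumour] += 20.0
mask = segment_from_hyperimage(pet, ct)
print("segmented pixels:", int(mask.sum()), "true tumour pixels:", int(tumour.sum()))
```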
48
Jin Z, Arimura H, Shioyama Y, Nakamura K, Kuwazuru J, Magome T, Yabu-Uchi H, Honda H, Hirata H, Sasaki M. Computer-assisted delineation of lung tumor regions in treatment planning CT images with PET/CT image sets based on an optimum contour selection method. JOURNAL OF RADIATION RESEARCH 2014; 55:1153-62. [PMID: 24980022 PMCID: PMC4229921 DOI: 10.1093/jrr/rru056] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
To assist radiation oncologists in the delineation of tumor regions during treatment planning for lung cancer, we have proposed an automated contouring algorithm based on an optimum contour selection (OCS) method for treatment planning computed tomography (CT) images with positron emission tomography (PET)/CT images. The basic concept of the OCS is to select a global optimum object contour based on multiple active delineations with a level set method around tumors. First, the PET images were registered to the planning CT images by using affine transformation matrices. The initial gross tumor volume (GTV) of each lung tumor was identified by thresholding the PET image at a certain standardized uptake value, and then each initial GTV location was corrected in the region of interest of the planning CT image. Finally, the contours of final GTV regions were determined in the planning CT images by using the OCS. The proposed method was evaluated by testing six cases with a Dice similarity coefficient (DSC), which denoted the degree of region similarity between the GTVs contoured by radiation oncologists and the proposed method. The average three-dimensional DSC for the six cases was 0.78 by the proposed method, but only 0.34 by a conventional method based on a simple level set method. The proposed method may be helpful for treatment planners in contouring the GTV regions.
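The first two steps of the workflow—mapping the PET onto the planning CT with an affine transformation and extracting an initial GTV by thresholding at a fixed standardized uptake value—can be sketched as below. The affine matrix, the SUV threshold of 2.5 and the toy geometry are assumptions, and the OCS refinement itself is not reproduced here.

```python
import numpy as np
from scipy import ndimage

def initial_gtv_on_planning_ct(pet_suv, affine, offset, ct_shape, suv_thresh=2.5):
    """Initial GTV: threshold the PET at a fixed SUV, then resample the binary
    mask onto the planning-CT grid with the PET-to-CT affine registration.
    `affine`/`offset` map planning-CT voxel coordinates to PET voxel
    coordinates (scipy's pull-back convention); the values used below are
    assumptions for illustration."""
    gtv_pet = (pet_suv >= suv_thresh).astype(float)
    gtv_ct = ndimage.affine_transform(gtv_pet, affine, offset=offset,
                                      output_shape=ct_shape, order=0)
    return gtv_ct > 0.5

# toy data: PET voxels are twice as large as the planning-CT voxels
pet = np.zeros((32, 32, 32))
pet[10:16, 10:16, 10:16] = 5.0
affine = np.diag([0.5, 0.5, 0.5])        # CT index -> PET index (coarser grid)
gtv = initial_gtv_on_planning_ct(pet, affine, offset=0.0, ct_shape=(64, 64, 64))
print("initial GTV voxels on the planning CT grid:", int(gtv.sum()))
```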
Affiliation(s)
- Ze Jin
- Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| | - Hidetaka Arimura
- Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| | - Yoshiyuki Shioyama
- Department of Heavy Particle Therapy and Radiation Oncology, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
| | - Katsumasa Nakamura
- Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
| | - Jumpei Kuwazuru
- Medipolis Proton Therapy and Research Center, Higashikata, Ibusuki-shi, Kagoshima, Japan
| | - Taiki Magome
- Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| | - Hidetake Yabu-Uchi
- Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| | - Hiroshi Honda
- Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
| | - Hideki Hirata
- Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
| | - Masayuki Sasaki
- Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
49
Withofs N, Bernard C, Van der Rest C, Martinive P, Hatt M, Jodogne S, Visvikis D, Lee JA, Coucke PA, Hustinx R. FDG PET/CT for rectal carcinoma radiotherapy treatment planning: comparison of functional volume delineation algorithms and clinical challenges. J Appl Clin Med Phys 2014; 15:4696. [PMID: 25207560 PMCID: PMC5711099 DOI: 10.1120/jacmp.v15i5.4696] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2013] [Revised: 05/02/2014] [Accepted: 04/25/2014] [Indexed: 01/24/2023] Open
Abstract
PET/CT imaging could improve delineation of rectal carcinoma gross tumor volume (GTV) and reduce interobserver variability. The objective of this work was to compare various functional volume delineation algorithms. We enrolled 31 consecutive patients with locally advanced rectal carcinoma. The FDG PET/CT and the high dose CT (CTRT) were performed in the radiation treatment position. For each patient, the anatomical GTVRT was delineated based on the CTRT and compared to six different functional/metabolic GTVPET derived from two automatic segmentation approaches (FLAB and a gradient-based method); a relative threshold (45% of the SUVmax) and an absolute threshold (SUV > 2.5), using two different commercially available software packages (Philips EBW4 and Segami OASIS). The spatial sizes and shapes of all volumes were compared using the conformity index (CI). All the delineated metabolic tumor volumes (MTVs) were significantly different. The MTVs were as follows (mean ± SD): GTVRT (40.6 ± 31.28 ml); FLAB (21.36 ± 16.34 ml); the gradient-based method (18.97 ± 16.83 ml); OASIS 45% (15.89 ± 12.68 ml); Philips 45% (14.52 ± 10.91 ml); OASIS 2.5 (41.62 ± 33.26 ml); Philips 2.5 (40 ± 31.27 ml). CI between these various volumes ranged from 0.40 to 0.90. The mean CI between the different MTVs and the GTVCT was < 0.4. Finally, the DICOM transfer of MTVs led to additional volume variations. In conclusion, we observed large and statistically significant variations in tumor volume delineation according to the segmentation algorithms and the software products. The manipulation of PET/CT images and MTVs, such as the DICOM transfer to the Radiation Oncology Department, induced additional volume variations.
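The conformity index used for these comparisons can be computed directly from two binary masks on a common grid, as in the short sketch below (intersection over union; the toy volumes are invented).

```python
import numpy as np

def conformity_index(a, b):
    """Conformity index between two delineated volumes on a common grid:
    intersection over union of the binary masks (1 = identical, 0 = disjoint)."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# toy anatomical GTV versus a smaller, shifted metabolic tumour volume
gtv_rt = np.zeros((40, 40, 40), dtype=bool)
gtv_rt[10:26, 10:26, 10:26] = True
mtv = np.zeros_like(gtv_rt)
mtv[14:26, 12:24, 12:24] = True
print("CI =", round(conformity_index(gtv_rt, mtv), 2))
```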
50
Foster B, Bagci U, Mansoor A, Xu Z, Mollura DJ. A review on segmentation of positron emission tomography images. Comput Biol Med 2014; 50:76-96. [PMID: 24845019 DOI: 10.1016/j.compbiomed.2014.04.014] [Citation(s) in RCA: 229] [Impact Index Per Article: 20.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2013] [Revised: 03/19/2014] [Accepted: 04/16/2014] [Indexed: 11/20/2022]
Abstract
Positron Emission Tomography (PET), a non-invasive functional imaging method at the molecular level, images the distribution of biologically targeted radiotracers with high sensitivity. PET imaging provides detailed quantitative information about many diseases and is often used to evaluate inflammation, infection, and cancer by detecting emitted photons from a radiotracer localized to abnormal cells. In order to differentiate abnormal tissue from surrounding areas in PET images, image segmentation methods play a vital role; therefore, accurate image segmentation is often necessary for proper disease detection, diagnosis, treatment planning, and follow-ups. In this review paper, we present state-of-the-art PET image segmentation methods, as well as the recent advances in image segmentation techniques. In order to make this manuscript self-contained, we also briefly explain the fundamentals of PET imaging, the challenges of diagnostic PET image analysis, and the effects of these challenges on the segmentation results.
Affiliation(s)
- Brent Foster
- Center for Infectious Disease Imaging, Department of Radiology and Imaging Sciences, National Institutes of Health (NIH), Bethesda, MD 20892, United States
| | - Ulas Bagci
- Center for Infectious Disease Imaging, Department of Radiology and Imaging Sciences, National Institutes of Health (NIH), Bethesda, MD 20892, United States.
| | - Awais Mansoor
- Center for Infectious Disease Imaging, Department of Radiology and Imaging Sciences, National Institutes of Health (NIH), Bethesda, MD 20892, United States
| | - Ziyue Xu
- Center for Infectious Disease Imaging, Department of Radiology and Imaging Sciences, National Institutes of Health (NIH), Bethesda, MD 20892, United States
| | - Daniel J Mollura
- Center for Infectious Disease Imaging, Department of Radiology and Imaging Sciences, National Institutes of Health (NIH), Bethesda, MD 20892, United States