1
Wahid KA, Kaffey ZY, Farris DP, Humbert-Vidan L, Moreno AC, Rasmussen M, Ren J, Naser MA, Netherton TJ, Korreman S, Balakrishnan G, Fuller CD, Fuentes D, Dohopolski MJ. Artificial Intelligence Uncertainty Quantification in Radiotherapy Applications - A Scoping Review. medRxiv 2024:2024.05.13.24307226. [PMID: 38798581] [PMCID: PMC11118597] [DOI: 10.1101/2024.05.13.24307226]
Abstract
Background/purpose The use of artificial intelligence (AI) in radiotherapy (RT) is expanding rapidly. However, there exists a notable lack of clinician trust in AI models, underscoring the need for effective uncertainty quantification (UQ) methods. The purpose of this study was to scope existing literature related to UQ in RT, identify areas of improvement, and determine future directions. Methods We followed the PRISMA-ScR scoping review reporting guidelines. We utilized the population (human cancer patients), concept (utilization of AI UQ), context (radiotherapy applications) framework to structure our search and screening process. We conducted a systematic search spanning seven databases, supplemented by manual curation, up to January 2024. Our search yielded a total of 8980 articles for initial review. Manuscript screening and data extraction were performed in Covidence. Data extraction categories included general study characteristics, RT characteristics, AI characteristics, and UQ characteristics. Results We identified 56 articles published from 2015 to 2024. Ten domains of RT applications were represented; most studies evaluated auto-contouring (50%), followed by image synthesis (13%) and multiple applications simultaneously (11%). Twelve disease sites were represented, with head and neck cancer being the most common disease site independent of application space (32%). Imaging data were used in 91% of studies, while only 13% incorporated RT dose information. Most studies focused on failure detection as the main application of UQ (60%), with Monte Carlo dropout being the most commonly implemented UQ method (32%), followed by ensembling (16%). 55% of studies did not share code or datasets. Conclusion Our review revealed a lack of diversity in UQ for RT applications beyond auto-contouring. Moreover, there was a clear need to study additional UQ methods, such as conformal prediction.
Our results may incentivize the development of guidelines for reporting and implementation of UQ in RT.
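For readers unfamiliar with the UQ methods tallied in this abstract, Monte Carlo dropout (the most commonly implemented method at 32%) can be sketched in a few lines. This is an illustrative toy example, not code from any reviewed study; the function names and the stand-in "model" are hypothetical:

```python
import numpy as np

def mc_dropout_predict(forward_pass, x, T=100, seed=0):
    """Run T stochastic forward passes (dropout left active at test time)
    and return the mean prediction and the per-voxel variance, which
    serves as the uncertainty estimate."""
    rng = np.random.default_rng(seed)
    preds = np.stack([forward_pass(x, rng) for _ in range(T)])
    return preds.mean(axis=0), preds.var(axis=0)

def toy_forward(logits, rng):
    """Stand-in for a segmentation model: sigmoid of logits with a
    dropout mask (p = 0.2) applied before the non-linearity."""
    keep = rng.random(logits.shape) > 0.2
    return 1.0 / (1.0 + np.exp(-(logits * keep)))

# Two voxels with large logits (confident) and two near zero (ambiguous).
logits = np.array([[4.0, 0.1], [-4.0, 0.0]])
mean, var = mc_dropout_predict(toy_forward, logits)
# Voxels whose output swings when units are dropped get high variance;
# the variance map is what a failure-detection pipeline would inspect.
```

In practice the stochastic forward pass comes from a trained network whose dropout layers are kept in training mode at inference; high-variance voxels flag regions where an auto-contour may need human review.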
Affiliation(s)
- Kareem A. Wahid
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Zaphanlene Y. Kaffey
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- David P. Farris
- Research Medical Library, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Laia Humbert-Vidan
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Amy C. Moreno
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Jintao Ren
- Department of Oncology, Aarhus University Hospital, Denmark
- Mohamed A. Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Tucker J. Netherton
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Stine Korreman
- Department of Oncology, Aarhus University Hospital, Denmark
- Clifton D. Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- David Fuentes
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Michael J. Dohopolski
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, Texas, USA
2
Hussain D, Al-Masni MA, Aslam M, Sadeghi-Niaraki A, Hussain J, Gu YH, Naqvi RA. Revolutionizing tumor detection and classification in multimodality imaging based on deep learning approaches: methods, applications and limitations. J Xray Sci Technol 2024:XST230429. [PMID: 38701131] [DOI: 10.3233/xst-230429]
Abstract
BACKGROUND The emergence of deep learning (DL) techniques has revolutionized tumor detection and classification in medical imaging, with multimodal medical imaging (MMI) gaining recognition for its precision in diagnosis, treatment, and progression tracking. OBJECTIVE This review comprehensively examines DL methods in transforming tumor detection and classification across MMI modalities, aiming to provide insights into advancements, limitations, and key challenges for further progress. METHODS Systematic literature analysis identifies DL studies for tumor detection and classification, outlining methodologies including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and their variants. Integration of multimodality imaging enhances accuracy and robustness. RESULTS Recent advancements in DL-based MMI evaluation methods are surveyed, focusing on tumor detection and classification tasks. Various DL approaches, including CNNs, YOLO, Siamese Networks, Fusion-Based Models, Attention-Based Models, and Generative Adversarial Networks, are discussed with emphasis on PET-MRI, PET-CT, and SPECT-CT. FUTURE DIRECTIONS The review outlines emerging trends and future directions in DL-based tumor analysis, aiming to guide researchers and clinicians toward more effective diagnosis and prognosis. Continued innovation and collaboration are stressed in this rapidly evolving domain. CONCLUSION Conclusions drawn from literature analysis underscore the efficacy of DL approaches in tumor detection and classification, highlighting their potential to address challenges in MMI analysis and their implications for clinical practice.
Affiliation(s)
- Dildar Hussain
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Republic of Korea
- Mohammed A Al-Masni
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Republic of Korea
- Muhammad Aslam
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Republic of Korea
- Abolghasem Sadeghi-Niaraki
- Department of Computer Science & Engineering and Convergence Engineering for Intelligent Drone, XR Research Center, Sejong University, Seoul, Republic of Korea
- Jamil Hussain
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Republic of Korea
- Yeong Hyeon Gu
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Republic of Korea
- Rizwan Ali Naqvi
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul, Republic of Korea
3
Veziroglu EM, Farhadi F, Hasani N, Nikpanah M, Roschewski M, Summers RM, Saboury B. Role of Artificial Intelligence in PET/CT Imaging for Management of Lymphoma. Semin Nucl Med 2023; 53:426-448. [PMID: 36870800] [DOI: 10.1053/j.semnuclmed.2022.11.003]
Abstract
Our review shows that AI-based analysis of lymphoma whole-body FDG-PET/CT can inform all phases of clinical management, including staging, prognostication, treatment planning, and treatment response evaluation. We highlight advancements in the role of neural networks for performing automated image segmentation to calculate PET-based imaging biomarkers such as the total metabolic tumor volume (TMTV). AI-based image segmentation methods have matured to the point where they can be implemented semi-automatically with minimal human input, approaching the performance of a second-opinion radiologist. Advances in automated segmentation methods are particularly apparent in the discrimination of lymphomatous vs non-lymphomatous FDG-avid regions, which carries through to automated staging. Automated TMTV calculators, along with automated calculation of measures such as Dmax, are informing robust models of progression-free survival which can then feed into improved treatment planning.
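Once a lesion mask exists, the TMTV biomarker mentioned above reduces to a simple computation: count the segmented voxels and scale by the voxel volume. A minimal sketch (hypothetical function name, not code from the reviewed work):

```python
import numpy as np

def tmtv_ml(lesion_mask, voxel_dims_mm):
    """Total metabolic tumor volume in mL from a binary lesion mask
    and the scanner voxel spacing in mm per axis."""
    voxel_mm3 = float(np.prod(voxel_dims_mm))          # volume of one voxel
    return lesion_mask.astype(bool).sum() * voxel_mm3 / 1000.0  # mm^3 -> mL

mask = np.zeros((10, 10, 10), dtype=np.uint8)
mask[2:6, 2:6, 2:6] = 1                                # 64 lesion voxels
print(round(tmtv_ml(mask, (4.0, 4.0, 4.0)), 3))        # 64 * 64 mm^3 -> 4.096
```

The hard part in practice is producing `lesion_mask` across the whole body, which is exactly what the reviewed segmentation networks automate.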
Affiliation(s)
- Faraz Farhadi
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD; Geisel School of Medicine at Dartmouth, Hanover, NH
- Navid Hasani
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD
- Moozhan Nikpanah
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD; Department of Radiology, University of Alabama at Birmingham, AL
- Mark Roschewski
- Lymphoid Malignancies Branch, Center for Cancer Research, National Cancer Institute, Bethesda, MD
- Ronald M Summers
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD; Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD
- Babak Saboury
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD
4
Wang F, Cheng C, Cao W, Wu Z, Wang H, Wei W, Yan Z, Liu Z. MFCNet: A multi-modal fusion and calibration networks for 3D pancreas tumor segmentation on PET-CT images. Comput Biol Med 2023; 155:106657. [PMID: 36791551] [DOI: 10.1016/j.compbiomed.2023.106657]
Abstract
In clinical diagnosis, positron emission tomography and computed tomography (PET-CT) images containing complementary information are fused. Tumor segmentation based on multi-modal PET-CT images is an important part of clinical diagnosis and treatment. However, existing PET-CT tumor segmentation methods mainly focus on fusing positron emission tomography (PET) and computed tomography (CT) features, which weakens the specificity of each modality. In addition, the information interaction between different modal images is usually completed by simple addition or concatenation operations, which introduce irrelevant information during multi-modal semantic feature fusion, so effective features cannot be highlighted. To overcome this problem, this paper proposes a novel Multi-modal Fusion and Calibration Network (MFCNet) for tumor segmentation based on three-dimensional PET-CT images. First, a Multi-modal Fusion Down-sampling Block (MFDB) with a residual structure is developed. The proposed MFDB can fuse complementary features of multi-modal images while retaining the unique features of each modality. Second, a Multi-modal Mutual Calibration Block (MMCB) based on the inception structure is designed. The MMCB can guide the network to focus on the tumor region by combining different branch decoding features using an attention mechanism and extracting multi-scale pathological features using convolution kernels of different sizes. The proposed MFCNet is verified on both a public dataset (head and neck cancer) and an in-house dataset (pancreatic cancer). The experimental results indicate that on the public and in-house datasets, the average Dice values of the proposed multi-modal segmentation network are 74.14% and 76.20%, while the average Hausdorff distances are 6.41 and 6.84, respectively. In addition, the experimental results show that the proposed MFCNet outperforms the state-of-the-art methods on the two datasets.
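The Dice values reported in this abstract are overlap ratios between predicted and reference masks. As a reminder of the metric (a generic sketch, not the authors' evaluation code):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks.
    Returns 1.0 when both masks are empty (a common convention)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:3] = 1   # 4 voxels
gt = np.zeros((4, 4), dtype=int); gt[1:3, 1:4] = 1       # 6 voxels, 4 shared
print(dice(pred, gt))  # 2*4 / (4+6) = 0.8
```

Dice rewards overlap but says nothing about boundary distance, which is why papers like this one report a Hausdorff distance alongside it.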
Affiliation(s)
- Fei Wang
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, 200444, China; Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Chao Cheng
- Department of Nuclear Medicine, The First Affiliated Hospital of Naval Medical University (Changhai Hospital), Shanghai, 200433, China
- Weiwei Cao
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Zhongyi Wu
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Heng Wang
- School of Electronic and Information Engineering, Changchun University of Science and Technology, Changchun, 130022, China
- Wenting Wei
- School of Electronic and Information Engineering, Changchun University of Science and Technology, Changchun, 130022, China
- Zhuangzhi Yan
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, 200444, China
- Zhaobang Liu
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
5
De Biase A, Sijtsema NM, van Dijk LV, Langendijk JA, van Ooijen PMA. Deep learning aided oropharyngeal cancer segmentation with adaptive thresholding for predicted tumor probability in FDG PET and CT images. Phys Med Biol 2023; 68. [PMID: 36749988] [DOI: 10.1088/1361-6560/acb9cf]
Abstract
Objective. Tumor segmentation is a fundamental step for radiotherapy treatment planning. To define an accurate segmentation of the primary tumor (GTVp) of oropharyngeal cancer (OPC) patients, each image volume is explored slice-by-slice from different orientations on different image modalities. However, a manually fixed segmentation boundary neglects the spatial uncertainty known to occur in tumor delineation. This study proposes a novel deep learning-based method that generates probability maps which capture the model uncertainty in the segmentation task. Approach. We included 138 OPC patients treated with (chemo)radiation in our institute. Sequences of 3 consecutive 2D slices of concatenated FDG-PET/CT images and GTVp contours were used as input. Our framework exploits inter- and intra-slice context using attention mechanisms and bi-directional long short term memory (Bi-LSTM). Each slice resulted in three predictions that were averaged. A 3-fold cross validation was performed on sequences extracted from the axial, sagittal, and coronal planes. 3D volumes were reconstructed, and single- and multi-view ensembling were performed to obtain final results. The output is a tumor probability map determined by averaging multiple predictions. Main Results. Model performance was assessed on 25 patients at different probability thresholds. Predictions were the closest to the GTVp at a threshold of 0.9 (mean surface DSC of 0.81, median HD95 of 3.906 mm). Significance. The promising results of the proposed method show that it is possible to offer probability maps to radiation oncologists to guide them in slice-by-slice adaptive GTVp segmentation.
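The multi-view ensembling and probability thresholding described above can be sketched as follows (toy arrays and hypothetical names; the paper's reported optimum was a threshold of 0.9):

```python
import numpy as np

def ensemble_and_threshold(prob_maps, tau=0.9):
    """Average per-view tumor probability maps (e.g. axial, sagittal, and
    coronal reconstructions), then binarize at threshold tau."""
    mean_map = np.mean(prob_maps, axis=0)
    return mean_map, (mean_map >= tau).astype(np.uint8)

axial    = np.array([[0.95, 0.40], [0.92, 0.10]])
sagittal = np.array([[0.97, 0.55], [0.88, 0.05]])
coronal  = np.array([[0.93, 0.35], [0.96, 0.15]])
mean_map, gtvp_mask = ensemble_and_threshold([axial, sagittal, coronal])
# Only voxels whose averaged probability clears tau enter the final mask;
# lowering tau grows the contour, which is the adaptive knob offered here.
```

Presenting the continuous `mean_map` rather than a single fixed contour is what lets the clinician adapt the boundary slice by slice.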
Affiliation(s)
- Alessia De Biase
- Department of Radiation Oncology, University Medical Center Groningen, Groningen, 9700RB, The Netherlands; Data Science Center in Health (DASH), University Medical Center Groningen, Groningen, 9700RB, The Netherlands
- Nanna M Sijtsema
- Department of Radiation Oncology, University Medical Center Groningen, Groningen, 9700RB, The Netherlands
- Lisanne V van Dijk
- Department of Radiation Oncology, University Medical Center Groningen, Groningen, 9700RB, The Netherlands; Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX-77030, Texas, United States of America
- Johannes A Langendijk
- Department of Radiation Oncology, University Medical Center Groningen, Groningen, 9700RB, The Netherlands
- Peter M A van Ooijen
- Department of Radiation Oncology, University Medical Center Groningen, Groningen, 9700RB, The Netherlands; Data Science Center in Health (DASH), University Medical Center Groningen, Groningen, 9700RB, The Netherlands
6
Zhu X, Jiang H, Diao Z. CGBO-Net: Cruciform structure guided and boundary-optimized lymphoma segmentation network. Comput Biol Med 2023; 153:106534. [PMID: 36608464] [DOI: 10.1016/j.compbiomed.2022.106534]
Abstract
Lymphoma segmentation plays an important role in the diagnosis and treatment of lymphocytic tumors. Most existing automatic segmentation methods struggle to give a precise tumor boundary and location. Semi-automatic methods are usually combined with manually added features, such as a bounding box or points, to locate the tumor. Inspired by this, we propose a cruciform structure guided and boundary-optimized lymphoma segmentation network (CGBO-Net). The method uses a cruciform structure extracted from PET images as an additional input to the network, while using a boundary gradient loss function to optimize the boundary of the tumor. Our method is divided into two main stages: in the first stage, we use the proposed axial context-based cruciform structure extraction (CCE) method to extract the cruciform structures of all tumor slices. In the second stage, we use PET/CT and the corresponding cruciform structure as input to the designed network (CGBO-Net) to extract tumor structure and boundary information. The Dice, Precision, Recall, IOU and RVD are 90.7%, 89.4%, 92.5%, 83.1% and 4.5%, respectively. Validated on the lymphoma dataset and publicly available head and neck data, our proposed approach outperforms other state-of-the-art semi-automatic segmentation methods and produces promising segmentation results.
Affiliation(s)
- Xiaolin Zhu
- Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, 110169, Liaoning, China
- Huiyan Jiang
- Software College, Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, 110169, Liaoning, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, 110169, Liaoning, China
- Zhaoshuo Diao
- Software College, Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, 110169, Liaoning, China
7
Zhou Y, Jiang H, Diao Z, Tong G, Luan Q, Li Y, Li X. MRLA-Net: A tumor segmentation network embedded with a multiple receptive-field lesion attention module in PET-CT images. Comput Biol Med 2023; 153:106538. [PMID: 36646023] [DOI: 10.1016/j.compbiomed.2023.106538]
Abstract
Tumor image segmentation is an important basis for doctors to diagnose and formulate treatment plans. PET-CT is an extremely important technology for recognizing the systemic situation of diseases due to the complementary advantages of its two modalities. However, current PET-CT tumor segmentation methods generally focus on the fusion of PET and CT features; fusing the features weakens the characteristics of each modality itself. Therefore, enhancing the modal features of the lesions can obtain optimized feature sets, which is extremely necessary to improve the segmentation results. This paper proposes an attention module that integrates the PET-CT diagnostic visual field and the modality characteristics of the lesion: the multiple receptive-field lesion attention module. It makes full use of spatial-domain, frequency-domain, and channel attention, combining a large receptive-field lesion localization module and a small receptive-field lesion enhancement module, which together constitute the multiple receptive-field lesion attention module. In addition, a network embedded with the multiple receptive-field lesion attention module is proposed for tumor segmentation. Experiments were conducted on a private liver tumor dataset as well as two publicly available datasets, the soft tissue sarcoma dataset and the head and neck tumor segmentation dataset. The experimental results show that the proposed method achieves excellent performance on multiple datasets, with a significant improvement over DenseUNet: tumor segmentation results on the above three PET/CT datasets improved by 7.25%, 6.5%, and 5.29% in Dice per case, respectively. Compared with the latest PET-CT liver tumor segmentation research, the proposed method improves by 8.32%.
Affiliation(s)
- Yang Zhou
- Department of Software College, Northeastern University, Shenyang 110819, China
- Huiyan Jiang
- Department of Software College, Northeastern University, Shenyang 110819, China
- Zhaoshuo Diao
- Department of Software College, Northeastern University, Shenyang 110819, China
- Guoyu Tong
- Department of Software College, Northeastern University, Shenyang 110819, China
- Qiu Luan
- Department of Nuclear Medicine, The First Affiliated Hospital of China Medical University, Shenyang 110001, China
- Yaming Li
- Department of Nuclear Medicine, The First Affiliated Hospital of China Medical University, Shenyang 110001, China
- Xuena Li
- Department of Nuclear Medicine, The First Affiliated Hospital of China Medical University, Shenyang 110001, China
8
Luo S, Jiang H, Wang M. C2BA-UNet: A context-coordination multi-atlas boundary-aware UNet-like method for PET/CT images based tumor segmentation. Comput Med Imaging Graph 2023; 103:102159. [PMID: 36549193] [DOI: 10.1016/j.compmedimag.2022.102159]
Abstract
Tumor segmentation is a necessary step in clinical processing that can help doctors diagnose tumors and plan surgical treatments. Since tumors are usually small, their locations and appearances vary substantially across individuals, and the contrast between tumors and adjacent normal tissues is low, tumor segmentation is still a challenging task. Although convolutional neural networks (CNNs) have achieved good results in tumor segmentation, information about tumor boundaries has rarely been explored. To solve this problem, this paper proposes a new method for automatic tumor segmentation in PET/CT images based on context coordination and boundary awareness, termed C2BA-UNet. We employ a UNet-like backbone network and replace the encoder with EfficientNet-B0 for efficiency. To acquire potential tumor boundaries, we propose a new multi-atlas boundary-aware (MABA) module based on a gradient atlas, an uncertainty atlas, and a level set atlas, which focuses on uncertain regions between tumors and adjacent tissues. Furthermore, we propose a new context coordination module (CCM) that combines multi-scale context information with an attention mechanism to optimize skip connections in high-level layers. To validate the superiority of our method, we conduct experiments on a publicly available soft tissue sarcoma (STS) dataset and a lymphoma dataset, and the results show our method is competitive with other comparison methods.
Affiliation(s)
- Shijie Luo
- Software College, Northeastern University, Shenyang 110819, China
- Huiyan Jiang
- Software College, Northeastern University, Shenyang 110819, China; Key Laboratory of Intelligent Computing in Biomedical Image, Ministry of Education, Northeastern University, Shenyang 110819, China
- Meng Wang
- Software College, Northeastern University, Shenyang 110819, China
9
Zhang X, Zhang B, Deng S, Meng Q, Chen X, Xiang D. Cross modality fusion for modality-specific lung tumor segmentation in PET-CT images. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac994e]
Abstract
Although positron emission tomography-computed tomography (PET-CT) images have been widely used, it is still challenging to accurately segment the lung tumor. Respiration, patient movement, and the imaging modality lead to a large discrepancy in lung tumor appearance between PET images and CT images. To overcome these difficulties, a novel network is designed to simultaneously obtain the corresponding lung tumors of PET images and CT images. The proposed network can fuse the complementary information and preserve modality-specific features of PET images and CT images. Due to the complementarity between PET images and CT images, the two modality images should be fused for automatic lung tumor segmentation. Therefore, cross modality decoding blocks are designed to extract modality-specific features of PET images and CT images with the constraints of the other modality. An edge consistency loss is also designed to solve the problem of blurred boundaries in PET images and CT images. The proposed method is tested on 126 PET-CT images with non-small cell lung cancer, and Dice similarity coefficient scores of lung tumor segmentation reach 75.66 ± 19.42 in CT images and 79.85 ± 16.76 in PET images, respectively. Extensive comparisons with state-of-the-art lung tumor segmentation methods have also been performed to demonstrate the superiority of the proposed network.
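An edge consistency loss of the kind mentioned above penalizes disagreement between predicted and reference boundaries. A minimal NumPy sketch of the general idea (gradient-magnitude edges plus an L1 penalty; the paper's exact formulation may differ, and the names are hypothetical):

```python
import numpy as np

def edge_map(mask):
    """Approximate boundary strength via image gradients of a soft mask."""
    gy, gx = np.gradient(mask.astype(float))
    return np.hypot(gx, gy)

def edge_consistency_loss(pred, gt):
    """Mean L1 distance between the edge maps of prediction and truth."""
    return float(np.abs(edge_map(pred) - edge_map(gt)).mean())

gt = np.zeros((8, 8)); gt[2:6, 2:6] = 1.0            # square "tumor"
aligned = gt.copy()                                   # perfect boundary
shifted = np.zeros((8, 8)); shifted[3:7, 3:7] = 1.0   # boundary off by one
# aligned incurs zero loss; shifted is penalized for its displaced edge.
```

Because the penalty is concentrated on boundary pixels, such a term sharpens blurred contours that a plain region-overlap loss tends to tolerate.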