1. Xiang F, Zhang Y, Tan X, Zhang J, Li T, Yan Y, Ma W, Chen Y. A bibliometric analysis based on hotspots and frontier trends of positron emission tomography/computed tomography utility in bone and soft tissue sarcoma. Front Oncol 2024; 14:1344643. PMID: 38974238; PMCID: PMC11224451; DOI: 10.3389/fonc.2024.1344643.
Abstract
Purpose: This study aimed to analyze articles on the diagnosis and treatment of bone and soft tissue sarcoma using positron emission tomography (PET)/computed tomography (CT) published in the last 13 years, in order to conduct a bibliometric analysis and identify the research hotspots and emerging trends. Methods: Web of Science was used to search for articles on PET/CT diagnosis and treatment of bone and soft tissue sarcoma published from January 2010 to June 2023. CiteSpace was utilized to import the data for bibliometric analysis. Results: In total, 425 relevant publications were identified. Publication output has maintained a relatively stable growth rate over the past 13 years. The USA has the highest number of published articles (139) and the highest centrality (0.35). The UDICE French Research Universities group is the most influential institution. Byun BH is a prominent contributor to this field. The Journal of Clinical Oncology has the highest impact factor in the field. Conclusion: The clinical application of PET/CT is currently a research hotspot. Emerging research focuses on combining PET/CT with advanced machine learning and/or other imaging modalities, on novel imaging agents, and on the integration of diagnosis and therapy. PET/CT has progressively become a crucial element in the identification and management of sarcomas; extensive, multicenter, prospective studies are needed to confirm its efficacy.
Affiliation(s)
- Feifan Xiang: The State Key Laboratory of Quality Research in Chinese Medicine, Macau University of Science and Technology, Macao SAR, China; Department of Orthopedics, Affiliated Hospital of Southwest Medical University, Luzhou, China; Department of Nuclear Medicine, Affiliated Hospital of Southwest Medical University, Luzhou, China
- Yue Zhang: Department of Orthopedics, Affiliated Hospital of Southwest Medical University, Luzhou, China
- Xiaoqi Tan: Department of Dermatology, Affiliated Hospital of Southwest Medical University, Luzhou, China
- Jintao Zhang: Department of Nuclear Medicine, Affiliated Hospital of Southwest Medical University, Luzhou, China; Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China; Institute of Nuclear Medicine, Southwest Medical University, Luzhou, China
- Tengfei Li: Department of Nuclear Medicine, Affiliated Hospital of Southwest Medical University, Luzhou, China; Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China; Institute of Nuclear Medicine, Southwest Medical University, Luzhou, China
- Yuanzhuo Yan: Department of Nuclear Medicine, Affiliated Hospital of Southwest Medical University, Luzhou, China; Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China; Institute of Nuclear Medicine, Southwest Medical University, Luzhou, China
- Wenzhe Ma: The State Key Laboratory of Quality Research in Chinese Medicine, Macau University of Science and Technology, Macao SAR, China
- Yue Chen: Department of Nuclear Medicine, Affiliated Hospital of Southwest Medical University, Luzhou, China; Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China; Institute of Nuclear Medicine, Southwest Medical University, Luzhou, China
2. Zou Z, Zou B, Kui X, Chen Z, Li Y. DGCBG-Net: A dual-branch network with global cross-modal interaction and boundary guidance for tumor segmentation in PET/CT images. Comput Methods Programs Biomed 2024; 250:108125. PMID: 38631130; DOI: 10.1016/j.cmpb.2024.108125.
Abstract
BACKGROUND AND OBJECTIVES Automatic tumor segmentation plays a crucial role in cancer diagnosis and treatment planning. Computed tomography (CT) and positron emission tomography (PET) are extensively employed for their complementary medical information. However, existing methods ignore bilateral cross-modal interaction of global features during feature extraction, and they underutilize multi-stage tumor boundary features. METHODS To address these limitations, we propose a dual-branch tumor segmentation network based on global cross-modal interaction and boundary guidance in PET/CT images (DGCBG-Net). DGCBG-Net consists of 1) a global cross-modal interaction module that extracts global contextual information from PET/CT images and promotes bilateral cross-modal interaction of global features; 2) a shared multi-path downsampling module that learns complementary features from the PET and CT modalities to mitigate the impact of misleading features and decrease the loss of discriminative features during downsampling; 3) a boundary prior-guided branch that extracts potential boundary features from CT images at multiple stages, assisting the semantic segmentation branch in improving the accuracy of tumor boundary segmentation. RESULTS Extensive experiments were conducted on the STS and HECKTOR 2022 datasets to evaluate the proposed method. The average Dice scores of DGCBG-Net on the two datasets are 80.33% and 79.29%, with average IoU scores of 67.64% and 70.18%. DGCBG-Net outperformed current state-of-the-art methods with a 1.77% higher Dice score and a 2.12% higher IoU score. CONCLUSIONS Extensive experimental results demonstrate that DGCBG-Net outperforms existing segmentation methods and is competitive with state-of-the-art approaches.
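For orientation, the snippet below is a minimal, illustrative sketch of bilateral global cross-modal interaction between PET and CT feature maps (global pooling followed by channel gating in both directions); module and variable names are hypothetical and this is not the authors' DGCBG-Net code.

```python
# Hedged sketch: bilateral exchange of global context between PET and CT branches.
import torch
import torch.nn as nn

class BilateralGlobalInteraction(nn.Module):
    """Each modality re-weights the other using its globally pooled context vector."""
    def __init__(self, channels: int):
        super().__init__()
        self.pet_to_ct = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())
        self.ct_to_pet = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, f_pet: torch.Tensor, f_ct: torch.Tensor):
        # f_pet, f_ct: (B, C, H, W) feature maps from the two branches
        g_pet = f_pet.mean(dim=(2, 3))   # (B, C) global PET context
        g_ct = f_ct.mean(dim=(2, 3))     # (B, C) global CT context
        f_ct_out = f_ct * self.pet_to_ct(g_pet)[:, :, None, None]
        f_pet_out = f_pet * self.ct_to_pet(g_ct)[:, :, None, None]
        return f_pet_out, f_ct_out

if __name__ == "__main__":
    block = BilateralGlobalInteraction(channels=64)
    pet, ct = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
    out_pet, out_ct = block(pet, ct)
    print(out_pet.shape, out_ct.shape)  # torch.Size([2, 64, 32, 32]) twice
```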
Affiliation(s)
- Ziwei Zou: School of Computer Science and Engineering, Central South University, No. 932, Lushan South Road, Changsha, 410083, China
- Beiji Zou: School of Computer Science and Engineering, Central South University, No. 932, Lushan South Road, Changsha, 410083, China
- Xiaoyan Kui: School of Computer Science and Engineering, Central South University, No. 932, Lushan South Road, Changsha, 410083, China
- Zhi Chen: School of Computer Science and Engineering, Central South University, No. 932, Lushan South Road, Changsha, 410083, China
- Yang Li: School of Informatics, Hunan University of Chinese Medicine, No. 300, Xueshi Road, Changsha, 410208, China
3. Shafi SM, Chinnappan SK. Segmenting and classifying lung diseases with M-Segnet and Hybrid Squeezenet-CNN architecture on CT images. PLoS One 2024; 19:e0302507. PMID: 38753712; PMCID: PMC11098347; DOI: 10.1371/journal.pone.0302507.
Abstract
Diagnosing lung diseases accurately and promptly is essential for effectively managing this significant public health challenge on a global scale. This paper introduces a new framework called Modified Segnet-based Lung Disease Segmentation and Severity Classification (MSLDSSC). The MSLDSSC model comprises four phases: preprocessing, segmentation, feature extraction, and classification. Initially, the input image undergoes preprocessing using an improved Wiener filter technique. This technique estimates the power spectral density of the noisy and original images and computes the SNR, assisted by the PSNR, to evaluate image quality. Next, the preprocessed image undergoes segmentation to identify and separate the region of interest (ROI) from the background objects in the lung image. We employ a modified Segnet mechanism that utilizes a proposed hard tanh-Softplus activation function for effective segmentation. Following segmentation, features such as MLDN, entropy with MRELBP, shape features, and deep features are extracted. The retrieved feature set is then input into a hybrid severity classification model comprising two classifiers, SDPA-Squeezenet and DCNN, which train on the retrieved feature set and effectively classify the severity level of lung diseases.
Affiliation(s)
- Syed Mohammed Shafi: School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, India
4. Shi J, Wang Z, Ruan S, Zhao M, Zhu Z, Kan H, An H, Xue X, Yan B. Rethinking automatic segmentation of gross target volume from a decoupling perspective. Comput Med Imaging Graph 2024; 112:102323. PMID: 38171254; DOI: 10.1016/j.compmedimag.2023.102323.
Abstract
Accurate and reliable segmentation of the Gross Target Volume (GTV) is critical in cancer Radiation Therapy (RT) planning, but manual delineation is time-consuming and subject to inter-observer variations. Recently, deep learning methods have achieved remarkable success in medical image segmentation. However, due to the low image contrast and extreme pixel imbalance between GTV and adjacent tissues, most existing methods usually obtain limited performance on automatic GTV segmentation. In this paper, we propose a Heterogeneous Cascade Framework (HCF) from a decoupling perspective, which decomposes GTV segmentation into independent recognition and segmentation subtasks. The former aims to screen out the abnormal slices containing GTV, while the latter performs pixel-wise segmentation of these slices. With the decoupled two-stage framework, we can efficiently filter normal slices to reduce false positives. To further improve the segmentation performance, we design a multi-level Spatial Alignment Network (SANet) based on the feature pyramid structure, which introduces a spatial alignment module into the decoder to compensate for the information loss caused by downsampling. Moreover, we propose a Combined Regularization (CR) loss and a Balance-Sampling Strategy (BSS) to alleviate the pixel imbalance problem and improve network convergence. Extensive experiments on two public datasets from the StructSeg2019 challenge demonstrate that our method outperforms state-of-the-art methods, especially with significant advantages in reducing false positives and accurately segmenting small objects. The code is available at https://github.com/shijun18/GTV_AutoSeg.
Affiliation(s)
- Jun Shi: School of Computer Science and Technology, University of Science and Technology of China, Hefei, 230026, China
- Zhaohui Wang: School of Computer Science and Technology, University of Science and Technology of China, Hefei, 230026, China
- Shulan Ruan: School of Computer Science and Technology, University of Science and Technology of China, Hefei, 230026, China
- Minfan Zhao: School of Computer Science and Technology, University of Science and Technology of China, Hefei, 230026, China
- Ziqi Zhu: School of Computer Science and Technology, University of Science and Technology of China, Hefei, 230026, China
- Hongyu Kan: School of Computer Science and Technology, University of Science and Technology of China, Hefei, 230026, China
- Hong An: School of Computer Science and Technology, University of Science and Technology of China, Hefei, 230026, China; Laoshan Laboratory, Qingdao, 266221, China
- Xudong Xue: Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430074, China
- Bing Yan: Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230001, China
5. Tsuchiya N, Kimura K, Tateishi U, Watabe T, Hatano K, Uemura M, Nonomura N, Shimizu A. Detection support of lesions in patients with prostate cancer using 18F-PSMA 1007 PET/CT. Int J Comput Assist Radiol Surg 2024. PMID: 38329565; DOI: 10.1007/s11548-024-03067-5.
Abstract
PURPOSE This study proposes a detection support system for primary and metastatic lesions of prostate cancer using 18F-PSMA 1007 positron emission tomography/computed tomography (PET/CT) images with non-image information, including patient metadata and location information of an input slice image. METHODS A convolutional neural network with condition generators and feature-wise linear modulation (FiLM) layers was employed to allow input of not only PET/CT images but also non-image information, namely, Gleason score, flag of pre- or post-prostatectomy, and normalized z-coordinate of an input slice. We explored the insertion position of the FiLM layers to optimize the conditioning of the network using non-image information. RESULTS 18F-PSMA 1007 PET/CT images were collected from 163 patients with prostate cancer and applied to the proposed system in a threefold cross-validation manner to evaluate the performance. The proposed system achieved a Dice score of 0.5732 (per case) and sensitivity of 0.8200 (per lesion), which are 3.87 and 4.16 points higher than the network without non-image information. CONCLUSION This study demonstrated the effectiveness of the use of non-image information, including metadata of the patient and location information of the input slice image, in the detection of prostate cancer from 18F-PSMA 1007 PET/CT images. Improvement in the sensitivity of inactive and small lesions remains a future challenge.
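The following is a minimal sketch of a feature-wise linear modulation (FiLM) layer that conditions image features on non-image inputs such as Gleason score, a surgery flag, and a slice z-coordinate; it illustrates the general FiLM mechanism only, not the authors' network, and all names are hypothetical.

```python
# Hedged sketch: FiLM conditioning of CNN features on patient/slice metadata.
import torch
import torch.nn as nn

class FiLMLayer(nn.Module):
    def __init__(self, num_meta: int, num_channels: int):
        super().__init__()
        # Condition generator maps metadata to per-channel scale (gamma) and shift (beta)
        self.generator = nn.Sequential(
            nn.Linear(num_meta, 64), nn.ReLU(),
            nn.Linear(64, 2 * num_channels),
        )
        self.num_channels = num_channels

    def forward(self, feat: torch.Tensor, meta: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) image features; meta: (B, num_meta) non-image information
        gamma, beta = self.generator(meta).split(self.num_channels, dim=1)
        return gamma[:, :, None, None] * feat + beta[:, :, None, None]

if __name__ == "__main__":
    film = FiLMLayer(num_meta=3, num_channels=32)
    feat = torch.randn(4, 32, 64, 64)
    meta = torch.tensor([[7.0, 1.0, 0.25]] * 4)  # e.g., Gleason score, post-op flag, z-coordinate
    print(film(feat, meta).shape)  # torch.Size([4, 32, 64, 64])
```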
Affiliation(s)
- Naoki Tsuchiya: Institute of Engineering, Tokyo University of Agriculture and Technology, Koganei, Tokyo, Japan
- Koichiro Kimura: Department of Diagnostic Radiology, Tokyo Medical and Dental University, Bunkyo City, Tokyo, Japan
- Ukihide Tateishi: Department of Diagnostic Radiology, Tokyo Medical and Dental University, Bunkyo City, Tokyo, Japan
- Tadashi Watabe: Department of Nuclear Medicine and Tracer Kinetics, Graduate School of Medicine, Osaka University, Osaka, Japan
- Koji Hatano: Department of Urology, Graduate School of Medicine, Osaka University, Osaka, Japan
- Motohide Uemura: Department of Urology, Graduate School of Medicine, Osaka University, Osaka, Japan; Department of Urology, Fukushima Medical University School of Medicine, Fukushima, Japan
- Norio Nonomura: Department of Urology, Graduate School of Medicine, Osaka University, Osaka, Japan
- Akinobu Shimizu: Institute of Engineering, Tokyo University of Agriculture and Technology, Koganei, Tokyo, Japan
6. Bi L, Buehner U, Fu X, Williamson T, Choong P, Kim J. Hybrid CNN-transformer network for interactive learning of challenging musculoskeletal images. Comput Methods Programs Biomed 2024; 243:107875. PMID: 37871450; DOI: 10.1016/j.cmpb.2023.107875.
Abstract
BACKGROUND AND OBJECTIVES Segmentation of regions of interest (ROIs) such as tumors and bones plays an essential role in the analysis of musculoskeletal (MSK) images. Segmentation results can help orthopaedic surgeons with surgical outcome assessment and patient gait-cycle simulation. Deep learning-based automatic segmentation methods, particularly those using fully convolutional networks (FCNs), are considered the state of the art. However, in scenarios where the training data are insufficient to account for all the variations in ROIs, these methods struggle to segment challenging ROIs with less common image characteristics, such as low contrast to the background, inhomogeneous textures, and fuzzy boundaries. METHODS We propose a hybrid convolutional neural network - transformer network (HCTN) for semi-automatic segmentation to overcome the limitations of segmenting challenging MSK images. Specifically, we fuse user inputs (e.g., manual mouse clicks) with high-level semantic image features derived from the neural network (automatic), where the user inputs are used in interactive training for uncommon image characteristics. In addition, we leverage a transformer network (TN) - a deep learning model designed for handling sequence data - together with features derived from FCNs for segmentation; this addresses the limitation of FCNs, which operate on small kernels and therefore tend to dismiss global context and focus only on local patterns. RESULTS We purposely selected three MSK imaging datasets covering a variety of structures to evaluate the generalizability of the proposed method. Our semi-automatic HCTN method achieved a Dice similarity coefficient (DSC) of 88.46 ± 9.41 for segmenting soft-tissue sarcoma tumors from magnetic resonance (MR) images, 73.32 ± 11.97 for segmenting osteosarcoma tumors from MR images, and 93.93 ± 1.84 for segmenting the clavicle bones from chest radiographs. Compared with the current state-of-the-art automatic segmentation method, HCTN is 11.7%, 19.11% and 7.36% higher in DSC on the three datasets, respectively. CONCLUSION Our experimental results demonstrate that HCTN achieved more generalizable results than current methods, especially on challenging MSK studies.
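As an illustration of how user clicks can be injected into a segmentation network, the sketch below converts click coordinates into a Gaussian guidance channel concatenated with the image; the function name, sigma, and fusion point are assumptions for illustration, not the HCTN implementation.

```python
# Hedged sketch: encode user mouse clicks as an extra input channel for interactive segmentation.
import torch

def click_heatmap(clicks, height: int, width: int, sigma: float = 5.0) -> torch.Tensor:
    """clicks: list of (y, x) pixel coordinates -> (1, H, W) Gaussian guidance map."""
    ys = torch.arange(height).view(-1, 1).float()
    xs = torch.arange(width).view(1, -1).float()
    heat = torch.zeros(height, width)
    for cy, cx in clicks:
        heat = torch.maximum(
            heat, torch.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
        )
    return heat.unsqueeze(0)

if __name__ == "__main__":
    image = torch.randn(1, 1, 96, 96)                          # e.g., an MR slice
    guide = click_heatmap([(40, 50), (60, 30)], 96, 96)        # user foreground clicks
    net_input = torch.cat([image, guide.unsqueeze(0)], dim=1)  # (1, 2, 96, 96)
    print(net_input.shape)
```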
Affiliation(s)
- Lei Bi: Institute of Translational Medicine, National Center for Translational Medicine, Shanghai Jiao Tong University, Shanghai, China; School of Computer Science, University of Sydney, NSW, Australia
- Xiaohang Fu: School of Computer Science, University of Sydney, NSW, Australia
- Tom Williamson: Stryker Corporation, Kalamazoo, Michigan, USA; Centre for Additive Manufacturing, School of Engineering, RMIT University, VIC, Australia
- Peter Choong: Department of Surgery, University of Melbourne, VIC, Australia
- Jinman Kim: School of Computer Science, University of Sydney, NSW, Australia
7. Zamanian M, Treglia G, Abedi I. Diagnostic Accuracy of PET with Different Radiotracers versus Bone Scintigraphy for Detecting Bone Metastases of Breast Cancer: A Systematic Review and a Meta-Analysis. J Imaging 2023; 9:274. PMID: 38132692; PMCID: PMC10744045; DOI: 10.3390/jimaging9120274.
Abstract
Due to the importance of correct and timely diagnosis of bone metastases in advanced breast cancer (BrC), we performed a meta-analysis evaluating the diagnostic accuracy of [18F]FDG or Na[18F]F PET, PET/CT, and PET/MRI versus [99mTc]Tc-diphosphonate bone scintigraphy (BS). The PubMed, Embase, Scopus, and Scholar electronic databases were searched. The results of the selected studies were analyzed using pooled sensitivity and specificity, diagnostic odds ratio (DOR), positive and negative likelihood ratios (LR+, LR-), and summary receiver-operating characteristic (SROC) curves. Eleven studies including 753 BrC patients were included in the meta-analysis. The patient-based pooled values of sensitivity, specificity, and area under the SROC curve (AUC) for BS (with 95% confidence interval values) were 90% (86-93), 91% (87-94), and 0.93, respectively. These indices for [18F]FDG PET(/CT) were 92% (88-95), 99% (96-100), and 0.99, respectively, and for Na[18F]F PET(/CT) were 96% (90-99), 81% (72-88), and 0.99, respectively. BS has good diagnostic performance in detecting BrC bone metastases. However, due to the higher and balanced sensitivity and specificity of [18F]FDG PET(/CT) compared to BS and Na[18F]F PET(/CT), and its advantage in evaluating extra-skeletal lesions, [18F]FDG PET(/CT) should be the preferred multimodal imaging method for evaluating bone metastases of BrC, if available.
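For reference, the per-study diagnostic indices reported above (sensitivity, specificity, LR+, LR-, DOR) can be derived from a 2x2 table as sketched below; the counts are hypothetical, and a full meta-analysis would additionally pool studies, e.g. with a bivariate random-effects model, which is not shown here.

```python
# Hedged sketch: per-study diagnostic indices from true/false positive and negative counts.
def diagnostic_indices(tp: int, fp: int, fn: int, tn: int) -> dict:
    sens = tp / (tp + fn)          # sensitivity (true-positive rate)
    spec = tn / (tn + fp)          # specificity (true-negative rate)
    lr_pos = sens / (1 - spec)     # positive likelihood ratio
    lr_neg = (1 - sens) / spec     # negative likelihood ratio
    dor = lr_pos / lr_neg          # diagnostic odds ratio = (TP*TN)/(FP*FN)
    return {"sensitivity": sens, "specificity": spec,
            "LR+": lr_pos, "LR-": lr_neg, "DOR": dor}

if __name__ == "__main__":
    # Hypothetical counts for a single study
    print(diagnostic_indices(tp=90, fp=9, fn=10, tn=91))
```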
Affiliation(s)
- Maryam Zamanian: Department of Medical Physics, School of Medicine, Isfahan University of Medical Sciences, Isfahan 8174673461, Iran
- Giorgio Treglia: Faculty of Biomedical Sciences, Università della Svizzera Italiana, 6900 Lugano, Switzerland; Division of Nuclear Medicine and Molecular Imaging, Imaging Institute of Southern Switzerland, Ente Ospedaliero Cantonale, 6500 Bellinzona, Switzerland; Faculty of Biology and Medicine, University of Lausanne, 1015 Lausanne, Switzerland
- Iraj Abedi: Department of Medical Physics, School of Medicine, Isfahan University of Medical Sciences, Isfahan 8174673461, Iran
8. Alshmrani GM, Ni Q, Jiang R, Muhammed N. Hyper-Dense_Lung_Seg: Multimodal-Fusion-Based Modified U-Net for Lung Tumour Segmentation Using Multimodality of CT-PET Scans. Diagnostics (Basel) 2023; 13:3481. PMID: 37998617; PMCID: PMC10670323; DOI: 10.3390/diagnostics13223481.
Abstract
Lung cancer is the leading cause of cancer-related deaths globally and the second most commonly diagnosed cancer. The segmentation of lung tumours, treatment evaluation, and tumour stage classification have become significantly more accessible with the advent of PET/CT scans, which provide both functional and anatomic data in a single examination. However, integrating images from different modalities can be time-consuming for medical professionals and remains a challenging task. This challenge arises from several factors, including differences in image acquisition techniques, image resolutions, and the inherent variations in the spectral and temporal data captured by different imaging modalities. Artificial intelligence (AI) methodologies have shown potential in automating image integration and segmentation. To address these challenges, multimodal-fusion-based U-Net architectures (early fusion, late fusion, dense fusion, hyper-dense fusion, and hyper-dense VGG16 U-Net) are proposed for lung tumour segmentation. A Dice score of 73% shows that hyper-dense VGG16 U-Net is superior to the other four proposed models. The proposed method can potentially aid medical professionals in detecting lung cancer at an early stage.
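A minimal sketch of the distinction between early fusion (stacking co-registered CT and PET as input channels of one network) and late fusion (merging per-modality predictions) follows; shapes and function names are illustrative and do not reproduce the paper's U-Net variants.

```python
# Hedged sketch: early vs. late fusion of co-registered CT and PET inputs.
import torch
import torch.nn as nn

def early_fusion_input(ct: torch.Tensor, pet: torch.Tensor) -> torch.Tensor:
    # ct, pet: (B, 1, H, W) co-registered slices -> (B, 2, H, W) fused input
    return torch.cat([ct, pet], dim=1)

def late_fusion_logits(logits_ct: torch.Tensor, logits_pet: torch.Tensor) -> torch.Tensor:
    # Average the predictions produced by two separate single-modality networks
    return 0.5 * (logits_ct + logits_pet)

if __name__ == "__main__":
    ct, pet = torch.randn(1, 1, 128, 128), torch.randn(1, 1, 128, 128)
    fused = early_fusion_input(ct, pet)
    first_conv = nn.Conv2d(in_channels=2, out_channels=16, kernel_size=3, padding=1)
    print(first_conv(fused).shape)  # torch.Size([1, 16, 128, 128])
```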
Affiliation(s)
- Goram Mufarah Alshmrani: School of Computing and Communications, Lancaster University, Lancaster LA1 4YW, UK; College of Computing and Information Technology, University of Bisha, Bisha 67714, Saudi Arabia
- Qiang Ni: School of Computing and Communications, Lancaster University, Lancaster LA1 4YW, UK
- Richard Jiang: School of Computing and Communications, Lancaster University, Lancaster LA1 4YW, UK
- Nada Muhammed: Computers and Control Engineering Department, Faculty of Engineering, Tanta University, Tanta 31733, Egypt
9. Zhong Y, Cai C, Chen T, Gui H, Deng J, Yang M, Yu B, Song Y, Wang T, Sun X, Shi J, Chen Y, Xie D, Chen C, She Y. PET/CT based cross-modal deep learning signature to predict occult nodal metastasis in lung cancer. Nat Commun 2023; 14:7513. PMID: 37980411; PMCID: PMC10657428; DOI: 10.1038/s41467-023-42811-4.
Abstract
Occult nodal metastasis (ONM) plays a significant role in comprehensive treatments of non-small cell lung cancer (NSCLC). This study aims to develop a deep learning signature based on positron emission tomography/computed tomography to predict ONM of clinical stage N0 NSCLC. An internal cohort (n = 1911) is included to construct the deep learning nodal metastasis signature (DLNMS). Subsequently, an external cohort (n = 355) and a prospective cohort (n = 999) are utilized to fully validate the predictive performances of the DLNMS. Here, we show areas under the receiver operating characteristic curve of the DLNMS for occult N1 prediction are 0.958, 0.879 and 0.914 in the validation set, external cohort and prospective cohort, respectively, and for occult N2 prediction are 0.942, 0.875 and 0.919, respectively, which are significantly better than the single-modal deep learning models, clinical model and physicians. This study demonstrates that the DLNMS harbors the potential to predict ONM of clinical stage N0 NSCLC.
Affiliation(s)
- Yifan Zhong: Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
- Chuang Cai: School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang, Jiangsu, China
- Tao Chen: Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
- Hao Gui: Graduate School at Shenzhen, Tsinghua University, Shenzhen, China
- Jiajun Deng: Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
- Minglei Yang: Department of Thoracic Surgery, Ningbo HwaMei Hospital, Chinese Academy of Sciences, Zhejiang, China
- Bentong Yu: Department of Thoracic Surgery, The First Affiliated Hospital of Nanchang University, Jiangxi, China
- Yongxiang Song: Department of Thoracic Surgery, Affiliated Hospital of Zunyi Medical University, Guizhou, China
- Tingting Wang: Department of Radiology, Zhongshan Hospital, Fudan University, Shanghai, China
- Xiwen Sun: Department of Radiology, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
- Jingyun Shi: Department of Radiology, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
- Yangchun Chen: Department of Nuclear Medicine, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
- Dong Xie: Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
- Chang Chen: Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
- Yunlang She: Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
10. He J, Zhang Y, Chung M, Wang M, Wang K, Ma Y, Ding X, Li Q, Pu Y. Whole-body tumor segmentation from PET/CT images using a two-stage cascaded neural network with camouflaged object detection mechanisms. Med Phys 2023; 50:6151-6162. PMID: 37134002; DOI: 10.1002/mp.16438.
Abstract
BACKGROUND Whole-body metabolic tumor volume (MTVwb) is an independent prognostic factor for overall survival in lung cancer patients, and automatic segmentation methods have been proposed for MTV calculation. Nevertheless, most existing methods for patients with lung cancer only segment tumors in the thoracic region. PURPOSE In this paper, we present a two-stage cascaded neural network integrated with camouflaged object detection mechanisms (TS-Code-Net) for automatically segmenting tumors from whole-body PET/CT images. METHODS Firstly, tumors are detected from the maximum intensity projection (MIP) images of PET/CT scans, and the tumors' approximate localizations along the z-axis are identified. Secondly, segmentation is performed on the PET/CT slices that contain tumors identified in the first step. Camouflaged object detection mechanisms are utilized to distinguish tumors from surrounding regions that have similar standardized uptake values (SUV) and texture appearance. Finally, TS-Code-Net is trained by minimizing a total loss that incorporates a segmentation accuracy loss and a class imbalance loss. RESULTS The performance of TS-Code-Net is tested on a whole-body PET/CT image dataset including 480 non-small cell lung cancer (NSCLC) patients with five-fold cross-validation using image segmentation metrics. Our method achieves 0.70, 0.76, and 0.70 for Dice, sensitivity, and precision, respectively, which demonstrates the superiority of TS-Code-Net over several existing methods for metastatic lung cancer segmentation from whole-body PET/CT images. CONCLUSIONS The proposed TS-Code-Net is effective for whole-body tumor segmentation of PET/CT images. Code for TS-Code-Net is available at: https://github.com/zyj19/TS-Code-Net.
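The first-stage idea of localizing tumours on maximum intensity projection (MIP) images can be illustrated with a few lines of NumPy; the array layout and the detection threshold below are assumptions for illustration, not the TS-Code-Net pipeline.

```python
# Hedged sketch: maximum intensity projections of a PET volume, layout assumed (z, y, x).
import numpy as np

def mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Collapse a 3D volume into a 2D image by keeping the maximum value along `axis`."""
    return volume.max(axis=axis)

if __name__ == "__main__":
    pet = np.random.rand(200, 128, 128)    # synthetic SUV volume, shape (z, y, x)
    coronal_like = mip(pet, axis=1)        # project over y -> (z, x) MIP image
    axial_profile = pet.max(axis=(1, 2))   # per-slice maximum, to flag candidate tumour slices
    candidate_slices = np.where(axial_profile > 0.99)[0]  # illustrative threshold only
    print(coronal_like.shape, candidate_slices[:5])
```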
Affiliation(s)
- Jiangping He: Department of Electronic Engineering, Lanzhou University of Finance and Economics, Lanzhou, Gansu, China
- Yangjie Zhang: Department of Electronic Engineering, Lanzhou University of Finance and Economics, Lanzhou, Gansu, China
- Maggie Chung: Department of Radiology, University of California, San Francisco, California, USA
- Michael Wang: Department of Pathology, University of California, San Francisco, California, USA
- Kun Wang: Department of Electronic Engineering, Lanzhou University of Finance and Economics, Lanzhou, Gansu, China
- Yan Ma: Department of Electronic Engineering, Lanzhou University of Finance and Economics, Lanzhou, Gansu, China
- Xiaoyang Ding: Department of Electronic Engineering, Lanzhou University of Finance and Economics, Lanzhou, Gansu, China
- Qiang Li: Department of Electronic Engineering, Lanzhou University of Finance and Economics, Lanzhou, Gansu, China
- Yonglin Pu: Department of Radiology, University of Chicago, Chicago, Illinois, USA
11. Xue H, Fang Q, Yao Y, Teng Y. 3D PET/CT tumor segmentation based on nnU-Net with GCN refinement. Phys Med Biol 2023; 68:185018. PMID: 37549672; DOI: 10.1088/1361-6560/acede6.
Abstract
Objective. Whole-body positron emission tomography/computed tomography (PET/CT) scans are an important tool for diagnosing various malignancies (e.g. malignant melanoma, lymphoma, or lung cancer), and accurate segmentation of tumors is a key part of subsequent treatment. In recent years, convolutional neural network-based segmentation methods have been extensively investigated. However, these methods often give inaccurate segmentation results, such as oversegmentation and undersegmentation. To address these issues, we propose a postprocessing method based on a graph convolutional network (GCN) to refine inaccurate segmentation results and improve the overall segmentation accuracy. Approach. First, nnU-Net is used as an initial segmentation framework, and the uncertainty in the segmentation results is analyzed. Certain and uncertain pixels are used to establish the nodes of a graph. Each node and its 6 neighbors form an edge, and 32 nodes are randomly selected as uncertain nodes to form edges. The highly uncertain nodes are used as the subsequent refinement targets. Second, the nnU-Net results of the certain nodes are used as labels to form a semisupervised graph network problem, and the uncertain part is optimized by training the GCN to improve the segmentation performance. This constitutes our proposed nnU-Net + GCN segmentation framework. Main results. We perform tumor segmentation experiments with the PET/CT dataset from the MICCAI 2022 autoPET challenge. Among these data, 30 cases are randomly selected for testing, and the experimental results show that the false-positive rate is effectively reduced with nnU-Net + GCN refinement. In quantitative analysis, there is an improvement of 2.1% for the average Dice score, 6.4 for the 95% Hausdorff distance (HD95), and 1.7 for the average symmetric surface distance. Significance. The quantitative and qualitative evaluation results show that the GCN postprocessing method can effectively improve tumor segmentation performance.
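A single normalized graph-convolution step of the kind used for such refinement can be sketched as follows; the toy graph and the plain-PyTorch implementation are illustrative and do not reproduce the paper's graph construction, uncertainty analysis, or training.

```python
# Hedged sketch: one Kipf-and-Welling-style graph convolution over voxel nodes.
import torch

def gcn_layer(x: torch.Tensor, adj: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    # x: (N, F) node features; adj: (N, N) adjacency matrix; weight: (F, F_out)
    a_hat = adj + torch.eye(adj.size(0))                 # add self-loops
    d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))  # D^{-1/2}
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt             # symmetric normalization
    return torch.relu(a_norm @ x @ weight)

if __name__ == "__main__":
    n_nodes, f_in, f_out = 6, 8, 2                       # tiny toy graph of voxels
    x = torch.randn(n_nodes, f_in)
    adj = (torch.rand(n_nodes, n_nodes) > 0.7).float()
    adj = ((adj + adj.t()) > 0).float()                  # make the adjacency symmetric
    w = torch.randn(f_in, f_out)
    print(gcn_layer(x, adj, w).shape)                    # torch.Size([6, 2])
```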
Affiliation(s)
- Hengzhi Xue: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110004, People's Republic of China
- Qingqing Fang: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110004, People's Republic of China
- Yudong Yao: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110004, People's Republic of China; Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ 07102, United States of America
- Yueyang Teng: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110004, People's Republic of China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, 110169, People's Republic of China
12. Zhang W, Ray S. From coarse to fine: a deep 3D probability volume contours framework for tumour segmentation and dose painting in PET images. Front Radiol 2023; 3:1225215. PMID: 37745205; PMCID: PMC10512384; DOI: 10.3389/fradi.2023.1225215.
Abstract
With the increasing integration of functional imaging techniques like Positron Emission Tomography (PET) into radiotherapy (RT) practices, a paradigm shift in cancer treatment methodologies is underway. A fundamental step in RT planning is the accurate segmentation of tumours based on clinical diagnosis. Furthermore, novel tumour control methods, such as intensity modulated radiation therapy (IMRT) dose painting, demand the precise delineation of multiple intensity value contours to ensure optimal tumour dose distribution. Recently, convolutional neural networks (CNNs) have made significant strides in 3D image segmentation tasks, most of which present the output map at a voxel-wise level. However, because of information loss in subsequent downsampling layers, they frequently fail to precisely identify object boundaries. Moreover, in the context of dose painting strategies, there is an imperative need for reliable and precise image segmentation techniques to delineate high recurrence-risk contours. To address these challenges, we introduce a 3D coarse-to-fine framework, integrating a CNN with a kernel smoothing-based probability volume contour approach (KsPC). This integrated approach generates contour-based segmentation volumes, mimicking expert-level precision and providing accurate probability contours crucial for optimizing dose painting/IMRT strategies. Our final model, named KsPC-Net, leverages a CNN backbone to automatically learn parameters in the kernel smoothing process, thereby obviating the need for user-supplied tuning parameters. The 3D KsPC-Net exploits the strength of KsPC to simultaneously identify object boundaries and generate corresponding probability volume contours, which can be trained within an end-to-end framework. The proposed model has demonstrated promising performance, surpassing state-of-the-art models when tested against the MICCAI 2021 challenge dataset (HECKTOR).
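The notion of extracting iso-probability contours from a smoothed probability map can be illustrated as below; this uses a fixed Gaussian kernel and synthetic data, whereas KsPC-Net learns the smoothing parameters end to end, so the sketch shows only the general idea.

```python
# Hedged sketch: iso-probability contours from a kernel-smoothed 2D probability map.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import measure

def probability_contours(prob_map: np.ndarray, sigma: float, levels=(0.5, 0.7, 0.9)):
    smoothed = gaussian_filter(prob_map, sigma=sigma)   # kernel smoothing of voxel-wise probabilities
    return {lvl: measure.find_contours(smoothed, lvl) for lvl in levels}

if __name__ == "__main__":
    yy, xx = np.mgrid[0:128, 0:128]
    # Synthetic tumour probability map peaking at the image centre
    prob = np.exp(-(((yy - 64) ** 2 + (xx - 64) ** 2) / (2 * 15.0 ** 2)))
    contours = probability_contours(prob, sigma=2.0)
    print({lvl: len(c) for lvl, c in contours.items()})
```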
Affiliation(s)
- Wenhui Zhang: School of Mathematics and Statistics, University of Glasgow, Glasgow, United Kingdom
13. Yu X, He L, Wang Y, Dong Y, Song Y, Yuan Z, Yan Z, Wang W. A deep learning approach for automatic tumor delineation in stereotactic radiotherapy for non-small cell lung cancer using diagnostic PET-CT and planning CT. Front Oncol 2023; 13:1235461. PMID: 37601687; PMCID: PMC10437048; DOI: 10.3389/fonc.2023.1235461.
Abstract
Introduction: Accurate delineation of tumor targets is crucial for stereotactic body radiation therapy (SBRT) for non-small cell lung cancer (NSCLC). This study aims to develop a deep learning-based segmentation approach to accurately and efficiently delineate NSCLC targets using diagnostic PET-CT and SBRT planning CT (pCT). Methods: The diagnostic PET was registered to the pCT using the transform matrix from registering the diagnostic CT to the pCT. We proposed a 3D-UNet-based segmentation method to segment NSCLC tumor targets on dual-modality PET-pCT images. This network contained squeeze-and-excitation and residual blocks in each convolutional block to perform dynamic channel-wise feature recalibration. Furthermore, up-sampling paths were added to supplement low-resolution features to the model and to compute the overall loss function. The Dice similarity coefficient (DSC), precision, recall, and average symmetric surface distance were used to assess the performance of the proposed approach on 86 pairs of diagnostic PET and pCT images. The proposed model using dual-modality images was compared with both the conventional 3D-UNet architecture and single-modality image input. Results: The average DSC of the proposed model with both PET and pCT images was 0.844, compared with 0.795 and 0.827 when using 3D-UNet and nnU-Net. It also outperformed using either pCT or PET alone with the same network, which had DSCs of 0.823 and 0.732, respectively. Discussion: Our proposed segmentation approach is therefore able to outperform the current 3D-UNet network with diagnostic PET and pCT images. The integration of the two image modalities helps improve segmentation accuracy.
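The squeeze-and-excitation recalibration mentioned above can be sketched for 3D feature maps as follows; this is the generic SE block, not the authors' exact architecture, and the reduction ratio is an assumption.

```python
# Hedged sketch: a standard squeeze-and-excitation (SE) block for 3D feature maps.
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, D, H, W); squeeze to per-channel descriptor, then re-weight channels
        w = self.fc(x.mean(dim=(2, 3, 4)))
        return x * w[:, :, None, None, None]

if __name__ == "__main__":
    se = SEBlock3D(channels=32)
    x = torch.randn(1, 32, 16, 64, 64)
    print(se(x).shape)  # torch.Size([1, 32, 16, 64, 64])
```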
Affiliation(s)
- Xuyao Yu: Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China; Tianjin Medical University, Tianjin, China
- Lian He: Perception Vision Medical Technologies Co Ltd, Guangzhou, China
- Yuwen Wang: Department of Radiotherapy, Tianjin Cancer Hospital Airport Hospital, Tianjin, China
- Yang Dong: Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Yongchun Song: Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Zhiyong Yuan: Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Ziye Yan: Perception Vision Medical Technologies Co Ltd, Guangzhou, China
- Wei Wang: Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
14. Bi L, Fulham M, Song S, Feng DD, Kim J. Hyper-Connected Transformer Network for Multi-Modality PET-CT Segmentation. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38083369; DOI: 10.1109/embc40787.2023.10340635.
Abstract
[18F]-Fluorodeoxyglucose (FDG) positron emission tomography - computed tomography (PET-CT) has become the imaging modality of choice for diagnosing many cancers. Co-learning complementary PET-CT imaging features is a fundamental requirement for automatic tumor segmentation and for developing computer-aided cancer diagnosis systems. In this study, we propose a hyper-connected transformer (HCT) network that integrates a transformer network (TN) with hyper-connected fusion for multi-modality PET-CT images. The TN was leveraged for its ability to provide global dependencies in image feature learning, which was achieved by using image patch embeddings with a self-attention mechanism to capture image-wide contextual information. We extended the single-modality definition of the TN with multiple TN-based branches to separately extract image features. We also introduced a hyper-connected fusion to fuse the contextual and complementary image features across multiple transformers in an iterative manner. Our results with two clinical datasets show that HCT achieved better segmentation accuracy than existing methods. Clinical relevance: We anticipate that our approach can be an effective and supportive tool to aid physicians in tumor quantification and in identifying image biomarkers for cancer treatment.
15. Wu Y, Jiang H, Pang W. MSRA-Net: Tumor segmentation network based on Multi-scale Residual Attention. Comput Biol Med 2023; 158:106818. PMID: 36966557; DOI: 10.1016/j.compbiomed.2023.106818.
Abstract
Automatic segmentation of medical images is an important part of computer-aided diagnosis, and tumor segmentation is an important branch of medical image segmentation; accurate automatic segmentation methods are therefore very important in medical diagnosis and treatment. Positron emission tomography (PET) and X-ray computed tomography (CT) images are widely used in medical image segmentation to help doctors accurately locate information such as tumor location and shape, providing metabolic and anatomical information, respectively. At present, PET/CT images have not been effectively combined in medical image segmentation research, and the complementary semantic information between the superficial and deep layers of the neural network has not been fully exploited. To solve these problems, this paper proposes a Multi-scale Residual Attention network (MSRA-Net) for tumor segmentation in PET/CT. We first use an attention-fusion-based approach to automatically learn the tumor-related areas of the PET images and weaken the irrelevant areas. Then, the segmentation results of the PET branch are processed to optimize the segmentation results of the CT branch using an attention mechanism. The proposed network (MSRA-Net) can effectively fuse the PET and CT images, improving the precision of tumor segmentation by using the complementary information of the multi-modal images and reducing the uncertainty of single-modality segmentation. The proposed model uses a multi-scale attention mechanism and residual modules, which fuse multi-scale features to form complementary features of different scales. Compared with state-of-the-art medical image segmentation methods, the Dice coefficient of the proposed network on the soft tissue sarcoma and lymphoma datasets increased by 8.5% and 6.1%, respectively, relative to U-Net, showing a significant improvement.
16. Xie T, Wang Z, Li H, Wu P, Huang H, Zhang H, Alsaadi FE, Zeng N. Progressive attention integration-based multi-scale efficient network for medical imaging analysis with application to COVID-19 diagnosis. Comput Biol Med 2023; 159:106947. PMID: 37099976; PMCID: PMC10116157; DOI: 10.1016/j.compbiomed.2023.106947.
Abstract
In this paper, a novel deep learning-based medical imaging analysis framework is developed, which aims to deal with the insufficient feature learning caused by the imperfect property of imaging data. Named as multi-scale efficient network (MEN), the proposed method integrates different attention mechanisms to realize sufficient extraction of both detailed features and semantic information in a progressive learning manner. In particular, a fused-attention block is designed to extract fine-grained details from the input, where the squeeze-excitation (SE) attention mechanism is applied to make the model focus on potential lesion areas. A multi-scale low information loss (MSLIL)-attention block is proposed to compensate for potential global information loss and enhance the semantic correlations among features, where the efficient channel attention (ECA) mechanism is adopted. The proposed MEN is comprehensively evaluated on two COVID-19 diagnostic tasks, and the results show that as compared with some other advanced deep learning models, the proposed method is competitive in accurate COVID-19 recognition, which yields the best accuracy of 98.68% and 98.85%, respectively, and exhibits satisfactory generalization ability as well.
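The efficient channel attention (ECA) mechanism adopted in the MSLIL-attention block can be sketched as follows; the fixed kernel size is a simplification (ECA normally adapts it to the channel count), and this is not the authors' code.

```python
# Hedged sketch: efficient channel attention (ECA) via a 1D convolution over channel descriptors.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                     # (B, C) global descriptor
        y = self.conv(y.unsqueeze(1)).squeeze(1)   # lightweight 1D conv across channels
        return x * self.sigmoid(y)[:, :, None, None]

if __name__ == "__main__":
    eca = ECA()
    print(eca(torch.randn(2, 64, 32, 32)).shape)   # torch.Size([2, 64, 32, 32])
```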
Affiliation(s)
- Tingyi Xie: School of Opto-electronic and Communication Engineering, Xiamen University of Technology, Xiamen 361024, China
- Zidong Wang: Department of Computer Science, Brunel University London, Uxbridge UB8 3PH, UK
- Han Li: Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China
- Peishu Wu: Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China
- Huixiang Huang: School of Opto-electronic and Communication Engineering, Xiamen University of Technology, Xiamen 361024, China
- Hongyi Zhang: School of Opto-electronic and Communication Engineering, Xiamen University of Technology, Xiamen 361024, China
- Fuad E Alsaadi: Communication Systems and Networks Research Group, Department of Electrical and Computer Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah, Saudi Arabia
- Nianyin Zeng: Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China
17. Ahmad I, Xia Y, Cui H, Islam ZU. AATSN: Anatomy Aware Tumor Segmentation Network for PET-CT volumes and images using a lightweight fusion-attention mechanism. Comput Biol Med 2023; 157:106748. PMID: 36958235; DOI: 10.1016/j.compbiomed.2023.106748.
Abstract
Fluorodeoxyglucose positron emission tomography (FDG-PET) provides metabolic information, while computed tomography (CT) provides the anatomical context of the tumors. Combined PET-CT segmentation helps in computer-assisted tumor diagnosis, staging, and treatment planning. Current state-of-the-art models mainly rely on early or late fusion techniques. These methods, however, rarely learn complementary PET-CT features and cannot efficiently correlate anatomical and metabolic features. These drawbacks can be removed by intermediate fusion; however, it produces inaccurate segmentations when the modalities contain heterogeneous textures, and it requires massive computation. In this work, we propose AATSN (Anatomy Aware Tumor Segmentation Network), which extracts anatomical CT features and then intermediately fuses them with PET features through a fusion-attention mechanism. Our anatomy-aware fusion-attention mechanism fuses selectively useful CT and PET features instead of the full feature set, which not only improves network performance but also requires fewer resources. Furthermore, our model is scalable to both 2D images and 3D volumes. The proposed model is rigorously trained, tested, evaluated, and compared to the state of the art through several ablation studies on the largest available datasets. We achieved a Dice score of 0.8104 and a median HD95 of 2.11 in a 3D setup, and a Dice score of 0.6756 in a 2D setup. We demonstrate that AATSN achieves a significant performance gain while remaining lightweight compared to state-of-the-art methods. The implications of AATSN include improved tumor delineation for diagnosis, analysis, and radiotherapy treatment.
Affiliation(s)
- Ibtihaj Ahmad: School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, 710072, Shaanxi, PR China
- Yong Xia: School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, 710072, Shaanxi, PR China
- Hengfei Cui: School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, 710072, Shaanxi, PR China
- Zain Ul Islam: School of Information Science and Technology, University of Science and Technology of China, Hefei, 230000, Anhui, PR China
18. Wang F, Cheng C, Cao W, Wu Z, Wang H, Wei W, Yan Z, Liu Z. MFCNet: A multi-modal fusion and calibration networks for 3D pancreas tumor segmentation on PET-CT images. Comput Biol Med 2023; 155:106657. PMID: 36791551; DOI: 10.1016/j.compbiomed.2023.106657.
Abstract
In clinical diagnosis, positron emission tomography and computed tomography (PET-CT) images containing complementary information are fused, and tumor segmentation based on multi-modal PET-CT images is an important part of clinical diagnosis and treatment. However, existing PET-CT tumor segmentation methods mainly focus on fusing positron emission tomography (PET) and computed tomography (CT) features, which weakens the modality-specific information. In addition, the information interaction between different modal images is usually completed by simple addition or concatenation operations, which have the disadvantage of introducing irrelevant information during multi-modal semantic feature fusion, so effective features cannot be highlighted. To overcome this problem, this paper proposes a novel Multi-modal Fusion and Calibration Network (MFCNet) for tumor segmentation based on three-dimensional PET-CT images. First, a Multi-modal Fusion Down-sampling Block (MFDB) with a residual structure is developed. The proposed MFDB can fuse complementary features of multi-modal images while retaining the unique features of different modal images. Second, a Multi-modal Mutual Calibration Block (MMCB) based on the inception structure is designed. The MMCB can guide the network to focus on a tumor region by combining different branch decoding features using the attention mechanism and extracting multi-scale pathological features using convolution kernels of different sizes. The proposed MFCNet is verified on both a public dataset (head and neck cancer) and an in-house dataset (pancreatic cancer). The experimental results indicate that on the public and in-house datasets, the average Dice values of the proposed multi-modal segmentation network are 74.14% and 76.20%, while the average Hausdorff distances are 6.41 and 6.84, respectively. In addition, the experimental results show that the proposed MFCNet outperforms the state-of-the-art methods on the two datasets.
Affiliation(s)
- Fei Wang: Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, 200444, China; Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Chao Cheng: Department of Nuclear Medicine, The First Affiliated Hospital of Naval Medical University (Changhai Hospital), Shanghai, 200433, China
- Weiwei Cao: Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Zhongyi Wu: Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Heng Wang: School of Electronic and Information Engineering, Changchun University of Science and Technology, Changchun, 130022, China
- Wenting Wei: School of Electronic and Information Engineering, Changchun University of Science and Technology, Changchun, 130022, China
- Zhuangzhi Yan: Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, 200444, China
- Zhaobang Liu: Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
19. Zhu X, Jiang H, Diao Z. CGBO-Net: Cruciform structure guided and boundary-optimized lymphoma segmentation network. Comput Biol Med 2023; 153:106534. PMID: 36608464; DOI: 10.1016/j.compbiomed.2022.106534.
Abstract
Lymphoma segmentation plays an important role in the diagnosis and treatment of lymphocytic tumors. Most existing automatic segmentation methods struggle to give a precise tumor boundary and location, and semi-automatic methods are usually combined with manually added features, such as bounding boxes or points, to locate the tumor. Inspired by this, we propose a cruciform structure guided and boundary-optimized lymphoma segmentation network (CGBO-Net). The method uses a cruciform structure extracted from PET images as an additional input to the network, while using a boundary gradient loss function to optimize the tumor boundary. Our method is divided into two main stages: in the first stage, we use the proposed axial context-based cruciform structure extraction (CCE) method to extract the cruciform structures of all tumor slices; in the second stage, we use PET/CT and the corresponding cruciform structure as input to the designed network (CGBO-Net) to extract tumor structure and boundary information. The Dice, precision, recall, IoU and RVD are 90.7%, 89.4%, 92.5%, 83.1% and 4.5%, respectively. Validated on the lymphoma dataset and on publicly available head and neck data, our proposed approach outperforms other state-of-the-art semi-automatic segmentation methods and produces promising segmentation results.
Collapse
Affiliation(s)
- Xiaolin Zhu
- Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, 110169, Liaoning, China
| | - Huiyan Jiang
- Software College, Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, 110169, Liaoning, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, 110169, Liaoning, China.
| | - Zhaoshuo Diao
- Software College, Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, 110169, Liaoning, China
| |
Collapse
|
20
|
Wang S, Mahon R, Weiss E, Jan N, Taylor RJ, McDonagh PR, Quinn B, Yuan L. Automated Lung Cancer Segmentation Using a PET and CT Dual-Modality Deep Learning Neural Network. Int J Radiat Oncol Biol Phys 2023; 115:529-539. [PMID: 35934160 DOI: 10.1016/j.ijrobp.2022.07.2312] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2021] [Revised: 06/16/2022] [Accepted: 07/28/2022] [Indexed: 01/11/2023]
Abstract
PURPOSE To develop an automated lung tumor segmentation method for radiation therapy planning based on deep learning and dual-modality positron emission tomography (PET) and computed tomography (CT) images. METHODS AND MATERIALS A 3-dimensional (3D) convolutional neural network using inputs from diagnostic PETs and simulation CTs was constructed with 2 parallel convolution paths for independent feature extraction at multiple resolution levels and a single deconvolution path. At each resolution level, the extracted features from the convolution arms were concatenated and fed through the skip connections into the deconvolution path that produced the tumor segmentation. Our network was trained/validated/tested by a 3:1:1 split on 290 pairs of PET and CT images from patients with lung cancer treated at our clinic, with manual physician contours as the ground truth. A stratified training strategy based on the magnitude of the gross tumor volume (GTV) was investigated to improve performance, especially for small tumors. Multiple radiation oncologists assessed the clinical acceptability of the network-produced segmentations. RESULTS The mean Dice similarity coefficient, Hausdorff distance, and bidirectional local distance comparing manual versus automated contours were 0.79 ± 0.10, 5.8 ± 3.2 mm, and 2.8 ± 1.5 mm for the unstratified 3D dual-modality model. Stratification delivered the best results when the model for the large GTVs (>25 mL) was trained with all-size GTVs and the model for the small GTVs (<25 mL) was trained with small GTVs only. The best combined Dice similarity coefficient, Hausdorff distance, and bidirectional local distance from the 2 stratified models on their corresponding test data sets were 0.83 ± 0.07, 5.9 ± 2.5 mm, and 2.8 ± 1.4 mm, respectively. In the multiobserver review, 91.25% of manual versus 88.75% of automatic contours were accepted or accepted with modifications. CONCLUSIONS By using an expansive clinical PET and CT image database and a dual-modality architecture, the proposed 3D network with a novel GTV-based stratification strategy generated clinically useful lung cancer contours that were highly acceptable on physician review.
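The two-parallel-encoder, single-decoder pattern described above can be sketched compactly. The example below is a minimal two-level PyTorch illustration (framework, depth, and layer widths are assumptions, not the authors' code): each modality has its own convolution path, and at every resolution the two feature maps are concatenated and passed through skip connections into a shared decoder.

```python
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv3d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class DualEncoderUNet(nn.Module):
    """Two-level example: separate PET and CT encoders, one shared decoder."""
    def __init__(self):
        super().__init__()
        self.pet1, self.pet2 = block(1, 16), block(16, 32)
        self.ct1, self.ct2 = block(1, 16), block(16, 32)
        self.pool = nn.MaxPool3d(2)
        self.up = nn.Upsample(scale_factor=2, mode='trilinear', align_corners=False)
        self.dec2 = block(32 + 32, 32)       # deepest level: concat of both encoders
        self.dec1 = block(32 + 16 + 16, 16)  # upsampled features + both level-1 skips
        self.head = nn.Conv3d(16, 1, 1)

    def forward(self, pet, ct):
        p1, c1 = self.pet1(pet), self.ct1(ct)
        p2, c2 = self.pet2(self.pool(p1)), self.ct2(self.pool(c1))
        d2 = self.dec2(torch.cat([p2, c2], dim=1))
        d1 = self.dec1(torch.cat([self.up(d2), p1, c1], dim=1))
        return torch.sigmoid(self.head(d1))

# usage: DualEncoderUNet()(torch.randn(1, 1, 16, 32, 32), torch.randn(1, 1, 16, 32, 32))
```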
Collapse
Affiliation(s)
- Siqiu Wang
- Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia
| | - Rebecca Mahon
- Washington University School of Medicine in St Louis, St Louis, Missouri
| | - Elisabeth Weiss
- Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia
| | - Nuzhat Jan
- Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia
| | - Ross James Taylor
- Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia
| | - Philip Reed McDonagh
- Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia
| | - Bridget Quinn
- Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia
| | - Lulin Yuan
- Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia.
| |
Collapse
|
21
|
Zheng S, Tan J, Jiang C, Li L. Automated multi-modal Transformer network (AMTNet) for 3D medical images segmentation. Phys Med Biol 2023; 68. [PMID: 36595252 DOI: 10.1088/1361-6560/aca74c] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2022] [Accepted: 11/29/2022] [Indexed: 12/05/2022]
Abstract
Objective. Over the past years, convolutional neural network-based methods have dominated the field of medical image segmentation, but their main drawback is difficulty in representing long-range dependencies. Recently, the Transformer has demonstrated superior performance in computer vision and has also been successfully applied to medical image segmentation because of its self-attention mechanism and its encoding of long-range dependencies on images. To the best of our knowledge, only a few works focus on cross-modality image segmentation using the Transformer. Hence, the main objective of this study was to design, propose and validate a deep learning method that extends the application of the Transformer to multi-modality medical image segmentation. Approach. This paper proposes a novel automated multi-modal Transformer network termed AMTNet for 3D medical image segmentation. The network is a well-modeled U-shaped architecture in which many effective and significant changes have been made to the feature encoding, fusion, and decoding parts. The encoding part comprises 3D embedding, 3D multi-modal Transformer, and 3D co-learn down-sampling blocks. Symmetrically, the 3D Transformer block, upsampling block, and 3D expanding blocks are included in the decoding part. In addition, a Transformer-based adaptive channel-interleaved feature fusion module is designed to fully fuse features of different modalities. Main results. We provide a comprehensive experimental analysis of the Prostate and BraTS2021 datasets. The results show that our method achieves an average DSC of 0.907 and 0.851 (0.734 for ET, 0.895 for TC, and 0.924 for WT) on these two datasets, respectively. These values show that AMTNet yields significant improvements over state-of-the-art segmentation networks. Significance. The proposed 3D segmentation network exploits complementary features of different modalities during the feature extraction process at multiple scales to enrich the 3D feature representations and improve segmentation efficiency. This powerful network broadens the research on applying the Transformer to multi-modal medical image segmentation.
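A very small sketch of cross-modal attention fusion between PET and CT token sequences is shown below, in the spirit of the Transformer-based fusion the abstract describes; it is an illustrative stand-in (PyTorch assumed), not the AMTNet module itself.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """PET tokens attend to CT tokens; a symmetric CT-to-PET branch could be added."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, pet_tokens, ct_tokens):
        # queries come from one modality, keys/values from the other
        fused, _ = self.attn(query=pet_tokens, key=ct_tokens, value=ct_tokens)
        return self.norm(pet_tokens + fused)  # residual connection plus normalization

# usage: CrossModalFusion()(torch.randn(2, 512, 64), torch.randn(2, 512, 64))
```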
Collapse
Affiliation(s)
- Shenhai Zheng
- College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China; Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
| | - Jiaxin Tan
- College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
| | - Chuangbo Jiang
- School of Science, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
| | - Laquan Li
- College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China; School of Science, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
| |
Collapse
|
22
|
Chaddad A, Peng J, Xu J, Bouridane A. Survey of Explainable AI Techniques in Healthcare. SENSORS (BASEL, SWITZERLAND) 2023; 23:s23020634. [PMID: 36679430 PMCID: PMC9862413 DOI: 10.3390/s23020634] [Citation(s) in RCA: 33] [Impact Index Per Article: 33.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Revised: 12/14/2022] [Accepted: 12/29/2022] [Indexed: 05/27/2023]
Abstract
Artificial intelligence (AI) with deep learning models has been widely applied in numerous domains, including medical imaging and healthcare tasks. In the medical field, any judgment or decision is fraught with risk. A doctor will carefully judge whether a patient is sick before forming a reasonable explanation based on the patient's symptoms and/or an examination. Therefore, to be a viable and accepted tool, AI needs to mimic human judgment and interpretation skills. Specifically, explainable AI (XAI) aims to explain the information behind the black-box model of deep learning that reveals how the decisions are made. This paper provides a survey of the most recent XAI techniques used in healthcare and related medical imaging applications. We summarize and categorize the XAI types, and highlight the algorithms used to increase interpretability in medical imaging topics. In addition, we focus on the challenging XAI problems in medical applications and provide guidelines to develop better interpretations of deep learning models using XAI concepts in medical image and text analysis. Furthermore, this survey provides future directions to guide developers and researchers for future prospective investigations on clinical topics, particularly on applications with medical imaging.
Collapse
Affiliation(s)
- Ahmad Chaddad
- School of Artificial Intelligence, Guilin University of Electronic Technology, Jinji Road, Guilin 541004, China
- The Laboratory for Imagery Vision and Artificial Intelligence, Ecole de Technologie Superieure, 1100 Rue Notre Dame O, Montreal, QC H3C 1K3, Canada
| | - Jihao Peng
- School of Artificial Intelligence, Guilin University of Electronic Technology, Jinji Road, Guilin 541004, China
| | - Jian Xu
- School of Artificial Intelligence, Guilin University of Electronic Technology, Jinji Road, Guilin 541004, China
| | - Ahmed Bouridane
- Centre for Data Analytics and Cybersecurity, University of Sharjah, Sharjah 27272, United Arab Emirates
| |
Collapse
|
23
|
Luo S, Jiang H, Wang M. C2BA-UNet: A context-coordination multi-atlas boundary-aware UNet-like method for PET/CT images based tumor segmentation. Comput Med Imaging Graph 2023; 103:102159. [PMID: 36549193 DOI: 10.1016/j.compmedimag.2022.102159] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2022] [Revised: 11/11/2022] [Accepted: 12/05/2022] [Indexed: 12/13/2022]
Abstract
Tumor segmentation is a necessary step in clinical processing that can help doctors diagnose tumors and plan surgical treatments. Since tumors are usually small, their locations and appearances vary substantially across individuals, and the contrast between tumors and adjacent normal tissues is low, tumor segmentation is still a challenging task. Although convolutional neural networks (CNNs) have achieved good results in tumor segmentation, information about tumor boundaries has rarely been explored. To solve this problem, this paper proposes a new method for automatic tumor segmentation in PET/CT images based on context coordination and boundary awareness, termed C2BA-UNet. We employ a UNet-like backbone network and replace the encoder with EfficientNet-B0 for efficiency. To acquire potential tumor boundaries, we propose a new multi-atlas boundary-aware (MABA) module based on a gradient atlas, an uncertainty atlas, and a level set atlas, which focuses on uncertain regions between tumors and adjacent tissues. Furthermore, we propose a new context coordination module (CCM) that combines multi-scale context information with an attention mechanism to optimize the skip connections in high-level layers. To validate the superiority of our method, we conduct experiments on a publicly available soft tissue sarcoma (STS) dataset and a lymphoma dataset, and the results show that our method is competitive with other comparison methods.
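Two of the "atlas" cues named above, the gradient atlas and the uncertainty atlas, can be approximated with very simple operations; the sketch below (PyTorch assumed) shows a finite-difference gradient magnitude of an image slice and a per-pixel entropy of a softmax output. These functions are illustrative interpretations only; the level-set atlas is omitted.

```python
import torch

def gradient_magnitude(img):
    """img: (B, 1, H, W). Finite-difference gradient magnitude, zero-padded at borders."""
    gx = torch.zeros_like(img)
    gy = torch.zeros_like(img)
    gx[..., :, 1:] = img[..., :, 1:] - img[..., :, :-1]
    gy[..., 1:, :] = img[..., 1:, :] - img[..., :-1, :]
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def uncertainty_map(probs, eps=1e-8):
    """probs: (B, C, H, W) softmax output; returns per-pixel entropy (higher = more uncertain)."""
    return -(probs * (probs + eps).log()).sum(dim=1, keepdim=True)
```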
Collapse
Affiliation(s)
- Shijie Luo
- Software College, Northeastern University, Shenyang 110819, China
| | - Huiyan Jiang
- Software College, Northeastern University, Shenyang 110819, China; Key Laboratory of Intelligent Computing in Biomedical Image, Ministry of Education, Northeastern University, Shenyang 110819, China.
| | - Meng Wang
- Software College, Northeastern University, Shenyang 110819, China
| |
Collapse
|
24
|
Zhang X, Zhang B, Deng S, Meng Q, Chen X, Xiang D. Cross modality fusion for modality-specific lung tumor segmentation in PET-CT images. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac994e] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2022] [Accepted: 10/11/2022] [Indexed: 11/09/2022]
Abstract
Although positron emission tomography-computed tomography (PET-CT) images have been widely used, it is still challenging to accurately segment lung tumors. Respiration, patient movement and the difference in imaging modality lead to a large discrepancy in the appearance of lung tumors between PET images and CT images. To overcome these difficulties, a novel network is designed to simultaneously obtain the corresponding lung tumors in PET images and CT images. The proposed network can fuse the complementary information and preserve the modality-specific features of PET images and CT images. Due to the complementarity between PET images and CT images, the two modalities should be fused for automatic lung tumor segmentation. Therefore, cross modality decoding blocks are designed to extract modality-specific features of PET images and CT images under the constraints of the other modality. An edge consistency loss is also designed to address the problem of blurred boundaries in PET images and CT images. The proposed method is tested on 126 PET-CT images with non-small cell lung cancer, and Dice similarity coefficient scores of lung tumor segmentation reach 75.66 ± 19.42 in CT images and 79.85 ± 16.76 in PET images, respectively. Extensive comparisons with state-of-the-art lung tumor segmentation methods have also been performed to demonstrate the superiority of the proposed network.
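Because the Dice similarity coefficient is the headline metric in this and most of the surrounding abstracts, a minimal reference implementation is sketched below (NumPy assumed, binary masks expected). It is the standard formula 2|A∩B|/(|A|+|B|), not code taken from any of the cited papers.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """pred, target: binary masks of the same shape (any dimensionality)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: dice_coefficient(automatic_mask, physician_mask) -> value in [0, 1]
```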
Collapse
|
25
|
Huang Z, Zou S, Wang G, Chen Z, Shen H, Wang H, Zhang N, Zhang L, Yang F, Wang H, Liang D, Niu T, Zhu X, Hu Z. ISA-Net: Improved spatial attention network for PET-CT tumor segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 226:107129. [PMID: 36156438 DOI: 10.1016/j.cmpb.2022.107129] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Revised: 07/06/2022] [Accepted: 09/13/2022] [Indexed: 06/16/2023]
Abstract
BACKGROUND AND OBJECTIVE Achieving accurate and automated tumor segmentation plays an important role in both clinical practice and radiomics research. Segmentation in medicine is currently often performed manually by experts, which is a laborious, expensive and error-prone task. Manual annotation relies heavily on the experience and knowledge of these experts, and there is considerable intra- and interobserver variation. Therefore, it is of great significance to develop a method that can automatically segment tumor target regions. METHODS In this paper, we propose a deep learning segmentation method based on multimodal positron emission tomography-computed tomography (PET-CT), which combines the high sensitivity of PET and the precise anatomical information of CT. We design an improved spatial attention network (ISA-Net) to increase the accuracy of PET or CT in detecting tumors, which uses multi-scale convolution operations to extract feature information, highlighting tumor location information while suppressing non-tumor location information. In addition, our network uses dual-channel inputs in the encoding stage and fuses them in the decoding stage, which takes advantage of the differences and complementarities between PET and CT. RESULTS We validated the proposed ISA-Net method on two clinical datasets, a soft tissue sarcoma (STS) dataset and a head and neck tumor (HECKTOR) dataset, and compared it with other attention methods for tumor segmentation. The DSC scores of 0.8378 on the STS dataset and 0.8076 on the HECKTOR dataset show that the ISA-Net method achieves better segmentation performance and better generalization. CONCLUSIONS The method proposed in this paper is based on multi-modal medical image tumor segmentation and can effectively utilize the differences and complementarities of different modalities. The method can also be applied to other multi-modal data or single-modal data with proper adjustment.
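A multi-scale spatial attention gate of the kind described here can be sketched in a few lines: parallel convolutions with different kernel sizes produce a single spatial weight map that re-scales the feature map, emphasizing likely tumor locations. The snippet below (PyTorch assumed) is an illustrative reading of the idea, not the released ISA-Net code.

```python
import torch
import torch.nn as nn

class MultiScaleSpatialAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # parallel branches with different receptive fields
        self.branch3 = nn.Conv3d(channels, 1, kernel_size=3, padding=1)
        self.branch5 = nn.Conv3d(channels, 1, kernel_size=5, padding=2)
        self.branch7 = nn.Conv3d(channels, 1, kernel_size=7, padding=3)

    def forward(self, x):
        attn = torch.sigmoid(self.branch3(x) + self.branch5(x) + self.branch7(x))
        return x * attn  # suppress non-tumor locations, keep tumor locations

# usage: MultiScaleSpatialAttention(16)(torch.randn(1, 16, 16, 32, 32))
```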
Collapse
Affiliation(s)
- Zhengyong Huang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, 101408, China
| | - Sijuan Zou
- Department of Nuclear Medicine and PET, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430000, China
| | - Guoshuai Wang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, 101408, China
| | - Zixiang Chen
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, 518055, China
| | - Hao Shen
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, 101408, China
| | - Haiyan Wang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, 101408, China
| | - Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, 518055, China
| | - Lu Zhang
- Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, 518055, China
| | - Fan Yang
- Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, 518055, China
| | - Haining Wang
- United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, 518045, China
| | - Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, 518055, China
| | - Tianye Niu
- Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, 518118, China
| | - Xiaohua Zhu
- Department of Nuclear Medicine and PET, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430000, China
| | - Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, 518055, China.
| |
Collapse
|
26
|
A hybrid hemodynamic knowledge-powered and feature reconstruction-guided scheme for breast cancer segmentation based on DCE-MRI. Med Image Anal 2022; 82:102572. [PMID: 36055051 DOI: 10.1016/j.media.2022.102572] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2021] [Revised: 07/08/2022] [Accepted: 08/11/2022] [Indexed: 11/24/2022]
Abstract
Automatically and accurately annotating tumors in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), which provides a noninvasive in vivo method to evaluate tumor vasculature architectures based on contrast accumulation and washout, is a crucial step in computer-aided breast cancer diagnosis and treatment. However, it remains challenging due to the varying sizes, shapes, appearances and densities of tumors caused by the high heterogeneity of breast cancer, and due to the high dimensionality and ill-posed artifacts of DCE-MRI. In this paper, we propose a hybrid hemodynamic knowledge-powered and feature reconstruction-guided scheme that integrates a pharmacokinetic prior and feature refinement to generate sufficiently informative features in DCE-MRI for breast cancer segmentation. The pharmacokinetic prior, expressed by the time intensity curve (TIC), is incorporated into the scheme through an objective function called the dynamic contrast-enhanced prior (DCP) loss. It encodes prior knowledge of contrast agent kinetic heterogeneity, which is important for optimizing our model parameters. Besides, we design a spatial fusion module (SFM) embedded in the scheme to exploit intra-slice spatial structural correlations, and deploy a spatial-kinetic fusion module (SKFM) to effectively leverage the complementary information extracted from the spatial-kinetic space. Furthermore, considering that low spatial resolution often leads to poor image quality in DCE-MRI, we integrate a reconstruction autoencoder into the scheme to refine feature maps in an unsupervised manner. We conduct extensive experiments to validate the proposed method and show that our approach can outperform recent state-of-the-art segmentation methods on a breast cancer DCE-MRI dataset. Moreover, to explore the generalization to other segmentation tasks on dynamic imaging, we also extend the proposed method to brain segmentation in DSC-MRI sequences. Our source code will be released on https://github.com/AI-medical-diagnosis-team-of-JNU/DCEDuDoFNet.
Collapse
|
27
|
Chen Y, Zhou T, Chen Y, Feng L, Zheng C, Liu L, Hu L, Pan B. HADCNet: Automatic segmentation of COVID-19 infection based on a hybrid attention dense connected network with dilated convolution. Comput Biol Med 2022; 149:105981. [PMID: 36029749 PMCID: PMC9391231 DOI: 10.1016/j.compbiomed.2022.105981] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2022] [Revised: 08/03/2022] [Accepted: 08/14/2022] [Indexed: 12/01/2022]
Abstract
The automatic segmentation of lung infections in CT slices provides a rapid and effective strategy for diagnosing, treating, and assessing COVID-19 cases. However, the segmentation of the infected areas presents several difficulties, including high intraclass variability and interclass similarity among infected areas, as well as blurred edges and low contrast. Therefore, we propose HADCNet, a deep learning framework that segments lung infections based on a dual hybrid attention strategy. HADCNet uses an encoder hybrid attention module to integrate feature information at different scales across the peer hierarchy to refine the feature map. Furthermore, a decoder hybrid attention module uses an improved skip connection to embed the semantic information of higher-level features into lower-level features by integrating multi-scale contextual structures and assigning the spatial information of lower-level features to higher-level features, thereby capturing the contextual dependencies of lesion features across levels and refining the semantic structure, which reduces the semantic gap between feature maps at different levels and improves the model segmentation performance. We conducted fivefold cross-validations of our model on four publicly available datasets, with final mean Dice scores of 0.792, 0.796, 0.785, and 0.723. These results show that the proposed model outperforms popular state-of-the-art semantic segmentation methods and indicate its potential use in the diagnosis and treatment of COVID-19.
Collapse
Affiliation(s)
- Ying Chen
- School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China.
| | - Taohui Zhou
- School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China.
| | - Yi Chen
- Department of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou, 325035, PR China.
| | - Longfeng Feng
- School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China.
| | - Cheng Zheng
- School of Software, Nanchang Hangkong University, Nanchang, 330063, PR China.
| | - Lan Liu
- Department of Radiology, Jiangxi Cancer Hospital, Nanchang, 330029, PR China.
| | - Liping Hu
- Department of Radiology, Jiangxi Cancer Hospital, Nanchang, 330029, PR China.
| | - Bujian Pan
- Department of Hepatobiliary Surgery, Wenzhou Central Hospital, The Dingli Clinical Institute of Wenzhou Medical University, Wenzhou, Zhejiang, 325000, PR China.
| |
Collapse
|
28
|
Zhou Q, Zou H. A layer-wise fusion network incorporating self-supervised learning for multimodal MR image synthesis. Front Genet 2022; 13:937042. [PMID: 36017492 PMCID: PMC9396279 DOI: 10.3389/fgene.2022.937042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Accepted: 07/04/2022] [Indexed: 11/13/2022] Open
Abstract
Magnetic resonance (MR) imaging plays an important role in medical diagnosis and treatment; different modalities of MR images can provide rich and complementary information to improve the accuracy of diagnosis. However, due to the limitations of scanning time and medical conditions, certain modalities of MR may be unavailable or of low quality in clinical practice. In this study, we propose a new multimodal MR image synthesis network to generate missing MR images. The proposed model comprises three stages: feature extraction, feature fusion, and image generation. During feature extraction, 2D and 3D self-supervised pretext tasks are introduced to pre-train the backbone for better representations of each modality. Then, a channel attention mechanism is used when fusing features so that the network can adaptively weigh different fusion operations to learn common representations of all modalities. Finally, a generative adversarial network is used as the basic framework to generate images, in which a feature-level edge information loss is combined with the pixel-wise loss to ensure consistency between the synthesized and real images in terms of anatomical characteristics. The 2D and 3D self-supervised pre-training yields better feature extraction and retains more details in the synthesized images. Moreover, the proposed multimodal attention feature fusion block (MAFFB) in the well-designed layer-wise fusion strategy can model both common and unique information in all modalities, consistent with the clinical analysis. We also perform an interpretability analysis to confirm the rationality and effectiveness of our method. The experimental results demonstrate that our method can be applied in both single-modal and multimodal synthesis with high robustness and outperforms other state-of-the-art approaches objectively and subjectively.
Collapse
|
29
|
Li H, Song Q, Gui D, Wang M, Min X, Li A. Reconstruction-assisted Feature Encoding Network for Histologic Subtype Classification of Non-small Cell Lung Cancer. IEEE J Biomed Health Inform 2022; 26:4563-4574. [PMID: 35849680 DOI: 10.1109/jbhi.2022.3192010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Accurate histological subtype classification between adenocarcinoma (ADC) and squamous cell carcinoma (SCC) using computed tomography (CT) images is of great importance to assist clinicians in determining treatment and therapy plans for non-small cell lung cancer (NSCLC) patients. Although current deep learning approaches have achieved promising progress in this field, they often struggle to capture effective tumor representations due to inadequate training data and consequently show limited performance. In this study, we propose a novel and effective reconstruction-assisted feature encoding network (RAFENet) for histological subtype classification that leverages an auxiliary image reconstruction task to provide extra guidance and regularization for enhanced tumor feature representations. Different from existing reconstruction-assisted methods that directly use generalizable features obtained from a shared encoder for the primary task, a dedicated task-aware encoding module is utilized in RAFENet to refine the generalizable features. Specifically, a cascade of cross-level non-local blocks is introduced to progressively refine generalizable features at different levels with the aid of lower-level task-specific information, which can successfully learn multi-level task-specific features tailored to histological subtype classification. Moreover, in addition to the widely adopted pixel-wise reconstruction loss, we introduce a powerful semantic consistency loss function to explicitly supervise the training of RAFENet, which combines both a feature consistency loss and a prediction consistency loss to ensure semantic invariance during image reconstruction. Extensive experimental results show that RAFENet effectively addresses the difficult issues that cannot be resolved by existing reconstruction-based methods and consistently outperforms other state-of-the-art methods on both public and in-house NSCLC datasets.
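The semantic consistency idea, that the reconstruction must match the input not only pixel-wise but also in feature space and in the classifier's prediction, can be written down in a few lines. The sketch below (PyTorch assumed) is a hedged interpretation, not the RAFENet loss; `encoder` and `classifier` are assumed callables returning features and logits respectively, and the weights are illustrative.

```python
import torch
import torch.nn.functional as F

def semantic_consistency_loss(image, recon, encoder, classifier, w_feat=1.0, w_pred=1.0):
    """Pixel-, feature-, and prediction-level consistency between an image and its reconstruction."""
    pixel = F.l1_loss(recon, image)                       # pixel-wise reconstruction term
    f_real, f_rec = encoder(image), encoder(recon)
    feat = F.mse_loss(f_rec, f_real.detach())             # feature consistency term
    p_real, p_rec = classifier(f_real), classifier(f_rec)
    pred = F.kl_div(F.log_softmax(p_rec, dim=1),           # prediction consistency term
                    F.softmax(p_real.detach(), dim=1),
                    reduction='batchmean')
    return pixel + w_feat * feat + w_pred * pred
```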
Collapse
|
30
|
FGAM: A pluggable light-weight attention module for medical image segmentation. Comput Biol Med 2022; 146:105628. [DOI: 10.1016/j.compbiomed.2022.105628] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2021] [Revised: 04/08/2022] [Accepted: 04/15/2022] [Indexed: 11/22/2022]
|
31
|
Manafi-Farid R, Askari E, Shiri I, Pirich C, Asadi M, Khateri M, Zaidi H, Beheshti M. [18F]FDG-PET/CT radiomics and artificial intelligence in lung cancer: Technical aspects and potential clinical applications. Semin Nucl Med 2022; 52:759-780. [PMID: 35717201 DOI: 10.1053/j.semnuclmed.2022.04.004] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Revised: 04/10/2022] [Accepted: 04/13/2022] [Indexed: 02/07/2023]
Abstract
Lung cancer is the second most common cancer and the leading cause of cancer-related death worldwide. Molecular imaging using [18F]fluorodeoxyglucose Positron Emission Tomography and/or Computed Tomography ([18F]FDG-PET/CT) plays an essential role in the diagnosis, evaluation of response to treatment, and prediction of outcomes. The images are evaluated using qualitative and conventional quantitative indices. However, there is far more information embedded in the images, which can be extracted by sophisticated algorithms. Recently, the concept of uncovering and analyzing this invisible data extracted from medical images, called radiomics, has been gaining more attention. Currently, [18F]FDG-PET/CT radiomics is increasingly being evaluated in lung cancer to determine whether it enhances the diagnostic performance or clinical implications of [18F]FDG-PET/CT in the management of lung cancer. In this review, we provide a short overview of the technical aspects, as they are discussed in different articles of this special issue. We mainly focus on the diagnostic performance of [18F]FDG-PET/CT-based radiomics and the role of artificial intelligence in non-small cell lung cancer, impacting early detection, staging, prediction of tumor subtypes, biomarkers, and patients' outcomes.
Collapse
Affiliation(s)
- Reyhaneh Manafi-Farid
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Emran Askari
- Department of Nuclear Medicine, School of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
| | - Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Christian Pirich
- Division of Molecular Imaging and Theranostics, Department of Nuclear Medicine, University Hospital Salzburg, Paracelsus Medical University, Salzburg, Austria
| | - Mahboobeh Asadi
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Maziar Khateri
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
| | - Mohsen Beheshti
- Division of Molecular Imaging and Theranostics, Department of Nuclear Medicine, University Hospital Salzburg, Paracelsus Medical University, Salzburg, Austria.
| |
Collapse
|
32
|
Teacher-student approach for lung tumor segmentation from mixed-supervised datasets. PLoS One 2022; 17:e0266147. [PMID: 35381046 PMCID: PMC8982833 DOI: 10.1371/journal.pone.0266147] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Accepted: 03/15/2022] [Indexed: 11/19/2022] Open
Abstract
Purpose
Cancer is among the leading causes of death in the developed world, and lung cancer is the most lethal type. Early detection is crucial for better prognosis, but can be resource intensive to achieve. Automating tasks such as lung tumor localization and segmentation in radiological images can free valuable time for radiologists and other clinical personnel. Convolutional neural networks may be suited for such tasks, but require substantial amounts of labeled data to train. Obtaining labeled data is a challenge, especially in the medical domain.
Methods
This paper investigates the use of a teacher-student design to utilize datasets with different types of supervision to train an automatic model performing pulmonary tumor segmentation on computed tomography images. The framework consists of two models: the student that performs end-to-end automatic tumor segmentation and the teacher that supplies the student additional pseudo-annotated data during training.
Results
Using only a small proportion of semantically labeled data and a large amount of bounding-box-annotated data, we achieved competitive performance using a teacher-student design. Models trained on larger amounts of semantic annotations did not perform better than those trained on teacher-annotated data. Our model trained on a small amount of semantically labeled data achieved a mean Dice similarity coefficient of 71.0 on the MSD Lung dataset.
Conclusions
Our results demonstrate the potential of utilizing teacher-student designs to reduce the annotation load, as less supervised annotation schemes may be performed, without any real degradation in segmentation accuracy.
Collapse
|
33
|
Qiao X, Jiang C, Li P, Yuan Y, Zeng Q, Bi L, Song S, Kim J, Feng DD, Huang Q. Improving Breast Tumor Segmentation in PET via Attentive Transformation Based Normalization. IEEE J Biomed Health Inform 2022; 26:3261-3271. [PMID: 35377850 DOI: 10.1109/jbhi.2022.3164570] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Positron Emission Tomography (PET) has become a preferred imaging modality for cancer diagnosis, radiotherapy planning, and treatment response monitoring. Accurate and automatic tumor segmentation is the fundamental requirement for these clinical applications. Deep convolutional neural networks have become the state-of-the-art in PET tumor segmentation. The normalization process is one of the key components for accelerating network training and improving the performance of the network. However, existing normalization methods either introduce batch noise into the instance PET image by calculating statistics at the batch level or introduce background noise into every single pixel by sharing the same learnable parameters spatially. In this paper, we propose an attentive transformation (AT)-based normalization method for PET tumor segmentation. We exploit the distinguishability of breast tumors in PET images and dynamically generate dedicated, pixel-dependent learnable parameters in normalization via a transformation of a combination of channel-wise and spatial-wise attentive responses. The attentive learnable parameters allow features to be re-calibrated pixel by pixel to focus on the high-uptake area while attenuating the background noise of PET images. Our experimental results on two real clinical datasets show that the AT-based normalization method improves breast tumor segmentation performance compared with existing normalization methods.
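The contrast with standard normalization is that the scale and shift are no longer a single shared pair of parameters but are generated per pixel from attention responses. The sketch below (PyTorch assumed) is one hedged way to express that idea: instance-normalize the features, then derive pixel-dependent gamma and beta maps from channel- and spatial-attention responses. Layer choices and names are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AttentiveNorm2d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        # channel-wise attentive response (B, C, 1, 1)
        self.channel_fc = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                        nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        # spatial-wise attentive response (B, 1, H, W)
        self.spatial_conv = nn.Sequential(nn.Conv2d(channels, 1, 7, padding=3), nn.Sigmoid())
        self.to_gamma = nn.Conv2d(channels, channels, 1)
        self.to_beta = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        attn = self.channel_fc(x) * self.spatial_conv(x)   # pixel- and channel-dependent response
        gamma = self.to_gamma(x * attn)                    # per-pixel scale
        beta = self.to_beta(x * attn)                      # per-pixel shift
        return self.norm(x) * (1 + gamma) + beta

# usage: AttentiveNorm2d(32)(torch.randn(2, 32, 64, 64))
```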
Collapse
|
34
|
Punn NS, Agarwal S. Modality specific U-Net variants for biomedical image segmentation: a survey. Artif Intell Rev 2022; 55:5845-5889. [PMID: 35250146 PMCID: PMC8886195 DOI: 10.1007/s10462-022-10152-1] [Citation(s) in RCA: 28] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/09/2022] [Indexed: 02/06/2023]
Abstract
With the advent of deep learning approaches such as deep convolutional neural networks, residual neural networks, and adversarial networks, U-Net architectures have become the most widely utilized in biomedical image segmentation to automate the identification and detection of target regions or sub-regions. In recent studies, U-Net based approaches have illustrated state-of-the-art performance in different applications for the development of computer-aided diagnosis systems for the early diagnosis and treatment of diseases such as brain tumor, lung cancer, Alzheimer's disease, breast cancer, etc., using various modalities. This article presents the success of these approaches by describing the U-Net framework, followed by a comprehensive analysis of the U-Net variants through (1) inter-modality and (2) intra-modality categorization to establish better insights into the associated challenges and solutions. Besides, this article also highlights the contribution of U-Net based frameworks in the ongoing pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), also known as COVID-19. Finally, the strengths and similarities of these U-Net variants are analysed along with the challenges involved in biomedical image segmentation to uncover promising future research directions in this area.
Collapse
|
35
|
Kao YS, Yang J. Deep learning-based auto-segmentation of lung tumor PET/CT scans: a systematic review. Clin Transl Imaging 2022. [DOI: 10.1007/s40336-022-00482-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
36
|
Hirata K, Sugimori H, Fujima N, Toyonaga T, Kudo K. Artificial intelligence for nuclear medicine in oncology. Ann Nucl Med 2022; 36:123-132. [PMID: 35028877 DOI: 10.1007/s12149-021-01693-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2021] [Accepted: 11/07/2021] [Indexed: 12/12/2022]
Abstract
As in all other medical fields, artificial intelligence (AI) is increasingly being used in nuclear medicine for oncology. There are many articles that discuss AI from the viewpoint of nuclear medicine, but few focus on nuclear medicine from the viewpoint of AI. Nuclear medicine images are characterized by their low spatial resolution and high quantitativeness. It is worth noting that AI was in use before the emergence of deep learning. AI can be divided into three categories by purpose: (1) assisted interpretation, i.e., computer-aided detection (CADe) or computer-aided diagnosis (CADx); (2) additional insight, i.e., AI provides information beyond the radiologist's eye, such as predicting genes and prognosis from images, which is related to the field called radiomics/radiogenomics; and (3) augmented images, i.e., image generation tasks. To put AI to practical use, harmonization between facilities and the explainability of black-box models need to be addressed.
Collapse
Affiliation(s)
- Kenji Hirata
- Department of Diagnostic Imaging, Hokkaido University Graduate School of Medicine, Kita 15, Nishi 7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan; Department of Nuclear Medicine, Hokkaido University Hospital, Sapporo, Japan; Division of Medical AI Education and Research, Hokkaido University Graduate School of Medicine, Sapporo, Japan.
| | | | - Noriyuki Fujima
- Department of Diagnostic Imaging, Hokkaido University Graduate School of Medicine, Kita 15, Nishi 7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan; Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Japan
| | - Takuya Toyonaga
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | - Kohsuke Kudo
- Department of Diagnostic Imaging, Hokkaido University Graduate School of Medicine, Kita 15, Nishi 7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan; Division of Medical AI Education and Research, Hokkaido University Graduate School of Medicine, Sapporo, Japan; Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Japan; Global Center for Biomedical Science and Engineering, Hokkaido University Faculty of Medicine, Sapporo, Japan
| |
Collapse
|
37
|
Yu X, Jin F, Luo H, Lei Q, Wu Y. Gross Tumor Volume Segmentation for Stage III NSCLC Radiotherapy Using 3D ResSE-Unet. Technol Cancer Res Treat 2022; 21:15330338221090847. [PMID: 35443832 PMCID: PMC9047806 DOI: 10.1177/15330338221090847] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023] Open
Abstract
INTRODUCTION Radiotherapy is one of the most effective ways to treat lung cancer. Accurately delineating the gross target volume is a key step in the radiotherapy process. In current clinical practice, the target area is still delineated manually by radiologists, which is time-consuming and laborious. These problems can be better addressed by deep learning-assisted automatic segmentation methods. METHODS In this paper, a 3D CNN model named 3D ResSE-Unet is proposed for gross tumor volume segmentation for stage III NSCLC radiotherapy. This model is based on 3D Unet and combines residual connections and channel attention mechanisms. Three-dimensional convolution operations and an encoding-decoding structure are used to mine three-dimensional spatial information of tumors from computed tomography data. Inspired by ResNet and SE-Net, residual connections and channel attention mechanisms are used to improve segmentation performance. A total of 214 patients with stage III NSCLC were selectively collected; 148 cases were randomly assigned to the training set, 30 cases to the validation set, and 36 cases to the testing set. The segmentation performance of the models was evaluated on the testing set. In addition, the segmentation results of 3D Unet at different depths were analyzed, and the performance of 3D ResSE-Unet was compared with 3D Unet, 3D Res-Unet, and 3D SE-Unet. RESULTS Compared with other depths, 3D Unet with four downsampling levels is more suitable for our work. Compared with 3D Unet, 3D Res-Unet, and 3D SE-Unet, 3D ResSE-Unet obtains superior results. Its Dice similarity coefficient, 95th-percentile Hausdorff distance, and average surface distance reach 0.7367, 21.39 mm, and 4.962 mm, respectively, and the average time for 3D ResSE-Unet to segment a patient is only about 10 s. CONCLUSION The method proposed in this study provides a new tool for GTV auto-segmentation and may be useful for lung cancer radiotherapy.
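The building block the abstract names, a residual 3D convolution block with squeeze-and-excitation channel attention, is a well-known pattern; a compact sketch is shown below (PyTorch assumed). Channel width and reduction ratio are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class ResSEBlock3D(nn.Module):
    """3D residual block with squeeze-and-excitation (channel attention)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1), nn.InstanceNorm3d(channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1), nn.InstanceNorm3d(channels))
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),                                   # squeeze: global context per channel
            nn.Conv3d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, 1), nn.Sigmoid())
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = self.conv(x)
        y = y * self.se(y)      # excitation: channel-wise re-weighting
        return self.act(x + y)  # residual connection

# usage: ResSEBlock3D(32)(torch.randn(1, 32, 16, 32, 32))
```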
Collapse
Affiliation(s)
- Xinhao Yu
- College of Bioengineering, Chongqing University, Chongqing, China; Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
| | - Fu Jin
- Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
| | - HuanLi Luo
- Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
| | - Qianqian Lei
- Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
| | - Yongzhong Wu
- Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
| |
Collapse
|
38
|
Oreiller V, Andrearczyk V, Jreige M, Boughdad S, Elhalawani H, Castelli J, Vallières M, Zhu S, Xie J, Peng Y, Iantsen A, Hatt M, Yuan Y, Ma J, Yang X, Rao C, Pai S, Ghimire K, Feng X, Naser MA, Fuller CD, Yousefirizi F, Rahmim A, Chen H, Wang L, Prior JO, Depeursinge A. Head and neck tumor segmentation in PET/CT: The HECKTOR challenge. Med Image Anal 2021; 77:102336. [PMID: 35016077 DOI: 10.1016/j.media.2021.102336] [Citation(s) in RCA: 48] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2021] [Revised: 10/13/2021] [Accepted: 12/14/2021] [Indexed: 12/23/2022]
Abstract
This paper presents the post-analysis of the first edition of the HEad and neCK TumOR (HECKTOR) challenge. This challenge was held as a satellite event of the 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2020 and was the first of its kind focusing on lesion segmentation in combined FDG-PET and CT image modalities. The challenge's task is the automatic segmentation of the Gross Tumor Volume (GTV) of Head and Neck (H&N) oropharyngeal primary tumors in FDG-PET/CT images. To this end, the participants were given a training set of 201 cases from four different centers, and their methods were tested on a held-out set of 53 cases from a fifth center. The methods were ranked according to the Dice Score Coefficient (DSC) averaged across all test cases. An additional inter-observer agreement study was organized to assess the difficulty of the task from a human perspective. 64 teams registered for the challenge, among which 10 provided a paper detailing their approach. The best method obtained an average DSC of 0.7591, showing a large improvement over our proposed baseline method and the inter-observer agreement, associated with DSCs of 0.6610 and 0.61, respectively. The automatic methods proved to successfully leverage the wealth of metabolic and structural properties of combined PET and CT modalities, significantly outperforming the human inter-observer agreement level, semi-automatic thresholding based on PET images, as well as other single-modality-based methods. This promising performance is one step forward towards large-scale radiomics studies in H&N cancer, obviating the need for error-prone and time-consuming manual delineation of GTVs.
Collapse
Affiliation(s)
- Valentin Oreiller
- Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland; Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland.
| | - Vincent Andrearczyk
- Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland
| | - Mario Jreige
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
| | - Sarah Boughdad
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
| | - Hesham Elhalawani
- Department of Radiation Oncology, Brigham and Women's Hospital and Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
| | - Joel Castelli
- Radiotherapy Department, Cancer Institute Eugène Marquis, Rennes, France
| | - Martin Vallières
- Department of Computer Science, Université de Sherbrooke, Sherbrooke, Québec, Canada
| | - Simeng Zhu
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, MI, USA
| | - Juanying Xie
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, PR China
| | - Ying Peng
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, PR China
| | - Andrei Iantsen
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
| | - Mathieu Hatt
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
| | - Yading Yuan
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Jun Ma
- Department of Mathematics, Nanjing University of Science and Technology, Jiangsu, China
| | - Xiaoping Yang
- Department of Mathematics, Nanjing University, Jiangsu, China
| | - Chinmay Rao
- Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre+, Maastricht, The Netherlands
| | - Suraj Pai
- Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre+, Maastricht, The Netherlands
| | | | - Xue Feng
- Carina Medical, Lexington, KY, 40513, USA; Department of Biomedical Engineering, University of Virginia, Charlottesville VA 22903, USA
| | - Mohamed A Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030, USA
| | - Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030, USA
| | - Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver BC, Canada
| | - Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver BC, Canada
| | - Huai Chen
- Department of Automation, Institute of Image Processing and Pattern Recognition, Shangai Jiao Tong University, Shanghai 200240, People's Republic of China
| | - Lisheng Wang
- Department of Automation, Institute of Image Processing and Pattern Recognition, Shangai Jiao Tong University, Shanghai 200240, People's Republic of China
| | - John O Prior
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
| | - Adrien Depeursinge
- Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland; Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
| |
Collapse
|
39
|
Guo H, Xu K, Duan G, Wen L, He Y. Progress and future prospective of FDG-PET/CT imaging combined with optimized procedures in lung cancer: toward precision medicine. Ann Nucl Med 2021; 36:1-14. [PMID: 34727331 DOI: 10.1007/s12149-021-01683-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2021] [Accepted: 09/30/2021] [Indexed: 12/19/2022]
Abstract
With a 5-year overall survival of approximately 20%, lung cancer has always been the number one cancer-specific killer all over the world. As a fusion of positron emission tomography (PET) and computed tomography (CT), PET/CT has revolutionized cancer imaging over the past 20 years. In this review, we focus on optimizing the role of 18F-fluorodeoxyglucose (FDG)-PET/CT in the diagnosis, prognostic prediction and therapy management of lung cancers through computer programs. FDG-PET/CT has demonstrated a surprising role in the development of therapeutic biomarkers and the prediction of therapeutic responses and long-term survival, which could be conducive to solving existing dilemmas. Meanwhile, novel tracers and optimized procedures are also being developed to control quality and improve the effectiveness of PET/CT. With the continuous development of new imaging agents and their clinical applications, PET/CT has broad application prospects in this area.
Collapse
Affiliation(s)
- Haoyue Guo
- Department of Medical Oncology, Shanghai Pulmonary Hospital, Tongji University Medical School Cancer Institute, Tongji University School of Medicine, No. 507 Zhengmin Road, Shanghai, 200433, China
- School of Medicine, Tongji University, No. 1239 Siping Road, Shanghai, 200092, China
| | - Kandi Xu
- Department of Medical Oncology, Shanghai Pulmonary Hospital, Tongji University Medical School Cancer Institute, Tongji University School of Medicine, No. 507 Zhengmin Road, Shanghai, 200433, China
- School of Medicine, Tongji University, No. 1239 Siping Road, Shanghai, 200092, China
| | - Guangxin Duan
- State Key Laboratory of Radiation Medicine and Protection, School for Radiological and Interdisciplinary Sciences (RAD-X), Collaborative Innovation Center of Radiation Medicine of Jiangsu Higher Education Institutions, Soochow University, Suzhou, 215123, China
| | - Ling Wen
- Department of Radiology, The First Affiliated Hospital of Soochow University, Suzhou, 215000, China.
| | - Yayi He
- Department of Medical Oncology, Shanghai Pulmonary Hospital, Tongji University Medical School Cancer Institute, Tongji University School of Medicine, No. 507 Zhengmin Road, Shanghai, 200433, China.
- School of Medicine, Tongji University, No. 1239 Siping Road, Shanghai, 200092, China.
| |
Collapse
|
40
|
Abstract
Positron emission tomography (PET)/computed tomography (CT) are nuclear diagnostic imaging modalities that are routinely deployed for cancer staging and monitoring. They hold the advantage of detecting disease-related biochemical and physiologic abnormalities in advance of anatomical changes, and are thus widely used for staging of disease progression, identification of the treatment gross tumor volume, monitoring of disease, as well as prediction of outcomes and personalization of treatment regimens. Among the arsenal of different functional imaging modalities, nuclear imaging has benefited from the early adoption of quantitative image analysis, starting from simple standard uptake value normalization to the more advanced extraction of complex imaging uptake patterns, thanks to the application of sophisticated image processing and machine learning algorithms. In this review, we discuss the application of image processing and machine/deep learning techniques to PET/CT imaging, with special focus on the oncological radiotherapy domain as a case study, and draw examples from our work and that of others to highlight current status and future potentials.
Collapse
Affiliation(s)
- Lise Wei
- Department of Radiation Oncology, Physics Division, University of Michigan, Ann Arbor, MI
| | - Issam El Naqa
- Department of Radiation Oncology, Physics Division, University of Michigan, Ann Arbor, MI.
| |
Collapse
|