1. Zou Z, Zou B, Kui X, Chen Z, Li Y. DGCBG-Net: A dual-branch network with global cross-modal interaction and boundary guidance for tumor segmentation in PET/CT images. Comput Methods Programs Biomed 2024; 250:108125. [PMID: 38631130] [DOI: 10.1016/j.cmpb.2024.108125] [Received: 11/14/2023] [Revised: 02/24/2024] [Accepted: 03/07/2024]
Abstract
BACKGROUND AND OBJECTIVES Automatic tumor segmentation plays a crucial role in cancer diagnosis and treatment planning. Computed tomography (CT) and positron emission tomography (PET) are extensively employed for their complementary medical information. However, existing methods ignore the bilateral cross-modal interaction of global features during feature extraction, and they underutilize multi-stage tumor boundary features. METHODS To address these limitations, we propose a dual-branch tumor segmentation network based on global cross-modal interaction and boundary guidance in PET/CT images (DGCBG-Net). DGCBG-Net consists of 1) a global cross-modal interaction module that extracts global contextual information from PET/CT images and promotes bilateral cross-modal interaction of global features; 2) a shared multi-path downsampling module that learns complementary features from the PET/CT modalities to mitigate the impact of misleading features and decrease the loss of discriminative features during downsampling; 3) a boundary prior-guided branch that extracts potential boundary features from CT images at multiple stages, assisting the semantic segmentation branch in improving the accuracy of tumor boundary segmentation. RESULTS Extensive experiments were conducted on the STS and Hecktor 2022 datasets to evaluate the proposed method. The average Dice scores of DGCBG-Net on the two datasets are 80.33% and 79.29%, with average IOU scores of 67.64% and 70.18%. DGCBG-Net outperformed the current state-of-the-art methods with a 1.77% higher Dice score and a 2.12% higher IOU score. CONCLUSIONS Extensive experimental results demonstrate that DGCBG-Net outperforms existing segmentation methods and is competitive with the state of the art.
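The Dice and IOU scores reported above are standard overlap metrics between a predicted mask and the ground truth. A minimal sketch of how they are computed (illustrative only, not the authors' code):

```python
# Dice and IoU overlap between two binary segmentation masks,
# given as flat 0/1 sequences of equal length.
def dice_and_iou(pred, truth):
    tp = sum(p and t for p, t in zip(pred, truth))      # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))  # false positives
    fn = sum(t and not p for p, t in zip(pred, truth))  # false negatives
    denom = tp + fp + fn
    if denom == 0:                 # both masks empty: perfect agreement
        return 1.0, 1.0
    return 2 * tp / (tp + denom), tp / denom            # Dice, IoU

pred = [1, 1, 1, 0, 0, 0]
truth = [1, 1, 0, 1, 0, 0]
dice, iou = dice_and_iou(pred, truth)   # tp=2, fp=1, fn=1 -> 0.667, 0.5
```

Dice is always at least as large as IoU for the same prediction, which is why the abstract's Dice figures (e.g. 80.33%) exceed the corresponding IOU figures (67.64%).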
Affiliation(s)
- Ziwei Zou
- School of Computer Science and Engineering, Central South University, No. 932, Lushan South Road, Changsha, 410083, China
- Beiji Zou
- School of Computer Science and Engineering, Central South University, No. 932, Lushan South Road, Changsha, 410083, China
- Xiaoyan Kui
- School of Computer Science and Engineering, Central South University, No. 932, Lushan South Road, Changsha, 410083, China
- Zhi Chen
- School of Computer Science and Engineering, Central South University, No. 932, Lushan South Road, Changsha, 410083, China
- Yang Li
- School of Informatics, Hunan University of Chinese Medicine, No. 300, Xueshi Road, Changsha, 410208, China
2. Wu H, Peng L, Du D, Xu H, Lin G, Zhou Z, Lu L, Lv W. BAF-Net: bidirectional attention-aware fluid pyramid feature integrated multimodal fusion network for diagnosis and prognosis. Phys Med Biol 2024; 69:105007. [PMID: 38593831] [DOI: 10.1088/1361-6560/ad3cb2] [Received: 12/27/2023] [Accepted: 04/09/2024]
Abstract
Objective. To go beyond the deficiencies of the three conventional multimodal fusion strategies (i.e. input-, feature- and output-level fusion), we propose a bidirectional attention-aware fluid pyramid feature integrated fusion network (BAF-Net) with cross-modal interactions for multimodal medical image diagnosis and prognosis. Approach. BAF-Net is composed of two identical branches to preserve the unimodal features and one bidirectional attention-aware distillation stream to progressively assimilate cross-modal complements and to learn supplementary features in both bottom-up and top-down processes. Fluid pyramid connections were adopted to integrate the hierarchical features at different levels of the network, and channel-wise attention modules were exploited to mitigate cross-modal cross-level incompatibility. Furthermore, depth-wise separable convolution was introduced to fuse the cross-modal cross-level features and thereby greatly limit the increase in parameters. The generalization abilities of BAF-Net were evaluated on two clinical tasks: (1) an in-house PET-CT dataset with 174 patients for differentiation between lung cancer and pulmonary tuberculosis (LC-PTB); (2) a public multicenter PET-CT head and neck cancer dataset with 800 patients from nine centers for overall survival prediction. Main results. On the LC-PTB dataset, improved performance was found for BAF-Net (AUC = 0.7342) compared with the input-level fusion model (AUC = 0.6825; p < 0.05), feature-level fusion model (AUC = 0.6968; p = 0.0547), and output-level fusion model (AUC = 0.7011; p < 0.05). On the H&N cancer dataset, BAF-Net (C-index = 0.7241) outperformed the input-, feature-, and output-level fusion models, with 2.95%, 3.77%, and 1.52% increments of C-index (p = 0.3336, 0.0479 and 0.2911, respectively). The ablation experiments demonstrated the effectiveness of all the designed modules regarding all the evaluated metrics on both datasets. Significance. Extensive experiments on two datasets demonstrated better performance and robustness of BAF-Net than the three conventional fusion strategies and the PET or CT unimodal networks in terms of diagnosis and prognosis.
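The parameter saving from the depth-wise separable convolution mentioned above is easy to quantify. A back-of-the-envelope sketch (kernel size and channel counts are illustrative values, not taken from the paper):

```python
# Parameter counts (ignoring bias terms) for fusing feature maps with a
# standard k x k convolution versus a depth-wise separable one.
def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    depthwise = k * k * c_in      # one k x k filter per input channel
    pointwise = c_in * c_out      # 1 x 1 convolution then mixes channels
    return depthwise + pointwise

k, c_in, c_out = 3, 256, 256                  # illustrative sizes
std = standard_conv_params(k, c_in, c_out)    # 589824
sep = separable_conv_params(k, c_in, c_out)   # 67840
ratio = sep / std                             # ~0.115, i.e. ~8.7x fewer
```

The ratio is roughly 1/k² + 1/c_out, which is why the separable variant "alleviates the increase in parameters to a great extent" when many cross-modal cross-level fusions are stacked.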
Affiliation(s)
- Huiqin Wu
- Department of Medical Imaging, Guangdong Second Provincial General Hospital, Guangzhou, Guangdong, 518037, People's Republic of China
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Lihong Peng
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Dongyang Du
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Hui Xu
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guoyu Lin
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Zidong Zhou
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Lijun Lu
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Pazhou Lab, Guangzhou, Guangdong, 510330, People's Republic of China
- Wenbing Lv
- School of Information and Yunnan Key Laboratory of Intelligent Systems and Computing, Yunnan University, Kunming, Yunnan, 650504, People's Republic of China
3. Hussain D, Al-Masni MA, Aslam M, Sadeghi-Niaraki A, Hussain J, Gu YH, Naqvi RA. Revolutionizing tumor detection and classification in multimodality imaging based on deep learning approaches: methods, applications and limitations. J Xray Sci Technol 2024:XST230429. [PMID: 38701131] [DOI: 10.3233/xst-230429]
Abstract
BACKGROUND The emergence of deep learning (DL) techniques has revolutionized tumor detection and classification in medical imaging, with multimodal medical imaging (MMI) gaining recognition for its precision in diagnosis, treatment, and progression tracking. OBJECTIVE This review comprehensively examines DL methods in transforming tumor detection and classification across MMI modalities, aiming to provide insights into advancements, limitations, and key challenges for further progress. METHODS Systematic literature analysis identifies DL studies for tumor detection and classification, outlining methodologies including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and their variants. Integration of multimodality imaging enhances accuracy and robustness. RESULTS Recent advancements in DL-based MMI evaluation methods are surveyed, focusing on tumor detection and classification tasks. Various DL approaches, including CNNs, YOLO, Siamese Networks, Fusion-Based Models, Attention-Based Models, and Generative Adversarial Networks, are discussed with emphasis on PET-MRI, PET-CT, and SPECT-CT. FUTURE DIRECTIONS The review outlines emerging trends and future directions in DL-based tumor analysis, aiming to guide researchers and clinicians toward more effective diagnosis and prognosis. Continued innovation and collaboration are stressed in this rapidly evolving domain. CONCLUSION Conclusions drawn from literature analysis underscore the efficacy of DL approaches in tumor detection and classification, highlighting their potential to address challenges in MMI analysis and their implications for clinical practice.
Affiliation(s)
- Dildar Hussain
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Republic of Korea
- Mohammed A Al-Masni
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Republic of Korea
- Muhammad Aslam
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Republic of Korea
- Abolghasem Sadeghi-Niaraki
- Department of Computer Science & Engineering and Convergence Engineering for Intelligent Drone, XR Research Center, Sejong University, Seoul, Republic of Korea
- Jamil Hussain
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Republic of Korea
- Yeong Hyeon Gu
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Republic of Korea
- Rizwan Ali Naqvi
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul, Republic of Korea
4. Xu C, Fan K, Mo W, Cao X, Jiao K. Dual ensemble system for polyp segmentation with submodels adaptive selection ensemble. Sci Rep 2024; 14:6152. [PMID: 38485963] [PMCID: PMC10940608] [DOI: 10.1038/s41598-024-56264-2] [Received: 08/25/2023] [Accepted: 03/04/2024]
Abstract
Colonoscopy is one of the main methods to detect colon polyps, and it is widely used to prevent and diagnose colon cancer. With the rapid development of computer vision, deep learning-based semantic segmentation methods for colon polyps have been widely researched. However, the accuracy and stability of some methods in colon polyp segmentation tasks leave room for improvement. In addition, the issue of selecting appropriate sub-models in ensemble learning for the colon polyp segmentation task still needs to be explored. To solve these problems, we first exploit multiple complementary high-level semantic features through the Multi-Head Control Ensemble. Then, to solve the sub-model selection problem during training, we propose the SDBH-PSO Ensemble for sub-model selection and for optimizing ensemble weights on different datasets. The experiments were conducted on the public datasets CVC-ClinicDB, Kvasir, CVC-ColonDB, ETIS-LaribPolypDB and PolypGen. The results show that the DET-Former, constructed from the Multi-Head Control Ensemble and the SDBH-PSO Ensemble, consistently improves accuracy across different datasets. The Multi-Head Control Ensemble demonstrated superior feature fusion capability in the experiments, and the SDBH-PSO Ensemble demonstrated excellent sub-model selection capability. These sub-model selection capabilities should retain significant reference value and practical utility as deep learning networks evolve.
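The sub-model weighting idea behind such an ensemble can be illustrated with a toy example. This sketch stands in for the paper's SDBH-PSO step with a simple grid search over the weights of two hypothetical sub-models; all data and names here are illustrative assumptions, not the paper's method or results:

```python
# Fuse per-pixel foreground probabilities from several sub-models with a
# weighted average, then pick the weight split that maximizes validation
# accuracy (a grid search stands in for particle swarm optimization).
def weighted_vote(preds, weights):
    """preds: list of per-submodel probability lists; returns fused probs."""
    total = sum(weights)
    return [sum(w * p[i] for w, p in zip(weights, preds)) / total
            for i in range(len(preds[0]))]

def accuracy(probs, labels, thr=0.5):
    return sum((p > thr) == bool(y) for p, y in zip(probs, labels)) / len(labels)

# two hypothetical sub-models scored on a tiny validation set
preds = [[0.9, 0.4, 0.8, 0.3],   # sub-model A
         [0.6, 0.7, 0.2, 0.1]]   # sub-model B
labels = [1, 0, 1, 0]

# try integer weight splits (w, 10 - w) and keep the best-scoring one
best = max(((w, 10 - w) for w in range(11)),
           key=lambda ws: accuracy(weighted_vote(preds, ws), labels))
```

The real method searches a far larger weight space per dataset, which is exactly where a stochastic optimizer such as PSO pays off over exhaustive search.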
Affiliation(s)
- Cun Xu
- Guilin University of Electronic Technology, Guilin, 541000, China
- Kefeng Fan
- China Electronics Standardization Institute, Beijing, 100007, China
- Wei Mo
- Guilin University of Electronic Technology, Guilin, 541000, China
- Xuguang Cao
- Guilin University of Electronic Technology, Guilin, 541000, China
- Kaijie Jiao
- Guilin University of Electronic Technology, Guilin, 541000, China
5. Bi L, Buehner U, Fu X, Williamson T, Choong P, Kim J. Hybrid CNN-transformer network for interactive learning of challenging musculoskeletal images. Comput Methods Programs Biomed 2024; 243:107875. [PMID: 37871450] [DOI: 10.1016/j.cmpb.2023.107875] [Received: 08/06/2023] [Revised: 10/16/2023] [Accepted: 10/17/2023]
Abstract
BACKGROUND AND OBJECTIVES Segmentation of regions of interest (ROIs) such as tumors and bones plays an essential role in the analysis of musculoskeletal (MSK) images. Segmentation results can help orthopaedic surgeons with surgical outcome assessment and patient gait-cycle simulation. Deep learning-based automatic segmentation methods, particularly those using fully convolutional networks (FCNs), are considered the state of the art. However, when the training data are insufficient to cover all the variations in ROIs, these methods struggle to segment challenging ROIs with less common image characteristics, such as low contrast to the background, inhomogeneous textures, and fuzzy boundaries. METHODS We propose a hybrid convolutional neural network - transformer network (HCTN) for semi-automatic segmentation to overcome the limitations of segmenting challenging MSK images. Specifically, we fuse user inputs (manual, e.g., mouse clicks) with high-level semantic image features derived from the neural network (automatic), where the user inputs drive interactive training for uncommon image characteristics. In addition, we leverage a transformer network (TN), a deep learning model designed for handling sequence data, together with features derived from FCNs for segmentation; this addresses the limitation that FCNs operate only on small kernels and therefore tend to dismiss global context in favor of local patterns. RESULTS We purposely selected three MSK imaging datasets covering a variety of structures to evaluate the generalizability of the proposed method. Our semi-automatic HCTN method achieved a Dice coefficient score (DSC) of 88.46 ± 9.41 for segmenting soft-tissue sarcoma tumors from magnetic resonance (MR) images, 73.32 ± 11.97 for segmenting osteosarcoma tumors from MR images, and 93.93 ± 1.84 for segmenting the clavicle bones from chest radiographs. Compared to the current state-of-the-art automatic segmentation method, HCTN is 11.7%, 19.11% and 7.36% higher in DSC on the three datasets, respectively. CONCLUSION Our experimental results demonstrate that HCTN achieved more generalizable results than current methods, especially on challenging MSK studies.
Affiliation(s)
- Lei Bi
- Institute of Translational Medicine, National Center for Translational Medicine, Shanghai Jiao Tong University, Shanghai, China; School of Computer Science, University of Sydney, NSW, Australia
- Xiaohang Fu
- School of Computer Science, University of Sydney, NSW, Australia
- Tom Williamson
- Stryker Corporation, Kalamazoo, Michigan, USA; Centre for Additive Manufacturing, School of Engineering, RMIT University, VIC, Australia
- Peter Choong
- Department of Surgery, University of Melbourne, VIC, Australia
- Jinman Kim
- School of Computer Science, University of Sydney, NSW, Australia
6. Wang Z, Zhang L, Shu X, Wang Y, Feng Y. Consistent representation via contrastive learning for skin lesion diagnosis. Comput Methods Programs Biomed 2023; 242:107826. [PMID: 37837885] [DOI: 10.1016/j.cmpb.2023.107826] [Received: 03/28/2023] [Revised: 09/19/2023] [Accepted: 09/21/2023]
Abstract
BACKGROUND Skin lesions are a prevalent ailment, with melanoma being a particularly dangerous variant. Encouragingly, artificial intelligence shows promising potential for early detection, yet its integration into clinical contexts, particularly with multi-modal data, presents challenges. While multi-modal approaches enhance diagnostic efficacy, the influence of modal bias is often disregarded. METHODS In this investigation, a multi-modal feature learning technique termed "Contrast-based Consistent Representation Disentanglement" for dermatological diagnosis is introduced. This approach employs adversarial domain adaptation to disentangle features from distinct modalities, fostering a shared representation. Furthermore, a contrastive learning strategy is devised to encourage the model to preserve uniformity of common lesion attributes across modalities. By emphasizing the learning of a uniform representation across modalities, the approach avoids reliance on supplementary data. RESULTS Assessment of the proposed technique on the 7-point criteria evaluation dataset yields an average accuracy of 76.1% for the multi-classification task, surpassing researched state-of-the-art methods. The approach tackles modal bias, enabling the acquisition of a consistent representation of common lesion appearances that transcends modality boundaries. This study underscores the potential of multi-modal feature learning in dermatological diagnosis. CONCLUSION In summary, a multi-modal feature learning strategy is posited for dermatological diagnosis. This approach outperforms other state-of-the-art methods, underscoring its capacity to enhance diagnostic precision for skin lesions.
Affiliation(s)
- Zizhou Wang
- College of Computer Science, Sichuan University, Chengdu 610065, China; Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore.
- Lei Zhang
- College of Computer Science, Sichuan University, Chengdu 610065, China.
- Xin Shu
- College of Computer Science, Sichuan University, Chengdu 610065, China.
- Yan Wang
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore.
- Yangqin Feng
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore.
7. Abi Nader C, Vetil R, Wood LK, Rohe MM, Bône A, Karteszi H, Vullierme MP. Automatic Detection of Pancreatic Lesions and Main Pancreatic Duct Dilatation on Portal Venous CT Scans Using Deep Learning. Invest Radiol 2023; 58:791-798. [PMID: 37289274] [DOI: 10.1097/rli.0000000000000992]
Abstract
OBJECTIVES This study proposes and evaluates a deep learning method to detect pancreatic neoplasms and to identify main pancreatic duct (MPD) dilatation on portal venous computed tomography scans. MATERIALS AND METHODS A total of 2890 portal venous computed tomography scans from 9 institutions were acquired, among which 2185 had a pancreatic neoplasm and 705 were healthy controls. Each scan was reviewed by one of a group of 9 radiologists. Physicians contoured the pancreas, pancreatic lesions if present, and the MPD if visible. They also assessed tumor type and MPD dilatation. Data were split into a training and an independent testing set of 2134 and 756 cases, respectively. A method to detect pancreatic lesions and MPD dilatation was built in 3 steps. First, a segmentation network was trained in a 5-fold cross-validation manner. Second, outputs of this network were postprocessed to extract imaging features: a normalized lesion risk, the predicted lesion diameter, and the MPD diameter in the head, body, and tail of the pancreas. Third, 2 logistic regression models were calibrated to predict lesion presence and MPD dilatation, respectively. Performance was assessed on the independent test cohort using receiver operating characteristic analysis. The method was also evaluated on subgroups defined based on lesion types and characteristics. RESULTS The area under the curve of the model detecting lesion presence in a patient was 0.98 (95% confidence interval [CI], 0.97-0.99). A sensitivity of 0.94 (469 of 493; 95% CI, 0.92-0.97) was reported. Similar values were obtained in patients with small (less than 2 cm) and isodense lesions, with sensitivities of 0.94 (115 of 123; 95% CI, 0.87-0.98) and 0.95 (53 of 56; 95% CI, 0.87-1.0), respectively. The model sensitivity was also comparable across lesion types, with values of 0.94 (95% CI, 0.91-0.97), 1.0 (95% CI, 0.98-1.0), and 0.96 (95% CI, 0.97-1.0) for pancreatic ductal adenocarcinoma, neuroendocrine tumor, and intraductal papillary neoplasm, respectively. For MPD dilatation detection, the model had an area under the curve of 0.97 (95% CI, 0.96-0.98). CONCLUSIONS The proposed approach showed high quantitative performance in identifying patients with pancreatic neoplasms and detecting MPD dilatation on an independent test cohort. Performance was robust across subgroups of patients with different lesion characteristics and types. The results confirm the value of combining a direct lesion detection approach with secondary features such as the MPD diameter, indicating a promising avenue for the detection of pancreatic cancer at early stages.
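The third step described above feeds the extracted imaging features into a logistic regression. A minimal sketch of that scoring stage, with hypothetical weights and features (the study's calibrated coefficients are not given in the abstract):

```python
import math

# Second-stage classifier: map imaging features extracted from the
# segmentation network's output to a lesion-presence probability.
def lesion_probability(features, weights, bias):
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid

# hypothetical calibrated parameters and one patient's features:
# [normalized lesion risk, predicted lesion diameter (cm), MPD diameter (mm)]
weights = [4.0, 0.3, 0.5]
bias = -3.0
p = lesion_probability([0.9, 2.1, 4.0], weights, bias)
# a probability above the operating threshold would flag the scan for review
```

In practice the threshold would be chosen on the ROC curve of the validation cohort, trading sensitivity against specificity as the study describes.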
Affiliation(s)
- Marie-Pierre Vullierme
- Department of Radiology, Hospital of Annecy-Genevois, Université Paris-Cité, Paris, France
| |
8. Xue H, Fang Q, Yao Y, Teng Y. 3D PET/CT tumor segmentation based on nnU-Net with GCN refinement. Phys Med Biol 2023; 68:185018. [PMID: 37549672] [DOI: 10.1088/1361-6560/acede6] [Received: 04/12/2023] [Accepted: 08/07/2023]
Abstract
Objective. Whole-body positron emission tomography/computed tomography (PET/CT) scans are an important tool for diagnosing various malignancies (e.g. malignant melanoma, lymphoma, or lung cancer), and accurate segmentation of tumors is a key part of subsequent treatment. In recent years, convolutional neural network based segmentation methods have been extensively investigated. However, these methods often give inaccurate segmentation results, such as oversegmentation and undersegmentation. To address these issues, we propose a postprocessing method based on a graph convolutional network (GCN) to refine inaccurate segmentation results and improve the overall segmentation accuracy. Approach. First, nnU-Net is used as an initial segmentation framework, and the uncertainty in the segmentation results is analyzed. Certain and uncertain pixels are used to establish the nodes of a graph. Each node forms edges with its 6 neighbors and with 32 randomly selected uncertain nodes. The highly uncertain nodes are the subsequent refinement targets. Second, the nnU-Net results for the certain nodes are used as labels to form a semisupervised graph network problem, and the uncertain part is optimized by training the GCN to improve the segmentation performance. This constitutes our proposed nnU-Net + GCN segmentation framework. Main results. We perform tumor segmentation experiments with the PET/CT dataset from the MICCAI 2022 autoPET challenge. Thirty cases are randomly selected for testing, and the experimental results show that the false-positive rate is effectively reduced with nnU-Net + GCN refinement. In quantitative analysis, there is an improvement of 2.1% in the average Dice score, 6.4 in the 95% Hausdorff distance (HD95), and 1.7 in the average symmetric surface distance. Significance. The quantitative and qualitative evaluation results show that GCN postprocessing can effectively improve tumor segmentation performance.
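The first step of the refinement described above, splitting the initial segmentation into certain and uncertain pixels before building the graph, can be sketched as follows. The margin threshold and the probabilities are illustrative assumptions, not values from the paper:

```python
# Split pixels of an initial softmax segmentation into "certain" pixels,
# which keep their nnU-Net label, and "uncertain" pixels, which become
# graph nodes to be re-classified by the GCN.
def split_by_uncertainty(probs, margin=0.8):
    """probs: foreground probability per pixel (flattened volume).
    Returns (certain, uncertain) lists of pixel indices."""
    certain, uncertain = [], []
    for i, p in enumerate(probs):
        if p >= margin or p <= 1.0 - margin:
            certain.append(i)       # confident foreground or background
        else:
            uncertain.append(i)     # ambiguous: refine with the GCN
    return certain, uncertain

probs = [0.95, 0.10, 0.55, 0.82, 0.45]
certain, uncertain = split_by_uncertainty(probs)
# certain -> [0, 1, 3]; uncertain -> [2, 4]
```

The certain indices then supply the fixed labels of the semisupervised graph problem, while edges connect each node to its spatial neighbors and to sampled uncertain nodes, as the abstract describes.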
Affiliation(s)
- Hengzhi Xue
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110004, People's Republic of China
- Qingqing Fang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110004, People's Republic of China
- Yudong Yao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110004, People's Republic of China
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ 07102, United States of America
- Yueyang Teng
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110004, People's Republic of China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, 110169, People's Republic of China
9. Bi L, Fulham M, Song S, Feng DD, Kim J. Hyper-Connected Transformer Network for Multi-Modality PET-CT Segmentation. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083369] [DOI: 10.1109/embc40787.2023.10340635]
Abstract
[18F]-Fluorodeoxyglucose (FDG) positron emission tomography - computed tomography (PET-CT) has become the imaging modality of choice for diagnosing many cancers. Co-learning complementary PET-CT imaging features is a fundamental requirement for automatic tumor segmentation and for developing computer-aided cancer diagnosis systems. In this study, we propose a hyper-connected transformer (HCT) network that integrates a transformer network (TN) with hyper-connected fusion for multi-modality PET-CT images. The TN was leveraged for its ability to provide global dependencies in image feature learning, achieved by using image patch embeddings with a self-attention mechanism to capture image-wide contextual information. We extended the single-modality definition of the TN with multiple TN-based branches to separately extract image features. We also introduced a hyper-connected fusion to fuse the contextual and complementary image features across multiple transformers in an iterative manner. Our results on two clinical datasets show that HCT achieved better segmentation accuracy than existing methods. Clinical Relevance - We anticipate that our approach can be an effective and supportive tool to aid physicians in tumor quantification and in identifying image biomarkers for cancer treatment.
10. Wang L, Song D, Wang W, Li C, Zhou Y, Zheng J, Rao S, Wang X, Shao G, Cai J, Yang S, Dong J. Data-Driven Assisted Decision Making for Surgical Procedure of Hepatocellular Carcinoma Resection and Prognostic Prediction: Development and Validation of Machine Learning Models. Cancers (Basel) 2023; 15:1784. [PMID: 36980670] [PMCID: PMC10046511] [DOI: 10.3390/cancers15061784] [Received: 01/05/2023] [Revised: 03/02/2023] [Accepted: 03/09/2023]
Abstract
Background: Currently, surgical decisions for hepatocellular carcinoma (HCC) resection are difficult and not sufficiently personalized. We aimed to develop and validate data-driven prediction models to assist surgeons in selecting the optimal surgical procedure for patients. Methods: Retrospective data from 361 HCC patients who underwent radical resection at two institutions were included. End-to-end deep learning models were built to automatically segment lesions from the arterial phase (AP) of preoperative dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Clinical baseline characteristics and radiomic features were rigorously screened, and the effectiveness of radiomic features alone versus combined radiomic-clinical features was compared. Three ensemble learning models were proposed to perform the surgical procedure decision and to predict overall survival (OS) and recurrence-free survival (RFS) under the different surgical solutions, respectively. Results: SegFormer performed best in automatic segmentation, achieving a Mean Intersection over Union (mIoU) of 0.8860. Five-fold cross-validation showed that radiomic-clinical features outperformed radiomic features alone. The proposed models all outperformed the other mainstream ensemble models. On the external test set, the area under the receiver operating characteristic curve (AUC) of the proposed decision model was 0.7731, and the prognostic prediction models also performed well. A web application server based on automatic lesion segmentation was deployed and is available online. Conclusions: In this study, we developed and externally validated surgical decision-making and prognostic prediction models for HCC for the first time; the results demonstrated relatively accurate predictions and strong generalization, which are expected to help clinicians optimize surgical procedures.
Affiliation(s)
- Liyang Wang
- School of Clinical Medicine, Tsinghua University, Beijing 100084, China
- Hepato-Pancreato-Biliary Center, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing 102218, China
- Danjun Song
- Department of Interventional Therapy, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou 310022, China
- Department of Liver Surgery, Key Laboratory of Carcinogenesis and Cancer Invasion of Ministry of Education, Liver Cancer Institute, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Wentao Wang
- Department of Radiology, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Chengquan Li
- School of Clinical Medicine, Tsinghua University, Beijing 100084, China
- Yiming Zhou
- Department of Hepatobiliary and Pancreatic Surgery, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou 310022, China
- Jiaping Zheng
- Department of Interventional Therapy, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou 310022, China
- Shengxiang Rao
- Department of Radiology, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Xiaoying Wang
- Department of Liver Surgery, Key Laboratory of Carcinogenesis and Cancer Invasion of Ministry of Education, Liver Cancer Institute, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Guoliang Shao
- Department of Interventional Therapy, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou 310022, China
- Department of Radiology, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou 310022, China
- Jiabin Cai
- Department of Liver Surgery, Key Laboratory of Carcinogenesis and Cancer Invasion of Ministry of Education, Liver Cancer Institute, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Correspondence: (J.C.); (S.Y.)
- Shizhong Yang
- Hepato-Pancreato-Biliary Center, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing 102218, China
- Correspondence: (J.C.); (S.Y.)
- Jiahong Dong
- School of Clinical Medicine, Tsinghua University, Beijing 100084, China
- Hepato-Pancreato-Biliary Center, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing 102218, China
|
11
|
Zhou Y, Jiang H, Diao Z, Tong G, Luan Q, Li Y, Li X. MRLA-Net: A tumor segmentation network embedded with a multiple receptive-field lesion attention module in PET-CT images. Comput Biol Med 2023; 153:106538. [PMID: 36646023 DOI: 10.1016/j.compbiomed.2023.106538] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2022] [Revised: 12/14/2022] [Accepted: 01/10/2023] [Indexed: 01/13/2023]
Abstract
Tumor image segmentation is an important basis for diagnosis and treatment planning. PET-CT is an important technology for assessing the systemic status of disease, owing to the complementary advantages of the two modalities. However, current PET-CT tumor segmentation methods generally focus on the fusion of PET and CT features, which can weaken the characteristics of each individual modality. Enhancing the modality-specific features of lesions therefore yields optimized feature sets, which is necessary to improve segmentation results. This paper proposes an attention module that integrates the PET-CT diagnostic visual field with the modality characteristics of the lesion: the multiple receptive-field lesion attention module. It makes full use of spatial-domain, frequency-domain, and channel attention, combining a large receptive-field lesion localization module with a small receptive-field lesion enhancement module. In addition, a network embedded with this attention module is proposed for tumor segmentation. Experiments were conducted on a private liver tumor dataset as well as two publicly available datasets, the soft tissue sarcoma dataset and the head and neck tumor segmentation dataset. The proposed method achieves excellent performance on multiple datasets and improves significantly on DenseUNet, with Dice per case gains of 7.25%, 6.5%, and 5.29% on the three PET/CT datasets. Compared with the latest PET-CT liver tumor segmentation research, the proposed method improves by 8.32%.
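The "Dice per case" gains quoted above rest on the standard overlap metrics; a minimal sketch of Dice (and IoU) on flattened binary masks looks like this, with the per-case values then typically averaged over patients. The toy masks below are invented, not drawn from any of the datasets.

```python
def dice_iou(pred, truth):
    """Dice coefficient and IoU for flattened binary masks (lists of 0/1).

    Dice = 2|P ∩ T| / (|P| + |T|); IoU = |P ∩ T| / |P ∪ T|.
    Empty-vs-empty is scored 1.0 by convention.
    """
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

# Toy masks: 3 overlapping voxels out of 4 predicted and 4 true
d, i = dice_iou([1, 1, 1, 1, 0, 0], [0, 1, 1, 1, 1, 0])
print(round(d, 3), round(i, 3))  # → 0.75 0.6
```

Note that Dice is always at least as large as IoU for the same masks, so the two scores reported by such papers are related but not interchangeable.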
Affiliation(s)
- Yang Zhou
- Department of Software College, Northeastern University, Shenyang 110819, China
- Huiyan Jiang
- Department of Software College, Northeastern University, Shenyang 110819, China
- Zhaoshuo Diao
- Department of Software College, Northeastern University, Shenyang 110819, China
- Guoyu Tong
- Department of Software College, Northeastern University, Shenyang 110819, China
- Qiu Luan
- Department of Nuclear Medicine, The First Affiliated Hospital of China Medical University, Shenyang 110001, China
- Yaming Li
- Department of Nuclear Medicine, The First Affiliated Hospital of China Medical University, Shenyang 110001, China
- Xuena Li
- Department of Nuclear Medicine, The First Affiliated Hospital of China Medical University, Shenyang 110001, China
|
12
|
Hu Q, Li K, Yang C, Wang Y, Huang R, Gu M, Xiao Y, Huang Y, Chen L. The role of artificial intelligence based on PET/CT radiomics in NSCLC: Disease management, opportunities, and challenges. Front Oncol 2023; 13:1133164. [PMID: 36959810 PMCID: PMC10028142 DOI: 10.3389/fonc.2023.1133164] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2022] [Accepted: 02/20/2023] [Indexed: 03/09/2023] Open
Abstract
Objectives Lung cancer has been widely characterized through radiomics and artificial intelligence (AI). This review summarizes published studies of AI based on positron emission tomography/computed tomography (PET/CT) radiomics in non-small-cell lung cancer (NSCLC). Materials and methods A comprehensive search of literature published between 2012 and 2022 was conducted on the PubMed database, with no language or publication-status restrictions. About 127 articles in the search results were screened and gradually excluded according to the exclusion criteria; finally, this review included 39 articles for analysis. Results Studies were classified by purpose, and several were identified at each stage of disease: 1) cancer detection (n=8), 2) histology and stage of cancer (n=11), 3) metastases (n=6), 4) genotype (n=6), 5) treatment outcome and survival (n=8). There is wide heterogeneity among studies due to differences in patient sources, evaluation criteria, and radiomics workflows. On the whole, most models show diagnostic performance comparable to or even better than that of experts; the common problems are repeatability and clinical translatability. Conclusion AI based on PET/CT radiomics plays a potential role in NSCLC clinical management. However, there is still a long way to go before it is translated into clinical application. Large-scale, multi-center, prospective research is the direction of future efforts, while the limited repeatability of radiomic features and restricted access to large databases must be addressed.
Affiliation(s)
- Qiuyuan Hu
- Department of positron emission tomography/computed tomography (PET/CT) Center, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Cancer Center of Yunnan Province, Kunming, Yunnan, China
- Ke Li
- Department of Cancer Biotherapy Center, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Cancer Center of Yunnan Province, Kunming, Yunnan, China
- Conghui Yang
- Department of positron emission tomography/computed tomography (PET/CT) Center, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Cancer Center of Yunnan Province, Kunming, Yunnan, China
- Yue Wang
- Department of positron emission tomography/computed tomography (PET/CT) Center, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Cancer Center of Yunnan Province, Kunming, Yunnan, China
- Rong Huang
- Department of positron emission tomography/computed tomography (PET/CT) Center, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Cancer Center of Yunnan Province, Kunming, Yunnan, China
- Mingqiu Gu
- Department of positron emission tomography/computed tomography (PET/CT) Center, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Cancer Center of Yunnan Province, Kunming, Yunnan, China
- Yuqiang Xiao
- Department of positron emission tomography/computed tomography (PET/CT) Center, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Cancer Center of Yunnan Province, Kunming, Yunnan, China
- Yunchao Huang
- Department of Thoracic Surgery I, Key Laboratory of Lung Cancer of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Cancer Center of Yunnan Province, Kunming, Yunnan, China
- *Correspondence: Long Chen, ; Yunchao Huang,
- Long Chen
- Department of positron emission tomography/computed tomography (PET/CT) Center, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Cancer Center of Yunnan Province, Kunming, Yunnan, China
- *Correspondence: Long Chen, ; Yunchao Huang,
|
13
|
Huang Z, Zou S, Wang G, Chen Z, Shen H, Wang H, Zhang N, Zhang L, Yang F, Wang H, Liang D, Niu T, Zhu X, Hu Z. ISA-Net: Improved spatial attention network for PET-CT tumor segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 226:107129. [PMID: 36156438 DOI: 10.1016/j.cmpb.2022.107129] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Revised: 07/06/2022] [Accepted: 09/13/2022] [Indexed: 06/16/2023]
Abstract
BACKGROUND AND OBJECTIVE Achieving accurate and automated tumor segmentation plays an important role in both clinical practice and radiomics research. Segmentation in medicine is currently often performed manually by experts, which is a laborious, expensive, and error-prone task that relies heavily on their experience and knowledge; in addition, there is considerable intra- and interobserver variation. It is therefore of great significance to develop a method that automatically segments tumor target regions. METHODS In this paper, we propose a deep learning segmentation method based on multimodal positron emission tomography-computed tomography (PET-CT), which combines the high sensitivity of PET with the precise anatomical information of CT. We design an improved spatial attention network (ISA-Net) to increase the accuracy of PET or CT in detecting tumors; it uses multi-scale convolution operations to extract feature information, highlighting tumor-region location information and suppressing non-tumor-region location information. In addition, the network takes dual-channel inputs in the encoding stage and fuses them in the decoding stage, exploiting the differences and complementarities between PET and CT. RESULTS We validated the proposed ISA-Net method on two clinical datasets, a soft tissue sarcoma (STS) dataset and a head and neck tumor (HECKTOR) dataset, and compared it with other attention methods for tumor segmentation. DSC scores of 0.8378 on the STS dataset and 0.8076 on the HECKTOR dataset show that ISA-Net achieves better segmentation performance and generalizes better. CONCLUSIONS The proposed multimodal tumor segmentation method effectively exploits the differences and complementarities of the modalities, and can also be applied to other multimodal or single-modal data with proper adjustment.
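The core idea of a spatial attention gate — weight each position of a feature map so that tumor-like locations are kept and background is suppressed — can be sketched in a few lines. This is a generic illustration, not ISA-Net's actual architecture: the attention logits are taken as given here, whereas ISA-Net derives them with multi-scale convolutions.

```python
import math

def spatial_attention(features, scores):
    """Weight each spatial position of a feature map by an attention score.

    features: 2D list (H x W) of activations; scores: 2D list of raw
    attention logits of the same shape. A sigmoid maps each logit to
    (0, 1); weights near 1 keep a position, weights near 0 suppress it.
    """
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    return [[f * sigmoid(s) for f, s in zip(frow, srow)]
            for frow, srow in zip(features, scores)]

feats = [[1.0, 2.0], [3.0, 4.0]]
logits = [[10.0, -10.0], [-10.0, 10.0]]  # keep diagonal, suppress the rest
out = spatial_attention(feats, logits)
print([[round(v, 3) for v in row] for row in out])  # → [[1.0, 0.0], [0.0, 4.0]]
```

In a real network the same elementwise gating is applied per channel to tensors, and the logits are learned end to end with the segmentation loss.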
Affiliation(s)
- Zhengyong Huang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, 101408, China
- Sijuan Zou
- Department of Nuclear Medicine and PET, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430000, China
- Guoshuai Wang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, 101408, China
- Zixiang Chen
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, 518055, China
- Hao Shen
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, 101408, China
- Haiyan Wang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, 101408, China
- Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, 518055, China
- Lu Zhang
- Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, 518055, China
- Fan Yang
- Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, 518055, China
- Haining Wang
- United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, 518045, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, 518055, China
- Tianye Niu
- Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, 518118, China
- Xiaohua Zhu
- Department of Nuclear Medicine and PET, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430000, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, 518055, China
|
14
|
A whole-body FDG-PET/CT Dataset with manually annotated Tumor Lesions. Sci Data 2022; 9:601. [PMID: 36195599 PMCID: PMC9532417 DOI: 10.1038/s41597-022-01718-3] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Accepted: 09/23/2022] [Indexed: 12/03/2022] Open
Abstract
We describe a publicly available dataset of annotated Positron Emission Tomography/Computed Tomography (PET/CT) studies. 1014 whole-body Fluorodeoxyglucose (FDG)-PET/CT datasets (501 studies of patients with malignant lymphoma, melanoma, and non-small cell lung cancer (NSCLC), and 513 studies without PET-positive malignant lesions (negative controls)) acquired between 2014 and 2018 were included. All examinations were acquired on a single, state-of-the-art PET/CT scanner. The imaging protocol consisted of a whole-body FDG-PET acquisition and a corresponding diagnostic CT scan. All FDG-avid lesions identified as malignant based on the clinical PET/CT report were manually segmented on PET images in a slice-per-slice (3D) manner. We provide the anonymized original DICOM files of all studies as well as the corresponding DICOM segmentation masks. In addition, we provide scripts for image processing and conversion to different file formats (NIfTI, mha, hdf5). Primary diagnosis, age, and sex are provided as non-imaging information. We demonstrate how this dataset can be used for deep learning-based automated analysis of PET/CT data and provide the trained deep learning model. Measurement(s): tumor lesions. Technology Type(s): PET/CT. Sample Characteristic - Organism: Homo sapiens.
|
15
|
Manafi-Farid R, Askari E, Shiri I, Pirich C, Asadi M, Khateri M, Zaidi H, Beheshti M. [ 18F]FDG-PET/CT radiomics and artificial intelligence in lung cancer: Technical aspects and potential clinical applications. Semin Nucl Med 2022; 52:759-780. [PMID: 35717201 DOI: 10.1053/j.semnuclmed.2022.04.004] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Revised: 04/10/2022] [Accepted: 04/13/2022] [Indexed: 02/07/2023]
Abstract
Lung cancer is the second most common cancer and the leading cause of cancer-related death worldwide. Molecular imaging using [18F]fluorodeoxyglucose positron emission tomography and/or computed tomography ([18F]FDG-PET/CT) plays an essential role in diagnosis, evaluation of response to treatment, and prediction of outcomes. The images are evaluated using qualitative and conventional quantitative indices; however, far more information is embedded in them, which can be extracted by sophisticated algorithms. Recently, the concept of uncovering and analyzing this invisible data extracted from medical images, called radiomics, has been gaining attention. [18F]FDG-PET/CT radiomics is increasingly being evaluated in lung cancer to determine whether it enhances the diagnostic performance or role of [18F]FDG-PET/CT in the management of lung cancer. In this review, we provide a short overview of the technical aspects, as they are discussed in different articles of this special issue. We mainly focus on the diagnostic performance of [18F]FDG-PET/CT-based radiomics and the role of artificial intelligence in non-small cell lung cancer, impacting early detection, staging, and prediction of tumor subtypes, biomarkers, and patient outcomes.
Affiliation(s)
- Reyhaneh Manafi-Farid
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Emran Askari
- Department of Nuclear Medicine, School of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Christian Pirich
- Division of Molecular Imaging and Theranostics, Department of Nuclear Medicine, University Hospital Salzburg, Paracelsus Medical University, Salzburg, Austria
- Mahboobeh Asadi
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Maziar Khateri
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- Mohsen Beheshti
- Division of Molecular Imaging and Theranostics, Department of Nuclear Medicine, University Hospital Salzburg, Paracelsus Medical University, Salzburg, Austria
|
16
|
Wang Y, Cai H, Pu Y, Li J, Yang F, Yang C, Chen L, Hu Z. The value of AI in the Diagnosis, Treatment, and Prognosis of Malignant Lung Cancer. FRONTIERS IN RADIOLOGY 2022; 2:810731. [PMID: 37492685 PMCID: PMC10365105 DOI: 10.3389/fradi.2022.810731] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/07/2021] [Accepted: 03/30/2022] [Indexed: 07/27/2023]
Abstract
Malignant tumors are a serious public health threat. Among them, lung cancer, which has the highest fatality rate globally, significantly endangers human health. With the development of artificial intelligence (AI) and its integration with medicine, AI research on malignant lung tumors has become critical. This article reviews the value of computer-aided diagnosis (CAD), deep neural networks, radiomics, molecular biomarkers, and digital pathology for the diagnosis, treatment, and prognosis of malignant lung tumors.
Affiliation(s)
- Yue Wang
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
- Haihua Cai
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yongzhu Pu
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
- Jindan Li
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
- Fake Yang
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
- Conghui Yang
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
- Long Chen
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
|
17
|
Multi-Focus Image Fusion Based on Convolution Neural Network for Parkinson's Disease Image Classification. Diagnostics (Basel) 2021; 11:diagnostics11122379. [PMID: 34943615 PMCID: PMC8700359 DOI: 10.3390/diagnostics11122379] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Revised: 12/06/2021] [Accepted: 12/15/2021] [Indexed: 11/17/2022] Open
Abstract
Parkinson's disease (PD) is a common neurodegenerative disease that has a significant impact on people's lives. Early diagnosis is imperative, since proper treatment halts the disease's progression. With the rapid development of computer-aided diagnosis (CAD), there have been numerous applications of CAD techniques in the diagnosis of PD. In recent years, image fusion has been applied in various fields and is valuable in medical diagnosis. This paper adopts a multi-focus image fusion method based primarily on deep convolutional neural networks to fuse magnetic resonance images (MRI) and positron emission tomography (PET) images into multi-modal images. The study selected the AlexNet, DenseNet, ResNeSt, and EfficientNet neural networks to classify the single-modal MRI dataset and the multi-modal dataset. Test accuracies on the single-modal MRI dataset were 83.31%, 87.76%, 86.37%, and 86.44% for AlexNet, DenseNet, ResNeSt, and EfficientNet, respectively; on the multi-modal fusion dataset they were 90.52%, 97.19%, 94.15%, and 93.39%. For all four networks, test results on the multi-modal dataset were better than those on the single-modal MRI dataset. The experimental results showed that multi-focus image fusion based on deep learning can enhance the accuracy of PD image classification.
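The selection rule at the heart of multi-focus fusion — at each pixel, keep the value from whichever source image is judged better focused — can be sketched as follows. In the cited work a deep CNN learns the focus/decision map; here the focus maps are hypothetical inputs, so only the fusion rule itself is shown.

```python
def fuse_by_focus(img_a, img_b, focus_a, focus_b):
    """Pixel-wise multi-focus fusion: at each position, keep the pixel from
    the image whose focus measure is higher.

    img_a, img_b: 2D lists of pixel values; focus_a, focus_b: 2D lists of
    focus scores of the same shape (e.g., a local sharpness measure, or a
    CNN-derived decision map as in the cited paper).
    """
    return [[a if fa >= fb else b
             for a, b, fa, fb in zip(ra, rb, rfa, rfb)]
            for ra, rb, rfa, rfb in zip(img_a, img_b, focus_a, focus_b)]

# Toy example: column 0 is sharp in A, column 1 is sharp in B,
# so the fused image keeps the sharp column from each source.
A = [[10, 11], [12, 13]]
B = [[20, 21], [22, 23]]
FA = [[1.0, 0.1], [1.0, 0.1]]
FB = [[0.1, 1.0], [0.1, 1.0]]
print(fuse_by_focus(A, B, FA, FB))  # → [[10, 21], [12, 23]]
```

Learned approaches typically also smooth the hard decision map or blend pixels with soft weights to avoid seams, but the per-pixel selection above is the underlying principle.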
|