1. Yang L, Shao D, Huang Z, Geng M, Zhang N, Chen L, Wang X, Liang D, Pang ZF, Hu Z. Few-shot segmentation framework for lung nodules via an optimized active contour model. Med Phys 2024; 51:2788-2805. [PMID: 38189528] [DOI: 10.1002/mp.16933] [Received: 03/25/2023] [Revised: 11/07/2023] [Accepted: 12/15/2023] [Indexed: 01/09/2024]
Abstract
BACKGROUND: Accurate segmentation of lung nodules is crucial for the early diagnosis and treatment of lung cancer in clinical practice. However, the similarity between lung nodules and surrounding tissues has made their segmentation a longstanding challenge.
PURPOSE: Existing deep learning models and active contour models each have their limitations. This paper aims to integrate the strengths of both approaches while mitigating their respective shortcomings.
METHODS: We propose a few-shot segmentation framework that combines a deep neural network with an active contour model. We introduce heat kernel convolutions and high-order total variation into the active contour model and solve the resulting nonsmooth optimization problem with the alternating direction method of multipliers (ADMM). In addition, the presegmentation results produced by a deep neural network trained on a small sample set serve as the initial contours for the optimized active contour model, removing the need to set initial contours manually.
RESULTS: We compared the segmentation effectiveness of our method against state-of-the-art methods on clinical computed tomography (CT) images acquired from two hospitals and on the publicly available LIDC dataset. Our method achieved outstanding segmentation performance on both visual and quantitative indicators.
CONCLUSION: Our approach uses the output of few-shot network training as prior information, avoiding manual selection of the initial contour in the active contour model. It also lends mathematical interpretability to the deep learning component, reducing its dependence on the quantity of training samples.
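The pipeline described in METHODS (a few-shot network presegmentation used as the initial contour of a regularized region-based model) can be sketched as a toy two-phase refinement. All function and variable names below are illustrative, and the Gaussian (heat-kernel) smoothing step is only a simplified stand-in for the paper's heat kernel convolution and high-order total variation terms, which are actually solved via ADMM:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refine_with_heat_kernel(image, init_mask, sigma=2.0, n_iter=20):
    """Toy two-phase refinement: alternate region-mean fitting with
    heat-kernel (Gaussian) smoothing of the region indicator.
    init_mask plays the role of the network's presegmentation."""
    u = init_mask.astype(float)
    for _ in range(n_iter):
        c1 = image[u > 0.5].mean()    # foreground mean
        c2 = image[u <= 0.5].mean()   # background mean
        # data term: pointwise preference for the closer region mean
        pref = (image - c2) ** 2 - (image - c1) ** 2
        # heat-kernel smoothing acts as the (much simplified) regularizer
        u = gaussian_filter((pref > 0).astype(float), sigma)
        u = (u > 0.5).astype(float)
    return u.astype(bool)

# synthetic nodule-like test: noisy disk, offset undersized initial mask
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:64, :64]
disk = (yy - 32) ** 2 + (xx - 32) ** 2 < 14 ** 2
image = disk * 1.0 + rng.normal(0, 0.3, disk.shape)
init = (yy - 30) ** 2 + (xx - 34) ** 2 < 10 ** 2
seg = refine_with_heat_kernel(image, init)
dice = 2 * (seg & disk).sum() / (seg.sum() + disk.sum())
```

Even this crude regularizer recovers the noisy disk from a rough initialization, which illustrates why a coarse network output can replace a hand-drawn initial contour.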
Affiliation(s)
- Lin Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- College of Mathematics and Statistics, Henan University, Kaifeng, China
- Dan Shao
- Department of Nuclear Medicine, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Zhenxing Huang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Mengxiao Geng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- College of Mathematics and Statistics, Henan University, Kaifeng, China
- Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
- Long Chen
- Department of PET/CT Center and the Department of Thoracic Cancer I, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
- Xi Wang
- Department of PET/CT Center and the Department of Thoracic Cancer I, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
- Zhi-Feng Pang
- College of Mathematics and Statistics, Henan University, Kaifeng, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
2. Wang H, Wu Y, Huang Z, Li Z, Zhang N, Fu F, Meng N, Wang H, Zhou Y, Yang Y, Liu X, Liang D, Zheng H, Mok GSP, Wang M, Hu Z. Deep learning-based dynamic PET parametric Ki image generation from lung static PET. Eur Radiol 2023; 33:2676-2685. [PMID: 36399164] [DOI: 10.1007/s00330-022-09237-w] [Received: 07/30/2022] [Revised: 09/30/2022] [Accepted: 10/12/2022] [Indexed: 11/19/2022]
Abstract
OBJECTIVES: PET/CT is a first-line tool for the diagnosis of lung cancer, but quantification accuracy may suffer from various factors throughout the acquisition process. Dynamic PET parametric Ki imaging provides better quantification and improved specificity for cancer detection. However, parametric imaging is difficult to implement clinically due to the long acquisition time (~1 h). We propose a dynamic parametric imaging method based on conventional static PET using deep learning.
METHODS: Based on imaging data from 203 participants, an improved cycle generative adversarial network incorporating a squeeze-and-excitation attention block was introduced to learn the mapping between static PET and Ki parametric images. The quality of the synthesized images was evaluated qualitatively and quantitatively using several physical and clinical metrics, and statistical analyses of correlation and consistency were performed.
RESULTS: Compared with other networks, our proposed network synthesized images with superior performance in qualitative and quantitative evaluation, statistical analysis, and clinical scoring. Our synthesized Ki images showed significant correlation (Pearson correlation coefficient, 0.93), consistency, and excellent quantitative evaluation results relative to Ki images obtained in standard dynamic PET practice.
CONCLUSIONS: Our proposed deep learning method can synthesize dynamic parametric images from static lung PET that are highly correlated and consistent with those from dynamic acquisitions.
KEY POINTS:
• Compared with conventional static PET, dynamic PET parametric Ki imaging provides better quantification and improved specificity for cancer detection.
• The purpose of this work was to develop a dynamic parametric imaging method based on static PET images using deep learning.
• Our proposed network can synthesize highly correlated and consistent dynamic parametric images, providing an additional quantitative diagnostic reference for clinicians.
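The squeeze-and-excitation attention block mentioned in METHODS can be illustrated in isolation. This is a minimal NumPy sketch of channel recalibration (global average pooling, two fully connected layers with ReLU and sigmoid, then channel-wise rescaling); the names, shapes, and reduction ratio are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def squeeze_excite(feat, w1, w2):
    """Squeeze-and-Excitation recalibration on a (C, H, W) feature map.
    w1: (C//r, C) and w2: (C, C//r) are the two FC layers of the block."""
    z = feat.mean(axis=(1, 2))                # squeeze: global average pool
    s = np.maximum(w1 @ z, 0)                 # excitation: FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))       # FC + sigmoid gate in (0, 1)
    return feat * s[:, None, None]            # channel-wise rescaling

# toy usage with C=8 channels and reduction ratio r=2
rng = np.random.default_rng(1)
feat = rng.normal(size=(8, 16, 16))
w1 = rng.normal(size=(4, 8)) * 0.1
w2 = rng.normal(size=(8, 4)) * 0.1
out = squeeze_excite(feat, w1, w2)
```

Because the sigmoid gate lies in (0, 1), the block can only attenuate channels relative to the input, letting the generator emphasize informative channels without changing feature-map shape.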
Affiliation(s)
- Haiyan Wang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Avenida da Universidade, Macau, 999078, SAR, China
- Yaping Wu
- Department of Medical Imaging, Henan Provincial People's Hospital & People's Hospital of Zhengzhou University, Zhengzhou, 450003, China
- Zhenxing Huang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zhicheng Li
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Fangfang Fu
- Department of Medical Imaging, Henan Provincial People's Hospital & People's Hospital of Zhengzhou University, Zhengzhou, 450003, China
- Nan Meng
- Department of Medical Imaging, Henan Provincial People's Hospital & People's Hospital of Zhengzhou University, Zhengzhou, 450003, China
- Haining Wang
- Shenzhen United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, 518045, China
- Yun Zhou
- Central Research Institute, United Imaging Healthcare Group, Shanghai, 201807, China
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Xin Liu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Greta S P Mok
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Avenida da Universidade, Macau, 999078, SAR, China
- Meiyun Wang
- Department of Medical Imaging, Henan Provincial People's Hospital & People's Hospital of Zhengzhou University, Zhengzhou, 450003, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China