1. Yuan L, An L, Zhu Y, Duan C, Kong W, Jiang P, Yu QQ. Machine Learning in Diagnosis and Prognosis of Lung Cancer by PET-CT. Cancer Manag Res 2024; 16:361-375. [PMID: 38699652] [PMCID: PMC11063459] [DOI: 10.2147/cmar.s451871]
Abstract
Lung cancer, a disease with high morbidity and mortality, seriously harms human health, which makes early diagnosis and treatment all the more important. PET/CT is widely used for the early diagnosis, staging, and treatment-response evaluation of tumors, particularly lung cancer; however, because of tumor heterogeneity and inter-reader variability in image interpretation, it does not always fully reflect the true state of a tumor. Artificial intelligence (AI) has been applied to many aspects of life, and machine learning (ML) is one of the principal ways of realizing AI. Many studies have applied ML methods to PET/CT imaging for the diagnosis and treatment of lung cancer. This article summarizes the progress of ML applications based on PET/CT in lung cancer in order to better serve clinical practice. We searched PubMed using machine learning, lung cancer, and PET/CT as keywords for relevant articles from the past five years and earlier. We found that PET/CT-based ML approaches have achieved significant results in the detection, delineation, pathological classification, molecular subtyping, staging, and response assessment of lung cancer, as well as in survival and prognosis prediction, and can provide clinicians with a powerful tool to support critical daily clinical decisions. However, ML still has shortcomings, such as somewhat limited repeatability and reliability.
Affiliation(s)
- Lili Yuan: Jining NO.1 People’s Hospital, Shandong First Medical University, Jining, People’s Republic of China
- Lin An: Jining NO.1 People’s Hospital, Shandong First Medical University, Jining, People’s Republic of China
- Yandong Zhu: Jining NO.1 People’s Hospital, Shandong First Medical University, Jining, People’s Republic of China
- Chongling Duan: Jining NO.1 People’s Hospital, Shandong First Medical University, Jining, People’s Republic of China
- Weixiang Kong: Jining NO.1 People’s Hospital, Shandong First Medical University, Jining, People’s Republic of China
- Pei Jiang: Translational Pharmaceutical Laboratory, Jining NO.1 People’s Hospital, Shandong First Medical University, Jining, People’s Republic of China
- Qing-Qing Yu: Jining NO.1 People’s Hospital, Shandong First Medical University, Jining, People’s Republic of China
2. Patel K, Huang S, Rashid A, Varghese B, Gholamrezanezhad A. A Narrative Review of the Use of Artificial Intelligence in Breast, Lung, and Prostate Cancer. Life (Basel) 2023; 13:2011. [PMID: 37895393] [PMCID: PMC10608739] [DOI: 10.3390/life13102011]
Abstract
Artificial intelligence (AI) has become an important topic within radiology. AI is currently used clinically to assist with the detection of lesions through computer-aided detection systems, and a number of recent studies have demonstrated the growing value of neural networks in radiology. With screening requirements for cancers increasing, this review examines the accuracy of the numerous AI models used in the detection and diagnosis of breast, lung, and prostate cancers. It summarizes pertinent findings from the reviewed articles and analyzes their relevance to clinical radiology. The review found that although AI is improving continually in radiology, AI alone does not surpass the effectiveness of a radiologist; moreover, there are multiple competing approaches to integrating AI into a radiologist's workflow.
Affiliation(s)
- Kishan Patel: Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
- Sherry Huang: Department of Urology, University of Pittsburgh Medical Center, Pittsburgh, PA 15213, USA
- Arnav Rashid: Department of Biological Sciences, Dana and David Dornsife College of Letters, Arts and Sciences, University of Southern California, Los Angeles, CA 90089, USA
- Bino Varghese: Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
- Ali Gholamrezanezhad: Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
3. Sfayyih AH, Sulaiman N, Sabry AH. A review on lung disease recognition by acoustic signal analysis with deep learning networks. J Big Data 2023; 10:101. [PMID: 37333945] [PMCID: PMC10259357] [DOI: 10.1186/s40537-023-00762-z]
Abstract
Recently, assistive tools for difficult problems in healthcare have become viable thanks in considerable part to technologies such as deep learning and machine learning. Using acoustic analysis and medical imaging, these technologies also increase predictive accuracy for prompt, early disease detection, and medical professionals welcome such support because it helps them manage more patients despite the shortage of skilled human resources. Alongside serious illnesses such as lung cancer and other respiratory diseases, the prevalence of breathing difficulties is steadily rising and endangering society. Because early prediction and immediate treatment are crucial for respiratory disorders, chest X-rays and respiratory sound recordings together are proving quite helpful. In contrast to the many review studies on lung disease classification and detection using deep learning algorithms, only two review studies based on signal analysis for lung disease diagnosis had been conducted, in 2011 and 2018. This work therefore provides a review of lung disease recognition by acoustic signal analysis with deep learning networks. We anticipate that physicians and researchers working with sound-signal-based machine learning will find this material beneficial.
Affiliation(s)
- Alyaa Hamel Sfayyih: Department of Electrical and Electronic Engineering, Faculty of Engineering, Universiti Putra Malaysia, 43400 Serdang, Malaysia
- Nasri Sulaiman: Department of Electrical and Electronic Engineering, Faculty of Engineering, Universiti Putra Malaysia, 43400 Serdang, Malaysia
- Ahmad H. Sabry: Department of Computer Engineering, Al-Nahrain University, Al Jadriyah Bridge, 64074 Baghdad, Iraq
4. Sfayyih AH, Sabry AH, Jameel SM, Sulaiman N, Raafat SM, Humaidi AJ, Kubaiaisi YMA. Acoustic-Based Deep Learning Architectures for Lung Disease Diagnosis: A Comprehensive Overview. Diagnostics (Basel) 2023; 13:1748. [PMID: 37238233] [DOI: 10.3390/diagnostics13101748]
Abstract
Lung auscultation has long been a valuable medical tool for assessing respiratory health and has attracted considerable attention in recent years, notably following the coronavirus epidemic. Modern technological progress has driven the growth of computer-based respiratory sound analysis, a valuable tool for detecting lung abnormalities and diseases. Several recent studies have reviewed this important area, but none is specific to lung-sound-based analysis with deep learning architectures, and the information they provide is not sufficient for a good understanding of these techniques. This paper gives a complete review of prior deep-learning-based architectures for lung sound analysis. Deep-learning-based respiratory sound analysis articles were gathered from databases including PLOS, the ACM Digital Library, Elsevier, PubMed, MDPI, Springer, and IEEE; more than 160 publications were extracted and assessed. The paper discusses trends in pathology and lung sounds, the common features used to classify lung sounds, the datasets considered, classification methods, signal processing techniques, and statistical information based on previous study findings. Finally, it concludes with a discussion of potential future improvements and recommendations.
Affiliation(s)
- Alyaa Hamel Sfayyih: Department of Electrical and Electronic Engineering, Faculty of Engineering, University Putra Malaysia, Serdang 43400, Malaysia
- Ahmad H Sabry: Department of Computer Engineering, Al-Nahrain University, Al Jadriyah Bridge, Baghdad 64074, Iraq
- Nasri Sulaiman: Department of Electrical and Electronic Engineering, Faculty of Engineering, University Putra Malaysia, Serdang 43400, Malaysia
- Safanah Mudheher Raafat: Department of Control and Systems Engineering, University of Technology, Baghdad 10011, Iraq
- Amjad J Humaidi: Department of Control and Systems Engineering, University of Technology, Baghdad 10011, Iraq
- Yasir Mahmood Al Kubaiaisi: Department of Sustainability Management, Dubai Academic Health Corporation, Dubai 4545, United Arab Emirates
5. Park J, Kang SK, Hwang D, Choi H, Ha S, Seo JM, Eo JS, Lee JS. Automatic Lung Cancer Segmentation in [18F]FDG PET/CT Using a Two-Stage Deep Learning Approach. Nucl Med Mol Imaging 2023; 57:86-93. [PMID: 36998591] [PMCID: PMC10043063] [DOI: 10.1007/s13139-022-00745-7]
Abstract
Purpose: Accurate lung cancer segmentation is required to determine the functional volume of a tumor in [18F]FDG PET/CT, so we propose a two-stage U-Net architecture to enhance the performance of lung cancer segmentation using [18F]FDG PET/CT. Methods: Whole-body [18F]FDG PET/CT scan data of 887 patients with lung cancer were retrospectively used for network training and evaluation. The ground-truth tumor volume of interest (VOI) was drawn using the LifeX software. The dataset was randomly partitioned into training, validation, and test sets: of the 887 PET/CT and VOI datasets, 730 were used to train the proposed models, 81 served as the validation set, and the remaining 76 were used to evaluate the model. In Stage 1, a global U-Net receives the 3D PET/CT volume as input and extracts the preliminary tumor area, generating a 3D binary volume as output. In Stage 2, a regional U-Net receives eight consecutive PET/CT slices around the slice selected by the global U-Net in Stage 1 and generates a 2D binary image as output. Results: The proposed two-stage U-Net architecture outperformed the conventional one-stage 3D U-Net in primary lung cancer segmentation, and successfully predicted the detailed margins of tumors that had been determined by manually drawing spherical VOIs and applying an adaptive threshold. Quantitative analysis using the Dice similarity coefficient confirmed the advantages of the two-stage U-Net. Conclusion: The proposed method will be useful for reducing the time and effort required for accurate lung cancer segmentation in [18F]FDG PET/CT.
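The Dice similarity coefficient used above to compare segmentations can be computed directly from two binary masks. A minimal pure-Python sketch (the example masks are illustrative, not data from the study):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks.

    pred, truth: flat sequences of 0/1 voxel labels of equal length.
    Returns 2*|A∩B| / (|A|+|B|); defined as 1.0 when both masks are empty.
    """
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

# Two toy 6-voxel masks: overlap of 2 voxels, 3 positives each.
pred = [0, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 0, 1, 0]
print(dice_coefficient(pred, truth))  # 2*2/(3+3) ≈ 0.667
```

In practice the masks would be flattened 3D volumes; the formula is unchanged.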
Affiliation(s)
- Junyoung Park: Department of Electrical and Computer Engineering, Seoul National University College of Engineering, Seoul 08826, Korea; Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 03080, Korea
- Seung Kwan Kang: Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 03080, Korea; Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul 03080, Korea; Artificial Intelligence Institute, Seoul National University, Seoul 08826, Korea; Brightonix Imaging Inc., Seoul 03080, Korea
- Donghwi Hwang: Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 03080, Korea; Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul 03080, Korea; Artificial Intelligence Institute, Seoul National University, Seoul 08826, Korea
- Hongyoon Choi: Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 03080, Korea
- Seunggyun Ha: Division of Nuclear Medicine, Department of Radiology, Seoul St Mary’s Hospital, The Catholic University of Korea, Seoul 06591, Korea
- Jong Mo Seo: Department of Electrical and Computer Engineering, Seoul National University College of Engineering, Seoul 08826, Korea
- Jae Seon Eo: Department of Nuclear Medicine, Korea University Guro Hospital, 148 Gurodong-ro, Guro-gu, Seoul 08308, Korea
- Jae Sung Lee: Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 03080, Korea; Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul 03080, Korea; Artificial Intelligence Institute, Seoul National University, Seoul 08826, Korea; Brightonix Imaging Inc., Seoul 03080, Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul 03080, Korea
6. Weikert T, Jaeger PF, Yang S, Baumgartner M, Breit HC, Winkel DJ, Sommer G, Stieltjes B, Thaiss W, Bremerich J, Maier-Hein KH, Sauter AW. Automated lung cancer assessment on 18F-PET/CT using Retina U-Net and anatomical region segmentation. Eur Radiol 2023; 33:4270-4279. [PMID: 36625882] [PMCID: PMC10182147] [DOI: 10.1007/s00330-022-09332-y]
Abstract
OBJECTIVES: To develop and test a Retina U-Net algorithm for the detection of primary lung tumors and associated metastases of all stages on FDG-PET/CT. METHODS: A dataset consisting of 364 FDG-PET/CTs of patients with histologically confirmed lung cancer of all stages was used for algorithm development and internal testing. All lung tumors (T), lymphatic metastases (N), and distant metastases (M) were manually segmented as 3D volumes using whole-body PET/CT series. The dataset was split into training (n = 216), validation (n = 74), and internal test (n = 74) sets. Detection performance for all lesion types at multiple classifier thresholds was evaluated and false-positive findings per case (FP/c) were calculated. Next, detected lesions were assigned to categories T, N, or M using an automated anatomical region segmentation, and the reasons for false positives were visually assessed and analyzed. Finally, performance was tested on 20 PET/CTs from another institution. RESULTS: Sensitivity for T lesions was 86.2% (95% CI: 77.2-92.7) at an FP/c of 2.0 on the internal test set. The anatomical correlate of most false positives was physiological bone marrow activity (16.8%). TNM categorization based on the anatomical region approach was correct for 94.3% of lesions. Performance on the external test set confirmed the good performance of the algorithm (overall detection rate = 88.8% (95% CI: 82.5-93.5%), FP/c = 2.7). CONCLUSIONS: Retina U-Nets are a valuable tool for tumor detection tasks on PET/CT and can form the backbone of reading-assistance tools in this field. False positives have anatomical correlates that can point the way to further algorithm improvements. The code is publicly available. KEY POINTS:
• Detection of malignant lesions in PET/CT with Retina U-Net is feasible.
• All false-positive findings had anatomical correlates, physiological bone marrow activity being the most prevalent.
• Retina U-Nets can form the backbone of tools assisting imaging professionals in lung tumor staging.
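The reported per-lesion sensitivity and FP/c combine true positives, false negatives, and false positives in straightforward arithmetic. A small sketch (the lesion counts below are hypothetical values chosen only so the result matches the reported 86.2% and 2.0, not the study's actual counts):

```python
def detection_metrics(tp, fn, fp, n_cases):
    """Per-lesion sensitivity and false positives per case (FP/c).

    tp: detected true lesions; fn: missed lesions;
    fp: spurious detections; n_cases: number of studies read.
    """
    sensitivity = tp / (tp + fn)      # fraction of real lesions found
    fp_per_case = fp / n_cases        # average spurious findings per study
    return sensitivity, fp_per_case

# Hypothetical counts: 50 of 58 lesions found, 148 FPs across 74 cases.
sens, fpc = detection_metrics(tp=50, fn=8, fp=148, n_cases=74)
print(f"sensitivity={sens:.1%}, FP/c={fpc:.1f}")  # sensitivity=86.2%, FP/c=2.0
```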
Affiliation(s)
- T Weikert: Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- P F Jaeger: Division of Medical Image Computing, German Cancer Research Center, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
- S Yang: Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- M Baumgartner: Division of Medical Image Computing, German Cancer Research Center, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
- H C Breit: Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- D J Winkel: Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- G Sommer: Institute of Radiology and Nuclear Medicine, Hirslanden Klinik St. Anna, St. Anna-Strasse 32, 6006 Lucerne, Switzerland
- B Stieltjes: Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- W Thaiss: Department of Nuclear Medicine, University Hospital Ulm, Albert-Einstein-Allee 23, 89081 Ulm, Germany
- J Bremerich: Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- K H Maier-Hein: Division of Medical Image Computing, German Cancer Research Center, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany; Department of Radiation Oncology, Pattern Analysis and Learning Group, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany
- A W Sauter: Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
7.

8. Borrelli P, Góngora JLL, Kaboteh R, Enqvist O, Edenbrandt L. Automated Classification of PET-CT Lesions in Lung Cancer: An Independent Validation Study. Clin Physiol Funct Imaging 2022; 42:327-332. [PMID: 35760559] [PMCID: PMC9540653] [DOI: 10.1111/cpf.12773]
Abstract
Introduction: Recently, a tool called the positron emission tomography (PET)-assisted reporting system (PARS) was developed and presented to classify lesions in PET/computed tomography (CT) studies in patients with lung cancer or lymphoma. The aim of this study was to validate PARS in an independent group of lung cancer patients using manual lesion segmentations as a reference standard, and to evaluate the association between PARS-based measurements and overall survival (OS). Methods: This study retrospectively included 115 patients who had undergone clinically indicated [18F]-fluorodeoxyglucose (FDG) PET/CT for suspected or known lung cancer. The patients had a median age of 66 years (interquartile range [IQR]: 61-72 years). Segmentations were made manually by visual inspection in a consensus reading by two nuclear medicine specialists and used as the reference. The research prototype PARS was used to automatically analyse all the PET/CT studies, and the PET foci classified as suspicious by PARS were compared with the manual segmentations; no manual corrections were applied. Total lesion glycolysis (TLG) was calculated from both the manual and the PARS-based lung tumour segmentations, and associations between TLG and OS were investigated using Cox analysis. Results: PARS showed sensitivities for lung tumours of 55.6% per lesion and 80.2% per patient. Both manual and PARS TLG were significantly associated with OS. Conclusion: TLG calculated automatically by PARS contains prognostic information comparable to manually measured TLG in patients with known or suspected lung cancer, but the low sensitivity at both the lesion and patient levels makes the present version of PARS less useful for supporting clinical reading, reporting and staging.
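Total lesion glycolysis, the quantity compared above, is conventionally the metabolic tumour volume multiplied by the mean SUV over the segmented voxels. A minimal sketch of that arithmetic (the SUV values and voxel volume below are illustrative, not data from the study):

```python
def total_lesion_glycolysis(suv_values, voxel_volume_ml):
    """TLG = metabolic tumour volume (ml) x mean SUV of segmented voxels.

    suv_values: SUVs of every voxel inside the lesion segmentation.
    voxel_volume_ml: volume of one voxel in millilitres.
    """
    mtv = len(suv_values) * voxel_volume_ml           # metabolic tumour volume
    suv_mean = sum(suv_values) / len(suv_values)      # mean uptake in lesion
    return mtv * suv_mean

# Toy lesion: 3 voxels of 0.1 ml with SUVs 4, 6, 8 -> MTV 0.3 ml, SUVmean 6.
print(total_lesion_glycolysis([4.0, 6.0, 8.0], 0.1))  # ≈ 1.8
```

Whether the segmentation is manual or automatic only changes which voxels enter `suv_values`; the TLG formula itself is the same, which is what makes the manual-vs-PARS comparison direct.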
Affiliation(s)
- Pablo Borrelli: Region Västra Götaland, Sahlgrenska University Hospital, Department of Clinical Physiology, Gothenburg, Sweden
- José Luis Loaiza Góngora: Region Västra Götaland, Sahlgrenska University Hospital, Department of Clinical Physiology, Gothenburg, Sweden
- Reza Kaboteh: Region Västra Götaland, Sahlgrenska University Hospital, Department of Clinical Physiology, Gothenburg, Sweden
- Lars Edenbrandt: Region Västra Götaland, Sahlgrenska University Hospital, Department of Clinical Physiology, Gothenburg, Sweden; Department of Molecular and Clinical Medicine, Institute of Medicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
9. Hamdeh A, Househ M, Abd-alrazaq A, Muchori G, Al-saadi A, Alzubaidi M. Artificial Intelligence and the diagnosis of lung cancer in early stage: scoping review (Preprint). [DOI: 10.2196/preprints.38773]
Abstract
BACKGROUND
Lung cancer is considered the most fatal of all diagnosable cancers, in part because of the difficulty of detecting it at an early stage. Moreover, approximately one in five individuals who develop lung cancer will die as a result of a misdiagnosis. Fortunately, machine learning (ML) and deep learning (DL) are considered promising approaches to the detection of lung cancer through developments in radiology.
OBJECTIVE
The purpose of this paper is to review how AI can assist in identifying and diagnosing lung cancer at an early stage.
METHODS
The PRISMA framework was followed, and studies were retrieved from four databases: Google Scholar, PubMed, EMBASE, and the Institute of Electrical and Electronics Engineers (IEEE). Two phases of screening were implemented to determine relevant literature: first reading the title and abstract, then reading the full text. Both steps were conducted independently by three reviewers. Finally, the three authors used a narrative synthesis to present the data.
RESULTS
Overall, 543 potential studies were retrieved from the four databases. After screening, 26 articles that met the inclusion criteria were included in this scoping review. Several articles used private data, including patient records, while others drew on public sources; 15 articles (58%) used data from the UCI repository. CT imaging was used in nine studies: plain CT in five articles (19%), CT with PET in two (7.7%), and FDG with CT in two (7.7%). Two articles (7.7%) also used demographic data such as age, sex, and educational background.
CONCLUSIONS
This scoping review illustrates recent studies that use AI models to diagnose lung cancer. The literature currently relies on private and public databases and compares models against physicians or other machine learning technology. Additional studies should be conducted to explore the efficacy of these technologies in clinical settings.
10. Protonotarios NE, Katsamenis I, Sykiotis S, Dikaios N, Kastis GA, Chatziioannou SN, Metaxas M, Doulamis N, Doulamis A. A few-shot U-Net deep learning model for lung cancer lesion segmentation via PET/CT imaging. Biomed Phys Eng Express 2022; 8. [PMID: 35144242] [DOI: 10.1088/2057-1976/ac53bd]
Abstract
Over the past few years, positron emission tomography/computed tomography (PET/CT) imaging for computer-aided diagnosis has received increasing attention. Supervised deep learning architectures are usually employed for the detection of abnormalities, with anatomical localization, especially in the case of CT scans. However, the main limitations of the supervised learning paradigm are (i) the large amounts of data required for model training, and (ii) the assumption of fixed network weights upon training completion, implying that the performance of the model cannot be further improved after training. To overcome these limitations, we apply a few-shot learning (FSL) scheme. Contrary to traditional deep learning practice, in FSL the model is provided with less data during training and then utilizes end-user feedback after training to constantly improve its performance. We integrate FSL into a U-Net architecture for lung cancer lesion segmentation on PET/CT scans, allowing dynamic fine-tuning of the model weights and resulting in an online supervised learning scheme. Constant online readjustment of the model weights according to the user's feedback increases detection and classification accuracy, especially in cases where low detection performance is encountered. Our proposed method is validated on the Lung-PET-CT-DX TCIA database; PET/CT scans from 87 patients were included in the dataset, acquired 60 minutes after intravenous 18F-FDG injection. Experimental results indicate the superiority of our approach compared with other state-of-the-art methods.
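The online weight-readjustment idea can be illustrated with a toy stand-in for a network: a single decision threshold that is nudged toward each example the user corrects. This is a conceptual sketch of feedback-driven updating only, not the paper's U-Net; the class, parameter values, and update rule are all illustrative:

```python
class OnlineThresholdSegmenter:
    """Toy analogue of online fine-tuning: one SUV threshold acts as the
    'model weights' and is adjusted whenever user feedback contradicts
    the current prediction."""

    def __init__(self, threshold=2.5, lr=0.2):
        self.threshold = threshold  # decision boundary on voxel SUV
        self.lr = lr                # how strongly feedback moves the boundary

    def predict(self, suv):
        """Classify a voxel as lesion (True) or background (False)."""
        return suv >= self.threshold

    def feedback(self, suv, is_lesion):
        """User correction: shift the boundary toward the misclassified value."""
        if self.predict(suv) != is_lesion:
            direction = -1 if is_lesion else 1
            self.threshold += direction * self.lr * abs(suv - self.threshold)

model = OnlineThresholdSegmenter()
model.feedback(2.0, is_lesion=True)   # a missed lesion lowers the threshold
print(model.threshold)                # 2.5 -> 2.4
```

A real FSL scheme updates thousands of network weights by gradient steps rather than one scalar, but the loop is the same: predict, collect the user's correction, adjust, repeat.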
Affiliation(s)
- Nicholas E Protonotarios: Department of Applied Mathematics and Theoretical Physics (DAMTP), University of Cambridge, Cambridge CB3 0WA, United Kingdom
- Iason Katsamenis: School of Rural and Surveying Engineering, National Technical University of Athens, 9 Heroon Polytechniou, Zografou, Attica 157 73, Greece
- Stavros Sykiotis: School of Rural and Surveying Engineering, National Technical University of Athens, 9 Heroon Polytechniou, Zografou, Attica 157 73, Greece
- Nikolaos Dikaios: Mathematics Research Center, Academy of Athens, 4 Soranou Efesiou, Athens 115 27, Greece
- George Anthony Kastis: Mathematics Research Center, Academy of Athens, 4 Soranou Efesiou, Athens, Attica 115 27, Greece
- Sofia N Chatziioannou: PET/CT, Biomedical Research Foundation of the Academy of Athens, 4 Soranou Efesiou, Athens, Attica 115 27, Greece
- Marinos Metaxas: PET/CT, Biomedical Research Foundation of the Academy of Athens, 4 Soranou Efesiou, Athens, Attica 115 27, Greece
- Nikolaos Doulamis: School of Rural and Surveying Engineering, National Technical University of Athens, 9 Heroon Polytechniou, Zografou, Attica 157 73, Greece
- Anastasios Doulamis: School of Rural and Surveying Engineering, National Technical University of Athens, 9 Heroon Polytechniou, Zografou, Attica 157 73, Greece
11. Freely available convolutional neural network-based quantification of PET/CT lesions is associated with survival in patients with lung cancer. EJNMMI Phys 2022; 9:6. [PMID: 35113252] [PMCID: PMC8814082] [DOI: 10.1186/s40658-022-00437-3]
Abstract
Background: Metabolic positron emission tomography/computed tomography (PET/CT) parameters describing tumour activity contain valuable prognostic information, but performing the measurements manually leads to both intra- and inter-reader variability and is too time-consuming for clinical practice. The use of modern artificial-intelligence-based methods offers new possibilities for automated and objective image analysis of PET/CT data.
Purpose: We aimed to train a convolutional neural network (CNN) to segment and quantify tumour burden in [18F]-fluorodeoxyglucose (FDG) PET/CT images and to evaluate the association between CNN-based measurements and overall survival (OS) in patients with lung cancer. A secondary aim was to make the method available to other researchers. Methods: A total of 320 consecutive patients referred for FDG PET/CT due to suspected lung cancer were retrospectively selected for this study. Two nuclear medicine specialists manually segmented abnormal FDG uptake in all of the PET/CT studies. One-third of the patients were assigned to a test group, for which survival data were collected. The CNN was trained to segment lung tumours and thoracic lymph nodes, and total lesion glycolysis (TLG) was calculated from the CNN-based and manual segmentations. Associations between TLG and OS were investigated using a univariate Cox proportional hazards regression model. Results: The test group comprised 106 patients (median age, 76 years (IQR 61-79); n = 59 female). Both CNN-based TLG (hazard ratio 1.64, 95% confidence interval 1.21-2.21; p = 0.001) and manual TLG (hazard ratio 1.54, 95% confidence interval 1.14-2.07; p = 0.004) estimates were significantly associated with OS. Conclusion: Fully automated CNN-based TLG measurements of PET/CT data were significantly associated with OS in patients with lung cancer. This type of measurement may be of value for the management of future patients with lung cancer. The CNN is publicly available for research purposes.
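The univariate Cox model used above scores a coefficient β for a covariate such as TLG by the partial likelihood over the risk set at each event time; exp(β) is then the reported hazard ratio. A minimal Breslow-style sketch of that partial log-likelihood (the toy survival data are illustrative, not from the study):

```python
import math

def cox_partial_loglik(beta, times, events, x):
    """Breslow partial log-likelihood for a univariate Cox model.

    times:  follow-up time per patient.
    events: 1 if death observed, 0 if censored.
    x:      the single covariate (e.g. TLG) per patient.
    For each observed event, the contribution is the log of the relative
    hazard of the patient who died versus everyone still at risk.
    """
    ll = 0.0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue  # censored observations contribute only via risk sets
        risk_sum = sum(math.exp(beta * x[j]) for j in range(n)
                       if times[j] >= times[i])
        ll += beta * x[i] - math.log(risk_sum)
    return ll

# At beta = 0 every patient has equal hazard, so each event contributes
# -log(size of its risk set): here -(log 2 + log 3) = -log 6.
print(cox_partial_loglik(0.0, [5, 3, 8], [1, 1, 0], [1.0, 0.0, 2.0]))
```

A fitting routine would maximize this quantity over β (e.g. by Newton's method); the sketch shows only the objective being maximized.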
12. Yousefirizi F, Decazes P, Amyar A, Ruan S, Saboury B, Rahmim A. AI-Based Detection, Classification and Prediction/Prognosis in Medical Imaging: Towards Radiophenomics. PET Clin 2021; 17:183-212. [PMID: 34809866] [DOI: 10.1016/j.cpet.2021.09.010]
Abstract
Artificial intelligence (AI) techniques have significant potential to enable effective, robust, and automated image phenotyping including the identification of subtle patterns. AI-based detection searches the image space to find the regions of interest based on patterns and features. There is a spectrum of tumor histologies from benign to malignant that can be identified by AI-based classification approaches using image features. The extraction of minable information from images gives way to the field of "radiomics" and can be explored via explicit (handcrafted/engineered) and deep radiomics frameworks. Radiomics analysis has the potential to be used as a noninvasive technique for the accurate characterization of tumors to improve diagnosis and treatment monitoring. This work reviews AI-based techniques, with a special focus on oncological PET and PET/CT imaging, for different detection, classification, and prediction/prognosis tasks. We also discuss needed efforts to enable the translation of AI techniques to routine clinical workflows, and potential improvements and complementary techniques such as the use of natural language processing on electronic health records and neuro-symbolic AI techniques.
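The explicit (handcrafted) radiomics frameworks mentioned above start from simple statistics of a lesion's intensity distribution. A minimal sketch computing two common first-order features, mean and histogram entropy (the function, bin count, and intensity list are illustrative, not a specific radiomics library's API):

```python
import math
from collections import Counter

def first_order_features(intensities, n_bins=8):
    """Two handcrafted first-order radiomics features from lesion voxels:
    the mean intensity and the Shannon entropy of a fixed-bin histogram."""
    lo, hi = min(intensities), max(intensities)
    width = (hi - lo) / n_bins or 1.0  # guard against a constant region
    bins = Counter(min(int((v - lo) / width), n_bins - 1) for v in intensities)
    n = len(intensities)
    entropy = -sum((c / n) * math.log2(c / n) for c in bins.values())
    return {"mean": sum(intensities) / n, "entropy": entropy}

# Toy lesion with two equally filled intensity levels -> entropy of 1 bit.
print(first_order_features([1.0, 1.0, 2.0, 2.0], n_bins=2))
```

Full radiomics pipelines add shape and texture features (e.g. from co-occurrence matrices) on top of such first-order statistics, and deep radiomics replaces the handcrafted features with learned ones; the sketch shows only the simplest layer of the explicit approach.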
Collapse
Affiliation(s)
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada.
| | - Pierre Decazes
- Department of Nuclear Medicine, Henri Becquerel Centre, Rue d'Amiens - CS 11516 - 76038 Rouen Cedex 1, France; QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France
| | - Amine Amyar
- QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France; General Electric Healthcare, Buc, France
| | - Su Ruan
- QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France
| | - Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, USA; Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, USA
| | - Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada; Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada; Department of Physics, University of British Columbia, Vancouver, British Columbia, Canada
| |
Collapse
|
13
|
Rosar F, Wenner F, Khreish F, Dewes S, Wagenpfeil G, Hoffmann MA, Schreckenberger M, Bartholomä M, Ezziddin S. Early molecular imaging response assessment based on determination of total viable tumor burden in [ 68Ga]Ga-PSMA-11 PET/CT independently predicts overall survival in [ 177Lu]Lu-PSMA-617 radioligand therapy. Eur J Nucl Med Mol Imaging 2021; 49:1584-1594. [PMID: 34725725 PMCID: PMC8940840 DOI: 10.1007/s00259-021-05594-8] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2021] [Accepted: 10/13/2021] [Indexed: 12/19/2022]
Abstract
Purpose In patients with metastatic castration-resistant prostate cancer (mCRPC) treated with prostate-specific membrane antigen-targeted radioligand therapy (PSMA-RLT), the predictive value of PSMA PET/CT-derived response is still under investigation. Early molecular imaging response based on total viable tumor burden and its association with overall survival (OS) was explored in this study. Methods Sixty-six mCRPC patients who received [177Lu]Lu-PSMA-617 RLT within a prospective patient registry (REALITY Study, NCT04833517) were analyzed. Patients received a [68Ga]Ga-PSMA-11 PET/CT scan before the first and after the second cycle of PSMA-RLT. Total lesion PSMA (TLP) was determined by semiautomatic whole-body tumor segmentation. Molecular imaging response was assessed by change in TLP and modified PERCIST criteria. Biochemical response was assessed using standard serum PSA and PCWG3 criteria. Both response assessment methods and additional baseline parameters were analyzed regarding their association with OS by univariate and multivariable analysis. Results By molecular imaging, 40/66 (60.6%) patients showed partial remission (PR), 19/66 (28.7%) stable disease (SD), and 7/66 (10.6%) progressive disease (PD). Biochemical response assessment revealed PR in 34/66 (51.5%) patients, SD in 20/66 (30.3%), and PD in 12/66 (18.2%). Response assessments were concordant in 49/66 (74.3%) cases. On univariate analysis, both molecular and biochemical response (p = 0.001 and 0.008, respectively) as well as two baseline characteristics (ALP and ECOG) were each significantly associated with OS. The median OS of patients showing molecular PR was 24.6 versus 10.7 months in the remaining patients (with SD or PD). On multivariable analysis, molecular imaging response remained an independent predictor of OS (p = 0.002), whereas biochemical response lost significance (p = 0.515).
Conclusion The new whole-body molecular imaging–derived biomarker, early change of total lesion PSMA (TLP), independently predicts overall survival in [177Lu]Lu-PSMA-617 RLT in mCRPC, outperforming conventional PSA-based response assessment. TLP may therefore be a more refined biomarker for monitoring PSMA-RLT than commonly used serum PSA.
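The abstract assesses molecular imaging response by relative change in TLP under modified PERCIST-style categories (PR/SD/PD). A hedged sketch of such a three-way classification follows; the symmetric ±30% threshold and the function name are illustrative assumptions, since the exact criteria are those defined in the cited paper:

```python
def classify_tlp_response(tlp_baseline, tlp_followup, threshold=0.30):
    """Classify molecular imaging response from whole-body TLP change.

    Returns 'PR' (partial remission), 'PD' (progressive disease),
    or 'SD' (stable disease) using a symmetric relative threshold.
    """
    if tlp_baseline <= 0:
        raise ValueError("baseline TLP must be positive")
    change = (tlp_followup - tlp_baseline) / tlp_baseline
    if change <= -threshold:
        return "PR"
    if change >= threshold:
        return "PD"
    return "SD"
```

Applied per patient, a rule of this shape yields the categorical response variable that the study then compared against PSA-based criteria and entered into the survival analysis.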
Collapse
Affiliation(s)
- Florian Rosar
- Department of Nuclear Medicine, Saarland University - Medical Center, Kirrberger Str. 100, Geb. 50, 66421, Homburg, Germany
| | - Felix Wenner
- Department of Nuclear Medicine, Saarland University - Medical Center, Kirrberger Str. 100, Geb. 50, 66421, Homburg, Germany
| | - Fadi Khreish
- Department of Nuclear Medicine, Saarland University - Medical Center, Kirrberger Str. 100, Geb. 50, 66421, Homburg, Germany
| | - Sebastian Dewes
- Department of Nuclear Medicine, Saarland University - Medical Center, Kirrberger Str. 100, Geb. 50, 66421, Homburg, Germany
| | | | - Manuela A Hoffmann
- Department of Nuclear Medicine, Johannes Gutenberg-University, Mainz, Germany
| | | | - Mark Bartholomä
- Department of Nuclear Medicine, Saarland University - Medical Center, Kirrberger Str. 100, Geb. 50, 66421, Homburg, Germany
| | - Samer Ezziddin
- Department of Nuclear Medicine, Saarland University - Medical Center, Kirrberger Str. 100, Geb. 50, 66421, Homburg, Germany.
| |
Collapse
|