1
Häggström I, Leithner D, Alvén J, Campanella G, Abusamra M, Zhang H, Chhabra S, Beer L, Haug A, Salles G, Raderer M, Staber PB, Becker A, Hricak H, Fuchs TJ, Schöder H, Mayerhoefer ME. Deep learning for [18F]fluorodeoxyglucose-PET-CT classification in patients with lymphoma: a dual-centre retrospective analysis. Lancet Digit Health 2024; 6:e114-e125. PMID: 38135556; DOI: 10.1016/S2589-7500(23)00203-0.
Abstract
BACKGROUND The rising global cancer burden has led to an increasing demand for imaging tests such as [18F]fluorodeoxyglucose ([18F]FDG)-PET-CT. To aid imaging specialists in dealing with high scan volumes, we aimed to train a deep learning artificial intelligence algorithm to classify [18F]FDG-PET-CT scans of patients with lymphoma with or without hypermetabolic tumour sites. METHODS In this retrospective analysis we collected 16 583 [18F]FDG-PET-CTs of 5072 patients with lymphoma who had undergone PET-CT before or after treatment at the Memorial Sloan Kettering Cancer Center, New York, NY, USA. Using maximum intensity projection (MIP), three-dimensional (3D) PET, and 3D CT data, our ResNet34-based deep learning model (Lymphoma Artificial Reader System [LARS]) for [18F]FDG-PET-CT binary classification (Deauville 1-3 vs 4-5) was trained on 80% of the dataset, and tested on 20% of this dataset. For external testing, 1000 [18F]FDG-PET-CTs were obtained from a second centre (Medical University of Vienna, Vienna, Austria). Seven model variants were evaluated, including MIP-based LARS-avg (optimised for accuracy) and LARS-max (optimised for sensitivity), and 3D PET-CT-based LARS-ptct. Following expert curation, areas under the curve (AUCs), accuracies, sensitivities, and specificities were calculated. FINDINGS In the internal test cohort (3325 PET-CTs, 1012 patients), LARS-avg achieved an AUC of 0·949 (95% CI 0·942-0·956), accuracy of 0·890 (0·879-0·901), sensitivity of 0·868 (0·851-0·885), and specificity of 0·913 (0·899-0·925); LARS-max achieved an AUC of 0·949 (0·942-0·956), accuracy of 0·868 (0·858-0·879), sensitivity of 0·909 (0·896-0·924), and specificity of 0·826 (0·808-0·843); and LARS-ptct achieved an AUC of 0·939 (0·930-0·948), accuracy of 0·875 (0·864-0·887), sensitivity of 0·836 (0·817-0·855), and specificity of 0·915 (0·901-0·927).
In the external test cohort (1000 PET-CTs, 503 patients), LARS-avg achieved an AUC of 0·953 (0·938-0·966), accuracy of 0·907 (0·888-0·925), sensitivity of 0·874 (0·843-0·904), and specificity of 0·949 (0·921-0·960); LARS-max achieved an AUC of 0·952 (0·937-0·965), accuracy of 0·898 (0·878-0·916), sensitivity of 0·899 (0·871-0·926), and specificity of 0·897 (0·871-0·922); and LARS-ptct achieved an AUC of 0·932 (0·915-0·948), accuracy of 0·870 (0·850-0·891), sensitivity of 0·827 (0·793-0·863), and specificity of 0·913 (0·889-0·937). INTERPRETATION Deep learning accurately distinguishes between [18F]FDG-PET-CT scans of lymphoma patients with and without hypermetabolic tumour sites. Deep learning might therefore be useful to rule out the presence of metabolically active disease in such patients, or serve as a second reader or decision support tool. FUNDING National Institutes of Health-National Cancer Institute Cancer Center Support Grant.
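The figures reported above can be reproduced from raw model outputs with standard formulas. The following is an illustrative sketch (not the authors' code) of accuracy, sensitivity, specificity, and a rank-based AUC for a binary Deauville 1-3 vs 4-5 task; the labels and scores below are invented toy data.

```python
import numpy as np

def binary_metrics(y_true, y_score, threshold=0.5):
    """Accuracy, sensitivity, specificity, and AUC for binary scan classification."""
    y_pred = (y_score >= threshold).astype(int)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    sensitivity = tp / (tp + fn)   # recall on positive (Deauville 4-5) scans
    specificity = tn / (tn + fp)   # recall on negative (Deauville 1-3) scans
    accuracy = (tp + tn) / len(y_true)
    # AUC via the Mann-Whitney statistic: probability that a random positive
    # scan is scored higher than a random negative one (ties count half).
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    auc = (np.mean(pos[:, None] > neg[None, :])
           + 0.5 * np.mean(pos[:, None] == neg[None, :]))
    return accuracy, sensitivity, specificity, float(auc)

# Toy ground truth and model scores (hypothetical, for illustration only).
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_score = np.array([0.9, 0.8, 0.4, 0.2, 0.6, 0.1, 0.7, 0.3])
acc, sens, spec, auc = binary_metrics(y_true, y_score)
```

In practice these point estimates would be accompanied by confidence intervals, as in the study above (e.g. via bootstrapping).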
Affiliation(s)
- Ida Häggström
- Department of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Doris Leithner
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA; Department of Radiology, NYU Langone Health, Grossman School of Medicine, New York, NY, USA
- Jennifer Alvén
- Department of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden
- Gabriele Campanella
- Hasso Plattner Institute for Digital Health, Mount Sinai Medical School, New York, NY, USA; Department of AI and Human Health, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Murad Abusamra
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Honglei Zhang
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Shalini Chhabra
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Lucian Beer
- Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
- Alexander Haug
- Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
- Gilles Salles
- Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, NY, USA; Weill Cornell Medical College, Cornell University, New York, NY, USA
- Markus Raderer
- Department of Medicine I, Medical University of Vienna, Vienna, Austria
- Philipp B Staber
- Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Anton Becker
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA; Weill Cornell Medical College, Cornell University, New York, NY, USA; Department of Radiology, NYU Langone Health, Grossman School of Medicine, New York, NY, USA
- Hedvig Hricak
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA; Weill Cornell Medical College, Cornell University, New York, NY, USA
- Thomas J Fuchs
- Hasso Plattner Institute for Digital Health, Mount Sinai Medical School, New York, NY, USA; Department of AI and Human Health, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Heiko Schöder
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA; Weill Cornell Medical College, Cornell University, New York, NY, USA
- Marius E Mayerhoefer
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA; Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria; Weill Cornell Medical College, Cornell University, New York, NY, USA; Department of Radiology, NYU Langone Health, Grossman School of Medicine, New York, NY, USA.
2
Yuan J, Zhang Y, Wang X. Application of machine learning in the management of lymphoma: Current practice and future prospects. Digit Health 2024; 10:20552076241247963. PMID: 38628632; PMCID: PMC11020711; DOI: 10.1177/20552076241247963.
Abstract
In the past decade, the digitization of medical records and multiomics data analysis in lymphoma have made high-dimensional records widely accessible. The digitization of medical records, the visualization of extensive volume data extracted from medical images, and the integration of multiomics methods into clinical decision-making have produced many datasets. As a promising auxiliary tool, machine learning (ML) aims to extract shared features from large-scale datasets and encode them into patterns that support complex tasks. At present, artificial intelligence and digital mining show promise in lymphoma pathological image analysis. The paradigm shift from qualitative to quantitative analysis makes pathological diagnosis more intelligent and its results more accurate and objective. ML can promote accurate lymphoma diagnosis and provide patients with prognostic information and more individualized treatment options. On this basis, this comprehensive review of the general ML workflow highlights recent advances in ML techniques for the diagnosis, treatment, and prognosis of lymphoma, and clarifies the limitations and future directions of ML in the clinical practice of lymphoma.
Affiliation(s)
- Junyun Yuan
- Department of Hematology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, China
- Ya Zhang
- Department of Hematology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, China
- Department of Hematology, Shandong Provincial Hospital, Shandong University, Jinan, Shandong, China
- Taishan Scholars Program of Shandong Province, Jinan, Shandong, China
- Xin Wang
- Department of Hematology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, China
- Department of Hematology, Shandong Provincial Hospital, Shandong University, Jinan, Shandong, China
- Taishan Scholars Program of Shandong Province, Jinan, Shandong, China
- Branch of National Clinical Research Center for Hematologic Diseases, Jinan, Shandong, China
- National Clinical Research Center for Hematologic Diseases, Hospital of Soochow University, Suzhou, China
3
Dai J, Wang H, Xu Y, Chen X, Tian R. Clinical application of AI-based PET images in oncological patients. Semin Cancer Biol 2023; 91:124-142. PMID: 36906112; DOI: 10.1016/j.semcancer.2023.03.005.
Abstract
By revealing the functional status and molecular expression of tumor cells, positron emission tomography (PET) imaging has been performed in numerous types of malignant disease for diagnosis and monitoring. However, insufficient image quality, the lack of a convincing evaluation tool, and intra- and interobserver variation are well-known limitations of nuclear medicine imaging and restrict its clinical application. Artificial intelligence (AI) has gained increasing interest in the field of medical imaging due to its powerful information collection and interpretation ability. The combination of AI and PET imaging potentially provides great assistance to physicians managing patients. Radiomics, an important branch of AI applied in medical imaging, can extract hundreds of abstract mathematical features from images for further analysis. In this review, an overview of the applications of AI in PET imaging is provided, focusing on image enhancement, tumor detection, response and prognosis prediction, and correlation analyses with pathology or specific gene mutations in several types of tumors. Our aim is to describe recent clinical applications of AI-based PET imaging in malignant diseases and possible future developments.
Affiliation(s)
- Jiaona Dai
- Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China
- Hui Wang
- Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China
- Yuchao Xu
- School of Nuclear Science and Technology, University of South China, Hengyang City 421001, China
- Xiyang Chen
- Division of Vascular Surgery, Department of General Surgery, West China Hospital, Sichuan University, Chengdu 610041, China.
- Rong Tian
- Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China.
4
A review of deep learning-based multiple-lesion recognition from medical images: classification, detection and segmentation. Comput Biol Med 2023; 157:106726. PMID: 36924732; DOI: 10.1016/j.compbiomed.2023.106726.
Abstract
Deep learning-based methods have become the dominant methodology in medical image processing with the advancement of deep learning in natural image classification, detection, and segmentation. Deep learning-based approaches have proven quite effective in single-lesion recognition and segmentation. Multiple-lesion recognition is more difficult than single-lesion recognition because the variation between lesions can be small or the range of lesions involved very wide. Several studies have recently explored deep learning-based algorithms to address the multiple-lesion recognition challenge. This paper provides an in-depth overview and analysis of deep learning-based methods for multiple-lesion recognition developed in recent years, including multiple-lesion recognition in diverse body areas and recognition of whole-body multiple diseases. We discuss the challenges that persist in multiple-lesion recognition tasks by critically assessing these efforts. Finally, we outline existing problems and potential future research areas, with the hope that this review will help researchers develop future approaches that drive additional advances.
5
Mackay K, Bernstein D, Glocker B, Kamnitsas K, Taylor A. A Review of the Metrics Used to Assess Auto-Contouring Systems in Radiotherapy. Clin Oncol (R Coll Radiol) 2023; 35:354-369. PMID: 36803407; DOI: 10.1016/j.clon.2023.01.016.
Abstract
Auto-contouring could revolutionise future planning of radiotherapy treatment. The lack of consensus on how to assess and validate auto-contouring systems currently limits clinical use. This review formally quantifies the assessment metrics used in studies published during one calendar year and assesses the need for standardised practice. A PubMed literature search was undertaken for papers evaluating radiotherapy auto-contouring published during 2021. Papers were assessed for types of metric and the methodology used to generate ground-truth comparators. Our PubMed search identified 212 studies, of which 117 met the criteria for clinical review. Geometric assessment metrics were used in 116 of 117 studies (99.1%), including the Dice Similarity Coefficient, used in 113 (96.6%) studies. Clinically relevant metrics, such as qualitative, dosimetric and time-saving metrics, were used less frequently, in 22 (18.8%), 27 (23.1%) and 18 (15.4%) of 117 studies, respectively. There was heterogeneity within each category of metric. Over 90 different names for geometric measures were used. Methods for qualitative assessment differed in all but two papers. Variation existed in the methods used to generate radiotherapy plans for dosimetric assessment. Editing time was considered in only 11 (9.4%) papers. A single manual contour was used as a ground-truth comparator in 65 (55.6%) studies. Only 31 (26.5%) studies compared auto-contours to usual inter- and/or intra-observer variation. In conclusion, significant variation exists in how research papers currently assess the accuracy of automatically generated contours. Geometric measures are the most popular; however, their clinical utility is unknown. There is heterogeneity in the methods used to perform clinical assessment. Considering the different stages of system implementation may provide a framework to decide the most appropriate metrics.
This analysis supports the need for a consensus on the clinical implementation of auto-contouring.
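The Dice Similarity Coefficient, the geometric metric this review found in 113 of 117 studies, measures the voxel overlap between an auto-contour and a ground-truth contour. A minimal sketch, with invented toy masks standing in for contours:

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient: 2|A∩B| / (|A|+|B|), in [0, 1]."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 2D masks; real auto-contouring studies compare 3D volumes.
auto = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
manual = np.array([[1, 1, 0], [0, 0, 1], [0, 0, 0]])
```

As the review notes, a high geometric score like this does not by itself establish dosimetric or time-saving benefit.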
Affiliation(s)
- K Mackay
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK.
- D Bernstein
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK
- B Glocker
- Department of Computing, Imperial College London, South Kensington Campus, London, UK
- K Kamnitsas
- Department of Computing, Imperial College London, South Kensington Campus, London, UK; Department of Engineering Science, University of Oxford, Oxford, UK
- A Taylor
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK
6
Kotsyfakis S, Iliaki-Giannakoudaki E, Anagnostopoulos A, Papadokostaki E, Giannakoudakis K, Goumenakis M, Kotsyfakis M. The application of machine learning to imaging in hematological oncology: A scoping review. Front Oncol 2022; 12:1080988. PMID: 36605438; PMCID: PMC9808781; DOI: 10.3389/fonc.2022.1080988.
Abstract
Background Here, we conducted a scoping review to (i) establish which machine learning (ML) methods have been applied to hematological malignancy imaging; (ii) establish how ML is being applied to hematological cancer radiology; and (iii) identify addressable research gaps. Methods The review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Extension for Scoping Reviews guidelines. The inclusion criteria were (i) pediatric and adult patients with suspected or confirmed hematological malignancy undergoing imaging (population); (ii) any study using ML techniques to derive models using radiological images to apply to the clinical management of these patients (concept); and (iii) original research articles conducted in any setting globally (context). Quality Assessment of Diagnostic Accuracy Studies 2 criteria were used to assess diagnostic and segmentation studies, while the Newcastle-Ottawa scale was used to assess the quality of observational studies. Results Of 53 eligible studies, 33 applied diverse ML techniques to diagnose hematological malignancies or to differentiate them from other diseases, especially discriminating gliomas from primary central nervous system lymphomas (n=18); 11 applied ML to segmentation tasks, while 9 applied ML to prognostication or predicting therapeutic responses, especially for diffuse large B-cell lymphoma. All studies reported discrimination statistics, but no study calculated calibration statistics. Every diagnostic/segmentation study had a high risk of bias due to their case-control design; many studies failed to provide adequate details of the reference standard; and only a few studies used independent validation. 
Conclusion To deliver validated ML-based models to radiologists managing hematological malignancies, future studies should (i) adhere to standardized, high-quality reporting guidelines such as the Checklist for Artificial Intelligence in Medical Imaging; (ii) validate models in independent cohorts; (iii) standardize volume segmentation methods for segmentation tasks; (iv) establish comprehensive prospective studies that include different tumor grades, comparisons with radiologists, optimal imaging modalities, sequences, and planes; (v) include side-by-side comparisons of different methods; and (vi) include low- and middle-income countries in multicentric studies to enhance generalizability and reduce inequity.
Affiliation(s)
- Michail Kotsyfakis
- Biology Center of the Czech Academy of Sciences, Budweis (Ceske Budejovice), Czechia
7
Huang Z, Guo Y, Zhang N, Huang X, Decazes P, Becker S, Ruan S. Multi-scale feature similarity-based weakly supervised lymphoma segmentation in PET/CT images. Comput Biol Med 2022; 151:106230. PMID: 36306574; DOI: 10.1016/j.compbiomed.2022.106230.
Abstract
Accurate lymphoma segmentation in PET/CT images is important for evaluating Diffuse Large B-Cell Lymphoma (DLBCL) prognosis. As systemic multiple lymphomas, DLBCL lesions vary in number and size between patients, which makes DLBCL labeling labor-intensive and time-consuming. To reduce the reliance on accurately labeled datasets, a weakly supervised deep learning method based on multi-scale feature similarity is proposed for automatic lymphoma segmentation. Weak labeling was performed by randomly drawing a small and salient lymphoma volume for each patient without accurate labels. A 3D V-Net is used as the backbone of the segmentation network, and image features extracted in different convolutional layers are fused with the Atrous Spatial Pyramid Pooling (ASPP) module to generate multi-scale feature representations of input images. By imposing multi-scale feature consistency constraints on the predicted tumor regions as well as the labeled tumor regions, weakly labeled data can also be used effectively for network training. The cosine similarity, which generalizes well, is exploited here to measure feature distances. The proposed method is evaluated with a PET/CT dataset of 147 lymphoma patients. Experimental results show that when using data, half of which have accurate labels and the other half weak labels, the proposed method performed similarly to a fully supervised segmentation network and achieved an average Dice Similarity Coefficient (DSC) of 71.47%. The proposed method is able to reduce the requirement for expert annotations in deep learning-based lymphoma segmentation.
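The cosine measure used here for feature distances is simple to sketch. In the toy example below, the feature vectors are invented stand-ins for ASPP-fused multi-scale features of predicted and labeled tumor regions, and the consistency constraint is expressed as one minus their cosine similarity; this is an illustration of the measure, not the paper's implementation:

```python
import numpy as np

def cosine_similarity(f1, f2, eps=1e-8):
    """Cosine similarity between two feature vectors; eps guards against zero norms."""
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2) + eps))

# Hypothetical pooled features for a predicted and a labeled tumor region.
pred_feat = np.array([0.2, 0.9, 0.4])
label_feat = np.array([0.25, 0.85, 0.5])

# A feature-consistency loss term then penalizes dissimilarity.
loss = 1.0 - cosine_similarity(pred_feat, label_feat)
```

Because cosine similarity depends only on direction, not magnitude, it is less sensitive to feature-scale differences across network layers, which is one plausible reason for its strong generalization noted above.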
Affiliation(s)
- Zhengshan Huang
- Department of Biomedical Engineering, School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin, China
- Yu Guo
- Department of Biomedical Engineering, School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin, China.
- Ning Zhang
- Department of Biomedical Engineering, School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin, China
- Xian Huang
- Department of Biomedical Engineering, School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin, China
- Pierre Decazes
- LITIS, University of Rouen Normandy, Rouen, France; Department of Nuclear Medicine, Henri Becquerel Cancer Centre, Rouen, France
- Stephanie Becker
- LITIS, University of Rouen Normandy, Rouen, France; Department of Nuclear Medicine, Henri Becquerel Cancer Centre, Rouen, France
- Su Ruan
- LITIS, University of Rouen Normandy, Rouen, France
8
Wang M, Jiang H, Shi T, Wang Z, Guo J, Lu G, Wang Y, Yao YD. PSR-Nets: Deep neural networks with prior shift regularization for PET/CT based automatic, accurate, and calibrated whole-body lymphoma segmentation. Comput Biol Med 2022; 151:106215. PMID: 36306584; DOI: 10.1016/j.compbiomed.2022.106215.
Abstract
Lymphoma is a cancer originating in lymphatic tissue. Automatic and accurate lymphoma segmentation is critical for its diagnosis and prognosis, yet challenging due to the severe class-imbalance problem. Generally, deep neural networks trained with class-observation-frequency based re-weighting loss functions are used to address this problem. However, the majority class can be under-weighted by them due to the existence of data overlap. Besides, such networks are more mis-calibrated. To resolve these issues, we propose a neural network with prior-shift regularization (PSR-Net), which comprises a UNet-like backbone with re-weighting loss functions and a prior-shift regularization (PSR) module including a prior-shift layer (PSL), a regularizer generation layer (RGL), and an expected prediction confidence updating layer (EPCUL). We first propose a trainable expected prediction confidence (EPC) for each class. Periodically, PSL shifts a prior training dataset to a more informative dataset based on EPCs; RGL presents a generalized informative-voxel-aware (GIVA) loss with EPCs and calculates it on the informative dataset for model fine-tuning in back-propagation; and EPCUL updates EPCs to refresh PSL and RGL in the next forward-propagation. PSR-Net is trained in a two-stage manner. The backbone is first trained with re-weighting loss functions; we then reload the best saved model for the backbone and continue to train it with the weighted sum of the re-weighting loss functions, the GIVA regularizer, and the L2 loss function of EPCs for regularization fine-tuning. Extensive experiments are performed on PET/CT volumes with advanced-stage lymphomas. Our PSR-Net achieves 95.12% sensitivity and an 87.18% Dice coefficient, demonstrating its effectiveness when compared to the baselines and the state-of-the-art methods.
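A class-observation-frequency re-weighting loss of the kind the backbone is trained with can be sketched as a weighted cross-entropy over voxels. The tiny synthetic label volume and the inverse-frequency weighting scheme below are illustrative assumptions, not PSR-Net's exact formulation:

```python
import numpy as np

def inverse_frequency_weights(labels, n_classes):
    """Weight each class inversely to its observation frequency; rare classes get larger weight."""
    counts = np.bincount(labels.ravel(), minlength=n_classes).astype(float)
    return counts.sum() / (n_classes * counts)

def weighted_cross_entropy(probs, labels, weights):
    """probs: (n_voxels, n_classes) softmax outputs; labels: (n_voxels,) class indices."""
    per_voxel = -np.log(probs[np.arange(labels.size), labels])
    return float(np.mean(weights[labels] * per_voxel))

# Synthetic flattened label volume: background (0) dominates tumor (1),
# mimicking the class imbalance in whole-body lymphoma segmentation.
labels = np.array([0, 0, 0, 0, 0, 0, 1, 1])
w = inverse_frequency_weights(labels, 2)

# Hypothetical softmax outputs that assign 0.9 to each correct class.
probs = np.full((8, 2), 0.1)
probs[np.arange(8), labels] = 0.9
loss = weighted_cross_entropy(probs, labels, w)
```

The under-weighting concern raised above arises because such weights depend only on counts: overlapping or ambiguous voxels in the majority class still receive the small majority-class weight.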
Affiliation(s)
- Meng Wang
- Department of Software College, Northeastern University, Shenyang 110819, China
- Huiyan Jiang
- Department of Software College, Northeastern University, Shenyang 110819, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, China.
- Tianyu Shi
- Department of Software College, Northeastern University, Shenyang 110819, China
- Zhiguo Wang
- Department of Nuclear Medicine, General Hospital of Northern Military Area, Shenyang 110016, China
- Jia Guo
- Department of Nuclear Medicine, General Hospital of Northern Military Area, Shenyang 110016, China
- Guoxiu Lu
- Department of Nuclear Medicine, General Hospital of Northern Military Area, Shenyang 110016, China
- Youchao Wang
- Department of Nuclear Medicine, General Hospital of Northern Military Area, Shenyang 110016, China
- Yu-Dong Yao
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, USA
9
Yuan C, Shi Q, Huang X, Wang L, He Y, Li B, Zhao W, Qian D. Multimodal deep learning model on interim [18F]FDG PET/CT for predicting primary treatment failure in diffuse large B-cell lymphoma. Eur Radiol 2022; 33:77-88. PMID: 36029345; DOI: 10.1007/s00330-022-09031-8.
Abstract
OBJECTIVES The prediction of primary treatment failure (PTF) is necessary for patients with diffuse large B-cell lymphoma (DLBCL) since it serves as a prominent means for improving front-line outcomes. Using interim 18F-fluoro-2-deoxyglucose ([18F]FDG) positron emission tomography/computed tomography (PET/CT) imaging data, we aimed to construct multimodal deep learning (MDL) models to predict possible PTF in low-risk DLBCL. METHODS Initially, 205 DLBCL patients undergoing interim [18F]FDG PET/CT scans and the front-line standard of care were included in the primary dataset for model development. Then, 44 other patients were included in the external dataset for generalization evaluation. Based on the powerful backbone of the Conv-LSTM network, we incorporated five different multimodal fusion strategies (pixel intermixing, separate channel, separate branch, quantitative weighting, and hybrid learning) to make full use of PET/CT features and built five corresponding MDL models. Moreover, we found the best model, that is, the hybrid learning model, and optimized it by integrating the contrastive training objective to further improve its prediction performance. RESULTS The final model with contrastive objective optimization, named the contrastive hybrid learning model, performed best, with an accuracy of 91.22% and an area under the receiver operating characteristic curve (AUC) of 0.926, in the primary dataset. In the external dataset, its accuracy and AUC remained at 88.64% and 0.925, respectively, indicating its good generalization ability. CONCLUSIONS The proposed model achieved good performance, validated the predictive value of interim PET/CT, and holds promise for directing individualized clinical treatment. KEY POINTS • The proposed multimodal models achieved accurate prediction of primary treatment failure in DLBCL patients. 
• Using an appropriate feature-level fusion strategy can bring samples of the same class closer together, regardless of the modal heterogeneity of the data source domain, and positively impact prediction performance. • Deep learning validated the predictive value of interim PET/CT in a way that exceeded human capabilities.
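Multimodal fusion of co-registered PET and CT can be sketched at its simplest as input-level fusion: each modality is normalized and stacked as a separate input channel for the network. The shapes, values, and normalization below are invented for illustration and do not reproduce any of the paper's five fusion strategies exactly:

```python
import numpy as np

def channel_fusion(pet, ct):
    """Normalize each co-registered modality and stack as input channels: (2, D, H, W)."""
    def norm(v):
        return (v - v.mean()) / (v.std() + 1e-8)  # zero-mean, unit-variance per modality
    return np.stack([norm(pet), norm(ct)], axis=0)

# Hypothetical toy volumes standing in for resampled PET and CT data.
rng = np.random.default_rng(0)
pet = rng.random((4, 8, 8))
ct = rng.random((4, 8, 8))
fused = channel_fusion(pet, ct)
```

Per-modality normalization before stacking matters because PET activity values and CT Hounsfield units live on very different scales; richer strategies such as the hybrid learning model above instead fuse modality-specific features deeper in the network.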
Affiliation(s)
- Cheng Yuan
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200040, China
- Qing Shi
- Shanghai Institute of Hematology, State Key Laboratory of Medical Genomics, National Research Center for Translational Medicine at Shanghai, Ruijin Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Xinyun Huang
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Li Wang
- Shanghai Institute of Hematology, State Key Laboratory of Medical Genomics, National Research Center for Translational Medicine at Shanghai, Ruijin Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Yang He
- Shanghai Institute of Hematology, State Key Laboratory of Medical Genomics, National Research Center for Translational Medicine at Shanghai, Ruijin Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Biao Li
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China.
- Weili Zhao
- Shanghai Institute of Hematology, State Key Laboratory of Medical Genomics, National Research Center for Translational Medicine at Shanghai, Ruijin Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China.
- Dahong Qian
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200040, China.
10
Deep Neural Networks and Machine Learning Radiomics Modelling for Prediction of Relapse in Mantle Cell Lymphoma. Cancers (Basel) 2022; 14:2008. PMID: 35454914; PMCID: PMC9028737; DOI: 10.3390/cancers14082008.
Abstract
Simple Summary: Mantle cell lymphoma (MCL) is an aggressive lymphoid tumour with a poor prognosis. No routine biomarkers exist for the early prediction of relapse. Our study compared the potential of radiomics-based machine learning and 3D deep learning models as non-invasive biomarkers to risk-stratify MCL patients, thus promoting precision imaging in clinical oncology. Abstract: Mantle cell lymphoma (MCL) is a rare lymphoid malignancy with a poor prognosis, characterised by frequent relapse and short durations of treatment response. Most patients present with aggressive disease, but there exist indolent subtypes without the need for immediate intervention. The very heterogeneous behaviour of MCL is genetically characterised by the translocation t(11;14)(q13;q32), leading to Cyclin D1 overexpression with distinct clinical and biological characteristics and outcomes. There is still an unfulfilled need for precise MCL prognostication in real time. Machine learning and deep learning neural networks are rapidly advancing technologies with promising results in numerous fields of application. This study develops and compares the performance of deep learning (DL) algorithms and radiomics-based machine learning (ML) models to predict MCL relapse on baseline CT scans. Five classification algorithms were used, including three deep learning models (3D SEResNet50, 3D DenseNet, and an optimised 3D CNN) and two machine learning models based on K-nearest Neighbor (KNN) and Random Forest (RF). The best-performing method, our optimised 3D CNN, predicted MCL relapse with 70% accuracy, better than the 3D SEResNet50 (62%) and the 3D DenseNet (59%). The second-best performing method was the KNN-based machine learning model (64%), after principal component analysis for improved accuracy. Our optimised 3D CNN correctly predicted MCL relapse in 70% of patients on baseline CT imaging.
Once prospectively tested in clinical trials with a larger sample size, our proposed 3D deep learning model could facilitate clinical management by precision imaging in MCL.
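The study's second-best pipeline, principal component analysis followed by a KNN classifier, can be sketched in a few lines. This is a minimal illustration on synthetic feature vectors: the data, the number of components, and k are illustrative stand-ins, not the study's actual radiomics features or settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for radiomics feature vectors extracted from
# baseline CT (e.g. shape, intensity, texture); labels mark relapse (1) vs none (0).
X_train = rng.normal(size=(40, 20))
y_train = rng.integers(0, 2, size=40)
X_test = rng.normal(size=(10, 20))

def pca_fit(X, n_components):
    """Principal component analysis via SVD of the mean-centred data."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components].T  # projection matrix (features x components)

def knn_predict(X_tr, y_tr, X_te, k=3):
    """Majority vote among the k nearest training points (Euclidean distance)."""
    dists = np.linalg.norm(X_te[:, None, :] - X_tr[None, :, :], axis=2)
    nearest = np.argsort(dists, axis=1)[:, :k]
    return (y_tr[nearest].mean(axis=1) >= 0.5).astype(int)

mean, components = pca_fit(X_train, n_components=5)
Z_train = (X_train - mean) @ components
Z_test = (X_test - mean) @ components
pred = knn_predict(Z_train, y_train, Z_test, k=3)
```

Reducing the feature space before KNN is the design choice the abstract alludes to: distance-based voting degrades in high-dimensional radiomics spaces, so projecting onto a few principal components first typically improves accuracy.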
|
11
|
Hasani N, Paravastu SS, Farhadi F, Yousefirizi F, Morris MA, Rahmim A, Roschewski M, Summers RM, Saboury B. Artificial Intelligence in Lymphoma PET Imaging: A Scoping Review (Current Trends and Future Directions). PET Clin 2022; 17:145-174. [PMID: 34809864 PMCID: PMC8735853 DOI: 10.1016/j.cpet.2021.09.006] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Malignant lymphomas are a family of heterogeneous disorders caused by clonal proliferation of lymphocytes. 18F-FDG-PET has proven to provide essential information for accurate quantification of disease burden, treatment response evaluation, and prognostication. However, manual delineation of hypermetabolic lesions is often a time-consuming and impractical task. Applications of artificial intelligence (AI) may provide solutions to overcome this challenge. Beyond segmentation and detection of lesions, AI could enhance tumor characterization and heterogeneity quantification, as well as treatment response prediction and recurrence risk stratification. In this scoping review, we systematically map and discuss the current applications of AI (detection, classification, and segmentation, as well as prediction and prognostication) in lymphoma PET.
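As a point of reference for the manual delineation task that AI methods aim to automate, a classical fixed-threshold approach keeps voxels above a fraction of the maximum standardised uptake value (SUVmax); 41% of SUVmax is one commonly used PET convention. The volume below is simulated, not real [18F]FDG-PET data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic SUV volume: low-uptake background plus one hypermetabolic insert.
suv = rng.uniform(0.5, 1.5, size=(16, 16, 16))
suv[4:8, 4:8, 4:8] = rng.uniform(6.0, 10.0, size=(4, 4, 4))

def threshold_segment(volume, fraction=0.41):
    """Fixed-threshold delineation: keep voxels >= fraction * SUVmax.

    A simple classical baseline; AI segmentation methods aim to replace
    such thresholds with learned, lesion-adaptive delineation.
    """
    return volume >= fraction * volume.max()

mask = threshold_segment(suv)  # boolean lesion mask, same shape as the volume
```

Thresholding works only for high-contrast lesions like this synthetic one; it fails near physiological uptake (brain, bladder), which is precisely where learned models offer an advantage.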
Affiliation(s)
- Navid Hasani
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA; University of Queensland Faculty of Medicine, Ochsner Clinical School, New Orleans, LA 70121, USA
- Sriram S Paravastu
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA
- Faraz Farhadi
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Michael A Morris
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA; Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, USA
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada; Department of Radiology, BC Cancer Research Institute, University of British Columbia, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
- Mark Roschewski
- Lymphoid Malignancies Branch, Center for Cancer Research, National Institutes of Health, Bethesda, MD, USA
- Ronald M Summers
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA; Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
|
12
|
Yousefirizi F, Jha AK, Brosch-Lenz J, Saboury B, Rahmim A. Toward High-Throughput Artificial Intelligence-Based Segmentation in Oncological PET Imaging. PET Clin 2021; 16:577-596. [PMID: 34537131 DOI: 10.1016/j.cpet.2021.06.001] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
Artificial intelligence (AI) techniques for image-based segmentation have garnered much attention in recent years. Convolutional neural networks have shown impressive results and potential toward fully automated segmentation in medical imaging, particularly PET imaging. To cope with the limited access to annotated data needed by supervised AI methods, given that manual delineations are tedious and error-prone, semi-supervised and unsupervised AI techniques have also been explored for segmentation of tumors or normal organs in single- and bimodality scans. This work reviews existing AI techniques for segmentation tasks and the evaluation criteria for translational AI-based segmentation efforts toward routine adoption in clinical workflows.
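Among the evaluation criteria such reviews consider, the Dice similarity coefficient is the standard overlap measure for comparing an automated segmentation against a reference delineation. A minimal sketch on toy 2D masks:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient: 2|A∩B| / (|A|+|B|), in [0, 1]."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks count as a perfect match.
    return 2.0 * intersection / denom if denom else 1.0

# Toy masks: a reference delineation and a shifted predicted lesion mask.
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True           # 16 reference pixels
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 3:7] = True            # 16 predicted pixels, partial overlap

score = dice_coefficient(pred, truth)  # overlap 3x3 = 9 -> 2*9/32 = 0.5625
```

Dice rewards overlap symmetrically and is insensitive to the large true-negative background, which is why it (rather than plain voxel accuracy) dominates segmentation benchmarks.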
Affiliation(s)
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
- Abhinav K Jha
- Department of Biomedical Engineering, Washington University in St. Louis, St Louis, MO 63130, USA; Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO 63110, USA
- Julia Brosch-Lenz
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Bethesda, MD 20892, USA; Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
- Arman Rahmim
- Department of Radiology, University of British Columbia, BC Cancer Research Institute, 675 West 10th Avenue, Office 6-112, Vancouver, British Columbia V5Z 1L3, Canada; Department of Physics, University of British Columbia, BC Cancer Research Institute, 675 West 10th Avenue, Office 6-112, Vancouver, British Columbia V5Z 1L3, Canada
|