1
Dai J, Wang H, Xu Y, Chen X, Tian R. Clinical application of AI-based PET images in oncological patients. Semin Cancer Biol 2023; 91:124-142. PMID: 36906112. DOI: 10.1016/j.semcancer.2023.03.005.
Abstract
Because it reveals the functional status and molecular expression of tumor cells, positron emission tomography (PET) imaging is performed for diagnosis and monitoring in numerous types of malignant disease. However, insufficient image quality, the lack of a convincing evaluation tool, and intra- and interobserver variation are well-known limitations of nuclear medicine imaging that restrict its clinical application. Artificial intelligence (AI) has gained increasing interest in the field of medical imaging owing to its powerful ability to collect and interpret information. The combination of AI and PET imaging can therefore provide great assistance to physicians managing patients. Radiomics, an important branch of AI applied in medical imaging, can extract hundreds of abstract mathematical features from images for further analysis. This review provides an overview of the applications of AI in PET imaging, focusing on image enhancement, tumor detection, response and prognosis prediction, and correlation analyses with pathology or specific gene mutations in several types of tumors. Our aim is to describe recent clinical applications of AI-based PET imaging in malignant diseases and possible future developments.
Affiliation(s)
- Jiaona Dai, Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China
- Hui Wang, Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China
- Yuchao Xu, School of Nuclear Science and Technology, University of South China, Hengyang City 421001, China
- Xiyang Chen, Division of Vascular Surgery, Department of General Surgery, West China Hospital, Sichuan University, Chengdu 610041, China
- Rong Tian, Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China
2
Sutaji D, Yıldız O. LEMOXINET: Lite ensemble MobileNetV2 and Xception models to predict plant disease. Ecol Inform 2022. DOI: 10.1016/j.ecoinf.2022.101698.
3
Minoshima S, Cross D. Application of artificial intelligence in brain molecular imaging. Ann Nucl Med 2022; 36:103-110. PMID: 35028878. DOI: 10.1007/s12149-021-01697-2.
Abstract
Initial development of artificial intelligence (AI) and machine learning (ML) dates back to the mid-twentieth century. A growing awareness of the potential of AI, together with increases in computational resources, research, and investment, is rapidly advancing AI applications in medical imaging and, specifically, brain molecular imaging. AI/ML can improve imaging operations and decision making and potentially perform tasks not readily possible for physicians, such as predicting disease prognosis and identifying latent relationships in multimodal clinical information. The number of applications of image-based AI algorithms, such as convolutional neural networks (CNNs), is increasing rapidly. Applications in brain molecular imaging (MI) include image denoising, PET and PET/MRI attenuation correction, image segmentation and lesion detection, parametric image formation, and the detection and diagnosis of Alzheimer's disease and other brain disorders. When effectively used, AI will likely improve the quality of patient care rather than replace radiologists. A regulatory framework is being developed to facilitate the adoption of AI for medical imaging.
Affiliation(s)
- Satoshi Minoshima, Department of Radiology and Imaging Sciences, University of Utah, 30 North 1900 East #1A071, Salt Lake City, UT 84132, USA
- Donna Cross, Department of Radiology and Imaging Sciences, University of Utah, 30 North 1900 East #1A071, Salt Lake City, UT 84132, USA
4
Deep Learning Using Multiple Degrees of Maximum-Intensity Projection for PET/CT Image Classification in Breast Cancer. Tomography 2022; 8:131-141. PMID: 35076612. PMCID: PMC8788419. DOI: 10.3390/tomography8010011.
Abstract
Deep learning (DL) has recently become a remarkably powerful tool for image processing. However, the usefulness of DL in positron emission tomography (PET)/computed tomography (CT) for breast cancer (BC) has been insufficiently studied. This study investigated whether a DL model using PET maximum-intensity projection (MIP) images at multiple angles increases diagnostic accuracy for PET/CT image classification in BC. We retrospectively gathered 400 images of 200 BC and 200 non-BC patients as training data. For each case, we obtained PET MIP images at four different degrees (0°, 30°, 60°, 90°) and built two DL models using Xception: one diagnosed BC with only the 0-degree MIP, and the other used all four degrees. After the training phase, our DL models analyzed test data comprising 50 BC and 50 non-BC patients, which five radiologists also interpreted. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated. Our 4-degree model, 0-degree model, and the radiologists had sensitivities of 96%, 82%, and 80–98% and specificities of 80%, 88%, and 76–92%, respectively. The 4-degree model had diagnostic performance equal to or better than that of the radiologists (AUC = 0.936 vs. 0.872–0.967, p = 0.036–0.405). A DL model similar to our 4-degree model may help radiologists in their diagnostic work in the future.
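The multi-degree MIP input described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the volume shape, the choice of projection axis, and the use of scipy's `rotate` for in-plane rotation are all assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import rotate

def mip_at_angles(volume, angles=(0, 30, 60, 90)):
    """Rotate a 3D volume (z, y, x) about the craniocaudal (z) axis and take
    the maximum-intensity projection along y, giving one 2D MIP per angle."""
    mips = []
    for angle in angles:
        # in-plane rotation of each axial slice; order=1 = linear interpolation
        rotated = rotate(volume, angle, axes=(1, 2), reshape=False, order=1)
        mips.append(rotated.max(axis=1))  # project along the y axis
    return np.stack(mips)  # shape: (len(angles), z, x)

# toy volume standing in for a PET scan
volume = np.random.default_rng(0).random((16, 32, 32)).astype(np.float32)
mips = mip_at_angles(volume)
print(mips.shape)  # (4, 16, 32)
```

Each of the four projections could then be fed to a classifier such as Xception; at 0° the MIP reduces to a plain maximum projection of the unrotated volume.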
5
Boyle AJ, Gaudet VC, Black SE, Vasdev N, Rosa-Neto P, Zukotynski KA. Artificial intelligence for molecular neuroimaging. Ann Transl Med 2021; 9:822. PMID: 34268435. PMCID: PMC8246223. DOI: 10.21037/atm-20-6220.
Abstract
In recent years, artificial intelligence (AI), the study of how computers and machines can gain intelligence, has been increasingly applied to problems in medical imaging, and in particular to molecular imaging of the central nervous system. Many AI innovations in medical imaging involve improving image quality, segmentation, and automated classification of disease. These advances have increased the availability of supportive AI tools to assist physicians in interpreting images and making decisions affecting patient care. This review focuses on the role of AI in molecular neuroimaging, primarily as applied to positron emission tomography (PET) and single photon emission computed tomography (SPECT). We emphasize technical innovations such as AI-based computed tomography (CT) generation for the purposes of attenuation correction and disease localization, as well as applications in neuro-oncology and neurodegenerative diseases. Limitations and future prospects for AI in molecular brain imaging are also discussed. Just as new equipment such as SPECT and PET revolutionized the field of medical imaging a few decades ago, AI and its related technologies are now poised to bring further disruptive changes. An understanding of these new technologies and how they work will help physicians adapt their practices and succeed with these new tools.
Affiliation(s)
- Amanda J Boyle, Azrieli Centre for Neuro-Radiochemistry, Brain Health Imaging Centre, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Vincent C Gaudet, Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario, Canada
- Sandra E Black, Department of Medicine (Neurology), Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario, Canada
- Neil Vasdev, Azrieli Centre for Neuro-Radiochemistry, Brain Health Imaging Centre, Centre for Addiction and Mental Health, and Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
- Pedro Rosa-Neto, Translational Neuroimaging Laboratory, McGill University Research Centre for Studies in Aging, Douglas Research Institute, McGill University, Montréal, Québec, Canada
6
Torres-Velázquez M, Chen WJ, Li X, McMillan AB. Application and Construction of Deep Learning Networks in Medical Imaging. IEEE Trans Radiat Plasma Med Sci 2021; 5:137-159. PMID: 34017931. PMCID: PMC8132932. DOI: 10.1109/trpms.2020.3030611.
Abstract
Deep learning (DL) approaches are part of the machine learning (ML) subfield concerned with the development of computational models to train artificial intelligence systems. DL models are characterized by automatically extracting high-level features from the input data to learn the relationship between matching datasets. Their implementation thus offers an advantage over common ML methods, which often require the practitioner to have some domain knowledge of the input data in order to select the best latent representation. As a result of this advantage, DL has been successfully applied within the medical imaging field to address problems, such as disease classification and tumor segmentation, for which it is difficult or impossible to determine which image features are relevant. Given this positive impact on the medical imaging field, this article reviews the key concepts associated with the evolution and implementation of DL. The sections of this review summarize the milestones in the development of the DL field, followed by a description of the elements of a deep neural network and an overview of its application within medical imaging. Subsequently, the key steps necessary to implement a supervised DL application are defined, and the associated limitations are discussed.
Affiliation(s)
- Maribel Torres-Velázquez, Department of Biomedical Engineering, College of Engineering, University of Wisconsin-Madison, Madison, WI 53705, USA
- Wei-Jie Chen, Department of Electrical and Computer Engineering, College of Engineering, University of Wisconsin-Madison, Madison, WI 53705, USA
- Xue Li, Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705, USA
- Alan B McMillan, Department of Radiology, University of Wisconsin School of Medicine and Public Health, and Department of Medical Physics, University of Wisconsin-Madison, Madison, WI 53705, USA
7
Deep learning in Nuclear Medicine—focus on CNN-based approaches for PET/CT and PET/MR: where do we stand? Clin Transl Imaging 2021. DOI: 10.1007/s40336-021-00411-6.
8
Zaharchuk G, Davidzon G. Artificial Intelligence for Optimization and Interpretation of PET/CT and PET/MR Images. Semin Nucl Med 2020; 51:134-142. PMID: 33509370. DOI: 10.1053/j.semnuclmed.2020.10.001.
Abstract
Artificial intelligence (AI) has recently attracted much attention for its potential use in healthcare applications. Given the parallels between medical and natural images and the immense progress in computer vision, using AI to improve and extract more information from medical images has been at the forefront of these advances. This is due to a convergence of factors, including the increasing number of scans performed, the availability of open-source AI tools, and decreases in the cost of the hardware required to implement these technologies. In this article, we review progress in the use of AI for optimizing PET/CT and PET/MRI studies. These two methods, which combine molecular information with structural and (in the case of MRI) functional imaging, are extremely valuable for a wide range of clinical indications. They are also tremendously data-rich modalities and as such are highly amenable to data-driven technologies such as AI. The first half of the article focuses on methods to improve PET reconstruction and image quality, which have multiple benefits, including faster image acquisition and reconstruction and lower or even "zero" radiation dose imaging. It also addresses the value of AI-driven methods for MR-based attenuation correction. The second half addresses how some of these advances can be used to optimize diagnosis from the acquired images, with examples given for whole-body oncology, cardiology, and neurology indications. Overall, the use of AI is likely to markedly improve both the quality and safety of PET/CT and PET/MRI as well as enhance our ability to interpret the scans and follow lesions over time. This will hopefully lead to expanded clinical use cases for these valuable technologies, leading to better patient care.
Affiliation(s)
- Greg Zaharchuk, Department of Radiology, Stanford University, Stanford, CA
- Guido Davidzon, Division of Nuclear Medicine & Molecular Imaging, Department of Radiology, Stanford University, Stanford, CA
9
Lee JJ, Yang H, Franc BL, Iagaru A, Davidzon GA. Deep learning detection of prostate cancer recurrence with 18F-FACBC (fluciclovine, Axumin®) positron emission tomography. Eur J Nucl Med Mol Imaging 2020; 47:2992-2997. PMID: 32556481. DOI: 10.1007/s00259-020-04912-w.
Abstract
PURPOSE: To evaluate the performance of deep learning (DL) classifiers in discriminating normal and abnormal 18F-FACBC (fluciclovine, Axumin®) PET scans based on the presence of tumor recurrence and/or metastases in patients with prostate cancer (PC) and biochemical recurrence (BCR).
METHODS: A total of 251 consecutive 18F-fluciclovine PET scans were acquired between September 2017 and June 2019 in 233 PC patients with BCR (18 patients had 2 scans). PET images were labeled as normal or abnormal using clinical reports as the ground truth. Convolutional neural network (CNN) models were trained using two different architectures: a 2D-CNN (ResNet-50) using single slices (slice-based approach), and the same 2D-CNN and a 3D-CNN (ResNet-14) using one hundred slices per PET image (case-based approach). The models' performance was evaluated on independent test datasets.
RESULTS: For the 2D-CNN slice-based approach, 6800 and 536 slices were used for the training and test datasets, respectively. The sensitivity and specificity of this model were 90.7% and 95.1%, and the area under the receiver operating characteristic curve (AUC) was 0.971 (p < 0.001). For the case-based approaches using the 2D-CNN and 3D-CNN architectures, a training dataset of 100 images and a test dataset of 28 images were randomly allocated. The sensitivity, specificity, and AUC for discriminating abnormal images were 85.7%, 71.4%, and 0.750 (p = 0.013) for the 2D-CNN and 71.4%, 71.4%, and 0.699 (p = 0.053) for the 3D-CNN case-based approach.
CONCLUSION: DL accurately classifies abnormal 18F-fluciclovine PET images of the pelvis in patients with BCR of PC. A DL classifier using single-slice prediction had superior performance over case-based prediction.
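The sensitivity and specificity figures reported above follow from the confusion matrix of binary predictions. A minimal, generic computation (not the authors' code) is shown below; the toy labels are chosen only to reproduce the 2D-CNN case-based figures (85.7%/71.4% on 28 test cases) and are otherwise hypothetical.

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    for binary labels, with 1 = abnormal scan and 0 = normal scan."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# toy test set: 14 abnormal and 14 normal cases
y_true = [1] * 14 + [0] * 14
y_pred = [1] * 12 + [0] * 2 + [0] * 10 + [1] * 4  # 12 TP, 2 FN, 10 TN, 4 FP
sens, spec = sensitivity_specificity(y_true, y_pred)
print(round(sens, 3), round(spec, 3))  # 0.857 0.714
```

AUC, by contrast, is computed from the continuous classifier scores rather than thresholded predictions, which is why it can differ between models with identical sensitivity/specificity.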
Affiliation(s)
- Jong Jin Lee, Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Stanford University, 300 Pasteur Dr, Stanford, CA 94305, USA; Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Hongye Yang, DimensionalMechanics Inc., Seattle, WA, USA
- Benjamin L Franc, Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Stanford University, 300 Pasteur Dr, Stanford, CA 94305, USA
- Andrei Iagaru, Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Stanford University, 300 Pasteur Dr, Stanford, CA 94305, USA
- Guido A Davidzon, Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Stanford University, 300 Pasteur Dr, Stanford, CA 94305, USA