1. Masayoshi K, Katada Y, Ozawa N, Ibuki M, Negishi K, Kurihara T. Deep learning segmentation of non-perfusion area from color fundus images and AI-generated fluorescein angiography. Sci Rep 2024;14:10801. PMID: 38734727; PMCID: PMC11088618; DOI: 10.1038/s41598-024-61561-x.
Abstract
The non-perfusion area (NPA) of the retina is an important indicator of visual prognosis in patients with branch retinal vein occlusion (BRVO). However, the current method for evaluating NPA, fluorescein angiography (FA), is invasive and burdensome. In this study, we examined the use of deep learning models for detecting NPA in color fundus images, bypassing the need for FA, and we also investigated the utility of synthetic FA generated from color fundus images. The models were evaluated using the Dice score and Monte Carlo dropout uncertainty. We retrospectively collected 403 sets of color fundus and FA images from 319 BRVO patients and trained three deep learning models on FA, color fundus images, and synthetic FA. Although the FA model achieved the highest score, the other two models performed comparably, and we found no statistically significant difference in median Dice scores between the models. However, the color fundus model showed significantly higher uncertainty than the other models (p < 0.05). In conclusion, deep learning models can detect NPAs from color fundus images with reasonable accuracy, though with somewhat less prediction stability. Synthetic FA stabilizes the prediction and reduces misleading uncertainty estimates by enhancing image quality.
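The two evaluation measures named in this abstract can be sketched in a few lines of plain Python. Here `stochastic_model` is a hypothetical callable standing in for a segmentation network run with dropout left active at inference time; the paper's actual models are not reproduced.

```python
import statistics

def dice_score(pred, truth):
    """Dice overlap between two flat binary masks (lists of 0/1 ints)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

def mc_dropout_uncertainty(stochastic_model, image, n_samples=20):
    """Monte Carlo dropout: run the model repeatedly with dropout active;
    return the per-pixel mean prediction and standard deviation
    (the standard deviation serving as the uncertainty estimate)."""
    samples = [stochastic_model(image) for _ in range(n_samples)]
    means = [statistics.mean(px) for px in zip(*samples)]
    stds = [statistics.pstdev(px) for px in zip(*samples)]
    return means, stds
```

For example, `dice_score([1, 1, 0, 0], [1, 0, 0, 0])` yields 2/3: two pixels are predicted positive, one is truly positive, and they overlap in one pixel.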
Affiliation(s)
- Kanato Masayoshi
- Laboratory of Photobiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Yusaku Katada
- Laboratory of Photobiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Nobuhiro Ozawa
- Laboratory of Photobiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Mari Ibuki
- Laboratory of Photobiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Kazuno Negishi
- Department of Ophthalmology, Keio University School of Medicine, Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Toshihide Kurihara
- Laboratory of Photobiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Shinanomachi, Shinjuku-Ku, Tokyo, Japan
2. Sadia RT, Chen J, Zhang J. CT image denoising methods for image quality improvement and radiation dose reduction. J Appl Clin Med Phys 2024;25:e14270. PMID: 38240466; PMCID: PMC10860577; DOI: 10.1002/acm2.14270.
Abstract
With the ever-increasing use of computed tomography (CT), concern about its radiation dose has become a significant public issue. To address the need for radiation dose reduction, CT denoising methods have been widely investigated and applied to low-dose CT images. Numerous noise reduction algorithms have emerged, such as iterative reconstruction and, most recently, deep learning (DL)-based approaches. Given the rapid advancements in artificial intelligence techniques, we recognize the need for a comprehensive review that emphasizes the most recently developed methods, and we have performed a thorough analysis of the existing literature to provide one. Beyond directly comparing performance, we focus on pivotal aspects including model training, validation, testing, generalizability, vulnerability, and evaluation methods. This review is expected to raise awareness of the various facets involved in CT image denoising and of the specific challenges in developing DL-based models.
Affiliation(s)
- Rabeya Tus Sadia
- Department of Computer Science, University of Kentucky, Lexington, Kentucky, USA
- Jin Chen
- Department of Medicine-Nephrology, University of Alabama at Birmingham, Birmingham, Alabama, USA
- Jie Zhang
- Department of Radiology, University of Kentucky, Lexington, Kentucky, USA
3. Ng CKC. Generative Adversarial Network (Generative Artificial Intelligence) in Pediatric Radiology: A Systematic Review. Children (Basel) 2023;10:1372. PMID: 37628371; PMCID: PMC10453402; DOI: 10.3390/children10081372.
Abstract
Generative artificial intelligence, especially the generative adversarial network (GAN), is an important research area in radiology, as evidenced by the number of literature reviews on the role of GANs in radiology published in the last few years. However, no review article about GANs in pediatric radiology has been published yet. The purpose of this paper is to systematically review applications of GANs in pediatric radiology, their performance, and the methods used for performance evaluation. Electronic databases were searched on 6 April 2023. Thirty-seven papers met the selection criteria and were included. This review reveals that GANs can be applied to magnetic resonance imaging, X-ray, computed tomography, ultrasound, and positron emission tomography for image translation, segmentation, reconstruction, quality assessment, synthesis and data augmentation, and disease diagnosis. About 80% of the included studies compared their GAN models with other approaches and indicated that their GAN models outperformed the others by 0.1-158.6%. However, these findings should be used with caution because of a number of methodological weaknesses. Future GAN studies will need more robust methods to address these issues; otherwise, clinical adoption of GAN-based applications in pediatric radiology will be hindered and the potential advantages of GANs will not be widely realized.
Affiliation(s)
- Curtise K. C. Ng
- Curtin Medical School, Curtin University, GPO Box U1987, Perth, WA 6845, Australia; Tel.: +61-8-9266-7314; Fax: +61-8-9266-2377
- Curtin Health Innovation Research Institute (CHIRI), Faculty of Health Sciences, Curtin University, GPO Box U1987, Perth, WA 6845, Australia
4. Vats A, Pedersen M, Mohammed A, Hovde Ø. Evaluating clinical diversity and plausibility of synthetic capsule endoscopic images. Sci Rep 2023;13:10857. PMID: 37407635; PMCID: PMC10322862; DOI: 10.1038/s41598-023-36883-x.
Abstract
Wireless capsule endoscopy (WCE) is being increasingly used as an alternative imaging modality for complete and non-invasive screening of the gastrointestinal tract. Although this is advantageous in reducing unnecessary hospital admissions, it also demands that a WCE diagnostic protocol be in place so that larger populations can be screened effectively, which calls for training and education protocols attuned specifically to this modality. As with training in other modalities such as traditional endoscopy, CT, and MRI, a WCE training protocol would require an atlas comprising a large corpus of images that vividly depict pathologies, ideally observed over a period of time. Since such comprehensive atlases are presently lacking for WCE, in this work we propose a deep learning method that utilizes already available studies across different institutions to create a realistic WCE atlas using StyleGAN. We identify clinically relevant attributes in WCE such that synthetic images can be generated with selected attributes on cue, and we also simulate several disease progression scenarios. The generated images are evaluated for realism and plausibility through three subjective online experiments with eight gastroenterology experts from three geographical locations and with varying years of experience. The results indicate that the images are highly realistic and the disease scenarios plausible. The images comprising the atlas are publicly available for use in training applications as well as for supplementing real datasets for deep learning.
Affiliation(s)
- Anuja Vats
- Department of Computer Science, NTNU, 2819, Gjøvik, Norway
- Ahmed Mohammed
- Department of Computer Science, NTNU, 2819, Gjøvik, Norway
- SINTEF Digital, Smart Sensor Systems, Oslo, Norway
- Øistein Hovde
- Department of Computer Science, NTNU, 2819, Gjøvik, Norway
- Innlandet Hospital Trust, 2819, Gjøvik, Norway
5. He M, Cao Y, Chi C, Yang X, Ramin R, Wang S, Yang G, Mukhtorov O, Zhang L, Kazantsev A, Enikeev M, Hu K. Research progress on deep learning in magnetic resonance imaging-based diagnosis and treatment of prostate cancer: a review on the current status and perspectives. Front Oncol 2023;13:1189370. PMID: 37546423; PMCID: PMC10400334; DOI: 10.3389/fonc.2023.1189370.
Abstract
Multiparametric magnetic resonance imaging (mpMRI) has emerged as a first-line screening and diagnostic tool for prostate cancer, aiding in treatment selection and noninvasive radiotherapy guidance. However, the manual interpretation of MRI data is challenging and time-consuming, which may impact sensitivity and specificity. With recent technological advances, artificial intelligence (AI) in the form of computer-aided diagnosis (CAD) based on MRI data has been applied to prostate cancer diagnosis and treatment. Among AI techniques, deep learning involving convolutional neural networks contributes to the detection, segmentation, scoring, grading, and prognostic evaluation of prostate cancer. CAD systems offer automated operation, rapid processing, and high accuracy, incorporating multiple sequences of multiparametric MRI data of the prostate gland into the deep learning model. They have therefore become a research direction of great interest, especially in smart healthcare. This review highlights the current progress of deep learning technology in MRI-based diagnosis and treatment of prostate cancer. The key elements of deep learning-based MRI image processing in CAD systems and radiotherapy of prostate cancer are briefly described, making the review understandable not only to radiologists but also to general physicians without specialized training in imaging interpretation. Deep learning technology enables lesion identification, detection, and segmentation, grading and scoring of prostate cancer, and prediction of postoperative recurrence and prognostic outcomes. The diagnostic accuracy of deep learning can be improved by optimizing models and algorithms, expanding medical database resources, and combining multi-omics data with comprehensive analysis of various morphological data. Deep learning has the potential to become the key diagnostic method in prostate cancer diagnosis and treatment in the future.
Affiliation(s)
- Mingze He
- Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Yu Cao
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Changliang Chi
- Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
- Xinyi Yang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Rzayev Ramin
- Department of Radiology, The Second University Clinic, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Shuowen Wang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Guodong Yang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Otabek Mukhtorov
- Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I., Avenue Mira, Kostroma, Russia
- Liqun Zhang
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, Liaoning, China
- Anton Kazantsev
- Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I., Avenue Mira, Kostroma, Russia
- Mikhail Enikeev
- Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Kebang Hu
- Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
6. Thakur S, Dinh LL, Lavanya R, Quek TC, Liu Y, Cheng CY. Use of artificial intelligence in forecasting glaucoma progression. Taiwan J Ophthalmol 2023;13:168-183. PMID: 37484617; PMCID: PMC10361424; DOI: 10.4103/tjo.tjo-d-23-00022.
Abstract
Artificial intelligence (AI) has been widely used in ophthalmology for disease detection and for monitoring progression. In glaucoma research, AI has been used to understand progression patterns and to forecast disease trajectory from clinical and imaging data, employing techniques such as machine learning, natural language processing, and deep learning. The results of studies using AI to forecast glaucoma progression, however, vary considerably owing to dataset constraints, the lack of a standard definition of progression, and differences in methodology and approach. While glaucoma detection and screening have been the focus of most research published in the last few years, in this narrative review we focus on studies that specifically address glaucoma progression. We also summarize the current evidence, highlight studies with translational potential, and provide suggestions on how future research addressing glaucoma progression can be improved.
Affiliation(s)
- Sahil Thakur
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Linh Le Dinh
- Institute of High Performance Computing, The Agency for Science, Technology and Research, Singapore
- Raghavan Lavanya
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Ten Cheer Quek
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Yong Liu
- Institute of High Performance Computing, The Agency for Science, Technology and Research, Singapore
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Department of Ophthalmology, Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
7. Premananthan G, Nagaraj B, Jaya J. A new AI assisted medical molecular image diagnostic model. J Intell Fuzzy Syst 2023. DOI: 10.3233/jifs-223354.
Abstract
Machine learning (ML) algorithms now play a significant role in medicine, from drug discovery to clinical decision making, and recent advances in deep learning (DL) have improved performance in computer-aided medical image analysis and disease diagnosis. A key benefit of AI in processing medical big data is that the hierarchical relationships within the data can be explored algorithmically, replacing the tedious manual extraction and localization of regions of interest in medical images and considerably changing how medicine is practiced. In biomedical clinical applications, there is constant demand for research and development on deploying AI as a mainstream tool for medical imaging tasks such as analysis, diagnosis, segmentation, and classification. With the increased use of electronic health records, of which medical images are an integral component, an efficient AI-assisted medical image analysis system capable of accurate and automated decision making could greatly help radiologists and medical practitioners. Molecular image analysis is a dynamic field that applies ML and DL algorithms to labeled and structured information, which also benefits patients by serving as an initial interface before further diagnosis and treatment. Our research therefore aims to offer a novel and efficient AI-based medical analysis system that assists clinical practitioners in disease diagnosis through DL-based medical image analysis and decision making. In addition, we address specific challenges related to disease diagnosis and propose a novel GAN model for improved diagnosis and implementation. The proposed technique can also be generalized to generate synthetic data for other molecular image analysis problems in medicine and to help build better disease diagnosis models.
Affiliation(s)
- G. Premananthan
- Department of ECE, Karpagam College of Engineering, Coimbatore, Tamilnadu, India
- B. Nagaraj
- Department of ECE, Rathinam Technical Campus, Coimbatore, Tamilnadu, India
- J. Jaya
- Department of ECE, Hindusthan College of Engineering and Technology, Coimbatore, Tamilnadu, India
8. An Enhanced Machine Learning Approach for Brain MRI Classification. Diagnostics (Basel) 2022;12:2791. PMID: 36428850; PMCID: PMC9689115; DOI: 10.3390/diagnostics12112791.
Abstract
Magnetic resonance imaging (MRI) is a noninvasive technique used in medical imaging to diagnose a variety of disorders. The majority of previous systems performed well on MRI datasets with a small number of images, but their performance deteriorated when applied to large MRI datasets. The objective, therefore, is to develop a quick and trustworthy classification system that can sustain the best performance over a comprehensive MRI dataset. This paper presents a robust approach that can analyze and classify different types of brain diseases from MRI images. Global histogram equalization is utilized to remove unwanted details from the MRI images. After the images have been enhanced, a symlet wavelet transform-based technique is proposed to extract the best features from the MRI images. On gray-scale images, the proposed feature extraction approach uses a compactly supported wavelet with the lowest asymmetry and the highest number of vanishing moments for a given support width. Because the symlet wavelet can accommodate the orthogonal, biorthogonal, and reverse biorthogonal properties of gray-scale images, it delivers better classification results. Following feature extraction, linear discriminant analysis (LDA) is employed to reduce the dimensionality of the feature space. The model was trained and evaluated using logistic regression, and it correctly classified several types of brain illnesses from MRI images. To illustrate the importance of the proposed strategy, a standard dataset from Harvard Medical School and the Open Access Series of Imaging Studies (OASIS), which encompasses 24 different brain disorders (including normal), is used. The proposed technique achieved the best classification accuracy of 96.6% when measured against current cutting-edge systems.
9. Generative adversarial network-created brain SPECTs of cerebral ischemia are indistinguishable to scans from real patients. Sci Rep 2022;12:18787. PMID: 36335166; PMCID: PMC9637159; DOI: 10.1038/s41598-022-23325-3.
Abstract
Deep convolutional generative adversarial networks (GANs) allow images to be created from existing databases. We applied a modified light-weight GAN (FastGAN) algorithm to cerebral blood flow SPECTs and aimed to evaluate whether this technology can generate images close to those of real patients. Investigating three anatomical levels (cerebellum, CER; basal ganglia, BG; cortex, COR), 551 normal (248 CER, 174 BG, 129 COR) and 387 pathological brain SPECTs using N-isopropyl p-I-123-iodoamphetamine (123I-IMP) were included. For the latter scans, cerebral ischemic disease comprised 291 uni- (66 CER, 116 BG, 109 COR) and 96 bilateral defect patterns (44 BG, 52 COR). Our model was trained using a three-compartment anatomical input (dataset 'A', including CER, BG, and COR), while for dataset 'B' only one anatomical region (COR) was included. Quantitative analyses provided mean counts (MC) and left/right (LR) hemisphere ratios, which were then compared to quantification from real images. For MC, 'B' was significantly different for normal and bilateral defect patterns (P < 0.0001, respectively), but not for unilateral ischemia (P = 0.77). Comparable results were recorded for LR, as normal and ischemia scans were significantly different relative to images acquired from real patients (P ≤ 0.01, respectively). Images provided by 'A', however, revealed quantitative results comparable to real images, including normal (P = 0.8) and pathological scans (unilateral, P = 0.99; bilateral, P = 0.68) for MC. For LR, only uni- (P = 0.03), but not normal or bilateral defect scans (P ≥ 0.08), reached significance relative to images of real patients. With a minimum of only three anatomical compartments serving as stimuli, created cerebral SPECTs are indistinguishable from images of real patients. The applied FastGAN algorithm may make it possible to provide sufficient scan numbers in various clinical scenarios, e.g., for "data-hungry" deep learning technologies or in the context of orphan diseases.
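The two quantitative measures this abstract compares, mean counts and the left/right hemisphere ratio, might be sketched as follows. Splitting hemispheres at the central image column is a simplification for illustration; the study's actual region definitions are not reproduced here.

```python
def mean_counts(image):
    # image: 2D list of SPECT voxel counts at one anatomical level
    flat = [v for row in image for v in row]
    return sum(flat) / len(flat)

def lr_ratio(image):
    # left/right hemisphere ratio, assuming the midline falls at the
    # central image column (a stand-in for real anatomical masking)
    mid = len(image[0]) // 2
    left = [v for row in image for v in row[:mid]]
    right = [v for row in image for v in row[mid:]]
    return (sum(left) / len(left)) / (sum(right) / len(right))
```

A unilateral perfusion defect then shows up as an LR ratio far from 1.0, which is how generated and real scans can be compared distribution-wise.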
10. Rasteau S, Ernenwein D, Savoldelli C, Bouletreau P. Artificial intelligence for oral and maxillo-facial surgery: A narrative review. J Stomatol Oral Maxillofac Surg 2022;123:276-282. PMID: 35091121; DOI: 10.1016/j.jormas.2022.01.010.
Abstract
Artificial intelligence (AI) is a set of technologies that simulate human cognition in order to address a specific problem. Improvements in computing speed, the exponential production of data, and its routine collection have led to the rapid development of AI in the health sector. In this review, we provide surgeons with the essential technical elements to help them understand the possibilities offered by AI, and we review current applications of AI in oral and maxillofacial surgery (OMFS). The literature reveals a genuine research boom in AI across all fields of OMFS. The algorithms used relate to machine learning, with a strong representation of the convolutional neural networks specific to deep learning. The complex architecture of these networks gives them the capacity to extract and process the elementary characteristics of an image, and they are therefore particularly used for diagnostic purposes on medical imaging or facial photography. We identified representative articles dealing with AI algorithms providing assistance in diagnosis, therapeutic decision making, preoperative planning, and prediction and evaluation of outcomes. Thanks to their learning, classification, prediction, and detection capabilities, AI algorithms complement human skills while limiting their imperfections. However, these algorithms should be subject to rigorous clinical evaluation, and ethical reflection on data protection should be systematically conducted.
Affiliation(s)
- Simon Rasteau
- Maxillo-Facial Surgery, Facial Plastic Surgery, Stomatology and Oral Surgery, Hospices Civils de Lyon, Lyon-Sud Hospital - Claude-Bernard Lyon 1 University, 165 Chemin du Grand-Revoyet, Pierre-Bénite 69310, France.
- Didier Ernenwein
- Department of Pediatric Oral & Maxillofacial & Plastic Surgery, Children's Hospital Robert-Debré, Paris-Diderot University, Paris, France
- Charles Savoldelli
- University Institute of the Face and Neck, Côte d'Azur University, Nice University Hospital, 31 Avenue de Valombrose, Nice 06100, France
- Pierre Bouletreau
- Maxillo-Facial Surgery, Facial Plastic Surgery, Stomatology and Oral Surgery, Hospices Civils de Lyon, Lyon-Sud Hospital - Claude-Bernard Lyon 1 University, 165 Chemin du Grand-Revoyet, Pierre-Bénite 69310, France
11. Applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging: A review. Eur J Nucl Med Mol Imaging 2022;49:3717-3739. PMID: 35451611; DOI: 10.1007/s00259-022-05805-w.
Abstract
PURPOSE: This paper reviews recent applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging. Recent advances in Deep Learning (DL) and GANs have catalysed research into their applications in medical imaging modalities; as a result, several unique GAN topologies have emerged and been assessed in experimental settings over the last two years.
METHODS: The present work extensively describes GAN architectures and their applications in PET imaging. Relevant publications were identified via approved publication indexing websites and repositories; Web of Science, Scopus, and Google Scholar were the major sources of information.
RESULTS: The search identified a hundred articles that address PET imaging applications such as attenuation correction, de-noising, scatter correction, removal of artefacts, image fusion, high-dose image estimation, super-resolution, segmentation, and cross-modality synthesis. These applications are presented alongside the corresponding research works.
CONCLUSION: GANs are rapidly being adopted for PET imaging tasks. However, specific limitations must be addressed before they reach their full potential and gain the medical community's trust in everyday clinical practice.
12. Efficient Strike Artifact Reduction Based on 3D-Morphological Structure Operators from Filtered Back-Projection PET Images. Sensors (Basel) 2021;21:7228. PMID: 34770534; PMCID: PMC8587286; DOI: 10.3390/s21217228.
Abstract
Positron emission tomography (PET) provides functional images and identifies abnormal metabolic regions of the whole body, effectively detecting tumor presence and distribution. The filtered back-projection (FBP) algorithm is one of the most common image reconstruction methods; however, it generates strike artifacts in the reconstructed image that affect the clinical diagnosis of lesions. Past studies have shown that two-dimensional morphological structure operators (2D-MSO) reduce strike artifacts and improve image quality, but this method processes the noise distribution in 2D space only and never considers the noise distribution in 3D space. This study was designed to develop three-dimensional morphological structure operators (3D-MSO) for nuclear medicine imaging that effectively eliminate strike artifacts without reducing image quality. A parallel operation was also used to calculate the minimum background standard deviation of the images for the 3D morphological structure operators with the optimal response curve (3D-MSO/ORC). In verification with a Jaszczak phantom and rat studies, 3D-MSO/ORC showed better denoising performance and image quality than the 2D-MSO method. Thus, 3D-MSO/ORC with a 3 × 3 × 3 mask can reduce noise efficiently and provide stability in FBP images.
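A grey-scale morphological opening with the 3 × 3 × 3 mask mentioned above can be sketched in plain Python as erosion (minimum filter) followed by dilation (maximum filter). This illustrates only the basic operator, not the paper's optimal-response-curve extension or its parallel implementation.

```python
from itertools import product

def _neighborhood(vol, x, y, z):
    # values under a 3x3x3 mask centered at (x, y, z), clipped at borders
    X, Y, Z = len(vol), len(vol[0]), len(vol[0][0])
    for dx, dy, dz in product((-1, 0, 1), repeat=3):
        i, j, k = x + dx, y + dy, z + dz
        if 0 <= i < X and 0 <= j < Y and 0 <= k < Z:
            yield vol[i][j][k]

def _filter3d(vol, op):
    # apply op (min for erosion, max for dilation) over every neighborhood
    return [[[op(_neighborhood(vol, x, y, z))
              for z in range(len(vol[0][0]))]
             for y in range(len(vol[0]))]
            for x in range(len(vol))]

def morphological_open_3d(vol):
    # grey-scale opening: erosion then dilation; suppresses bright
    # structures thinner than the 3x3x3 structuring element, which is
    # how streak-like FBP artifacts can be attenuated
    return _filter3d(_filter3d(vol, min), max)
```

On a volume containing a single bright spike voxel, the opening removes the spike entirely while leaving uniform regions unchanged.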