1. Li F, Wang D, Yang Z, Zhang Y, Jiang J, Liu X, Kong K, Zhou F, Tham CC, Medeiros F, Han Y, Grzybowski A, Zangwill LM, Lam DSC, Zhang X. The AI revolution in glaucoma: Bridging challenges with opportunities. Prog Retin Eye Res 2024; 103:101291. [PMID: 39186968; DOI: 10.1016/j.preteyeres.2024.101291]
Abstract
Recent advancements in artificial intelligence (AI) herald transformative potential for reshaping glaucoma clinical management: improving screening efficacy, sharpening diagnostic precision, and refining the detection of disease progression. However, incorporating AI into healthcare faces significant hurdles in both algorithm development and deployment. When creating algorithms, issues arise from the intensive effort required to label data, inconsistent diagnostic standards, and a lack of thorough testing, which often limits the algorithms' widespread applicability. Additionally, the "black box" nature of AI algorithms may leave doctors wary or skeptical. When it comes to deploying these tools, challenges include dealing with lower-quality images in real-world situations and the systems' limited ability to generalize across diverse ethnic groups and different diagnostic equipment. Looking ahead, new developments aim to protect data privacy through federated learning paradigms, improve algorithm generalizability by diversifying input data modalities, and augment datasets with synthetic imagery. Smartphone integration appears promising for deploying AI algorithms in both clinical and non-clinical settings. Furthermore, introducing large language models (LLMs) as interactive tools in medicine may signify a significant change in how healthcare is delivered in the future. By navigating these challenges and leveraging them as opportunities, the field of glaucoma AI can achieve not only improved algorithmic accuracy and optimized data integration but also a paradigmatic shift toward enhanced clinical acceptance and a transformative improvement in glaucoma care.
Affiliation(s)
- Fei Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Deming Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Zefeng Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Yinhang Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Jiaxuan Jiang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Xiaoyi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Kangjie Kong
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Fengqi Zhou
- Ophthalmology, Mayo Clinic Health System, Eau Claire, WI, USA.
- Clement C Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China.
- Felipe Medeiros
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA.
- Ying Han
- University of California, San Francisco, Department of Ophthalmology, San Francisco, CA, USA; The Francis I. Proctor Foundation for Research in Ophthalmology, University of California, San Francisco, CA, USA.
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland.
- Linda M Zangwill
- Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, CA, USA.
- Dennis S C Lam
- The International Eye Research Institute of the Chinese University of Hong Kong (Shenzhen), Shenzhen, China; The C-MER Dennis Lam & Partners Eye Center, C-MER International Eye Care Group, Hong Kong, China.
- Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
2. Rai S, Bhatt JS, Patra SK. An AI-Based Low-Risk Lung Health Image Visualization Framework Using LR-ULDCT. J Imaging Inform Med 2024; 37:2047-2062. [PMID: 38491236; PMCID: PMC11522248; DOI: 10.1007/s10278-024-01062-5]
Abstract
In this article, we propose an AI-based low-risk visualization framework for lung health monitoring using low-resolution ultra-low-dose CT (LR-ULDCT). We present a novel deep cascade processing workflow that achieves diagnostic visualization on LR-ULDCT (<0.3 mSv) on par with high-resolution CT (HRCT) acquired at roughly 100 mSv. To this end, we build a low-risk and affordable deep cascade network comprising three sequential deep processes: restoration, super-resolution (SR), and segmentation. Given a degraded LR-ULDCT, the first network learns a restoration function in an unsupervised manner from augmented patch-based dictionaries and residuals. The restored version is then super-resolved to the target (sensor) resolution. Here, we combine perceptual and adversarial losses in a novel GAN to establish closeness between the probability distributions of the generated SR-ULDCT and the restored LR-ULDCT. The SR-ULDCT is then passed to the segmentation network, which first separates the chest portion from the SR-ULDCT and then performs lobe-wise colorization. Finally, we extract the five lobes to account for the presence of ground-glass opacity (GGO) in the lung. Our AI-based system thus provides low-risk visualization of the degraded input LR-ULDCT at various stages (restored LR-ULDCT, restored SR-ULDCT, and segmented SR-ULDCT) and achieves the diagnostic power of HRCT. We perform case studies on real datasets of COVID-19, pneumonia, and pulmonary edema/congestion, comparing our results with the state of the art. Ablation experiments are conducted to better visualize the different operating pipelines. Finally, we present a verification report by fourteen (14) experienced radiologists and pulmonologists.
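To make the SR stage concrete, the following is a minimal PyTorch sketch of how a perceptual (feature-space) loss and an adversarial loss are typically combined for a super-resolution generator. It illustrates the general technique only, not the authors' implementation; the VGG backbone, layer cut-off, and loss weighting are assumptions.
```python
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights

# Frozen VGG-19 feature extractor for the perceptual term
vgg = vgg19(weights=VGG19_Weights.DEFAULT).features[:36].eval()
for p in vgg.parameters():
    p.requires_grad = False

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def generator_loss(discriminator, sr, hr, lambda_adv=1e-3):
    """Perceptual (VGG-feature) + adversarial loss for an SR generator.

    sr, hr: generated and reference images, 3-channel; single-channel CT
    slices would need to be replicated channel-wise first (an assumption).
    """
    # Perceptual term: match deep features of generated and reference images
    perc = l1(vgg(sr), vgg(hr))
    # Adversarial term: the generator tries to make D label SR outputs real
    logits_fake = discriminator(sr)
    adv = bce(logits_fake, torch.ones_like(logits_fake))
    return perc + lambda_adv * adv
```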
Affiliation(s)
- Swati Rai
- Indian Institute of Information Technology Vadodara, Vadodara, India.
- Jignesh S Bhatt
- Indian Institute of Information Technology Vadodara, Vadodara, India
3. Lasala A, Fiorentino MC, Bandini A, Moccia S. FetalBrainAwareNet: Bridging GANs with anatomical insight for fetal ultrasound brain plane synthesis. Comput Med Imaging Graph 2024; 116:102405. [PMID: 38824716; DOI: 10.1016/j.compmedimag.2024.102405]
Abstract
Over the past decade, deep-learning (DL) algorithms have become a promising tool to aid clinicians in identifying fetal head standard planes (FHSPs) during ultrasound (US) examination. However, the adoption of these algorithms in clinical settings is still hindered by the lack of large annotated datasets. To overcome this barrier, we introduce FetalBrainAwareNet, an innovative framework designed to synthesize anatomically accurate images of FHSPs. FetalBrainAwareNet uses class activation maps as a prior in its conditional adversarial training process, which fosters the presence of specific anatomical landmarks in the synthesized images. Additionally, we investigate specialized regularization terms within the adversarial training loss function to control the morphology of the fetal skull and foster differentiation between the standard planes, ensuring that the synthetic images faithfully represent real US scans in both structure and overall appearance. The versatility of our framework is highlighted by its ability to generate high-quality images of the three predominant FHSPs within a single, integrated framework. Quantitative (Fréchet inception distance of 88.52) and qualitative (t-SNE) results suggest that our framework generates US images with greater variability than state-of-the-art methods. Using the synthetic images generated with our framework, we increase the accuracy of FHSP classifiers by 3.2% compared with training the same classifiers solely on real acquisitions. These achievements suggest that augmenting the training set with our synthetic images could enhance the performance of DL algorithms for FHSP classification in real clinical scenarios.
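The Fréchet inception distance quoted above compares the feature statistics of real and synthetic image sets. A minimal numpy/scipy sketch of the standard computation, assuming Inception-v3 features have already been extracted, is:
```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(feats_real, feats_fake):
    """FID between two sets of Inception-v3 feature vectors (N x 2048).

    FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^{1/2})
    """
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # numerical noise can add tiny imaginary parts
    diff = mu_r - mu_f
    return diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean)
```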
Affiliation(s)
- Angelo Lasala
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy.
- Andrea Bandini
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy; Health Science Interdisciplinary Research Center, Scuola Superiore Sant'Anna, Pisa, Italy
- Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy
4. Yoon JT, Lee KM, Oh JH, Kim HG, Jeong JW. Insights and Considerations in Development and Performance Evaluation of Generative Adversarial Networks (GANs): What Radiologists Need to Know. Diagnostics (Basel) 2024; 14:1756. [PMID: 39202244; PMCID: PMC11353572; DOI: 10.3390/diagnostics14161756]
Abstract
The rapid development of deep learning in medical imaging has significantly enhanced the capabilities of artificial intelligence while simultaneously introducing challenges, including the need for vast amounts of training data and the labor-intensive tasks of labeling and segmentation. Generative adversarial networks (GANs) have emerged as a solution, offering synthetic image generation for data augmentation and streamlining medical image processing tasks through models such as cGAN, CycleGAN, and StyleGAN. These innovations not only improve the efficiency of image augmentation, reconstruction, and segmentation, but also pave the way for unsupervised anomaly detection, markedly reducing the reliance on labeled datasets. Our investigation into GANs in medical imaging addresses their varied architectures, the considerations for selecting appropriate GAN models, and the nuances of model training and performance evaluation. This paper aims to provide radiologists who are new to GAN technology with a thorough understanding, guiding them through the practical application and evaluation of GANs in brain imaging with two illustrative examples using CycleGAN and pixel2style2pixel (pSp)-combined StyleGAN. It offers a comprehensive exploration of the transformative potential of GANs in medical imaging research. Ultimately, this paper strives to equip radiologists with the knowledge to effectively utilize GANs, encouraging further research and application within the field.
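For orientation, CycleGAN, one of the models discussed here, trains with an adversarial term plus a cycle-consistency term that forces a round-trip translation to recover the input. A generic PyTorch sketch of the A-to-B half of that objective follows; `G_ab`, `G_ba`, and `D_b` are placeholder networks and the loss weighting is the conventional default, not anything specific to this paper.
```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()
mse = nn.MSELoss()  # least-squares GAN loss, as in the original CycleGAN

def cyclegan_generator_loss(G_ab, G_ba, D_b, real_a, lambda_cyc=10.0):
    """Adversarial + cycle-consistency terms for the A -> B direction."""
    fake_b = G_ab(real_a)                    # translate A -> B (e.g., T1 -> T2)
    pred = D_b(fake_b)
    adv = mse(pred, torch.ones_like(pred))   # fool the domain-B discriminator
    cycle = l1(G_ba(fake_b), real_a)         # A -> B -> A should recover input
    return adv + lambda_cyc * cycle
```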
Affiliation(s)
- Jeong Taek Yoon
- Department of Radiology, Kyung Hee University Hospital, Kyung Hee University College of Medicine, 23 Kyungheedae-ro, Dongdaemun-gu, Seoul 02447, Republic of Korea; (J.T.Y.); (H.-G.K.)
- Kyung Mi Lee
- Department of Radiology, Kyung Hee University Hospital, Kyung Hee University College of Medicine, 23 Kyungheedae-ro, Dongdaemun-gu, Seoul 02447, Republic of Korea
- Jang-Hoon Oh
- Department of Radiology, Kyung Hee University Hospital, Kyung Hee University College of Medicine, 23 Kyungheedae-ro, Dongdaemun-gu, Seoul 02447, Republic of Korea
- Hyug-Gi Kim
- Department of Radiology, Kyung Hee University Hospital, Kyung Hee University College of Medicine, 23 Kyungheedae-ro, Dongdaemun-gu, Seoul 02447, Republic of Korea
- Ji Won Jeong
- Department of Medicine, Graduate School, Kyung Hee University, 23 Kyungheedae-ro, Dongdaemun-gu, Seoul 02447, Republic of Korea;
5. Li Y, Xin Y, Li X, Zhang Y, Liu C, Cao Z, Du S, Wang L. Omni-dimensional dynamic convolution feature coordinate attention network for pneumonia classification. Vis Comput Ind Biomed Art 2024; 7:17. [PMID: 38976189; PMCID: PMC11231110; DOI: 10.1186/s42492-024-00168-5]
Abstract
Pneumonia is a serious disease that can be fatal, particularly among children and the elderly. The accuracy of pneumonia diagnosis can be improved by combining artificial-intelligence technology with X-ray imaging. This study proposes X-ODFCANet, which addresses the low accuracy and excessive parameter counts of existing deep-learning-based pneumonia-classification methods. The network incorporates a feature coordination attention module and an omni-dimensional dynamic convolution (ODConv) module, leveraging a residual module for feature extraction from X-ray images. The feature coordination attention module uses two one-dimensional feature encoding processes to aggregate feature information from different spatial directions. The ODConv module extracts and fuses feature information across four dimensions: the spatial dimension of the convolution kernel, the numbers of input and output channels, and the number of convolution kernels. The experimental results demonstrate that the proposed method effectively improves the accuracy of pneumonia classification, exceeding ResNet18 by 3.77%, while using 4.45M parameters, approximately 2.5 times fewer. The code is available at https://github.com/limuni/X-ODFCANET.
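The coordinate-style attention the abstract describes encodes features along the height and width directions separately and then gates the input with the two resulting attention maps. A generic PyTorch sketch of that idea is below; the layer sizes and reduction ratio are illustrative assumptions, not the paper's exact module.
```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Aggregate features along H and W separately, then gate the input."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        pooled_h = x.mean(dim=3, keepdim=True)                        # (n, c, h, 1)
        pooled_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)    # (n, c, w, 1)
        y = torch.cat([pooled_h, pooled_w], dim=2)                    # both directions
        y = self.act(self.bn(self.conv1(y)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                         # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))     # (n, c, 1, w)
        return x * a_h * a_w   # direction-aware gating of the input
```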
Affiliation(s)
- Yufei Li
- School of Information Science and Technology, Northwest University, Xi'an, 710127, Shaanxi Province, China
- Yufei Xin
- School of Information Science and Technology, Northwest University, Xi'an, 710127, Shaanxi Province, China
- Xinni Li
- School of Information Science and Technology, Northwest University, Xi'an, 710127, Shaanxi Province, China
- Yinrui Zhang
- School of Information Science and Technology, Northwest University, Xi'an, 710127, Shaanxi Province, China
- Cheng Liu
- School of Information Science and Technology, Northwest University, Xi'an, 710127, Shaanxi Province, China
- Zhengwen Cao
- School of Information Science and Technology, Northwest University, Xi'an, 710127, Shaanxi Province, China
- Shaoyi Du
- Department of Ultrasound, the Second Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi Province, 710004, China.
- National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, Shaanxi Province, 710049, China.
- Lin Wang
- School of Information Science and Technology, Northwest University, Xi'an, 710127, Shaanxi Province, China.
6. Schoenhof R, Schoenhof R, Blumenstock G, Lethaus B, Hoefert S. Synthetic, non-person related panoramic radiographs created by generative adversarial networks in research, clinical, and teaching applications. J Dent 2024; 146:105042. [PMID: 38710314; DOI: 10.1016/j.jdent.2024.105042]
Abstract
OBJECTIVES Generative Adversarial Networks (GANs) can produce synthetic images free from personal data, which holds significant value in medical research, where data protection is increasingly regulated. Panoramic radiographs (PRs) are a well-suited modality due to their high level of standardization while simultaneously containing a high degree of personally identifiable information. METHODS We produced synthetic PRs (syPRs) from real PRs (rePRs) using NVIDIA's StyleGAN2-ADA. A survey was administered to 54 medical professionals and 33 dentistry students, who classified 45 radiological images (20 rePRs, 20 syPRs, and 5 syPR controls) as real or synthetic and interpreted a single syPR image with respect to image quality (0-10) and 14 different items (agreement/disagreement). They also rated the technology's importance for the profession (0-10). A follow-up with >10% of all participants was performed to assess test-retest reliability. RESULTS Overall, sensitivity was 78.2% and specificity was 82.5%. For professionals, sensitivity was 79.9% and specificity was 82.3%; for students, sensitivity was 75.5% and specificity was 82.7%. In the single-syPR interpretation, image quality was rated at a median of 6, and 11 items were rated as agreement. Importance for the profession was rated at a median score of 7. Test-retest reliability yielded a Cohen's kappa of 0.23. CONCLUSIONS The study provides comprehensive evidence that GANs can produce synthetic radiological images that even health professionals sometimes cannot differentiate from real radiological images and thus genuinely consider authentic. This enables their utilization and/or modification free from personally identifiable information. CLINICAL SIGNIFICANCE Synthetic images can be used for university teaching and patient education without relying on patient-related data. They can also be used to upscale existing training datasets to improve the accuracy of AI-based diagnostic systems. The study thereby supports clinical teaching as well as diagnostic and therapeutic decision-making.
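For reference, the survey metrics reported here reduce to simple counts. A short sketch follows; treating "synthetic" as the positive class is an assumption about the paper's convention, and the two rating rounds are invented toy data.
```python
from sklearn.metrics import cohen_kappa_score

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity: synthetic images correctly flagged as synthetic.
    Specificity: real images correctly recognized as real."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative test-retest agreement for one rater across two survey rounds
round1 = ["real", "synthetic", "real", "synthetic", "real"]
round2 = ["real", "real", "real", "synthetic", "synthetic"]
kappa = cohen_kappa_score(round1, round2)  # chance-corrected agreement
print(kappa)
```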
Affiliation(s)
- Rouven Schoenhof
- Department of Oral and Maxillofacial Surgery (Head: Prof. Dr. Dr. B. Lethaus), University Hospital Tuebingen, Osianderstrasse 2-8, 72076 Tuebingen, Germany.
- Raoul Schoenhof
- Fraunhofer Society for the Advancement of Applied Research, Hansastraße 27c, 80686 München, Germany
- Gunnar Blumenstock
- Institute for Clinical Epidemiology and Applied Biometry (Head: Prof. Dr. rer. nat. P. Martus), University Hospital Tuebingen, Silcherstrasse 5, 72076 Tuebingen, Germany
- Bernd Lethaus
- Department of Oral and Maxillofacial Surgery (Head: Prof. Dr. Dr. B. Lethaus), University Hospital Tuebingen, Osianderstrasse 2-8, 72076 Tuebingen, Germany
- Sebastian Hoefert
- Department of Oral and Maxillofacial Surgery (Head: Prof. Dr. Dr. B. Lethaus), University Hospital Tuebingen, Osianderstrasse 2-8, 72076 Tuebingen, Germany
7. Liu X, Qu L, Xie Z, Zhao J, Shi Y, Song Z. Towards more precise automatic analysis: a systematic review of deep learning-based multi-organ segmentation. Biomed Eng Online 2024; 23:52. [PMID: 38851691; PMCID: PMC11162022; DOI: 10.1186/s12938-024-01238-8]
Abstract
Accurate segmentation of multiple organs in the head, neck, chest, and abdomen from medical images is an essential step in computer-aided diagnosis, surgical navigation, and radiation therapy. In the past few years, with a data-driven feature extraction approach and end-to-end training, automatic deep learning-based multi-organ segmentation methods have far outperformed traditional methods and become a new research topic. This review systematically summarizes the latest research in this field. We searched Google Scholar for papers published from January 1, 2016 to December 31, 2023, using keywords "multi-organ segmentation" and "deep learning", resulting in 327 papers. We followed the PRISMA guidelines for paper selection, and 195 studies were deemed to be within the scope of this review. We summarized the two main aspects involved in multi-organ segmentation: datasets and methods. Regarding datasets, we provided an overview of existing public datasets and conducted an in-depth analysis. Concerning methods, we categorized existing approaches into three major classes: fully supervised, weakly supervised and semi-supervised, based on whether they require complete label information. We summarized the achievements of these methods in terms of segmentation accuracy. In the discussion and conclusion section, we outlined and summarized the current trends in multi-organ segmentation.
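The segmentation accuracy such studies report is conventionally summarized by the Dice similarity coefficient, averaged per organ for multi-organ label maps. A minimal numpy sketch (a generic illustration, not taken from the review) is:
```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|P intersect T| / (|P| + |T|) for binary numpy masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def mean_dice(pred_labels, target_labels, organ_ids):
    """Average the per-organ Dice scores of an integer label map."""
    return np.mean([dice_coefficient(pred_labels == k, target_labels == k)
                    for k in organ_ids])
```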
Affiliation(s)
- Xiaoyu Liu
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Linhao Qu
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Ziyue Xie
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Jiayue Zhao
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Yonghong Shi
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China.
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China.
- Zhijian Song
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China.
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China.
8. Masayoshi K, Katada Y, Ozawa N, Ibuki M, Negishi K, Kurihara T. Deep learning segmentation of non-perfusion area from color fundus images and AI-generated fluorescein angiography. Sci Rep 2024; 14:10801. [PMID: 38734727; PMCID: PMC11088618; DOI: 10.1038/s41598-024-61561-x]
Abstract
The non-perfusion area (NPA) of the retina is an important indicator of visual prognosis in patients with branch retinal vein occlusion (BRVO). However, the current evaluation method for NPA, fluorescein angiography (FA), is invasive and burdensome. In this study, we examined the use of deep learning models for detecting NPA in color fundus images, bypassing the need for FA, and also investigated the utility of synthetic FA generated from color fundus images. The models were evaluated using the Dice score and Monte Carlo dropout uncertainty. We retrospectively collected 403 sets of color fundus and FA images from 319 BRVO patients and trained three deep learning models on FA, color fundus images, and synthetic FA. Although the FA model achieved the highest score, the other two models performed comparably, and we found no statistically significant difference in median Dice scores between the models. However, the color fundus model showed significantly higher uncertainty than the other models (p < 0.05). In conclusion, deep learning models can detect NPAs from color fundus images with reasonable accuracy, though with somewhat less prediction stability. Synthetic FA stabilizes the prediction and reduces misleading uncertainty estimates by enhancing image quality.
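Monte Carlo dropout, as used here for uncertainty, keeps dropout layers active at inference and aggregates several stochastic predictions. A generic PyTorch sketch follows; the sigmoid output, sample count, and use of the standard deviation as the uncertainty map are assumptions, not the authors' exact protocol.
```python
import torch

def mc_dropout_predict(model, image, n_samples=20):
    """Average several stochastic forward passes; their spread estimates
    per-pixel uncertainty of a binary segmentation model."""
    model.eval()
    for m in model.modules():  # re-enable only the dropout layers
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d)):
            m.train()
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(image))
                             for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)  # prediction, uncertainty map
```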
Affiliation(s)
- Kanato Masayoshi
- Laboratory of Photobiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Yusaku Katada
- Laboratory of Photobiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Nobuhiro Ozawa
- Laboratory of Photobiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Mari Ibuki
- Laboratory of Photobiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Kazuno Negishi
- Department of Ophthalmology, Keio University School of Medicine, Shinanomachi, Shinjuku-Ku, Tokyo, Japan
- Toshihide Kurihara
- Laboratory of Photobiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, Japan.
- Department of Ophthalmology, Keio University School of Medicine, Shinanomachi, Shinjuku-Ku, Tokyo, Japan.
9. Yang S, Kim KD, Ariji E, Kise Y. Generative adversarial networks in dental imaging: a systematic review. Oral Radiol 2024; 40:93-108. [PMID: 38001347; DOI: 10.1007/s11282-023-00719-1]
Abstract
OBJECTIVES This systematic review of generative adversarial network (GAN) architectures for dental image analysis provides readers with a comprehensive overview of current GAN trends in dental imagery and their potential future applications. METHODS Electronic databases (PubMed/MEDLINE, Scopus, Embase, and Cochrane Library) were searched to identify studies involving GANs for dental image analysis. Eighteen full-text articles describing the applications of GANs in dental imagery were reviewed. Risk of bias and applicability concerns were assessed using the QUADAS-2 tool. RESULTS GANs were used for various imaging modalities, including two-dimensional and three-dimensional images. In dental imaging, GANs were utilized for tasks such as artifact reduction, denoising, super-resolution, domain transfer, image generation for augmentation, outcome prediction, and identification. The generated images were incorporated into tasks such as landmark detection, object detection, and classification. Because of heterogeneity among the studies, a meta-analysis could not be conducted. Most studies (72%) had a low risk of bias in all four domains. However, only three (17%) studies had a low risk of applicability concerns. CONCLUSIONS This extensive analysis of GANs in dental imaging highlighted their broad application potential within the dental field. Future studies should address limitations related to the stability, repeatability, and overall interpretability of GAN architectures. By overcoming these challenges, the applicability of GANs in dentistry can be enhanced, ultimately benefiting the field's use of GANs and artificial intelligence.
Affiliation(s)
- Sujin Yang
- Department of Advanced General Dentistry, College of Dentistry, Yonsei University, Seoul, Korea
- Kee-Deog Kim
- Department of Advanced General Dentistry, College of Dentistry, Yonsei University, Seoul, Korea
- Eiichiro Ariji
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Aichi Gakuin University, 2-11 Suemori-dori, Chikusa-ku, Nagoya, 464-8651, Japan
- Yoshitaka Kise
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Aichi Gakuin University, 2-11 Suemori-dori, Chikusa-ku, Nagoya, 464-8651, Japan.
10. Waikel RL, Othman AA, Patel T, Ledgister Hanchard S, Hu P, Tekendo-Ngongang C, Duong D, Solomon BD. Recognition of Genetic Conditions After Learning With Images Created Using Generative Artificial Intelligence. JAMA Netw Open 2024; 7:e242609. [PMID: 38488790; PMCID: PMC10943405; DOI: 10.1001/jamanetworkopen.2024.2609]
Abstract
Importance The lack of standardized genetics training in pediatrics residencies, along with a shortage of medical geneticists, necessitates innovative educational approaches. Objective To compare pediatric resident recognition of Kabuki syndrome (KS) and Noonan syndrome (NS) after 1 of 4 educational interventions, including generative artificial intelligence (AI) methods. Design, Setting, and Participants This comparative effectiveness study used generative AI to create images of children with KS and NS. From October 1, 2022, to February 28, 2023, US pediatric residents were provided images through a web-based survey to assess whether these images helped them recognize genetic conditions. Interventions Participants categorized 20 images after exposure to 1 of 4 educational interventions (text-only descriptions, real images, and 2 types of images created by generative AI). Main Outcomes and Measures Associations of the educational interventions with accuracy and self-reported confidence. Results Of 2515 contacted pediatric residents, 106 and 102 completed the KS and NS surveys, respectively. For KS, the sensitivity of the text description was 48.5% (128 of 264), which was not significantly different from random guessing (odds ratio [OR], 0.94; 95% CI, 0.69-1.29; P = .71). Sensitivity was therefore compared with random guessing for real images (60.3% [188 of 312]; OR, 1.52; 95% CI, 1.15-2.00; P = .003) and for the 2 types of generative AI images (57.0% [212 of 372]; OR, 1.32; 95% CI, 1.04-1.69; P = .02 and 59.6% [193 of 324]; OR, 1.47; 95% CI, 1.12-1.94; P = .006) (denominators differ according to survey responses). The sensitivity of the NS text-only description was 65.3% (196 of 300). Compared with text only, the sensitivity of the real images was 74.3% (205 of 276; OR, 1.53; 95% CI, 1.08-2.18; P = .02), and the sensitivity of the 2 types of images created by generative AI was 68.0% (204 of 300; OR, 1.13; 95% CI, 0.77-1.66; P = .54) and 71.0% (247 of 328; OR, 1.30; 95% CI, 0.92-1.83; P = .14). For specificity, no intervention was statistically different from text only. After the interventions, the number of participants who reported being unsure about important diagnostic facial features decreased from 56 (52.8%) to 5 (7.6%) for KS (P < .001) and from 25 (24.5%) to 4 (4.7%) for NS (P < .001). There was a significant association between confidence level and sensitivity for real and generated images. Conclusions and Relevance In this study, real and generated images helped participants recognize KS and NS; real images appeared most helpful. Generated images were noninferior to real images and could serve an adjunctive role, particularly for rare conditions.
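As a numerical aside, the quoted counts approximately reproduce the reported effect sizes: comparing 188/312 correct responses against 50% chance in a simple 2x2 table gives an OR near the reported 1.52. The study's own confidence intervals presumably come from its regression models, so the sketch below is only an illustration of the generic computation.
```python
import numpy as np

def odds_ratio(a, b, c, d, z=1.96):
    """OR for a 2x2 table [[a, b], [c, d]] with a Wald 95% CI."""
    or_ = (a * d) / (b * c)
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se)
    return or_, lo, hi

# Real KS images: 188 of 312 correct vs. an expected 156/156 under guessing
print(188 / 312)                       # sensitivity ~0.603
print(odds_ratio(188, 124, 156, 156))  # OR ~1.52
```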
Affiliation(s)
- Rebekah L. Waikel
- Medical Genetics Branch, National Human Genome Research Institute, Bethesda, Maryland
- Amna A. Othman
- Medical Genetics Branch, National Human Genome Research Institute, Bethesda, Maryland
- Tanviben Patel
- Medical Genetics Branch, National Human Genome Research Institute, Bethesda, Maryland
- Ping Hu
- Medical Genetics Branch, National Human Genome Research Institute, Bethesda, Maryland
- Dat Duong
- Medical Genetics Branch, National Human Genome Research Institute, Bethesda, Maryland
- Benjamin D. Solomon
- Medical Genetics Branch, National Human Genome Research Institute, Bethesda, Maryland
11. Li W, Liu J, Wang S, Feng C. MTFN: multi-temporal feature fusing network with co-attention for DCE-MRI synthesis. BMC Med Imaging 2024; 24:47. [PMID: 38373915; PMCID: PMC10875895; DOI: 10.1186/s12880-024-01201-y]
Abstract
BACKGROUND Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays an important role in the diagnosis and treatment of breast cancer. However, obtaining all eight temporal phases of DCE-MRI requires a long scanning time, causing patient discomfort. To reduce this time, we propose the multi-temporal feature fusing network with co-attention (MTFN), which generates the eighth-phase images of DCE-MRI so that they can be obtained without additional scanning. METHODS We propose MTFN for DCE-MRI synthesis, in which the co-attention module fully fuses the features of the first and third temporal images to obtain hybrid features. The co-attention explores long-range dependencies, not just relationships between neighboring pixels, so the hybrid features are more helpful for generating the eighth-phase images. RESULTS We conducted experiments on a private breast DCE-MRI dataset from hospitals and on the multimodal Brain Tumor Segmentation Challenge 2018 dataset (BraTS2018). Compared with existing methods, our method shows improved results and generates more realistic images. We also used the synthetic images to classify the molecular subtype of breast cancer: accuracy was 89.53% on the original eighth-phase images and 92.46% on the generated images, an improvement of about 3 percentage points, which verifies the practicability of the synthetic images. CONCLUSIONS Subjective evaluation and objective image-quality metrics demonstrate the effectiveness of our method, which obtains comprehensive and useful information. The improvement in classification accuracy shows that the images generated by our method are practical.
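The co-attention idea, queries from one temporal phase attending over keys and values from another so that long-range cross-phase dependencies are captured, can be sketched as a standard cross-attention block in PyTorch. This is a generic illustration, not the paper's exact module; the 1x1 projections and residual fusion are assumptions.
```python
import torch
import torch.nn as nn

class CoAttention(nn.Module):
    """Cross-attention between feature maps of two DCE-MRI phases."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.scale = channels ** -0.5

    def forward(self, feat_a, feat_b):
        n, c, h, w = feat_a.shape
        q = self.q(feat_a).flatten(2).transpose(1, 2)     # (n, hw, c)
        k = self.k(feat_b).flatten(2)                     # (n, c, hw)
        v = self.v(feat_b).flatten(2).transpose(1, 2)     # (n, hw, c)
        # every position in phase A attends to every position in phase B
        attn = torch.softmax(q @ k * self.scale, dim=-1)  # (n, hw, hw)
        fused = (attn @ v).transpose(1, 2).reshape(n, c, h, w)
        return fused + feat_a                             # residual fusion
```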
Affiliation(s)
- Wei Li
- Key Laboratory of Intelligent Computing in Medical Image MIIC, Northeastern University, Shenyang, China
- Jiaye Liu
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Shanshan Wang
- School of Computer Science and Engineering, Northeastern University, Shenyang, China.
- Chaolu Feng
- Key Laboratory of Intelligent Computing in Medical Image MIIC, Northeastern University, Shenyang, China
12. Chen R, Zhang W, Song F, Yu H, Cao D, Zheng Y, He M, Shi D. Translating color fundus photography to indocyanine green angiography using deep-learning for age-related macular degeneration screening. NPJ Digit Med 2024; 7:34. [PMID: 38347098; PMCID: PMC10861476; DOI: 10.1038/s41746-024-01018-7]
Abstract
Age-related macular degeneration (AMD) is the leading cause of central vision impairment among the elderly, and effective, accurate AMD screening tools are urgently needed. Indocyanine green angiography (ICGA) is a well-established technique for detecting chorioretinal diseases, but its invasive nature and potential risks impede its routine clinical application. Here, we developed a deep-learning model capable of generating realistic ICGA images from color fundus photography (CF) using generative adversarial networks (GANs) and evaluated its performance in AMD classification. The model was developed with 99,002 CF-ICGA pairs from a tertiary center. The quality of the generated ICGA images underwent objective evaluation using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM), among other metrics, and subjective evaluation by two experienced ophthalmologists. The model generated realistic early-, mid- and late-phase ICGA images, with SSIM spanning 0.57 to 0.65. Subjective quality scores ranged from 1.46 to 2.74 on a five-point scale (1 refers to real ICGA image quality; kappa 0.79-0.84). Moreover, we assessed the application of the translated ICGA images in AMD screening on an external dataset (n = 13,887) by calculating the area under the ROC curve (AUC) for AMD classification. Combining generated ICGA with real CF images improved the accuracy of AMD classification, with the AUC increasing from 0.93 to 0.97 (P < 0.001). These results suggest that CF-to-ICGA translation can serve as a cross-modal data augmentation method to address the data hunger often encountered in deep-learning research, and as a promising add-on for population-based AMD screening. Real-world validation is warranted before clinical use.
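The objective quality metrics used here (MAE, PSNR, SSIM) are available off the shelf. A brief sketch with scikit-image, assuming grayscale float images scaled to [0, 1], is:
```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality_report(real, generated):
    """MAE / PSNR / SSIM between a real and a generated ICGA frame."""
    mae = np.abs(real - generated).mean()
    psnr = peak_signal_noise_ratio(real, generated, data_range=1.0)
    ssim = structural_similarity(real, generated, data_range=1.0)
    return {"MAE": mae, "PSNR": psnr, "SSIM": ssim}
```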
Affiliation(s)
- Ruoyu Chen
- Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Weiyi Zhang
- Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Fan Song
- Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Honghua Yu
- Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Southern Medical University, Guangzhou, China
- Dan Cao
- Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Southern Medical University, Guangzhou, China
- Yingfeng Zheng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China.
- Mingguang He
- Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China.
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China.
- Centre for Eye and Vision Research (CEVR), 17W Hong Kong Science Park, Hong Kong SAR, China.
- Danli Shi
- Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China.
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China.
13. Vrudhula A, Kwan AC, Ouyang D, Cheng S. Machine Learning and Bias in Medical Imaging: Opportunities and Challenges. Circ Cardiovasc Imaging 2024; 17:e015495. [PMID: 38377237; PMCID: PMC10883605; DOI: 10.1161/circimaging.123.015495]
Abstract
Bias in health care has been well documented and results in disparate and worsened outcomes for at-risk groups. Medical imaging plays a critical role in facilitating patient diagnoses but involves multiple sources of bias including factors related to access to imaging modalities, acquisition of images, and assessment (ie, interpretation) of imaging data. Machine learning (ML) applied to diagnostic imaging has demonstrated the potential to improve the quality of imaging-based diagnosis and the precision of measuring imaging-based traits. Algorithms can leverage subtle information not visible to the human eye to detect underdiagnosed conditions or derive new disease phenotypes by linking imaging features with clinical outcomes, all while mitigating cognitive bias in interpretation. Importantly, however, the application of ML to diagnostic imaging has the potential to either reduce or propagate bias. Understanding the potential gain as well as the potential risks requires an understanding of how and what ML models learn. Common risks of propagating bias can arise from unbalanced training, suboptimal architecture design or selection, and uneven application of models. Notwithstanding these risks, ML may yet be applied to improve gain from imaging across all 3A's (access, acquisition, and assessment) for all patients. In this review, we present a framework for understanding the balance of opportunities and challenges for minimizing bias in medical imaging, how ML may improve current approaches to imaging, and what specific design considerations should be made as part of efforts to maximize the quality of health care for all.
Affiliation(s)
- Amey Vrudhula
- Icahn School of Medicine at Mount Sinai, New York
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center
- Alan C Kwan
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center
- David Ouyang
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center
- Division of Artificial Intelligence in Medicine, Department of Medicine, Cedars-Sinai Medical Center
- Susan Cheng
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center
14. Vrettos K, Koltsakis E, Zibis AH, Karantanas AH, Klontzas ME. Generative adversarial networks for spine imaging: A critical review of current applications. Eur J Radiol 2024; 171:111313. [PMID: 38237518; DOI: 10.1016/j.ejrad.2024.111313]
Abstract
PURPOSE In recent years, the field of medical imaging has witnessed remarkable advancements, with innovative technologies that have revolutionized the visualization and analysis of the human spine. Among these developments, generative adversarial networks (GANs) have emerged as a transformative tool, offering unprecedented possibilities for enhancing spinal imaging techniques and diagnostic outcomes. This review provides a comprehensive overview of the use of GANs in spinal imaging and emphasizes their potential to improve the diagnosis and treatment of spine-related disorders. A review focused specifically on GANs in spine imaging is needed because the unique challenges, applications, and advancements of this domain are not fully addressed by broader reviews of GANs in general medical imaging; such a focus offers insight into the tailored solutions and innovations that GANs bring to spinal imaging. METHODS An extensive literature search covering 2017 until July 2023 was conducted using the major search engines to identify studies that used GANs in spinal imaging. RESULTS Applications include generating fat-suppressed T2-weighted (fsT2W) images from T1- and T2-weighted sequences to reduce scan time; the generated images had significantly better image quality than true fsT2W images and could improve diagnostic accuracy for certain pathologies. GANs were also utilized to generate virtual thin-slice images of intervertebral spaces, create digital twins of human vertebrae, and predict fracture response. Lastly, they can be applied to convert CT to MRI-like images, with the potential to obtain near-MR images from CT alone. CONCLUSIONS GANs have promising applications in personalized medicine, image augmentation, and improved diagnostic accuracy. However, limitations such as small databases and misalignment in CT-MRI pairs must be considered.
Affiliation(s)
- Konstantinos Vrettos
- Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece
- Emmanouil Koltsakis
- Department of Radiology, Karolinska University Hospital, Solna, Stockholm, Sweden
- Aristeidis H Zibis
- Department of Anatomy, Medical School, University of Thessaly, Larissa, Greece
- Apostolos H Karantanas
- Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece; Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece; Department of Medical Imaging, University Hospital of Heraklion, Heraklion, Crete, Greece
- Michail E Klontzas
- Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece; Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece; Department of Medical Imaging, University Hospital of Heraklion, Heraklion, Crete, Greece.
15. Kurysheva NI, Rodionova OY, Pomerantsev AL, Sharova GA. [Application of artificial intelligence in glaucoma. Part 2. Neural networks and machine learning in the monitoring and treatment of glaucoma]. Vestn Oftalmol 2024; 140:80-85. [PMID: 39254394; DOI: 10.17116/oftalma202414004180]
Abstract
The second part of this literature review on the application of artificial intelligence (AI) methods to the screening, diagnosis, monitoring, and treatment of glaucoma describes how AI methods enhance the effectiveness of glaucoma monitoring and treatment, and presents technologies that use machine learning, including neural networks, to predict disease progression and determine the need for anti-glaucoma surgery. The article also discusses methods of personalized treatment based on projection machine learning methods, and outlines the problems and prospects of using AI for glaucoma screening, diagnosis, and treatment.
Affiliation(s)
- N I Kurysheva
- Medical Biological University of Innovations and Continuing Education of the Federal Biophysical Center named after A.I. Burnazyan, Moscow, Russia
- Ophthalmological Center of the Federal Medical-Biological Agency at the Federal Biophysical Center named after A.I. Burnazyan, Moscow, Russia
- O Ye Rodionova
- N.N. Semenov Federal Research Center for Chemical Physics, Moscow, Russia
- A L Pomerantsev
- N.N. Semenov Federal Research Center for Chemical Physics, Moscow, Russia
- G A Sharova
- Medical Biological University of Innovations and Continuing Education of the Federal Biophysical Center named after A.I. Burnazyan, Moscow, Russia
- OOO Glaznaya Klinika Doktora Belikovoy, Moscow, Russia
16. Alajaji SA, Khoury ZH, Elgharib M, Saeed M, Ahmed ARH, Khan MB, Tavares T, Jessri M, Puche AC, Hoorfar H, Stojanov I, Sciubba JJ, Sultan AS. Generative Adversarial Networks in Digital Histopathology: Current Applications, Limitations, Ethical Considerations, and Future Directions. Mod Pathol 2024; 37:100369. [PMID: 37890670; DOI: 10.1016/j.modpat.2023.100369]
Abstract
Generative adversarial networks (GANs) have gained significant attention in the field of image synthesis, particularly in computer vision. GANs consist of a generative model and a discriminative model trained in an adversarial setting to generate realistic and novel data. In the context of image synthesis, the generator produces synthetic images, whereas the discriminator determines their authenticity by comparing them with real examples. Through iterative training, the generator allows the creation of images that are indistinguishable from real ones, leading to high-quality image generation. Considering their success in computer vision, GANs hold great potential for medical diagnostic applications. In the medical field, GANs can generate images of rare diseases, aid in learning, and be used as visualization tools. GANs can leverage unlabeled medical images, which are large in size, numerous in quantity, and challenging to annotate manually. GANs have demonstrated remarkable capabilities in image synthesis and have the potential to significantly impact digital histopathology. This review article focuses on the emerging use of GANs in digital histopathology, examining their applications and potential challenges. Histopathology plays a crucial role in disease diagnosis, and GANs can contribute by generating realistic microscopic images. However, ethical considerations arise because of the reliance on synthetic or pseudogenerated images. Therefore, the manuscript also explores the current limitations and highlights the ethical considerations associated with the use of this technology. In conclusion, digital histopathology has seen an emerging use of GANs for image enhancement, such as color (stain) normalization, virtual staining, and ink/marker removal. GANs offer significant potential in transforming digital pathology when applied to specific and narrow tasks (preprocessing enhancements). Evaluating data quality, addressing biases, protecting privacy, ensuring accountability and transparency, and developing regulation are imperative to ensure the ethical application of GANs.
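The adversarial game summarized above, a generator producing synthetic examples and a discriminator judging their authenticity, fits in a few lines of PyTorch. The toy fully connected networks below are placeholders for the convolutional models used on histopathology images; sizes and learning rates are illustrative assumptions.
```python
import torch
import torch.nn as nn

# Toy generator and discriminator standing in for image-synthesis models
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):  # real: (batch, 784) tensors scaled to [-1, 1]
    batch = real.size(0)
    z = torch.randn(batch, 64)
    fake = G(z)

    # Discriminator: label real examples 1, synthetic examples 0
    d_loss = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator call its outputs real
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```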
Affiliation(s)
- Shahd A Alajaji
- Department of Oncology and Diagnostic Sciences, University of Maryland School of Dentistry, Baltimore, Maryland; Department of Oral Medicine and Diagnostic Sciences, College of Dentistry, King Saud University, Riyadh, Saudi Arabia; Division of Artificial Intelligence Research, University of Maryland School of Dentistry, Baltimore, Maryland
- Zaid H Khoury
- Department of Oral Diagnostic Sciences and Research, School of Dentistry, Meharry Medical College, Nashville, Tennessee
- Tiffany Tavares
- Department of Comprehensive Dentistry, UT Health San Antonio, School of Dentistry, San Antonio, Texas
- Maryam Jessri
- Oral Medicine and Pathology Department, School of Dentistry, University of Queensland, Herston, Queensland, Australia; Oral Medicine Department, Metro North Hospital and Health Services, Queensland Health, Queensland, Australia
- Adam C Puche
- Department of Neurobiology, University of Maryland School of Medicine, Baltimore, Maryland
- Hamid Hoorfar
- Department of Epidemiology and Public Health, University of Maryland School of Medicine, Baltimore, Maryland
- Ivan Stojanov
- Department of Pathology, Robert J. Tomsich Pathology and Laboratory Medicine Institute, Cleveland Clinic, Cleveland, Ohio
- James J Sciubba
- Department of Otolaryngology, Head and Neck Surgery, The Johns Hopkins University, Baltimore, Maryland
- Ahmed S Sultan
- Department of Oncology and Diagnostic Sciences, University of Maryland School of Dentistry, Baltimore, Maryland; Division of Artificial Intelligence Research, University of Maryland School of Dentistry, Baltimore, Maryland; University of Maryland Marlene and Stewart Greenebaum Comprehensive Cancer Center, Baltimore, Maryland.
17. Yousefpour Shahrivar R, Karami F, Karami E. Enhancing Fetal Anomaly Detection in Ultrasonography Images: A Review of Machine Learning-Based Approaches. Biomimetics (Basel) 2023; 8:519. [PMID: 37999160; PMCID: PMC10669151; DOI: 10.3390/biomimetics8070519]
Abstract
Fetal development is a critical phase in prenatal care, demanding the timely identification of anomalies in ultrasound images to safeguard the well-being of both the unborn child and the mother. Medical imaging has played a pivotal role in detecting fetal abnormalities and malformations. However, despite significant advances in ultrasound technology, the accurate identification of irregularities in prenatal images continues to pose considerable challenges, often requiring substantial time and expertise from medical professionals. In this review, we survey recent developments in machine learning (ML) methods applied to fetal ultrasound images. Specifically, we focus on a range of ML algorithms employed in the context of fetal ultrasound, encompassing tasks such as image classification, object recognition, and segmentation. We highlight how these approaches can enhance ultrasound-based fetal anomaly detection and provide insights for future research and clinical implementation. Finally, we emphasize the need for further research in this domain, where future investigations can contribute to more effective ultrasound-based fetal anomaly detection.
Affiliation(s)
- Ramin Yousefpour Shahrivar
- Department of Biology, College of Convergent Sciences and Technologies, Science and Research Branch, Islamic Azad University, Tehran, 14515-775, Iran
- Fatemeh Karami
- Department of Medical Genetics, Applied Biophotonics Research Center, Science and Research Branch, Islamic Azad University, Tehran, 14515-775, Iran
- Ebrahim Karami
- Department of Engineering and Applied Sciences, Memorial University of Newfoundland, St. John’s, NL A1B 3X5, Canada
18
Paladugu PS, Ong J, Nelson N, Kamran SA, Waisberg E, Zaman N, Kumar R, Dias RD, Lee AG, Tavakkoli A. Generative Adversarial Networks in Medicine: Important Considerations for this Emerging Innovation in Artificial Intelligence. Ann Biomed Eng 2023; 51:2130-2142. [PMID: 37488468 DOI: 10.1007/s10439-023-03304-z]
Abstract
The advent of artificial intelligence (AI) and machine learning (ML) has revolutionized the field of medicine. Although highly effective, the rapid expansion of this technology has created some anticipated and unanticipated bioethical considerations. With these powerful applications, there is a necessity for framework regulations to ensure equitable and safe deployment of technology. Generative Adversarial Networks (GANs) are emerging ML techniques that have immense applications in medical imaging due to their ability to produce synthetic medical images and aid in medical AI training. Producing accurate synthetic images with GANs can address current limitations in AI development for medical imaging and overcome current dataset type and size constraints. Offsetting these constraints can dramatically improve the development and implementation of AI medical imaging and restructure the practice of medicine. As observed with earlier AI technologies, safeguards must be put in place to help regulate its development for clinical use. In this paper, we discuss the legal, ethical, and technical challenges for future safe integration of this technology in the healthcare sector.
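For readers unfamiliar with the adversarial training the paper examines, here is a minimal sketch of the standard two-player GAN objective (non-saturating form) in PyTorch; the fully connected networks and the random stand-in data are placeholders, not the paper's models.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(32, img_dim) * 2 - 1  # stand-in for a batch of real medical images

for step in range(100):
    # Discriminator: push real towards 1 and generated samples towards 0.
    z = torch.randn(32, latent_dim)
    fake = G(z).detach()
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator (non-saturating loss).
    z = torch.randn(32, latent_dim)
    loss_g = bce(D(G(z)), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```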
Affiliation(s)
- Phani Srivatsav Paladugu
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA, USA
- Joshua Ong
- Michigan Medicine, University of Michigan, Ann Arbor, MI, USA
- Nicolas Nelson
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA, USA
- Sharif Amit Kamran
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
- Ethan Waisberg
- University College Dublin School of Medicine, Belfield, Dublin, Ireland
- Nasif Zaman
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
- Roger Daglius Dias
- Department of Emergency Medicine, Harvard Medical School, Boston, MA, USA
- STRATUS Center for Medical Simulation, Brigham and Women's Hospital, Boston, MA, USA
- Andrew Go Lee
- Center for Space Medicine, Baylor College of Medicine, Houston, TX, USA
- Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, TX, USA
- The Houston Methodist Research Institute, Houston Methodist Hospital, Houston, TX, USA
- Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, NY, USA
- Department of Ophthalmology, University of Texas Medical Branch, Galveston, TX, USA
- University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Texas A&M College of Medicine, Bryan, TX, USA
- Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Alireza Tavakkoli
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA.
19
Berger L, Haberbusch M, Moscato F. Generative adversarial networks in electrocardiogram synthesis: Recent developments and challenges. Artif Intell Med 2023; 143:102632. [PMID: 37673589 DOI: 10.1016/j.artmed.2023.102632]
Abstract
Training deep neural network classifiers for electrocardiograms (ECGs) requires sufficient data. However, imbalanced datasets pose a major problem for the training process and hence data augmentation is commonly performed. Generative adversarial networks (GANs) can create synthetic ECG data to augment such imbalanced datasets. This review aims to identify the present literature concerning synthetic ECG signal generation using GANs and to provide a comprehensive overview of architectures, quality evaluation metrics, and classification performances. Thirty publications from the years 2019 to 2022 were selected from three separate databases. Nine publications used a quality evaluation metric neglecting classification, eleven performed a classification but omitted a quality evaluation metric, and ten publications performed both. Twenty different quality evaluation metrics were observed. Overall, the classification performance of databases augmented with synthetically created ECG signals increased by 7% to 98% in accuracy and 6% to 97% in sensitivity. In conclusion, synthetic ECG signal generation using GANs represents a promising tool for data augmentation of imbalanced datasets. Consistent quality evaluation of generated signals remains challenging. Hence, future work should focus on the establishment of a gold standard for quality evaluation metrics for GANs.
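A minimal sketch of how such augmentation proceeds, assuming a small fully connected generator over fixed-length beats; the 187-sample beat length and the architecture are illustrative assumptions, not drawn from any reviewed publication.

```python
import torch
import torch.nn as nn

class ECGGenerator(nn.Module):
    """Illustrative 1-D generator producing fixed-length synthetic ECG beats."""
    def __init__(self, latent_dim=50, beat_len=187):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, beat_len), nn.Tanh(),  # beats scaled to [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

G = ECGGenerator()   # assume adversarial training as in the GAN sketch above
n_missing = 500      # how many minority-class beats the dataset lacks
with torch.no_grad():
    synthetic = G(torch.randn(n_missing, 50))
# Append the synthetic beats to the real minority class before classifier training.
print(synthetic.shape)  # torch.Size([500, 187])
```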
Affiliation(s)
- Laurenz Berger
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Währinger Gürtel 18-20, A-1090 Vienna, Austria; Ludwig Boltzmann Institute for Cardiovascular Research, Währinger Gürtel 18-20, A-1090 Vienna, Austria.
- Max Haberbusch
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Währinger Gürtel 18-20, A-1090 Vienna, Austria; Ludwig Boltzmann Institute for Cardiovascular Research, Währinger Gürtel 18-20, A-1090 Vienna, Austria
- Francesco Moscato
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Währinger Gürtel 18-20, A-1090 Vienna, Austria; Ludwig Boltzmann Institute for Cardiovascular Research, Währinger Gürtel 18-20, A-1090 Vienna, Austria; Austrian Cluster for Tissue Regeneration, Donaueschingenstraße 13, A-1200 Vienna, Austria
20
Ng CKC. Generative Adversarial Network (Generative Artificial Intelligence) in Pediatric Radiology: A Systematic Review. Children (Basel) 2023; 10:1372. [PMID: 37628371 PMCID: PMC10453402 DOI: 10.3390/children10081372]
Abstract
Generative artificial intelligence, especially with regard to the generative adversarial network (GAN), is an important research area in radiology as evidenced by a number of literature reviews on the role of GAN in radiology published in the last few years. However, no review article about GAN in pediatric radiology has been published yet. The purpose of this paper is to systematically review applications of GAN in pediatric radiology, their performances, and methods for their performance evaluation. Electronic databases were used for a literature search on 6 April 2023. Thirty-seven papers met the selection criteria and were included. This review reveals that the GAN can be applied to magnetic resonance imaging, X-ray, computed tomography, ultrasound and positron emission tomography for image translation, segmentation, reconstruction, quality assessment, synthesis and data augmentation, and disease diagnosis. About 80% of the included studies compared their GAN model performances with those of other approaches and indicated that their GAN models outperformed the others by 0.1-158.6%. However, these study findings should be used with caution because of a number of methodological weaknesses. For future GAN studies, more robust methods will be essential for addressing these issues. Otherwise, this would affect the clinical adoption of the GAN-based applications in pediatric radiology and the potential advantages of GAN could not be realized widely.
Affiliation(s)
- Curtise K. C. Ng
- Curtin Medical School, Curtin University, GPO Box U1987, Perth, WA 6845, Australia; Tel.: +61-8-9266-7314; Fax: +61-8-9266-2377
- Curtin Health Innovation Research Institute (CHIRI), Faculty of Health Sciences, Curtin University, GPO Box U1987, Perth, WA 6845, Australia
21
Rich JM, Bhardwaj LN, Shah A, Gangal K, Rapaka MS, Oberai AA, Fields BKK, Matcuk GR, Duddalwar VA. Deep learning image segmentation approaches for malignant bone lesions: a systematic review and meta-analysis. Front Radiol 2023; 3:1241651. [PMID: 37614529 PMCID: PMC10442705 DOI: 10.3389/fradi.2023.1241651]
Abstract
Introduction Image segmentation is an important process for quantifying characteristics of malignant bone lesions, but this task is challenging and laborious for radiologists. Deep learning has shown promise in automating image segmentation in radiology, including for malignant bone lesions. The purpose of this review is to investigate deep learning-based image segmentation methods for malignant bone lesions on Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron-Emission Tomography/CT (PET/CT). Method The literature search of deep learning-based image segmentation of malignant bony lesions on CT and MRI was conducted in PubMed, Embase, Web of Science, and Scopus electronic databases following the guidelines of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). A total of 41 original articles published between February 2017 and March 2023 were included in the review. Results The majority of papers studied MRI, followed by CT, PET/CT, and PET/MRI. There was relatively even distribution of papers studying primary vs. secondary malignancies, as well as utilizing 3-dimensional vs. 2-dimensional data. Many papers utilize custom built models as a modification or variation of U-Net. The most common metric for evaluation was the dice similarity coefficient (DSC). Most models achieved a DSC above 0.6, with medians for all imaging modalities between 0.85-0.9. Discussion Deep learning methods show promising ability to segment malignant osseous lesions on CT, MRI, and PET/CT. Some strategies which are commonly applied to help improve performance include data augmentation, utilization of large public datasets, preprocessing including denoising and cropping, and U-Net architecture modification. Future directions include overcoming dataset and annotation homogeneity and generalizing for clinical applicability.
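Because the dice similarity coefficient (DSC) is the review's primary evaluation metric, a reference implementation for binary masks is short enough to include as a sketch here.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient DSC = 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

a = np.zeros((64, 64), dtype=bool); a[20:40, 20:40] = True  # predicted lesion
b = np.zeros((64, 64), dtype=bool); b[25:45, 25:45] = True  # ground truth
print(round(dice(a, b), 3))  # 0.562 -> below the 0.6 bar most reviewed models cleared
```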
Affiliation(s)
- Joseph M. Rich
- Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Lokesh N. Bhardwaj
- Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Aman Shah
- Department of Applied Biostatistics and Epidemiology, University of Southern California, Los Angeles, CA, United States
- Krish Gangal
- Bridge UnderGrad Science Summer Research Program, Irvington High School, Fremont, CA, United States
- Mohitha S. Rapaka
- Department of Biology, University of Texas at Austin, Austin, TX, United States
- Assad A. Oberai
- Department of Aerospace and Mechanical Engineering Department, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States
- Brandon K. K. Fields
- Department of Radiology & Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- George R. Matcuk
- Department of Radiology, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Vinay A. Duddalwar
- Department of Radiology, Keck School of Medicine of the University of Southern California, Los Angeles, CA, United States
- Department of Radiology, USC Radiomics Laboratory, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
22
Chan K, Maralani PJ, Moody AR, Khademi A. Synthesis of diffusion-weighted MRI scalar maps from FLAIR volumes using generative adversarial networks. Front Neuroinform 2023; 17:1197330. [PMID: 37603783 PMCID: PMC10436214 DOI: 10.3389/fninf.2023.1197330]
Abstract
Introduction Acquisition and pre-processing pipelines for diffusion-weighted imaging (DWI) volumes are resource- and time-consuming. Generating synthetic DWI scalar maps from commonly acquired brain MRI sequences such as fluid-attenuated inversion recovery (FLAIR) could be useful for supplementing datasets. In this work we design and compare GAN-based image translation models for generating DWI scalar maps from FLAIR MRI for the first time. Methods We evaluate a pix2pix model, two modified CycleGANs using paired and unpaired data, and a convolutional autoencoder in synthesizing DWI fractional anisotropy (FA) and mean diffusivity (MD) from whole FLAIR volumes. In total, 420 FLAIR and DWI volumes (11,957 images) from multi-center dementia and vascular disease cohorts were used for training/testing. Generated images were evaluated using two groups of metrics: (1) human perception metrics including peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), (2) structural metrics including a newly proposed histogram similarity (Hist-KL) metric and mean squared error (MSE). Results Pix2pix demonstrated the best performance both quantitatively and qualitatively with mean PSNR, SSIM, and MSE metrics of 23.41 dB, 0.8, 0.004, respectively for MD generation, and 24.05 dB, 0.78, 0.004, respectively for FA generation. The new histogram similarity metric demonstrated sensitivity to differences in fine details between generated and real images with mean pix2pix MD and FA Hist-KL metrics of 11.73 and 3.74, respectively. Detailed analysis of clinically relevant regions of white matter (WM) and gray matter (GM) in the pix2pix images also showed strong significant (p < 0.001) correlations between real and synthetic FA values in both tissue types (R = 0.714 for GM, R = 0.877 for WM). Discussion/conclusion Our results show that pix2pix's FA and MD models had significantly better structural similarity of tissue structures and fine details than other models, including WM tracts and CSF spaces, between real and generated images. Regional analysis of synthetic volumes showed that synthetic DWI images can not only be used to supplement clinical datasets, but demonstrates potential utility in bypassing or correcting registration in data pre-processing.
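A sketch of two of the metric families used above: PSNR and a KL divergence between intensity histograms. Note this Hist-KL implementation is an illustration only; the paper's newly proposed metric may be defined differently in detail.

```python
import numpy as np

def psnr(real: np.ndarray, fake: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((real - fake) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def hist_kl(real: np.ndarray, fake: np.ndarray, bins: int = 256, eps: float = 1e-10) -> float:
    """KL divergence between normalized intensity histograms (illustrative;
    the paper's Hist-KL metric may differ in detail)."""
    p, _ = np.histogram(real, bins=bins, range=(0, 1), density=True)
    q, _ = np.histogram(fake, bins=bins, range=(0, 1), density=True)
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

real = np.random.rand(128, 128)                                   # stand-in real MD map
fake = np.clip(real + 0.05 * np.random.randn(128, 128), 0, 1)     # stand-in generated map
print(psnr(real, fake), hist_kl(real, fake))
```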
Affiliation(s)
- Karissa Chan
- Electrical, Computer and Biomedical Engineering Department, Toronto Metropolitan University, Toronto, ON, Canada
- Institute of Biomedical Engineering, Science and Technology (iBEST), Toronto, ON, Canada
- Pejman Jabehdar Maralani
- Department of Medical Imaging, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, ON, Canada
- Alan R. Moody
- Department of Medical Imaging, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, ON, Canada
- April Khademi
- Electrical, Computer and Biomedical Engineering Department, Toronto Metropolitan University, Toronto, ON, Canada
- Institute of Biomedical Engineering, Science and Technology (iBEST), Toronto, ON, Canada
- Keenan Research Center, St. Michael’s Hospital, Toronto, ON, Canada
23
Waikel RL, Othman AA, Patel T, Hanchard SL, Hu P, Tekendo-Ngongang C, Duong D, Solomon BD. Generative Methods for Pediatric Genetics Education. medRxiv 2023:2023.08.01.23293506. [PMID: 37790417 PMCID: PMC10543060 DOI: 10.1101/2023.08.01.23293506]
Abstract
Artificial intelligence (AI) is used in an increasing number of areas, with recent interest in generative AI, such as using ChatGPT to generate programming code or DALL-E to make illustrations. We describe the use of generative AI in medical education. Specifically, we sought to determine whether generative AI could help train pediatric residents to better recognize genetic conditions. From publicly available images of individuals with genetic conditions, we used generative AI methods to create new images, which were checked for accuracy with an external classifier. We selected two conditions for study, Kabuki (KS) and Noonan (NS) syndromes, which are clinically important conditions that pediatricians may encounter. In this study, pediatric residents completed 208 surveys, where they each classified 20 images following exposure to one of 4 possible educational interventions, including with and without generative AI methods. Overall, we find that generative images perform similarly but appear to be slightly less helpful than real images. Most participants reported that images were useful, although real images were felt to be more helpful. We conclude that generative AI images may serve as an adjunctive educational tool, particularly for less familiar conditions, such as KS.
Affiliation(s)
- Rebekah L. Waikel
- Medical Genetics Branch, National Human Genome Research Institute, Bethesda, Maryland, United States of America
- Amna A. Othman
- Medical Genetics Branch, National Human Genome Research Institute, Bethesda, Maryland, United States of America
- Tanviben Patel
- Medical Genetics Branch, National Human Genome Research Institute, Bethesda, Maryland, United States of America
- Suzanna Ledgister Hanchard
- Medical Genetics Branch, National Human Genome Research Institute, Bethesda, Maryland, United States of America
- Ping Hu
- Medical Genetics Branch, National Human Genome Research Institute, Bethesda, Maryland, United States of America
- Cedrik Tekendo-Ngongang
- Medical Genetics Branch, National Human Genome Research Institute, Bethesda, Maryland, United States of America
- Dat Duong
- Medical Genetics Branch, National Human Genome Research Institute, Bethesda, Maryland, United States of America
- Benjamin D. Solomon
- Medical Genetics Branch, National Human Genome Research Institute, Bethesda, Maryland, United States of America
24
Ong J, Waisberg E, Masalkhi M, Kamran SA, Lowry K, Sarker P, Zaman N, Paladugu P, Tavakkoli A, Lee AG. Artificial Intelligence Frameworks to Detect and Investigate the Pathophysiology of Spaceflight Associated Neuro-Ocular Syndrome (SANS). Brain Sci 2023; 13:1148. [PMID: 37626504 PMCID: PMC10452366 DOI: 10.3390/brainsci13081148]
Abstract
Spaceflight associated neuro-ocular syndrome (SANS) is a unique phenomenon that has been observed in astronauts who have undergone long-duration spaceflight (LDSF). The syndrome is characterized by distinct imaging and clinical findings including optic disc edema, hyperopic refractive shift, posterior globe flattening, and choroidal folds. SANS serves as a large barrier to planetary spaceflight such as a mission to Mars and has been noted by the National Aeronautics and Space Administration (NASA) as a high risk based on its likelihood to occur and its severity to human health and mission performance. While it is a large barrier to future spaceflight, the underlying etiology of SANS is not well understood. Current ophthalmic imaging onboard the International Space Station (ISS) has provided further insights into SANS. However, the spaceflight environment presents unique challenges and limitations to further understanding this microgravity-induced phenomenon. The advent of artificial intelligence (AI) has revolutionized the field of imaging in ophthalmology, particularly in detection and monitoring. In this manuscript, we describe the current hypothesized pathophysiology of SANS and the medical diagnostic limitations during spaceflight to further understand its pathogenesis. We then introduce and describe various AI frameworks that can be applied to ophthalmic imaging onboard the ISS to further understand SANS, including supervised/unsupervised learning, generative adversarial networks, and transfer learning. We conclude by describing current research in this area to further understand SANS with the goal of enabling deeper insights into SANS and safer spaceflight for future missions.
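Of the frameworks introduced, transfer learning is the most compact to illustrate: a network pre-trained on a large image corpus is frozen and only a new task head is retrained on scarce spaceflight data. A minimal torchvision sketch, where the ImageNet weights and the two-class SANS head are assumptions rather than the authors' setup:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet weights; in practice one would prefer a backbone
# pre-trained on terrestrial ophthalmic images.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

for p in backbone.parameters():   # freeze the pre-trained feature extractor
    p.requires_grad = False

backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # new head: SANS vs. no SANS

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)  # train head only
logits = backbone(torch.randn(2, 3, 224, 224))                   # dummy fundus batch
print(logits.shape)  # torch.Size([2, 2])
```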
Affiliation(s)
- Joshua Ong
- Department of Ophthalmology and Visual Sciences, University of Michigan Kellogg Eye Center, Ann Arbor, MI 48105, USA
- Mouayad Masalkhi
- University College Dublin School of Medicine, Belfield, Dublin 4, Ireland
- Sharif Amit Kamran
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, NV 89512, USA
- Prithul Sarker
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, NV 89512, USA
- Nasif Zaman
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, NV 89512, USA
- Phani Paladugu
- Brigham and Women’s Hospital, Harvard Medical School, Boston, MA 02115, USA
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA 19107, USA
- Alireza Tavakkoli
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, NV 89512, USA
- Andrew G. Lee
- Center for Space Medicine, Baylor College of Medicine, Houston, TX 77030, USA
- Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, TX 77030, USA
- The Houston Methodist Research Institute, Houston Methodist Hospital, Houston, TX 77030, USA
- Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, NY 10065, USA
- Department of Ophthalmology, University of Texas Medical Branch, Galveston, TX 77555, USA
- University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Texas A&M College of Medicine, Bryan, TX 77030, USA
- Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, IA 50010, USA
25
Chen C, Zhou K, Lu T, Ning H, Xiao R. Integration- and separation-aware adversarial model for cerebrovascular segmentation from TOF-MRA. Comput Methods Programs Biomed 2023; 233:107475. [PMID: 36931018 DOI: 10.1016/j.cmpb.2023.107475]
Abstract
PURPOSE Cerebrovascular segmentation from time-of-flight magnetic resonance angiography (TOF-MRA) is important but challenging for the simulation and measurement of cerebrovascular diseases. Recently, deep learning has promoted the rapid development of cerebrovascular segmentation. However, model optimization relies on voxel or regional punishment and lacks global awareness and interpretation from the texture and edge. To overcome the limitations of the existing methods, we propose a new cerebrovascular segmentation method to obtain more refined structures. METHODS In this paper, we propose a new adversarial model that achieves segmentation using a segmentation model and filters the results using a discriminator. Considering the sample imbalance in cerebrovascular imaging, we separated the TOF-MRA images and utilized high- and low-frequency images to enhance the texture and edge representation. The encoder weight sharing from the segmentation model not only saves model parameters, but also strengthens the integration and separation correlation. Diversified discrimination enhances the robustness and regularization of the model. RESULTS The adversarial model was tested using two cerebrovascular datasets. It scored 82.26% and 73.38% on the two datasets, respectively, ranking first on both. The results show that our method not only outperforms recent cerebrovascular segmentation models, but also surpasses common adversarial models. CONCLUSION Our adversarial model focuses on improving the extraction ability of the model on texture and edge, thereby achieving awareness of the global cerebrovascular topology. Therefore, we obtained an accurate and robust cerebrovascular segmentation. This framework has potential applications in many imaging fields, particularly in the application of sample imbalance. Our code is available at https://github.com/MontaEllis/ISA-model.
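A sketch of one plausible high-/low-frequency separation, using a Gaussian low-pass and its residual; the paper's exact separation scheme may differ in detail.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_frequencies(image: np.ndarray, sigma: float = 3.0):
    """Split an image into a low-frequency base and a high-frequency residual.
    Illustrative only; the paper's separation scheme may differ."""
    low = gaussian_filter(image, sigma=sigma)  # smooth structures, intensity bias
    high = image - low                         # edges and fine vessel texture
    return low, high

slice_ = np.random.rand(256, 256).astype(np.float32)  # stand-in TOF-MRA slice
low, high = split_frequencies(slice_)
print(low.shape, high.shape, float(np.abs(high).mean()))
```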
Affiliation(s)
- Cheng Chen
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Kangneng Zhou
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Tong Lu
- Visual 3D Medical Science and Technology Development, Co. Ltd, Beijing 100082, China
- Huansheng Ning
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Ruoxiu Xiao
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China; Shunde Innovation School, University of Science and Technology Beijing, Foshan 100024, China.
26
Hussain S, Haider S, Maqsood S, Damaševičius R, Maskeliūnas R, Khan M. ETISTP: An Enhanced Model for Brain Tumor Identification and Survival Time Prediction. Diagnostics (Basel) 2023; 13:1456. [PMID: 37189556 DOI: 10.3390/diagnostics13081456]
Abstract
Technology-assisted diagnosis is increasingly important in healthcare systems. Brain tumors are a leading cause of death worldwide, and treatment plans rely heavily on accurate survival predictions. Gliomas, a type of brain tumor, have particularly high mortality rates and can be further classified as low- or high-grade, making survival prediction challenging. Existing literature provides several survival prediction models that use different parameters, such as patient age, gross total resection status, tumor size, or tumor grade. However, accuracy is often lacking in these models. The use of tumor volume instead of size may improve the accuracy of survival prediction. In response to this need, we propose a novel model, the enhanced brain tumor identification and survival time prediction (ETISTP), which computes tumor volume, classifies it into low- or high-grade glioma, and predicts survival time with greater accuracy. The ETISTP model integrates four parameters: patient age, survival days, gross total resection (GTR) status, and tumor volume. Notably, ETISTP is the first model to employ tumor volume for prediction. Furthermore, our model minimizes the computation time by allowing for parallel execution of tumor volume computation and classification. The simulation results demonstrate that ETISTP outperforms prominent survival prediction models.
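The tumor volume computation at the core of ETISTP reduces to counting segmented voxels and scaling by the scanner's voxel spacing; a sketch with placeholder spacing values and a toy mask:

```python
import numpy as np

def tumor_volume_ml(mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """Volume of a binary 3-D segmentation mask in millilitres (cm^3)."""
    voxel_mm3 = float(np.prod(spacing_mm))                # volume of one voxel in mm^3
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0   # 1 ml = 1000 mm^3

mask = np.zeros((155, 240, 240), dtype=bool)  # BraTS-sized volume
mask[70:90, 100:130, 100:130] = True          # toy tumor region
print(tumor_volume_ml(mask, spacing_mm=(1.0, 1.0, 1.0)))  # 18.0 ml
```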
Affiliation(s)
- Shah Hussain
- Department of Computer Science, City University of Science and Information Technology, Peshawar 25000, Pakistan
- Shahab Haider
- Department of Computer Science, City University of Science and Information Technology, Peshawar 25000, Pakistan
- Sarmad Maqsood
- Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania
- Robertas Damaševičius
- Department of Applied Informatics, Vytautas Magnus University, 44404 Kaunas, Lithuania
- Rytis Maskeliūnas
- Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
- Muzammil Khan
- Department of Computer & Software Technology, University of Swat, Swat 19200, Pakistan
27
Wang R, Bashyam V, Yang Z, Yu F, Tassopoulou V, Chintapalli SS, Skampardoni I, Sreepada LP, Sahoo D, Nikita K, Abdulkadir A, Wen J, Davatzikos C. Applications of generative adversarial networks in neuroimaging and clinical neuroscience. Neuroimage 2023; 269:119898. [PMID: 36702211 PMCID: PMC9992336 DOI: 10.1016/j.neuroimage.2023.119898]
Abstract
Generative adversarial networks (GANs) are one powerful type of deep learning models that have been successfully utilized in numerous fields. They belong to the broader family of generative methods, which learn to generate realistic data with a probabilistic model by learning distributions from real samples. In the clinical context, GANs have shown enhanced capabilities in capturing spatially complex, nonlinear, and potentially subtle disease effects compared to traditional generative methods. This review critically appraises the existing literature on the applications of GANs in imaging studies of various neurological conditions, including Alzheimer's disease, brain tumors, brain aging, and multiple sclerosis. We provide an intuitive explanation of various GAN methods for each application and further discuss the main challenges, open questions, and promising future directions of leveraging GANs in neuroimaging. We aim to bridge the gap between advanced deep learning methods and neurology research by highlighting how GANs can be leveraged to support clinical decision making and contribute to a better understanding of the structural and functional patterns of brain diseases.
Affiliation(s)
- Rongguang Wang
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA.
- Vishnu Bashyam
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Zhijian Yang
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Fanyang Yu
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Vasiliki Tassopoulou
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Sai Spandana Chintapalli
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Ioanna Skampardoni
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
- Lasya P Sreepada
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Dushyant Sahoo
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Konstantina Nikita
- School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
- Ahmed Abdulkadir
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; Department of Clinical Neurosciences, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Junhao Wen
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Christos Davatzikos
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA.
28
Skandarani Y, Jodoin PM, Lalande A. GANs for Medical Image Synthesis: An Empirical Study. J Imaging 2023; 9:69. [PMID: 36976120 PMCID: PMC10055771 DOI: 10.3390/jimaging9030069]
Abstract
Generative adversarial networks (GANs) have become increasingly powerful, generating mind-blowing photorealistic images that mimic the content of datasets they have been trained to replicate. One recurrent theme in medical imaging is whether GANs can also be as effective at generating workable medical data as they are for generating realistic RGB images. In this paper, we perform a multi-GAN and multi-application study to gauge the benefits of GANs in medical imaging. We tested various GAN architectures, from basic DCGAN to more sophisticated style-based GANs, on three medical imaging modalities and organs, namely: cardiac cine-MRI, liver CT, and RGB retina images. GANs were trained on well-known and widely utilized datasets, from which their FID scores were computed, to measure the visual acuity of their generated images. We further tested their usefulness by measuring the segmentation accuracy of a U-Net trained on these generated images and the original data. The results reveal that GANs are far from being equal, as some are ill-suited for medical imaging applications, while others performed much better. The top-performing GANs are capable of generating realistic-looking medical images by FID standards that can fool trained experts in a visual Turing test and comply with some metrics. However, segmentation results suggest that no GAN is capable of reproducing the full richness of medical datasets.
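The FID scores used throughout the study compare Gaussian fits of real and generated feature embeddings (conventionally Inception activations); the closed form, FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2(C_r C_f)^(1/2)), is short enough to sketch:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    """Fréchet inception distance between two sets of feature vectors."""
    mu_r, mu_f = feats_real.mean(0), feats_fake.mean(0)
    c_r = np.cov(feats_real, rowvar=False)
    c_f = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(c_r @ c_f)
    if np.iscomplexobj(covmean):   # numerical noise can leave tiny imaginary
        covmean = covmean.real     # parts; drop them
    return float(np.sum((mu_r - mu_f) ** 2) + np.trace(c_r + c_f - 2 * covmean))

# Stand-ins for Inception embeddings of real vs. GAN-generated images.
real = np.random.randn(500, 64)
fake = np.random.randn(500, 64) + 0.1
print(fid(real, fake))
```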
Affiliation(s)
- Youssef Skandarani
- ImViA Laboratory, University of Bourgogne Franche-Comte, 21000 Dijon, France
- CASIS Inc., 21800 Quetigny, France
- Pierre-Marc Jodoin
- Department of Computer Science, University of Sherbrooke, Sherbrooke, QC J1K 2R1, Canada
- Alain Lalande
- ImViA Laboratory, University of Bourgogne Franche-Comte, 21000 Dijon, France
- Department of Medical Imaging, University Hospital of Dijon, 21079 Dijon, France
29
Papadomanolakis TN, Sergaki ES, Polydorou AA, Krasoudakis AG, Makris-Tsalikis GN, Polydorou AA, Afentakis NM, Athanasiou SA, Vardiambasis IO, Zervakis ME. Tumor Diagnosis against Other Brain Diseases Using T2 MRI Brain Images and CNN Binary Classifier and DWT. Brain Sci 2023; 13:348. [PMID: 36831891 PMCID: PMC9954603 DOI: 10.3390/brainsci13020348]
Abstract
PURPOSE Brain tumors are diagnosed and classified manually and noninvasively by radiologists using Magnetic Resonance Imaging (MRI) data. The risk of misdiagnosis may exist due to human factors such as lack of time, fatigue, and relatively low experience. Deep learning methods have become increasingly important in MRI classification. To improve diagnostic accuracy, researchers emphasize the need to develop Computer-Aided Diagnosis (CAD) computational diagnostics based on artificial intelligence (AI) systems by using deep learning methods such as convolutional neural networks (CNN) and improving the performance of CNN by combining it with other data analysis tools such as wavelet transform. In this study, a novel diagnostic framework based on CNN and DWT data analysis is developed for the diagnosis of glioma tumors in the brain, among other tumors and other diseases, with T2-SWI MRI scans. It is a binary CNN classifier that treats the disease "glioma tumor" as positive and the other pathologies as negative, resulting in a very unbalanced binary problem. The study includes a comparative analysis of a CNN trained with wavelet transform data of MRIs instead of their pixel intensity values in order to demonstrate the increased performance of the CNN and DWT analysis in diagnosing brain gliomas. The results of the proposed CNN architecture are also compared with a deep CNN pre-trained on VGG16 transfer learning network and with the SVM machine learning method using DWT knowledge. METHODS To improve the accuracy of the CNN classifier, the proposed CNN model uses as knowledge the spatial and temporal features extracted by converting the original MRI images to the frequency domain by performing Discrete Wavelet Transformation (DWT), instead of the traditionally used original scans in the form of pixel intensities. Moreover, no pre-processing was applied to the original images. The images used are MRIs of type T2-SWI sequences parallel to the axial plane. Firstly, a compression step is applied for each MRI scan applying DWT up to three levels of decomposition. These data are used to train a 2D CNN in order to classify the scans as showing glioma or not. The proposed CNN model is trained on MRI slices originated from 382 various male and female adult patients, showing healthy and pathological images from a selection of diseases (showing glioma, meningioma, pituitary, necrosis, edema, non-enchasing tumor, hemorrhagic foci, edema, ischemic changes, cystic areas, etc.). The images are provided by the database of the Medical Image Computing and Computer-Assisted Intervention (MICCAI) and the Ischemic Stroke Lesion Segmentation (ISLES) challenges on Brain Tumor Segmentation (BraTS) challenges 2016 and 2017, as well as by the numerous records kept in the public general hospital of Chania, Crete, "Saint George". RESULTS The proposed frameworks are experimentally evaluated by examining MRI slices originating from 190 different patients (not included in the training set), of which 56% are showing gliomas by the longest two axes less than 2 cm and 44% are showing other pathological effects or healthy cases. Results show convincing performance when using as information the spatial and temporal features extracted by the original scans. With the proposed CNN model and with data in DWT format, we achieved the following statistic percentages: accuracy 0.97, sensitivity (recall) 1, specificity 0.93, precision 0.95, FNR 0, and FPR 0.07. 
These figures are higher with the DWT data format (accuracy by 6%, recall by 11%, specificity by 7%, precision by 5%, FNR by 0.1%, with FPR unchanged) than they would be had we used the pixel intensity values of the MRIs as input data instead of their DWT analysis. Additionally, our study showed that when our CNN relies on transfer learning from the existing VGG network, the performance values are lower, as follows: accuracy 0.87, sensitivity (recall) 0.91, specificity 0.84, precision 0.86, FNR 0.08, and FPR 0.14. CONCLUSIONS The experimental results show the outperformance of the CNN, which is not based on transfer learning, but uses as information the MRI brain scans decomposed into DWT information instead of the pixel intensities of the original scans. The results are promising for the proposed CNN based on DWT knowledge to serve for binary diagnosis of glioma tumors among other tumors and diseases. Moreover, the SVM learning model using DWT data analysis performs with higher accuracy and sensitivity than using pixel values.
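A sketch of the kind of multilevel 2-D DWT preprocessing described, using PyWavelets; the wavelet family here is an assumption, as the abstract does not name one.

```python
import numpy as np
import pywt

def dwt_features(slice_2d: np.ndarray, wavelet: str = "haar", level: int = 3):
    """Three-level 2-D DWT of an MRI slice. The wavelet family is an
    assumption; the abstract does not specify one."""
    coeffs = pywt.wavedec2(slice_2d, wavelet=wavelet, level=level)
    approx = coeffs[0]    # coarse approximation at the deepest level
    details = coeffs[1:]  # (cH, cV, cD) detail tuples, coarsest to finest
    return approx, details

mri = np.random.rand(256, 256).astype(np.float32)  # stand-in T2-SWI slice
approx, details = dwt_features(mri)
print(approx.shape, [d[0].shape for d in details])
# (32, 32) [(32, 32), (64, 64), (128, 128)] -> these arrays feed the CNN
```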
Affiliation(s)
- Eleftheria S. Sergaki
- School of Electrical and Computer Engineering, Technical University of Crete, 73100 Chania, Greece
- Andreas A. Polydorou
- Areteio Hospital, 2nd University Department of Surgery, Medical School, National and Kapodistrian University of Athens, 11528 Athens, Greece
- Alexios A. Polydorou
- Medical School, National and Kapodistrian University of Athens, 11528 Athens, Greece
- Nikolaos M. Afentakis
- Department of Electronic Engineering, Hellenic Mediterranean University, 73133 Chania, Greece
- Sofia A. Athanasiou
- Department of Electronic Engineering, Hellenic Mediterranean University, 73133 Chania, Greece
- Ioannis O. Vardiambasis
- Department of Electronic Engineering, Hellenic Mediterranean University, 73133 Chania, Greece
- Michail E. Zervakis
- School of Electrical and Computer Engineering, Technical University of Crete, 73100 Chania, Greece
30
Endocrine Tumor Classification via Machine-Learning-Based Elastography: A Systematic Scoping Review. Cancers (Basel) 2023; 15:837. [PMID: 36765794 PMCID: PMC9913672 DOI: 10.3390/cancers15030837]
Abstract
Elastography complements traditional medical imaging modalities by mapping tissue stiffness to identify tumors in the endocrine system, and machine learning models can further improve diagnostic accuracy and reliability. Our objective in this review was to summarize the applications and performance of machine-learning-based elastography on the classification of endocrine tumors. Two authors independently searched electronic databases, including PubMed, Scopus, Web of Science, IEEEXpress, CINAHL, and EMBASE. Eleven (n = 11) articles were eligible for the review, of which eight (n = 8) focused on thyroid tumors and three (n = 3) considered pancreatic tumors. In all thyroid studies, the researchers used shear-wave ultrasound elastography, whereas the pancreas researchers applied strain elastography with endoscopy. Traditional machine learning approaches or the deep feature extractors were used to extract the predetermined features, followed by classifiers. The applied deep learning approaches included the convolutional neural network (CNN) and multilayer perceptron (MLP). Some researchers considered the mixed or sequential training of B-mode and elastographic ultrasound data or fusing data from different image segmentation techniques in machine learning models. All reviewed methods achieved an accuracy of ≥80%, but only three were ≥90% accurate. The most accurate thyroid classification (94.70%) was achieved by applying sequential training CNN; the most accurate pancreas classification (98.26%) was achieved using a CNN-long short-term memory (LSTM) model integrating elastography with B-mode and Doppler images.
31
Kubicek J, Varysova A, Cerny M, Hancarova K, Oczka D, Augustynek M, Penhaker M, Prokop O, Scurek R. Performance and Robustness of Regional Image Segmentation Driven by Selected Evolutionary and Genetic Algorithms: Study on MR Articular Cartilage Images. Sensors (Basel) 2022; 22:6335. [PMID: 36080793 PMCID: PMC9460494 DOI: 10.3390/s22176335]
Abstract
The analysis and segmentation of articular cartilage magnetic resonance (MR) images belongs to one of the most commonly routine tasks in diagnostics of the musculoskeletal system of the knee area. Conventional regional segmentation methods, which are based either on the histogram partitioning (e.g., Otsu method) or clustering methods (e.g., K-means), have been frequently used for the task of regional segmentation. Such methods are well known as fast and well working in the environment, where cartilage image features are reliably recognizable. The well-known fact is that the performance of these methods is prone to the image noise and artefacts. In this context, regional segmentation strategies, driven by either genetic algorithms or selected evolutionary computing strategies, have the potential to overcome these traditional methods such as Otsu thresholding or K-means in the context of their performance. These optimization strategies consecutively generate a pyramid of a possible set of histogram thresholds, of which the quality is evaluated by using the fitness function based on Kapur's entropy maximization to find the most optimal combination of thresholds for articular cartilage segmentation. On the other hand, such optimization strategies are often computationally demanding, which is a limitation of using such methods for a stack of MR images. In this study, we publish a comprehensive analysis of the optimization methods based on fuzzy soft segmentation, driven by artificial bee colony (ABC), particle swarm optimization (PSO), Darwinian particle swarm optimization (DPSO), and a genetic algorithm for an optimal thresholding selection against the routine segmentations Otsu and K-means for analysis and the features extraction of articular cartilage from MR images. This study objectively analyzes the performance of the segmentation strategies upon variable noise with dynamic intensities to report a segmentation's robustness in various image conditions for a various number of segmentation classes (4, 7, and 10), cartilage features (area, perimeter, and skeleton) extraction preciseness against the routine segmentation strategies, and lastly the computing time, which represents an important factor of segmentation performance. We use the same settings on individual optimization strategies: 100 iterations and 50 population. This study suggests that the combination of fuzzy thresholding with an ABC algorithm gives the best performance in the comparison with other methods as from the view of the segmentation influence of additive dynamic noise influence, also for cartilage features extraction. On the other hand, using genetic algorithms for cartilage segmentation in some cases does not give a good performance. In most cases, the analyzed optimization strategies significantly overcome the routine segmentation methods except for the computing time, which is normally lower for the routine algorithms. We also publish statistical tests of significance, showing differences in the performance of individual optimization strategies against Otsu and K-means method. Lastly, as a part of this study, we publish a software environment, integrating all the methods from this study.
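All of the evolutionary strategies compared here (ABC, PSO, DPSO, and the genetic algorithm) share the same fitness function, Kapur's entropy over the intensity classes induced by a candidate threshold set; a NumPy sketch:

```python
import numpy as np

def kapur_fitness(hist: np.ndarray, thresholds) -> float:
    """Kapur's entropy for multilevel thresholding: the sum of class entropies
    H_k = -sum(p_i/w_k * ln(p_i/w_k)) over the classes cut at the thresholds."""
    p = hist / hist.sum()
    edges = [0, *sorted(int(t) for t in thresholds), len(p)]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()          # probability mass of this class
        if w <= 0:
            continue
        q = p[lo:hi] / w            # within-class distribution
        q = q[q > 0]
        total += -np.sum(q * np.log(q))
    return float(total)

# An optimizer (ABC, PSO, DPSO, GA, ...) searches for thresholds maximizing this.
img = np.random.randint(0, 256, size=(128, 128))
hist, _ = np.histogram(img, bins=256, range=(0, 256))
print(kapur_fitness(hist, thresholds=(64, 128, 192)))  # 4 segmentation classes
```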
Affiliation(s)
- Jan Kubicek
- Department of Cybernetics and Biomedical Engineering, VŠB—Technical University of Ostrava, 17.listopadu 2172/15, Poruba, 708 00 Ostrava, Czech Republic
- Alice Varysova
- Department of Cybernetics and Biomedical Engineering, VŠB—Technical University of Ostrava, 17.listopadu 2172/15, Poruba, 708 00 Ostrava, Czech Republic
- Martin Cerny
- Department of Cybernetics and Biomedical Engineering, VŠB—Technical University of Ostrava, 17.listopadu 2172/15, Poruba, 708 00 Ostrava, Czech Republic
- Kristyna Hancarova
- Department of Cybernetics and Biomedical Engineering, VŠB—Technical University of Ostrava, 17.listopadu 2172/15, Poruba, 708 00 Ostrava, Czech Republic
- David Oczka
- Department of Cybernetics and Biomedical Engineering, VŠB—Technical University of Ostrava, 17.listopadu 2172/15, Poruba, 708 00 Ostrava, Czech Republic
- Martin Augustynek
- Department of Cybernetics and Biomedical Engineering, VŠB—Technical University of Ostrava, 17.listopadu 2172/15, Poruba, 708 00 Ostrava, Czech Republic
- Marek Penhaker
- Department of Cybernetics and Biomedical Engineering, VŠB—Technical University of Ostrava, 17.listopadu 2172/15, Poruba, 708 00 Ostrava, Czech Republic
- Ondrej Prokop
- MEDIN, a.s., Vlachovicka 619, 592 31 Nove Mesto na Morave, Czech Republic
- Radomir Scurek
- Department of Security Services, Faculty of Safety Engineering, VŠB—Technical University of Ostrava, ul. Lumirova 3, 700 30 Ostrava, Czech Republic
32
Ng CKC. Artificial Intelligence for Radiation Dose Optimization in Pediatric Radiology: A Systematic Review. Children (Basel) 2022; 9:1044. [PMID: 35884028 PMCID: PMC9320231 DOI: 10.3390/children9071044]
Abstract
Radiation dose optimization is particularly important in pediatric radiology, as children are more susceptible to potential harmful effects of ionizing radiation. However, only one narrative review about artificial intelligence (AI) for dose optimization in pediatric computed tomography (CT) has been published yet. The purpose of this systematic review is to answer the question “What are the AI techniques and architectures introduced in pediatric radiology for dose optimization, their specific application areas, and performances?” Literature search with use of electronic databases was conducted on 3 June 2022. Sixteen articles that met selection criteria were included. The included studies showed deep convolutional neural network (CNN) was the most common AI technique and architecture used for dose optimization in pediatric radiology. All but three included studies evaluated AI performance in dose optimization of abdomen, chest, head, neck, and pelvis CT; CT angiography; and dual-energy CT through deep learning image reconstruction. Most studies demonstrated that AI could reduce radiation dose by 36–70% without losing diagnostic information. Despite the dominance of commercially available AI models based on deep CNN with promising outcomes, homegrown models could provide comparable performances. Future exploration of AI value for dose optimization in pediatric radiology is necessary due to small sample sizes and narrow scopes (only three modalities, CT, positron emission tomography/magnetic resonance imaging and mobile radiography, and not all examination types covered) of existing studies.
Affiliation(s)
- Curtise K. C. Ng
- Curtin Medical School, Curtin University, GPO Box U1987, Perth, WA 6845, Australia; Tel.: +61-8-9266-7314; Fax: +61-8-9266-2377
- Curtin Health Innovation Research Institute (CHIRI), Faculty of Health Sciences, Curtin University, GPO Box U1987, Perth, WA 6845, Australia
33
Arora A, Arora A. Generative adversarial networks and synthetic patient data: current challenges and future perspectives. Future Healthc J 2022; 9:190-193. [DOI: 10.7861/fhj.2022-0013]
34
Sivari E, Güzel MS, Bostanci E, Mishra A. A Novel Hybrid Machine Learning Based System to Classify Shoulder Implant Manufacturers. Healthcare (Basel) 2022; 10:580. [PMID: 35327056 PMCID: PMC8952500 DOI: 10.3390/healthcare10030580]
Abstract
It is necessary to know the manufacturer and model of a previously implanted shoulder prosthesis before performing Total Shoulder Arthroplasty operations, which may need to be performed repeatedly in accordance with the need for repair or replacement. In cases where the patient’s previous records cannot be found, where the records are not clear, or the surgery was conducted abroad, the specialist should identify the implant manufacturer and model during preoperative X-ray controls. In this study, an auxiliary expert system is proposed for classifying manufacturers of shoulder implants on the basis of X-ray images that is automated, objective, and based on hybrid machine learning models. In the proposed system, ten different hybrid models consisting of a combination of deep learning and machine learning algorithms were created and statistically tested. According to the experimental results, an accuracy of 95.07% was achieved using the DenseNet201 + Logistic Regression model, one of the proposed hybrid machine learning models (p < 0.05). The proposed hybrid machine learning algorithms achieve the goal of low cost and high performance compared to other studies in the literature. The results lead the authors to believe that the proposed system could be used in hospitals as an automatic and objective system for assisting orthopedists in the rapid and effective determination of shoulder implant types before performing revision surgery.
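A sketch of the hybrid pattern the study reports as best: deep features from a frozen DenseNet201 fed into a logistic regression classifier. The pooling choice, weights tag, and classifier settings are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F
from torchvision import models
from sklearn.linear_model import LogisticRegression

# Frozen DenseNet201 as a feature extractor (the paper's best backbone).
densenet = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
densenet.eval()

def deep_features(batch: torch.Tensor) -> torch.Tensor:
    """Global-average-pooled DenseNet201 features, shape (N, 1920)."""
    with torch.no_grad():
        fmap = F.relu(densenet.features(batch))          # (N, 1920, H', W')
        return F.adaptive_avg_pool2d(fmap, 1).flatten(1)

xray = torch.randn(8, 3, 224, 224)  # stand-in preprocessed shoulder X-rays
X = deep_features(xray).numpy()
y = [0, 1, 2, 3, 0, 1, 2, 3]        # four manufacturer classes (toy labels)

clf = LogisticRegression(max_iter=1000).fit(X, y)  # classical classifier head
print(clf.predict(X[:2]))
```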
Affiliation(s)
- Esra Sivari
- Computer Engineering Department, Cankiri Karatekin University, Cankiri 18100, Turkey
- Mehmet Serdar Güzel
- Computer Engineering Department, Ankara University, Ankara 06830, Turkey
- Erkan Bostanci
- Computer Engineering Department, Ankara University, Ankara 06830, Turkey
- Alok Mishra
- Faculty of Logistics, Molde University College-Specialized University in Logistics, 6402 Molde, Norway
- Software Engineering Department, Atilim University, Ankara 06830, Turkey