1
Li F, Wang D, Yang Z, Zhang Y, Jiang J, Liu X, Kong K, Zhou F, Tham CC, Medeiros F, Han Y, Grzybowski A, Zangwill LM, Lam DSC, Zhang X. The AI revolution in glaucoma: Bridging challenges with opportunities. Prog Retin Eye Res 2024; 103:101291. [PMID: 39186968 DOI: 10.1016/j.preteyeres.2024.101291]
Abstract
Recent advancements in artificial intelligence (AI) hold transformative potential for reshaping glaucoma clinical management: improving screening efficacy, sharpening diagnostic precision, and refining the detection of disease progression. However, incorporating AI into healthcare faces significant hurdles in both algorithm development and deployment. During development, issues arise from the intensive effort required to label data, inconsistent diagnostic standards, and a lack of thorough testing, which often limits the algorithms' widespread applicability. Additionally, the "black box" nature of AI algorithms may leave clinicians wary or skeptical. In deployment, challenges include handling the lower-quality images encountered in real-world settings and the systems' limited ability to generalize across diverse ethnic groups and diagnostic equipment. Looking ahead, new developments aim to protect data privacy through federated learning paradigms, to improve algorithm generalizability by diversifying input data modalities, and to augment datasets with synthetic imagery. The integration of smartphones appears promising for applying AI algorithms in both clinical and non-clinical settings. Furthermore, introducing large language models (LLMs) as interactive tools in medicine may signify a fundamental change in how healthcare is delivered. By navigating these challenges and leveraging them as opportunities, the field of glaucoma AI will achieve not only improved algorithmic accuracy and optimized data integration but also a paradigmatic shift towards enhanced clinical acceptance and a transformative improvement in glaucoma care.
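The federated learning paradigm mentioned in this abstract keeps patient data at each institution and shares only model parameters. As an illustration only (not from the review; the function, client counts, and weights are hypothetical), a minimal federated-averaging (FedAvg-style) aggregation step might look like:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate client model weights by a dataset-size-weighted mean (FedAvg-style).

    client_weights: list of 1-D numpy arrays, one flattened parameter vector
    per participating site; client_sizes: number of training samples per site.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)  # shape: (n_clients, n_params)
    # Each site contributes in proportion to how much data it holds.
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()

# Toy round: three hypothetical hospitals training a two-parameter model.
w_global = federated_average(
    [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])],
    client_sizes=[10, 10, 20],
)
# -> array([0.75, 0.75]): the raw images never leave the individual sites.
```

Only the aggregated parameters cross institutional boundaries, which is what makes the approach attractive for privacy-sensitive ophthalmic imaging data.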
Affiliation(s)
- Fei Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Deming Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Zefeng Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Yinhang Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Jiaxuan Jiang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Xiaoyi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Kangjie Kong
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Fengqi Zhou
- Ophthalmology, Mayo Clinic Health System, Eau Claire, WI, USA
- Clement C Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Felipe Medeiros
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
- Ying Han
- University of California, San Francisco, Department of Ophthalmology, San Francisco, CA, USA; The Francis I. Proctor Foundation for Research in Ophthalmology, University of California, San Francisco, CA, USA
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
- Linda M Zangwill
- Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, CA, USA
- Dennis S C Lam
- The International Eye Research Institute of the Chinese University of Hong Kong (Shenzhen), Shenzhen, China; The C-MER Dennis Lam & Partners Eye Center, C-MER International Eye Care Group, Hong Kong, China
- Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
2
Wang X, An Y, Hu Q. Anomaly prediction of Internet behavior based on generative adversarial networks. PeerJ Comput Sci 2024; 10:e2009. [PMID: 39145230 PMCID: PMC11323085 DOI: 10.7717/peerj-cs.2009]
Abstract
With the popularity of Internet applications, large volumes of Internet behavior log data are generated. Abnormal behavior by corporate employees may lead to Internet security issues and data leakage incidents. To ensure the safety of information systems, it is important to study anomaly prediction of Internet behaviors. Because manually labeling big data is costly, an unsupervised generative model, Anomaly Prediction of Internet Behavior based on Generative Adversarial Networks (APIBGAN), which requires only a small amount of labeled data, is proposed to predict anomalies in Internet behaviors. After the input Internet behavior data are preprocessed, the data-generating generative adversarial network (DGGAN) in APIBGAN learns the distribution of real Internet behavior data, leveraging the powerful feature extraction of neural networks to generate Internet behavior data from random noise. APIBGAN then uses these labeled generated data as a benchmark to perform distance-based anomaly prediction. Three categories of Internet behavior sampling data from corporate employees are employed to train APIBGAN: (1) online behavior data of an individual in a department; (2) online behavior data of multiple employees in the same department; and (3) online behavior data of multiple employees in different departments. The prediction scores for the three categories are 87.23%, 85.13%, and 83.47%, respectively, all above the highest score of 81.35% obtained by the comparison method based on Isolation Forests in the CCF Big Data & Computing Intelligence Contest (CCF-BDCI). The experimental results validate that APIBGAN effectively predicts outliers in Internet behaviors through a GAN composed of simple three-layer fully connected neural networks (FNNs).
APIBGAN can be used not only for anomaly prediction of Internet behaviors but also in many other applications whose big data are infeasible to label manually. Above all, APIBGAN has broad application prospects, and this work also provides valuable input for GAN-based anomaly prediction.
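The final distance-based step of APIBGAN can be sketched independently of the GAN itself: treat the generated data as a benchmark of normal behavior and flag samples that lie far from every benchmark point. A minimal illustration (function names, threshold, and toy data are assumptions, not from the paper):

```python
import numpy as np

def nearest_benchmark_distance(samples, benchmark):
    """Euclidean distance from each sample to its nearest benchmark vector."""
    # Pairwise distances via broadcasting: shape (n_samples, n_benchmark).
    diffs = samples[:, None, :] - benchmark[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)

def predict_anomalies(samples, benchmark, threshold):
    """Flag a sample as anomalous when it lies far from all benchmark points."""
    return nearest_benchmark_distance(samples, benchmark) > threshold

# A tight cluster around the origin stands in for GAN-generated normal data.
rng = np.random.default_rng(0)
benchmark = rng.normal(0.0, 0.1, size=(200, 3))
samples = np.array([[0.0, 0.0, 0.0],   # normal-looking behavior vector
                    [5.0, 5.0, 5.0]])  # obvious outlier
flags = predict_anomalies(samples, benchmark, threshold=1.0)
# -> [False, True]
```

In the paper's pipeline the benchmark would come from the trained DGGAN rather than a fixed Gaussian; the scoring logic is the same.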
Affiliation(s)
- XiuQing Wang
- College of Computer and Cyber Security, Hebei Normal University, Shijiazhuang, Hebei, China
- Hebei Provincial Key Laboratory of Network & Information Security, College of Computer and Cyber Security, Shijiazhuang, Hebei, China
- Hebei Provincial Engineering Research Center for Supply Chain Big Data Analytics & Data Security, Hebei Normal University, Shijiazhuang, Hebei, China
- Yang An
- College of Computer and Cyber Security, Hebei Normal University, Shijiazhuang, Hebei, China
- Qianwei Hu
- College of Computer and Cyber Security, Hebei Normal University, Shijiazhuang, Hebei, China
3
Hu B, Zhang Z, Chen S, Xu Q, Li J. A metric for quantitative evaluation of glioma margin changes in magnetic resonance imaging. Acta Radiol 2024; 65:645-653. [PMID: 38449078 DOI: 10.1177/02841851241229597]
Abstract
BACKGROUND Gliomas differ from meningiomas in their margins, most of which are not separated from the surrounding tissue by a distinct interface. PURPOSE To characterize glioma margins quantitatively using the margin sharpness coefficient (MSC), which is significant for clinical judgment and invasion analysis of gliomas. MATERIAL AND METHODS This study used magnetic resonance imaging (MRI) data from 67 local patients and 15 patients from an open dataset to quantify intensity changes at brain glioma margins with the MSC. The accuracy of the MSC was assessed by concordance analysis and Bland-Altman analysis, and its correlation with invasion was evaluated using receiver operating characteristic (ROC) curves and Spearman correlation coefficients. RESULTS In tumor grading, mean MSC values were significantly lower for high-grade gliomas (HGG) than for low-grade gliomas (LGG). The concordance correlation between the measured gradient and the actual gradient was high (HGG: 0.981; LGG: 0.993), and the Bland-Altman mean difference at the 95% confidence interval (HGG: -0.576; LGG: 0.254) and the limits of agreement (HGG: 5.580; LGG: 5.436) indicated no statistical difference. The correlation between MSC and margin-based invasion of gliomas showed an AUC of 0.903 for HGG and 0.911 for LGG. The mean Spearman correlation coefficient between the MSC and the actual invasion distance was -0.631 in gliomas. CONCLUSION The relatively low MSC at the blurred margins and irregular shapes of gliomas may help in benign-malignant differentiation and invasion prediction and has potential application for clinical judgment.
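The paper's exact MSC formula is not reproduced here. As a hedged sketch of the general idea, margin sharpness can be illustrated as the peak intensity gradient along a profile sampled across the lesion boundary, so that a blurred (infiltrative) margin scores lower than a crisp interface:

```python
import numpy as np

def margin_sharpness(profile, spacing=1.0):
    """Sharpness of an intensity profile sampled across a lesion margin.

    Here sharpness is taken as the peak absolute finite-difference gradient
    along the profile; a gradual, infiltrative transition yields a lower
    value than a crisp interface. This is an illustrative stand-in, not the
    paper's MSC definition.
    """
    gradient = np.abs(np.diff(np.asarray(profile, dtype=float))) / spacing
    return float(gradient.max())

x = np.linspace(-5, 5, 101)
sharp_edge = (x > 0).astype(float)        # step-like margin (meningioma-like)
blurred_edge = 1.0 / (1.0 + np.exp(-x))   # sigmoid, gradual margin (glioma-like)
# The step profile scores far higher than the sigmoid profile.
```

In practice such profiles would be sampled perpendicular to the segmented tumor boundary on each MRI slice and the per-profile values aggregated.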
Affiliation(s)
- Binwu Hu
- School of Electronics & Information Engineering, Nanjing University of Information Science and Technology, Nanjing, PR China
- Zhiqiang Zhang
- Department of Medical Imaging, Jinling Hospital, Nanjing University School of Medicine, Nanjing, PR China
- Suting Chen
- School of Electronics & Information Engineering, Nanjing University of Information Science and Technology, Nanjing, PR China
- Qiang Xu
- Department of Medical Imaging, Jinling Hospital, Nanjing University School of Medicine, Nanjing, PR China
- Jianrui Li
- Department of Medical Imaging, Jinling Hospital, Nanjing University School of Medicine, Nanjing, PR China
4
Fang L, Jiang Y. Dual path parallel hierarchical diagnosis model for intracranial tumors based on multi-feature entropy weight. Comput Biol Med 2024; 173:108353. [PMID: 38520918 DOI: 10.1016/j.compbiomed.2024.108353]
Abstract
The grading diagnosis of intracranial tumors is a key step in formulating clinical treatment plans and surgical guidelines. To grade intracranial tumors effectively, this paper proposes a dual path parallel hierarchical model that automatically grades intracranial tumors with high accuracy. In this model, prior features of the solid tumor mass and intratumoral necrosis are extracted. The optimal division of the dataset is then achieved through multi-feature entropy weighting. Multi-modal input is realized by the dual path structure, and multiple features are superimposed and fused to achieve image grading. The model has been tested on clinical medical images provided by the Second Affiliated Hospital of Dalian Medical University. Experiments show that the proposed model has good generalization ability, with an accuracy of 0.990. The proposed model can be applied to clinical diagnosis and has practical application prospects.
Affiliation(s)
- Lingling Fang
- School of Computer Science and Artificial Intelligence, Liaoning Normal University, Dalian City, Liaoning Province, China
- Yumeng Jiang
- School of Computer Science and Artificial Intelligence, Liaoning Normal University, Dalian City, Liaoning Province, China
5
Urcuyo JC, Curtin L, Langworthy JM, De Leon G, Anderies B, Singleton KW, Hawkins-Daarud A, Jackson PR, Bond KM, Ranjbar S, Lassiter-Morris Y, Clark-Swanson KR, Paulson LE, Sereduk C, Mrugala MM, Porter AB, Baxter L, Salomao M, Donev K, Hudson M, Meyer J, Zeeshan Q, Sattur M, Patra DP, Jones BA, Rahme RJ, Neal MT, Patel N, Kouloumberis P, Turkmani AH, Lyons M, Krishna C, Zimmerman RS, Bendok BR, Tran NL, Hu LS, Swanson KR. Image-localized biopsy mapping of brain tumor heterogeneity: A single-center study protocol. PLoS One 2023; 18:e0287767. [PMID: 38117803 PMCID: PMC10732423 DOI: 10.1371/journal.pone.0287767]
Abstract
Brain cancers pose a unique set of difficulties due to the limited accessibility of human brain tumor tissue. For this reason, clinical decision-making relies heavily on MR imaging interpretation, yet the mapping between MRI features and underlying biology remains ambiguous. Standard (clinical) tissue sampling fails to capture the full heterogeneity of the disease. Biopsies are required to obtain a pathological diagnosis and are predominantly taken from the tumor core, which often has different traits from the surrounding invasive tumor that typically leads to recurrent disease. One approach to solving this issue is to characterize the spatial heterogeneity of molecular, genetic, and cellular features of glioma through the intraoperative collection of multiple image-localized biopsy samples paired with multi-parametric MRIs. We have adopted this approach and are currently actively enrolling patients for our 'Image-Based Mapping of Brain Tumors' study. Patients are eligible for this research study (IRB #16-002424) if they are 18 years or older and undergoing surgical intervention for a brain lesion. Once identified, candidate patients receive dynamic susceptibility contrast (DSC) perfusion MRI and diffusion tensor imaging (DTI), in addition to standard sequences (T1, T1Gd, T2, T2-FLAIR), at their presurgical scan. During surgery, sample anatomical locations are tracked using neuronavigation. The specimens collected in this research study are used to capture intra-tumoral heterogeneity across brain tumors, including quantification of genetic aberrations through whole-exome and RNA sequencing as well as other tissue analysis techniques. To date, these data (made available through a public portal) have been used to generate, test, and validate predictive regional maps of the spatial distribution of tumor cell density and/or treatment-related key genetic marker status, identifying biopsy and/or treatment targets based on insight from the entire tumor makeup. This type of methodology, when delivered within clinically feasible time frames, has the potential to further inform medical decision-making by improving surgical intervention, radiation, and targeted drug therapy for patients with glioma.
Affiliation(s)
- Javier C Urcuyo
- Mathematical NeuroOncology Lab, Mayo Clinic, Phoenix, Arizona, United States of America
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Lee Curtin
- Mathematical NeuroOncology Lab, Mayo Clinic, Phoenix, Arizona, United States of America
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Jazlynn M. Langworthy
- Mathematical NeuroOncology Lab, Mayo Clinic, Phoenix, Arizona, United States of America
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Gustavo De Leon
- Mathematical NeuroOncology Lab, Mayo Clinic, Phoenix, Arizona, United States of America
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Barrett Anderies
- Mathematical NeuroOncology Lab, Mayo Clinic, Phoenix, Arizona, United States of America
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Kyle W. Singleton
- Mathematical NeuroOncology Lab, Mayo Clinic, Phoenix, Arizona, United States of America
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Andrea Hawkins-Daarud
- Mathematical NeuroOncology Lab, Mayo Clinic, Phoenix, Arizona, United States of America
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Pamela R. Jackson
- Mathematical NeuroOncology Lab, Mayo Clinic, Phoenix, Arizona, United States of America
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Kamila M. Bond
- Mathematical NeuroOncology Lab, Mayo Clinic, Phoenix, Arizona, United States of America
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Sara Ranjbar
- Mathematical NeuroOncology Lab, Mayo Clinic, Phoenix, Arizona, United States of America
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Yvette Lassiter-Morris
- Mathematical NeuroOncology Lab, Mayo Clinic, Phoenix, Arizona, United States of America
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Kamala R. Clark-Swanson
- Mathematical NeuroOncology Lab, Mayo Clinic, Phoenix, Arizona, United States of America
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Lisa E. Paulson
- Mathematical NeuroOncology Lab, Mayo Clinic, Phoenix, Arizona, United States of America
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Chris Sereduk
- Department of Cancer Biology, Mayo Clinic, Phoenix, Arizona, United States of America
- Maciej M. Mrugala
- Department of Neurology, Mayo Clinic, Phoenix, Arizona, United States of America
- Department of Oncology, Mayo Clinic, Phoenix, Arizona, United States of America
- Alyx B. Porter
- Department of Neurology, Mayo Clinic, Phoenix, Arizona, United States of America
- Department of Oncology, Mayo Clinic, Phoenix, Arizona, United States of America
- Leslie Baxter
- Department of Neurophysiology, Mayo Clinic, Phoenix, Arizona, United States of America
- Marcela Salomao
- Department of Pathology, Mayo Clinic, Phoenix, Arizona, United States of America
- Kliment Donev
- Department of Pathology, Mayo Clinic, Phoenix, Arizona, United States of America
- Miles Hudson
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Jenna Meyer
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Qazi Zeeshan
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Mithun Sattur
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Devi P. Patra
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Breck A. Jones
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Rudy J. Rahme
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Matthew T. Neal
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Naresh Patel
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Pelagia Kouloumberis
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Ali H. Turkmani
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Mark Lyons
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Chandan Krishna
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Richard S. Zimmerman
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Bernard R. Bendok
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Nhan L. Tran
- Department of Cancer Biology, Mayo Clinic, Phoenix, Arizona, United States of America
- Leland S. Hu
- Department of Radiology, Mayo Clinic, Phoenix, Arizona, United States of America
- Kristin R. Swanson
- Mathematical NeuroOncology Lab, Mayo Clinic, Phoenix, Arizona, United States of America
- Department of Neurosurgery, Mayo Clinic, Phoenix, Arizona, United States of America
- Department of Cancer Biology, Mayo Clinic, Phoenix, Arizona, United States of America
- Department of Radiation Oncology, Mayo Clinic, Phoenix, Arizona, United States of America
6
Yang S, Kim KD, Ariji E, Takata N, Kise Y. Evaluating the performance of generative adversarial network-synthesized periapical images in classifying C-shaped root canals. Sci Rep 2023; 13:18038. [PMID: 37865655 PMCID: PMC10590373 DOI: 10.1038/s41598-023-45290-1]
Abstract
This study evaluated the performance of generative adversarial network (GAN)-synthesized periapical images for classifying C-shaped root canals, which are challenging to diagnose because of their complex morphology. GANs have emerged as a promising technique for generating realistic images, offering a potential solution for data augmentation in scenarios with limited training datasets. Periapical images were synthesized using the StyleGAN2-ADA framework, and their quality was evaluated based on the average Fréchet inception distance (FID) and a visual Turing test. The average FID was 35.353 (± 4.386) for synthesized C-shaped canal images and 25.471 (± 2.779) for non-C-shaped canal images. The visual Turing test, conducted by two radiologists on 100 randomly selected images, revealed that distinguishing between real and synthetic images was difficult. These results indicate that the GAN-synthesized images exhibit satisfactory visual quality. The classification performance of the neural network, when augmented with GAN data, improved compared with using real data alone, which could be advantageous in addressing class-imbalanced data. GAN-generated images thus proved to be an effective data augmentation method, addressing the limitations of scarce training data and computational resources in diagnosing dental anomalies.
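The FID reported above compares the Gaussian statistics (mean and covariance) of real and synthetic image embeddings. A minimal sketch of the standard Fréchet distance computation follows; the Inception feature extraction is omitted and the toy statistics below are illustrative, not values from the study:

```python
import numpy as np
from scipy.linalg import sqrtm  # matrix square root

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between two Gaussians fitted to feature activations.

    FID compares real vs. synthetic images through embedding statistics,
    typically from an Inception network:
        FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 * (S1 @ S2)^(1/2))
    """
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):   # numerical noise can introduce tiny
        covmean = covmean.real     # imaginary parts; discard them
    diff = np.asarray(mu1) - np.asarray(mu2)
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# With identity covariances the score reduces to the squared mean shift.
fid = frechet_distance([0.0, 0.0], np.eye(2), [3.0, 4.0], np.eye(2))  # 25.0
```

Lower values indicate that the synthetic distribution sits closer to the real one, which is why the non-C-shaped images (FID 25.471) were judged slightly more faithful than the C-shaped ones (FID 35.353).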
Affiliation(s)
- Sujin Yang
- Department of Advanced General Dentistry, College of Dentistry, Yonsei University, Seoul, Korea
- Kee-Deog Kim
- Department of Advanced General Dentistry, College of Dentistry, Yonsei University, Seoul, Korea
- Eiichiro Ariji
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University, 2-11 Seuemori-Dori, Chikusa-Ku, Nagoya, 464-8651, Japan
- Natsuho Takata
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University, 2-11 Seuemori-Dori, Chikusa-Ku, Nagoya, 464-8651, Japan
- Yoshitaka Kise
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University, 2-11 Seuemori-Dori, Chikusa-Ku, Nagoya, 464-8651, Japan
7
Ma M, Zhang X, Li Y, Wang X, Zhang R, Wang Y, Sun P, Wang X, Sun X. ConvLSTM coordinated longitudinal transformer under spatio-temporal features for tumor growth prediction. Comput Biol Med 2023; 164:107313. [PMID: 37562325 DOI: 10.1016/j.compbiomed.2023.107313]
Abstract
Accurate quantification of tumor growth patterns can reveal the development of the disease. Using key features such as tumor growth rate and expansion, physicians can intervene and diagnose patients more efficiently to improve the cure rate. However, existing longitudinal growth models cannot adequately capture long-range spatiotemporal dependencies between tumor growth pixels and fail to fit the nonlinear growth law of tumors effectively. We therefore propose the ConvLSTM coordinated longitudinal Transformer (LCTformer) under spatiotemporal features for tumor growth prediction. We design an Adaptive Edge Enhancement Module (AEEM) to learn static spatial features of tumors of different sizes under time series and to make the deep model focus more on tumor edge regions. In addition, we propose a Growth Prediction Module (GPM) to characterize future tumor growth trends. It consists of a Longitudinal Transformer and a ConvLSTM. Based on adaptive abstract features of current tumors, the Longitudinal Transformer explores dynamic growth patterns between spatiotemporal CT sequences and learns future morphological features of tumors in parallel under the dual views of residual information and sequence motion relationships. The ConvLSTM better learns the location information of target tumors and complements the Longitudinal Transformer to jointly predict future tumor imaging, reducing the loss of growth information. Finally, a Channel Enhancement Fusion Module (CEFM) densely fuses the generated tumor feature images across channel and spatial dimensions, realizing accurate quantification of the whole tumor growth process. Our model has been rigorously trained and tested on the NLST dataset, achieving an average Dice score of 88.52%, a recall of 89.64%, and an RMSE of 11.06, which can improve the work efficiency of doctors.
Affiliation(s)
- Manfu Ma
- College of Computer Science & Engineering, Northwest Normal University, Lanzhou, 730070, China
- Xiaoming Zhang
- College of Computer Science & Engineering, Northwest Normal University, Lanzhou, 730070, China
- Yong Li
- College of Computer Science & Engineering, Northwest Normal University, Lanzhou, 730070, China
- Xia Wang
- Department of Pharmacy, The People's Hospital of Gansu Province, Lanzhou, 730000, China
- Ruigen Zhang
- College of Computer Science & Engineering, Northwest Normal University, Lanzhou, 730070, China
- Yang Wang
- College of Computer Science & Engineering, Northwest Normal University, Lanzhou, 730070, China
- Penghui Sun
- College of Computer Science & Engineering, Northwest Normal University, Lanzhou, 730070, China
- Xuegang Wang
- College of Computer Science & Engineering, Northwest Normal University, Lanzhou, 730070, China
- Xuan Sun
- College of Computer Science & Engineering, Northwest Normal University, Lanzhou, 730070, China
8
Jacobs F, D'Amico S, Benvenuti C, Gaudio M, Saltalamacchia G, Miggiano C, De Sanctis R, Della Porta MG, Santoro A, Zambelli A. Opportunities and Challenges of Synthetic Data Generation in Oncology. JCO Clin Cancer Inform 2023; 7:e2300045. [PMID: 37535875 DOI: 10.1200/cci.23.00045]
Abstract
Widespread interest in artificial intelligence (AI) in health care has focused mainly on deductive systems that analyze available real-world data to discover patterns not otherwise visible. The generative adversarial network, a new type of inductive AI, has recently evolved to generate high-fidelity virtual synthetic data (SD) trained on relatively limited real-world information. The AI system is fed a collection of real data and learns to generate new augmented data while maintaining the general characteristics of the original data set. The use of SD to enhance clinical research and protect patient privacy has drawn considerable interest in medicine and in the complex field of oncology. This article summarizes the main characteristics of this innovative technology and critically discusses how it can be used to accelerate data access for secondary purposes, providing an overview of the opportunities and challenges of SD generation for clinical cancer research and health care.
Affiliation(s)
- Flavia Jacobs
- Department of Biomedical Sciences, Humanitas University, Milan, Italy
- IRCCS Istituto Clinico Humanitas, Milan, Italy
- Chiara Benvenuti
- Department of Biomedical Sciences, Humanitas University, Milan, Italy
- IRCCS Istituto Clinico Humanitas, Milan, Italy
- Mariangela Gaudio
- Department of Biomedical Sciences, Humanitas University, Milan, Italy
- IRCCS Istituto Clinico Humanitas, Milan, Italy
- Chiara Miggiano
- Department of Biomedical Sciences, Humanitas University, Milan, Italy
- IRCCS Istituto Clinico Humanitas, Milan, Italy
- Rita De Sanctis
- Department of Biomedical Sciences, Humanitas University, Milan, Italy
- IRCCS Istituto Clinico Humanitas, Milan, Italy
- Matteo Giovanni Della Porta
- Department of Biomedical Sciences, Humanitas University, Milan, Italy
- IRCCS Istituto Clinico Humanitas, Milan, Italy
- Armando Santoro
- Department of Biomedical Sciences, Humanitas University, Milan, Italy
- IRCCS Istituto Clinico Humanitas, Milan, Italy
- Alberto Zambelli
- Department of Biomedical Sciences, Humanitas University, Milan, Italy
- IRCCS Istituto Clinico Humanitas, Milan, Italy
9
Alrumiah SS, Alrebdi N, Ibrahim DM. Augmenting healthy brain magnetic resonance images using generative adversarial networks. PeerJ Comput Sci 2023; 9:e1318. [PMID: 37346635 PMCID: PMC10280481 DOI: 10.7717/peerj-cs.1318]
Abstract
Machine learning applications in the medical sector face a lack of medical data due to privacy issues. For instance, brain tumor image-based classification suffers from a shortage of brain images. This shortage produces classification problems such as class imbalance, which can bias a model toward one class over the others. This study aims to solve the imbalance problem of the "no tumor" class in a publicly available brain magnetic resonance imaging (MRI) dataset. GAN-based augmentation techniques, specifically deep convolutional GAN (DCGAN) and single-image GAN (SinGAN), were used to address the imbalanced classification problem, and traditional augmentation was implemented using rotation. Several VGG16 classification experiments were conducted on (i) the original dataset, (ii) the DCGAN-based dataset, (iii) the SinGAN-based dataset, (iv) a combination of the DCGAN and SinGAN datasets, and (v) the rotation-based dataset. The results show that the original dataset achieved the highest accuracy, 73%, and that SinGAN outperformed DCGAN by a significant margin of 4%. In contrast, experimenting with the non-augmented original dataset resulted in the highest classification loss, reflecting the effect of the imbalance issue. These results provide a general view of how different image augmentation techniques affect enlarging the healthy brain dataset.
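The rotation-based traditional augmentation mentioned above can be sketched as oversampling the minority class with quarter-turn rotations of its images (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def augment_by_rotation(images, labels, target_class, copies=3):
    """Oversample a minority class with 90/180/270-degree rotations.

    images: array of shape (n, h, w); labels: array of shape (n,).
    Each minority-class image contributes `copies` rotated duplicates.
    """
    extra_imgs, extra_lbls = [], []
    for img in images[labels == target_class]:
        for k in range(1, copies + 1):       # k quarter-turns: 90, 180, 270
            extra_imgs.append(np.rot90(img, k=k))
            extra_lbls.append(target_class)
    return (np.concatenate([images, np.stack(extra_imgs)]),
            np.concatenate([labels, np.array(extra_lbls)]))

# Toy set: four "tumor" images (label 1) and one "no tumor" image (label 0).
images = np.arange(5 * 2 * 2).reshape(5, 2, 2).astype(float)
labels = np.array([1, 1, 1, 1, 0])
new_imgs, new_lbls = augment_by_rotation(images, labels, target_class=0)
# The minority class grows from 1 to 4 examples (1 original + 3 rotations).
```

GAN-based augmentation replaces the rotation step with sampling from a trained generator, but the class-rebalancing bookkeeping is the same.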
Affiliation(s)
- Sarah S. Alrumiah: Department of Information Technology, College of Computer, Qassim University, Buraydah, Saudi Arabia
- Norah Alrebdi: Department of Information Technology, College of Computer, Qassim University, Buraydah, Saudi Arabia
- Dina M. Ibrahim: Department of Information Technology, College of Computer, Qassim University, Buraydah, Saudi Arabia; Department of Computers and Control Engineering, Faculty of Engineering, Tanta University, Tanta, Egypt
10.
Song P, Hou J, Xiao N, Zhao J, Zhao J, Qiang Y, Yang Q. MSTS-Net: malignancy evolution prediction of pulmonary nodules from longitudinal CT images via multi-task spatial-temporal self-attention network. Int J Comput Assist Radiol Surg 2023; 18:685-693. [PMID: 36447076 DOI: 10.1007/s11548-022-02744-7]
Abstract
PURPOSE Longitudinal CT images capture how lesions grow and evolve over time. Our purpose is therefore to exploit the temporal dimension of pulmonary lesions to improve prediction of the malignant evolution of pulmonary nodules. METHODS We propose a Multi-task Spatial-Temporal Self-attention Network (MSTS-Net) to predict the malignancy growth trend of pulmonary nodules across different periods. The model performs a lesion segmentation task and a lesion prediction task with a shared encoder, and the segmentation task boosts the performance of the prediction task. In addition, a Static Context Spatial Self-attention Module and a Dynamic Adaptive Temporal Self-attention Module capture both the static spatial coherence between consecutive slices of a lesion in the same period and the temporal dynamics across different time points. RESULTS We repeatedly evaluated the proposed method on the National Lung Screening Trial dataset and the Shanxi Cancer Hospital dataset. MSTS-Net achieved an area under the ROC curve of 0.919. CONCLUSION In computer-aided prediction of the malignant evolution of pulmonary nodules, combining the temporal characteristics of nodules with CT data effectively improves prediction accuracy. MSTS-Net has high predictive value and broad prospects for clinical application.
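The shared-encoder, two-head layout the abstract describes can be sketched in a few lines. The toy numpy forward pass below is purely illustrative (all weights, shapes, and function names are assumptions, not MSTS-Net's actual architecture): one encoder feeds both a per-slice segmentation head and a scan-level prediction head.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):            # shared feature extractor used by both tasks
    return np.tanh(x @ W)

def seg_head(h, Ws):          # per-slice lesion-mask logits
    return h @ Ws

def pred_head(h, Wp):         # single malignancy-evolution logit for the scan
    return h.mean(axis=0) @ Wp

x  = rng.normal(size=(16, 32))   # 16 "slices" x 32 features from one scan
W  = rng.normal(size=(32, 8))
Ws = rng.normal(size=(8, 1))
Wp = rng.normal(size=(8, 1))

h = encoder(x, W)                # one encoder feeds both decoders
seg_logits = seg_head(h, Ws)     # (16, 1) segmentation output
pred_logit = pred_head(h, Wp)    # (1,)   growth-trend output
print(seg_logits.shape, pred_logit.shape)
```

In a real multi-task setup, both heads' losses would be summed and backpropagated through the shared encoder, which is what lets segmentation supervision improve the prediction task.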
Affiliation(s)
- Ping Song: College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Jiaxin Hou: College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Ning Xiao: College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Jun Zhao: College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Juanjuan Zhao: College of Information and Computer, Taiyuan University of Technology, Taiyuan, China; College of Information, Jinzhong College of Information, Jinzhong, China
- Yan Qiang: College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Qianqian Yang: College of Information, Jinzhong College of Information, Jinzhong, China
11.
Wang R, Bashyam V, Yang Z, Yu F, Tassopoulou V, Chintapalli SS, Skampardoni I, Sreepada LP, Sahoo D, Nikita K, Abdulkadir A, Wen J, Davatzikos C. Applications of generative adversarial networks in neuroimaging and clinical neuroscience. Neuroimage 2023; 269:119898. [PMID: 36702211 PMCID: PMC9992336 DOI: 10.1016/j.neuroimage.2023.119898]
Abstract
Generative adversarial networks (GANs) are a powerful class of deep learning models that has been successfully applied in numerous fields. They belong to the broader family of generative methods, which learn to generate realistic data with a probabilistic model by learning distributions from real samples. In the clinical context, GANs have shown enhanced capabilities in capturing spatially complex, nonlinear, and potentially subtle disease effects compared with traditional generative methods. This review critically appraises the existing literature on applications of GANs in imaging studies of various neurological conditions, including Alzheimer's disease, brain tumors, brain aging, and multiple sclerosis. We provide an intuitive explanation of the GAN methods used in each application and further discuss the main challenges, open questions, and promising future directions for leveraging GANs in neuroimaging. We aim to bridge the gap between advanced deep learning methods and neurology research by highlighting how GANs can support clinical decision making and contribute to a better understanding of the structural and functional patterns of brain diseases.
Affiliation(s)
- Rongguang Wang: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Vishnu Bashyam: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Zhijian Yang: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Fanyang Yu: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Vasiliki Tassopoulou: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Sai Spandana Chintapalli: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Ioanna Skampardoni: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
- Lasya P Sreepada: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Dushyant Sahoo: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Konstantina Nikita: School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
- Ahmed Abdulkadir: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; Department of Clinical Neurosciences, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Junhao Wen: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Christos Davatzikos: Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA
12.
Plaszczynski S, Grammaticos B, Pallud J, Campagne JE, Badoual M. Predicting regrowth of low-grade gliomas after radiotherapy. PLoS Comput Biol 2023; 19:e1011002. [PMID: 37000852 PMCID: PMC10128962 DOI: 10.1371/journal.pcbi.1011002]
Abstract
Diffuse low-grade gliomas are invasive and incurable brain tumors that inevitably transform into higher-grade ones. A classical treatment to delay this transition is radiotherapy (RT). Following RT, the tumor gradually shrinks during a period of typically 6 months to 4 years before regrowing. To improve the patient's health-related quality of life and help clinicians build personalized follow-ups, one would benefit from predictions of the time during which the tumor is expected to decrease. The challenge is to provide a reliable estimate of this regrowth time shortly after RT (i.e., with few data), although patients react differently to the treatment. To this end, we analyze the tumor size dynamics in a batch of 20 high-quality longitudinal datasets and propose a simple and robust analytical model with just four parameters. From the study of their correlations, we build a statistical constraint that helps determine the regrowth time even for patients for whom we have only a few measurements of the tumor size. We validate the procedure on the data and predict the regrowth time at the moment of the first MRI after RT, with a typical precision of 6 months. Using virtual patients, we study whether some forecast is still possible just three months after RT. We obtain reliable estimates of the regrowth time in 75% of the cases, in particular for all "fast responders". The remaining 25% represent cases where the actual regrowth time is large and can be safely estimated with another measurement a year later. These results show the feasibility of making personalized predictions of the tumor regrowth time shortly after RT.
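The paper's actual analytical model is not reproduced here; the sketch below uses a hypothetical four-parameter form with the same qualitative behaviour (exponential shrinkage after RT followed by slow regrowth) to illustrate how a regrowth time can be read off as the minimum of the fitted size curve. All parameter values are invented for the example.

```python
import numpy as np

def radius(t, r0, a, b, c):
    """Hypothetical post-RT tumour radius: a decaying term plus a slow regrowth term."""
    return r0 * np.exp(-a * t) + c * (np.exp(b * t) - 1.0)

def regrowth_time(params, t_max=10.0, n=10001):
    """Time (years) at which the modelled radius is minimal, i.e. regrowth begins."""
    t = np.linspace(0.0, t_max, n)
    return t[np.argmin(radius(t, *params))]

params = (30.0, 1.2, 0.4, 2.0)   # r0 [mm], shrink rate, regrowth rate, scale
t_star = regrowth_time(params)
print(round(t_star, 2))          # estimated regrowth time in years
```

In the paper's setting, the four parameters would instead be fitted to the patient's longitudinal size measurements, with the statistical constraint compensating for sparse early data.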
Affiliation(s)
- Stéphane Plaszczynski: Université Paris-Saclay, CNRS/IN2P3, IJCLab, Orsay, France; Université Paris-Cité, IJCLab, Orsay, France
- Basile Grammaticos: Université Paris-Saclay, CNRS/IN2P3, IJCLab, Orsay, France; Université Paris-Cité, IJCLab, Orsay, France
- Johan Pallud: Department of Neurosurgery, GHU Paris Sainte-Anne Hospital, Paris, France; Université de Paris, Sorbonne Paris Cité, Paris, France; Inserm, U1266, IMA-Brain, Institut de Psychiatrie et Neurosciences de Paris, Paris, France
- Jean-Eric Campagne: Université Paris-Saclay, CNRS/IN2P3, IJCLab, Orsay, France; Université Paris-Cité, IJCLab, Orsay, France
- Mathilde Badoual: Université Paris-Saclay, CNRS/IN2P3, IJCLab, Orsay, France; Université Paris-Cité, IJCLab, Orsay, France
13.
Zhou T, Noeuveglise A, Modzelewski R, Ghazouani F, Thureau S, Fontanilles M, Ruan S. Prediction of brain tumor recurrence location based on multi-modal fusion and nonlinear correlation learning. Comput Med Imaging Graph 2023; 106:102218. [PMID: 36947921 DOI: 10.1016/j.compmedimag.2023.102218]
Abstract
Brain tumors are one of the leading causes of cancer death, and high-grade brain tumors tend to recur even after standard treatment. Developing a method to predict the location of brain tumor recurrence therefore plays an important role in treatment planning and can potentially prolong patients' survival. Little work has addressed this issue so far. In this paper, we present a deep learning-based network for predicting the location of brain tumor recurrence. Since such datasets are usually small, we propose transfer learning to improve the prediction: we first train a multi-modal brain tumor segmentation network on the public BraTS 2021 dataset, then transfer the pre-trained encoder to our private dataset to extract rich semantic features. A multi-scale multi-channel feature fusion model and a nonlinear correlation learning module are then developed to learn effective features, with the correlation between multi-channel features modeled by a nonlinear equation. To measure the similarity between the distribution of the original features of one modality and that of the estimated correlated features of another modality, we propose to use the Kullback-Leibler divergence; based on this divergence, a correlation loss function is designed to maximize the similarity between the two feature distributions. Finally, two decoders are constructed to jointly segment the present brain tumor and predict its future recurrence location. To the best of our knowledge, this is the first work that segments the present tumor and at the same time predicts the future recurrence location, making treatment planning more efficient and precise. The experimental results demonstrate the effectiveness of the proposed method on a limited dataset.
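The Kullback-Leibler term at the heart of the proposed correlation loss can be written down directly. The sketch below is a simplified discrete version (not the authors' implementation; the toy feature vectors are invented) showing the divergence that the correlation loss would drive toward zero to align the two feature distributions.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-8):
    """KL(p || q) between two discrete distributions (e.g. normalized feature maps)."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# original features of one modality vs. correlated features estimated from another
f_orig = np.array([0.5, 0.3, 0.2])
f_est  = np.array([0.45, 0.35, 0.20])

loss = kl_divergence(f_orig, f_est)   # minimized during training to align distributions
print(round(loss, 5))
```

KL divergence is non-negative and zero only when the two distributions match, which is what makes it a natural penalty for aligning cross-modality features.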
Affiliation(s)
- Tongxue Zhou: School of Information Science and Technology, Hangzhou Normal University, Hangzhou 311121, China
- Romain Modzelewski: Department of Nuclear Medicine, Henri Becquerel Cancer Center, Rouen, 76038, France
- Fethi Ghazouani: Université de Rouen Normandie, LITIS - QuantIF, Rouen 76183, France
- Sébastien Thureau: Department of Nuclear Medicine, Henri Becquerel Cancer Center, Rouen, 76038, France
- Maxime Fontanilles: Department of Nuclear Medicine, Henri Becquerel Cancer Center, Rouen, 76038, France
- Su Ruan: Université de Rouen Normandie, LITIS - QuantIF, Rouen 76183, France
14.
Luo J, Pan M, Mo K, Mao Y, Zou D. Emerging role of artificial intelligence in diagnosis, classification and clinical management of glioma. Semin Cancer Biol 2023; 91:110-123. [PMID: 36907387 DOI: 10.1016/j.semcancer.2023.03.006]
Abstract
Glioma represents the dominant primary intracranial malignancy of the central nervous system. Artificial intelligence, which mainly comprises machine learning and deep learning computational approaches, presents a unique opportunity to enhance the clinical management of glioma by improving tumor segmentation, diagnosis, differentiation, grading, treatment, prediction of clinical outcomes (prognosis and recurrence), molecular features, clinical classification, characterization of the tumor microenvironment, and drug discovery. A growing body of recent studies applies artificial intelligence-based models to disparate data sources for glioma, covering imaging modalities, digital pathology, and high-throughput multi-omics data (especially emerging single-cell RNA sequencing and spatial transcriptomics). While these early findings are promising, future studies are required to standardize artificial intelligence-based models to improve the generalizability and interpretability of the results. Despite these outstanding issues, targeted clinical application of artificial intelligence approaches in glioma will facilitate the development of precision medicine in this field. If these challenges can be overcome, artificial intelligence has the potential to profoundly change the way patients with, or at risk of, glioma receive care.
Affiliation(s)
- Jiefeng Luo: Department of Neurology, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China
- Mika Pan: Department of Neurology, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China
- Ke Mo: Clinical Research Center, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China
- Yingwei Mao: Department of Biology, Pennsylvania State University, University Park, PA 16802, USA
- Donghua Zou: Department of Neurology, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China; Clinical Research Center, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China
15.
Zhao Y, Wang X, Che T, Bao G, Li S. Multi-task deep learning for medical image computing and analysis: A review. Comput Biol Med 2023; 153:106496. [PMID: 36634599 DOI: 10.1016/j.compbiomed.2022.106496]
Abstract
The renaissance of deep learning has provided promising solutions to various tasks. While conventional deep learning models are constructed for a single specific task, multi-task deep learning (MTDL), which can accomplish at least two tasks simultaneously, has attracted research attention. MTDL is a joint learning paradigm that harnesses the inherent correlation of multiple related tasks to achieve reciprocal benefits: improved performance, enhanced generalizability, and reduced overall computational cost. This review focuses on advanced applications of MTDL for medical image computing and analysis. We first summarize four popular MTDL network architectures (cascaded, parallel, interacted, and hybrid). Then, we review representative MTDL-based networks for eight application areas, including the brain, eye, chest, cardiac, abdomen, musculoskeletal, pathology, and other human body regions. While MTDL-based medical image processing has flourished and demonstrated outstanding performance in many tasks, performance gaps remain in others, and accordingly we outline the open challenges and prospective trends. For instance, in the 2018 Ischemic Stroke Lesion Segmentation challenge, the reported top Dice score of 0.51 and top recall of 0.55 achieved by a cascaded MTDL model indicate that further research is in high demand to escalate the performance of current models.
Affiliation(s)
- Yan Zhao: Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Xiuying Wang: School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia
- Tongtong Che: Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Guoqing Bao: School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia
- Shuyu Li: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China
16.
Osuala R, Kushibar K, Garrucho L, Linardos A, Szafranowska Z, Klein S, Glocker B, Diaz O, Lekadir K. Data synthesis and adversarial networks: A review and meta-analysis in cancer imaging. Med Image Anal 2023; 84:102704. [PMID: 36473414 DOI: 10.1016/j.media.2022.102704]
Abstract
Despite technological and medical advances, the detection, interpretation, and treatment of cancer based on imaging data continue to pose significant challenges. These include inter-observer variability, class imbalance, dataset shifts, inter- and intra-tumour heterogeneity, malignancy determination, and treatment effect uncertainty. Given the recent advancements in image synthesis, Generative Adversarial Networks (GANs), and adversarial training, we assess the potential of these technologies to address a number of key challenges of cancer imaging. We categorise these challenges into (a) data scarcity and imbalance, (b) data access and privacy, (c) data annotation and segmentation, (d) cancer detection and diagnosis, and (e) tumour profiling, treatment planning and monitoring. Based on our analysis of 164 publications that apply adversarial training techniques in the context of cancer imaging, we highlight multiple underexplored solutions with research potential. We further contribute the Synthesis Study Trustworthiness Test (SynTRUST), a meta-analysis framework for assessing the validation rigour of medical image synthesis studies. SynTRUST is based on 26 concrete measures of thoroughness, reproducibility, usefulness, scalability, and tenability. Based on SynTRUST, we analyse 16 of the most promising cancer imaging challenge solutions and observe a high validation rigour in general, but also several desirable improvements. With this work, we strive to bridge the gap between the needs of the clinical cancer imaging community and the current and prospective research on data synthesis and adversarial networks in the artificial intelligence community.
Affiliation(s)
- Richard Osuala: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Kaisar Kushibar: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Lidia Garrucho: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Akis Linardos: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Zuzanna Szafranowska: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Stefan Klein: Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Ben Glocker: Biomedical Image Analysis Group, Department of Computing, Imperial College London, UK
- Oliver Diaz: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Karim Lekadir: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
17.
A Systematic Literature Review on Applications of GAN-Synthesized Images for Brain MRI. Future Internet 2022. [DOI: 10.3390/fi14120351]
Abstract
With advances in brain imaging, magnetic resonance imaging (MRI) is evolving into a popular radiological tool for clinical diagnosis, and deep learning (DL) methods can detect abnormalities in brain images without an extensive manual feature-extraction process. Generative adversarial network (GAN)-synthesized images have many applications in this field besides augmentation, such as image translation, registration, super-resolution, denoising, motion correction, segmentation, reconstruction, and contrast enhancement. The existing literature was reviewed systematically to understand the role of GAN-synthesized images in brain disease diagnosis. The Web of Science and Scopus databases were searched extensively for relevant studies from the last six years, predefined inclusion and exclusion criteria were used to filter the search results, and data extraction was based on the related research questions (RQs). This systematic literature review (SLR) identifies the various loss functions used in the above applications and the software used to process brain MRIs. A comparative study of existing evaluation metrics for GAN-synthesized images helps in choosing the proper metric for an application. GAN-synthesized images will play a crucial role in the clinical sector in the coming years, and this paper gives a baseline for other researchers in the field.
18.
Rafael-Palou X, Aubanell A, Ceresa M, Ribas V, Piella G, Ballester MAG. Prediction of Lung Nodule Progression with an Uncertainty-Aware Hierarchical Probabilistic Network. Diagnostics (Basel) 2022; 12:2639. [PMID: 36359482 PMCID: PMC9689366 DOI: 10.3390/diagnostics12112639]
Abstract
Predicting whether a lung nodule will grow, remain stable or regress over time, especially early in its follow-up, would help doctors prescribe personalized treatments and plan surgery better. However, the multifactorial nature of lung tumour progression hampers the identification of growth patterns. In this work, we propose a deep hierarchical generative and probabilistic network that, given an initial image of the nodule, predicts whether it will grow, quantifies its future size and provides its expected semantic appearance at a future time. Unlike previous solutions, our approach also estimates the uncertainty in the predictions arising from the intrinsic noise in medical images and the inter-observer variability in the annotations. On an independent test set, the method achieved a mean absolute error of 1.74 mm for future tumour size, a nodule segmentation Dice coefficient of 78% and a tumour growth accuracy of 84% on predictions made up to 24 months ahead. Because no similar methods provide future lung tumour growth predictions with their associated uncertainty, we adapted equivalent deterministic and alternative generative networks (i.e., probabilistic U-Net, Bayesian test dropout and Pix2Pix) for comparison. Our method outperformed all of them, corroborating the adequacy of our approach.
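The Bayesian test-dropout baseline the abstract mentions can be sketched as repeated stochastic forward passes whose spread serves as a predictive-uncertainty estimate. Everything below is an illustrative toy with invented names and data, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(1)

def predict_growth(x, w, drop_p=0.3):
    """One stochastic forward pass: a fresh dropout mask gives a different prediction."""
    mask = rng.random(w.shape) > drop_p
    return float(x @ (w * mask))

x = rng.normal(size=8)   # features of a nodule at baseline
w = rng.normal(size=8)   # toy model weights

samples = np.array([predict_growth(x, w) for _ in range(200)])
mean_growth = samples.mean()   # point prediction (e.g. future size change)
uncertainty = samples.std()    # spread across passes ~ predictive uncertainty
print(uncertainty > 0.0)
```

A large spread flags cases where the model's growth prediction should be trusted less, which is the clinical value of uncertainty-aware prediction.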
Affiliation(s)
- Xavier Rafael-Palou: BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, 08108 Barcelona, Spain; Eurecat Centre Tecnològic de Catalunya, Digital Health Unit, 08005 Barcelona, Spain
- Anton Aubanell: Vall d'Hebron University Hospital, 08035 Barcelona, Spain
- Mario Ceresa: BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, 08108 Barcelona, Spain
- Vicent Ribas: Eurecat Centre Tecnològic de Catalunya, Digital Health Unit, 08005 Barcelona, Spain
- Gemma Piella: BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, 08108 Barcelona, Spain
- Miguel A. González Ballester: BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, 08108 Barcelona, Spain; ICREA, 08690 Barcelona, Spain
19.
Zhang F, Zhang Y, Zhu X, Chen X, Du H, Zhang X. PregGAN: A prognosis prediction model for breast cancer based on conditional generative adversarial networks. Comput Methods Programs Biomed 2022; 224:107026. [PMID: 35872384 DOI: 10.1016/j.cmpb.2022.107026]
Abstract
BACKGROUND AND OBJECTIVE A generative adversarial network (GAN) is able to learn from a set of training data and generate new data with the same characteristics as the training data. Building on this capability, this paper develops GANs as a tool for disease prognosis prediction and proposes PregGAN, a prognostic model based on the conditional generative adversarial network (CGAN). METHODS The idea of PregGAN is to generate prognosis predictions from the clinical data of patients. PregGAN adds the clinical data as conditions to the training process: conditions are used as input to the generator along with noise, and the generator synthesizes new samples from the noise vectors and the conditions. To address the mode collapse problem during PregGAN training, the Wasserstein distance and a gradient penalty strategy are used to make training more stable. RESULTS In prognosis prediction experiments on the METABRIC breast cancer dataset, PregGAN achieved good results, with an average accuracy (ACC) of 90.6% and an average AUC (area under the curve) of 0.946. CONCLUSIONS Experimental results show that PregGAN is a reliable prognosis prediction model for breast cancer. Owing to its strong ability to learn probability distributions, PregGAN can also be used for prognosis prediction in other diseases.
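The gradient-penalty idea used to stabilise training can be shown with a toy linear critic, for which the input gradient is available in closed form (real WGAN-GP implementations compute it with automatic differentiation). All names and data below are illustrative, not PregGAN's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_penalty(w, real, fake, rng):
    """WGAN-GP penalty for a linear critic f(x) = w @ x, whose input gradient is w.
    Samples points on lines between real and fake data and pushes the critic's
    gradient norm toward 1 at those points."""
    eps = rng.random((real.shape[0], 1))
    x_hat = eps * real + (1 - eps) * fake   # interpolated samples
    grad_norm = np.linalg.norm(w)           # df/dx_hat == w for a linear critic
    return float(np.mean((grad_norm - 1.0) ** 2)), x_hat

real = rng.normal(size=(4, 3))
fake = rng.normal(size=(4, 3))
w = np.array([2.0, 0.0, 0.0])               # critic weights with gradient norm 2

gp, _ = gradient_penalty(w, real, fake, rng)
print(gp)  # (2 - 1)^2 = 1.0
```

During training this penalty is added, scaled by a coefficient, to the critic's Wasserstein loss, which enforces the 1-Lipschitz constraint without weight clipping.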
Affiliation(s)
- Fan Zhang: Henan Key Laboratory of Big Data Analysis and Processing, Henan University, Kaifeng 475004, China; Henan Engineering Laboratory of Spatial Information Processing, Henan University, Kaifeng 475004, China
- Yingqi Zhang: Henan Key Laboratory of Big Data Analysis and Processing, Henan University, Kaifeng 475004, China
- Xiaoke Zhu: Henan Key Laboratory of Big Data Analysis and Processing, Henan University, Kaifeng 475004, China
- Xiaopan Chen: Henan Key Laboratory of Big Data Analysis and Processing, Henan University, Kaifeng 475004, China
- Haishun Du: School of Artificial Intelligence, Henan University, Kaifeng 475004, China
- Xinhong Zhang: School of Software, Henan University, Kaifeng 475004, China
20.
Static-Dynamic coordinated Transformer for Tumor Longitudinal Growth Prediction. Comput Biol Med 2022; 148:105922. [DOI: 10.1016/j.compbiomed.2022.105922]
21.
Iqbal A, Sharif M, Yasmin M, Raza M, Aftab S. Generative adversarial networks and its applications in the biomedical image segmentation: a comprehensive survey. Int J Multimed Inf Retr 2022; 11:333-368. [PMID: 35821891 PMCID: PMC9264294 DOI: 10.1007/s13735-022-00240-x]
Abstract
Recent advances in deep generative models have shown significant potential for image synthesis, detection, segmentation, and classification. Segmenting medical images is considered a primary challenge in the biomedical imaging field, and various GAN-based models have been proposed in the literature to address medical segmentation challenges. Our search identified 151 papers; after twofold screening, 138 papers were selected for the final survey. We conduct a comprehensive survey of GAN applications in medical image segmentation, focusing on GAN-based models, performance metrics, loss functions, datasets, augmentation methods, paper implementations, and source code. We then provide a detailed overview of GAN applications in segmenting different human diseases. We conclude with a critical discussion, the limitations of GANs, and suggestions for future directions. We hope this survey is beneficial and raises awareness of GAN implementations for biomedical image segmentation tasks.
Affiliation(s)
- Ahmed Iqbal: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Muhammad Sharif: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mussarat Yasmin: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mudassar Raza: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Shabib Aftab: Department of Computer Science, Virtual University of Pakistan, Lahore, Pakistan
22
Ali H, Biswas R, Ali F, Shah U, Alamgir A, Mousa O, Shah Z. The role of generative adversarial networks in brain MRI: a scoping review. Insights Imaging 2022; 13:98. [PMID: 35662369 PMCID: PMC9167371 DOI: 10.1186/s13244-022-01237-0]
Abstract
The performance of artificial intelligence (AI) for brain MRI can improve if enough data are made available. Generative adversarial networks (GANs) have shown considerable potential to generate synthetic MRI data that capture the distribution of real MRI. GANs are also popular for segmentation, noise removal, and super-resolution of brain MRI images. This scoping review explores how GAN methods are being used on brain MRI data, as reported in the literature. The review describes the different applications of GANs for brain MRI, presents the most commonly used GAN architectures, and summarizes the publicly available brain MRI datasets for advancing the research and development of GAN-based approaches. The review followed the PRISMA-ScR guidelines for study search and selection. The search was conducted on five popular scientific databases. The screening and selection of studies were performed by two independent reviewers, followed by validation by a third reviewer. Finally, the data were synthesized using a narrative approach. This review included 139 studies out of 789 search results. The most common use case of GANs was the synthesis of brain MRI images for data augmentation. GANs were also used to segment brain tumors and to translate healthy images to diseased images, or CT to MRI and vice versa. The included studies showed that GANs can enhance the performance of AI methods used on brain MRI imaging data. However, more effort is needed to translate GAN-based methods into clinical applications.
Affiliation(s)
- Hazrat Ali: College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Rafiul Biswas: College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Farida Ali: College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Uzair Shah: College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Asma Alamgir: College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Osama Mousa: College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Zubair Shah: College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
23
Xiong W, Yeung N, Wang S, Liao H, Wang L, Luo J. Breast Cancer Induced Bone Osteolysis Prediction Using Temporal Variational Autoencoders. BME Frontiers 2022; 2022:9763284. [PMID: 37850158 PMCID: PMC10521666 DOI: 10.34133/2022/9763284]
Abstract
Objective and Impact Statement: We adopt a deep learning model for predicting bone osteolysis on computed tomography (CT) images of murine breast cancer bone metastases. Given the bone CT scans at previous time steps, the model incorporates the bone-cancer interactions learned from the sequential images and generates future CT images. Its ability to predict the development of bone lesions in cancer-invaded bones can assist in assessing the risk of impending fractures and choosing proper treatments for breast cancer bone metastasis.
Introduction: Breast cancer often metastasizes to bone, causes osteolytic lesions, and results in skeletal-related events (SREs), including severe pain and even fatal fractures. Although current imaging techniques can detect macroscopic bone lesions, predicting the occurrence and progression of bone lesions remains a challenge.
Methods: We adopt a temporal variational autoencoder (T-VAE) model that combines variational autoencoders and long short-term memory networks to predict bone lesion emergence on our micro-CT dataset containing sequential images of murine tibiae. Given CT scans of murine tibiae at early weeks, our model can learn the distribution of their future states from data.
Results: We test our model against other deep learning-based prediction models on the bone lesion progression prediction task. Our model produces much more accurate predictions than existing models under various evaluation metrics.
Conclusion: We develop a deep learning framework that can accurately predict and visualize the progression of osteolytic bone lesions. It will assist in planning and evaluating treatment strategies to prevent SREs in breast cancer patients.
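The T-VAE described above couples a variational autoencoder with a long short-term memory network. The paper's architecture is not reproduced here, but the two standard VAE ingredients it builds on, the reparameterization trick and the closed-form KL regularizer, can be sketched in a few lines of numpy (function names are ours, for illustration only):

```python
import numpy as np

def reparameterize(mu, logvar, rng=None):
    """Sample z ~ N(mu, sigma^2) via z = mu + sigma * eps, keeping the
    sampling step differentiable with respect to (mu, logvar)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    eps = rng.standard_normal(np.shape(mu))
    return np.asarray(mu) + np.exp(0.5 * np.asarray(logvar)) * eps

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ), the VAE latent regularizer."""
    mu, logvar = np.asarray(mu), np.asarray(logvar)
    return float(-0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar)))
```

In a temporal VAE, a recurrent network would predict the next latent (mu, logvar) from the previous ones, and the decoder would render the predicted latent back into a CT volume.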
Affiliation(s)
- Wei Xiong: Department of Computer Science, University of Rochester, Rochester, USA
- Neil Yeung: Department of Computer Science, University of Rochester, Rochester, USA
- Shubo Wang: Department of Mechanical Engineering, University of Delaware, USA
- Liyun Wang: Department of Mechanical Engineering, University of Delaware, USA
- Jiebo Luo: Department of Computer Science, University of Rochester, Rochester, USA
24
Karandikar P, Massaad E, Hadzipasic M, Kiapour A, Joshi RS, Shankar GM, Shin JH. Machine Learning Applications of Surgical Imaging for the Diagnosis and Treatment of Spine Disorders: Current State of the Art. Neurosurgery 2022; 90:372-382. [PMID: 35107085 DOI: 10.1227/neu.0000000000001853]
Abstract
Recent developments in machine learning (ML) methods demonstrate unparalleled potential for application in the spine. The ability of ML to provide diagnostic support, produce novel insights from existing capabilities, and augment or accelerate elements of surgical planning and decision making at levels equivalent or superior to humans will tremendously benefit spine surgeons and patients alike. In this review, we aim to provide a clinically relevant outline of ML-based technology in the contexts of spinal deformity, degeneration, and trauma, as well as an overview of commercial and precommercial surgical assist systems and decision-support tools. Furthermore, we briefly discuss potential applications of generative networks before highlighting some of the limitations of ML applications. We conclude that ML in spine imaging represents a significant addition to the neurosurgeon's armamentarium: it has the capacity to directly address clinical needs and improve diagnostic and procedural quality and safety, but it is still subject to challenges that must be addressed before widespread implementation.
Affiliation(s)
- Paramesh Karandikar: Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA; T.H. Chan School of Medicine, University of Massachusetts, Worcester, Massachusetts, USA
- Elie Massaad: Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Muhamed Hadzipasic: Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Ali Kiapour: Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Rushikesh S Joshi: Department of Neurosurgery, University of Michigan, Ann Arbor, Michigan, USA
- Ganesh M Shankar: Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- John H Shin: Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
25
Zhou T, Vera P, Canu S, Ruan S. Missing Data Imputation via Conditional Generator and Correlation Learning for Multimodal Brain Tumor Segmentation. Pattern Recognit Lett 2022. [DOI: 10.1016/j.patrec.2022.04.019]
26
Generative Adversarial Networks in Brain Imaging: A Narrative Review. J Imaging 2022; 8:jimaging8040083. [PMID: 35448210 PMCID: PMC9028488 DOI: 10.3390/jimaging8040083]
Abstract
Artificial intelligence (AI) is expected to have a major effect on radiology, as it has demonstrated remarkable progress in many clinical tasks, mostly regarding the detection, segmentation, classification, monitoring, and prediction of diseases. Generative adversarial networks (GANs) have been proposed as one of the most exciting applications of deep learning in radiology. GANs are a deep learning approach that leverages adversarial training to tackle a wide array of computer vision challenges. Brain radiology was one of the first fields where GANs found application. Indeed, in neuroradiology GANs open unexplored scenarios, enabling new processes such as image-to-image and cross-modality synthesis, image reconstruction, image segmentation, image synthesis, data augmentation, disease progression modelling, and brain decoding. In this narrative review, we provide an introduction to GANs in brain imaging, discussing their clinical potential, future clinical applications, and pitfalls that radiologists should be aware of.
27
Festag S, Denzler J, Spreckelsen C. Generative Adversarial Networks for Biomedical Time Series Forecasting and Imputation: A systematic review. J Biomed Inform 2022; 129:104058. [DOI: 10.1016/j.jbi.2022.104058]
28
Li X, Jiang Y, Rodriguez-Andina JJ, Luo H, Yin S, Kaynak O. When medical images meet generative adversarial network: recent development and research opportunities. Discover Artificial Intelligence 2021. [DOI: 10.1007/s44163-021-00006-0]
Abstract
Deep learning techniques have promoted the rise of artificial intelligence (AI) and performed well in computer vision. Medical image analysis is an important application of deep learning, which is expected to greatly reduce the workload of doctors, contributing to more sustainable health systems. However, most current AI methods for medical image analysis are based on supervised learning, which requires a lot of annotated data. The number of medical images available is usually small and the acquisition of medical image annotations is an expensive process. Generative adversarial network (GAN), an unsupervised method that has become very popular in recent years, can simulate the distribution of real data and reconstruct approximate real data. GAN opens some exciting new ways for medical image generation, expanding the number of medical images available for deep learning methods. Generated data can solve the problem of insufficient data or imbalanced data categories. Adversarial training is another contribution of GAN to medical imaging that has been applied to many tasks, such as classification, segmentation, or detection. This paper investigates the research status of GAN in medical images and analyzes several GAN methods commonly applied in this area. The study addresses GAN application for both medical image synthesis and adversarial learning for other medical image tasks. The open challenges and future research directions are also discussed.
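The adversarial training this abstract refers to pits a generator against a discriminator via two coupled objectives. A minimal numpy sketch of the standard binary cross-entropy losses (the non-saturating generator variant) is shown below; this illustrates the general GAN principle, not any specific model from the surveyed literature, and the function names are ours:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Discriminator objective: maximize log D(x) + log(1 - D(G(z))),
    i.e. minimize the negative mean of both terms."""
    return float(-(np.log(d_real).mean() + np.log(1.0 - d_fake).mean()))

def generator_loss(d_fake):
    """Non-saturating generator objective: minimize -log D(G(z))."""
    return float(-np.log(d_fake).mean())

# A well-trained discriminator scores real samples near 1 and fakes near 0;
# in that regime its own loss is small while the generator's loss is large.
d_real = np.array([0.9, 0.8])   # discriminator outputs on real images
d_fake = np.array([0.1, 0.2])   # discriminator outputs on generated images
print(discriminator_loss(d_real, d_fake))
print(generator_loss(d_fake))
```

In practice the two losses are minimized alternately by gradient descent, which is what drives the generator toward the distribution of real data.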
29
Zhang Y, Cai H, Nie L, Xu P, Zhao S, Guan C. An end-to-end 3D convolutional neural network for decoding attentive mental state. Neural Netw 2021; 144:129-137. [PMID: 34492547 DOI: 10.1016/j.neunet.2021.08.019]
Abstract
The detection of attentive mental states plays an essential role in the neurofeedback process and the treatment of attention deficit hyperactivity disorder (ADHD). However, the performance of detection methods is still not satisfactory. One challenge is to find a proper representation of electroencephalogram (EEG) data that preserves temporal information while maintaining spatial topological characteristics. Inspired by deep learning (DL) methods in brain-computer interface (BCI) research, a 3D representation of the EEG signal was introduced into the attention detection task, and a 3D convolutional neural network model with cascade and parallel convolution operations was proposed. The model uses three cascade blocks, each consisting of two parallel 3D convolution branches, to extract multi-scale features simultaneously. Evaluated on a public dataset containing twenty-six subjects, the proposed model achieved better performance than the baseline methods under intra-subject, inter-subject, and subject-adaptive classification scenarios. This study demonstrates the promising potential of 3D CNN models for detecting attentive mental states.
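A practical detail behind such multi-branch 3D architectures is the shape arithmetic of volumetric convolutions: parallel branches with different kernel sizes must produce tensors of matching spatial shape before their features can be concatenated. The helper below (ours, not from the paper) makes the standard rule explicit:

```python
def conv3d_out_shape(in_shape, kernel, stride=1, padding=0):
    """Output spatial shape of a 3D convolution, per dimension:
    out = floor((in + 2*padding - kernel) / stride) + 1."""
    k = (kernel,) * 3 if isinstance(kernel, int) else tuple(kernel)
    return tuple((d + 2 * padding - kk) // stride + 1 for d, kk in zip(in_shape, k))

# Two parallel branches with different kernels keep matching outputs when
# each uses 'same'-style padding (padding = kernel // 2 for odd kernels, stride 1):
print(conv3d_out_shape((16, 32, 32), 3, padding=1))  # matches the input shape
print(conv3d_out_shape((16, 32, 32), 5, padding=2))  # matches the input shape
```

With shapes aligned this way, the branch outputs can be concatenated along the channel axis, which is the usual mechanism for multi-scale feature extraction.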
Affiliation(s)
- Yangsong Zhang: School of Computer Science and Technology, Southwest University of Science and Technology, Mianyang, China; Key Laboratory of Cognition and Personality, Ministry of Education, Chongqing, China
- Huan Cai: School of Computer Science and Technology, Southwest University of Science and Technology, Mianyang, China
- Li Nie: School of Computer Science and Technology, Southwest University of Science and Technology, Mianyang, China
- Peng Xu: MOE Key Lab for Neuroinformation, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Sirui Zhao: School of Computer Science and Technology, Southwest University of Science and Technology, Mianyang, China
- Cuntai Guan: School of Computer Science and Engineering, Nanyang Technological University (NTU), Singapore
30
Grassucci E, Comminiello D, Uncini A. An Information-Theoretic Perspective on Proper Quaternion Variational Autoencoders. Entropy (Basel) 2021; 23:856. [PMID: 34356397 PMCID: PMC8305877 DOI: 10.3390/e23070856]
Abstract
Variational autoencoders are deep generative models that have recently received a great deal of attention due to their ability to model the latent distribution of any kind of input, such as images and audio signals, among others. A novel variational autoencoder in the quaternion domain H, namely the QVAE, has recently been proposed, leveraging the augmented second-order statistics of H-proper signals. In this paper, we analyze the QVAE from an information-theoretic perspective, studying the ability of the H-proper model to approximate improper distributions as well as the built-in H-proper ones, and the loss of entropy due to the improperness of the input signal. We conduct experiments on a substantial set of quaternion signals, for each of which the QVAE shows the ability to model the input distribution while learning the improperness and increasing the entropy of the latent space. The proposed analysis shows that proper QVAEs can be employed as a good approximation even when the quaternion input data are improper.
Affiliation(s)
- Eleonora Grassucci: Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy; (D.C.); (A.U.)
31
Chlap P, Min H, Vandenberg N, Dowling J, Holloway L, Haworth A. A review of medical image data augmentation techniques for deep learning applications. J Med Imaging Radiat Oncol 2021; 65:545-563. [PMID: 34145766 DOI: 10.1111/1754-9485.13261]
Abstract
Research in artificial intelligence for radiology and radiotherapy has recently become increasingly reliant on deep learning-based algorithms. While the models these algorithms produce can significantly outperform more traditional machine learning methods, they rely on larger datasets being available for training. To address this issue, data augmentation has become a popular method for increasing the size of a training dataset, particularly in fields where large datasets are not typically available, as is often the case when working with medical images. Data augmentation aims to generate additional data for training the model and has been shown to improve performance when validated on a separate unseen dataset. Because this approach has become commonplace, we conducted a systematic review of the literature in which data augmentation was used on medical images (limited to CT and MRI) to train a deep learning model, to help readers understand the types of data augmentation techniques used in state-of-the-art deep learning models. Articles were categorised into basic, deformable, deep learning, or other data augmentation techniques. As artificial intelligence models trained using augmented data make their way into the clinic, this review aims to give insight into these techniques and confidence in the validity of the models produced.
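The "basic" augmentation category the review describes covers label-preserving transforms such as flips and rotations. A minimal numpy sketch for a 2D slice is shown below; this is our illustration of the general idea, not code from the review:

```python
import numpy as np

def augment(img, rng):
    """Apply a random horizontal flip and a random 90-degree rotation.
    Both transforms preserve image content, so a classification label
    (and, with the same transform applied, a segmentation mask)
    remains valid for the augmented sample."""
    if rng.random() < 0.5:
        img = np.flip(img, axis=1)
    return np.rot90(img, k=int(rng.integers(0, 4)))

rng = np.random.default_rng(42)
slice2d = np.arange(16.0).reshape(4, 4)            # stand-in for a CT/MRI slice
batch = [augment(slice2d, rng) for _ in range(8)]  # eight augmented variants
```

Deformable and deep learning-based augmentations (elastic deformations, GAN-synthesized images) follow the same principle of enlarging the training distribution, but require considerably more machinery than this sketch.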
Affiliation(s)
- Phillip Chlap: South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia; Liverpool and Macarthur Cancer Therapy Centre, Liverpool Hospital, Sydney, New South Wales, Australia
- Hang Min: South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia; The Australian e-Health and Research Centre, CSIRO Health and Biosecurity, Brisbane, Queensland, Australia
- Nym Vandenberg: Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia
- Jason Dowling: South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; The Australian e-Health and Research Centre, CSIRO Health and Biosecurity, Brisbane, Queensland, Australia
- Lois Holloway: South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia; Liverpool and Macarthur Cancer Therapy Centre, Liverpool Hospital, Sydney, New South Wales, Australia; Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia; Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales, Australia
- Annette Haworth: Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia
32
33
Dou L, Yang F, Xu L, Zou Q. A comprehensive review of the imbalance classification of protein post-translational modifications. Brief Bioinform 2021; 22:6217722. [PMID: 33834199 DOI: 10.1093/bib/bbab089]
Abstract
Post-translational modifications (PTMs) play significant roles in regulating protein structure, activity, and function, and they are closely involved in various pathologies. Therefore, the identification of associated PTMs is the foundation of in-depth research on related biological mechanisms, disease treatments, and drug design. Because of the high cost and time consumption of high-throughput sequencing techniques, developing machine learning-based predictors has been considered an effective approach to rapidly recognize potentially modified sites. However, the imbalanced distribution of true and false PTM sites, namely the data imbalance problem, largely affects the reliability and applicability of prediction tools. In this article, we conduct a systematic survey of research progress in imbalanced PTM classification. First, we describe the modeling process in detail and outline useful data imbalance solutions. Then, we summarize recently proposed bioinformatics tools based on imbalanced PTM data and build a convenient website, ImClassi_PTMs (available at lab.malab.cn/∼dlj/ImbClassi_PTMs/), for researchers to browse. Moreover, we analyze the challenges of current computational predictors and propose some suggestions to improve the efficiency of imbalance learning. We hope that this work will provide comprehensive knowledge of imbalanced PTM recognition and contribute to advanced predictors in the future.
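Among the data-level imbalance solutions such surveys outline, random oversampling of the minority class is the simplest. The numpy sketch below is our illustration of that baseline, not the article's tool, and the function name is ours:

```python
import numpy as np

def oversample(X, y, seed=0):
    """Resample every class (with replacement) up to the majority-class
    count, yielding a balanced (X, y) for training a classifier."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = np.concatenate([
        rng.choice(np.where(y == c)[0], size=n_max, replace=True)
        for c in classes
    ])
    return X[idx], y[idx]

# Six non-modified sites vs two modified sites -> six of each after resampling.
X = np.arange(8).reshape(8, 1)
y = np.array([0, 0, 0, 0, 0, 0, 1, 1])
Xb, yb = oversample(X, y)
```

More refined alternatives (SMOTE-style synthetic minority samples, class-weighted losses, undersampling) trade off between duplicating minority examples and discarding majority ones.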
Affiliation(s)
- Lijun Dou: University of Electronic Science and Technology of China and the Shenzhen Polytechnic, China
- Fenglong Yang: University of Electronic Science and Technology of China and the Shenzhen Polytechnic, China
- Lei Xu: School of Electronic and Communication Engineering, Shenzhen Polytechnic, China
- Quan Zou: Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu, China