1
Lu J, Yan T, Yang L, Zhang X, Li J, Li D, Xiang J, Wang B. Brain fingerprinting and cognitive behavior predicting using functional connectome of high inter-subject variability. Neuroimage 2024; 295:120651. [PMID: 38788914 DOI: 10.1016/j.neuroimage.2024.120651] [Received: 03/19/2024] [Revised: 05/16/2024] [Accepted: 05/22/2024] [Indexed: 05/26/2024]
Abstract
The functional connectivity (FC) graph of the brain has been widely recognized as a "fingerprint" that can be used to identify individuals within a group of subjects. Research has indicated that individual identification accuracy can be improved by eliminating the impact of information shared among individuals. However, current methods extract not only inter-subject shared information but also individual-specific information from FC graphs, so shared and fingerprint information are incompletely separated; this lowers individual identification accuracy across functional magnetic resonance imaging (fMRI) state session pairs and degrades cognitive behavior prediction. In this paper, we propose a method that enhances inter-subject variability by combining a conditional variational autoencoder (CVAE) network with a sparse dictionary learning (SDL) module. By embedding fMRI state information in the encoding and decoding processes, the CVAE network can better capture and represent the features common across individuals, and the residual then enhances inter-subject variability. Our experimental results on Human Connectome Project (HCP) data show that the refined connectomes obtained using CVAE with SDL can accurately distinguish an individual from the remaining participants. Identification accuracy reached 99.7% and 99.6% for the session pair rest1-rest2 and the reverse rest2-rest1, respectively. In the identification experiments on task-task combinations acquired on the same day, accuracy ranged from 94.2% to 98.8%. Furthermore, we show that the Frontoparietal and Default networks make the most significant contributions to individual identification, and that the edges contributing most are found within and between these two networks.
Additionally, high-level cognitive behaviors are also better predicted with the refined connectomes, suggesting that stronger fingerprinting can yield stronger behavioral associations. In summary, our proposed framework provides a promising approach to using functional connectivity networks to study cognition and behavior, promoting a deeper understanding of brain function.
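To make the identification protocol concrete, here is a minimal sketch (not the authors' CVAE/SDL pipeline) of correlation-based connectome fingerprinting on synthetic data: each subject's session-2 FC vector is matched to the most correlated session-1 vector, and identification accuracy is the fraction of correct matches. All sizes and noise levels below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_edges = 20, 300

# Each subject has a stable "fingerprint" component shared across sessions,
# plus independent session noise.
fingerprints = rng.standard_normal((n_subjects, n_edges))
session1 = fingerprints + 0.3 * rng.standard_normal((n_subjects, n_edges))
session2 = fingerprints + 0.3 * rng.standard_normal((n_subjects, n_edges))

def identify(db, queries):
    """Match each query connectome to the most correlated database entry."""
    db_z = (db - db.mean(1, keepdims=True)) / db.std(1, keepdims=True)
    q_z = (queries - queries.mean(1, keepdims=True)) / queries.std(1, keepdims=True)
    corr = q_z @ db_z.T / db.shape[1]   # pairwise Pearson correlations
    return corr.argmax(axis=1)          # predicted identity per query

predicted = identify(session1, session2)
accuracy = (predicted == np.arange(n_subjects)).mean()
print(f"identification accuracy: {accuracy:.2f}")
```

With a fingerprint component much larger than the session noise, within-subject correlations dominate and every query is matched correctly; methods like the paper's aim to strengthen exactly this separation on real data.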
Affiliation(s)
- Jiayu Lu
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, 030024, China
- Tianyi Yan
- School of Life Science, Beijing Institute of Technology, 100081, China
- Lan Yang
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, 030024, China
- Xi Zhang
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, 030024, China
- Jiaxin Li
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, 030024, China
- Dandan Li
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, 030024, China
- Jie Xiang
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, 030024, China
- Bin Wang
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, 030024, China
2
Csore J, Karmonik C, Wilhoit K, Buckner L, Roy TL. Automatic Classification of Magnetic Resonance Histology of Peripheral Arterial Chronic Total Occlusions Using a Variational Autoencoder: A Feasibility Study. Diagnostics (Basel) 2023; 13:1925. [PMID: 37296778 DOI: 10.3390/diagnostics13111925] [Received: 04/10/2023] [Revised: 05/18/2023] [Accepted: 05/22/2023] [Indexed: 06/12/2023]
Abstract
The novel approach of our study consists of adapting and evaluating a custom-made variational autoencoder (VAE) using two-dimensional (2D) convolutional neural networks (CNNs) on magnetic resonance imaging (MRI) images to differentiate soft vs. hard plaque components in peripheral arterial disease (PAD). Five amputated lower extremities were imaged on a clinical ultra-high-field 7 Tesla MRI scanner. Ultrashort echo time (UTE), T1-weighted (T1w) and T2-weighted (T2w) datasets were acquired. Multiplanar reconstruction (MPR) images were obtained from one lesion per limb. Images were aligned to each other and pseudo-color red-green-blue images were created. Four areas in latent space were defined corresponding to the sorted images reconstructed by the VAE. Images were classified by their position in latent space and scored using a tissue score (TS) as follows: (1) patent lumen, TS: 0; (2) partially patent, TS: 1; (3) mostly occluded with soft tissue, TS: 3; (4) mostly occluded with hard tissue, TS: 5. The average tissue score per lesion was calculated as the sum of the tissue scores of all images divided by the total number of images, together with the relative percentage of each class. In total, 2390 MPR reconstructed images were included in the analysis. The class composition varied from fully patent (lesion #1) to the presence of all four classes. Lesions #2, #3 and #5 contained all tissue classes except "mostly occluded with hard tissue", while lesion #4 contained all four (ranges (I): 0.2-100%, (II): 46.3-75.9%, (III): 18-33.5%, (IV): 20%). Training the VAE was successful, as images with soft/hard tissues in PAD lesions were satisfactorily separated in latent space. Using a VAE may assist in the rapid classification of MRI histology images acquired in a clinical setup, facilitating endovascular procedures.
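The latent-space scoring step can be sketched as follows; the 2D latent codes, region centroids, and image counts below are hypothetical stand-ins for the VAE's actual latent space, used only to show how images map to tissue scores (TS) and how a per-lesion average is formed.

```python
import numpy as np

# Hypothetical 2D latent codes for reconstructed MPR images (one row per image)
latent = np.array([[0.1, 0.2], [2.1, 1.9], [4.0, 0.1], [3.9, 4.1], [0.2, 0.1]])

# Four latent-space regions, each mapped to a tissue score (TS):
# patent lumen (0), partially patent (1), soft occlusion (3), hard occlusion (5)
centroids = np.array([[0.0, 0.0], [2.0, 2.0], [4.0, 0.0], [4.0, 4.0]])
scores = np.array([0, 1, 3, 5])

# Assign each image to the nearest region centroid
dists = np.linalg.norm(latent[:, None, :] - centroids[None, :, :], axis=2)
region = dists.argmin(axis=1)
tissue_scores = scores[region]

# Per-lesion average: sum of tissue scores divided by number of images
avg_ts = tissue_scores.sum() / len(tissue_scores)
print(region, avg_ts)
```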
Affiliation(s)
- Judit Csore
- DeBakey Heart and Vascular Center, Houston Methodist Hospital, 6565 Fannin Street, Houston, TX 77030, USA
- Heart and Vascular Center, Semmelweis University, 68 Városmajor Street, 1122 Budapest, Hungary
- Christof Karmonik
- MRI Core, Translational Imaging Center, Houston Methodist Research Institute, 6670 Bertner Avenue, Houston, TX 77030, USA
- Kayla Wilhoit
- MRI Core, Translational Imaging Center, Houston Methodist Research Institute, 6670 Bertner Avenue, Houston, TX 77030, USA
- Lily Buckner
- MRI Core, Translational Imaging Center, Houston Methodist Research Institute, 6670 Bertner Avenue, Houston, TX 77030, USA
- Trisha L Roy
- DeBakey Heart and Vascular Center, Houston Methodist Hospital, 6565 Fannin Street, Houston, TX 77030, USA
3
Celard P, Iglesias EL, Sorribes-Fdez JM, Romero R, Vieira AS, Borrajo L. A survey on deep learning applied to medical images: from simple artificial neural networks to generative models. Neural Comput Appl 2023; 35:2291-2323. [PMID: 36373133 PMCID: PMC9638354 DOI: 10.1007/s00521-022-07953-4] [Received: 05/23/2022] [Accepted: 10/12/2022] [Indexed: 11/06/2022]
Abstract
Deep learning techniques, in particular generative models, have taken on great importance in medical image analysis. This paper surveys fundamental deep learning concepts related to medical image generation. It provides concise overviews of studies that apply some of the latest state-of-the-art models from recent years to medical images of different injured or diseased body areas and organs (e.g., brain tumors and COVID-19 lung pneumonia). The motivation for this study is to offer a comprehensive overview of artificial neural networks (NNs) and deep generative models in medical imaging, so that more groups and authors unfamiliar with deep learning consider its use in medical work. We review the use of generative models, such as generative adversarial networks and variational autoencoders, as techniques to achieve semantic segmentation, data augmentation, and better classification algorithms, among other purposes. In addition, a collection of widely used public medical datasets containing magnetic resonance (MR) images, computed tomography (CT) scans, and common pictures is presented. Finally, we feature a summary of the current state of generative models in medical imaging, including key features, current challenges, and future research paths.
Affiliation(s)
- P. Celard
- Computer Science Department, Universidade de Vigo, Escuela Superior de Ingeniería Informática, Campus Universitario As Lagoas, 32004 Ourense, Spain
- CINBIO - Biomedical Research Centre, Universidade de Vigo, Campus Universitario Lagoas-Marcosende, 36310 Vigo, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, Vigo, Spain
- E. L. Iglesias
- Computer Science Department, Universidade de Vigo, Escuela Superior de Ingeniería Informática, Campus Universitario As Lagoas, 32004 Ourense, Spain
- CINBIO - Biomedical Research Centre, Universidade de Vigo, Campus Universitario Lagoas-Marcosende, 36310 Vigo, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, Vigo, Spain
- J. M. Sorribes-Fdez
- Computer Science Department, Universidade de Vigo, Escuela Superior de Ingeniería Informática, Campus Universitario As Lagoas, 32004 Ourense, Spain
- CINBIO - Biomedical Research Centre, Universidade de Vigo, Campus Universitario Lagoas-Marcosende, 36310 Vigo, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, Vigo, Spain
- R. Romero
- Computer Science Department, Universidade de Vigo, Escuela Superior de Ingeniería Informática, Campus Universitario As Lagoas, 32004 Ourense, Spain
- CINBIO - Biomedical Research Centre, Universidade de Vigo, Campus Universitario Lagoas-Marcosende, 36310 Vigo, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, Vigo, Spain
- A. Seara Vieira
- Computer Science Department, Universidade de Vigo, Escuela Superior de Ingeniería Informática, Campus Universitario As Lagoas, 32004 Ourense, Spain
- CINBIO - Biomedical Research Centre, Universidade de Vigo, Campus Universitario Lagoas-Marcosende, 36310 Vigo, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, Vigo, Spain
- L. Borrajo
- Computer Science Department, Universidade de Vigo, Escuela Superior de Ingeniería Informática, Campus Universitario As Lagoas, 32004 Ourense, Spain
- CINBIO - Biomedical Research Centre, Universidade de Vigo, Campus Universitario Lagoas-Marcosende, 36310 Vigo, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, Vigo, Spain
4
Allen C, Aryal S, Do T, Gautum R, Hasan MM, Jasthi BK, Gnimpieba E, Gadhamshetty V. Deep learning strategies for addressing issues with small datasets in 2D materials research: Microbial Corrosion. Front Microbiol 2022; 13:1059123. [PMID: 36620046 PMCID: PMC9815019 DOI: 10.3389/fmicb.2022.1059123] [Received: 09/30/2022] [Accepted: 12/02/2022] [Indexed: 12/24/2022]
Abstract
Protective coatings based on two-dimensional materials such as graphene have gained traction for diverse applications. Their impermeability, inertness, excellent bonding with metals, and amenability to functionalization render them promising coatings against both abiotic and microbiologically influenced corrosion (MIC). Owing to the success of graphene coatings, the whole family of 2D materials, including hexagonal boron nitride and molybdenum disulphide, is being screened to obtain other promising coatings. AI-based data-driven models can accelerate virtual screening of 2D coatings with desirable physical and chemical properties. However, the lack of large experimental datasets renders training of classifiers difficult and often results in over-fitting. Generating large datasets for MIC resistance of 2D coatings is both complex and laborious. Deep learning data augmentation methods can alleviate this issue by generating synthetic electrochemical data that resemble the training data classes. Here, we investigated two different deep generative models, namely the variational autoencoder (VAE) and the generative adversarial network (GAN), for generating synthetic data to expand small experimental datasets. Our model experimental system comprised few-layer graphene over copper surfaces. The synthetic data generated using the GAN yielded greater neural network performance (83-85% accuracy) than the VAE-generated synthetic data (78-80% accuracy). However, VAE data performed better (90% accuracy) than GAN data (84-85% accuracy) when using XGBoost. Finally, we show that synthetic data based on VAE and GAN models can drive machine learning models for developing MIC-resistant 2D coatings.
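A minimal sketch of the augmentation idea, with a per-class Gaussian fit standing in for the trained VAE/GAN generator (the real models learn far richer distributions); the feature dimensions and sample counts below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Small "experimental" dataset: feature vectors for one electrochemical class
real = rng.normal(loc=2.0, scale=0.5, size=(15, 4))

# Stand-in generator: fit a Gaussian to the real class and sample from it.
# (A trained VAE decodes samples drawn from its latent prior instead; this
# mimics only the "synthetic data resembling the training class" step.)
mu, sigma = real.mean(axis=0), real.std(axis=0)
synthetic = rng.normal(mu, sigma, size=(50, 4))

# Expanded training set = real + synthetic samples
augmented = np.vstack([real, synthetic])
print(augmented.shape)
```

A downstream classifier (the paper compares a neural network and XGBoost) would then be trained on `augmented` and evaluated on held-out real data.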
Affiliation(s)
- Cody Allen
- Department of Civil and Environmental Engineering, South Dakota Mines, Rapid City, SD, United States
- Two-Dimensional Materials for Biofilm Engineering Science and Technology (2DBEST) Center, South Dakota Mines, Rapid City, SD, United States
- Data-Driven Materials Discovery Center, South Dakota Mines, Rapid City, SD, United States
- Shiva Aryal
- Department of Biomedical Engineering, University of South Dakota, Sioux Falls, SD, United States
- Tuyen Do
- Department of Biomedical Engineering, University of South Dakota, Sioux Falls, SD, United States
- Rishav Gautum
- Department of Biomedical Engineering, University of South Dakota, Sioux Falls, SD, United States
- Md Mahmudul Hasan
- Department of Civil and Environmental Engineering, South Dakota Mines, Rapid City, SD, United States
- Two-Dimensional Materials for Biofilm Engineering Science and Technology (2DBEST) Center, South Dakota Mines, Rapid City, SD, United States
- Data-Driven Materials Discovery Center, South Dakota Mines, Rapid City, SD, United States
- Bharat K. Jasthi
- Two-Dimensional Materials for Biofilm Engineering Science and Technology (2DBEST) Center, South Dakota Mines, Rapid City, SD, United States
- Data-Driven Materials Discovery Center, South Dakota Mines, Rapid City, SD, United States
- Department of Materials and Metallurgical Engineering, South Dakota Mines, Rapid City, SD, United States
- Etienne Gnimpieba
- Data-Driven Materials Discovery Center, South Dakota Mines, Rapid City, SD, United States
- Department of Biomedical Engineering, University of South Dakota, Sioux Falls, SD, United States
- Venkataramana Gadhamshetty
- Department of Civil and Environmental Engineering, South Dakota Mines, Rapid City, SD, United States
- Two-Dimensional Materials for Biofilm Engineering Science and Technology (2DBEST) Center, South Dakota Mines, Rapid City, SD, United States
- Data-Driven Materials Discovery Center, South Dakota Mines, Rapid City, SD, United States
- Correspondence: Venkataramana Gadhamshetty
5
Wang Y, Tiusaba L, Jacobs S, Saruwatari M, Ning B, Levitt M, Sandler AD, Nam SH, Kang JU, Cha J. Unsupervised and quantitative intestinal ischemia detection using conditional adversarial network in multimodal optical imaging. J Med Imaging (Bellingham) 2022; 9:064502. [PMID: 36466077 PMCID: PMC9704416 DOI: 10.1117/1.jmi.9.6.064502] [Received: 05/24/2022] [Accepted: 11/03/2022] [Indexed: 11/30/2023]
Abstract
Purpose: Intraoperative evaluation of bowel perfusion currently depends on subjective assessment; quantitative and objective methods for judging bowel viability in intestinal anastomosis are scarce. To address this clinical need, a conditional adversarial network is used to analyze data from laser speckle contrast imaging (LSCI) paired with a visible-light camera to identify regions of abnormal tissue perfusion. Approach: Our vision platform was based on a dual-modality bench-top imaging system with red-green-blue (RGB) and dye-free LSCI channels. Swine model studies were conducted to collect data on bowel mesenteric vascular structures with normal/abnormal microvascular perfusion to construct the control and experimental groups. Subsequently, a deep-learning model based on a conditional generative adversarial network (cGAN) was used to perform dual-modality image alignment and to learn the distribution of the normal datasets during training. Thereafter, abnormal datasets were fed into the predictive model for testing. Ischemic bowel regions could be detected by monitoring erroneous reconstruction from the latent space. The main advantage is that the approach is unsupervised and does not require subjective manual annotations. Compared with the conventional qualitative LSCI technique, it provides well-defined segmentation results for different levels of ischemia. Results: We demonstrated that our model could accurately segment ischemic intestine images, with a Dice coefficient and accuracy of 90.77% and 93.06%, respectively, on 2560 RGB/LSCI image pairs. The ground truth was labeled by multiple independent estimations, combining the surgeons' annotations with fastest gradient descent in suspicious areas of the vascular images. The total processing time was 0.05 s for an image size of 256 × 256. Conclusions: The proposed cGAN can provide pixel-wise and dye-free quantitative analysis of intestinal perfusion, an ideal supplement to the traditional LSCI technique.
It has the potential to help surgeons increase the accuracy of intraoperative diagnosis and improve clinical outcomes of mesenteric ischemia and other gastrointestinal surgeries.
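The core detection principle — train a reconstruction model on normal perfusion data only, then flag regions the model reconstructs poorly — can be sketched with PCA standing in for the cGAN (a deliberate simplification; the paper's model is adversarial and operates on aligned RGB/LSCI image pairs). All dimensions and noise levels here are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# "Normal perfusion" training patches: features lie near a low-dim subspace
basis = rng.standard_normal((3, 64))
normal = rng.standard_normal((200, 3)) @ basis + 0.05 * rng.standard_normal((200, 64))

# Fit a linear latent model (PCA) on normal data only -- a stand-in for the
# paper's adversarially trained reconstruction model.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:3]                      # learned "latent space"

def recon_error(x):
    z = (x - mean) @ components.T        # encode into latent space
    xr = z @ components + mean           # decode back
    return np.linalg.norm(x - xr, axis=1)

# Ischemic patches fall off the normal subspace -> large reconstruction error
ischemic = rng.standard_normal((20, 64)) * 2.0
threshold = recon_error(normal).max()
flagged = recon_error(ischemic) > threshold
print(f"{flagged.mean():.0%} of ischemic patches flagged")
```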
Affiliation(s)
- Yaning Wang
- Johns Hopkins University, Department of Electrical and Computer Engineering, Baltimore, Maryland, United States
- Laura Tiusaba
- Children’s National Hospital, Division of Colorectal and Pelvic Reconstruction, Washington, District of Columbia, United States
- Shimon Jacobs
- Children’s National Hospital, Division of Colorectal and Pelvic Reconstruction, Washington, District of Columbia, United States
- Michele Saruwatari
- Children’s National Hospital, Sheikh Zayed Surgical Institute, Washington, District of Columbia, United States
- Bo Ning
- Children’s National Hospital, Sheikh Zayed Surgical Institute, Washington, District of Columbia, United States
- Marc Levitt
- Children’s National Hospital, Division of Colorectal and Pelvic Reconstruction, Washington, District of Columbia, United States
- Anthony D. Sandler
- Children’s National Hospital, Sheikh Zayed Surgical Institute, Washington, District of Columbia, United States
- So-Hyun Nam
- Dong-A University Medical Center, Department of Surgery, Busan, Republic of Korea
- Jin U. Kang
- Johns Hopkins University, Department of Electrical and Computer Engineering, Baltimore, Maryland, United States
- Jaepyeong Cha
- Children’s National Hospital, Sheikh Zayed Surgical Institute, Washington, District of Columbia, United States
- George Washington University School of Medicine and Health Sciences, Department of Pediatrics, Washington, District of Columbia, United States
6
Schena FP, Magistroni R, Narducci F, Abbrescia DI, Anelli VW, Di Noia T. Artificial intelligence in glomerular diseases. Pediatr Nephrol 2022; 37:2533-2545. [PMID: 35266037 DOI: 10.1007/s00467-021-05419-8] [Received: 11/03/2021] [Revised: 12/19/2021] [Accepted: 12/21/2021] [Indexed: 11/30/2022]
Abstract
In this narrative review, we focus on the application of artificial intelligence to the clinical history of patients with glomerular disease, digital pathology of kidney biopsies, renal ultrasonography imaging, and the prediction of chronic kidney disease (CKD). With the development of natural language processing, the clinical history of a patient can be used to identify a computable phenotype. In kidney pathology, digital imaging has adopted innovative deep learning algorithms (DLAs) that can improve the predictive capability for the examined lesions. However, at this time, these applications can only be used in research because there is no recognized validation to replace conventional diagnostic applications. Kidney ultrasonography, used in the clinical examination of patients, provides information about the progression of kidney damage. Machine learning algorithms (MLAs) with promising results for the early detection of CKD have been proposed, but they are not yet solid enough to be incorporated into clinical practice. A few MLA-based tools for glomerulonephritis are available in clinical practice. They can be downloaded onto computers and mobile phones but can only be applied to uniracial cohorts of patients. To improve their performance, it is necessary to organize large consortia with multiracial cohorts. Finally, in many studies MLA development has been carried out using retrospective cohorts. The performance of the models might differ in retrospective cohorts compared to real-world data. Therefore, the models should be validated in large prospective external cohorts.
Affiliation(s)
- Francesco P Schena
- Department of Emergency and Organ Transplantation, University of Bari, Bari, Italy.
- Fedelucio Narducci
- Department of Electrical and Information Engineering, Polytechnic of Bari, Bari, Italy
- Vito W Anelli
- Department of Electrical and Information Engineering, Polytechnic of Bari, Bari, Italy
- Tommaso Di Noia
- Department of Electrical and Information Engineering, Polytechnic of Bari, Bari, Italy
7
Couckuyt A, Seurinck R, Emmaneel A, Quintelier K, Novak D, Van Gassen S, Saeys Y. Challenges in translational machine learning. Hum Genet 2022; 141:1451-1466. [PMID: 35246744 PMCID: PMC8896412 DOI: 10.1007/s00439-022-02439-8] [Received: 05/09/2021] [Accepted: 02/08/2022] [Indexed: 11/25/2022]
Abstract
Machine learning (ML) algorithms are increasingly being used to help implement clinical decision support systems. In this new field, which we define as "translational machine learning", joint efforts and strong communication between data scientists and clinicians help to span the gap between ML and its adoption in the clinic. These collaborations also improve interpretability and trust in translational ML methods and ultimately aim to result in generalizable and reproducible models. To help clinicians and bioinformaticians refine their translational ML pipelines, we review the steps from model building to the use of ML in the clinic. We discuss experimental setup, computational analysis, interpretability and reproducibility, and emphasize the challenges involved. We strongly advise collaboration and data sharing between consortia and institutes to build multi-centric cohorts that facilitate ML methodologies which generalize across centers. We hope that this review provides a way to streamline translational ML and helps to tackle the challenges that come with it.
Affiliation(s)
- Artuur Couckuyt
- Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Gent, Belgium
- Data Mining and Modeling for Biomedicine, VIB-UGent Center for Inflammation Research, Gent, Belgium
- Ruth Seurinck
- Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Gent, Belgium
- Data Mining and Modeling for Biomedicine, VIB-UGent Center for Inflammation Research, Gent, Belgium
- Annelies Emmaneel
- Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Gent, Belgium
- Data Mining and Modeling for Biomedicine, VIB-UGent Center for Inflammation Research, Gent, Belgium
- Katrien Quintelier
- Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Gent, Belgium
- Data Mining and Modeling for Biomedicine, VIB-UGent Center for Inflammation Research, Gent, Belgium
- Department of Pulmonary Diseases, Erasmus MC, Rotterdam, The Netherlands
- David Novak
- Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Gent, Belgium
- Data Mining and Modeling for Biomedicine, VIB-UGent Center for Inflammation Research, Gent, Belgium
- Sofie Van Gassen
- Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Gent, Belgium
- Data Mining and Modeling for Biomedicine, VIB-UGent Center for Inflammation Research, Gent, Belgium
- Yvan Saeys
- Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Gent, Belgium
- Data Mining and Modeling for Biomedicine, VIB-UGent Center for Inflammation Research, Gent, Belgium
8
Li H, Song Q, Gui D, Wang M, Min X, Li A. Reconstruction-assisted Feature Encoding Network for Histologic Subtype Classification of Non-small Cell Lung Cancer. IEEE J Biomed Health Inform 2022; 26:4563-4574. [PMID: 35849680 DOI: 10.1109/jbhi.2022.3192010] [Indexed: 12/24/2022]
Abstract
Accurate histological subtype classification between adenocarcinoma (ADC) and squamous cell carcinoma (SCC) using computed tomography (CT) images is of great importance in helping clinicians determine treatment and therapy plans for non-small cell lung cancer (NSCLC) patients. Although current deep learning approaches have achieved promising progress in this field, they often struggle to capture efficient tumor representations due to inadequate training data and consequently show limited performance. In this study, we propose a novel and effective reconstruction-assisted feature encoding network (RAFENet) for histological subtype classification, which leverages an auxiliary image reconstruction task to provide extra guidance and regularization for enhanced tumor feature representations. Unlike existing reconstruction-assisted methods that directly use the generalizable features obtained from a shared encoder for the primary task, RAFENet employs a dedicated task-aware encoding module to refine the generalizable features. Specifically, a cascade of cross-level non-local blocks is introduced to progressively refine generalizable features at different levels with the aid of lower-level task-specific information, which can successfully learn multi-level task-specific features tailored to histological subtype classification. Moreover, in addition to the widely adopted pixel-wise reconstruction loss, we introduce a powerful semantic consistency loss function to explicitly supervise the training of RAFENet, combining a feature consistency loss and a prediction consistency loss to ensure semantic invariance during image reconstruction. Extensive experimental results show that RAFENet effectively addresses difficult issues that existing reconstruction-based methods cannot resolve and consistently outperforms other state-of-the-art methods on both public and in-house NSCLC datasets.
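A toy numerical sketch of the combined training objective described above: a pixel-wise reconstruction loss plus a semantic consistency term built from feature consistency and prediction consistency. All tensors and values here are invented stand-ins; RAFENet's actual losses operate on CNN feature maps and classifier outputs.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented stand-ins for one training example
image = rng.standard_normal((32, 32))
reconstruction = image + 0.1 * rng.standard_normal((32, 32))
feat_orig = rng.standard_normal(128)                       # encoder features of the image
feat_recon = feat_orig + 0.05 * rng.standard_normal(128)   # features of its reconstruction
pred_orig = np.array([0.7, 0.3])                           # class probabilities (ADC vs. SCC)
pred_recon = np.array([0.65, 0.35])                        # probabilities on the reconstruction

l_pix = np.mean((image - reconstruction) ** 2)   # pixel-wise reconstruction loss
l_feat = np.mean((feat_orig - feat_recon) ** 2)  # feature consistency
l_pred = np.mean((pred_orig - pred_recon) ** 2)  # prediction consistency
total = l_pix + l_feat + l_pred                  # semantic consistency adds the last two terms
print(float(total))
```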
9
Chronic Lymphocytic Leukemia Progression Diagnosis with Intrinsic Cellular Patterns via Unsupervised Clustering. Cancers (Basel) 2022; 14:2398. [PMID: 35626003 PMCID: PMC9139505 DOI: 10.3390/cancers14102398] [Received: 03/22/2022] [Revised: 04/21/2022] [Accepted: 04/25/2022] [Indexed: 12/12/2022]
Simple Summary
Distinguishing between chronic lymphocytic leukemia (CLL), accelerated CLL (aCLL), and full-blown transformation to diffuse large B-cell lymphoma (Richter transformation; RT) has significant clinical implications. Identifying cellular phenotypes via unsupervised clustering provides the most robust analytic performance in analyzing digitized pathology slides. This study serves as a proof of concept that using an unsupervised machine learning scheme can enhance diagnostic accuracy.
Abstract
Identifying the progression of chronic lymphocytic leukemia (CLL) to accelerated CLL (aCLL) or transformation to diffuse large B-cell lymphoma (Richter transformation; RT) has significant clinical implications as it prompts a major change in patient management. However, the differentiation between these disease phases may be challenging in routine practice. Unsupervised learning has gained increased attention because of its substantial potential for discovering intrinsic patterns in data. Here, we demonstrate that cellular feature engineering, identifying cellular phenotypes via unsupervised clustering, provides the most robust analytic performance in analyzing digitized pathology slides (accuracy = 0.925, AUC = 0.978) when compared to alternative approaches, such as mixed features, supervised features, unsupervised/mixed/supervised feature fusion and selection, as well as patch-based convolutional neural network (CNN) feature extraction. We further validate the reproducibility and robustness of unsupervised feature extraction via stability and repeated splitting analysis, supporting its utility as a diagnostic aid in identifying CLL patients with histologic evidence of disease progression. The outcome of this study serves as proof of principle, using an unsupervised machine learning scheme to enhance the diagnostic accuracy of the heterogeneous histology patterns that pathologists might not easily see.
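The phenotype-discovery step can be illustrated with a plain k-means clustering of invented per-cell feature vectors (the study's actual cellular features come from digitized slides, and its clustering setup may differ):

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented per-cell features drawn from three latent "phenotypes"
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
cells = np.vstack([c + 0.3 * rng.standard_normal((50, 2)) for c in centers])

def kmeans(x, seeds, iters=20):
    """Plain k-means with fixed seed points (keeps this sketch deterministic)."""
    cent = x[seeds].copy()
    for _ in range(iters):
        labels = np.linalg.norm(x[:, None] - cent[None], axis=2).argmin(axis=1)
        cent = np.array([x[labels == j].mean(axis=0) for j in range(len(seeds))])
    return labels

# One seed per expected phenotype, chosen for reproducibility of the toy data
labels = kmeans(cells, seeds=[0, 50, 100])

# A slide-level feature is then the phenotype composition of its cells,
# which a downstream classifier could use to separate CLL / aCLL / RT
composition = np.bincount(labels, minlength=3) / len(labels)
print(composition)
```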
10
A Survey of Dental Caries Segmentation and Detection Techniques. ScientificWorldJournal 2022; 2022:8415705. [PMID: 35450417 PMCID: PMC9017544 DOI: 10.1155/2022/8415705] [Received: 01/23/2022] [Revised: 02/21/2022] [Accepted: 03/10/2022] [Indexed: 01/15/2023]
Abstract
Dental caries detection has, in the past, been a challenging task given the amount of information obtained from various radiographic images. Several methods have been introduced to improve the quality of images for faster caries detection. Deep learning has become the methodology of choice for the analysis of medical images. This survey gives an in-depth look into the use of deep learning for object detection, segmentation, and classification, and then reviews the literature on segmentation and detection methods for dental images using deep learning. In the literature studied, we found that methods were grouped according to the type of dental caries (proximal, enamel), the type of X-ray images used (extraoral, intraoral), and the segmentation method (threshold-based, cluster-based, boundary-based, and region-based). Among the works reviewed, the main focus has been on threshold-based segmentation methods. Most of the reviewed papers preferred intraoral over extraoral X-ray images, performing segmentation on dental images of already isolated parts of the teeth. This paper presents an in-depth analysis of recent research in deep learning for dental caries segmentation and detection. It discusses the methods and algorithms used to segment and detect dental caries, compares existing models in terms of system performance and evaluation, and discusses the limitations of these methods as well as future perspectives on improving their performance.
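The threshold-based segmentation family that dominates the reviewed literature reduces to a few lines; the image, intensity values and threshold below are invented purely to show the mechanics (real pipelines add preprocessing, adaptive thresholding and morphological cleanup):

```python
import numpy as np

# Toy grayscale "radiograph": bright tooth region with a darker carious spot
img = np.full((8, 8), 200, dtype=np.uint8)   # sound enamel/dentine intensity
img[2:4, 2:5] = 80                           # hypothetical carious lesion

# Threshold-based segmentation: lesion pixels are darker than sound tissue
threshold = 128
lesion_mask = img < threshold

print(int(lesion_mask.sum()), "lesion pixels")
```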
|
11
|
Chen X, Wang X, Zhang K, Fung KM, Thai TC, Moore K, Mannel RS, Liu H, Zheng B, Qiu Y. Recent advances and clinical applications of deep learning in medical image analysis. Med Image Anal 2022; 79:102444. [DOI: 10.1016/j.media.2022.102444] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Revised: 03/09/2022] [Accepted: 04/01/2022] [Indexed: 02/07/2023]
|
12
|
Uzunova H, Wilms M, Forkert ND, Handels H, Ehrhardt J. A systematic comparison of generative models for medical images. Int J Comput Assist Radiol Surg 2022; 17:1213-1224. [PMID: 35128605 PMCID: PMC9206635 DOI: 10.1007/s11548-022-02567-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Accepted: 01/14/2022] [Indexed: 11/05/2022]
Abstract
Purpose
This work aims for a systematic comparison of popular shape and appearance models. Here, two statistical and four deep-learning-based shape and appearance models are compared and evaluated in terms of their expressiveness described by their generalization ability and specificity as well as further properties like input data format, interpretability and latent space distribution and dimension.
Methods
Classical shape models and their locality-based extension are considered next to autoencoders, variational autoencoders, diffeomorphic autoencoders and generative adversarial networks. The approaches are evaluated in terms of generalization ability, specificity and likeness depending on the amount of training data. Furthermore, various latent space metrics are presented in order to capture further major characteristics of the models.
Results
The experiments showed that locality-based statistical shape models yield the best results in terms of generalization ability for 2D and 3D shape modeling. However, the deep learning approaches show strongly improved specificity. In the case of simultaneous shape and appearance modeling, the neural networks are able to generate more realistic and diverse appearances. A major drawback of the deep learning models, however, is their impaired interpretability and the ambiguity of the latent space.
Conclusions
It can be concluded that for applications not requiring particularly good specificity, shape modeling can be reliably established with locality-based statistical shape models, especially when it comes to 3D shapes. However, deep learning approaches are more worthwhile in terms of appearance modeling.
|
13
|
Nakao T, Hanaoka S, Nomura Y, Hayashi N, Abe O. Anomaly detection in chest 18F-FDG PET/CT by Bayesian deep learning. Jpn J Radiol 2022; 40:730-739. [PMID: 35094221 PMCID: PMC9252947 DOI: 10.1007/s11604-022-01249-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2021] [Accepted: 01/11/2022] [Indexed: 12/25/2022]
Abstract
Purpose To develop an anomaly detection system for PET/CT with the tracer 18F-fluorodeoxyglucose (FDG) that requires only normal PET/CT images for training and can detect abnormal FDG uptake at any location in the chest region. Materials and methods We trained our model, based on a Bayesian deep learning framework, using 1878 PET/CT scans with no abnormal findings. Our model learns the distribution of standard uptake values in these normal training images and detects out-of-normal uptake regions. We evaluated this model using 34 scans showing focal abnormal FDG uptake in the chest region. This evaluation dataset includes 28 pulmonary and 17 extrapulmonary abnormal FDG uptake foci. We performed per-voxel and per-slice receiver operating characteristic (ROC) analyses and a per-lesion free-response ROC analysis. Results Our model showed an area under the ROC curve of 0.992 for discriminating abnormal voxels and 0.852 for abnormal slices. It detected 41 of 45 (91.1%) abnormal FDG uptake foci with 12.8 false positives per scan (FPs/scan), including 26 of 28 pulmonary and 15 of 17 extrapulmonary abnormalities. The sensitivity at 3.0 FPs/scan was 82.2% (37/45). Conclusion Our model, trained only with normal PET/CT images, successfully detected both pulmonary and extrapulmonary abnormal FDG uptake in the chest region.
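The core recipe this abstract describes (learn the normative distribution of uptake values from normal scans only, then flag voxels that fall outside it) can be sketched with a simple per-voxel Gaussian model. This toy uses synthetic data and stands in for the paper's Bayesian deep learning framework:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" training scans: per-voxel standard uptake values (SUV),
# a synthetic stand-in for the 1878 normal PET/CT scans.
normal_scans = rng.normal(loc=1.0, scale=0.2, size=(100, 8, 8))  # (scans, H, W)

# Learn the per-voxel normative distribution from normal data only.
mu = normal_scans.mean(axis=0)
sigma = normal_scans.std(axis=0) + 1e-6

# Test scan with one focal region of abnormally high uptake.
test = rng.normal(loc=1.0, scale=0.2, size=(8, 8))
test[2:4, 2:4] += 3.0  # simulated hot lesion

# Anomaly score: how far each voxel falls outside the learned distribution.
z = (test - mu) / sigma
anomaly_map = z > 5.0  # threshold chosen for this toy data
```

The simulated lesion lights up in `anomaly_map` while normal voxels stay quiet; the paper's Bayesian network plays the role of this Gaussian model, with learned rather than tabulated per-voxel statistics.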
Affiliation(s)
- Takahiro Nakao
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan.
- Shouhei Hanaoka
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
- Yukihiro Nomura
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoicho, Inage-ku, Chiba, Japan
- Naoto Hayashi
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Osamu Abe
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
|
14
|
Weis CA, Bindzus JN, Voigt J, Runz M, Hertjens S, Gaida MM, Popovic ZV, Porubsky S. Assessment of glomerular morphological patterns by deep learning algorithms. J Nephrol 2022; 35:417-427. [PMID: 34982414 PMCID: PMC8927010 DOI: 10.1007/s40620-021-01221-9] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/30/2021] [Indexed: 12/11/2022]
Abstract
Background Compilation of different morphological lesion signatures is characteristic of renal pathology. Previous studies have documented the potential value of artificial intelligence (AI) in recognizing relatively clear-cut glomerular structures and patterns, such as segmental or global sclerosis or mesangial hypercellularity. This study aimed to test the capacity of deep learning algorithms to recognize complex glomerular structural changes that reflect common diagnostic dilemmas in nephropathology. Methods For this purpose, we defined nine classes of glomerular morphological patterns and trained twelve convolutional neural network (CNN) models on them. The two-step training process used a first dataset defined by an expert nephropathologist (12,253 images) and a second consensus dataset (11,142 images) defined by three experts in the field. Results The efficacy of CNN training was evaluated on another set of 180 consensus images, showing convincingly good classification results (kappa values 0.838–0.938). Furthermore, we elucidated the image areas decisive for CNN-based decision making using class activation maps. Finally, we demonstrated that the algorithm could decipher glomerular disease patterns coinciding in a single glomerulus (e.g. necrosis along with mesangial and endocapillary hypercellularity). Conclusions In summary, our model, focusing on glomerular lesions detectable by conventional microscopy, is the first of its kind to deploy deep learning as a reliable and promising tool for recognizing even discrete and/or overlapping morphological changes. Our results provide a stimulus for ongoing projects that integrate further input levels next to morphology (such as immunohistochemistry, electron microscopy, and clinical information) to develop a novel tool applicable to routine diagnostic nephropathology. Supplementary Information The online version contains supplementary material available at 10.1007/s40620-021-01221-9.
Affiliation(s)
- Cleo-Aron Weis
- Institute of Pathology, University Medical Centre Mannheim, University of Heidelberg, 68167, Mannheim, Germany.
- Jan Niklas Bindzus
- Institute of Pathology, University Medical Centre Mannheim, University of Heidelberg, 68167, Mannheim, Germany
- Jonas Voigt
- Institute of Pathology, University Medical Centre Mannheim, University of Heidelberg, 68167, Mannheim, Germany
- Marlen Runz
- Institute of Pathology, University Medical Centre Mannheim, University of Heidelberg, 68167, Mannheim, Germany
- Mannheim Institute for Intelligent Systems in Medicine, University Medical Centre Mannheim, University of Heidelberg, Mannheim, Germany
- Svetlana Hertjens
- Institute of Medical Statistics and Biometry, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Matthias M Gaida
- Institute of Pathology, University Medical Center of the Johannes Gutenberg University Mainz, Langenbeckstrasse 1, 55131, Mainz, Germany
- Zoran V Popovic
- Institute of Pathology, University Medical Centre Mannheim, University of Heidelberg, 68167, Mannheim, Germany
- Stefan Porubsky
- Institute of Pathology, University Medical Center of the Johannes Gutenberg University Mainz, Langenbeckstrasse 1, 55131, Mainz, Germany.
|
15
|
Liu H, Vohra N, Bailey K, El-Shenawee M, Nelson AH. Deep Learning Classification of Breast Cancer Tissue from Terahertz Imaging Through Wavelet Synchro-Squeezed Transformation and Transfer Learning. JOURNAL OF INFRARED, MILLIMETER AND TERAHERTZ WAVES 2022; 43:48-70. [PMID: 36246840 PMCID: PMC9558445 DOI: 10.1007/s10762-021-00839-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Accepted: 12/21/2021] [Indexed: 05/25/2023]
Abstract
Terahertz imaging and spectroscopy is an exciting technology with the potential to provide insights in medical imaging. Prior research has leveraged statistical inference to classify tissue regions from terahertz images. To date, these approaches have shown that the segmentation problem is challenging for images of fresh tissue and for tumors that have invaded muscular regions. Artificial intelligence, particularly machine learning and deep learning, has been shown to improve performance in some medical imaging challenges. This paper builds on that literature by adapting a set of deep learning approaches to the challenge of classifying tissue regions in images captured by terahertz imaging and spectroscopy of freshly excised murine xenograft tissue. Our approach is to preprocess the images through a wavelet synchro-squeezed transformation (WSST) to convert the time-sequential terahertz data of each THz pixel into a spectrogram. The spectrograms are used as input tensors to a deep convolutional neural network for pixel-wise classification. Based on the classification result of each pixel, a cancer tissue segmentation map is produced. In experimentation, we adopt a leave-one-sample-out cross-validation strategy and evaluate the chosen networks and results using multiple metrics such as accuracy, precision, intersection, and size. The results demonstrate improved classification accuracy compared to statistical methods and improved segmentation between muscle and cancerous regions in xenograft tumors, and they identify areas in which the imaging and classification methodology can be improved.
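The preprocessing step described here (turning each THz pixel's time trace into a time-frequency image that a CNN can classify) can be approximated with a plain magnitude STFT. The sketch below uses a synthetic signal, and the windowed FFT is a simplified stand-in for the wavelet synchro-squeezed transformation:

```python
import numpy as np

def spectrogram(signal, win=32, hop=16):
    """Magnitude STFT: a simple stand-in for the WSST, converting a
    per-pixel time trace into a 2-D time-frequency image."""
    frames = [signal[i:i + win] * np.hanning(win)
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T  # (freq, time)

# Toy terahertz-like time trace: a second frequency appears halfway through.
t = np.arange(256)
sig = np.sin(2 * np.pi * 0.05 * t) + (t > 128) * np.sin(2 * np.pi * 0.2 * t)

S = spectrogram(sig)  # one such image per THz pixel becomes the CNN input tensor
```

The resulting array has one axis for frequency bins and one for time frames, so the high-frequency component only shows energy in the later frames, exactly the kind of structure a 2-D CNN can exploit.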
Affiliation(s)
- Haoyan Liu
- Department of Computer Science and Computer Engineering, University of Arkansas, Fayetteville, AR, 72701, USA
- Nagma Vohra
- Department of Electrical Engineering, University of Arkansas, Fayetteville, AR 72701, USA
- Keith Bailey
- Charles River Laboratories, Mattawan, MI, 49071, USA
- Magda El-Shenawee
- Department of Electrical Engineering, University of Arkansas, Fayetteville, AR 72701, USA
- Alexander H. Nelson
- Department of Computer Science and Computer Engineering, University of Arkansas, Fayetteville, AR, 72701, USA
|
16
|
Mitra J, Qiu J, MacDonald M, Venugopal P, Wallace K, Abdou H, Richmond M, Elansary N, Edwards J, Patel N, Morrison J, Marinelli L. Automatic Hemorrhage Detection From Color Doppler Ultrasound Using a Generative Adversarial Network (GAN)-Based Anomaly Detection Method. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE 2022; 10:1800609. [PMID: 36051823 PMCID: PMC9423818 DOI: 10.1109/jtehm.2022.3199987] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/28/2022] [Revised: 07/21/2022] [Accepted: 08/10/2022] [Indexed: 11/21/2022]
Abstract
Hemorrhage control has been identified as a priority focus area for both civilian and military populations in the United States because exsanguination is the most common cause of preventable death in hemorrhagic injury. Non-compressible torso hemorrhage (NCTH) has a high mortality rate, and there are currently no broadly available therapies for NCTH outside of a surgical room environment. Novel therapies, including High Intensity Focused Ultrasound (HIFU), have emerged as promising methods for hemorrhage control, as they can non-invasively cauterize bleeding tissue deep within the body without injuring uninvolved regions. A major challenge in the application of HIFU with color Doppler ultrasound guidance is the interpretation and optimization of the blood flow images in real time to identify the hemorrhagic focus. Today, this task requires an expert sonographer, limiting the utility of this therapy in non-clinical environments. In this work, we investigated the feasibility of an automated hemorrhage detection method using a Generative Adversarial Network (GAN) for anomaly detection that learns a manifold of normal blood flow variability and subsequently identifies anomalous flow patterns that fall outside the learned manifold. As an initial feasibility study, we collected ultrasound color Doppler images of femoral arteries in an animal model of vascular injury (N = 11 pigs). Velocity information of the blood flow was extracted from the color Doppler images and used for training and testing the anomaly detection network. Normotensive images from 8 pigs were used for training, and testing was performed on images from 3 other pigs acquired at normotension, immediately after injury, 10 minutes post-injury, and 30 minutes post-injury. The residual images, i.e., the reconstruction-error maps, show promise in detecting hemorrhages, with AUCs of 0.90, 0.87, and 0.62 immediately, 10 minutes, and 30 minutes post-injury, respectively, and an overall AUC of 0.83.
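The anomaly detection principle used here (learn a manifold of normal blood flow variability, then score test samples by their distance from it) can be sketched with PCA standing in for the GAN. Everything below is synthetic toy data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "velocity profiles": normotensive flow patterns lie near a
# low-dimensional manifold spanned by two waveform shapes.
t = np.linspace(0, 1, 50)
basis = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])  # (2, 50)
normal = rng.normal(size=(200, 2)) @ basis \
    + rng.normal(scale=0.05, size=(200, 50))

# Learn the manifold of normal variability (PCA stands in for the GAN).
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]  # top-2 principal directions

def residual_energy(x):
    """Reconstruction error: squared distance from the learned manifold."""
    c = (x - mean) @ components.T
    recon = mean + c @ components
    return float(np.sum((x - recon) ** 2))

normal_test = rng.normal(size=2) @ basis
anomalous = normal_test + np.where(t > 0.6, 2.0, 0.0)  # abrupt flow disruption
```

A normal profile reconstructs almost perfectly, so its residual is near zero, while the disrupted profile leaves a large residual; thresholding that residual map is the automated analogue of the expert sonographer's judgment.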
Affiliation(s)
- Hossam Abdou, Michael Richmond, Noha Elansary, Joseph Edwards, Neerav Patel, Jonathan Morrison
- School of Medicine, University of Maryland, Baltimore, Baltimore, MD, USA
|
17
|
Luca AR, Ursuleanu TF, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Grigorovici A. Impact of quality, type and volume of data used by deep learning models in the analysis of medical images. INFORMATICS IN MEDICINE UNLOCKED 2022. [DOI: 10.1016/j.imu.2022.100911] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022] Open
|
18
|
Rashid N, Hossain MAF, Ali M, Islam Sukanya M, Mahmud T, Fattah SA. AutoCovNet: Unsupervised feature learning using autoencoder and feature merging for detection of COVID-19 from chest X-ray images. Biocybern Biomed Eng 2021; 41:1685-1701. [PMID: 34690398 PMCID: PMC8526490 DOI: 10.1016/j.bbe.2021.09.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2021] [Revised: 09/16/2021] [Accepted: 09/18/2021] [Indexed: 12/11/2022]
Abstract
With the onset of the COVID-19 pandemic, automated diagnosis has become one of the most trending research topics for faster mass screening. Deep learning-based approaches have been established as the most promising methods in this regard. However, the limited availability of labeled data is the main bottleneck for data-hungry deep learning methods. In this paper, a two-stage deep CNN based scheme is proposed to detect COVID-19 from chest X-ray images, achieving optimum performance with limited training images. In the first stage, an encoder-decoder based autoencoder network is trained on chest X-ray images in an unsupervised manner, so that the network learns to reconstruct the X-ray images. An encoder-merging network is proposed for the second stage, consisting of different layers of the encoder model followed by a merging network. Here the encoder model is initialized with the weights learned in the first stage, and the outputs from different layers of the encoder model are used effectively by being connected to the proposed merging network. An intelligent feature merging scheme is introduced in the proposed merging network. Finally, the encoder-merging network is trained for feature extraction from the X-ray images in a supervised manner, and the resulting features are used in the classification layers of the proposed architecture. Considering the final classification task, an EfficientNet-B4 network is utilized in both stages. End-to-end training is performed on datasets containing the classes COVID-19, Normal, Bacterial Pneumonia, and Viral Pneumonia. The proposed method offers very satisfactory performance compared to state-of-the-art methods, achieving an accuracy of 90.13% on the 4-class, 96.45% on the 3-class, and 99.39% on the 2-class classification task.
Affiliation(s)
- Nayeeb Rashid, Mohammad Ali, Tanvir Mahmud
- Department of EEE, BUET, ECE Building, West Palashi, Dhaka 1205, Bangladesh
|
19
|
Tschuchnig ME, Zillner D, Romanelli P, Hercher D, Heimel P, Oostingh GJ, Couillard-Després S, Gadermayr M. Quantification of anomalies in rats' spinal cords using autoencoders. Comput Biol Med 2021; 138:104939. [PMID: 34656872 DOI: 10.1016/j.compbiomed.2021.104939] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2021] [Revised: 10/05/2021] [Accepted: 10/09/2021] [Indexed: 10/20/2022]
Abstract
Computed tomography (CT) scans and magnetic resonance imaging (MRI) of spines are state-of-the-art for the evaluation of spinal cord lesions. This paper analyses micro-CT scans of rat spinal cords with the aim of quantifying lesion progression through the aggregation of anomaly-based scores. Since reliable labelling in spinal cords is only feasible for the healthy class, in the form of untreated spines, semi-supervised deviation-based anomaly detection algorithms are identified as powerful approaches. The main contribution of this paper is a large evaluation of different autoencoders and variational autoencoders for aggregated lesion quantification, and a resulting spinal cord lesion quantification method that generates highly correlating quantifications. The conducted experiments showed that several models were able to generate 3D lesion quantifications of the data. These quantifications correlated with the weakly labelled ground truth, with one model reaching an average correlation of 0.83. We also introduced an area-based model, which correlated with a mean of 0.84. The possibility of using the autoencoder-based method and the area feature in a complementary fashion is also discussed. In addition to improving medical diagnostics, we anticipate that features built on these quantifications will be useful for further applications such as clustering into different lesions.
Affiliation(s)
- Dominic Zillner
- Salzburg University of Applied Sciences, Urstein Süd 1, Puch, 5412, Salzburg, Austria
- Pasquale Romanelli
- Institute of Experimental Neuroregeneration, Spinal Cord Injury and Tissue Regeneration Center Salzburg, Strubergasse 21, Salzburg, 5020, Salzburg, Austria; Austrian Cluster for Tissue Regeneration, Donaueschingenstr 13, Vienna, 1200, Vienna, Austria
- David Hercher
- Austrian Cluster for Tissue Regeneration, Donaueschingenstr 13, Vienna, 1200, Vienna, Austria
- Patrick Heimel
- Austrian Cluster for Tissue Regeneration, Donaueschingenstr 13, Vienna, 1200, Vienna, Austria; Core Facility Hard Tissue and Biomaterial Research, Karl Donath Laboratory, University Clinic of Dentistry, Medical University Vienna, Spitalgasse 23, Wien, 1090, Wien, Austria
- Gertie J Oostingh
- Salzburg University of Applied Sciences, Urstein Süd 1, Puch, 5412, Salzburg, Austria
- Sébastien Couillard-Després
- Institute of Experimental Neuroregeneration, Spinal Cord Injury and Tissue Regeneration Center Salzburg, Strubergasse 21, Salzburg, 5020, Salzburg, Austria; Austrian Cluster for Tissue Regeneration, Donaueschingenstr 13, Vienna, 1200, Vienna, Austria
- Michael Gadermayr
- Salzburg University of Applied Sciences, Urstein Süd 1, Puch, 5412, Salzburg, Austria
|
20
|
Kim B, Kwon K, Oh C, Park H. Unsupervised anomaly detection in MR images using multicontrast information. Med Phys 2021; 48:7346-7359. [PMID: 34628653 DOI: 10.1002/mp.15269] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2021] [Revised: 09/14/2021] [Accepted: 09/14/2021] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Anomaly detection in magnetic resonance imaging (MRI) aims to distinguish the relevant biomarkers of diseases from those of normal tissues. In this paper, an unsupervised algorithm is proposed for pixel-level anomaly detection in multicontrast MRI. METHODS A deep neural network is developed that uses only normal MR images as training data. The network has two stages: feature generation and density estimation. For feature generation, relevant features are extracted from multicontrast MR images by performing contrast translation and dimension reduction. For density estimation, the distributions of the extracted features are estimated using a Gaussian mixture model (GMM). The two stages are trained to estimate normative distributions that represent large normal datasets well. At test time, the proposed method detects anomalies by measuring the log-likelihood that a test sample belongs to the estimated normative distributions. RESULTS The proposed method and its variants were applied to detect glioblastoma and ischemic stroke lesions. Comparison studies with six previous anomaly detection algorithms demonstrated that the proposed method achieved clear improvements in quantitative and qualitative evaluations. Ablation studies, removing each module from the proposed framework, validated the effectiveness of each proposed module. CONCLUSION The proposed deep learning framework is an effective tool for detecting anomalies in multicontrast MRI. Such unsupervised approaches have great potential for detecting various lesions where annotated lesion data collection is limited.
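The density estimation stage can be sketched in a few lines: fit a normative distribution to features extracted from normal images, then score test samples by log-likelihood. In this toy, a single Gaussian stands in for the paper's GMM and the 2-D features are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stage 1 (feature generation, simplified): 2-D features extracted
# from multicontrast images of normal tissue.
normal_feats = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 0.5]],
                                       size=500)

# Stage 2 (density estimation): fit the normative density.
# A single Gaussian stands in for the Gaussian mixture model here.
mu = normal_feats.mean(axis=0)
cov = np.cov(normal_feats.T)
cov_inv = np.linalg.inv(cov)
logdet = np.linalg.slogdet(cov)[1]

def log_likelihood(x):
    """Log-density of a feature vector under the normative Gaussian."""
    d = x - mu
    return float(-0.5 * (d @ cov_inv @ d + logdet + 2 * np.log(2 * np.pi)))

# Anomalies score a low log-likelihood under the normative distribution.
ll_normal = log_likelihood(np.array([0.1, -0.2]))
ll_lesion = log_likelihood(np.array([5.0, 4.0]))
```

Thresholding this per-pixel log-likelihood is exactly the decision rule the abstract describes; the GMM generalizes the single Gaussian to multimodal normative distributions.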
Affiliation(s)
- Byungjai Kim
- Department of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Guseong-dong, Yuseong-gu, Daejeon, Republic of Korea
- Kinam Kwon
- Samsung Electronics, Maetan-dong, Yeongtong-gu, Suwon-si, Gyeonggi-do, Republic of Korea
- Changheun Oh
- Department of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Guseong-dong, Yuseong-gu, Daejeon, Republic of Korea
- Hyunwook Park
- Department of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Guseong-dong, Yuseong-gu, Daejeon, Republic of Korea
|
21
|
Manouchehri N, Bouguila N, Fan W. Batch and online variational learning of hierarchical Dirichlet process mixtures of multivariate Beta distributions in medical applications. Pattern Anal Appl 2021. [DOI: 10.1007/s10044-021-01023-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
22
|
Ursuleanu TF, Luca AR, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Preda C, Grigorovici A. Deep Learning Application for Analyzing of Constituents and Their Correlations in the Interpretations of Medical Images. Diagnostics (Basel) 2021; 11:1373. [PMID: 34441307 PMCID: PMC8393354 DOI: 10.3390/diagnostics11081373] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Revised: 07/25/2021] [Accepted: 07/27/2021] [Indexed: 12/13/2022] Open
Abstract
The growing volume of medical data that must be interpreted and filtered for diagnostic and therapeutic purposes places increasing demands on the time and attention the doctor can give to the patient, and this has encouraged the development of deep learning models as constructive and effective support. Deep learning (DL) has developed exponentially in recent years, with a major impact on the interpretation of medical images. This has influenced the development, diversification and quality of scientific data, the development of knowledge-construction methods, and the improvement of DL models used in medical applications. Existing research papers focus on describing, highlighting and classifying individual constituent elements of DL models used in the interpretation of medical images, and do not provide a unified picture of the importance and impact of each constituent on the performance of DL models. The novelty of our paper consists primarily in its unitary treatment of the constituent elements of DL models (namely, data, the tools used by DL architectures, and specifically constructed combinations of DL architectures), highlighting their "key" features for completing tasks in current applications in the interpretation of medical images. The use of the "key" characteristics specific to each constituent of DL models, and the correct determination of their correlations, may be the subject of future research aimed at increasing the performance of DL models in the interpretation of medical images.
Affiliation(s)
- Tudor Florin Ursuleanu
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Department of Surgery I, Regional Institute of Oncology, 700483 Iasi, Romania
- Andreea Roxana Luca
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department Obstetrics and Gynecology, Integrated Ambulatory of Hospital “Sf. Spiridon”, 700106 Iasi, Romania
- Liliana Gheorghe
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Radiology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Roxana Grigorovici
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Stefan Iancu
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Maria Hlusneac
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Cristina Preda
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Endocrinology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Alexandru Grigorovici
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
|
23
|
Sandfort V, Yan K, Graffy PM, Pickhardt PJ, Summers RM. Use of Variational Autoencoders with Unsupervised Learning to Detect Incorrect Organ Segmentations at CT. Radiol Artif Intell 2021; 3:e200218. [PMID: 34350410 DOI: 10.1148/ryai.2021200218] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2020] [Revised: 03/23/2021] [Accepted: 04/15/2021] [Indexed: 11/11/2022]
Abstract
Purpose To develop a deep learning model to detect incorrect organ segmentations at CT. Materials and Methods In this retrospective study, a deep learning method was developed using variational autoencoders (VAEs) to identify problematic organ segmentations. First, three different three-dimensional (3D) U-Nets were trained on segmented CT images of the liver (n = 141), spleen (n = 51), and kidney (n = 66). A total of 12,495 CT images were then segmented by the 3D U-Nets, and the output segmentations were used to train three different VAEs for the detection of problematic segmentations. Automatic reconstruction errors (Dice scores) were then calculated. A random sample of 2510 segmented images each for the liver, spleen, and kidney models was assessed manually by a human reader to determine problematic and correct segmentations. The ability of the VAEs to identify unusual or problematic segmentations was evaluated using receiver operating characteristic curve analysis and compared with traditional non-deep learning methods for outlier detection. Using the VAE outputs, passive and active learning approaches were applied to the original 3D U-Nets to determine whether training could decrease segmentation error rates (15 CT scans were added to the original training data, according to each approach). Results The mean area under the receiver operating characteristic curve (AUC) for detecting problematic segmentations using the VAE method was 0.90 (95% CI: 0.89, 0.92) for kidney, 0.94 (95% CI: 0.93, 0.95) for liver, and 0.81 (95% CI: 0.80, 0.82) for spleen. The VAE performance was higher than that of traditional methods in most cases. For example, for liver segmentation, the highest performing non-deep learning method for outlier detection had an AUC of 0.83 (95% CI: 0.77, 0.90) compared with 0.94 (95% CI: 0.93, 0.95) for the VAE method (P < .05). Using the information on problematic segmentations for active learning decreased 3D U-Net segmentation error rates (original error rate, 7.1%; passive learning, 6.0%; active learning, 5.7%). Conclusion A method was developed to screen for unusual and problematic automatic organ segmentations using a 3D VAE. Keywords: Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms, Segmentation, CT. © RSNA, 2021.
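The screening signal used in this study, the Dice overlap between a candidate segmentation and its VAE reconstruction, is easy to sketch. In the toy below the "reconstructions" are hand-made masks standing in for VAE output; a VAE trained only on correct segmentations tends to snap inputs back toward the manifold of plausible organ shapes, so failed segmentations reconstruct poorly:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# A plausible organ segmentation and a near-identical "VAE reconstruction".
seg_good = np.zeros((16, 16), dtype=bool)
seg_good[4:12, 4:12] = True
recon_good = np.zeros_like(seg_good)
recon_good[4:12, 5:12] = True

# A failed segmentation (e.g. leaked into a neighbouring structure):
# the reconstruction reverts to a typical organ shape, so overlap collapses.
seg_bad = np.zeros_like(seg_good)
seg_bad[0:16, 0:4] = True
recon_bad = recon_good
```

Ranking segmentations by this reconstruction Dice gives the outlier score used for both the screening ROC analysis and the active learning selection.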
Affiliation(s)
- Veit Sandfort
- Ke Yan
- Peter M Graffy
- Perry J Pickhardt
- Ronald M Summers
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (V.S., K.Y., R.M.S.); and Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wis (P.M.G., P.J.P.)

24
Han C, Rundo L, Murao K, Noguchi T, Shimahara Y, Milacski ZÁ, Koshino S, Sala E, Nakayama H, Satoh S. MADGAN: unsupervised medical anomaly detection GAN using multiple adjacent brain MRI slice reconstruction. BMC Bioinformatics 2021; 22:31. [PMID: 33902457] [PMCID: PMC8073969] [DOI: 10.1186/s12859-020-03936-1]
Abstract
BACKGROUND Unsupervised learning can discover various unseen abnormalities, relying on large-scale unannotated medical images of healthy subjects. Towards this, unsupervised methods reconstruct a single 2D/3D medical image and detect outliers either in the learned feature space or from a high reconstruction loss. However, without considering continuity between multiple adjacent slices, they cannot directly discriminate diseases composed of an accumulation of subtle anatomical anomalies, such as Alzheimer's disease (AD). Moreover, no study has shown how unsupervised anomaly detection is associated with disease stages, various (i.e., more than two types of) diseases, or multi-sequence magnetic resonance imaging (MRI) scans. RESULTS We propose unsupervised medical anomaly detection generative adversarial network (MADGAN), a novel two-step method using GAN-based multiple adjacent brain MRI slice reconstruction to detect brain anomalies at different stages on multi-sequence structural MRI: (Reconstruction) Wasserstein loss with gradient penalty plus 100 × ℓ1 loss, trained on 3 healthy brain axial MRI slices to reconstruct the next 3 slices, reconstructs unseen healthy/abnormal scans; (Diagnosis) the average ℓ2 loss per scan discriminates them by comparing the ground-truth and reconstructed slices. For training, we use two different datasets composed of 1133 healthy T1-weighted (T1) and 135 healthy contrast-enhanced T1 (T1c) brain MRI scans for detecting AD and brain metastases/various diseases, respectively. Our self-attention MADGAN can detect AD on T1 scans at a very early stage, mild cognitive impairment (MCI), with an area under the curve (AUC) of 0.727, and AD at a late stage with an AUC of 0.894, while detecting brain metastases on T1c scans with an AUC of 0.921.
CONCLUSIONS Similar to the way physicians perform a diagnosis using massive healthy training data, our first multiple-slice reconstruction approach, MADGAN, can reliably predict the next 3 slices from the previous 3 only for unseen healthy images. As the first unsupervised approach to diagnosing various diseases, MADGAN can reliably detect the accumulation of subtle anatomical anomalies and hyper-intense enhancing lesions, such as (especially late-stage) AD and brain metastases, on multi-sequence MRI scans.
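The diagnosis step, an average per-scan reconstruction error, can be sketched as below. This is a minimal illustration of the scoring rule only, not the MADGAN model; the array shapes and stub data are assumptions.

```python
import numpy as np

def scan_anomaly_score(true_slices: np.ndarray, recon_slices: np.ndarray) -> float:
    """Average per-slice L2 reconstruction error over a whole scan.

    true_slices / recon_slices: arrays of shape (n_slices, H, W).
    A model trained to predict the next slices of *healthy* brains
    reconstructs healthy anatomy well, so a high average error
    suggests an anomaly somewhere in the scan.
    """
    assert true_slices.shape == recon_slices.shape
    # Flatten each slice and take its Euclidean norm against the reconstruction.
    diffs = (true_slices - recon_slices).reshape(len(true_slices), -1)
    return float(np.linalg.norm(diffs, axis=1).mean())

# A scan identical to its reconstruction scores zero (no anomaly signal).
healthy = np.ones((3, 4, 4))
assert scan_anomaly_score(healthy, healthy) == 0.0
```

Thresholding or ranking these per-scan scores is what yields the reported AUCs for MCI, AD, and metastasis detection.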
Affiliation(s)
- Leonardo Rundo
- Department of Radiology, University of Cambridge, Cambridge, UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK
- Kohei Murao
- Research Center for Medical Big Data, National Institute of Informatics, Tokyo, Japan
- Zoltán Ádám Milacski
- Department of Artificial Intelligence, ELTE Eötvös Loránd University, Budapest, Hungary
- Saori Koshino
- Department of Radiology, Juntendo University, Tokyo, Japan
- Evis Sala
- Department of Radiology, University of Cambridge, Cambridge, UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK
- Hideki Nakayama
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo Institutes for Advanced Study, The University of Tokyo, Tokyo, Japan
- Shin’ichi Satoh
- Research Center for Medical Big Data, National Institute of Informatics, Tokyo, Japan

25
Nakao T, Hanaoka S, Nomura Y, Murata M, Takenaga T, Miki S, Watadani T, Yoshikawa T, Hayashi N, Abe O. Unsupervised Deep Anomaly Detection in Chest Radiographs. J Digit Imaging 2021; 34:418-427. [PMID: 33555397] [PMCID: PMC8289984] [DOI: 10.1007/s10278-020-00413-2]
Abstract
The purposes of this study are to propose an unsupervised anomaly detection method based on a deep neural network (DNN) model, which requires only normal images for training, and to evaluate its performance with a large chest radiograph dataset. We used the auto-encoding generative adversarial network (α-GAN) framework, which is a combination of a GAN and a variational autoencoder, as a DNN model. A total of 29,684 frontal chest radiographs from the Radiological Society of North America Pneumonia Detection Challenge dataset were used for this study (16,880 male and 12,804 female patients; average age, 47.0 years). All these images were labeled as "Normal," "No Opacity/Not Normal," or "Opacity" by board-certified radiologists. About 70% (6,853/9,790) of the Normal images were randomly sampled as the training dataset, and the rest were randomly split into the validation and test datasets in a ratio of 1:2 (7,610 and 15,221). Our anomaly detection system could correctly visualize various lesions including a lung mass, cardiomegaly, pleural effusion, bilateral hilar lymphadenopathy, and even dextrocardia. Our system detected the abnormal images with an area under the receiver operating characteristic curve (AUROC) of 0.752. The AUROCs for the abnormal labels Opacity and No Opacity/Not Normal were 0.838 and 0.704, respectively. Our DNN-based unsupervised anomaly detection method could successfully detect various diseases or anomalies in chest radiographs by training with only the normal images.
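The AUROC figures above can be computed directly from per-image anomaly scores via the rank (Mann-Whitney) formulation, without constructing an explicit ROC curve. A minimal sketch with made-up scores; the quadratic loop is for clarity, not efficiency:

```python
def auroc(scores_normal, scores_abnormal) -> float:
    """AUROC of an anomaly score: the probability that a randomly chosen
    abnormal image scores higher than a randomly chosen normal one,
    with ties counted as half (the normalized Mann-Whitney U statistic).
    """
    wins = 0.0
    for a in scores_abnormal:
        for n in scores_normal:
            if a > n:
                wins += 1.0
            elif a == n:
                wins += 0.5
    return wins / (len(scores_abnormal) * len(scores_normal))

# Perfectly separated scores give AUROC 1.0.
assert auroc([0.1, 0.2], [0.8, 0.9]) == 1.0
```

With a trained α-GAN, the per-image score would be a reconstruction/discriminator-based anomaly measure; here the score lists are placeholders.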
Affiliation(s)
- Takahiro Nakao, Yukihiro Nomura, Tomomi Takenaga, Soichiro Miki, Takeharu Yoshikawa, Naoto Hayashi
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
- Shouhei Hanaoka, Osamu Abe
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
- Masaki Murata
- Department of Management, Japan University of Economics, 3-11-25 Gojo, Dazaifu-shi, Fukuoka, Japan
- Takeyuki Watadani, Osamu Abe
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan

26
Burgos N, Cardoso MJ, Samper-González J, Habert MO, Durrleman S, Ourselin S, Colliot O. Anomaly detection for the individual analysis of brain PET images. J Med Imaging (Bellingham) 2021; 8:024003. [PMID: 33842668] [PMCID: PMC8021015] [DOI: 10.1117/1.jmi.8.2.024003]
Abstract
Purpose: In clinical practice, positron emission tomography (PET) images are mostly analyzed visually, but the sensitivity and specificity of this approach greatly depend on the observer's experience. Quantitative analysis of PET images would alleviate this problem by helping define an objective limit between normal and pathological findings. We present an anomaly detection framework for the individual analysis of PET images. Approach: We created subject-specific abnormality maps that summarize the pathology's topographical distribution in the brain by comparing the subject's PET image to a model of healthy PET appearance that is specific to the subject under investigation. This model was generated from demographically and morphologically matched PET scans from a control dataset. Results: We generated abnormality maps for healthy controls, patients at different stages of Alzheimer's disease and with different frontotemporal dementia syndromes. We showed that no anomalies were detected for the healthy controls and that the anomalies detected from the patients with dementia coincided with the regions where abnormal uptake was expected. We also validated the proposed framework using the abnormality maps as inputs of a classifier and obtained higher classification accuracies than when using the PET images themselves as inputs. Conclusions: The proposed method was able to automatically locate and characterize the areas characteristic of dementia from PET images. The abnormality maps are expected to (i) help clinicians in their diagnosis by highlighting, in a data-driven fashion, the pathological areas, and (ii) improve the interpretability of subsequent analyses, such as computer-aided diagnosis or spatiotemporal modeling.
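The core idea of comparing a subject's image against a model of healthy appearance can be illustrated with a voxel-wise z-map. This is a deliberate simplification: a plain mean/standard deviation over control scans stands in for the paper's demographically and morphologically matched subject-specific model, and all data below are toy arrays.

```python
import numpy as np

def abnormality_map(subject_pet: np.ndarray,
                    control_pets: np.ndarray,
                    eps: float = 1e-6) -> np.ndarray:
    """Voxel-wise z-score of a subject's PET image against healthy controls.

    control_pets: shape (n_controls, ...) stack of spatially normalized
    control images. Large |z| marks voxels whose uptake deviates from the
    healthy model; this captures the spirit, not the exact algorithm, of
    subject-specific abnormality mapping.
    """
    mu = control_pets.mean(axis=0)
    sd = control_pets.std(axis=0)
    return (subject_pet - mu) / (sd + eps)

# A subject indistinguishable from the controls yields a near-zero map.
controls = np.stack([np.full((4, 4), v) for v in (0.9, 1.0, 1.1)])
z = abnormality_map(np.ones((4, 4)), controls)
assert np.allclose(z, 0.0, atol=1e-3)
```

Such maps can then feed a classifier, as in the paper's validation, instead of the raw PET images.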
Affiliation(s)
- Ninon Burgos, Jorge Samper-González, Stanley Durrleman, Olivier Colliot
- Paris Brain Institute, Hôpital Pitié-Salpêtrière, Paris, France
- INSERM, U 1127, Hôpital Pitié-Salpêtrière, Paris, France
- CNRS, UMR 7225, Hôpital Pitié-Salpêtrière, Paris, France
- Sorbonne Université, Hôpital Pitié-Salpêtrière, Paris, France
- Inria, Aramis Project-Team, Hôpital Pitié-Salpêtrière, Paris, France
- M. Jorge Cardoso, Sébastien Ourselin
- King’s College London, Department of Imaging and Biomedical Engineering, London, United Kingdom
- Marie-Odile Habert
- AP-HP, Hôpital Pitié-Salpêtrière, Department of Nuclear Medicine, Paris, France
- Laboratoire d’Imagerie Biomédicale, Sorbonne Université, Inserm U 1146, CNRS UMR 7371, Hôpital Pitié-Salpêtrière, Paris, France
- Centre Acquisition et Traitement des Images, Hôpital Pitié-Salpêtrière, Paris, France

27
Nogales A, García-Tejedor ÁJ, Monge D, Vara JS, Antón C. A survey of deep learning models in medical therapeutic areas. Artif Intell Med 2021; 112:102020. [PMID: 33581832] [DOI: 10.1016/j.artmed.2021.102020]
Abstract
Artificial intelligence is a broad field that comprises a wide range of techniques, of which deep learning is presently the one with the most impact. Moreover, medicine is a field where data are both complex and massive, and the importance of the decisions made by doctors makes it one of the areas in which deep learning techniques can have the greatest impact. A systematic review following the Cochrane recommendations was conducted by a multidisciplinary team of physicians, research methodologists and computer scientists. This survey aims to identify the main therapeutic areas and the deep learning models used for diagnosis and treatment tasks. The most relevant databases included were MedLine, Embase, Cochrane Central, Astrophysics Data System, Europe PubMed Central, Web of Science and Science Direct. Inclusion and exclusion criteria were defined and applied in the first and second peer-review screenings. A set of quality criteria was developed to select the papers obtained after the second screening. Finally, 126 studies from the initial 3493 papers were selected, and 64 were described. Results show that the number of publications on deep learning in medicine is increasing every year. Convolutional neural networks are the most widely used models, and the most developed area is oncology, where they are used mainly for image analysis.
Affiliation(s)
- Alberto Nogales, Álvaro J García-Tejedor, Juan Serrano Vara
- CEIEC, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223, Pozuelo de Alarcón, Spain
- Diana Monge, Cristina Antón
- Faculty of Medicine, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223, Pozuelo de Alarcón, Spain

28
Guo C. The evaluation model of reconstruction effect of ancient villages under the influence of epidemic situation based on big data. Journal of Intelligent & Fuzzy Systems 2020. [DOI: 10.3233/jifs-189278]
Abstract
In rural construction, COVID-19 has disrupted the collection of basic information about, and surveys of the needs of, the relevant interest groups, even as the scale of ancient-village reconstruction work continues to grow. Because rural space is complex, settlements are dispersed, and the information needs of rural planning are diverse, the data to be covered are extensive, information is difficult to acquire, collected data are used to limited effect, and there is no planning feedback mechanism. During the epidemic, moreover, staff could not carry out reconstruction work on ancient villages in person. At present, the data involved in village planning, construction and architectural design are complex, the needs of the relevant interest groups are diverse, and regional planning is difficult. In this paper, big data techniques are applied to the reconstruction of ancient villages during the COVID-19 epidemic.
Affiliation(s)
- Chen Guo
- College of Fine Arts, Hubei Normal University, Huangshi, Hubei, China

29
Larrazabal AJ, Martinez C, Glocker B, Ferrante E. Post-DAE: Anatomically Plausible Segmentation via Post-Processing With Denoising Autoencoders. IEEE Trans Med Imaging 2020; 39:3813-3820. [PMID: 32746125] [DOI: 10.1109/tmi.2020.3005297]
Abstract
We introduce Post-DAE, a post-processing method based on denoising autoencoders (DAEs) to improve the anatomical plausibility of arbitrary biomedical image segmentation algorithms. Some of the most popular segmentation methods (e.g. based on convolutional neural networks or random forest classifiers) incorporate additional post-processing steps to ensure that the resulting masks fulfill expected connectivity constraints. These methods operate under the hypothesis that contiguous pixels with similar appearance should belong to the same class. Even if valid in general, this assumption does not consider more complex priors like topological restrictions or convexity, which cannot be easily incorporated into these methods. Post-DAE leverages the latest developments in manifold learning via denoising autoencoders. First, we learn a compact and non-linear embedding that represents the space of anatomically plausible segmentations. Then, given a segmentation mask obtained with an arbitrary method, we reconstruct its anatomically plausible version by projecting it onto the learnt manifold. The proposed method is trained using unpaired segmentation masks, which makes it independent of intensity information and image modality. We performed experiments in binary and multi-label segmentation of chest X-ray and cardiac magnetic resonance images. We show how erroneous and noisy segmentation masks can be improved using Post-DAE. With almost no additional computation cost, our method brings erroneous segmentations back to a feasible space.
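The "project onto the space of plausible segmentations" step can be illustrated with a crude discrete stand-in: replacing an arbitrary mask with the closest member of a small set of plausible masks. In the actual method a trained denoising autoencoder plays this role over a continuous learned manifold; the atlas list, distance, and toy masks below are all assumptions made for illustration.

```python
import numpy as np

def project_to_plausible(mask: np.ndarray, atlas: list) -> np.ndarray:
    """Discrete stand-in for manifold projection: snap a mask to the
    nearest anatomically plausible mask in `atlas` (Hamming distance).
    Post-DAE instead reconstructs the mask through a DAE trained on
    plausible segmentations, which acts as a continuous projection.
    """
    def dist(a, b):
        return int(np.logical_xor(a.astype(bool), b.astype(bool)).sum())
    return min(atlas, key=lambda p: dist(mask, p))

# A mask corrupted by a stray pixel snaps back to the plausible shape.
plausible = np.zeros((6, 6), dtype=np.uint8)
plausible[1:5, 1:5] = 1
noisy = plausible.copy()
noisy[0, 5] = 1  # isolated false-positive pixel far from the organ
assert (project_to_plausible(noisy, [plausible]) == plausible).all()
```

The appeal of the learned version is exactly what this toy lacks: it generalizes smoothly to shapes not in any finite atlas.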
30
Nayarisseri A, Khandelwal R, Madhavi M, Selvaraj C, Panwar U, Sharma K, Hussain T, Singh SK. Shape-based Machine Learning Models for the Potential Novel COVID-19 Protease Inhibitors Assisted by Molecular Dynamics Simulation. Curr Top Med Chem 2020; 20:2146-2167. [PMID: 32621718] [DOI: 10.2174/1568026620666200704135327]
Abstract
BACKGROUND The vast geographical expansion of the novel coronavirus and an increasing number of COVID-19-affected cases have overwhelmed health and public health services. Artificial Intelligence (AI) and Machine Learning (ML) algorithms have extended their major role in tracking disease patterns and in identifying possible treatments. OBJECTIVE This study aims to identify potential COVID-19 protease inhibitors through shape-based Machine Learning assisted by Molecular Docking and Molecular Dynamics simulations. METHODS Thirty-one repurposed compounds were selected targeting the main coronavirus protease (6LU7), and a machine learning approach was employed to generate shape-based molecules starting from the 3D shape and the pharmacophoric features of their seed compound. Ligand-receptor docking was performed with Optimized Potential for Liquid Simulations (OPLS) algorithms to identify high-affinity compounds from the list of selected candidates for 6LU7, which were subjected to Molecular Dynamics simulations followed by ADMET studies and other analyses. RESULTS Shape-based machine learning reported remdesivir, valrubicin, aprepitant, and fulvestrant as the best therapeutic agents with the highest affinity for the target protein. Among the best shape-based compounds, a novel compound was identified that is not indexed in any chemical database (PubChem, Zinc, or ChEMBL); it was therefore named 'nCorv-EMBS'. Further, toxicity analysis showed nCorv-EMBS to be suitable for further consideration as a main protease inhibitor in COVID-19. CONCLUSION Effective ACE-II, GAK, AAK1, and protease 3C blockers can serve as a novel therapeutic approach to block the binding and attachment of the main COVID-19 protease (PDB ID: 6LU7) to the host cell and thus inhibit the infection at AT2 receptors in the lung. The novel compound nCorv-EMBS herein proposed stands as a promising inhibitor to be evaluated further for COVID-19 treatment.
Affiliation(s)
- Anuraj Nayarisseri
- In silico Research Laboratory, Eminent Biosciences, Mahalakshmi Nagar, Indore-452010, Madhya Pradesh, India; Bioinformatics Research Laboratory, LeGene Biosciences Pvt Ltd., Mahalakshmi Nagar, Indore-452010, Madhya Pradesh, India; Research Chair for Biomedical Applications of Nanomaterials, Biochemistry Department, College of Science, King Saud University, Riyadh, Saudi Arabia; Computer Aided Drug Designing and Molecular Modeling Lab, Department of Bioinformatics, Alagappa University, Karaikudi-630 003, Tamil Nadu, India
- Ravina Khandelwal, Khushboo Sharma
- In silico Research Laboratory, Eminent Biosciences, Mahalakshmi Nagar, Indore-452010, Madhya Pradesh, India
- Maddala Madhavi
- Department of Zoology, Nizam College, Osmania University, Hyderabad-500001, Telangana State, India
- Chandrabose Selvaraj, Umesh Panwar, Sanjeev Kumar Singh
- Computer Aided Drug Designing and Molecular Modeling Lab, Department of Bioinformatics, Alagappa University, Karaikudi-630 003, Tamil Nadu, India
- Tajamul Hussain
- Center of Excellence in Biotechnology Research, College of Science, King Saud University, Riyadh, Saudi Arabia; Research Chair for Biomedical Applications of Nanomaterials, Biochemistry Department, College of Science, King Saud University, Riyadh, Saudi Arabia

31
Uzunova H, Ehrhardt J, Handels H. Memory-efficient GAN-based domain translation of high resolution 3D medical images. Comput Med Imaging Graph 2020; 86:101801. [PMID: 33130418] [DOI: 10.1016/j.compmedimag.2020.101801]
Abstract
Generative adversarial networks (GANs) are currently rarely applied to 3D medical images of large size, due to their immense computational demand. The present work proposes a multi-scale patch-based GAN approach for establishing unpaired domain translation by generating 3D medical image volumes of high resolution in a memory-efficient way. The key idea to enable memory-efficient image generation is to first generate a low-resolution version of the image, followed by the generation of patches of constant size but successively growing resolution. To avoid patch artifacts and incorporate global information, the patch generation is conditioned on patches from previous resolution scales. These multi-scale GANs are trained to generate realistic-looking images from image sketches in order to perform an unpaired domain translation. This allows the topology of the test data to be preserved while generating the appearance of the training domain data. The evaluation of the domain translation scenarios is performed on brain MRIs of size 155 × 240 × 240 and thorax CTs of size up to 512³. Compared to common patch-based approaches, the multi-resolution scheme enables better image quality and prevents patch artifacts. Also, it ensures constant GPU memory demand independent of the image size, allowing for the generation of arbitrarily large images.
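The memory trick, generating constant-size patches one at a time and assembling them into the full volume, can be sketched as below. This simplified version tiles the volume exactly and uses a stub generator; the paper additionally overlaps patches and conditions each on coarser-scale context to avoid seams, which is omitted here.

```python
import numpy as np

def generate_by_patches(shape, patch, gen_patch):
    """Assemble a large volume from constant-size patches so the generator
    (a stub callable here, a GAN in the paper) only ever holds one patch
    in memory at a time: the essence of memory-efficient 3D synthesis.

    shape and patch are (D, H, W) tuples; in this sketch, patch must
    tile shape exactly.
    """
    out = np.zeros(shape, dtype=np.float32)
    for z in range(0, shape[0], patch[0]):
        for y in range(0, shape[1], patch[1]):
            for x in range(0, shape[2], patch[2]):
                # Each patch is generated independently, keyed by its origin.
                out[z:z + patch[0], y:y + patch[1], x:x + patch[2]] = \
                    gen_patch((z, y, x))
    return out

vol = generate_by_patches((4, 4, 4), (2, 2, 2),
                          lambda origin: np.full((2, 2, 2), float(sum(origin))))
assert vol.shape == (4, 4, 4)
```

Peak memory here scales with the patch size, not the volume size, which is why the approach extends to volumes such as 512³.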
Affiliation(s)
- Hristina Uzunova, Jan Ehrhardt, Heinz Handels
- Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, Lübeck, Germany

32
Pseudo-healthy synthesis with pathology disentanglement and adversarial learning. Med Image Anal 2020; 64:101719. [PMID: 32540700] [DOI: 10.1016/j.media.2020.101719]
Abstract
Pseudo-healthy synthesis is the task of creating a subject-specific 'healthy' image from a pathological one. Such images can be helpful in tasks such as anomaly detection and understanding changes induced by pathology and disease. In this paper, we present a model that is encouraged to disentangle the information of pathology from what seems to be healthy. We disentangle what appears to be healthy and where disease is as a segmentation map, which are then recombined by a network to reconstruct the input disease image. We train our models adversarially using either paired or unpaired settings, where we pair disease images and maps when available. We quantitatively and subjectively, with a human study, evaluate the quality of pseudo-healthy images using several criteria. We show in a series of experiments, performed on ISLES, BraTS and Cam-CAN datasets, that our method is better than several baselines and methods from the literature. We also show that due to better training processes we could recover deformations, on surrounding tissue, caused by disease. Our implementation is publicly available at https://github.com/xiat0616/pseudo-healthy-synthesis.
33
Artificial intelligence and machine learning in nephropathology. Kidney Int 2020; 98:65-75. [PMID: 32475607] [DOI: 10.1016/j.kint.2020.02.027]
Abstract
Artificial intelligence (AI) for the purpose of this review is an umbrella term for technologies emulating a nephropathologist's ability to extract information on diagnosis, prognosis, and therapy responsiveness from native or transplant kidney biopsies. Although AI can be used to analyze a wide variety of biopsy-related data, this review focuses on whole slide images traditionally used in nephropathology. AI applications in nephropathology have recently become available through several advancing technologies, including (i) widespread introduction of glass slide scanners, (ii) data servers in pathology departments worldwide, and (iii) through greatly improved computer hardware to enable AI training. In this review, we explain how AI can enhance the reproducibility of nephropathology results for certain parameters in the context of precision medicine using advanced architectures, such as convolutional neural networks, that are currently the state of the art in machine learning software for this task. Because AI applications in nephropathology are still in their infancy, we show the power and potential of AI applications mostly in the example of oncopathology. Moreover, we discuss the technological obstacles as well as the current stakeholder and regulatory concerns about developing AI applications in nephropathology from the perspective of nephropathologists and the wider nephrology community. We expect the gradual introduction of these technologies into routine diagnostics and research for selective tasks, suggesting that this technology will enhance the performance of nephropathologists rather than making them redundant.
34
Lu L, Daigle BJ. Prognostic analysis of histopathological images using pre-trained convolutional neural networks: application to hepatocellular carcinoma. PeerJ 2020; 8:e8668. [PMID: 32201640] [PMCID: PMC7073245] [DOI: 10.7717/peerj.8668]
Abstract
Histopathological images contain rich phenotypic descriptions of the molecular processes underlying disease progression. Convolutional neural networks, state-of-the-art image analysis techniques in computer vision, automatically learn representative features from such images which can be useful for disease diagnosis, prognosis, and subtyping. Hepatocellular carcinoma (HCC) is the sixth most common type of primary liver malignancy. Despite the high mortality rate of HCC, little previous work has made use of CNN models to explore the use of histopathological images for prognosis and clinical survival prediction of HCC. We applied three pre-trained CNN models—VGG 16, Inception V3 and ResNet 50—to extract features from HCC histopathological images. Sample visualization and classification analyses based on these features showed a very clear separation between cancer and normal samples. In a univariate Cox regression analysis, 21.4% and 16% of image features on average were significantly associated with overall survival (OS) and disease-free survival (DFS), respectively. We also observed significant correlations between these features and integrated biological pathways derived from gene expression and copy number variation. Using an elastic net regularized Cox Proportional Hazards model of OS constructed from Inception image features, we obtained a concordance index (C-index) of 0.789 and a significant log-rank test (p = 7.6E−18). We also performed unsupervised classification to identify HCC subgroups from image features. The optimal two subgroups discovered using Inception model image features showed significant differences in both overall (C-index = 0.628 and p = 7.39E−07) and DFS (C-index = 0.558 and p = 0.012). 
Our work demonstrates the utility of features extracted with pre-trained models by using them to build accurate prognostic models of HCC and by highlighting significant correlations between these features, clinical survival, and relevant biological pathways. Image features extracted from HCC histopathological images using the pre-trained CNN models VGG 16, Inception V3 and ResNet 50 can accurately distinguish normal and cancer samples. Furthermore, these image features are significantly correlated with survival and relevant biological pathways.
Affiliation(s)
- Liangqun Lu
- Departments of Biological Sciences and Computer Science, The University of Memphis, Memphis, TN, USA
- Bernie J Daigle
- Departments of Biological Sciences and Computer Science, The University of Memphis, Memphis, TN, USA
35
|
Tobore I, Li J, Yuhang L, Al-Handarish Y, Kandwal A, Nie Z, Wang L. Deep Learning Intervention for Health Care Challenges: Some Biomedical Domain Considerations. JMIR Mhealth Uhealth 2019; 7:e11966. [PMID: 31376272 PMCID: PMC6696854 DOI: 10.2196/11966] [Citation(s) in RCA: 58] [Impact Index Per Article: 11.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2018] [Revised: 04/14/2019] [Accepted: 06/12/2019] [Indexed: 01/10/2023] Open
Abstract
The use of deep learning (DL) for the analysis and diagnosis of biomedical and health care problems has received unprecedented attention in the last decade. The technique has unearthed meaningful features and accomplished tasks that were hitherto difficult to solve by other methods and by human experts. Currently, biological and medical devices, treatments, and applications generate large volumes of data in the form of images, sounds, text, graphs, and signals, giving rise to the concept of big data. DL is a developing trend for representing and analyzing data in the wake of big data: it is a class of machine learning algorithms in which many hidden layers of similar function are cascaded into a network, giving it the capability to make meaning from medical big data. Personalized health care delivery will be driven in part by mobile health (mHealth), and DL can provide the analysis for the deluge of data generated by mHealth apps. This paper reviews the fundamentals of DL methods and presents a general view of trends in DL by capturing literature from PubMed and the Institute of Electrical and Electronics Engineers database that implements different variants of DL. We highlight implementations of DL in health care, which we categorize into biological systems, electronic health records, medical images, and physiological signals. In addition, we discuss some inherent challenges of DL affecting the biomedical and health domains, as well as prospective research directions that focus on improving health management by promoting the application of physiological signals and modern internet technology.
Affiliation(s)
- Igbe Tobore
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Graduate University, Chinese Academy of Sciences, Beijing, China
- Jingzhen Li
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Liu Yuhang
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yousef Al-Handarish
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Abhishek Kandwal
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zedong Nie
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Lei Wang
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China