1. Karikari E, Koshechkin KA. Review on brain-computer interface technologies in healthcare. Biophys Rev 2023; 15:1351-1358. [PMID: 37974976; PMCID: PMC10643750; DOI: 10.1007/s12551-023-01138-6]
Abstract
Brain-computer interface (BCI) technologies have developed as a game changer, altering how humans interact with computers and opening up new avenues for understanding and harnessing the power of the human brain. The goal of this study is to assess recent breakthroughs in BCI technologies and their future prospects. The paper starts with an outline of the fundamental concepts and principles that underpin BCI technologies. It examines the main forms of BCIs, including invasive, partially invasive, and non-invasive interfaces, emphasizing their advantages and disadvantages. The progress of BCI hardware and signal processing techniques is investigated, with a focus on the shift from bulky, invasive systems to more portable and user-friendly options. The article then delves into important advances in BCI applications across several fields. It investigates the use of BCIs in healthcare, particularly in neurorehabilitation, assistive technology, and cognitive enhancement, and surveys research on BCIs' potential for boosting human capacities such as communication, motor control, and sensory perception. It also examines emerging BCI applications in gaming, entertainment, and virtual reality, demonstrating how BCI technologies are growing beyond medical and therapeutic settings. The study further sheds light on the problems and limits that prevent BCIs from being widely adopted. Ethical concerns about privacy, data security, and informed consent are addressed, highlighting the importance of strong legislative frameworks to enable responsible and ethical use of BCI technologies. Technological issues are also discussed, such as increasing signal resolution and precision, improving system reliability, and enabling smooth integration with existing technology. Finally, the paper gives an in-depth examination of the advances and future possibilities of BCI technologies. It emphasizes the transformative influence of BCIs on human-computer interaction and their potential to reshape healthcare, gaming, and other industries. By addressing current problems and envisioning future possibilities, this research aims to stimulate further innovation and progress in the field of brain-computer interfaces.
Affiliation(s)
- Evelyn Karikari
- Department of Public Health and Healthcare, I.M. Sechenov First Moscow State Medical University, Moscow, Russia
- Konstantin A. Koshechkin
- The Digital Health Institute, I.M. Sechenov First Moscow State Medical University, Moscow, Russia
2. Karimi N, Motovali-Bashi M, Ghaderi-Zefrehei M. Gene network reveals LASP1, TUBA1C, and S100A6 are likely playing regulatory roles in multiple sclerosis. Front Neurol 2023; 14:1090631. [PMID: 36970516; PMCID: PMC10035600; DOI: 10.3389/fneur.2023.1090631]
Abstract
Introduction: Multiple sclerosis (MS), a non-contagious and chronic disease of the central nervous system, is an unpredictable and indirectly inherited disease that affects different people in different ways. Using omics platforms (genomics, transcriptomics, proteomics, epigenomics, interactomics, and metabolomics databases), it is now possible to construct sound systems-biology models to extract fuller knowledge of MS and to recognize pathways that may uncover personalized therapeutic tools.
Methods: In this study, we used several Bayesian networks (BNs) to find the transcriptional gene regulation networks that drive MS, applying a set of BN algorithms from the R add-on package bnlearn. The BN results underwent further downstream analysis and were validated using a wide range of Cytoscape algorithms, web-based computational tools, and qPCR amplification of blood samples from 56 MS patients and 44 healthy controls. The results were semantically integrated to improve understanding of the complex molecular architecture underlying MS, distinguishing distinct metabolic pathways and providing a valuable foundation for the discovery of involved genes and possibly new treatments.
Results: The LASP1, TUBA1C, and S100A6 genes were most likely playing a biological role in MS development. qPCR showed a significant increase (P < 0.05) in LASP1 and S100A6 gene expression levels in MS patients compared to controls, whereas a significant downregulation of the TUBA1C gene was observed in the same comparison.
Conclusion: This study provides potential diagnostic and therapeutic biomarkers for an enhanced understanding of the gene regulation underlying MS.
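The qPCR comparison of expression between patients and controls is commonly quantified with the 2^-ΔΔCt relative-expression method; whether these authors used exactly this formula is an assumption. A minimal sketch with illustrative Ct values (not the study's data):

```python
def fold_change_ddct(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression (fold change) via the 2^-ddCt method."""
    dct_case = ct_target_case - ct_ref_case  # normalize target to reference gene (cases)
    dct_ctrl = ct_target_ctrl - ct_ref_ctrl  # normalize target to reference gene (controls)
    ddct = dct_case - dct_ctrl               # patient vs. control difference
    return 2.0 ** (-ddct)

# Hypothetical mean Ct values: the target gene amplifies 2 cycles earlier in patients,
# i.e., is more abundant, giving a ~4-fold upregulation.
fc = fold_change_ddct(ct_target_case=24.0, ct_ref_case=18.0,
                      ct_target_ctrl=26.0, ct_ref_ctrl=18.0)
print(fc)  # 4.0
```

A fold change above 1 corresponds to upregulation in patients (as reported for LASP1 and S100A6), below 1 to downregulation (as for TUBA1C).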
Affiliation(s)
- Nafiseh Karimi
- Department of Cell and Molecular Biology and Microbiology, Faculty of Biological Science and Technology, University of Isfahan, Isfahan, Iran
- Majid Motovali-Bashi
- Department of Cell and Molecular Biology and Microbiology, Faculty of Biological Science and Technology, University of Isfahan, Isfahan, Iran
- Correspondence: Majid Motovali-Bashi
3. Swanberg KM, Kurada AV, Prinsen H, Juchem C. Multiple sclerosis diagnosis and phenotype identification by multivariate classification of in vivo frontal cortex metabolite profiles. Sci Rep 2022; 12:13888. [PMID: 35974117; PMCID: PMC9381573; DOI: 10.1038/s41598-022-17741-8]
Abstract
Multiple sclerosis (MS) is a heterogeneous autoimmune disease for which diagnosis continues to rely on subjective clinical judgment over a battery of tests. Proton magnetic resonance spectroscopy (1H MRS) enables the noninvasive in vivo detection of multiple small-molecule metabolites and is therefore in principle a promising means of gathering information sufficient for multiple sclerosis diagnosis and subtype classification. Here we show that supervised classification using 1H-MRS-visible normal-appearing frontal cortex small-molecule metabolites alone can indeed differentiate individuals with progressive MS from control (held-out validation sensitivity 79% and specificity 68%), as well as between relapsing and progressive MS phenotypes (held-out validation sensitivity 84% and specificity 74%). Post hoc assessment demonstrated the disproportionate contributions of glutamate and glutamine to identifying MS status and phenotype, respectively. Our finding establishes 1H MRS as a viable means of characterizing progressive multiple sclerosis disease status and paves the way for continued refinement of this method as an auxiliary or mainstay of multiple sclerosis diagnostics.
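The held-out sensitivity and specificity figures above come from a supervised classifier evaluated on a validation split. A generic scikit-learn sketch of that evaluation pattern, using synthetic stand-in "metabolite" features rather than the study's 1H-MRS data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_metab = 200, 10
X = rng.normal(size=(n, n_metab))                 # stand-in metabolite concentrations
w = rng.normal(size=n_metab)                      # hypothetical disease signature
y = (X @ w + rng.normal(scale=2.0, size=n) > 0).astype(int)  # 1 = MS, 0 = control

# Hold out a validation set, train on the rest (any classifier could stand in here).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = LogisticRegression().fit(X_tr, y_tr)

# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP) on the held-out set.
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```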
Affiliation(s)
- Kelley M. Swanberg
- Department of Biomedical Engineering, Columbia University Fu Foundation School of Engineering and Applied Science, New York, NY, USA; Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA
- Abhinav V. Kurada
- Department of Biomedical Engineering, Columbia University Fu Foundation School of Engineering and Applied Science, New York, NY, USA
- Hetty Prinsen
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA
- Christoph Juchem
- Department of Biomedical Engineering, Columbia University Fu Foundation School of Engineering and Applied Science, New York, NY, USA; Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA; Department of Radiology, Columbia University College of Physicians and Surgeons, New York, NY, USA; Department of Neurology, Yale University School of Medicine, New Haven, CT, USA
4. Mei J, Cheng MM, Xu G, Wan LR, Zhang H. SANet: A Slice-Aware Network for Pulmonary Nodule Detection. IEEE Trans Pattern Anal Mach Intell 2022; 44:4374-4387. [PMID: 33687839; DOI: 10.1109/tpami.2021.3065086]
Abstract
Lung cancer is the most common cause of cancer death worldwide. Timely diagnosis of pulmonary nodules makes it possible to detect lung cancer at an early stage, and thoracic computed tomography (CT) provides a convenient way to diagnose nodules. However, it is hard even for experienced doctors to identify nodules among the massive number of CT slices. Existing nodule datasets are limited in both scale and category, which greatly restricts their applications. In this paper, we collect the largest and most diverse dataset to date for pulmonary nodule detection, named PN9. Specifically, it contains 8,798 CT scans and 40,439 annotated nodules from 9 common classes. We further propose a slice-aware network (SANet) for pulmonary nodule detection. A slice grouped non-local (SGNL) module is developed to capture long-range dependencies among any positions and any channels of one slice group in the feature map. We also introduce a 3D region proposal network to generate pulmonary nodule candidates with high sensitivity; because this detection stage usually comes with many false positives, a false positive reduction (FPR) module using multi-scale feature maps is proposed. To verify the performance of SANet and the significance of PN9, we perform extensive experiments against several state-of-the-art 2D CNN-based and 3D CNN-based detection methods. Promising evaluation results on PN9 demonstrate the effectiveness of the proposed SANet. The dataset and source code are available at https://mmcheng.net/SANet/.
5. Raza K, Singh NK. A Tour of Unsupervised Deep Learning for Medical Image Analysis. Curr Med Imaging 2021; 17:1059-1077. [PMID: 33504314; DOI: 10.2174/1573405617666210127154257]
Abstract
Background: Interpretation of medical images for the diagnosis and treatment of complex diseases from high-dimensional and heterogeneous data remains a key challenge in transforming healthcare. In the last few years, both supervised and unsupervised deep learning have achieved promising results in medical image analysis. Several reviews of supervised deep learning have been published, but hardly any rigorous review of unsupervised deep learning for medical image analysis is available.
Objectives: The objective of this review is to systematically present various unsupervised deep learning models, tools, and benchmark datasets applied to medical image analysis. The models discussed include autoencoders and their variants, restricted Boltzmann machines (RBM), deep belief networks (DBN), deep Boltzmann machines (DBM), and generative adversarial networks (GAN). Future research opportunities and challenges of unsupervised deep learning techniques for medical image analysis are also discussed.
Conclusion: Currently, interpretation of medical images for diagnostic purposes is usually performed by human experts, who may eventually be assisted or replaced by computer-aided diagnosis owing to advances in machine learning, including deep learning, and the availability of cheap computing infrastructure through cloud computing. Both supervised and unsupervised machine learning approaches are widely applied in medical image analysis, each with its own pros and cons. Since human supervision is not always available, adequate, or unbiased, unsupervised learning algorithms hold considerable promise for biomedical image analysis.
Affiliation(s)
- Khalid Raza
- Department of Computer Science, Jamia Millia Islamia, New Delhi, India
6. Pedersen M, Verspoor K, Jenkinson M, Law M, Abbott DF, Jackson GD. Artificial intelligence for clinical decision support in neurology. Brain Commun 2020; 2:fcaa096. [PMID: 33134913; PMCID: PMC7585692; DOI: 10.1093/braincomms/fcaa096]
Abstract
Artificial intelligence is one of the most exciting methodological shifts of our era. It holds the potential to transform healthcare as we know it into a system in which humans and machines work together to provide better treatment for our patients. It is now clear that cutting-edge artificial intelligence models in conjunction with high-quality clinical data will lead to improved prognostic and diagnostic models in neurological disease, facilitating expert-level clinical decision tools across healthcare settings. Despite the clinical promise of artificial intelligence, machine- and deep-learning algorithms are not a one-size-fits-all solution for all types of clinical data and questions. In this article, we provide an overview of the core concepts of artificial intelligence, particularly contemporary deep-learning methods, to give clinicians and neuroscience researchers an appreciation of how artificial intelligence can be harnessed to support clinical decisions. We clarify and emphasize the data quality and the human expertise needed to build robust clinical artificial intelligence models in neurology. As artificial intelligence is a rapidly evolving field, we take the opportunity to iterate important ethical principles to guide the field of medicine as it moves into an artificial-intelligence-enhanced future.
Affiliation(s)
- Mangor Pedersen
- The Florey Institute of Neuroscience and Mental Health, The University of Melbourne, Heidelberg, VIC 3084, Australia; Department of Psychology, Auckland University of Technology (AUT), Auckland, 0627, New Zealand
- Karin Verspoor
- School of Computing and Information Systems, The University of Melbourne, Parkville, VIC 3010, Australia
- Mark Jenkinson
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, OX3 9DU, UK; South Australian Health and Medical Research Institute (SAHMRI), Adelaide, SA 5000, Australia; Australian Institute for Machine Learning (AIML), The University of Adelaide, Adelaide, SA 5000, Australia
- Meng Law
- Department of Radiology, Alfred Hospital, Melbourne, VIC 3181, Australia; Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, VIC 3181, Australia; Department of Neuroscience, Monash School of Medicine, Nursing and Health Sciences, Melbourne, VIC 3181, Australia
- David F Abbott
- The Florey Institute of Neuroscience and Mental Health, The University of Melbourne, Heidelberg, VIC 3084, Australia; Department of Medicine, Austin Health, The University of Melbourne, Heidelberg, VIC 3084, Australia
- Graeme D Jackson
- The Florey Institute of Neuroscience and Mental Health, The University of Melbourne, Heidelberg, VIC 3084, Australia; Department of Medicine, Austin Health, The University of Melbourne, Heidelberg, VIC 3084, Australia; Department of Neurology, Austin Health, Heidelberg, VIC 3084, Australia
7. High-dimensional detection of imaging response to treatment in multiple sclerosis. NPJ Digit Med 2019; 2:49. [PMID: 31304395; PMCID: PMC6556513; DOI: 10.1038/s41746-019-0127-8]
Abstract
Changes on brain imaging may precede clinical manifestations or disclose disease progression opaque to conventional clinical measures. Where, as in multiple sclerosis, the pathological process has a complex anatomical distribution, such changes are not easily detected by low-dimensional models in common use. This hinders our ability to detect treatment effects, both in the management of individual patients and in interventional trials. Here we compared the ability of conventional models to detect an imaging response to treatment against high-dimensional models incorporating a wide multiplicity of imaging factors. We used fully-automated image analysis to extract 144 regional, longitudinal trajectories of pre- and post-treatment changes in brain volume and disconnection in a cohort of 124 natalizumab-treated patients. Low- and high-dimensional models of the relationship between treatment and the trajectories of change were built and evaluated with machine learning, quantifying performance with receiver operating characteristic curves. Simulations of randomised controlled trials enrolling varying numbers of patients were used to quantify the impact of dimensionality on statistical efficiency. Compared to existing methods, high-dimensional models were superior in treatment response detection (area under the receiver operating characteristic curve = 0.890 [95% CI = 0.885–0.895] vs. 0.686 [95% CI = 0.679–0.693], P < 0.01) and in statistical efficiency (achieved statistical power = 0.806 [95% CI = 0.698–0.872] vs. 0.508 [95% CI = 0.403–0.593] with number of patients enrolled = 50, at α = 0.01). High-dimensional models based on routine, clinical imaging can substantially enhance the detection of the imaging response to treatment in multiple sclerosis, potentially enabling more accurate individual prediction and greater statistical efficiency of randomised controlled trials.
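The statistical-power comparison above rests on a standard idea: simulate many trials of a given size and count how often the treatment effect is detected at significance level α. A generic sketch with a simple two-sample t-test on synthetic outcomes (not the paper's high-dimensional imaging model):

```python
import numpy as np
from scipy import stats

def simulated_power(n_per_arm, effect_size, alpha=0.01, n_trials=2000, seed=0):
    """Fraction of simulated two-arm trials whose t-test rejects H0 at level alpha."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_trials):
        treated = rng.normal(loc=effect_size, scale=1.0, size=n_per_arm)
        control = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)
        _, p = stats.ttest_ind(treated, control)
        rejections += p < alpha
    return rejections / n_trials

# Larger standardized effects are detected far more often at fixed n and alpha;
# a model that sharpens the measured effect therefore raises achieved power.
print(simulated_power(n_per_arm=25, effect_size=1.0))
print(simulated_power(n_per_arm=25, effect_size=0.3))
```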
8. Zhang S, Dong Q, Zhang W, Huang H, Zhu D, Liu T. Discovering hierarchical common brain networks via multimodal deep belief network. Med Image Anal 2019; 54:238-252. [PMID: 30954851; PMCID: PMC6487231; DOI: 10.1016/j.media.2019.03.011]
Abstract
Studying a common architecture that reflects both the brain's structural and functional organization across individuals and populations, in a hierarchical way, has been of significant interest in the brain mapping field. Recently, deep learning models have exhibited the ability to extract meaningful hierarchical structures from brain imaging data, e.g., fMRI and DTI. However, deep learning models have rarely been used to explore the relation between brain structure and function. In this paper, we propose a novel multimodal deep belief network (DBN) model to discover and quantitatively represent the hierarchical organization of common and consistent brain networks from both fMRI and DTI data. A prominent characteristic of the DBN is that it is capable of extracting meaningful features from complex neuroimaging data in a hierarchical manner. With our proposed DBN model, three hierarchical layers with hundreds of common and consistent brain networks across individual brains are successfully constructed by learning a large set of representative features from fMRI/DTI data.
Affiliation(s)
- Shu Zhang
- Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA, USA
- Qinglin Dong
- Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA, USA
- Wei Zhang
- Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA, USA
- Heng Huang
- School of Automation, Northwestern Polytechnical University, Xi'an, 710072, China
- Dajiang Zhu
- The University of Texas at Arlington, Arlington, TX 76010, USA
- Tianming Liu
- Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA, USA
9. Shen T, Jiang J, Lin W, Ge J, Wu P, Zhou Y, Zuo C, Wang J, Yan Z, Shi K. Use of Overlapping Group LASSO Sparse Deep Belief Network to Discriminate Parkinson's Disease and Normal Control. Front Neurosci 2019; 13:396. [PMID: 31110472; PMCID: PMC6501727; DOI: 10.3389/fnins.2019.00396]
Abstract
As a medical imaging technology that can show the metabolism of the brain, 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) is of great value for the diagnosis of Parkinson's disease (PD). With the development of pattern recognition technology, analysis of brain images using deep learning is becoming more and more popular. However, existing computer-aided diagnosis technologies often overfit and have poor generalizability. We therefore developed a framework based on a Group Lasso Sparse Deep Belief Network (GLS-DBN) for discriminating PD and normal control (NC) subjects based on FDG-PET imaging. In this study, 225 NC and 125 PD subjects from Huashan and Wuxi 904 hospitals were selected and divided into a training & validation dataset and two test datasets. First, subjects in the training & validation set were randomly partitioned 80:20, with multiple training iterations for the deep learning model. Next, locally linear embedding was used as a dimension-reduction algorithm. Then, the GLS-DBN was used for feature learning and classification. Different sparse DBN models were compared across datasets to evaluate the effectiveness of our framework. Accuracy, sensitivity, and specificity were examined to validate the results, and output variables of the network were correlated with longitudinal changes in movement disorder rating scales (UPDRS, H&Y). Prediction accuracy for the classification of PD and NC subjects (90% in Test 1, 86% in Test 2) outperformed conventional approaches, and output scores of the network were strongly correlated with UPDRS and H&Y (R = 0.705, p < 0.001 and R = 0.697, p < 0.001 in Test 1; R = 0.592, p = 0.0018 and R = 0.528, p = 0.0067 in Test 2). These results show that the GLS-DBN is a feasible method for early diagnosis of PD.
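The reported R values relating network outputs to the UPDRS and H&Y scales are Pearson correlation coefficients. A minimal sketch of that correlation step using SciPy with synthetic scores (not the study's data; the linear relationship below is assumed for illustration):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
network_score = rng.normal(size=30)  # hypothetical classifier output, one value per subject
# Synthetic clinical scale correlated with the network output plus measurement noise.
updrs = 2.0 * network_score + rng.normal(scale=1.0, size=30)

r, p = pearsonr(network_score, updrs)
print(f"R = {r:.3f}, p = {p:.4g}")  # strong positive correlation expected by construction
```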
Affiliation(s)
- Ting Shen
- Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China
- Jiehui Jiang
- Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, Shanghai University, Shanghai, China
- Wei Lin
- Department of Neurosurgery, 904 Hospital of PLA, Anhui Medical University, Wuxi, China
- Jingjie Ge
- PET Center, Huashan Hospital, Fudan University, Shanghai, China
- Ping Wu
- PET Center, Huashan Hospital, Fudan University, Shanghai, China
- Yongjin Zhou
- Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China
- Chuantao Zuo
- PET Center, Huashan Hospital, Fudan University, Shanghai, China; Institute of Functional and Molecular Medical Imaging, Fudan University, Shanghai, China; Human Phenome Institute, Fudan University, Shanghai, China
- Jian Wang
- Department of Neurology, Huashan Hospital, Fudan University, Shanghai, China
- Zhuangzhi Yan
- Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China
- Kuangyu Shi
- Department of Nuclear Medicine, University Hospital Bern, Bern, Switzerland; Department of Nuclear Medicine, Technische Universitat Munchen, Munich, Germany
10. Bermudez C, Plassard AJ, Davis TL, Newton AT, Resnick SM, Landman BA. Learning Implicit Brain MRI Manifolds with Deep Learning. Proc SPIE Int Soc Opt Eng 2018; 10574:105741L. [PMID: 29887659; PMCID: PMC5990281; DOI: 10.1117/12.2293515]
Abstract
An important task in image processing and neuroimaging is to extract quantitative information from acquired images in order to make observations about the presence of disease or markers of development in populations. Having a low-dimensional manifold of an image allows for easier statistical comparisons between groups and the synthesis of group representatives. Previous studies have sought to identify the best mapping of brain MRI to a low-dimensional manifold, but have been limited by assumptions of explicit similarity measures. In this work, we use deep learning techniques to investigate implicit manifolds of normal brains and generate new, high-quality images. We explore implicit manifolds by addressing the problems of image synthesis and image denoising as important tools in manifold learning. First, we propose the unsupervised synthesis of T1-weighted brain MRI using a Generative Adversarial Network (GAN) learned from 528 examples of 2D axial slices of brain MRI. Synthesized images were first shown to be unique by performing a cross-correlation with the training set. Real and synthesized images were then assessed in a blinded manner by two imaging experts, who provided image quality scores of 1-5. The quality scores of the synthetic images showed substantial overlap with those of the real images. Moreover, we use an autoencoder with skip connections for image denoising, showing that the proposed method achieves higher PSNR than FSL SUSAN after denoising. This work shows the power of artificial networks to synthesize realistic imaging data, which can be used to improve image processing techniques and provide a quantitative framework for assessing structural changes in the brain.
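The denoising comparison above is scored by peak signal-to-noise ratio (PSNR). A generic formulation for images scaled to [0, 1] (not the paper's exact pipeline; the noise levels below are illustrative):

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(1)
clean = rng.random((64, 64))                                               # stand-in image
noisy = np.clip(clean + rng.normal(scale=0.1, size=clean.shape), 0, 1)     # corrupted input
denoised = np.clip(clean + rng.normal(scale=0.02, size=clean.shape), 0, 1) # hypothetical output

# A denoiser that leaves less residual noise yields a higher PSNR against the clean image.
print(psnr(clean, noisy), psnr(clean, denoised))
```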
Affiliation(s)
- Camilo Bermudez
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Andrew J Plassard
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Taylor L Davis
- Department of Radiology, Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Allen T Newton
- Department of Radiology, Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Susan M Resnick
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Bennett A Landman
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN 37235, USA
11. Doyle A, Elliott C, Karimaghaloo Z, Subbanna N, Arnold DL, Arbel T. Lesion Detection, Segmentation and Prediction in Multiple Sclerosis Clinical Trials. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries 2018. [DOI: 10.1007/978-3-319-75238-9_2]
12. Carneiro G, Nascimento J, Bradley AP. Automated Analysis of Unregistered Multi-View Mammograms With Deep Learning. IEEE Trans Med Imaging 2017; 36:2355-2365. [PMID: 28920897; DOI: 10.1109/tmi.2017.2751523]
Abstract
We describe an automated methodology for the analysis of unregistered cranio-caudal (CC) and medio-lateral oblique (MLO) mammography views in order to estimate the patient's risk of developing breast cancer. The main innovation behind this methodology lies in the use of deep learning models for the problem of jointly classifying unregistered mammogram views and respective segmentation maps of breast lesions (i.e., masses and micro-calcifications). This is a holistic methodology that can classify a whole mammographic exam, containing the CC and MLO views and the segmentation maps, as opposed to the classification of individual lesions, which is the dominant approach in the field. We also demonstrate that the proposed system is capable of using the segmentation maps generated by automated mass and micro-calcification detection systems, and still producing accurate results. The semi-automated approach (using manually defined mass and micro-calcification segmentation maps) is tested on two publicly available data sets (INbreast and DDSM), and results show that the volume under ROC surface (VUS) for a 3-class problem (normal tissue, benign, and malignant) is over 0.9, the area under ROC curve (AUC) for the 2-class "benign versus malignant" problem is over 0.9, and for the 2-class breast screening problem (malignancy versus normal/benign) it is also over 0.9. For the fully automated approach, the VUS on INbreast is over 0.7, the AUC for the 2-class "benign versus malignant" problem is over 0.78, and the AUC for 2-class breast screening is 0.86.
13. Deep learning of joint myelin and T1w MRI features in normal-appearing brain tissue to distinguish between multiple sclerosis patients and healthy controls. Neuroimage Clin 2017; 17:169-178. [PMID: 29071211; PMCID: PMC5651626; DOI: 10.1016/j.nicl.2017.10.015]
Abstract
Myelin imaging is a form of quantitative magnetic resonance imaging (MRI) that measures myelin content and can potentially allow demyelinating diseases such as multiple sclerosis (MS) to be detected earlier. Although focal lesions are the most visible signs of MS pathology on conventional MRI, it has been shown that even tissues that appear normal may exhibit decreased myelin content as revealed by myelin-specific images (i.e., myelin maps). Current methods for analyzing myelin maps typically use global or regional mean myelin measurements to detect abnormalities, but ignore finer spatial patterns that may be characteristic of MS. In this paper, we present a machine learning method to automatically learn, from multimodal MR images, latent spatial features that can potentially improve the detection of MS pathology at early stage. More specifically, 3D image patches are extracted from myelin maps and the corresponding T1-weighted (T1w) MRIs, and are used to learn a latent joint myelin-T1w feature representation via unsupervised deep learning. Using a data set of images from MS patients and healthy controls, a common set of patches are selected via a voxel-wise t-test performed between the two groups. In each MS image, any patches overlapping with focal lesions are excluded, and a feature imputation method is used to fill in the missing values. A feature selection process (LASSO) is then utilized to construct a sparse representation. The resulting normal-appearing features are used to train a random forest classifier. 
Using the myelin and T1w images of 55 relapse-remitting MS patients and 44 healthy controls in an 11-fold cross-validation experiment, the proposed method achieved an average classification accuracy of 87.9% (SD = 8.4%), which is higher and more consistent across folds than those attained by regional mean myelin (73.7%, SD = 13.7%) and T1w measurements (66.7%, SD = 10.6%), or deep-learned features in either the myelin (83.8%, SD = 11.0%) or T1w (70.1%, SD = 13.6%) images alone, suggesting that the proposed method has strong potential for identifying image features that are more sensitive and specific to MS pathology in normal-appearing brain tissues.
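The selection-and-classification pipeline described in this abstract can be illustrated with a small sketch. This is not the authors' code; it uses synthetic stand-in data in place of deep-learned myelin/T1w patch features, a plain t-statistic threshold in place of the paper's voxel-wise t-test, and default scikit-learn estimators for the LASSO and random forest stages.

```python
# Hedged sketch of the abstract's pipeline: group-wise t-statistics pick
# candidate patch features, LASSO sparsifies them, and a random forest
# classifies subjects. All data are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_ms = rng.normal(size=(20, 500))        # 20 "MS" subjects x 500 patch features
X_hc = rng.normal(size=(20, 500))        # 20 "healthy control" subjects
X_ms[:, :30] += 0.8                      # imitate altered myelin in some patches

# 1) Voxel/patch-wise two-sample t-test analogue: keep features with |t| > 2.
m1, m0 = X_ms.mean(0), X_hc.mean(0)
v1, v0 = X_ms.var(0, ddof=1), X_hc.var(0, ddof=1)
t = (m1 - m0) / np.sqrt(v1 / 20 + v0 / 20)
selected = np.abs(t) > 2.0

X = np.vstack([X_ms, X_hc])[:, selected]
y = np.array([1] * 20 + [0] * 20)

# 2) LASSO builds a sparse representation over the selected features.
sparse_cols = np.flatnonzero(Lasso(alpha=0.05).fit(X, y).coef_)

# 3) Random forest on the sparse normal-appearing features.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:, sparse_cols], y)
train_acc = clf.score(X[:, sparse_cols], y)
```

In the paper the first two stages operate on features learned by unsupervised deep learning rather than raw intensities, and evaluation is done with cross-validation rather than training accuracy.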
14
Yoo Y, Tang LYW, Li DKB, Metz L, Kolind S, Traboulsee AL, Tam RC. Deep learning of brain lesion patterns and user-defined clinical and MRI features for predicting conversion to multiple sclerosis from clinically isolated syndrome. Comput Methods Biomech Biomed Eng Imaging Vis 2017. [DOI: 10.1080/21681163.2017.1356750] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6]
Affiliation(s)
- Youngjin Yoo
  - Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
  - Biomedical Engineering Program, University of British Columbia, Vancouver, Canada
  - MS/MRI Research Group, University of British Columbia, Vancouver, Canada
- Lisa Y. W. Tang
  - Department of Radiology, University of British Columbia, Vancouver, Canada
  - MS/MRI Research Group, University of British Columbia, Vancouver, Canada
- David K. B. Li
  - Department of Radiology, University of British Columbia, Vancouver, Canada
  - MS/MRI Research Group, University of British Columbia, Vancouver, Canada
- Luanne Metz
  - Division of Neurology, University of Calgary, Calgary, Canada
- Shannon Kolind
  - Division of Neurology, University of British Columbia, Vancouver, Canada
- Anthony L. Traboulsee
  - Division of Neurology, University of British Columbia, Vancouver, Canada
  - MS/MRI Research Group, University of British Columbia, Vancouver, Canada
- Roger C. Tam
  - Biomedical Engineering Program, University of British Columbia, Vancouver, Canada
  - Department of Radiology, University of British Columbia, Vancouver, Canada
  - MS/MRI Research Group, University of British Columbia, Vancouver, Canada
15
Song Y, Li Q, Huang H, Feng D, Chen M, Cai W. Low dimensional representation of Fisher vectors for microscopy image classification. IEEE Trans Med Imaging 2017; 36:1636-1649. [PMID: 28358678] [DOI: 10.1109/tmi.2017.2687466] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1]
Abstract
Microscopy image classification is important in various biomedical applications, such as cancer subtype identification and protein localization for high-content screening. To achieve automated and effective microscopy image classification, the representative and discriminative capability of image feature descriptors is essential. To this end, in this paper, we propose a new feature representation algorithm to facilitate automated microscopy image classification. In particular, we incorporate Fisher vector (FV) encoding with multiple types of local features that are handcrafted or learned, and we design a separation-guided dimension reduction method to reduce the descriptor dimension while increasing its discriminative capability. Our method is evaluated on four publicly available microscopy image data sets of different imaging types and applications, including the UCSB breast cancer data set, the MICCAI 2015 CBTC challenge data set, and the IICBU malignant lymphoma and RNAi data sets. Our experimental results demonstrate the advantage of the proposed low-dimensional FV representation, showing consistent performance improvement over the existing state of the art and commonly used dimension reduction techniques.
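The core encoding step can be sketched briefly. This is an illustrative first-order Fisher vector (gradient with respect to the GMM means only), not the paper's implementation, and plain PCA stands in for the separation-guided dimension reduction; all data are synthetic.

```python
# Hedged sketch: first-order Fisher vector encoding of local descriptors
# against a diagonal-covariance GMM, followed by dimension reduction.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import PCA

def fisher_vector(descriptors, gmm):
    """First-order FV: normalized gradient w.r.t. each GMM component mean."""
    gamma = gmm.predict_proba(descriptors)          # (N, K) soft assignments
    n = descriptors.shape[0]
    fv = []
    for k in range(gmm.n_components):
        diff = (descriptors - gmm.means_[k]) / np.sqrt(gmm.covariances_[k])
        grad = (gamma[:, k:k + 1] * diff).sum(axis=0)
        fv.append(grad / (n * np.sqrt(gmm.weights_[k])))
    return np.concatenate(fv)                       # length K * d

rng = np.random.default_rng(1)
train_desc = rng.normal(size=(1000, 8))             # pooled local descriptors
gmm = GaussianMixture(n_components=4, covariance_type="diag",
                      random_state=1).fit(train_desc)

# One FV per image, then dimension reduction across the image set.
fvs = np.stack([fisher_vector(rng.normal(size=(200, 8)), gmm)
                for _ in range(20)])
low = PCA(n_components=5).fit_transform(fvs)
```

The FV dimension grows as K times the descriptor dimension (here 4 × 8 = 32), which is why the paper pairs the encoding with a discriminative dimension reduction step.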
16
Wachinger C, Reuter M, Klein T. DeepNAT: Deep convolutional neural network for segmenting neuroanatomy. Neuroimage 2017; 170:434-445. [PMID: 28223187] [DOI: 10.1016/j.neuroimage.2017.02.035] [Citation(s) in RCA: 178] [Impact Index Per Article: 25.4] [Received: 10/25/2016] [Revised: 02/13/2017] [Accepted: 02/13/2017]
Abstract
We introduce DeepNAT, a 3D deep convolutional neural network for the automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance images. DeepNAT is an end-to-end learning-based approach to brain segmentation that jointly learns an abstract feature representation and a multi-class classification. We propose a 3D patch-based approach, where we predict not only the center voxel of the patch but also its neighbors, which is formulated as multi-task learning. To address the class imbalance problem, we arrange two networks hierarchically, where the first one separates foreground from background, and the second one identifies 25 brain structures on the foreground. Since patches lack spatial context, we augment them with coordinates. To this end, we introduce a novel intrinsic parameterization of the brain volume, formed by eigenfunctions of the Laplace-Beltrami operator. As network architecture, we use three convolutional layers with pooling, batch normalization, and non-linearities, followed by fully connected layers with dropout. The final segmentation is inferred from the probabilistic output of the network with a 3D fully connected conditional random field, which ensures label agreement between close voxels. The roughly 2.7 million parameters in the network are learned with stochastic gradient descent. Our results show that DeepNAT compares favorably to state-of-the-art methods. Finally, the purely learning-based method may have a high potential for the adaptation to young, old, or diseased brains by fine-tuning the pre-trained network with a small training sample on the target application, where the availability of larger datasets with manual annotations may boost the overall segmentation accuracy in the future.
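The patch-plus-coordinates idea can be sketched in a few lines. This is an assumption-laden simplification, not DeepNAT itself: raw voxel coordinates stand in for the paper's Laplace-Beltrami parameterization, and no network is built.

```python
# Hedged sketch: extract a 3D patch around each voxel and append the voxel's
# coordinates as extra features, so a patch-based classifier gains spatial
# context (DeepNAT uses Laplace-Beltrami eigenfunction coordinates instead).
import numpy as np

def extract_patches(volume, coords, size=3):
    """Return one feature row per voxel: size**3 intensities + 3 coordinates."""
    r = size // 2
    feats = []
    for (x, y, z) in coords:
        patch = volume[x - r:x + r + 1, y - r:y + r + 1, z - r:z + r + 1].ravel()
        feats.append(np.concatenate([patch, [x, y, z]]))
    return np.stack(feats)

# Toy volume whose intensity encodes the voxel index, for easy checking.
vol = np.arange(10 ** 3, dtype=float).reshape(10, 10, 10)
X = extract_patches(vol, [(5, 5, 5), (4, 4, 4)])
# Each row: 27 intensities + 3 coordinates = 30 features.
```

In the full method these rows feed a hierarchical pair of 3D CNNs (foreground/background, then 25 structures), with a conditional random field refining the final labels.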
Collapse
Affiliation(s)
- Christian Wachinger
  - Department of Child and Adolescent Psychiatry, Psychosomatic and Psychotherapy, Ludwig-Maximilian-University, Waltherstr. 23, 81369 Munich, Germany
- Martin Reuter
  - Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
  - Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
  - German Centre for Neurodegenerative Diseases (DZNE), Department of Image Analysis, Bonn, Germany
17
Comparison of deep learning and support vector machine learning for subgroups of multiple sclerosis. Computational Science and Its Applications – ICCSA 2017, 2017. [DOI: 10.1007/978-3-319-62395-5_11] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7]
18
Using deep belief network modelling to characterize differences in brain morphometry in schizophrenia. Sci Rep 2016; 6:38897. [PMID: 27941946] [PMCID: PMC5151017] [DOI: 10.1038/srep38897] [Citation(s) in RCA: 81] [Impact Index Per Article: 10.1] [Received: 06/10/2016] [Accepted: 11/15/2016]
Abstract
Neuroimaging-based models contribute to increasing our understanding of schizophrenia pathophysiology and can reveal the underlying characteristics of this and other clinical conditions. However, the considerable variability in reported neuroimaging results mirrors the heterogeneity of the disorder. Machine learning methods capable of representing invariant features could circumvent this problem. In this structural MRI study, we trained a deep learning model known as a deep belief network (DBN) to extract features from brain morphometry data and investigated its performance in discriminating between healthy controls (N = 83) and patients with schizophrenia (N = 143). We further analysed performance in classifying patients with a first-episode psychosis (N = 32). The DBN highlighted differences between classes, especially in the frontal, temporal, parietal, and insular cortices, and in some subcortical regions, including the corpus callosum, putamen, and cerebellum. The DBN was slightly more accurate as a classifier (accuracy = 73.6%) than the support vector machine (accuracy = 68.1%). Finally, the error rate of the DBN in classifying first-episode patients was 56.3%, indicating that the representations learned from patients with schizophrenia and healthy controls were not suitable for characterizing these patients. Our data suggest that deep learning could improve our understanding of psychiatric disorders such as schizophrenia by improving neuromorphometric analyses.
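A DBN of the kind described here is a stack of restricted Boltzmann machines topped by a classifier. As a rough, hedged illustration only (not the study's model), scikit-learn's `BernoulliRBM` can be stacked in a pipeline over synthetic stand-in morphometry features:

```python
# Hedged sketch: a DBN-style stack of two RBMs feeding a logistic-regression
# classifier, trained on synthetic [0, 1]-scaled "morphometry" features.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(2)
X = rng.random((60, 64))                 # 60 subjects x 64 stand-in features
y = np.array([0, 1] * 30)                # 0 = control, 1 = patient
X[y == 1, :8] += 0.2                     # weak group difference on a few features
X = np.clip(X, 0.0, 1.0)                 # BernoulliRBM expects values in [0, 1]

dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=32, learning_rate=0.05,
                          n_iter=20, random_state=2)),
    ("rbm2", BernoulliRBM(n_components=16, learning_rate=0.05,
                          n_iter=20, random_state=2)),
    ("clf", LogisticRegression(max_iter=1000)),
])
dbn.fit(X, y)
acc = dbn.score(X, y)                    # training accuracy, illustration only
```

Note the simplification: scikit-learn trains each RBM greedily via the pipeline without the supervised fine-tuning a full DBN would use, and real studies report cross-validated rather than training accuracy.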
19
Deep learning of brain lesion patterns for predicting future disease activity in patients with early symptoms of multiple sclerosis. Deep Learning and Data Labeling for Medical Applications, 2016. [DOI: 10.1007/978-3-319-46976-8_10] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4]
20
Unregistered multiview mammogram analysis with pre-trained deep learning models. Lecture Notes in Computer Science, 2015. [DOI: 10.1007/978-3-319-24574-4_78] [Citation(s) in RCA: 93] [Impact Index Per Article: 10.3]