1. Karikari E, Koshechkin KA. Review on brain-computer interface technologies in healthcare. Biophys Rev 2023;15:1351-1358. [PMID: 37974976] [PMCID: PMC10643750] [DOI: 10.1007/s12551-023-01138-6]
Abstract
Brain-computer interface (BCI) technologies have emerged as a game changer, altering how humans interact with computers and opening new avenues for understanding and harnessing the power of the human brain. The goal of this study is to assess recent breakthroughs in BCI technologies and their future prospects. The paper starts with an outline of the fundamental concepts and principles that underpin BCI technologies. It examines the main forms of BCIs, including invasive, partially invasive, and non-invasive interfaces, emphasizing their advantages and disadvantages. The progress of BCI hardware and signal processing techniques is investigated, with a focus on the shift from bulky, invasive systems to more portable and user-friendly options. The article then delves into important advances in BCI applications across several fields. It investigates the use of BCIs in healthcare, particularly in neurorehabilitation, assistive technology, and cognitive enhancement, and reviews their potential for boosting human capacities such as communication, motor control, and sensory perception. Furthermore, the article surveys emerging BCI applications in gaming, entertainment, and virtual reality, demonstrating how BCI technologies are growing beyond medical and therapeutic settings. The study also sheds light on the problems and limits that prevent BCIs from being widely adopted. Ethical concerns about privacy, data security, and informed consent are addressed, highlighting the importance of strong legislative frameworks to enable responsible and ethical use of BCI technologies. Technological issues are also examined, such as increasing signal resolution and precision, improving system reliability, and enabling smooth integration with existing technology. Finally, this paper gives an in-depth examination of the advances and future possibilities of BCI technologies. It emphasizes the transformative influence of BCIs on human-computer interaction and their potential to reshape healthcare, gaming, and other industries. By addressing current problems and imagining future possibilities, this research aims to stimulate further innovation and progress in the field of brain-computer interfaces.
Affiliation(s)
- Evelyn Karikari
- Department of Public Health and Healthcare, I.M. Sechenov First Moscow State Medical University, Moscow, Russia
- Konstantin A Koshechkin
- The Digital Health Institute, I.M. Sechenov First Moscow State Medical University, Moscow, Russia
2. Faghani S, Baffour FI, Ringler MD, Hamilton-Cave M, Rouzrokh P, Moassefi M, Khosravi B, Erickson BJ. A deep learning algorithm for detecting lytic bone lesions of multiple myeloma on CT. Skeletal Radiol 2023;52:91-98. [PMID: 35980454] [DOI: 10.1007/s00256-022-04160-z]
Abstract
BACKGROUND Whole-body low-dose CT is the recommended initial imaging modality to evaluate bone destruction as a result of multiple myeloma. Accurate interpretation of these scans to detect small lytic bone lesions is time intensive. A functional deep learning (DL) algorithm to detect lytic lesions on CTs could improve the value of these CTs for myeloma imaging. Our objectives were to develop a DL algorithm and determine its performance at detecting lytic lesions of multiple myeloma. METHODS Axial slices (2-mm section thickness) from whole-body low-dose CT scans of subjects with biochemically confirmed plasma cell dyscrasias were included in the study. Data were split into train and test sets at the patient level, targeting a 90%/10% split. Two musculoskeletal radiologists annotated lytic lesions on the images with bounding boxes. Subsequently, we developed a two-step deep learning model comprising bone segmentation followed by lesion detection. U-Net and "You Only Look Once" (YOLO) models were used as the bone segmentation and lesion detection algorithms, respectively. Diagnostic performance was determined using the area under the receiver operating characteristic curve (AUROC). RESULTS Forty whole-body low-dose CTs from 40 subjects yielded 2193 image slices. A total of 5640 lytic lesions were annotated. The two-step model achieved a sensitivity of 91.6% and a specificity of 84.6%. Lesion detection AUROC was 90.4%. CONCLUSION We developed a deep learning model that detects lytic bone lesions of multiple myeloma on whole-body low-dose CTs with high performance. External validation is required prior to widespread adoption in clinical practice.
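The reported operating point can be reproduced from raw detection counts. A minimal sketch, in Python, of the sensitivity and specificity definitions used above; the counts below are purely illustrative and are not the study's data:

```python
# Hedged sketch: lesion-level sensitivity and specificity from raw
# counts. The counts are hypothetical, chosen only to reproduce the
# reported 91.6% / 84.6% operating point.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true lesions the detector found."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of lesion-free cases correctly left unflagged."""
    return tn / (tn + fp)

# Illustrative counts (hypothetical):
tp, fn, tn, fp = 916, 84, 846, 154
print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # 91.6%
print(f"specificity = {specificity(tn, fp):.1%}")  # 84.6%
```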
Affiliation(s)
- Shahriar Faghani
- Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st St. SW, Rochester, MN, 55905, USA
- Francis I Baffour
- Division of Musculoskeletal Radiology, Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Michael D Ringler
- Division of Musculoskeletal Radiology, Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Pouria Rouzrokh
- Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st St. SW, Rochester, MN, 55905, USA
- Mana Moassefi
- Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st St. SW, Rochester, MN, 55905, USA
- Bardia Khosravi
- Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st St. SW, Rochester, MN, 55905, USA
- Bradley J Erickson
- Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st St. SW, Rochester, MN, 55905, USA
3. Kataria P, Dogra A, Sharma T, Goyal B. Trends in DNN Model Based Classification and Segmentation of Brain Tumor Detection. Open Neuroimag J 2022. [DOI: 10.2174/18744400-v15-e2206290]
Abstract
Background:
Due to the complexity of scrutinizing and diagnosing brain tumors from MR images, brain tumor analysis has become one of the most pressing concerns in medical imaging. Characterizing a brain tumor before any treatment, such as radiotherapy, requires decisive treatment planning and accurate implementation. As a result, early detection of brain tumors is imperative for better clinical outcomes and subsequent patient survival.
Introduction:
Brain tumor segmentation is a crucial task in medical image analysis. Because of tumor heterogeneity and varied intensity patterns, manual segmentation takes a long time, limiting the use of accurate quantitative interventions in clinical practice. Automated computer-based brain tumor image processing has become more valuable with technological advancement. With various imaging and statistical analysis tools, deep learning algorithms offer a viable option to enable health care practitioners to rule out the disease and estimate the growth.
Methods:
This article presents a comprehensive evaluation of conventional machine learning models as well as evolving deep learning techniques for brain tumor segmentation and classification.
Conclusion:
In this manuscript, a hierarchical review has been presented for brain tumor segmentation and detection. It is found that segmentation methods hold a wide margin for improvement in the implementation of adaptive thresholding and segmentation, that feature training and mapping require redundancy correction, that input training data need to be more exhaustive, and that detection algorithms must be robust in handling online input data analysis and tumor detection.
4. Nandhini Abirami R, Durai Raj Vincent PM, Rajinikanth V, Kadry S. COVID-19 Classification Using Medical Image Synthesis by Generative Adversarial Networks. Int J Uncertain Fuzz 2022. [DOI: 10.1142/s0218488522400128]
Abstract
The outbreak of novel coronavirus disease 2019, also called COVID-19, began in Wuhan, China, in December 2019. Since its outbreak, the infectious disease has spread rapidly across the globe. The testing methods adopted by medical practitioners gave false negatives, which is a major challenge. Medical imaging using deep learning can be adopted to speed up the testing process and avoid false negatives. This work proposes a novel approach, COVID-19 GAN, to perform coronavirus disease classification using medical image synthesis by a generative adversarial network. Detecting coronavirus infections from chest X-ray images is crucial for early diagnosis and effective treatment. To boost the performance of the deep learning model and improve the accuracy of classification, synthetic data augmentation is performed using generative adversarial networks. Here, the available COVID-19 positive chest X-ray images are fed into the StyleGAN2 model. The StyleGAN2 model is trained, and the data necessary for training the deep learning model for coronavirus classification are generated. The generated COVID-19 positive chest X-ray images and the normal chest X-ray images are fed into the deep learning model for training. An accuracy of 99.78% is achieved in classifying chest X-ray images using a CNN binary classifier model.
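The augmentation step described above amounts to pooling the scarce real positives with GAN-generated ones before training the binary classifier. A minimal Python sketch, where strings stand in for images and all counts and names are illustrative assumptions, not the study's data:

```python
import random

# Hedged sketch of GAN-based dataset balancing: synthetic positives
# from a trained generator are pooled with real positives so the
# classifier sees a balanced, shuffled training set. Strings stand in
# for image tensors; counts and file-name prefixes are hypothetical.

def build_training_set(real_pos, synthetic_pos, real_neg, seed=0):
    """Pool real and GAN-generated positives, attach labels, shuffle."""
    samples = [(x, 1) for x in real_pos + synthetic_pos]  # label 1 = COVID-19
    samples += [(x, 0) for x in real_neg]                 # label 0 = normal
    random.Random(seed).shuffle(samples)
    return samples

real_pos = [f"covid_{i}" for i in range(50)]        # scarce real positives
synthetic_pos = [f"gan_{i}" for i in range(450)]    # GAN-generated positives
real_neg = [f"normal_{i}" for i in range(500)]

train = build_training_set(real_pos, synthetic_pos, real_neg)
print(len(train), sum(y for _, y in train))  # 1000 samples, 500 positives
```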
Affiliation(s)
- R. Nandhini Abirami
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, Tamil Nadu, India
- P. M. Durai Raj Vincent
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, Tamil Nadu, India
- Venkatesan Rajinikanth
- Department of Electronics & Instrumentation Engineering, St. Joseph’s College of Engineering, Chennai 600119, Tamil Nadu, India
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, Norway
5. Thayumanavan M, Ramasamy A. Recurrent Neural Network Deep Learning Techniques for Brain Tumor Segmentation and Classification of Magnetic Resonance Imaging Images. Journal of Medical Imaging and Health Informatics 2022. [DOI: 10.1166/jmihi.2022.3943]
Abstract
A brain tumour is one of the most threatening diseases in the world; it reduces the life span of human beings. Computer vision is advantageous for human health research because it eliminates the need for subjective human judgement to obtain accurate data. The most reliable and widely used imaging techniques are CT scans, X-rays, and magnetic resonance imaging (MRI); MRI can locate tiny objects. The focus of this paper is the many techniques for detecting brain cancer using brain MRI. Early detection and diagnosis of a tumour are essential for the radiologist to initiate better treatment. MRI is a competent and speedy method of examining a brain tumour, and as a non-invasive technique it aids in the segmentation of brain tumour images. Deep learning algorithms deliver good outcomes in terms of reduced time consumption and precise tumour diagnosis. This research proposes a supervised deep learning model combining a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) to automatically find and segment brain tumours. These models categorize brain images as normal or pathological, and their performance was evaluated; the RNN model outperforms the CNN model, with an accuracy of 98.91 percent.
Affiliation(s)
- Meenal Thayumanavan
- Department of ECE, Kongunadu College of Engineering and Technology, Trichy, 621215, Tamil Nadu, India
- Asokan Ramasamy
- Department of ECE, Kongunadu College of Engineering and Technology, Trichy, 621215, Tamil Nadu, India
6. Yousef R, Gupta G, Yousef N, Khari M. A holistic overview of deep learning approach in medical imaging. Multimedia Systems 2022;28:881-914. [PMID: 35079207] [PMCID: PMC8776556] [DOI: 10.1007/s00530-021-00884-5]
Abstract
Medical images are a rich source of invaluable information used by clinicians. Recent technologies have introduced many advancements for exploiting this information to the fullest and using it to generate better analyses. Deep learning (DL) techniques have empowered medical image analysis in computer-assisted imaging contexts, presenting many solutions and improvements for radiologists and other specialists analyzing these images. In this paper, we present a survey of DL techniques used for a variety of tasks and medical image modalities, providing a critical review of recent developments in this direction. We have organized the paper to convey the significant traits of deep learning and explain its concepts, which is in turn helpful for non-experts in the medical community. We then present several applications of deep learning (e.g., segmentation, classification, detection, etc.) commonly used for clinical purposes at different anatomical sites, and we present the main key terms for DL attributes such as basic architectures, data augmentation, transfer learning, and feature selection methods. Medical images as inputs to deep learning architectures will be the mainstream in the coming years, and novel DL techniques are predicted to be at the core of medical image analysis. We conclude by addressing some research challenges and the solutions suggested for them in the literature, along with future promises and directions for further development.
Affiliation(s)
- Rammah Yousef
- Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan 173229, Himachal Pradesh, India
- Gaurav Gupta
- Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan 173229, Himachal Pradesh, India
- Nabhan Yousef
- Electronics and Communication Engineering, Marwadi University, Rajkot, Gujarat, India
- Manju Khari
- Jawaharlal Nehru University, New Delhi, India
7. Improving geometric P-norm-based glioma segmentation through deep convolutional autoencoder encapsulation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103232]
8. Brain Tumor Detection and Classification on MR Images by a Deep Wavelet Auto-Encoder Model. Diagnostics (Basel) 2021;11:1589. [PMID: 34573931] [PMCID: PMC8471235] [DOI: 10.3390/diagnostics11091589]
Abstract
The process of diagnosing brain tumors is very complicated for many reasons, including the brain's synaptic structure, size, and shape. Machine learning techniques are employed to help doctors detect brain tumors and support their decisions, and in recent years deep learning techniques have achieved great success in medical image analysis. This paper proposes a deep wavelet autoencoder model, named the "DWAE model", employed to classify each input data slice as tumor (abnormal) or no tumor (normal). A high-pass filter was used to show the heterogeneity of the MRI images and their integration with the input images, and a high median filter was utilized to merge slices. We improved the quality of the output slices by highlighting edges and smoothing the input MR brain images. Then, we applied a 4-connected seed-growing method, since thresholding clusters equal pixels in the input MR data. The segmented MR image slices feed the two layers of the proposed deep wavelet autoencoder model, with 200 hidden units in the first layer and 400 hidden units in the second. Training and testing of a softmax layer are performed for the identification of normal and abnormal MR images. The contribution of the deep wavelet autoencoder model lies in its analysis of the pixel patterns of MR brain images and its ability to detect and classify tumors with high accuracy, short run time, and low validation loss. To train and test the overall performance of the proposed model, we utilized 2500 MR brain images, consisting of normal and abnormal images, from BRATS2012, BRATS2013, BRATS2014, BRATS2015, the 2015 challenge, and ISLES. The experimental results show that the proposed model achieved an accuracy of 99.3%, a validation loss of 0.1, and low FPR and FNR values. This result demonstrates that the proposed DWAE model can facilitate the automatic detection of brain tumors.
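The stacked layout described above (two hidden layers of 200 and 400 units feeding a two-way softmax) can be sketched as a single forward pass. This is a hedged illustration with random placeholder weights; the flattened input size and the activation choice are assumptions for shape-checking only, not the paper's trained model or its wavelet preprocessing:

```python
import numpy as np

# Hedged sketch: shapes of a two-layer (200 -> 400 hidden units) encoder
# followed by a softmax over {normal, abnormal}. Weights are random
# placeholders; n_in and tanh are illustrative assumptions.

rng = np.random.default_rng(0)

def layer(x, w, b):
    return np.tanh(x @ w + b)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

n_in = 1024                                        # assumed flattened slice size
w1, b1 = rng.normal(size=(n_in, 200)) * 0.05, np.zeros(200)
w2, b2 = rng.normal(size=(200, 400)) * 0.05, np.zeros(400)
w3, b3 = rng.normal(size=(400, 2)) * 0.05, np.zeros(2)

x = rng.normal(size=(8, n_in))                     # a batch of 8 segmented slices
p = softmax(layer(layer(x, w1, b1), w2, b2) @ w3 + b3)
print(p.shape)                                     # (8, 2): P(normal), P(abnormal)
```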
9. Zopes J, Platscher M, Paganucci S, Federau C. Multi-Modal Segmentation of 3D Brain Scans Using Neural Networks. Front Neurol 2021;12:653375. [PMID: 34335436] [PMCID: PMC8318570] [DOI: 10.3389/fneur.2021.653375]
Abstract
Anatomical segmentation of brain scans is highly relevant for diagnostics and neuroradiology research. Conventionally, segmentation is performed on T1-weighted MRI scans, due to the strong soft-tissue contrast. In this work, we report on a comparative study of automated, learning-based brain segmentation on various other contrasts of MRI and also computed tomography (CT) scans and investigate the anatomical soft-tissue information contained in these imaging modalities. A large database of in total 853 MRI/CT brain scans enables us to train convolutional neural networks (CNNs) for segmentation. We benchmark the CNN performance on four different imaging modalities and 27 anatomical substructures. For each modality we train a separate CNN based on a common architecture. We find average Dice scores of 86.7 ± 4.1% (T1-weighted MRI), 81.9 ± 6.7% (fluid-attenuated inversion recovery MRI), 80.8 ± 6.6% (diffusion-weighted MRI) and 80.7 ± 8.2% (CT), respectively. The performance is assessed relative to labels obtained using the widely-adopted FreeSurfer software package. The segmentation pipeline uses dropout sampling to identify corrupted input scans or low-quality segmentations. Full segmentation of 3D volumes with more than 2 million voxels requires <1 s of processing time on a graphics processing unit.
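The Dice score used above to benchmark CNN segmentations against FreeSurfer labels has a compact definition, Dice = 2|A∩B| / (|A| + |B|). A minimal sketch on boolean masks (the toy masks are illustrative, not the study's data):

```python
import numpy as np

# Hedged sketch: per-structure Dice overlap between a predicted and a
# reference binary mask, as used to benchmark segmentation quality.

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice = 2|A∩B| / (|A| + |B|) for boolean masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Toy 2D example: a 4-voxel square vs. a 6-voxel rectangle sharing 4 voxels.
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(dice(a, b))  # 2*4 / (4+6) = 0.8
```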
Affiliation(s)
- Jonathan Zopes
- Institute for Biomedical Engineering, ETH Zürich, Zurich, Switzerland
- Moritz Platscher
- Institute for Biomedical Engineering, ETH Zürich, Zurich, Switzerland
- Silvio Paganucci
- Institute for Biomedical Engineering, ETH Zürich, Zurich, Switzerland
- Christian Federau
- Institute for Biomedical Engineering, ETH Zürich, Zurich, Switzerland
10. Gryska E, Schneiderman J, Björkman-Burtscher I, Heckemann RA. Automatic brain lesion segmentation on standard magnetic resonance images: a scoping review. BMJ Open 2021;11:e042660. [PMID: 33514580] [PMCID: PMC7849889] [DOI: 10.1136/bmjopen-2020-042660]
Abstract
OBJECTIVES Medical image analysis practices face challenges that can potentially be addressed with algorithm-based segmentation tools. In this study, we map the field of automatic MR brain lesion segmentation to understand the clinical applicability of prevalent methods and study designs, as well as challenges and limitations in the field. DESIGN Scoping review. SETTING Three databases (PubMed, IEEE Xplore and Scopus) were searched with tailored queries. Studies were included based on predefined criteria. Emerging themes during consecutive title, abstract, methods and whole-text screening were identified. The full-text analysis focused on materials, preprocessing, performance evaluation and comparison. RESULTS Out of 2990 unique articles identified through the search, 441 articles met the eligibility criteria, with an estimated growth rate of 10% per year. We present a general overview and trends in the field with regard to publication sources, segmentation principles used and types of lesions. Algorithms are predominantly evaluated by measuring the agreement of segmentation results with a trusted reference. Few articles describe measures of clinical validity. CONCLUSIONS The observed reporting practices leave room for improvement with a view to studying replication, method comparison and clinical applicability. To promote this improvement, we propose a list of recommendations for future studies in the field.
Affiliation(s)
- Emilia Gryska
- Medical Radiation Sciences, Goteborgs universitet Institutionen for kliniska vetenskaper, Goteborg, Sweden
- Justin Schneiderman
- Sektionen för klinisk neurovetenskap, Goteborgs Universitet Institutionen for Neurovetenskap och fysiologi, Goteborg, Sweden
- Rolf A Heckemann
- Medical Radiation Sciences, Goteborgs universitet Institutionen for kliniska vetenskaper, Goteborg, Sweden
11. Hadjiiski L, Samala R, Chan HP. Image Processing Analytics: Enhancements and Segmentation. Mol Imaging 2021. [DOI: 10.1016/b978-0-12-816386-3.00057-0]
12. Pedersen M, Verspoor K, Jenkinson M, Law M, Abbott DF, Jackson GD. Artificial intelligence for clinical decision support in neurology. Brain Commun 2020;2:fcaa096. [PMID: 33134913] [PMCID: PMC7585692] [DOI: 10.1093/braincomms/fcaa096]
Abstract
Artificial intelligence is one of the most exciting methodological shifts of our era. It holds the potential to transform healthcare as we know it, into a system where humans and machines work together to provide better treatment for our patients. It is now clear that cutting-edge artificial intelligence models in conjunction with high-quality clinical data will lead to improved prognostic and diagnostic models in neurological disease, facilitating expert-level clinical decision tools across healthcare settings. Despite the clinical promise of artificial intelligence, machine- and deep-learning algorithms are not a one-size-fits-all solution for all types of clinical data and questions. In this article, we provide an overview of the core concepts of artificial intelligence, particularly contemporary deep-learning methods, to give clinicians and neuroscience researchers an appreciation of how artificial intelligence can be harnessed to support clinical decisions. We clarify and emphasize the data quality and the human expertise needed to build robust clinical artificial intelligence models in neurology. As artificial intelligence is a rapidly evolving field, we take the opportunity to iterate important ethical principles to guide the field of medicine as it moves into an artificial intelligence enhanced future.
Affiliation(s)
- Mangor Pedersen
- The Florey Institute of Neuroscience and Mental Health, The University of Melbourne, Heidelberg, VIC 3084, Australia; Department of Psychology, Auckland University of Technology (AUT), Auckland, 0627, New Zealand
- Karin Verspoor
- School of Computing and Information Systems, The University of Melbourne, Parkville, VIC 3010, Australia
- Mark Jenkinson
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, OX3 9DU, UK; South Australian Health and Medical Research Institute (SAHMRI), Adelaide, SA 5000, Australia; Australian Institute for Machine Learning (AIML), The University of Adelaide, Adelaide, SA 5000, Australia
- Meng Law
- Department of Radiology, Alfred Hospital, Melbourne, VIC 3181, Australia; Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, VIC 3181, Australia; Department of Neuroscience, Monash School of Medicine, Nursing and Health Sciences, Melbourne, VIC 3181, Australia
- David F Abbott
- The Florey Institute of Neuroscience and Mental Health, The University of Melbourne, Heidelberg, VIC 3084, Australia; Department of Medicine Austin Health, The University of Melbourne, Heidelberg, VIC 3084, Australia
- Graeme D Jackson
- The Florey Institute of Neuroscience and Mental Health, The University of Melbourne, Heidelberg, VIC 3084, Australia; Department of Medicine Austin Health, The University of Melbourne, Heidelberg, VIC 3084, Australia; Department of Neurology, Austin Health, Heidelberg, VIC 3084, Australia
13. Zaharchuk G. Fellow in a Box: Combining AI and Domain Knowledge with Bayesian Networks for Differential Diagnosis in Neuroimaging. Radiology 2020;295:638-639. [PMID: 32267215] [DOI: 10.1148/radiol.2020200819]
Affiliation(s)
- Greg Zaharchuk
- From the Department of Radiology, Stanford University, 1201 Welch Rd, Mailcode 5488, Stanford, CA 94305-5488
14. Towards Personalized Diagnosis of Glioblastoma in Fluid-Attenuated Inversion Recovery (FLAIR) by Topological Interpretable Machine Learning. Mathematics 2020. [DOI: 10.3390/math8050770]
Abstract
Glioblastoma multiforme (GBM) is a fast-growing and highly invasive brain tumor, which tends to occur in adults between the ages of 45 and 70 and accounts for 52 percent of all primary brain tumors. Usually, GBMs are detected by magnetic resonance imaging (MRI). Among MRI sequences, fluid-attenuated inversion recovery (FLAIR) produces a high-quality digital representation of the tumor. Fast computer-aided detection and segmentation techniques are needed to overcome subjective judgment by medical doctors (MDs). This study has three main novelties, demonstrating the role of topological features as a new set of radiomics features that can serve as pillars of a personalized diagnostic system for GBM analysis from FLAIR. For the first time, topological data analysis is used to analyze GBM from three complementary perspectives: tumor growth at the cell level, temporal evolution of GBM in the follow-up period, and, eventually, GBM detection. The second novelty is the definition of a new Shannon-like topological entropy, the so-called Generator Entropy. The third novelty is the combination of topological and textural features for training automatic interpretable machine learning. These novelties are demonstrated by three numerical experiments. Topological data analysis of a simplified 2D mathematical model of tumor growth allowed us to understand the bio-chemical conditions that facilitate tumor growth: the higher the concentration of chemical nutrients, the more virulent the process. Topological data analysis was also used to evaluate GBM temporal progression on FLAIR recorded within 90 days following treatment completion and at progression; this experiment confirmed that persistent entropy is a viable statistic for monitoring GBM evolution during the follow-up period. In the third experiment, we developed a novel methodology based on topological and textural features and automatic interpretable machine learning for automatic GBM classification on FLAIR, where the algorithm reached a classification accuracy of up to 97%.
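The persistent entropy mentioned above has a standard Shannon-like formulation: an entropy over the normalized bar lifetimes of a persistence diagram. A minimal sketch of that standard form (the paper's own Generator Entropy is a different, paper-specific quantity and is not reproduced here):

```python
import math

# Hedged sketch: persistent entropy of a persistence diagram, i.e. the
# Shannon entropy of normalized bar lifetimes p_i = l_i / sum(l), with
# l_i = death_i - birth_i. Intervals below are illustrative.

def persistent_entropy(intervals):
    """intervals: iterable of (birth, death) pairs with death > birth."""
    lifetimes = [d - b for b, d in intervals if d > b]
    total = sum(lifetimes)
    return -sum((l / total) * math.log(l / total) for l in lifetimes)

# Two bars of equal lifetime give the maximal entropy log(2).
print(persistent_entropy([(0.0, 1.0), (0.5, 1.5)]))  # ≈ 0.693
```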
15. Bisneto TRV, de Carvalho Filho AO, Magalhães DMV. Generative adversarial network and texture features applied to automatic glaucoma detection. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2020.106165]
16. Chang K, Beers AL, Bai HX, Brown JM, Ly KI, Li X, Senders JT, Kavouridis VK, Boaro A, Su C, Bi WL, Rapalino O, Liao W, Shen Q, Zhou H, Xiao B, Wang Y, Zhang PJ, Pinho MC, Wen PY, Batchelor TT, Boxerman JL, Arnaout O, Rosen BR, Gerstner ER, Yang L, Huang RY, Kalpathy-Cramer J. Automatic assessment of glioma burden: a deep learning algorithm for fully automated volumetric and bidimensional measurement. Neuro Oncol 2019;21:1412-1422. [PMID: 31190077] [PMCID: PMC6827825] [DOI: 10.1093/neuonc/noz106]
Abstract
BACKGROUND Longitudinal measurement of glioma burden with MRI is the basis for treatment response assessment. In this study, we developed a deep learning algorithm that automatically segments abnormal fluid attenuated inversion recovery (FLAIR) hyperintensity and contrast-enhancing tumor, quantitating tumor volumes as well as the product of maximum bidimensional diameters according to the Response Assessment in Neuro-Oncology (RANO) criteria (AutoRANO). METHODS Two cohorts of patients were used for this study. One consisted of 843 preoperative MRIs from 843 patients with low- or high-grade gliomas from 4 institutions and the second consisted of 713 longitudinal postoperative MRI visits from 54 patients with newly diagnosed glioblastomas (each with 2 pretreatment "baseline" MRIs) from 1 institution. RESULTS The automatically generated FLAIR hyperintensity volume, contrast-enhancing tumor volume, and AutoRANO were highly repeatable for the double-baseline visits, with an intraclass correlation coefficient (ICC) of 0.986, 0.991, and 0.977, respectively, on the cohort of postoperative GBM patients. Furthermore, there was high agreement between manually and automatically measured tumor volumes, with ICC values of 0.915, 0.924, and 0.965 for preoperative FLAIR hyperintensity, postoperative FLAIR hyperintensity, and postoperative contrast-enhancing tumor volumes, respectively. Lastly, the ICCs for comparing manually and automatically derived longitudinal changes in tumor burden were 0.917, 0.966, and 0.850 for FLAIR hyperintensity volume, contrast-enhancing tumor volume, and RANO measures, respectively. CONCLUSIONS Our automated algorithm demonstrates potential utility for evaluating tumor burden in complex posttreatment settings, although further validation in multicenter clinical trials will be needed prior to widespread implementation.
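The repeatability statistic reported above is an intraclass correlation coefficient. A minimal sketch of one simple variant, the one-way random-effects ICC(1,1); the study's exact ICC model is not specified in the abstract, and the double-baseline volumes below are hypothetical:

```python
import statistics

# Hedged sketch: one-way random-effects ICC(1,1) for repeatability of
# paired (double-baseline) measurements. The study may have used a
# different ICC model; this illustrates the general form.

def icc1(data):
    """data: list of per-subject measurement lists, each of length k."""
    n, k = len(data), len(data[0])
    grand = statistics.mean(x for row in data for x in row)
    means = [statistics.mean(row) for row in data]
    # Between-subject and within-subject mean squares.
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2 for row, m in zip(data, means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfectly repeated double-baseline tumor volumes (hypothetical) -> ICC = 1.
print(icc1([[10.0, 10.0], [12.0, 12.0], [15.0, 15.0]]))  # 1.0
```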
Affiliation(s)
- Ken Chang, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Andrew L Beers, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Harrison X Bai, Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- James M Brown, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- K Ina Ly, Stephen E. and Catherine Pappas Center for Neuro-Oncology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Xuejun Li, Department of Neurosurgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- Joeky T Senders, Computational Neuroscience Outcomes Center, Department of Neurosurgery, Brigham and Women’s Hospital, Boston, Massachusetts, USA
- Vasileios K Kavouridis, Computational Neuroscience Outcomes Center, Department of Neurosurgery, Brigham and Women’s Hospital, Boston, Massachusetts, USA
- Alessandro Boaro, Computational Neuroscience Outcomes Center, Department of Neurosurgery, Brigham and Women’s Hospital, Boston, Massachusetts, USA
- Chang Su, Yale School of Medicine, New Haven, Connecticut, USA
- Wenya Linda Bi, Center for Skull Base and Pituitary Surgery, Department of Neurosurgery, Brigham and Women’s Hospital, Boston, Massachusetts, USA
- Otto Rapalino, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Weihua Liao, Department of Radiology, Xiangya Hospital, Central South University, Changsha, Hunan, China
- Qin Shen, Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha, Hunan, China
- Hao Zhou, Department of Neurology, Xiangya Hospital, Central South University, Changsha, Hunan, China
- Bo Xiao, Department of Neurology, Xiangya Hospital, Central South University, Changsha, Hunan, China
- Yinyan Wang, Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Paul J Zhang, Department of Pathology and Laboratory Medicine, Hospital of the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Marco C Pinho, Department of Radiology and Advanced Imaging Research Center, UT Southwestern Medical Center, Dallas, Texas, USA
- Patrick Y Wen, Center for Neuro-Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, Massachusetts, USA
- Tracy T Batchelor, Department of Neurology, Brigham and Women’s Hospital, Boston, Massachusetts, USA
- Jerrold L Boxerman, Department of Diagnostic Imaging, Rhode Island Hospital and Alpert Medical School of Brown University, Providence, Rhode Island, USA
- Omar Arnaout, Computational Neuroscience Outcomes Center, Department of Neurosurgery, Brigham and Women’s Hospital, Boston, Massachusetts, USA
- Bruce R Rosen, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Elizabeth R Gerstner, Stephen E. and Catherine Pappas Center for Neuro-Oncology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Li Yang, Department of Neurology, The Second Xiangya Hospital, Central South University, Changsha, Hunan, China
- Raymond Y Huang, Department of Radiology, Brigham and Women’s Hospital, Boston, Massachusetts, USA
- Jayashree Kalpathy-Cramer, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
17
Kidoh M, Shinoda K, Kitajima M, Isogawa K, Nambu M, Uetani H, Morita K, Nakaura T, Tateishi M, Yamashita Y, Yamashita Y. Deep Learning Based Noise Reduction for Brain MR Imaging: Tests on Phantoms and Healthy Volunteers. Magn Reson Med Sci 2019; 19:195-206. [PMID: 31484849 PMCID: PMC7553817 DOI: 10.2463/mrms.mp.2019-0018] [Citation(s) in RCA: 131] [Impact Index Per Article: 26.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023] Open
Abstract
Purpose: To test whether our proposed denoising approach with deep learning-based reconstruction (dDLR) can effectively denoise brain MR images. Methods: In an initial experimental study, we obtained brain images from five volunteers and added different levels of artificial noise. The modified images were denoised with a denoising convolutional neural network (DnCNN), a shrinkage convolutional neural network (SCNN), and dDLR. Using these brain MR images, we compared the structural similarity (SSIM) index and peak signal-to-noise ratio (PSNR) across the three denoising methods. Two neuroradiologists assessed the image quality of the three types of images. In the clinical study, we evaluated the denoising effect of dDLR on brain images with different levels of actual noise, such as thermal noise. Specifically, we obtained 2D T2-weighted images, 2D fluid-attenuated inversion recovery (FLAIR) images, and 3D magnetization-prepared rapid acquisition with gradient echo (MPRAGE) images from 15 healthy volunteers at two settings for the number of image acquisitions (NAQ): NAQ2 and NAQ5. We reconstructed dDLR-processed NAQ2 (dDLR-NAQ2) from NAQ2 and compared the images using SSIM and PSNR. Two neuroradiologists separately assessed the image quality of NAQ5, NAQ2, and dDLR-NAQ2. Statistical analysis was performed in both the experimental and clinical studies; in the clinical study, inter-observer agreement was also assessed. Results: In the experimental study, PSNR and SSIM for dDLR were statistically higher than those of DnCNN and SCNN (P < 0.001). The image quality of dDLR was also superior to that of DnCNN and SCNN. In the clinical study, dDLR-NAQ2 was significantly better than NAQ2 for SSIM and PSNR in all three sequences (P < 0.05), except for PSNR in FLAIR. For all qualitative items, dDLR-NAQ2 had image quality equivalent or superior to NAQ5, and superior to NAQ2 (P < 0.05), for all criteria except artifact. Inter-observer agreement ranged from substantial to near perfect.
Conclusion: dDLR reduces image noise while preserving image quality on brain MR images.
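For readers unfamiliar with the two image-quality metrics used throughout this abstract, a minimal Python sketch follows. The PSNR formula is standard; the SSIM shown is the simplified global form (one window over the whole image), not the windowed implementation such studies typically use.

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(ref, test, data_range=255.0):
    """Global (single-window) SSIM; real implementations slide a local window."""
    x, y = ref.astype(float), test.astype(float)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2  # stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()  # covariance
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

Higher is better for both: SSIM is 1 for identical images, and PSNR grows as the denoised image approaches the reference.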
Affiliation(s)
- Masafumi Kidoh, Department of Diagnostic Radiology, Faculty of Life Sciences, Kumamoto University
- Mika Kitajima, Department of Diagnostic Radiology, Faculty of Life Sciences, Kumamoto University
- Kenzo Isogawa, Corporate Research and Development Center, Toshiba Corporation
- Hiroyuki Uetani, Department of Diagnostic Radiology, Faculty of Life Sciences, Kumamoto University
- Kosuke Morita, Department of Radiology, Kumamoto University Hospital, Kumamoto
- Takeshi Nakaura, Department of Diagnostic Radiology, Faculty of Life Sciences, Kumamoto University
- Machiko Tateishi, Department of Diagnostic Radiology, Faculty of Life Sciences, Kumamoto University
- Yasuyuki Yamashita, Department of Diagnostic Radiology, Faculty of Life Sciences, Kumamoto University
18
Duong MT, Rudie JD, Wang J, Xie L, Mohan S, Gee JC, Rauschecker AM. Convolutional Neural Network for Automated FLAIR Lesion Segmentation on Clinical Brain MR Imaging. AJNR Am J Neuroradiol 2019; 40:1282-1290. [PMID: 31345943 PMCID: PMC6697209 DOI: 10.3174/ajnr.a6138] [Citation(s) in RCA: 39] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2019] [Accepted: 06/17/2019] [Indexed: 12/17/2022]
Abstract
BACKGROUND AND PURPOSE Most brain lesions are characterized by hyperintense signal on FLAIR. We sought to develop an automated deep learning-based method for segmentation of abnormalities on FLAIR and volumetric quantification on clinical brain MRIs across many pathologic entities and scanning parameters. We evaluated the performance of the algorithm compared with manual segmentation and existing automated methods. MATERIALS AND METHODS We adapted a U-Net convolutional neural network architecture for brain MRIs using 3D volumes. This network was retrospectively trained on 295 brain MRIs to perform automated FLAIR lesion segmentation. Performance was evaluated on 92 validation cases using Dice scores and voxelwise sensitivity and specificity, compared with radiologists' manual segmentations. The algorithm was also evaluated on measuring total lesion volume. RESULTS Our model demonstrated accurate FLAIR lesion segmentation performance (median Dice score, 0.79) on the validation dataset across a large range of lesion characteristics. Across 19 neurologic diseases, performance was significantly higher than existing methods (Dice, 0.56 and 0.41) and approached human performance (Dice, 0.81). There was a strong correlation between the predictions of lesion volume of the algorithm compared with true lesion volume (ρ = 0.99). Lesion segmentations were accurate across a large range of image-acquisition parameters on >30 different MR imaging scanners. CONCLUSIONS A 3D convolutional neural network adapted from a U-Net architecture can achieve high automated FLAIR segmentation performance on clinical brain MR imaging across a variety of underlying pathologies and image acquisition parameters. The method provides accurate volumetric lesion data that can be incorporated into assessments of disease burden or into radiologic reports.
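The Dice score reported here measures voxelwise overlap between predicted and manual segmentations. A minimal version for binary masks (illustrative, not the authors' evaluation code):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

A Dice of 0.79 against manual segmentation, versus 0.81 between human readers, is what the abstract means by "approached human performance".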
Affiliation(s)
- M T Duong, Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, Pennsylvania
- J D Rudie, Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, Pennsylvania
- J Wang, Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, Pennsylvania
- L Xie, Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, Pennsylvania
- S Mohan, Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, Pennsylvania
- J C Gee, Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, Pennsylvania
- A M Rauschecker, Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, Pennsylvania
19
Duong MT, Rauschecker AM, Rudie JD, Chen PH, Cook TS, Bryan RN, Mohan S. Artificial intelligence for precision education in radiology. Br J Radiol 2019; 92:20190389. [PMID: 31322909 DOI: 10.1259/bjr.20190389] [Citation(s) in RCA: 76] [Impact Index Per Article: 15.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023] Open
Abstract
In the era of personalized medicine, the emphasis of health care is shifting from populations to individuals. Artificial intelligence (AI) is capable of learning without explicit instruction and has emerging applications in medicine, particularly radiology. Whereas much attention has focused on teaching radiology trainees about AI, here our goal is to instead focus on how AI might be developed to better teach radiology trainees. While the idea of using AI to improve education is not new, the application of AI to medical and radiological education remains very limited. Based on the current educational foundation, we highlight an AI-integrated framework to augment radiology education and provide use case examples informed by our own institution's practice. The coming age of "AI-augmented radiology" may enable not only "precision medicine" but also what we describe as "precision medical education," where instruction is tailored to individual trainees based on their learning styles and needs.
Affiliation(s)
- Michael Tran Duong, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Andreas M Rauschecker, Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology & Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Jeffrey D Rudie, Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology & Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Po-Hao Chen, Imaging Institute, Cleveland Clinic, Cleveland, OH, USA
- Tessa S Cook, Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- R Nick Bryan, Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA; Department of Diagnostic Medicine, Dell Medical School, University of Texas at Austin, Austin, TX, USA
- Suyash Mohan, Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA; Department of Neurosurgery, University of Pennsylvania, Philadelphia, PA, USA
20
Ribalta Lorenzo P, Nalepa J, Bobek-Billewicz B, Wawrzyniak P, Mrukwa G, Kawulok M, Ulrych P, Hayball MP. Segmenting brain tumors from FLAIR MRI using fully convolutional neural networks. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2019; 176:135-148. [PMID: 31200901 DOI: 10.1016/j.cmpb.2019.05.006] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/07/2018] [Revised: 04/05/2019] [Accepted: 05/10/2019] [Indexed: 06/09/2023]
Abstract
BACKGROUND AND OBJECTIVE Magnetic resonance imaging (MRI) is an indispensable tool in diagnosing brain-tumor patients. Automated tumor segmentation is being widely researched to accelerate MRI analysis and allow clinicians to plan treatment precisely; accurate delineation of brain tumors is a critical step in assessing their volume, shape, boundaries, and other characteristics. However, it remains a very challenging task due to inherent MR data characteristics and high variability, e.g., in tumor sizes or shapes. We present a new deep learning approach for accurate brain tumor segmentation which can be trained from small and heterogeneous datasets annotated by a human reader (providing high-quality ground-truth segmentations is very costly in practice). METHODS In this paper, we present a new deep learning technique for segmenting brain tumors from fluid-attenuated inversion recovery (FLAIR) MRI. Our technique exploits fully convolutional neural networks and is equipped with a battery of augmentation techniques that make the algorithm robust against low data quality and the heterogeneity of small training sets. We train our models using only positive (tumorous) examples, due to the limited amount of available data. RESULTS Our algorithm was tested on a set of stage II-IV brain-tumor patients (image data collected using a MAGNETOM Prisma 3T scanner, Siemens). Rigorous experiments, backed up with statistical tests, revealed that our approach outperforms the state-of-the-art approach (which utilizes hand-crafted features) in terms of segmentation accuracy, offers very fast training, and segments instantly (analysis of an image takes less than a second). Building our deep model is 1.3 times faster than extracting features for extremely randomized trees, and this training time can be controlled.
Finally, we showed that overly aggressive data augmentation may deteriorate model performance, especially in fixed-budget training (with a maximum number of training epochs). CONCLUSIONS Our method outperforms the state-of-the-art method that utilizes hand-crafted features. In addition, our deep network can be effectively applied to difficult (small, imbalanced, and heterogeneous) datasets, offers controllable training time, and infers in real time.
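The abstract does not specify the augmentation battery; as a generic sketch of the underlying idea (applying the same random geometric transform to an image slice and its ground-truth mask so labels stay aligned), assuming simple flips and quarter-turn rotations:

```python
import numpy as np

def augment(image, mask, rng):
    """Apply one random flip/rotation jointly to a 2D slice and its label mask.

    A generic geometric-augmentation sketch; the paper's actual battery of
    transforms is richer than this.
    """
    if rng.random() < 0.5:
        image, mask = np.fliplr(image), np.fliplr(mask)
    k = int(rng.integers(0, 4))  # number of 90-degree rotations
    return np.rot90(image, k), np.rot90(mask, k)
```

The key design point, echoed by the paper's fixed-budget caveat, is that augmentation multiplies a small training set without new annotation cost, but each extra transform also dilutes the budget of epochs spent on the original data.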
Affiliation(s)
- Jakub Nalepa, Future Processing, Bojkowska 37A, 44-100 Gliwice, Poland; Institute of Informatics, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
- Barbara Bobek-Billewicz, Maria Sklodowska-Curie Memorial Cancer Center and Institute of Oncology, Wybrzeze Armii Krajowej 15, 44-102 Gliwice, Poland
- Pawel Wawrzyniak, Maria Sklodowska-Curie Memorial Cancer Center and Institute of Oncology, Wybrzeze Armii Krajowej 15, 44-102 Gliwice, Poland
- Michal Kawulok, Future Processing, Bojkowska 37A, 44-100 Gliwice, Poland; Institute of Informatics, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
- Pawel Ulrych, Maria Sklodowska-Curie Memorial Cancer Center and Institute of Oncology, Wybrzeze Armii Krajowej 15, 44-102 Gliwice, Poland
21
The present and future of deep learning in radiology. Eur J Radiol 2019; 114:14-24. [PMID: 31005165 DOI: 10.1016/j.ejrad.2019.02.038] [Citation(s) in RCA: 169] [Impact Index Per Article: 33.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2018] [Revised: 02/17/2019] [Accepted: 02/26/2019] [Indexed: 12/18/2022]
Abstract
The advent of Deep Learning (DL) is poised to dramatically change the delivery of healthcare in the near future. DL has not only profoundly affected the healthcare industry but has also influenced global businesses. Within a span of very few years, advances such as self-driving cars, robots performing jobs that are hazardous to humans, and chatbots talking with human operators have proved that DL has already made a large impact on our lives. The open-source nature of DL and the decreasing prices of computer hardware will further propel such changes. In healthcare, the potential is immense due to the need to automate processes and evolve error-free paradigms. The sheer quantity of DL publications in healthcare has surpassed other domains, growing at a very fast pace, particularly in radiology. It is therefore imperative for radiologists to learn about DL and how it differs from other approaches to Artificial Intelligence (AI). The next generation of radiology will see a significant role for DL, which will likely serve as the base for augmented radiology (AR). Better clinical judgement by AR will help improve quality of life and support life-saving decisions, while lowering healthcare costs. A comprehensive review of DL as well as its implications for healthcare is presented here. We analysed 150 articles on DL in the healthcare domain from PubMed, Google Scholar, and IEEE Xplore, focused on medical imagery only. We further examined the ethical, moral, and legal issues surrounding the use of DL in medical imaging.
22
Soffer S, Ben-Cohen A, Shimon O, Amitai MM, Greenspan H, Klang E. Convolutional Neural Networks for Radiologic Images: A Radiologist's Guide. Radiology 2019; 290:590-606. [PMID: 30694159 DOI: 10.1148/radiol.2018180547] [Citation(s) in RCA: 272] [Impact Index Per Article: 54.4] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Deep learning has rapidly advanced in various fields within the past few years and has recently gained particular attention in the radiology community. This article provides an introduction to deep learning technology and presents the stages that are entailed in the design process of deep learning radiology research. In addition, the article details the results of a survey of the application of deep learning-specifically, the application of convolutional neural networks-to radiologic imaging that was focused on the following five major system organs: chest, breast, brain, musculoskeletal system, and abdomen and pelvis. The survey of the studies is followed by a discussion about current challenges and future trends and their potential implications for radiology. This article may be used as a guide for radiologists planning research in the field of radiologic image analysis using convolutional neural networks.
Affiliation(s)
- Shelly Soffer, Department of Diagnostic Imaging, Sheba Medical Center, Emek HaEla St 1, Ramat Gan, Israel; Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Avi Ben-Cohen, Faculty of Engineering, Department of Biomedical Engineering, Medical Image Processing Laboratory, Tel Aviv University, Tel Aviv, Israel
- Orit Shimon, Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Michal Marianne Amitai, Department of Diagnostic Imaging, Sheba Medical Center, Emek HaEla St 1, Ramat Gan, Israel
- Hayit Greenspan, Faculty of Engineering, Department of Biomedical Engineering, Medical Image Processing Laboratory, Tel Aviv University, Tel Aviv, Israel
- Eyal Klang, Department of Diagnostic Imaging, Sheba Medical Center, Emek HaEla St 1, Ramat Gan, Israel
23
Sahiner B, Pezeshk A, Hadjiiski LM, Wang X, Drukker K, Cha KH, Summers RM, Giger ML. Deep learning in medical imaging and radiation therapy. Med Phys 2019; 46:e1-e36. [PMID: 30367497 PMCID: PMC9560030 DOI: 10.1002/mp.13264] [Citation(s) in RCA: 372] [Impact Index Per Article: 74.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2018] [Revised: 09/18/2018] [Accepted: 10/09/2018] [Indexed: 12/15/2022] Open
Abstract
The goals of this review paper on deep learning (DL) in medical imaging and radiation therapy are to (a) summarize what has been achieved to date; (b) identify common and unique challenges, and strategies that researchers have taken to address these challenges; and (c) identify some of the promising avenues for the future both in terms of applications as well as technical innovations. We introduce the general principles of DL and convolutional neural networks, survey five major areas of application of DL in medical imaging and radiation therapy, identify common themes, discuss methods for dataset expansion, and conclude by summarizing lessons learned, remaining challenges, and future directions.
Affiliation(s)
- Berkman Sahiner, DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Aria Pezeshk, DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Xiaosong Wang, Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD 20892-1182, USA
- Karen Drukker, Department of Radiology, University of Chicago, Chicago, IL 60637, USA
- Kenny H. Cha, DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Ronald M. Summers, Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD 20892-1182, USA
24
Machine Learning in Neurooncology Imaging: From Study Request to Diagnosis and Treatment. AJR Am J Roentgenol 2018; 212:52-56. [PMID: 30403523 DOI: 10.2214/ajr.18.20328] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
OBJECTIVE Machine learning has potential to play a key role across a variety of medical imaging applications. This review seeks to elucidate the ways in which machine learning can aid and enhance diagnosis, treatment, and follow-up in neurooncology. CONCLUSION Given the rapid pace of development in machine learning over the past several years, a basic proficiency of the key tenets and use cases in the field is critical to assessing potential opportunities and challenges of this exciting new technology.
25
Zaharchuk G, Gong E, Wintermark M, Rubin D, Langlotz CP. Deep Learning in Neuroradiology. AJNR Am J Neuroradiol 2018; 39:1776-1784. [PMID: 29419402 PMCID: PMC7410723 DOI: 10.3174/ajnr.a5543] [Citation(s) in RCA: 170] [Impact Index Per Article: 28.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Deep learning is a form of machine learning using a convolutional neural network architecture that shows tremendous promise for imaging applications. It is increasingly being adapted from its original demonstration in computer vision applications to medical imaging. Because of the high volume and wealth of multimodal imaging information acquired in typical studies, neuroradiology is poised to be an early adopter of deep learning. Compelling deep learning research applications have been demonstrated, and their use is likely to grow rapidly. This review article describes the reasons, outlines the basic methods used to train and test deep learning models, and presents a brief overview of current and potential clinical applications with an emphasis on how they are likely to change future neuroradiology practice. Facility with these methods among neuroimaging researchers and clinicians will be important to channel and harness the vast potential of this new method.
Affiliation(s)
- G Zaharchuk, Department of Radiology, Stanford University and Stanford University Medical Center, Stanford, California
- E Gong, Department of Electrical Engineering, Stanford University and Stanford University Medical Center, Stanford, California
- M Wintermark, Department of Radiology, Stanford University and Stanford University Medical Center, Stanford, California
- D Rubin, Department of Radiology, Stanford University and Stanford University Medical Center, Stanford, California
- C P Langlotz, Department of Radiology, Stanford University and Stanford University Medical Center, Stanford, California
26
Noguchi T, Higa D, Asada T, Kawata Y, Machitori A, Shida Y, Okafuji T, Yokoyama K, Uchiyama F, Tajima T. Artificial intelligence using neural network architecture for radiology (AINNAR): classification of MR imaging sequences. Jpn J Radiol 2018; 36:691-697. [PMID: 30232585 DOI: 10.1007/s11604-018-0779-3] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2018] [Accepted: 09/14/2018] [Indexed: 12/17/2022]
Abstract
PURPOSE The confusion of MRI sequence names could be resolved if MR images were automatically identified after image data acquisition. We assessed the ability of deep learning to classify head MRI sequences. MATERIALS AND METHODS Seventy-eight patients with mild cognitive impairment (MCI) having apparently normal head MR images and 78 intracranial hemorrhage (ICH) patients with morphologically deformed head MR images were enrolled. Six imaging protocols were performed: T2-weighted imaging, fluid-attenuated inversion recovery imaging, T2-star-weighted imaging, diffusion-weighted imaging, apparent diffusion coefficient mapping, and source images of time-of-flight magnetic resonance angiography. The proximal first image slices, which have ambiguous contrast patterns, and the middle image slices, which have distinctive contrast patterns, were classified by two deep learning image classifiers, AlexNet and GoogLeNet. RESULTS AlexNet had accuracies of 73.3%, 73.6%, 73.1%, and 60.7% for the middle slices of the MCI group, middle slices of the ICH group, first slices of the MCI group, and first slices of the ICH group, while GoogLeNet had accuracies of 100%, 98.1%, 93.1%, and 94.8%, respectively. AlexNet had significantly lower classification ability than GoogLeNet for all datasets. CONCLUSIONS GoogLeNet could identify the types of head MRI sequences with a small amount of training data, irrespective of morphological or contrast conditions.
Affiliation(s)
- Tomoyuki Noguchi, Department of Radiology, National Center for Global Health and Medicine, 1-21-1 Toyama, Shinjuku-ku, Tokyo, 162-8655, Japan
- Daichi Higa, Department of Radiology, National Center for Global Health and Medicine, 1-21-1 Toyama, Shinjuku-ku, Tokyo, 162-8655, Japan
- Takashi Asada, Memory Clinics Ochanomizu, 4th floor, Ochanomizu Igaku Kaikan, 1-5-34, Yushima, Bunkyo-ku, Tokyo, 113-0034, Japan
- Yusuke Kawata, Department of Radiology, National Center for Global Health and Medicine, 1-21-1 Toyama, Shinjuku-ku, Tokyo, 162-8655, Japan
- Akihiro Machitori, Department of Radiology, National Center for Global Health and Medicine, 1-21-1 Toyama, Shinjuku-ku, Tokyo, 162-8655, Japan
- Yoshitaka Shida, Department of Radiology, National Center for Global Health and Medicine, 1-21-1 Toyama, Shinjuku-ku, Tokyo, 162-8655, Japan
- Takashi Okafuji, Department of Radiology, National Center for Global Health and Medicine, 1-21-1 Toyama, Shinjuku-ku, Tokyo, 162-8655, Japan
- Kota Yokoyama, Department of Radiology, National Center for Global Health and Medicine, 1-21-1 Toyama, Shinjuku-ku, Tokyo, 162-8655, Japan
- Fumiya Uchiyama, Department of Radiology, National Center for Global Health and Medicine, 1-21-1 Toyama, Shinjuku-ku, Tokyo, 162-8655, Japan
- Tsuyoshi Tajima, Department of Radiology, National Center for Global Health and Medicine, 1-21-1 Toyama, Shinjuku-ku, Tokyo, 162-8655, Japan
27
Guest Editorial: Discovery and Artificial Intelligence. AJR Am J Roentgenol 2018; 209:1189-1190. [PMID: 29161146 DOI: 10.2214/ajr.17.19178] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
28
Vamvakas A, Tsougos I, Arikidis N, Kapsalaki E, Fountas K, Fezoulidis I, Costaridou L. Exploiting morphology and texture of 3D tumor models in DTI for differentiating glioblastoma multiforme from solitary metastasis. Biomed Signal Process Control 2018. [DOI: 10.1016/j.bspc.2018.02.014] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
29
Kline TL, Korfiatis P, Edwards ME, Blais JD, Czerwiec FS, Harris PC, King BF, Torres VE, Erickson BJ. Performance of an Artificial Multi-observer Deep Neural Network for Fully Automated Segmentation of Polycystic Kidneys. J Digit Imaging 2018; 30:442-448. [PMID: 28550374 PMCID: PMC5537093 DOI: 10.1007/s10278-017-9978-1] [Citation(s) in RCA: 85] [Impact Index Per Article: 14.2] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
Deep learning techniques are being rapidly applied to medical imaging tasks—from organ and lesion segmentation to tissue and tumor classification. These techniques are becoming the leading algorithmic approaches to solve inherently difficult image processing tasks. Currently, the most critical requirement for successful implementation lies in the need for relatively large datasets that can be used for training the deep learning networks. Based on our initial studies of MR imaging examinations of the kidneys of patients affected by polycystic kidney disease (PKD), we have generated a unique database of imaging data and corresponding reference standard segmentations of polycystic kidneys. In the study of PKD, segmentation of the kidneys is needed in order to measure total kidney volume (TKV). Automated methods to segment the kidneys and measure TKV are needed to increase measurement throughput and alleviate the inherent variability of human-derived measurements. We hypothesize that deep learning techniques can be leveraged to perform fast, accurate, reproducible, and fully automated segmentation of polycystic kidneys. Here, we describe a fully automated approach for segmenting PKD kidneys within MR images that simulates a multi-observer approach in order to create an accurate and robust method for the task of segmentation and computation of TKV for PKD patients. A total of 2000 cases were used for training and validation, and 400 cases were used for testing. The multi-observer ensemble method had mean ± SD percent volume difference of 0.68 ± 2.2% compared with the reference standard segmentations. The complete framework performs fully automated segmentation at a level comparable with interobserver variability and could be considered as a replacement for the task of segmentation of PKD kidneys by a human.
Affiliation(s)
- Timothy L Kline, Department of Radiology, Mayo Clinic College of Medicine, 200 First St SW, Rochester, MN, 55905, USA
- Panagiotis Korfiatis, Department of Radiology, Mayo Clinic College of Medicine, 200 First St SW, Rochester, MN, 55905, USA
- Marie E Edwards, Division of Nephrology and Hypertension, Mayo Clinic College of Medicine, Rochester, MN, USA
- Jaime D Blais, Otsuka Pharmaceutical Development & Commercialization Inc., Rockville, MD, USA
- Frank S Czerwiec, Otsuka Pharmaceutical Development & Commercialization Inc., Rockville, MD, USA
- Peter C Harris, Division of Nephrology and Hypertension, Mayo Clinic College of Medicine, Rochester, MN, USA
- Bernard F King, Department of Radiology, Mayo Clinic College of Medicine, 200 First St SW, Rochester, MN, 55905, USA
- Vicente E Torres, Division of Nephrology and Hypertension, Mayo Clinic College of Medicine, Rochester, MN, USA
- Bradley J Erickson, Department of Radiology, Mayo Clinic College of Medicine, 200 First St SW, Rochester, MN, 55905, USA
30. Erickson BJ, Korfiatis P, Kline TL, Akkus Z, Philbrick K, Weston AD. Deep Learning in Radiology: Does One Size Fit All? J Am Coll Radiol 2018; 15:521-526. [PMID: 29396120] [PMCID: PMC5877825] [DOI: 10.1016/j.jacr.2017.12.027]
Abstract
Deep learning (DL) is a popular method used to perform many important tasks in radiology and medical imaging. Some forms of DL can accurately segment organs (essentially, trace their boundaries, enabling volume measurements or calculation of other properties). Other DL networks can predict important properties from regions of an image: for instance, whether something is malignant, the molecular markers of tissue in a region, or even prognostic markers. DL is easier to train than traditional machine learning methods but requires more data and much more care in analyzing results. It automatically finds the features of importance, but understanding what those features are can be a challenge. This article describes the basic concepts of DL systems, some of the traps that exist in building them, and how to identify those traps.
Affiliation(s)
- Bradley J Erickson, Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Panagiotis Korfiatis, Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Timothy L Kline, Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Zeynettin Akkus, Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Kenneth Philbrick, Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Alexander D Weston, Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, Rochester, Minnesota
31. Nandu H, Wen PY, Huang RY. Imaging in neuro-oncology. Ther Adv Neurol Disord 2018; 11:1756286418759865. [PMID: 29511385] [PMCID: PMC5833173] [DOI: 10.1177/1756286418759865]
Abstract
Imaging plays several key roles in managing brain tumors, including diagnosis, prognosis, and treatment response assessment. Challenges remain as new therapies emerge, and there is an urgent need for accurate, clinically feasible methods to noninvasively evaluate brain tumors before and after treatment. This review provides an overview of several advanced imaging modalities, including magnetic resonance imaging and positron emission tomography (PET) with advances in new PET agents, and summarizes key areas of their application: improving diagnostic accuracy, addressing challenging clinical problems such as the evaluation of pseudoprogression and anti-angiogenic therapy, and meeting the rising challenges of imaging with immunotherapy.
Affiliation(s)
- Hari Nandu, Department of Radiology, Brigham and Women's Hospital, Boston, MA, USA
- Raymond Y Huang, Department of Radiology, Brigham and Women's Hospital, 75 Francis Street, Boston, MA 02445, USA
32. Lee CS, Tyring AJ, Deruyter NP, Wu Y, Rokem A, Lee AY. Deep-learning based, automated segmentation of macular edema in optical coherence tomography. Biomed Opt Express 2017; 8:3440-3448. [PMID: 28717579] [PMCID: PMC5508840] [DOI: 10.1364/boe.8.003440]
Abstract
Evaluation of clinical images is essential for diagnosis in many specialties; therefore, the development of computer vision algorithms to help analyze biomedical images will be important. In ophthalmology, optical coherence tomography (OCT) is critical for managing retinal conditions. We developed a convolutional neural network (CNN) that detects intraretinal fluid (IRF) on OCT in a manner indistinguishable from clinicians. Using 1,289 OCT images, the CNN segmented images with a 0.911 cross-validated Dice coefficient compared with segmentations by experts. Additionally, the agreement between experts, and between experts and the CNN, was similar. Our results reveal that a CNN can be trained to perform automated segmentation of clinically relevant image features.
Affiliation(s)
- Cecilia S. Lee, Department of Ophthalmology, University of Washington, Seattle, Washington, USA
- Ariel J. Tyring, Department of Ophthalmology, University of Washington, Seattle, Washington, USA
- Yue Wu, Department of Ophthalmology, University of Washington, Seattle, Washington, USA
- Ariel Rokem, eScience Institute, University of Washington, Seattle, Washington, USA
- Aaron Y. Lee, Department of Ophthalmology, University of Washington, Seattle, Washington, USA; Department of Ophthalmology, Puget Sound Veteran Affairs, Seattle, Washington, USA; eScience Institute, University of Washington, Seattle, Washington, USA