651
Ryu K, Nam Y, Gho SM, Jang J, Lee HJ, Cha J, Baek HJ, Park J, Kim DH. Data-driven synthetic MRI FLAIR artifact correction via deep neural network. J Magn Reson Imaging 2019; 50:1413-1423. [PMID: 30884007] [DOI: 10.1002/jmri.26712]
Abstract
BACKGROUND FLAIR (fluid-attenuated inversion recovery) imaging via synthetic MRI methods leads to artifacts in the brain, which can cause diagnostic limitations. The main sources of the artifacts are attributed to the partial volume effect and flow, which are difficult to correct by analytical modeling. In this study, a deep learning (DL)-based synthetic FLAIR method was developed, which does not require analytical modeling of the signal. PURPOSE To correct artifacts in synthetic FLAIR using a DL method. STUDY TYPE Retrospective. SUBJECTS A total of 80 subjects with clinical indications (60.6 ± 16.7 years, 38 males, 42 females) were divided into three groups: a training set (56 subjects, 62.1 ± 14.8 years, 25 males, 31 females), a validation set (1 subject, 62 years, male), and a testing set (23 subjects, 57.3 ± 20.4 years, 13 males, 10 females). FIELD STRENGTH/SEQUENCE 3 T MRI using a multiple-dynamic multiple-echo (MDME) acquisition sequence for synthetic MRI and a conventional FLAIR sequence. ASSESSMENT Normalized root mean square error (NRMSE) and structural similarity (SSIM) were computed for uncorrected synthetic FLAIR and DL-corrected FLAIR. In addition, three neuroradiologists blindly scored the three FLAIR datasets, evaluating image quality and artifacts in the sulci/periventricular and intraventricular/cistern space regions. STATISTICAL TESTS Pairwise Student's t-tests and a Wilcoxon test were performed. RESULTS For quantitative assessment, NRMSE improved from 4.2% to 2.9% (P < 0.0001) and SSIM improved from 0.85 to 0.93 (P < 0.0001). Additionally, NRMSE values significantly improved from 1.58% to 1.26% (P < 0.001), 3.1% to 1.5% (P < 0.0001), and 2.7% to 1.4% (P < 0.0001) in white matter, gray matter, and cerebrospinal fluid (CSF) regions, respectively, when using DL-corrected FLAIR.
For qualitative assessment, DL correction achieved improved overall image quality and fewer artifacts in both the sulci/periventricular and the intraventricular/cistern space regions. DATA CONCLUSION The DL approach provides a promising method to correct artifacts in synthetic FLAIR. LEVEL OF EVIDENCE 4 Technical Efficacy: Stage 1 J. Magn. Reson. Imaging 2019;50:1413-1423.
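The two quantitative metrics reported above, NRMSE and SSIM, can be sketched in a few lines. This is an illustrative sketch only: the abstract does not state its normalization convention for NRMSE (the reference L2 norm is assumed here), and SSIM is normally computed over local sliding windows, whereas this sketch uses a single global window with the standard K1 = 0.01, K2 = 0.03 constants.

```python
import numpy as np

def nrmse(ref, img):
    # Normalized root mean square error; normalized here by the L2 norm of
    # the reference image (normalization conventions vary between papers).
    return np.linalg.norm(img - ref) / np.linalg.norm(ref)

def global_ssim(ref, img, data_range=1.0):
    # Single-window ("global") SSIM with the standard K1=0.01, K2=0.03
    # constants; published SSIM values are usually averaged over local windows.
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), img.mean()
    var_x, var_y = ref.var(), img.var()
    cov = ((ref - mu_x) * (img - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

Both functions take the conventional FLAIR as `ref` and either the uncorrected or DL-corrected synthetic FLAIR as `img`; lower NRMSE and higher SSIM indicate closer agreement with the reference.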
Affiliation(s)
- Kanghyun Ryu
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Yoonho Nam
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, Catholic University of Korea, Seoul, Republic of Korea
- Sung-Min Gho
- MR Clinical Research and Development, GE Healthcare, Seoul, Republic of Korea
- Jinhee Jang
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, Catholic University of Korea, Seoul, Republic of Korea
- Ho-Joon Lee
- Department of Radiology and Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Department of Radiology, Inje University College of Medicine, Haeundae Paik Hospital, Busan, Republic of Korea
- Jihoon Cha
- Department of Radiology and Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Hye Jin Baek
- Department of Radiology, Gyeongsang National University School of Medicine and Gyeongsang National University Changwon Hospital, Changwon, Republic of Korea
- Jiyong Park
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Dong-Hyun Kim
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
652
Cheng A, Kim Y, Anas EMA, Rahmim A, Boctor EM, Seifabadi R, Wood B. Deep learning image reconstruction method for limited-angle ultrasound tomography in prostate cancer. Medical Imaging 2019: Ultrasonic Imaging and Tomography 2019. [DOI: 10.1117/12.2512533]
653
Liu F, Feng L, Kijowski R. MANTIS: Model-Augmented Neural neTwork with Incoherent k-space Sampling for efficient MR parameter mapping. Magn Reson Med 2019; 82:174-188. [PMID: 30860285] [DOI: 10.1002/mrm.27707]
Abstract
PURPOSE To develop and evaluate a novel deep learning-based image reconstruction approach called MANTIS (Model-Augmented Neural neTwork with Incoherent k-space Sampling) for efficient MR parameter mapping. METHODS MANTIS combines end-to-end convolutional neural network (CNN) mapping, incoherent k-space undersampling, and a physical model as a synergistic framework. The CNN mapping directly converts a series of undersampled images into MR parameter maps using supervised training. Signal model fidelity is enforced by adding a pathway between the undersampled k-space and the estimated parameter maps to ensure that the parameter maps produce synthesized k-space consistent with the acquired undersampled measurements. The MANTIS framework was evaluated on T2 mapping of the knee at different acceleration rates and was compared with two other CNN mapping methods and conventional sparsity-based iterative reconstruction approaches. Global quantitative assessment and regional T2 analysis for the cartilage and meniscus were performed to demonstrate the reconstruction performance of MANTIS. RESULTS MANTIS achieved high-quality T2 mapping at both moderate (R = 5) and high (R = 8) acceleration rates. Compared to conventional reconstruction approaches that exploited image sparsity, MANTIS yielded lower errors (normalized root mean square error of 6.1% for R = 5 and 7.1% for R = 8) and higher similarity (structural similarity index of 86.2% at R = 5 and 82.1% at R = 8) to the reference in the T2 estimation. MANTIS also achieved superior performance compared to direct CNN mapping and a two-step CNN method. CONCLUSION The MANTIS framework, with its combination of end-to-end CNN mapping, signal model-augmented data consistency, and incoherent k-space sampling, is a promising approach for efficient and robust estimation of quantitative MR parameters.
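The model-augmented data-consistency idea described in this abstract can be illustrated with a minimal sketch: synthesize echo images from the estimated parameter maps via the signal model, transform them to k-space, and penalize disagreement with the acquired samples only where the undersampling mask is nonzero. The mono-exponential T2 model, the function names, and the array shapes below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def t2_signal(m0, t2, tes):
    # Mono-exponential T2 decay model: S(TE) = M0 * exp(-TE / T2).
    # m0, t2: (H, W) parameter maps; tes: (E,) echo times -> (E, H, W) images.
    return m0[None] * np.exp(-tes[:, None, None] / t2[None])

def model_consistency_loss(m0, t2, tes, mask, acquired_kspace):
    # MANTIS-style data-consistency term (sketch): synthesize k-space from the
    # estimated parameter maps and compare with the acquired undersampled
    # k-space at the sampled locations only (mask is 1 where sampled).
    synth = t2_signal(m0, t2, tes)               # per-echo images
    synth_k = np.fft.fft2(synth, axes=(-2, -1))  # per-echo k-space
    resid = mask * (synth_k - acquired_kspace)   # sampled points only
    return np.sum(np.abs(resid) ** 2)
```

In the full framework this term is added to a supervised CNN training loss, so the network's parameter maps must both match the reference maps and explain the measured k-space data.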
Affiliation(s)
- Fang Liu
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
- Li Feng
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York
- Richard Kijowski
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
654
Bi WL, Hosny A, Schabath MB, Giger ML, Birkbak NJ, Mehrtash A, Allison T, Arnaout O, Abbosh C, Dunn IF, Mak RH, Tamimi RM, Tempany CM, Swanton C, Hoffmann U, Schwartz LH, Gillies RJ, Huang RY, Aerts HJWL. Artificial intelligence in cancer imaging: Clinical challenges and applications. CA Cancer J Clin 2019; 69:127-157. [PMID: 30720861] [PMCID: PMC6403009] [DOI: 10.3322/caac.21552]
Abstract
Judgement, as one of the core tenets of medicine, relies upon the integration of multilayered data with nuanced decision making. Cancer offers a unique context for medical decisions given not only its variegated forms with evolution of disease but also the need to take into account the individual condition of patients, their ability to receive treatment, and their responses to treatment. Challenges remain in the accurate detection, characterization, and monitoring of cancers despite improved technologies. Radiographic assessment of disease most commonly relies upon visual evaluations, the interpretations of which may be augmented by advanced computational analyses. In particular, artificial intelligence (AI) promises to make great strides in the qualitative interpretation of cancer imaging by expert clinicians, including volumetric delineation of tumors over time, extrapolation of the tumor genotype and biological course from its radiographic phenotype, prediction of clinical outcome, and assessment of the impact of disease and treatment on adjacent organs. AI may automate processes in the initial interpretation of images and shift the clinical workflow of radiographic detection, management decisions on whether or not to administer an intervention, and subsequent observation to a yet-to-be-envisioned paradigm. Here, the authors review the current state of AI as applied to medical imaging of cancer and describe advances in 4 tumor types (lung, brain, breast, and prostate) to illustrate how common clinical problems are being addressed. Although most studies evaluating AI applications in oncology to date have not been rigorously validated for reproducibility and generalizability, the results do highlight increasingly concerted efforts in pushing AI technology to clinical use and to impact future directions in cancer care.
Affiliation(s)
- Wenya Linda Bi
- Assistant Professor of Neurosurgery, Department of Neurosurgery, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Ahmed Hosny
- Research Scientist, Department of Radiation Oncology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Matthew B. Schabath
- Associate Member, Department of Cancer Epidemiology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL
- Maryellen L. Giger
- Professor of Radiology, Department of Radiology, University of Chicago, Chicago, IL
- Nicolai J. Birkbak
- Research Associate, The Francis Crick Institute, London, United Kingdom
- Research Associate, University College London Cancer Institute, London, United Kingdom
- Alireza Mehrtash
- Research Assistant, Department of Radiology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Research Assistant, Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Tavis Allison
- Research Assistant, Department of Radiology, Columbia University College of Physicians and Surgeons, New York, NY
- Research Assistant, Department of Radiology, New York Presbyterian Hospital, New York, NY
- Omar Arnaout
- Assistant Professor of Neurosurgery, Department of Neurosurgery, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Christopher Abbosh
- Research Fellow, The Francis Crick Institute, London, United Kingdom
- Research Fellow, University College London Cancer Institute, London, United Kingdom
- Ian F. Dunn
- Associate Professor of Neurosurgery, Department of Neurosurgery, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Raymond H. Mak
- Associate Professor, Department of Radiation Oncology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Rulla M. Tamimi
- Associate Professor, Department of Medicine, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Clare M. Tempany
- Professor of Radiology, Department of Radiology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Charles Swanton
- Professor, The Francis Crick Institute, London, United Kingdom
- Professor, University College London Cancer Institute, London, United Kingdom
- Udo Hoffmann
- Professor of Radiology, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA
- Lawrence H. Schwartz
- Professor of Radiology, Department of Radiology, Columbia University College of Physicians and Surgeons, New York, NY
- Chair, Department of Radiology, New York Presbyterian Hospital, New York, NY
- Robert J. Gillies
- Professor of Radiology, Department of Cancer Physiology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL
- Raymond Y. Huang
- Assistant Professor, Department of Radiology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Hugo J. W. L. Aerts
- Associate Professor, Departments of Radiation Oncology and Radiology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Professor in AI in Medicine, Radiology and Nuclear Medicine, GROW, Maastricht University Medical Centre (MUMC+), Maastricht, The Netherlands
655
Zaiss M, Deshmane A, Schuppert M, Herz K, Glang F, Ehses P, Lindig T, Bender B, Ernemann U, Scheffler K. DeepCEST: 9.4 T chemical exchange saturation transfer MRI contrast predicted from 3 T data - a proof of concept study. Magn Reson Med 2019; 81:3901-3914. [DOI: 10.1002/mrm.27690]
Affiliation(s)
- Moritz Zaiss
- High-field Magnetic Resonance Center, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Anagha Deshmane
- High-field Magnetic Resonance Center, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Mark Schuppert
- High-field Magnetic Resonance Center, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Kai Herz
- High-field Magnetic Resonance Center, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Felix Glang
- High-field Magnetic Resonance Center, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Philipp Ehses
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Tobias Lindig
- High-field Magnetic Resonance Center, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Department of Diagnostic and Interventional Neuroradiology, Eberhard-Karls University Tübingen, Tübingen, Germany
- Benjamin Bender
- Department of Diagnostic and Interventional Neuroradiology, Eberhard-Karls University Tübingen, Tübingen, Germany
- Ulrike Ernemann
- Department of Diagnostic and Interventional Neuroradiology, Eberhard-Karls University Tübingen, Tübingen, Germany
- Klaus Scheffler
- High-field Magnetic Resonance Center, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Department of Biomedical Magnetic Resonance, Eberhard-Karls University Tübingen, Tübingen, Germany
656
Cho J, Park H. Robust water-fat separation for multi-echo gradient-recalled echo sequence using convolutional neural network. Magn Reson Med 2019; 82:476-484. [DOI: 10.1002/mrm.27697]
Affiliation(s)
- JaeJin Cho
- Department of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
- HyunWook Park
- Department of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
657
Edelman RR, Koktzoglou I. Noncontrast MR angiography: An update. J Magn Reson Imaging 2019; 49:355-373. [PMID: 30566270] [PMCID: PMC6330154] [DOI: 10.1002/jmri.26288]
Abstract
Both computed tomography (CT) angiography (CTA) and contrast-enhanced MR angiography (CEMRA) have proven to be useful and accurate cross-sectional imaging modalities over a wide range of vascular territories and vascular disorders. A key advantage of MRA is that, unlike CTA, it can be performed without the administration of a contrast agent. In this review article we consider the motivations for using noncontrast MRA, potential contrast mechanisms, imaging techniques, advantages, and drawbacks with respect to CTA and CEMRA, and the level of evidence for using the various MRA techniques. In addition, we explore new developments that promise to expand the reliability and range of clinical applications for noncontrast MRA, along with functional MRA capabilities not available with CTA or CEMRA. Level of Evidence: 1 Technical Efficacy: Stage 1 J. Magn. Reson. Imaging 2019;49:355-373.
Affiliation(s)
- Robert R. Edelman
- Radiology, NorthShore University HealthSystem, Evanston, IL
- Radiology, Northwestern Memorial Hospital, Chicago, IL
- Ioannis Koktzoglou
- Radiology, NorthShore University HealthSystem, Evanston, IL
- Radiology, University of Chicago Pritzker School of Medicine, Chicago, IL
658
Wang G, Gong E, Banerjee S, Pauly J, Zaharchuk G. Accelerated MRI Reconstruction with Dual-Domain Generative Adversarial Network. Machine Learning for Medical Image Reconstruction 2019. [DOI: 10.1007/978-3-030-33843-5_5]
659
Schlemper J, Salehi SSM, Kundu P, Lazarus C, Dyvorne H, Rueckert D, Sofka M. Nonuniform Variational Network: Deep Learning for Accelerated Nonuniform MR Image Reconstruction. Lecture Notes in Computer Science 2019. [DOI: 10.1007/978-3-030-32248-9_7]
660
Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019; 25:44-56. [PMID: 30617339] [DOI: 10.1038/s41591-018-0300-7]
Abstract
The use of artificial intelligence, and the deep-learning subtype in particular, has been enabled by the use of labeled big data, along with markedly enhanced computing power and cloud storage, across all sectors. In medicine, this is beginning to have an impact at three levels: for clinicians, predominantly via rapid, accurate image interpretation; for health systems, by improving workflow and the potential for reducing medical errors; and for patients, by enabling them to process their own data to promote health. The current limitations, including bias, privacy and security, and lack of transparency, along with the future directions of these applications will be discussed in this article. Over time, marked improvements in accuracy, productivity, and workflow will likely be actualized, but whether that will be used to improve the patient-doctor relationship or facilitate its erosion remains to be seen.
Affiliation(s)
- Eric J Topol
- Department of Molecular Medicine, Scripps Research, La Jolla, CA, USA
661
Detection and Correction of Cardiac MRI Motion Artefacts During Reconstruction from k-space. Lecture Notes in Computer Science 2019. [DOI: 10.1007/978-3-030-32251-9_76]
662
663
664
Johnson PM, Muckley MJ, Bruno M, Kobler E, Hammernik K, Pock T, Knoll F. Joint Multi-anatomy Training of a Variational Network for Reconstruction of Accelerated Magnetic Resonance Image Acquisitions. Machine Learning for Medical Image Reconstruction 2019. [DOI: 10.1007/978-3-030-33843-5_7]
665
Affiliation(s)
- Doohee Lee
- Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Institute of Engineering Research, Seoul National University, Seoul, Korea
- Jingu Lee
- Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Institute of Engineering Research, Seoul National University, Seoul, Korea
- Jingyu Ko
- Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Institute of Engineering Research, Seoul National University, Seoul, Korea
- Jaeyeon Yoon
- Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Institute of Engineering Research, Seoul National University, Seoul, Korea
- Kanghyun Ryu
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
- Yoonho Nam
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
666
Kim YC. Fast upper airway magnetic resonance imaging for assessment of speech production and sleep apnea. Precision and Future Medicine 2018. [DOI: 10.23838/pfm.2018.00100]
667
How will “democratization of artificial intelligence” change the future of radiologists? Jpn J Radiol 2018; 37:9-14. [DOI: 10.1007/s11604-018-0793-5]
668
Lundervold AS, Lundervold A. An overview of deep learning in medical imaging focusing on MRI. Z Med Phys 2018; 29:102-127. [PMID: 30553609] [DOI: 10.1016/j.zemedi.2018.11.002]
Abstract
What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has witnessed a tremendous amount of attention over the last few years. The current boom started around 2009 when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments have a huge potential for medical imaging technology, medical data analysis, medical diagnostics and healthcare in general, a potential that is slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast expanding field we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting and perhaps contributing to the field of deep learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.
Affiliation(s)
- Alexander Selvikvåg Lundervold
- Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Norway; Department of Computing, Mathematics and Physics, Western Norway University of Applied Sciences, Norway
- Arvid Lundervold
- Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Norway; Neuroinformatics and Image Analysis Laboratory, Department of Biomedicine, University of Bergen, Norway; Department of Health and Functioning, Western Norway University of Applied Sciences, Norway
669
Improvement of image quality at CT and MRI using deep learning. Jpn J Radiol 2018; 37:73-80. [PMID: 30498876] [DOI: 10.1007/s11604-018-0796-2]
Abstract
Deep learning has been developed by computer scientists. Here, we discuss techniques for improving the image quality of diagnostic computed tomography and magnetic resonance imaging with the aid of deep learning. We categorize the techniques for improving the image quality as "noise and artifact reduction", "super resolution" and "image acquisition and reconstruction". For each category, we present and outline the features of some studies.
670
Sahiner B, Pezeshk A, Hadjiiski LM, Wang X, Drukker K, Cha KH, Summers RM, Giger ML. Deep learning in medical imaging and radiation therapy. Med Phys 2018; 46:e1-e36. [PMID: 30367497] [DOI: 10.1002/mp.13264]
Abstract
The goals of this review paper on deep learning (DL) in medical imaging and radiation therapy are to (a) summarize what has been achieved to date; (b) identify common and unique challenges, and strategies that researchers have taken to address these challenges; and (c) identify some of the promising avenues for the future both in terms of applications as well as technical innovations. We introduce the general principles of DL and convolutional neural networks, survey five major areas of application of DL in medical imaging and radiation therapy, identify common themes, discuss methods for dataset expansion, and conclude by summarizing lessons learned, remaining challenges, and future directions.
Affiliation(s)
- Berkman Sahiner
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
- Aria Pezeshk
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
- Xiaosong Wang
- Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD, 20892-1182, USA
- Karen Drukker
- Department of Radiology, University of Chicago, Chicago, IL, 60637, USA
- Kenny H Cha
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
- Ronald M Summers
- Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD, 20892-1182, USA
- Maryellen L Giger
- Department of Radiology, University of Chicago, Chicago, IL, 60637, USA
671
Cherukara MJ, Nashed YSG, Harder RJ. Real-time coherent diffraction inversion using deep generative networks. Sci Rep 2018; 8:16520. [PMID: 30410034] [PMCID: PMC6224523] [DOI: 10.1038/s41598-018-34525-1]
Abstract
Phase retrieval, or the process of recovering phase information in reciprocal space to reconstruct images from measured intensity alone, is the underlying basis to a variety of imaging applications including coherent diffraction imaging (CDI). Typical phase retrieval algorithms are iterative in nature, and hence, are time-consuming and computationally expensive, making real-time imaging a challenge. Furthermore, iterative phase retrieval algorithms struggle to converge to the correct solution especially in the presence of strong phase structures. In this work, we demonstrate the training and testing of CDI NN, a pair of deep deconvolutional networks trained to predict structure and phase in real space of a 2D object from its corresponding far-field diffraction intensities alone. Once trained, CDI NN can invert a diffraction pattern to an image within a few milliseconds of compute time on a standard desktop machine, opening the door to real-time imaging.
Affiliation(s)
- Mathew J Cherukara
- Advanced Photon Source, Argonne National Laboratory, Lemont, IL, 60439, USA
- Center for Nanoscale Materials, Argonne National Laboratory, Lemont, IL, 60439, USA
- Youssef S G Nashed
- Mathematics and Computer Science, Argonne National Laboratory, Lemont, IL, 60439, USA
- Ross J Harder
- Advanced Photon Source, Argonne National Laboratory, Lemont, IL, 60439, USA
672
Zibetti MVW, Baboli R, Chang G, Otazo R, Regatte RR. Rapid compositional mapping of knee cartilage with compressed sensing MRI. J Magn Reson Imaging 2018; 48:1185-1198. [PMID: 30295344] [PMCID: PMC6231228] [DOI: 10.1002/jmri.26274]
Abstract
More than a decade after the introduction of compressed sensing (CS) in MRI, researchers are still working on ways to translate it into different research and clinical applications. The greatest advantage of CS in MRI is the reduced amount of k-space data needed to reconstruct images, which can be exploited to reduce scan time or to improve spatial resolution and volumetric coverage. Efficient data acquisition using CS is extremely important for compositional mapping of the musculoskeletal system in general and knee cartilage mapping techniques in particular. High-resolution quantitative information about tissue biochemical composition could be obtained in just a few minutes using CS MRI. However, in order to make this goal a reality, some issues still need to be addressed. In this article we review the current state of the art of CS methods for rapid compositional mapping of knee cartilage. Specifically, data acquisition strategies, image reconstruction algorithms, and data fitting models are discussed. Different CS studies for T2 and T1ρ mapping of knee cartilage are reviewed, with illustrative results. Future directions, opportunities, and challenges of rapid compositional mapping techniques are also discussed. Level of Evidence: 4 Technical Efficacy: Stage 6 J. Magn. Reson. Imaging 2018;48:1185-1198.
Affiliation(s)
- Marcelo V W Zibetti
- Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, NY, USA
- Rahman Baboli
- Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, NY, USA
- Gregory Chang
- Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, NY, USA
- Ricardo Otazo
- Memorial Sloan-Kettering Cancer Center, New York, NY, USA
- Ravinder R Regatte
- Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, NY, USA
Collapse
673
Restivo MC, Campbell-Washburn AE, Kellman P, Xue H, Ramasawmy R, Hansen MS. A framework for constraining image SNR loss due to MR raw data compression. Magnetic Resonance Materials in Physics, Biology and Medicine 2018; 32:213-225. [PMID: 30361947] [DOI: 10.1007/s10334-018-0709-5]
Abstract
INTRODUCTION Computationally intensive image reconstruction algorithms can be used online during MRI exams by streaming data to remote high-performance computers. However, data acquisition rates often exceed the bandwidth of the available network resources, creating a bottleneck. Data compression is therefore desired to ensure fast data transmission. METHODS The added noise variance due to compression was determined through statistical analysis for two compression libraries (one custom and one generic) implemented in this framework. Limiting the compression error variance relative to the measured thermal noise allowed image signal-to-noise ratio (SNR) loss to be explicitly constrained. RESULTS Achievable compression ratios depend on image SNR, the user-defined SNR loss tolerance, and the acquisition type. However, a 1% reduction in SNR yields approximately four- to ninefold compression ratios across MRI acquisition strategies. For free-breathing cine data reconstructed in the cloud, the streaming bandwidth was reduced from 37 to 6.1 MB/s, alleviating the network transmission bottleneck. CONCLUSION Our framework enabled data compression for online reconstructions and allowed SNR loss to be constrained based on a user-defined SNR tolerance. This practical tool will enable real-time data streaming and greater than fourfold faster cloud upload times.
Affiliation(s)
- Matthew C Restivo: Division of Intramural Research, National Heart, Lung, and Blood Institute, National Institutes of Health, Rm B1D47, 10 Center Dr, Bethesda, MD, 20814, USA
- Adrienne E Campbell-Washburn: Division of Intramural Research, National Heart, Lung, and Blood Institute, National Institutes of Health, Rm B1D47, 10 Center Dr, Bethesda, MD, 20814, USA
- Peter Kellman: Division of Intramural Research, National Heart, Lung, and Blood Institute, National Institutes of Health, Rm B1D47, 10 Center Dr, Bethesda, MD, 20814, USA
- Hui Xue: Division of Intramural Research, National Heart, Lung, and Blood Institute, National Institutes of Health, Rm B1D47, 10 Center Dr, Bethesda, MD, 20814, USA
- Rajiv Ramasawmy: Division of Intramural Research, National Heart, Lung, and Blood Institute, National Institutes of Health, Rm B1D47, 10 Center Dr, Bethesda, MD, 20814, USA
- Michael S Hansen: Division of Intramural Research, National Heart, Lung, and Blood Institute, National Institutes of Health, Rm B1D47, 10 Center Dr, Bethesda, MD, 20814, USA
674
Kim HC, Bandettini PA, Lee JH. Deep neural network predicts emotional responses of the human brain from functional magnetic resonance imaging. Neuroimage 2018; 186:607-627. [PMID: 30366076] [DOI: 10.1016/j.neuroimage.2018.10.054]
Abstract
An artificial neural network with multiple hidden layers (known as a deep neural network, or DNN) was employed for the first time as a predictive model (DNNp) to predict emotional responses using whole-brain functional magnetic resonance imaging (fMRI) data from individual subjects. During fMRI data acquisition, 10 healthy participants listened to 80 International Affective Digital Sound stimuli and rated their own emotions generated by each sound stimulus in terms of the arousal, dominance, and valence dimensions. The whole-brain spatial patterns from a general linear model (i.e., beta-valued maps) for each sound stimulus and the emotional response ratings were used as the input and output of the DNNp, respectively. Based on a nested five-fold cross-validation scheme, the paired input and output data were divided into training (three-fold), validation (one-fold), and test (one-fold) data. The DNNp was trained and optimized using the training and validation data and was tested using the test data. The Pearson's correlation coefficients between the rated and predicted emotional responses from our DNNp model with weight sparsity optimization (mean ± standard error 0.52 ± 0.02 for arousal, 0.51 ± 0.03 for dominance, and 0.51 ± 0.03 for valence, with an input denoising level of 0.3 and a mini-batch size of 1) were significantly greater than those of DNN models with conventional regularization schemes including elastic net regularization (0.15 ± 0.05, 0.15 ± 0.06, and 0.21 ± 0.04 for arousal, dominance, and valence, respectively), those of shallow models including logistic regression (0.11 ± 0.04, 0.10 ± 0.05, and 0.17 ± 0.04 for arousal, dominance, and valence, respectively; average of logistic regression and sparse logistic regression), and those of support vector machine-based predictive models (SVMps; 0.12 ± 0.06, 0.06 ± 0.06, and 0.10 ± 0.06 for arousal, dominance, and valence, respectively; average of linear and non-linear SVMps).
This difference was confirmed to be significant with a Bonferroni-corrected p-value of less than 0.001 from a one-way analysis of variance (ANOVA) and subsequent paired t-tests. The weights of the trained DNNp were interpreted, and the input patterns that maximized or minimized the output of the DNNp (i.e., the emotional responses) were estimated. Based on a binary classification of each emotion category (e.g., high arousal vs. low arousal), the error rates for the DNNp (31.2% ± 1.3% for arousal, 29.0% ± 1.7% for dominance, and 28.6% ± 3.0% for valence) were significantly lower than those for the linear SVMp (44.7% ± 2.0%, 50.7% ± 1.7%, and 47.4% ± 1.9% for arousal, dominance, and valence, respectively) and the non-linear SVMp (48.8% ± 2.3%, 52.2% ± 1.9%, and 46.4% ± 1.3% for arousal, dominance, and valence, respectively), as confirmed by a Bonferroni-corrected p < 0.001 from the one-way ANOVA. Our study demonstrates that the DNNp model is able to reveal neuronal circuitry associated with human emotional processing, including structures in the limbic and paralimbic areas such as the amygdala, prefrontal areas, anterior cingulate cortex, insula, and caudate. Our DNNp model was also able to use activation patterns in these structures to predict and classify emotional responses to stimuli.
Affiliation(s)
- Hyun-Chul Kim: Department of Brain and Cognitive Engineering, Korea University, Anam-ro 145, Seongbuk-gu, Seoul, 02841, Republic of Korea
- Peter A Bandettini: Section on Functional Imaging Methods, Lab of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20892, USA
- Jong-Hwan Lee: Department of Brain and Cognitive Engineering, Korea University, Anam-ro 145, Seongbuk-gu, Seoul, 02841, Republic of Korea
675
Zaharchuk G, Gong E, Wintermark M, Rubin D, Langlotz CP. Deep Learning in Neuroradiology. AJNR Am J Neuroradiol 2018; 39:1776-1784. [PMID: 29419402] [PMCID: PMC7410723] [DOI: 10.3174/ajnr.A5543]
Abstract
Deep learning is a form of machine learning that uses a convolutional neural network architecture and shows tremendous promise for imaging applications. It is increasingly being adapted from its original demonstrations in computer vision to medical imaging. Because of the high volume and wealth of multimodal imaging information acquired in typical studies, neuroradiology is poised to be an early adopter of deep learning. Compelling deep learning research applications have been demonstrated, and their use is likely to grow rapidly. This review article describes the reasons for this trend, outlines the basic methods used to train and test deep learning models, and presents a brief overview of current and potential clinical applications, with an emphasis on how they are likely to change future neuroradiology practice. Facility with these methods among neuroimaging researchers and clinicians will be important to channel and harness the vast potential of this new method.
Affiliation(s)
- G Zaharchuk, M Wintermark, D Rubin, C P Langlotz: Department of Radiology, Stanford University and Stanford University Medical Center, Stanford, California
- E Gong: Department of Electrical Engineering, Stanford University and Stanford University Medical Center, Stanford, California
676
Huff TJ, Ludwig PE, Zuniga JM. The potential for machine learning algorithms to improve and reduce the cost of 3-dimensional printing for surgical planning. Expert Rev Med Devices 2018; 15:349-356. [PMID: 29723481] [DOI: 10.1080/17434440.2018.1473033]
Abstract
INTRODUCTION 3D-printed anatomical models play an important role in medical and research settings. The recent successes of 3D anatomical models in healthcare have led many institutions to adopt the technology. However, several issues must be addressed before it can become more widespread, chief among them the cost and time of manufacturing. Machine learning (ML) could be used to solve these issues by streamlining the 3D modeling process through rapid medical image segmentation and improved patient selection and image acquisition. The current challenges, potential solutions, and future directions for ML and 3D anatomical modeling in healthcare are discussed. AREAS COVERED This review covers research articles in the field of machine learning as related to 3D anatomical modeling. Topics discussed include automated image segmentation, cost reduction, and related time constraints. EXPERT COMMENTARY ML-based segmentation of medical images could improve the process of 3D anatomical modeling. However, until more research is done to validate these technologies in clinical practice, their impact on patient outcomes will remain unknown. We have the necessary computational tools to tackle the problems discussed; the difficulty now lies in our ability to collect sufficient data.
Affiliation(s)
- Trevor J Huff: Creighton University School of Medicine, Omaha, USA
- Jorge M Zuniga: Department of Biomechanics, University of Nebraska at Omaha, USA; Facultad de Ciencias de la Salud, Universidad Autónoma de Chile, Chile
677
Liang J, Wang LV. Single-shot ultrafast optical imaging. Optica 2018; 5:1113-1127. [PMID: 30820445] [PMCID: PMC6388706] [DOI: 10.1364/optica.5.001113]
Abstract
Single-shot ultrafast optical imaging can capture two-dimensional transient scenes in the optical spectral range at ≥100 million frames per second. This rapidly evolving field surpasses conventional pump-probe methods by possessing real-time imaging capability, which is indispensable for recording non-repeatable and difficult-to-reproduce events and for understanding physical, chemical, and biological mechanisms. In this mini-review, we comprehensively survey the state of the art in single-shot ultrafast optical imaging. Based on the illumination requirement, we categorize the field into active-detection and passive-detection domains. Depending on the specific image acquisition and reconstruction strategies, these two categories are further divided into a total of six sub-categories. Under each sub-category, we describe operating principles, present representative cutting-edge techniques with a particular emphasis on their methodology and applications, and discuss their advantages and challenges. Finally, we envision the prospects for technical advancement in this field.
Affiliation(s)
- Jinyang Liang: Laboratory of Applied Computational Imaging, Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, 1650 Boulevard Lionel-Boulet, Varennes, QC J3X1S2, Canada
- Lihong V. Wang: Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, 1200 East California Boulevard, Mail Code 138-78, Pasadena, CA 91125, USA
678
Cardiac MR Motion Artefact Correction from K-space Using Deep Learning-Based Reconstruction. Machine Learning for Medical Image Reconstruction 2018. [DOI: 10.1007/978-3-030-00129-2_3]
679
Liao P, Zhang J, Zeng K, Yang Y, Cai S, Guo G, Cai C. Referenceless distortion correction of gradient-echo echo-planar imaging under inhomogeneous magnetic fields based on a deep convolutional neural network. Comput Biol Med 2018; 100:230-238. [DOI: 10.1016/j.compbiomed.2018.07.010]
680
Huang Y, Lu Y, Taubmann O, Lauritsch G, Maier A. Traditional machine learning for limited angle tomography. Int J Comput Assist Radiol Surg 2018; 14:11-19. [DOI: 10.1007/s11548-018-1851-2]
681
Yin X, Russek SE, Zabow G, Sun F, Mohapatra J, Keenan KE, Boss MA, Zeng H, Liu JP, Viert A, Liou SH, Moreland J. Large T1 contrast enhancement using superparamagnetic nanoparticles in ultra-low field MRI. Sci Rep 2018; 8:11863. [PMID: 30089881] [PMCID: PMC6082888] [DOI: 10.1038/s41598-018-30264-5]
Abstract
Superparamagnetic iron oxide nanoparticles (SPIONs) are widely investigated and utilized as magnetic resonance imaging (MRI) contrast and therapy agents due to their large magnetic moments. Local field inhomogeneities caused by these high magnetic moments are used to generate T2 contrast in clinical high-field MRI, resulting in signal loss (darker contrast). Here we present strong T1 contrast enhancement (brighter contrast) from SPIONs (diameters from 11 nm to 22 nm) observed in ultra-low-field (ULF) MRI at 0.13 mT. We achieved a high longitudinal relaxivity for 18 nm SPION solutions, r1 = 615 s−1 mM−1, which is two orders of magnitude larger than that of typical commercial Gd-based T1 contrast agents operating at high fields (1.5 T and 3 T). The significantly enhanced r1 value at ultra-low fields is attributed to the coupling of proton spins with SPION magnetic fluctuations (Brownian and Néel) associated with a low-frequency peak in the imaginary part of the AC susceptibility (χ″). SPION-based T1-weighted ULF MRI has the advantages of enhanced signal, shorter imaging times, and nontoxic, biocompatible iron-oxide-based agents. This approach shows promise to become a functional imaging technique, similar to PET, where low spatial resolution is compensated for by important functional information.
Affiliation(s)
- Xiaolu Yin: National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA; Nebraska Center for Materials and Nanoscience, University of Nebraska-Lincoln, 855 N. 16th St, Lincoln, NE 68588, USA
- Stephen E Russek: National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA
- Gary Zabow: National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA
- Fan Sun: Department of Physics, University at Buffalo, the State University of New York, 225 Fronczak Hall, Buffalo, NY, USA
- Jeotikanta Mohapatra: Department of Physics, University of Texas-Arlington, 502 Yates St, Arlington, TX 76019, USA
- Kathryn E Keenan: National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA
- Michael A Boss: National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA
- Hao Zeng: Department of Physics, University at Buffalo, the State University of New York, 225 Fronczak Hall, Buffalo, NY, USA
- J Ping Liu: Department of Physics, University of Texas-Arlington, 502 Yates St, Arlington, TX 76019, USA
- Alexandrea Viert: Wake Forest Institute for Regenerative Medicine, Wake Forest School of Medicine, Winston-Salem, NC 27157, USA
- Sy-Hwang Liou: Nebraska Center for Materials and Nanoscience, University of Nebraska-Lincoln, 855 N. 16th St, Lincoln, NE 68588, USA
- John Moreland: National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA
682
Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL. Artificial intelligence in radiology. Nat Rev Cancer 2018; 18:500-510.
Abstract
Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced.
Affiliation(s)
- Ahmed Hosny: Department of Radiation Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Chintan Parmar: Department of Radiation Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- John Quackenbush: Department of Biostatistics & Computational Biology, Dana-Farber Cancer Institute, Boston, MA, USA; Department of Cancer Biology, Dana-Farber Cancer Institute, Boston, MA, USA
- Lawrence H Schwartz: Department of Radiology, Columbia University College of Physicians and Surgeons, New York, NY, USA; Department of Radiology, New York Presbyterian Hospital, New York, NY, USA
- Hugo J W L Aerts: Department of Radiation Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA; Department of Radiology, Dana-Farber Cancer Institute, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
683
Zhang Z, Beck MW, Winkler DA, Huang B, Sibanda W, Goyal H. Opening the black box of neural networks: methods for interpreting neural network models in clinical applications. Ann Transl Med 2018; 6:216. [PMID: 30023379] [PMCID: PMC6035992] [DOI: 10.21037/atm.2018.05.32]
Abstract
Artificial neural networks (ANNs) are powerful tools for data analysis and are particularly suitable for modeling relationships between variables to best predict an outcome. While these models can be used to answer many important research questions, their utility has been critically limited because interpretation of the "black box" model is difficult. Clinical investigators usually employ ANN models to predict clinical outcomes or to make a diagnosis; the model, however, is difficult for clinicians to interpret. To address this important shortcoming of neural network modeling methods, we describe several methods that help subject-matter audiences (e.g., clinicians, medical policy makers) understand neural network models. Garson's algorithm describes the relative magnitude of the importance of a descriptor (predictor) in its connection with outcome variables by dissecting the model weights. Lek's profile method explores the relationship between the outcome variable and a predictor of interest while holding the other predictors at constant values (e.g., minimum, 20th percentile, maximum). While Lek's profile was developed specifically for neural networks, the partial dependence plot is a more generic version that visualizes the relationship between an outcome and one or two predictors. Finally, the local interpretable model-agnostic explanations (LIME) method can explain the predictions of any classification or regression model by approximating it locally with an interpretable model. R code implementing these methods is shown using example data fitted with a standard feed-forward neural network model. We offer code and a step-by-step description of how to use these tools to facilitate a better understanding of ANNs.
Affiliation(s)
- Zhongheng Zhang: Department of Emergency Medicine, Sir Run-Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou 310016, China
- Marcus W. Beck: Southern California Coastal Water Research Project, Costa Mesa, CA, USA
- David A. Winkler: Monash Institute of Pharmaceutical Sciences, Monash University, Parkville, Victoria, Australia; Latrobe Institute for Molecular Science, La Trobe University, Bundoora, Victoria, Australia; School of Chemical and Physical Sciences, Flinders University, Bedford Park, South Australia, Australia; School of Pharmacy, University of Nottingham, Nottingham NG7 2RD, UK
- Bin Huang: Division of Biostatistics and Epidemiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Wilbert Sibanda: School of Nursing & Public Health, College of Health Sciences, University of KwaZulu-Natal, Durban, South Africa
- Hemant Goyal: Department of Internal Medicine, Mercer University School of Medicine, Macon, GA, USA
- Written on behalf of the AME Big-Data Clinical Trial Collaborative Group
684
Magnetic Resonance Imaging technology: bridging the gap between noninvasive human imaging and optical microscopy. Curr Opin Neurobiol 2018; 50:250-260. [PMID: 29753942] [DOI: 10.1016/j.conb.2018.04.026]
Abstract
Technological advances in Magnetic Resonance Imaging (MRI) have provided substantial gains in the sensitivity and specificity of functional neuroimaging. Mounting evidence demonstrates that the hemodynamic changes utilized in functional MRI can be far more spatially and thus neuronally specific than previously believed. This has motivated a push toward novel, high-resolution MR imaging strategies that can match this biological resolution limit while recording from the entire human brain. Although sensitivity increases are a necessary component, new MR encoding technologies are required to convert improved sensitivity into higher resolution. These new sampling strategies improve image acquisition efficiency and enable increased image encoding in the time-frame needed to follow hemodynamic changes associated with brain activation.
685

686
Gong E, Pauly JM, Wintermark M, Zaharchuk G. Deep learning enables reduced gadolinium dose for contrast-enhanced brain MRI. J Magn Reson Imaging 2018; 48:330-340. [PMID: 29437269] [DOI: 10.1002/jmri.25970]
Abstract
BACKGROUND There are concerns over gadolinium deposition from gadolinium-based contrast agent (GBCA) administration. PURPOSE To reduce the gadolinium dose in contrast-enhanced brain MRI using a deep learning method. STUDY TYPE Retrospective, crossover. POPULATION Sixty patients receiving clinically indicated contrast-enhanced brain MRI. SEQUENCE 3D T1-weighted inversion-recovery-prepped fast-spoiled-gradient-echo (IR-FSPGR) imaging was acquired at both 1.5T and 3T. In 60 brain MRI exams, the IR-FSPGR sequence was obtained under three conditions: precontrast, and postcontrast with a 10% low dose (0.01 mmol/kg) and a 100% full dose (0.1 mmol/kg) of gadobenate dimeglumine. We trained a deep learning model using the first 10 cases (with mixed indications) to approximate full-dose images from the precontrast and low-dose images. Synthesized full-dose images were created using the trained model in two test sets: 20 patients with mixed indications and 30 patients with glioma. ASSESSMENT For both test sets, the low-dose, true full-dose, and synthesized full-dose postcontrast image sets were compared quantitatively using peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). For the test set comprising 20 patients with mixed indications, two neuroradiologists scored the three postcontrast image sets blindly and independently, evaluating image quality, motion-artifact suppression, and contrast enhancement relative to the precontrast images. STATISTICAL ANALYSIS Results were assessed using paired t-tests and noninferiority tests. RESULTS The proposed deep learning method yielded significant (n = 50, P < 0.001) improvements over the low-dose images (>5 dB PSNR gain and >11.0% SSIM gain). Ratings of image quality (n = 20, P = 0.003) and contrast enhancement (n = 20, P < 0.001) were significantly increased.
Compared to the true full-dose images, the synthesized full-dose images showed a slight but nonsignificant reduction in image quality (n = 20, P = 0.083) and contrast enhancement (n = 20, P = 0.068). Slightly better (n = 20, P = 0.039) motion-artifact suppression was noted in the synthesized images. The noninferiority test rejected the inferiority of the synthesized images relative to the true full-dose images for image quality (95% CI: −14% to 9%), artifact suppression (95% CI: −5% to 20%), and contrast enhancement (95% CI: −13% to 6%). DATA CONCLUSION With the proposed deep learning method, the gadolinium dose can be reduced 10-fold while preserving contrast information and avoiding significant image quality degradation. LEVEL OF EVIDENCE 3 Technical Efficacy: Stage 5 J. Magn. Reson. Imaging 2018;48:330-340.
Affiliation(s)
- Enhao Gong: Department of Electrical Engineering, Stanford University, Stanford, California, USA; Department of Radiology, Stanford University, Stanford, California, USA
- John M Pauly: Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Max Wintermark: Department of Radiology, Stanford University, Stanford, California, USA
- Greg Zaharchuk: Department of Radiology, Stanford University, Stanford, California, USA
687
Ben Yedder H, BenTaieb A, Shokoufi M, Zahiremami A, Golnaraghi F, Hamarneh G. Deep Learning Based Image Reconstruction for Diffuse Optical Tomography. Machine Learning for Medical Image Reconstruction 2018. [DOI: 10.1007/978-3-030-00129-2_13]
688
Eo T, Shin H, Kim T, Jun Y, Hwang D. Translation of 1D Inverse Fourier Transform of K-space to an Image Based on Deep Learning for Accelerating Magnetic Resonance Imaging. Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. [DOI: 10.1007/978-3-030-00928-1_28]
689
ETER-net: End to End MR Image Reconstruction Using Recurrent Neural Network. Machine Learning for Medical Image Reconstruction 2018. [DOI: 10.1007/978-3-030-00129-2_2]
690
Complex Fully Convolutional Neural Networks for MR Image Reconstruction. Machine Learning for Medical Image Reconstruction 2018. [DOI: 10.1007/978-3-030-00129-2_4]