201
Korfiatis P, Erickson B. Deep learning can see the unseeable: predicting molecular markers from MRI of brain gliomas. Clin Radiol 2019; 74:367-373. DOI: 10.1016/j.crad.2019.01.028
202
A deep learning radiomics model for preoperative grading in meningioma. Eur J Radiol 2019; 116:128-134. PMID: 31153553; DOI: 10.1016/j.ejrad.2019.04.022
Abstract
OBJECTIVES To noninvasively differentiate meningioma grades with a deep learning radiomics (DLR) model based on routine post-contrast MRI.
METHODS We enrolled 181 patients with a histopathologic diagnosis of meningioma who underwent preoperative post-contrast MRI at 2 hospitals (99 in the primary cohort and 82 in the validation cohort). All tumors were segmented on post-contrast axial T1-weighted images (T1WI), from which 2048 deep learning features were extracted by a convolutional neural network. The random forest algorithm was used to select features with importance values over 0.001, upon which a deep learning signature was built with a linear discriminant analysis classifier. The performance of the DLR model was assessed by discrimination and calibration in the independent validation cohort. For comparison, a radiomic model based on hand-crafted features and a fusion model were built.
RESULTS The DLR signature comprised 39 deep learning features and showed good discrimination in both the primary and validation cohorts. The area under the curve (AUC), sensitivity, and specificity for predicting meningioma grade were 0.811 (95% CI, 0.635-0.986), 0.769, and 0.898, respectively, in the validation cohort. DLR performance was superior to that of the hand-crafted features. Calibration curves of the DLR model showed good agreement between the predicted probability and the observed outcome of high-grade meningioma.
CONCLUSIONS Using routine MRI data, we developed a DLR model with good performance for noninvasive, individualized prediction of meningioma grade, exceeding the quantization capability of hand-crafted features. This model has the potential to guide and facilitate clinical decision-making on whether to observe or to treat patients by providing prognostic information.
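The feature-selection step described above (keeping only CNN features whose random-forest importance exceeds 0.001) can be sketched as follows; the function name and toy importance values are illustrative, not from the paper:

```python
def select_deep_features(importances, threshold=0.001):
    """Keep indices of deep-learning features whose random-forest
    importance value exceeds the threshold (0.001 in the study)."""
    return [i for i, imp in enumerate(importances) if imp > threshold]

# Toy importances for 5 of the 2048 CNN features; indices 0 and 3 survive.
# The retained features would then feed the linear discriminant classifier.
kept = select_deep_features([0.02, 0.0004, 0.001, 0.3, 0.0])
```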
203
Aoe J, Fukuma R, Yanagisawa T, Harada T, Tanaka M, Kobayashi M, Inoue Y, Yamamoto S, Ohnishi Y, Kishima H. Automatic diagnosis of neurological diseases using MEG signals with a deep neural network. Sci Rep 2019; 9:5057. PMID: 30911028; PMCID: PMC6433906; DOI: 10.1038/s41598-019-41500-x
Abstract
The application of deep learning to neuroimaging big data will help develop computer-aided diagnosis of neurological diseases. Pattern recognition using deep learning can extract features of neuroimaging signals unique to various neurological diseases, leading to better diagnoses. In this study, we developed MNet, a novel deep neural network, to classify multiple neurological diseases using resting-state magnetoencephalography (MEG) signals. We used the MEG signals of 67 healthy subjects, 26 patients with spinal cord injury, and 140 patients with epilepsy to train and test the network using 10-fold cross-validation. The trained MNet succeeded in classifying the healthy subjects and those with the two neurological diseases with an accuracy of 70.7 ± 10.6%, significantly exceeding the accuracy of 63.4 ± 12.7% obtained from the relative powers of six frequency bands (δ: 1-4 Hz; θ: 4-8 Hz; low-α: 8-10 Hz; high-α: 10-13 Hz; β: 13-30 Hz; low-γ: 30-50 Hz) for each channel using a support vector machine as the classifier (p = 4.2 × 10⁻²). The specificity of classification for each disease ranged from 86% to 94%. Our results suggest that this technique would be useful for developing a classifier that improves neurological diagnoses and allows high specificity in identifying diseases.
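The SVM baseline above is built on relative band powers in the six listed frequency bands. A minimal sketch of that feature computation, assuming a precomputed power spectrum supplied as plain lists of frequencies and power values:

```python
# The six bands reported in the study (Hz, half-open intervals assumed)
BANDS = {
    "delta": (1, 4), "theta": (4, 8), "low_alpha": (8, 10),
    "high_alpha": (10, 13), "beta": (13, 30), "low_gamma": (30, 50),
}

def relative_band_powers(freqs, psd):
    """Relative power per band for one channel, normalized by the
    total power in the 1-50 Hz range."""
    total = sum(p for f, p in zip(freqs, psd) if 1 <= f < 50)
    return {name: sum(p for f, p in zip(freqs, psd) if lo <= f < hi) / total
            for name, (lo, hi) in BANDS.items()}

# Toy flat spectrum sampled at one frequency per band
bands_example = relative_band_powers([2, 5, 9, 11, 20, 40], [1.0] * 6)
```

The six per-channel values would then be concatenated across channels to form the SVM feature vector.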
Affiliation(s)
- Jo Aoe
  - Osaka University Institute for Advanced Co-Creation Studies, Suita, Japan
- Ryohei Fukuma
  - Department of Neurosurgery, Osaka University Graduate School of Medicine, Suita, Japan
- Takufumi Yanagisawa
  - Osaka University Institute for Advanced Co-Creation Studies, Suita, Japan
  - Department of Neurosurgery, Osaka University Graduate School of Medicine, Suita, Japan
  - JST PRESTO, Suita, Japan
- Tatsuya Harada
  - Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
  - RIKEN, Tokyo, Japan
- Masataka Tanaka
  - Department of Neurosurgery, Osaka University Graduate School of Medicine, Suita, Japan
- Maki Kobayashi
  - Department of Neurosurgery, Osaka University Graduate School of Medicine, Suita, Japan
- You Inoue
  - Department of Neurosurgery, Osaka University Graduate School of Medicine, Suita, Japan
- Shota Yamamoto
  - Department of Neurosurgery, Osaka University Graduate School of Medicine, Suita, Japan
- Yuichiro Ohnishi
  - Department of Neurosurgery, Osaka University Graduate School of Medicine, Suita, Japan
- Haruhiko Kishima
  - Department of Neurosurgery, Osaka University Graduate School of Medicine, Suita, Japan
204
Banzato T, Causin F, Della Puppa A, Cester G, Mazzai L, Zotti A. Accuracy of deep learning to differentiate the histopathological grading of meningiomas on MR images: A preliminary study. J Magn Reson Imaging 2019; 50:1152-1159. PMID: 30896065; PMCID: PMC6767062; DOI: 10.1002/jmri.26723
Abstract
Background Grading of meningiomas is important in the choice of the most effective treatment for each patient.
Purpose To determine the diagnostic accuracy of a deep convolutional neural network (DCNN) in differentiating the histopathological grade of meningiomas from MR images.
Study Type Retrospective.
Population In all, 117 meningioma-affected patients: 79 World Health Organization (WHO) Grade I, 32 WHO Grade II, and 6 WHO Grade III.
Field Strength/Sequence 1.5 T and 3.0 T postcontrast-enhanced T1W (PCT1W) images and apparent diffusion coefficient (ADC) maps (b values of 0, 500, and 1000 s/mm²).
Assessment WHO Grade II and WHO Grade III meningiomas were considered a single category. The diagnostic accuracy of the pretrained Inception-V3 and AlexNet DCNNs was tested on ADC maps and PCT1W images separately. Receiver operating characteristic (ROC) curves and the area under the curve (AUC) were used to assess DCNN performance.
Statistical Test Leave-one-out cross-validation.
Results The application of the Inception-V3 DCNN to ADC maps provided the best diagnostic accuracy, with an AUC of 0.94 (95% confidence interval [CI], 0.88-0.98). Remarkably, only 1/38 WHO Grade II-III and 7/79 WHO Grade I lesions were misclassified by this model. The application of AlexNet to ADC maps had low discriminating accuracy, with an AUC of 0.68 (95% CI, 0.59-0.76) and a high misclassification rate for both WHO Grade I and WHO Grade II-III cases. The discriminating accuracy of both DCNNs on postcontrast T1W images was low, with Inception-V3 displaying an AUC of 0.68 (95% CI, 0.59-0.76) and AlexNet displaying an AUC of 0.55 (95% CI, 0.45-0.64).
Data Conclusion DCNNs can accurately discriminate between benign and atypical/anaplastic meningiomas from ADC maps but not from PCT1W images.
Level of Evidence 2. Technical Efficacy Stage 2.
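The leave-one-out cross-validation used as the statistical test above holds each case out exactly once. A minimal sketch (`samples` stands in for patient-level cases; names are illustrative):

```python
def leave_one_out_splits(samples):
    """Yield (train, held_out) pairs: each sample is held out once,
    so a dataset of n cases produces n folds."""
    for i in range(len(samples)):
        yield samples[:i] + samples[i + 1:], samples[i]

# Three toy patients -> three folds; fold 0 trains on pt2/pt3, tests on pt1
splits = list(leave_one_out_splits(["pt1", "pt2", "pt3"]))
```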
Affiliation(s)
- Tommaso Banzato
  - Department of Animal Medicine, Productions and Health, University of Padua, Legnaro, Italy
- Giacomo Cester
  - Neuroradiology Unit, Padua University Hospital, Padova, Italy
- Linda Mazzai
  - Neuroradiology Unit, Padua University Hospital, Padova, Italy
- Alessandro Zotti
  - Department of Animal Medicine, Productions and Health, University of Padua, Legnaro, Italy
205
Predictive markers for Parkinson's disease using deep neural nets on neuromelanin sensitive MRI. Neuroimage Clin 2019; 22:101748. PMID: 30870733; PMCID: PMC6417260; DOI: 10.1016/j.nicl.2019.101748
Abstract
Neuromelanin-sensitive magnetic resonance imaging (NMS-MRI) has been crucial in identifying abnormalities in the substantia nigra pars compacta (SNc) in Parkinson's disease (PD), as PD is characterized by loss of dopaminergic neurons in the SNc. Current techniques estimate contrast ratios of the SNc, visualized on NMS-MRI, to discern PD patients from healthy controls. However, the extraction of these features is time-consuming and laborious, yields lower prediction accuracies, and does not account for patterns of subtle changes in the SNc in PD. To mitigate this, our work establishes a computer-based analysis technique that uses convolutional neural networks (CNNs) to create prognostic and diagnostic biomarkers of PD from NMS-MRI. Our technique not only performs with superior testing accuracy (80%) compared with contrast ratio-based classification (56.5% testing accuracy) and a radiomics classifier (60.3% testing accuracy), but also supports discriminating PD from atypical parkinsonian syndromes (85.7% test accuracy). Moreover, it can locate the most discriminative regions on the neuromelanin contrast images. These discriminative activations demonstrate that the left SNc plays a key role in the classification in comparison to the right SNc, in agreement with the concept of asymmetry in PD. Overall, the proposed technique has the potential to support radiological diagnosis of PD while facilitating a deeper understanding of the abnormalities in the SNc.
Highlights:
- A novel convolutional neural network (CNN)-based marker for Parkinson's disease (PD) from neuromelanin-sensitive images.
- The classifier demonstrates high accuracy in delineating PD from healthy controls compared with other techniques.
- The class activation maps demonstrate significant asymmetry, reflecting the clinical asymmetry observed in PD.
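The contrast-ratio baseline that the CNN is compared against is typically computed as SNc signal relative to a background reference region; a minimal sketch under that assumed definition (the paper does not spell out its exact formula):

```python
def snc_contrast_ratio(snc_mean, background_mean):
    """Neuromelanin contrast ratio, assumed definition:
    (S_SNc - S_background) / S_background, from mean ROI intensities."""
    return (snc_mean - background_mean) / background_mean

# An SNc ROI 20% brighter than the background reference yields CR = 0.2
cr = snc_contrast_ratio(120.0, 100.0)
```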
206
Batchala PP, Muttikkal TJE, Donahue JH, Patrie JT, Schiff D, Fadul CE, Mrachek EK, Lopes MB, Jain R, Patel SH. Neuroimaging-Based Classification Algorithm for Predicting 1p/19q-Codeletion Status in IDH-Mutant Lower Grade Gliomas. AJNR Am J Neuroradiol 2019; 40:426-432. PMID: 30705071; DOI: 10.3174/ajnr.a5957
Abstract
BACKGROUND AND PURPOSE Isocitrate dehydrogenase (IDH)-mutant lower grade gliomas are classified as oligodendrogliomas or diffuse astrocytomas based on 1p/19q-codeletion status. We aimed to test and validate neuroradiologists' performance in predicting the codeletion status of IDH-mutant lower grade gliomas from simple neuroimaging metrics.
MATERIALS AND METHODS One hundred two IDH-mutant lower grade gliomas with preoperative MR imaging and known 1p/19q status from The Cancer Genome Atlas composed the training dataset. Two neuroradiologists in consensus analyzed the training dataset for various imaging features: tumor texture, margins, cortical infiltration, T2-FLAIR mismatch, tumor cyst, T2* susceptibility, hydrocephalus, midline shift, maximum dimension, primary lobe, necrosis, enhancement, edema, and gliomatosis. Statistical analysis of the training data produced a multivariate classification model for codeletion prediction based on a subset of MR imaging features and patient age. To validate the classification model, 2 different independent neuroradiologists analyzed a separate cohort of 106 institutional IDH-mutant lower grade gliomas.
RESULTS Training dataset analysis produced a 2-step classification algorithm with 86.3% codeletion prediction accuracy, based on the following: 1) the presence of the T2-FLAIR mismatch sign, which was 100% predictive of noncodeleted lower grade gliomas (n = 21); and 2) a logistic regression model based on texture, patient age, T2* susceptibility, primary lobe, and hydrocephalus. Independent validation of the classification algorithm rendered codeletion prediction accuracies of 81.1% and 79.2% in the 2 independent readers. The metrics used in the algorithm showed moderate to substantial interreader agreement (κ = 0.56-0.79).
CONCLUSIONS We validated a classification algorithm based on simple, reproducible neuroimaging metrics and patient age that demonstrates moderate prediction accuracy for 1p/19q-codeletion status among IDH-mutant lower grade gliomas.
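The 2-step algorithm above can be sketched as follows. The T2-FLAIR mismatch short-circuit is from the paper; the regression weights, intercept, and feature encoding are hypothetical placeholders, not the published coefficients:

```python
import math

# Hypothetical illustrative coefficients -- the paper reports which
# predictors enter the logistic model, not these values.
WEIGHTS = {"texture": 1.2, "age": -0.04, "t2star_susceptibility": 0.9,
           "primary_lobe": 0.7, "hydrocephalus": 0.5}
INTERCEPT = -0.3

def predict_1p19q(features):
    """Two-step rule: mismatch sign first, logistic regression second."""
    # Step 1: T2-FLAIR mismatch was 100% predictive of noncodeleted tumors
    if features.get("t2_flair_mismatch"):
        return "noncodeleted"
    # Step 2: logistic regression over the remaining metrics
    z = INTERCEPT + sum(w * features.get(k, 0.0) for k, w in WEIGHTS.items())
    p_codeleted = 1.0 / (1.0 + math.exp(-z))
    return "codeleted" if p_codeleted >= 0.5 else "noncodeleted"
```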
Affiliation(s)
- P P Batchala
  - From the Department of Radiology and Medical Imaging (P.P.B., T.J.E.M., J.H.D., S.H.P.)
- T J E Muttikkal
  - From the Department of Radiology and Medical Imaging (P.P.B., T.J.E.M., J.H.D., S.H.P.)
- J H Donahue
  - From the Department of Radiology and Medical Imaging (P.P.B., T.J.E.M., J.H.D., S.H.P.)
- J T Patrie
  - Department of Public Health Sciences (J.T.P.)
- D Schiff
  - Division of Neuro-Oncology (D.S., C.E.F.)
- C E Fadul
  - Division of Neuro-Oncology (D.S., C.E.F.)
- E K Mrachek
  - Department of Pathology (E.K.M., M.-B.L.), Divisions of Neuropathology and Molecular Diagnostics, University of Virginia Health System, Charlottesville, Virginia
- M-B Lopes
  - Department of Pathology (E.K.M., M.-B.L.), Divisions of Neuropathology and Molecular Diagnostics, University of Virginia Health System, Charlottesville, Virginia
- R Jain
  - Departments of Radiology (R.J.)
  - Neurosurgery (R.J.), New York University School of Medicine, New York, New York
- S H Patel
  - From the Department of Radiology and Medical Imaging (P.P.B., T.J.E.M., J.H.D., S.H.P.)
207
Bi WL, Hosny A, Schabath MB, Giger ML, Birkbak NJ, Mehrtash A, Allison T, Arnaout O, Abbosh C, Dunn IF, Mak RH, Tamimi RM, Tempany CM, Swanton C, Hoffmann U, Schwartz LH, Gillies RJ, Huang RY, Aerts HJWL. Artificial intelligence in cancer imaging: Clinical challenges and applications. CA Cancer J Clin 2019; 69:127-157. PMID: 30720861; PMCID: PMC6403009; DOI: 10.3322/caac.21552
Abstract
Judgement, as one of the core tenets of medicine, relies upon the integration of multilayered data with nuanced decision making. Cancer offers a unique context for medical decisions given not only its variegated forms with evolution of disease but also the need to take into account the individual condition of patients, their ability to receive treatment, and their responses to treatment. Challenges remain in the accurate detection, characterization, and monitoring of cancers despite improved technologies. Radiographic assessment of disease most commonly relies upon visual evaluations, the interpretations of which may be augmented by advanced computational analyses. In particular, artificial intelligence (AI) promises to make great strides in the qualitative interpretation of cancer imaging by expert clinicians, including volumetric delineation of tumors over time, extrapolation of the tumor genotype and biological course from its radiographic phenotype, prediction of clinical outcome, and assessment of the impact of disease and treatment on adjacent organs. AI may automate processes in the initial interpretation of images and shift the clinical workflow of radiographic detection, management decisions on whether or not to administer an intervention, and subsequent observation to a yet to be envisioned paradigm. Here, the authors review the current state of AI as applied to medical imaging of cancer and describe advances in 4 tumor types (lung, brain, breast, and prostate) to illustrate how common clinical problems are being addressed. Although most studies evaluating AI applications in oncology to date have not been rigorously validated for reproducibility and generalizability, the results do highlight increasingly concerted efforts in pushing AI technology to clinical use and to impact future directions in cancer care.
Affiliation(s)
- Wenya Linda Bi
  - Assistant Professor of Neurosurgery, Department of Neurosurgery, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Ahmed Hosny
  - Research Scientist, Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Matthew B. Schabath
  - Associate Member, Department of Cancer Epidemiology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL
- Maryellen L. Giger
  - Professor of Radiology, Department of Radiology, University of Chicago, Chicago, IL
- Nicolai J. Birkbak
  - Research Associate, The Francis Crick Institute, London, United Kingdom
  - Research Associate, University College London Cancer Institute, London, United Kingdom
- Alireza Mehrtash
  - Research Assistant, Department of Radiology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
  - Research Assistant, Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Tavis Allison
  - Research Assistant, Department of Radiology, Columbia University College of Physicians and Surgeons, New York, NY
  - Research Assistant, Department of Radiology, New York Presbyterian Hospital, New York, NY
- Omar Arnaout
  - Assistant Professor of Neurosurgery, Department of Neurosurgery, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Christopher Abbosh
  - Research Fellow, The Francis Crick Institute, London, United Kingdom
  - Research Fellow, University College London Cancer Institute, London, United Kingdom
- Ian F. Dunn
  - Associate Professor of Neurosurgery, Department of Neurosurgery, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Raymond H. Mak
  - Associate Professor, Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Rulla M. Tamimi
  - Associate Professor, Department of Medicine, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Clare M. Tempany
  - Professor of Radiology, Department of Radiology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Charles Swanton
  - Professor, The Francis Crick Institute, London, United Kingdom
  - Professor, University College London Cancer Institute, London, United Kingdom
- Udo Hoffmann
  - Professor of Radiology, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA
- Lawrence H. Schwartz
  - Professor of Radiology, Department of Radiology, Columbia University College of Physicians and Surgeons, New York, NY
  - Chair, Department of Radiology, New York Presbyterian Hospital, New York, NY
- Robert J. Gillies
  - Professor of Radiology, Department of Cancer Physiology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL
- Raymond Y. Huang
  - Assistant Professor, Department of Radiology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Hugo J. W. L. Aerts
  - Associate Professor, Departments of Radiation Oncology and Radiology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
  - Professor in AI in Medicine, Radiology and Nuclear Medicine, GROW, Maastricht University Medical Centre (MUMC+), Maastricht, The Netherlands
208
Chatterjee A, Vallieres M, Dohan A, Levesque IR, Ueno Y, Saif S, Reinhold C, Seuntjens J. Creating Robust Predictive Radiomic Models for Data From Independent Institutions Using Normalization. IEEE Trans Radiat Plasma Med Sci 2019. DOI: 10.1109/trpms.2019.2893860
209
Zhou Q, Zhou Z, Chen C, Fan G, Chen G, Heng H, Ji J, Dai Y. Grading of hepatocellular carcinoma using 3D SE-DenseNet in dynamic enhanced MR images. Comput Biol Med 2019; 107:47-57. PMID: 30776671; DOI: 10.1016/j.compbiomed.2019.01.026
Abstract
BACKGROUND Clinical histological grading of hepatocellular carcinoma (HCC) differentiation is of great significance in clinical diagnosis, treatment, and prognosis. However, it is challenging for radiologists to evaluate HCC grade from medical images.
PURPOSE In this study, a novel deep neural network was developed by combining squeeze-and-excitation networks (SENets) with a three-dimensional (3D) densely connected convolutional network (DenseNet), referred to as 3D SE-DenseNet, for the classification of HCC grade using enhanced clinical magnetic resonance (MR) images obtained from two clinical centers.
METHOD In the proposed architecture, the SENet was added as an additional layer between the dense blocks of the 3D DenseNet to mitigate the impact of feature redundancy. For the HCC grading task, the 3D SE-DenseNet was trained after data augmentation, and it outperformed the 3D DenseNet on the clinical dataset.
RESULTS The quantitative evaluation of the 3D SE-DenseNet on a two-class HCC grading task was conducted on a dataset of 213 samples of dynamic enhanced MR images. The proposed 3D SE-DenseNet demonstrated an accuracy of 83%, compared with 72% for the 3D DenseNet.
CONCLUSION Owing to the automatic feature learning of the SE layer, the 3D SE-DenseNet can simultaneously enhance useful features and suppress superfluous ones. The quantitative experiments confirm the excellent performance of the 3D SE-DenseNet in HCC grading.
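The squeeze-and-excitation mechanism inserted between the dense blocks can be sketched in plain Python on toy data (real implementations operate on 4D/5D tensors with learned weights; the weight matrices below are illustrative placeholders):

```python
import math

def squeeze_excite(feature_maps, w1, w2):
    """Toy SE layer: feature_maps is a list of channels, each a flat
    list of activations; w1/w2 are the two fully connected layers."""
    # Squeeze: global average pool each channel to a single scalar
    squeezed = [sum(ch) / len(ch) for ch in feature_maps]
    # Excitation: FC + ReLU, then FC + sigmoid gives per-channel gates in (0, 1)
    hidden = [max(0.0, sum(s * w for s, w in zip(squeezed, col))) for col in w1]
    gates = [1.0 / (1.0 + math.exp(-sum(h * w for h, w in zip(hidden, col))))
             for col in w2]
    # Reweight: useful channels are enhanced, superfluous ones suppressed
    return [[a * g for a in ch] for ch, g in zip(feature_maps, gates)]

# Two channels, one hidden unit; zero excitation weights give gates of 0.5
out = squeeze_excite([[1.0, 1.0], [2.0, 2.0]], w1=[[1.0, 0.0]], w2=[[0.0], [0.0]])
```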
Affiliation(s)
- Qing Zhou
  - University of Science and Technology of China, Hefei, 230026, China
  - Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Zhiyong Zhou
  - Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Chunmiao Chen
  - Key Laboratory of Imaging Diagnosis and Minimally Invasive Intervention Research, Affiliated Lishui Hospital of Zhejiang University, The Fifth Affiliated Hospital of Wenzhou Medical University, Lishui Central Hospital, Lishui, 323000, China
- Guohua Fan
  - Department of Radiology, The Second Affiliated Hospital of Soochow University, Suzhou, 215004, China
- Guangqiang Chen
  - Department of Radiology, The Second Affiliated Hospital of Soochow University, Suzhou, 215004, China
- Haiyan Heng
  - Department of Radiology, The Second Affiliated Hospital of Soochow University, Suzhou, 215004, China
- Jiansong Ji
  - Key Laboratory of Imaging Diagnosis and Minimally Invasive Intervention Research, Affiliated Lishui Hospital of Zhejiang University, The Fifth Affiliated Hospital of Wenzhou Medical University, Lishui Central Hospital, Lishui, 323000, China
- Yakang Dai
  - Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
210
Soffer S, Ben-Cohen A, Shimon O, Amitai MM, Greenspan H, Klang E. Convolutional Neural Networks for Radiologic Images: A Radiologist's Guide. Radiology 2019; 290:590-606. PMID: 30694159; DOI: 10.1148/radiol.2018180547
Abstract
Deep learning has rapidly advanced in various fields within the past few years and has recently gained particular attention in the radiology community. This article provides an introduction to deep learning technology and presents the stages entailed in the design process of deep learning radiology research. In addition, the article details the results of a survey of the application of deep learning (specifically, convolutional neural networks) to radiologic imaging, focused on five major organ systems: chest, breast, brain, musculoskeletal system, and abdomen and pelvis. The survey of the studies is followed by a discussion of current challenges and future trends and their potential implications for radiology. This article may be used as a guide for radiologists planning research in the field of radiologic image analysis using convolutional neural networks.
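The convolution operation at the heart of the networks this guide surveys can be illustrated with a minimal 2D "valid" convolution (stride 1, no padding); a toy sketch, not production image-processing code:

```python
def conv2d(image, kernel):
    """'Valid' 2D convolution of a nested-list image with a small kernel.
    As in deep learning frameworks, the kernel is applied without
    flipping (i.e., cross-correlation)."""
    kh, kw = len(kernel), len(kernel[0])
    return [
        [sum(image[i + a][j + b] * kernel[a][b]
             for a in range(kh) for b in range(kw))
         for j in range(len(image[0]) - kw + 1)]
        for i in range(len(image) - kh + 1)
    ]

# A 2x2 summing kernel slid over a 3x3 image produces a 2x2 feature map
feature_map = conv2d([[1, 1, 1], [1, 1, 1], [1, 1, 1]], [[1, 1], [1, 1]])
```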
Affiliation(s)
- Shelly Soffer, Avi Ben-Cohen, Orit Shimon, Michal Marianne Amitai, Hayit Greenspan, Eyal Klang
  - From the Department of Diagnostic Imaging, Sheba Medical Center, Emek HaEla St 1, Ramat Gan, Israel (S.S., M.M.A., E.K.); Faculty of Engineering, Department of Biomedical Engineering, Medical Image Processing Laboratory, Tel Aviv University, Tel Aviv, Israel (A.B., H.G.); and Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel (S.S., O.S.)
211
Multi-Institutional Deep Learning Modeling Without Sharing Patient Data: A Feasibility Study on Brain Tumor Segmentation. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. BrainLes (Workshop) 2019; 11383:92-104. PMID: 31231720; DOI: 10.1007/978-3-030-11723-8_9
Abstract
Deep learning models for semantic segmentation of images require large amounts of data. In the medical imaging domain, acquiring sufficient data is a significant challenge, since labeling medical image data requires expert knowledge. Collaboration between institutions could address this challenge, but sharing medical data in a centralized location faces various legal, privacy, technical, and data-ownership challenges, especially among international institutions. In this study, we introduce the first use of federated learning for multi-institutional collaboration, enabling deep learning modeling without sharing patient data. Our quantitative results demonstrate that the performance of federated semantic segmentation models (Dice = 0.852) on multimodal brain scans is similar to that of models trained by sharing data (Dice = 0.862). We compare federated learning with two alternative collaborative learning methods and find that they fail to match the performance of federated learning.
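The aggregation step at the core of federated learning can be sketched as a FedAvg-style weighted parameter average, a simplification of the protocol the study evaluates (names and values are illustrative):

```python
def federated_average(institution_params, num_examples):
    """Aggregate per-institution parameter vectors into a global model,
    weighting each institution by its number of training examples.
    Only model parameters travel between sites; patient data never
    leaves an institution."""
    total = sum(num_examples)
    n_params = len(institution_params[0])
    return [sum(params[i] * k for params, k in zip(institution_params, num_examples)) / total
            for i in range(n_params)]

# Two institutions with equal data contribute equally to the global model
global_model = federated_average([[1.0, 0.0], [3.0, 2.0]], [50, 50])
```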
212
Kim H, Lee HJ. Computed tomography characteristics of non-small cell lung cancers with EGFR T790M mutation: role of imaging in the era of precision medicine. J Thorac Dis 2019; 10:S4126-S4129. PMID: 30631572; DOI: 10.21037/jtd.2018.10.26
Affiliation(s)
- Hyungjin Kim
  - Department of Radiology, Seoul National University Hospital, Seoul, Korea
  - Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Hyun-Ju Lee
  - Department of Radiology, Seoul National University Hospital, Seoul, Korea
  - Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
213
214
Chen T, Liu S, Li Y, Feng X, Xiong W, Zhao X, Yang Y, Zhang C, Hu Y, Chen H, Lin T, Zhao M, Liu H, Yu J, Xu Y, Zhang Y, Li G. Developed and validated a prognostic nomogram for recurrence-free survival after complete surgical resection of local primary gastrointestinal stromal tumors based on deep learning. EBioMedicine 2019; 39:272-279. PMID: 30587460; PMCID: PMC6355433; DOI: 10.1016/j.ebiom.2018.12.028
Abstract
This study aimed to develop and validate a prognostic nomogram, based on a Residual Neural Network (ResNet), for recurrence-free survival (RFS) after surgery in the absence of adjuvant therapy, to guide selection for adjuvant imatinib therapy. The ResNet model was developed on contrast-enhanced computed tomography (CE-CT) in a training cohort of 80 patients with pathologically diagnosed gastrointestinal stromal tumors (GISTs) and validated in internal and external validation cohorts. Independent clinicopathologic factors were integrated with the ResNet model to construct the individualized nomogram. The performance of the nomogram was evaluated with regard to discrimination, calibration, and clinical usefulness. The ResNet model was significantly associated with RFS. Integrable predictors in the individualized ResNet nomogram included tumor site, size, and mitotic count. Compared with the modified NIH and AFIP criteria and a clinicopathologic nomogram, both the ResNet nomogram and the ResNet model showed better discrimination, with AUCs of 0.947 (95% CI, 0.910-0.984) for 3-year RFS and 0.918 (0.852-0.984) for 5-year RFS, and AUCs of 0.912 (0.851-0.973) for 3-year RFS and 0.887 (0.816-0.960) for 5-year RFS, respectively. Calibration curves showed good agreement between the estimated and observed 3- and 5-year outcomes. Decision curve analysis showed that the ResNet nomogram had a higher overall net benefit. In conclusion, we present a deep learning-based prognostic nomogram that predicts RFS after resection of localized primary GISTs with excellent performance and could be a potential tool to select patients for adjuvant imatinib therapy.
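Decision curve analysis, used above to compare the nomograms, scores a model by its net benefit at each threshold probability. A sketch of the standard formulation (not code from the study):

```python
def net_benefit(true_pos, false_pos, n, threshold):
    """Net benefit at threshold probability p_t:
    TP/n - (FP/n) * p_t / (1 - p_t).
    Trades true positives against false positives, with false positives
    penalized more heavily as the threshold rises."""
    return true_pos / n - (false_pos / n) * (threshold / (1.0 - threshold))

# 30 true and 10 false positives among 100 patients at p_t = 0.5
nb = net_benefit(30, 10, 100, 0.5)
```

A model with uniformly higher net benefit across clinically relevant thresholds, as reported for the ResNet nomogram, is preferred.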
Affiliation(s)
- Tao Chen, Cangui Zhang, Yanfeng Hu, Hao Chen, Tian Lin, Mingli Zhao, Hao Liu, Jiang Yu, Guoxin Li: Department of General Surgery, Nanfang Hospital, Guangdong Provincial Engineering Technology Research Center of Minimally Invasive Surgery, Southern Medical University, Guangzhou 510515, Guangdong Province, China
- Shangqing Liu, Yu Zhang: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, Guangdong Province, China
- Yong Li, Xingyu Feng: Department of General Surgery, Guangdong Academy of Medical Science, Guangdong General Hospital, Guangzhou 510080, Guangdong Province, China
- Wei Xiong, Xixi Zhao, Yali Yang, Yikai Xu: Medical Image Center, Nanfang Hospital, Southern Medical University, Guangzhou 510515, Guangdong Province, China
|
215
|
Panayides AS, Pattichis MS, Leandrou S, Pitris C, Constantinidou A, Pattichis CS. Radiogenomics for Precision Medicine With a Big Data Analytics Perspective. IEEE J Biomed Health Inform 2018; 23:2063-2079. [PMID: 30596591 DOI: 10.1109/jbhi.2018.2879381] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Precision medicine promises better healthcare delivery by improving clinical practice. Using evidence-based substratification of patients, the objective is to achieve better prognosis, diagnosis, and treatment, transforming existing clinical pathways toward optimizing care for the specific needs of each patient. The wealth of today's healthcare data, often characterized as big data, provides an invaluable resource for new knowledge discovery with the potential to advance precision medicine. The latter requires interdisciplinary efforts that capitalize on the information, know-how, and medical data of newly formed groups fusing different backgrounds and expertise. The objective of this paper is to provide insights into the state of the art of research in precision medicine. More specifically, our goal is to highlight the fundamental challenges in the emerging fields of radiomics and radiogenomics by reviewing case studies in cancer and Alzheimer's disease, to describe the computational challenges from a big data analytics perspective, and to discuss standardization and open data initiatives that will facilitate the adoption of precision medicine methods and practices.
|
216
|
Diagnosis of thyroid cancer using deep convolutional neural network models applied to sonographic images: a retrospective, multicohort, diagnostic study. Lancet Oncol 2018; 20:193-201. [PMID: 30583848 DOI: 10.1016/s1470-2045(18)30762-9] [Citation(s) in RCA: 216] [Impact Index Per Article: 36.0] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2018] [Revised: 10/07/2018] [Accepted: 10/08/2018] [Indexed: 01/06/2023]
Abstract
BACKGROUND The incidence of thyroid cancer is rising steadily because of overdiagnosis and overtreatment conferred by widespread use of sensitive imaging techniques for screening. This overall incidence growth is especially driven by increased diagnosis of the indolent and well-differentiated papillary subtype and early-stage thyroid cancer, whereas the incidence of advanced-stage thyroid cancer has increased only marginally. Thyroid ultrasound is frequently used to diagnose thyroid cancer. The aim of this study was to use deep convolutional neural network (DCNN) models to improve the diagnostic accuracy of thyroid cancer by analysing sonographic imaging data from clinical ultrasounds. METHODS We did a retrospective, multicohort, diagnostic study using ultrasound image sets from three hospitals in China. We developed and trained the DCNN model on the training set: 131 731 ultrasound images from 17 627 patients with thyroid cancer and 180 668 images from 25 325 controls from the thyroid imaging database at Tianjin Cancer Hospital. Clinical diagnosis of the training set was made by 16 radiologists from Tianjin Cancer Hospital. Images from anatomical sites judged as not having cancer were excluded from the training set, and only individuals with suspected thyroid cancer underwent pathological examination to confirm the diagnosis. The model's diagnostic performance was validated in an internal validation set from Tianjin Cancer Hospital (8606 images from 1118 patients) and two external datasets in China (the Integrated Traditional Chinese and Western Medicine Hospital, Jilin, 741 images from 154 patients; and the Weihai Municipal Hospital, Shandong, 11 039 images from 1420 patients). All individuals with suspected thyroid cancer after clinical examination in the validation sets had pathological examination. We also compared the specificity and sensitivity of the DCNN model with the performance of six skilled thyroid ultrasound radiologists on the three validation sets.
FINDINGS Between Jan 1, 2012, and March 28, 2018, ultrasound images for the four study cohorts were obtained. The model achieved high performance in identifying thyroid cancer patients in the validation sets tested, with area under the curve values of 0·947 (95% CI 0·935-0·959) for the Tianjin internal validation set, 0·912 (95% CI 0·865-0·958) for the Jilin external validation set, and 0·908 (95% CI 0·891-0·925) for the Weihai external validation set. The DCNN model also showed improved performance in identifying thyroid cancer patients versus skilled radiologists. For the Tianjin internal validation set, sensitivity was 93·4% (95% CI 89·6-96·1) versus 96·9% (93·9-98·6; p=0·003) and specificity was 86·1% (81·1-90·2) versus 59·4% (53·0-65·6; p<0·0001). For the Jilin external validation set, sensitivity was 84·3% (95% CI 73·6-91·9) versus 92·9% (84·1-97·6; p=0·048) and specificity was 86·9% (95% CI 77·8-93·3) versus 57·1% (45·9-67·9; p<0·0001). For the Weihai external validation set, sensitivity was 84·7% (95% CI 77·0-90·7) versus 89·0% (81·9-94·0; p=0·25) and specificity was 87·8% (95% CI 81·6-92·5) versus 68·6% (60·7-75·8; p<0·0001). INTERPRETATION The DCNN model showed similar sensitivity and improved specificity in identifying patients with thyroid cancer compared with a group of skilled radiologists. The improved technical performance of the DCNN model warrants further investigation as part of randomised clinical trials. FUNDING The Program for Changjiang Scholars and Innovative Research Team in University in China, and National Natural Science Foundation of China.
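Sensitivity and specificity comparisons like those above come directly from the confusion matrix. A minimal sketch on hypothetical labels (not the study's data):

```python
def sens_spec(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical model calls on 10 nodules (1 = cancer on pathology).
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]
sens, spec = sens_spec(y_true, y_pred)
print(round(sens, 3), round(spec, 3))  # 0.75 0.667
```

The trade-off the study reports (model sensitivity slightly below the radiologists', specificity well above) is exactly a shift between these two quantities at a fixed operating point.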
|
217
|
Yang Y, Yan LF, Zhang X, Han Y, Nan HY, Hu YC, Hu B, Yan SL, Zhang J, Cheng DL, Ge XW, Cui GB, Zhao D, Wang W. Glioma Grading on Conventional MR Images: A Deep Learning Study With Transfer Learning. Front Neurosci 2018; 12:804. [PMID: 30498429 PMCID: PMC6250094 DOI: 10.3389/fnins.2018.00804] [Citation(s) in RCA: 114] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2018] [Accepted: 10/16/2018] [Indexed: 12/27/2022] Open
Abstract
Background: Accurate glioma grading before surgery is of the utmost importance for treatment planning and prognosis prediction, but previous studies based on magnetic resonance (MR) images have not been sufficiently accurate. Given the remarkable performance of convolutional neural networks (CNNs) in the medical domain, we hypothesized that a deep learning algorithm could achieve high accuracy in distinguishing World Health Organization (WHO) low-grade from high-grade gliomas. Methods: One hundred and thirteen glioma patients were retrospectively included. Tumor images were segmented with a rectangular region of interest (ROI) containing about 80% of the tumor. Then, 20% of the data were randomly selected and left out at the patient level as the test dataset. AlexNet and GoogLeNet were each trained from scratch and also fine-tuned from models pre-trained on the large-scale natural image database ImageNet. The classification task was evaluated with five-fold cross-validation (CV) on a patient-level split. Results: The performance measures of GoogLeNet trained from scratch, averaged over the five CV folds, were a validation accuracy of 0.867, a test accuracy of 0.909, and a test area under the curve (AUC) of 0.939. With transfer learning and fine-tuning, better performance was obtained for both AlexNet and GoogLeNet, especially for AlexNet. Meanwhile, GoogLeNet outperformed AlexNet whether trained from scratch or fine-tuned from a pre-trained model. Conclusion: We demonstrated that applying CNNs, especially with transfer learning and fine-tuning, to preoperative glioma grading improves performance compared with either traditional machine learning methods based on hand-crafted features or CNNs trained from scratch.
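The patient-level split described above matters because several MR slices come from each patient; if slices from one patient land on both sides of a CV split, performance is inflated by leakage. A minimal sketch of a patient-level five-fold assignment, with hypothetical patient IDs:

```python
import random

def patient_level_folds(patient_ids, k=5, seed=0):
    """Assign whole patients to folds so that no patient's slices can
    appear on both the training and validation side of a CV split."""
    ids = sorted(patient_ids)
    random.Random(seed).shuffle(ids)
    return [ids[i::k] for i in range(k)]  # round-robin over shuffled IDs

# Hypothetical cohort of 113 patients, matching the study's sample size.
cohort = [f"pt{i:03d}" for i in range(113)]
folds = patient_level_folds(cohort, k=5)
print([len(f) for f in folds])  # [23, 23, 23, 22, 22]
```

Each fold then serves once as validation while the slices of the remaining four folds' patients form the training set.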
Affiliation(s)
- Yang Yang, Lin-Feng Yan, Xin Zhang, Yu Han, Hai-Yan Nan, Yu-Chuan Hu, Bo Hu, Jin Zhang, Guang-Bin Cui, Wen Wang: Functional and Molecular Imaging Key Lab of Shaanxi Province, Department of Radiology, Tangdu Hospital, Fourth Military Medical University, Xi'an, China
- Song-Lin Yan: Computer Network Information Center, Chinese Academy of Sciences, Beijing, China
- Dong-Liang Cheng, Xiang-Wei Ge: Student Brigade, Fourth Military Medical University, Xi'an, China
- Di Zhao: Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
|
218
|
Press RH, Shu HKG, Shim H, Mountz JM, Kurland BF, Wahl RL, Jones EF, Hylton NM, Gerstner ER, Nordstrom RJ, Henderson L, Kurdziel KA, Vikram B, Jacobs MA, Holdhoff M, Taylor E, Jaffray DA, Schwartz LH, Mankoff DA, Kinahan PE, Linden HM, Lambin P, Dilling TJ, Rubin DL, Hadjiiski L, Buatti JM. The Use of Quantitative Imaging in Radiation Oncology: A Quantitative Imaging Network (QIN) Perspective. Int J Radiat Oncol Biol Phys 2018; 102:1219-1235. [PMID: 29966725 PMCID: PMC6348006 DOI: 10.1016/j.ijrobp.2018.06.023] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2018] [Revised: 05/25/2018] [Accepted: 06/14/2018] [Indexed: 02/07/2023]
Abstract
Modern radiation therapy is delivered with great precision, in part by relying on high-resolution multidimensional anatomic imaging to define targets in space and time. The development of quantitative imaging (QI) modalities capable of monitoring biologic parameters could provide deeper insight into tumor biology and facilitate more personalized clinical decision-making. The Quantitative Imaging Network (QIN) was established by the National Cancer Institute to advance and validate these QI modalities in the context of oncology clinical trials. In particular, the QIN has significant interest in the application of QI to widen the therapeutic window of radiation therapy. QI modalities have great promise in radiation oncology and will help address significant clinical needs, including finer prognostication, more specific target delineation, reduction of normal tissue toxicity, identification of radioresistant disease, and clearer interpretation of treatment response. Patient-specific QI is being incorporated into radiation treatment design in ways such as dose escalation and adaptive replanning, with the intent of improving outcomes while lessening treatment morbidities. This review discusses the current vision of the QIN, current areas of investigation, and how the QIN hopes to enhance the integration of QI into the practice of radiation oncology.
Affiliation(s)
- Robert H. Press, Hui-Kuo G. Shu, Hyunsuk Shim: Dept. of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA
- James M. Mountz: Dept. of Radiology, University of Pittsburgh, Pittsburgh, PA
- Ella F. Jones, Nola M. Hylton: Dept. of Radiology, University of California, San Francisco, San Francisco, CA
- Elizabeth R. Gerstner: Dept. of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA
- Lori Henderson: Cancer Imaging Program, National Cancer Institute, Bethesda, MD
- Bhadrasain Vikram: Radiation Research Program/Division of Cancer Treatment & Diagnosis, National Cancer Institute, Bethesda, MD
- Michael A. Jacobs: Dept. of Radiology and Radiological Science, Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University, Baltimore, MD
- Matthias Holdhoff: Brain Cancer Program, Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University, Baltimore, MD
- Edward Taylor, David A. Jaffray: Princess Margaret Cancer Centre, University Health Network, Toronto, Canada
- David A. Mankoff: Dept. of Radiology, University of Pennsylvania, Philadelphia, PA
- Philippe Lambin: Dept. of Radiation Oncology (MAASTRO), GROW-School for Oncology and Developmental Biology, Maastricht University Medical Centre, Maastricht, The Netherlands
- Thomas J. Dilling: Dept. of Radiation Oncology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL
- John M. Buatti: Dept. of Radiation Oncology, University of Iowa, Iowa City, IA
|
219
|
Li ZC, Bai H, Sun Q, Zhao Y, Lv Y, Zhou J, Liang C, Chen Y, Liang D, Zheng H. Multiregional radiomics profiling from multiparametric MRI: Identifying an imaging predictor of IDH1 mutation status in glioblastoma. Cancer Med 2018; 7:5999-6009. [PMID: 30426720 PMCID: PMC6308047 DOI: 10.1002/cam4.1863] [Citation(s) in RCA: 63] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2018] [Revised: 10/15/2018] [Accepted: 10/16/2018] [Indexed: 12/12/2022] Open
Abstract
Purpose: Isocitrate dehydrogenase 1 (IDH1) has been proven to be a prognostic and predictive marker in glioblastoma (GBM) patients. The purpose was to preoperatively predict IDH1 mutation status in GBM using multiregional radiomics features from multiparametric magnetic resonance imaging (MRI). Methods: In this retrospective multicenter study, 225 patients were included. A total of 1614 multiregional features were extracted from the enhancement area, non-enhancement area, necrosis, edema, tumor core, and whole tumor in multiparametric MRI. Three multiregional radiomics models were built from the tumor core, the whole tumor, and all regions, using an all-relevant feature selection and a random forest classifier to predict IDH1. Four single-region models and a model combining all-region features with clinical factors (age, sex, and Karnofsky performance status) were also built. All models were built on a training cohort (118 patients) and tested on an independent validation cohort (107 patients). Results: Among the four single-region radiomics models, the edema model achieved the best accuracy of 96% and the best F1-score of 0.75, while the non-enhancement model achieved the best area under the receiver operating characteristic curve (AUC) of 0.88 in the validation cohort. The overall performance of the tumor-core model (accuracy 0.96, AUC 0.86, F1-score 0.75) and the whole-tumor model (accuracy 0.96, AUC 0.88, F1-score 0.75) was slightly better than that of the single-region models. The 8-feature all-region radiomics model achieved an improved overall performance, with an accuracy of 96%, an AUC of 0.90, and an F1-score of 0.78. Among all models, the model combining all-region imaging features with age achieved the best performance, with an accuracy of 97%, an AUC of 0.96, and an F1-score of 0.84. Conclusions: A radiomics model built with multiregional features from multiparametric MRI has the potential to preoperatively detect IDH1 mutation status in GBM patients. The multiregional model built with all-region features performed better than the single-region models, while combining age with all-region features achieved the best performance.
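Multiregional profiling of this kind concatenates per-region feature sets into one vector before feature selection. A minimal sketch of that merging step (hypothetical feature names and values, not the study's feature set), with region-prefixed names so the same texture feature computed in different regions stays distinct:

```python
def merge_region_features(region_features):
    """Flatten {region: {feature: value}} into a single feature dict,
    prefixing each feature name with its region of origin."""
    merged = {}
    for region, feats in region_features.items():
        for name, value in feats.items():
            merged[f"{region}_{name}"] = value
    return merged

# Hypothetical features for two of the six regions used in the study.
fv = merge_region_features({
    "edema": {"glcm_contrast": 1.2, "mean_intensity": 87.0},
    "necrosis": {"glcm_contrast": 0.4, "mean_intensity": 33.5},
})
print(sorted(fv))
# ['edema_glcm_contrast', 'edema_mean_intensity',
#  'necrosis_glcm_contrast', 'necrosis_mean_intensity']
```

The merged vector is what a downstream selector and classifier (here, all-relevant selection feeding a random forest) would consume.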
Affiliation(s)
- Zhi-Cheng Li, Qiuchang Sun, Yuanshen Zhao, Dong Liang, Hairong Zheng: Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Hongmin Bai: Department of Neurosurgery, Guangzhou General Hospital of Guangzhou Military Command, Guangzhou, China
- Yanchun Lv, Jian Zhou: Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, China
- Chaofeng Liang: Department of Neurosurgery, The 3rd Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Yinsheng Chen: Department of Neurosurgery/Neuro-oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
|
220
|
Machine Learning in Neurooncology Imaging: From Study Request to Diagnosis and Treatment. AJR Am J Roentgenol 2018; 212:52-56. [PMID: 30403523 DOI: 10.2214/ajr.18.20328] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
OBJECTIVE Machine learning has the potential to play a key role across a variety of medical imaging applications. This review seeks to elucidate the ways in which machine learning can aid and enhance diagnosis, treatment, and follow-up in neurooncology. CONCLUSION Given the rapid pace of development in machine learning over the past several years, a basic proficiency in its key tenets and use cases is critical to assessing the potential opportunities and challenges of this exciting new technology.
|
221
|
Yi X, Guan X, Zhang Y, Liu L, Long X, Yin H, Wang Z, Li X, Liao W, Chen BT, Zee C. Radiomics improves efficiency for differentiating subclinical pheochromocytoma from lipid-poor adenoma: a predictive, preventive and personalized medical approach in adrenal incidentalomas. EPMA J 2018; 9:421-429. [PMID: 30538793 DOI: 10.1007/s13167-018-0149-3] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2018] [Accepted: 08/30/2018] [Indexed: 12/21/2022]
Abstract
Objectives: This study aims to define a radiomic signature for preoperative differentiation between subclinical pheochromocytoma (sPHEO) and lipid-poor adrenal adenoma (LPA) in adrenal incidentalomas, applying a predictive, preventive, and personalized medical approach to the management of adrenal tumors. Patients and methods: This retrospective study included 265 consecutive patients (training cohort, 212 (LPA, 145; sPHEO, 67); validation cohort, 53 (LPA, 36; sPHEO, 17)). Computed tomography (CT) imaging features were evaluated, including long diameter (LD), short diameter (SD), pre-enhanced CT value (CTpre), enhanced CT value (CTpost), shape, homogeneity, and necrosis or cystic degeneration (N/C). Radiomic features were extracted and used to construct a radiomic signature (Rad-score) and radiomic nomograms. The area under the receiver operating characteristic curve (AUC) was used to evaluate their performance. Results: Sixteen of 340 candidate features were used to build the radiomic signature, which differed significantly between the sPHEO and LPA groups (AUC: training, 0.907; validation, 0.902). The radiomic nomogram based on enhanced CT features (M1) comprised the Rad-score, LD, SD, CTpre, shape, homogeneity, and N/C (AUC: training, 0.957; validation, 0.967). The radiomic nomogram based on pre-enhanced CT features (M2) included the Rad-score, LD, SD, CTpre, shape, and homogeneity (AUC: training, 0.955; validation, 0.958). Conclusions: Our radiomic nomograms based on pre-enhanced and enhanced CT images distinguished sPHEO from LPA. The promising result using pre-enhanced CT images is important for predictive diagnostics, because patients could avoid the additional radiation and risk associated with enhanced CT.
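A Rad-score of the kind built above is typically a weighted linear combination of the selected features. A minimal sketch with entirely hypothetical weights and feature values (the abstract does not publish the study's coefficients):

```python
def rad_score(features, weights, intercept=0.0):
    """Linear radiomic signature: intercept + sum of weight * feature."""
    return intercept + sum(w * features[name] for name, w in weights.items())

# Hypothetical coefficients for three selected features.
weights = {"shape_sphericity": -1.8, "glcm_entropy": 0.9, "ct_pre_hu": 0.02}
case = {"shape_sphericity": 0.71, "glcm_entropy": 4.2, "ct_pre_hu": 38.0}
print(round(rad_score(case, weights, intercept=-2.0), 3))  # 1.262
```

In the nomogram, this score is then one axis alongside the conventional CT features (LD, SD, CTpre, shape, homogeneity, N/C).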
Affiliation(s)
- Xiaoping Yi: Department of Radiology, Xiangya Hospital, Central South University, No. 87 Xiangya Road, Changsha 410008, People's Republic of China; Postdoctoral Research Workstation of Pathology and Pathophysiology, Basic Medical Sciences, Xiangya Hospital, Central South University, Changsha, China
- Xiao Guan, Longfei Liu: Department of Urology, Xiangya Hospital, Central South University, Changsha, China
- Youming Zhang, Xueying Long, Weihua Liao: Department of Radiology, Xiangya Hospital, Central South University, No. 87 Xiangya Road, Changsha 410008, People's Republic of China
- Hongling Yin: Department of Pathology, Xiangya Hospital, Central South University, Changsha, China
- Zhongjie Wang, Xuejun Li: Department of Neurosurgery, Xiangya Hospital, Central South University, Changsha, China
- Bihong T Chen: Department of Diagnostic Radiology, City of Hope National Medical Centre, Duarte, CA, USA
- Chishing Zee: Department of Radiology, Keck Medical Center of USC, Los Angeles, CA, USA
|
222
|
Kann BH, Aneja S, Loganadane GV, Kelly JR, Smith SM, Decker RH, Yu JB, Park HS, Yarbrough WG, Malhotra A, Burtness BA, Husain ZA. Pretreatment Identification of Head and Neck Cancer Nodal Metastasis and Extranodal Extension Using Deep Learning Neural Networks. Sci Rep 2018; 8:14036. [PMID: 30232350 PMCID: PMC6145900 DOI: 10.1038/s41598-018-32441-y] [Citation(s) in RCA: 105] [Impact Index Per Article: 17.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2018] [Accepted: 09/07/2018] [Indexed: 12/14/2022] Open
Abstract
Identification of nodal metastasis and tumor extranodal extension (ENE) is crucial for head and neck cancer management, but currently they can be diagnosed only via postoperative pathology. Pretreatment radiographic identification of ENE, in particular, has proven extremely difficult for clinicians, but would greatly influence patient management. Here, we show that a deep learning convolutional neural network can be trained to identify nodal metastasis and ENE with excellent performance that surpasses what human clinicians have historically achieved. We trained a 3-dimensional convolutional neural network on a dataset of 2,875 CT-segmented lymph node samples with correlated pathology labels, cross-validated and fine-tuned it on 124 samples, and tested it on a blinded set of 131 samples. On the blinded test set, the model predicted ENE and nodal metastasis each with an area under the receiver operating characteristic curve (AUC) of 0.91 (95% CI: 0.85-0.97). The model has the potential for use as a clinical decision-making tool to help guide the management of head and neck cancer patients.
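Confidence intervals on a test-set AUC, like the 95% CI above, are commonly obtained by bootstrap resampling of the test cases. A minimal percentile-bootstrap sketch on hypothetical predictions (not the study's data):

```python
import random

def auc(y, s):
    """AUC via pairwise comparisons (Mann-Whitney; ties count 0.5)."""
    pos = [a for t, a in zip(y, s) if t == 1]
    neg = [a for t, a in zip(y, s) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(y, s, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap: resample cases with replacement, recompute
    the AUC, and take the empirical alpha/2 and 1-alpha/2 quantiles."""
    rng = random.Random(seed)
    n, stats = len(y), []
    while len(stats) < n_boot:
        idx = [rng.randrange(n) for _ in range(n)]
        yb = [y[i] for i in idx]
        if 0 < sum(yb) < n:  # the resample must contain both classes
            stats.append(auc(yb, [s[i] for i in idx]))
    stats.sort()
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]

# Hypothetical scores on a blinded test set of 40 nodes (1 = ENE present).
rng = random.Random(1)
y = [1] * 15 + [0] * 25
s = [0.5 + 0.5 * rng.random() if t == 1 else 0.6 * rng.random() for t in y]
lo, hi = bootstrap_auc_ci(y, s)
print(round(lo, 3), round(hi, 3))
```

The width of the interval reflects the test-set size; a 131-sample blinded set, as here, yields the roughly ±0.06 interval reported.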
Affiliation(s)
- Benjamin H Kann, Sanjay Aneja, Jacqueline R Kelly, Roy H Decker, James B Yu, Henry S Park, Zain A Husain: Department of Therapeutic Radiology, Yale School of Medicine, New Haven, USA
- Stephen M Smith: Department of Pathology, Yale School of Medicine, New Haven, USA
- Wendell G Yarbrough: Department of Head and Neck Surgery, Yale School of Medicine, New Haven, USA
- Ajay Malhotra: Department of Radiology, Yale School of Medicine, New Haven, USA
|
223
|
Morin O, Vallières M, Jochems A, Woodruff HC, Valdes G, Braunstein SE, Wildberger JE, Villanueva-Meyer JE, Kearney V, Yom SS, Solberg TD, Lambin P. A Deep Look Into the Future of Quantitative Imaging in Oncology: A Statement of Working Principles and Proposal for Change. Int J Radiat Oncol Biol Phys 2018; 102:1074-1082. [PMID: 30170101 DOI: 10.1016/j.ijrobp.2018.08.032] [Citation(s) in RCA: 47] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2018] [Revised: 08/21/2018] [Accepted: 08/21/2018] [Indexed: 12/13/2022]
Abstract
The adoption of enterprise digital imaging, along with the development of quantitative imaging methods and the re-emergence of statistical learning, has opened the opportunity for more personalized cancer treatments through transformative data science research. In the last 5 years, accumulating evidence has indicated that noninvasive advanced imaging analytics (i.e., radiomics) can reveal key components of tumor phenotype for multiple lesions at multiple time points over the course of treatment. Many groups, using homegrown software, have extracted engineered and deep quantitative features from 3-dimensional medical images for a better spatial and longitudinal understanding of tumor biology and for the prediction of diverse outcomes. These developments could augment patient stratification and prognostication, buttressing emerging targeted therapeutic approaches. Unfortunately, the rapid growth in popularity of this immature scientific discipline has resulted in many early publications that omit key information or rely on underpowered patient datasets and thus fail to produce generalizable results. Quantitative imaging research is complex, and key principles should be followed to realize its full potential. The fields of quantitative imaging and radiomics in particular require a renewed focus on optimal study design and reporting practices, standardization, interpretability, data sharing, and clinical trials. Standardization of image acquisition, feature calculation, and statistical analysis (i.e., machine learning) is required for the field to move forward. A new data-sharing paradigm enacted among open and diverse participants (medical institutions, vendors, and associations) should be embraced for faster development and comprehensive clinical validation of imaging biomarkers. In this review and critique of the field, we propose working principles and fundamental changes to the current scientific approach, with the goal of high-impact research and the development of actionable prediction models that will yield more meaningful applications of precision cancer medicine.
Affiliation(s)
- Olivier Morin
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California
- Arthur Jochems
- The D-Lab, Grow Research Institute for Oncology, Maastricht University, Maastricht, The Netherlands
- Henry C Woodruff
- The D-Lab, Grow Research Institute for Oncology, Maastricht University, Maastricht, The Netherlands
- Gilmer Valdes
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California
- Steve E Braunstein
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California
- Joachim E Wildberger
- Department of Radiology and Nuclear Medicine, Maastricht University Medical Center, Maastricht, The Netherlands
- Vasant Kearney
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California
- Sue S Yom
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California
- Timothy D Solberg
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California
- Philippe Lambin
- The D-Lab, Grow Research Institute for Oncology, Maastricht University, Maastricht, The Netherlands
224
Liang S, Zhang R, Liang D, Song T, Ai T, Xia C, Xia L, Wang Y. Multimodal 3D DenseNet for IDH Genotype Prediction in Gliomas. Genes (Basel) 2018; 9:E382. [PMID: 30061525] [PMCID: PMC6115744] [DOI: 10.3390/genes9080382]
Abstract
Non-invasive prediction of isocitrate dehydrogenase (IDH) genotype plays an important role in glioma diagnosis and prognosis. Recent research has shown that radiology images can be a potential tool for genotype prediction, and that fusing multi-modality data with deep learning methods can provide complementary information to enhance prediction accuracy. However, there is still no effective deep learning architecture for predicting IDH genotype from three-dimensional (3D) multimodal medical images. In this paper, we propose a novel multimodal 3D DenseNet (M3D-DenseNet) model to predict IDH genotype from multimodal magnetic resonance imaging (MRI) data. To evaluate its performance, we conducted experiments on the BRATS-2017 and The Cancer Genome Atlas breast invasive carcinoma (TCGA-BRCA) datasets, using image data as input and gene mutation information as the prediction target. We achieved 84.6% accuracy (area under the curve (AUC) = 85.7%) on the validation dataset. To evaluate its generalizability, we applied transfer learning to predict World Health Organization (WHO) grade status, which also achieved a high accuracy of 91.4% (AUC = 94.8%) on the validation dataset. With its automatic feature extraction and high generalizability, M3D-DenseNet can serve as a useful method for other multimodal radiogenomics problems and has the potential to be applied in clinical decision making.
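The multimodal fusion at the input of a network like M3D-DenseNet amounts to stacking the co-registered MRI sequences as channels of a single 3-D tensor before they enter the CNN. A minimal numpy sketch of this step (array shapes and the per-modality z-score normalisation are illustrative assumptions, not details from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-ins for four co-registered BraTS-style MRI modalities of one
# patient (T1, T1ce, T2, FLAIR); the 64^3 size is illustrative only.
modalities = {name: rng.normal(size=(64, 64, 64))
              for name in ("t1", "t1ce", "t2", "flair")}

def fuse(volumes: dict) -> np.ndarray:
    """Stack modalities into a (C, D, H, W) tensor after per-modality
    z-score normalisation, the usual input layout for a multimodal 3-D CNN."""
    stacked = []
    for vol in volumes.values():
        stacked.append((vol - vol.mean()) / vol.std())
    return np.stack(stacked, axis=0)

x = fuse(modalities)
print(x.shape)   # (4, 64, 64, 64)
```

The network then convolves across all channels at once, so cross-modality correlations are available from the first layer onward.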
Affiliation(s)
- Sen Liang
- Key Laboratory of Symbol Computation and Knowledge Engineering of Ministry of Education, and College of Computer Science and Technology, Jilin University, Changchun 130012, China
- Rongguo Zhang
- Advanced Institute, Infervision, Beijing 100000, China
- Dayang Liang
- School of Mechatronics Engineering, Nanchang University, Nanchang 330031, China
- Tianci Song
- Advanced Institute, Infervision, Beijing 100000, China
- Tao Ai
- Department of Radiology, Tongji Hospital, Wuhan 430030, China
- Chen Xia
- Advanced Institute, Infervision, Beijing 100000, China
- Liming Xia
- Department of Radiology, Tongji Hospital, Wuhan 430030, China
- Yan Wang
- Key Laboratory of Symbol Computation and Knowledge Engineering of Ministry of Education, and College of Computer Science and Technology, Jilin University, Changchun 130012, China
- Cancer Systems Biology Center, China-Japan Union Hospital, Jilin University, Changchun 130033, China
225
Chang P, Grinband J, Weinberg BD, Bardis M, Khy M, Cadena G, Su MY, Cha S, Filippi CG, Bota D, Baldi P, Poisson LM, Jain R, Chow D. Deep-Learning Convolutional Neural Networks Accurately Classify Genetic Mutations in Gliomas. AJNR Am J Neuroradiol 2018; 39:1201-1207. [PMID: 29748206] [DOI: 10.3174/ajnr.a5667]
Abstract
BACKGROUND AND PURPOSE The World Health Organization has recently placed new emphasis on the integration of genetic information for gliomas. While tissue sampling remains the criterion standard, noninvasive imaging techniques may provide complementary insight into clinically relevant genetic mutations. Our aim was to train a convolutional neural network to independently predict underlying molecular genetic mutation status in gliomas with high accuracy and to identify the most predictive imaging features for each mutation. MATERIALS AND METHODS MR imaging data and molecular information were retrospectively obtained from The Cancer Imaging Archive for 259 patients with either low- or high-grade gliomas. A convolutional neural network was trained to classify isocitrate dehydrogenase 1 (IDH1) mutation status, 1p/19q codeletion, and O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status. Principal component analysis of the final convolutional neural network layer was used to extract the key imaging features critical for successful classification. RESULTS Classification accuracy was high: IDH1 mutation status, 94%; 1p/19q codeletion, 92%; and MGMT promoter methylation status, 83%. Each genetic category was also associated with distinctive imaging features such as definition of tumor margins, T1 and FLAIR suppression, extent of edema, extent of necrosis, and textural features. CONCLUSIONS Our results indicate that, for The Cancer Imaging Archive dataset, machine-learning approaches allow classification of individual genetic mutations of both low- and high-grade gliomas. The relevant MR imaging features recovered through this added dimensionality-reduction technique demonstrate that neural networks can learn key imaging components without prior feature selection or human-directed training.
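The dimensionality-reduction step described here, principal component analysis of the final network layer's activations, can be sketched with plain numpy. The feature matrix below is a random stand-in (the study's actual feature width is not stated in the abstract), not data or code from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for final-layer CNN activations:
# 259 cases x 64 features.
features = rng.normal(size=(259, 64))

# PCA via SVD on the mean-centered activation matrix.
centered = features - features.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# Project each case onto the first k principal components; inspecting
# cases at the extremes of each component is one way to surface the
# imaging patterns a network keys on.
k = 2
scores = centered @ Vt[:k].T

# Each squared singular value, normalised, gives the fraction of
# variance captured by that component.
explained = (S ** 2) / (S ** 2).sum()
print(scores.shape)   # (259, 2)
```

With real activations, sorting cases by their score on a single component and viewing the corresponding images is what links components back to features like margin definition or necrosis.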
Affiliation(s)
- P Chang
- From the Department of Radiology (P.C., S.C.), University of California, San Francisco, San Francisco, California
- J Grinband
- Department of Radiology (J.G.), Columbia University, New York, New York
- B D Weinberg
- Department of Radiology (B.D.W.), Emory University School of Medicine, Atlanta, Georgia
- M Bardis
- Departments of Radiology (M.B., M.K., M.-Y.S., D.C.)
- M Khy
- Departments of Radiology (M.B., M.K., M.-Y.S., D.C.)
- M-Y Su
- Departments of Radiology (M.B., M.K., M.-Y.S., D.C.)
- S Cha
- From the Department of Radiology (P.C., S.C.), University of California, San Francisco, San Francisco, California
- C G Filippi
- Department of Radiology (C.G.F.), North Shore University Hospital, Long Island, New York
- P Baldi
- School of Information and Computer Sciences (P.B.), University of California, Irvine, Irvine, California
- L M Poisson
- Department of Public Health Sciences (L.M.P.), Henry Ford Health System, Detroit, Michigan
- R Jain
- Departments of Radiology and Neurosurgery (R.J.), New York University, New York, New York
- D Chow
- Departments of Radiology (M.B., M.K., M.-Y.S., D.C.)
226
Gliosarcoma: a clinical and radiological analysis of 48 cases. Eur Radiol 2018; 29:429-438. [PMID: 29948068] [DOI: 10.1007/s00330-018-5398-y]
Abstract
OBJECTIVES To retrospectively review the radiological and clinicopathological features of gliosarcoma (GSM) and differentiate it from glioblastoma multiforme (GBM). METHODS The clinicopathological data and imaging findings (including VASARI analysis) of 48 surgically and pathologically confirmed GSM patients (group 1) were reviewed in detail and compared with those of other glioblastoma (GBM) cases in our hospital (group 2). RESULTS The study included 28 male and 20 female GSM patients with a median age of 52.5 years (range, 24-80 years). Haemorrhage (n = 21), a salt-and-pepper sign on T2-weighted images (n = 36), an unevenly thickened wall (n = 36) sometimes appearing as a paliform pattern (n = 32), an intra-tumoural large feeding artery (n = 32) and an eccentric cystic portion (ECP) (n = 19) were more commonly observed in the GSM group than in GBM patients. Based on our experience, GSM can be divided into four subtypes according to magnetic resonance imaging (MRI) features. Compared with GBM (group 2), type III lesions (very unevenly thickened wall) and type IV (solid) lesions were more frequent among the GSM cases (group 1). On univariate prognostic analysis, adjuvant therapy (radiotherapy, chemotherapy, and radiochemotherapy) and the existence of an ECP were prognostic factors. However, Cox's regression model showed only adjuvant therapy as a prognostic factor for GSM. CONCLUSIONS Compared with GBM, certain imaging features are more likely to occur in GSM, which may help suggest the diagnosis preoperatively. All GSM patients are recommended to receive adjuvant therapy (radiotherapy, chemotherapy or radiochemotherapy) to achieve a better prognosis. KEY POINTS • Diagnosis of gliosarcoma can be suggested preoperatively by imaging. • Gliosarcoma can be divided into four subtypes based on MRI. • A paliform pattern and ECP tend to present in gliosarcoma more often than in GBM. • The cystic subtype of gliosarcoma may predict a more dismal prognosis. • All gliosarcoma patients should receive adjuvant therapy to achieve a better prognosis.
227
Harary M, Smith TR, Gormley WB, Arnaout O. Letter: Big Data Research in Neurosurgery: A Critical Look at This Popular New Study Design. Neurosurgery 2018; 82:E186-E187. [PMID: 29618111] [DOI: 10.1093/neuros/nyy085]
Affiliation(s)
- Maya Harary
- Department of Neurological Surgery, Computational Neuroscience Outcomes Center, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Timothy R Smith
- Department of Neurological Surgery, Computational Neuroscience Outcomes Center, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- William B Gormley
- Department of Neurological Surgery, Computational Neuroscience Outcomes Center, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Omar Arnaout
- Department of Neurological Surgery, Computational Neuroscience Outcomes Center, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
228
Yasaka K, Akai H, Kunimatsu A, Abe O, Kiryu S. Deep learning for staging liver fibrosis on CT: a pilot study. Eur Radiol 2018; 28:4578-4585. [PMID: 29761358] [DOI: 10.1007/s00330-018-5499-7]
Abstract
OBJECTIVES To investigate whether liver fibrosis can be staged by deep learning techniques based on CT images. METHODS This retrospective clinical study, approved by our institutional review board, included 496 CT examinations of 286 patients who underwent dynamic contrast-enhanced CT for evaluation of the liver and for whom histopathological information regarding liver fibrosis stage was available. The 396 portal-phase images, with age and sex data of the patients (F0/F1/F2/F3/F4 = 113/36/56/66/125), were used for training a deep convolutional neural network (DCNN); the data for the other 100 (F0/F1/F2/F3/F4 = 29/9/14/16/32) were utilised for testing the trained network, with the histopathological fibrosis stage used as reference. To improve robustness, additional training images were generated by rotating or parallel-shifting the images, or adding Gaussian noise. Supervised training was used to minimise the difference between the liver fibrosis stage and the fibrosis score obtained from deep learning based on CT images (FDLCT score) output by the model. Testing data were input into the trained DCNN to evaluate its performance. RESULTS The FDLCT scores showed a significant correlation with liver fibrosis stage (Spearman's correlation coefficient = 0.48, p < 0.001). The areas under the receiver operating characteristic curves (with 95% confidence intervals) for diagnosing significant fibrosis (≥ F2), advanced fibrosis (≥ F3) and cirrhosis (F4) by using FDLCT scores were 0.74 (0.64-0.85), 0.76 (0.66-0.85) and 0.73 (0.62-0.84), respectively. CONCLUSIONS Liver fibrosis can be staged by using a deep learning model based on CT images, with moderate performance. KEY POINTS • Liver fibrosis can be staged by a deep learning model based on magnified CT images including the liver surface, with moderate performance. • Scores from a trained deep learning model showed moderate correlation with histopathological liver fibrosis staging. • Further improvements are necessary before utilisation in clinical settings.
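The augmentation scheme this abstract describes (rotation, parallel shifting, additive Gaussian noise) can be sketched in numpy. The angles, offsets, and noise level below are assumed values, and arbitrary-angle rotation is simplified to 90-degree steps so the sketch needs no interpolation library:

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(image: np.ndarray) -> list[np.ndarray]:
    """Generate extra training images from one CT slice, in the spirit
    of the augmentations described above. Simplified sketch: 90-degree
    rotations and integer-pixel shifts stand in for the paper's
    unspecified angles and offsets."""
    augmented = []
    # Rotations by 90, 180, 270 degrees.
    for k in (1, 2, 3):
        augmented.append(np.rot90(image, k))
    # Parallel shifts along each image axis (wrap-around for simplicity).
    for shift in (-5, 5):
        augmented.append(np.roll(image, shift, axis=0))
        augmented.append(np.roll(image, shift, axis=1))
    # Additive Gaussian noise; sigma = 10 HU-ish is an assumed value.
    augmented.append(image + rng.normal(0.0, 10.0, size=image.shape))
    return augmented

slice_ = rng.normal(size=(64, 64))   # stand-in for a portal-phase CT patch
extra = augment(slice_)
print(len(extra))   # 8
```

Each original slice thus yields several label-preserving variants, which is what lets a network trained on a few hundred images resist overfitting.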
Affiliation(s)
- Koichiro Yasaka
- Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan
| | - Hiroyuki Akai
- Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan
| | - Akira Kunimatsu
- Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan
| | - Osamu Abe
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Shigeru Kiryu
- Department of Radiology, Graduate School of Medical Sciences, International University of Health and Welfare, 537-3 Iguchi, Nasushiobara, Tochigi, 329-2763, Japan.
| |
229
Yasaka K, Akai H, Kunimatsu A, Kiryu S, Abe O. Deep learning with convolutional neural network in radiology. Jpn J Radiol 2018; 36:257-272. [PMID: 29498017] [DOI: 10.1007/s11604-018-0726-3]
Abstract
Deep learning with convolutional neural networks (CNNs) has recently been gaining attention for its high performance in image recognition. With this technique, images themselves can be used in the learning process, so feature extraction in advance is not required; important features are learned automatically. Thanks to developments in hardware and software, in addition to the deep learning techniques themselves, applications of this approach to radiological images for predicting clinically useful information, such as the detection and evaluation of lesions, are beginning to be investigated. This article illustrates basic technical knowledge of deep learning with CNNs along the actual workflow (collecting data, implementing CNNs, and the training and testing phases). Pitfalls of this technique and how to manage them are also illustrated, along with some advanced topics of deep learning, the results of recent clinical studies, and future directions for the clinical application of deep learning techniques.
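As a toy illustration of the convolution operation such primers build on (not code from the article), a single hand-set 2-D kernel slid over an image produces a feature map; in a trained CNN these kernels are learned from data rather than chosen by hand:

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2-D cross-correlation, the basic CNN building block:
    the same small kernel is slid over the image, producing a feature map."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector; in a CNN such kernels are learned weights.
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)
image = np.zeros((8, 8))
image[:, 4:] = 1.0          # left half dark, right half bright
fmap = conv2d(image, edge_kernel)
print(fmap.shape)           # (6, 6)
print(fmap[0].min())        # -3.0, the strong response at the edge
```

Stacking many such learned kernels, interleaved with nonlinearities and pooling, is what lets a CNN build up from edges to the lesion-level features the clinical studies in this article exploit.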
Affiliation(s)
- Koichiro Yasaka
- Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan.
| | - Hiroyuki Akai
- Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan
| | - Akira Kunimatsu
- Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan
| | - Shigeru Kiryu
- Department of Radiology, Graduate School of Medical Sciences, International University of Health and Welfare, 4-3 Kozunomori, Narita, Chiba, Japan
| | - Osamu Abe
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
| |