51. Humpire-Mamani GE, Bukala J, Scholten ET, Prokop M, van Ginneken B, Jacobs C. Fully Automatic Volume Measurement of the Spleen at CT Using Deep Learning. Radiol Artif Intell 2021;2:e190102. [PMID: 33937830; DOI: 10.1148/ryai.2020190102]
Abstract
Purpose To develop a fully automated algorithm for spleen segmentation and to assess the performance of this algorithm in a large dataset. Materials and Methods In this retrospective study, a three-dimensional deep learning network was developed to segment the spleen on thorax-abdomen CT scans. Scans were extracted from patients undergoing oncologic treatment from 2014 to 2017. A total of 1100 scans from 1100 patients were used in this study, and 400 were selected for development of the algorithm. For testing, a dataset of 50 scans was annotated to assess the segmentation accuracy and was compared against the splenic index equation. In a qualitative observer experiment, an enriched set of 100 scan-pairs was used to evaluate whether the algorithm could aid a radiologist in assessing splenic volume change. The reference standard was set by the consensus of two other independent radiologists. A Mann-Whitney U test was conducted to test whether there was a performance difference between the algorithm and the independent observer. Results The algorithm and the independent observer obtained comparable Dice scores (P = .834) on the test set of 50 scans of 0.962 and 0.964, respectively. The radiologist had an agreement with the reference standard in 81% (81 of 100) of the cases after a visual classification of volume change, which increased to 92% (92 of 100) when aided by the algorithm. Conclusion A segmentation method based on deep learning can accurately segment the spleen on CT scans and may help radiologists to detect abnormal splenic volumes and splenic volume changes. Supplemental material is available for this article. © RSNA, 2020.
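The Dice scores reported in this abstract compare automatic and manual spleen masks. As an illustration only, and not the authors' implementation, a minimal sketch of the Dice similarity coefficient for two binary volumes might look as follows; the toy masks and array names are hypothetical.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks of equal shape."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * intersection / total

# Toy example with hypothetical 3D masks (1 = spleen voxel, 0 = background).
auto_mask = np.zeros((4, 4, 4), dtype=np.uint8)
manual_mask = np.zeros((4, 4, 4), dtype=np.uint8)
auto_mask[1:3, 1:3, 1:3] = 1
manual_mask[1:3, 1:3, 1:4] = 1
print(f"Dice = {dice_coefficient(auto_mask, manual_mask):.3f}")  # 0.800 for this toy case
```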
Affiliation(s)
- Diagnostic Image Analysis Group, Radboud University Medical Center, Geert Grooteplein 10 (Route 767), 6525 GA, Nijmegen, the Netherlands (G.E.H.M., J.B., E.T.S., M.P., B.v.G., C.J.)
- Fraunhofer MEVIS, Bremen, Germany (B.v.G.)
52. Fernandes FE, Yen GG. Pruning of generative adversarial neural networks for medical imaging diagnostics with evolution strategy. Inf Sci (N Y) 2021. [DOI: 10.1016/j.ins.2020.12.086]
53. Sugimori H, Hamaguchi H, Fujiwara T, Ishizaka K. Classification of type of brain magnetic resonance images with deep learning technique. Magn Reson Imaging 2021;77:180-185. [PMID: 33359426; DOI: 10.1016/j.mri.2020.12.017]
Affiliation(s)
- Hiroyuki Sugimori: Faculty of Health Sciences, Hokkaido University, North-12, West-5, Kita-ku, Sapporo, Hokkaido 060-0812, Japan
- Hiroyuki Hamaguchi, Taro Fujiwara, Kinya Ishizaka: Department of Radiological Technology, Hokkaido University Hospital, North-14, West-5, Kita-ku, Sapporo, Hokkaido 060-8648, Japan
54. Tian X, Li C, Liu H, Li P, He J, Gao W. Applications of artificial intelligence in radiophysics. J Cancer Res Ther 2021;17:1603-1607. [DOI: 10.4103/jcrt.jcrt_1438_21]
55.
56. Giger ML. AI/Machine Learning in Medical Imaging. Mol Imaging 2021. [DOI: 10.1016/b978-0-12-816386-3.00052-1]
57. Wei D, Jahani N, Cohen E, Weinstein S, Hsieh MK, Pantalone L, Kontos D. Fully automatic quantification of fibroglandular tissue and background parenchymal enhancement with accurate implementation for axial and sagittal breast MRI protocols. Med Phys 2020;48:238-252. [PMID: 33150617; DOI: 10.1002/mp.14581]
Abstract
PURPOSE To propose and evaluate a fully automated technique for quantification of fibroglandular tissue (FGT) and background parenchymal enhancement (BPE) in breast MRI. METHODS We propose a fully automated method, where after preprocessing, FGT is segmented in T1-weighted, nonfat-saturated MRI. Incorporating an anatomy-driven prior probability for FGT and robust texture descriptors against intensity variations, our method effectively addresses major image processing challenges, including wide variations in breast anatomy and FGT appearance among individuals. Our framework then propagates this segmentation to dynamic contrast-enhanced (DCE)-MRI to quantify BPE within the segmented FGT regions. Axial and sagittal image data from 40 cancer-unaffected women were used to evaluate our proposed method vs a manually annotated reference standard. RESULTS High spatial correspondence was observed between the automatic and manual FGT segmentation (mean Dice similarity coefficient 81.14%). The FGT and BPE quantifications (denoted FGT% and BPE%) indicated high correlation (Pearson's r = 0.99 for both) between automatic and manual segmentations. Furthermore, the differences between the FGT% and BPE% quantified using automatic and manual segmentations were low (mean differences: -0.66 ± 2.91% for FGT% and -0.17 ± 1.03% for BPE%). When correlated with qualitative clinical BI-RADS ratings, the correlation coefficient for FGT% was still high (Spearman's ρ = 0.92), whereas that for BPE was lower (ρ = 0.65). Our proposed approach also performed significantly better than a previously validated method for sagittal breast MRI. CONCLUSIONS Our method demonstrated accurate fully automated quantification of FGT and BPE in both sagittal and axial breast MRI. Our results also suggested the complexity of BPE assessment, demonstrating relatively low correlation between segmentation and clinical rating.
Affiliation(s)
- Dong Wei: Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA; Tencent Jarvis Lab, Shenzhen, Guangdong, 518057, China
- Nariman Jahani, Eric Cohen, Susan Weinstein, Meng-Kang Hsieh, Lauren Pantalone, Despina Kontos: Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
58
|
Fully Automated Breast Density Segmentation and Classification Using Deep Learning. Diagnostics (Basel) 2020; 10:diagnostics10110988. [PMID: 33238512 PMCID: PMC7700286 DOI: 10.3390/diagnostics10110988] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2020] [Revised: 11/12/2020] [Accepted: 11/17/2020] [Indexed: 01/16/2023] Open
Abstract
Breast density estimation by visual evaluation remains challenging because of low contrast and substantial variation in the fatty tissue background of mammograms. The key to breast density classification is to detect the dense tissue in mammographic images correctly. Many methods have been proposed for breast density estimation; nevertheless, most are not fully automated, and they are adversely affected by a low signal-to-noise ratio and by variability of dense tissue in appearance and texture. This study aims to develop a fully automated breast tissue segmentation and classification pipeline using advanced deep learning techniques. A conditional generative adversarial network (cGAN) is applied to segment the dense tissue in mammograms. To complete the system for breast density classification, we propose a convolutional neural network (CNN) that classifies mammograms according to the Breast Imaging-Reporting and Data System (BI-RADS) standard. The classification network is fed the segmented dense-tissue masks generated by the cGAN. For screening mammography, 410 images of 115 patients from the INbreast dataset were used. The proposed framework segments the dense regions with an accuracy, Dice coefficient, and Jaccard index of 98%, 88%, and 78%, respectively. Furthermore, we obtained a precision, sensitivity, and specificity of 97.85%, 97.85%, and 99.28%, respectively, for breast density classification. These findings are promising and show that the proposed deep learning-based techniques can provide a clinically useful computer-aided tool for breast density analysis in digital mammography.
59. Nam Y, Park GE, Kang J, Kim SH. Fully Automatic Assessment of Background Parenchymal Enhancement on Breast MRI Using Machine-Learning Models. J Magn Reson Imaging 2020;53:818-826. [PMID: 33219624; DOI: 10.1002/jmri.27429]
Abstract
BACKGROUND Automated measurement and classification models with objectivity and reproducibility are required for accurate evaluation of the breast cancer risk of fibroglandular tissue (FGT) and background parenchymal enhancement (BPE). PURPOSE To develop and evaluate a machine-learning algorithm for breast FGT segmentation and BPE classification. STUDY TYPE Retrospective. POPULATION A total of 794 patients with breast cancer, with 594 patients assigned to the development set and 200 patients to the test set. FIELD STRENGTH/SEQUENCE 3T and 1.5T; T2-weighted, fat-saturated T1-weighted (T1W) with dynamic contrast enhancement (DCE). ASSESSMENT Manual segmentation was performed for the whole breast and FGT regions in the contralateral breast. The BPE region was determined by thresholding the subtraction of the pre- and postcontrast T1W images within the segmented FGT mask. Two radiologists independently assessed the categories of FGT and BPE. A deep-learning-based algorithm was designed to segment and measure the volume of the whole breast and FGT and to classify the grade of BPE. STATISTICAL TESTS Dice similarity coefficients (DSC) and Spearman correlation analysis were used to compare the volumes from the manual and deep-learning-based segmentations. Kappa statistics were used for agreement analysis. Areas under the receiver operating characteristic (ROC) curve (AUC) and F1 scores were calculated to evaluate the performance of BPE classification. RESULTS The mean (±SD) DSC for manual and deep-learning segmentations was 0.85 ± 0.11. The correlation coefficient for FGT volume from manual and deep-learning-based segmentations was 0.93. The overall accuracy of manual segmentation and deep-learning segmentation in the BPE classification task was 66% and 67%, respectively. For binary categorization of BPE grade (minimal/mild vs. moderate/marked), the overall accuracy increased to 91.5% with manual segmentation and 90.5% with deep-learning segmentation; the AUC was 0.93 for both methods. DATA CONCLUSION This deep-learning-based algorithm can provide reliable segmentation and classification results for BPE. LEVEL OF EVIDENCE: 3. TECHNICAL EFFICACY STAGE: 2.
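The BPE thresholding step described above, subtracting pre- from postcontrast images within the segmented FGT mask, could look like the following minimal sketch; the relative-enhancement threshold of 20% is an illustrative assumption, not the value used in the cited study.

```python
import numpy as np

def bpe_mask(pre: np.ndarray, post: np.ndarray, fgt_mask: np.ndarray,
             rel_threshold: float = 0.2) -> np.ndarray:
    """Label voxels inside the FGT mask whose relative enhancement
    (post - pre) / pre exceeds a threshold. The 20% default is an
    illustrative assumption only."""
    pre = pre.astype(np.float32)
    post = post.astype(np.float32)
    enhancement = (post - pre) / np.clip(pre, 1e-6, None)
    return (enhancement > rel_threshold) & fgt_mask.astype(bool)

def bpe_percent(pre, post, fgt_mask, rel_threshold=0.2) -> float:
    """BPE% as the enhancing fraction of the fibroglandular tissue volume."""
    bpe = bpe_mask(pre, post, fgt_mask, rel_threshold)
    fgt_voxels = fgt_mask.astype(bool).sum()
    return 100.0 * bpe.sum() / max(fgt_voxels, 1)
```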
Affiliation(s)
- Yoonho Nam: Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea; Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin, Republic of Korea
- Ga Eun Park, Sung Hun Kim: Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Junghwa Kang: Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin, Republic of Korea
60. Sanderink WBG, Strobbe LJA, Bult P, Schlooz-Vries MS, Lardenoije S, Venderink DJ, Sechopoulos I, Karssemeijer N, Vreuls W, Mann RM. Minimally invasive breast cancer excision using the breast lesion excision system under ultrasound guidance. Breast Cancer Res Treat 2020;184:37-43. [PMID: 32737712; PMCID: PMC7568696; DOI: 10.1007/s10549-020-05814-z]
Abstract
PURPOSE To assess the feasibility of completely excising small breast cancers using the automated, image-guided, single-pass radiofrequency-based breast lesion excision system (BLES) under ultrasound (US) guidance. METHODS From February 2018 to July 2019, 22 patients diagnosed with invasive carcinomas ≤ 15 mm at US and mammography were enrolled in this prospective, multi-center, ethics board-approved study. Patients underwent breast MRI to verify lesion size. BLES-based excision and surgery were performed during the same procedure. Histopathology findings from the BLES procedure and surgery were compared, and total excision findings were assessed. RESULTS Of the 22 patients, ten were excluded due to the lesion being > 15 mm and/or being multifocal at MRI, and one due to scheduling issues. The remaining 11 patients underwent BLES excision. Mean diameter of excised lesions at MRI was 11.8 mm (range 8.0-13.9 mm). BLES revealed ten (90.9%) invasive carcinomas of no special type, and one (9.1%) invasive lobular carcinoma. Histopathological results were identical for the needle biopsy, BLES, and surgical specimens for all lesions. None of the BLES excisions were adequate. Margins were usually compromised on both sides of the specimen, indicating that the excised volume was too small. Margin assessment was good for all BLES specimens. One technical complication occurred (retrieval of an empty BLES basket, specimen retrieved during subsequent surgery). CONCLUSIONS BLES allows accurate diagnosis of small invasive breast carcinomas. However, BLES cannot be considered as a therapeutic device for small invasive breast carcinomas due to not achieving adequate excision.
Affiliation(s)
- W B G Sanderink, S Lardenoije, I Sechopoulos, N Karssemeijer, R M Mann: Department of Medical Imaging/Radiology, Radboud University Medical Center, Geert Grooteplein 10, 6525 GA, Nijmegen, The Netherlands
- L J A Strobbe: Department of Surgical Oncology, Canisius-Wilhelmina Hospital, Nijmegen, The Netherlands
- P Bult: Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- M S Schlooz-Vries: Department of Surgery, Radboud University Medical Center, Nijmegen, The Netherlands
- D J Venderink: Department of Radiology, Canisius-Wilhelmina Hospital, Nijmegen, The Netherlands
- W Vreuls: Department of Pathology, Canisius-Wilhelmina Hospital, Nijmegen, The Netherlands
61. Volumetric breast density estimation on MRI using explainable deep learning regression. Sci Rep 2020;10:18095. [PMID: 33093572; PMCID: PMC7581772; DOI: 10.1038/s41598-020-75167-6]
Abstract
The purpose of this paper was to assess the feasibility of volumetric breast density estimation on MRI without segmentations, accompanied by an explainability step. A total of 615 patients with breast cancer were included for volumetric breast density estimation. A 3-dimensional regression convolutional neural network (CNN) was used to estimate the volumetric breast density. Patients were split into training (N = 400), validation (N = 50), and hold-out test (N = 165) sets. Hyperparameters were optimized using Neural Network Intelligence, and augmentations consisted of translations and rotations. The estimated densities were evaluated against the ground truth using Spearman's correlation and Bland-Altman plots. The output of the CNN was visually analyzed using SHapley Additive exPlanations (SHAP). Spearman's correlation between estimated and ground truth density was ρ = 0.81 (N = 165, P < 0.001) in the hold-out test set. The estimated density had a median bias of 0.70% (95% limits of agreement = −6.8% to 5.0%) relative to the ground truth. SHAP showed that in correct density estimations, the algorithm based its decision on fibroglandular and fatty tissue. In incorrect estimations, other structures such as the pectoral muscle or the heart were included. To conclude, it is feasible to automatically estimate volumetric breast density on MRI without segmentations, and to provide accompanying explanations.
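The agreement statistics reported above (Spearman's correlation and Bland-Altman limits of agreement) can be reproduced for any pair of estimates. A small sketch with hypothetical density values is shown below; note that the study reports a median bias, whereas this sketch uses the conventional mean bias.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical paired density estimates (percent) for a handful of patients.
ground_truth = np.array([12.5, 30.1, 8.2, 45.0, 22.3, 17.9])
cnn_estimate = np.array([13.0, 28.7, 9.5, 41.2, 23.1, 16.4])

rho, p_value = spearmanr(cnn_estimate, ground_truth)

# Bland-Altman statistics: bias and 95% limits of agreement.
differences = cnn_estimate - ground_truth
bias = np.mean(differences)
loa_low = bias - 1.96 * np.std(differences, ddof=1)
loa_high = bias + 1.96 * np.std(differences, ddof=1)

print(f"Spearman rho = {rho:.2f} (P = {p_value:.3f})")
print(f"Bias = {bias:.2f}%, 95% limits of agreement = [{loa_low:.2f}%, {loa_high:.2f}%]")
```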
62. Stember JN, Celik H, Krupinski E, Chang PD, Mutasa S, Wood BJ, Lignelli A, Moonis G, Schwartz LH, Jambawalikar S, Bagci U. Eye Tracking for Deep Learning Segmentation Using Convolutional Neural Networks. J Digit Imaging 2020;32:597-604. [PMID: 31044392; PMCID: PMC6646645; DOI: 10.1007/s10278-019-00220-4]
Abstract
Deep learning with convolutional neural networks (CNNs) has experienced tremendous growth in multiple healthcare applications and has been shown to have high accuracy in semantic segmentation of medical (e.g., radiology and pathology) images. However, a key barrier in the required training of CNNs is obtaining large-scale and precisely annotated imaging data. We sought to address the lack of annotated data with eye tracking technology. As a proof of principle, our hypothesis was that segmentation masks generated with the help of eye tracking (ET) would be very similar to those rendered by hand annotation (HA). Additionally, our goal was to show that a CNN trained on ET masks would be equivalent to one trained on HA masks, the latter being the current standard approach. Step 1: Screen captures of 19 publicly available radiologic images of assorted structures within various modalities were analyzed. ET and HA masks for all regions of interest (ROIs) were generated from these image datasets. Step 2: Utilizing a similar approach, ET and HA masks for 356 publicly available T1-weighted postcontrast meningioma images were generated. Three hundred six of these image + mask pairs were used to train a CNN with U-net-based architecture. The remaining 50 images were used as the independent test set. Step 1: ET and HA masks for the nonneurological images had an average Dice similarity coefficient (DSC) of 0.86 between each other. Step 2: Meningioma ET and HA masks had an average DSC of 0.85 between each other. After separate training using both approaches, the ET approach performed virtually identically to HA on the test set of 50 images. The former had an area under the curve (AUC) of 0.88, while the latter had AUC of 0.87. ET and HA predictions had trimmed mean DSCs compared to the original HA maps of 0.73 and 0.74, respectively. These trimmed DSCs between ET and HA were found to be statistically equivalent with a p value of 0.015. We have demonstrated that ET can create segmentation masks suitable for deep learning semantic segmentation. Future work will integrate ET to produce masks in a faster, more natural manner that distracts less from typical radiology clinical workflow.
Affiliation(s)
- J N Stember, S Mutasa, A Lignelli, G Moonis, L H Schwartz, S Jambawalikar: Department of Radiology, Columbia University Medical Center - NYPH, New York, NY, 10032, USA
- H Celik, B J Wood: The National Institutes of Health, Clinical Center, Bethesda, MD, 20892, USA
- E Krupinski: Department of Radiology & Imaging Sciences, Emory University, Atlanta, GA, 30322, USA
- P D Chang: Department of Radiology, University of California, Irvine, CA, 92697, USA
- U Bagci: Center for Research in Computer Vision, University of Central Florida, 4328 Scorpius St. HEC 221, Orlando, FL, 32816, USA
63. Zhang Y, Lobo-Mueller EM, Karanicolas P, Gallinger S, Haider MA, Khalvati F. Prognostic Value of Transfer Learning Based Features in Resectable Pancreatic Ductal Adenocarcinoma. Front Artif Intell 2020;3:550890. [PMID: 33733206; PMCID: PMC7861273; DOI: 10.3389/frai.2020.550890]
Abstract
Background: Pancreatic Ductal Adenocarcinoma (PDAC) is one of the most aggressive cancers, with an extremely poor prognosis. Radiomics has shown prognostic ability in multiple types of cancer, including PDAC. However, the prognostic value of traditional radiomics pipelines, which are based on hand-crafted radiomic features alone, is limited. Methods: Convolutional neural networks (CNNs) have been shown to outperform radiomics models in computer vision tasks. However, training a CNN from scratch requires a large sample size, which is not feasible in most medical imaging studies. As an alternative solution, CNN-based transfer learning models have shown the potential for achieving reasonable performance using small datasets. In this work, we developed and validated a CNN-based transfer learning model for prognostication of overall survival in PDAC patients using two independent resectable PDAC cohorts. Results: The proposed transfer learning-based prognostication model for overall survival achieved an area under the receiver operating characteristic curve of 0.81 on the test cohort, which was significantly higher than that of the traditional radiomics model (0.54). To further assess the prognostic value of the models, the predicted probabilities of death generated from the two models were used as risk scores in a univariate Cox proportional hazards model; while the risk score from the traditional radiomics model was not associated with overall survival, the proposed transfer learning-based risk score had significant prognostic value, with a hazard ratio of 1.86 (95% confidence interval: 1.15-3.53, p-value: 0.04). Conclusions: This result suggests that transfer learning-based models may significantly improve prognostic performance in typical small sample size medical imaging studies.
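The survival analysis described above feeds a model-derived risk score into a univariate Cox model. A minimal sketch of that step, using the lifelines package on synthetic data, is given below; the variable names and the simulated cohort are hypothetical and do not reproduce the study's results.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical data: a model-derived risk score plus survival time and event flag.
rng = np.random.default_rng(0)
n = 80
risk_score = rng.uniform(0, 1, n)                       # e.g. predicted probability of death
time = rng.exponential(scale=24 / (0.5 + risk_score))   # months; shorter survival for higher risk
event = (rng.uniform(size=n) < 0.8).astype(int)          # 1 = death observed, 0 = censored

df = pd.DataFrame({"risk_score": risk_score, "time": time, "event": event})

# Univariate Cox proportional hazards model with the risk score as the only covariate.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary)  # includes the hazard ratio (exp(coef)) and its confidence interval
```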
Affiliation(s)
- Yucheng Zhang: Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Edrise M. Lobo-Mueller: Department of Diagnostic Imaging and Department of Oncology, Faculty of Medicine and Dentistry, Cross Cancer Institute, University of Alberta, Edmonton, AB, Canada
- Paul Karanicolas: Department of Surgery, Sunnybrook Health Sciences Centre, Toronto, ON, Canada
- Steven Gallinger: Lunenfeld-Tanenbaum Research Institute, Sinai Health System, Toronto, ON, Canada
- Masoom A. Haider: Lunenfeld-Tanenbaum Research Institute, Sinai Health System, Toronto, ON, Canada; Joint Department of Medical Imaging, Sinai Health System, University Health Network, University of Toronto, Toronto, ON, Canada
- Farzad Khalvati: Department of Medical Imaging, University of Toronto, Toronto, ON, Canada; Research Institute, The Hospital for Sick Children, Toronto, ON, Canada; Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, ON, Canada
64. Breast Cancer Mass Detection in DCE-MRI Using Deep-Learning Features Followed by Discrimination of Infiltrative vs. In Situ Carcinoma through a Machine-Learning Approach. Appl Sci (Basel) 2020. [DOI: 10.3390/app10176109]
Abstract
Breast cancer is the leading cause of cancer deaths in women worldwide. This aggressive tumor can be categorized into two main groups, in situ and infiltrative, with the latter being the most common type of malignant lesion. Magnetic resonance imaging (MRI) has been shown to provide the highest sensitivity in the detection of lesions and in the discrimination between benign and malignant lesions when interpreted by expert radiologists. In this article, we present the prototype of a computer-aided detection/diagnosis (CAD) system that could provide valuable assistance to radiologists for discrimination between in situ and infiltrating tumors. The system consists of two main processing levels: (1) localization of possibly tumoral regions of interest (ROIs) through an iterative procedure based on intensity values (ROI Hunter), followed by a deep-feature extraction and classification method for false-positive rejection; and (2) characterization of the selected ROIs and discrimination between in situ and invasive tumor, consisting of radiomics feature extraction and classification through a machine-learning algorithm. The CAD system was developed and evaluated using a DCE-MRI image database containing at least one confirmed mass per image, as diagnosed by an expert radiologist. When evaluating the accuracy of the ROI Hunter procedure with respect to the radiologist-drawn boundaries, the sensitivity for mass detection was found to be 75%. The AUC of the ROC curve for discrimination between in situ and infiltrative tumors was 0.70.
65. Chen Y, Ruan D, Xiao J, Wang L, Sun B, Saouaf R, Yang W, Li D, Fan Z. Fully automated multiorgan segmentation in abdominal magnetic resonance imaging with deep neural networks. Med Phys 2020;47:4971-4982. [PMID: 32748401; DOI: 10.1002/mp.14429]
Abstract
PURPOSE Segmentation of multiple organs-at-risk (OARs) is essential for magnetic resonance (MR)-only radiation therapy treatment planning and MR-guided adaptive radiotherapy of abdominal cancers. Current practice requires manual delineation that is labor-intensive, time-consuming, and prone to intra- and interobserver variations. We developed a deep learning (DL) technique for fully automated segmentation of multiple OARs on clinical abdominal MR images with high accuracy, reliability, and efficiency. METHODS We developed Automated deep Learning-based abdominal multiorgan segmentation (ALAMO) technique based on two-dimensional U-net and a densely connected network structure with tailored design in data augmentation and training procedures such as deep connection, auxiliary supervision, and multiview. The model takes in multislice MR images and generates the output of segmentation results. 3.0-Tesla T1 VIBE (Volumetric Interpolated Breath-hold Examination) images of 102 subjects were used in our study and split into 66 for training, 16 for validation, and 20 for testing. Ten OARs were studied, including the liver, spleen, pancreas, left/right kidneys, stomach, duodenum, small intestine, spinal cord, and vertebral bodies. An experienced radiologist manually labeled each OAR, followed by reediting, if necessary, by a senior radiologist, to create the ground-truth. The performance was measured using volume overlapping and surface distance. RESULTS The ALAMO technique generated segmentation labels in good agreement with the manual results. Specifically, among the ten OARs, nine achieved high dice similarity coefficients (DSCs) in the range of 0.87-0.96, except for the duodenum with a DSC of 0.80. The inference completed within 1 min for a three-dimensional volume of 320 × 288 × 180. Overall, the ALAMO model matched the state-of-the-art techniques in performance. CONCLUSION The proposed ALAMO technique allows for fully automated abdominal MR segmentation with high accuracy and practical memory and computation time demands.
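The volume-overlap evaluation described above is typically computed per organ label. A small sketch of per-organ Dice computation on integer label maps is shown below; the label values and organ subset are illustrative and are not the ALAMO code.

```python
import numpy as np

ORGAN_LABELS = {1: "liver", 2: "spleen", 3: "pancreas"}  # illustrative subset of the ten OARs

def per_organ_dice(auto: np.ndarray, manual: np.ndarray, labels: dict) -> dict:
    """Dice similarity coefficient per organ label in two integer label maps."""
    scores = {}
    for value, name in labels.items():
        a = auto == value
        m = manual == value
        denom = a.sum() + m.sum()
        scores[name] = 2.0 * np.logical_and(a, m).sum() / denom if denom else 1.0
    return scores

# Toy 2D label maps standing in for 3D segmentations.
auto = np.array([[1, 1, 2], [0, 2, 2], [0, 0, 3]])
manual = np.array([[1, 1, 2], [0, 2, 0], [0, 3, 3]])
print(per_organ_dice(auto, manual, ORGAN_LABELS))
```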
Affiliation(s)
- Yuhua Chen: Department of Bioengineering, University of California, Los Angeles, CA, USA; Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Dan Ruan: Department of Bioengineering, University of California, Los Angeles, CA, USA; Department of Radiation Oncology, University of California, Los Angeles, CA, USA
- Jiayu Xiao: Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Lixia Wang: Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA; Department of Radiology, Chaoyang Hospital, Capital Medical University, Beijing, China
- Bin Sun: Department of Radiology, Fujian Medical University Union Hospital, Fuzhou, Fujian, China
- Rola Saouaf: Department of Imaging, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Wensha Yang: Department of Radiation Oncology, University of Southern California, Los Angeles, CA, USA
- Debiao Li, Zhaoyang Fan: Department of Bioengineering, University of California, Los Angeles, CA, USA; Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA; Department of Medicine, University of California, Los Angeles, CA, USA
66. Meyer-Bäse A, Morra L, Meyer-Bäse U, Pinker K. Current Status and Future Perspectives of Artificial Intelligence in Magnetic Resonance Breast Imaging. Contrast Media Mol Imaging 2020;2020:6805710. [PMID: 32934610; PMCID: PMC7474774; DOI: 10.1155/2020/6805710]
Abstract
Recent advances in artificial intelligence (AI) and deep learning (DL) have impacted many scientific fields, including biomedical imaging. Magnetic resonance imaging (MRI) is a well-established method in breast imaging with several indications, including screening, staging, and therapy monitoring. The rapid development and subsequent implementation of AI into clinical breast MRI has the potential to affect clinical decision-making, guide treatment selection, and improve patient outcomes. The goal of this review is to provide a comprehensive picture of the current status and future perspectives of AI in breast MRI. We will review DL applications and compare them to standard data-driven techniques. We will emphasize the important aspect of developing quantitative imaging biomarkers for precision medicine and the potential of breast MRI and DL in this context. Finally, we will discuss future challenges of DL applications for breast MRI and an AI-augmented clinical decision strategy.
Affiliation(s)
- Anke Meyer-Bäse: Department of Scientific Computing, Florida State University, Tallahassee, Florida 32310-4120, USA
- Lia Morra: Dipartimento di Automatica e Informatica, Politecnico di Torino, Torino, Italy
- Uwe Meyer-Bäse: Department of Electrical and Computer Engineering, Florida A&M University and Florida State University, Tallahassee, Florida 32310-4120, USA
- Katja Pinker: Department of Biomedical Imaging and Image-Guided Therapy, Division of Molecular and Gender Imaging, Medical University of Vienna, Vienna, Austria; Department of Radiology, Memorial Sloan-Kettering Cancer Center, New York, New York 10065, USA
67. Rella R, Bufi E, Belli P, Petta F, Serra T, Masiello V, Scrofani AR, Barone R, Orlandi A, Valentini V, Manfredi R. Association between background parenchymal enhancement and tumor response in patients with breast cancer receiving neoadjuvant chemotherapy. Diagn Interv Imaging 2020;101:649-655. [PMID: 32654985; DOI: 10.1016/j.diii.2020.05.010]
Abstract
PURPOSE To analyze the relationships between background parenchymal enhancement (BPE) of the contralateral healthy breast and tumor response after neoadjuvant chemotherapy (NAC) in women with breast cancer. MATERIALS AND METHODS A total of 228 women (mean age, 47.6 ± 10 [SD] years; range: 24-74 years) with invasive breast cancer who underwent NAC were included. All patients underwent breast magnetic resonance imaging (MRI) before and after NAC, and 127 patients underwent MRI before, during (after the 4th cycle of NAC), and after NAC. Quantitative semi-automated analysis of BPE of the contralateral healthy breast was performed. The enhancement level on baseline MRI (baseline BPE) and on MRI after chemotherapy (final BPE), as well as the change in enhancement rate between baseline MRI and final MRI (total BPE change) and between baseline MRI and midline MRI (early BPE change), were recorded. Associations between BPE and tumor response, menopausal status, tumor phenotype, NAC type, and tumor stage at diagnosis were searched for. Pathologic complete response (pCR) was defined as the absence of residual invasive cancer cells in the breast and ipsilateral lymph nodes. RESULTS No differences were found in baseline BPE, final BPE, or early and total BPE changes between the pCR and non-pCR groups. Early BPE change was higher in the non-pCR group in patients with stage 3 and 4 breast cancers (P=0.019) and in human epidermal growth factor receptor 2 (HER2)-negative patients (P=0.020). CONCLUSION Early reduction of BPE in the contralateral breast during NAC may be an early predictor of lack of tumor response, showing potential as an imaging biomarker of treatment response, especially in women with stage 3 or 4 breast cancers and in HER2-negative breast cancers.
Affiliation(s)
- R Rella, E Bufi: UOC di Diagnostica per Immagini ed Interventistica Generale, Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico Universitario A. Gemelli IRCCS, 00168 Rome, Italy
- P Belli, F Petta, T Serra, A R Scrofani, R Manfredi: UOC di Diagnostica per Immagini ed Interventistica Generale, Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico Universitario A. Gemelli IRCCS, 00168 Rome, Italy; Università Cattolica Sacro Cuore, 00168 Rome, Italy
- V Masiello, R Barone, V Valentini: UOC di Radioterapia Oncologica, Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico Universitario A. Gemelli IRCCS, 00168 Rome, Italy
- A Orlandi: UOC Oncologia Medica, Dipartimento di Scienze Gastroenterologiche, Endocrino-Metaboliche e Nefro-Urologiche, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, 00168 Rome, Italy
68. Heutink F, Koch V, Verbist B, van der Woude WJ, Mylanus E, Huinck W, Sechopoulos I, Caballo M. Multi-Scale deep learning framework for cochlea localization, segmentation and analysis on clinical ultra-high-resolution CT images. Comput Methods Programs Biomed 2020;191:105387. [PMID: 32109685; DOI: 10.1016/j.cmpb.2020.105387]
Abstract
BACKGROUND AND OBJECTIVE Performing patient-specific, pre-operative cochlea CT-based measurements could be helpful to positively affect the outcome of cochlear surgery in terms of intracochlear trauma and loss of residual hearing. Therefore, we propose a method to automatically segment and measure the human cochlea in clinical ultra-high-resolution (UHR) CT images, and investigate differences in cochlea size for personalized implant planning. METHODS 123 temporal bone CT scans were acquired with two UHR-CT scanners, and used to develop and validate a deep learning-based system for automated cochlea segmentation and measurement. The segmentation algorithm is composed of two major steps (detection and pixel-wise classification) in cascade, and aims at combining the results of a multi-scale computer-aided detection scheme with a U-Net-like architecture for pixelwise classification. The segmentation results were used as an input to the measurement algorithm, which provides automatic cochlear measurements (volume, basal diameter, and cochlear duct length (CDL)) through the combined use of convolutional neural networks and thinning algorithms. Automatic segmentation was validated against manual annotation, by the means of Dice similarity, Boundary-F1 (BF) score, and maximum and average Hausdorff distances, while measurement errors were calculated between the automatic results and the corresponding manually obtained ground truth on a per-patient basis. Finally, the developed system was used to investigate the differences in cochlea size within our patient cohort, to relate the measurement errors to the actual variation in cochlear size across different patients. RESULTS Automatic segmentation resulted in a Dice of 0.90 ± 0.03, BF score of 0.95 ± 0.03, and maximum and average Hausdorff distance of 3.05 ± 0.39 and 0.32 ± 0.07 against manual annotation. Automatic cochlear measurements resulted in errors of 8.4% (volume), 5.5% (CDL), 7.8% (basal diameter). The cochlea size varied broadly, ranging between 0.10 and 0.28 ml (volume), 1.3 and 2.5 mm (basal diameter), and 27.7 and 40.1 mm (CDL). CONCLUSIONS The proposed algorithm could successfully segment and analyze the cochlea on UHR-CT images, resulting in accurate measurements of cochlear anatomy. Given the wide variation in cochlear size found in our patient cohort, it may find application as a pre-operative tool in cochlear implant surgery, potentially helping elaborate personalized treatment strategies based on patient-specific, image-based anatomical measurements.
Affiliation(s)
- Floris Heutink, Emmanuel Mylanus, Wendy Huinck: Department of Otorhinolaryngology and Donders Institute for Brain, Cognition and Behavior, Radboudumc, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, the Netherlands
- Valentin Koch, Willem Jan van der Woude, Marco Caballo: Department of Radiology and Nuclear Medicine, Radboudumc, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, the Netherlands
- Berit Verbist: Department of Radiology, Leiden University Medical Centre, Albinusdreef 2, 2333 ZA, Leiden, the Netherlands
- Ioannis Sechopoulos: Department of Radiology and Nuclear Medicine, Radboudumc, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, the Netherlands; Dutch Expert Center for Screening (LRCB), Wijchenseweg 101, 6538 SW, Nijmegen, the Netherlands
69. Shan H, Jia X, Yan P, Li Y, Paganetti H, Wang G. Synergizing medical imaging and radiotherapy with deep learning. Mach Learn Sci Technol 2020. [DOI: 10.1088/2632-2153/ab869f]
70. Carl SH, Duempelmann L, Shimada Y, Bühler M. A fully automated deep learning pipeline for high-throughput colony segmentation and classification. Biol Open 2020;9:bio052936. [PMID: 32487517; PMCID: PMC7328007; DOI: 10.1242/bio.052936]
Abstract
Adenine auxotrophy is a commonly used non-selective genetic marker in yeast research. It allows investigators to easily visualize and quantify various genetic and epigenetic events by simply reading out colony color. However, manual counting of large numbers of colonies is extremely time-consuming, difficult to reproduce and possibly inaccurate. Using cutting-edge neural networks, we have developed a fully automated pipeline for colony segmentation and classification, which speeds up white/red colony quantification 100-fold over manual counting by an experienced researcher. Our approach uses readily available training data and can be smoothly integrated into existing protocols, vastly speeding up screening assays and increasing the statistical power of experiments that employ adenine auxotrophy.
Affiliation(s)
- Sarah H Carl: Friedrich Miescher Institute for Biomedical Research, Maulbeerstrasse 66, 4058 Basel, Switzerland; SIB Swiss Institute of Bioinformatics, Quartier Sorge - Batiment Amphipole, 1015 Lausanne, Switzerland
- Lea Duempelmann, Marc Bühler: Friedrich Miescher Institute for Biomedical Research, Maulbeerstrasse 66, 4058 Basel, Switzerland; University of Basel, Petersplatz 10, 4003 Basel, Switzerland
- Yukiko Shimada: Friedrich Miescher Institute for Biomedical Research, Maulbeerstrasse 66, 4058 Basel, Switzerland
71. Development of a Deep Learning-Based Algorithm to Detect the Distal End of a Surgical Instrument. Appl Sci (Basel) 2020. [DOI: 10.3390/app10124245]
Abstract
This work aims to develop an algorithm to detect the distal end of a surgical instrument using object detection with deep learning. We employed nine video recordings of carotid endarterectomies for training and testing. We obtained regions of interest (ROIs; 32 × 32 pixels) at the end of the surgical instrument on the video images as supervised data. We applied data augmentation to these ROIs. We employed a You Only Look Once Version 2 (YOLOv2)-based convolutional neural network as the network model for training. The detectors were validated to evaluate average detection precision. The proposed algorithm used the central coordinates of the bounding boxes predicted by YOLOv2. Using the test data, we calculated the detection rate. The average precision (AP) for the ROIs without data augmentation was 0.4272 ± 0.108. The AP with data augmentation, of 0.7718 ± 0.0824, was significantly higher than that without data augmentation. The detection rates, computed from whether the calculated center coordinates fell within central regions of 8 × 8 pixels and 16 × 16 pixels, were 0.6100 ± 0.1014 and 0.9653 ± 0.0177, respectively. We expect that the proposed algorithm will be efficient for the analysis of surgical records.
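One way to read the detection-rate definition above is to count a frame as detected when the predicted center falls inside a small window around the ground-truth center. The sketch below implements that reading on hypothetical coordinates; it is an interpretation, not the authors' exact evaluation code.

```python
import numpy as np

def detection_rate(pred_centers: np.ndarray, true_centers: np.ndarray, window: int) -> float:
    """Fraction of frames in which the predicted tip center falls inside a
    window x window pixel box centered on the ground-truth center. Treating
    the published 8x8 / 16x16 regions this way is an assumption."""
    half = window / 2.0
    inside = np.all(np.abs(pred_centers - true_centers) <= half, axis=1)
    return float(inside.mean())

# Hypothetical predicted vs. ground-truth tip centers (x, y) for five frames.
pred = np.array([[100, 120], [98, 118], [108, 125], [99, 121], [102, 119]])
true = np.array([[101, 119], [101, 119], [101, 119], [101, 119], [101, 119]])
print(detection_rate(pred, true, window=8))   # 0.8 for this toy case
print(detection_rate(pred, true, window=16))  # 1.0 for this toy case
```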
72. Cui S, Tseng HH, Pakela J, Ten Haken RK, El Naqa I. Introduction to machine and deep learning for medical physicists. Med Phys 2020;47:e127-e147. [PMID: 32418339; PMCID: PMC7331753; DOI: 10.1002/mp.14140]
Abstract
Recent years have witnessed tremendous growth in the application of machine learning (ML) and deep learning (DL) techniques in medical physics. Embracing the current big data era, medical physicists equipped with these state-of-the-art tools should be able to solve pressing problems in modern radiation oncology. Here, a review of the basic aspects involved in ML/DL model building, including data processing, model training, and validation for medical physics applications, is presented and discussed. Machine learning can be categorized based on the underlying task into supervised learning, unsupervised learning, or reinforcement learning; each of these categories has its own input/output dataset characteristics and aims to solve different classes of problems in medical physics, ranging from automation of processes to predictive analytics. It is recognized that data size requirements may vary depending on the specific medical physics application and the nature of the algorithms applied. Data processing, which is a crucial step for model stability and precision, should be performed before training the model. Deep learning, as a subset of ML, is able to learn multilevel representations from raw input data, eliminating the necessity for hand-crafted features in classical ML. It can be thought of as an extension of the classical linear models but with multilayer (deep) structures and nonlinear activation functions. The logic of going "deeper" is related to learning complex data structures, and its realization has been aided by recent advancements in parallel computing architectures and the development of more robust optimization methods for efficient training of these algorithms. Model validation is an essential part of ML/DL model building. Without it, the model being developed cannot be easily trusted to generalize to unseen data. Whenever applying ML/DL, one should keep in mind, according to Amara's law, that humans may tend to overestimate the ability of a technology in the short term and underestimate its capability in the long term. To establish the role of ML/DL in the standard clinical workflow, models that balance accuracy and interpretability should be developed. Machine learning/DL algorithms have potential in numerous radiation oncology applications, including automating mundane procedures, improving the efficiency and safety of auto-contouring, treatment planning, quality assurance, motion management, and outcome predictions. Medical physicists have been at the frontiers of technology translation into medicine, and they ought to be prepared to embrace the inevitable role of ML/DL in the practice of radiation oncology and lead its clinical implementation.
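As a minimal illustration of the train/validate/test discipline the review emphasizes, the following sketch fits a simple supervised model with cross-validation on synthetic data; the feature matrix, outcome, and model choice are placeholders rather than a medical-physics workflow.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression

# Synthetic feature matrix and binary outcome, purely illustrative.
X, y = make_classification(n_samples=200, n_features=10, n_informative=4, random_state=0)

# Hold out an independent test set, then cross-validate on the training portion.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000)
cv_auc = cross_val_score(model, X_train, y_train, cv=5, scoring="roc_auc")
model.fit(X_train, y_train)
test_acc = model.score(X_test, y_test)  # accuracy; AUC would need predict_proba + roc_auc_score
print(f"5-fold CV AUC: {cv_auc.mean():.2f} +/- {cv_auc.std():.2f}")
print(f"Held-out test accuracy: {test_acc:.2f}")
```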
Affiliation(s)
- Sunan Cui, Julia Pakela: Department of Radiation Oncology, University of Michigan, Ann Arbor, MI 48103, USA; Applied Physics Program, University of Michigan, Ann Arbor, MI 48109, USA
- Huan-Hsin Tseng, Randall K. Ten Haken, Issam El Naqa: Department of Radiation Oncology, University of Michigan, Ann Arbor, MI 48103, USA
73. Liu Y, Nacewicz BM, Zhao G, Adluru N, Kirk GR, Ferrazzano PA, Styner MA, Alexander AL. A 3D Fully Convolutional Neural Network With Top-Down Attention-Guided Refinement for Accurate and Robust Automatic Segmentation of Amygdala and Its Subnuclei. Front Neurosci 2020;14:260. [PMID: 32508558; PMCID: PMC7253589; DOI: 10.3389/fnins.2020.00260]
Abstract
Recent advances in deep learning have improved the segmentation accuracy of subcortical brain structures, which would be useful in neuroimaging studies of many neurological disorders. However, most existing deep learning-based approaches in neuroimaging do not investigate the specific difficulties that exist in segmenting extremely small but important brain regions such as the subnuclei of the amygdala. To tackle this challenging task, we developed a dual-branch dilated residual 3D fully convolutional network with parallel convolutions to extract more global context and alleviate the class imbalance issue by maintaining a small receptive field that is just the size of the regions of interest (ROIs). We also conduct multi-scale feature fusion in both parallel and series to compensate for the potential information loss during convolutions, which has been shown to be important for small objects. The serial feature fusion enabled by residual connections is further enhanced by a proposed top-down attention-guided refinement unit, where the high-resolution low-level spatial details are selectively integrated to complement the high-level but coarse semantic information, enriching the final feature representations. As a result, the segmentations resulting from our method are more accurate both volumetrically and morphologically, compared with other deep learning-based approaches. To the best of our knowledge, this work is the first deep learning-based approach that targets the subregions of the amygdala. We also demonstrated the feasibility of using a cycle-consistent generative adversarial network (CycleGAN) to harmonize multi-site MRI data, and show that our method generalizes well to challenging traumatic brain injury (TBI) datasets collected from multiple centers. This appears to be a promising strategy for image segmentation in multi-site studies and in data with increased morphological variability from significant brain pathology.
Affiliation(s)
- Yilin Liu, Nagesh Adluru, Gregory R. Kirk: Waisman Brain Imaging Laboratory, University of Wisconsin-Madison, Madison, WI, United States
- Brendon M. Nacewicz: Department of Psychiatry, University of Wisconsin-Madison, Madison, WI, United States
- Gengyan Zhao: Department of Medical Physics, University of Wisconsin-Madison, Madison, WI, United States
- Peter A. Ferrazzano: Waisman Brain Imaging Laboratory, University of Wisconsin-Madison, Madison, WI, United States; Department of Pediatrics, University of Wisconsin-Madison, Madison, WI, United States
- Martin A. Styner: Department of Psychiatry, University of North Carolina-Chapel Hill, Chapel Hill, NC, United States; Department of Computer Science, University of North Carolina-Chapel Hill, Chapel Hill, NC, United States
- Andrew L. Alexander: Waisman Brain Imaging Laboratory, University of Wisconsin-Madison, Madison, WI, United States; Department of Psychiatry, University of Wisconsin-Madison, Madison, WI, United States; Department of Medical Physics, University of Wisconsin-Madison, Madison, WI, United States
74
|
Ma X, Wang J, Zheng X, Liu Z, Long W, Zhang Y, Wei J, Lu Y. Automated fibroglandular tissue segmentation in breast MRI using generative adversarial networks. Phys Med Biol 2020; 65:105006. [PMID: 32155611 DOI: 10.1088/1361-6560/ab7e7f] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Fibroglandular tissue (FGT) segmentation is a crucial step for quantitative analysis of background parenchymal enhancement (BPE) in magnetic resonance imaging (MRI), which is useful for breast cancer risk assessment. In this study, we develop an automated deep learning method based on a generative adversarial network (GAN) to identify the FGT region in MRI volumes and evaluate its impact on a specific clinical application. The GAN consists of an improved U-Net as a generator to generate FGT candidate areas and a patch deep convolutional neural network (DCNN) as a discriminator to evaluate the authenticity of the synthetic FGT region. The proposed method has two improvements compared to the classical U-Net: (1) the improved U-Net is designed to extract more features of the FGT region for a more accurate description of the FGT region; (2) a patch DCNN is designed for discriminating the authenticity of the FGT region generated by the improved U-Net, which makes the segmentation result more stable and accurate. A dataset of 100 three-dimensional (3D) bilateral breast MRI scans from 100 patients (aged 22-78 years) was used in this study with Institutional Review Board (IRB) approval. 3D hand-segmented FGT areas for all breasts were provided as a reference standard. Five-fold cross-validation was used in training and testing of the models. The Dice similarity coefficient (DSC) and Jaccard index (JI) values were evaluated to measure the segmentation accuracy. The previous method using classical U-Net was used as a baseline in this study. In the five partitions of the cross-validation set, the GAN achieved DSC and JI values of 87.0 ± 7.0% and 77.6 ± 10.1%, respectively, while the corresponding values obtained by the baseline method were 81.1 ± 8.7% and 69.0 ± 11.3%, respectively. The proposed method is significantly superior to the previous method using U-Net. The FGT segmentation impacted the BPE quantification application in the following manner: the correlation coefficients between the quantified BPE value and BI-RADS BPE categories provided by the radiologist were 0.46 ± 0.15 (best: 0.63) based on GAN segmented FGT areas, while the corresponding correlation coefficients were 0.41 ± 0.16 (best: 0.60) based on baseline U-Net segmented FGT areas. BPE can be quantified better using the FGT areas segmented by the proposed GAN model than using the FGT areas segmented by the baseline U-Net.
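The paper pairs an improved U-Net generator with a patch DCNN discriminator that judges whether an FGT region is real or synthetic. The authors' discriminator details are not given here, so the PatchGAN-style stack below, with arbitrary layer counts and channel widths (`PatchDiscriminator`, `base`), is only a generic illustration of the idea, not their architecture.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Small PatchGAN-style discriminator: each output score judges one image patch."""
    def __init__(self, in_ch=2, base=32):
        super().__init__()
        # Input: an image slice concatenated with a (real or generated) FGT mask.
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.BatchNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, base * 4, 4, stride=2, padding=1), nn.BatchNorm2d(base * 4),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 4, 1, 4, stride=1, padding=1),  # per-patch real/fake logits
        )

    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))

disc = PatchDiscriminator()
scores = disc(torch.randn(1, 1, 256, 256), torch.randn(1, 1, 256, 256))
print(scores.shape)  # torch.Size([1, 1, 31, 31]): a grid of patch-level scores
```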
Collapse
Affiliation(s)
- Xiangyuan Ma
- School of Data and Computer Science, Sun Yat-Sen University, Guangzhou, People's Republic of China. Guangdong Province Key Laboratory Computational Science, Sun Yat-Sen University, Guangzhou, People's Republic of China
| | | | | | | | | | | | | | | |
Collapse
|
75
|
Liu M, Vanguri R, Mutasa S, Ha R, Liu YC, Button T, Jambawalikar S. Channel width optimized neural networks for liver and vessel segmentation in liver iron quantification. Comput Biol Med 2020; 122:103798. [PMID: 32658724 DOI: 10.1016/j.compbiomed.2020.103798] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2020] [Revised: 04/27/2020] [Accepted: 04/29/2020] [Indexed: 12/19/2022]
Abstract
INTRODUCTION MRI T2* relaxometry protocols are often used for liver iron quantification in patients with hemochromatosis. Several methods exist to semi-automatically segment parenchyma and exclude vessels for this calculation. PURPOSE To determine whether including multiple echoes as inputs to convolutional neural networks (CNNs) improves automated liver and vessel segmentation in MRI T2* relaxometry protocols and to determine whether the resultant segmentations agree with manual segmentations for liver iron quantification analysis. METHODS A multi-echo Gradient Recalled Echo (GRE) MRI sequence for T2* relaxometry was performed for 79 exams on 31 patients with hemochromatosis for iron quantification analysis. 275 axial liver slices were manually segmented as ground-truth masks. A batch-normalized U-Net with variable input width to incorporate multiple echoes is used for segmentation, with the Dice coefficient as the accuracy metric. ANOVA is used to evaluate the significance of channel-width changes for segmentation accuracy. Linear regression is used to model the relationship of channel width to segmentation accuracy. Liver segmentations are applied to relaxometry data to calculate liver T2*, yielding liver iron concentration (LIC) derived from literature-based calibration curves. Manual and CNN-based LIC values are compared with Pearson correlation. Bland-Altman plots are used to visualize differences between manual and CNN-based LIC values. RESULTS Performance metrics are tested on 55 hold-out slices. Linear regression indicates a monotonic increase of Dice with increasing channel depth (p = 0.001) with a slope of 3.61e-3. ANOVA indicates a significant increase in segmentation accuracy over a single channel starting at three channels. Incorporating all channels results in an average Dice of 0.86, an average increase of 0.07 over a single channel. The calculated LIC from CNN-segmented livers agrees well with manual segmentation (R = 0.998, slope = 0.914, p << 0.001), with an average absolute difference of 0.27 ± 0.99 mg Fe/g or 1.34 ± 4.3%. CONCLUSION More input echoes yield higher model accuracy until the noise floor is reached. Echoes beyond the first three echo times in GRE-based T2* relaxometry do not contribute significant information for segmentation of the liver for LIC calculation. Deep learning models with a three-channel width allow generalization of the model to protocols of more than three echoes, effectively a universal requirement for relaxometry. Deep learning segmentations achieve good accuracy compared with manual segmentations with minimal preprocessing. Liver iron values calculated from hand-segmented and neural network-segmented livers were not statistically different from each other.
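The Dice coefficient used as the accuracy metric above measures the overlap between a predicted mask and the ground-truth mask. A minimal NumPy implementation (function name and smoothing term are ours, not from the paper) looks like this:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2 * |A ∩ B| / (|A| + |B|) for two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two overlapping liver masks on a 2D slice.
a = np.zeros((64, 64), dtype=np.uint8); a[10:40, 10:40] = 1
b = np.zeros((64, 64), dtype=np.uint8); b[15:45, 15:45] = 1
print(round(dice_coefficient(a, b), 3))  # ≈ 0.694
```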
Collapse
Affiliation(s)
- Michael Liu
- Department of Radiology, Columbia University Irving Medical Center, 622 W 168th Street, New York, NY, 10032, USA.
| | - Rami Vanguri
- Department of Pathology & Cell Biology, Columbia University, New York, NY, USA
| | - Simukayi Mutasa
- Department of Radiology, Columbia University Irving Medical Center, 622 W 168th Street, New York, NY, 10032, USA
| | - Richard Ha
- Department of Radiology, Columbia University Irving Medical Center, 622 W 168th Street, New York, NY, 10032, USA
| | - Yu-Cheng Liu
- Department of Radiology, Columbia University Irving Medical Center, 622 W 168th Street, New York, NY, 10032, USA
| | - Terry Button
- Department of Radiology, Stony Brook University, Stony Brook, NY, USA
| | - Sachin Jambawalikar
- Department of Radiology, Columbia University Irving Medical Center, 622 W 168th Street, New York, NY, 10032, USA
| |
Collapse
|
76
|
Chhetri A, Li X, Rispoli JV. Current and Emerging Magnetic Resonance-Based Techniques for Breast Cancer. Front Med (Lausanne) 2020; 7:175. [PMID: 32478083 PMCID: PMC7235971 DOI: 10.3389/fmed.2020.00175] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2019] [Accepted: 04/15/2020] [Indexed: 01/10/2023] Open
Abstract
Breast cancer is the most commonly diagnosed cancer among women worldwide, and early detection remains a principal factor for improved patient outcomes and reduced mortality. Clinically, magnetic resonance imaging (MRI) techniques are routinely used in determining benign and malignant tumor phenotypes and for monitoring treatment outcomes. Static MRI techniques enable superior structural contrast between adipose and fibroglandular tissues, while dynamic MRI techniques can elucidate functional characteristics of malignant tumors. The preferred clinical procedure, dynamic contrast-enhanced MRI, illuminates the hypervascularity of breast tumors through a gadolinium-based contrast agent; however, accumulation of the potentially toxic contrast agent remains a major limitation of the technique, propelling MRI research toward finding an alternative, noninvasive method. Three such techniques are magnetic resonance spectroscopy, chemical exchange saturation transfer, and non-contrast diffusion-weighted imaging. These methods shed light on underlying chemical composition, provide snapshots of tissue metabolism, and more pronouncedly characterize microstructural heterogeneity. This review article outlines the present state of clinical MRI for breast cancer and examines several research techniques that demonstrate capacity for clinical translation. Ultimately, multi-parametric MRI, incorporating one or more of these emerging methods, presently holds the best potential to afford improved specificity and deliver excellent accuracy to clinics for the prediction, detection, and monitoring of breast cancer.
Collapse
Affiliation(s)
- Apekshya Chhetri
- Magnetic Resonance Biomedical Engineering Laboratory, Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, United States
- Basic Medical Sciences, College of Veterinary Medicine, Purdue University, West Lafayette, IN, United States
| | - Xin Li
- Magnetic Resonance Biomedical Engineering Laboratory, Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, United States
| | - Joseph V. Rispoli
- Magnetic Resonance Biomedical Engineering Laboratory, Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, United States
- Center for Cancer Research, Purdue University, West Lafayette, IN, United States
- School of Electrical & Computer Engineering, Purdue University, West Lafayette, IN, United States
| |
Collapse
|
77
|
Retson TA, Eghtedari M. Computer-Aided Detection/Diagnosis in Breast Imaging: A Focus on the Evolving FDA Regulations for Using Software as a Medical Device. CURRENT RADIOLOGY REPORTS 2020. [DOI: 10.1007/s40134-020-00350-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
|
78
|
A Deep Learning-Based Robust Change Detection Approach for Very High Resolution Remotely Sensed Images with Multiple Features. REMOTE SENSING 2020. [DOI: 10.3390/rs12091441] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
Change detection in very high-resolution remote sensing imagery has long been an important research issue because of registration errors, method robustness, and monitoring accuracy. This paper proposes a robust and more accurate approach to change detection (CD), which is first applied to a smaller experimental area and then extended to a wider range. A feature space, including object features, Visual Geometry Group (VGG) depth features, and texture features, is constructed. The difference image is obtained by considering the contextual information within a circular neighborhood of scalable radius. This is done to overcome the registration error caused by the rotation and shift of the instantaneous field of view and also to improve the reliability and robustness of the CD. To enhance the robustness of the U-Net model, the training dataset is constructed manually via various operations, such as blurring the image, adding noise, and rotating the image. The trained model is then used to predict the experimental areas, achieving 92.3% accuracy. The proposed method is compared with a Support Vector Machine (SVM) and a Siamese Network; the check error rate dropped to 7.86%, while the Kappa increased to 0.8254. The results revealed that our method outperforms SVM and the Siamese Network.
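The augmentation operations mentioned above (blurring, adding noise, rotating) can be reproduced generically. The sketch below uses SciPy and NumPy with parameter values of our own choosing and is not the paper's pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, rotate

def augment(image, rng=np.random.default_rng(0)):
    """Return simple augmented variants of a 2D image tile: blurred, noisy, rotated."""
    blurred = gaussian_filter(image, sigma=1.5)                       # blur the image
    noisy = image + rng.normal(0.0, 0.05 * image.std(), image.shape)  # add Gaussian noise
    rotated = rotate(image, angle=15, reshape=False, mode="nearest")  # rotate by 15 degrees
    return blurred, noisy, rotated

tile = np.random.rand(128, 128).astype(np.float32)
for variant in augment(tile):
    print(variant.shape)  # each variant keeps the original (128, 128) shape
```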
Collapse
|
79
|
Sanderink WBG, Caballo M, Strobbe LJA, Bult P, Vreuls W, Venderink DJ, Sechopoulos I, Karssemeijer N, Mann RM. Reliability of MRI tumor size measurements for minimal invasive treatment selection in small breast cancers. Eur J Surg Oncol 2020; 46:1463-1470. [PMID: 32536526 DOI: 10.1016/j.ejso.2020.04.038] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2020] [Revised: 04/06/2020] [Accepted: 04/19/2020] [Indexed: 01/18/2023] Open
Abstract
INTRODUCTION Due to the shift towards minimally invasive treatment, accurate tumor size estimation is essential for small breast cancers. The purpose of this study was to determine the reliability of MRI-based tumor size measurements with respect to clinical, histological and radiomics characteristics in small invasive or in situ carcinomas of the breast to select patients for minimally invasive therapy. MATERIALS AND METHODS All consecutive cases of cT1 invasive breast carcinomas that underwent pre-operative MRI, treated in two hospitals between 2005 and 2016, were identified retrospectively from the Dutch cancer registry and cross-correlated with local databases. Concordance between MRI-based measurements and final pathological size was analyzed. The influence of clinical, histological and radiomics characteristics on the accuracy of MRI size measurements was analyzed. RESULTS Analysis included 343 cT1 breast carcinomas in 336 patients (mean age, 55 years; range, 25-81 years). The overall correlation of MRI measurements with pathology was moderately strong (ρ = 0.530, P < 0.001); in 42 cases (12.2%), MRI underestimated the size by more than 5 mm. Underestimation occurs more often in grade 2 and grade 3 disease than in low-grade invasive cancers. In DCIS, the frequency of underestimation is higher than in invasive breast cancer. Unfortunately, none of the patient, imaging or biopsy characteristics appeared predictive of underestimation. CONCLUSION Size measurements of small breast cancers on breast MRI are within 5 mm of the pathological size in 88% of patients. Nevertheless, underestimation cannot be adequately predicted, particularly for grade 2 and grade 3 tumors, which may hinder patient selection for minimally invasive therapy.
Collapse
Affiliation(s)
- W B G Sanderink
- Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
| | - M Caballo
- Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
| | - L J A Strobbe
- Department of Surgical Oncology, Canisius-Wilhelmina Hospital, Nijmegen, the Netherlands
| | - P Bult
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
| | - W Vreuls
- Department of Pathology, Canisius-Wilhelmina Hospital, Nijmegen, the Netherlands
| | - D J Venderink
- Department of Radiology, Canisius-Wilhelmina Hospital, Nijmegen, the Netherlands
| | - I Sechopoulos
- Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
| | - N Karssemeijer
- Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
| | - R M Mann
- Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands.
| |
Collapse
|
80
|
Chan HP, Samala RK, Hadjiiski LM. CAD and AI for breast cancer-recent development and challenges. Br J Radiol 2020; 93:20190580. [PMID: 31742424 PMCID: PMC7362917 DOI: 10.1259/bjr.20190580] [Citation(s) in RCA: 80] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2019] [Revised: 11/13/2019] [Accepted: 11/17/2019] [Indexed: 12/15/2022] Open
Abstract
Computer-aided diagnosis (CAD) has been a popular area of research and development in the past few decades. In CAD, machine learning methods and multidisciplinary knowledge and techniques are used to analyze the patient information, and the results can be used to assist clinicians in their decision-making process. CAD may analyze imaging information alone or in combination with other clinical data. It may provide the analyzed information directly to the clinician or correlate the analyzed results with the likelihood of certain diseases based on statistical modeling of the past cases in the population. CAD systems can be developed to provide decision support for many applications in the patient care processes, such as lesion detection, characterization, cancer staging, treatment planning and response assessment, recurrence and prognosis prediction. The new state-of-the-art machine learning technique, known as deep learning (DL), has revolutionized speech and text recognition as well as computer vision. The potential of a major breakthrough by DL in medical image analysis and other CAD applications for patient care has brought about unprecedented excitement about applying CAD, or artificial intelligence (AI), to medicine in general and to radiology in particular. In this paper, we will provide an overview of the recent developments of CAD using DL in breast imaging and discuss some challenges and practical issues that may impact the advancement of artificial intelligence and its integration into clinical workflow.
Collapse
Affiliation(s)
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, MI, United States
| | - Ravi K. Samala
- Department of Radiology, University of Michigan, Ann Arbor, MI, United States
| | | |
Collapse
|
81
|
Jeong JW, Lee MH, John F, Robinette NL, Amit-Yousif AJ, Barger GR, Mittal S, Juhász C. Feasibility of Multimodal MRI-Based Deep Learning Prediction of High Amino Acid Uptake Regions and Survival in Patients With Glioblastoma. Front Neurol 2020; 10:1305. [PMID: 31920928 PMCID: PMC6928045 DOI: 10.3389/fneur.2019.01305] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2019] [Accepted: 11/26/2019] [Indexed: 12/12/2022] Open
Abstract
Purpose: Amino acid PET has shown high accuracy for the diagnosis and prognostication of malignant gliomas; however, this imaging modality is not widely available in clinical practice. This study explores a novel end-to-end deep learning framework ("U-Net") for its feasibility to detect high amino acid uptake glioblastoma regions (i.e., metabolic tumor volume) using clinical multimodal MRI sequences. Methods: T2, fluid-attenuated inversion recovery (FLAIR), apparent diffusion coefficient map, contrast-enhanced T1, and alpha-[11C]-methyl-L-tryptophan (AMT)-PET images were analyzed in 21 patients with newly-diagnosed glioblastoma. A U-Net system with data augmentation was implemented to deeply learn non-linear voxel-wise relationships between intensities of multimodal MRI as the input and metabolic tumor volume from AMT-PET as the output. The accuracy of the MRI- and PET-based volume measures to predict progression-free survival was tested. Results: In the augmented dataset using all four MRI modalities to investigate the upper limit of U-Net accuracy in the full study cohort, U-Net achieved high accuracy (sensitivity/specificity/positive predictive value [PPV]/negative predictive value [NPV]: 0.85/1.00/0.81/1.00, respectively) to predict PET-defined tumor volumes. Exclusion of FLAIR from the MRI input set had a strong negative effect on sensitivity (0.60). In repeated hold-out validation in randomly selected subjects, specificity and NPV remained high (1.00), but mean sensitivity (0.62) and PPV (0.68) were moderate. AMT-PET-learned MRI tumor volume from this U-Net model within the contrast-enhancing volume predicted 6-month progression-free survival with 0.86/0.63 sensitivity/specificity. Conclusions: These data indicate the feasibility of PET-based deep learning for enhanced pretreatment glioblastoma delineation and prognostication by clinical multimodal MRI.
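Feeding several co-registered MRI contrasts to a network of this kind usually amounts to stacking them along the channel axis. The minimal NumPy sketch below is a hedged illustration only; array names and shapes are our assumptions.

```python
import numpy as np

# Assume four co-registered, intensity-normalized 3D volumes of identical shape.
t2    = np.random.rand(24, 128, 128).astype(np.float32)
flair = np.random.rand(24, 128, 128).astype(np.float32)
adc   = np.random.rand(24, 128, 128).astype(np.float32)
t1_gd = np.random.rand(24, 128, 128).astype(np.float32)

# Stack modalities as channels: (channels, depth, height, width),
# the layout a multichannel 3D U-Net-style network would consume.
multimodal = np.stack([t2, flair, adc, t1_gd], axis=0)
print(multimodal.shape)  # (4, 24, 128, 128)

# Dropping FLAIR (as in the ablation reported above) simply removes one channel.
without_flair = np.stack([t2, adc, t1_gd], axis=0)
print(without_flair.shape)  # (3, 24, 128, 128)
```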
Collapse
Affiliation(s)
- Jeong-Won Jeong
- Department of Pediatrics, Wayne State University School of Medicine and PET Center and Translational Imaging Laboratory, Children's Hospital of Michigan, Detroit, MI, United States.,Department of Neurology, Wayne State University, Detroit, MI, United States.,Translational Neuroscience Program, Wayne State University, Detroit, MI, United States
| | - Min-Hee Lee
- Department of Pediatrics, Wayne State University School of Medicine and PET Center and Translational Imaging Laboratory, Children's Hospital of Michigan, Detroit, MI, United States
| | - Flóra John
- Department of Pediatrics, Wayne State University School of Medicine and PET Center and Translational Imaging Laboratory, Children's Hospital of Michigan, Detroit, MI, United States
| | - Natasha L Robinette
- Department of Oncology, Wayne State University, Detroit, MI, United States.,Karmanos Cancer Institute, Detroit, MI, United States
| | - Alit J Amit-Yousif
- Department of Oncology, Wayne State University, Detroit, MI, United States.,Karmanos Cancer Institute, Detroit, MI, United States
| | - Geoffrey R Barger
- Department of Neurology, Wayne State University, Detroit, MI, United States.,Karmanos Cancer Institute, Detroit, MI, United States
| | - Sandeep Mittal
- Department of Oncology, Wayne State University, Detroit, MI, United States.,Karmanos Cancer Institute, Detroit, MI, United States.,Department of Neurosurgery, Wayne State University, Detroit, MI, United States.,Virginia Tech Carilion School of Medicine and Carilion Clinic, Roanoke, VA, United States
| | - Csaba Juhász
- Department of Pediatrics, Wayne State University School of Medicine and PET Center and Translational Imaging Laboratory, Children's Hospital of Michigan, Detroit, MI, United States.,Department of Neurology, Wayne State University, Detroit, MI, United States.,Translational Neuroscience Program, Wayne State University, Detroit, MI, United States.,Karmanos Cancer Institute, Detroit, MI, United States.,Department of Neurosurgery, Wayne State University, Detroit, MI, United States
| |
Collapse
|
82
|
Xiong X, Linhardt TJ, Liu W, Smith BJ, Sun W, Bauer C, Sunderland JJ, Graham MM, Buatti JM, Beichel RR. A 3D deep convolutional neural network approach for the automated measurement of cerebellum tracer uptake in FDG PET-CT scans. Med Phys 2019; 47:1058-1066. [PMID: 31855287 DOI: 10.1002/mp.13970] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2019] [Revised: 12/05/2019] [Accepted: 12/05/2019] [Indexed: 01/12/2023] Open
Abstract
PURPOSE The purpose of this work was to assess the potential of deep convolutional neural networks in automated measurement of cerebellum tracer uptake in F-18 fluorodeoxyglucose (FDG) positron emission tomography (PET) scans. METHODS Three different three-dimensional (3D) convolutional neural network architectures (U-Net, V-Net, and modified U-Net) were implemented and compared regarding their performance in 3D cerebellum segmentation in FDG PET scans. For network training and testing, 134 PET scans with corresponding manual volumetric segmentations were utilized. For segmentation performance assessment, a fivefold cross-validation was used, and the Dice coefficient as well as signed and unsigned distance errors were calculated. In addition, standardized uptake value (SUV) measurement performance was assessed by means of a statistical comparison to an independent reference standard. Furthermore, a comparison to a previously reported active-shape-model-based approach was performed. RESULTS Out of the three convolutional neural networks investigated, the modified U-Net showed significantly better segmentation performance. It achieved a Dice coefficient of 0.911 ± 0.026, a signed distance error of 0.220 ± 0.103 mm, and an unsigned distance error of 1.048 ± 0.340 mm. When compared to the independent reference standard, SUV measurements produced with the modified U-Net showed no significant error in slope and intercept. The estimated reduction in total SUV measurement error was 95.1%. CONCLUSIONS The presented work demonstrates the potential of deep convolutional neural networks in automated SUV measurement of reference regions. While it focuses on the cerebellum, utilized methods can be generalized to other reference regions like the liver or aortic arch. Future work will focus on combining lesion and reference region analysis into one approach.
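Once a cerebellum mask is available, the reference-region uptake measurement essentially reduces to averaging the SUV voxels inside the mask. The toy NumPy illustration below is a simplification under our own assumptions (variable names and shapes are ours); real pipelines also handle unit conversion and partial-volume effects.

```python
import numpy as np

def mean_suv(suv_volume, mask):
    """Mean standardized uptake value within a binary reference-region mask."""
    mask = mask.astype(bool)
    return float(suv_volume[mask].mean())

suv = np.random.uniform(0.5, 8.0, size=(64, 64, 32))   # toy SUV volume
cerebellum = np.zeros(suv.shape, dtype=np.uint8)
cerebellum[20:40, 20:40, 10:20] = 1                    # toy cerebellum segmentation
print(round(mean_suv(suv, cerebellum), 2))
```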
Collapse
Affiliation(s)
- Xiaofan Xiong
- Department of Biomedical Engineering, The University of Iowa, Iowa City, IA, 52242, USA
| | - Timothy J Linhardt
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, 52242, USA
| | - Weiren Liu
- Roy J. and Lucille A. Carver College of Medicine, The University of Iowa, Iowa City, IA, 52242, USA
| | - Brian J Smith
- Department of Biostatistics, The University of Iowa, Iowa City, IA, 52242, USA
| | - Wenqing Sun
- Department of Radiation Oncology, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
| | - Christian Bauer
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, 52242, USA
| | - John J Sunderland
- Department of Radiology, The University of Iowa, Iowa City, IA, 52242, USA
| | - Michael M Graham
- Department of Radiology, The University of Iowa, Iowa City, IA, 52242, USA
| | - John M Buatti
- Department of Radiation Oncology, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
| | - Reinhard R Beichel
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, 52242, USA
| |
Collapse
|
83
|
Semi-supervised segmentation of lesion from breast ultrasound images with attentional generative adversarial network. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2019; 189:105275. [PMID: 31978805 DOI: 10.1016/j.cmpb.2019.105275] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/04/2019] [Revised: 10/30/2019] [Accepted: 12/11/2019] [Indexed: 02/05/2023]
Abstract
BACKGROUND AND OBJECTIVE Automatic segmentation of breast lesions from ultrasound images is a crucial module for computer-aided diagnostic systems in clinical practice. Large-scale breast ultrasound (BUS) images remain unannotated and need to be effectively explored to improve the segmentation quality. To address this, a semi-supervised segmentation network based on generative adversarial networks (GANs) is proposed. METHODS In this paper, a semi-supervised learning model, denoted BUS-GAN and consisting of a segmentation base network (BUS-S) and an evaluation base network (BUS-E), is proposed. The BUS-S network can densely extract multi-scale features to accommodate the individual variance of breast lesions, thereby enhancing the robustness of segmentation. In addition, the BUS-E network adopts a dual-attentive-fusion block with two independent spatial attention paths that operate on the predicted segmentation map and the corresponding original image to distill geometrical-level and intensity-level information, respectively, so as to enlarge the difference between the lesion region and the background, thus improving the discriminative ability of the BUS-E network. Then, through adversarial training, the BUS-GAN model can achieve higher segmentation quality because the BUS-E network guides the BUS-S network to generate more accurate segmentation maps whose distribution is more similar to the ground truth. RESULTS The counterpart semi-supervised segmentation methods and the proposed BUS-GAN model were trained with 2000 in-house images, including 100 annotated images and 1900 unannotated images, and tested on two different sites, including 800 in-house images and 163 public images. The results validate that the proposed BUS-GAN model achieves higher segmentation accuracy on both the in-house testing dataset and the public dataset than state-of-the-art semi-supervised segmentation methods. CONCLUSIONS The developed BUS-GAN model can effectively utilize unannotated breast ultrasound images to improve segmentation quality. In the future, the proposed segmentation method could be a module for automatic breast ultrasound diagnosis systems, thus relieving the burden of a tedious image annotation process and alleviating the subjective influence of physicians' experience in clinical practice. Our code will be made available at https://github.com/fiy2W/BUS-GAN.
Collapse
|
84
|
Deep learning analysis of breast MRIs for prediction of occult invasive disease in ductal carcinoma in situ. Comput Biol Med 2019; 115:103498. [DOI: 10.1016/j.compbiomed.2019.103498] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2019] [Revised: 09/24/2019] [Accepted: 10/10/2019] [Indexed: 01/06/2023]
|
85
|
Lenchik L, Heacock L, Weaver AA, Boutin RD, Cook TS, Itri J, Filippi CG, Gullapalli RP, Lee J, Zagurovskaya M, Retson T, Godwin K, Nicholson J, Narayana PA. Automated Segmentation of Tissues Using CT and MRI: A Systematic Review. Acad Radiol 2019; 26:1695-1706. [PMID: 31405724 PMCID: PMC6878163 DOI: 10.1016/j.acra.2019.07.006] [Citation(s) in RCA: 62] [Impact Index Per Article: 12.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2019] [Revised: 07/17/2019] [Accepted: 07/17/2019] [Indexed: 01/10/2023]
Abstract
RATIONALE AND OBJECTIVES The automated segmentation of organs and tissues throughout the body using computed tomography and magnetic resonance imaging has been rapidly increasing. Research into many medical conditions has benefited greatly from these approaches by allowing the development of more rapid and reproducible quantitative imaging markers. These markers have been used to help diagnose disease, determine prognosis, select patients for therapy, and follow responses to therapy. Because some of these tools are now transitioning from research environments to clinical practice, it is important for radiologists to become familiar with various methods used for automated segmentation. MATERIALS AND METHODS The Radiology Research Alliance of the Association of University Radiologists convened an Automated Segmentation Task Force to conduct a systematic review of the peer-reviewed literature on this topic. RESULTS The systematic review presented here includes 408 studies and discusses various approaches to automated segmentation using computed tomography and magnetic resonance imaging for neurologic, thoracic, abdominal, musculoskeletal, and breast imaging applications. CONCLUSION These insights should help prepare radiologists to better evaluate automated segmentation tools and apply them not only to research, but eventually to clinical practice.
Collapse
Affiliation(s)
- Leon Lenchik
- Department of Radiology, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157.
| | - Laura Heacock
- Department of Radiology, NYU Langone, New York, New York
| | - Ashley A Weaver
- Department of Biomedical Engineering, Wake Forest School of Medicine, Winston-Salem, North Carolina
| | - Robert D Boutin
- Department of Radiology, University of California Davis School of Medicine, Sacramento, California
| | - Tessa S Cook
- Department of Radiology, University of Pennsylvania, Philadelphia Pennsylvania
| | - Jason Itri
- Department of Radiology, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157
| | - Christopher G Filippi
- Department of Radiology, Donald and Barbara School of Medicine at Hofstra/Northwell, Lenox Hill Hospital, NY, New York
| | - Rao P Gullapalli
- Department of Radiology, University of Maryland School of Medicine, Baltimore, Maryland
| | - James Lee
- Department of Radiology, University of Kentucky, Lexington, Kentucky
| | | | - Tara Retson
- Department of Radiology, University of California San Diego, San Diego, California
| | - Kendra Godwin
- Medical Library, Memorial Sloan Kettering Cancer Center, New York, New York
| | - Joey Nicholson
- NYU Health Sciences Library, NYU School of Medicine, NYU Langone Health, New York, New York
| | - Ponnada A Narayana
- Department of Diagnostic and Interventional Imaging, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, Texas
| |
Collapse
|
86
|
Automatic Breast and Fibroglandular Tissue Segmentation in Breast MRI Using Deep Learning by a Fully-Convolutional Residual Neural Network U-Net. Acad Radiol 2019; 26:1526-1535. [PMID: 30713130 DOI: 10.1016/j.acra.2019.01.012] [Citation(s) in RCA: 54] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2018] [Revised: 01/03/2019] [Accepted: 01/13/2019] [Indexed: 12/17/2022]
Abstract
RATIONALE AND OBJECTIVES Breast segmentation using the U-net architecture was implemented and tested in independent validation datasets to quantify fibroglandular tissue volume in breast MRI. MATERIALS AND METHODS Two datasets were used. The training set was MRI of 286 patients with unilateral breast cancer. The segmentation was done on the contralateral normal breasts. The ground truth for the breast and fibroglandular tissue (FGT) was obtained by using a template-based segmentation method. The U-net deep learning algorithm was implemented to analyze the training set, and the final model was obtained using 10-fold cross-validation. The independent validation set was MRI of 28 normal volunteers acquired using four different MR scanners. Dice Similarity Coefficient (DSC), voxel-based accuracy, and Pearson's correlation were used to evaluate the performance. RESULTS For the 10-fold cross-validation in the initial training set of 286 patients, the DSC range was 0.83-0.98 (mean 0.95 ± 0.02) for breast and 0.73-0.97 (mean 0.91 ± 0.03) for FGT; and the accuracy range was 0.92-0.99 (mean 0.98 ± 0.01) for breast and 0.87-0.99 (mean 0.97 ± 0.01) for FGT. For the entire 224 testing breasts of the 28 normal volunteers in the validation datasets, the mean DSC was 0.86 ± 0.05 for breast, 0.83 ± 0.06 for FGT; and the mean accuracy was 0.94 ± 0.03 for breast and 0.93 ± 0.04 for FGT. The testing results for MRI acquired using four different scanners were comparable. CONCLUSION Deep learning based on the U-net algorithm can achieve accurate segmentation results for the breast and FGT on MRI. It may provide a reliable and efficient method to process a large number of MR images for quantitative analysis of breast density.
Collapse
|
87
|
Quantitative Volumetric K-Means Cluster Segmentation of Fibroglandular Tissue and Skin in Breast MRI. J Digit Imaging 2019; 31:425-434. [PMID: 29047034 DOI: 10.1007/s10278-017-0031-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022] Open
Abstract
Mammographic breast density (MBD) is the most commonly used method to assess the volume of fibroglandular tissue (FGT). However, MRI could provide a clinically feasible and more accurate alternative. There were three aims in this study: (1) to evaluate a clinically feasible method to quantify FGT with MRI, (2) to assess the inter-rater agreement of MRI-based volumetric measurements and (3) to compare them to measurements acquired using digital mammography and 3D tomosynthesis. This retrospective study examined 72 women (mean age 52.4 ± 12.3 years) with 105 disease-free breasts undergoing diagnostic 3.0-T breast MRI and either digital mammography or tomosynthesis. Two observers analyzed MRI images for breast and FGT volumes and FGT-% from T1-weighted images (0.7-, 2.0-, and 4.0-mm-thick slices) using K-means clustering, histogram data, and active contour algorithms. Reference values were obtained with Quantra software. Inter-rater agreement for MRI measurements made with 2-mm-thick slices was excellent: for FGT-%, r = 0.994 (95% CI 0.990-0.997); for breast volume, r = 0.985 (95% CI 0.934-0.994); and for FGT volume, r = 0.979 (95% CI 0.958-0.989). MRI-based FGT-% correlated strongly with MBD in mammography (r = 0.819-0.904, P < 0.001) and moderately to highly with MBD in tomosynthesis (r = 0.630-0.738, P < 0.001). K-means clustering-based assessments of the proportion of the fibroglandular tissue in the breast at MRI are highly reproducible. In the future, quantitative assessment of FGT-% to complement visual estimation of FGT should be performed on a more regular basis as it provides a component which can be incorporated into the individual's breast cancer risk stratification.
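The K-means step described above groups breast voxels by intensity so that a cluster can be read as fibroglandular tissue. The scikit-learn sketch below is only a rough illustration under our own assumptions (two clusters, intensity-only features, and the convention that on T1-weighted images fat is bright and FGT darker); it is not the study's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def fgt_fraction(breast_voxels, n_clusters=2, seed=0):
    """Cluster breast voxel intensities and return the fraction in the darker cluster."""
    x = breast_voxels.reshape(-1, 1).astype(np.float64)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(x)
    # Take the cluster with the lower mean intensity as the FGT cluster.
    fgt_label = int(np.argmin(km.cluster_centers_.ravel()))
    return float(np.mean(km.labels_ == fgt_label))

voxels = np.concatenate([np.random.normal(300, 30, 7000),   # toy "FGT" intensities
                         np.random.normal(900, 60, 3000)])  # toy "fat" intensities
print(round(fgt_fraction(voxels), 2))  # ≈ 0.70 of voxels fall in the darker cluster
```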
Collapse
|
88
|
Stember JN, Chang P, Stember DM, Liu M, Grinband J, Filippi CG, Meyers P, Jambawalikar S. Convolutional Neural Networks for the Detection and Measurement of Cerebral Aneurysms on Magnetic Resonance Angiography. J Digit Imaging 2019; 32:808-815. [PMID: 30511281 PMCID: PMC6737124 DOI: 10.1007/s10278-018-0162-z] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022] Open
Abstract
Aneurysm size correlates with rupture risk and is important for treatment planning. User annotation of aneurysm size is slow and tedious, particularly for large data sets. Geometric shortcuts to compute size have been shown to be inaccurate, particularly for nonstandard aneurysm geometries. We aimed to develop and train a convolutional neural network (CNN) to detect and measure cerebral aneurysms from magnetic resonance angiography (MRA) automatically and without geometric shortcuts. In step 1, a CNN based on the U-net architecture was trained on 250 MRA maximum intensity projection (MIP) images, then applied to a testing set. In step 2, the trained CNN was applied to a separate set of 14 basilar tip aneurysms for size prediction. Step 1: the CNN successfully identified aneurysms in 85/86 (98.8% of) testing set cases, with a receiver operating characteristic (ROC) area-under-the-curve of 0.87. Step 2: automated basilar tip aneurysm linear size differed from radiologist-traced aneurysm size on average by 2.01 mm, or 30%. The CNN aneurysm area differed from radiologist-derived area on average by 8.1 mm2 or 27%. The CNN correctly predicted the area trend for the set of aneurysms. This approach is, to our knowledge, the first to use CNNs to derive aneurysm size. In particular, we demonstrate the clinically pertinent application of computing maximal aneurysm one-dimensional size and two-dimensional area. We propose that future work can apply this to facilitate pre-treatment planning and possibly identify previously missed aneurysms in retrospective assessment.
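Deriving the maximal one-dimensional size and two-dimensional area reported above from a predicted 2D aneurysm mask is straightforward once the pixel spacing is known. The NumPy/SciPy sketch below is only an illustration under our own assumptions (maximum pairwise distance between mask pixels as the linear size, pixel count times pixel area as the area); it is not the authors' measurement code.

```python
import numpy as np
from scipy.spatial.distance import pdist

def aneurysm_size_and_area(mask, pixel_spacing_mm=0.5):
    """Maximal 1D size (mm) and area (mm^2) of a binary 2D aneurysm mask."""
    coords = np.argwhere(mask > 0) * pixel_spacing_mm       # pixel indices -> mm
    max_diameter = pdist(coords).max() if len(coords) > 1 else 0.0
    area = mask.sum() * pixel_spacing_mm ** 2
    return float(max_diameter), float(area)

# Toy mask: a filled disc of radius 8 pixels (~4 mm radius at 0.5 mm spacing).
yy, xx = np.mgrid[:64, :64]
mask = ((yy - 32) ** 2 + (xx - 32) ** 2 <= 8 ** 2).astype(np.uint8)
size_mm, area_mm2 = aneurysm_size_and_area(mask)
print(round(size_mm, 1), round(area_mm2, 1))  # ≈ 8.0 mm diameter, ≈ 50 mm^2 area
```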
Collapse
Affiliation(s)
- Joseph N Stember
- Radiology, Columbia University Medical Center, 622 West 168th Street, PB 1-301, New York, NY, USA.
| | - Peter Chang
- Radiology, University of California Irvine School of Medicine, Irvine, CA, USA
| | | | - Michael Liu
- Radiology, Columbia University Medical Center, 622 West 168th Street, PB 1-301, New York, NY, USA
| | - Jack Grinband
- Radiology, Columbia University Medical Center, 622 West 168th Street, PB 1-301, New York, NY, USA
| | | | - Philip Meyers
- Radiology, Columbia University Medical Center, 622 West 168th Street, PB 1-301, New York, NY, USA
| | - Sachin Jambawalikar
- Radiology, Columbia University Medical Center, 622 West 168th Street, PB 1-301, New York, NY, USA
| |
Collapse
|
89
|
Huang C, Zhou Y, Tan W, Qiu Z, Zhou H, Song Y, Zhao Y, Gao S. Applying deep learning in recognizing the femoral nerve block region on ultrasound images. ANNALS OF TRANSLATIONAL MEDICINE 2019; 7:453. [PMID: 31700889 PMCID: PMC6803209 DOI: 10.21037/atm.2019.08.61] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/15/2019] [Accepted: 07/09/2019] [Indexed: 02/05/2023]
Abstract
BACKGROUND Identifying the nerve block region is important for less experienced operators who are not skilled in ultrasound technology. Therefore, we constructed and shared a dataset of ultrasound images to explore a method to identify the femoral nerve block region. METHODS Ultrasound images of femoral nerve block were retrospectively collected and marked to establish the dataset. The U-Net framework was trained on these data to output segmentations of the region of interest. The performance of the model was evaluated by Intersection over Union (IoU) and accuracy. The predicted masks were then highlighted on the original images to give an intuitive evaluation. Finally, cross-validation was performed on the whole dataset to test the robustness of the results. RESULTS We selected 562 ultrasound images as the whole dataset. The training set IoU was 0.713, the development set IoU was 0.633, and the test set IoU was 0.638. For single images, the median and upper/lower quartiles of IoU were 0.722 (0.647-0.789), 0.653 (0.586-0.703), and 0.644 (0.555-0.735) for the training, development, and test sets, respectively. The segmentation accuracy of the test set was 83.9%. For 10-fold cross-validation, the median and quartiles of the 10-iteration sum IoUs were 0.656 (0.628-0.672); for accuracy, they were 88.4% (82.1-90.7%). CONCLUSIONS We provided a dataset and trained a model for femoral nerve region segmentation with U-Net, obtaining satisfactory performance. This technique may have potential clinical application.
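Intersection over Union, the evaluation metric quoted above, can be computed per image as follows; this is a minimal NumPy version with our own naming, not the authors' code.

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union of two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, target).sum() / union

a = np.zeros((100, 100), dtype=np.uint8); a[20:60, 20:60] = 1
b = np.zeros((100, 100), dtype=np.uint8); b[30:70, 30:70] = 1
print(round(iou(a, b), 3))  # 0.391
```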
Collapse
Affiliation(s)
- Chanyan Huang
- Department of Anesthesia, the First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510080, China
| | - Ying Zhou
- Department of Anesthesia, The Third People’s Hospital of Chengdu, Chengdu 610031, China
| | - Wulin Tan
- Department of Anesthesia, the First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510080, China
| | - Zeting Qiu
- Department of Anesthesia, The First Affiliated Hospital of Shantou University Medical College, Shantou 515041, China
| | - Huaqiang Zhou
- Department of Medical Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou 510080, China
| | - Yiyan Song
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 510080, China
| | - Yue Zhao
- Department of General Surgery, Guangdong Second Provincial General Hospital, Guangzhou 510310, China
| | - Shaowei Gao
- Department of Anesthesia, the First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510080, China
| |
Collapse
|
90
|
Matuszewski DJ, Sintorn IM. Reducing the U-Net size for practical scenarios: Virus recognition in electron microscopy images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2019; 178:31-39. [PMID: 31416558 DOI: 10.1016/j.cmpb.2019.05.026] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/11/2019] [Revised: 05/13/2019] [Accepted: 05/28/2019] [Indexed: 05/10/2023]
Abstract
BACKGROUND AND OBJECTIVE Convolutional neural networks (CNNs) offer performance comparable to that of human experts and at the same time are faster and more consistent in their predictions. However, most of the proposed CNNs require expensive state-of-the-art hardware, which substantially limits their use in practical scenarios and commercial systems, especially for clinical, biomedical and other applications that require on-the-fly analysis. In this paper, we investigate the possibility of making CNNs lighter by parametrizing the architecture and decreasing the number of trainable weights of a popular CNN: U-Net. METHODS In order to demonstrate that comparable results can be achieved with substantially fewer trainable weights than the original U-Net, we used a challenging application of pixel-wise virus classification in Transmission Electron Microscopy images with minimal annotations (i.e., consisting only of the virus particle centers or centerlines). We explored four U-Net hyper-parameters: the number of base feature maps, the feature-map multiplier, the number of encoding-decoding levels, and the number of feature maps in the last two convolutional layers. RESULTS Our experiments led to two main conclusions: 1) the architecture hyper-parameters are pivotal if fewer trainable weights are to be used, and 2) if there is no restriction on the number of trainable weights, using a deeper network generally gives better results. However, training larger networks takes longer, typically requires more data, and such networks are also more prone to overfitting. Our best model achieved an accuracy of 82.2%, which is similar to the original U-Net, while using nearly four times fewer trainable weights (7.8 M in comparison to 31.0 M). We also present a network with < 2 M trainable weights that achieved an accuracy of 76.4%. CONCLUSIONS The proposed U-Net hyper-parameter exploration can be adapted to other CNNs and other applications. It allows comprehensive CNN architecture design aimed at more efficient use of trainable weights. Making networks faster and lighter is crucial for their implementation in many practical applications. In addition, a lighter network ought to be less prone to overfitting and hence generalize better.
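A parametrized U-Net of the kind explored above can expose the base feature-map count, the feature-map multiplier, and the encoding-decoding depth as constructor arguments, so their effect on the number of trainable weights can be measured directly. The 2D PyTorch sketch below is our own simplification (class names, defaults, and block layout are assumptions, and it covers only three of the four hyper-parameters listed in the abstract), not the authors' code.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    """U-Net whose size is controlled by base feature maps, multiplier, and depth."""
    def __init__(self, in_ch=1, n_classes=2, base=16, mult=2, depth=3):
        super().__init__()
        chs = [int(base * mult ** i) for i in range(depth + 1)]
        self.encoders = nn.ModuleList(
            [conv_block(in_ch if i == 0 else chs[i - 1], chs[i]) for i in range(depth)])
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(chs[depth - 1], chs[depth])
        self.ups = nn.ModuleList(
            [nn.ConvTranspose2d(chs[i + 1], chs[i], 2, stride=2) for i in reversed(range(depth))])
        self.decoders = nn.ModuleList(
            [conv_block(chs[i] * 2, chs[i]) for i in reversed(range(depth))])
        self.head = nn.Conv2d(chs[0], n_classes, 1)

    def forward(self, x):
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.ups, self.decoders, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))
        return self.head(x)

def n_trainable(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Compare trainable-weight counts for a few (base, multiplier, depth) settings.
for base, mult, depth in [(64, 2, 4), (16, 2, 3), (8, 2, 3)]:
    print(base, mult, depth, n_trainable(SmallUNet(base=base, mult=mult, depth=depth)))
```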
Collapse
Affiliation(s)
| | - Ida-Maria Sintorn
- Department of Information Technology, Uppsala University, Uppsala, Sweden; Vironova AB, Gävlegatan 22, Stockholm, Sweden.
| |
Collapse
|
91
|
Verburg E, Wolterink JM, Waard SN, Išgum I, Gils CH, Veldhuis WB, Gilhuijs KGA. Knowledge‐based and deep learning‐based automated chest wall segmentation in magnetic resonance images of extremely dense breasts. Med Phys 2019; 46:4405-4416. [DOI: 10.1002/mp.13699] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2019] [Revised: 06/21/2019] [Accepted: 06/26/2019] [Indexed: 11/07/2022] Open
Affiliation(s)
- Erik Verburg
- Image Sciences Institute University Medical Center Utrecht, Utrecht University Utrecht 3584 CX the Netherlands
| | - Jelmer M. Wolterink
- Image Sciences Institute University Medical Center Utrecht, Utrecht University Utrecht 3584 CX the Netherlands
| | - Stephanie N. Waard
- Department of Radiology University Medical Center Utrecht, Utrecht University Utrecht 3584 CX the Netherlands
| | - Ivana Išgum
- Image Sciences Institute University Medical Center Utrecht, Utrecht University Utrecht 3584 CX the Netherlands
| | - Carla H. Gils
- Julius Center for Health Sciences and Primary Care University Medical Center Utrecht, Utrecht University Utrecht 3584 CX the Netherlands
| | - Wouter B. Veldhuis
- Department of Radiology University Medical Center Utrecht, Utrecht University Utrecht 3584 CX the Netherlands
| | - Kenneth G. A. Gilhuijs
- Image Sciences Institute University Medical Center Utrecht, Utrecht University Utrecht 3584 CX the Netherlands
| |
Collapse
|
92
|
Park H, Lee HJ, Kim HG, Ro YM, Shin D, Lee SR, Kim SH, Kong M. Endometrium segmentation on transvaginal ultrasound image using key-point discriminator. Med Phys 2019; 46:3974-3984. [PMID: 31230366 DOI: 10.1002/mp.13677] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2018] [Revised: 06/06/2019] [Accepted: 06/06/2019] [Indexed: 12/25/2022] Open
Abstract
PURPOSE Transvaginal ultrasound imaging provides useful information for diagnosing endometrial pathologies and reproductive health. Endometrium segmentation in transvaginal ultrasound (TVUS) images is very challenging due to ambiguous boundaries and heterogeneous textures. In this study, we developed a new segmentation framework which provides robust segmentation against ambiguous boundaries and heterogeneous textures of TVUS images. METHODS To achieve endometrium segmentation from TVUS images, we propose a new segmentation framework with a discriminator guided by four key points of the endometrium (namely, the endometrium cavity tip, the internal os of the cervix, and the two thickest points between the two basal layers on the anterior and posterior uterine walls). The key points of the endometrium are defined as meaningful points that are related to the characteristics of the endometrial morphology, namely the length and thickness of the endometrium. In the proposed segmentation framework, the key-point discriminator distinguishes a predicted segmentation map from a ground-truth segmentation map according to the key-point maps. Meanwhile, the endometrium segmentation network predicts accurate segmentation results that the key-point discriminator cannot discriminate. In this adversarial way, the key-point information containing endometrial morphology characteristics is effectively incorporated in the segmentation network. The segmentation network can accurately find the segmentation boundary while the key-point discriminator learns the shape distribution of the endometrium. Moreover, the endometrium segmentation can be robust to the heterogeneous texture of the endometrium. We conducted an experiment on a TVUS dataset that contained 3,372 sagittal TVUS images and the corresponding key points. The dataset was collected by three hospitals (Ewha Woman's University School of Medicine, Asan Medical Center, and Yonsei University College of Medicine) with the approval of the three hospitals' Institutional Review Board. For verification, fivefold cross-validation was performed. RESULTS The proposed key-point discriminator improved the performance of the endometrium segmentation, achieving 82.67% for the Dice coefficient and 70.46% for the Jaccard coefficient. In comparison, on the TVUS images, U-Net showed 58.69% for the Dice coefficient and 41.59% for the Jaccard coefficient. The qualitative performance of the endometrium segmentation was also improved over the conventional deep learning segmentation networks. Our experimental results indicated robust segmentation by the proposed method on TVUS images with heterogeneous texture and unclear boundary. In addition, the effect of the key-point discriminator was verified by an ablation study. CONCLUSION We proposed a key-point discriminator to train a segmentation network for robust segmentation of the endometrium with TVUS images. By utilizing the key-point information, the proposed method showed more reliable and accurate segmentation performance and outperformed the conventional segmentation networks both in qualitative and quantitative comparisons.
Collapse
Affiliation(s)
- Hyenok Park
- School of Electrical Engineering, KAIST, Daejeon, 34141, Republic of Korea
| | - Hong Joo Lee
- School of Electrical Engineering, KAIST, Daejeon, 34141, Republic of Korea
| | - Hak Gu Kim
- School of Electrical Engineering, KAIST, Daejeon, 34141, Republic of Korea
| | - Yong Man Ro
- School of Electrical Engineering, KAIST, Daejeon, 34141, Republic of Korea
| | - Dongkuk Shin
- Medical Image Development Group, R&D Center, Samsung Medison, Seongnam, 13530, Republic of Korea
| | - Sa Ra Lee
- Department of Obstetrics and Gynecology, Ewha Womans University School of Medicine, Seoul, 07985, Republic of Korea
| | - Sung Hoon Kim
- Department of Obstetrics and Gynecology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, 05505, Republic of Korea
| | - Mikyung Kong
- Department of Obstetrics and Gynecology, Yonsei University College of Medicine, Seoul, 03722, Republic of Korea
| |
Collapse
|
93
|
Sheth D, Giger ML. Artificial intelligence in the interpretation of breast cancer on MRI. J Magn Reson Imaging 2019; 51:1310-1324. [PMID: 31343790 DOI: 10.1002/jmri.26878] [Citation(s) in RCA: 89] [Impact Index Per Article: 17.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2019] [Accepted: 07/08/2019] [Indexed: 12/13/2022] Open
Abstract
Advances in both imaging and computers have led to the rise in the potential use of artificial intelligence (AI) in various tasks in breast imaging, going beyond the current use in computer-aided detection to include diagnosis, prognosis, response to therapy, and risk assessment. The automated capabilities of AI offer the potential to enhance the diagnostic expertise of clinicians, including accurate demarcation of tumor volume, extraction of characteristic cancer phenotypes, translation of tumoral phenotype features to clinical genotype implications, and risk prediction. The combination of image-specific findings with the underlying genomic, pathologic, and clinical features is becoming of increasing value in breast cancer. The concurrent emergence of newer imaging techniques has provided radiologists with greater diagnostic tools and image datasets to analyze and interpret. Integrating an AI-based workflow within breast imaging enables the integration of multiple data streams into powerful multidisciplinary applications that may lead the path to personalized patient-specific medicine. In this article we describe the goals of AI in breast cancer imaging, in particular MRI, and review the literature as it relates to the current application, potential, and limitations in breast cancer. Level of Evidence: 3 Technical Efficacy: Stage 3 J. Magn. Reson. Imaging 2020;51:1310-1324.
Collapse
Affiliation(s)
- Deepa Sheth
- Department of Radiology, University of Chicago, Chicago, Illinois, USA
| | - Maryellen L Giger
- Department of Radiology, University of Chicago, Chicago, Illinois, USA
| |
Collapse
|
94
|
Zhang L, Mohamed AA, Chai R, Guo Y, Zheng B, Wu S. Automated deep learning method for whole-breast segmentation in diffusion-weighted breast MRI. J Magn Reson Imaging 2019; 51:635-643. [PMID: 31301201 DOI: 10.1002/jmri.26860] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2019] [Accepted: 06/26/2019] [Indexed: 12/29/2022] Open
Abstract
BACKGROUND Diffusion-weighted imaging (DWI) in MRI plays an increasingly important role in diagnostic applications and developing imaging biomarkers. Automated whole-breast segmentation is an important yet challenging step for quantitative breast imaging analysis. While methods have been developed on dynamic contrast-enhanced (DCE) MRI, automatic whole-breast segmentation in breast DWI MRI is still underdeveloped. PURPOSE To develop a deep/transfer learning-based segmentation approach for DWI MRI scans and conduct an extensive study assessment on four imaging datasets from both internal and external sources. STUDY TYPE Retrospective. SUBJECTS In all, 98 patients (144 MRI scans; 11,035 slices) of four different breast MRI datasets from two different institutions. FIELD STRENGTH/SEQUENCES 1.5T scanners with DCE sequence (Dataset 1 and Dataset 2) and DWI sequence. A 3.0T scanner with one external DWI sequence. ASSESSMENT Deep learning models (UNet and SegNet) and transfer learning were used as segmentation approaches. The main DCE Dataset (4,251 2D slices from 39 patients) was used for pre-training and internal validation, and an unseen DCE Dataset (431 2D slices from 20 patients) was used as an independent test dataset for evaluating the pre-trained DCE models. The main DWI Dataset (6,343 2D slices from 75 MRI scans of 29 patients) was used for transfer learning and internal validation, and an unseen DWI Dataset (10 2D slices from 10 patients) was used for independent evaluation to the fine-tuned models for DWI segmentation. Manual segmentations by three radiologists (>10-year experience) were used to establish the ground truth for assessment. The segmentation performance was measured using the Dice Coefficient (DC) for the agreement between manual expert radiologist's segmentation and algorithm-generated segmentation. STATISTICAL TESTS The mean value and standard deviation of the DCs were calculated to compare segmentation results from different deep learning models. RESULTS For the segmentation on the DCE MRI, the average DC of the UNet was 0.92 (cross-validation on the main DCE dataset) and 0.87 (external evaluation on the unseen DCE dataset), both higher than the performance of the SegNet. When segmenting the DWI images by the fine-tuned models, the average DC of the UNet was 0.85 (cross-validation on the main DWI dataset) and 0.72 (external evaluation on the unseen DWI dataset), both outperforming the SegNet on the same datasets. DATA CONCLUSION The internal and independent tests show that the deep/transfer learning models can achieve promising segmentation effects validated on DWI data from different institutions and scanner types. Our proposed approach may provide an automated toolkit to help computer-aided quantitative analyses of breast DWI images. LEVEL OF EVIDENCE 3 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2020;51:635-643.
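Transfer learning from the DCE-trained model to DWI, as described above, essentially means initializing the network from the pretrained weights and continuing training at a reduced learning rate (optionally with early layers frozen). The PyTorch sketch below is hedged: the `UNet2D` stand-in, the commented-out checkpoint path, the frozen-encoder choice, and all hyperparameters are placeholders of our own, not the study's code.

```python
import torch
import torch.nn as nn

class UNet2D(nn.Module):
    """Stand-in for whatever 2D segmentation network was pretrained on DCE data."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True))
        self.decoder = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = UNet2D()
# In the real workflow one would first load the DCE-pretrained weights, e.g.:
# model.load_state_dict(torch.load("dce_pretrained.pt"))  # hypothetical checkpoint

# Optionally freeze early layers and fine-tune the rest at a small learning rate.
for p in model.encoder.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()

def fine_tune_step(dwi_batch, mask_batch):
    """One fine-tuning step on DWI slices with manual breast masks."""
    optimizer.zero_grad()
    loss = criterion(model(dwi_batch), mask_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random tensors standing in for a DWI slice batch and its masks.
loss = fine_tune_step(torch.randn(2, 1, 64, 64), torch.randint(0, 2, (2, 1, 64, 64)).float())
print(round(loss, 4))
```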
Affiliation(s)
- Lei Zhang: Department of Radiology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
- Aly A Mohamed: Department of Radiology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
- Ruimei Chai: Department of Radiology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA; Department of Radiology, First Hospital of China Medical University, Heping District, Shenyang, Liaoning, China
- Yuan Guo: Department of Radiology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA; Department of Radiology, Second Affiliated Hospital of South China University of Technology, Guangzhou First People's Hospital, Guangzhou, China
- Bingjie Zheng: Department of Radiology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA; Department of Radiology, Henan Cancer Hospital, Affiliated Cancer Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Shandong Wu: Departments of Radiology, Biomedical Informatics, Bioengineering, Intelligent Systems, and Clinical and Translational Science, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
95. Reig B, Heacock L, Geras KJ, Moy L. Machine learning in breast MRI. J Magn Reson Imaging 2019; 52:998-1018. [PMID: 31276247] [DOI: 10.1002/jmri.26852]
Abstract
Machine-learning techniques have led to remarkable advances in data extraction and analysis of medical imaging. Applications of machine learning to breast MRI continue to expand rapidly as increasingly accurate 3D breast and lesion segmentation allows the combination of radiologist-level interpretation (eg, BI-RADS lexicon), data from advanced multiparametric imaging techniques, and patient-level data such as genetic risk markers. Advances in breast MRI feature extraction have led to rapid dataset analysis, which offers promise for large pooled multi-institutional data analysis. The objective of this review is to provide an overview of machine-learning and deep-learning techniques for breast MRI, including supervised and unsupervised methods, anatomic breast segmentation, and lesion segmentation. Finally, it explores the role of machine learning, current limitations, and future applications to texture analysis, radiomics, and radiogenomics. Level of Evidence: 3 Technical Efficacy Stage: 2 J. Magn. Reson. Imaging 2020;52:998-1018.
Affiliation(s)
- Beatriu Reig: Department of Radiology, New York University School of Medicine, New York, New York, USA
- Laura Heacock: Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, New York, USA
- Krzysztof J Geras: Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, New York, USA
- Linda Moy: Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, New York, USA; Center for Advanced Imaging Innovation and Research (CAI2R), New York University School of Medicine, New York, New York, USA
96. MRI Breast Tumor Segmentation Using Different Encoder and Decoder CNN Architectures. Computers 2019. [DOI: 10.3390/computers8030052]
Abstract
Breast tumor segmentation in medical images is a decisive step for diagnosis and treatment follow-up. Automating this challenging task helps radiologists to reduce the high manual workload of breast cancer analysis. In this paper, we propose two deep learning approaches to automate breast tumor segmentation in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) by building two fully convolutional neural networks (CNN) based on SegNet and U-Net. The obtained models can handle both detection and segmentation on each single DCE-MRI slice. In this study, we used a dataset of 86 DCE-MRIs of 43 patients with locally advanced breast cancer, acquired before and after two cycles of chemotherapy; a total of 5452 slices were used to train and validate the proposed models. The data were annotated manually by an experienced radiologist. To reduce the training time, a high-performance architecture composed of graphic processing units was used. The models were trained and validated on 85% and 15% of the data, respectively. A mean intersection over union (IoU) of 68.88% was achieved using the SegNet architecture and 76.14% using the U-Net architecture.
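The mean intersection over union (IoU) reported above averages, over slices, the ratio |A∩B| / |A∪B| between the predicted and reference tumor masks. Below is a minimal sketch of that computation, assuming NumPy and made-up toy masks; it is not the authors' evaluation code.

```python
import numpy as np

def iou(pred_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Intersection over union (Jaccard index) for two binary masks."""
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    union = np.logical_or(pred, ref).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, ref).sum() / union

# Mean IoU over a small stack of slices (random toy masks, not DCE-MRI data).
preds = np.random.rand(5, 64, 64) > 0.5
refs = np.random.rand(5, 64, 64) > 0.5
mean_iou = np.mean([iou(p, r) for p, r in zip(preds, refs)])
print(f"mean IoU: {mean_iou:.2%}")
```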
97. Babarenda Gamage TP, Malcolm DTK, Maso Talou G, Mîra A, Doyle A, Nielsen PMF, Nash MP. An automated computational biomechanics workflow for improving breast cancer diagnosis and treatment. Interface Focus 2019; 9:20190034. [PMID: 31263540] [DOI: 10.1098/rsfs.2019.0034]
Abstract
Clinicians face many challenges when diagnosing and treating breast cancer. These challenges include interpreting and co-locating information between different medical imaging modalities that are used to identify tumours and predicting where these tumours move to during different treatment procedures. We have developed a novel automated breast image analysis workflow that integrates state-of-the-art image processing and machine learning techniques, personalized three-dimensional biomechanical modelling and population-based statistical analysis to assist clinicians during breast cancer detection and treatment procedures. This paper summarizes our recent research to address the various technical and implementation challenges associated with creating a fully automated system. The workflow is applied to predict the repositioning of tumours from the prone position, where diagnostic magnetic resonance imaging is performed, to the supine position where treatment procedures are performed. We discuss our recent advances towards addressing challenges in identifying the mechanical properties of the breast and evaluating the accuracy of the biomechanical models. We also describe our progress in implementing a prototype of this workflow in clinical practice. Clinical adoption of these state-of-the-art modelling techniques has significant potential for reducing the number of misdiagnosed breast cancers, while also helping to improve the treatment of patients.
Collapse
Affiliation(s)
| | - Duane T K Malcolm
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
| | - Gonzalo Maso Talou
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
| | - Anna Mîra
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
| | - Anthony Doyle
- Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
| | - Poul M F Nielsen
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand.,Department of Engineering Science, University of Auckland, Auckland, New Zealand
| | - Martyn P Nash
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand.,Department of Engineering Science, University of Auckland, Auckland, New Zealand
| |
98. Wei D, Weinstein S, Hsieh MK, Pantalone L, Kontos D. Three-Dimensional Whole Breast Segmentation in Sagittal and Axial Breast MRI With Dense Depth Field Modeling and Localized Self-Adaptation for Chest-Wall Line Detection. IEEE Trans Biomed Eng 2019; 66:1567-1579. [PMID: 30334748] [PMCID: PMC6684022] [DOI: 10.1109/tbme.2018.2875955]
Abstract
OBJECTIVE Whole breast segmentation is an essential task in quantitative analysis of breast MRI for cancer risk assessment. It is challenging mainly because the chest-wall line (CWL) can be very difficult to locate, owing to its spatially varying appearance (caused by both nature and imaging artifacts) and to neighboring distracting structures. This paper proposes an automatic three-dimensional (3-D) segmentation method, termed DeepSeA, of the whole breast for breast MRI. METHODS DeepSeA distinguishes itself from previous methods in three aspects. First, it reformulates the challenging problem of CWL localization as an equivalent problem that optimizes a smooth depth field, thereby fully utilizing the CWL's 3-D continuity. Second, it employs a localized self-adapting algorithm to adjust to the CWL's spatial variation. Third, it applies equally well to breast MRI data in both sagittal and axial orientations without training. RESULTS A representative set of 99 breast MRI scans with varying imaging protocols was used for evaluation. Experimental results against an expert-outlined reference standard show that DeepSeA can segment breasts accurately: the average Dice similarity coefficient, sensitivity, specificity, and CWL deviation error are 96.04%, 97.27%, 98.77%, and 1.63 mm, respectively. In addition, the configuration of DeepSeA is generalized based on experimental findings for application to broad prospective data. CONCLUSION A fully automatic method, DeepSeA, for whole breast segmentation in sagittal and axial breast MRI is reported. SIGNIFICANCE DeepSeA can facilitate cancer risk assessment with breast MRI.
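Besides the Dice similarity coefficient, the results above quote voxel-wise sensitivity and specificity, which follow directly from the confusion counts between the algorithm mask and the expert-outlined reference. The short sketch below shows that bookkeeping with NumPy on toy masks; it is an illustrative assumption rather than DeepSeA's own evaluation code.

```python
import numpy as np

def sensitivity_specificity(pred_mask: np.ndarray, ref_mask: np.ndarray):
    """Voxel-wise sensitivity (true-positive rate) and specificity (true-negative rate)."""
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    tp = np.logical_and(pred, ref).sum()
    tn = np.logical_and(~pred, ~ref).sum()
    fp = np.logical_and(pred, ~ref).sum()
    fn = np.logical_and(~pred, ref).sum()
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Toy 3-D masks standing in for breast segmentations.
ref = np.zeros((8, 8, 8), dtype=bool)
ref[2:6, 2:6, 2:6] = True
pred = np.roll(ref, shift=1, axis=0)  # slightly misaligned prediction
sens, spec = sensitivity_specificity(pred, ref)
print(f"sensitivity={sens:.3f}, specificity={spec:.3f}")
```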
Affiliation(s)
- Dong Wei: Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Susan Weinstein: Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Meng-Kang Hsieh: Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Lauren Pantalone: Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Despina Kontos: Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
99. Keshavan A, Yeatman JD, Rokem A. Combining Citizen Science and Deep Learning to Amplify Expertise in Neuroimaging. Front Neuroinform 2019; 13:29. [PMID: 31139070] [PMCID: PMC6517786] [DOI: 10.3389/fninf.2019.00029]
Abstract
Big Data promises to advance science through data-driven discovery. However, many standard lab protocols rely on manual examination, which is not feasible for large-scale datasets. Meanwhile, automated approaches lack the accuracy of expert examination. We propose to (1) start with expertly labeled data, (2) amplify labels through web applications that engage citizen scientists, and (3) train machine learning on amplified labels, to emulate the experts. Demonstrating this, we developed a system to quality control brain magnetic resonance images. Expert-labeled data were amplified by citizen scientists through a simple web interface. A deep learning algorithm was then trained to predict data quality, based on citizen scientist labels. Deep learning performed as well as specialized algorithms for quality control (AUC = 0.99). Combining citizen science and deep learning can generalize and scale expert decision making; this is particularly important in disciplines where specialized, automated tools do not yet exist.
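The reported AUC of 0.99 can be read through the Mann-Whitney identity: the area under the ROC curve equals the probability that a randomly chosen positive example receives a higher predicted score than a randomly chosen negative one, with ties counted as one half. The sketch below illustrates that identity on made-up labels and scores; it is a generic NumPy illustration, not the study's evaluation pipeline.

```python
import numpy as np

def auc_from_scores(labels: np.ndarray, scores: np.ndarray) -> float:
    """AUC via the rank-sum (Mann-Whitney) identity for binary labels and real-valued scores."""
    labels = np.asarray(labels).astype(bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    if len(pos) == 0 or len(neg) == 0:
        return float("nan")
    # Count positive/negative pairs where the positive outranks the negative; ties count half.
    diff = pos[:, None] - neg[None, :]
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / (len(pos) * len(neg))

# Toy quality-control labels (1 = pass) and model scores: positives tend to score higher.
y = np.array([1, 1, 1, 0, 0, 0])
s = np.array([0.9, 0.8, 0.6, 0.7, 0.3, 0.1])
print(auc_from_scores(y, s))  # 8/9 ≈ 0.889
```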
Affiliation(s)
- Anisha Keshavan: eScience Institute, University of Washington, Seattle, WA, United States; Institute for Neuroengineering, University of Washington, Seattle, WA, United States; Institute for Learning and Brain Sciences, University of Washington, Seattle, WA, United States; Department of Speech and Hearing, University of Washington, Seattle, WA, United States
- Jason D. Yeatman: Institute for Learning and Brain Sciences, University of Washington, Seattle, WA, United States; Department of Speech and Hearing, University of Washington, Seattle, WA, United States
- Ariel Rokem: eScience Institute, University of Washington, Seattle, WA, United States; Institute for Neuroengineering, University of Washington, Seattle, WA, United States
100.
Abstract
Manual image segmentation is a time-consuming task routinely performed in radiotherapy to identify each patient's targets and anatomical structures. The efficacy and safety of the radiotherapy plan require accurate segmentations, as these regions of interest are generally used to optimize and assess the quality of the plan. However, reports have shown that this process is subject to significant inter- and intraobserver variability. Furthermore, the quality of the radiotherapy treatment, and of subsequent analyses (ie, radiomic and dosimetric), depends on the accuracy of these manual segmentations. Automatic segmentation (or auto-segmentation) of targets and normal tissues is therefore preferable, as it would address these challenges. Previously, auto-segmentation techniques have been clustered into three generations of algorithms, with multi-atlas-based and hybrid techniques (third generation) considered the state of the art. More recently, however, the field of medical image segmentation has seen accelerated growth driven by advances in computer vision, particularly through the application of deep learning algorithms, suggesting we have entered the fourth generation of auto-segmentation algorithm development. In this paper, the authors review traditional (non-deep learning) algorithms particularly relevant for applications in radiotherapy. Concepts from deep learning are introduced, focusing on convolutional neural networks and fully convolutional networks, which are generally used for segmentation tasks. Furthermore, the authors provide a summary of deep learning auto-segmentation applications in radiotherapy reported in the literature. Lastly, considerations for clinical deployment (commissioning and QA) of auto-segmentation software are provided.
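The fourth-generation methods surveyed above center on convolutional and fully convolutional networks (FCNs), which replace the dense classification layers of a CNN with convolutions and upsampling so the network outputs a per-pixel label map at the input resolution. As a rough illustration of that structure only, here is a deliberately tiny FCN sketch assuming PyTorch; it is not any of the published architectures discussed in the review.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    """Minimal fully convolutional network: conv encoder + 1x1 classifier + upsampling."""

    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(32, num_classes, kernel_size=1)  # per-pixel class scores

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(x)          # downsampled feature map
        logits = self.classifier(feats)  # coarse per-pixel logits
        # Upsample back to the input resolution so every pixel receives a label.
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear", align_corners=False)

# One forward pass on a dummy single-channel slice.
model = TinyFCN()
dummy = torch.randn(1, 1, 128, 128)
print(model(dummy).shape)  # torch.Size([1, 2, 128, 128])
```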