1
Rafailidis V, Fang C, Leenknegt B, Ballal K, Deganello A, Sellars ME, Yusuf GT, Huang DY, Sidhu PS. Contrast-Enhanced Ultrasound Quantification Assessment of Focal Fatty Variations in Liver Parenchyma: Challenging the Traditional Qualitative Paradigm of Uniform Enhancement With Adjacent Parenchyma. J Ultrasound Med 2021; 40:1137-1145. [PMID: 32951283 DOI: 10.1002/jum.15494]
Abstract
OBJECTIVES The purpose of this study was to quantify contrast-enhanced ultrasound enhancement of focal fatty sparing (FFS) and focal fatty infiltration (FFI) and compare it with adjacent liver parenchyma. METHODS This was a retrospective observational study yielding 42 cases in the last 4 years. Inclusion criteria were a focal liver lesion, adequate video availability, and an established diagnosis of FFS or FFI based on clinical or imaging follow-up or a second modality. Contrast-enhanced ultrasound examinations were performed with a standard low-mechanical index technique. Commercially available software calculated quantitative parameters for a focal liver lesion and a reference area of liver parenchyma, producing relative indices. RESULTS In total, 42 patients were analyzed (19 male) with a median age of 18 (interquartile range, 42) years and a median lesion diameter of 30 (interquartile range, 16) mm. The cohort included 26 with FFS and 16 with FFI. Subjectively assessed, 27% of FFS and 25% of FFI were hypoenhancing in the arterial phase, and 73% of FFS and 75% of FFI were isoenhancing. In the venous and delayed phases, all lesions were isoenhancing. The peak enhancement (P = .001), wash-in area under the curve (P < .01), wash-in rate (P = .023), and wash-in perfusion index (P = .001) were significantly lower in FFS compared with adjacent parenchyma but not the mean transit time. In the FFI subgroup, no significant difference was detected. Comparing relative parameters, only the wash-in rate was significantly (P = .049) lower in FFS than FFI. The mean follow-up was 2.8 years. CONCLUSIONS Focal fatty sparing shows significantly lower and slower enhancement than the liver parenchyma, whereas FFI enhances identically. Focal fatty sparing had a significantly slower enhancement than FFI.
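The parameters compared in this study derive from contrast time-intensity curves (TICs). A minimal sketch of how peak enhancement, wash-in area under the curve, and wash-in rate can be computed from a TIC; the synthetic curve and the 10%-of-peak arrival definition are illustrative assumptions, and the commercial software's exact definitions may differ:

```python
# Illustrative TIC quantification: peak enhancement, wash-in AUC, wash-in rate.
import numpy as np

def tic_parameters(t, intensity):
    """Derive basic wash-in parameters from a contrast time-intensity curve."""
    peak_idx = int(np.argmax(intensity))
    peak_enhancement = float(intensity[peak_idx])
    # Contrast arrival: first sample reaching 10% of peak enhancement (assumed).
    arrival_idx = int(np.argmax(intensity >= 0.1 * peak_enhancement))
    seg_t = t[arrival_idx:peak_idx + 1]
    seg_i = intensity[arrival_idx:peak_idx + 1]
    # Trapezoidal area under the wash-in segment of the curve.
    wash_in_auc = float(np.sum(np.diff(seg_t) * (seg_i[:-1] + seg_i[1:]) / 2.0))
    rise_time = float(t[peak_idx] - t[arrival_idx])
    wash_in_rate = peak_enhancement / rise_time if rise_time > 0 else float("nan")
    return peak_enhancement, wash_in_auc, wash_in_rate

t = np.linspace(0.0, 60.0, 121)                                # seconds
lesion = 50.0 * (1.0 - np.exp(-t / 8.0)) * np.exp(-t / 90.0)   # synthetic TIC
pe, auc, rate = tic_parameters(t, lesion)
# A relative index divides the lesion parameter by the same parameter
# measured in a reference region of adjacent parenchyma.
```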
Affiliation(s)
- Vasileios Rafailidis
- Department of Radiology, King's College Hospital National Health Service Foundation Trust, London, England
- Cheng Fang
- Department of Radiology, King's College Hospital National Health Service Foundation Trust, London, England
- Benjamin Leenknegt
- Department of Radiology, King's College Hospital National Health Service Foundation Trust, London, England
- Department of Radiology, University Hospital Ghent, Ghent, Belgium
- Khalid Ballal
- Department of Radiology, King's College Hospital National Health Service Foundation Trust, London, England
- Annamaria Deganello
- Department of Radiology, King's College Hospital National Health Service Foundation Trust, London, England
- Maria E Sellars
- Department of Radiology, King's College Hospital National Health Service Foundation Trust, London, England
- Gibran T Yusuf
- Department of Radiology, King's College Hospital National Health Service Foundation Trust, London, England
- Dean Y Huang
- Department of Radiology, King's College Hospital National Health Service Foundation Trust, London, England
- Paul S Sidhu
- Department of Radiology, King's College Hospital National Health Service Foundation Trust, London, England
2
Zhang Y, Liu Y, Cheng H, Li Z, Liu C. Fully multi-target segmentation for breast ultrasound image based on fully convolutional network. Med Biol Eng Comput 2020; 58:2049-2061. [PMID: 32638276 DOI: 10.1007/s11517-020-02200-1]
Abstract
Ultrasound image segmentation plays an important role in computer-aided diagnosis of breast cancer. Existing approaches focus on extracting the tumor tissue to characterize the tumor class, yet other tissues are also helpful as references. In this paper, a multi-target semantic segmentation approach based on a fully convolutional network is proposed for segmenting a breast ultrasound image into different target tissue regions. To handle the uncertain affiliation of pixels at blurry boundaries, the crisp per-pixel outputs of the AlexNet backbone are transformed into a fuzzy decision expression. To improve the representation of image detail, the AlexNet-based fully convolutional network is optimized with a fully connected skip structure. In addition, the network output is refined with a fully connected conditional random field to better characterize the spatial consistency and pixel correlations of the image. Moreover, a data-training optimization method is developed to improve the efficiency of network training. In the experiments, 325 ultrasound images and four error metrics are used to validate segmentation performance. Compared with existing methods, experimental results show that the proposed approach handles breast ultrasound images accurately and reliably.
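The fuzzy decision idea, i.e. replacing hard per-pixel class outputs with graded memberships at blurry boundaries, can be sketched with a per-pixel softmax. This is a stand-in under assumed names and an assumed membership function, not the paper's exact fuzzy layer:

```python
# Turn raw per-pixel class scores into fuzzy memberships via a softmax.
# A boundary pixel then keeps graded membership in several tissue classes
# instead of being forced into a single hard label.
import numpy as np

def fuzzy_membership(scores):
    """scores: (H, W, C) class scores -> (H, W, C) memberships in [0, 1]."""
    shifted = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(shifted)
    return e / e.sum(axis=-1, keepdims=True)

scores = np.random.default_rng(0).normal(size=(4, 4, 3))   # 3 tissue classes
m = fuzzy_membership(scores)
```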
Affiliation(s)
- Yingtao Zhang
- School of Computer Science and Technology, Harbin Institute of Technology, No. 92, Xidazhi Street, Harbin, 150001, China
- Yan Liu
- Department of Mathematics, College of Science, Harbin Institute of Technology, No. 92, Xidazhi Street, Harbin, 150001, China
- Hengda Cheng
- Department of Computer Science, Utah State University, Logan, UT, 84322, USA
- Ziyao Li
- Second Affiliated Hospital of Harbin Medical University, Nangang, Harbin, China
- Cong Liu
- Second Affiliated Hospital of Harbin Medical University, Nangang, Harbin, China
3
Mang A, Bakas S, Subramanian S, Davatzikos C, Biros G. Integrated Biophysical Modeling and Image Analysis: Application to Neuro-Oncology. Annu Rev Biomed Eng 2020; 22:309-341. [PMID: 32501772 PMCID: PMC7520881 DOI: 10.1146/annurev-bioeng-062117-121105]
Abstract
Central nervous system (CNS) tumors come with vastly heterogeneous histologic, molecular, and radiographic landscapes, rendering their precise characterization challenging. The rapidly growing fields of biophysical modeling and radiomics have shown promise in better characterizing the molecular, spatial, and temporal heterogeneity of tumors. Integrative analysis of CNS tumors, including clinically acquired multi-parametric magnetic resonance imaging (mpMRI) and the inverse problem of calibrating biophysical models to mpMRI data, assists in identifying macroscopic quantifiable tumor patterns of invasion and proliferation, potentially leading to improved (a) detection/segmentation of tumor subregions and (b) computer-aided diagnostic/prognostic/predictive modeling. This article presents a summary of (a) biophysical growth modeling and simulation, (b) inverse problems for model calibration, (c) these models' integration with imaging workflows, and (d) their application to clinically relevant studies. We anticipate that such quantitative integrative analysis may even be beneficial in a future revision of the World Health Organization (WHO) classification for CNS tumors, ultimately improving patient survival prospects.
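Much of the growth modeling surveyed here builds on reaction-diffusion models of normalized tumor-cell density u, of the form du/dt = D ∇²u + ρ u(1 − u), where the diffusion coefficient D and proliferation rate ρ are what the inverse problem calibrates to mpMRI. A minimal 1-D explicit finite-difference sketch with illustrative, uncalibrated parameter values:

```python
# 1-D reaction-diffusion (Fisher-Kolmogorov) tumor growth sketch.
# D and rho are illustrative values, not calibrated to any patient data.
import numpy as np

D, rho = 0.1, 0.5              # diffusion (mm^2/day), proliferation (1/day)
dx, dt, steps = 0.5, 0.1, 200  # grid spacing (mm), time step (day), 20 days
x = np.arange(0.0, 50.0, dx)
u = np.exp(-((x - 25.0) ** 2) / 2.0)   # initial normalized tumor-cell density

for _ in range(steps):
    # Periodic-boundary Laplacian; the density is ~0 near the edges,
    # so the wrap-around has no practical effect in this sketch.
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    u = np.clip(u + dt * (D * lap + rho * u * (1.0 - u)), 0.0, 1.0)
# The tumor front spreads outward at roughly 2*sqrt(D*rho) per day while
# the core saturates toward the carrying capacity u = 1.
```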
Affiliation(s)
- Andreas Mang
- Department of Mathematics, University of Houston, Houston, Texas 77204, USA
- Spyridon Bakas
- Department of Mathematics, University of Houston, Houston, Texas 77204, USA
- Shashank Subramanian
- Oden Institute of Computational Engineering and Sciences, The University of Texas at Austin, Austin, Texas 78712, USA
- Christos Davatzikos
- Center for Biomedical Image Computing and Analytics (CBICA); Department of Radiology; and Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- George Biros
- Oden Institute of Computational Engineering and Sciences, The University of Texas at Austin, Austin, Texas 78712, USA
4
Binder ZA, Thorne AH, Bakas S, Wileyto EP, Bilello M, Akbari H, Rathore S, Ha SM, Zhang L, Ferguson CJ, Dahiya S, Bi WL, Reardon DA, Idbaih A, Felsberg J, Hentschel B, Weller M, Bagley SJ, Morrissette JJD, Nasrallah MP, Ma J, Zanca C, Scott AM, Orellana L, Davatzikos C, Furnari FB, O'Rourke DM. Epidermal Growth Factor Receptor Extracellular Domain Mutations in Glioblastoma Present Opportunities for Clinical Imaging and Therapeutic Development. Cancer Cell 2018; 34:163-177.e7. [PMID: 29990498 PMCID: PMC6424337 DOI: 10.1016/j.ccell.2018.06.006]
Abstract
We explored the clinical and pathological impact of epidermal growth factor receptor (EGFR) extracellular domain missense mutations. Retrospective assessment of 260 de novo glioblastoma patients revealed a significant reduction in overall survival of patients having tumors with EGFR mutations at alanine 289 (EGFR A289D/T/V). Quantitative multi-parametric magnetic resonance imaging analyses indicated increased tumor invasion for EGFR A289D/T/V mutants, corroborated in mice bearing intracranial tumors expressing EGFR A289V and dependent on ERK-mediated expression of matrix metalloproteinase-1. EGFR A289V tumor growth was attenuated with an antibody against a cryptic epitope, based on in silico simulation. The findings of this study indicate a highly invasive phenotype associated with the EGFR A289V mutation in glioblastoma, postulating EGFR A289V as a molecular marker for responsiveness to therapy with EGFR-targeting antibodies.
Affiliation(s)
- Zev A Binder
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA; Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- E Paul Wileyto
- Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Michel Bilello
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Hamed Akbari
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Saima Rathore
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Sung Min Ha
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Logan Zhang
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Cole J Ferguson
- Division of Neuropathology, Department of Pathology and Immunology, Washington University School of Medicine, St. Louis, MO 63108, USA
- Sonika Dahiya
- Division of Neuropathology, Department of Pathology and Immunology, Washington University School of Medicine, St. Louis, MO 63108, USA
- Wenya Linda Bi
- Center for Skull Base and Pituitary Surgery, Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical Center, Boston, MA 02115, USA
- David A Reardon
- Center for Neuro-Oncology, Dana-Farber Cancer Institute, Boston, MA 02215, USA
- Ahmed Idbaih
- Sorbonne Université, Inserm, CNRS, UMR S 1127, Institut du Cerveau et de la Moelle épinière, ICM, AP-HP, Hôpitaux Universitaires Pitié Salpêtrière - Charles Foix, Service de Neurologie 2-Mazarin, Paris 75013, France
- Joerg Felsberg
- Institute of Neuropathology, Heinrich Heine University, Medical Faculty, Moorenstrasse 5, Duesseldorf 40225, Germany
- Bettina Hentschel
- Institute for Medical Informatics, Statistics and Epidemiology, University of Leipzig, Medical Faculty, Härtelstrasse 16, Leipzig 04107, Germany
- Michael Weller
- Department of Neurology, University Hospital and University of Zurich, Zurich 8091, Switzerland
- Stephen J Bagley
- Abramson Cancer Center, University of Pennsylvania, Philadelphia, PA 19104, USA
- Jennifer J D Morrissette
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- MacLean P Nasrallah
- Division of Neuropathology, Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Jianhui Ma
- Ludwig Institute for Cancer Research, La Jolla, San Diego 92093, USA
- Ciro Zanca
- Ludwig Institute for Cancer Research, La Jolla, San Diego 92093, USA
- Andrew M Scott
- Olivia Newton-John Cancer Research Institute, La Trobe University, Melbourne, Australia
- Laura Orellana
- Science for Life Laboratory, KTH Royal Institute of Technology, Stockholm, Sweden; Department of Biochemistry and Biophysics, Stockholm University, Stockholm, Sweden
- Christos Davatzikos
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Frank B Furnari
- Ludwig Institute for Cancer Research, La Jolla, San Diego 92093, USA
- Donald M O'Rourke
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA; Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA; Abramson Cancer Center, University of Pennsylvania, Philadelphia, PA 19104, USA
5
Rathore S, Bakas S, Pati S, Akbari H, Kalarot R, Sridharan P, Rozycki M, Bergman M, Tunc B, Verma R, Bilello M, Davatzikos C. Brain Cancer Imaging Phenomics Toolkit (brain-CaPTk): An Interactive Platform for Quantitative Analysis of Glioblastoma. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. BrainLes (Workshop) 2018; 10670:133-145. [PMID: 29733087 DOI: 10.1007/978-3-319-75238-9_12]
Abstract
Quantitative research, especially in the field of radio(geno)mics, has helped us understand fundamental mechanisms of neurologic diseases. Such research is integrally based on advanced algorithms to derive extensive radiomic features and integrate them into diagnostic and predictive models. To exploit the benefit of such complex algorithms, their swift translation into clinical practice is required, currently hindered by their complicated nature. brain-CaPTk is a modular platform, with components spanning across image processing, segmentation, feature extraction, and machine learning, that facilitates such translation, enabling quantitative analyses without requiring substantial computational background. Thus, brain-CaPTk can be seamlessly integrated into the typical quantification, analysis and reporting workflow of a radiologist, underscoring its clinical potential. This paper describes currently available components of brain-CaPTk and example results from their application in glioblastoma.
Affiliation(s)
- Saima Rathore
- Department of Radiology, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Spyridon Bakas
- Department of Radiology, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Sarthak Pati
- Department of Radiology, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Hamed Akbari
- Department of Radiology, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Ratheesh Kalarot
- Department of Radiology, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Patmaa Sridharan
- Department of Radiology, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Martin Rozycki
- Department of Radiology, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Mark Bergman
- Department of Radiology, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Birkan Tunc
- Department of Radiology, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Ragini Verma
- Department of Radiology, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Michel Bilello
- Department of Radiology, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Christos Davatzikos
- Department of Radiology, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
6
Davatzikos C, Rathore S, Bakas S, Pati S, Bergman M, Kalarot R, Sridharan P, Gastounioti A, Jahani N, Cohen E, Akbari H, Tunc B, Doshi J, Parker D, Hsieh M, Sotiras A, Li H, Ou Y, Doot RK, Bilello M, Fan Y, Shinohara RT, Yushkevich P, Verma R, Kontos D. Cancer imaging phenomics toolkit: quantitative imaging analytics for precision diagnostics and predictive modeling of clinical outcome. J Med Imaging (Bellingham) 2018; 5:011018. [PMID: 29340286 PMCID: PMC5764116 DOI: 10.1117/1.jmi.5.1.011018]
Abstract
The growth of multiparametric imaging protocols has paved the way for quantitative imaging phenotypes that predict treatment response and clinical outcome, reflect underlying cancer molecular characteristics and spatiotemporal heterogeneity, and can guide personalized treatment planning. This growth has underlined the need for efficient quantitative analytics to derive high-dimensional imaging signatures of diagnostic and predictive value in this emerging era of integrated precision diagnostics. This paper presents cancer imaging phenomics toolkit (CaPTk), a new and dynamically growing software platform for analysis of radiographic images of cancer, currently focusing on brain, breast, and lung cancer. CaPTk leverages the value of quantitative imaging analytics along with machine learning to derive phenotypic imaging signatures, based on two-level functionality. First, image analysis algorithms are used to extract comprehensive panels of diverse and complementary features, such as multiparametric intensity histogram distributions, texture, shape, kinetics, connectomics, and spatial patterns. At the second level, these quantitative imaging signatures are fed into multivariate machine learning models to produce diagnostic, prognostic, and predictive biomarkers. Results from clinical studies in three areas are shown: (i) computational neuro-oncology of brain gliomas for precision diagnostics, prediction of outcome, and treatment planning; (ii) prediction of treatment response for breast and lung cancer, and (iii) risk assessment for breast cancer.
Affiliation(s)
- Christos Davatzikos
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- Saima Rathore
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- Sarthak Pati
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- Mark Bergman
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- Ratheesh Kalarot
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- Patmaa Sridharan
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- Aimilia Gastounioti
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- Nariman Jahani
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- Eric Cohen
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- Hamed Akbari
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- Birkan Tunc
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- Jimit Doshi
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- Drew Parker
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- Michael Hsieh
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- Aristeidis Sotiras
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- Hongming Li
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- Yangming Ou
- Massachusetts General Hospital, Martinos Center for Biomedical Imaging, Boston, Massachusetts, United States
- Robert K. Doot
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- Michel Bilello
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- Yong Fan
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- Russell T. Shinohara
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- University of Pennsylvania, Perelman School of Medicine, Center for Clinical Epidemiology and Biostatistics (CCEB), Department of Biostatistics, Epidemiology, and Informatics, Philadelphia, Pennsylvania, United States
- Paul Yushkevich
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- Ragini Verma
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- Despina Kontos
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
7
Hann A, Bettac L, Haenle MM, Graeter T, Berger AW, Dreyhaupt J, Schmalstieg D, Zoller WG, Egger J. Algorithm guided outlining of 105 pancreatic cancer liver metastases in Ultrasound. Sci Rep 2017; 7:12779. [PMID: 28986569 PMCID: PMC5630585 DOI: 10.1038/s41598-017-12925-z]
Abstract
Manual segmentation of hepatic metastases in ultrasound images acquired from patients suffering from pancreatic cancer is common practice. Semiautomatic methods that promise assistance in this process are often assessed on only a small number of lesions, by examiners who already know the algorithm. In this work, we present the application of a segmentation algorithm for liver metastases of pancreatic cancer using a set of 105 different images of metastases. Neither the algorithm nor the two examiners had assessed the images before. The examiners first performed a manual segmentation and, after five weeks, a semiautomatic segmentation using the algorithm. They were satisfied with the semiautomatic segmentation results in up to 90% of the cases. Using the algorithm was significantly faster and resulted in a median Dice similarity score of over 80%. Inter-operator variability, estimated with the intraclass correlation coefficient, was good at 0.8. In conclusion, the algorithm facilitates fast and accurate segmentation of liver metastases, comparable to the current gold standard of manual segmentation.
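The reported Dice similarity score is the standard overlap measure between two segmentation masks. A minimal sketch on synthetic masks (not the study's data):

```python
# Dice similarity coefficient between a manual and a semiautomatic mask.
import numpy as np

def dice(a, b):
    """Dice coefficient of two boolean masks (1.0 = identical masks)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

manual = np.zeros((64, 64), dtype=bool)
manual[20:40, 20:40] = True                 # 400-pixel square "lesion"
semi = np.zeros((64, 64), dtype=bool)
semi[22:42, 22:42] = True                   # same size, shifted by 2 pixels
score = dice(manual, semi)                  # 2*324 / (400+400) = 0.81
```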
Affiliation(s)
- Alexander Hann
- Department of Internal Medicine I, Ulm University, Ulm, Germany; Department of Internal Medicine and Gastroenterology, Katharinenhospital, Kriegsbergstraße 60, 70174, Stuttgart, Germany
- Lucas Bettac
- Department of Internal Medicine I, Ulm University, Ulm, Germany
- Mark M Haenle
- Department of Internal Medicine I, Ulm University, Ulm, Germany
- Tilmann Graeter
- Department of Diagnostic and Interventional Radiology, Ulm University, Ulm, Germany
- Jens Dreyhaupt
- Institute of Epidemiology & Medical Biometry, Ulm University, Ulm, Germany
- Dieter Schmalstieg
- Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Wolfram G Zoller
- Department of Internal Medicine and Gastroenterology, Katharinenhospital, Kriegsbergstraße 60, 70174, Stuttgart, Germany
- Jan Egger
- Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria; BioTechMed, Krenngasse 37/1, 8010, Graz, Austria
8
Zeng K, Bakas S, Sotiras A, Akbari H, Rozycki M, Rathore S, Pati S, Davatzikos C. Segmentation of Gliomas in Pre-operative and Post-operative Multimodal Magnetic Resonance Imaging Volumes Based on a Hybrid Generative-Discriminative Framework. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. BrainLes (Workshop) 2016; 10154:184-194. [PMID: 28725878 PMCID: PMC5512606 DOI: 10.1007/978-3-319-55524-9_18]
Abstract
We present an approach for segmenting both low- and high-grade gliomas in multimodal magnetic resonance imaging volumes. The proposed framework is an extension of our previous work [6,7], with an additional component for segmenting post-operative scans. The proposed approach is based on a hybrid generative-discriminative model. Firstly, a generative model based on a joint segmentation-registration framework is used to segment the brain scans into cancerous and healthy tissues. Secondly, a gradient boosting classification scheme is used to refine tumor segmentation based on information from multiple patients. We evaluated our approach in 218 cases during the training phase of the BRAin Tumor Segmentation (BRATS) 2016 challenge and report promising results. During the testing phase, the proposed approach was ranked among the top performing methods, after being additionally evaluated in 191 unseen cases.
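The discriminative refinement step can be sketched as a gradient boosting classifier over per-voxel multimodal intensity features. The feature layout, class means, and use of scikit-learn below are illustrative assumptions, not the actual BRATS pipeline:

```python
# Gradient boosting over per-voxel features to refine tumor/healthy labels.
# Synthetic stand-in data: 4 "modalities" (e.g. T1, T1-Gd, T2, FLAIR).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X_healthy = rng.normal(0.0, 1.0, size=(200, 4))   # healthy-tissue voxels
X_tumor = rng.normal(2.0, 1.0, size=(200, 4))     # tumor voxels
X = np.vstack([X_healthy, X_tumor])
y = np.array([0] * 200 + [1] * 200)               # 0 = healthy, 1 = tumor

clf = GradientBoostingClassifier(n_estimators=50, max_depth=2, random_state=0)
clf.fit(X, y)

# Probabilistic voxel labels for unseen voxels; in the paper's framework the
# refinement pools such information across multiple patients.
proba = clf.predict_proba(rng.normal(2.0, 1.0, size=(10, 4)))[:, 1]
```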
Affiliation(s)
- Ke Zeng
- Section of Biomedical Image Analysis, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Spyridon Bakas
- Section of Biomedical Image Analysis, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Aristeidis Sotiras
- Section of Biomedical Image Analysis, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Hamed Akbari
- Section of Biomedical Image Analysis, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Martin Rozycki
- Section of Biomedical Image Analysis, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Saima Rathore
- Section of Biomedical Image Analysis, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Sarthak Pati
- Section of Biomedical Image Analysis, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Christos Davatzikos
- Section of Biomedical Image Analysis, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
9
Liang X, Lin L, Cao Q, Huang R, Wang Y. Recognizing Focal Liver Lesions in CEUS With Dynamically Trained Latent Structured Models. IEEE Trans Med Imaging 2016; 35:713-27. [PMID: 26513779 DOI: 10.1109/tmi.2015.2492618]
Abstract
This work investigates how to automatically classify Focal Liver Lesions (FLLs) into three specific benign or malignant types in Contrast-Enhanced Ultrasound (CEUS) videos, and aims at providing a computational framework to assist clinicians in FLL diagnosis. The main challenge for this task is that FLLs in CEUS videos often show diverse enhancement patterns at different temporal phases. To handle these diverse patterns, we propose a novel structured model, which detects a number of discriminative Regions of Interest (ROIs) for the FLL and recognizes the FLL based on these ROIs. Our model incorporates an ensemble of local classifiers in the attempt to identify different enhancement patterns of ROIs, and in particular, we make the model reconfigurable by introducing switch variables to adaptively select appropriate classifiers during inference. We formulate the model learning as a non-convex optimization problem, and present a principled optimization method to solve it in a dynamic manner: the latent structures (e.g. the selections of local classifiers, and the sizes and locations of ROIs) are iteratively determined along with the parameter learning. Given the updated model parameters in each step, a data-driven inference is also proposed to efficiently determine the latent structures by using sequential pruning and dynamic programming. In the experiments, we demonstrate superior performance over state-of-the-art approaches. We also release the hundreds of CEUS FLL videos used to quantitatively evaluate this work, which to the best of our knowledge forms the largest dataset in the literature. Please find more information at "http://vision.sysu.edu.cn/projects/fllrecog/".
|
10
|
GLISTRboost: Combining Multimodal MRI Segmentation, Registration, and Biophysical Tumor Growth Modeling with Gradient Boosting Machines for Glioma Segmentation. BRAINLESION : GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES. BRAINLES (WORKSHOP) 2016; 9556:144-155. [PMID: 28725877 DOI: 10.1007/978-3-319-30858-6_1] [Citation(s) in RCA: 42] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
We present an approach for segmenting low- and high-grade gliomas in multimodal magnetic resonance imaging volumes. The proposed approach is based on a hybrid generative-discriminative model. Firstly, a generative approach based on an Expectation-Maximization framework that incorporates a glioma growth model is used to segment the brain scans into tumor and healthy tissue labels. Secondly, a gradient boosting multi-class classification scheme is used to refine tumor labels based on information from multiple patients. Lastly, a probabilistic Bayesian strategy is employed to further refine and finalize the tumor segmentation based on patient-specific intensity statistics from the multiple modalities. We evaluated our approach on 186 cases during the training phase of the BRAin Tumor Segmentation (BRATS) 2015 challenge and report promising results. During the testing phase, the algorithm was additionally evaluated on 53 unseen cases, achieving the best performance among the competing methods.
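The final, Bayesian stage of this cascade can be sketched generically: take the per-voxel class priors produced by the earlier stages, multiply them by a Gaussian likelihood built from patient-specific intensity statistics, renormalize, and relabel by the posterior argmax. This is a minimal per-voxel sketch of that general refinement idea, not GLISTRboost itself; the array shapes and the single-modality Gaussian model are simplifying assumptions.

```python
import numpy as np

def bayesian_refine(prior_probs, intensities, class_means, class_stds):
    """Per-voxel posterior refinement: prior_probs is (n_voxels, n_classes)
    from the generative/boosting stages; intensities is (n_voxels,).
    Multiply each prior by a Gaussian likelihood N(intensity | mu_c, sigma_c)
    using patient-specific class statistics, renormalize, take argmax."""
    diff = intensities[:, None] - class_means[None, :]
    lik = np.exp(-0.5 * (diff / class_stds[None, :]) ** 2) \
          / (class_stds[None, :] * np.sqrt(2.0 * np.pi))
    post = prior_probs * lik
    post /= post.sum(axis=1, keepdims=True)  # renormalize per voxel
    return post.argmax(axis=1)               # refined label map
```

The actual method uses statistics from multiple MRI modalities; extending the likelihood to a multivariate Gaussian per class is the natural generalization.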
|
11
|
GLISTRboost: Combining Multimodal MRI Segmentation, Registration, and Biophysical Tumor Growth Modeling with Gradient Boosting Machines for Glioma Segmentation. BRAINLESION: GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES 2016. [DOI: 10.1007/978-3-319-30858-6_13] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
12
|
Diamant I, Hoogi A, Beaulieu CF, Safdari M, Klang E, Amitai M, Greenspan H, Rubin DL. Improved Patch-Based Automated Liver Lesion Classification by Separate Analysis of the Interior and Boundary Regions. IEEE J Biomed Health Inform 2015; 20:1585-1594. [PMID: 26372661 DOI: 10.1109/jbhi.2015.2478255] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The bag-of-visual-words (BoVW) method with construction of a single dictionary of visual words has been used previously for a variety of classification tasks in medical imaging, including the diagnosis of liver lesions. In this paper, we describe a novel method for automated diagnosis of liver lesions in portal-phase computed tomography (CT) images that improves over single-dictionary BoVW methods by using an image patch representation of the interior and boundary regions of the lesions. Our approach captures characteristics of the lesion margin and of the lesion interior by creating two separate dictionaries for the margin and the interior regions of lesions ("dual dictionaries" of visual words). Based on these dictionaries, visual word histograms are generated for each region of interest within the lesion and its margin. For validation of our approach, we used two datasets from two different institutions, containing CT images of 194 liver lesions (61 cysts, 80 metastases, and 53 hemangiomas). The final diagnosis of each lesion was established by radiologists. The classification accuracy for the images from the two institutions was 99% and 88%, respectively, and 93% for a combined dataset. Our new BoVW approach that uses dual dictionaries shows promising results. We believe the benefits of our approach may generalize to other application domains within radiology.
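The dual-dictionary idea reduces to quantizing interior and boundary patches against separate codebooks and concatenating the two word histograms into one lesion descriptor. The sketch below shows only that generic BoVW step under toy assumptions (nearest-centroid quantization, precomputed dictionaries); it is not the authors' pipeline, which also involves dictionary learning and a downstream classifier.

```python
import numpy as np

def bovw_histogram(patches, dictionary):
    """Quantize each patch feature to its nearest visual word (Euclidean
    distance to dictionary centroids) and return the normalized histogram."""
    d2 = ((patches[:, None, :] - dictionary[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()

def dual_dictionary_features(interior_patches, boundary_patches,
                             interior_dict, boundary_dict):
    """Build one descriptor per lesion by concatenating the interior and
    boundary word histograms -- the 'dual dictionaries' representation."""
    return np.concatenate([bovw_histogram(interior_patches, interior_dict),
                           bovw_histogram(boundary_patches, boundary_dict)])
```

A standard classifier (e.g., an SVM) would then be trained on these concatenated histograms to separate cysts, metastases, and hemangiomas.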
|