1. Characterization of spatially mapped volumetric molecular ultrasound signals for predicting response to anti-vascular therapy. Sci Rep 2023; 13:1686. PMID: 36717575; PMCID: PMC9886917; DOI: 10.1038/s41598-022-26273-0.
Abstract
Quantitative three-dimensional molecular ultrasound is a promising technology for longitudinal imaging applications such as therapy monitoring; its risk profile is favorable compared to positron emission tomography and computed tomography. However, clinical translation of quantitative methods for this technology is limited in that they assume that tumor tissues are homogeneous, and they often depend on contrast-destruction events that can produce unintended bioeffects. Here, we develop quantitative features (henceforth image features) that capture tumor spatial information and that are extracted without contrast destruction. We compare these techniques with the contrast-destruction-derived differential targeted enhancement parameter (dTE) in predicting response to therapy. We found thirty-three reproducible image features that predict response to antiangiogenic therapy without the need for a contrast agent disruption pulse. Multiparametric analysis shows that several of these image features can differentiate treated versus control animals with performance comparable to post-destruction measurements, suggesting that they can potentially replace parameters such as the dTE. The highest-performing pre-destruction image features showed strong linear correlations with conventional dTE parameters, with less overall variance. Thus, our study suggests that image features obtained during the wash-in of the molecular agent, pre-destruction, may replace conventional post-destruction image features or the dTE parameter.
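One common family of spatial image features in quantitative-imaging pipelines like this is gray-level co-occurrence (GLCM) texture. The abstract does not specify the authors' feature set, so the following is an illustrative sketch of two classic Haralick-style GLCM features, not their implementation:

```python
def glcm_features(img, offset=(0, 1)):
    """Build a normalized gray-level co-occurrence matrix for one pixel
    offset and derive two classic texture features from it."""
    dr, dc = offset
    counts = {}
    rows, cols = len(img), len(img[0])
    total = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                pair = (img[r][c], img[r2][c2])
                counts[pair] = counts.get(pair, 0) + 1
                total += 1
    # Contrast weights gray-level differences; homogeneity rewards similarity.
    contrast = sum(n / total * (i - j) ** 2 for (i, j), n in counts.items())
    homogeneity = sum(n / total / (1 + abs(i - j)) for (i, j), n in counts.items())
    return contrast, homogeneity

img = [[0, 0, 1],
       [0, 1, 1],
       [1, 1, 1]]
contrast, homogeneity = glcm_features(img)
```

In practice such features are computed per region of interest (here, the tumor) and fed into the multiparametric analysis.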
2. Messaoudi R, Jaziri F, Mtibaa A, Gargouri F, Vacavant A. Ontology-Driven Approach for Liver MRI Classification and HCC Detection. Int J Pattern Recognit Artif Intell 2021. DOI: 10.1142/s0218001421600077.
Abstract
Reading and interpreting medical images remains among the most challenging tasks in radiology. Building on the achievements of deep convolutional neural networks (CNNs) in medical image classification, various clinical applications have been developed to detect lesions in magnetic resonance imaging (MRI) and computed tomography (CT) scans. When diagnosing liver cancer from dynamic contrast-enhanced MRI (DCE-MRI), radiologists consider three phases of contrast injection: before injection, the arterial phase, and the portal phase. Even though the contrast agent enhances tumoral tissues, diagnosis can still be difficult because of low contrast and pathological tissue surrounding the tumors (cirrhosis). In parallel, ontologies have proven effective in the medical field for solving clinical problems such as providing shareable terminologies, vocabularies, and databases. In this article, we propose a multi-label CNN classification approach based on a parallel preprocessing algorithm, extending our previous work presented at the International Conference on Pattern Recognition and Artificial Intelligence (ICPRAI) 2020. The aim of our approach is to improve the detection of HCC lesions and, thanks to ontologies, to extract additional information about detected tumors such as stage, localization, size, and type. Integrating this information improved the detection process: experiments on real patient cases show that the proposed approach reaches an accuracy of 93% using MRI patches of [Formula: see text] pixels, an improvement over our previous works.
Affiliation(s)
- Rim Messaoudi
- MIRACL Laboratory, University of Sfax, Sfax, Tunisia
- Institut Pascal, Université Clermont Auvergne, UMR6602, CNRS/UCA/SIGMA, 63171 Aubière, France
- Faouzi Jaziri
- Institut Pascal, Université Clermont Auvergne, UMR6602, CNRS/UCA/SIGMA, 63171 Aubière, France
- Achraf Mtibaa
- MIRACL Laboratory, University of Sfax, Sfax, Tunisia
- National School of Electronic and Telecommunications, University of Sfax, Sfax, Tunisia
- Faïez Gargouri
- MIRACL Laboratory, University of Sfax, Sfax, Tunisia
- Higher Institute of Computer Science and Multimedia, University of Sfax, Sfax, Tunisia
- Antoine Vacavant
- Institut Pascal, Université Clermont Auvergne, UMR6602, CNRS/UCA/SIGMA, 63171 Aubière, France
3.
Abstract
With the ongoing advances in imaging techniques, increasing volumes of anatomical and functional data are being generated as part of the routine clinical workflow. This surge of available imaging data coincides with increasing research in quantitative imaging, particularly in the domain of imaging features. An important and novel approach is radiomics, in which high-dimensional image properties are extracted from routine medical images. The fundamental principle of radiomics is the hypothesis that biomedical images contain predictive information, not discernible to the human eye, that can be mined through quantitative image analysis. In this review, a general outline of radiomics and artificial intelligence (AI) is provided, along with prominent use cases in immunotherapy (e.g., response and adverse event prediction) and targeted therapy (i.e., radiogenomics). While the increased use and development of radiomics and AI in immuno-oncology are highly promising, the technology is still in its early stages, and several challenges need to be overcome. Nevertheless, novel AI algorithms are being constructed with an ever-increasing scope of applications.
Affiliation(s)
- Z. Bodalal
- Department of Radiology, Netherlands Cancer Institute, Amsterdam, The Netherlands
- GROW School for Oncology and Developmental Biology, Maastricht University, Maastricht, The Netherlands
- I. Wamelink
- Department of Radiology, Netherlands Cancer Institute, Amsterdam, The Netherlands
- Technical Medicine, University of Twente, Enschede, The Netherlands
- S. Trebeschi
- Department of Radiology, Netherlands Cancer Institute, Amsterdam, The Netherlands
- GROW School for Oncology and Developmental Biology, Maastricht University, Maastricht, The Netherlands
- R.G.H. Beets-Tan
- Department of Radiology, Netherlands Cancer Institute, Amsterdam, The Netherlands
- GROW School for Oncology and Developmental Biology, Maastricht University, Maastricht, The Netherlands
4. Automatic segmentation and classification of liver tumor from CT image using feature difference and SVM based classifier-soft computing technique. Soft Comput 2020. DOI: 10.1007/s00500-020-05094-1.
5. El Kaffas A, Hoogi A, Zhou J, Durot I, Wang H, Rosenberg J, Tseng A, Sagreiya H, Akhbardeh A, Rubin DL, Kamaya A, Hristov D, Willmann JK. Spatial Characterization of Tumor Perfusion Properties from 3D DCE-US Perfusion Maps are Early Predictors of Cancer Treatment Response. Sci Rep 2020; 10:6996. PMID: 32332790; PMCID: PMC7181711; DOI: 10.1038/s41598-020-63810-1.
Abstract
There is a need for noninvasive, repeatable biomarkers to detect early cancer treatment response and spare non-responders unnecessary morbidity and cost. Here, we introduce three-dimensional (3D) dynamic contrast-enhanced ultrasound (DCE-US) perfusion map characterization as an inexpensive, bedside, and longitudinal indicator of tumor perfusion for prediction of vascular changes and therapy response. More specifically, we developed computational tools to generate 3D perfusion maps of tumor blood flow and identified repeatable quantitative features for use in machine-learning models to capture subtle multi-parametric perfusion properties, including heterogeneity. Models were developed and trained on mouse data and tested in a separate mouse cohort, as well as on early validation clinical data from patients receiving therapy for liver metastases. Models showed excellent (ROC-AUC > 0.9) prediction of response in pre-clinical data, as well as in proof-of-concept clinical data. Significant correlations with histological assessments of tumor vasculature were noted (Spearman R > 0.70) in pre-clinical data. Our approach can identify responders based on early perfusion changes, using perfusion properties correlated with gold-standard vascular properties.
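The ROC-AUC threshold quoted above has a simple rank-based interpretation: AUC is the probability that a randomly chosen responder scores higher than a randomly chosen non-responder (the normalized Mann-Whitney U statistic). A minimal sketch with hypothetical model scores:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC as the fraction of (positive, negative) score pairs ranked
    correctly, counting ties as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical classifier scores for responders vs. non-responders.
auc = roc_auc([0.9, 0.8, 0.6], [0.7, 0.3, 0.2])
```

An AUC above 0.9, as reported here, means more than 90% of responder/non-responder pairs are ranked correctly by the model.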
Affiliation(s)
- Ahmed El Kaffas
- Department of Radiology, Molecular Imaging Program at Stanford, School of Medicine, Stanford University, Stanford, CA, USA
- Department of Radiology, Integrative Biomedical Imaging Informatics at Stanford, School of Medicine, Stanford University, Stanford, CA, USA
- Department of Radiology, Body Imaging, Stanford University, Stanford, CA, USA
- Assaf Hoogi
- Department of Radiology, Integrative Biomedical Imaging Informatics at Stanford, School of Medicine, Stanford University, Stanford, CA, USA
- Jianhua Zhou
- Department of Radiology, Molecular Imaging Program at Stanford, School of Medicine, Stanford University, Stanford, CA, USA
- Isabelle Durot
- Department of Radiology, Molecular Imaging Program at Stanford, School of Medicine, Stanford University, Stanford, CA, USA
- Huaijun Wang
- Department of Radiology, Molecular Imaging Program at Stanford, School of Medicine, Stanford University, Stanford, CA, USA
- Jarrett Rosenberg
- Department of Radiology, Molecular Imaging Program at Stanford, School of Medicine, Stanford University, Stanford, CA, USA
- Albert Tseng
- Department of Radiology, Molecular Imaging Program at Stanford, School of Medicine, Stanford University, Stanford, CA, USA
- Hersh Sagreiya
- Department of Radiology, Integrative Biomedical Imaging Informatics at Stanford, School of Medicine, Stanford University, Stanford, CA, USA
- Alireza Akhbardeh
- Department of Radiology, Integrative Biomedical Imaging Informatics at Stanford, School of Medicine, Stanford University, Stanford, CA, USA
- Daniel L Rubin
- Department of Radiology, Integrative Biomedical Imaging Informatics at Stanford, School of Medicine, Stanford University, Stanford, CA, USA
- Aya Kamaya
- Department of Radiology, Molecular Imaging Program at Stanford, School of Medicine, Stanford University, Stanford, CA, USA
- Department of Radiology, Body Imaging, Stanford University, Stanford, CA, USA
- Dimitre Hristov
- Department of Radiation Oncology, School of Medicine, Stanford University, Stanford, CA, USA
- Jürgen K Willmann
- Department of Radiology, Molecular Imaging Program at Stanford, School of Medicine, Stanford University, Stanford, CA, USA
- Department of Radiology, Body Imaging, Stanford University, Stanford, CA, USA
6. Messaoudi R, Mtibaa A, Vacavant A, Gargouri F, Jaziri F. Ontologies for Liver Diseases Representation: A Systematic Literature Review. J Digit Imaging 2019; 33:563-573. PMID: 31848894; DOI: 10.1007/s10278-019-00303-2.
Abstract
Ontology, as a knowledge engineering technique, has been widely used to reduce ambiguity and support information sharing. Ontologies are intended to be clear, comprehensive, and well formatted, and they describe the purposes of various domains through structured, formalized languages. In many areas of research, they have become a significant means of achieving successful and powerful results. In particular, medical ontologies have become efficient tools in medical domains and a relevant approach to processing large volumes of medical data; in some cases they act as decision-support systems, and they help accelerate and assist the diagnosis process. They have also been used to represent human healthcare concepts, and for this reason many research works have applied ontologies to the design and treatment of liver diseases. In this article, we present a general overview of medical ontologies that represent this type of disease. We expose and discuss these works in detail through a complete comparison, and we show their performance in organizing clinical data and extracting results.
Affiliation(s)
- Rim Messaoudi
- MIRACL Laboratory, University of Sfax, Sfax, Tunisia
- Institut Pascal, Université Clermont Auvergne, UMR6602 CNRS/UCA/SIGMA, 63171 Aubière, France
- Achraf Mtibaa
- MIRACL Laboratory, University of Sfax, Sfax, Tunisia
- National School of Electronic and Telecommunications, University of Sfax, Sfax, Tunisia
- Antoine Vacavant
- Institut Pascal, Université Clermont Auvergne, UMR6602 CNRS/UCA/SIGMA, 63171 Aubière, France
- Faïez Gargouri
- MIRACL Laboratory, University of Sfax, Sfax, Tunisia
- Higher Institute of Computer Science and Multimedia, University of Sfax, Sfax, Tunisia
- Faouzi Jaziri
- Institut Pascal, Université Clermont Auvergne, UMR6602 CNRS/UCA/SIGMA, 63171 Aubière, France
7. Loveymi S, Dezfoulian MH, Mansoorizadeh M. Generate Structured Radiology Report from CT Images Using Image Annotation Techniques: Preliminary Results with Liver CT. J Digit Imaging 2019; 33:375-390. PMID: 31728804; DOI: 10.1007/s10278-019-00298-w.
Abstract
A medical annotation system for radiology images extracts clinically useful information from the images, allowing machines to infer useful abstract semantics and become capable of automatic reasoning and diagnostic decision-making. It also supplies human-interpretable explanations for the images. We have implemented a computerized framework that, given a liver CT image, predicts radiological annotations with high accuracy in order to generate a structured report, including very specific high-level semantic content. Each report of a liver CT image relates to different inhomogeneous parts such as the liver, lesion, and vessels. We argue that gathering all kinds of features is not suitable for filling every part of the report: for each group of annotations, one should find and extract the features that yield the best answers for that specific annotation. To this end, the main challenge is discovering the relationships between these specific semantic concepts and their association with low-level image features. Our framework combines a set of state-of-the-art low-level imaging features. In addition, we propose a novel feature, the deep local binary pattern (DLBP), based on LBP, which incorporates multi-slice analysis in CT images and further improves performance. To model our annotation system, two methods were used: a multi-class support vector machine (SVM) and random subspace (RS), an ensemble learning method. This representation leads to a high prediction accuracy of 93.1% despite its relatively low dimension in comparison with existing works.
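The proposed DLBP extends the classic local binary pattern (LBP), which encodes each pixel by thresholding its eight neighbors against the center value. A minimal single-pixel sketch of standard LBP (the paper's multi-slice DLBP extension is not reproduced here):

```python
def lbp_code(patch):
    """8-bit LBP code for the center pixel of a 3x3 patch: each neighbor,
    visited clockwise from the top-left, contributes one bit if it is
    greater than or equal to the center value."""
    c = patch[1][1]
    # Clockwise neighbor order starting at the top-left corner.
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, v in enumerate(neighbors):
        if v >= c:
            code |= 1 << bit
    return code

code = lbp_code([[6, 5, 2],
                 [7, 6, 1],
                 [9, 8, 7]])
```

A histogram of such codes over a region is the usual LBP texture descriptor; DLBP, per the abstract, aggregates this analysis across adjacent CT slices.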
Affiliation(s)
- Samira Loveymi
- Computer Engineering Department, Bu-Ali Sina University, Shahid Fahmideh blvd., Hamedan, Iran
- Mir Hossein Dezfoulian
- Computer Engineering Department, Bu-Ali Sina University, Shahid Fahmideh blvd., Hamedan, Iran
- Muharram Mansoorizadeh
- Computer Engineering Department, Bu-Ali Sina University, Shahid Fahmideh blvd., Hamedan, Iran
8. Automatic Staging of Cancer Tumors Using AIM Image Annotations and Ontologies. J Digit Imaging 2019; 33:287-303. PMID: 31396778; DOI: 10.1007/s10278-019-00251-x.
Abstract
A second opinion about cancer stage is crucial when clinicians assess patient treatment progress. Staging is a process that takes into account the description, location, and characteristics of tumors in a patient, as well as possible metastasis. It should follow standards such as the TNM Classification of Malignant Tumors. However, in clinical practice, this process can be tedious and error prone. To alleviate these problems, we intend to assist radiologists by providing a second opinion in the evaluation of cancer stage. To do this, we developed a TNM classifier based on semantic annotations made by radiologists using the ePAD tool. It transforms the annotations (stored in the AIM format) into AIM4-O ontology instances using axioms and rules, and from these instances it automatically calculates the liver TNM cancer stage. The AIM4-O ontology was developed, as part of this work, to represent annotations in the Web Ontology Language (OWL). A dataset of 51 liver radiology reports with staging data from NCI's Genomic Data Commons (GDC) was used to evaluate our classifier. When compared with the stages attributed by physicians, the classifier's stages had a precision of 85.7% and a recall of 81.0%. In addition, 3 radiologists from 2 different institutions manually reviewed a random sample of 4 of the 51 records and agreed with the tool's staging. AIM4-O was also evaluated, with good results. Our classifier can be integrated into AIM-aware imaging tools, such as ePAD, to offer a second opinion about staging as part of the cancer treatment workflow.
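The stage-grouping step can be pictured as a rule table mapping T, N, and M values to a stage group. The rules below are a deliberately simplified illustration; the actual AJCC liver staging tables and the AIM4-O axioms described in the abstract are more detailed:

```python
def tnm_stage(t, n, m):
    """Toy TNM stage-grouping rules (illustrative only; real AJCC liver
    staging and the AIM4-O ontology rules are finer-grained)."""
    if m == "M1":
        return "IV"   # any distant metastasis dominates
    if n == "N1":
        return "III"  # regional nodal involvement (simplified grouping)
    return {"T1": "I", "T2": "II", "T3": "III", "T4": "III"}[t]

stage = tnm_stage("T1", "N0", "M0")
```

In the paper this lookup is expressed as OWL axioms and rules over annotation instances rather than hard-coded Python, which lets the same knowledge base drive validation and explanation.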
9. Napel S, Mu W, Jardim-Perassi BV, Aerts HJWL, Gillies RJ. Quantitative imaging of cancer in the postgenomic era: Radio(geno)mics, deep learning, and habitats. Cancer 2018; 124:4633-4649. PMID: 30383900; PMCID: PMC6482447; DOI: 10.1002/cncr.31630.
Abstract
Although cancer often is referred to as "a disease of the genes," it is indisputable that the (epi)genetic properties of individual cancer cells are highly variable, even within the same tumor. Hence, preexisting resistant clones will emerge and proliferate after therapeutic selection that targets sensitive clones. Herein, the authors propose that quantitative image analytics, known as "radiomics," can be used to quantify and characterize this heterogeneity. Virtually every patient with cancer is imaged radiologically. Radiomics is predicated on the beliefs that these images reflect underlying pathophysiologies, and that they can be converted into mineable data for improved diagnosis, prognosis, prediction, and therapy monitoring. In the last decade, the radiomics of cancer has grown from a few laboratories to a worldwide enterprise. During this growth, radiomics has established a convention, wherein a large set of annotated image features (1-2000 features) are extracted from segmented regions of interest and used to build classifier models to separate individual patients into their appropriate class (eg, indolent vs aggressive disease). An extension of this conventional radiomics is the application of "deep learning," wherein convolutional neural networks can be used to detect the most informative regions and features without human intervention. A further extension of radiomics involves automatically segmenting informative subregions ("habitats") within tumors, which can be linked to underlying tumor pathophysiology. The goal of the radiomics enterprise is to provide informed decision support for the practice of precision oncology.
Affiliation(s)
- Sandy Napel
- Department of Radiology, Stanford University, Stanford, California
- Wei Mu
- Department of Cancer Physiology, H. Lee Moffitt Cancer Center, Tampa, Florida
- Hugo J. W. L. Aerts
- Dana-Farber Cancer Institute, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Robert J. Gillies
- Department of Cancer Physiology, H. Lee Moffitt Cancer Center, Tampa, Florida
10. Starkov P, Aguilera TA, Golden DI, Shultz DB, Trakul N, Maxim PG, Le QT, Loo BW, Diehn M, Depeursinge A, Rubin DL. The use of texture-based radiomics CT analysis to predict outcomes in early-stage non-small cell lung cancer treated with stereotactic ablative radiotherapy. Br J Radiol 2018; 92:20180228. PMID: 30457885; DOI: 10.1259/bjr.20180228.
Abstract
OBJECTIVE: Stereotactic ablative radiotherapy (SABR) is increasingly used as a non-invasive treatment for early-stage non-small cell lung cancer (NSCLC). A non-invasive method to estimate treatment outcomes in these patients would be valuable, especially since access to tissue specimens is often difficult in these cases. METHODS: We developed a method to predict survival following SABR in NSCLC patients using analysis of quantitative image features on pre-treatment CT images. We built a Cox Lasso model based on two-dimensional Riesz wavelet texture features on CT scans, with the goal of separating patients based on survival. RESULTS: The median log-rank p-value over 1000 cross-validations was 0.030; the model was able to separate patients by predicted survival. When tumor size was added to the model, the p-value lost significance, suggesting that tumor size is not a key feature of the model and that the loss of significance is likely due to the relatively small number of events in the dataset. Furthermore, the model maintained statistical significance when Riesz features were extracted from either the solid component of the tumor or the ground glass opacity (GGO) component alone. The p-value improved, however, when features from the solid and GGO components were combined, demonstrating that important information can be extracted from the entire tumor. CONCLUSIONS: The model predicting patient survival following SABR in NSCLC may be useful in future studies by enabling prediction of survival-based outcomes from radiomics features in CT images. ADVANCES IN KNOWLEDGE: Quantitative image features from NSCLC nodules on CT images significantly separated patient populations based on overall survival (p = 0.04). In the long term, a non-invasive method to estimate treatment outcomes in patients undergoing SABR would be valuable, especially since access to tissue specimens is often difficult in these cases.
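The log-rank p-values reported above compare survival between the separated patient groups. The statistic can be computed directly from the risk sets; a minimal two-group sketch (the cross-validated Cox Lasso pipeline itself is not reproduced):

```python
import math

def logrank_test(times_a, events_a, times_b, events_b):
    """Two-group log-rank test: at each distinct event time, compare the
    events observed in group A with those expected under a shared hazard.
    events are 1 for an observed event, 0 for censoring."""
    times = list(times_a) + list(times_b)
    events = list(events_a) + list(events_b)
    in_a = [True] * len(times_a) + [False] * len(times_b)
    o_minus_e = 0.0
    var = 0.0
    for t in sorted({t for t, e in zip(times, events) if e}):
        at_risk = [i for i in range(len(times)) if times[i] >= t]
        n = len(at_risk)
        n_a = sum(1 for i in at_risk if in_a[i])
        d = sum(1 for i in at_risk if times[i] == t and events[i])
        d_a = sum(1 for i in at_risk if times[i] == t and events[i] and in_a[i])
        o_minus_e += d_a - d * n_a / n
        if n > 1:
            var += d * (n_a / n) * (1 - n_a / n) * (n - d) / (n - 1)
    chi2 = o_minus_e ** 2 / var
    # Survival function of a 1-df chi-square via the complementary error function.
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Identical event patterns in both groups: statistic 0, p = 1.
chi2, p = logrank_test([1, 2, 3], [1, 1, 1], [1, 2, 3], [1, 1, 1])
```

Running such a test once per cross-validation split and taking the median p-value gives the 0.030 figure reported in the abstract.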
Affiliation(s)
- Pierre Starkov
- Department of Signal Processing & Control, Systems, Centre Suisse d'Electronique et de Microtechnique, Neuchâtel, Switzerland
- Todd A Aguilera
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, USA
- Daniel I Golden
- Department of Biomedical Data Science, Radiology, and Medicine (Biomedical Informatics Research), Stanford University School of Medicine, Stanford, CA, USA
- David B Shultz
- Department of Radiation Oncology, Princess Margaret Cancer Centre, Toronto, ON, Canada
- Nicholas Trakul
- Department of Radiation Oncology, Stanford Cancer Institute and Stanford University School of Medicine, Stanford, CA, USA
- Peter G Maxim
- Department of Radiation Oncology, Stanford Cancer Institute and Stanford University School of Medicine, Stanford, CA, USA
- Quynh-Thu Le
- Department of Radiation Oncology, Stanford Cancer Institute and Stanford University School of Medicine, Stanford, CA, USA
- Billy W Loo
- Department of Radiation Oncology, Stanford Cancer Institute and Stanford University School of Medicine, Stanford, CA, USA
- Maximilian Diehn
- Department of Radiation Oncology, Stanford Cancer Institute and Stanford University School of Medicine, Stanford, CA, USA
- Adrien Depeursinge
- Department of Signal Processing & Control, Systems, Biomedical Imaging Group, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland
- Daniel L Rubin
- Department of Biomedical Data Science, Radiology, and Medicine (Biomedical Informatics Research), Stanford University School of Medicine, Stanford, CA, USA
11. Banerjee I, Kurtz C, Devorah AE, Do B, Rubin DL, Beaulieu CF. Relevance feedback for enhancing content based image retrieval and automatic prediction of semantic image features: Application to bone tumor radiographs. J Biomed Inform 2018; 84:123-135. DOI: 10.1016/j.jbi.2018.07.002.
12. Yan K, Wang X, Lu L, Summers RM. DeepLesion: automated mining of large-scale lesion annotations and universal lesion detection with deep learning. J Med Imaging (Bellingham) 2018; 5:036501. PMID: 30035154; DOI: 10.1117/1.jmi.5.3.036501.
Abstract
Extracting, harvesting, and building large-scale annotated radiological image datasets is an important yet challenging problem. Meanwhile, vast amounts of clinical annotations have been collected and stored in hospitals' picture archiving and communication systems (PACS). These annotations, also known as bookmarks in PACS, are usually marked by radiologists during their daily workflow to highlight significant image findings that may serve as references for later studies. We propose to mine and harvest these abundant retrospective medical data to build a large-scale lesion image dataset. Our process is scalable and requires minimal manual annotation effort. We mine bookmarks in our institute to develop DeepLesion, a dataset with 32,735 lesions in 32,120 CT slices from 10,594 studies of 4,427 unique patients. The dataset contains a variety of lesion types, such as lung nodules, liver tumors, and enlarged lymph nodes, and has the potential to be used in various medical imaging applications. Using DeepLesion, we train a universal lesion detector that can find all types of lesions with one unified framework. In this challenging task, the proposed lesion detector achieves a sensitivity of 81.1% with five false positives per image.
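The quoted operating point (sensitivity at five false positives per image) comes from an FROC-style sweep over detection-score thresholds. A minimal sketch with hypothetical detections (the DeepLesion detector itself is a deep network; this only illustrates the metric):

```python
def sensitivity_at_fp_rate(detections, num_images, num_lesions, max_fp_per_image):
    """detections: (score, is_true_positive) pairs, one per candidate box.
    Sweep thresholds from most to least confident and return the best
    sensitivity whose false-positive rate stays within the budget."""
    ranked = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    best = 0.0
    for score, is_tp in ranked:
        if is_tp:
            tp += 1
        else:
            fp += 1
        if fp / num_images <= max_fp_per_image:
            best = max(best, tp / num_lesions)
    return best

# Hypothetical detector output over 2 images containing 4 lesions total.
dets = [(0.9, True), (0.8, False), (0.7, True), (0.6, False), (0.5, True)]
sens = sensitivity_at_fp_rate(dets, num_images=2, num_lesions=4, max_fp_per_image=1.0)
```

Repeating the sweep at several false-positive budgets traces out the full FROC curve.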
Affiliation(s)
- Ke Yan
- National Institutes of Health, Clinical Center, Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Bethesda, Maryland, United States
- Xiaosong Wang
- National Institutes of Health, Clinical Center, Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Bethesda, Maryland, United States
- Le Lu
- National Institutes of Health, Clinical Center, Clinical Image Processing Service, Radiology and Imaging Sciences, Bethesda, Maryland, United States
- Ronald M Summers
- National Institutes of Health, Clinical Center, Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Bethesda, Maryland, United States
13. Ramponi G, Badano A. Method for Adapting the Grayscale Standard Display Function to the Aging Eye. J Digit Imaging 2018; 30:17-25. PMID: 27561752; DOI: 10.1007/s10278-016-9900-2.
Abstract
Perceptual linearity of grayscale images based on a contrast sensitivity model is a widely recognized and used standard for medical imaging visualization. This approach ensures consistency across devices and provides perception of luminance variations in direct relationship to changes in image values. We analyze the effect of aging of the human eye on the percept of linearity and demonstrate that not only does the number of just-noticeable differences (JNDs) diminish for older subjects, but linearity across the range of luminance values is also significantly affected. While the loss of JNDs is inevitable for a fixed luminance range, our findings suggest possible corrective approaches for maintaining linearity.
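The standard being adapted here is the DICOM Grayscale Standard Display Function (GSDF), which maps a JND index j to luminance through a rational polynomial in ln(j). A sketch follows; the coefficients are transcribed from memory of DICOM PS3.14, so verify against the standard before any real use:

```python
import math

def gsdf_luminance(j):
    """DICOM PS3.14 Grayscale Standard Display Function: luminance in cd/m^2
    for JND index j in [1, 1023] (coefficients as published in the standard)."""
    x = math.log(j)
    num = (-1.3011877 + 8.0242636e-2 * x + 1.3646699e-1 * x**2
           - 2.5468404e-2 * x**3 + 1.3635334e-3 * x**4)
    den = (1.0 - 2.5840191e-2 * x - 1.0320229e-1 * x**2
           + 2.8745620e-2 * x**3 - 3.1978977e-3 * x**4 + 1.2992634e-4 * x**5)
    return 10.0 ** (num / den)

low, high = gsdf_luminance(1), gsdf_luminance(1023)  # roughly 0.05 to 4000 cd/m^2
```

Because equal steps in j are (by construction) equally perceptible to a standard observer, a display calibrated to this curve is perceptually linearized; the paper's contribution is adjusting this mapping for the aging eye.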
Affiliation(s)
- Giovanni Ramponi
- Department of Engineering and Architecture, University of Trieste, Trieste, Italy
- Aldo Badano
- Division of Imaging, Diagnostics, and Software Reliability, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, Food and Drug Administration, 10903 New Hampshire Ave, Silver Spring, MD, 20993, USA
14. A fully automatic end-to-end method for content-based image retrieval of CT scans with similar liver lesion annotations. Int J Comput Assist Radiol Surg 2017; 13:165-174. PMID: 29147954; DOI: 10.1007/s11548-017-1687-1.
Abstract
PURPOSE: The goal of medical content-based image retrieval (M-CBIR) is to assist radiologists in the decision-making process by retrieving medical cases similar to a given image. One of the key interests of radiologists is lesions and their annotations, since patient treatment depends on the lesion diagnosis. Therefore, a key feature of M-CBIR systems is the retrieval of scans with the most similar lesion annotations. To be of value, M-CBIR systems should be fully automatic so that they can handle large case databases. METHODS: We present a fully automatic end-to-end method for the retrieval of CT scans with similar liver lesion annotations. The input is a database of abdominal CT scans labeled with liver lesions, a query CT scan, and optionally one radiologist-specified lesion annotation of interest. The output is an ordered list of the database CT scans with the most similar liver lesion annotations. The method starts by automatically segmenting the liver in the scan. It then extracts a histogram-based feature vector from the segmented region, learns the features' relative importance, and ranks the database scans according to the relative importance measure. The main advantages of our method are that it fully automates the end-to-end querying process, that it uses simple and efficient techniques that are scalable to large datasets, and that it produces quality retrieval results using an unannotated CT scan. RESULTS: Our experiments with 9 CT queries on a dataset of 41 volumetric CT scans from the 2014 ImageCLEF Liver Annotation Task yield an average retrieval accuracy (Normalized Discounted Cumulative Gain index) of 0.77 without annotation and 0.84 with annotation. CONCLUSIONS: Fully automatic end-to-end retrieval of similar cases based on image information alone, rather than on disease diagnosis, may help radiologists to better diagnose liver lesions.
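The Normalized Discounted Cumulative Gain (NDCG) index used to score retrieval here rewards placing the most relevant cases at the top of the ranked list. A minimal sketch of the standard metric (the relevance grades below are hypothetical):

```python
import math

def ndcg(relevances):
    """NDCG: DCG of the returned ranking divided by the DCG of the ideal
    (descending-relevance) ranking. relevances[i] is the graded relevance
    of the result at rank i+1; the log2 discount penalizes low ranks."""
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Graded relevance of six retrieved cases, in returned order.
score = ndcg([3, 2, 3, 0, 1, 2])  # ~0.961
```

Averaging this score over the 9 queries gives figures like the 0.77 and 0.84 reported in the results.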
Collapse
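The Normalized Discounted Cumulative Gain (NDCG) index used to score the retrieval results above can be sketched in a few lines. This is a generic illustration of the metric, not the authors' implementation, and the graded relevance values below are hypothetical:

```python
import math

def dcg(relevances):
    # Discounted cumulative gain: each graded relevance is discounted
    # by log2 of its 1-based rank position.
    return sum(rel / math.log2(rank + 1)
               for rank, rel in enumerate(relevances, start=1))

def ndcg(relevances):
    # Normalize by the DCG of the ideal (descending) ordering,
    # so a perfect ranking scores 1.0.
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Hypothetical graded relevances of five retrieved scans, in ranked order.
print(round(ndcg([3, 2, 3, 0, 1]), 3))  # → 0.972
```

Scores such as the 0.77 and 0.84 reported above are averages of this per-query quantity over a set of queries.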
|
15
|
Marvasti NB, Yoruk E, Acar B. Computer-Aided Medical Image Annotation: Preliminary Results With Liver Lesions in CT. IEEE J Biomed Health Inform 2017; 22:1561-1570. [PMID: 29990179 DOI: 10.1109/jbhi.2017.2771211] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
The increasing volume of medical image data, as well as the need for multicenter data consolidation for big data analytics, requires computer-aided medical image annotation (CMIA). The majority of the methods proposed so far do not explicitly exploit interdependencies between annotations. They further limit their annotations to a higher level than diagnostics and/or do not use a standardized lexicon. We propose a radiologist-in-the-loop semi-automatic CMIA system based on a Bayesian tree-structured model linked to RadLex, and present preliminary results with liver lesions in computed tomography images. The proposed system guides the radiologist to input the most critical information in each iteration and uses a network model to update the full annotation online. The effectiveness of this model-based interactive annotation scheme is shown by contrasting domain-blind and domain-aware models. Preliminary results show that on average 7.50 (out of 29) manual annotations are sufficient for 95% accuracy, which is 32.8% less than the manual effort required without guidance. The results also suggest that the domain-aware models perform better than the domain-blind models learned from data. Further analysis with larger datasets and in domains other than liver lesions is needed.
Collapse
|
16
|
Banerjee I, Malladi S, Lee D, Depeursinge A, Telli M, Lipson J, Golden D, Rubin DL. Assessing treatment response in triple-negative breast cancer from quantitative image analysis in perfusion magnetic resonance imaging. J Med Imaging (Bellingham) 2017; 5:011008. [PMID: 29134191 DOI: 10.1117/1.jmi.5.1.011008] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2017] [Accepted: 10/16/2017] [Indexed: 12/31/2022] Open
Abstract
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is sensitive but not specific for determining treatment response in early-stage triple-negative breast cancer (TNBC) patients. We propose an efficient computerized technique for assessing treatment response, specifically the residual tumor (RT) status and pathological complete response (pCR), to neoadjuvant chemotherapy. The proposed approach is based on Riesz wavelet analysis of pharmacokinetic maps derived from noninvasive DCE-MRI scans obtained before and after treatment. We compared the performance of Riesz features with traditional gray-level co-occurrence matrices and with a comprehensive characterization of the lesion that includes a wide range of quantitative features (e.g., shape and boundary). We investigated a set of predictive models ([Formula: see text]) incorporating distinct combinations of quantitative characterizations and statistical models at different time points of the treatment; several of the reported area under the receiver operating characteristic curve (AUC) values are above 0.8. The most efficient models are based on first-order statistics and Riesz wavelets, which predicted RT with an AUC value of 0.85 and pCR with an AUC value of 0.83, improving on results reported in a previous study by [Formula: see text]. Our findings suggest that Riesz texture analysis of TNBC lesions can be considered a potential framework for optimizing TNBC patient care.
Collapse
Affiliation(s)
- Imon Banerjee
- Stanford University, Department of Radiology, Stanford, California, United States
| | - Sadhika Malladi
- Massachusetts Institute of Technology, Department of Mathematics, Cambridge, Massachusetts, United States
| | - Daniela Lee
- Yale University, Department of Ecology and Evolutionary Biology, New Haven, Connecticut, United States
| | - Adrien Depeursinge
- University of Applied Sciences Western Switzerland (HES-SO), Department Institute of Information Systems, Sierre, Switzerland
| | - Melinda Telli
- Stanford University, Department of Medicine (Oncology), Stanford, California, United States
| | - Jafi Lipson
- Stanford University, Department of Radiology, Stanford, California, United States
| | - Daniel Golden
- Arterys Inc., San Francisco, California, United States
| | - Daniel L Rubin
- Stanford University, Department of Radiology, Stanford, California, United States
| |
Collapse
|
17
|
Than JCM, Saba L, Noor NM, Rijal OM, Kassim RM, Yunus A, Suri HS, Porcu M, Suri JS. Lung disease stratification using amalgamation of Riesz and Gabor transforms in machine learning framework. Comput Biol Med 2017; 89:197-211. [PMID: 28825994 DOI: 10.1016/j.compbiomed.2017.08.014] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2017] [Revised: 08/13/2017] [Accepted: 08/13/2017] [Indexed: 10/19/2022]
Abstract
Lung disease risk stratification is important for both diagnosis and treatment planning, particularly in biopsies and radiation therapy. Manual lung disease risk stratification is challenging because of: (a) large lung data sizes, (b) inter- and intra-observer variability in lung delineation and (c) the lack of feature amalgamation in the machine learning paradigm. This paper presents a two-stage CADx cascaded system consisting of: (a) a semi-automated lung delineation subsystem (LDS) for lung region extraction in CT slices followed by (b) morphology-based lung tissue characterization, thereby addressing the above shortcomings. The LDS primarily uses entropy-based region extraction, while the ML-based lung characterization is based on an amalgamation of directional transforms such as Riesz and Gabor along with texture-based features comprising 100 greyscale features, using the K-fold cross-validation protocol (K = 2, 3, 5 and 10). The lung database consisted of 96 patients: 15 normal and 81 diseased. We use five high-resolution computed tomography (HRCT) levels representing different anatomical landmarks where disease is commonly seen. We demonstrate an amalgamated ML stratification accuracy of 99.53%, an increase of 2% over a conventional non-amalgamated ML system that uses Riesz-based features alone with feature selection based on feature strength. The robustness of the system was determined from its reliability and stability, which showed a reliability index of 0.99 and deviations in risk stratification accuracies of less than 5%. Our CADx system shows 10% better performance when compared against the mean of five other prominent studies available in the current literature covering over one decade.
Collapse
Affiliation(s)
- Joel C M Than
- UTM Razak School of Engineering and Advanced Technology, Universiti Teknologi Malaysia, Malaysia.
| | - Luca Saba
- Azienda Ospedaliero Universitaria (A.O.U.) di Cagliari - Polo di Monserrato; Università di Cagliari, S.S. 554, Monserrato, Cagliari, 09045, Italy.
| | - Norliza M Noor
- Department of Engineering, UTM Razak School of Engineering and Advanced Technology, Universiti Teknologi Malaysia, Malaysia.
| | - Omar M Rijal
- Institute of Mathematical Sciences, Faculty of Science, University of Malaya, Kuala Lumpur, Malaysia.
| | - Michele Porcu
- Azienda Ospedaliero Universitaria (A.O.U.) di Cagliari - Polo di Monserrato; Università di Cagliari, S.S. 554, Monserrato, Cagliari, 09045, Italy.
| | - Jasjit S Suri
- Lung Diagnostic Division, Global Biomedical Technologies, Inc., Roseville, CA, USA; AtheroPoint™ LLC, Roseville, CA, USA; Department of Electrical Engineering (Affl.), Idaho State University, ID, USA.
| |
Collapse
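The K-fold cross-validation protocol mentioned above (K = 2, 3, 5 and 10) can be sketched generically as follows. The classifier here is a toy majority-label stand-in, not the paper's amalgamated Riesz/Gabor system, and the data are hypothetical:

```python
import random
from collections import Counter

def kfold_accuracy(features, labels, fit_predict, k=5, seed=0):
    """Mean accuracy over K folds; fit_predict(train_X, train_y, test_X)
    must return predicted labels for test_X."""
    idx = list(range(len(features)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # K disjoint held-out index sets
    accs = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        preds = fit_predict([features[i] for i in train],
                            [labels[i] for i in train],
                            [features[i] for i in fold])
        hits = sum(p == labels[i] for p, i in zip(preds, fold))
        accs.append(hits / len(fold))
    return sum(accs) / len(accs)

# Toy stand-in classifier: always predict the majority training label.
def majority(train_X, train_y, test_X):
    top = Counter(train_y).most_common(1)[0][0]
    return [top] * len(test_X)

X = [[i] for i in range(10)]
y = ["diseased"] * 8 + ["normal"] * 2
print(round(kfold_accuracy(X, y, majority, k=5), 2))  # → 0.8
```

Each sample is held out exactly once, so the reported accuracy estimates generalization to unseen scans rather than fit to the training data.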
|
18
|
Banerjee I, Beaulieu CF, Rubin DL. Computerized Prediction of Radiological Observations Based on Quantitative Feature Analysis: Initial Experience in Liver Lesions. J Digit Imaging 2017. [PMID: 28639186 PMCID: PMC5537098 DOI: 10.1007/s10278-017-9987-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
We propose a computerized framework that, given a region of interest (ROI) circumscribing a lesion, not only predicts radiological observations related to the lesion characteristics with 83.2% average prediction accuracy but also derives explicit association between low-level imaging features and high-level semantic terms by exploiting their statistical correlation. Such direct association between semantic concepts and low-level imaging features can be leveraged to build a powerful annotation system for radiological images that not only allows the computer to infer the semantics from diverse medical images and run automatic reasoning for making diagnostic decision but also provides "human-interpretable explanation" of the system output to facilitate better end user understanding of computer-based diagnostic decisions. The core component of our framework is a radiological observation detection algorithm that maximizes the low-level imaging feature relevancy for each high-level semantic term. On a liver lesion CT dataset, we have implemented our framework by incorporating a large set of state-of-the-art low-level imaging features. Additionally, we included a novel feature that quantifies lesion(s) present within the liver that have a similar appearance as the primary lesion identified by the radiologist. Our framework achieved a high prediction accuracy (83.2%), and the derived association between semantic concepts and imaging features closely correlates with human expectation. The framework has been only tested on liver lesion CT images, but it is capable of being applied to other imaging domains.
Collapse
Affiliation(s)
- Imon Banerjee
- Department of Radiology, Stanford University, Stanford, CA, 94305, USA.
| | - Daniel L Rubin
- Department of Radiology, Stanford University, Stanford, CA, 94305, USA
| |
Collapse
|
19
|
Kim Y, Furlan A, Borhani AA, Bae KT. Computer-aided diagnosis program for classifying the risk of hepatocellular carcinoma on MR images following liver imaging reporting and data system (LI-RADS). J Magn Reson Imaging 2017; 47:710-722. [PMID: 28556283 DOI: 10.1002/jmri.25772] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2016] [Accepted: 05/08/2017] [Indexed: 12/14/2022] Open
Abstract
PURPOSE To develop and evaluate a computer-aided diagnosis (CAD) program for liver lesions on magnetic resonance (MR) images for classifying the risk of hepatocellular carcinoma (HCC) following the Liver Imaging Reporting and Data System (LI-RADS). MATERIALS AND METHODS Liver MR images from 41 patients with hyperenhancing liver lesions categorized as LR 3, 4, and 5 were evaluated by two radiologists. The major LI-RADS features of each index liver lesion were recorded, including size (maximum transverse diameter), presence of hyperenhancement, washout appearance, and capsule appearance. A CAD program was implemented to register MR images at different contrast-enhancement phases, segment liver lesions, extract lesion features, and classify lesions according to LI-RADS. The LI-RADS features quantified by CAD were compared with those assessed by the radiologists using intraclass correlation coefficient (ICC) and receiver operating characteristic (ROC) analyses. The LI-RADS categorization between CAD and the radiologists was evaluated using the weighted Cohen's kappa coefficient. RESULTS The mean and standard deviation of the lesion diameters were 21 ± 11 mm (range, 7-70 mm) by the radiologists and 22 ± 11 mm (range, 8-72 mm) by CAD (ICC, 0.96-0.97). The area under the curve (AUC) for the washout assessment by CAD was 0.79-0.93, with sensitivity 0.69-0.82 and specificity 0.79-1.0. The AUC for the capsule assessment by CAD was 0.79-0.9, with sensitivity 0.75-0.9 and specificity 0.82-0.96. The classifications by the radiologists and CAD coincided in 76-83% of lesions (k = 0.57-0.71), while the radiologists agreed with each other in 78% of lesions (k = 0.59). CONCLUSION We developed a CAD program for liver lesions on MR images and showed substantial agreement in the LI-RADS-based classification of the risk of HCC between the CAD and the radiologists. LEVEL OF EVIDENCE 1 Technical Efficacy: Stage 1 J. Magn. Reson. Imaging 2018;47:710-722.
Collapse
Affiliation(s)
- Youngwoo Kim
- Department of Radiology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
| | - Alessandro Furlan
- Department of Radiology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
| | - Amir A Borhani
- Department of Radiology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
| | - Kyongtae T Bae
- Department of Radiology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
| |
Collapse
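The weighted Cohen's kappa used above to compare CAD and radiologist LI-RADS categories can be sketched as follows. The category assignments are hypothetical, and the linear/quadratic weighting choice is an assumption, since the abstract does not state which scheme was used:

```python
import numpy as np

def weighted_kappa(a, b, n_cat, weights="linear"):
    """Weighted Cohen's kappa for two ordinal raters (e.g. LI-RADS
    categories from CAD vs. a radiologist), categories coded 0..n_cat-1."""
    # Observed confusion matrix, normalized to joint probabilities.
    obs = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):
        obs[i, j] += 1
    obs /= obs.sum()
    # Expected matrix under rater independence (outer product of marginals).
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # Disagreement weights grow with the ordinal distance between categories.
    idx = np.arange(n_cat)
    w = np.abs(idx[:, None] - idx[None, :]).astype(float)
    if weights == "quadratic":
        w = w ** 2
    return 1.0 - (w * obs).sum() / (w * exp).sum()

# Hypothetical LR 3/4/5 assignments (coded 0/1/2) for ten lesions.
cad = [0, 1, 2, 2, 1, 0, 2, 1, 1, 2]
rad = [0, 1, 2, 1, 1, 0, 2, 2, 1, 2]
print(round(weighted_kappa(cad, rad, 3), 3))  # → 0.75
```

Unlike unweighted kappa, near-miss disagreements (LR 4 vs. LR 5) are penalized less than distant ones (LR 3 vs. LR 5), which suits ordinal scales like LI-RADS.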
|
20
|
Chen S, Qin J, Ji X, Lei B, Wang T, Ni D, Cheng JZ. Automatic Scoring of Multiple Semantic Attributes With Multi-Task Feature Leverage: A Study on Pulmonary Nodules in CT Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:802-814. [PMID: 28113928 DOI: 10.1109/tmi.2016.2629462] [Citation(s) in RCA: 54] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
The gap between computational and semantic features is one of the major factors that keeps computer-aided diagnosis (CAD) performance from reaching clinical usage. To bridge this gap, we exploit three multi-task learning (MTL) schemes to leverage heterogeneous computational features derived from deep learning models, a stacked denoising autoencoder (SDAE) and a convolutional neural network (CNN), as well as hand-crafted Haar-like and HoG features, for the description of 9 semantic features of lung nodules in CT images. We posit that there may exist relations among semantic features such as "spiculation", "texture", and "margin" that can be explored with MTL. The Lung Image Database Consortium (LIDC) data is adopted in this study for its rich annotation resources. The LIDC nodules were quantitatively scored w.r.t. the 9 semantic features by 12 radiologists from several institutes in the USA. By treating each semantic feature as an individual task, the MTL schemes select and map the heterogeneous computational features toward the radiologists' ratings, with cross-validation evaluation schemes on 2400 nodules randomly selected from the LIDC dataset. The experimental results suggest that the predicted semantic scores from the three MTL schemes are closer to the radiologists' ratings than the scores from single-task LASSO and elastic net regression methods. The proposed semantic attribute scoring scheme may provide richer quantitative assessments of nodules to better support diagnostic decision and management. Meanwhile, the capability of our method to automatically associate medical image contents with clinical semantic terms may also assist the development of medical search engines.
Collapse
|
21
|
Kumar A, Dyer S, Kim J, Li C, Leong PHW, Fulham M, Feng D. Adapting content-based image retrieval techniques for the semantic annotation of medical images. Comput Med Imaging Graph 2016; 49:37-45. [PMID: 26890880 DOI: 10.1016/j.compmedimag.2016.01.001] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2015] [Revised: 12/10/2015] [Accepted: 01/14/2016] [Indexed: 10/22/2022]
Abstract
The automatic annotation of medical images is a prerequisite for building comprehensive semantic archives that can be used to enhance evidence-based diagnosis, physician education, and biomedical research. Annotation also has important applications in the automatic generation of structured radiology reports. Much of the prior research has focused on annotating images with properties such as the modality of the image or the biological system or body region being imaged. However, many challenges remain for the annotation of high-level semantic content in medical images (e.g., presence of calcification, vessel obstruction, etc.) due to the difficulty of discovering relationships and associations between low-level image features and high-level semantic concepts. This difficulty is further compounded by the lack of labelled training data. In this paper, we present a method for the automatic semantic annotation of medical images that leverages techniques from content-based image retrieval (CBIR). CBIR is a well-established image search technology that uses quantifiable low-level image features to represent the high-level semantic content depicted in those images. Our method extends CBIR techniques to identify or retrieve a collection of labelled images that have similar low-level features and then uses this collection to determine the best high-level semantic annotations. We demonstrate our annotation method using both weighted nearest-neighbour retrieval and multi-class classification to show that our approach is viable regardless of the underlying retrieval strategy. We experimentally compared our method with several well-established baseline techniques (classification and regression) and showed that our method achieved the highest accuracy in the annotation of liver computed tomography (CT) images.
Collapse
Affiliation(s)
- Ashnil Kumar
- School of Information Technologies, University of Sydney, Australia; Institute of Biomedical Engineering and Technology, University of Sydney, Australia.
| | - Shane Dyer
- School of Electrical and Information Engineering, University of Sydney, Australia.
| | - Jinman Kim
- School of Information Technologies, University of Sydney, Australia; Institute of Biomedical Engineering and Technology, University of Sydney, Australia.
| | - Changyang Li
- School of Information Technologies, University of Sydney, Australia; Institute of Biomedical Engineering and Technology, University of Sydney, Australia.
| | - Philip H W Leong
- School of Electrical and Information Engineering, University of Sydney, Australia; Institute of Biomedical Engineering and Technology, University of Sydney, Australia.
| | - Michael Fulham
- School of Information Technologies, University of Sydney, Australia; Institute of Biomedical Engineering and Technology, University of Sydney, Australia; Department of Molecular Imaging, Royal Prince Alfred Hospital, Sydney, Australia; Sydney Medical School, University of Sydney, Australia.
| | - Dagan Feng
- School of Information Technologies, University of Sydney, Australia; Institute of Biomedical Engineering and Technology, University of Sydney, Australia; Med-X Research Institute, Shanghai Jiao Tong University, China.
| |
Collapse
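The weighted nearest-neighbour retrieval strategy described above can be sketched as a distance-weighted vote over retrieved labelled images. The feature vectors, labels, and 1/distance weighting below are illustrative assumptions, not the authors' exact formulation:

```python
from collections import defaultdict

def annotate_by_retrieval(query_feat, labelled_db, k=3):
    """Assign a semantic label to a query image by retrieving the k
    labelled images with the nearest feature vectors and letting each
    neighbour vote with weight 1/(distance + eps)."""
    eps = 1e-9
    # Euclidean distance in the low-level feature space.
    dist = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    neighbours = sorted(labelled_db,
                        key=lambda item: dist(query_feat, item[0]))[:k]
    votes = defaultdict(float)
    for feat, label in neighbours:
        # Closer neighbours contribute larger votes to their label.
        votes[label] += 1.0 / (dist(query_feat, feat) + eps)
    return max(votes, key=votes.get)

# Hypothetical 2-D feature vectors with semantic annotations.
db = [([0.1, 0.2], "cyst"), ([0.15, 0.22], "cyst"),
      ([0.8, 0.9], "metastasis"), ([0.75, 0.85], "metastasis")]
print(annotate_by_retrieval([0.12, 0.21], db, k=3))  # → cyst
```

The same retrieved collection could instead feed a multi-class classifier, which is the alternative strategy the abstract compares against.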
|
22
|
Transforms and Operators for Directional Bioimage Analysis: A Survey. FOCUS ON BIO-IMAGE INFORMATICS 2016; 219:69-93. [DOI: 10.1007/978-3-319-28549-8_3] [Citation(s) in RCA: 240] [Impact Index Per Article: 30.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/04/2022]
|
23
|
Alahmer H, Ahmed A. Computer-aided Classification of Liver Lesions from CT Images Based on Multiple ROI. Procedia Comput Sci 2016. [DOI: 10.1016/j.procs.2016.07.027] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
24
|
Abstract
Content-based medical image retrieval (CBMIR) is an active research area for disease diagnosis and treatment, but it can be problematic given the small visual variations between anatomical structures. We propose a retrieval method based on a bag-of-visual-words (BoVW) model that identifies discriminative characteristics between different medical images with a Pruned Dictionary based on Latent Semantic Topic description; we refer to this as PD-LST retrieval. Our method has two main components. First, we calculate a topic-word significance value for each visual word given a certain latent topic to evaluate how strongly the word is connected to this latent topic. The latent topics are learnt from the relationship between the images and words, and are employed to bridge the gap between low-level visual features and high-level semantics. These latent topics describe the images and words semantically and can thus facilitate more meaningful comparisons between the words. Second, we compute an overall-word significance value to evaluate the significance of a visual word within the entire dictionary. We designed an iterative ranking method to measure overall-word significance by considering the relationship between all latent topics and words. The words with higher values are considered meaningful, with more significant discriminative power in differentiating medical images. We evaluated our method on two public medical imaging datasets, and it showed improved retrieval accuracy and efficiency.
Collapse
|
25
|
Zhang F, Song Y, Cai W, Liu S, Liu S, Pujol S, Kikinis R, Xia Y, Fulham MJ, Feng DD, Alzheimer's Disease Neuroimaging Initiative. Pairwise Latent Semantic Association for Similarity Computation in Medical Imaging. IEEE Trans Biomed Eng 2015; 63:1058-1069. [PMID: 26372117 DOI: 10.1109/tbme.2015.2478028] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Retrieving medical images that present similar diseases is an active research area for diagnostics and therapy. However, it can be problematic given the visual variations between anatomical structures. In this paper, we propose a new feature extraction method for similarity computation in medical imaging. Instead of the low-level visual appearance, we design a CCA-PairLDA feature representation method to capture the similarity between images with high-level semantics. First, we extract the PairLDA topics to represent an image as a mixture of latent semantic topics in an image pair context. Second, we generate a CCA-correlation model to represent the semantic association between an image pair for similarity computation. While PairLDA adjusts the latent topics for all image pairs, CCA-correlation helps to associate an individual image pair. In this way, the semantic descriptions of an image pair are closely correlated, and naturally correspond to similarity computation between images. We evaluated our method on two public medical imaging datasets for image retrieval and showed improved performance.
Collapse
Affiliation(s)
- Fan Zhang
- Biomedical and Multimedia Information Technology Research Group, School of Information Technologies, University of Sydney, Sydney, N.S.W., Australia
| | - Yang Song
- Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, University of Sydney
| | - Weidong Cai
- Biomedical and Multimedia Information Technology Research Group, School of Information Technologies, University of Sydney
| | - Sidong Liu
- Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, University of Sydney
| | - Siqi Liu
- Biomedical and Multimedia Information Technology Research Group, School of Information Technologies, University of Sydney
| | - Sonia Pujol
- Surgical Planning Lab, Brigham & Women's Hospital, Harvard Medical School
| | - Ron Kikinis
- Surgical Planning Lab, Brigham & Women's Hospital, Harvard Medical School
| | - Yong Xia
- Shaanxi Key Lab of Speech and Image Information Processing, School of Computer Science and Technology, Northwestern Polytechnical University
| | - Michael J Fulham
- Department of PET and Nuclear Medicine, Royal Prince Alfred Hospital
| | - David Dagan Feng
- BMIT Research Group, School of Information Technologies, University of Sydney
| |
Collapse
|
26
|
Alper Selver M. Exploring Brushlet Based 3D Textures in Transfer Function Specification for Direct Volume Rendering of Abdominal Organs. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2015; 21:174-187. [PMID: 26357028 DOI: 10.1109/tvcg.2014.2359462] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Intuitive and discriminative domains for transfer function (TF) specification in direct volume rendering are an important research area for producing informative and useful 3D images. One of the emerging branches of this research is texture-based transfer functions. Although several studies in two-, three-, and four-dimensional image processing show the importance of using texture information, these studies generally focus on segmentation. However, TFs can also be built effectively using appropriate texture information. To accomplish this, methods should be developed that capture the wide variety of shapes, orientations, and textures of biological tissues and organs. In this study, the volumetric data (i.e., the domain of a TF) are enhanced using a brushlet expansion, which represents both low- and high-frequency textured structures in different quadrants of the transform domain. Three methods (i.e., expert-based manual, atlas-based, and machine-learning-based automatic selection) are proposed for selecting the quadrants. Non-linear manipulation of the complex brushlet coefficients is also applied prior to the tiling of the selected quadrants and the reconstruction of the volume. Applications to abdominal data sets acquired with CT, MR, and PET show that the proposed volume enhancement effectively improves the quality of 3D renderings produced with well-known TF specification techniques.
Collapse
|
27
|
Kurtz C, Depeursinge A, Napel S, Beaulieu CF, Rubin DL. On combining image-based and ontological semantic dissimilarities for medical image retrieval applications. Med Image Anal 2014; 18:1082-100. [PMID: 25036769 PMCID: PMC4173098 DOI: 10.1016/j.media.2014.06.009] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2013] [Revised: 06/18/2014] [Accepted: 06/23/2014] [Indexed: 10/25/2022]
Abstract
Computer-assisted image retrieval applications can assist radiologists by identifying similar images in archives as a means of providing decision support. In the classical case, images are described using low-level features extracted from their contents, and an appropriate distance is used to find the best matches in the feature space. However, using low-level image features to fully capture the visual appearance of diseases is challenging, and the semantic gap between these features and the high-level visual concepts in radiology may impair system performance. To deal with this issue, the use of semantic terms to provide high-level descriptions of radiological image contents has recently been advocated. Nevertheless, most existing semantic image retrieval strategies are limited by two factors: they require manual annotation of the images using semantic terms, and they ignore the intrinsic visual and semantic relationships between these annotations during the comparison of the images. Based on these considerations, we propose an image retrieval framework based on semantic features that relies on two main strategies: (1) automatic "soft" prediction of ontological terms that describe the image contents from multi-scale Riesz wavelets and (2) retrieval of similar images by evaluating the similarity between their annotations using a new term dissimilarity measure, which takes into account both image-based and ontological term relations. The combination of these strategies provides a means of accurately retrieving similar images in databases based on image annotations and can be considered a potential solution to the semantic gap problem. We validated this approach in the context of the retrieval of liver lesions from computed tomography (CT) images annotated with semantic terms from the RadLex ontology. The relevance of the retrieval results was assessed using two protocols: evaluation relative to a dissimilarity reference standard defined for pairs of images on a 25-image dataset, and evaluation relative to the diagnoses of the retrieved images on a 72-image dataset. A normalized discounted cumulative gain (NDCG) score of more than 0.92 was obtained with the first protocol, while AUC scores of more than 0.77 were obtained with the second protocol. This automated approach could provide real-time decision support to radiologists by showing them similar images with associated diagnoses and, where available, responses to therapies.
Collapse
Affiliation(s)
- Camille Kurtz
- Department of Radiology, School of Medicine, Stanford University, USA; LIPADE Laboratory (EA 2517), University Paris Descartes, France.
| | - Sandy Napel
- Department of Radiology, School of Medicine, Stanford University, USA.
| | - Daniel L Rubin
- Department of Radiology, School of Medicine, Stanford University, USA.
| |
Collapse
|