1
Galbusera F, Cina A. Image annotation and curation in radiology: an overview for machine learning practitioners. Eur Radiol Exp 2024;8:11. PMID: 38316659; PMCID: PMC10844188; DOI: 10.1186/s41747-023-00408-y.
Abstract
"Garbage in, garbage out" summarises well the importance of high-quality data in machine learning and artificial intelligence. All data used to train and validate models should indeed be consistent, standardised, traceable, correctly annotated, and de-identified, considering local regulations. This narrative review presents a summary of the techniques that are used to ensure that all these requirements are fulfilled, with special emphasis on radiological imaging and freely available software solutions that can be directly employed by the interested researcher. Topics discussed include key imaging concepts, such as image resolution and pixel depth; file formats for medical image data storage; free software solutions for medical image processing; anonymisation and pseudonymisation to protect patient privacy, including compliance with regulations such as the Regulation (EU) 2016/679 "General Data Protection Regulation" (GDPR) and the 1996 United States Act of Congress "Health Insurance Portability and Accountability Act" (HIPAA); methods to eliminate patient-identifying features within images, like facial structures; free and commercial tools for image annotation; and techniques for data harmonisation and normalisation.

Relevance statement: This review provides an overview of the methods and tools that can be used to ensure high-quality data for machine learning and artificial intelligence applications in radiology.

Key points:
• High-quality datasets are essential for reliable artificial intelligence algorithms in medical imaging.
• Software tools like ImageJ and 3D Slicer aid in processing medical images for AI research.
• Anonymisation techniques protect patient privacy during dataset preparation.
• Machine learning models can accelerate image annotation, enhancing efficiency and accuracy.
• Data curation ensures dataset integrity, compliance, and quality for artificial intelligence development.
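The pseudonymisation the review describes can be illustrated with a minimal sketch. This is not code from the article: the attribute names follow DICOM keyword conventions (the real de-identification profile covers far more attributes), and the keyed-hash pseudonym scheme is an assumption chosen so that records from one patient stay linkable without exposing identity.

```python
# Illustrative pseudonymisation of DICOM-style metadata: identifying
# attributes are dropped, and PatientID is replaced by a keyed hash so
# the same patient maps to the same pseudonym across studies.
import hashlib

# Abbreviated selection of identifying attributes; the DICOM standard's
# de-identification profiles list many more.
IDENTIFYING = {"PatientName", "PatientBirthDate", "PatientAddress"}

def pseudonymise(record: dict, secret: str) -> dict:
    """Return a copy with identifying fields removed and PatientID
    replaced by a 16-character keyed-hash pseudonym."""
    out = {k: v for k, v in record.items() if k not in IDENTIFYING}
    pid = out.get("PatientID", "")
    out["PatientID"] = hashlib.sha256((secret + pid).encode()).hexdigest()[:16]
    return out

record = {"PatientName": "Doe^Jane", "PatientID": "12345",
          "PatientBirthDate": "19800101", "Modality": "MR"}
anon = pseudonymise(record, secret="project-key")
```

Because the hash is keyed by a project secret, the mapping is reproducible within a project but not invertible from the released data alone.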
Affiliation(s)
- Fabio Galbusera
- Spine Center, Schulthess Clinic, Lengghalde 2, Zurich, 8008, Switzerland
- Andrea Cina
- Spine Center, Schulthess Clinic, Lengghalde 2, Zurich, 8008, Switzerland
- ETH Zürich, Department of Health Sciences and Technologies, Zurich, Switzerland
2
Kalokyri V, Kondylakis H, Sfakianakis S, Nikiforaki K, Karatzanis I, Mazzetti S, Tachos N, Regge D, Fotiadis DI, Marias K, Tsiknakis M. MI-Common Data Model: Extending Observational Medical Outcomes Partnership-Common Data Model (OMOP-CDM) for Registering Medical Imaging Metadata and Subsequent Curation Processes. JCO Clin Cancer Inform 2023;7:e2300101. PMID: 38061012; PMCID: PMC10715775; DOI: 10.1200/cci.23.00101.
Abstract
PURPOSE The explosion of big data and artificial intelligence has rapidly increased the need for integrated, homogenized, and harmonized health data. Many common data models (CDMs) and standard vocabularies have appeared in an attempt to offer harmonized access to the available information, with the Observational Medical Outcomes Partnership (OMOP)-CDM being one of the most prominent, allowing the standardization and harmonization of health care information. However, despite its flexibility, capturing imaging metadata along with the corresponding clinical data still poses a challenge. This challenge arises from the absence of a comprehensive standard representation for image-related information, the subsequent image curation processes, and their interlinkage with the respective clinical information. Resolving this challenge would enable imaging and clinical data to become harmonized, quality-checked, annotated, and ready to be used in conjunction in the development of artificial intelligence models and other data-dependent use cases.

METHODS To address this challenge, we introduce the medical imaging (MI)-CDM, an extension of the OMOP-CDM specifically designed for registering medical imaging data and curation-related processes. Our modeling choices were the result of numerous iterative discussions among clinical and AI experts to enable the integration of imaging and clinical data in the context of the ProCAncer-I project, for answering a set of clinical questions across the prostate cancer continuum.

RESULTS Our MI-CDM extension has been successfully implemented for the use case of prostate cancer, integrating imaging and curation metadata along with clinical information by using the OMOP-CDM and its oncology extension.

CONCLUSION Using our proposed terminologies and standardized attributes, we demonstrate how diverse imaging modalities can be seamlessly integrated in the future.
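To make the idea of an imaging extension to a CDM concrete, here is a heavily simplified sketch. The record and field names (ImageOccurrence, ImageProcessing, and their attributes) are illustrative inventions in the spirit of the abstract, not the published MI-CDM schema.

```python
# Hypothetical, simplified CDM-style records: an imaging-study row linked
# to an OMOP person_id, plus curation-provenance rows recording which
# processing steps were applied to that study.
from dataclasses import dataclass
from typing import List

@dataclass
class ImageOccurrence:
    image_occurrence_id: int
    person_id: int          # foreign key into the OMOP person table
    modality: str           # e.g. "MR"
    study_date: str         # ISO date

@dataclass
class ImageProcessing:
    image_occurrence_id: int  # links curation step back to the study
    step: str                 # e.g. "de-identification"

occ = ImageOccurrence(image_occurrence_id=1, person_id=42,
                      modality="MR", study_date="2022-03-01")
steps: List[ImageProcessing] = [
    ImageProcessing(1, "de-identification"),
    ImageProcessing(1, "intensity normalisation"),
]
```

The point of such a linkage is that a downstream query can retrieve, per patient, both the clinical rows and the imaging rows together with the curation steps the images went through.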
Affiliation(s)
- Varvara Kalokyri
- Institute of Computer Science, Foundation for Research and Technology Hellas, Heraklion, Greece
- Haridimos Kondylakis
- Institute of Computer Science, Foundation for Research and Technology Hellas, Heraklion, Greece
- Stelios Sfakianakis
- Institute of Computer Science, Foundation for Research and Technology Hellas, Heraklion, Greece
- Katerina Nikiforaki
- Institute of Computer Science, Foundation for Research and Technology Hellas, Heraklion, Greece
- Ioannis Karatzanis
- Institute of Computer Science, Foundation for Research and Technology Hellas, Heraklion, Greece
- Simone Mazzetti
- Institute of Computer Science, Foundation for Research and Technology Hellas, Heraklion, Greece
- Department of Surgical Sciences, University of Turin, Turin, Italy
- Radiology Unit, Candiolo Cancer Institute, FPO-IRCCS, Candiolo, Italy
- Nikolaos Tachos
- Institute of Computer Science, Foundation for Research and Technology Hellas, Heraklion, Greece
- Biomedical Research Institute, Foundation for Research and Technology Hellas, University Campus of Ioannina, Ioannina, Greece
- Daniele Regge
- Institute of Computer Science, Foundation for Research and Technology Hellas, Heraklion, Greece
- Radiology Unit, Candiolo Cancer Institute, FPO-IRCCS, Candiolo, Italy
- Dimitrios I. Fotiadis
- Institute of Computer Science, Foundation for Research and Technology Hellas, Heraklion, Greece
- Biomedical Research Institute, Foundation for Research and Technology Hellas, University Campus of Ioannina, Ioannina, Greece
- Konstantinos Marias
- Institute of Computer Science, Foundation for Research and Technology Hellas, Heraklion, Greece
- Manolis Tsiknakis
- Institute of Computer Science, Foundation for Research and Technology Hellas, Heraklion, Greece
3
Alkim E, Dowst H, DiCarlo J, Dobrolecki LE, Hernández-Herrera A, Hormuth DA, Liao Y, McOwiti A, Pautler R, Rimawi M, Roark A, Srinivasan RR, Virostko J, Zhang B, Zheng F, Rubin DL, Yankeelov TE, Lewis MT. Toward Practical Integration of Omic and Imaging Data in Co-Clinical Trials. Tomography 2023;9:810-828. PMID: 37104137; PMCID: PMC10144684; DOI: 10.3390/tomography9020066.
Abstract
Co-clinical trials are the concurrent or sequential evaluation of therapeutics in both patients clinically and patient-derived xenografts (PDX) pre-clinically, in a manner designed to match the pharmacokinetics and pharmacodynamics of the agent(s) used. The primary goal is to determine the degree to which PDX cohort responses recapitulate patient cohort responses at the phenotypic and molecular levels, such that pre-clinical and clinical trials can inform one another. A major issue is how to manage, integrate, and analyze the abundance of data generated across both spatial and temporal scales, as well as across species. To address this issue, we are developing MIRACCL (molecular and imaging response analysis of co-clinical trials), a web-based analytical tool. For prototyping, we simulated data for a co-clinical trial in "triple-negative" breast cancer (TNBC) by pairing pre- (T0) and on-treatment (T1) magnetic resonance imaging (MRI) from the I-SPY2 trial, as well as PDX-based T0 and T1 MRI. Baseline (T0) and on-treatment (T1) RNA expression data were also simulated for TNBC and PDX. Image features derived from both datasets were cross-referenced to omic data to evaluate MIRACCL functionality for correlating and displaying MRI-based changes in tumor size, vascularity, and cellularity with changes in mRNA expression as a function of treatment.
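The imaging-omic correlation this abstract describes, relating treatment-induced change in an image feature to change in gene expression across subjects, can be sketched as below. All numbers are invented for illustration, and MIRACCL itself is a web tool, not this code.

```python
# Toy version of the analysis described: correlate the T1 - T0 change in
# an imaging feature (tumor volume) with the T1 - T0 change in expression
# of a gene, across four simulated subjects.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented per-subject values at baseline (T0) and on-treatment (T1)
tumor_volume = {"T0": [10.0, 8.0, 12.0, 9.0], "T1": [6.0, 7.5, 5.0, 8.8]}
gene_expr    = {"T0": [1.0, 1.2, 0.8, 1.1],  "T1": [2.0, 1.3, 2.4, 1.15]}

dv = [b - a for a, b in zip(tumor_volume["T0"], tumor_volume["T1"])]
de = [b - a for a, b in zip(gene_expr["T0"], gene_expr["T1"])]
r = pearson(dv, de)  # volume drop paired with expression rise -> negative r
```

In these invented data, larger volume reductions coincide with larger expression increases, so r comes out strongly negative; a real analysis would add multiple-testing control across many features and genes.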
Affiliation(s)
- Emel Alkim
- Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA 94305, USA
- Heidi Dowst
- Dan L. Duncan Cancer Center, Baylor College of Medicine, Houston, TX 77030, USA
- Julie DiCarlo
- Oden Institute for Computational Engineering and Sciences, Austin, TX 78712, USA
- Livestrong Cancer Institutes, Austin, TX 78712, USA
- Lacey E Dobrolecki
- Lester and Sue Smith Breast Center, Baylor College of Medicine, Houston, TX 77030, USA
- Department of Molecular and Cellular Biology and Radiology, Baylor College of Medicine, Houston, TX 77030, USA
- David A Hormuth
- Oden Institute for Computational Engineering and Sciences, Austin, TX 78712, USA
- Livestrong Cancer Institutes, Austin, TX 78712, USA
- Yuxing Liao
- Lester and Sue Smith Breast Center, Baylor College of Medicine, Houston, TX 77030, USA
- Apollo McOwiti
- Dan L. Duncan Cancer Center, Baylor College of Medicine, Houston, TX 77030, USA
- Robia Pautler
- Department of Physiology, Baylor College of Medicine, Houston, TX 77030, USA
- Mothaffar Rimawi
- Lester and Sue Smith Breast Center, Baylor College of Medicine, Houston, TX 77030, USA
- Department of Medicine, Baylor College of Medicine, Houston, TX 77030, USA
- Ashley Roark
- Lester and Sue Smith Breast Center, Baylor College of Medicine, Houston, TX 77030, USA
- Department of Medicine, Baylor College of Medicine, Houston, TX 77030, USA
- Jack Virostko
- Oden Institute for Computational Engineering and Sciences, Austin, TX 78712, USA
- Livestrong Cancer Institutes, Austin, TX 78712, USA
- Department of Oncology, The University of Texas at Austin, Austin, TX 78712, USA
- Department of Diagnostic Medicine, The University of Texas at Austin, Austin, TX 78712, USA
- Bing Zhang
- Lester and Sue Smith Breast Center, Baylor College of Medicine, Houston, TX 77030, USA
- Fei Zheng
- Dan L. Duncan Cancer Center, Baylor College of Medicine, Houston, TX 77030, USA
- Daniel L Rubin
- Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA 94305, USA
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Department of Medicine, Stanford University School of Medicine, Stanford, CA 94305, USA
- Thomas E Yankeelov
- Oden Institute for Computational Engineering and Sciences, Austin, TX 78712, USA
- Livestrong Cancer Institutes, Austin, TX 78712, USA
- Department of Oncology, The University of Texas at Austin, Austin, TX 78712, USA
- Department of Diagnostic Medicine, The University of Texas at Austin, Austin, TX 78712, USA
- Department of Biomedical Engineering, The University of Texas at Austin, Austin, TX 78712, USA
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Michael T Lewis
- Dan L. Duncan Cancer Center, Baylor College of Medicine, Houston, TX 77030, USA
- Lester and Sue Smith Breast Center, Baylor College of Medicine, Houston, TX 77030, USA
- Department of Molecular and Cellular Biology and Radiology, Baylor College of Medicine, Houston, TX 77030, USA
4
Chen TT, Sun YC, Chu WC, Lien CY. BlueLight: An Open Source DICOM Viewer Using Low-Cost Computation Algorithm Implemented with JavaScript Using Advanced Medical Imaging Visualization. J Digit Imaging 2023;36:753-763. PMID: 36538245; PMCID: PMC10039132; DOI: 10.1007/s10278-022-00746-0.
Abstract
Recently, WebGL has been widely used in numerous web-based medical image viewers to present advanced imaging visualization. However, in medical imaging there are many computation-time and memory-consumption challenges that limit the use of advanced image renderings, such as volume rendering and multiplanar reformation/reconstruction, on low-cost mobile devices. In this study, we propose a low-cost, client-side rendering algorithm for common two- and three-dimensional medical imaging visualization, implemented in pure JavaScript. In particular, we used cascading style sheet transform functions combined with Digital Imaging and Communications in Medicine (DICOM)-related imaging to replace computationally expensive application programming interfaces, reducing computation time and memory consumption when interpreting medical images in web browsers. The results show that the proposed algorithm significantly reduced the consumption of central and graphics processing units on various web browsers. The proposed algorithm was implemented in BlueLight, an open-source web-based DICOM viewer; the results show that it has sufficient rendering performance to display 3D medical images with DICOM-compliant annotations and can connect to image archives via DICOMweb.

Keywords: WebGL, DICOMweb, Multiplanar reconstruction, Volume rendering, DICOM, JavaScript, Zero-footprint.
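At its core, the multiplanar reformation mentioned above is a re-slicing of the axial stack along the other two axes. The paper's implementation is in JavaScript/CSS; the toy Python sketch below shows only the underlying indexing idea, with invented voxel values.

```python
# Sketch of multiplanar reformation (MPR): given an axial image stack
# indexed volume[z][y][x], coronal and sagittal views are re-slicings
# along the other two axes; each result is an ordinary 2D image.
def coronal_slice(volume, y):
    """Fix y: rows run over z (slices), columns over x."""
    return [[volume[z][y][x] for x in range(len(volume[0][0]))]
            for z in range(len(volume))]

def sagittal_slice(volume, x):
    """Fix x: rows run over z (slices), columns over y."""
    return [[volume[z][y][x] for y in range(len(volume[0]))]
            for z in range(len(volume))]

# 2 axial slices of 3x4 pixels; each voxel value encodes (z, y, x)
# as 100*z + 10*y + x, so the reslicing is easy to verify by eye.
vol = [[[100 * z + 10 * y + x for x in range(4)] for y in range(3)]
       for z in range(2)]
cor = coronal_slice(vol, y=1)   # 2x4 image
sag = sagittal_slice(vol, x=2)  # 2x3 image
```

A real viewer additionally rescales the through-plane axis by the slice spacing so the reformatted image is not distorted.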
Affiliation(s)
- Tseng-Tse Chen
- Department of Information Management, National Taipei University of Nursing and Health Sciences, Taipei, Taiwan
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ying-Chou Sun
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan
- Department of Medical Imaging and Radiological Technology, Yuanpei University of Medical Technology, Hsinchu, Taiwan
- Woei-Chyn Chu
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Chung-Yueh Lien
- Department of Information Management, National Taipei University of Nursing and Health Sciences, Taipei, Taiwan
5
Wahid KA, Glerean E, Sahlsten J, Jaskari J, Kaski K, Naser MA, He R, Mohamed ASR, Fuller CD. Artificial Intelligence for Radiation Oncology Applications Using Public Datasets. Semin Radiat Oncol 2022;32:400-414. PMID: 36202442; PMCID: PMC9587532; DOI: 10.1016/j.semradonc.2022.06.009.
Abstract
Artificial intelligence (AI) has exceptional potential to positively impact the field of radiation oncology. However, large curated datasets - often involving imaging data and corresponding annotations - are required to develop radiation oncology AI models. Importantly, the recent establishment of the Findable, Accessible, Interoperable, Reusable (FAIR) principles for scientific data management has enabled an increasing number of radiation oncology-related datasets to be disseminated through data repositories, thereby acting as a rich source of data for AI model building. This manuscript reviews the current and future state of radiation oncology data dissemination, with a particular emphasis on published imaging datasets, AI data challenges, and associated infrastructure. Moreover, we provide historical context for FAIR data dissemination protocols, difficulties in the current distribution of radiation oncology data, and recommendations regarding data dissemination for eventual utilization in AI models. Through FAIR principles and standardized approaches to data dissemination, radiation oncology AI research has nothing to lose and everything to gain.
Affiliation(s)
- Kareem A Wahid
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Enrico Glerean
- Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland; Department of Computer Science, Aalto University School of Science, Espoo, Finland
- Jaakko Sahlsten
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
- Joel Jaskari
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
- Kimmo Kaski
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
- Mohamed A Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Renjie He
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Abdallah S R Mohamed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
6
Swinburne NC, Mendelson D, Rubin DL. Advancing Semantic Interoperability of Image Annotations: Automated Conversion of Non-standard Image Annotations in a Commercial PACS to the Annotation and Image Markup. J Digit Imaging 2021;33:49-53. PMID: 30805778; DOI: 10.1007/s10278-019-00191-6.
Abstract
Sharing radiologic image annotations among multiple institutions is important in many clinical scenarios; however, it is hindered because different vendors' PACS store annotations in non-standardized formats that lack semantic interoperability. Our goal was to develop software to automate the conversion of image annotations in a commercial PACS to the standardized Annotation and Image Markup (AIM) format and to demonstrate the utility of this conversion for automated matching of lesion measurements across time points for cancer lesion tracking. We created a software module in Java to parse the DICOM presentation state (DICOM-PS) objects (which contain the image annotations) for imaging studies exported from a commercial PACS (GE Centricity v3.x). Our software identifies line annotations encoded within the DICOM-PS objects and exports the annotations in the AIM format. A separate Python script processes the AIM annotation files to match line measurements (on lesions) across time points by tracking the 3D coordinates of annotated lesions. To validate the interoperability of our approach, we exported annotations from Centricity PACS into ePAD (http://epad.stanford.edu) (Rubin et al., Transl Oncol 7(1):23-35, 2014), a freely available AIM-compliant workstation, and the lesion measurement annotations were correctly linked by ePAD across sequential imaging studies. As quantitative imaging becomes more prevalent in radiology, interoperability of image annotations gains increasing importance. Our work demonstrates that image annotations in a vendor system lacking standard semantics can be automatically converted to a standardized metadata format such as AIM, enabling interoperability and potentially facilitating large-scale analysis of image annotations and the generation of high-quality labels for deep learning initiatives. This effort could be extended for use with other vendors' PACS.
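The cross-time-point matching step described here, linking measurements by their 3D coordinates, can be sketched as a nearest-neighbour search. The coordinates, lesion IDs, and the 15 mm tolerance below are invented for illustration; the authors' actual matching logic is not published in this abstract.

```python
# Sketch of matching line measurements across imaging time points: each
# measurement carries a 3D midpoint in patient coordinates, and a
# follow-up measurement is linked to the baseline lesion whose midpoint
# is nearest, provided it lies within a distance tolerance.
import math

def match(baseline, followup, tol_mm=15.0):
    """Pair each follow-up point with its nearest baseline point."""
    pairs = {}
    for fid, fpt in followup.items():
        bid, dist = min(((b, math.dist(fpt, bpt))
                         for b, bpt in baseline.items()),
                        key=lambda t: t[1])
        if dist <= tol_mm:          # unmatched points are new lesions
            pairs[fid] = bid
    return pairs

# Invented lesion midpoints (mm) at baseline and follow-up
baseline = {"L1": (10.0, 20.0, -30.0), "L2": (55.0, 12.0, -80.0)}
followup = {"M1": (12.0, 21.0, -29.0), "M2": (54.0, 10.0, -83.0)}
links = match(baseline, followup)
```

A production implementation would also handle many-to-one collisions (two follow-up measurements claiming the same baseline lesion), which this sketch ignores.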
Affiliation(s)
- Nathaniel C Swinburne
- Neuroradiology Section, Department of Radiology, Memorial Sloan Kettering Cancer Center, C278 Box 29, 1275 York Ave, New York, NY, 10065, USA
- David Mendelson
- Department of Radiology, Icahn School of Medicine at Mount Sinai, 1 Gustave L. Levy Place, New York, NY, 10029, USA
- Daniel L Rubin
- Department of Biomedical Data Science, Medical School Office Building, Stanford University, Room X-335, 1265 Welch Road, Stanford, CA, 94305, USA
7
Machine Learning and Deep Learning in Oncologic Imaging: Potential Hurdles, Opportunities for Improvement, and Solutions-Abdominal Imagers' Perspective. J Comput Assist Tomogr 2021;45:805-811. PMID: 34270486; DOI: 10.1097/rct.0000000000001183.
Abstract
The applications of machine learning in clinical radiology practice, and in oncologic imaging practice in particular, are steadily evolving. However, there are several potential hurdles to the widespread implementation of machine learning in oncologic imaging, including the lack of large annotated data sets and the inconsistent methodology and terminology used for reporting findings on staging and follow-up imaging studies across a wide spectrum of solid tumors. This short review discusses some potential hurdles to the implementation of machine learning in oncologic imaging, opportunities for improvement, and potential solutions that can facilitate robust machine learning from the vast numbers of radiology reports and annotations generated by dictating radiologists.
8
Mattonen SA, Gude D, Echegaray S, Bakr S, Rubin DL, Napel S. Quantitative imaging feature pipeline: a web-based tool for utilizing, sharing, and building image-processing pipelines. J Med Imaging (Bellingham) 2020;7:042803. PMID: 32206688; PMCID: PMC7070161; DOI: 10.1117/1.jmi.7.4.042803.
Abstract
Quantitative image features that can be computed from medical images are proving to be valuable biomarkers of underlying cancer biology that can be used for assessing treatment response and predicting clinical outcomes. However, validation and eventual clinical implementation of these tools is challenging due to the absence of shared software algorithms, architectures, and tools for computing, comparing, evaluating, and disseminating predictive models. Moreover, researchers need programming expertise to complete these tasks. The quantitative image feature pipeline (QIFP) is an open-source, web-based graphical user interface (GUI) for building and running configurable quantitative image-processing pipelines for both planar (two-dimensional) and volumetric (three-dimensional) medical images. It gives researchers and clinicians a GUI-driven approach to processing and analyzing images without writing any software code. The QIFP allows users to upload a repository of linked imaging, segmentation, and clinical data or to access publicly available datasets (e.g., The Cancer Imaging Archive) through direct links. Researchers have access to a library of file conversion, segmentation, quantitative image feature extraction, and machine learning algorithms. An interface is also provided for users to upload their own algorithms in Docker containers. The QIFP gives researchers the tools and infrastructure for the assessment and development of new imaging biomarkers and the ability to use them in single- and multicenter clinical and virtual clinical trials.
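As a flavour of what one stage of such a feature-extraction pipeline computes, here is a minimal first-order intensity-feature extractor over a segmented region. The QIFP's actual feature library is far richer (shape and texture features, among others); the function and variable names below are illustrative, and the pixel values are toy data.

```python
# Minimal quantitative-image-feature step: first-order intensity
# statistics restricted to the voxels a segmentation mask selects.
import statistics

def first_order_features(pixels, mask):
    """Mean, population st. dev., min, and max of masked intensities."""
    roi = [p for p, m in zip(pixels, mask) if m]   # region of interest
    return {"mean": statistics.fmean(roi),
            "stdev": statistics.pstdev(roi),
            "min": min(roi), "max": max(roi)}

pixels = [0, 10, 50, 60, 70, 5]   # flattened image intensities
mask   = [0,  0,  1,  1,  1, 0]   # 1 marks the segmented lesion
feats = first_order_features(pixels, mask)
```

Packaging such a step as a container with a fixed input/output contract is what lets a pipeline framework chain it after segmentation and before model fitting.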
Affiliation(s)
- Sarah A. Mattonen
- Stanford University, Department of Radiology, Stanford, California, United States
- The University of Western Ontario, Department of Medical Biophysics, London, Ontario, Canada
- The University of Western Ontario, Department of Oncology, London, Ontario, Canada
- Dev Gude
- Stanford University, Department of Radiology, Stanford, California, United States
- Sebastian Echegaray
- Stanford University, Department of Radiology, Stanford, California, United States
- Shaimaa Bakr
- Stanford University, Department of Electrical Engineering, Stanford, California, United States
- Daniel L. Rubin
- Stanford University, Department of Radiology, Stanford, California, United States
- Stanford University, Department of Medicine, Stanford, California, United States
- Stanford University, Department of Biomedical Data Science, Stanford, California, United States
- Sandy Napel
- Stanford University, Department of Radiology, Stanford, California, United States
- Stanford University, Department of Electrical Engineering, Stanford, California, United States
- Stanford University, Department of Medicine, Stanford, California, United States
9
Willemink MJ, Koszek WA, Hardell C, Wu J, Fleischmann D, Harvey H, Folio LR, Summers RM, Rubin DL, Lungren MP. Preparing Medical Imaging Data for Machine Learning. Radiology 2020;295:4-15. PMID: 32068507; PMCID: PMC7104701; DOI: 10.1148/radiol.2020192224.
Abstract
Artificial intelligence (AI) continues to garner substantial interest in medical imaging. The potential applications are vast and include the entirety of the medical imaging life cycle, from image creation to diagnosis to outcome prediction. The chief obstacles to development and clinical implementation of AI algorithms include the limited availability of sufficiently large, curated, and representative training data that include expert labeling (eg, annotations). Current supervised AI methods require a curation process for data to optimally train, validate, and test algorithms. Currently, most research groups and industry have limited data access based on small sample sizes from small geographic areas. In addition, the preparation of data is a costly and time-intensive process, the result of which is algorithms with limited utility and poor generalization. In this article, the authors describe fundamental steps for preparing medical imaging data for AI algorithm development, explain current limitations of data curation, and explore new approaches to address the problem of data availability.
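One concrete curation step implied by the article's emphasis on properly prepared training, validation, and test data is a patient-level split, so that no patient's images leak across partitions. The sketch below is a common pattern rather than the article's own method; the hashing scheme and the IDs are assumptions made for illustration.

```python
# Deterministic patient-level train/val/test assignment: hashing the
# patient ID yields a stable pseudo-uniform value, so every study of a
# given patient lands in the same partition on every run.
import hashlib

def assign_split(patient_id: str, train=0.7, val=0.15) -> str:
    """Map a patient ID deterministically to 'train', 'val', or 'test'."""
    h = int(hashlib.sha256(patient_id.encode()).hexdigest(), 16)
    u = (h % 10_000) / 10_000          # pseudo-uniform value in [0, 1)
    if u < train:
        return "train"
    return "val" if u < train + val else "test"

# Invented studies: two studies of P001, one of P002
studies = [("P001", "ct_1.dcm"), ("P001", "ct_2.dcm"), ("P002", "mr_1.dcm")]
splits = {pid: assign_split(pid) for pid, _ in studies}
# Both of P001's studies share one split by construction.
```

Splitting at the image level instead would let near-duplicate studies of one patient appear in both training and test sets, inflating apparent performance.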
Collapse
Affiliation(s)
- Martin J. Willemink
- From the Department of Radiology, Stanford University School of Medicine, 300 Pasteur Dr, S-072, Stanford, CA 94305-5105 (M.J.W., D.F., D.L.R., M.P.L.); Segmed, Menlo Park, Calif (M.J.W., W.A.K., C.H., J.W.); School of Engineering, Stanford University, Stanford, Calif (J.W.); Institute of Cognitive Neuroscience, University College London, London, England (H.H.); Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (L.R.F.); Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, National Institutes of Health, Clinical Center, Bethesda, Md (R.M.S.); Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, Calif (D.L.R.); and Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI), Stanford, Calif (M.P.L.)
| | - Wojciech A. Koszek
- From the Department of Radiology, Stanford University School of Medicine, 300 Pasteur Dr, S-072, Stanford, CA 94305-5105 (M.J.W., D.F., D.L.R., M.P.L.); Segmed, Menlo Park, Calif (M.J.W., W.A.K., C.H., J.W.); School of Engineering, Stanford University, Stanford, Calif (J.W.); Institute of Cognitive Neuroscience, University College London, London, England (H.H.); Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (L.R.F.); Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, National Institutes of Health, Clinical Center, Bethesda, Md (R.M.S.); Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, Calif (D.L.R.); and Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI), Stanford, Calif (M.P.L.)
| | - Cailin Hardell
- From the Department of Radiology, Stanford University School of Medicine, 300 Pasteur Dr, S-072, Stanford, CA 94305-5105 (M.J.W., D.F., D.L.R., M.P.L.); Segmed, Menlo Park, Calif (M.J.W., W.A.K., C.H., J.W.); School of Engineering, Stanford University, Stanford, Calif (J.W.); Institute of Cognitive Neuroscience, University College London, London, England (H.H.); Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (L.R.F.); Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, National Institutes of Health, Clinical Center, Bethesda, Md (R.M.S.); Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, Calif (D.L.R.); and Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI), Stanford, Calif (M.P.L.)
| | - Jie Wu
- From the Department of Radiology, Stanford University School of Medicine, 300 Pasteur Dr, S-072, Stanford, CA 94305-5105 (M.J.W., D.F., D.L.R., M.P.L.); Segmed, Menlo Park, Calif (M.J.W., W.A.K., C.H., J.W.); School of Engineering, Stanford University, Stanford, Calif (J.W.); Institute of Cognitive Neuroscience, University College London, London, England (H.H.); Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (L.R.F.); Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, National Institutes of Health, Clinical Center, Bethesda, Md (R.M.S.); Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, Calif (D.L.R.); and Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI), Stanford, Calif (M.P.L.)
- Dominik Fleischmann
- Hugh Harvey
- Les R. Folio
- Ronald M. Summers
- Daniel L. Rubin
- Matthew P. Lungren

10
Chaki J, Dey N. Data Tagging in Medical Images: A Survey of the State-of-Art. Curr Med Imaging 2020; 16:1214-1228. [PMID: 32108002 DOI: 10.2174/1573405616666200218130043]
Abstract
A huge amount of medical data is generated every second, and a significant percentage of these data are images that need to be analyzed and processed. One of the key challenges in this regard is the retrieval of medical image data. Medical image retrieval should be performed automatically by computers, which must identify object concepts and assign the corresponding tags to them. Discovering the hidden concepts in medical images requires deriving high-level concepts from low-level characteristics, which is a challenging task. In any specific case, human involvement is required to determine the significance of the image. To allow machine-based reasoning on the medical evidence collected, the data must be accompanied by additional interpretive semantics: a change from a purely data-intensive methodology to a model of evidence rich in semantics. This state-of-the-art survey reviews data tagging methods related to medical images, an important aspect of the recognition of large numbers of medical images. The different types of tags related to medical images, the prerequisites of medical data tagging, techniques and algorithms for developing medical image tags, and tools used to create the tags are discussed in this paper. The aim of this state-of-the-art paper is to produce a summary and a set of guidelines for using tags for the identification of medical images, and to identify the challenges and future research directions of tagging medical images.
Affiliation(s)
- Jyotismita Chaki
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
- Nilanjan Dey
- Department of Information Technology, Techno India College of Technology, West Bengal, India
11
Rubin DL, Ugur Akdogan M, Altindag C, Alkim E. ePAD: An Image Annotation and Analysis Platform for Quantitative Imaging. Tomography 2020; 5:170-183. [PMID: 30854455 PMCID: PMC6403025 DOI: 10.18383/j.tom.2018.00055]
Abstract
Medical imaging is critical for assessing the response of patients to new cancer therapies. Quantitative lesion assessment on images is time-consuming, and adopting new promising quantitative imaging biomarkers of response in clinical trials is challenging. The electronic Physician Annotation Device (ePAD) is a freely available web-based zero-footprint software application for viewing, annotation, and quantitative analysis of radiology images designed to meet the challenges of quantitative evaluation of cancer lesions. For imaging researchers, ePAD calculates a variety of quantitative imaging biomarkers that they can analyze and compare in ePAD to identify potential candidates as surrogate endpoints in clinical trials. For clinicians, ePAD provides clinical decision support tools for evaluating cancer response through reports summarizing changes in tumor burden based on different imaging biomarkers. As a workflow management and study oversight tool, ePAD lets clinical trial project administrators create worklists for users and oversee the progress of annotations created by research groups. To support interoperability of image annotations, ePAD writes all image annotations and results of quantitative imaging analyses in standardized file formats, and it supports migration of annotations from various proprietary formats. ePAD also provides a plugin architecture supporting MATLAB server-side modules in addition to client-side plugins, permitting the community to extend the ePAD platform in various ways for new cancer use cases. We present an overview of ePAD as a platform for medical image annotation and quantitative analysis. We also discuss use cases and collaborations with different groups in the Quantitative Imaging Network and future directions.
Affiliation(s)
- Daniel L Rubin
- Department of Biomedical Data Science, Radiology, and Medicine (Biomedical Informatics Research), Stanford University, Stanford, CA
- Mete Ugur Akdogan
- Department of Biomedical Data Science, Radiology, and Medicine (Biomedical Informatics Research), Stanford University, Stanford, CA
- Cavit Altindag
- Department of Biomedical Data Science, Radiology, and Medicine (Biomedical Informatics Research), Stanford University, Stanford, CA
- Emel Alkim
- Department of Biomedical Data Science, Radiology, and Medicine (Biomedical Informatics Research), Stanford University, Stanford, CA

12
The Importance of Imaging Informatics and Informaticists in the Implementation of AI. Acad Radiol 2020; 27:113-116. [PMID: 31636003 DOI: 10.1016/j.acra.2019.10.002]
Abstract
Imaging informatics is critical to the success of AI implementation in radiology. An imaging informaticist is a unique individual who sits at the intersection of clinical radiology, data science, and information technology. With the ability to understand each of the different domains and translate between the experts in these domains, imaging informaticists are now essential players in the development, evaluation, and deployment of AI in the clinical environment.
13
Automatic Staging of Cancer Tumors Using AIM Image Annotations and Ontologies. J Digit Imaging 2019; 33:287-303. [PMID: 31396778 DOI: 10.1007/s10278-019-00251-x]
Abstract
A second opinion about cancer stage is crucial when clinicians assess patient treatment progress. Staging is a process that takes into account description, location, characteristics, and possible metastasis of tumors in a patient. It should follow standards, such as the TNM Classification of Malignant Tumors. However, in clinical practice, the implementation of this process can be tedious and error prone. In order to alleviate these problems, we intend to assist radiologists by providing a second opinion in the evaluation of cancer stage. To do this, we developed a TNM classifier based on semantic annotations, made by radiologists, using the ePAD tool. It transforms the annotations (stored using the AIM format), using axioms and rules, into AIM4-O ontology instances, and from these it automatically calculates the liver TNM cancer stage. The AIM4-O ontology was developed, as part of this work, to represent annotations in the Web Ontology Language (OWL). A dataset of 51 liver radiology reports with staging data, from NCI's Genomic Data Commons (GDC), was used to evaluate our classifier. When compared with the stages attributed by physicians, the classifier stages had a precision of 85.7% and a recall of 81.0%. In addition, 3 radiologists from 2 different institutions manually reviewed a random sample of 4 of the 51 records and agreed with the tool's staging. AIM4-O was also evaluated, with good results. Our classifier can be integrated into AIM-aware imaging tools, such as ePAD, to offer a second opinion about staging as part of the cancer treatment workflow.
14
Gupta R, Kurc T, Sharma A, Almeida JS, Saltz J. The Emergence of Pathomics. Curr Pathobiol Rep 2019. [DOI: 10.1007/s40139-019-00200-x]
15
Napel S, Mu W, Jardim-Perassi BV, Aerts HJWL, Gillies RJ. Quantitative imaging of cancer in the postgenomic era: Radio(geno)mics, deep learning, and habitats. Cancer 2018; 124:4633-4649. [PMID: 30383900 PMCID: PMC6482447 DOI: 10.1002/cncr.31630]
Abstract
Although cancer often is referred to as "a disease of the genes," it is indisputable that the (epi)genetic properties of individual cancer cells are highly variable, even within the same tumor. Hence, preexisting resistant clones will emerge and proliferate after therapeutic selection that targets sensitive clones. Herein, the authors propose that quantitative image analytics, known as "radiomics," can be used to quantify and characterize this heterogeneity. Virtually every patient with cancer is imaged radiologically. Radiomics is predicated on the beliefs that these images reflect underlying pathophysiologies, and that they can be converted into mineable data for improved diagnosis, prognosis, prediction, and therapy monitoring. In the last decade, the radiomics of cancer has grown from a few laboratories to a worldwide enterprise. During this growth, radiomics has established a convention, wherein a large set of annotated image features (1-2000 features) are extracted from segmented regions of interest and used to build classifier models to separate individual patients into their appropriate class (eg, indolent vs aggressive disease). An extension of this conventional radiomics is the application of "deep learning," wherein convolutional neural networks can be used to detect the most informative regions and features without human intervention. A further extension of radiomics involves automatically segmenting informative subregions ("habitats") within tumors, which can be linked to underlying tumor pathophysiology. The goal of the radiomics enterprise is to provide informed decision support for the practice of precision oncology.
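The conventional radiomics workflow described in this abstract (extracting quantitative features from segmented regions of interest before building classifier models) can be illustrated with a few first-order intensity features. The function, feature names, and synthetic image below are illustrative assumptions, not code or data from the study:

```python
import numpy as np

def first_order_features(image: np.ndarray, mask: np.ndarray) -> dict:
    """Compute a few simple first-order radiomic features from a segmented ROI."""
    roi = image[mask.astype(bool)]  # keep only voxels inside the segmentation
    return {
        "mean": float(roi.mean()),
        "std": float(roi.std()),
        "min": float(roi.min()),
        "max": float(roi.max()),
        "volume_voxels": int(roi.size),
    }

rng = np.random.default_rng(0)
img = rng.normal(100, 20, size=(8, 8))            # synthetic "CT slice"
msk = np.zeros((8, 8), dtype=bool); msk[2:6, 2:6] = True  # toy 4x4 ROI
feats = first_order_features(img, msk)
print(sorted(feats))  # → ['max', 'mean', 'min', 'std', 'volume_voxels']
```

Real radiomics pipelines compute hundreds to thousands of such features (shape, texture, wavelet) per region, which are then fed to the classifier models the abstract describes.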
Affiliation(s)
- Sandy Napel
- Department of Radiology, Stanford University, Stanford, California
- Wei Mu
- Department of Cancer Physiology, H. Lee Moffitt Cancer Center, Tampa, Florida
- Hugo J. W. L. Aerts
- Dana-Farber Cancer Institute, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Robert J. Gillies
- Department of Cancer Physiology, H. Lee Moffitt Cancer Center, Tampa, Florida

16
Imaging Biomarker Ontology (IBO): A Biomedical Ontology to Annotate and Share Imaging Biomarker Data. J Data Semant 2018. [DOI: 10.1007/s13740-018-0093-3]
17
Abstract
PURPOSE Today, data surrounding most of our lives are collected and stored. Data scientists are beginning to explore applications that could harness this information and make sense of it. MATERIALS AND METHODS In this review, the topic of Big Data is explored, and applications in modern health care are considered. RESULTS Big Data is a concept that has evolved from the modern trend of "scientism." One of the primary goals of data scientists is to develop ways to discover new knowledge from the vast quantities of increasingly available information. CONCLUSIONS Current and future opportunities and challenges with respect to radiology are provided with emphasis on cardiothoracic imaging.
18
Hwang KH, Lee H, Koh G, Willrett D, Rubin DL. Building and Querying RDF/OWL Database of Semantically Annotated Nuclear Medicine Images. J Digit Imaging 2018; 30:4-10. [PMID: 27785632 DOI: 10.1007/s10278-016-9916-7]
Abstract
As the use of positron emission tomography-computed tomography (PET-CT) has increased rapidly, there is a need to retrieve relevant medical images that can assist image interpretation. However, the images themselves lack the explicit information needed for query. We constructed a semantically structured database of nuclear medicine images using the Annotation and Image Markup (AIM) format and evaluated the ability of the AIM annotations to improve image search. We created AIM annotation templates specific to the nuclear medicine domain and used them to annotate 100 nuclear medicine PET-CT studies in AIM format using controlled vocabulary. We evaluated image retrieval from 20 specific clinical queries. As the gold standard, two nuclear medicine physicians manually retrieved the relevant images from the image database using free-text search of radiology reports for the same queries. We compared query results with the manually retrieved results obtained by the physicians. The query performance indicated a 98% recall for simple queries and an 89% recall for complex queries. In total, the queries provided 95% (75 of 79 images) recall, 100% precision, and an F1 score of 0.97 for the 20 clinical queries. Three of the four images missed by the queries required reasoning for successful retrieval. Nuclear medicine images augmented using semantic annotations in AIM enabled high recall and precision for simple queries, helping physicians to retrieve the relevant images. Further study using a larger data set and the implementation of an inference engine may improve query results for more complex queries.
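The recall, precision, and F1 figures reported in this abstract are linked by the standard harmonic-mean definition of F1. As an illustrative cross-check (a sketch, not code from the study), the reported F1 of 0.97 follows from the reported 100% precision and 95% recall:

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Values reported in the abstract: 100% precision, 95% recall.
print(round(f1_score(1.0, 0.95), 2))  # → 0.97
```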
Affiliation(s)
- Kyung Hoon Hwang
- Department of Nuclear Medicine, Gachon University Gil Medical Center, Incheon, South Korea
- Haejun Lee
- Department of Nuclear Medicine, Gachon University Gil Medical Center, Incheon, South Korea
- Geon Koh
- Department of Nuclear Medicine, Gachon University Gil Medical Center, Incheon, South Korea
- Debra Willrett
- Department of Radiology, Stanford University, Stanford, CA, USA
- Daniel L Rubin
- Department of Radiology, Stanford University, Stanford, CA, USA; Department of Medicine (Biomedical Informatics Research), Stanford University, Stanford, CA, USA

19
A Comparison of Lung Nodule Segmentation Algorithms: Methods and Results from a Multi-institutional Study. J Digit Imaging 2018; 29:476-487. [PMID: 26847203 DOI: 10.1007/s10278-016-9859-z]
Abstract
Tumor volume estimation, as well as accurate and reproducible border segmentation in medical images, is important in the diagnosis, staging, and assessment of response to cancer therapy. The goal of this study was to demonstrate the feasibility of a multi-institutional effort to assess the repeatability and reproducibility of nodule borders and volume estimate bias of computerized segmentation algorithms in CT images of lung cancer, and to provide results from such a study. The dataset used for this evaluation consisted of 52 tumors in 41 CT volumes (40 patient datasets and 1 dataset containing scans of 12 phantom nodules of known volume) from five collections available in The Cancer Imaging Archive. Three academic institutions developing lung nodule segmentation algorithms submitted results for three repeat runs for each of the nodules. We compared the performance of the lung nodule segmentation algorithms by assessing several measurements of spatial overlap and volume measurement. Nodule sizes varied from 29 μl to 66 ml and demonstrated a diversity of shapes. Agreement in spatial overlap of segmentations was significantly higher for multiple runs of the same algorithm than between segmentations generated by different algorithms (p < 0.05) and was significantly higher on the phantom dataset compared to the other datasets (p < 0.05). Algorithms differed significantly in the bias of the measured volumes of the phantom nodules (p < 0.05), underscoring the need for assessing performance on clinical data in addition to phantoms. Algorithms that most accurately estimated nodule volumes were not the most repeatable, emphasizing the need to evaluate both their accuracy and precision. There were considerable differences between algorithms, especially in a subset of heterogeneous nodules, underscoring the recommendation that the same software be used at all time points in longitudinal studies.
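The study assesses "several measurements of spatial overlap" without listing them in the abstract; the Dice coefficient is the standard overlap measure for binary segmentation masks, and a minimal sketch (illustrative only, not the study's implementation) looks like:

```python
import numpy as np

def dice_coefficient(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """Dice similarity 2|A∩B| / (|A| + |B|) between two binary masks."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two toy 2D "nodule" masks that differ by a single voxel.
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True  # 4 voxels
b = a.copy(); b[3, 3] = True                          # 5 voxels
print(dice_coefficient(a, b))  # 2*4/(4+5) = 0.888...
```

Comparing repeat runs of one algorithm against runs of different algorithms, as in the study, amounts to computing such an overlap score for every pair of segmentations of the same nodule.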
20
Saltz J, Sharma A, Iyer G, Bremer E, Wang F, Jasniewski A, DiPrima T, Almeida JS, Gao Y, Zhao T, Saltz M, Kurc T. A Containerized Software System for Generation, Management, and Exploration of Features from Whole Slide Tissue Images. Cancer Res 2017; 77:e79-e82. [PMID: 29092946 DOI: 10.1158/0008-5472.can-17-0316]
Abstract
Well-curated sets of pathology image features will be critical to clinical studies that aim to evaluate and predict treatment responses. Researchers require information synthesized across multiple biological scales, from the patient to the molecular scale, to more effectively study cancer. This article describes a suite of services and web applications that allow users to select regions of interest in whole slide tissue images, run a segmentation pipeline on the selected regions to extract nuclei and compute shape, size, intensity, and texture features, store and index images and analysis results, and visualize and explore images and computed features. All the services are deployed as containers and the user-facing interfaces as web-based applications. The set of containers and web applications presented in this article is used in cancer research studies of morphologic characteristics of tumor tissues. The software is free and open source. Cancer Res; 77(21); e79-82. ©2017 AACR.
Affiliation(s)
- Joel Saltz
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York.
- Ashish Sharma
- Department of Biomedical Informatics, Emory University, Atlanta, Georgia
- Ganesh Iyer
- Department of Biomedical Informatics, Emory University, Atlanta, Georgia
- Erich Bremer
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York
- Feiqiao Wang
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York
- Alina Jasniewski
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York
- Tammy DiPrima
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York
- Jonas S Almeida
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York
- Yi Gao
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York
- Tianhao Zhao
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York; Department of Pathology, Stony Brook University, Stony Brook, New York
- Mary Saltz
- Department of Radiology, Stony Brook University, Stony Brook, New York
- Tahsin Kurc
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York; Scientific Data Group, Oak Ridge National Laboratory, Oak Ridge, Tennessee

21
Owolabi M, Ogbole G, Akinyemi R, Salaam K, Akpa O, Mongkolwat P, Omisore A, Agunloye A, Efidi R, Odo J, Makanjuola A, Akpalu A, Sarfo F, Owolabi L, Obiako R, Wahab K, Sanya E, Adebayo P, Komolafe M, Adeoye AM, Fawale MB, Akinyemi J, Osaigbovo G, Sunmonu T, Olowoyo P, Chukwuonye I, Obiabo Y, Ibinaiye P, Dambatta A, Mensah Y, Abdul S, Olabinri E, Ikubor J, Oyinloye O, Odunlami F, Melikam E, Saulson R, Kolo P, Ogunniyi A, Ovbiagele B. Development and Reliability of a User-Friendly Multicenter Phenotyping Application for Hemorrhagic and Ischemic Stroke. J Stroke Cerebrovasc Dis 2017; 26:2662-2670. [PMID: 28760409 DOI: 10.1016/j.jstrokecerebrovasdis.2017.06.042]
Abstract
BACKGROUND Annotation and Image Markup on ClearCanvas Enriched Stroke-phenotyping Software (ACCESS) is a novel stand-alone computer software application that allows the creation of simple standardized annotations for reporting brain images of all stroke types. We developed the ACCESS application and determined its inter-rater and intra-rater reliability in the Stroke Investigative Research and Educational Network (SIREN) study to assess its suitability for multicenter studies. METHODS One hundred randomly selected stroke imaging reports from 5 SIREN sites were re-evaluated by 4 trained independent raters to determine the inter-rater reliability of the ACCESS (version 12.0) software for stroke phenotyping. To determine intra-rater reliability, 6 raters reviewed the same cases previously reported by them after a month of interval. Ischemic stroke was classified using the Oxfordshire Community Stroke Project (OCSP), Trial of Org 10172 in Acute Stroke Treatment (TOAST), and Atherosclerosis, Small-vessel disease, Cardiac source, Other cause (ASCO) protocols, while hemorrhagic stroke was classified using the Structural lesion, Medication, Amyloid angiopathy, Systemic disease, Hypertensive angiopathy and Undetermined (SMASH-U) protocol in ACCESS. Agreement among raters was measured with Cohen's kappa statistics. RESULTS For primary stroke type, inter-rater agreement was .98 (95% confidence interval [CI], .94-1.00), while intra-rater agreement was 1.00 (95% CI, 1.00). For OCSP subtypes, inter-rater agreement was .97 (95% CI, .92-1.00) for the partial anterior circulation infarcts, .92 (95% CI, .76-1.00) for the total anterior circulation infarcts, and excellent for both lacunar infarcts and posterior circulation infarcts. Intra-rater agreement was .97 (.90-1.00), while inter-rater agreement was .93 (95% CI, .84-1.00) for TOAST subtypes. Inter-rater agreement ranged between .78 (cardioembolic) and .91 (large artery atherosclerotic) for ASCO subtypes and was .80 (95% CI, .56-1.00) for SMASH-U subtypes. CONCLUSION The ACCESS application facilitates a concordant and reproducible classification of stroke subtypes by multiple investigators, making it suitable for clinical use and multicenter research.
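Agreement in this study is quantified with Cohen's kappa, i.e., observed agreement corrected for the agreement expected by chance. The sketch below (toy labels, not SIREN data) shows how such a coefficient is computed for two raters:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: (observed - expected) / (1 - expected) agreement."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement from each rater's marginal label frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)

# Toy stroke-type labels from two hypothetical raters.
r1 = ["ischemic", "ischemic", "hemorrhagic", "ischemic", "hemorrhagic"]
r2 = ["ischemic", "ischemic", "hemorrhagic", "hemorrhagic", "hemorrhagic"]
print(round(cohens_kappa(r1, r2), 2))  # → 0.62
```

Here the raters agree on 4 of 5 cases (observed = 0.80) but would agree on 0.48 by chance alone, giving kappa = (0.80 - 0.48) / 0.52 ≈ 0.62, well below the near-perfect values the study reports.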
Affiliation(s)
- Mayowa Owolabi
- University of Ibadan, Nigeria; University College Hospital, Ibadan, Nigeria
- Godwin Ogbole
- University of Ibadan, Nigeria; University College Hospital, Ibadan, Nigeria
- Adeleye Omisore
- Obafemi Awolowo University Teaching Hospital, Ile-Ife, Nigeria
- Atinuke Agunloye
- University of Ibadan, Nigeria; University College Hospital, Ibadan, Nigeria
- Joseph Odo
- University College Hospital, Ibadan, Nigeria
- Fred Sarfo
- Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
- Philip Adebayo
- Ladoke Akintola University Teaching Hospital, Ogbomosho, Nigeria
- Paul Olowoyo
- Federal University Teaching Hospital Ido-Ekiti, Nigeria
- Yahaya Obiabo
- Delta State University Teaching Hospital, Oghara, Nigeria
- Yaw Mensah
- University of Ghana Medical School, Accra, Ghana
- Joyce Ikubor
- Delta State University Teaching Hospital, Oghara, Nigeria
- Femi Odunlami
- Federal Medical Centre, Abeokuta, Nigeria; Sacred Heart Hospital, Abeokuta, Nigeria
- Ezinne Melikam
- University of Ibadan, Nigeria; University College Hospital, Ibadan, Nigeria
- Raelle Saulson
- Medical University of South Carolina, Charleston, South Carolina
- Philip Kolo
- University of Ilorin Teaching Hospital, Ilorin, Nigeria
- Adesola Ogunniyi
- University of Ibadan, Nigeria; University College Hospital, Ibadan, Nigeria
- Bruce Ovbiagele
- Medical University of South Carolina, Charleston, South Carolina

22
Saltz J, Almeida J, Gao Y, Sharma A, Bremer E, DiPrima T, Saltz M, Kalpathy-Cramer J, Kurc T. Towards Generation, Management, and Exploration of Combined Radiomics and Pathomics Datasets for Cancer Research. AMIA Jt Summits Transl Sci Proc 2017; 2017:85-94. [PMID: 28815113 PMCID: PMC5543366]
Abstract
Cancer is a complex multifactorial disease state and the ability to anticipate and steer treatment results will require information synthesis across multiple scales from the host to the molecular level. Radiomics and Pathomics, where image features are extracted from routine diagnostic Radiology and Pathology studies, are also evolving as valuable diagnostic and prognostic indicators in cancer. This information explosion provides new opportunities for integrated, multi-scale investigation of cancer, but also mandates a need to build systematic and integrated approaches to manage, query and mine combined Radiomics and Pathomics data. In this paper, we describe a suite of tools and web-based applications towards building a comprehensive framework to support the generation, management and interrogation of large volumes of Radiomics and Pathomics feature sets and the investigation of correlations between image features, molecular data, and clinical outcome.
Affiliation(s)
- Joel Saltz
- Biomedical Informatics Department, Stony Brook University, Stony Brook, NY
- Jonas Almeida
- Biomedical Informatics Department, Stony Brook University, Stony Brook, NY
- Yi Gao
- Biomedical Informatics Department, Stony Brook University, Stony Brook, NY
- Ashish Sharma
- Biomedical Informatics Department, Emory University, Atlanta, GA
- Erich Bremer
- Biomedical Informatics Department, Stony Brook University, Stony Brook, NY
- Tammy DiPrima
- Biomedical Informatics Department, Stony Brook University, Stony Brook, NY
- Mary Saltz
- Department of Radiology, Stony Brook University, Stony Brook, NY
- Tahsin Kurc
- Biomedical Informatics Department, Stony Brook University, Stony Brook, NY; Scientific Data Group, Oak Ridge National Laboratory, Oak Ridge, TN

23
Kanas VG, Zacharaki EI, Thomas GA, Zinn PO, Megalooikonomou V, Colen RR. Learning MRI-based classification models for MGMT methylation status prediction in glioblastoma. Comput Methods Programs Biomed 2017; 140:249-257. [PMID: 28254081 DOI: 10.1016/j.cmpb.2016.12.018]
Abstract
BACKGROUND AND OBJECTIVE The O6-methylguanine-DNA-methyltransferase (MGMT) promoter methylation has been shown to be associated with improved outcomes in patients with glioblastoma (GBM) and may be a predictive marker of sensitivity to chemotherapy. However, determination of the MGMT promoter methylation status requires tissue obtained via surgical resection or biopsy. The aim of this study was to assess the ability of quantitative and qualitative imaging variables in predicting MGMT methylation status noninvasively. METHODS A retrospective analysis of MR images from GBM patients was conducted. Multivariate prediction models were obtained by machine-learning methods and tested on data from The Cancer Genome Atlas (TCGA) database. RESULTS The status of MGMT promoter methylation was predicted with an accuracy of up to 73.6%. Experimental analysis showed that the edema/necrosis volume ratio, tumor/necrosis volume ratio, edema volume, and tumor location and enhancement characteristics were the most significant variables with respect to the status of MGMT promoter methylation in GBM. CONCLUSIONS The obtained results provide further evidence of an association between standard preoperative MRI variables and MGMT methylation status in GBM.
Affiliation(s)
- Vasileios G Kanas
- Department of Electrical and Computer Engineering, University of Patras, Patras, Greece; Department of Computer Engineering and Informatics, University of Patras, Patras, Greece
- Evangelia I Zacharaki
- Department of Computer Engineering and Informatics, University of Patras, Patras, Greece; Center for Visual Computing (CVC), CentraleSupélec, INRIA, Université Paris-Saclay, France.
- Ginu A Thomas
- Department of Diagnostic Radiology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Pascal O Zinn
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- Rivka R Colen
- Department of Diagnostic Radiology, University of Texas MD Anderson Cancer Center, Houston, TX, USA

24
Bialecki B, Park J, Tilkin M. Using Object Storage Technology vs Vendor Neutral Archives for an Image Data Repository Infrastructure. J Digit Imaging 2016; 29:460-5. [PMID: 26872657 PMCID: PMC4942393 DOI: 10.1007/s10278-016-9867-z]
Abstract
The intent of this project was to use object storage and its database, which has the ability to add custom extensible metadata to an imaging object being stored within the system, to harness the power of its search capabilities, and to close the technology gap that healthcare faces. This creates a non-disruptive tool that can be used natively by both legacy systems and the healthcare systems of today which leverage more advanced storage technologies. The base infrastructure can be populated alongside current workflows without any interruption to the delivery of services. In certain use cases, this technology can be seen as a true alternative to the VNA (Vendor Neutral Archive) systems implemented by healthcare today. The scalability, security, and ability to process complex objects makes this more than just storage for image data and a commodity to be consumed by PACS (Picture Archiving and Communication System) and workstations. Object storage is a smart technology that can be leveraged to create vendor independence, standards compliance, and a data repository that can be mined for truly relevant content by adding additional context to search capabilities. This functionality can lead to efficiencies in workflow and a wealth of minable data to improve outcomes into the future.
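The core idea above, attaching custom extensible metadata to each stored imaging object so the store itself becomes searchable, can be sketched as follows (a hypothetical in-memory stand-in, not a real object-store client API):

```python
# Toy model of an object store with extensible per-object metadata.
# `put` and `search` are invented names for illustration only.
object_store = []

def put(object_id, payload, **metadata):
    """Store a binary object together with arbitrary key/value metadata."""
    object_store.append({"id": object_id, "payload": payload, "meta": metadata})

def search(**criteria):
    """Return ids of objects whose metadata matches every criterion."""
    return [o["id"] for o in object_store
            if all(o["meta"].get(k) == v for k, v in criteria.items())]

put("img-001", b"...", modality="CT", body_part="CHEST", contrast=True)
put("img-002", b"...", modality="MR", body_part="BRAIN", contrast=False)
put("img-003", b"...", modality="CT", body_part="CHEST", contrast=False)

print(search(modality="CT", body_part="CHEST"))  # ['img-001', 'img-003']
```

Because queries run against the metadata rather than against a separate PACS database, such a repository can be mined for relevant content independently of any one vendor's viewer or archive.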
Affiliation(s)
- Brian Bialecki
- CIIP, American College of Radiology, 1818 Market Street Suite 1720, Philadelphia, PA, 19103, USA.
- James Park
- Hitachi Data Systems, Santa Clara, CA, USA
- Mike Tilkin
- American College of Radiology, 1891 Preston White Drive, Reston, VA, 20191, USA

25
Abstract
The science and applications of informatics in medical imaging have advanced dramatically in the past 25 years. This article provides a selective overview of key developments in medical imaging informatics. Advances in standards and technologies for compression and transmission of digital images have enabled Picture Archiving and Communications Systems (PACS) and teleradiology. Research in speech recognition, structured reporting, ontologies, and natural language processing has improved the ability to generate and analyze the reports of imaging procedures. Informatics has provided tools to address workflow and ergonomic issues engendered by the growing volume of medical image information. Research in computer-aided detection and diagnosis of abnormalities in medical images has opened new avenues to improve patient care. The growing number of medical-imaging examinations and their large volumes of information create a natural platform for "big data" analytics, particularly when joined with high-dimensional genomic data. Radiogenomics investigates relationships between a disease's genetic and gene-expression characteristics and its imaging phenotype; this emerging field promises to help us better understand disease biology, prognosis, and treatment options. The next 25 years offer remarkable opportunities for informatics and medical imaging together to lead to further advances in both disciplines and to improve health.
Affiliation(s)
- C E Kahn
- Charles E. Kahn, Jr., Department of Radiology, 3400 Spruce Street, 1 Silverstein, Philadelphia, PA 19104, USA

26
Kumar A, Dyer S, Kim J, Li C, Leong PHW, Fulham M, Feng D. Adapting content-based image retrieval techniques for the semantic annotation of medical images. Comput Med Imaging Graph 2016; 49:37-45. [PMID: 26890880 DOI: 10.1016/j.compmedimag.2016.01.001]
Abstract
The automatic annotation of medical images is a prerequisite for building comprehensive semantic archives that can be used to enhance evidence-based diagnosis, physician education, and biomedical research. Annotation also has important applications in the automatic generation of structured radiology reports. Much of the prior research work has focused on annotating images with properties such as the modality of the image, or the biological system or body region being imaged. However, many challenges remain for the annotation of high-level semantic content in medical images (e.g., presence of calcification, vessel obstruction, etc.) due to the difficulty in discovering relationships and associations between low-level image features and high-level semantic concepts. This difficulty is further compounded by the lack of labelled training data. In this paper, we present a method for the automatic semantic annotation of medical images that leverages techniques from content-based image retrieval (CBIR). CBIR is a well-established image search technology that uses quantifiable low-level image features to represent the high-level semantic content depicted in those images. Our method extends CBIR techniques to identify or retrieve a collection of labelled images that have similar low-level features and then uses this collection to determine the best high-level semantic annotations. We demonstrate our annotation method using retrieval via weighted nearest-neighbour retrieval and multi-class classification to show that our approach is viable regardless of the underlying retrieval strategy. We experimentally compared our method with several well-established baseline techniques (classification and regression) and showed that our method achieved the highest accuracy in the annotation of liver computed tomography (CT) images.
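The weighted nearest-neighbour retrieval strategy the abstract mentions can be sketched in miniature (synthetic two-dimensional feature vectors and invented labels; not the authors' code): retrieve the labelled images with the most similar low-level features, then derive the annotation by a distance-weighted vote.

```python
# Sketch of CBIR-based semantic annotation: labels are transferred from
# retrieved similar images, weighted by inverse distance. Features are
# synthetic low-level descriptors, for demonstration only.
from math import dist
from collections import defaultdict

labelled = [
    ((0.9, 0.1), {"calcification"}),
    ((0.8, 0.2), {"calcification"}),
    ((0.1, 0.9), {"vessel obstruction"}),
]

def annotate(features, k=2, eps=1e-9):
    """Annotate a query image by a distance-weighted vote over its k nearest labelled images."""
    nearest = sorted(labelled, key=lambda item: dist(item[0], features))[:k]
    scores = defaultdict(float)
    for feat, labels in nearest:
        weight = 1.0 / (dist(feat, features) + eps)  # closer images count more
        for label in labels:
            scores[label] += weight
    return max(scores, key=scores.get)

print(annotate((0.85, 0.15)))
```

Swapping the weighted vote for a multi-class classifier over the retrieved set gives the alternative strategy the authors also evaluate, which is why the approach is framed as independent of the underlying retrieval method.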
Affiliation(s)
- Ashnil Kumar
- School of Information Technologies, University of Sydney, Australia; Institute of Biomedical Engineering and Technology, University of Sydney, Australia.
- Shane Dyer
- School of Electrical and Information Engineering, University of Sydney, Australia.
- Jinman Kim
- School of Information Technologies, University of Sydney, Australia; Institute of Biomedical Engineering and Technology, University of Sydney, Australia.
- Changyang Li
- School of Information Technologies, University of Sydney, Australia; Institute of Biomedical Engineering and Technology, University of Sydney, Australia.
- Philip H W Leong
- School of Electrical and Information Engineering, University of Sydney, Australia; Institute of Biomedical Engineering and Technology, University of Sydney, Australia.
- Michael Fulham
- School of Information Technologies, University of Sydney, Australia; Institute of Biomedical Engineering and Technology, University of Sydney, Australia; Department of Molecular Imaging, Royal Prince Alfred Hospital, Sydney, Australia; Sydney Medical School, University of Sydney, Australia.
- Dagan Feng
- School of Information Technologies, University of Sydney, Australia; Institute of Biomedical Engineering and Technology, University of Sydney, Australia; Med-X Research Institute, Shanghai Jiao Tong University, China.

27
Oberkampf H, Zillner S, Overton JA, Bauer B, Cavallaro A, Uder M, Hammon M. Semantic representation of reported measurements in radiology. BMC Med Inform Decis Mak 2016; 16:5. [PMID: 26801764 PMCID: PMC4722630 DOI: 10.1186/s12911-016-0248-9]
Abstract
Background In radiology, a vast amount of diverse data is generated, and unstructured reporting is standard. Hence, much useful information is trapped in free-text form, and often lost in translation and transmission. One relevant source of free-text data consists of reports covering the assessment of changes in tumor burden, which are needed for the evaluation of cancer treatment success. Any change of lesion size is a critical factor in follow-up examinations. It is difficult to retrieve specific information from unstructured reports and to compare them over time. Therefore, a prototype was implemented that demonstrates the structured representation of findings, allowing selective review in consecutive examinations and thus more efficient comparison over time. Methods We developed a semantic Model for Clinical Information (MCI) based on existing ontologies from the Open Biological and Biomedical Ontologies (OBO) library. MCI is used for the integrated representation of measured image findings and medical knowledge about the normal size of anatomical entities. An integrated view of the radiology findings is realized by a prototype implementation of a ReportViewer. Further, RECIST (Response Evaluation Criteria In Solid Tumors) guidelines are implemented by SPARQL queries on MCI. The evaluation is based on two data sets of German radiology reports: An oncologic data set consisting of 2584 reports on 377 lymphoma patients and a mixed data set consisting of 6007 reports on diverse medical and surgical patients. All measurement findings were automatically classified as abnormal/normal using formalized medical background knowledge, i.e., knowledge that has been encoded into an ontology. A radiologist evaluated 813 classifications as correct or incorrect. All unclassified findings were evaluated as incorrect. Results The proposed approach allows the automatic classification of findings with an accuracy of 96.4 % for oncologic reports and 92.9 % for mixed reports. 
The ReportViewer permits efficient comparison of measured findings from consecutive examinations. The implementation of RECIST guidelines with SPARQL enhances the quality of the selection and comparison of target lesions as well as the corresponding treatment response evaluation. Conclusions The developed MCI enables an accurate integrated representation of reported measurements and medical knowledge. Thus, measurements can be automatically classified and integrated in different decision processes. The structured representation is suitable for improved integration of clinical findings during decision-making. The proposed ReportViewer provides a longitudinal overview of the measurements.
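The classification step described above, comparing a measured finding against encoded background knowledge about normal sizes, reduces to a simple lookup. The sketch below uses a plain dictionary as a stand-in for the OBO-based ontology; the entities and reference limits are invented for illustration, not taken from MCI.

```python
# Hedged sketch: classify measured findings as normal/abnormal against
# background knowledge about normal anatomical sizes. The reference
# values below are illustrative placeholders, not clinical thresholds.
NORMAL_MAX_MM = {
    "lymph node (short axis)": 10.0,
    "common bile duct": 7.0,
}

def classify(entity, size_mm):
    """Return 'abnormal', 'normal', or 'unclassified' for a measured entity."""
    limit = NORMAL_MAX_MM.get(entity)
    if limit is None:
        return "unclassified"  # no encoded knowledge for this entity
    return "abnormal" if size_mm > limit else "normal"

print(classify("lymph node (short axis)", 14.0))  # abnormal
print(classify("common bile duct", 5.0))          # normal
```

In the paper this knowledge lives in an ontology queried with SPARQL rather than a dictionary, which is what allows the same facts to drive both the normal/abnormal classification and the RECIST target-lesion selection.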
Affiliation(s)
- Heiner Oberkampf
- Department of Computer Science, Software Methodologies for Distributed Systems, University of Augsburg, Universitätsstraße 6a, 86159, Augsburg, Germany; Corporate Technology, Siemens AG, Otto-Hahn-Ring 6, 81739, München, Germany.
- Sonja Zillner
- Corporate Technology, Siemens AG, Otto-Hahn-Ring 6, 81739, München, Germany; School of International Business and Entrepreneurship, Steinbeis University, Kalkofenstraße 53, 71083, Herrenberg, Germany.
- Bernhard Bauer
- Department of Computer Science, Software Methodologies for Distributed Systems, University of Augsburg, Universitätsstraße 6a, 86159, Augsburg, Germany.
- Alexander Cavallaro
- Department of Radiology, University Hospital Erlangen, Maximiliansplatz 1, 91054, Erlangen, Germany.
- Michael Uder
- Department of Radiology, University Hospital Erlangen, Maximiliansplatz 1, 91054, Erlangen, Germany.
- Matthias Hammon
- Department of Radiology, University Hospital Erlangen, Maximiliansplatz 1, 91054, Erlangen, Germany.

28
Czekierda Ł, Malawski F, Wyszkowski P. Holistic approach to design and implementation of a medical teleconsultation workspace. J Biomed Inform 2015; 57:225-44. [PMID: 26277117 DOI: 10.1016/j.jbi.2015.08.007]
Abstract
While there are many state-of-the-art approaches to introducing telemedical services in the area of medical imaging, it is hard to point to studies which would address all relevant aspects in a complete and comprehensive manner. In this paper we describe our approach to design and implementation of a universal platform for imaging medicine which is based on our longstanding experience in this area. We claim it is holistic because, contrary to most of the available studies, it addresses all aspects related to creation and utilization of a medical teleconsultation workspace. We present an extensive analysis of requirements, including possible usage scenarios, user needs, organizational and security issues and infrastructure components. We enumerate and analyze multiple usage scenarios related to medical imaging data in treatment, research and educational applications - with typical teleconsultations treated as just one of many possible options. Certain phases common to all these scenarios have been identified, with the resulting classification distinguishing several modes of operation (local vs. remote, collaborative vs. non-interactive etc.). On this basis we propose a system architecture which addresses all of the identified requirements, applying two key concepts: Service Oriented Architecture (SOA) and Virtual Organizations (VO). The SOA paradigm allows us to decompose the functionality of the system into several distinct building blocks, ensuring flexibility and reliability. The VO paradigm defines the cooperation model for all participating healthcare institutions. Our approach is validated by an ICT platform called TeleDICOM II which implements the proposed architecture. All of its main elements are described in detail and cross-checked against the listed requirements. A case study presents the role and usage of the platform in a specific scenario. Finally, our platform is compared with similar systems described in studies to date and available on the market.
Affiliation(s)
- Łukasz Czekierda
- Department of Computer Science, AGH University of Science and Technology, ul. Kawiory 21, 30-055 Kraków, Poland.
- Filip Malawski
- Department of Computer Science, AGH University of Science and Technology, ul. Kawiory 21, 30-055 Kraków, Poland.
- Przemysław Wyszkowski
- Department of Computer Science, AGH University of Science and Technology, ul. Kawiory 21, 30-055 Kraków, Poland.

29
Ellingson BM. Radiogenomics and imaging phenotypes in glioblastoma: novel observations and correlation with molecular characteristics. Curr Neurol Neurosci Rep 2015; 15:506. [PMID: 25410316 DOI: 10.1007/s11910-014-0506-0]
Abstract
Radiogenomics is a provocative new area of research based on decades of previous work examining the association between radiological and histological features. Many generalized associations have been established linking anatomical imaging traits with underlying histopathology, including associations between contrast-enhancing tumor and vascular and tumor cell proliferation, hypointensity on pre-contrast T1-weighted images and necrotic tissue, and associations between hyperintensity on T2-weighted images and edema or nonenhancing tumor. Additionally, tumor location, tumor size, composition, and descriptive features tend to show significant associations with molecular and genomic factors, likely related to the cell of origin and growth characteristics. Additionally, physiologic MRI techniques also show interesting correlations with underlying histology and genomic programs, including associations with gene expression signatures and histological subtypes. Future studies extending beyond simple radiology-histology associations are warranted in order to establish radiogenomic analyses as tools for prospectively identifying patient subtypes that may benefit from specific therapies.
Affiliation(s)
- Benjamin M Ellingson
- UCLA Brain Tumor Imaging Laboratory (BTIL), Center for Computer Vision and Imaging Biomarkers (CVIB), David Geffen School of Medicine, University of California-Los Angeles, Los Angeles, CA, USA

30
Mongkolwat P, Kleper V, Talbot S, Rubin D. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation model. J Digit Imaging 2015; 27:692-701. [PMID: 24934452 DOI: 10.1007/s10278-014-9710-3]
Abstract
Knowledge contained within in vivo imaging annotated by human experts or computer programs is typically stored as unstructured text and separated from other associated information. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation information model is an evolution of the National Institute of Health's (NIH) National Cancer Institute's (NCI) Cancer Bioinformatics Grid (caBIG®) AIM model. The model applies to various image types created by various techniques and disciplines. It has evolved in response to the feedback and changing demands from the imaging community at NCI. The foundation model serves as a base for other imaging disciplines that want to extend the type of information the model collects. The model captures physical entities and their characteristics, imaging observation entities and their characteristics, markups (two- and three-dimensional), AIM statements, calculations, image source, inferences, annotation role, task context or workflow, audit trail, AIM creator details, equipment used to create AIM instances, subject demographics, and adjudication observations. An AIM instance can be stored as a Digital Imaging and Communications in Medicine (DICOM) structured reporting (SR) object or Extensible Markup Language (XML) document for further processing and analysis. An AIM instance consists of one or more annotations and associated markups of a single finding along with other ancillary information in the AIM model. An annotation describes information about the meaning of pixel data in an image. A markup is a graphical drawing placed on the image that depicts a region of interest. This paper describes fundamental AIM concepts and how to use and extend AIM for various imaging disciplines.
Affiliation(s)
- Pattanasak Mongkolwat
- Department of Radiology, Northwestern University, 737 N. Michigan Ave, Suite 1600, Chicago, IL, 60611, USA

31
Deserno TM, Haak D, Brandenburg V, Deserno V, Classen C, Specht P. Integrated image data and medical record management for rare disease registries. A general framework and its instantiation to the German Calciphylaxis Registry. J Digit Imaging 2015; 27:702-13. [PMID: 24865858 DOI: 10.1007/s10278-014-9698-8]
Abstract
Especially for investigator-initiated research at universities and academic institutions, Internet-based rare disease registries (RDR) are required that integrate electronic data capture (EDC) with automatic image analysis or manual image annotation. We propose a modular framework merging alpha-numerical and binary data capture. In concordance with the Office of Rare Diseases Research recommendations, a requirement analysis was performed based on several RDR databases currently hosted at Uniklinik RWTH Aachen, Germany. With respect to the study management tool that is already successfully operating at the Clinical Trial Center Aachen, the Google Web Toolkit was chosen with Hibernate and Gilead connecting a MySQL database management system. Image and signal data integration and processing is supported by Apache Commons FileUpload-Library and ImageJ-based Java code, respectively. As a proof of concept, the framework is instantiated to the German Calciphylaxis Registry. The framework is composed of five mandatory core modules: (1) Data Core, (2) EDC, (3) Access Control, (4) Audit Trail, and (5) Terminology as well as six optional modules: (6) Binary Large Object (BLOB), (7) BLOB Analysis, (8) Standard Operation Procedure, (9) Communication, (10) Pseudonymization, and (11) Biorepository. Modules 1-7 are implemented in the German Calciphylaxis Registry. The proposed RDR framework is easily instantiated and directly integrates image management and analysis. As open source software, it may assist improved data collection and analysis of rare diseases in near future.
Affiliation(s)
- Thomas M Deserno
- Department of Medical Informatics, Uniklinik RWTH Aachen, Pauwelsstr. 30, 52057, Aachen, Germany

32
Abstract
OBJECTIVE Informatics innovations of the past 30 years have improved radiology quality and efficiency immensely. Radiologists are groundbreaking leaders in clinical information technology (IT), and often radiologists and imaging informaticists created, specified, and implemented these technologies, while also carrying the ongoing burdens of training, maintenance, support, and operation of these IT solutions. Being pioneers of clinical IT had advantages of local radiology control and radiology-centric products and services. As health care businesses become more clinically IT savvy, however, they are standardizing IT products and procedures across the enterprise, resulting in the loss of radiologists' local control and flexibility. Although this inevitable consequence may provide new opportunities in the long run, several questions arise. CONCLUSION What will happen to the informatics expertise within the radiology domain? Will radiology's current and future concerns be heard and their needs addressed? What should radiologists do to understand, obtain, and use informatics products to maximize efficiency and provide the most value and quality for patients and the greater health care community? This article will propose some insights and considerations as we rethink radiology informatics.
33
Wang KC, Salunkhe AR, Morrison JJ, Lee PP, Mejino JLV, Detwiler LT, Brinkley JF, Siegel EL, Rubin DL, Carrino JA. Ontology-based image navigation: exploring 3.0-T MR neurography of the brachial plexus using AIM and RadLex. Radiographics 2015; 35:142-51. [PMID: 25590394 PMCID: PMC4319494 DOI: 10.1148/rg.351130072]
Abstract
Disorders of the peripheral nervous system have traditionally been evaluated using clinical history, physical examination, and electrodiagnostic testing. In selected cases, imaging modalities such as magnetic resonance (MR) neurography may help further localize or characterize abnormalities associated with peripheral neuropathies, and the clinical importance of such techniques is increasing. However, MR image interpretation with respect to peripheral nerve anatomy and disease often presents a diagnostic challenge because the relevant knowledge base remains relatively specialized. Using the radiology knowledge resource RadLex®, a series of RadLex queries, the Annotation and Image Markup standard for image annotation, and a Web services-based software architecture, the authors developed an application that allows ontology-assisted image navigation. The application provides an image browsing interface, allowing users to visually inspect the imaging appearance of anatomic structures. By interacting directly with the images, users can access additional structure-related information that is derived from RadLex (eg, muscle innervation, muscle attachment sites). These data also serve as conceptual links to navigate from one portion of the imaging atlas to another. With 3.0-T MR neurography of the brachial plexus as the initial area of interest, the resulting application provides support to radiologists in the image interpretation process by allowing efficient exploration of the MR imaging appearance of relevant nerve segments, muscles, bone structures, vascular landmarks, anatomic spaces, and entrapment sites, and the investigation of neuromuscular relationships.
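The navigation mechanism described above, selecting a structure and following ontology-derived relations to other structures, can be illustrated with a toy knowledge base (the relation names and facts below are hand-made in the spirit of RadLex, not actual RadLex content):

```python
# Toy ontology-assisted navigation: a selected structure exposes linked
# knowledge, and the link targets become the next navigation steps.
# Relation names and entries are illustrative, not real RadLex data.
ONTOLOGY = {
    "musculocutaneous nerve": {"innervates": ["biceps brachii", "brachialis"]},
    "biceps brachii": {"innervated_by": ["musculocutaneous nerve"]},
}

def linked_structures(structure):
    """Return all structures reachable from the selected one via any relation."""
    facts = ONTOLOGY.get(structure, {})
    return sorted({target for targets in facts.values() for target in targets})

print(linked_structures("musculocutaneous nerve"))
```

In the actual application these links are resolved by RadLex queries over a Web-services architecture, and following a link jumps the viewer to the corresponding region of the imaging atlas.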
Collapse
Affiliation(s)
- Kenneth C. Wang
- From the Imaging Service, Baltimore VA Medical Center, 10 N Greene St, Room C1-24, Baltimore, MD 21201 (K.C.W., E.L.S.); Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Md (K.C.W., P.P.L.); Department of Computer Science, University of Maryland, Baltimore County, Baltimore, Md (A.R.S.); Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, Md (J.J.M., E.L.S.); Departments of Biological Structure (J.L.V.M., L.T.D., J.F.B.) and Biomedical Informatics and Medical Education (L.T.D., J.F.B.), University of Washington, Seattle, Wash; Department of Radiology, Stanford University School of Medicine, Stanford, Calif (D.L.R.); and Department of Radiology and Imaging, Hospital for Special Surgery, New York, NY (J.A.C.)
| | - Aditya R. Salunkhe
- From the Imaging Service, Baltimore VA Medical Center, 10 N Greene St, Room C1-24, Baltimore, MD 21201 (K.C.W., E.L.S.); Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Md (K.C.W., P.P.L.); Department of Computer Science, University of Maryland, Baltimore County, Baltimore, Md (A.R.S.); Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, Md (J.J.M., E.L.S.); Departments of Biological Structure (J.L.V.M., L.T.D., J.F.B.) and Biomedical Informatics and Medical Education (L.T.D., J.F.B.), University of Washington, Seattle, Wash; Department of Radiology, Stanford University School of Medicine, Stanford, Calif (D.L.R.); and Department of Radiology and Imaging, Hospital for Special Surgery, New York, NY (J.A.C.)
| | - James J. Morrison
- From the Imaging Service, Baltimore VA Medical Center, 10 N Greene St, Room C1-24, Baltimore, MD 21201 (K.C.W., E.L.S.); Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Md (K.C.W., P.P.L.); Department of Computer Science, University of Maryland, Baltimore County, Baltimore, Md (A.R.S.); Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, Md (J.J.M., E.L.S.); Departments of Biological Structure (J.L.V.M., L.T.D., J.F.B.) and Biomedical Informatics and Medical Education (L.T.D., J.F.B.), University of Washington, Seattle, Wash; Department of Radiology, Stanford University School of Medicine, Stanford, Calif (D.L.R.); and Department of Radiology and Imaging, Hospital for Special Surgery, New York, NY (J.A.C.)
| | - Pearlene P. Lee
- From the Imaging Service, Baltimore VA Medical Center, 10 N Greene St, Room C1-24, Baltimore, MD 21201 (K.C.W., E.L.S.); Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Md (K.C.W., P.P.L.); Department of Computer Science, University of Maryland, Baltimore County, Baltimore, Md (A.R.S.); Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, Md (J.J.M., E.L.S.); Departments of Biological Structure (J.L.V.M., L.T.D., J.F.B.) and Biomedical Informatics and Medical Education (L.T.D., J.F.B.), University of Washington, Seattle, Wash; Department of Radiology, Stanford University School of Medicine, Stanford, Calif (D.L.R.); and Department of Radiology and Imaging, Hospital for Special Surgery, New York, NY (J.A.C.)
| | - José L. V. Mejino
- From the Imaging Service, Baltimore VA Medical Center, 10 N Greene St, Room C1-24, Baltimore, MD 21201 (K.C.W., E.L.S.); Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Md (K.C.W., P.P.L.); Department of Computer Science, University of Maryland, Baltimore County, Baltimore, Md (A.R.S.); Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, Md (J.J.M., E.L.S.); Departments of Biological Structure (J.L.V.M., L.T.D., J.F.B.) and Biomedical Informatics and Medical Education (L.T.D., J.F.B.), University of Washington, Seattle, Wash; Department of Radiology, Stanford University School of Medicine, Stanford, Calif (D.L.R.); and Department of Radiology and Imaging, Hospital for Special Surgery, New York, NY (J.A.C.)
| | - Landon T. Detwiler
- From the Imaging Service, Baltimore VA Medical Center, 10 N Greene St, Room C1-24, Baltimore, MD 21201 (K.C.W., E.L.S.); Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Md (K.C.W., P.P.L.); Department of Computer Science, University of Maryland, Baltimore County, Baltimore, Md (A.R.S.); Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, Md (J.J.M., E.L.S.); Departments of Biological Structure (J.L.V.M., L.T.D., J.F.B.) and Biomedical Informatics and Medical Education (L.T.D., J.F.B.), University of Washington, Seattle, Wash; Department of Radiology, Stanford University School of Medicine, Stanford, Calif (D.L.R.); and Department of Radiology and Imaging, Hospital for Special Surgery, New York, NY (J.A.C.)
- James F. Brinkley, Eliot L. Siegel, Daniel L. Rubin, John A. Carrino
34. Zillner S, Neururer S. Technology Roadmap Development for Big Data Healthcare Applications. Künstliche Intelligenz 2014. DOI: 10.1007/s13218-014-0335-y. Citation(s) in RCA: 2.
35. Bosmans JML, Neri E, Ratib O, Kahn CE. Structured reporting: a fusion reactor hungry for fuel. Insights Imaging 2014; 6:129-132. PMID: 25476598; PMCID: PMC4330231; DOI: 10.1007/s13244-014-0368-7. Citation(s) in RCA: 36.
Affiliation(s)
- Jan M L Bosmans
- Departments of Radiology and Medical Imaging, Ghent University Hospital and University of Antwerp, Ghent and Antwerp, Flanders, Belgium.
36. Jain R, Poisson LM, Gutman D, Scarpace L, Hwang SN, Holder CA, Wintermark M, Rao A, Colen RR, Kirby J, Freymann J, Jaffe CC, Mikkelsen T, Flanders A. Outcome prediction in patients with glioblastoma by using imaging, clinical, and genomic biomarkers: focus on the nonenhancing component of the tumor. Radiology 2014; 272:484-493. PMID: 24646147; PMCID: PMC4263660; DOI: 10.1148/radiol.14131691. Citation(s) in RCA: 166.
Abstract
PURPOSE To correlate patient survival with morphologic imaging features and hemodynamic parameters obtained from the nonenhancing region (NER) of glioblastoma (GBM), along with clinical and genomic markers. MATERIALS AND METHODS An institutional review board waiver was obtained for this HIPAA-compliant retrospective study. Forty-five patients with GBM underwent baseline imaging with contrast material-enhanced magnetic resonance (MR) imaging and dynamic susceptibility contrast-enhanced T2*-weighted perfusion MR imaging. Molecular and clinical predictors of survival were obtained. Single and multivariable models of overall survival (OS) and progression-free survival (PFS) were explored with Kaplan-Meier estimates, Cox regression, and random survival forests. RESULTS Worsening OS (log-rank test, P = .0103) and PFS (log-rank test, P = .0223) were associated with increasing relative cerebral blood volume of NER (rCBVNER), which was higher with deep white matter involvement (t test, P = .0482) and poor NER margin definition (t test, P = .0147). NER crossing the midline was the only morphologic feature of NER associated with poor survival (log-rank test, P = .0125). Preoperative Karnofsky performance score (KPS) and resection extent (n = 30) were clinically significant OS predictors (log-rank test, P = .0176 and P = .0038, respectively). No genomic alterations were associated with survival, except patients with high rCBVNER and wild-type epidermal growth factor receptor (EGFR) mutation had significantly poor survival (log-rank test, P = .0306; area under the receiver operating characteristic curve = 0.62). Combining resection extent with rCBVNER marginally improved prognostic ability (permutation, P = .084). Random forest models of presurgical predictors indicated rCBVNER as the top predictor; also important were KPS, age at diagnosis, and NER crossing the midline. A multivariable model containing rCBVNER, age at diagnosis, and KPS can be used to group patients with more than 1 year of difference in observed median survival (0.49-1.79 years). CONCLUSION Patients with high rCBVNER and NER crossing the midline and those with high rCBVNER and wild-type EGFR mutation showed poor survival. In multivariable survival models, however, rCBVNER provided unique prognostic information that went above and beyond the assessment of all NER imaging features, as well as clinical and genomic features.
Affiliation(s)
- Laila M. Poisson, David Gutman, Lisa Scarpace, Scott N. Hwang, Chad A. Holder, Max Wintermark, Arvind Rao, Justin Kirby, John Freymann, C. Carl Jaffe, Tom Mikkelsen, Adam Flanders
- From the Division of Neuroradiology, Department of Radiology (R.J.), Bioinformatics Center, Department of Public Health Sciences (L.M.P.), and Department of Neurosurgery (R.J., L.S., T.M.), Henry Ford Health System, 2799 W Grand Blvd, Detroit, MI 48202; Department of Radiology, Emory University, Atlanta, Ga (D.G., C.A.H.); Department of Radiology, St Jude’s Children’s Research Hospital, Memphis, Tenn (S.N.H.); Department of Radiology, University of Virginia, Charlottesville, Va (M.W.); Department of Radiology, MD Anderson Cancer Center, Houston, Tex (A.R.); Department of Radiology, Brigham and Women’s Hospital, Boston, Mass (R.R.C.); Clinical Research Directorate, CMRP, SAIC-Frederick, NCI-Frederick, Frederick, Md (J.K., J.F.); Department of Radiology, Boston University, Boston, Mass (C.C.J.); and Department of Radiology, Thomas Jefferson University Hospital, Philadelphia, Pa (A.F.)
37. Colen RR, Vangel M, Wang J, Gutman DA, Hwang SN, Wintermark M, Jain R, Jilwan-Nicolas M, Chen JY, Raghavan P, Holder CA, Rubin D, Huang E, Kirby J, Freymann J, Jaffe CC, Flanders A, Zinn PO. Imaging genomic mapping of an invasive MRI phenotype predicts patient outcome and metabolic dysfunction: a TCGA glioma phenotype research group project. BMC Med Genomics 2014; 7:30. PMID: 24889866; PMCID: PMC4057583; DOI: 10.1186/1755-8794-7-30. Citation(s) in RCA: 55.
Abstract
Background Invasion of tumor cells into adjacent brain parenchyma is a major cause of treatment failure in glioblastoma (GBM). Furthermore, invasive tumors are shown to have a different genomic composition and metabolic abnormalities that allow for a more aggressive GBM phenotype and resistance to therapy. We thus seek to identify those genomic abnormalities associated with a highly aggressive and invasive GBM imaging phenotype. Methods We retrospectively identified 104 treatment-naïve glioblastoma patients from The Cancer Genome Atlas (TCGA) who had gene expression profiles and corresponding MR imaging available in The Cancer Imaging Archive (TCIA). The standardized VASARI feature-set criteria were used for the qualitative visual assessments of invasion. Patients were assigned to classes based on the presence (Class A) or absence (Class B) of statistically significant invasion parameters to create an invasive imaging signature; imaging genomic analysis was subsequently performed using the GenePattern Comparative Marker Selection module (Broad Institute). Results Our results show that patients with a combination of deep white matter tract and ependymal invasion (Class A) on imaging had a significant decrease in overall survival as compared to patients with absence of such invasive imaging features (Class B) (8.7 versus 18.6 months, p < 0.001). Mitochondrial dysfunction was the top canonical pathway associated with the Class A gene expression signature. The MYC oncogene was predicted to be the top activation regulator in Class A. Conclusion We demonstrate that MRI biomarker signatures can identify distinct GBM phenotypes associated with highly significant survival differences and specific molecular pathways. This study identifies mitochondrial dysfunction as the top canonical pathway in a very aggressive GBM phenotype. Thus, imaging-genomic analyses may prove invaluable in detecting novel targetable genomic pathways.
Affiliation(s)
- Rivka R Colen
- Department of Diagnostic Radiology, MD Anderson Cancer Center, 1400 Pressler St, Unit 1482, Rm # FCT 16,5037, Houston, TX 77030, USA.
38.
Abstract
Radiological image and data archives contain huge amounts of data that are barely utilized by current technologies. In the future, semantic technologies currently under development will enable analysis of the contents not only at the level of individual patients but also across entire data collections, resulting in new applications that will benefit routine clinical practice, teaching activities, and research. As a prerequisite, software for the semantic analysis of image and report contents must be developed, i.e. an "understanding" of the contents by the software. Based on specific ontologies, standardized protocols, and semantic image annotation, new systems will be developed that make the content of these data archives accessible and support diagnosis, quality assurance, innovative research applications and, last but not least, the merging of data from different medical disciplines, such as radiology, pathology, and clinical chemistry.
Affiliation(s)
- A Gerstmair
- Abteilung Röntgendiagnostik, Universitätsklinik Freiburg, Hugstetterstr. 55, 79106 Freiburg, Germany.
39. Reiner BI. Innovation opportunities in critical results communication: theoretical concepts. J Digit Imaging 2014; 26:605-609. PMID: 23775334; DOI: 10.1007/s10278-013-9609-4. Citation(s) in RCA: 3.
Affiliation(s)
- Bruce I Reiner
- Department of Radiology, Veterans Affairs Maryland Healthcare System, 10 North Greene Street, Baltimore, MD 21201, USA.
40. Zheng S, Wang F, Lu J. Enabling Ontology Based Semantic Queries in Biomedical Database Systems. International Journal of Semantic Computing 2014; 8:67-83. PMID: 25541585; PMCID: PMC4275106; DOI: 10.1142/s1793351x14500032. Citation(s) in RCA: 7.
Abstract
There is a lack of tools to ease the integration and ontology based semantic queries in biomedical databases, which are often annotated with ontology concepts. We aim to provide a middle layer between ontology repositories and semantically annotated databases to support semantic queries directly in the databases with expressive standard database query languages. We have developed a semantic query engine that provides semantic reasoning and query processing, and translates the queries into ontology repository operations on NCBO BioPortal. Semantic operators are implemented in the database as user defined functions extending the database engine, so semantic queries can be specified directly in standard database query languages such as SQL and XQuery. The system provides caching management to boost query performance. The system is highly adaptable to support different ontologies through easy customization. We have implemented the system, DBOntoLink, as open source software, which supports major ontologies hosted at BioPortal. DBOntoLink supports a set of common ontology based semantic operations and has them fully integrated with the IBM DB2 database management system. The system has been deployed and evaluated with an existing biomedical database for managing and querying image annotations and markups (AIM). Our performance study demonstrates the high expressiveness of semantic queries and the high efficiency of query processing.
Affiliation(s)
- Shuai Zheng
- Department of Mathematics and Computer Science, Emory University, Atlanta, Georgia, USA
- Fusheng Wang
- Department of Biomedical Informatics, Emory University, Atlanta, Georgia, USA
- James Lu
- Department of Mathematics and Computer Science, Emory University, Atlanta, Georgia, USA
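The core design described in the abstract above, semantic operators exposed as user-defined functions so that ontology reasoning can appear inside ordinary SQL, can be mimicked in miniature with SQLite. The hierarchy, table, and operator name below are invented for illustration; DBOntoLink itself resolves concepts against NCBO BioPortal and integrates with IBM DB2, neither of which is shown here.

```python
import sqlite3

# Toy is-a hierarchy standing in for an ontology (invented for illustration).
PARENT = {"glioblastoma": "glioma", "astrocytoma": "glioma", "glioma": "brain neoplasm"}

def is_subclass_of(concept, ancestor):
    """UDF: 1 if `concept` equals `ancestor` or descends from it, else 0."""
    while concept is not None:
        if concept == ancestor:
            return 1
        concept = PARENT.get(concept)
    return 0

conn = sqlite3.connect(":memory:")
conn.create_function("is_subclass_of", 2, is_subclass_of)
conn.execute("CREATE TABLE annotations (image_id TEXT, concept TEXT)")
conn.executemany("INSERT INTO annotations VALUES (?, ?)",
                 [("img1", "glioblastoma"), ("img2", "meningioma"), ("img3", "astrocytoma")])

# The semantic operator is used directly inside standard SQL:
rows = conn.execute(
    "SELECT image_id FROM annotations WHERE is_subclass_of(concept, 'glioma')"
).fetchall()
print(rows)  # img1 and img3 match via the hierarchy; img2 does not
```

The point of the UDF approach is exactly this: the query stays plain SQL, while subsumption over the ontology happens transparently inside the predicate.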
41. Roy S, Brown MS, Shih GL. Visual Interpretation with Three-Dimensional Annotations (VITA): three-dimensional image interpretation tool for radiological reporting. J Digit Imaging 2014; 27:49-57. PMID: 23979113; PMCID: PMC3903964; DOI: 10.1007/s10278-013-9624-5. Citation(s) in RCA: 1.
Abstract
This paper introduces a software framework called Visual Interpretation with Three-Dimensional Annotations (VITA) that is able to automatically generate three-dimensional (3D) visual summaries based on radiological annotations made during routine exam reporting. VITA summaries are in the form of rotating 3D volumes where radiological annotations are highlighted to place important clinical observations into a 3D context. The rendered volume is produced as a Digital Imaging and Communications in Medicine (DICOM) object and is automatically added to the study for archival in the Picture Archiving and Communication System (PACS). In addition, a video summary (e.g., MPEG4) can be generated for sharing with patients and for situations where DICOM viewers are not readily available to referring physicians. The current version of VITA is compatible with ClearCanvas; however, VITA can work with any PACS workstation that has a structured annotation implementation (e.g., Extensible Markup Language, Health Level 7, Annotation and Image Markup) and is able to seamlessly integrate into the existing reporting workflow. In a survey with referring physicians, the vast majority strongly agreed that 3D visual summaries improve the communication of the radiologists' reports and aid communication with patients.
Affiliation(s)
- Sharmili Roy
- Department of Computer Science, School of Computing, National University of Singapore, Computing 1, 13 Computing Drive, Singapore 117417, Singapore.
42. Kalpathy-Cramer J, Freymann JB, Kirby JS, Kinahan PE, Prior FW. Quantitative Imaging Network: Data Sharing and Competitive Algorithm Validation Leveraging The Cancer Imaging Archive. Transl Oncol 2014; 7:147-152. PMID: 24772218; PMCID: PMC3998686; DOI: 10.1593/tlo.13862. Citation(s) in RCA: 63.
Abstract
The Quantitative Imaging Network (QIN), supported by the National Cancer Institute, is designed to promote research and development of quantitative imaging methods and candidate biomarkers for the measurement of tumor response in clinical trial settings. An integral aspect of the QIN mission is to facilitate collaborative activities that seek to develop best practices for the analysis of cancer imaging data. The QIN working groups and teams are developing new algorithms for image analysis and novel biomarkers for the assessment of response to therapy. To validate these algorithms and biomarkers and translate them into clinical practice, algorithms need to be compared and evaluated on large and diverse data sets. Analysis competitions, or "challenges," are being conducted within the QIN as a means to accomplish this goal. The QIN has demonstrated, through its leveraging of The Cancer Imaging Archive (TCIA), that data sharing of clinical images across multiple sites is feasible and that it can enable and support these challenges. In addition to Digital Imaging and Communications in Medicine (DICOM) imaging data, many TCIA collections provide linked clinical, pathology, and "ground truth" data generated by readers that could be used for further challenges. The TCIA-QIN partnership is a successful model that provides resources for multisite sharing of clinical imaging data and the implementation of challenges to support algorithm and biomarker validation.
Affiliation(s)
- Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA
- John Blake Freymann
- Clinical Research Directorate/Clinical Monitoring Research Program (CMRP), Leidos Biomedical Research Inc, Frederick National Laboratory for Cancer Research, Frederick, MD
- Justin Stephen Kirby
- Clinical Research Directorate/Clinical Monitoring Research Program (CMRP), Leidos Biomedical Research Inc, Frederick National Laboratory for Cancer Research, Frederick, MD
- Fred William Prior
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO
43. Huo Z, Summers RM, Paquerault S, Lo J, Hoffmeister J, Armato SG, Freedman MT, Lin J, Lo SCB, Petrick N, Sahiner B, Fryd D, Yoshida H, Chan HP. Quality assurance and training procedures for computer-aided detection and diagnosis systems in clinical use. Med Phys 2014; 40:077001. PMID: 23822459; DOI: 10.1118/1.4807642. Citation(s) in RCA: 18.
Abstract
Computer-aided detection/diagnosis (CAD) is increasingly used for decision support by clinicians for detection and interpretation of diseases. However, there are no quality assurance (QA) requirements for CAD in clinical use at present. QA of CAD is important so that end users can be made aware of changes in CAD performance due to both intentional and unintentional causes. In addition, end-user training is critical to prevent improper use of CAD, which could potentially result in lower overall clinical performance. Research on QA of CAD and user training is limited to date. The purpose of this paper is to bring attention to these issues, inform the readers of the opinions of the members of the American Association of Physicists in Medicine (AAPM) CAD subcommittee, and thus stimulate further discussion in the CAD community on these topics. The recommendations in this paper are intended to be work items for AAPM task groups that will be formed to address QA and user training issues on CAD in the future. The work items may serve as a framework for the discussion and eventual design of detailed QA and training procedures for physicists and users of CAD. Some of the recommendations are considered by the subcommittee to be reasonably easy and practical and can be implemented immediately by the end users; others are considered to be "best practice" approaches, which may require significant effort, additional tools, and proper training to implement. The eventual standardization of the requirements of QA procedures for CAD will have to be determined through consensus from members of the CAD community, and user training may require the support of professional societies. It is expected that high-quality CAD and proper use of CAD could allow these systems to achieve their true potential, thus benefiting both the patients and the clinicians, and may bring about more widespread clinical use of CAD for many other diseases and applications. It is hoped that the awareness of the need for appropriate CAD QA and user training will stimulate new ideas and approaches for implementing such procedures efficiently and effectively, as well as funding opportunities to fulfill such critical efforts.
Affiliation(s)
- Zhimin Huo
- Carestream Health Inc., 1049 Ridge Road West, Rochester, New York 14615, USA
44. A picture is worth a thousand words: needs assessment for multimedia radiology reports in a large tertiary care medical center. Acad Radiol 2013; 20:1577-1583. PMID: 24200485; DOI: 10.1016/j.acra.2013.09.002. Citation(s) in RCA: 24.
Abstract
RATIONALE AND OBJECTIVES Radiology reports are the major, and often only, means of communication between radiologists and their referring clinicians. The purposes of this study are to identify referring physicians' preferences about radiology reports and to quantify their perceived value of multimedia reports (with embedded images) compared with narrative text reports. MATERIALS AND METHODS We contacted 1800 attending physicians from a range of specialties at a large tertiary care medical center via e-mail and a hospital newsletter linking to a 24-question electronic survey between July and November 2012. One hundred sixty physicians responded, yielding a response rate of 8.9%. Survey results were analyzed using Statistical Analysis Software (SAS Institute Inc, Cary, NC). RESULTS Of the 160 referring physician respondents, 142 (89%) indicated a general interest in reports with embedded images and completed the remainder of the survey questions. Of 142 respondents, 103 (73%) agreed or strongly agreed that reports with embedded images could improve the quality of interactions with radiologists; 129 respondents (91%) agreed or strongly agreed that having access to significant images enhances understanding of a text-based report; 110 respondents (77%) agreed or strongly agreed that multimedia reports would significantly improve referring physician satisfaction; and 85 respondents (60%) felt strongly or very strongly that multimedia reports would significantly improve patient care and outcomes. CONCLUSIONS Creating accessible, readable, and automatic multimedia reports should be a high priority to enhance the practice and satisfaction of referring physicians, improve patient care, and emphasize the critical role radiology plays in current medical care.
45
McLeod K, Iskandar DNFA, Burger A. Towards the Semantic Representation of Biological Images. International Journal of Intelligent Information Technologies 2013. [DOI: 10.4018/ijiit.2013100103]
Abstract
Biomedical images and models contain vast amounts of information. Regrettably, much of this information is only accessible to domain experts. This paper describes a biological use case in which this situation occurs. Motivation is given for describing images, from this use case, semantically. Furthermore, links are provided to the medical domain, demonstrating the transferability of this work. Subsequently, it is shown that a semantic representation in which every pixel is featured is needlessly expensive. This motivates the discussion of more abstract renditions, which are dealt with next. As part of this, the paper discusses the suitability of existing technologies. In particular, Region Connection Calculus and one implementation of the W3C Geospatial Vocabulary are considered. It transpires that the abstract representations provide a basic description that enables the user to perform a subset of the desired queries. However, a more complex depiction is required for this use case.
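The Region Connection Calculus (RCC) considered in this abstract can be illustrated with a minimal sketch. The relation names below follow RCC8, but for simplicity each image region is reduced to an axis-aligned bounding box (x1, y1, x2, y2); real RCC applies to arbitrary regions, so this is an illustrative approximation, not the paper's actual representation.

```python
# Classify the qualitative spatial relation between two image regions,
# approximated as axis-aligned bounding boxes (x1, y1, x2, y2).
# Relation names follow RCC8: DC, EC, PO, TPP/NTPP (and inverses), EQ.

def rcc_relation(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    if a == b:
        return "EQ"    # identical regions
    if ax2 < bx1 or bx2 < ax1 or ay2 < by1 or by2 < ay1:
        return "DC"    # disconnected: no contact at all
    if ax2 == bx1 or bx2 == ax1 or ay2 == by1 or by2 == ay1:
        return "EC"    # externally connected: boundaries touch only
    if ax1 >= bx1 and ay1 >= by1 and ax2 <= bx2 and ay2 <= by2:
        # a lies inside b; tangential if any edges coincide
        touching = ax1 == bx1 or ay1 == by1 or ax2 == bx2 or ay2 == by2
        return "TPP" if touching else "NTPP"
    if bx1 >= ax1 and by1 >= ay1 and bx2 <= ax2 and by2 <= ay2:
        touching = bx1 == ax1 or by1 == ay1 or bx2 == ax2 or by2 == ay2
        return "TPPi" if touching else "NTPPi"
    return "PO"        # partial overlap

# Two hypothetical anatomical regions annotated on a 2-D image slice:
print(rcc_relation((2, 2, 8, 8), (0, 0, 10, 10)))    # NTPP: proper part
print(rcc_relation((0, 0, 10, 10), (10, 0, 20, 10))) # EC: share a boundary
```

Such qualitative relations support queries like "which annotated structures lie inside region X" without storing a per-pixel description.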
Affiliation(s)
- Kenneth McLeod
- Department of Computer Science, Heriot-Watt University, Edinburgh, UK
- D. N. F. Awang Iskandar
- Faculty of Computer Science & Information Technology, Universiti Malaysia Sarawak, Kota Samarahan, Malaysia
- Albert Burger
- Department of Computer Science, Heriot-Watt University, Edinburgh, UK
46
Vergara-Niedermayr C, Wang F, Pan T, Kurc T, Saltz J. Semantically Interoperable XML Data. International Journal of Semantic Computing 2013; 7:237-255. [PMID: 25298789 PMCID: PMC4185431 DOI: 10.1142/s1793351x13500037]
Abstract
XML is ubiquitously used as an information exchange platform for web-based applications in healthcare, life sciences, and many other domains. Proliferating XML data are now managed through the latest native XML database technologies. XML data sources conforming to common XML schemas can be shared and integrated with syntactic interoperability. Semantic interoperability can be achieved through semantic annotations of data models using common data elements linked to concepts from ontologies. In this paper, we present a framework and software system to support the development of semantically interoperable XML-based data sources that can be shared through a Grid infrastructure. We also present our work on supporting semantically validated XML data through semantic annotations for XML Schema, semantic validation, and semantic authoring of XML data. We demonstrate the use of the system for a biomedical database of medical image annotations and markups.
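The core idea of semantic validation described in this abstract, checking each XML data element not only against a schema but against the controlled vocabulary of the concept it is bound to, can be sketched in a few lines. The element names, attribute name (`conceptCode`), codes, and vocabulary below are all hypothetical placeholders, not the paper's actual data model or Grid framework.

```python
# Minimal sketch of semantic validation of XML data: each element carries a
# concept annotation, and its value is checked against that concept's
# permissible values from a controlled vocabulary.
import xml.etree.ElementTree as ET

# Hypothetical common-data-element registry: concept code -> allowed values
VOCABULARY = {
    "C0001": {"CT", "MR", "US", "PET"},          # imaging modality
    "C0002": {"axial", "coronal", "sagittal"},   # image orientation
}

def semantic_errors(xml_text):
    """Return (tag, value) pairs whose value violates the concept binding."""
    errors = []
    root = ET.fromstring(xml_text)
    for elem in root.iter():
        code = elem.get("conceptCode")           # the semantic annotation
        if code in VOCABULARY and elem.text not in VOCABULARY[code]:
            errors.append((elem.tag, elem.text))
    return errors

doc = """<annotation>
  <modality conceptCode="C0001">MR</modality>
  <orientation conceptCode="C0002">oblique</orientation>
</annotation>"""
print(semantic_errors(doc))   # 'oblique' is not in the bound vocabulary
```

A syntactically valid document can thus still fail semantically, which is precisely the gap the paper's framework addresses.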
Affiliation(s)
- Fusheng Wang
- Center for Comprehensive Informatics, Department of Biomedical Informatics, Emory University, Atlanta, Georgia, USA
- Tony Pan
- Center for Comprehensive Informatics, Department of Biomedical Informatics, Emory University, Atlanta, Georgia, USA
- Tahsin Kurc
- Center for Comprehensive Informatics, Department of Biomedical Informatics, Emory University, Atlanta, Georgia, USA
- Joel Saltz
- Center for Comprehensive Informatics, Department of Biomedical Informatics, Emory University, Atlanta, Georgia, USA
47
Bui AAT, Hsu W, Arnold C, El-Saden S, Aberle DR, Taira RK. Imaging-based observational databases for clinical problem solving: the role of informatics. J Am Med Inform Assoc 2013; 20:1053-8. [PMID: 23775172 DOI: 10.1136/amiajnl-2012-001340]
Abstract
Imaging has become a prevalent tool in the diagnosis and treatment of many diseases, providing a unique in vivo, multi-scale view of anatomic and physiologic processes. With the increased use of imaging and its progressive technical advances, the role of imaging informatics is now evolving: from one of managing images, to one of integrating the full scope of clinical information needed to contextualize and link observations across phenotypic and genotypic scales. Several challenges exist for imaging informatics, including the need for methods to transform clinical imaging studies and associated data into structured information that can be organized and analyzed. We examine some of these challenges in establishing imaging-based observational databases that can support the creation of comprehensive disease models. The development of these databases and the ensuing models can aid in medical decision making and knowledge discovery and, ultimately, transform the use of imaging to support individually tailored patient care.
Affiliation(s)
- Alex A T Bui
- Medical Imaging Informatics (MII) Group, Department of Radiological Sciences, UCLA David Geffen School of Medicine, Los Angeles, California, USA
48
Gutman DA, Cooper LAD, Hwang SN, Holder CA, Gao J, Aurora TD, Dunn WD, Scarpace L, Mikkelsen T, Jain R, Wintermark M, Jilwan M, Raghavan P, Huang E, Clifford RJ, Mongkolwat P, Kleper V, Freymann J, Kirby J, Zinn PO, Moreno CS, Jaffe C, Colen R, Rubin DL, Saltz J, Flanders A, Brat DJ. MR imaging predictors of molecular profile and survival: multi-institutional study of the TCGA glioblastoma data set. Radiology 2013; 267:560-9. [PMID: 23392431 PMCID: PMC3632807 DOI: 10.1148/radiol.13120118]
Abstract
PURPOSE To conduct a comprehensive analysis of radiologist-made assessments of glioblastoma (GBM) tumor size and composition by using a community-developed controlled terminology of magnetic resonance (MR) imaging visual features as they relate to genetic alterations, gene expression class, and patient survival. MATERIALS AND METHODS Because all study patients had been previously deidentified by the Cancer Genome Atlas (TCGA), a publicly available data set that contains no linkage to patient identifiers and that is HIPAA compliant, no institutional review board approval was required. Presurgical MR images of 75 patients with GBM with genetic data in the TCGA portal were rated by three neuroradiologists for size, location, and tumor morphology by using a standardized feature set. Interrater agreements were analyzed by using the Krippendorff α statistic and intraclass correlation coefficient. Associations between survival, tumor size, and morphology were determined by using multivariate Cox regression models; associations between imaging features and genomics were studied by using the Fisher exact test. RESULTS Interrater analysis showed significant agreement in terms of contrast material enhancement, nonenhancement, necrosis, edema, and size variables. Contrast-enhanced tumor volume and longest axis length of tumor were strongly associated with poor survival (respectively, hazard ratio: 8.84, P = .0253, and hazard ratio: 1.02, P = .00973), even after adjusting for Karnofsky performance score (P = .0208). Proneural class GBM had significantly lower levels of contrast enhancement (P = .02) than other subtypes, while mesenchymal GBM showed lower levels of nonenhanced tumor (P < .01). 
CONCLUSION This analysis demonstrates a method for consistent image feature annotation capable of reproducibly characterizing brain tumors; this study shows that radiologists' estimations of macroscopic imaging features can be combined with genetic alterations and gene expression subtypes to provide deeper insight into the underlying biologic properties of GBM subsets.
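The Fisher exact test used in this study to relate imaging features to genomic subtypes can be sketched from first principles via the hypergeometric distribution, needing only the standard library (in practice one would call an established routine such as scipy.stats.fisher_exact). The 2x2 counts in the usage example are invented for illustration; they are not the study's data.

```python
# Two-sided Fisher exact test for a 2x2 contingency table, implemented
# with the hypergeometric distribution and math.comb only.
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the table [[a, b], [c, d]]."""
    row1, col1, n = a + b, a + c, a + b + c + d

    def prob(x):
        # Hypergeometric probability that cell (0, 0) equals x, margins fixed
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = prob(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    # Two-sided p: sum over all tables at least as extreme as the observed one
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# Hypothetical counts: an imaging feature present in 8 of 10 tumors of one
# subtype and 1 of 6 tumors of another.
p = fisher_exact_2x2(8, 2, 1, 5)
print(round(p, 4))   # 0.035
```

With such small per-subtype counts, exact tests are preferred over chi-squared approximations, which is presumably why the study used them.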
Affiliation(s)
- David A Gutman
- Department of Biomedical Informatics, 36 Eagle Row, Room 572 PAIS, Emory University Hospital, Atlanta, GA 30322, USA.
49
Wang F, Kong J, Gao J, Cooper LAD, Kurc T, Zhou Z, Adler D, Vergara-Niedermayr C, Katigbak B, Brat DJ, Saltz JH. A high-performance spatial database based approach for pathology imaging algorithm evaluation. J Pathol Inform 2013; 4:5. [PMID: 23599905 PMCID: PMC3624706 DOI: 10.4103/2153-3539.108543]
Abstract
BACKGROUND Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. CONTEXT The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. AIMS (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) develop a set of queries to support data sampling and result comparisons; (4) achieve high-performance computation capacity via a parallel data management infrastructure, with parallel data loading and spatial indexing optimizations. MATERIALS AND METHODS We have considered two scenarios for algorithm evaluation: (1) algorithm comparison, where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation, where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations.
The validated data were formatted based on the PAIS data model and loaded into a spatial database. To support efficient data loading, we have implemented a parallel data loading tool that takes advantage of multi-core CPUs to accelerate data injection. The spatial database manages both geometric shapes and image features or classifications, and enables spatial sampling, result comparison, and result aggregation through expressive structured query language (SQL) queries with spatial extensions. To provide scalable and efficient query support, we have employed a shared-nothing parallel database architecture, which distributes data homogeneously across multiple database partitions to take advantage of parallel computation power and implements spatial indexing to achieve high I/O throughput. RESULTS Our work proposes a high-performance, parallel spatial database platform for algorithm validation and comparison. This platform was evaluated by storing, managing, and comparing analysis results from a set of brain tumor whole-slide images. The tools we developed are open source and available for download. CONCLUSIONS Pathology image algorithm validation and comparison are essential to iterative algorithm development and refinement. One critical component is the support for queries involving spatial predicates and comparisons. In our work, we develop an efficient data model and parallel database approach to model, normalize, manage, and query large volumes of analytical image result data. Our experiments demonstrate that the data partitioning strategy and the grid-based indexing result in good data distribution across database nodes and reduce I/O overhead in spatial join queries through parallel retrieval of relevant data and quick subsetting of datasets. The set of tools in the framework provides a full pipeline to normalize, load, manage, and query analytical results for algorithm evaluation.
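The grid-based spatial filtering this abstract describes, assigning object boundaries to fixed-size grid cells so a spatial join can match cells cheaply before running the exact geometric test, can be sketched with plain SQLite standing in for the paper's parallel spatial database. Boundaries are reduced to bounding boxes, and the table and column names are invented for illustration, not the PAIS schema.

```python
# Grid-filtered spatial join sketch: candidate pairs must share a grid cell;
# the WHERE clause then applies the exact bounding-box intersection test.
import sqlite3

CELL = 100  # grid cell size in pixels

def cells(x1, y1, x2, y2):
    """Grid cells overlapped by a bounding box (integer coordinates)."""
    return [(cx, cy)
            for cx in range(x1 // CELL, x2 // CELL + 1)
            for cy in range(y1 // CELL, y2 // CELL + 1)]

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE markup(id INTEGER, algo TEXT, x1 INT, y1 INT, x2 INT, y2 INT);
    CREATE TABLE grid(cx INT, cy INT, markup_id INT);
    CREATE INDEX grid_idx ON grid(cx, cy);
""")

def insert(mid, algo, box):
    db.execute("INSERT INTO markup VALUES (?,?,?,?,?,?)", (mid, algo, *box))
    db.executemany("INSERT INTO grid VALUES (?,?,?)",
                   [(cx, cy, mid) for cx, cy in cells(*box)])

# Hypothetical nucleus boundaries from two segmentation algorithms:
insert(1, "A", (10, 10, 50, 50))
insert(2, "B", (40, 40, 90, 90))      # overlaps markup 1
insert(3, "B", (400, 400, 450, 450))  # far away: pruned by the grid join

pairs = db.execute("""
    SELECT DISTINCT a.id, b.id FROM grid ga
    JOIN grid gb ON ga.cx = gb.cx AND ga.cy = gb.cy
    JOIN markup a ON a.id = ga.markup_id AND a.algo = 'A'
    JOIN markup b ON b.id = gb.markup_id AND b.algo = 'B'
    WHERE a.x1 <= b.x2 AND b.x1 <= a.x2 AND a.y1 <= b.y2 AND b.y1 <= a.y2
""").fetchall()
print(pairs)   # [(1, 2)]
```

The indexed grid table keeps the join from comparing every A boundary with every B boundary, which is the same I/O-reduction idea the paper reports for its partitioned, spatially indexed database.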
Affiliation(s)
- Fusheng Wang
- Department of Biomedical Informatics, Emory University, USA; Center for Comprehensive Informatics, Emory University, USA
50
Marcus DS, Erickson BJ, Pan T. Imaging infrastructure for research. Part 2. Data management practices. J Digit Imaging 2013; 25:566-9. [PMID: 22710986 DOI: 10.1007/s10278-012-9502-6]
Abstract
In part one of this series, best practices were described for acquiring and handling data at study sites and importing them into an image repository or database. Here, we present a similar treatment on data management practices for imaging-based studies.
Affiliation(s)
- Daniel S Marcus
- Radiology, Washington University School of Medicine, 4525 Scott Ave, Campus Box 8225, St. Louis, MO 63110, USA.