1. Diot-Dejonghe T, Leporq B, Bouhamama A, Ratiney H, Pilleul F, Beuf O, Cervenansky F. Development of a Secure Web-Based Medical Imaging Analysis Platform: The AWESOMME Project. Journal of Imaging Informatics in Medicine 2024; 37:2612-2626. PMID: 38689149; PMCID: PMC11522235; DOI: 10.1007/s10278-024-01110-0.
Abstract
Precision medicine research benefits from machine learning in the creation of robust models adapted to the processing of patient data. This applies both to pathology identification in images, i.e., annotation or segmentation, and to computer-aided diagnosis for classification or prediction. It comes with a strong need to exploit and visualize large volumes of images and associated medical data. The work carried out in this paper follows on from a main case study piloted in a cancer center. It proposes an analysis pipeline for patients with osteosarcoma through segmentation, feature extraction, and application of a deep learning model to predict response to treatment. The main aim of the AWESOMME project is to leverage this work and implement the pipeline on an easy-to-access, secure web platform. The proposed web application is based on a three-component architecture: a data server, a server for heavy computation and authentication, and a medical imaging web framework with a user interface. These existing components have been enhanced to meet the needs of security and traceability for the continuous production of expert data. The platform innovates by covering all steps of medical imaging processing (visualization and segmentation, feature extraction, and aided diagnosis) and enables the testing and use of machine learning models. The infrastructure is operational, deployed in internal production, and currently being installed in the hospital environment. The extension of the case study and user feedback enabled us to fine-tune functionalities and showed that AWESOMME is a modular solution capable of analyzing medical data and sharing research algorithms with in-house clinicians.
Affiliation(s)
- Tiphaine Diot-Dejonghe
- INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, F-69XXX, France
- Benjamin Leporq
- INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, F-69XXX, France
- Amine Bouhamama
- INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, F-69XXX, France
- Department of Radiology, Centre Léon Bérard, 28 Prom. Léa et Napoléon Bullukian, Lyon, 69008, France
- Helene Ratiney
- INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, F-69XXX, France
- Frank Pilleul
- INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, F-69XXX, France
- Department of Radiology, Centre Léon Bérard, 28 Prom. Léa et Napoléon Bullukian, Lyon, 69008, France
- Olivier Beuf
- INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, F-69XXX, France
- Frederic Cervenansky
- INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, F-69XXX, France

2. Alkim E, Dowst H, DiCarlo J, Dobrolecki LE, Hernández-Herrera A, Hormuth DA, Liao Y, McOwiti A, Pautler R, Rimawi M, Roark A, Srinivasan RR, Virostko J, Zhang B, Zheng F, Rubin DL, Yankeelov TE, Lewis MT. Toward Practical Integration of Omic and Imaging Data in Co-Clinical Trials. Tomography 2023; 9:810-828. PMID: 37104137; PMCID: PMC10144684; DOI: 10.3390/tomography9020066.
Abstract
Co-clinical trials are the concurrent or sequential evaluation of therapeutics in both patients clinically and patient-derived xenografts (PDX) pre-clinically, in a manner designed to match the pharmacokinetics and pharmacodynamics of the agent(s) used. The primary goal is to determine the degree to which PDX cohort responses recapitulate patient cohort responses at the phenotypic and molecular levels, such that pre-clinical and clinical trials can inform one another. A major issue is how to manage, integrate, and analyze the abundance of data generated across both spatial and temporal scales, as well as across species. To address this issue, we are developing MIRACCL (molecular and imaging response analysis of co-clinical trials), a web-based analytical tool. For prototyping, we simulated data for a co-clinical trial in "triple-negative" breast cancer (TNBC) by pairing pre- (T0) and on-treatment (T1) magnetic resonance imaging (MRI) from the I-SPY2 trial, as well as PDX-based T0 and T1 MRI. Baseline (T0) and on-treatment (T1) RNA expression data were also simulated for TNBC and PDX. Image features derived from both datasets were cross-referenced to omic data to evaluate MIRACCL functionality for correlating and displaying MRI-based changes in tumor size, vascularity, and cellularity with changes in mRNA expression as a function of treatment.
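The core operation MIRACCL is being built to support, cross-referencing treatment-induced changes in imaging features with changes in mRNA expression, can be illustrated with a minimal sketch. The simulated data, variable names, and the use of a Pearson correlation below are assumptions for illustration only, not the MIRACCL implementation.

```python
# Minimal sketch (not MIRACCL code): correlate the change in an MRI-derived feature
# between baseline (T0) and on-treatment (T1) time points with the change in
# expression of a single gene across a simulated cohort.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_patients = 30

# Hypothetical per-patient measurements at T0 and T1.
volume_t0 = rng.normal(20.0, 5.0, n_patients)   # tumor size at baseline
volume_t1 = rng.normal(14.0, 5.0, n_patients)   # tumor size on treatment
expr_t0 = rng.normal(8.0, 1.0, n_patients)      # gene expression at baseline
expr_t1 = rng.normal(7.0, 1.0, n_patients)      # gene expression on treatment

delta_volume = volume_t1 - volume_t0             # imaging response
delta_expr = expr_t1 - expr_t0                   # transcriptomic response

r, p = pearsonr(delta_volume, delta_expr)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```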
Affiliation(s)
- Emel Alkim
- Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA 94305, USA
- Heidi Dowst
- Dan L. Duncan Cancer Center, Baylor College of Medicine, Houston, TX 77030, USA
- Julie DiCarlo
- Oden Institute for Computational Engineering and Sciences, Austin, TX 78712, USA
- Livestrong Cancer Institutes, Austin, TX 78712, USA
- Lacey E Dobrolecki
- Lester and Sue Smith Breast Center, Baylor College of Medicine, Houston, TX 77030, USA
- Department of Molecular and Cellular Biology and Radiology, Baylor College of Medicine, Houston, TX 77030, USA
- David A Hormuth
- Oden Institute for Computational Engineering and Sciences, Austin, TX 78712, USA
- Livestrong Cancer Institutes, Austin, TX 78712, USA
- Yuxing Liao
- Lester and Sue Smith Breast Center, Baylor College of Medicine, Houston, TX 77030, USA
- Apollo McOwiti
- Dan L. Duncan Cancer Center, Baylor College of Medicine, Houston, TX 77030, USA
- Robia Pautler
- Department of Physiology, Baylor College of Medicine, Houston, TX 77030, USA
- Mothaffar Rimawi
- Lester and Sue Smith Breast Center, Baylor College of Medicine, Houston, TX 77030, USA
- Department of Medicine, Baylor College of Medicine, Houston, TX 77030, USA
- Ashley Roark
- Lester and Sue Smith Breast Center, Baylor College of Medicine, Houston, TX 77030, USA
- Department of Medicine, Baylor College of Medicine, Houston, TX 77030, USA
- Jack Virostko
- Oden Institute for Computational Engineering and Sciences, Austin, TX 78712, USA
- Livestrong Cancer Institutes, Austin, TX 78712, USA
- Department of Oncology, The University of Texas at Austin, Austin, TX 78712, USA
- Department of Diagnostic Medicine, The University of Texas at Austin, Austin, TX 78712, USA
- Bing Zhang
- Lester and Sue Smith Breast Center, Baylor College of Medicine, Houston, TX 77030, USA
- Fei Zheng
- Dan L. Duncan Cancer Center, Baylor College of Medicine, Houston, TX 77030, USA
- Daniel L Rubin
- Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA 94305, USA
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Department of Medicine, Stanford University School of Medicine, Stanford, CA 94305, USA
- Thomas E Yankeelov
- Oden Institute for Computational Engineering and Sciences, Austin, TX 78712, USA
- Livestrong Cancer Institutes, Austin, TX 78712, USA
- Department of Oncology, The University of Texas at Austin, Austin, TX 78712, USA
- Department of Diagnostic Medicine, The University of Texas at Austin, Austin, TX 78712, USA
- Department of Biomedical Engineering, The University of Texas at Austin, Austin, TX 78712, USA
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Michael T Lewis
- Dan L. Duncan Cancer Center, Baylor College of Medicine, Houston, TX 77030, USA
- Lester and Sue Smith Breast Center, Baylor College of Medicine, Houston, TX 77030, USA
- Department of Molecular and Cellular Biology and Radiology, Baylor College of Medicine, Houston, TX 77030, USA

3. Chen TT, Sun YC, Chu WC, Lien CY. BlueLight: An Open Source DICOM Viewer Using Low-Cost Computation Algorithm Implemented with JavaScript Using Advanced Medical Imaging Visualization. J Digit Imaging 2023; 36:753-763. PMID: 36538245; PMCID: PMC10039132; DOI: 10.1007/s10278-022-00746-0.
Abstract
Recently, WebGL has been widely used in numerous web-based medical image viewers to present advanced imaging visualization. However, in the medical imaging scenario, challenges of computation time and memory consumption limit the use of advanced image renderings, such as volume rendering and multiplanar reformation/reconstruction, on low-cost mobile devices. In this study, we propose a client-side, low-cost-computation rendering algorithm for common two- and three-dimensional medical imaging visualization, implemented in pure JavaScript. In particular, we used cascading style sheet (CSS) transform functions combined with Digital Imaging and Communications in Medicine (DICOM)-related imaging to replace high-computation application programming interfaces, reducing computation time and memory consumption when interpreting medical images in web browsers. The results show that the proposed algorithm significantly reduced the consumption of central and graphics processing units on various web browsers. The algorithm was implemented in BlueLight, an open-source web-based DICOM viewer; the results show that it has sufficient rendering performance to display 3D medical images with DICOM-compliant annotations and can connect to image archives via DICOMweb.
Keywords: WebGL, DICOMweb, Multiplanar reconstruction, Volume rendering, DICOM, JavaScript, Zero-footprint.
Affiliation(s)
- Tseng-Tse Chen
- Department of Information Management, National Taipei University of Nursing and Health Sciences, Taipei, Taiwan
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ying-Chou Sun
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan
- Department of Medical Imaging and Radiological Technology, Yuanpei University of Medical Technology, Hsinchu, Taiwan
- Woei-Chyn Chu
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Chung-Yueh Lien
- Department of Information Management, National Taipei University of Nursing and Health Sciences, Taipei, Taiwan

4. Cohen RY, Sodickson AD. An Orchestration Platform that Puts Radiologists in the Driver's Seat of AI Innovation: a Methodological Approach. J Digit Imaging 2023; 36:700-714. PMID: 36417024; PMCID: PMC10039211; DOI: 10.1007/s10278-022-00649-0.
Abstract
Current AI-driven research in radiology requires resources and expertise that are often inaccessible to small and resource-limited labs. The clinicians who are able to participate in AI research are frequently well-funded, well-staffed, and either have significant experience with AI and computing, or have access to colleagues or facilities that do. Current imaging data are clinician-oriented and not easily amenable to machine learning initiatives, resulting in inefficient, time-consuming, and costly efforts that rely upon a crew of data engineers and machine learning scientists, and all too often preclude radiologists from driving AI research and innovation. We present the system and methodology we have developed to address infrastructure and platform needs, while reducing the staffing and resource barriers to entry. We emphasize a data-first and modular approach that streamlines the AI development and deployment process while providing efficient and familiar interfaces for radiologists, such that they can be the drivers of new AI innovations.
Affiliation(s)
- Raphael Y. Cohen
- Department of Radiology, Division of Emergency Radiology, Brigham and Women’s Hospital, Boston, MA 02115, USA
- Aaron D. Sodickson
- Department of Radiology, Division of Emergency Radiology, Brigham and Women’s Hospital, Boston, MA 02115, USA
- Harvard Medical School, Boston, MA 02115, USA

5. Bercean BA, Birhala A, Ardelean PG, Barbulescu I, Benta MM, Rasadean CD, Costachescu D, Avramescu C, Tenescu A, Iarca S, Buburuzan AS, Marcu M, Birsasteanu F. Evidence of a cognitive bias in the quantification of COVID-19 with CT: an artificial intelligence randomised clinical trial. Sci Rep 2023; 13:4887. PMID: 36966179; PMCID: PMC10039355; DOI: 10.1038/s41598-023-31910-3.
Abstract
Chest computed tomography (CT) has played a valuable, distinct role in the screening, diagnosis, and follow-up of COVID-19 patients. The quantification of COVID-19 pneumonia on CT has proven to be an important predictor of the treatment course and outcome of the patient, although it remains heavily reliant on the radiologist's subjective perception. Here, we show that with the adoption of CT for COVID-19 management, a new type of psychophysical bias has emerged in radiology. A preliminary survey of 40 radiologists and a retrospective analysis of CT data from 109 patients from two hospitals revealed that radiologists overestimated the percentage of lung involvement by 10.23 ± 4.65% and 15.8 ± 6.6%, respectively. In the subsequent randomised controlled trial, artificial intelligence (AI) decision support reduced the absolute overestimation error (P < 0.001) from 9.5% ± 6.6 (No-AI analysis arm, n = 38) to 1.0% ± 5.2 (AI analysis arm, n = 38). These results indicate a human perception bias in radiology that has clinically meaningful effects on the quantitative analysis of COVID-19 on CT. The objectivity of AI was shown to be a valuable complement in mitigating the radiologist's subjectivity, reducing the overestimation tenfold. Trial registration: ClinicalTrials.gov identifier NCT05282056; date of registration: 01/02/2022.
Affiliation(s)
- Bogdan A Bercean
- Rayscape, 5, Nicolae Iorga, 010431, Bucharest, Romania
- Politehnica University of Timișoara, 2, Victoriei Square, 300006, Timisoara, Romania
- Paula G Ardelean
- Rayscape, 5, Nicolae Iorga, 010431, Bucharest, Romania
- Department of Radiology, Pius Brinzeu County Emergency Hospital, 156, Liviu Rebreanu, 300723, Timisoara, Romania
- Ioana Barbulescu
- Rayscape, 5, Nicolae Iorga, 010431, Bucharest, Romania
- Department of Radiology, Pius Brinzeu County Emergency Hospital, 156, Liviu Rebreanu, 300723, Timisoara, Romania
- Marius M Benta
- Rayscape, 5, Nicolae Iorga, 010431, Bucharest, Romania
- Department of Radiology, Pius Brinzeu County Emergency Hospital, 156, Liviu Rebreanu, 300723, Timisoara, Romania
- Cristina D Rasadean
- Rayscape, 5, Nicolae Iorga, 010431, Bucharest, Romania
- Department of Radiology, Pius Brinzeu County Emergency Hospital, 156, Liviu Rebreanu, 300723, Timisoara, Romania
- Dan Costachescu
- Rayscape, 5, Nicolae Iorga, 010431, Bucharest, Romania
- Victor Babeş University of Medicine and Pharmacy, 2, Eftimie Murgu Square, 300041, Timisoara, Romania
- Cristian Avramescu
- Rayscape, 5, Nicolae Iorga, 010431, Bucharest, Romania
- Politehnica University of Timișoara, 2, Victoriei Square, 300006, Timisoara, Romania
- Andrei Tenescu
- Rayscape, 5, Nicolae Iorga, 010431, Bucharest, Romania
- Politehnica University of Timișoara, 2, Victoriei Square, 300006, Timisoara, Romania
- Stefan Iarca
- Rayscape, 5, Nicolae Iorga, 010431, Bucharest, Romania
- Alexandru S Buburuzan
- Rayscape, 5, Nicolae Iorga, 010431, Bucharest, Romania
- The University of Manchester, Oxford Rd, Manchester, M13 9PL, UK
- Marius Marcu
- Politehnica University of Timișoara, 2, Victoriei Square, 300006, Timisoara, Romania
- Florin Birsasteanu
- Department of Radiology, Pius Brinzeu County Emergency Hospital, 156, Liviu Rebreanu, 300723, Timisoara, Romania
- Victor Babeş University of Medicine and Pharmacy, 2, Eftimie Murgu Square, 300041, Timisoara, Romania

6. Abler D, Schaer R, Oreiller V, Verma H, Reichenbach J, Aidonopoulos O, Evéquoz F, Jreige M, Prior JO, Depeursinge A. QuantImage v2: a comprehensive and integrated physician-centered cloud platform for radiomics and machine learning research. Eur Radiol Exp 2023; 7:16. PMID: 36947346; PMCID: PMC10033788; DOI: 10.1186/s41747-023-00326-z.
Abstract
BACKGROUND: Radiomics, the field of image-based computational medical biomarker research, has experienced rapid growth over the past decade due to its potential to revolutionize the development of personalized decision support models. However, despite its research momentum and important advances toward methodological standardization, the translation of radiomics prediction models into clinical practice progresses only slowly. The lack of physicians leading the development of radiomics models and the insufficient integration of radiomics tools in the clinical workflow contribute to this slow uptake.
METHODS: We propose a physician-centered vision of radiomics research and derive minimal functional requirements for radiomics research software to support this vision. Free-to-access radiomics tools and frameworks were reviewed to identify best practices and reveal the shortcomings of existing software solutions in optimally supporting physician-driven radiomics research in a clinical environment.
RESULTS: Support for user-friendly development and evaluation of radiomics prediction models via machine learning was found to be missing in most tools. QuantImage v2 (QI2) was designed and implemented to address these shortcomings. QI2 relies on well-established existing tools and open-source libraries to realize and concretely demonstrate the potential of a one-stop tool for physician-driven radiomics research. It provides web-based access to cohort management, feature extraction, and visualization, and supports "no-code" development and evaluation of machine learning models against patient-specific outcome data.
CONCLUSIONS: QI2 fills a gap in the radiomics software landscape by enabling "no-code" radiomics research, including model validation, in a clinical environment. Further information about QI2, a public instance of the system, and its source code is available at https://medgift.github.io/quantimage-v2-info/
Key points:
- As domain experts, physicians play a key role in the development of radiomics models.
- Existing software solutions do not support physician-driven research optimally.
- QuantImage v2 implements a physician-centered vision for radiomics research.
- QuantImage v2 is a web-based, "no-code" radiomics research platform.
Affiliation(s)
- Daniel Abler
- Institute of Informatics, School of Management, HES-SO Valais-Wallis, Sierre, Switzerland
- Department of Oncology, Precision Oncology Center, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Roger Schaer
- Institute of Informatics, School of Management, HES-SO Valais-Wallis, Sierre, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Valentin Oreiller
- Institute of Informatics, School of Management, HES-SO Valais-Wallis, Sierre, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Himanshu Verma
- Knowledge and Intelligence Design Group, Delft University of Technology, Delft, The Netherlands
- Julien Reichenbach
- Institute of Informatics, School of Management, HES-SO Valais-Wallis, Sierre, Switzerland
- Orfeas Aidonopoulos
- Institute of Informatics, School of Management, HES-SO Valais-Wallis, Sierre, Switzerland
- Florian Evéquoz
- Institute of Informatics, School of Management, HES-SO Valais-Wallis, Sierre, Switzerland
- Mario Jreige
- Department of Nuclear Medicine and Molecular Imaging, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- John O Prior
- Department of Nuclear Medicine and Molecular Imaging, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Adrien Depeursinge
- Institute of Informatics, School of Management, HES-SO Valais-Wallis, Sierre, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland

7. Peehl DM, Badea CT, Chenevert TL, Daldrup-Link HE, Ding L, Dobrolecki LE, Houghton AM, Kinahan PE, Kurhanewicz J, Lewis MT, Li S, Luker GD, Ma CX, Manning HC, Mowery YM, O’Dwyer PJ, Pautler RG, Rosen MA, Roudi R, Ross BD, Shoghi KI, Sriram R, Talpaz M, Wahl RL, Zhou R. Animal Models and Their Role in Imaging-Assisted Co-Clinical Trials. Tomography 2023; 9:657-680. PMID: 36961012; PMCID: PMC10037611; DOI: 10.3390/tomography9020053.
Abstract
The availability of high-fidelity animal models for oncology research has grown enormously in recent years, enabling preclinical studies relevant to the prevention, diagnosis, and treatment of cancer to be undertaken. This has led to increased opportunities to conduct co-clinical trials, which are studies in patients carried out in parallel with, or sequentially to, studies in animal models of cancer that mirror the biology of the patients' tumors. Patient-derived xenografts (PDX) and genetically engineered mouse models (GEMM) are considered to be the models that best represent human disease and have high translational value. Notably, one element of co-clinical trials that still needs significant optimization is quantitative imaging. The National Cancer Institute has organized a Co-Clinical Imaging Resource Program (CIRP) network to establish best practices for co-clinical imaging and to optimize translational quantitative imaging methodologies. This overview describes the ten co-clinical trials of investigators from eleven institutions who are currently supported by the CIRP initiative and are members of the Animal Models and Co-clinical Trials (AMCT) Working Group. Each team describes their corresponding clinical trial, type of cancer targeted, rationale for choice of animal models, therapy, and imaging modalities. The strengths and weaknesses of the co-clinical trial design and the challenges encountered are considered. The rich research resources generated by the members of the AMCT Working Group will benefit the broad research community and improve the quality and translational impact of imaging in co-clinical trials.
Affiliation(s)
- Donna M. Peehl
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94158, USA
- Cristian T. Badea
- Department of Radiology, Duke University Medical Center, Durham, NC 27710, USA
- Thomas L. Chenevert
- Department of Radiology and the Center for Molecular Imaging, University of Michigan School of Medicine, Ann Arbor, MI 48109, USA
- Heike E. Daldrup-Link
- Molecular Imaging Program at Stanford (MIPS), Department of Radiology, Stanford University, Stanford, CA 94305, USA
- Li Ding
- Department of Medicine, Washington University School of Medicine, St. Louis, MO 63110, USA
- Lacey E. Dobrolecki
- Advanced Technology Cores, Baylor College of Medicine, Houston, TX 77030, USA
- Paul E. Kinahan
- Department of Radiology, University of Washington, Seattle, WA 98105, USA
- John Kurhanewicz
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94158, USA
- Michael T. Lewis
- Departments of Molecular and Cellular Biology and Radiology, Baylor College of Medicine, Houston, TX 77030, USA
- Shunqiang Li
- Department of Medicine, Washington University School of Medicine, St. Louis, MO 63110, USA
- Gary D. Luker
- Department of Radiology and the Center for Molecular Imaging, University of Michigan School of Medicine, Ann Arbor, MI 48109, USA
- Department of Microbiology and Immunology, University of Michigan School of Medicine, Ann Arbor, MI 48109, USA
- Cynthia X. Ma
- Department of Medicine, Washington University School of Medicine, St. Louis, MO 63110, USA
- H. Charles Manning
- Department of Cancer Systems Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Yvonne M. Mowery
- Department of Radiation Oncology, Duke University School of Medicine, Durham, NC 27708, USA
- Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, NC 27708, USA
- Peter J. O’Dwyer
- Abramson Cancer Center, University of Pennsylvania, Philadelphia, PA 19104, USA
- Robia G. Pautler
- Department of Integrative Physiology, Baylor College of Medicine, Houston, TX 77030, USA
- Mark A. Rosen
- Abramson Cancer Center, University of Pennsylvania, Philadelphia, PA 19104, USA
- Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Raheleh Roudi
- Molecular Imaging Program at Stanford (MIPS), Department of Radiology, Stanford University, Stanford, CA 94305, USA
- Brian D. Ross
- Department of Radiology and the Center for Molecular Imaging, University of Michigan School of Medicine, Ann Arbor, MI 48109, USA
- Department of Biological Chemistry, University of Michigan School of Medicine, Ann Arbor, MI 48109, USA
- Kooresh I. Shoghi
- Mallinckrodt Institute of Radiology (MIR), Washington University School of Medicine, St. Louis, MO 63110, USA
- Renuka Sriram
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94158, USA
- Moshe Talpaz
- Division of Hematology/Oncology, University of Michigan School of Medicine, Ann Arbor, MI 48109, USA
- Department of Internal Medicine, University of Michigan School of Medicine, Ann Arbor, MI 48109, USA
- Richard L. Wahl
- Mallinckrodt Institute of Radiology (MIR), Washington University School of Medicine, St. Louis, MO 63110, USA
- Rong Zhou
- Abramson Cancer Center, University of Pennsylvania, Philadelphia, PA 19104, USA
- Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA

8. Artificial intelligence and machine learning in cancer imaging. Communications Medicine 2022; 2:133. PMID: 36310650; PMCID: PMC9613681; DOI: 10.1038/s43856-022-00199-0.
Abstract
An increasing array of tools is being developed using artificial intelligence (AI) and machine learning (ML) for cancer imaging. The development of an optimal tool requires multidisciplinary engagement to ensure that the appropriate use case is met, as well as to undertake robust development and testing prior to its adoption into healthcare systems. This multidisciplinary review highlights key developments in the field. We discuss the challenges and opportunities of AI and ML in cancer imaging; considerations for the development of algorithms into tools that can be widely used and disseminated; and the development of the ecosystem needed to promote growth of AI and ML in cancer imaging.

9. Yamashita R, Kapoor T, Alam MN, Galimzianova A, Syed SA, Ugur Akdogan M, Alkim E, Wentland AL, Madhuripan N, Goff D, Barbee V, Sheybani ND, Sagreiya H, Rubin DL, Desser TS. Toward Reduction in False-Positive Thyroid Nodule Biopsies with a Deep Learning-based Risk Stratification System Using US Cine-Clip Images. Radiol Artif Intell 2022; 4:e210174. PMID: 35652118; PMCID: PMC9152684; DOI: 10.1148/ryai.210174.
Abstract
PURPOSE: To develop a deep learning-based risk stratification system for thyroid nodules using US cine images.
MATERIALS AND METHODS: In this retrospective study, 192 biopsy-confirmed thyroid nodules (175 benign, 17 malignant) in 167 unique patients (mean age, 56 years ± 16 [SD]; 137 women) undergoing cine US between April 2017 and May 2018 with American College of Radiology (ACR) Thyroid Imaging Reporting and Data System (TI-RADS)-structured radiology reports were evaluated. A deep learning-based system that exploits the cine images obtained during three-dimensional volumetric thyroid scans and outputs malignancy risk was developed and compared, using fivefold cross-validation, against a two-dimensional (2D) deep learning-based model (Static-2DCNN), a radiomics-based model using cine images (Cine-Radiomics), and the ACR TI-RADS level, with histopathologic diagnosis as ground truth. The system was used to revise the ACR TI-RADS recommendation, and its diagnostic performance was compared against the original ACR TI-RADS.
RESULTS: The system achieved a higher average area under the receiver operating characteristic curve (AUC, 0.88) than Static-2DCNN (0.72, P = .03) and tended toward higher average AUC than Cine-Radiomics (0.78, P = .16) and the ACR TI-RADS level (0.80, P = .21). The system downgraded recommendations for 92 benign and two malignant nodules and upgraded none. The revised recommendation achieved higher specificity (139 of 175, 79.4%) than the original ACR TI-RADS (47 of 175, 26.9%; P < .001), with no difference in sensitivity (12 of 17, 71% and 14 of 17, 82%, respectively; P = .63).
CONCLUSION: The risk stratification system using US cine images had higher diagnostic performance than prior models and improved the specificity of ACR TI-RADS when used to revise the ACR TI-RADS recommendation.
Keywords: Neural Networks, US, Abdomen/GI, Head/Neck, Thyroid, Computer Applications-3D, Oncology, Diagnosis, Supervised Learning, Transfer Learning, Convolutional Neural Network (CNN).

10. Aljabri M, AlAmir M, AlGhamdi M, Abdel-Mottaleb M, Collado-Mesa F. Towards a better understanding of annotation tools for medical imaging: a survey. Multimedia Tools and Applications 2022; 81:25877-25911. PMID: 35350630; PMCID: PMC8948453; DOI: 10.1007/s11042-022-12100-1.
Abstract
Medical imaging refers to several different technologies that are used to view the human body to diagnose, monitor, or treat medical conditions. It requires significant expertise to efficiently and correctly interpret the images generated by each of these technologies, which among others include radiography, ultrasound, and magnetic resonance imaging. Deep learning and machine learning techniques provide different solutions for medical image interpretation, including those associated with detection and diagnosis. Despite the huge success of deep learning algorithms in image analysis, training algorithms to reach human-level performance in these tasks depends on the availability of large amounts of high-quality training data, including high-quality annotations to serve as ground truth. Different annotation tools have been developed to assist with the annotation process. In this survey, we present the currently available annotation tools for medical imaging, including descriptions of graphical user interfaces (GUIs) and supporting instruments. The main contribution of this study is to provide an intensive review of the popular annotation tools and show their successful use in annotating medical imaging datasets, to guide researchers in this area.
Affiliation(s)
- Manar Aljabri
- Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Manal AlAmir
- Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Manal AlGhamdi
- Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Fernando Collado-Mesa
- Department of Radiology, University of Miami Miller School of Medicine, Miami, FL, USA

11. Deep Radiotranscriptomics of Non-Small Cell Lung Carcinoma for Assessing Molecular and Histology Subtypes with a Data-Driven Analysis. Diagnostics (Basel) 2021; 11:2383. PMID: 34943617; PMCID: PMC8700168; DOI: 10.3390/diagnostics11122383.
Abstract
Radiogenomic and radiotranscriptomic studies have the potential to pave the way for a holistic decision support system built on genomics, transcriptomics, radiomics, deep features, and clinical parameters to assess treatment evaluation and care planning. The integration of invasive and routine imaging data into a common feature space has the potential to yield robust models for inferring the drivers of underlying biological mechanisms. In this non-small cell lung carcinoma study, a multi-omics representation comprising deep features and transcriptomics was evaluated to further explore the synergistic and complementary properties of these diverse multi-view data sources by utilizing data-driven machine learning models. The proposed deep radiotranscriptomic analysis is a feature-based fusion that significantly enhances sensitivity by up to 0.174 and AUC by up to 0.22 compared to the baseline single-source models, across all experiments on the unseen testing set. Additionally, a radiomics-based fusion was also explored as an alternative methodology, yielding radiomic signatures comparable to several previous publications in the field of radiogenomics. Furthermore, the machine learning multi-omics analysis based on deep features and transcriptomics achieved an AUC performance of up to 0.831 ± 0.09 and 0.925 ± 0.04 for the examined molecular and histology subtype analyses, respectively. The clinical impact of such high-performing models can add prognostic value and lead to optimal treatment assessment by targeting specific oncogenes, namely predicting the response of EGFR-mutated tumors to tyrosine kinase inhibitors or the chemotherapy resistance of KRAS-mutated tumors.
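The feature-level ("early") fusion described above can be sketched minimally: concatenate deep imaging features with transcriptomic features and compare a fused model against single-source baselines. The random data, feature dimensions, and the generic logistic-regression classifier below are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of feature-level fusion: imaging-only vs omics-only vs fused.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 200
deep_feats = rng.normal(size=(n, 128))      # e.g., CNN-derived image features (placeholder)
transcriptomics = rng.normal(size=(n, 60))  # e.g., selected gene-expression values (placeholder)
y = rng.integers(0, 2, size=n)              # molecular or histology subtype label (placeholder)

def auc_for(X):
    """Train a simple classifier on one feature space and return test AUC."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

print("imaging only :", auc_for(deep_feats))
print("omics only   :", auc_for(transcriptomics))
print("fused        :", auc_for(np.hstack([deep_feats, transcriptomics])))
```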

12. Data Sharing of Imaging in an Evolving Health Care World: Report of the ACR Data Sharing Workgroup Part 2: Annotation, Curation, and Contracting. J Am Coll Radiol 2021; 18:1655-1665. PMID: 34607753; DOI: 10.1016/j.jacr.2021.07.015.
Abstract
A core principle of ethical data sharing is maintaining the security and anonymity of the data, and care must be taken to ensure medical records and images cannot be reidentified to be traced back to patients or misconstrued as a breach in the trust between health care providers and patients. Once those principles have been observed, those seeking to share data must take the appropriate steps to curate the data in a way that organizes the clinically relevant information so as to be useful to the data sharing party, assesses the ensuing value of the data set and its annotations, and informs the data sharing contracts that will govern use of the data. Embarking on a data sharing partnership engenders a host of ethical, practical, technical, legal, and commercial challenges that require a thoughtful, considered approach. In 2019 the ACR convened a Data Sharing Workgroup to develop philosophies around best practices in the sharing of health information. This is Part 2 of a Report on the workgroup's efforts in exploring these issues.

13. Jaggi A, Mastrodicasa D, Charville GW, Jeffrey RB, Napel S, Patel B. Quantitative image features from radiomic biopsy differentiate oncocytoma from chromophobe renal cell carcinoma. J Med Imaging (Bellingham) 2021; 8:054501. PMID: 34514033; DOI: 10.1117/1.jmi.8.5.054501.
Abstract
Purpose: To differentiate oncocytoma and chromophobe renal cell carcinoma (RCC) using radiomics features computed from spherical samples of image regions of interest, "radiomic biopsies" (RBs).
Approach: In a retrospective cohort study of 102 CT cases [68 males (67%), 34 females (33%); mean age ± SD, 63 ± 12 years], we pathology-confirmed 42 oncocytomas (41%) and 60 chromophobes (59%). A board-certified radiologist performed two RB rounds. From each RB round, we computed radiomics features and compared the performance of a random forest and an AdaBoost binary classifier trained on the features. To control for overfitting, we performed 10 rounds of 70%/30% train-test splits with feature selection, cross-validation, and hyperparameter optimization on each split. We evaluated performance with test ROC AUC. We tested models on data from the other RB round and compared with same-round testing using the DeLong test. We clustered important features for each round and measured agreement with a bootstrapped adjusted Rand index.
Results: Our best classifiers achieved an average AUC of 0.71 ± 0.024. We found no evidence of an effect of RB round (p = 1). We also found no evidence of a decrease in model performance when tested on the other RB round (p = 0.85). Feature clustering produced seven clusters in each RB round with high agreement (Rand index = 0.981 ± 0.002, p < 0.00001).
Conclusions: A consistent radiomic signature can be derived from RBs and could help distinguish oncocytoma and chromophobe RCC.
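The evaluation protocol described, repeated 70/30 train-test splits of radiomics features scored by test ROC AUC, can be illustrated with a brief sketch. The random features, placeholder labels, and classifier settings below are assumptions for illustration, not the study's code.

```python
# Illustrative sketch: random-forest classification of radiomics features with
# 10 rounds of stratified 70/30 train-test splits and test ROC AUC per round.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
X = rng.normal(size=(102, 90))        # radiomics features from "radiomic biopsies" (placeholder)
y = rng.integers(0, 2, size=102)      # 0 = oncocytoma, 1 = chromophobe RCC (placeholder labels)

aucs = []
for split in range(10):               # 10 rounds of 70/30 splits, as in the abstract
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=split)
    clf = RandomForestClassifier(n_estimators=200, random_state=split).fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

print(f"test ROC AUC: {np.mean(aucs):.2f} ± {np.std(aucs):.3f}")
```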
Affiliation(s)
- Akshay Jaggi
- Stanford University School of Medicine, Department of Radiology, Stanford, California, United States
- Domenico Mastrodicasa
- Stanford University School of Medicine, Department of Radiology, Stanford, California, United States
- Gregory W Charville
- Stanford University School of Medicine, Department of Pathology, Stanford, California, United States
- R Brooke Jeffrey
- Stanford University School of Medicine, Department of Radiology, Stanford, California, United States
- Sandy Napel
- Stanford University School of Medicine, Department of Radiology, Stanford, California, United States
- Bhavik Patel
- Mayo Clinic Arizona, Department of Radiology, Phoenix, Arizona, United States
- Arizona State University, Ira A. Fulton School of Engineering, Phoenix, Arizona, United States

14. Le NQK, Kha QH, Nguyen VH, Chen YC, Cheng SJ, Chen CY. Machine Learning-Based Radiomics Signatures for EGFR and KRAS Mutations Prediction in Non-Small-Cell Lung Cancer. Int J Mol Sci 2021; 22:9254. PMID: 34502160; PMCID: PMC8431041; DOI: 10.3390/ijms22179254.
Abstract
Early identification of epidermal growth factor receptor (EGFR) and Kirsten rat sarcoma viral oncogene homolog (KRAS) mutations is crucial for selecting a therapeutic strategy for patients with non-small-cell lung cancer (NSCLC). We proposed a machine learning-based model for feature selection and prediction of EGFR and KRAS mutations in patients with NSCLC that includes the smallest number of the most semantically meaningful radiomics features. We included a cohort of 161 of 211 patients with NSCLC from The Cancer Imaging Archive (TCIA) and analyzed their 161 low-dose computed tomography (LDCT) images for detecting EGFR and KRAS mutations. A total of 851 radiomics features, classified into 9 categories, were obtained through manual segmentation and radiomics feature extraction from LDCT. We evaluated our models using a validation set consisting of 18 patients derived from the same TCIA dataset. The results showed that the genetic algorithm plus XGBoost classifier exhibited the most favorable performance, with an accuracy of 0.836 and 0.86 for detecting EGFR and KRAS mutations, respectively. We demonstrated that a noninvasive machine learning-based model including the smallest number of the most semantically meaningful radiomics signatures can robustly predict EGFR and KRAS mutations in patients with NSCLC.
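A minimal sketch of the modeling step described, selecting a small radiomics feature subset and fitting an XGBoost classifier to predict mutation status, is shown below. The paper's genetic-algorithm feature selection is replaced here by a simple univariate filter for brevity, and the features and labels are random placeholders rather than the study's data.

```python
# Hedged sketch of radiomics-based mutation prediction with XGBoost.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(161, 851))       # 851 radiomics features per patient (placeholder)
y = rng.integers(0, 2, size=161)      # e.g., EGFR mutation status (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

# Simple univariate filter standing in for the paper's genetic-algorithm selection.
selector = SelectKBest(f_classif, k=20).fit(X_tr, y_tr)

clf = XGBClassifier(n_estimators=200, max_depth=3)
clf.fit(selector.transform(X_tr), y_tr)

pred = clf.predict(selector.transform(X_te))
print("accuracy:", accuracy_score(y_te, pred))
```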
Affiliation(s)
- Nguyen Quoc Khanh Le
- Professional Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University, Taipei 106, Taiwan
- Research Center for Artificial Intelligence in Medicine, Taipei Medical University, Taipei 106, Taiwan
- Translational Imaging Research Center, Taipei Medical University Hospital, Taipei 110, Taiwan
- Quang Hien Kha
- International Master/Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei 110, Taiwan
- Van Hiep Nguyen
- International Master/Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei 110, Taiwan
- Oncology Center, Bai Chay Hospital, Quang Ninh 20000, Vietnam
- Yung-Chieh Chen
- Department of Medical Imaging, Taipei Medical University Hospital, Taipei 11031, Taiwan
- Sho-Jen Cheng
- Department of Medical Imaging, Taipei Medical University Hospital, Taipei 11031, Taiwan
- Cheng-Yu Chen
- Professional Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University, Taipei 106, Taiwan
- Research Center for Artificial Intelligence in Medicine, Taipei Medical University, Taipei 106, Taiwan
- Department of Medical Imaging, Taipei Medical University Hospital, Taipei 11031, Taiwan
- Department of Radiology, School of Medicine, College of Medicine, Taipei Medical University, Taipei 11031, Taiwan

15. Dong Q, Luo G, Haynor D, O'Reilly M, Linnau K, Yaniv Z, Jarvik JG, Cross N. DicomAnnotator: a Configurable Open-Source Software Program for Efficient DICOM Image Annotation. J Digit Imaging 2021; 33:1514-1526. PMID: 32666365; DOI: 10.1007/s10278-020-00370-w.
Abstract
Modern, supervised machine learning approaches to medical image classification, image segmentation, and object detection usually require many annotated images. As manual annotation is usually labor-intensive and time-consuming, a well-designed software program can aid and expedite the annotation process. Ideally, this program should be configurable for various annotation tasks, enable efficient placement of several types of annotations on an image or a region of an image, attribute annotations to individual annotators, and be able to display Digital Imaging and Communications in Medicine (DICOM)-formatted images. No current open-source software program fulfills these requirements. To fill this gap, we developed DicomAnnotator, a configurable open-source software program for DICOM image annotation. This program fulfills the above requirements and provides user-friendly features to aid the annotation process. In this paper, we present the design and implementation of DicomAnnotator. Using spine image annotation as a test case, our evaluation showed that annotators with various backgrounds can use DicomAnnotator to annotate DICOM images efficiently. DicomAnnotator is freely available at https://github.com/UW-CLEAR-Center/DICOM-Annotator under the GPLv3 license.
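The basic workflow such a tool supports, reading a DICOM image and storing an annotation attributed to a specific annotator, can be sketched with pydicom as below. The file path, label scheme, and JSON layout are hypothetical and are not DicomAnnotator's own format or API.

```python
# Minimal sketch (not DicomAnnotator code): load a DICOM file and write a point
# annotation, attributed to a named annotator, to a JSON sidecar file.
import json
import pydicom

ds = pydicom.dcmread("spine_0001.dcm")          # hypothetical input path
pixels = ds.pixel_array                          # image matrix that would be displayed for annotation

annotation = {
    "sop_instance_uid": str(ds.SOPInstanceUID),  # ties the annotation to this exact image
    "annotator": "reader_01",                    # attribute the annotation to an individual
    "label": "L3_vertebra",                      # hypothetical task-specific label
    "point_rc": [245, 310],                      # row/column of the annotated landmark
}

with open("spine_0001.annotation.json", "w") as f:
    json.dump(annotation, f, indent=2)
```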
Affiliation(s)
- Qifei Dong
- Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, WA, 98195, USA
- Gang Luo
- Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, WA, 98195, USA
- David Haynor
- Department of Radiology, University of Washington, Seattle, WA, 98195-7115, USA
- Michael O'Reilly
- Department of Radiology, University of Washington, Seattle, WA, 98195-7115, USA
- Ken Linnau
- Department of Radiology, University of Washington, Seattle, WA, 98195-7115, USA
- Ziv Yaniv
- Medical Science & Computing, LLC, Rockville, MD, 20852, USA
- National Institute of Allergy and Infectious Diseases, National Institutes of Health, Bethesda, MD, 20814, USA
- Jeffrey G Jarvik
- Departments of Radiology, Neurological Surgery and Health Services, University of Washington, Seattle, WA, 98104-2499, USA
- Nathan Cross
- Department of Radiology, University of Washington, Seattle, WA, 98195-7115, USA

16. Machine Learning and Deep Learning in Oncologic Imaging: Potential Hurdles, Opportunities for Improvement, and Solutions-Abdominal Imagers' Perspective. J Comput Assist Tomogr 2021; 45:805-811. PMID: 34270486; DOI: 10.1097/rct.0000000000001183.
Abstract
The applications of machine learning in clinical radiology practice, and in oncologic imaging practice in particular, are steadily evolving. However, there are several potential hurdles to the widespread implementation of machine learning in oncologic imaging, including the limited availability of large annotated data sets and the lack of consistent methodology and terminology for reporting the findings observed on staging and follow-up imaging studies across a wide spectrum of solid tumors. This short review discusses some potential hurdles to the implementation of machine learning in oncologic imaging, opportunities for improvement, and potential solutions that can facilitate robust machine learning from the vast number of radiology reports and annotations generated by dictating radiologists.

17. Witowski J, Choi J, Jeon S, Kim D, Chung J, Conklin J, Longo MGF, Succi MD, Do S. MarkIt: A Collaborative Artificial Intelligence Annotation Platform Leveraging Blockchain For Medical Imaging Research. Blockchain in Healthcare Today 2021; 4:176. PMID: 36777485; PMCID: PMC9907418; DOI: 10.30953/bhty.v4.176.
Abstract
Current research on medical image processing relies heavily on the amount and quality of input data. Specifically, supervised machine learning methods require well-annotated datasets. A lack of annotation tools limits the potential to achieve high-volume processing and scaled systems with a proper reward mechanism. We developed MarkIt, a web-based tool, for collaborative annotation of medical imaging data with artificial intelligence and blockchain technologies. Our platform handles both Digital Imaging and Communications in Medicine (DICOM) and non-DICOM images, and allows users to annotate them for classification and object detection tasks in an efficient manner. MarkIt can accelerate the annotation process and keep track of user activities to calculate a fair reward. A proof-of-concept experiment was conducted with three fellowship-trained radiologists, each of whom annotated 1,000 chest X-ray studies for multi-label classification. We calculated the inter-rater agreement and estimated the value of the dataset to distribute the reward among annotators using a cryptocurrency. We hypothesize that MarkIt makes the typically arduous annotation task more efficient. In addition, MarkIt can serve as a platform to evaluate the value of data and trade annotation results in a more scalable manner in the future. The platform is publicly available for testing at https://markit.mgh.harvard.edu.
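One ingredient MarkIt relies on, inter-rater agreement across annotators, can be illustrated with a short sketch. Cohen's kappa is used here as a generic pairwise agreement measure on simulated labels; it is not necessarily the statistic or the reward logic that MarkIt itself implements.

```python
# Illustrative sketch: pairwise Cohen's kappa between three simulated readers
# labeling the same 1,000 studies with a single binary finding.
from itertools import combinations
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(3)
truth = rng.integers(0, 2, size=1000)                     # latent finding per study (simulated)
readers = {
    f"radiologist_{i}": np.where(rng.random(1000) < 0.9,  # each reader agrees with truth ~90% of the time
                                 truth, 1 - truth)
    for i in range(1, 4)
}

for (name_a, labels_a), (name_b, labels_b) in combinations(readers.items(), 2):
    kappa = cohen_kappa_score(labels_a, labels_b)
    print(f"{name_a} vs {name_b}: kappa = {kappa:.2f}")
```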
Affiliation(s)
- Jan Witowski
- Laboratory of Medical Imaging and Computation, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Jongmun Choi
- Laboratory of Medical Imaging and Computation, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Soomin Jeon
- Laboratory of Medical Imaging and Computation, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Doyun Kim
- Laboratory of Medical Imaging and Computation, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Joowon Chung
- Laboratory of Medical Imaging and Computation, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- John Conklin
- Division of Emergency Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Maria Gabriela Figueiro Longo
- Division of Emergency Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Marc D. Succi
- Medically Engineered Solutions in Healthcare (MESH) Incubator, Massachusetts General Hospital, Boston, MA, USA
- Synho Do
- Laboratory of Medical Imaging and Computation, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA

18. Stember JN, Celik H, Gutman D, Swinburne N, Young R, Eskreis-Winkler S, Holodny A, Jambawalikar S, Wood BJ, Chang PD, Krupinski E, Bagci U. Integrating Eye Tracking and Speech Recognition Accurately Annotates MR Brain Images for Deep Learning: Proof of Principle. Radiol Artif Intell 2021; 3:e200047. PMID: 33842890; PMCID: PMC7845782; DOI: 10.1148/ryai.2020200047.
Abstract
PURPOSE: To generate and assess an algorithm combining eye tracking and speech recognition to extract brain lesion location labels automatically for deep learning (DL).
MATERIALS AND METHODS: In this retrospective study, 700 two-dimensional brain tumor MRI scans from the Brain Tumor Segmentation database were clinically interpreted. For each image, a single radiologist dictated a standard phrase describing the lesion into a microphone, simulating clinical interpretation. Eye-tracking data were recorded simultaneously. Using speech recognition, gaze points corresponding to each lesion were obtained. Lesion locations were used to train a keypoint-detection convolutional neural network to find new lesions, and the trained network was evaluated on an independent test set of 85 images. The statistical measure used to evaluate our method was percent accuracy.
RESULTS: Eye tracking with speech recognition was 92% accurate in labeling lesion locations from the training dataset, demonstrating that fully simulated interpretation can yield reliable tumor location labels. These labels were then used to train the DL network. The detection network trained on these labels predicted lesion location in a separate testing set with 85% accuracy.
CONCLUSION: The DL network was able to locate brain tumors on the basis of training data that were labeled automatically from simulated clinical image interpretation.
Affiliation(s)
- Joseph N. Stember, Haydar Celik, David Gutman, Nathaniel Swinburne, Robert Young, Sarah Eskreis-Winkler, Andrei Holodny, Sachin Jambawalikar, Bradford J. Wood, Peter D. Chang
- From the Department of Radiology, Memorial Sloan-Kettering Cancer Center, 1275 York Ave, New York, NY 10065 (J.N.S., D.G., N.S., R.Y., S.E.W., A.H.); The National Institutes of Health Clinical Center, Bethesda, Md (H.C., B.J.W.); Department of Radiology, Columbia University Medical Center, New York, NY (S.J.); Department of Radiology, University of California–Irvine, Irvine, Calif (P.D.C.); Department of Radiology & Imaging Sciences, Emory University, Atlanta, Ga (E.K.); and Center for Research in Computer Vision, University of Central Florida, Orlando, Fla (U.B.)
| | - Elizabeth Krupinski
- From the Department of Radiology, Memorial Sloan-Kettering Cancer Center, 1275 York Ave, New York, NY 10065 (J.N.S., D.G., N.S., R.Y., S.E.W., A.H.); The National Institutes of Health Clinical Center, Bethesda, Md (H.C., B.J.W.); Department of Radiology, Columbia University Medical Center, New York, NY (S.J.); Department of Radiology, University of California–Irvine, Irvine, Calif (P.D.C.); Department of Radiology & Imaging Sciences, Emory University, Atlanta, Ga (E.K.); and Center for Research in Computer Vision, University of Central Florida, Orlando, Fla (U.B.)
| | - Ulas Bagci
- From the Department of Radiology, Memorial Sloan-Kettering Cancer Center, 1275 York Ave, New York, NY 10065 (J.N.S., D.G., N.S., R.Y., S.E.W., A.H.); The National Institutes of Health Clinical Center, Bethesda, Md (H.C., B.J.W.); Department of Radiology, Columbia University Medical Center, New York, NY (S.J.); Department of Radiology, University of California–Irvine, Irvine, Calif (P.D.C.); Department of Radiology & Imaging Sciences, Emory University, Atlanta, Ga (E.K.); and Center for Research in Computer Vision, University of Central Florida, Orlando, Fla (U.B.)
| |
Collapse
|
19
Fedorov A, Hancock M, Clunie D, Brochhausen M, Bona J, Kirby J, Freymann J, Pieper S, Aerts HJWL, Kikinis R, Prior F. DICOM re-encoding of volumetrically annotated Lung Imaging Database Consortium (LIDC) nodules. Med Phys 2020; 47:5953-5965. [PMID: 32772385 PMCID: PMC7721965 DOI: 10.1002/mp.14445] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2020] [Revised: 07/29/2020] [Accepted: 08/04/2020] [Indexed: 01/03/2023] Open
Abstract
PURPOSE The dataset contains annotations for lung nodules collected by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) stored as standard DICOM objects. The annotations accompany a collection of computed tomography (CT) scans for over 1000 subjects annotated by multiple expert readers, and correspond to "nodules ≥ 3 mm", defined as any lesion considered to be a nodule with greatest in-plane dimension in the range 3-30 mm regardless of presumed histology. The present dataset aims to simplify reuse of the data with readily available tools, and is targeted toward researchers interested in the analysis of lung CT images. ACQUISITION AND VALIDATION METHODS Open source tools were utilized to parse the project-specific XML representation of LIDC-IDRI annotations and save the result as standard DICOM objects. Validation procedures focused on establishing compliance of the resulting objects with the standard, consistency of the data between the DICOM and project-specific representations, and evaluating interoperability with existing tools. DATA FORMAT AND USAGE NOTES The dataset utilizes DICOM Segmentation objects for storing annotations of the lung nodules, and DICOM Structured Reporting objects for communicating qualitative evaluations (nine attributes) and quantitative measurements (three attributes) associated with the nodules. In total, 875 subjects contributed 6859 nodule annotations; clustering of neighboring annotations resulted in 2651 distinct nodules. The data are available in TCIA at https://doi.org/10.7937/TCIA.2018.h7umfurq. POTENTIAL APPLICATIONS The standardized dataset maintains the content of the original contribution of the LIDC-IDRI consortium, and should be helpful in developing automated tools for characterization of lung lesions and image phenotyping. In addition, the representation of the present dataset makes it more FAIR (Findable, Accessible, Interoperable, Reusable) for the research community, and enables its integration with other standardized data collections.
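For readers who want to work with such standard DICOM Segmentation objects, a minimal Python sketch with pydicom is shown below. The file path is hypothetical; the attribute names follow the DICOM standard (SegmentSequence, SegmentLabel) rather than anything specific to the cited dataset.

    # Minimal sketch (not from the cited paper): inspect a DICOM Segmentation
    # object such as those in the standardized LIDC-IDRI collection on TCIA.
    import pydicom

    ds = pydicom.dcmread("nodule_seg.dcm")  # hypothetical local file path

    # One entry per segment (here, per annotated nodule region).
    for segment in ds.SegmentSequence:
        print(segment.SegmentNumber, segment.SegmentLabel)

    # Segmentation frames are stacked in a single multi-frame pixel array;
    # PerFrameFunctionalGroupsSequence maps each frame back to source slices.
    frames = ds.pixel_array
    print("frames x rows x cols:", frames.shape)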
Affiliation(s)
- Jonathan Bona
- University of Arkansas for Medical Sciences, Little Rock, AR 72205, USA
- Justin Kirby
- Frederick National Laboratory for Cancer Research, Frederick, MD 21701, USA
- John Freymann
- Frederick National Laboratory for Cancer Research, Frederick, MD 21701, USA
- Ron Kikinis
- Brigham and Women’s Hospital, Boston, MA 02115, USA
- Fred Prior
- University of Arkansas for Medical Sciences, Little Rock, AR 72205, USA
20
Mattonen SA, Gude D, Echegaray S, Bakr S, Rubin DL, Napel S. Quantitative imaging feature pipeline: a web-based tool for utilizing, sharing, and building image-processing pipelines. J Med Imaging (Bellingham) 2020; 7:042803. [PMID: 32206688 PMCID: PMC7070161 DOI: 10.1117/1.jmi.7.4.042803] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2019] [Accepted: 02/26/2020] [Indexed: 12/20/2022] Open
Abstract
Quantitative image features that can be computed from medical images are proving to be valuable biomarkers of underlying cancer biology that can be used for assessing treatment response and predicting clinical outcomes. However, validation and eventual clinical implementation of these tools are challenging due to the absence of shared software algorithms, architectures, and the tools required for computing, comparing, evaluating, and disseminating predictive models. Moreover, completing these tasks typically requires programming expertise. The quantitative image feature pipeline (QIFP) is an open-source, web-based, graphical user interface (GUI) for configurable quantitative image-processing pipelines for both planar (two-dimensional) and volumetric (three-dimensional) medical images. It gives researchers and clinicians a GUI-driven way to process and analyze images without writing any software code. The QIFP allows users to upload a repository of linked imaging, segmentation, and clinical data or access publicly available datasets (e.g., The Cancer Imaging Archive) through direct links. Researchers have access to a library of file conversion, segmentation, quantitative image feature extraction, and machine learning algorithms. An interface is also provided to allow users to upload their own algorithms in Docker containers. The QIFP gives researchers the tools and infrastructure for the assessment and development of new imaging biomarkers and the ability to use them for single and multicenter clinical and virtual clinical trials.
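The abstract notes that users can contribute their own algorithms packaged as Docker containers. As a loose, hypothetical illustration of what such a containerized step might wrap (this is not QIFP's actual interface), the sketch below computes radiomic features from an image/mask pair with the pyradiomics package and writes them to a JSON file; the mounted paths are assumptions.

    # Hypothetical container entry point -- illustrative only, not the QIFP API.
    # Assumes NIfTI image and mask are mounted at fixed paths inside the container.
    import json
    from radiomics import featureextractor

    extractor = featureextractor.RadiomicsFeatureExtractor()  # default settings
    features = extractor.execute("/data/image.nii.gz", "/data/mask.nii.gz")

    # Keep only numeric feature values (drop diagnostic metadata) and write
    # them where a downstream pipeline step could pick them up.
    numeric = {k: float(v) for k, v in features.items()
               if not k.startswith("diagnostics")}
    with open("/data/features.json", "w") as f:
        json.dump(numeric, f, indent=2)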
Affiliation(s)
- Sarah A. Mattonen
- Stanford University, Department of Radiology, Stanford, California, United States
- The University of Western Ontario, Department of Medical Biophysics, London, Ontario, Canada
- The University of Western Ontario, Department of Oncology, London, Ontario, Canada
- Dev Gude
- Stanford University, Department of Radiology, Stanford, California, United States
- Sebastian Echegaray
- Stanford University, Department of Radiology, Stanford, California, United States
- Shaimaa Bakr
- Stanford University, Department of Electrical Engineering, Stanford, California, United States
- Daniel L. Rubin
- Stanford University, Department of Radiology, Stanford, California, United States
- Stanford University, Department of Medicine, Stanford, California, United States
- Stanford University, Department of Biomedical Data Science, Stanford, California, United States
- Sandy Napel
- Stanford University, Department of Radiology, Stanford, California, United States
- Stanford University, Department of Electrical Engineering, Stanford, California, United States
- Stanford University, Department of Medicine, Stanford, California, United States
21
Abstract
The National Cancer Institute's Quantitative Imaging Network (QIN) has thrived over the past 12 years with an emphasis on the development of image-based decision support software tools for improving measurements of imaging metrics. An overarching goal has been to develop advanced tools that could be translated into clinical trials to provide for improved prediction of response to therapeutic interventions. This article provides an overview of the successes in development and translation of new algorithms into the clinical workflow by the many research teams of the Quantitative Imaging Network.
22
Fedorov A, Beichel R, Kalpathy-Cramer J, Clunie D, Onken M, Riesmeier J, Herz C, Bauer C, Beers A, Fillion-Robin JC, Lasso A, Pinter C, Pieper S, Nolden M, Maier-Hein K, Herrmann MD, Saltz J, Prior F, Fennessy F, Buatti J, Kikinis R. Quantitative Imaging Informatics for Cancer Research. JCO Clin Cancer Inform 2020; 4:444-453. [PMID: 32392097 PMCID: PMC7265794 DOI: 10.1200/cci.19.00165] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/16/2020] [Indexed: 01/06/2023] Open
Abstract
PURPOSE We summarize Quantitative Imaging Informatics for Cancer Research (QIICR; U24 CA180918), one of the first projects funded by the National Cancer Institute (NCI) Informatics Technology for Cancer Research program. METHODS QIICR was motivated by the 3 use cases from the NCI Quantitative Imaging Network. 3D Slicer was selected as the platform for implementation of open-source quantitative imaging (QI) tools. Digital Imaging and Communications in Medicine (DICOM) was chosen for standardization of QI analysis outputs. Support of improved integration with community repositories focused on The Cancer Imaging Archive (TCIA). Priorities included improved capabilities of the standard, toolkits and tools, reference datasets, collaborations, and training and outreach. RESULTS Fourteen new tools to support head and neck cancer, glioblastoma, and prostate cancer QI research were introduced and downloaded over 100,000 times. DICOM was amended, with over 40 correction proposals addressing QI needs. Reference implementations of the standard in a popular toolkit and standalone tools were introduced. Eight datasets exemplifying the application of the standard and tools were contributed. An open demonstration/connectathon was organized, attracting the participation of academic groups and commercial vendors. Integration of tools with TCIA was improved by implementing programmatic communication interface and by refining best practices for QI analysis results curation. CONCLUSION Tools, capabilities of the DICOM standard, and datasets we introduced found adoption and utility within the cancer imaging community. A collaborative approach is critical to addressing challenges in imaging informatics at the national and international levels. Numerous challenges remain in establishing and maintaining the infrastructure of analysis tools and standardized datasets for the imaging community. Ideas and technology developed by the QIICR project are contributing to the NCI Imaging Data Commons currently being developed.
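One of the integration points mentioned is a programmatic communication interface with TCIA. A minimal sketch of querying TCIA's public REST API with the requests library follows; the endpoint and parameters reflect the commonly documented v4 query API and are an assumption to verify against current TCIA documentation, not code from the cited project.

    # Illustrative sketch: query TCIA's public REST API for series metadata.
    # Endpoint and parameters are based on the commonly documented v4 API and
    # may need adjustment; they are not taken from the cited paper.
    import requests

    BASE = "https://services.cancerimagingarchive.net/services/v4/TCIA/query"

    resp = requests.get(
        f"{BASE}/getSeries",
        params={"Collection": "LIDC-IDRI", "Modality": "SEG", "format": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    for series in resp.json():
        print(series.get("SeriesInstanceUID"), series.get("SeriesDescription"))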
Affiliation(s)
- Andrey Fedorov
- Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Christian Herz
- Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Marco Nolden
- German Cancer Research Center, Heidelberg, Germany
- Markus D. Herrmann
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Fred Prior
- University of Arkansas for Medical Sciences, Little Rock, AR
- Fiona Fennessy
- Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Ron Kikinis
- Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
23
Willemink MJ, Koszek WA, Hardell C, Wu J, Fleischmann D, Harvey H, Folio LR, Summers RM, Rubin DL, Lungren MP. Preparing Medical Imaging Data for Machine Learning. Radiology 2020; 295:4-15. [PMID: 32068507 PMCID: PMC7104701 DOI: 10.1148/radiol.2020192224] [Citation(s) in RCA: 358] [Impact Index Per Article: 89.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2019] [Revised: 12/03/2019] [Accepted: 12/30/2019] [Indexed: 12/19/2022]
Abstract
Artificial intelligence (AI) continues to garner substantial interest in medical imaging. The potential applications are vast and include the entirety of the medical imaging life cycle, from image creation to diagnosis to outcome prediction. The chief obstacles to development and clinical implementation of AI algorithms include the limited availability of sufficiently large, curated, and representative training data that include expert labeling (eg, annotations). Current supervised AI methods require a curation process for data to optimally train, validate, and test algorithms. Currently, most research groups and companies have access only to limited data, with small sample sizes drawn from small geographic areas. In addition, the preparation of data is a costly and time-intensive process, and the result is often algorithms with limited utility and poor generalization. In this article, the authors describe fundamental steps for preparing medical imaging data for AI algorithm development, explain current limitations to data curation, and explore new approaches to address the problem of data availability.
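As a small, generic example of one data-preparation step discussed in this article (it is not code from the article itself), the sketch below strips a few direct patient identifiers from a DICOM header with pydicom before an image enters a training set; the file paths and identifier list are assumptions, and a production workflow would apply the full DICOM confidentiality profile.

    # Generic de-identification sketch with pydicom -- illustrative only.
    import pydicom

    IDENTIFIERS = ["PatientName", "PatientID", "PatientBirthDate",
                   "OtherPatientIDs", "InstitutionName", "ReferringPhysicianName"]

    def basic_deidentify(path_in, path_out, pseudo_id):
        ds = pydicom.dcmread(path_in)
        for keyword in IDENTIFIERS:
            if keyword in ds:
                delattr(ds, keyword)   # drop the element entirely
        ds.PatientID = pseudo_id       # stable study-specific pseudonym
        ds.remove_private_tags()       # vendor tags often leak identifiers
        ds.save_as(path_out)

    basic_deidentify("raw/ct_0001.dcm", "curated/ct_0001.dcm", "SUBJ-0001")  # hypothetical paths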
Affiliation(s)
- Martin J. Willemink
- Wojciech A. Koszek
- Cailin Hardell
- Jie Wu
- Dominik Fleischmann
- Hugh Harvey
- Les R. Folio
- Ronald M. Summers
- Daniel L. Rubin
- Matthew P. Lungren
- From the Department of Radiology, Stanford University School of Medicine, 300 Pasteur Dr, S-072, Stanford, CA 94305-5105 (M.J.W., D.F., D.L.R., M.P.L.); Segmed, Menlo Park, Calif (M.J.W., W.A.K., C.H., J.W.); School of Engineering, Stanford University, Stanford, Calif (J.W.); Institute of Cognitive Neuroscience, University College London, London, England (H.H.); Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (L.R.F.); Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, National Institutes of Health, Clinical Center, Bethesda, Md (R.M.S.); Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, Calif (D.L.R.); and Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI), Stanford, Calif (M.P.L.)
24
Ziegler E, Urban T, Brown D, Petts J, Pieper SD, Lewis R, Hafey C, Harris GJ. Open Health Imaging Foundation Viewer: An Extensible Open-Source Framework for Building Web-Based Imaging Applications to Support Cancer Research. JCO Clin Cancer Inform 2020; 4:336-345. [PMID: 32324447 PMCID: PMC7259879 DOI: 10.1200/cci.19.00131] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/16/2020] [Indexed: 11/20/2022] Open
Abstract
PURPOSE Zero-footprint Web architecture enables imaging applications to be deployed on premise or in the cloud without requiring installation of custom software on the user's computer. Benefits include decreased costs and information technology support requirements, as well as improved accessibility across sites. The Open Health Imaging Foundation (OHIF) Viewer is an extensible platform developed to leverage these benefits and address the demand for open-source Web-based imaging applications. The platform can be modified to support site-specific workflows and accommodate evolving research requirements. MATERIALS AND METHODS The OHIF Viewer provides basic image review functionality (eg, image manipulation and measurement) as well as advanced visualization (eg, multiplanar reformatting). It is written as a client-only, single-page Web application that can easily be embedded into third-party applications or hosted as a standalone Web site. The platform provides extension points for software developers to include custom tools and adapt the system for their workflows. It is standards compliant and relies on DICOMweb for data exchange and OpenID Connect for authentication, but it can be configured to use any data source or authentication flow. Additionally, the user interface components are provided in a standalone component library so that developers can create custom extensions. RESULTS The OHIF Viewer and its underlying components have been widely adopted and integrated into multiple clinical research platforms (eg, Precision Imaging Metrics, XNAT, LabCAS, ISB-CGC) and commercial applications (eg, OsiriX). It has also been used to build custom imaging applications (eg, ProstateCancer.ai, Crowds Cure Cancer [presented as a case study]). CONCLUSION The OHIF Viewer provides a flexible framework for building applications to support imaging research. Its adoption could reduce redundancies in software development for National Cancer Institute-funded projects, including Informatics Technology for Cancer Research and the Quantitative Imaging Network.
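The viewer relies on DICOMweb for data exchange. The sketch below is a minimal, generic illustration of the kind of QIDO-RS study query such a viewer issues; the server URL and query parameters are hypothetical and are not taken from the OHIF codebase.

    # Minimal QIDO-RS sketch -- generic DICOMweb, not OHIF Viewer code.
    import requests

    DICOMWEB_ROOT = "https://example.org/dicom-web"  # hypothetical server

    resp = requests.get(
        f"{DICOMWEB_ROOT}/studies",
        params={"PatientID": "SUBJ-0001", "limit": "10"},
        headers={"Accept": "application/dicom+json"},
        timeout=30,
    )
    resp.raise_for_status()
    # An empty result set may come back as HTTP 204 with no body.
    for study in (resp.json() if resp.status_code == 200 else []):
        # DICOM JSON model: tag "0020000D" is StudyInstanceUID.
        print(study["0020000D"]["Value"][0])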
Affiliation(s)
- Trinity Urban
- Open Health Imaging Foundation, Boston, MA
- Precision Imaging Metrics, Boston, MA
- Rob Lewis
- Open Health Imaging Foundation, Boston, MA
- Gordon J. Harris
- Open Health Imaging Foundation, Boston, MA
- Precision Imaging Metrics, Boston, MA
25
[Use of data for research and artificial intelligence: medical, ethical, legal, and technical issues]. IMAGERIE DE LA FEMME 2019. [DOI: 10.1016/j.femme.2019.04.004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]