1
Paverd H, Zormpas-Petridis K, Clayton H, Burge S, Crispin-Ortuzar M. Radiology and multi-scale data integration for precision oncology. NPJ Precis Oncol 2024; 8:158. [PMID: 39060351] [PMCID: PMC11282284] [DOI: 10.1038/s41698-024-00656-0]
Abstract
In this Perspective paper we explore the potential of integrating radiological imaging with other data types, a critical yet underdeveloped area compared with the fusion of other multi-omic data. Radiological images provide a comprehensive, three-dimensional view of cancer, capturing features that would be missed by biopsies or other data modalities. We examine the complexities and challenges of incorporating medical imaging into data integration models in the context of precision oncology, present the different categories of imaging-omics integration, and discuss recent progress, highlighting the opportunities that arise from bringing together spatial data on different scales.
Affiliation(s)
- Hania Paverd
- Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
- Department of Oncology, University of Cambridge, Cambridge, UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK
- Hannah Clayton
- Department of Oncology, University of Cambridge, Cambridge, UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK
- Sarah Burge
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK
- Mireia Crispin-Ortuzar
- Department of Oncology, University of Cambridge, Cambridge, UK.
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK.
2
Miller CM, Zhu Z, Mazurowski MA, Bashir MR, Wiggins WF. Automated selection of abdominal MRI series using a DICOM metadata classifier and selective use of a pixel-based classifier. Abdom Radiol (NY) 2024. [PMID: 38860997] [DOI: 10.1007/s00261-024-04379-5]
Abstract
Accurate, automated MRI series identification is important for many applications, including display ("hanging") protocols, machine learning, and radiomics. Relying on the series description alone or on a pixel-based classifier alone each has limitations. We demonstrate a combined approach that uses a DICOM metadata-based classifier with selective use of a pixel-based classifier to identify abdominal MRI series. The metadata classifier was assessed alone (Group metadata) and combined with selective use of the pixel-based classifier for predictions with less than 70% certainty (Group combined). Overall accuracy (with 95% confidence intervals) on the test dataset was 0.870 (CI: 0.824, 0.912) for Group metadata and 0.930 (CI: 0.893, 0.963) for Group combined. With this combined metadata- and pixel-based approach, we demonstrate classification accuracy of 95% or greater for all pre-contrast MRI series and improved performance for some post-contrast series.
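The cascade this abstract describes is straightforward to express in code. The sketch below is a minimal illustration, assuming a metadata classifier that returns a (label, confidence) pair; the function names and the two toy classifiers are ours for illustration, not from the paper's code — only the 0.70 confidence threshold follows the abstract.

```python
# Cascaded series classifier: a fast DICOM metadata classifier handles every
# series, and a slower pixel-based classifier is invoked only when the
# metadata prediction falls below a confidence threshold.

def classify_series(metadata, pixels, metadata_clf, pixel_clf, threshold=0.70):
    """Return (label, source) for one MRI series."""
    label, confidence = metadata_clf(metadata)
    if confidence >= threshold:
        return label, "metadata"
    # Fall back to the pixel-based model for uncertain cases.
    return pixel_clf(pixels), "pixel"

# Toy stand-ins for the two trained models.
def toy_metadata_clf(meta):
    if "t1" in meta.get("SeriesDescription", "").lower():
        return "T1", 0.95
    return "unknown", 0.40          # low confidence -> defer to pixels

def toy_pixel_clf(pixels):
    return "T2"                     # pretend a CNN examined the voxels

print(classify_series({"SeriesDescription": "AX T1 FS"}, None,
                      toy_metadata_clf, toy_pixel_clf))   # ('T1', 'metadata')
print(classify_series({"SeriesDescription": "AX DWI"}, None,
                      toy_metadata_clf, toy_pixel_clf))   # ('T2', 'pixel')
```

The design point is that most series never touch the expensive pixel model, which is why the combined approach stays fast while lifting accuracy.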
Affiliation(s)
- Chad M Miller
- Duke University School of Medicine, Durham, NC, 27710, USA.
- Zhe Zhu
- Duke University School of Medicine, Durham, NC, 27710, USA
3
Reis EP, Blankemeier L, Zambrano Chaves JM, Jensen MEK, Yao S, Truyts CAM, Willis MH, Adams S, Amaro E, Boutin RD, Chaudhari AS. Automated abdominal CT contrast phase detection using an interpretable and open-source artificial intelligence algorithm. Eur Radiol 2024. [PMID: 38683384] [DOI: 10.1007/s00330-024-10769-6]
Abstract
OBJECTIVES To develop and validate an open-source artificial intelligence (AI) algorithm that accurately detects contrast phases in abdominal CT scans. MATERIALS AND METHODS This retrospective study developed an AI algorithm trained on 739 abdominal CT exams from 2016 to 2021, from 200 unique patients, covering 1545 axial series. We segmented five key anatomic structures (aorta, portal vein, inferior vena cava, renal parenchyma, and renal pelvis) using TotalSegmentator, a deep learning-based tool for multi-organ segmentation, together with a rule-based approach to extract the renal pelvis. Radiomics features were extracted from these structures and fed to a gradient-boosting classifier to identify four contrast phases: non-contrast, arterial, venous, and delayed. Internal validation used the F1 score and other classification metrics; external validation used the "VinDr-Multiphase CT" dataset. RESULTS The training dataset consisted of 172 patients (mean age, 70 years ± 8; 22% women), and the internal test set included 28 patients (mean age, 68 years ± 8; 14% women). In internal validation, the classifier achieved an accuracy of 92.3% with an average F1 score of 90.7%. In external validation, the algorithm maintained an accuracy of 90.1% with an average F1 score of 82.6%. Shapley feature attribution analysis indicated that renal and vascular radiodensity values were the most important features for phase classification. CONCLUSION An open-source and interpretable AI algorithm accurately detects contrast phases in abdominal CT scans, with high accuracy and F1 scores in internal and external validation confirming its generalization capability. CLINICAL RELEVANCE STATEMENT Contrast phase detection in abdominal CT scans is a critical step for downstream AI applications, for deploying algorithms in the clinical setting, and for quantifying imaging biomarkers, ultimately allowing for better diagnostics and increased access to diagnostic imaging.
KEY POINTS Digital Imaging and Communications in Medicine (DICOM) labels are inaccurate for determining the abdominal CT scan phase. AI can accurately discriminate the contrast phase. Accurate contrast phase determination aids downstream AI applications and biomarker quantification.
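The pipeline above turns segmented anatomy into tabular features for a classifier. A hedged sketch of that feature step: the mean radiodensity (HU) inside each segmented structure becomes one input feature. In the paper the masks come from TotalSegmentator and the classifier is gradient boosting over many radiomics features; here the masks are toy 1-D lists and the threshold rule at the end is purely illustrative.

```python
import statistics

STRUCTURES = ["aorta", "portal_vein", "ivc", "renal_parenchyma", "renal_pelvis"]

def mean_hu(hu_values, mask):
    """Mean HU over the voxels selected by a binary mask."""
    return statistics.mean(v for v, m in zip(hu_values, mask) if m)

def phase_features(hu_values, masks):
    """One mean-HU feature per segmented structure."""
    return {s: mean_hu(hu_values, masks[s]) for s in STRUCTURES}

def toy_phase_rule(feats):
    # Illustrative thresholds only (the study learns the decision from data):
    if feats["aorta"] > 250:
        return "arterial"
    if feats["portal_vein"] > 150:
        return "venous"
    if feats["renal_pelvis"] > 200:
        return "delayed"
    return "non-contrast"

# Ten toy voxels; each structure's mask selects two of them.
hu = [300, 300, 120, 120, 60, 60, 40, 40, 50, 50]
masks = {
    "aorta":            [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
    "portal_vein":      [0, 0, 1, 1, 0, 0, 0, 0, 0, 0],
    "ivc":              [0, 0, 0, 0, 1, 1, 0, 0, 0, 0],
    "renal_parenchyma": [0, 0, 0, 0, 0, 0, 1, 1, 0, 0],
    "renal_pelvis":     [0, 0, 0, 0, 0, 0, 0, 0, 1, 1],
}
print(toy_phase_rule(phase_features(hu, masks)))  # arterial
```

A bright aorta with an unenhanced portal vein is the arterial-phase signature, which is why aortic radiodensity dominates the toy rule, consistent with the Shapley finding that vascular radiodensity drives the classification.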
Affiliation(s)
- Eduardo Pontes Reis
- Department of Radiology, Stanford University, Stanford, CA, USA.
- Center for Artificial Intelligence in Medicine & Imaging (AIMI), Stanford University, Stanford, CA, USA.
- Hospital Israelita Albert Einstein, Sao Paulo, Brazil.
- Louis Blankemeier
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Juan Manuel Zambrano Chaves
- Department of Radiology, Stanford University, Stanford, CA, USA
- Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
- Sally Yao
- Department of Radiology, Stanford University, Stanford, CA, USA
- Marc H Willis
- Department of Radiology, Stanford University, Stanford, CA, USA
- Scott Adams
- Department of Radiology, Stanford University, Stanford, CA, USA
- Edson Amaro
- Hospital Israelita Albert Einstein, Sao Paulo, Brazil
- Robert D Boutin
- Department of Radiology, Stanford University, Stanford, CA, USA
- Akshay S Chaudhari
- Department of Radiology, Stanford University, Stanford, CA, USA
- Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
4
Bartnik A, Singh S, Sum C, Smith M, Bergsland N, Zivadinov R, Dwyer MG. An Automated Tool to Classify and Transform Unstructured MRI Data into BIDS Datasets. Neuroinformatics 2024. [PMID: 38530566] [DOI: 10.1007/s12021-024-09659-5]
Abstract
The increasing use of neuroimaging in clinical research has driven the creation of many large imaging datasets. However, these datasets often rely on inconsistent naming conventions in image file headers to describe acquisition, and time-consuming manual curation is necessary. Therefore, we sought to automate the process of classifying and organizing magnetic resonance imaging (MRI) data according to acquisition types common to the clinical routine, as well as automate the transformation of raw, unstructured images into Brain Imaging Data Structure (BIDS) datasets. To do this, we trained an XGBoost model to classify MRI acquisition types using relatively few acquisition parameters that are automatically stored by the MRI scanner in image file metadata, which are then mapped to the naming conventions prescribed by BIDS to transform the input images to the BIDS structure. The model recognizes MRI types with 99.475% accuracy, as well as a micro/macro-averaged precision of 0.9995/0.994, a micro/macro-averaged recall of 0.9995/0.989, and a micro/macro-averaged F1 of 0.9995/0.991. Our approach accurately and quickly classifies MRI types and transforms unstructured data into standardized structures with little-to-no user intervention, reducing the barrier of entry for clinical scientists and increasing the accessibility of existing neuroimaging data.
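The last step of the workflow above maps a predicted acquisition type to the BIDS naming convention. A minimal sketch of that mapping, assuming a small illustrative lookup table (the actual tool covers many more acquisition types and BIDS entities, and the function name is ours):

```python
# Map a predicted acquisition type to a BIDS-style path.
BIDS_LAYOUT = {
    # predicted type -> (BIDS datatype directory, filename suffix)
    "T1w":   ("anat", "T1w"),
    "T2w":   ("anat", "T2w"),
    "FLAIR": ("anat", "FLAIR"),
    "dwi":   ("dwi",  "dwi"),
}

def bids_path(subject, session, predicted_type, ext=".nii.gz"):
    datatype, suffix = BIDS_LAYOUT[predicted_type]
    return (f"sub-{subject}/ses-{session}/{datatype}/"
            f"sub-{subject}_ses-{session}_{suffix}{ext}")

print(bids_path("01", "01", "FLAIR"))
# sub-01/ses-01/anat/sub-01_ses-01_FLAIR.nii.gz
```

Because BIDS prescribes both the directory layout and the filename entities, a correct classifier prediction is all that is needed to place a raw series deterministically.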
Affiliation(s)
- Alexander Bartnik
- Buffalo Neuroimaging Analysis Center, Department of Neurology, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, 77 Goodell St, Buffalo, NY, 14203, USA
- Sujal Singh
- Buffalo Neuroimaging Analysis Center, Department of Neurology, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, 77 Goodell St, Buffalo, NY, 14203, USA
- Conan Sum
- Buffalo Neuroimaging Analysis Center, Department of Neurology, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, 77 Goodell St, Buffalo, NY, 14203, USA
- Mackenzie Smith
- Buffalo Neuroimaging Analysis Center, Department of Neurology, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, 77 Goodell St, Buffalo, NY, 14203, USA
- Niels Bergsland
- Buffalo Neuroimaging Analysis Center, Department of Neurology, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, 77 Goodell St, Buffalo, NY, 14203, USA
- Robert Zivadinov
- Buffalo Neuroimaging Analysis Center, Department of Neurology, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, 77 Goodell St, Buffalo, NY, 14203, USA
- Michael G Dwyer
- Buffalo Neuroimaging Analysis Center, Department of Neurology, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, 77 Goodell St, Buffalo, NY, 14203, USA.
5
Li W, Lin HM, Lin A, Napoleone M, Moreland R, Murari A, Stepanov M, Ivanov E, Prasad AS, Shih G, Hu Z, Zulbayar S, Sejdić E, Colak E. Machine Learning Classification of Body Part, Imaging Axis, and Intravenous Contrast Enhancement on CT Imaging. Can Assoc Radiol J 2024; 75:82-91. [PMID: 37439250] [DOI: 10.1177/08465371231180844]
Abstract
Purpose: To develop and evaluate machine learning models that automatically identify the body part(s) imaged, the axis of imaging, and the presence of intravenous contrast material in a CT series. Methods: This retrospective study included 6955 series from 1198 studies (501 females, 697 males; mean age 56.5 years) obtained between January 2010 and September 2021. Each series was annotated by a trained board-certified radiologist with labels covering 16 body parts, 3 imaging axes, and whether an intravenous contrast agent was used. Studies were randomly assigned to training, validation, and testing sets in proportions of 70%, 20%, and 10%, respectively, and a 3D deep neural network was developed for each classification task. External validation was conducted with a total of 35,272 series from 7 publicly available datasets. Classification accuracy was assessed independently for each task to evaluate model performance. Results: Accuracies for identifying body parts, imaging axes, and the presence of intravenous contrast were 96.0% (95% CI: 94.6%, 97.2%), 99.2% (95% CI: 98.5%, 99.7%), and 97.5% (95% CI: 96.4%, 98.5%), respectively. Generalizability was demonstrated through external validation, with accuracies of 89.7%-97.8%, 98.6%-100%, and 87.8%-98.6% for the same tasks. Conclusions: The developed models demonstrated high performance on both internal and external testing in identifying key aspects of a CT series.
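Note that the abstract assigns whole studies, not individual series, to the 70/20/10 splits; this prevents series from one study leaking across training and test sets. A minimal sketch of such a study-level split (the helper name and seed are ours, not from the paper):

```python
import random

def split_by_study(study_ids, fractions=(0.7, 0.2, 0.1), seed=42):
    """Partition unique study IDs into train/validation/test sets."""
    ids = sorted(set(study_ids))
    random.Random(seed).shuffle(ids)
    n_train = round(len(ids) * fractions[0])
    n_val = round(len(ids) * fractions[1])
    return (set(ids[:n_train]),
            set(ids[n_train:n_train + n_val]),
            set(ids[n_train + n_val:]))

# 1198 studies, as in the paper's cohort.
train, val, test = split_by_study(range(1198))
print(len(train), len(val), len(test))  # 839 240 119
```

Every series then inherits the split of its parent study, so no patient-level information crosses the train/test boundary.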
Affiliation(s)
- Wuqi Li
- The Edward S. Rogers Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON, Canada
- Hui Ming Lin
- Department of Medical Imaging, Unity Health Toronto, Toronto, ON, Canada
- Amy Lin
- Department of Medical Imaging, Unity Health Toronto, Toronto, ON, Canada
- Department of Medical Imaging, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Marc Napoleone
- Department of Medical Imaging, Unity Health Toronto, Toronto, ON, Canada
- Department of Medical Imaging, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Robert Moreland
- Department of Medical Imaging, Unity Health Toronto, Toronto, ON, Canada
- Department of Medical Imaging, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Alexis Murari
- The Edward S. Rogers Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON, Canada
- Maxim Stepanov
- The Edward S. Rogers Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON, Canada
- Eric Ivanov
- The Edward S. Rogers Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON, Canada
- Abhinav Sanjeeva Prasad
- The Edward S. Rogers Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON, Canada
- George Shih
- Department of Radiology, Weill Cornell Medicine, New York, NY, USA
- Zixuan Hu
- The Edward S. Rogers Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON, Canada
- Suvd Zulbayar
- Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Ervin Sejdić
- The Edward S. Rogers Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON, Canada
- North York General Hospital, Toronto, ON, Canada
- Errol Colak
- Department of Medical Imaging, Unity Health Toronto, Toronto, ON, Canada
- Department of Medical Imaging, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Li Ka Shing Knowledge Institute, St Michael's Hospital, Unity Health Toronto, Toronto, ON, Canada
6
Kim B, Romeijn S, van Buchem M, Mehrizi MHR, Grootjans W. A holistic approach to implementing artificial intelligence in radiology. Insights Imaging 2024; 15:22. [PMID: 38270790] [PMCID: PMC10811299] [DOI: 10.1186/s13244-023-01586-4]
Abstract
OBJECTIVE Despite the widespread recognition of the importance of artificial intelligence (AI) in healthcare, its implementation is often limited. This article aims to address this implementation gap by presenting insights from an in-depth case study of an organisation that took a holistic approach to AI implementation. MATERIALS AND METHODS We conducted a longitudinal, qualitative case study of the implementation of AI in radiology at a large academic medical centre in the Netherlands over three years. The collected data consist of 43 days of work observations, 30 meeting observations, 18 interviews, and 41 relevant documents. Abductive reasoning was used for systematic data analysis, which revealed three change-initiative themes responding to specific AI implementation challenges. RESULTS This study identifies challenges of implementing AI in radiology at different levels and proposes a holistic approach to tackling them. At the technology level, there are multiple narrow AI applications with no standard user interface; at the workflow level, AI results allow only limited interaction with radiologists; at the people and organisational level, there are divergent expectations and limited experience with AI. The case of Southern illustrates that organisations can reap more benefits from AI implementation by investing in long-term initiatives that holistically align both the social and technological aspects of clinical practice. CONCLUSION This study highlights the importance of a holistic approach to AI implementation that addresses challenges spanning the technology, workflow, and organisational levels. Aligning change initiatives across these levels has proven important for facilitating wide-scale implementation of AI in clinical practice. CRITICAL RELEVANCE STATEMENT Adoption of artificial intelligence is crucial for future-ready radiological care.
This case study highlights the importance of a holistic approach that addresses technological, workflow, and organisational aspects, offering practical insights and solutions to facilitate successful AI adoption in clinical practice. KEY POINTS 1. Practical and actionable insights into successful AI implementation in radiology are lacking. 2. Aligning technology, workflow, and organisational aspects is crucial for successful AI implementation. 3. A holistic approach helps organisations create sustainable value through AI implementation.
Affiliation(s)
- Bomi Kim
- House of Innovation (Department of Entrepreneurship, Innovation and Technology), Stockholm School of Economics, Stockholm, Sweden
- Stephan Romeijn
- Radiology, Leiden University Medical Center, Leiden, Netherlands.
- Mark van Buchem
- Radiology, Leiden University Medical Center, Leiden, Netherlands
- Willem Grootjans
- Radiology, Leiden University Medical Center, Leiden, Netherlands
7
You S, Wiest R, Reyes M. SaRF: Saliency regularized feature learning improves MRI sequence classification. Comput Methods Programs Biomed 2024; 243:107867. [PMID: 37866127] [DOI: 10.1016/j.cmpb.2023.107867]
Abstract
BACKGROUND AND OBJECTIVE Deep learning based medical image analysis technologies have the potential to greatly improve the workflow of neuroradiologists dealing routinely with multi-sequence MRI. However, an essential step for current deep learning systems employing multi-sequence MRI is to ensure that the sequence type is correctly assigned. This requirement is not easily satisfied in clinical practice and is subject to protocol and human errors. Although deep learning models are promising for image-based sequence classification, robustness and reliability issues limit their application in clinical practice. METHODS In this paper, we propose a novel method that uses saliency information to guide the learning of features for sequence classification. The method uses two self-supervised loss terms, first to enhance the distinctiveness among class-specific saliency maps and, second, to promote similarity between class-specific saliency maps and learned deep features. RESULTS On a cohort of 2100 patient cases comprising six different MR sequences per case, our method improves mean accuracy by 4.4% (from 0.935 to 0.976), mean AUC by 1.2% (from 0.9851 to 0.9968), and mean F1 score by 20.5% (from 0.767 to 0.924). Furthermore, based on feedback from an expert neuroradiologist, we show that the proposed approach improves the interpretability of trained models as well as their calibration, with a 30.8% reduction in expected calibration error (from 0.065 to 0.045). The code will be made publicly available. CONCLUSIONS The proposed method improves accuracy, AUC, and F1 score, as well as the calibration and interpretability of the resulting saliency maps.
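The exact loss formulation is given in the paper; the sketch below only illustrates the two regularization ideas named in the abstract, using cosine similarity as an assumed similarity measure: (1) class-specific saliency maps should be mutually distinct, and (2) each class's saliency map should align with the deep features learned for that class. The function and variable names are ours.

```python
import math

def cosine(a, b):
    """Cosine similarity between two flat vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def saliency_regularizers(saliency_by_class, features_by_class):
    classes = list(saliency_by_class)
    # Term 1: penalize similarity between different classes' saliency maps.
    pairs = [(a, b) for i, a in enumerate(classes) for b in classes[i + 1:]]
    distinct_loss = sum(cosine(saliency_by_class[a], saliency_by_class[b])
                        for a, b in pairs) / len(pairs)
    # Term 2: reward alignment between each class's saliency and its features
    # (negated so that lower is better, like a loss).
    align_loss = -sum(cosine(saliency_by_class[c], features_by_class[c])
                      for c in classes) / len(classes)
    return distinct_loss, align_loss

# Orthogonal saliency maps that perfectly match their features: both terms
# are at their best values.
maps = {"T1": [1.0, 0.0], "T2": [0.0, 1.0]}
print(saliency_regularizers(maps, maps))  # (0.0, -1.0)
```

In training, both terms would be added (with weights) to the usual classification loss; the sketch omits the network entirely.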
Affiliation(s)
- Suhang You
- ARTORG, Graduate School for Cellular and Biomedical Research, University of Bern, Murtenstrasse 50, Bern, 3008, Switzerland.
- Roland Wiest
- Support Center of Advanced Neuroimaging, Institute of Diagnostic and Interventional Neuroradiology, University Hospital Bern, University of Bern, Freiburgstrasse 18, Bern, 3010, Switzerland.
- Mauricio Reyes
- ARTORG, Graduate School for Cellular and Biomedical Research, University of Bern, Murtenstrasse 50, Bern, 3008, Switzerland; Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse, Bern, 3010, Switzerland.
8
Na S, Ko Y, Ham SJ, Sung YS, Kim MH, Shin Y, Jung SC, Ju C, Kim BS, Yoon K, Kim KW. Sequence-Type Classification of Brain MRI for Acute Stroke Using a Self-Supervised Machine Learning Algorithm. Diagnostics (Basel) 2023; 14:70. [PMID: 38201379] [PMCID: PMC10804387] [DOI: 10.3390/diagnostics14010070]
Abstract
We propose a self-supervised machine learning (ML) algorithm for sequence-type classification of brain MRI using a supervisory signal derived from DICOM metadata (i.e., a rule-based virtual label). A total of 1787 brain MRI datasets were assembled, including 1531 from hospitals and 256 from multi-center trial datasets. The ground truth was generated by two experienced image analysts and checked by a radiologist. An ML framework called ImageSort-net was developed using various features related to MRI acquisition parameters and trained on virtual labels derived from a rule-based labeling system, which act as labels for supervised learning. To evaluate ImageSort-net (MLvirtual), we compared its performance with that of a model trained on human expert labels (MLhuman), using as the test set the series that the rule-based labeling system failed to label in each dataset. When trained on hospital datasets, the overall accuracy of MLvirtual was comparable to that of MLhuman (98.5% and 99%, respectively). When trained on the relatively small multi-center trial dataset, its overall accuracy was lower than that of MLhuman (95.6% and 99.4%, respectively). After integrating the two datasets and re-training, MLvirtual achieved higher accuracy (99.7%) than when trained only on the multi-center dataset (95.6%). Additionally, after re-training, the multi-center inference performances of MLvirtual and MLhuman were identical (99.7%). Training ML algorithms on rule-based virtual labels achieves high accuracy for sequence-type classification of brain MRI and enables a sustainable self-learning system.
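The core idea above is that a rule system over DICOM metadata can mint training labels for free, leaving blank only the series it cannot infer. The sketch below shows what such a rule-based virtual labeler might look like; the specific descriptions and TR/TE thresholds are illustrative assumptions, not the paper's actual rules.

```python
def virtual_label(meta):
    """Return a rule-based virtual sequence label, or None if no rule fires.
    The None (blank) cases are exactly the ones a human must label."""
    desc = meta.get("SeriesDescription", "").lower()
    tr, te = meta.get("RepetitionTime"), meta.get("EchoTime")
    if "flair" in desc:
        return "FLAIR"
    if "dwi" in desc or "diffusion" in desc:
        return "DWI"
    if tr is not None and te is not None:
        if tr < 800 and te < 30:      # short TR/TE -> T1-weighted (toy rule)
            return "T1"
        if tr > 2000 and te > 80:     # long TR/TE -> T2-weighted (toy rule)
            return "T2"
    return None  # rule system cannot infer -> left blank

print(virtual_label({"SeriesDescription": "Ax FLAIR"}))        # FLAIR
print(virtual_label({"RepetitionTime": 500, "EchoTime": 12}))  # T1
print(virtual_label({"SeriesDescription": "localizer"}))       # None
```

A classifier trained on the non-blank outputs can then generalize to the blank ones, which is the self-supervised trick the abstract evaluates.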
Affiliation(s)
- Seongwon Na
- Department of Computer Science and Engineering, Konkuk University, Seoul 05029, Republic of Korea;
- Biomedical Research Center, Asan Institute for Life Sciences, Asan Medical Center, Seoul 05505, Republic of Korea
- Yousun Ko
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul 05505, Republic of Korea; (Y.K.)
- Su Jung Ham
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul 05505, Republic of Korea; (Y.K.)
- Yu Sub Sung
- Clinical Research Center, Asan Medical Center, Seoul 05505, Republic of Korea
- Department of Convergence Medicine, University of Ulsan College of Medicine, Seoul 05505, Republic of Korea
- Mi-Hyun Kim
- Trialinformatics Inc., Seoul 05505, Republic of Korea
- Department of Radiation Science & Technology, Jeonbuk National University, Jeonju 56212, Republic of Korea
- Youngbin Shin
- Biomedical Research Center, Asan Institute for Life Sciences, Asan Medical Center, Seoul 05505, Republic of Korea
- Seung Chai Jung
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul 05505, Republic of Korea; (Y.K.)
- Chung Ju
- Shin Poong Pharm. Co., Ltd., Seoul 06246, Republic of Korea
- Graduate School of Clinical Pharmacy, CHA University, Pocheon-si 11160, Republic of Korea
- Byung Su Kim
- Shin Poong Pharm. Co., Ltd., Seoul 06246, Republic of Korea
- Kyoungro Yoon
- Department of Computer Science and Engineering, Konkuk University, Seoul 05029, Republic of Korea;
- Department of Smart ICT Convergence Engineering, Konkuk University, Seoul 05029, Republic of Korea
- Kyung Won Kim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul 05505, Republic of Korea; (Y.K.)
9
Macdonald JA, Zhu Z, Konkel B, Mazurowski MA, Wiggins WF, Bashir MR. Duke Liver Dataset: A Publicly Available Liver MRI Dataset with Liver Segmentation Masks and Series Labels. Radiol Artif Intell 2023; 5:e220275. [PMID: 37795141] [PMCID: PMC10546360] [DOI: 10.1148/ryai.220275]
Abstract
The Duke Liver Dataset contains 2146 abdominal MRI series from 105 patients, including a majority with cirrhotic features, and 310 image series with corresponding manually segmented liver masks.
Affiliation(s)
- Jacob A. Macdonald
- From the Department of Radiology (J.A.M., Z.Z., B.K., M.A.M., W.F.W., M.R.B.), Department of Electrical and Computer Engineering (M.A.M.), Department of Computer Science (M.A.M.), Center for Advanced Magnetic Resonance Development (M.R.B.), and Division of Gastroenterology, Department of Medicine (M.R.B.), Duke University, 2301 Erwin Rd, Durham, NC 27710
- Zhe Zhu
- From the Department of Radiology (J.A.M., Z.Z., B.K., M.A.M., W.F.W., M.R.B.), Department of Electrical and Computer Engineering (M.A.M.), Department of Computer Science (M.A.M.), Center for Advanced Magnetic Resonance Development (M.R.B.), and Division of Gastroenterology, Department of Medicine (M.R.B.), Duke University, 2301 Erwin Rd, Durham, NC 27710
- Brandon Konkel
- From the Department of Radiology (J.A.M., Z.Z., B.K., M.A.M., W.F.W., M.R.B.), Department of Electrical and Computer Engineering (M.A.M.), Department of Computer Science (M.A.M.), Center for Advanced Magnetic Resonance Development (M.R.B.), and Division of Gastroenterology, Department of Medicine (M.R.B.), Duke University, 2301 Erwin Rd, Durham, NC 27710
- Maciej A. Mazurowski
- From the Department of Radiology (J.A.M., Z.Z., B.K., M.A.M., W.F.W., M.R.B.), Department of Electrical and Computer Engineering (M.A.M.), Department of Computer Science (M.A.M.), Center for Advanced Magnetic Resonance Development (M.R.B.), and Division of Gastroenterology, Department of Medicine (M.R.B.), Duke University, 2301 Erwin Rd, Durham, NC 27710
- Walter F. Wiggins
- From the Department of Radiology (J.A.M., Z.Z., B.K., M.A.M., W.F.W., M.R.B.), Department of Electrical and Computer Engineering (M.A.M.), Department of Computer Science (M.A.M.), Center for Advanced Magnetic Resonance Development (M.R.B.), and Division of Gastroenterology, Department of Medicine (M.R.B.), Duke University, 2301 Erwin Rd, Durham, NC 27710
- Mustafa R. Bashir
- From the Department of Radiology (J.A.M., Z.Z., B.K., M.A.M., W.F.W., M.R.B.), Department of Electrical and Computer Engineering (M.A.M.), Department of Computer Science (M.A.M.), Center for Advanced Magnetic Resonance Development (M.R.B.), and Division of Gastroenterology, Department of Medicine (M.R.B.), Duke University, 2301 Erwin Rd, Durham, NC 27710
10
Pierre K, Haneberg AG, Kwak S, Peters KR, Hochhegger B, Sananmuang T, Tunlayadechanont P, Tighe PJ, Mancuso A, Forghani R. Applications of Artificial Intelligence in the Radiology Roundtrip: Process Streamlining, Workflow Optimization, and Beyond. Semin Roentgenol 2023; 58:158-169. [PMID: 37087136] [DOI: 10.1053/j.ro.2023.02.003]
Abstract
There are many impactful applications of artificial intelligence (AI) in the electronic radiology roundtrip and the patient's journey through the healthcare system that go beyond diagnostic applications. These tools have the potential to improve quality and safety, optimize workflow, increase efficiency, and increase patient satisfaction. In this article, we review the role of AI in process improvement and workflow enhancement, covering applications from the time of order entry and scan acquisition, through support of the image interpretation task, to tasks after image interpretation such as result communication. These non-diagnostic workflow and process optimization tools are an important part of the arsenal of potential AI tools that can streamline day-to-day clinical practice and patient care.
Affiliation(s)
- Kevin Pierre
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Adam G Haneberg
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Division of Medical Physics, Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Sean Kwak
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL
- Keith R Peters
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Bruno Hochhegger
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Thiparom Sananmuang
- Department of Diagnostic and Therapeutic Radiology and Research, Faculty of Medicine Ramathibodi Hospital, Ratchathewi, Bangkok, Thailand
- Padcha Tunlayadechanont
- Department of Diagnostic and Therapeutic Radiology and Research, Faculty of Medicine Ramathibodi Hospital, Ratchathewi, Bangkok, Thailand
- Patrick J Tighe
- Departments of Anesthesiology & Orthopaedic Surgery, University of Florida College of Medicine, Gainesville, FL
- Anthony Mancuso
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Reza Forghani
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL; Division of Medical Physics, Department of Radiology, University of Florida College of Medicine, Gainesville, FL.
11
Cluceru J, Lupo JM, Interian Y, Bove R, Crane JC. Improving the Automatic Classification of Brain MRI Acquisition Contrast with Machine Learning. J Digit Imaging 2023; 36:289-305. PMID: 35941406; PMCID: PMC9984597; DOI: 10.1007/s10278-022-00690-z.
Abstract
Automated quantification of data acquired as part of an MRI exam requires identification of the specific acquisition relevant to a particular analysis. This motivates the development of methods capable of reliably classifying MRI acquisitions according to their nominal contrast type, e.g., T1-weighted, T1 post-contrast, T2-weighted, T2-weighted FLAIR, and proton-density-weighted. Prior studies have investigated imaging-based and DICOM metadata-based methods, with success on patient cohorts acquired as part of clinical trials. This study compares the performance of these methods on heterogeneous clinical datasets acquired with many different scanners from many institutions. Random forest (RF) and convolutional neural network (CNN) models were trained on metadata and pixel data, respectively. A combined RF model incorporated CNN logits from the pixel-based model together with metadata. Four cohorts were used for model development and evaluation: MS research (n = 11,106 series), MS clinical (n = 3244 series), glioma research (n = 612 series, test/validation only), and ADNI PTSD (n = 477 series, training only). Together, these cohorts represent a broad range of acquisition contexts (scanners, sequences, institutions) and subject pathologies. Pixel-based CNN and combined models achieved accuracies between 97 and 98% on the clinical MS cohort. Validation/test accuracies with the glioma cohort were 99.7% (metadata only) and 98.4% (CNN). Accurate and generalizable classification of MRI acquisition contrast types was demonstrated. Such methods are important for enabling automated data selection in high-throughput and big-data image analysis applications.
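The metadata-only half of such a pipeline can be sketched with a toy rule-based classifier over common DICOM timing tags. This is an illustrative stand-in, not the study's trained random forest: the tag names follow the DICOM standard, but the thresholds are assumptions chosen for the example.

```python
# Toy sketch: classify nominal MR contrast type from DICOM metadata alone.
# Tags used: EchoTime (TE), RepetitionTime (TR), InversionTime (TI), and
# ContrastBolusAgent. Thresholds are illustrative assumptions, not the
# paper's learned model.

def classify_contrast(meta: dict) -> str:
    te = meta.get("EchoTime", 0.0)          # ms
    tr = meta.get("RepetitionTime", 0.0)    # ms
    ti = meta.get("InversionTime")          # ms, often absent
    contrast_agent = bool(meta.get("ContrastBolusAgent"))

    if ti is not None and ti > 1500 and te > 60:
        return "T2-FLAIR"                   # long TI nulls CSF, long TE
    if te > 60:
        return "T2"                         # long TE
    if te < 30 and tr < 1000:
        return "T1-post" if contrast_agent else "T1"  # short TE, short TR
    return "PD"                             # long TR, short TE

print(classify_contrast({"EchoTime": 100.0, "RepetitionTime": 9000.0,
                         "InversionTime": 2500.0}))   # prints "T2-FLAIR"
```

In the study itself these tags feed a random forest rather than fixed rules, which is what lets the model cope with vendor- and site-specific metadata conventions.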
Affiliation(s)
- Julia Cluceru
- Center for Intelligent Imaging, Department of Radiology & Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Janine M Lupo
- Center for Intelligent Imaging, Department of Radiology & Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Yannet Interian
- MS in Analytics Program, University of San Francisco, San Francisco, CA, USA
- Riley Bove
- Department of Neurology, MS and Neuroinflammation Clinic, University of California San Francisco, San Francisco, CA, USA
- Weill Institute for Neurosciences, University of California San Francisco, San Francisco, CA, USA
- Jason C Crane
- Center for Intelligent Imaging, Department of Radiology & Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA.
12
Head CT deep learning model is highly accurate for early infarct estimation. Sci Rep 2023; 13:189. PMID: 36604467; PMCID: PMC9814956; DOI: 10.1038/s41598-023-27496-5.
Abstract
Non-contrast head CT (NCCT) is extremely insensitive for early (< 3-6 h) acute infarct identification. We developed a deep learning model that detects and delineates suspected early acute infarcts on NCCT, using diffusion MRI as ground truth (3566 NCCT/MRI training patient pairs). The model substantially outperformed 3 expert neuroradiologists on a test set of 150 CT scans of patients who were potential candidates for thrombectomy (60 stroke-negative; 90 stroke-positive, middle cerebral artery territory infarcts only), with sensitivity 96% (specificity 72%) for the model versus 61-66% (specificity 90-92%) for the experts; model infarct volume estimates also correlated strongly with those of diffusion MRI (r² > 0.98). When this 150-scan test set was expanded to a total of 364 CT scans with a more heterogeneous distribution of infarct locations (94 stroke-negative; 270 stroke-positive, mixed territory infarcts), model sensitivity was 97% and specificity 99% for detection of infarcts larger than the 70 mL volume threshold used for patient selection in several major randomized controlled trials of thrombectomy treatment.
13
Devi S, Bakshi S, Sahoo MN. Effect of situational and instrumental distortions on the classification of brain MR images. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104177.
14
Kasmanoff N, Lee MD, Razavian N, Lui YW. Deep multi-task learning and random forest for series classification by pulse sequence type and orientation. Neuroradiology 2023; 65:77-87. PMID: 35906437; PMCID: PMC9361920; DOI: 10.1007/s00234-022-03023-7.
Abstract
PURPOSE Increasingly complex MRI studies and variable series naming conventions reveal limitations of rule-based image routing, especially in health systems with multiple scanners and sites. Accurate methods to identify series based on image content would aid post-processing and PACS viewing. Recent deep/machine learning efforts classify 5-8 basic brain MR sequences. We present an ensemble model combining a convolutional neural network and a random forest classifier to differentiate 25 brain sequences and image orientation. METHODS Series were grouped by descriptions into 25 sequences and 4 orientations. Dataset A, obtained from our institution, was divided into training (16,828 studies; 48,512 series; 112,028 images), validation (4746 studies; 16,612 series; 26,222 images) and test sets (6348 studies; 58,705 series; 3,314,018 images). Dataset B, obtained from a separate hospital, was used for out-of-domain external validation (1252 studies; 2150 series; 234,944 images). We developed an ensemble model combining a 2D convolutional neural network with a custom multi-task learning architecture and random forest classifier trained on DICOM metadata to classify sequence and orientation by series. RESULTS The neural network, random forest, and ensemble achieved 95%, 97%, and 98% overall sequence accuracy on dataset A, and 98%, 99%, and 99% accuracy on dataset B, respectively. All models achieved > 99% orientation accuracy on both datasets. CONCLUSION The ensemble model for series identification accommodates the complexity of brain MRI studies in state-of-the-art clinical practice. Expanding on previous work demonstrating proof-of-concept, our approach is more comprehensive with greater sequence diversity and orientation classification.
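The orientation half of this task has a well-known deterministic baseline: the DICOM ImageOrientationPatient attribute (0020,0037) stores the row and column direction cosines, and the dominant axis of their cross product (the slice normal) gives the scan plane. A minimal sketch of that rule, not the paper's learned models:

```python
# Infer scan plane from DICOM ImageOrientationPatient (six direction
# cosines: row vector then column vector, in patient coordinates).
# The slice normal is row x column; its largest component decides the plane
# (x -> sagittal, y -> coronal, z -> axial).

def scan_plane(iop):
    rx, ry, rz, cx, cy, cz = iop
    normal = (ry * cz - rz * cy,    # cross product, x component
              rz * cx - rx * cz,    # y component
              rx * cy - ry * cx)    # z component
    axis = max(range(3), key=lambda i: abs(normal[i]))
    return ("sagittal", "coronal", "axial")[axis]

print(scan_plane([1, 0, 0, 0, 1, 0]))   # prints "axial"
print(scan_plane([0, 1, 0, 0, 0, -1]))  # prints "sagittal"
```

Oblique acquisitions fall between axes, which is one reason the paper's models learn orientation jointly with sequence type rather than relying on this rule alone.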
Affiliation(s)
- Noah Kasmanoff
- Center for Data Science, New York University, New York, NY USA
- Matthew D. Lee
- Department of Radiology, NYU Grossman School of Medicine, New York University, New York, NY 10016 USA
- Narges Razavian
- Center for Data Science, New York University, New York, NY, USA; Department of Radiology, NYU Grossman School of Medicine, New York University, New York, NY 10016, USA; Department of Population Health, NYU Grossman School of Medicine, New York University, New York, NY, USA
- Yvonne W. Lui
- Department of Radiology, NYU Grossman School of Medicine, New York University, New York, NY 10016 USA
15
Bridge CP, Gorman C, Pieper S, Doyle SW, Lennerz JK, Kalpathy-Cramer J, Clunie DA, Fedorov AY, Herrmann MD. Highdicom: a Python Library for Standardized Encoding of Image Annotations and Machine Learning Model Outputs in Pathology and Radiology. J Digit Imaging 2022; 35:1719-1737. PMID: 35995898; PMCID: PMC9712874; DOI: 10.1007/s10278-022-00683-y.
Abstract
Machine learning (ML) is revolutionizing image-based diagnostics in pathology and radiology. ML models have shown promising results in research settings, but the lack of interoperability between ML systems and enterprise medical imaging systems has been a major barrier to clinical integration and evaluation. The DICOM® standard specifies information object definitions (IODs) and services for the representation and communication of digital images and related information, including image-derived annotations and analysis results. However, the complexity of the standard represents an obstacle to its adoption in the ML community and creates a need for software libraries and tools that simplify working with datasets in DICOM format. Here we present the highdicom library, which provides a high-level application programming interface (API) for the Python programming language that abstracts low-level details of the standard and enables encoding and decoding of image-derived information in DICOM format in a few lines of Python code. The highdicom library leverages NumPy arrays for efficient data representation and ties into the extensive Python ecosystem for image processing and machine learning. Simultaneously, by simplifying the creation and parsing of DICOM-compliant files, highdicom achieves interoperability with the medical imaging systems that hold the data used to train and run ML models, and that ultimately communicate and store model outputs for clinical use. We demonstrate, through experiments with slide microscopy and computed tomography imaging, that by bridging these two ecosystems, highdicom enables developers and researchers to train and evaluate state-of-the-art ML models in pathology and radiology while remaining compliant with the DICOM standard and interoperable with clinical systems at all stages. To promote standardization of ML research and streamline the ML model development and deployment process, we made the library available free and open-source at https://github.com/herrmannlab/highdicom.
Affiliation(s)
- Christopher P Bridge
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA
- MGH & BWH Center for Clinical Data Science, Mass General Brigham, Boston, MA, USA
- Chris Gorman
- Computational Pathology, Department of Pathology, Massachusetts General Hospital, Boston, MA, USA
- Sean W Doyle
- MGH & BWH Center for Clinical Data Science, Mass General Brigham, Boston, MA, USA
- Jochen K Lennerz
- Center for Integrated Diagnostics, Department of Pathology, Massachusetts General Hospital, Boston, MA, USA
- Department of Pathology, Harvard Medical School, Boston, MA, USA
- Jayashree Kalpathy-Cramer
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA
- MGH & BWH Center for Clinical Data Science, Mass General Brigham, Boston, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Andriy Y Fedorov
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Surgical Planning Laboratory, Department of Radiology, Brigham and Women's Hospital, Boston, MA, USA
- Markus D Herrmann
- Computational Pathology, Department of Pathology, Massachusetts General Hospital, Boston, MA, USA.
- Department of Pathology, Harvard Medical School, Boston, MA, USA.
16
Bridge CP, Bizzo BC, Hillis JM, Chin JK, Comeau DS, Gauriau R, Macruz F, Pawar J, Noro FTC, Sharaf E, Straus Takahashi M, Wright B, Kalafut JF, Andriole KP, Pomerantz SR, Pedemonte S, González RG. Development and clinical application of a deep learning model to identify acute infarct on magnetic resonance imaging. Sci Rep 2022; 12:2154. PMID: 35140277; PMCID: PMC8828773; DOI: 10.1038/s41598-022-06021-0.
Abstract
Stroke is a leading cause of death and disability. The ability to quickly identify the presence of acute infarct and quantify its volume on magnetic resonance imaging (MRI) has important treatment implications. We developed a machine learning model that used the apparent diffusion coefficient and diffusion-weighted imaging series. It was trained on 6,657 MRI studies from Massachusetts General Hospital (MGH; Boston, USA). All studies were labelled positive or negative for infarct (classification annotation), with 377 having the region of interest outlined (segmentation annotation). The different annotation types facilitated training on more studies while avoiding the extensive time required to manually segment every study. We initially validated the model on studies sequestered from the training set. We then tested the model on studies from three clinical scenarios: consecutive stroke team activations for 6 months at MGH, consecutive stroke team activations for 6 months at a hospital that did not provide training data (Brigham and Women's Hospital [BWH]; Boston, USA), and an international site (Diagnósticos da América SA [DASA]; Brazil). The model results were compared to radiologist ground truth interpretations. The model performed better when trained on classification and segmentation annotations (area under the receiver operating characteristic curve [AUROC] 0.995 [95% CI 0.992–0.998] and median Dice coefficient for segmentation overlap of 0.797 [IQR 0.642–0.861]) compared to segmentation annotations alone (AUROC 0.982 [95% CI 0.972–0.990] and Dice coefficient 0.776 [IQR 0.584–0.857]). The model accurately identified infarcts for MGH stroke team activations (AUROC 0.964 [95% CI 0.943–0.982], 381 studies), BWH stroke team activations (AUROC 0.981 [95% CI 0.966–0.993], 247 studies), and at DASA (AUROC 0.998 [95% CI 0.993–1.000], 171 studies). The model accurately segmented infarcts, with Pearson correlations between model output and ground truth volumes of 0.968 to 0.986 across the three scenarios. Acute infarct can be accurately detected and segmented on MRI in real-world clinical scenarios using a machine learning model.
Affiliation(s)
- Christopher P Bridge
- MGH & BWH Center for Clinical Data Science, Mass General Brigham, Boston, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, USA; Harvard Medical School, Boston, USA; Department of Radiology, Massachusetts General Hospital, Boston, USA
- Bernardo C Bizzo
- MGH & BWH Center for Clinical Data Science, Mass General Brigham, Boston, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, USA; Harvard Medical School, Boston, USA; Department of Radiology, Massachusetts General Hospital, Boston, USA; Diagnósticos da América SA, São Paulo, Brazil; MGH & BWH Center for Clinical Data Science, Mass General Brigham, Suite 1303, Floor 13, 100 Cambridge St, Boston, MA, 02114, USA
- James M Hillis
- MGH & BWH Center for Clinical Data Science, Mass General Brigham, Boston, USA; Harvard Medical School, Boston, USA; Department of Neurology, Massachusetts General Hospital, Boston, USA
- John K Chin
- MGH & BWH Center for Clinical Data Science, Mass General Brigham, Boston, USA
- Donnella S Comeau
- MGH & BWH Center for Clinical Data Science, Mass General Brigham, Boston, USA
- Romane Gauriau
- MGH & BWH Center for Clinical Data Science, Mass General Brigham, Boston, USA
- Fabiola Macruz
- MGH & BWH Center for Clinical Data Science, Mass General Brigham, Boston, USA
- Jayashri Pawar
- MGH & BWH Center for Clinical Data Science, Mass General Brigham, Boston, USA
- Flavia T C Noro
- MGH & BWH Center for Clinical Data Science, Mass General Brigham, Boston, USA
- Elshaimaa Sharaf
- MGH & BWH Center for Clinical Data Science, Mass General Brigham, Boston, USA
- Bradley Wright
- MGH & BWH Center for Clinical Data Science, Mass General Brigham, Boston, USA
- Katherine P Andriole
- MGH & BWH Center for Clinical Data Science, Mass General Brigham, Boston, USA; Harvard Medical School, Boston, USA; Department of Radiology, Brigham and Women's Hospital, Boston, USA
- Stuart R Pomerantz
- MGH & BWH Center for Clinical Data Science, Mass General Brigham, Boston, USA; Harvard Medical School, Boston, USA; Department of Radiology, Massachusetts General Hospital, Boston, USA
- Stefano Pedemonte
- MGH & BWH Center for Clinical Data Science, Mass General Brigham, Boston, USA
- R Gilberto González
- MGH & BWH Center for Clinical Data Science, Mass General Brigham, Boston, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, USA; Harvard Medical School, Boston, USA; Department of Radiology, Massachusetts General Hospital, Boston, USA
17
Park C, You SC, Jeon H, Jeong CW, Choi JW, Park RW. Development and Validation of the Radiology Common Data Model (R-CDM) for the International Standardization of Medical Imaging Data. Yonsei Med J 2022; 63:S74-S83. PMID: 35040608; PMCID: PMC8790584; DOI: 10.3349/ymj.2022.63.s74.
Abstract
PURPOSE Digital Imaging and Communications in Medicine (DICOM), a standard file format for medical imaging data, contains metadata describing each file. However, metadata are often incomplete, and there is no standardized format for recording them, leading to inefficiency in metadata-based data retrieval. Here, we propose a novel standardization method for DICOM metadata termed the Radiology Common Data Model (R-CDM). MATERIALS AND METHODS R-CDM was designed to be compatible with Health Level Seven International (HL7)/Fast Healthcare Interoperability Resources (FHIR) and linked with the Observational Medical Outcomes Partnership (OMOP)-CDM to achieve a seamless link between clinical data and medical imaging data. The terminology system was standardized using the RadLex playbook, a comprehensive lexicon of radiology. As a proof of concept, the R-CDM conversion process was conducted with 41.7 TB of data from the Ajou University Hospital. The R-CDM database visualizer was developed to visualize the main characteristics of the R-CDM database. RESULTS Information from 2,801,360 cases and 87,203,226 DICOM files was organized into the two tables constituting the R-CDM. Information on imaging device and image resolution was recorded with more than 99.9% accuracy. Furthermore, OMOP-CDM and R-CDM were linked to efficiently extract specific types of images from specific patient cohorts. CONCLUSION R-CDM standardizes the structure and terminology for recording medical imaging data to eliminate incomplete and unstandardized information. Successful standardization was achieved by the extract, transform, and load process and an image classifier. We hope that the R-CDM will contribute to deep learning research in the medical imaging field by enabling the collection of large-scale medical imaging data from multinational institutions.
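As a hedged illustration of the extract-transform-load step described above, the sketch below flattens per-file DICOM metadata into two tables: one row per imaging occurrence and one row per image file. The table and column names here are assumptions made for the example, not the actual R-CDM schema defined in the paper:

```python
# Illustrative ETL sketch: group per-file DICOM metadata into an
# occurrence-level table (one row per study) and an image-level table
# (one row per file). Column names are hypothetical, not R-CDM's.

def to_rcdm(dicom_files):
    occurrences, images = {}, []
    for f in dicom_files:
        occ_id = f["StudyInstanceUID"]
        # occurrence-level row: patient, modality, and acquiring device
        occurrences.setdefault(occ_id, {
            "occurrence_id": occ_id,
            "person_id": f["PatientID"],
            "modality": f["Modality"],
            "device": f.get("ManufacturerModelName", "UNKNOWN"),
        })
        # image-level row: file identity and image resolution
        images.append({
            "occurrence_id": occ_id,
            "sop_instance_uid": f["SOPInstanceUID"],
            "rows": f["Rows"],
            "columns": f["Columns"],
        })
    return list(occurrences.values()), images
```

The shared `occurrence_id` is what would let an OMOP-CDM cohort query join down to individual images, mirroring the linkage the abstract describes.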
Affiliation(s)
- ChulHyoung Park
- Department of Biomedical Informatics, Ajou University School of Medicine, Suwon, Korea
- Seng Chan You
- Department of Preventive Medicine, Yonsei University College of Medicine, Seoul, Korea
- Hokyun Jeon
- Department of Biomedical Informatics, Ajou University School of Medicine, Suwon, Korea
- Chang Won Jeong
- Medical Convergence Research Center, Wonkwang University, Iksan, Korea
- Jin Wook Choi
- Department of Radiology, Ajou University Medical Center, Suwon, Korea
- Rae Woong Park
- Department of Biomedical Informatics, Ajou University School of Medicine, Suwon, Korea
- Department of Biomedical Sciences, Ajou University Graduate School of Medicine, Suwon, Korea.
18
Ranschaert E, Topff L, Pianykh O. Optimization of Radiology Workflow with Artificial Intelligence. Radiol Clin North Am 2021; 59:955-966. PMID: 34689880; DOI: 10.1016/j.rcl.2021.06.006.
Abstract
The potential of artificial intelligence (AI) in radiology goes far beyond image analysis. AI can be used to optimize all steps of the radiology workflow by supporting a variety of nondiagnostic tasks, including order entry support, patient scheduling, resource allocation, and improving the radiologist's workflow. This article discusses several principal directions of using AI algorithms to improve radiological operations and workflow management, with the intention of providing a broader understanding of the value of applying AI in the radiology department.
Affiliation(s)
- Erik Ranschaert
- Elisabeth-Tweesteden Hospital, Hilvarenbeekseweg 60, 5022 GC Tilburg, The Netherlands; Ghent University, C. Heymanslaan 10, 9000 Gent, Belgium.
- Laurens Topff
- Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Oleg Pianykh
- Department of Radiology, Harvard Medical School, Massachusetts General Hospital, 25 New Chardon Street, Suite 470, Boston, MA 02114, USA
19
Deep Semi-Supervised Algorithm for Learning Cluster-Oriented Representations of Medical Images Using Partially Observable DICOM Tags and Images. Diagnostics (Basel) 2021; 11:1920. PMID: 34679618; PMCID: PMC8534981; DOI: 10.3390/diagnostics11101920.
Abstract
The task of automatically extracting large homogeneous datasets of medical images based on detailed criteria and/or semantic similarity can be challenging, because the acquisition and storage of medical images in clinical practice are not fully standardised and are prone to errors, often made unintentionally by medical professionals during manual input. In this paper, we propose an algorithm for learning cluster-oriented representations of medical images by fusing images with partially observable DICOM tags. Pairwise relations are modelled by thresholding the Gower distance measure, which is calculated using eight DICOM tags. We trained the models using 30,000 images and tested them using a disjoint test set of 8000 images, gathered retrospectively from the PACS repository of the Clinical Hospital Centre Rijeka in 2017. We compare our method against standard and deep unsupervised clustering algorithms, as well as popular semi-supervised algorithms combined with the most commonly used feature descriptors. Our model achieves a normalised mutual information (NMI) score of 0.584 with respect to anatomic region, and an NMI score of 0.793 with respect to modality. The results suggest that DICOM data can be used to generate pairwise constraints that help improve medical image clustering, even when using only a small number of constraints.
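The constraint-generation step can be sketched as follows: the Gower distance averages per-tag dissimilarities (exact match for categorical tags, range-normalised absolute difference for numeric ones) over the tags observed in both files, which is what makes it workable with partially observable metadata. The tag set, numeric ranges, and the 0.3 threshold below are illustrative assumptions, not the paper's configuration:

```python
# Gower distance over mixed-type, partially observable DICOM tags, then
# thresholded into must-link / cannot-link pairwise constraints.
# Tags, ranges, and threshold are illustrative assumptions.

NUMERIC_RANGES = {"SliceThickness": 10.0, "EchoTime": 200.0}

def gower(a: dict, b: dict) -> float:
    # only compare tags observed (non-missing) in both records
    shared = [k for k in a if k in b and a[k] is not None and b[k] is not None]
    if not shared:
        return 1.0                       # no shared evidence: maximally far
    total = 0.0
    for k in shared:
        if k in NUMERIC_RANGES:          # numeric tag: normalised |diff|
            total += min(abs(a[k] - b[k]) / NUMERIC_RANGES[k], 1.0)
        else:                            # categorical tag (e.g. Modality)
            total += 0.0 if a[k] == b[k] else 1.0
    return total / len(shared)

def pairwise_constraint(a, b, threshold=0.3):
    return "must-link" if gower(a, b) <= threshold else "cannot-link"

x = {"Modality": "MR", "BodyPartExamined": "BRAIN", "EchoTime": 95.0}
y = {"Modality": "MR", "BodyPartExamined": "BRAIN", "EchoTime": 105.0}
print(pairwise_constraint(x, y))  # prints "must-link"
```

Constraints produced this way can then supervise a semi-supervised clustering objective, which is the role they play in the paper.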
20
Radiology Implementation Considerations for Artificial Intelligence (AI) Applied to COVID-19, From the AJR Special Series on AI Applications. AJR Am J Roentgenol 2021; 219:15-23. PMID: 34612681; DOI: 10.2214/ajr.21.26717.
Abstract
Hundreds of imaging-based artificial intelligence (AI) models have been developed in response to the COVID-19 pandemic. AI systems that incorporate imaging have shown promise in primary detection, severity grading, and prognostication of outcomes in COVID-19, and have enabled integration of imaging with a broad range of additional clinical and epidemiologic data. However, systematic reviews of AI models applied to COVID-19 medical imaging have highlighted problems in the field, including methodologic issues and problems in real-world deployment. Clinical use of such models should be informed by both the promise and potential pitfalls of implementation. How does a practicing radiologist make sense of this complex topic, and what factors should be considered in the implementation of AI tools for imaging of COVID-19? This critical review aims to help the radiologist understand the nuances that impact the clinical deployment of AI for imaging of COVID-19. We review imaging use cases for AI models in COVID-19 (e.g., diagnosis, severity assessment, and prognostication) and explore considerations for AI model development and testing, deployment infrastructure, clinical user interfaces, quality control, and institutional review board and regulatory approvals, with a practical focus on what a radiologist should consider when implementing an AI tool for COVID-19.
21
Gauriau R, Bizzo BC, Kitamura FC, Landi Junior O, Ferraciolli SF, Macruz FBC, Sanchez TA, Garcia MRT, Vedolin LM, Domingues RC, Gasparetto EL, Andriole KP. A Deep Learning-based Model for Detecting Abnormalities on Brain MR Images for Triaging: Preliminary Results from a Multisite Experience. Radiol Artif Intell 2021; 3:e200184. PMID: 34350408; DOI: 10.1148/ryai.2021200184.
Abstract
Purpose To develop a deep learning model for detecting brain abnormalities on MR images. Materials and Methods In this retrospective study, a deep learning approach using T2-weighted fluid-attenuated inversion recovery images was developed to classify brain MRI findings as "likely normal" or "likely abnormal." A convolutional neural network model was trained on a large, heterogeneous dataset collected from two different continents and covering a broad panel of pathologic conditions, including neoplasms, hemorrhages, infarcts, and others. Three datasets were used. Dataset A consisted of 2839 patients, dataset B consisted of 6442 patients, and dataset C consisted of 1489 patients and was only used for testing. Datasets A and B were split into training, validation, and test sets. A total of three models were trained: model A (using only dataset A), model B (using only dataset B), and model A + B (using training datasets from A and B). All three models were tested on subsets from dataset A, dataset B, and dataset C separately. The evaluation was performed by using annotations based on the images, as well as labels based on the radiology reports. Results Model A trained on dataset A from one institution and tested on dataset C from another institution reached an F1 score of 0.72 (95% CI: 0.70, 0.74) and an area under the receiver operating characteristic curve of 0.78 (95% CI: 0.75, 0.80) when compared with findings from the radiology reports. Conclusion The model shows relatively good performance for differentiating between likely normal and likely abnormal brain examination findings by using data from different institutions. Keywords: MR-Imaging, Head/Neck, Computer Applications-General (Informatics), Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms. © RSNA, 2021. Supplemental material is available for this article.
Affiliation(s)
- Romane Gauriau
- MGH & BWH Center for Clinical Data Science, Ste 1303, Floor 13, 100 Cambridge St, Boston, MA 02114 (R.G., B.C.B., F.B.C.M., K.P.A.); Department of Artificial Intelligence, Diagnósticos da América, São Paulo, Brazil (B.C.B., F.C.K., O.L.J., S.F.F., M.R.T.G., L.M.V., R.C.D., E.L.G.); Head of AI, Diagnósticos da América SA, São Paulo, Brazil (F.C.K.); Department of Radiology, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil (B.C.B., T.A.S., E.L.G.); Department of Radiology, Massachusetts General Hospital, Boston, Mass (B.C.B.); and Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, Harvard University, Boston, Mass (K.P.A.)
| | - Bernardo C Bizzo
- MGH & BWH Center for Clinical Data Science, Ste 1303, Floor 13, 100 Cambridge St, Boston, MA 02114 (R.G., B.C.B., F.B.C.M., K.P.A.); Department of Artificial Intelligence, Diagnósticos da América, São Paulo, Brazil (B.C.B., F.C.K., O.L.J., S.F.F., M.R.T.G., L.M.V., R.C.D., E.L.G.); Head of AI, Diagnósticos da América SA, São Paulo, Brazil (F.C.K.); Department of Radiology, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil (B.C.B., T.A.S., E.L.G.); Department of Radiology, Massachusetts General Hospital, Boston, Mass (B.C.B.); and Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, Harvard University, Boston, Mass (K.P.A.)
| | - Felipe C Kitamura
| | - Osvaldo Landi Junior
| | - Suely F Ferraciolli
| | - Fabiola B C Macruz
| | - Tiago A Sanchez
| | - Marcio R T Garcia
| | - Leonardo M Vedolin
| | - Romeu C Domingues
| | - Emerson L Gasparetto
| | - Katherine P Andriole
| |
Collapse
|
22
|
The Importance of Body Part Labeling to Enable Enterprise Imaging: A HIMSS-SIIM Enterprise Imaging Community Collaborative White Paper. J Digit Imaging 2021; 34:1-15. [PMID: 33481143 PMCID: PMC7887098 DOI: 10.1007/s10278-020-00415-0] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/23/2020] [Indexed: 11/16/2022] Open
Abstract
In order for enterprise imaging to be successful across a multitude of specialties, systems, and sites, standards are essential to categorize and classify imaging data. The HIMSS-SIIM Enterprise Imaging Community believes that the Digital Imaging and Communications in Medicine (DICOM) Anatomic Region Sequence, or its equivalent in other data standards, is a vital data element for this role when populated with standard coded values. We believe that labeling images with standard Anatomic Region Sequence codes will enhance the user's ability to consume data, facilitate interoperability, and allow greater control of privacy. Image consumption: when a user views a patient's images, he or she often wants relevant comparison images of the same lesion or anatomic region for the same patient to be presented automatically. Relevant comparison images may have been acquired by a variety of modalities and specialties, and the Anatomic Region Sequence data element provides a basis for efficient comparison in both cases. Interoperability: as patients move between health care systems, it is important to minimize friction for data transfer, and health care providers and facilities need to be able to consume and review an increasingly large and complex volume of data efficiently. The use of Anatomic Region Sequence, or its equivalent, populated with standard values enables seamless interoperability of imaging data whether images are used within a single site or across different sites and systems. Privacy: as more visible light photographs are integrated into electronic systems, it becomes apparent that some images may need to be sequestered. Although additional work is needed to protect sensitive images, standard coded values in Anatomic Region Sequence support the identification of potentially sensitive images, enable facilities to create access control policies, and can serve as an interim surrogate for more sophisticated rule-based or attribute-based access control mechanisms. To satisfy such use cases, the HIMSS-SIIM Enterprise Imaging Community encourages the use of a pre-existing body part ontology. Through this white paper, we identify potential challenges in employing this standard and propose solutions to these challenges.
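The comparison use case described in the abstract can be sketched in a few lines: each image carries an Anatomic Region Sequence item modeled as the standard code triplet (Code Value, Coding Scheme Designator, Code Meaning), and relevant priors are found by matching coded values rather than free-text body part labels. This is a minimal illustration, not code from the white paper; the study UIDs and the SNOMED CT concept IDs shown are assumptions used only for demonstration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AnatomicRegionCode:
    """One item of the DICOM Anatomic Region Sequence (0008,2218),
    modeled as the standard code triplet."""
    code_value: str     # concept identifier, e.g. a SNOMED CT concept ID
    coding_scheme: str  # coding scheme designator, e.g. "SCT" for SNOMED CT
    code_meaning: str   # human-readable meaning

def relevant_priors(current: AnatomicRegionCode,
                    archive: dict[str, AnatomicRegionCode]) -> list[str]:
    """Return study UIDs whose coded anatomic region matches the current
    study. Matching uses scheme + code value, never the display text."""
    return [uid for uid, code in archive.items()
            if code.coding_scheme == current.coding_scheme
            and code.code_value == current.code_value]

# Illustrative-only codes and UIDs (not taken from the paper).
abdomen = AnatomicRegionCode("818981001", "SCT", "Abdomen")
thorax = AnatomicRegionCode("51185008", "SCT", "Thorax")
archive = {"1.2.3.1": abdomen, "1.2.3.2": thorax, "1.2.3.3": abdomen}
print(relevant_priors(abdomen, archive))  # ['1.2.3.1', '1.2.3.3']
```

Matching on the coded triplet rather than vendor-specific body part strings is what makes the lookup portable across modalities, specialties, and sites, which is the interoperability point the white paper argues for.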
Collapse
|