1
Islam S, Murthy VN, Neumann D, Das BK, Sharma P, Maier A, Comaniciu D, Ghesu FC. Self-supervised learning for interventional image analytics: toward robust device trackers. J Med Imaging (Bellingham) 2024; 11:035001. [PMID: 38756438 PMCID: PMC11094643 DOI: 10.1117/1.jmi.11.3.035001] [Received: 10/12/2023] [Revised: 04/23/2024] [Accepted: 05/01/2024] Open Access
Abstract
Purpose The accurate detection and tracking of devices, such as guiding catheters in live X-ray image acquisitions, are essential prerequisites for endovascular cardiac interventions. This information is leveraged for procedural guidance, e.g., directing stent placements. To ensure procedural safety and efficacy, tracking must be highly robust, with no failures. Achieving this requires efficiently tackling challenges such as device obscuration by the contrast agent or by other external devices or wires, changes in the field of view or acquisition angle, and continuous movement due to cardiac and respiratory motion. Approach To overcome these challenges, we propose an approach that learns spatio-temporal features from a very large data cohort of over 16 million interventional X-ray frames using self-supervision for image sequence data. Our approach is based on a masked image modeling technique that leverages frame interpolation-based reconstruction to learn fine inter-frame temporal correspondences. The features encoded in the resulting model are fine-tuned downstream in a lightweight model. Results Our approach achieves state-of-the-art performance, particularly in robustness, compared with highly optimized reference solutions (which use multi-stage feature fusion or multi-task and flow regularization). The experiments show that our method achieves a 66.31% reduction in the maximum tracking error against the reference solutions (23.20% when flow regularization is used), achieving a success score of 97.95% at a 3× faster inference speed of 42 frames per second (on GPU). In addition, we achieve a 20% reduction in the standard deviation of errors, indicating much more stable tracking performance. Conclusions The proposed data-driven approach achieves superior performance, particularly in robustness and speed, compared with the frequently used multi-modular approaches for device tracking. The results encourage the use of our approach in various other tasks within interventional image analytics that require an effective understanding of spatio-temporal semantics.
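As an illustrative aside (not the authors' implementation), the masked-image-modeling pretext task described above can be sketched in NumPy: patches of a middle frame are hidden, and a reconstruction target is formed from the temporally neighboring frames. The patch size, masking ratio, and midpoint-interpolation baseline here are assumptions for the sketch only.

```python
import numpy as np

def mask_frame(frame, patch=8, ratio=0.6, rng=None):
    """Zero out a random subset of non-overlapping square patches.

    Returns the masked frame and the boolean mask of hidden pixels."""
    rng = rng or np.random.default_rng(0)
    h, w = frame.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            if rng.random() < ratio:
                mask[i:i + patch, j:j + patch] = True
    masked = frame.copy()
    masked[mask] = 0.0
    return masked, mask

def interpolation_target(prev_frame, next_frame):
    """Naive temporal baseline: the reconstruction target for the masked
    middle frame is approximated by the midpoint of its neighbors."""
    return 0.5 * (prev_frame + next_frame)

def masked_loss(pred, target, mask):
    """Reconstruction error computed only on the hidden region, so the
    encoder must exploit inter-frame correspondences to fill the gaps."""
    return float(np.mean((pred[mask] - target[mask]) ** 2)) if mask.any() else 0.0
```

A real model would replace the midpoint baseline with a learned spatio-temporal encoder; the sketch only shows how the masking and interpolation target interact.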
Affiliation(s)
- Saahil Islam
  - Friedrich-Alexander-Universität Erlangen-Nürnberg, Pattern Recognition Lab, Erlangen, Germany
  - Siemens Healthineers, Digital Technology and Innovation, Erlangen, Germany
- Venkatesh N. Murthy
  - Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey, United States
- Dominik Neumann
  - Siemens Healthineers, Digital Technology and Innovation, Erlangen, Germany
- Badhan Kumar Das
  - Friedrich-Alexander-Universität Erlangen-Nürnberg, Pattern Recognition Lab, Erlangen, Germany
  - Siemens Healthineers, Digital Technology and Innovation, Erlangen, Germany
- Puneet Sharma
  - Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey, United States
- Andreas Maier
  - Friedrich-Alexander-Universität Erlangen-Nürnberg, Pattern Recognition Lab, Erlangen, Germany
- Dorin Comaniciu
  - Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey, United States
- Florin C. Ghesu
  - Siemens Healthineers, Digital Technology and Innovation, Erlangen, Germany
2
Das BK, Zhao G, Islam S, Re TJ, Comaniciu D, Gibson E, Maier A. Co-ordinate-based positional embedding that captures resolution to enhance transformer's performance in medical image analysis. Sci Rep 2024; 14:9380. [PMID: 38654066 DOI: 10.1038/s41598-024-59813-x] [Received: 08/30/2023] [Accepted: 04/15/2024] Open Access
Abstract
Vision transformers (ViTs) have revolutionized computer vision by employing self-attention instead of convolutional neural networks, demonstrating success through their ability to capture global dependencies and remove the spatial biases of locality. In medical imaging, where input data may differ in size and resolution, existing architectures require resampling or resizing during pre-processing, leading to potential spatial resolution loss and information degradation. This study proposes a co-ordinate-based embedding that encodes the geometry of medical images, capturing physical co-ordinate and resolution information without the need for resampling or resizing. The effectiveness of the proposed embedding is demonstrated through experiments with UNETR and SwinUNETR models for infarct segmentation on an MRI dataset with AxTrace and AxADC contrasts. The dataset consists of 1142 training, 133 validation, and 143 test subjects. With the addition of the co-ordinate-based positional embedding, the two models achieved substantial improvements in mean Dice score of 6.5% and 7.6%, respectively. The proposed embedding showed a statistically significant advantage (p < 0.0001) over alternative approaches. In conclusion, the proposed co-ordinate-based pixel-wise positional embedding method offers a promising solution for Transformer-based models in medical image analysis. It effectively leverages physical co-ordinate information to enhance performance without compromising spatial resolution and provides a foundation for future advancements in positional embedding techniques for medical applications.
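The core idea, deriving positional information from the image geometry (origin and voxel spacing) rather than from array indices, can be sketched in NumPy. This is not the paper's implementation; the sinusoidal projection, its frequency scale, and the embedding dimension are assumptions for illustration. The key property is that a voxel at the same physical location receives the same embedding regardless of the scan's resolution.

```python
import numpy as np

def physical_coords(shape, spacing, origin=(0.0, 0.0, 0.0)):
    """Physical (mm) coordinates of every voxel, derived from the image
    geometry (origin and voxel spacing) rather than array indices."""
    axes = [origin[d] + spacing[d] * np.arange(shape[d]) for d in range(3)]
    zz, yy, xx = np.meshgrid(*axes, indexing="ij")
    return np.stack([zz, yy, xx], axis=-1)  # shape (D, H, W, 3)

def sinusoidal_embed(coords, dim=16, scale=100.0):
    """Fixed sinusoidal embedding of mm coordinates, one band per axis."""
    freqs = scale ** (-np.arange(dim // 2) / (dim // 2))
    ang = coords[..., None] * freqs                     # (..., 3, dim/2)
    emb = np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)
    return emb.reshape(*coords.shape[:-1], -1)          # (..., 3*dim)
```

For example, voxel index 1 at 2 mm spacing and voxel index 2 at 1 mm spacing both sit at 2 mm physically and therefore map to identical embeddings, which is exactly the resolution information an index-based positional embedding discards.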
Affiliation(s)
- Badhan Kumar Das
  - Digital Technology and Innovation, Siemens Healthineers, Erlangen, Germany
  - Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Gengyan Zhao
  - Digital Technology and Innovation, Siemens Healthineers, Princeton, NJ, USA
- Saahil Islam
  - Digital Technology and Innovation, Siemens Healthineers, Erlangen, Germany
  - Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Thomas J Re
  - Digital Technology and Innovation, Siemens Healthineers, Princeton, NJ, USA
- Dorin Comaniciu
  - Digital Technology and Innovation, Siemens Healthineers, Princeton, NJ, USA
- Eli Gibson
  - Digital Technology and Innovation, Siemens Healthineers, Princeton, NJ, USA
- Andreas Maier
  - Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
3
Yoo Y, Gibson E, Zhao G, Sandu A, Re T, Das J, Hesheng W, Kim MM, Shen C, Lee YZ, Kondziolka D, Ibrahim M, Lian J, Jain R, Zhu T, Parmar H, Comaniciu D, Balter J, Cao Y. An Automated Brain Metastasis Detection and Segmentation System from MRI with a Large Multi-Institutional Dataset. Int J Radiat Oncol Biol Phys 2023; 117:S88-S89. [PMID: 37784596 DOI: 10.1016/j.ijrobp.2023.06.414]
Abstract
PURPOSE/OBJECTIVE(S) Automated systems for brain metastasis (BM) detection and segmentation from MRI, intended to assist early detection and stereotactic radiosurgery (SRS), have been reported, but most are based on relatively small datasets from single institutions. This work aims to develop and evaluate a system using a large multi-institutional dataset, and to improve both the identification of small/subtle BMs and the segmentation accuracy of large BMs. MATERIALS/METHODS A 3D U-Net system was trained and evaluated to detect and segment intraparenchymal BMs with a size > 2 mm using 1856 MRI volumes from 1791 patients treated with SRS at seven institutions (1539 volumes for training, 183 for validation, and 134 for testing). All patients had 3D post-Gd T1w MRI scans pre-SRS. Gross tumor volumes (GTVs) of BMs for SRS were first curated by each institution. Additional effort was then spent creating GTVs for the untreated and/or uncontoured BMs, including central reviews by two radiologists, to improve the accuracy of the ground truth. The training dataset was augmented with synthetic BMs in 3773 MRI volumes created using a 3D generative pipeline. Our system consists of two U-Nets, one using small 3D patches dedicated to detecting small BMs and another using large 3D patches for segmenting large BMs, plus a random-forest-based fusion module that combines the two network outputs. The first U-Net was trained with 3D patches containing at least one BM < 0.1 cm3. For detection performance, we measured BM-level sensitivity and the case-level false-positive (FP) rate. For segmentation performance, we measured the BM-level Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD95). We also stratified performance by BM size. RESULTS For 739 BMs in the 134 testing cases, the overall lesion-level sensitivity was 0.870 with an average case-level FP of 1.34±1.92 (95% CI: 1.02-1.67). The sensitivity was above 0.969 for BMs > 0.1 cm3 but dropped to 0.755 for BMs < 0.1 cm3 (Table 1). The average DSC and HD95 for all detected BMs were 0.786 and 1.35 mm. The poorer performance for BMs > 20 cm3 was caused by a case with an 83 cm3 GTV and artifacts in the MRI volume. CONCLUSION We achieved excellent detection sensitivity and segmentation accuracy for BMs > 0.1 cm3, and promising performance for small BMs (< 0.1 cm3) with a controlled FP rate, using a large multi-institutional dataset. Clinical utility for assisting early detection and SRS planning will be investigated. Table 1: Per-lesion detection and segmentation performance stratified by individual BM size. N is the number of BMs in each category.
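The lesion-level sensitivity and case-level FP rate reported above can be made concrete with a small sketch (not the study's evaluation code; the IoU overlap criterion and its threshold are assumptions): each predicted lesion is matched against ground-truth lesions, matched ground-truth lesions count toward sensitivity, and unmatched predictions count as false positives.

```python
import numpy as np

def lesion_metrics(gt_masks, pred_masks, overlap_thresh=0.1):
    """Per-lesion sensitivity and false-positive count for one case.

    gt_masks / pred_masks: lists of boolean arrays, one per lesion.
    A prediction counts as a hit if its IoU with any ground-truth
    lesion reaches overlap_thresh; unmatched predictions are FPs."""
    matched = set()
    fp = 0
    for pred in pred_masks:
        hit = False
        for gi, gt in enumerate(gt_masks):
            inter = np.logical_and(pred, gt).sum()
            union = np.logical_or(pred, gt).sum()
            if union and inter / union >= overlap_thresh:
                matched.add(gi)
                hit = True
        if not hit:
            fp += 1
    sensitivity = len(matched) / len(gt_masks) if gt_masks else 1.0
    return sensitivity, fp
```

Averaging the per-case FP counts across a test set yields the case-level FP rate (e.g. the 1.34±1.92 figure), while pooling lesions yields the overall sensitivity.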
Affiliation(s)
- Y Yoo
  - Siemens Healthineers, Princeton, NJ
- E Gibson
  - Siemens Healthineers, Princeton, NJ
- G Zhao
  - Siemens Healthineers, Princeton, NJ
- A Sandu
  - Siemens Healthineers, Princeton, NJ
- T Re
  - Siemens Healthineers, Princeton, NJ
- J Das
  - Siemens Healthineers, Princeton, NJ
- M M Kim
  - Department of Radiation Oncology, University of Michigan, Ann Arbor, MI
- C Shen
  - Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC
- Y Z Lee
  - University of North Carolina, Chapel Hill, NC
- D Kondziolka
  - Department of Neurosurgery, NYU Langone Health, New York, NY
- M Ibrahim
  - University of Michigan, Ann Arbor, MI
- J Lian
  - University of North Carolina, Chapel Hill, NC
- R Jain
  - New York University, New York, NY
- T Zhu
  - Washington University, St. Louis, MO
- H Parmar
  - Department of Radiology, University of Michigan, Ann Arbor, MI
- J Balter
  - Department of Radiation Oncology, University of Michigan, Ann Arbor, MI
- Y Cao
  - Department of Radiation Oncology, University of Michigan, Ann Arbor, MI
4
Ghesu FC, Georgescu B, Mansoor A, Yoo Y, Neumann D, Patel P, Vishwanath RS, Balter JM, Cao Y, Grbic S, Comaniciu D. Contrastive self-supervised learning from 100 million medical images with optional supervision. J Med Imaging (Bellingham) 2022; 9:064503. [PMID: 36466078 PMCID: PMC9710476 DOI: 10.1117/1.jmi.9.6.064503] [Received: 05/25/2022] [Accepted: 11/14/2022] Open Access
Abstract
Purpose Building accurate and robust artificial intelligence systems for medical image assessment requires the creation of large sets of annotated training examples. However, constructing such datasets is very costly due to the complex nature of annotation tasks, which often require expert knowledge (e.g., a radiologist). To counter this limitation, we propose a method to learn from medical images at scale in a self-supervised way. Approach Our approach, based on contrastive learning and online feature clustering, leverages training datasets of over 100,000,000 medical images of various modalities, including radiography, computed tomography (CT), magnetic resonance (MR) imaging, and ultrasonography (US). We propose to use the learned features to guide model training in supervised and hybrid self-supervised/supervised regimes on various downstream tasks. Results We highlight a number of advantages of this strategy on challenging image assessment problems in radiography, CT, and MR: (1) significant increase in accuracy compared to the state-of-the-art (e.g., area under the curve boost of 3% to 7% for detection of abnormalities from chest radiography scans and hemorrhage detection on brain CT); (2) acceleration of model convergence during training by up to 85% compared with using no pretraining (e.g., 83% when training a model for detection of brain metastases in MR scans); and (3) increase in robustness to various image augmentations, such as intensity variations, rotations, or scaling, reflective of the data variation seen in the field. Conclusions The proposed approach enables large gains in accuracy and robustness on challenging image assessment problems. The improvement is significant compared with other state-of-the-art approaches trained on medical or vision images (e.g., ImageNet).
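The contrastive component of such an approach can be illustrated with a minimal InfoNCE loss in NumPy (the paper additionally uses online feature clustering, which is not shown; the temperature value and normalization details here are assumptions). Matching embeddings of two augmented views of the same image form the positives; all other images in the batch act as negatives.

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE loss for two batches of paired views (N, D); row i of z1
    and row i of z2 are positives, all other rows serve as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                             # cosine similarity / tau
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -float(np.mean(np.diag(log_prob)))            # positives on the diagonal
```

Minimizing this loss pulls embeddings of the two views of each image together while pushing apart embeddings of different images, which is what allows pretraining without any annotations.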
Affiliation(s)
- Florin C. Ghesu
  - Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey, United States
- Bogdan Georgescu
  - Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey, United States
- Awais Mansoor
  - Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey, United States
- Youngjin Yoo
  - Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey, United States
- Dominik Neumann
  - Siemens Healthineers, Digital Technology and Innovation, Erlangen, Germany
- Pragneshkumar Patel
  - Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey, United States
- James M. Balter
  - University of Michigan, Department of Radiation Oncology, Ann Arbor, Michigan, United States
- Yue Cao
  - University of Michigan, Department of Radiation Oncology, Ann Arbor, Michigan, United States
- Sasa Grbic
  - Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey, United States
- Dorin Comaniciu
  - Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey, United States
5
Gibson E, Georgescu B, Ceccaldi P, Trigan PH, Yoo Y, Das J, Re TJ, Rs V, Balachandran A, Eibenberger E, Chekkoury A, Brehm B, Bodanapally UK, Nicolaou S, Sanelli PC, Schroeppel TJ, Flohr T, Comaniciu D, Lui YW. Artificial Intelligence with Statistical Confidence Scores for Detection of Acute or Subacute Hemorrhage on Noncontrast CT Head Scans. Radiol Artif Intell 2022; 4:e210115. [PMID: 35652116 DOI: 10.1148/ryai.210115] [Received: 04/30/2021] [Revised: 03/01/2022] [Accepted: 04/01/2022]
Abstract
Purpose To present a method that automatically detects, subtypes, and locates acute or subacute intracranial hemorrhage (ICH) on noncontrast CT (NCCT) head scans; generates detection confidence scores to identify high-confidence data subsets with higher accuracy; and improves radiology worklist prioritization. Such scores may enable clinicians to better use artificial intelligence (AI) tools. Materials and Methods This retrospective study included 46 057 studies from seven "internal" centers for development (training, architecture selection, hyperparameter tuning, and operating-point calibration; n = 25 946) and evaluation (n = 2947) and three "external" centers for calibration (n = 400) and evaluation (n = 16 764). Internal centers contributed developmental data, whereas external centers did not. Deep neural networks predicted the presence of ICH and subtypes (intraparenchymal, intraventricular, subarachnoid, subdural, and/or epidural hemorrhage) and segmentations per case. Two ICH confidence scores are discussed: a calibrated classifier entropy score and a Dempster-Shafer score. Evaluation was completed by using receiver operating characteristic curve analysis and report turnaround time (RTAT) modeling on the evaluation set and on confidence score-defined subsets using bootstrapping. Results The areas under the receiver operating characteristic curve for ICH were 0.97 (0.97, 0.98) and 0.95 (0.94, 0.95) on internal and external center data, respectively. On 80% of the data stratified by calibrated classifier and Dempster-Shafer scores, the system improved the Youden indexes, increasing them from 0.84 to 0.93 (calibrated classifier) and from 0.84 to 0.92 (Dempster-Shafer) for internal centers and increasing them from 0.78 to 0.88 (calibrated classifier) and from 0.78 to 0.89 (Dempster-Shafer) for external centers (P < .001). 
Models estimated shorter RTAT for AI-prioritized worklists with confidence measures than for AI-prioritized worklists without them, shortening RTAT by 27% (calibrated classifier) and 27% (Dempster-Shafer) for internal centers and by 25% (calibrated classifier) and 27% (Dempster-Shafer) for external centers (P < .001). Conclusion AI that provided statistical confidence measures for ICH detection on NCCT scans reliably detected and subtyped hemorrhages, identified high-confidence predictions, and improved worklist prioritization in simulation. Keywords: CT, Head/Neck, Hemorrhage, Convolutional Neural Network (CNN) Supplemental material is available for this article. © RSNA, 2022.
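The calibrated classifier entropy score named above can be sketched as follows (a minimal illustration, not the study's implementation; the mapping of entropy to a 0-1 confidence value and the subset-selection helper are assumptions): a binary probability near 0 or 1 has low entropy and thus high confidence, and ranking cases by this confidence yields the high-confidence subsets (e.g. 80% of the data) on which stratified performance is reported.

```python
import numpy as np

def entropy_confidence(p):
    """Confidence from binary-classifier entropy: approaches 1 as p nears
    0 or 1, and equals 0 at the maximally uncertain p = 0.5."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return 1.0 - h

def high_confidence_subset(probs, frac=0.8):
    """Indices of the `frac` most confident cases."""
    conf = entropy_confidence(np.asarray(probs, dtype=float))
    k = int(np.ceil(frac * len(conf)))
    return np.argsort(-conf)[:k]
```

The Dempster-Shafer score in the paper is a distinct evidence-theoretic measure and is not captured by this entropy sketch.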
Affiliation(s)
- Eli Gibson
- Department of Digital Technology and Innovation, Siemens Healthineers, 755 College Rd E, Princeton, NJ 08540 (E.G., B.G., P.C., P.H.T., Y.Y., J.D., T.J.R., D.C.); Department of Digital Technology and Innovation, Siemens Healthineers, Bangalore, India (V.R.S., A.B.); Department of Computed Tomography, Siemens Healthineers, Forchheim, Germany (E.E., A.C., B.B., T.F.); Department of Radiology, University of Maryland Medical Center, Baltimore, Md (U.K.B.); Department of Radiology, Vancouver General Hospital, Vancouver, Canada (S.N.); Department of Radiology, Northwell Health, New York, NY (P.C.S.); Department of Surgery, UCHealth Memorial Hospital, Colorado Springs, Colo (T.J.S.); and Department of Radiology, NYU Langone Health, New York University School of Medicine, New York, NY (Y.W.L.)
- Bogdan Georgescu
- Pascal Ceccaldi
- Pierre-Hugo Trigan
- Youngjin Yoo
- Jyotipriya Das
- Thomas J Re
- Vishwanath Rs
- Abishek Balachandran
- Eva Eibenberger
- Andrei Chekkoury
- Barbara Brehm
- Uttam K Bodanapally
- Savvas Nicolaou
- Pina C Sanelli
- Thomas J Schroeppel
- Thomas Flohr
- Dorin Comaniciu
- Yvonne W Lui
Collapse
|
6
Jung HM, Yang R, Gefter WB, Ghesu FC, Mailhe B, Mansoor A, Grbic S, Comaniciu D, Vogt S, Mortani Barbosa EJ. Value of quantitative airspace disease measured on chest CT and chest radiography at initial diagnosis compared to clinical variables for prediction of severe COVID-19. J Med Imaging (Bellingham) 2022; 9:034003. [PMID: 35721308] [DOI: 10.1117/1.jmi.9.3.034003]
Abstract
Purpose: Rapid prognostication of COVID-19 patients is important for efficient resource allocation. We evaluated the relative prognostic value of baseline clinical variables (CVs), quantitative human-read chest CT (qCT), and AI-read chest radiograph (qCXR) airspace disease (AD) in predicting severe COVID-19. Approach: We retrospectively selected 131 COVID-19 patients (SARS-CoV-2 positive, March to October, 2020) at a tertiary hospital in the United States, who underwent chest CT and CXR within 48 hr of initial presentation. CVs included patient demographics and laboratory values; imaging variables included qCT volumetric percentage AD (POv) and qCXR area-based percentage AD (POa), assessed by a deep convolutional neural network. Our prognostic outcome was need for ICU admission. We compared the performance of three logistic regression models: using CVs known to be associated with prognosis (model I), using a dimension-reduced set of best predictor variables (model II), and using only age and AD (model III). Results: 60/131 patients required ICU admission, whereas 71/131 did not. Model I performed the poorest (AUC = 0.67 [0.58 to 0.76]; accuracy = 77%). Model II performed the best (AUC = 0.78 [0.71 to 0.86]; accuracy = 81%). Model III was equivalent (AUC = 0.75 [0.67 to 0.84]; accuracy = 80%). Both models II and III outperformed model I (AUC difference = 0.11 [0.02 to 0.19], p = 0.01; AUC difference = 0.08 [0.01 to 0.15], p = 0.04, respectively). Model II and III results did not change significantly when POv was replaced by POa. Conclusions: Severe COVID-19 can be predicted using only age and quantitative AD imaging metrics at initial diagnosis, which outperform the set of CVs. Moreover, AI-read qCXR can replace qCT metrics without loss of prognostic performance, promising more resource-efficient prognostication.
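The model comparisons in this abstract rest on AUC: the probability that a randomly chosen ICU patient receives a higher risk score than a randomly chosen non-ICU patient. As an illustrative sketch (the scores below are invented, not the study's data), the rank-based computation fits in a few lines of plain Python:

```python
def auc(pos_scores, neg_scores):
    """Rank-based AUC: probability that a random positive outranks a random
    negative, with ties counting as half a win (Mann-Whitney U / (n_pos * n_neg))."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical risk scores: patients who needed ICU care vs. those who did not.
icu = [0.9, 0.8, 0.75, 0.6]
no_icu = [0.7, 0.4, 0.3, 0.2]
print(auc(icu, no_icu))  # 0.9375
```

A model is a better ranker the closer this value is to 1; the study's three models are compared on exactly this scale (0.67 vs. 0.78 vs. 0.75).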
Affiliation(s)
- Hae-Min Jung
- University of Pennsylvania, Perelman School of Medicine, Philadelphia, Pennsylvania, United States
- Rochelle Yang
- University of Pennsylvania, Perelman School of Medicine, Philadelphia, Pennsylvania, United States
- Warren B Gefter
- University of Pennsylvania, Perelman School of Medicine, Philadelphia, Pennsylvania, United States
- Florin C Ghesu
- Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey, United States
- Boris Mailhe
- Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey, United States
- Awais Mansoor
- Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey, United States
- Sasa Grbic
- Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey, United States
- Dorin Comaniciu
- Siemens Healthineers, Digital Technology and Innovation, Princeton, New Jersey, United States
- Sebastian Vogt
- Siemens Healthineers, X-Ray Products, Malvern, Pennsylvania, United States
7
Singh V, Kamaleswaran R, Chalfin D, Buño-Soto A, San Roman J, Rojas-Kenney E, Molinaro R, von Sengbusch S, Hodjat P, Comaniciu D, Kamen A. A deep learning approach for predicting severity of COVID-19 patients using a parsimonious set of laboratory markers. iScience 2021; 24:103523. [PMID: 34870131] [PMCID: PMC8626152] [DOI: 10.1016/j.isci.2021.103523]
Abstract
The SARS-CoV-2 virus has caused tremendous healthcare burden worldwide. Our focus was to develop a practical and easy-to-deploy system to predict the severe manifestation of disease in patients with COVID-19 with an aim to assist clinicians in triage and treatment decisions. Our proposed predictive algorithm is a trained artificial intelligence-based network using 8,427 COVID-19 patient records from four healthcare systems. The model provides a severity risk score along with likelihoods of various clinical outcomes, namely ventilator use and mortality. The trained model using patient age and nine laboratory markers has the prediction accuracy with an area under the curve (AUC) of 0.78, 95% CI: 0.77–0.82, and the negative predictive value NPV of 0.86, 95% CI: 0.84–0.88 for the need to use a ventilator and has an accuracy with AUC of 0.85, 95% CI: 0.84–0.86, and the NPV of 0.94, 95% CI: 0.92–0.96 for predicting in-hospital 30-day mortality.
- Algorithm using 9 laboratory markers & age may predict severity in patients with COVID-19
- Model was trained and tested on a multicenter sample of 10,937 patients
- Algorithm can predict ventilator use (NPV, 0.86) and mortality (NPV, 0.94)
- High NPV suggests utility as an adjunct to aid in triaging of patients with COVID-19
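The NPV figures quoted here follow the standard confusion-matrix definition. A minimal sketch with invented counts (not the study's data) makes the quantity concrete:

```python
def npv(true_negatives, false_negatives):
    """Negative predictive value: the fraction of negative predictions
    (here, 'will not need a ventilator' / 'will survive') that are correct."""
    return true_negatives / (true_negatives + false_negatives)

# Hypothetical counts: of 1000 patients the model flagged as low-risk,
# 860 truly did not need a ventilator and 140 did.
print(npv(860, 140))  # 0.86
```

A high NPV is what makes such a model useful as a triage adjunct: a negative call can largely be trusted, even when positive calls still need clinical confirmation.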
Affiliation(s)
- Vivek Singh
- Siemens Healthineers, Digital Technology and Innovation, 755 College Road East, Princeton, NJ 08540, USA
- Rishikesan Kamaleswaran
- Emory University School of Medicine WMB, 1010 Woodruff Circle, Suite 4127, Atlanta, GA 30322, USA
- Donald Chalfin
- Siemens Healthineers, Laboratory Diagnostics, 511 Benedict Avenue, Tarrytown, NY 10591, USA; Jefferson College of Population Health of Thomas Jefferson University, 901 Walnut Street, Philadelphia, PA 19107, USA
- Antonio Buño-Soto
- Department of Laboratory Medicine, Hospital Universitario La Paz, Madrid, Spain
- Janika San Roman
- Siemens Healthineers, Laboratory Diagnostics, 511 Benedict Avenue, Tarrytown, NY 10591, USA
- Edith Rojas-Kenney
- Siemens Healthineers, Laboratory Diagnostics, 511 Benedict Avenue, Tarrytown, NY 10591, USA
- Ross Molinaro
- Siemens Healthineers, Laboratory Diagnostics, 511 Benedict Avenue, Tarrytown, NY 10591, USA
- Sabine von Sengbusch
- Siemens Healthineers, Laboratory Diagnostics, 511 Benedict Avenue, Tarrytown, NY 10591, USA
- Parsa Hodjat
- Department of Pathology and Genomic Medicine, Houston Methodist Hospital, 6565 Fannin Street, Houston, TX 77030, USA
- Dorin Comaniciu
- Siemens Healthineers, Digital Technology and Innovation, 755 College Road East, Princeton, NJ 08540, USA
- Ali Kamen
- Siemens Healthineers, Digital Technology and Innovation, 755 College Road East, Princeton, NJ 08540, USA
8
Winkel DJ, Tong A, Lou B, Kamen A, Comaniciu D, Disselhorst JA, Rodríguez-Ruiz A, Huisman H, Szolar D, Shabunin I, Choi MH, Xing P, Penzkofer T, Grimm R, von Busch H, Boll DT. A Novel Deep Learning Based Computer-Aided Diagnosis System Improves the Accuracy and Efficiency of Radiologists in Reading Biparametric Magnetic Resonance Images of the Prostate: Results of a Multireader, Multicase Study. Invest Radiol 2021; 56:605-613. [PMID: 33787537] [DOI: 10.1097/rli.0000000000000780]
Abstract
OBJECTIVE The aim of this study was to evaluate the effect of a deep learning based computer-aided diagnosis (DL-CAD) system on radiologists' interpretation accuracy and efficiency in reading biparametric prostate magnetic resonance imaging scans. MATERIALS AND METHODS We selected 100 consecutive prostate magnetic resonance imaging cases from a publicly available data set (PROSTATEx Challenge) with and without histopathologically confirmed prostate cancer. Seven board-certified radiologists were tasked to read each case twice in 2 reading blocks (with and without the assistance of a DL-CAD), with a separation between the 2 reading sessions of at least 2 weeks. Reading tasks were to localize and classify lesions according to Prostate Imaging Reporting and Data System (PI-RADS) v2.0 and to assign a radiologist's level of suspicion score (scale from 1-5 in 0.5 increments; 1, benign; 5, malignant). Ground truth was established by consensus readings of 3 experienced radiologists. The detection performance (receiver operating characteristic curves), variability (Fleiss κ), and average reading time without DL-CAD assistance were evaluated. RESULTS The average accuracy of radiologists in terms of area under the curve in detecting clinically significant cases (PI-RADS ≥4) was 0.84 (95% confidence interval [CI], 0.79-0.89), whereas the same using DL-CAD was 0.88 (95% CI, 0.83-0.94) with an improvement of 4.4% (95% CI, 1.1%-7.7%; P = 0.010). Interreader concordance (in terms of Fleiss κ) increased from 0.22 to 0.36 (P = 0.003). Accuracy of radiologists in detecting cases with PI-RADS ≥3 was improved by 2.9% (P = 0.10). The median reading time in the unaided/aided scenario was reduced by 21% from 103 to 81 seconds (P < 0.001). CONCLUSIONS Using a DL-CAD system increased the diagnostic accuracy in detecting highly suspicious prostate lesions and reduced both the interreader variability and the reading time.
Affiliation(s)
- David J Winkel
- Department of Radiology, University Hospital of Basel, Basel, Basel-Stadt, Switzerland
- Angela Tong
- Department of Radiology, NYU Langone Health, New York, NY
- Bin Lou
- Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ
- Ali Kamen
- Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ
- Dorin Comaniciu
- Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ
- Henkjan Huisman
- Department of Radiology, Radboud University Medical Center, Nijmegen, the Netherlands
- Moon Hyung Choi
- Eunpyeong St Mary's Hospital, Catholic University of Korea, Seoul, Republic of Korea
- Pengyi Xing
- Radiology Department, Changhai Hospital of Shanghai, Shanghai, China
- Robert Grimm
- Siemens Healthineers Diagnostic Imaging, Erlangen, Germany
- Daniel T Boll
- Department of Radiology, University Hospital of Basel, Basel, Basel-Stadt, Switzerland
9
Gündel S, Setio AAA, Ghesu FC, Grbic S, Georgescu B, Maier A, Comaniciu D. Robust classification from noisy labels: Integrating additional knowledge for chest radiography abnormality assessment. Med Image Anal 2021; 72:102087. [PMID: 34015595] [DOI: 10.1016/j.media.2021.102087]
Abstract
Chest radiography is the most common radiographic examination performed in daily clinical practice for the detection of various heart and lung abnormalities. The large amount of data to be read and reported, with more than 100 studies per day for a single radiologist, poses a challenge in consistently maintaining high interpretation accuracy. The introduction of large-scale public datasets has led to a series of novel systems for automated abnormality classification. However, the labels of these datasets were obtained using natural language processed medical reports, yielding a large degree of label noise that can impact the performance. In this study, we propose novel training strategies that handle label noise from such suboptimal data. Prior label probabilities were measured on a subset of training data re-read by 4 board-certified radiologists and were used during training to increase the robustness of the training model to the label noise. Furthermore, we exploit the high comorbidity of abnormalities observed in chest radiography and incorporate this information to further reduce the impact of label noise. Additionally, anatomical knowledge is incorporated by training the system to predict lung and heart segmentation, as well as spatial knowledge labels. To deal with multiple datasets and images derived from various scanners that apply different post-processing techniques, we introduce a novel image normalization strategy. Experiments were performed on an extensive collection of 297,541 chest radiographs from 86,876 patients, leading to a state-of-the-art performance level for 17 abnormalities from 2 datasets. With an average AUC score of 0.880 across all abnormalities, our proposed training strategies can be used to significantly improve performance scores.
Affiliation(s)
- Sebastian Gündel
- Digital Technology and Innovation, Siemens Healthineers, Erlangen 91052, Germany; Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen 91058, Germany
- Arnaud A A Setio
- Digital Technology and Innovation, Siemens Healthineers, Erlangen 91052, Germany
- Florin C Ghesu
- Digital Technology and Innovation, Siemens Healthineers, Princeton, NJ 08540, USA
- Sasa Grbic
- Digital Technology and Innovation, Siemens Healthineers, Princeton, NJ 08540, USA
- Bogdan Georgescu
- Digital Technology and Innovation, Siemens Healthineers, Princeton, NJ 08540, USA
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen 91058, Germany
- Dorin Comaniciu
- Digital Technology and Innovation, Siemens Healthineers, Princeton, NJ 08540, USA
10
Nael K, Gibson E, Yang C, Ceccaldi P, Yoo Y, Das J, Doshi A, Georgescu B, Janardhanan N, Odry B, Nadar M, Bush M, Re TJ, Huwer S, Josan S, von Busch H, Meyer H, Mendelson D, Drayer BP, Comaniciu D, Fayad ZA. Automated detection of critical findings in multi-parametric brain MRI using a system of 3D neural networks. Sci Rep 2021; 11:6876. [PMID: 33767226] [PMCID: PMC7994311] [DOI: 10.1038/s41598-021-86022-7]
Abstract
With the rapid growth and increasing use of brain MRI, there is an interest in automated image classification to aid human interpretation and improve workflow. We aimed to train a deep convolutional neural network and assess its performance in identifying abnormal brain MRIs and critical intracranial findings including acute infarction, acute hemorrhage and mass effect. A total of 13,215 clinical brain MRI studies were categorized into training (74%), validation (9%), internal testing (8%) and external testing (8%) datasets. Up to eight contrasts were included from each brain MRI and each image volume was reformatted to a common resolution to accommodate differences between scanners. After reviewing the radiology reports, three neuroradiologists assigned each study as abnormal vs normal, and identified three critical findings including acute infarction, acute hemorrhage, and mass effect. A deep convolutional neural network was constructed by a combination of localization feature extraction (LFE) modules and global classifiers to identify the presence of 4 variables in brain MRIs: abnormal, acute infarction, acute hemorrhage and mass effect. Training, validation and testing sets were randomly defined on a patient basis. Training was performed on 9845 studies using balanced sampling to address class imbalance. Receiver operating characteristic (ROC) analysis was performed. The ROC analysis of our models for 1050 studies within our internal test data showed AUC/sensitivity/specificity of 0.91/83%/86% for normal versus abnormal brain MRI, 0.95/92%/88% for acute infarction, 0.90/89%/81% for acute hemorrhage, and 0.93/93%/85% for mass effect. For 1072 studies within our external test data, it showed AUC/sensitivity/specificity of 0.88/80%/80% for normal versus abnormal brain MRI, 0.97/90%/97% for acute infarction, 0.83/72%/88% for acute hemorrhage, and 0.87/79%/81% for mass effect.
Our proposed deep convolutional network can accurately identify abnormal and critical intracranial findings on individual brain MRIs, while addressing the fact that some MR contrasts might not be available in individual studies.
Affiliation(s)
- Kambiz Nael
- Department of Radiological Sciences, David Geffen School of Medicine at University of California Los Angeles, 757 Westwood Plaza, Suite 1621, Los Angeles, CA, 90095-7532, USA
- Department of Diagnostic, Molecular and Interventional Radiology, Icahn School of Medicine at Mount Sinai, New York, USA
- Eli Gibson
- Digital Technology and Innovation, Siemens Healthineers, Princeton, USA
- Chen Yang
- Department of Diagnostic, Molecular and Interventional Radiology, Icahn School of Medicine at Mount Sinai, New York, USA
- Pascal Ceccaldi
- Digital Technology and Innovation, Siemens Healthineers, Princeton, USA
- Youngjin Yoo
- Digital Technology and Innovation, Siemens Healthineers, Princeton, USA
- Jyotipriya Das
- Digital Technology and Innovation, Siemens Healthineers, Princeton, USA
- Amish Doshi
- Department of Diagnostic, Molecular and Interventional Radiology, Icahn School of Medicine at Mount Sinai, New York, USA
- Bogdan Georgescu
- Digital Technology and Innovation, Siemens Healthineers, Princeton, USA
- Benjamin Odry
- AI for Clinical Analytics, Covera Health, New York, NY, USA
- Mariappan Nadar
- Digital Technology and Innovation, Siemens Healthineers, Princeton, USA
- Michael Bush
- Magnetic Resonance, Siemens Healthineers, New York, USA
- Thomas J Re
- Digital Technology and Innovation, Siemens Healthineers, Princeton, USA
- Stefan Huwer
- Magnetic Resonance, Siemens Healthineers, Erlangen, Germany
- Sonal Josan
- Digital Health, Siemens Healthineers, Erlangen, Germany
- Heiko Meyer
- Magnetic Resonance, Siemens Healthineers, Erlangen, Germany
- David Mendelson
- Department of Diagnostic, Molecular and Interventional Radiology, Icahn School of Medicine at Mount Sinai, New York, USA
- Burton P Drayer
- Department of Diagnostic, Molecular and Interventional Radiology, Icahn School of Medicine at Mount Sinai, New York, USA
- Dorin Comaniciu
- Digital Technology and Innovation, Siemens Healthineers, Princeton, USA
- Zahi A Fayad
- Department of Diagnostic, Molecular and Interventional Radiology, Icahn School of Medicine at Mount Sinai, New York, USA
11
Weikert T, Rapaka S, Grbic S, Re T, Chaganti S, Winkel DJ, Anastasopoulos C, Niemann T, Wiggli BJ, Bremerich J, Twerenbold R, Sommer G, Comaniciu D, Sauter AW. Prediction of Patient Management in COVID-19 Using Deep Learning-Based Fully Automated Extraction of Cardiothoracic CT Metrics and Laboratory Findings. Korean J Radiol 2021; 22:994-1004. [PMID: 33686818] [PMCID: PMC8154782] [DOI: 10.3348/kjr.2020.0994]
Abstract
Objective To extract pulmonary and cardiovascular metrics from chest CTs of patients with coronavirus disease 2019 (COVID-19) using a fully automated deep learning-based approach and assess their potential to predict patient management. Materials and Methods All initial chest CTs of patients who tested positive for severe acute respiratory syndrome coronavirus 2 at our emergency department between March 25 and April 25, 2020, were identified (n = 120). Three patient management groups were defined: group 1 (outpatient), group 2 (general ward), and group 3 (intensive care unit [ICU]). Multiple pulmonary and cardiovascular metrics were extracted from the chest CT images using deep learning. Additionally, six laboratory findings indicating inflammation and cellular damage were considered. Differences in CT metrics, laboratory findings, and demographics between the patient management groups were assessed. The potential of these parameters to predict patients' needs for intensive care (yes/no) was analyzed using logistic regression and receiver operating characteristic curves. Internal and external validity were assessed using 109 independent chest CT scans. Results While demographic parameters alone (sex and age) were not sufficient to predict ICU management status, both CT metrics alone (including both pulmonary and cardiovascular metrics; area under the curve [AUC] = 0.88; 95% confidence interval [CI] = 0.79–0.97) and laboratory findings alone (C-reactive protein, lactate dehydrogenase, white blood cell count, and albumin; AUC = 0.86; 95% CI = 0.77–0.94) were good classifiers. Excellent performance was achieved by a combination of demographic parameters, CT metrics, and laboratory findings (AUC = 0.91; 95% CI = 0.85–0.98). Application of a model that combined both pulmonary CT metrics and demographic parameters on a dataset from another hospital indicated its external validity (AUC = 0.77; 95% CI = 0.66–0.88). 
Conclusion Chest CT of patients with COVID-19 contains valuable information that can be accessed using automated image analysis. These metrics are useful for the prediction of patient management.
Affiliation(s)
- Thomas Weikert
- Department of Radiology, University Hospital Basel, University of Basel, Basel, Switzerland
- Sasa Grbic
- Siemens Healthineers, Princeton, NJ, USA
- Thomas Re
- Siemens Healthineers, Princeton, NJ, USA
- David J Winkel
- Department of Radiology, University Hospital Basel, University of Basel, Basel, Switzerland
- Tilo Niemann
- Department of Radiology, Kantonsspital Baden, Baden, Switzerland
- Benedikt J Wiggli
- Department of Infectious Diseases & Infection Control, Kantonsspital Baden, Baden, Switzerland
- Jens Bremerich
- Department of Radiology, University Hospital Basel, University of Basel, Basel, Switzerland
- Raphael Twerenbold
- Department of Cardiology, University Hospital Basel, University of Basel, Basel, Switzerland
- Gregor Sommer
- Department of Radiology, University Hospital Basel, University of Basel, Basel, Switzerland
- Alexander W Sauter
- Department of Radiology, University Hospital Basel, University of Basel, Basel, Switzerland
12
Liu S, Setio AAA, Ghesu FC, Gibson E, Grbic S, Georgescu B, Comaniciu D. No Surprises: Training Robust Lung Nodule Detection for Low-Dose CT Scans by Augmenting With Adversarial Attacks. IEEE Trans Med Imaging 2021; 40:335-345. [PMID: 32966215] [DOI: 10.1109/tmi.2020.3026261]
Abstract
Detecting malignant pulmonary nodules at an early stage can allow medical interventions which may increase the survival rate of lung cancer patients. Using computer vision techniques to detect nodules can improve the sensitivity and the speed of interpreting chest CT for lung cancer screening. Many studies have used CNNs to detect nodule candidates. Though such approaches have been shown to outperform conventional image-processing-based methods in detection accuracy, CNNs are also known to generalize poorly on under-represented samples in the training set and to be prone to imperceptible noise perturbations. Such limitations cannot be easily addressed by scaling up the dataset or the models. In this work, we propose to add adversarial synthetic nodules and adversarial attack samples to the training data to improve the generalization and the robustness of lung nodule detection systems. To generate hard examples of nodules from a differentiable nodule synthesizer, we use projected gradient descent (PGD) to search the latent code within a bounded neighbourhood that would generate nodules to decrease the detector response. To make the network more robust to unanticipated noise perturbations, we use PGD to search for noise patterns that can trigger the network to give over-confident mistakes. By evaluating on two different benchmark datasets containing consensus annotations from three radiologists, we show that the proposed techniques can improve the detection performance on real CT data. To understand the limitations of both the conventional networks and the proposed augmented networks, we also perform stress-tests on the false positive reduction networks by feeding different types of artificially produced patches. We show that the augmented networks are more robust to under-represented nodules as well as resistant to noise perturbations.
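The core loop behind the attack described above is projected gradient descent: take gradient steps on an adversarial objective, then project back into a bounded neighbourhood of the starting point. A toy, self-contained sketch follows; the quadratic loss and all numbers are invented stand-ins (the paper's actual objective runs through a nodule synthesizer and detector network):

```python
def pgd(x0, grad, step=0.1, eps=0.1, iters=50):
    """Projected gradient ascent on a loss, constrained to the L-infinity
    ball of radius eps around the starting point x0."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi + step * gi for xi, gi in zip(x, g)]   # gradient ascent step
        x = [min(max(xi, ci - eps), ci + eps)          # project back into ball
             for xi, ci in zip(x, x0)]
    return x

# Stand-in objective: L(x) = sum(x_i^2), gradient 2x.
grad = lambda x: [2.0 * xi for xi in x]

x_adv = pgd([0.5, -0.25], grad)
# x_adv ends at the corner of the eps-ball that maximizes the stand-in loss,
# roughly [0.6, -0.35].
```

In the paper's setting, x would be a latent code or noise pattern and the objective would be chosen to suppress the detector response (or provoke over-confident mistakes); the projection step is what keeps the search "within a bounded neighbourhood".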
13
Chaganti S, Balachandran A, Chabin G, Cohen S, Flohr T, Georgescu B, Grenier P, Grbic S, Liu S, Mellot F, Murray N, Nicolaou S, Parker W, Re T, Sanelli P, Sauter AW, Xu Z, Yoo Y, Ziebandt V, Comaniciu D. Automated Quantification of CT Patterns Associated with COVID-19 from Chest CT. ArXiv 2020:arXiv:2004.01279v7. [PMID: 32550252] [PMCID: PMC7280906]
Abstract
PURPOSE To present a method that automatically segments and quantifies abnormal CT patterns commonly present in coronavirus disease 2019 (COVID-19), namely ground glass opacities and consolidations. MATERIALS AND METHODS In this retrospective study, the proposed method takes as input a non-contrasted chest CT and segments the lesions, lungs, and lobes in three dimensions, based on a dataset of 9749 chest CT volumes. The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and the presence of high opacities, based on deep learning and deep reinforcement learning. The first pair of measures (PO, PHO) is global, while the second pair (LSS, LHOS) is lobe-wise. Evaluation of the algorithm is reported on CTs of 200 participants (100 COVID-19 confirmed patients and 100 healthy controls) from institutions in Canada, Europe and the United States collected between 2002-Present (April, 2020). Ground truth is established by manual annotations of lesions, lungs, and lobes. Correlation and regression analyses were performed to compare the prediction to the ground truth. RESULTS The Pearson correlation coefficient between method prediction and ground truth for COVID-19 cases was calculated as 0.92 for PO (P < .001), 0.97 for PHO (P < .001), 0.91 for LSS (P < .001), and 0.90 for LHOS (P < .001). 98 of 100 healthy controls had a predicted PO of less than 1%; 2 had between 1% and 2%. Automated processing time to compute the severity scores was 10 seconds per case compared to 30 minutes required for manual annotations. CONCLUSION A new method segments regions of CT abnormalities associated with COVID-19 and computes (PO, PHO), as well as (LSS, LHOS), severity scores.
14
Winkel DJ, Wetterauer C, Matthias MO, Lou B, Shi B, Kamen A, Comaniciu D, Seifert HH, Rentsch CA, Boll DT. Autonomous Detection and Classification of PI-RADS Lesions in an MRI Screening Population Incorporating Multicenter-Labeled Deep Learning and Biparametric Imaging: Proof of Concept. Diagnostics (Basel) 2020; 10:diagnostics10110951. [PMID: 33202680] [PMCID: PMC7697194] [DOI: 10.3390/diagnostics10110951]
Abstract
Background: Opportunistic prostate cancer (PCa) screening is a controversial topic. Magnetic resonance imaging (MRI) has proven to detect prostate cancer with a high sensitivity and specificity, leading to the idea to perform an image-guided prostate cancer (PCa) screening; Methods: We evaluated a prospectively enrolled cohort of 49 healthy men participating in a dedicated image-guided PCa screening trial employing a biparametric MRI (bpMRI) protocol consisting of T2-weighted (T2w) and diffusion weighted imaging (DWI) sequences. Datasets were analyzed both by human readers and by a fully automated artificial intelligence (AI) software using deep learning (DL). Agreement between the algorithm and the reports—serving as the ground truth—was compared on a per-case and per-lesion level using metrics of diagnostic accuracy and k statistics; Results: The DL method yielded an 87% sensitivity (33/38) and 50% specificity (5/10) with a k of 0.42. 12/28 (43%) Prostate Imaging Reporting and Data System (PI-RADS) 3, 16/22 (73%) PI-RADS 4, and 5/5 (100%) PI-RADS 5 lesions were detected compared to the ground truth. Targeted biopsy revealed PCa in six participants, all correctly diagnosed by both the human readers and AI. Conclusions: The results of our study show that in our AI-assisted, image-guided prostate cancer screening the software solution was able to identify highly suspicious lesions and has the potential to effectively guide the targeted-biopsy workflow.
Affiliation(s)
- David J. Winkel: Department of Radiology, University Hospital of Basel, 4051 Basel, Basel-Stadt, Switzerland; Siemens Healthineers, Medical Imaging Technologies, Princeton, NJ 08540, USA. Correspondence: Tel.: +41-61-328-65-22; Fax: +41-61-265-43-54
- Christian Wetterauer, Marc Oliver Matthias, Hans-Helge Seifert, Cyrill A. Rentsch: Department of Urology, University Hospital of Basel, 4051 Basel, Basel-Stadt, Switzerland
- Bin Lou, Bibo Shi, Ali Kamen, Dorin Comaniciu: Siemens Healthineers, Medical Imaging Technologies, Princeton, NJ 08540, USA
- Daniel T. Boll: Department of Radiology, University Hospital of Basel, 4051 Basel, Basel-Stadt, Switzerland
15
Ghesu FC, Georgescu B, Mansoor A, Yoo Y, Gibson E, Vishwanath RS, Balachandran A, Balter JM, Cao Y, Singh R, Digumarthy SR, Kalra MK, Grbic S, Comaniciu D. Quantifying and leveraging predictive uncertainty for medical image assessment. Med Image Anal 2020; 68:101855. PMID: 33260116; DOI: 10.1016/j.media.2020.101855.
Abstract
The interpretation of medical images is a challenging task, often complicated by the presence of artifacts, occlusions, and limited contrast. Most notable is the case of chest radiography, where there is high inter-rater variability in the detection and classification of abnormalities, largely due to inconclusive evidence in the data or subjective definitions of disease appearance. An additional example is the classification of anatomical views based on 2D ultrasound images: often, the anatomical context captured in a frame is not sufficient to recognize the underlying anatomy. Current machine learning solutions for these problems are typically limited to providing probabilistic predictions, relying on the capacity of the underlying models to adapt to limited information and a high degree of label noise. In practice, however, this leads to overconfident systems with poor generalization on unseen data. To account for this, we propose a system that learns not only the probabilistic estimate for classification, but also an explicit uncertainty measure that captures the confidence of the system in the predicted output. We argue that this approach is essential to account for the inherent ambiguity of medical images from different radiologic exams, including computed radiography, ultrasonography, and magnetic resonance imaging. In our experiments, we demonstrate that sample rejection based on the predicted uncertainty can significantly improve the ROC-AUC for various tasks, e.g., by 8% to 0.91 with an expected rejection rate of under 25% for the classification of different abnormalities in chest radiographs. In addition, we show that by using uncertainty-driven bootstrapping to filter the training data, one can achieve a significant increase in robustness and accuracy. Finally, we present a multi-reader study showing that the predictive uncertainty is indicative of reader errors.
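The sample-rejection experiment described above can be pictured concretely: score each case, estimate its uncertainty, discard the most uncertain quarter, and recompute the ROC-AUC on the remainder. The toy data and the Mann-Whitney formulation of AUC below are illustrative assumptions, not the paper's implementation:

```python
def auc(pos_scores, neg_scores):
    # ROC-AUC via the Mann-Whitney statistic: the probability that a random
    # positive case is scored higher than a random negative case
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

def auc_after_rejection(samples, reject_rate=0.25):
    # samples: (label, predicted probability, predicted uncertainty);
    # keep only the most confident (1 - reject_rate) fraction before scoring
    kept = sorted(samples, key=lambda s: s[2])[: int(len(samples) * (1 - reject_rate))]
    pos = [p for y, p, _ in kept if y == 1]
    neg = [p for y, p, _ in kept if y == 0]
    return auc(pos, neg)

# toy cohort: the two misclassified cases are also the most uncertain ones
samples = [(1, 0.9, 0.10), (1, 0.8, 0.10), (1, 0.2, 0.90),
           (0, 0.1, 0.10), (0, 0.3, 0.20), (0, 0.85, 0.95)]
print(round(auc([0.9, 0.8, 0.2], [0.1, 0.3, 0.85]), 2))  # 0.67 before rejection
print(auc_after_rejection(samples))                       # 1.0 after rejection
```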
Affiliation(s)
- Florin C Ghesu, Bogdan Georgescu, Awais Mansoor, Youngjin Yoo, Eli Gibson, Sasa Grbic, Dorin Comaniciu: Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ, USA
- R S Vishwanath: Siemens Healthineers, Digital Technology and Innovation, Bangalore, India
- James M Balter, Yue Cao: Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA
- Ramandeep Singh, Subba R Digumarthy, Mannudeep K Kalra: Department of Radiology, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
16
Wang DD, Qian Z, Vukicevic M, Engelhardt S, Kheradvar A, Zhang C, Little SH, Verjans J, Comaniciu D, O'Neill WW, Vannan MA. 3D Printing, Computational Modeling, and Artificial Intelligence for Structural Heart Disease. JACC Cardiovasc Imaging 2020; 14:41-60. PMID: 32861647; DOI: 10.1016/j.jcmg.2019.12.022.
Abstract
Structural heart disease (SHD) is a new field within cardiovascular medicine. Traditional imaging modalities, built around the concept of disease diagnosis, fall short of supporting the needs of SHD interventions, which disrupt traditional concepts of imaging by requiring imaging to plan, simulate, and predict intraprocedural outcomes. In transcatheter SHD interventions, the absence of a gold-standard open-cavity surgical field deprives physicians of tactile feedback and visual confirmation of cardiac anatomy. This dependency on imaging for periprocedural guidance has led to the evolution of a new generation of procedural skillsets, the concept of a visual field, and periprocedural planning technologies that accelerate preclinical device development and physician and patient education. Adoption of 3-dimensional (3D) printing in clinical care and procedural planning has been shown to reduce the early-operator learning curve for transcatheter interventions. Integration of computational modeling with 3D printing has accelerated research and development by improving the understanding of fluid mechanics in device testing. The application of 3D printing and computational modeling, and ultimately the incorporation of artificial intelligence, is changing the landscape of physician training and the delivery of patient-centric care, as transcatheter structural heart interventions require an in-depth periprocedural understanding of cardiac pathophysiology and device interactions that is not afforded by traditional imaging metrics.
Affiliation(s)
- Dee Dee Wang, William W O'Neill: Center for Structural Heart Disease, Division of Cardiology, Henry Ford Health System, Detroit, Michigan, USA
- Zhen Qian, Mani A Vannan: Hippocrates Research Lab, Tencent America, Palo Alto, California, USA
- Marija Vukicevic, Stephen H Little: Department of Cardiology, Methodist DeBakey Heart Center, Houston Methodist Hospital, Houston, Texas, USA
- Sandy Engelhardt: Artificial Intelligence in Cardiovascular Medicine, Heidelberg University Hospital, Heidelberg, Germany
- Arash Kheradvar: Department of Biomedical Engineering, Edwards Lifesciences Center for Advanced Cardiovascular Technology, University of California, Irvine, California, USA
- Chuck Zhang: H. Milton Stewart School of Industrial & Systems Engineering and Georgia Tech Manufacturing Institute, Georgia Institute of Technology, Atlanta, Georgia, USA
- Johan Verjans: Australian Institute for Machine Learning, University of Adelaide, Adelaide, South Australia, Australia
- Dorin Comaniciu: Siemens Healthineers, Medical Imaging Technologies, Princeton, New Jersey, USA
17
Chaganti S, Grenier P, Balachandran A, Chabin G, Cohen S, Flohr T, Georgescu B, Grbic S, Liu S, Mellot F, Murray N, Nicolaou S, Parker W, Re T, Sanelli P, Sauter AW, Xu Z, Yoo Y, Ziebandt V, Comaniciu D. Automated Quantification of CT Patterns Associated with COVID-19 from Chest CT. Radiol Artif Intell 2020; 2:e200048. PMID: 33928255; PMCID: PMC7392373; DOI: 10.1148/ryai.2020200048.
Abstract
PURPOSE To present a method that automatically segments and quantifies abnormal CT patterns commonly present in coronavirus disease 2019 (COVID-19), namely ground-glass opacities and consolidations. MATERIALS AND METHODS In this retrospective study, the proposed method takes as input a non-contrast chest CT and segments the lesions, lungs, and lobes in three dimensions, based on a dataset of 9749 chest CT volumes. The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and the presence of high opacities, based on deep learning and deep reinforcement learning. The first pair of measures (PO, PHO) is global, while the second (LSS, LHOS) is lobe-wise. The algorithm was evaluated on CTs of 200 participants (100 patients with confirmed COVID-19 and 100 healthy controls) from institutions in Canada, Europe, and the United States, collected between 2002 and April 2020. Ground truth was established by manual annotations of lesions, lungs, and lobes. Correlation and regression analyses were performed to compare the predictions to the ground truth. RESULTS The Pearson correlation coefficient between method prediction and ground truth for COVID-19 cases was 0.92 for PO (P < .001), 0.97 for PHO (P < .001), 0.91 for LSS (P < .001), and 0.90 for LHOS (P < .001). Of the 100 healthy controls, 98 had a predicted PO of less than 1%, and 2 had between 1% and 2%. Automated processing time to compute the severity scores was 10 seconds per case, compared to 30 minutes required for manual annotations. CONCLUSION The new method segments regions of CT abnormality associated with COVID-19 and computes the (PO, PHO) and (LSS, LHOS) severity scores.
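A global severity measure such as PO reduces, at its core, to a ratio of segmented volumes, and the reported agreement is a Pearson correlation between predicted and ground-truth scores. The sketch below illustrates both on flattened toy masks; it is a simplification for illustration, not the paper's pipeline:

```python
def percentage_of_opacity(lesion_mask, lung_mask):
    # PO: volume of abnormal (lesion) voxels inside the lungs, as a
    # percentage of total lung volume (masks are flattened 0/1 lists)
    lesion = sum(l & g for l, g in zip(lesion_mask, lung_mask))
    return 100.0 * lesion / sum(lung_mask)

def pearson(xs, ys):
    # correlation between predicted and ground-truth severity scores
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

lung   = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
lesion = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
print(percentage_of_opacity(lesion, lung))  # 20.0
```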
Affiliation(s)
- Shikha Chaganti, Philippe Grenier, Abishek Balachandran, Guillaume Chabin, Stuart Cohen, Thomas Flohr, Bogdan Georgescu, Sasa Grbic, Siqi Liu, François Mellot, Nicolas Murray, Savvas Nicolaou, William Parker, Thomas Re, Pina Sanelli, Alexander W. Sauter, Zhoubing Xu, Youngjin Yoo, Valentin Ziebandt, Dorin Comaniciu
- From the Hôpital Foch, Suresnes, France (P.G., F.M.); Donald and Barbara Zucker School of Medicine, Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY, USA (S.C., P.S.); Siemens Healthineers, Bangalore, India (A.B.); Siemens Healthineers, Forchheim, Germany (T.F., V.Z.); Siemens Healthineers, Princeton, NJ, USA (S.C., B.G., S.G., S.L., T.R., Z.X., Y.Y., D.C.); Siemens Healthineers, Paris, France (G.C.); University Hospital Basel, Clinic of Radiology & Nuclear Medicine, Basel, Switzerland (A.W.S.); Vancouver General Hospital, Vancouver, Canada (N.M., S.N., W.P.)
18
Winkel DJ, Weikert TJ, Breit HC, Chabin G, Gibson E, Heye TJ, Comaniciu D, Boll DT. Validation of a fully automated liver segmentation algorithm using multi-scale deep reinforcement learning and comparison versus manual segmentation. Eur J Radiol 2020; 126:108918. PMID: 32171914; DOI: 10.1016/j.ejrad.2020.108918.
Abstract
PURPOSE To evaluate the performance of an artificial intelligence (AI) based software solution for liver volumetric analyses and to compare the results to manual contour segmentation. MATERIALS AND METHODS We retrospectively obtained 462 multiphasic CT datasets with six series for each patient: three different contrast phases and two slice-thickness reconstructions (1.5/5 mm), totaling 2772 series. AI-based liver volumes were determined using multi-scale deep reinforcement learning for 3D body-marker detection and 3D structure segmentation. The algorithm was trained for liver volumetry on approximately 5000 datasets. We computed the absolute error of each automatically and manually derived volume relative to the mean manual volume, and recorded the mean processing time per dataset for each method. Variations of liver volumes were compared using univariate generalized linear model analyses. A subgroup of 60 datasets was manually segmented by three radiologists, with a further subgroup of 20 segmented three times by each, to compare the automatically derived results with the ground truth. RESULTS The mean absolute error of the automatically derived measurement was 44.3 mL (2.37% of the averaged liver volumes). The liver volume was dependent neither on the contrast phase (p = 0.697) nor on the slice thickness (p = 0.446). The mean processing time per dataset was 9.94 s with the algorithm, compared to 219.34 s for manual segmentation. We found excellent agreement between both approaches, with an ICC of 0.996. CONCLUSION The results of our study demonstrate that AI-powered, fully automated liver volumetric analyses can be performed with excellent accuracy, reproducibility, robustness, and speed, and in excellent agreement with manual segmentation.
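The headline accuracy figure is a mean absolute volume error, also expressed as a fraction of the averaged liver volume. A minimal sketch of that computation, using toy volumes rather than study data:

```python
def volumetry_error(auto_ml, manual_ml):
    # mean absolute error of the automatic volumes against the manual ones,
    # in mL and as a percentage of the averaged manual liver volume
    errors = [abs(a - m) for a, m in zip(auto_ml, manual_ml)]
    mae = sum(errors) / len(errors)
    pct = 100.0 * mae / (sum(manual_ml) / len(manual_ml))
    return mae, pct

mae, pct = volumetry_error([1500.0, 1600.0], [1520.0, 1580.0])
print(mae, round(pct, 2))  # 20.0 1.29
```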
Affiliation(s)
- David J Winkel: Department of Radiology, University Hospital of Basel, Basel, Switzerland; Siemens Healthineers, Medical Imaging Technologies, Princeton, NJ, USA
- Thomas J Weikert, Tobias J Heye, Daniel T Boll: Department of Radiology, University Hospital of Basel, Basel, Switzerland
- Guillaume Chabin, Eli Gibson, Dorin Comaniciu: Siemens Healthineers, Medical Imaging Technologies, Princeton, NJ, USA
19
Taghanaki SA, Zheng Y, Kevin Zhou S, Georgescu B, Sharma P, Xu D, Comaniciu D, Hamarneh G. Combo loss: Handling input and output imbalance in multi-organ segmentation. Comput Med Imaging Graph 2019; 75:24-33. PMID: 31129477; DOI: 10.1016/j.compmedimag.2019.04.005.
Abstract
Simultaneous segmentation of multiple organs from different medical imaging modalities is a crucial task, as it can be utilized for computer-aided diagnosis, computer-assisted surgery, and therapy planning. Thanks to recent advances in deep learning, several deep neural networks for medical image segmentation have been introduced successfully for this purpose. In this paper, we focus on learning a deep multi-organ segmentation network that labels voxels. In particular, we examine the critical choice of a loss function in order to handle the notorious imbalance problem that plagues both the input and the output of a learning model. The input imbalance refers to the class imbalance in the input training samples (i.e., small foreground objects embedded in an abundance of background voxels, as well as organs of varying sizes). The output imbalance refers to the imbalance between the false positives and false negatives of the inference model. To tackle both types of imbalance during training and inference, we introduce a new curriculum-learning-based loss function. Specifically, we leverage the Dice similarity coefficient to deter model parameters from being held at bad local minima and, at the same time, gradually learn better model parameters by penalizing false positives/negatives using a cross-entropy term. We evaluated the proposed loss function on three datasets: whole-body positron emission tomography (PET) scans with five target organs, magnetic resonance imaging (MRI) prostate scans, and ultrasound echocardiography images with a single target organ, i.e., the left ventricle. We show that a simple network architecture with the proposed integrative loss function can outperform state-of-the-art methods, and that the results of the competing methods can be improved when our proposed loss is used.
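In spirit, the proposed loss pairs a Dice term (which counters input class-imbalance) with a weighted cross-entropy term whose weighting trades false negatives against false positives (output imbalance). The binary, flattened-voxel sketch below is only loosely modeled on the paper's formulation; `alpha` and `beta` are illustrative hyper-parameters, not the published values:

```python
import math

def combo_loss(probs, targets, alpha=0.5, beta=0.7, eps=1e-7):
    # alpha trades the cross-entropy term against the Dice term;
    # beta > 0.5 penalizes false negatives more than false positives
    n = len(probs)
    ce = -sum(beta * t * math.log(p + eps)
              + (1 - beta) * (1 - t) * math.log(1 - p + eps)
              for p, t in zip(probs, targets)) / n
    inter = sum(p * t for p, t in zip(probs, targets))
    dice = (2 * inter + eps) / (sum(probs) + sum(targets) + eps)
    return alpha * ce + (1 - alpha) * (1 - dice)

perfect = combo_loss([1.0, 1.0, 0.0, 0.0], [1, 1, 0, 0])
poor    = combo_loss([0.2, 0.3, 0.8, 0.7], [1, 1, 0, 0])
print(perfect < poor)  # True
```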
Affiliation(s)
- Saeid Asgari Taghanaki: School of Computing Science, Simon Fraser University, Canada; Medical Imaging Technologies, Siemens Healthineers, Princeton, NJ, USA
- Yefeng Zheng, S Kevin Zhou, Bogdan Georgescu, Puneet Sharma, Daguang Xu, Dorin Comaniciu: Medical Imaging Technologies, Siemens Healthineers, Princeton, NJ, USA
20
Dey D, Slomka PJ, Leeson P, Comaniciu D, Shrestha S, Sengupta PP, Marwick TH. Artificial Intelligence in Cardiovascular Imaging: JACC State-of-the-Art Review. J Am Coll Cardiol 2019; 73:1317-1335. PMID: 30898208; PMCID: PMC6474254; DOI: 10.1016/j.jacc.2018.12.054.
Abstract
Data science is likely to lead to major changes in cardiovascular imaging. Problems with timing, efficiency, and missed diagnoses occur at all stages of the imaging chain. The application of artificial intelligence (AI) is dependent on robust data; the application of appropriate computational approaches and tools; and validation of its clinical application to image segmentation, automated measurements, and eventually, automated diagnosis. AI may reduce cost and improve value at the stages of image acquisition, interpretation, and decision-making. Moreover, the precision now possible with cardiovascular imaging, combined with "big data" from the electronic health record and pathology, is likely to better characterize disease and personalize therapy. This review summarizes recent promising applications of AI in cardiology and cardiac imaging, which potentially add value to patient care.
Affiliation(s)
- Damini Dey
- Departments of Biomedical Sciences and Medicine, Cedars-Sinai Medical Center, Biomedical Imaging Research Institute, Los Angeles, California
- Piotr J Slomka
- Departments of Biomedical Sciences and Medicine, Cedars-Sinai Medical Center, Biomedical Imaging Research Institute, Los Angeles, California
- Paul Leeson
- Oxford Cardiovascular Clinical Research Facility, Radcliffe Department of Medicine, University of Oxford, Oxford, United Kingdom
- Sirish Shrestha
- Section of Cardiology, West Virginia University, Morgantown, West Virginia
- Partho P Sengupta
- Section of Cardiology, West Virginia University, Morgantown, West Virginia
- Thomas H Marwick
- Baker Heart and Diabetes Research Institute, Melbourne, Australia.
21
Ghesu FC, Georgescu B, Zheng Y, Grbic S, Maier A, Hornegger J, Comaniciu D. Multi-Scale Deep Reinforcement Learning for Real-Time 3D-Landmark Detection in CT Scans. IEEE Trans Pattern Anal Mach Intell 2019; 41:176-189. [PMID: 29990011] [DOI: 10.1109/tpami.2017.2782687]
Abstract
Robust and fast detection of anatomical structures is a prerequisite for both diagnostic and interventional medical image analysis. Current solutions for anatomy detection are typically based on machine learning techniques that exploit large annotated image databases in order to learn the appearance of the captured anatomy. These solutions are subject to several limitations, including the use of suboptimal feature engineering techniques and most importantly the use of computationally suboptimal search-schemes for anatomy detection. To address these issues, we propose a method that follows a new paradigm by reformulating the detection problem as a behavior learning task for an artificial agent. We couple the modeling of the anatomy appearance and the object search in a unified behavioral framework, using the capabilities of deep reinforcement learning and multi-scale image analysis. In other words, an artificial agent is trained not only to distinguish the target anatomical object from the rest of the body but also how to find the object by learning and following an optimal navigation path to the target object in the imaged volumetric space. We evaluated our approach on 1487 3D-CT volumes from 532 patients, totaling over 500,000 image slices and show that it significantly outperforms state-of-the-art solutions on detecting several anatomical structures with no failed cases from a clinical acceptance perspective, while also achieving a 20-30 percent higher detection accuracy. Most importantly, we improve the detection-speed of the reference methods by 2-3 orders of magnitude, achieving unmatched real-time performance on large 3D-CT scans.
22
Itu L, Rapaka S, Passerini T, Georgescu B, Schwemmer C, Schoebinger M, Flohr T, Sharma P, Comaniciu D. Reply to Liu et al. J Appl Physiol (1985) 2018; 125:1353. [PMID: 30354943] [DOI: 10.1152/japplphysiol.00563.2018]
Affiliation(s)
- Lucian Itu
- Corporate Technology, Siemens SRL, Brasov, Romania; Department of Automation and Information Technology, Transilvania University of Brasov, Brasov, Romania
- Saikiran Rapaka
- Medical Imaging Technologies, Siemens Medical Solutions USA, Princeton, New Jersey
- Tiziano Passerini
- Medical Imaging Technologies, Siemens Medical Solutions USA, Princeton, New Jersey
- Bogdan Georgescu
- Medical Imaging Technologies, Siemens Medical Solutions USA, Princeton, New Jersey
- Chris Schwemmer
- Computed Tomography - Research & Development, Siemens Healthcare GmbH, Forchheim, Germany
- Max Schoebinger
- Computed Tomography - Research & Development, Siemens Healthcare GmbH, Forchheim, Germany
- Thomas Flohr
- Computed Tomography - Research & Development, Siemens Healthcare GmbH, Forchheim, Germany
- Puneet Sharma
- Medical Imaging Technologies, Siemens Medical Solutions USA, Princeton, New Jersey
- Dorin Comaniciu
- Medical Imaging Technologies, Siemens Medical Solutions USA, Princeton, New Jersey
23
Ghesu FC, Georgescu B, Grbic S, Maier A, Hornegger J, Comaniciu D. Towards intelligent robust detection of anatomical structures in incomplete volumetric data. Med Image Anal 2018; 48:203-213. [PMID: 29966940] [DOI: 10.1016/j.media.2018.06.007]
Abstract
Robust and fast detection of anatomical structures represents an important component of medical image analysis technologies. Current solutions for anatomy detection are based on machine learning, and are generally driven by suboptimal and exhaustive search strategies. In particular, these techniques do not effectively address cases of incomplete data, i.e., scans acquired with a partial field-of-view. We address these challenges by following a new paradigm, which reformulates the detection task as teaching an intelligent artificial agent how to actively search for an anatomical structure. Using the principles of deep reinforcement learning with multi-scale image analysis, artificial agents are taught optimal navigation paths in the scale-space representation of an image, while accounting for structures that are missing from the field-of-view. The spatial coherence of the observed anatomical landmarks is ensured using elements from statistical shape modeling and robust estimation theory. Experiments show that our solution outperforms marginal space deep learning, a powerful deep learning method, at detecting different anatomical structures without any failure. The dataset contains 5043 3D-CT volumes from over 2000 patients, totaling over 2,500,000 image slices. In particular, our solution achieves 0% false-positive and 0% false-negative rates at detecting whether the landmarks are captured in the field-of-view of the scan (excluding all border cases), with an average detection accuracy of 2.78 mm. In terms of runtime, we reduce the detection-time of the marginal space deep learning method by 20-30 times to under 40 ms, an unmatched performance for high resolution incomplete 3D-CT data.
Affiliation(s)
- Florin C Ghesu
- Siemens Healthineers, Medical Imaging Technologies, Princeton, NJ, USA; Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany.
- Bogdan Georgescu
- Siemens Healthineers, Medical Imaging Technologies, Princeton, NJ, USA
- Sasa Grbic
- Siemens Healthineers, Medical Imaging Technologies, Princeton, NJ, USA
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
- Joachim Hornegger
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
- Dorin Comaniciu
- Siemens Healthineers, Medical Imaging Technologies, Princeton, NJ, USA
24
Katus H, Ziegler A, Ekinci O, Giannitsis E, Stough WG, Achenbach S, Blankenberg S, Brueckmann M, Collinson P, Comaniciu D, Crea F, Dinh W, Ducrocq G, Flachskampf FA, Fox KAA, Friedrich MG, Hebert KA, Himmelmann A, Hlatky M, Lautsch D, Lindahl B, Lindholm D, Mills NL, Minotti G, Möckel M, Omland T, Semjonow V. Early diagnosis of acute coronary syndrome. Eur Heart J 2017; 38:3049-3055. [PMID: 29029109] [DOI: 10.1093/eurheartj/ehx492]
Abstract
The diagnostic evaluation of acute chest pain has been augmented in recent years by advances in the sensitivity and precision of cardiac troponin assays, new biomarkers, improvements in imaging modalities, and release of new clinical decision algorithms. This progress has enabled physicians to diagnose or rule-out acute myocardial infarction earlier after the initial patient presentation, usually in emergency department settings, which may facilitate prompt initiation of evidence-based treatments, investigation of alternative diagnoses for chest pain, or discharge, and permit better utilization of healthcare resources. A non-trivial proportion of patients fall in an indeterminate category according to rule-out algorithms, and minimal evidence-based guidance exists for the optimal evaluation, monitoring, and treatment of these patients. The Cardiovascular Round Table of the ESC proposes approaches for the optimal application of early strategies in clinical practice to improve patient care following the review of recent advances in the early diagnosis of acute coronary syndrome. The following specific 'indeterminate' patient categories were considered: (i) patients with symptoms and high-sensitivity cardiac troponin <99th percentile; (ii) patients with symptoms and high-sensitivity troponin <99th percentile but above the limit of detection; (iii) patients with symptoms and high-sensitivity troponin >99th percentile but without dynamic change; and (iv) patients with symptoms and high-sensitivity troponin >99th percentile and dynamic change but without coronary plaque rupture/erosion/dissection. Definitive evidence is currently lacking to manage these patients whose early diagnosis is 'indeterminate' and these areas of uncertainty should be assigned a high priority for research.
Affiliation(s)
- Hugo Katus
- Medizinische Klinik III, University of Heidelberg, Im Neuenheimer Feld 410, 69120 Heidelberg, Germany
- Okan Ekinci
- Siemens Healthineers, Erlangen, Germany
- University College Dublin, Dublin, Ireland
- Evangelos Giannitsis
- Medizinische Klinik III, University of Heidelberg, Im Neuenheimer Feld 410, 69120 Heidelberg, Germany
- Stephan Achenbach
- Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Martina Brueckmann
- Boehringer-Ingelheim GmbH & Co. KG, Ingelheim am Rhein, Germany
- Faculty of Medicine Mannheim, University of Heidelberg, Mannheim, Germany
- Paul Collinson
- St. George's University Hospitals NHS Foundation Trust, London, UK
- St. Georges, University of London, London, UK
- Filippo Crea
- Universita Cattolica del Sacro Cuore, Rome, Italy
- Wilfried Dinh
- Bayer AG Pharmaceuticals, Drug Discovery, Wuppertal, Germany
- Department of Cardiology, HELIOS Clinic Wuppertal, University Hospital Witten/Herdecke, Wuppertal, Germany
- Frank A Flachskampf
- Department of Medical Sciences, Clinical Physiology/Cardiology, Uppsala University, Uppsala, Sweden
- Keith A A Fox
- Centre for Cardiovascular Science, University and Royal Infirmary of Edinburgh, Edinburgh, UK
- Matthias G Friedrich
- Departments of Medicine and Diagnostic Radiology, McGill University Health Centre, Montreal, Canada
- Heidelberg University, Heidelberg, Germany
- Mark Hlatky
- Stanford University School of Medicine, Stanford, CA, USA
- Bertil Lindahl
- Department of Medical Sciences, Clinical Physiology/Cardiology, Uppsala University, Uppsala, Sweden
- Daniel Lindholm
- Department of Medical Sciences, Cardiology, Uppsala Clinical Research Center, Uppsala University, Uppsala, Sweden
- Nicholas L Mills
- BHF Center for Cardiovascular Sciences, University of Edinburgh, Edinburgh, UK
- Torbjørn Omland
- Akershus University Hospital and University of Oslo, Oslo, Norway
25
Itu L, Sharma P, Suciu C, Moldoveanu F, Comaniciu D. Personalized blood flow computations: A hierarchical parameter estimation framework for tuning boundary conditions. Int J Numer Method Biomed Eng 2017; 33:e02803. [PMID: 27194580] [DOI: 10.1002/cnm.2803]
Abstract
We propose a hierarchical parameter estimation framework for performing patient-specific hemodynamic computations in arterial models, which use structured tree boundary conditions. A calibration problem is formulated at each stage of the hierarchical framework, which seeks the fixed point solution of a nonlinear system of equations. Common hemodynamic properties, like resistance and compliance, are estimated at the first stage in order to match the objectives given by clinical measurements of pressure and/or flow rate. The second stage estimates the parameters of the structured trees so as to match the values of the hemodynamic properties determined at the first stage. A key feature of the proposed method is that to ensure a large range of variation, two different structured tree parameters are personalized for each hemodynamic property. First, the second stage of the parameter estimation framework is evaluated based on the properties of the outlet boundary conditions in a full body arterial model: the calibration method converges for all structured trees in less than 10 iterations. Next, the proposed framework is successfully evaluated on a patient-specific aortic model with coarctation: only six iterations are required for the computational model to be in close agreement with the clinical measurements used as objectives, and overall, there is a good agreement between the measured and computed quantities. Copyright © 2016 John Wiley & Sons, Ltd.
Affiliation(s)
- Lucian Itu
- Corporate Technology, Siemens SRL, B-dul Eroilor nr. 5, Brasov, 500007, Romania
- Transilvania University of Brasov, B-dul Eroilor nr. 29, 500036, Brasov, Romania
- Puneet Sharma
- Siemens Medical Solutions USA, Inc., 755 College Road East, Princeton, NJ 08540, USA
- Constantin Suciu
- Corporate Technology, Siemens SRL, B-dul Eroilor nr. 5, Brasov, 500007, Romania
- Transilvania University of Brasov, B-dul Eroilor nr. 29, 500036, Brasov, Romania
- Florin Moldoveanu
- Transilvania University of Brasov, B-dul Eroilor nr. 29, 500036, Brasov, Romania
- Dorin Comaniciu
- Siemens Medical Solutions USA, Inc., 755 College Road East, Princeton, NJ 08540, USA
26
Yang D, Xiong T, Xu D, Huang Q, Liu D, Zhou SK, Xu Z, Park J, Chen M, Tran TD, Chin SP, Metaxas D, Comaniciu D. Automatic Vertebra Labeling in Large-Scale 3D CT Using Deep Image-to-Image Network with Message Passing and Sparsity Regularization. Lecture Notes in Computer Science 2017. [DOI: 10.1007/978-3-319-59050-9_50]
27
Ghesu FC, Georgescu B, Grbic S, Maier AK, Hornegger J, Comaniciu D. Robust Multi-scale Anatomical Landmark Detection in Incomplete 3D-CT Data. Medical Image Computing and Computer Assisted Intervention − MICCAI 2017, 2017. [DOI: 10.1007/978-3-319-66182-7_23]
28
Yang D, Xu D, Zhou SK, Georgescu B, Chen M, Grbic S, Metaxas D, Comaniciu D. Automatic Liver Segmentation Using an Adversarial Image-to-Image Network. Medical Image Computing and Computer Assisted Intervention − MICCAI 2017, 2017. [DOI: 10.1007/978-3-319-66179-7_58]
29
Zhang F, Kanik J, Mansi T, Voigt I, Sharma P, Ionasec RI, Subrahmanyan L, Lin BA, Sugeng L, Yuh D, Comaniciu D, Duncan J. Towards patient-specific modeling of mitral valve repair: 3D transesophageal echocardiography-derived parameter estimation. Med Image Anal 2017; 35:599-609. [DOI: 10.1016/j.media.2016.09.006]
30
Grbic S, Easley TF, Mansi T, Bloodworth CH, Pierce EL, Voigt I, Neumann D, Krebs J, Yuh DD, Jensen MO, Comaniciu D, Yoganathan AP. Personalized mitral valve closure computation and uncertainty analysis from 3D echocardiography. Med Image Anal 2017; 35:238-249. [DOI: 10.1016/j.media.2016.03.011]
31
Calmac L, Niculescu R, Badila E, Weiss E, Penes D, Zamfir D, Itu L, Lazar L, Carp M, Itu A, Suciu C, Passerini T, Sharma P, Georgescu B, Comaniciu D. TCT-527 A data-driven approach combining image-based anatomical features and resting state measurements for the functional assessment of coronary artery disease. J Am Coll Cardiol 2016. [DOI: 10.1016/j.jacc.2016.09.664]
32
Comaniciu D, Engel K, Georgescu B, Mansi T. Shaping the future through innovations: From medical imaging to precision medicine. Med Image Anal 2016; 33:19-26. [PMID: 27349829] [DOI: 10.1016/j.media.2016.06.016]
Abstract
Medical images constitute a source of information essential for disease diagnosis, treatment and follow-up. In addition, due to its patient-specific nature, imaging information represents a critical component required for advancing precision medicine into clinical practice. This manuscript describes recently developed technologies for better handling of image information: photorealistic visualization of medical images with Cinematic Rendering, artificial agents for in-depth image understanding, support for minimally invasive procedures, and patient-specific computational models with enhanced predictive power. Throughout the manuscript we will analyze the capabilities of such technologies and extrapolate on their potential impact to advance the quality of medical care, while reducing its cost.
Affiliation(s)
- Dorin Comaniciu
- Medical Imaging Technologies, Siemens Healthcare Technology Center, Princeton, NJ, USA
- Klaus Engel
- Medical Imaging Technologies, Siemens Healthcare Technology Center, Erlangen, Germany
- Bogdan Georgescu
- Medical Imaging Technologies, Siemens Healthcare Technology Center, Princeton, NJ, USA
- Tommaso Mansi
- Medical Imaging Technologies, Siemens Healthcare Technology Center, Princeton, NJ, USA.
33
Ghesu FC, Krubasik E, Georgescu B, Singh V, Hornegger J, Comaniciu D. Marginal Space Deep Learning: Efficient Architecture for Volumetric Image Parsing. IEEE Trans Med Imaging 2016; 35:1217-1228. [PMID: 27046846] [DOI: 10.1109/tmi.2016.2538802]
Abstract
Robust and fast solutions for anatomical object detection and segmentation support the entire clinical workflow from diagnosis, patient stratification, therapy planning, intervention and follow-up. Current state-of-the-art techniques for parsing volumetric medical image data are typically based on machine learning methods that exploit large annotated image databases. Two main challenges need to be addressed: the efficiency in scanning high-dimensional parametric spaces, and the need for representative image features, which require significant efforts of manual engineering. We propose a pipeline for object detection and segmentation in the context of volumetric image parsing, solving a two-step learning problem: anatomical pose estimation and boundary delineation. For this task we introduce Marginal Space Deep Learning (MSDL), a novel framework exploiting both the strengths of efficient object parametrization in hierarchical marginal spaces and the automated feature design of Deep Learning (DL) network architectures. In the 3D context, the application of deep learning systems is limited by the very high complexity of the parametrization. More specifically 9 parameters are necessary to describe a restricted affine transformation in 3D, resulting in a prohibitive amount of billions of scanning hypotheses. The mechanism of marginal space learning provides excellent run-time performance by learning classifiers in clustered, high-probability regions in spaces of gradually increasing dimensionality. To further increase computational efficiency and robustness, in our system we learn sparse adaptive data sampling patterns that automatically capture the structure of the input. Given the object localization, we propose a DL-based active shape model to estimate the non-rigid object boundary.
Experimental results are presented on the aortic valve in ultrasound using an extensive dataset of 2891 volumes from 869 patients, showing significant improvements of up to 45.2% over the state-of-the-art. To our knowledge, this is the first successful demonstration of the DL potential to detection and segmentation in full 3D data with parametrized representations.
34
Neumann D, Mansi T, Itu L, Georgescu B, Kayvanpour E, Sedaghat-Hamedani F, Amr A, Haas J, Katus H, Meder B, Steidl S, Hornegger J, Comaniciu D. A self-taught artificial agent for multi-physics computational model personalization. Med Image Anal 2016; 34:52-64. [PMID: 27133269] [DOI: 10.1016/j.media.2016.04.003]
Abstract
Personalization is the process of fitting a model to patient data, a critical step towards application of multi-physics computational models in clinical practice. Designing robust personalization algorithms is often a tedious, time-consuming, model- and data-specific process. We propose to use artificial intelligence concepts to learn this task, inspired by how human experts manually perform it. The problem is reformulated in terms of reinforcement learning. In an off-line phase, Vito, our self-taught artificial agent, learns a representative decision process model through exploration of the computational model: it learns how the model behaves under change of parameters. The agent then automatically learns an optimal strategy for on-line personalization. The algorithm is model-independent; applying it to a new model requires only adjusting a few hyper-parameters of the agent and defining the observations to match. The full knowledge of the model itself is not required. Vito was tested in a synthetic scenario, showing that it could learn how to optimize cost functions generically. Then Vito was applied to the inverse problem of cardiac electrophysiology and the personalization of a whole-body circulation model. The obtained results suggested that Vito could achieve equivalent, if not better, goodness of fit than standard methods, while being more robust (up to 11% higher success rates) and with a faster (up to seven times) convergence rate. Our artificial intelligence approach could thus make personalization algorithms generalizable and self-adaptable to any patient and any model.
Affiliation(s)
- Dominik Neumann
- Medical Imaging Technologies, Siemens Healthcare GmbH, Erlangen, Germany; Pattern Recognition Lab, FAU Erlangen-Nürnberg, Erlangen, Germany.
- Tommaso Mansi
- Medical Imaging Technologies, Siemens Healthcare, Princeton, USA
- Lucian Itu
- Siemens Corporate Technology, Siemens SRL, Brasov, Romania; Transilvania University of Brasov, Brasov, Romania
- Bogdan Georgescu
- Medical Imaging Technologies, Siemens Healthcare, Princeton, USA
- Elham Kayvanpour
- Department of Internal Medicine III, University Hospital Heidelberg, Germany
- Ali Amr
- Department of Internal Medicine III, University Hospital Heidelberg, Germany
- Jan Haas
- Department of Internal Medicine III, University Hospital Heidelberg, Germany
- Hugo Katus
- Department of Internal Medicine III, University Hospital Heidelberg, Germany
- Benjamin Meder
- Department of Internal Medicine III, University Hospital Heidelberg, Germany
- Stefan Steidl
- Pattern Recognition Lab, FAU Erlangen-Nürnberg, Erlangen, Germany
- Dorin Comaniciu
- Medical Imaging Technologies, Siemens Healthcare, Princeton, USA
35
Itu L, Rapaka S, Passerini T, Georgescu B, Schwemmer C, Schoebinger M, Flohr T, Sharma P, Comaniciu D. A machine-learning approach for computation of fractional flow reserve from coronary computed tomography. J Appl Physiol (1985) 2016; 121:42-52. [PMID: 27079692] [DOI: 10.1152/japplphysiol.00752.2015]
Abstract
Fractional flow reserve (FFR) is a functional index quantifying the severity of coronary artery lesions and is clinically obtained using an invasive, catheter-based measurement. Recently, physics-based models have shown great promise in being able to noninvasively estimate FFR from patient-specific anatomical information, e.g., obtained from computed tomography scans of the heart and the coronary arteries. However, these models have high computational demand, limiting their clinical adoption. In this paper, we present a machine-learning-based model for predicting FFR as an alternative to physics-based approaches. The model is trained on a large database of synthetically generated coronary anatomies, where the target values are computed using the physics-based model. The trained model predicts FFR at each point along the centerline of the coronary tree, and its performance was assessed by comparing the predictions against physics-based computations and against invasively measured FFR for 87 patients and 125 lesions in total. Correlation between machine-learning and physics-based predictions was excellent (0.9994, P < 0.001), and no systematic bias was found in Bland-Altman analysis: mean difference was -0.00081 ± 0.0039. Invasive FFR ≤ 0.80 was found in 38 lesions out of 125 and was predicted by the machine-learning algorithm with a sensitivity of 81.6%, a specificity of 83.9%, and an accuracy of 83.2%. The correlation was 0.729 (P < 0.001). Compared with the physics-based computation, average execution time was reduced by more than 80 times, leading to near real-time assessment of FFR. Average execution time went down from 196.3 ± 78.5 s for the CFD model to ∼2.4 ± 0.44 s for the machine-learning model on a workstation with 3.4-GHz Intel i7 8-core processor.
Affiliation(s)
- Lucian Itu
- Corporate Technology, Siemens SRL, Brasov, Romania; Department of Automation and Information Technology, Transilvania University of Brasov, Brasov, Romania
- Saikiran Rapaka
- Medical Imaging Technologies, Siemens Healthcare, Princeton, New Jersey
- Tiziano Passerini
- Medical Imaging Technologies, Siemens Healthcare, Princeton, New Jersey
- Bogdan Georgescu
- Medical Imaging Technologies, Siemens Healthcare, Princeton, New Jersey
- Chris Schwemmer
- Computed Tomography-Research & Development, Siemens Healthcare GmbH, Forchheim, Germany
- Max Schoebinger
- Computed Tomography-Research & Development, Siemens Healthcare GmbH, Forchheim, Germany
- Thomas Flohr
- Computed Tomography-Research & Development, Siemens Healthcare GmbH, Forchheim, Germany
- Puneet Sharma
- Medical Imaging Technologies, Siemens Healthcare, Princeton, New Jersey
- Dorin Comaniciu
- Medical Imaging Technologies, Siemens Healthcare, Princeton, New Jersey
36
Ralovich K, Itu L, Vitanovski D, Sharma P, Ionasec R, Mihalef V, Krawtschuk W, Zheng Y, Everett A, Pongiglione G, Leonardi B, Ringel R, Navab N, Heimann T, Comaniciu D. Noninvasive hemodynamic assessment, treatment outcome prediction and follow-up of aortic coarctation from MR imaging. Med Phys 2016; 42:2143-56. [PMID: 25979009] [DOI: 10.1118/1.4914856]
Abstract
PURPOSE Coarctation of the aorta (CoA) is a congenital heart disease characterized by an abnormal narrowing of the proximal descending aorta. Severity of this pathology is quantified by the blood pressure drop (△P) across the stenotic coarctation lesion. In order to evaluate the physiological significance of the preoperative coarctation and to assess the postoperative results, the hemodynamic analysis is routinely performed by measuring the △P across the coarctation site via invasive cardiac catheterization. The focus of this work is to present an alternative, noninvasive measurement of blood pressure drop △P through the introduction of a fast, image-based workflow for personalized computational modeling of the CoA hemodynamics. METHODS The authors propose an end-to-end system comprised of shape and computational models, their personalization setup using MR imaging, and a fast, noninvasive method based on computational fluid dynamics (CFD) to estimate the pre- and postoperative hemodynamics for coarctation patients. A virtual treatment method is investigated to assess the predictive power of our approach. RESULTS Automatic thoracic aorta segmentation was applied on a population of 212 3D MR volumes, with mean symmetric point-to-mesh error of 3.00 ± 1.58 mm and average computation time of 8 s. Through quantitative evaluation of 6 CoA patients, good agreement between computed blood pressure drop and catheter measurements is shown: average differences are 2.38 ± 0.82 mm Hg (pre-), 1.10 ± 0.63 mm Hg (postoperative), and 4.99 ± 3.00 mm Hg (virtual stenting), respectively. CONCLUSIONS The complete workflow is realized in a fast, mostly-automated system that is integrable in the clinical setting. To the best of our knowledge, this is the first time that three different settings (preoperative--severity assessment, poststenting--follow-up, and virtual stenting--treatment outcome prediction) of CoA are investigated on multiple subjects. 
We believe that, given wider clinical validation, our noninvasive in-silico method could in the future replace invasive pressure catheterization for CoA.
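As an illustrative aside (not part of the cited work, which personalizes a full CFD model): the crude textbook alternative for estimating a pressure drop across a stenosis is the simplified Bernoulli relation, ΔP ≈ 4v², with v the peak jet velocity in m/s and ΔP in mmHg. The 3 m/s jet velocity below is an invented example value.

```python
def bernoulli_dp(peak_velocity_m_s: float) -> float:
    """Simplified Bernoulli estimate of the pressure drop across a
    stenosis: dP [mmHg] ~ 4 * v^2, with v the peak jet velocity [m/s]."""
    return 4.0 * peak_velocity_m_s ** 2

print(bernoulli_dp(3.0))  # a 3 m/s jet corresponds to ~36 mmHg
```

This one-liner ignores viscous losses and geometry, which is precisely what the paper's CFD workflow is meant to capture.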
Affiliation(s)
- Kristóf Ralovich
- Siemens AG, Imaging and Computer Vision, San-Carlos-Strasse 7, 91058 Erlangen, Germany and Technical University of Munich, Boltzmannstrasse 3, Munich 85748, Germany
- Lucian Itu
- Siemens S.r.l., Imaging and Computer Vision, B-dul Eroilor nr. 5, 500007 Brasov, Romania and Transilvania University of Brasov, B-dul Eroilor nr. 29, 500036 Brasov, Romania
- Dime Vitanovski
- Siemens AG, Imaging and Computer Vision, San-Carlos-Strasse 7, 91058 Erlangen, Germany and Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nuremberg, Martensstrasse 3, 91058 Erlangen, Germany
- Puneet Sharma
- Siemens Corporation, Imaging and Computer Vision, 755 College Road East, Princeton, New Jersey 08540
- Razvan Ionasec
- Siemens Corporation, Imaging and Computer Vision, 755 College Road East, Princeton, New Jersey 08540
- Viorel Mihalef
- Siemens Corporation, Imaging and Computer Vision, 755 College Road East, Princeton, New Jersey 08540
- Waldemar Krawtschuk
- Siemens AG, Imaging and Computer Vision, San-Carlos-Strasse 7, 91058 Erlangen, Germany
- Yefeng Zheng
- Siemens Corporation, Imaging and Computer Vision, 755 College Road East, Princeton, New Jersey 08540
- Allen Everett
- The Johns Hopkins Hospital, 600 North Wolfe Street, Baltimore, Maryland 21287
- Benedetta Leonardi
- Ospedale Pediatrico Bambino Gesù, Piazza Sant'Onofrio 4, 00165 Rome, Italy
- Richard Ringel
- The Johns Hopkins Hospital, 600 North Wolfe Street, Baltimore, Maryland 21287
- Nassir Navab
- Technical University of Munich, Boltzmannstrasse 3, Munich 85748, Germany
- Tobias Heimann
- Siemens AG, Imaging and Computer Vision, San-Carlos-Strasse 7, 91058 Erlangen, Germany
- Dorin Comaniciu
- Siemens Corporation, Imaging and Computer Vision, 755 College Road East, Princeton, New Jersey 08540
37
Ghesu FC, Georgescu B, Mansi T, Neumann D, Hornegger J, Comaniciu D. An Artificial Agent for Anatomical Landmark Detection in Medical Images. Lecture Notes in Computer Science 2016. [DOI: 10.1007/978-3-319-46726-9_27] [Citation(s) in RCA: 64] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/05/2022]
38
Tröbs M, Achenbach S, Röther J, Redel T, Scheuering M, Winneberger D, Klingenbeck K, Itu L, Passerini T, Kamen A, Sharma P, Comaniciu D, Schlundt C. Comparison of Fractional Flow Reserve Based on Computational Fluid Dynamics Modeling Using Coronary Angiographic Vessel Morphology Versus Invasively Measured Fractional Flow Reserve. Am J Cardiol 2016; 117:29-35. [PMID: 26596195 DOI: 10.1016/j.amjcard.2015.10.008] [Citation(s) in RCA: 42] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/04/2015] [Revised: 10/01/2015] [Accepted: 10/01/2015] [Indexed: 01/10/2023]
Abstract
Invasive fractional flow reserve (FFRinvasive), although gold standard to identify hemodynamically relevant coronary stenoses, is time consuming and potentially associated with complications. We developed and evaluated a new approach to determine lesion-specific FFR on the basis of coronary anatomy as visualized by invasive coronary angiography (FFRangio): 100 coronary lesions (50% to 90% diameter stenosis) in 73 patients (48 men, 25 women; mean age 67 ± 9 years) were studied. On the basis of coronary angiograms acquired at rest from 2 views at angulations at least 30° apart, a PC-based computational fluid dynamics modeling software used personalized boundary conditions determined from 3-dimensional reconstructed angiography, heart rate, and blood pressure to derive FFRangio. The results were compared with FFRinvasive. Interobserver variability was determined in a subset of 25 narrowings. Twenty-nine of 100 coronary lesions were hemodynamically significant (FFRinvasive ≤ 0.80). FFRangio identified these with an accuracy of 90%, sensitivity of 79%, specificity of 94%, positive predictive value of 85%, and negative predictive value of 92%. The area under the receiver operating characteristic curve was 0.93. Correlation between FFRinvasive (mean: 0.84 ± 0.11) and FFRangio (mean: 0.85 ± 0.12) was r = 0.85. Interobserver variability of FFRangio was low, with a correlation of r = 0.88. In conclusion, estimation of coronary FFR with PC-based computational fluid dynamics modeling on the basis of lesion morphology as determined by invasive angiography is possible with high diagnostic accuracy compared to invasive measurements.
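The reported operating characteristics can be sanity-checked from a 2 × 2 confusion matrix. The counts below (TP = 23, FP = 4, FN = 6, TN = 67) are our reconstruction from the stated prevalence (29/100 positive lesions), sensitivity, and specificity; they are not taken directly from the paper.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2 diagnostic-accuracy metrics."""
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

# Reconstructed counts for 100 lesions, 29 positive at FFRinvasive <= 0.80
m = diagnostic_metrics(tp=23, fp=4, fn=6, tn=67)
print({k: round(v, 2) for k, v in m.items()})
```

Rounded to two decimals, these counts reproduce the abstract's figures (accuracy 0.90, sensitivity 0.79, specificity 0.94, PPV 0.85, NPV 0.92), which is a useful internal-consistency check.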
39
Itu L, Sharma P, Georgescu B, Kamen A, Suciu C, Comaniciu D. Model based non-invasive estimation of PV loop from echocardiography. Annu Int Conf IEEE Eng Med Biol Soc 2014; 2014:6774-7. [PMID: 25571551 DOI: 10.1109/embc.2014.6945183] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
We introduce a model-based approach for the non-invasive estimation of patient-specific, left ventricular pressure-volume (PV) loops. A lumped parameter circulation model is used, composed of the pulmonary venous circulation, left atrium, left ventricle (LV) and the systemic circulation. A fully automated parameter estimation framework is introduced for model personalization, composed of two sequential steps: first, a series of parameters are computed directly, and, next, a fully automatic optimization-based calibration method is employed to iteratively estimate the values of the remaining parameters. The proposed methodology is first evaluated for three healthy volunteers: a perfect agreement is obtained between the computed quantities and the clinical measurements. Additionally, for an initial validation of the methodology, we computed the PV loop for a patient with mild aortic valve regurgitation and compared the results against the invasively determined quantities: there is a close agreement between the time-varying LV and aortic pressures, time-varying LV volumes, and PV loops.
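For readers unfamiliar with lumped parameter circulation models: they are built from small systems of ODEs relating pressure and flow. The fragment below is a minimal two-element Windkessel, far simpler than the paper's multi-compartment circuit; the resistance, compliance, and flow values are invented for illustration.

```python
def windkessel(flows, dt: float, R: float = 1.0, C: float = 1.0, p0: float = 0.0):
    """Two-element Windkessel: C * dP/dt = Q(t) - P/R, integrated
    with forward Euler. Returns the pressure trace."""
    p = p0
    pressures = []
    for q in flows:
        p += dt * (q - p / R) / C
        pressures.append(p)
    return pressures

# A constant inflow Q relaxes the pressure toward the steady state P = Q * R
ps = windkessel([2.0] * 20000, dt=0.001, R=1.5, C=0.8)
print(round(ps[-1], 3))
```

Personalization in the paper amounts to choosing such parameters (and their many-compartment generalizations) so that the model outputs match a patient's clinical measurements.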
40
Calmac L, Niculescu R, Badila E, Weiss E, Zamfir D, Itu L, Lazar L, Carp M, Itu A, Suciu C, Passerini T, Sharma P, Georgescu B, Comaniciu D. TCT-40 Image-Based Computation of Instantaneous Wave-free Ratio from Routine Coronary Angiography - Initial Validation by Invasively Measured Coronary Pressures. J Am Coll Cardiol 2015. [DOI: 10.1016/j.jacc.2015.08.087] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
41

42
Kayvanpour E, Mansi T, Sedaghat-Hamedani F, Amr A, Neumann D, Georgescu B, Seegerer P, Kamen A, Haas J, Frese KS, Irawati M, Wirsz E, King V, Buss S, Mereles D, Zitron E, Keller A, Katus HA, Comaniciu D, Meder B. Towards Personalized Cardiology: Multi-Scale Modeling of the Failing Heart. PLoS One 2015; 10:e0134869. [PMID: 26230546 PMCID: PMC4521877 DOI: 10.1371/journal.pone.0134869] [Citation(s) in RCA: 45] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2015] [Accepted: 07/14/2015] [Indexed: 01/14/2023] Open
Abstract
BACKGROUND Despite modern pharmacotherapy and advanced implantable cardiac devices, overall prognosis and quality of life of heart failure (HF) patients remain poor. This is in part due to insufficient patient stratification and lack of individualized therapy planning, resulting in less effective treatments and a significant number of non-responders. METHODS AND RESULTS State-of-the-art clinical phenotyping was acquired, including magnetic resonance imaging (MRI) and biomarker assessment. An individualized, multi-scale model of heart function covering cardiac anatomy, electrophysiology, biomechanics and hemodynamics was estimated using a robust framework. The model was computed on n = 46 HF patients, showing for the first time that advanced multi-scale models can be fitted consistently on large cohorts. Novel multi-scale parameters derived from the model of all cases were analyzed and compared against clinical parameters, cardiac imaging, lab tests and survival scores to evaluate the explicative power of the model and its potential for better patient stratification. Model validation was pursued by comparing clinical parameters that were not used in the fitting process against model parameters. CONCLUSION This paper illustrates how advanced multi-scale models can complement cardiovascular imaging and how they could be applied in patient care. Based on the obtained results, it becomes conceivable that, after thorough validation, such heart failure models could be applied for patient management and therapy planning in the future, as we illustrate in one patient of our cohort who received CRT-D implantation.
Affiliation(s)
- Elham Kayvanpour
- Department of Medicine III, University of Heidelberg, Heidelberg, Germany
- DZHK (German Centre for Cardiovascular Research), Heidelberg, Germany
- Tommaso Mansi
- Siemens Corporation, Corporate Technology, Imaging and Computer Vision, Princeton, New Jersey, United States of America
- Farbod Sedaghat-Hamedani
- Department of Medicine III, University of Heidelberg, Heidelberg, Germany
- DZHK (German Centre for Cardiovascular Research), Heidelberg, Germany
- Ali Amr
- Department of Medicine III, University of Heidelberg, Heidelberg, Germany
- DZHK (German Centre for Cardiovascular Research), Heidelberg, Germany
- Dominik Neumann
- Siemens Corporation, Corporate Technology, Imaging and Computer Vision, Princeton, New Jersey, United States of America
- Bogdan Georgescu
- Siemens Corporation, Corporate Technology, Imaging and Computer Vision, Princeton, New Jersey, United States of America
- Philipp Seegerer
- Siemens Corporation, Corporate Technology, Imaging and Computer Vision, Princeton, New Jersey, United States of America
- Ali Kamen
- Siemens Corporation, Corporate Technology, Imaging and Computer Vision, Princeton, New Jersey, United States of America
- Jan Haas
- Department of Medicine III, University of Heidelberg, Heidelberg, Germany
- DZHK (German Centre for Cardiovascular Research), Heidelberg, Germany
- Karen S. Frese
- Department of Medicine III, University of Heidelberg, Heidelberg, Germany
- DZHK (German Centre for Cardiovascular Research), Heidelberg, Germany
- Maria Irawati
- Department of Medicine III, University of Heidelberg, Heidelberg, Germany
- Emil Wirsz
- Siemens AG, Corporate Technology, Erlangen, Germany
- Vanessa King
- Siemens Corporation, Corporate Technology, Sensor Technologies, Princeton, New Jersey, United States of America
- Sebastian Buss
- Department of Medicine III, University of Heidelberg, Heidelberg, Germany
- Derliz Mereles
- Department of Medicine III, University of Heidelberg, Heidelberg, Germany
- Edgar Zitron
- Department of Medicine III, University of Heidelberg, Heidelberg, Germany
- Andreas Keller
- Biomarker Discovery Center Heidelberg, Heidelberg, Germany
- Department of Human Genetics, Saarland University, Homburg, Germany
- Hugo A. Katus
- Department of Medicine III, University of Heidelberg, Heidelberg, Germany
- DZHK (German Centre for Cardiovascular Research), Heidelberg, Germany
- Klaus Tschira Institute for Computational Cardiology, Heidelberg, Germany
- Dorin Comaniciu
- Siemens Corporation, Corporate Technology, Imaging and Computer Vision, Princeton, New Jersey, United States of America
- Benjamin Meder
- Department of Medicine III, University of Heidelberg, Heidelberg, Germany
- DZHK (German Centre for Cardiovascular Research), Heidelberg, Germany
- Klaus Tschira Institute for Computational Cardiology, Heidelberg, Germany
43
Itu L, Sharma P, Kamen A, Suciu C, Comaniciu D. A novel coupling algorithm for computing blood flow in viscoelastic arterial models. Annu Int Conf IEEE Eng Med Biol Soc 2013; 2013:727-30. [PMID: 24109790 DOI: 10.1109/embc.2013.6609603] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
We propose a novel coupling algorithm, based on the operator-splitting scheme, which implements the viscoelastic wall law at the coupling nodes of the vessels. Two different viscoelastic models are used (V1 and V2), leading to five different computational setups: elastic wall law, model V1 applied at interior and coupling grid points, model V1 applied only at the interior grid points (V1-int), model V2 applied at interior and coupling grid points, model V2 applied only at the interior grid points (V2-int). These have been tested with two arterial configurations: (i) single artery, and (ii) complete arterial tree. Models V1-int and V2-int lead to incorrect conclusions and to errors which can be of the same order as, and are at least 1/5 of, the difference between the results with the elastic and the viscoelastic laws. Both test cases demonstrate the importance of modeling the viscous component of the pressure-area relationship at all grid points, including the coupling points between vessels or at the inlet/outlet of the model.
44
Audigier C, Mansi T, Delingette H, Rapaka S, Mihalef V, Carnegie D, Boctor E, Choti M, Kamen A, Ayache N, Comaniciu D. Efficient Lattice Boltzmann Solver for Patient-Specific Radiofrequency Ablation of Hepatic Tumors. IEEE Trans Med Imaging 2015; 34:1576-1589. [PMID: 30132760 DOI: 10.1109/tmi.2015.2406575] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Radiofrequency ablation (RFA) is an established treatment for liver cancer when resection is not possible. Yet, its optimal delivery is challenged by the presence of large blood vessels and the time-varying thermal conductivity of biological tissue. Incomplete treatment and an increased risk of recurrence are therefore common. A tool that would enable the accurate planning of RFA is hence necessary. This manuscript describes a new method to compute the extent of ablation required based on the Lattice Boltzmann Method (LBM) and patient-specific, pre-operative images. A detailed anatomical model of the liver is obtained from volumetric images. Then a computational model of heat diffusion, cellular necrosis, and blood flow through the vessels and liver is employed to compute the extent of ablated tissue given the probe location, ablation duration and biological parameters. The model was verified against an analytical solution, showing good fidelity. We also evaluated the predictive power of the proposed framework on ten patients who underwent RFA, for whom pre- and post-operative images were available. Comparisons between the computed ablation extent and ground truth, as observed in postoperative images, were promising (DICE index: 42%, sensitivity: 67%, positive predictive value: 38%). The importance of considering liver perfusion while simulating electrical-heating ablation was also highlighted. Implemented on graphics processing units (GPU), our method simulates 1 minute of ablation in 1.14 minutes, allowing near real-time computation.
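The Lattice Boltzmann Method underlying the solver follows a stream-and-collide pattern. The toy kernel below is a 1-D diffusion LBM (D1Q2 stencil with BGK collision and periodic boundaries), vastly simpler than the paper's GPU bioheat solver; it only illustrates the pattern. In lattice units this stencil yields a diffusivity D = τ − ½.

```python
import numpy as np

def lbm_diffuse(rho0, tau: float = 1.0, steps: int = 100):
    """D1Q2 lattice-Boltzmann diffusion (BGK collision, periodic BCs)."""
    rho0 = np.asarray(rho0, dtype=float)
    # two populations: index 0 moves right (+1), index 1 moves left (-1)
    f = np.stack([0.5 * rho0, 0.5 * rho0])
    for _ in range(steps):
        rho = f.sum(axis=0)
        feq = 0.5 * rho              # equilibrium: w_i * rho with w_i = 1/2
        f += (feq - f) / tau         # BGK collision (relax toward equilibrium)
        f[0] = np.roll(f[0], 1)      # stream right-movers
        f[1] = np.roll(f[1], -1)     # stream left-movers
    return f.sum(axis=0)

rho = np.zeros(64)
rho[32] = 1.0                        # initial point "heat" source
out = lbm_diffuse(rho, tau=1.0, steps=200)
print(round(out.sum(), 6))           # total mass is conserved
```

Because collision and streaming are purely local per node, this scheme parallelizes naturally, which is what makes GPU implementations like the paper's near real-time.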
45
Seegerer P, Mansi T, Jolly MP, Neumann D, Georgescu B, Kamen A, Kayvanpour E, Amr A, Sedaghat-Hamedani F, Haas J, Katus H, Meder B, Comaniciu D. Estimation of Regional Electrical Properties of the Heart from 12-Lead ECG and Images. Lecture Notes in Computer Science 2015. [DOI: 10.1007/978-3-319-14678-2_21] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/21/2023]
46
Schlundt C, Redel T, Scheuering M, Groke D, Klingenbeck K, Itu L, Sharma P, Kamen A, Comaniciu D, Achenbach S. TCT-334 Model-Based Determination of Fractional Flow Reserve Based on Coronary Angiography – Initial Validation by Invasively Measured FFR. J Am Coll Cardiol 2014. [DOI: 10.1016/j.jacc.2014.07.380] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
47
Sofka M, Zhang J, Good S, Zhou SK, Comaniciu D. Automatic detection and measurement of structures in fetal head ultrasound volumes using sequential estimation and Integrated Detection Network (IDN). IEEE Trans Med Imaging 2014; 33:1054-70. [PMID: 24770911 DOI: 10.1109/tmi.2014.2301936] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/14/2023]
Abstract
Routine ultrasound exam in the second and third trimesters of pregnancy involves manually measuring fetal head and brain structures in 2-D scans. The procedure requires a sonographer to find the standardized visualization planes with a probe and manually place measurement calipers on the structures of interest. The process is tedious, time consuming, and introduces user variability into the measurements. This paper proposes an automatic fetal head and brain (AFHB) system for automatically measuring anatomical structures from 3-D ultrasound volumes. The system searches the 3-D volume in a hierarchy of resolutions and by focusing on regions that are likely to contain the measured anatomy. The output is a standardized visualization of the plane with correct orientation and centering as well as the biometric measurement of the anatomy. The system is based on a novel framework for detecting multiple structures in 3-D volumes. Since a joint model is difficult to obtain in most practical situations, the structures are detected in a sequence, one by one. The detection relies on Sequential Estimation techniques, frequently applied to visual tracking. The interdependence of structure poses and strong prior information embedded in our domain yields faster and more accurate results than detecting the objects individually. The posterior distribution of the structure pose is approximated at each step by sequential Monte Carlo. The samples are propagated within the sequence across multiple structures and hierarchical levels. The probabilistic model helps solve many challenges present in ultrasound images of the fetus, such as speckle noise, signal drop-out, shadows caused by bones, and appearance variations caused by differences in fetal gestational age. This is possible by discriminative learning on an extensive database of scans comprising more than two thousand volumes and more than thirteen thousand annotations. The average difference between ground truth and automatic measurements is below 2 mm with a running time of 6.9 s (GPU) or 14.7 s (CPU). The accuracy of the AFHB system is within inter-user variability and the running time is fast, which meets the requirements for clinical use.
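Sequential Monte Carlo, as used in the sequential-estimation framework above, is at its core a particle filter. Below is a minimal 1-D bootstrap version tracking a noisy, near-constant signal; it is not the paper's multi-structure detector, and all parameter values are invented.

```python
import math
import random

def particle_filter(observations, n: int = 500, proc_sd: float = 0.1,
                    obs_sd: float = 0.5):
    """Bootstrap particle filter for a 1-D random-walk state."""
    particles = [random.gauss(0.0, 5.0) for _ in range(n)]  # broad prior
    estimates = []
    for z in observations:
        # propagate each particle through the random-walk motion model
        particles = [p + random.gauss(0.0, proc_sd) for p in particles]
        # weight particles by the Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((z - p) / obs_sd) ** 2) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # posterior-mean estimate, then multinomial resampling
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        particles = random.choices(particles, weights=weights, k=n)
    return estimates

random.seed(0)
obs = [5.0 + random.gauss(0.0, 0.5) for _ in range(50)]  # noisy signal near 5
est = particle_filter(obs)
print(round(est[-1], 1))
```

In the paper this idea is extended so that samples carry full structure poses and are propagated across structures and resolution levels rather than time steps.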
48
Zettinig O, Mansi T, Neumann D, Georgescu B, Rapaka S, Seegerer P, Kayvanpour E, Sedaghat-Hamedani F, Amr A, Haas J, Steen H, Katus H, Meder B, Navab N, Kamen A, Comaniciu D. Data-driven estimation of cardiac electrical diffusivity from 12-lead ECG signals. Med Image Anal 2014; 18:1361-76. [PMID: 24857832 DOI: 10.1016/j.media.2014.04.011] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2014] [Revised: 03/17/2014] [Accepted: 04/10/2014] [Indexed: 11/25/2022]
Abstract
Diagnosis and treatment of dilated cardiomyopathy (DCM) is challenging due to a large variety of causes and disease stages. Computational models of cardiac electrophysiology (EP) can be used to improve the assessment and prognosis of DCM, plan therapies and predict their outcome, but require personalization. In this work, we present a data-driven approach to estimate the electrical diffusivity parameter of an EP model from standard 12-lead electrocardiograms (ECG). An efficient forward model based on a mono-domain, phenomenological Lattice-Boltzmann model of cardiac EP, and a boundary element-based mapping of potentials to the body surface is employed. The electrical diffusivity of myocardium, left ventricle and right ventricle endocardium is then estimated using polynomial regression which takes as input the QRS duration and electrical axis. After validating the forward model, we computed 9500 EP simulations on 19 different DCM patients in just under three seconds each to learn the regression model. Using this database, we quantify the intrinsic uncertainty of electrical diffusion for given ECG features and show in a leave-one-patient-out cross-validation that the regression method is able to predict myocardium diffusion within the uncertainty range. Finally, our approach is tested on the 19 cases using their clinical ECG. 84% of them could be personalized using our method, yielding mean prediction errors of 18.7 ms for the QRS duration and 6.5° for the electrical axis, both values being within clinical acceptability. By providing an estimate of diffusion parameters from readily available clinical data, our data-driven approach could therefore constitute a first calibration step toward a more complete personalization of cardiac EP.
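The regression step can be shown in miniature: fit a polynomial map from the two ECG-derived features (QRS duration, electrical axis) to a diffusivity value by ordinary least squares. Everything below is synthetic; the ground-truth coefficients and feature ranges are invented, not from the paper.

```python
import numpy as np

def poly_features(qrs, axis):
    """Degree-2 polynomial features of the two ECG-derived inputs."""
    return np.column_stack(
        [np.ones_like(qrs), qrs, axis, qrs**2, axis**2, qrs * axis]
    )

rng = np.random.default_rng(0)
qrs = rng.uniform(80.0, 200.0, 200)    # QRS duration in ms (synthetic)
axis = rng.uniform(-30.0, 90.0, 200)   # electrical axis in degrees (synthetic)
true_w = np.array([0.5, 0.01, -0.002, 1e-5, 2e-5, -3e-6])  # invented coefficients
diffusivity = poly_features(qrs, axis) @ true_w

# ordinary least squares recovers the mapping on noiseless synthetic data
w_hat, *_ = np.linalg.lstsq(poly_features(qrs, axis), diffusivity, rcond=None)
print(bool(np.allclose(poly_features(qrs, axis) @ w_hat, diffusivity)))
```

In the paper the training pairs come from 9500 forward EP simulations rather than a synthetic polynomial, but the fitting machinery is the same.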
Affiliation(s)
- Oliver Zettinig
- Siemens Corporate Technology, Imaging and Computer Vision, Princeton, NJ, USA; Computer Aided Medical Procedures, Technische Universität München, Germany
- Tommaso Mansi
- Siemens Corporate Technology, Imaging and Computer Vision, Princeton, NJ, USA
- Dominik Neumann
- Siemens Corporate Technology, Imaging and Computer Vision, Princeton, NJ, USA; Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
- Bogdan Georgescu
- Siemens Corporate Technology, Imaging and Computer Vision, Princeton, NJ, USA
- Saikiran Rapaka
- Siemens Corporate Technology, Imaging and Computer Vision, Princeton, NJ, USA
- Philipp Seegerer
- Siemens Corporate Technology, Imaging and Computer Vision, Princeton, NJ, USA; Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
- Ali Amr
- Heidelberg University Hospital, Heidelberg, Germany
- Jan Haas
- Heidelberg University Hospital, Heidelberg, Germany
- Hugo Katus
- Heidelberg University Hospital, Heidelberg, Germany
- Nassir Navab
- Computer Aided Medical Procedures, Technische Universität München, Germany
- Ali Kamen
- Siemens Corporate Technology, Imaging and Computer Vision, Princeton, NJ, USA
- Dorin Comaniciu
- Siemens Corporate Technology, Imaging and Computer Vision, Princeton, NJ, USA
49
John M, Comaniciu D. Multi-part modeling and segmentation of left atrium in C-arm CT for image-guided ablation of atrial fibrillation. IEEE Trans Med Imaging 2014; 33:318-331. [PMID: 24108749 DOI: 10.1109/tmi.2013.2284382] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
As a minimally invasive surgery to treat atrial fibrillation (AF), catheter-based ablation uses high radio-frequency energy to eliminate potential sources of abnormal electrical events, especially around the ostia of the pulmonary veins (PV). Fusing a patient-specific left atrium (LA) model (including LA chamber, appendage, and PVs) with electro-anatomical maps or overlaying the model onto 2-D real-time fluoroscopic images provides valuable visual guidance during the intervention. In this work, we present a fully automatic LA segmentation system on nongated C-arm computed tomography (C-arm CT) data, where thin boundaries between the LA and surrounding tissues are often blurred due to cardiac motion artifacts. To avoid segmentation leakage, a shape prior should be exploited to guide the segmentation. A single holistic shape model is often not accurate enough to represent the whole LA shape population under anatomical variations, e.g., left common PVs vs. separate left PVs. Instead, a part-based LA model is proposed, which includes the chamber, appendage, four major PVs, and right middle PVs. Each part is a much simpler anatomical structure compared to the holistic one and can be segmented using a model-based approach (except the right middle PVs). After segmenting the LA parts, the gaps and overlaps among the parts are resolved and segmentation of the ostia region is further refined. As a common anatomical variation, some patients may contain extra right middle PVs, which are segmented using a graph cuts algorithm under the constraints from the already extracted major right PVs. Our approach is computationally efficient, taking about 2.6 s to process a volume with 256 × 256 × 245 voxels. Experiments on 687 C-arm CT datasets demonstrate its robustness and state-of-the-art segmentation accuracy.
50
Ecabert O, Chen T, Wels M, Rieber J, Ostermeier M, Comaniciu D. Image-based Co-Registration of Angiography and Intravascular Ultrasound Images. IEEE Trans Med Imaging 2013; 32:2238-2249. [PMID: 24001984 DOI: 10.1109/tmi.2013.2279754] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
In image-guided cardiac interventions, X-ray imaging and intravascular ultrasound (IVUS) imaging are two often used modalities. Interventional X-ray images, including angiography and fluoroscopy, are used to assess the lumen of the coronary arteries and to monitor devices in real time. IVUS provides rich intravascular information, such as vessel wall composition, plaque, and stent expansion, but lacks spatial orientation. Since the two imaging modalities are complementary to each other, it is highly desirable to co-register them to provide a comprehensive picture of the coronaries for interventional cardiologists. In this paper, we present a solution for co-registering 2-D angiography and IVUS through image-based device tracking. The presented framework includes learning-based vessel and device detection, model-based tracking, and geodesic distance-based registration. The system first interactively detects the coronary branch under investigation in a reference angiography image. During the pullback of the IVUS transducer, the system acquires both ECG-triggered fluoroscopy and IVUS images, and automatically tracks the position of the medical devices in fluoroscopy. The localization of the tracked IVUS transducer and guiding catheter tip is used to associate an IVUS imaging plane with a corresponding location on the vessel branch under investigation. The presented image-based solution can be conveniently integrated into the existing cardiology workflow. The system is validated on a set of clinical cases, and achieves good accuracy and robustness.