51
Assessing the Accuracy of an Artificial Intelligence-Based Segmentation Algorithm for the Thoracic Aorta in Computed Tomography Applications. Diagnostics (Basel) 2022; 12:diagnostics12081790. [PMID: 35892500] [PMCID: PMC9330011] [DOI: 10.3390/diagnostics12081790]
Abstract
The aim was to evaluate the accuracy of a prototypical artificial intelligence-based algorithm for automated segmentation and diameter measurement of the thoracic aorta (TA) using CT. One hundred twenty-two patients who underwent dual-source CT were retrospectively included. Ninety-three of these patients had received intravenous iodinated contrast. Images were evaluated using the prototypical algorithm, which segments the TA and determines the corresponding diameters at predefined anatomical locations based on the American Heart Association guidelines. The reference standard was established by two radiologists individually in a blinded, randomized fashion. Equivalence was tested, and inter-reader agreement was assessed using the intra-class correlation coefficient (ICC). In total, 99.2% of the parameters measured by the prototype were assessable; in nine patients, the prototype failed to determine one diameter along the vessel. Measurements along the TA did not differ between the algorithm and the readers (p > 0.05), establishing equivalence. Agreement between the algorithm and the readers (ICC ≥ 0.961; 95% CI: 0.940–0.974) and between the readers themselves (ICC ≥ 0.879; 95% CI: 0.818–0.92) was excellent. The evaluated prototypical AI-based algorithm accurately measured TA diameters at each region of interest, independent of contrast use or pathology. This indicates that the algorithm has substantial potential as a valuable tool in the rapid clinical evaluation of aortic pathology.
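The agreement statistic in this entry, the intra-class correlation, can be computed directly from a subjects-by-raters matrix of diameter measurements. Below is a minimal sketch of the ICC(2,1) form (two-way random effects, absolute agreement, single rater); the abstract does not specify which variant the study used, and the diameter values here are invented for illustration.

```python
import numpy as np

def icc2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    Y: (n subjects x k raters) matrix of measurements.
    """
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)
    col_means = Y.mean(axis=0)
    # Mean squares from the two-way ANOVA decomposition.
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    ss_err = (np.sum((Y - grand) ** 2)
              - k * np.sum((row_means - grand) ** 2)
              - n * np.sum((col_means - grand) ** 2))
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical aortic diameters (mm): algorithm vs. one reader.
diam = np.array([[31.2, 31.0], [28.5, 28.9], [40.1, 39.8],
                 [24.7, 24.5], [35.3, 35.6]])
print(round(icc2_1(diam), 3))  # close agreement -> ICC near 1
```

Because the between-subject spread (roughly 24–40 mm) dwarfs the per-pair disagreement here, the ICC lands near 1, mirroring the ≥ 0.961 agreement reported above.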
52
Chen R, Ma Y, Chen N, Liu L, Cui Z, Lin Y, Wang W. Structure-Aware Long Short-Term Memory Network for 3D Cephalometric Landmark Detection. IEEE Transactions on Medical Imaging 2022; 41:1791-1801. [PMID: 35130151] [DOI: 10.1109/tmi.2022.3149281]
Abstract
Detecting 3D landmarks on cone-beam computed tomography (CBCT) is crucial to assessing and quantifying anatomical abnormalities in 3D cephalometric analysis. However, current methods are time-consuming and suffer from large biases in landmark localization, leading to unreliable diagnostic results. In this work, we propose a novel Structure-Aware Long Short-Term Memory framework (SA-LSTM) for efficient and accurate 3D landmark detection. To reduce the computational burden, SA-LSTM is designed in two stages: it first locates coarse landmarks via heatmap regression on a down-sampled CBCT volume and then progressively refines them by attentive offset regression using multi-resolution cropped patches. To boost accuracy, SA-LSTM captures global-local dependence among the cropped patches via self-attention. Specifically, a novel graph attention module implicitly encodes the landmarks' global structure to rationalize the predicted positions, and a novel attention-gated module recursively filters irrelevant local features and retains high-confidence local predictions for aggregating the final result. Experiments conducted on an in-house dataset and a public dataset show that our method outperforms state-of-the-art methods, achieving 1.64 mm and 2.37 mm average errors, respectively. Furthermore, our method is very efficient, taking only 0.5 seconds to infer a whole CBCT volume of resolution 768×768×576.
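The SA-LSTM internals (self-attention, gated aggregation) are beyond a short snippet, but the two-stage pattern the abstract describes — a coarse landmark from heatmap regression on a down-sampled volume, refined by offset regression — can be sketched as plain array operations. The heatmap and offset field below are hypothetical stand-ins for network outputs, not the paper's model.

```python
import numpy as np

def coarse_to_fine(heatmap_low, offsets, scale):
    """Two-stage landmark estimate: coarse heatmap peak, then offset refinement.

    heatmap_low: low-resolution 3D heatmap for one landmark.
    offsets:     per-voxel 3-vector refinement field on the same grid (last dim 3).
    scale:       down-sampling factor between low-res and full-res grids.
    """
    idx = np.unravel_index(np.argmax(heatmap_low), heatmap_low.shape)
    coarse = np.array(idx, dtype=float) * scale  # map peak to full resolution
    return coarse + offsets[idx]                 # sub-voxel offset correction

# Hypothetical 4x4x4 heatmap peaking at voxel (1, 2, 3), down-sampled by 4.
hm = np.zeros((4, 4, 4)); hm[1, 2, 3] = 1.0
off = np.zeros((4, 4, 4, 3)); off[1, 2, 3] = [0.5, -0.5, 0.25]
print(coarse_to_fine(hm, off, scale=4))
```

The coarse stage alone can only resolve the landmark to the down-sampled grid (here, multiples of 4 voxels); the learned offset supplies the sub-grid correction, which is why the two-stage design keeps both accuracy and a small memory footprint.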
53
Gibson E, Georgescu B, Ceccaldi P, Trigan PH, Yoo Y, Das J, Re TJ, Rs V, Balachandran A, Eibenberger E, Chekkoury A, Brehm B, Bodanapally UK, Nicolaou S, Sanelli PC, Schroeppel TJ, Flohr T, Comaniciu D, Lui YW. Artificial Intelligence with Statistical Confidence Scores for Detection of Acute or Subacute Hemorrhage on Noncontrast CT Head Scans. Radiol Artif Intell 2022; 4:e210115. [PMID: 35652116] [DOI: 10.1148/ryai.210115]
Abstract
Purpose To present a method that automatically detects, subtypes, and locates acute or subacute intracranial hemorrhage (ICH) on noncontrast CT (NCCT) head scans; generates detection confidence scores to identify high-confidence data subsets with higher accuracy; and improves radiology worklist prioritization. Such scores may enable clinicians to better use artificial intelligence (AI) tools. Materials and Methods This retrospective study included 46 057 studies from seven "internal" centers for development (training, architecture selection, hyperparameter tuning, and operating-point calibration; n = 25 946) and evaluation (n = 2947) and three "external" centers for calibration (n = 400) and evaluation (n = 16 764). Internal centers contributed developmental data, whereas external centers did not. Deep neural networks predicted the presence of ICH and subtypes (intraparenchymal, intraventricular, subarachnoid, subdural, and/or epidural hemorrhage) and segmentations per case. Two ICH confidence scores are discussed: a calibrated classifier entropy score and a Dempster-Shafer score. Evaluation was completed by using receiver operating characteristic curve analysis and report turnaround time (RTAT) modeling on the evaluation set and on confidence score-defined subsets using bootstrapping. Results The areas under the receiver operating characteristic curve for ICH were 0.97 (0.97, 0.98) and 0.95 (0.94, 0.95) on internal and external center data, respectively. On 80% of the data stratified by calibrated classifier and Dempster-Shafer scores, the system improved the Youden indexes, increasing them from 0.84 to 0.93 (calibrated classifier) and from 0.84 to 0.92 (Dempster-Shafer) for internal centers and increasing them from 0.78 to 0.88 (calibrated classifier) and from 0.78 to 0.89 (Dempster-Shafer) for external centers (P < .001). 
Models estimated shorter RTAT for AI-prioritized worklists with confidence measures than for AI-prioritized worklists without them, shortening RTAT by 27% (calibrated classifier) and 27% (Dempster-Shafer) for internal centers and by 25% (calibrated classifier) and 27% (Dempster-Shafer) for external centers (P < .001). Conclusion AI that provided statistical confidence measures for ICH detection on NCCT scans reliably detected and subtyped hemorrhages, identified high-confidence predictions, and improved worklist prioritization in simulation. Keywords: CT, Head/Neck, Hemorrhage, Convolutional Neural Network (CNN). Supplemental material is available for this article. © RSNA, 2022.
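The Youden index reported above is sensitivity + specificity − 1, read off the ROC curve; stratifying cases by a confidence score and re-evaluating on the retained subset is what lifts it. A small sketch on synthetic detector scores (scikit-learn assumed available; the data are illustrative, not the study's):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Synthetic detector scores: positives (ICH) score higher on average.
y_true = np.concatenate([np.ones(500), np.zeros(500)])
scores = np.concatenate([rng.normal(2.0, 1.0, 500),
                         rng.normal(0.0, 1.0, 500)])

fpr, tpr, thresholds = roc_curve(y_true, scores)
youden = tpr - fpr                 # J = sensitivity + specificity - 1
best = np.argmax(youden)           # operating point maximizing J
print(f"AUC      = {roc_auc_score(y_true, scores):.3f}")
print(f"Youden J = {youden[best]:.3f} at threshold {thresholds[best]:.3f}")
```

Repeating this computation on only the cases whose calibrated-entropy or Dempster-Shafer confidence clears a cutoff reproduces the kind of subset analysis the study describes (e.g., J on the best 80% of the data).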
Affiliation(s)
- Eli Gibson, Bogdan Georgescu, Pascal Ceccaldi, Pierre-Hugo Trigan, Youngjin Yoo, Jyotipriya Das, Thomas J Re, and Dorin Comaniciu - Department of Digital Technology and Innovation, Siemens Healthineers, 755 College Rd E, Princeton, NJ 08540
- Vishwanath Rs and Abishek Balachandran - Department of Digital Technology and Innovation, Siemens Healthineers, Bangalore, India
- Eva Eibenberger, Andrei Chekkoury, Barbara Brehm, and Thomas Flohr - Department of Computed Tomography, Siemens Healthineers, Forchheim, Germany
- Uttam K Bodanapally - Department of Radiology, University of Maryland Medical Center, Baltimore, Md
- Savvas Nicolaou - Department of Radiology, Vancouver General Hospital, Vancouver, Canada
- Pina C Sanelli - Department of Radiology, Northwell Health, New York, NY
- Thomas J Schroeppel - Department of Surgery, UCHealth Memorial Hospital, Colorado Springs, Colo
- Yvonne W Lui - Department of Radiology, NYU Langone Health, New York University School of Medicine, New York, NY
54
Iyer S, Blair A, Dawes L, Moses D, White C, Sowmya A. Supervised and semi-supervised 3D organ localisation in CT images combining reinforcement learning with imitation learning. Biomed Phys Eng Express 2022; 8. [PMID: 35385835] [DOI: 10.1088/2057-1976/ac64c5]
Abstract
Computer-aided diagnosis often requires analysis of a region of interest (ROI) within a radiology scan, where the ROI may be an organ or a sub-organ. Although deep learning algorithms can outperform other methods, they rely on the availability of large amounts of annotated data. Motivated by the need to address this limitation, an approach to localisation and detection of multiple organs based on supervised and semi-supervised learning is presented here. It draws upon previous work by the authors on localising the thoracic and lumbar spine region in CT images. The method generates six bounding boxes of organs of interest, which are then fused into a single bounding box. The results of experiments on localisation of the spleen and the left and right kidneys in CT images using supervised and semi-supervised learning (SSL) demonstrate the ability to address data limitations with a much smaller data set and fewer annotations than other state-of-the-art methods. SSL performance was evaluated using three different mixes of labelled and unlabelled data (30:70, 35:65, and 40:60) for the lumbar spine, spleen, and left and right kidneys, respectively. The results indicate that SSL provides a workable alternative, especially in medical imaging, where annotated data are difficult to obtain.
Affiliation(s)
- Sankaran Iyer
- School of Computer Science and Engineering, The University of New South Wales, Australia
- Alan Blair
- School of Computer Science and Engineering, The University of New South Wales, Australia
- Laughlin Dawes
- Department of Medical Imaging, Prince of Wales Hospital, NSW, Australia
- Daniel Moses
- Department of Medical Imaging, Prince of Wales Hospital, NSW, Australia
- Christopher White
- Department of Endocrinology and Metabolism, Prince of Wales Hospital, NSW, Australia
- Arcot Sowmya
- School of Computer Science and Engineering, The University of New South Wales, Australia
55
Exploration for Countering the Episodic Memory. Computational Intelligence and Neuroscience 2022; 2022:7286186. [PMID: 35419049] [PMCID: PMC8995543] [DOI: 10.1155/2022/7286186]
Abstract
Reinforcement learning is a prominent computational approach for goal-directed learning and decision making, and exploration plays an important role in improving the agent's performance. In low-dimensional Markov decision processes, tabular reinforcement learning combined with count-based exploration works well because the states can be exhaustively enumerated. Count-based exploration strategies are generally considered inefficient in high-dimensional Markov decision processes (high-dimensional state spaces, continuous action spaces, or both), since in deep reinforcement learning most states occur only once. Exploration methods widely applied in deep reinforcement learning therefore rely on heuristic intrinsic motivation to explore unseen states or unreached parts of a state. The episodic memory module simulates the role of the hippocampus in the human brain: it is precisely a memory of past experience, so it is natural to use it to count the situations the agent has encountered. In this article, we use the episodic memory module to estimate the number of times each state has been experienced and drive exploration to reduce the probability of encountering those states again, i.e., to counter the episodic memory. Experiments conducted on the OpenAI platform show that the counting accuracy of states is higher than that of the CTS model. The method also achieves good results when applied to high-dimensional object detection and tracking.
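The core idea — a memory of visited states doubling as a visit counter that pushes the agent away from remembered states — can be sketched as a count-based intrinsic bonus. The 1/√N bonus form below is a common convention in count-based exploration, not necessarily the paper's exact formula:

```python
import math
from collections import defaultdict

class EpisodicCounter:
    """Episodic-memory-backed state counter driving an exploration bonus."""

    def __init__(self, beta=0.1):
        self.beta = beta
        self.memory = defaultdict(int)  # state -> visit count (the "memory")

    def bonus(self, state):
        """Record a visit and return the intrinsic reward for it."""
        self.memory[state] += 1
        # Novel states earn the full bonus; remembered states earn less,
        # so maximizing the bonus counters what the memory already holds.
        return self.beta / math.sqrt(self.memory[state])

counter = EpisodicCounter(beta=0.1)
print(counter.bonus((0, 0)))  # first visit: full bonus 0.1
print(counter.bonus((0, 0)))  # revisit: reduced bonus
```

In training, this bonus would be added to the environment reward, so the agent is rewarded for states its episodic memory has not yet accumulated.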
56
Abstract
Artificial intelligence (AI) is transforming the way we perform advanced imaging. From high-resolution image reconstruction to predicting functional response from clinically acquired data, AI promises to revolutionize the clinical evaluation of lung performance, pushing the boundary of pulmonary functional imaging for patients suffering from respiratory conditions. In this review, we survey current developments and expound on some of the encouraging new frontiers. We focus on recent advances in machine learning and deep learning that enable reconstructing images and quantifying and predicting functional responses of the lung. Finally, we shed light on the potential opportunities and challenges ahead in adopting AI for functional lung imaging in clinical settings.
Affiliation(s)
- Raúl San José Estépar
- Applied Chest Imaging Laboratory, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, United States
57
Generative Adversarial CT Volume Extrapolation for Robust Small-to-Large Field of View Registration. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12062944]
Abstract
Intraoperative computed tomography (iCT) provides near-real-time visualizations that can be registered with high-quality preoperative images to improve the confidence of surgical instrument navigation. However, intraoperative images have a small field of view, making the registration process error-prone due to the reduced amount of mutual information. We herein propose a method to extrapolate thin acquisitions as a prior step to registration, to increase the field of view of the intraoperative images and hence also the robustness of the guidance system. The method is based on a deep neural network that is trained adversarially, using self-supervision, to extrapolate slices from the existing ones. Median landmark detection errors are reduced by approximately 40%, yielding a better initial alignment. Furthermore, the intensity-based registration is improved; the surface distance errors are reduced by an order of magnitude, from 5.66 mm to 0.57 mm (p-value = 4.18×10⁻⁶). The proposed extrapolation method increases registration robustness, which plays a key role in confidently guiding the surgical intervention.
58
Madan N, Lucas J, Akhter N, Collier P, Cheng F, Guha A, Zhang L, Sharma A, Hamid A, Ndiokho I, Wen E, Garster NC, Scherrer-Crosbie M, Brown SA. Artificial intelligence and imaging: Opportunities in cardio-oncology. American Heart Journal Plus: Cardiology Research and Practice 2022; 15:100126. [PMID: 35693323] [PMCID: PMC9187287] [DOI: 10.1016/j.ahjo.2022.100126]
Abstract
Cardiovascular disease is a leading cause of death in cancer survivors. It is critical to apply new predictive and early diagnostic methods in this population, as this can potentially inform cardiovascular treatment and surveillance decision-making. We discuss the application of artificial intelligence (AI) technologies to cardiovascular imaging in cardio-oncology, with a particular emphasis on prevention and targeted treatment of a variety of cardiovascular conditions in cancer patients. Recently, the use of AI-augmented cardiac imaging in cardio-oncology has been gaining traction. A large proportion of cardio-oncology patients are screened and followed using left ventricular ejection fraction (LVEF) and global longitudinal strain (GLS), currently obtained using echocardiography. This use will continue to increase with new cardiotoxic cancer treatments. AI is being tested to increase the precision, throughput, and accuracy of LVEF and GLS, guide point-of-care image acquisition, and integrate imaging and clinical data to optimize the prediction and detection of cardiac dysfunction. The application of AI to cardiovascular magnetic resonance imaging (CMR), computed tomography (CT; especially coronary artery calcium, or CAC, scans), single photon emission computed tomography (SPECT), and positron emission tomography (PET) image acquisition is also in early stages of analysis for prediction and assessment of cardiac tumors and cardiovascular adverse events in patients treated for childhood or adult cancer. The opportunities for applying AI in cardio-oncology imaging are promising and, if realized, will improve clinical practice and benefit patient care.
Affiliation(s)
- Nidhi Madan
- Division of Cardiology, Rush University Medical Center, Chicago, IL, USA
- Nausheen Akhter
- Division of Cardiology, Northwestern University, Chicago, IL, USA
- Patrick Collier
- Robert and Suzanne Tomsich Department of Cardiovascular Medicine, Sydell and Arnold Miller Family Heart and Vascular Institute, Cleveland Clinic, Cleveland, OH, USA
- Feixiong Cheng
- Genomic Medicine Institute, Lerner Research Institute, Cleveland Clinic, Cleveland, OH, USA
- Avirup Guha
- Harrington Heart and Vascular Institute, Cleveland, OH, USA
- Lili Zhang
- Cardio-Oncology Program, Division of Cardiology, Department of Medicine, Montefiore Medical Center, Albert Einstein College of Medicine, Bronx, NY, USA
- Abhinav Sharma
- Division of Cardiovascular Medicine, Medical College of Wisconsin, Milwaukee, WI, USA
- Imeh Ndiokho
- Medical College of Wisconsin, Milwaukee, WI, USA
- Ethan Wen
- Medical College of Wisconsin, Milwaukee, WI, USA
- Noelle C. Garster
- Division of Cardiovascular Medicine, Medical College of Wisconsin, Milwaukee, WI, USA
- Sherry-Ann Brown
- Cardio-Oncology Program, Division of Cardiovascular Medicine, Medical College of Wisconsin, Milwaukee, WI, USA
|
59
|
Wang Y, Yang J, Lu Y, Fan W, Bai L, Nie Z, Wang R, Yu J, Liu L, Liu Y, He L, Wen K, Chen L, Yang F, Qi B. Thoracic Aorta Diameter Calculation by Artificial Intelligence Can Predict the Degree of Arterial Stiffness. Front Cardiovasc Med 2022; 8:737161. [PMID: 34977168 PMCID: PMC8714774 DOI: 10.3389/fcvm.2021.737161] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Accepted: 10/26/2021] [Indexed: 01/01/2023] Open
Abstract
Background: Arterial aging is characterized by decreased vascular function, caused by arterial stiffness (AS), and vascular morphological changes, caused by arterial dilatation. We analyzed the relationship of pre-AS and AS, as assessed by the cardio-ankle vascular index (CAVI), with arterial diameters (AD) at nine levels, from the aortic sinus to the abdominal aorta, as measured by artificial intelligence (AI) on non-enhanced chest computed tomography (CT) images. Methods: Overall, 801 patients who underwent both a chest CT scan and an arterial elasticity test were enrolled. Nine horizontal diameters of the thoracic aorta (from the aortic sinuses of Valsalva to the abdominal aorta at the celiac axis origin) were measured by AI using CT. Patients were divided into non-AS (mean value of the left and right CAVIs [M.CAVI] < 8), pre-AS (8 ≤ M.CAVI < 9), and AS (M.CAVI ≥ 9) groups. We compared AD differences among groups and analyzed the correlation of age and ADs with M.CAVI or the mean pressure-independent CAVI (M.CAVI0). Furthermore, we evaluated the risk predictors and the diagnostic value of the nine ADs for pre-AS and AS. Results: The AD at the mid descending aorta (MD) correlated most strongly with CAVI (r = 0.46, p < 0.001) and with M.CAVI0 (r = 0.42, p < 0.001). M.CAVI was most affected by the MD AD and by age. An increase in the MD AD independently predicted the occurrence of pre-AS or AS. For the MD AD, every 4.37 mm increase was associated with a 14% increase in the combined pre-AS and AS risk and a 13% increase in the AS risk. With a cut-off value of 26.95 mm for the MD AD, the area under the curve (AUC) for identifying the risk of AS was 0.743; with a cut-off value of 25.15 mm, the AUC for identifying the combined risk of pre-AS and AS was 0.739. Conclusions: Aging is associated with an increase in AD and a decrease in arterial elasticity. An increase in AD, particularly at the MD level, is an independent predictor of AS development.
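The cutoff-based risk analysis described in this abstract (an AUC plus a diameter cutoff such as 26.95 mm at the MD level) can be illustrated with a generic receiver operating characteristic computation. The sketch below uses synthetic data and a Youden-index cutoff rule; it is an illustration of the statistical technique, not the study's actual analysis code.

```python
import numpy as np

def roc_auc_and_youden_cutoff(diameters, labels):
    """ROC AUC and Youden-optimal cutoff for a scalar risk marker.

    diameters: 1D array of diameters in mm; labels: 1 = disease, 0 = control.
    """
    diameters = np.asarray(diameters, dtype=float)
    labels = np.asarray(labels)
    pos, neg = labels == 1, labels == 0
    points = [(0.0, 0.0)]            # anchor the ROC curve at the origin
    best_cut, best_j = None, -np.inf
    for t in np.unique(diameters):
        pred = diameters >= t        # "large diameter" predicts disease
        tpr = float(np.mean(pred[pos]))
        fpr = float(np.mean(pred[neg]))
        points.append((fpr, tpr))
        j = tpr - fpr                # Youden's J = sensitivity + specificity - 1
        if j > best_j:
            best_j, best_cut = j, float(t)
    points.sort()                    # integrate with the trapezoidal rule
    auc = sum((f1 - f0) * (t0 + t1) / 2.0
              for (f0, t0), (f1, t1) in zip(points, points[1:]))
    return auc, best_cut

# synthetic example: stiff-artery patients tend to have larger MD diameters
rng = np.random.default_rng(0)
d = np.concatenate([rng.normal(24.0, 2.0, 200), rng.normal(28.0, 2.0, 100)])
y = np.concatenate([np.zeros(200, int), np.ones(100, int)])
auc, cut = roc_auc_and_youden_cutoff(d, y)
```

With well-separated groups the AUC lands near 0.9 and the Youden cutoff falls between the two group means, mirroring how a single diameter threshold can serve as a screening rule.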
Affiliation(s)
- Yaoling Wang
- Department of Geriatrics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Jinrong Yang
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yichen Lu
- Siemens Healthineers Digital Technology (Shanghai) Co., Ltd., Shanghai, China
- Wenliang Fan
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Lijuan Bai
- Department of Geriatrics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Zhuang Nie
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Ruiyun Wang
- Department of Geriatrics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Jie Yu
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Lihua Liu
- Department of Geriatrics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yun Liu
- Department of Geriatrics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Linfeng He
- Department of Geriatrics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Kai Wen
- School of Software and Microelectronics, Peking University, Beijing, China
- Li Chen
- Novartis Pharmaceuticals Corporation, East Hanover, NJ, United States
- Fan Yang
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Benling Qi
- Department of Geriatrics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
|
60
|
Wang KN, Yang X, Miao J, Li L, Yao J, Zhou P, Xue W, Zhou GQ, Zhuang X, Ni D. AWSnet: An Auto-weighted Supervision Attention Network for Myocardial Scar and Edema Segmentation in Multi-sequence Cardiac Magnetic Resonance Images. Med Image Anal 2022; 77:102362. [DOI: 10.1016/j.media.2022.102362] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2021] [Revised: 10/26/2021] [Accepted: 01/10/2022] [Indexed: 10/19/2022]
|
61
|
Wang Y, Bai L, Yang J, Lu Y, Fan W, Nie Z, Yu J, Wen K, Wang R, He L, Yang F, Qi B. Artificial intelligence measuring the aortic diameter assist in identifying adverse blood pressure status including masked hypertension. Postgrad Med 2021; 134:111-121. [PMID: 34762815 DOI: 10.1080/00325481.2021.2003150] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
INTRODUCTION AND OBJECTIVES Artificial intelligence (AI) has made it possible to measure aortic dilation indirectly on CT images, and aortic diameter (AD) bears a known relationship to blood pressure (BP). AD measurements obtained from a CT scan could therefore potentially characterize a patient's BP status, particularly for identifying masked hypertension and predicting the risk of poorly controlled BP, both of which easily elude diagnosis in the clinic. We aimed to evaluate whether AD measured by AI can predict the risk of adverse BP status (including masked hypertension or poor BP control), and to determine the optimal thoracic aortic measurement position and the corresponding cutoff value. METHODS Eight hundred and one patients were enrolled in our study. AI-Rad Companion Cardiovascular (K183268, FDA-approved) was used to perform automatic aortic measurements on thoracic CT images at nine key positions based on AHA guidelines. Data were post-processed by AI-Rad Companion software that has undergone rigorous clinical validation by both the FDA and CE as verification of its efficacy and usability. The risk and diagnostic value of AD were assessed by multiple regression analysis and receiver operating characteristic curve analysis for identifying hypertension in the general population, identifying poor BP control in the hypertensive population, and screening for masked hypertension in the general population, respectively. RESULTS AD measured by AI was a risk factor for adverse BP status after adjustment for clinical covariates (OR = 1.02-1.26). The AD at the mid descending aorta was most affected by BP and was the optimal indicator for identifying hypertension in the general population (AUC = 0.73) and for screening masked hypertension (AUC = 0.78). CONCLUSION Using AI to measure the AD of the aorta, particularly at the mid descending aorta, is greatly valuable for identifying people with adverse BP status.
It may become possible to extract more clinical information from ordinary CT images and to enrich the screening methods for hypertension, especially masked hypertension. PLAIN LANGUAGE SUMMARY Hypertension (HTN) has a significant adverse effect on arterial deformation; BP and arterial dilation promote each other in a vicious circle. Arterial dilation is not obscured by short-term fluctuations in BP and is objective evidence of an undesirable BP state. The accuracy of AD measurements by AI on chest CT images has been verified, but AD measurement by AI had not previously been applied to the assessment of poor BP status in clinical practice. In this study, we applied AI to measure the diameter of the aorta at nine consecutive positions, explored the association between AD at each position and BP levels, and assessed the ability of AD to identify poor BP status in different populations. We found that the AD at the MD is of great value in screening for MH and in evaluating BP control in HTN. This may significantly expand the clinical information obtainable from ordinary CT images and enrich the screening methods for HTN, especially MH.
Affiliation(s)
- Yaoling Wang
- Department of Geriatrics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Lijuan Bai
- Department of Geriatrics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Jinrong Yang
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yichen Lu
- Digital Health, Siemens Healthineers Digital Technology (Shanghai) Co., Ltd., Shanghai, China
- Wenliang Fan
- Department of Software Engineering and Data Technology, School of Software & Microelectronics, Peking University, Beijing, China
- Zhuang Nie
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Jie Yu
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Kai Wen
- Department of Software Engineering and Data Technology, School of Software & Microelectronics, Peking University, Beijing, China
- Ruiyun Wang
- Department of Geriatrics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Linfeng He
- Department of Geriatrics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Fan Yang
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Benling Qi
- Department of Geriatrics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
|
62
|
Chen X, Lian C, Deng HH, Kuang T, Lin HY, Xiao D, Gateno J, Shen D, Xia JJ, Yap PT. Fast and Accurate Craniomaxillofacial Landmark Detection via 3D Faster R-CNN. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3867-3878. [PMID: 34310293 PMCID: PMC8686670 DOI: 10.1109/tmi.2021.3099509] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
Automatic craniomaxillofacial (CMF) landmark localization from cone-beam computed tomography (CBCT) images is challenging, considering that 1) the number of landmarks in the images may change due to varying deformities and traumatic defects, and 2) the CBCT images used in clinical practice are typically large. In this paper, we propose a two-stage, coarse-to-fine deep learning method to tackle these challenges with both speed and accuracy in mind. Specifically, we first use a 3D faster R-CNN to roughly locate landmarks in down-sampled CBCT images that have varying numbers of landmarks. By converting the landmark point detection problem to a generic object detection problem, our 3D faster R-CNN is formulated to detect virtual, fixed-size objects in small boxes with centers indicating the approximate locations of the landmarks. Based on the rough landmark locations, we then crop 3D patches from the high-resolution images and send them to a multi-scale UNet for the regression of heatmaps, from which the refined landmark locations are finally derived. We evaluated the proposed approach by detecting up to 18 landmarks on a real clinical dataset of CMF CBCT images with various conditions. Experiments show that our approach achieves state-of-the-art accuracy of 0.89 ± 0.64 mm in an average time of 26.2 seconds per volume.
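The second-stage heatmap regression described here, recovering a landmark coordinate from a predicted 3D heatmap, can be illustrated in isolation. The fragment below is a minimal numpy sketch (argmax peak plus a local intensity-weighted centroid for sub-voxel refinement) under simplified assumptions; it is not the authors' network or pipeline.

```python
import numpy as np

def landmark_from_heatmap(heatmap, refine_radius=2):
    """Recover a 3D landmark from a predicted heatmap.

    Takes the argmax voxel, then refines it to sub-voxel precision with an
    intensity-weighted centroid over a small neighborhood around the peak.
    """
    peak = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    lo = [max(0, p - refine_radius) for p in peak]
    hi = [min(s, p + refine_radius + 1) for p, s in zip(peak, heatmap.shape)]
    patch = heatmap[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    grids = np.meshgrid(*[np.arange(l, h) for l, h in zip(lo, hi)], indexing="ij")
    w = patch / patch.sum()                       # normalized local weights
    return np.array([float((g * w).sum()) for g in grids])

# synthetic Gaussian heatmap with a known sub-voxel center
zz, yy, xx = np.meshgrid(np.arange(32), np.arange(40), np.arange(32), indexing="ij")
center = np.array([10.5, 20.0, 15.25])
hm = np.exp(-((zz - center[0])**2 + (yy - center[1])**2 + (xx - center[2])**2) / 8.0)
est = landmark_from_heatmap(hm)
```

The centroid step is what lets a coarse voxel grid report landmark positions with sub-millimeter error, as in the accuracy figures quoted above.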
|
63
|
Kumar Sen K, Dubey R, Goyal M, Sethi H, Sharawat A, Arora R. COVITALE 2020 from eastern Indian population: imageologists perspective, a learning curve. THE EGYPTIAN JOURNAL OF RADIOLOGY AND NUCLEAR MEDICINE 2021. [PMCID: PMC8493775 DOI: 10.1186/s43055-021-00634-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023] Open
Abstract
Background High-resolution computed tomography (HRCT) of the chest has become a valuable diagnostic tool for identifying patients infected with Coronavirus Disease 2019 (COVID-19) in the early stage, when patients may be asymptomatic or have non-specific pulmonary symptoms. An early diagnosis of COVID-19 is of utmost importance so that patients can be isolated and treated in time, eventually preventing spread of the disease, improving the prognosis and reducing mortality. In this paper, we highlight our radiological experience of dealing with the pandemic crisis of 2020 through the study of HRCT of the thorax, lung ultrasonography, chest X-rays and artificial intelligence (AI). Results The results of the CT thorax analysis are given in detail. We also compared the CT severity score (CTSS) with clinical and laboratory parameters, and studied the correlation of CTSS with SpO2 values and comorbidities. In addition, we compared the manual CTSS with the CTSS calculated by the AI software. Conclusions CTSS and use of the COVID-19 Reporting and Data System (CO-RADS) result in accuracy and uniform dissemination of information among clinicians. Bedside X-rays and ultrasonography played a role where patients could not be moved for CT scanning. Predicting impending hypoxia or its progression was not possible when SpO2 mapping was correlated with the CTSS. AI was also tried with available software (CT pneumonia analysis), which was not so appropriate considering the imaging patterns in the bulk of the atypical category.
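The CT severity score referenced in this abstract is typically computed by grading each of the five lung lobes 0-5 by percentage involvement and summing to a 0-25 total. The sketch below follows a commonly used banding scheme; since the abstract does not specify the exact cut points, the thresholds here are an assumption for illustration only.

```python
def lobe_score(percent_involved):
    """Map % lobe involvement to a 0-5 score (a commonly used CTSS banding;
    the exact cut points used in the study above are an assumption here)."""
    if percent_involved == 0:
        return 0
    if percent_involved < 5:
        return 1
    if percent_involved <= 25:
        return 2
    if percent_involved <= 50:
        return 3
    if percent_involved <= 75:
        return 4
    return 5

def ct_severity_score(lobe_percentages):
    """Total CT severity score over the five lung lobes (range 0-25)."""
    assert len(lobe_percentages) == 5, "one percentage per lobe"
    return sum(lobe_score(p) for p in lobe_percentages)

total = ct_severity_score([0, 3, 20, 60, 80])  # -> 0 + 1 + 2 + 4 + 5 = 12
```

Comparing such a manually graded total against the value emitted by AI software is exactly the manual-versus-AI CTSS comparison the study performed.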
|
64
|
Yousefirizi F, Decazes P, Amyar A, Ruan S, Saboury B, Rahmim A. AI-Based Detection, Classification and Prediction/Prognosis in Medical Imaging: Towards Radiophenomics. PET Clin 2021; 17:183-212. [PMID: 34809866 DOI: 10.1016/j.cpet.2021.09.010] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
Artificial intelligence (AI) techniques have significant potential to enable effective, robust, and automated image phenotyping including the identification of subtle patterns. AI-based detection searches the image space to find the regions of interest based on patterns and features. There is a spectrum of tumor histologies from benign to malignant that can be identified by AI-based classification approaches using image features. The extraction of minable information from images gives way to the field of "radiomics" and can be explored via explicit (handcrafted/engineered) and deep radiomics frameworks. Radiomics analysis has the potential to be used as a noninvasive technique for the accurate characterization of tumors to improve diagnosis and treatment monitoring. This work reviews AI-based techniques, with a special focus on oncological PET and PET/CT imaging, for different detection, classification, and prediction/prognosis tasks. We also discuss needed efforts to enable the translation of AI techniques to routine clinical workflows, and potential improvements and complementary techniques such as the use of natural language processing on electronic health records and neuro-symbolic AI techniques.
Affiliation(s)
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
- Pierre Decazes
- Department of Nuclear Medicine, Henri Becquerel Centre, Rue d'Amiens - CS 11516 - 76038 Rouen Cedex 1, France; QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France
- Amine Amyar
- QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France; General Electric Healthcare, Buc, France
- Su Ruan
- QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, USA; Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, USA
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada; Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada; Department of Physics, University of British Columbia, Vancouver, British Columbia, Canada
|
65
|
Golla AK, Tönnes C, Russ T, Bauer DF, Froelich MF, Diehl SJ, Schoenberg SO, Keese M, Schad LR, Zöllner FG, Rink JS. Automated Screening for Abdominal Aortic Aneurysm in CT Scans under Clinical Conditions Using Deep Learning. Diagnostics (Basel) 2021; 11:2131. [PMID: 34829478 PMCID: PMC8621263 DOI: 10.3390/diagnostics11112131] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2021] [Revised: 11/10/2021] [Accepted: 11/14/2021] [Indexed: 11/16/2022] Open
Abstract
Abdominal aortic aneurysms (AAA) may remain clinically silent until they enlarge and patients present with a potentially lethal rupture. This necessitates early detection and elective treatment. The goal of this study was to develop an easy-to-train algorithm that is capable of automated AAA screening in CT scans and can be applied in an intra-hospital environment. Three deep convolutional neural networks (ResNet, VGG-16 and AlexNet) were adapted for 3D classification and applied to a dataset consisting of 187 heterogeneous CT scans. The 3D ResNet outperformed both other networks. Across the five folds of the first training dataset it achieved an accuracy of 0.856 and an area under the curve (AUC) of 0.926. Subsequently, the algorithm's performance was verified on a second dataset containing 106 scans, where it ran fully automated and achieved an accuracy of 0.953 and an AUC of 0.971. Layer-wise relevance propagation (LRP) made the decision process interpretable and showed that the network correctly focused on the aortic lumen. In conclusion, the deep learning-based screening proved to be robust and showed high performance even on a heterogeneous multi-center dataset. Integration into the hospital workflow and its effect on aneurysm management would be an exciting topic of future research.
Affiliation(s)
- Alena-K. Golla
- Computer Assisted Clinical Medicine, Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim, Germany
- Christian Tönnes
- Computer Assisted Clinical Medicine, Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim, Germany
- Tom Russ
- Computer Assisted Clinical Medicine, Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim, Germany
- Dominik F. Bauer
- Computer Assisted Clinical Medicine, Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim, Germany
- Matthias F. Froelich
- Department of Radiology and Nuclear Medicine, University Medical Center Mannheim, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim, Germany
- Steffen J. Diehl
- Department of Radiology and Nuclear Medicine, University Medical Center Mannheim, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim, Germany
- Stefan O. Schoenberg
- Department of Radiology and Nuclear Medicine, University Medical Center Mannheim, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim, Germany
- Michael Keese
- Department of Surgery, University Medical Center Mannheim, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim, Germany
- Lothar R. Schad
- Computer Assisted Clinical Medicine, Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim, Germany
- Frank G. Zöllner
- Computer Assisted Clinical Medicine, Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim, Germany
- Johann S. Rink
- Department of Radiology and Nuclear Medicine, University Medical Center Mannheim, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim, Germany
|
66
|
Uncertainty-guided graph attention network for parapneumonic effusion diagnosis. Med Image Anal 2021; 75:102217. [PMID: 34775280 DOI: 10.1016/j.media.2021.102217] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2021] [Revised: 08/12/2021] [Accepted: 08/23/2021] [Indexed: 01/08/2023]
Abstract
Parapneumonic effusion (PPE) is a common condition that causes death in patients hospitalized with pneumonia. Rapid distinction of complicated PPE (CPPE) from uncomplicated PPE (UPPE) in Computed Tomography (CT) scans is of great importance for the management and medical treatment of PPE. However, UPPE and CPPE display similar appearances in CT scans, and it is challenging to distinguish CPPE from UPPE via a single 2D CT image, whether attempted by a human expert, or by any of the existing disease classification approaches. 3D convolutional neural networks (CNNs) can utilize the entire 3D volume for classification: however, they typically suffer from the intrinsic defect of over-fitting. Therefore, it is important to develop a method that not only overcomes the heavy memory and computational requirements of 3D CNNs, but also leverages the 3D information. In this paper, we propose an uncertainty-guided graph attention network (UG-GAT) that can automatically extract and integrate information from all CT slices in a 3D volume for classification into UPPE, CPPE, and normal control cases. Specifically, we frame the distinction of different cases as a graph classification problem. Each individual is represented as a directed graph with a topological structure, where vertices represent the image features of slices, and edges encode the spatial relationship between them. To estimate the contribution of each slice, we first extract the slice representations with uncertainty, using a Bayesian CNN: we then make use of the uncertainty information to weight each slice during the graph prediction phase in order to enable more reliable decision-making. We construct a dataset consisting of 302 chest CT volumetric data from different subjects (99 UPPE, 99 CPPE and 104 normal control cases) in this study, and to the best of our knowledge, this is the first attempt to classify UPPE, CPPE and normal cases using a deep learning method. 
Extensive experiments show that our approach is lightweight in its memory and computational demands, and outperforms state-of-the-art methods by a large margin. Code is available at https://github.com/iMED-Lab/UG-GAT.
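The core uncertainty-guided idea in this abstract, down-weighting slices whose Bayesian-CNN features are uncertain before the volume-level decision, can be sketched in isolation. The fragment below is an illustrative numpy sketch under a simplified weighting rule (softmax over negative uncertainty); it is not the UG-GAT implementation, and the function name is hypothetical.

```python
import numpy as np

def uncertainty_weighted_pooling(slice_features, slice_uncertainty):
    """Aggregate per-slice feature vectors, down-weighting uncertain slices.

    slice_features: (n_slices, n_features) array; slice_uncertainty: (n_slices,),
    e.g. predictive variance from Monte-Carlo dropout in a Bayesian CNN.
    """
    # softmax over negative uncertainty: confident slices get larger weights
    logits = -np.asarray(slice_uncertainty, dtype=float)
    logits -= logits.max()                      # shift for numerical stability
    w = np.exp(logits)
    w /= w.sum()
    pooled = w @ np.asarray(slice_features, dtype=float)
    return pooled, w

feats = np.array([[1.0, 0.0], [0.0, 1.0], [10.0, 10.0]])
unc = np.array([0.1, 0.1, 5.0])   # third slice is highly uncertain
pooled, weights = uncertainty_weighted_pooling(feats, unc)
```

The uncertain third slice contributes almost nothing to the pooled representation, which is the mechanism that makes the final graph-level prediction more reliable.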
|
67
|
|
68
|
Saeedi-Hosseiny MS, Alruwaili F, Patel AS, McMillan S, Iordachita II, Abedin-Nasab MH. Spatial Detection of the Shafts of Fractured Femur for Image-Guided Robotic Surgery. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:3301-3304. [PMID: 34891946 DOI: 10.1109/embc46164.2021.9630866] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Femur fractures due to traumatic forces often require surgical intervention. Such surgeries require alignment of the femur in the presence of large muscular forces up to 500 N. Currently, orthopedic surgeons perform this alignment manually before fixation, leading to extra soft tissue damage and inaccurate alignment. One of the limitations of femoral fracture surgery is the limited vision and two-dimensional nature of X-ray images, which typically guide the surgeon in diagnosing the position of the femur. Other limitations include the lack of precise intraoperative planning and the process of trial-and-error alignment. To alleviate the issues discussed, we develop a marker-based approach for detecting the position of femur fragments using two X-ray images. The relative spatial position of the femur fragments plays a key role in guiding an innovative robotic system, named Robossis, for femur fracture alignment surgeries. Using the derived three-dimensional data, we simulate pre-programmed movements to visualize the proposed steps of the alignment method, while the bone fragments are attached to the robot. Ultimately, Robossis aims to improve the accuracy of femur alignment, which results in improved patient outcomes.
|
69
|
Pradella M, Weikert T, Sperl JI, Kärgel R, Cyriac J, Achermann R, Sauter AW, Bremerich J, Stieltjes B, Brantner P, Sommer G. Fully automated guideline-compliant diameter measurements of the thoracic aorta on ECG-gated CT angiography using deep learning. Quant Imaging Med Surg 2021; 11:4245-4257. [PMID: 34603980 DOI: 10.21037/qims-21-142] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2021] [Accepted: 05/27/2021] [Indexed: 11/06/2022]
Abstract
Background Manually performed diameter measurements on ECG-gated CT angiography (CTA) represent the gold standard for the diagnosis of thoracic aortic dilatation. However, they are time-consuming and show high inter-reader variability. Therefore, we aimed to evaluate the accuracy of measurements made by a deep learning (DL) algorithm in comparison to those of radiologists, and evaluated measurement times (MT). Methods We retrospectively analyzed 405 ECG-gated CTA exams of 371 consecutive patients with suspected aortic dilatation between May 2010 and June 2019. The DL-algorithm prototype detected aortic landmarks (deep reinforcement learning) and segmented the lumen of the thoracic aorta (multi-layer convolutional neural network). It performed measurements according to AHA guidelines and created visual outputs. Manual measurements were performed by radiologists using the centerline technique. Human performance variability (HPV), MT and DL performance were analyzed in a research setting using a linear mixed model based on 21 randomly selected, repeatedly measured cases. DL-algorithm results were then evaluated in a clinical setting using matched differences. If the differences were within 5 mm at all locations, the case was regarded as coherent; if there was a discrepancy >5 mm at at least one location (incl. missing values), the case was completely reviewed. Results HPV ranged up to ±3.4 mm in repeated measurements under research conditions. In the clinical setting, 2,778/3,192 (87.0%) of the DL-algorithm's measurements were coherent. Mean differences of paired measurements between the DL-algorithm and radiologists at the aortic sinus and ascending aorta were -0.45±5.52 and -0.02±3.36 mm. Detailed analysis revealed that measurements at the aortic root were over-/underestimated due to a tilted measurement plane. In total, the time saved by the DL-algorithm was calculated at 3:10 minutes per case.
Conclusions The DL-algorithm produced results coherent with the radiologists' at almost 90% of measurement locations, while the majority of discrepant cases involved the aortic root. In summary, the DL-algorithm assisted radiologists in performing AHA-compliant measurements while saving 50% of the time per case.
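The 5 mm per-case coherence rule described in this abstract (all matched differences within 5 mm, with missing values counting as discrepancies that trigger full review) translates directly into code. This is an illustrative sketch: the helper name and location keys are hypothetical, not from the study's software.

```python
import math

def case_is_coherent(algo_mm, reader_mm, tol_mm=5.0):
    """Return True if every paired measurement agrees within tol_mm.

    algo_mm / reader_mm: dicts mapping measurement-location name -> diameter
    in mm, with None for a missing (failed) measurement. A missing value at
    any location counts as a discrepancy, triggering full manual review.
    """
    for loc, ref in reader_mm.items():
        val = algo_mm.get(loc)
        if val is None or ref is None:
            return False                      # missing value -> review the case
        if math.fabs(val - ref) > tol_mm:
            return False                      # discrepancy >5 mm -> review
    return True

ok = case_is_coherent({"aortic_sinus": 34.2, "ascending": 36.0},
                      {"aortic_sinus": 33.0, "ascending": 35.1})
flagged = case_is_coherent({"aortic_sinus": 41.0, "ascending": None},
                           {"aortic_sinus": 33.0, "ascending": 35.1})
```

Applied across 3,192 measurements, a rule of this shape yields the 87.0% coherence figure reported above.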
Collapse
Affiliation(s)
- Maurice Pradella
- Department of Radiology, Clinic of Radiology & Nuclear Medicine, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- Thomas Weikert
- Department of Radiology, Clinic of Radiology & Nuclear Medicine, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- Rainer Kärgel
- Siemens Healthineers, Siemensstraße 3, 91301 Forchheim, Germany
- Joshy Cyriac
- Department of Radiology, Clinic of Radiology & Nuclear Medicine, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- Rita Achermann
- Department of Radiology, Clinic of Radiology & Nuclear Medicine, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- Alexander W Sauter
- Department of Radiology, Clinic of Radiology & Nuclear Medicine, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- Jens Bremerich
- Department of Radiology, Clinic of Radiology & Nuclear Medicine, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- Bram Stieltjes
- Department of Radiology, Clinic of Radiology & Nuclear Medicine, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- Philipp Brantner
- Department of Radiology, Clinic of Radiology & Nuclear Medicine, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland; Regional Hospitals Rheinfelden and Laufenburg, Riburgerstrasse 12, 4310 Rheinfelden, Switzerland
- Gregor Sommer
- Department of Radiology, Clinic of Radiology & Nuclear Medicine, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
|
70
|
Deep Reinforcement Learning with Explicit Spatio-Sequential Encoding Network for Coronary Ostia Identification in CT Images. SENSORS 2021; 21:s21186187. [PMID: 34577391 PMCID: PMC8469841 DOI: 10.3390/s21186187] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/21/2021] [Revised: 08/31/2021] [Accepted: 09/13/2021] [Indexed: 11/16/2022]
Abstract
Accurate identification of the coronary ostia from 3D coronary computed tomography angiography (CCTA) is an essential prerequisite for automatically tracking and segmenting the three main coronary arteries. In this paper, we propose a novel deep reinforcement learning (DRL) framework to localize the two coronary ostia in 3D CCTA. An optimal action policy is determined using a fully explicit spatio-sequential encoding policy network applied to 2.5D Markovian states with three past histories. The proposed network is trained using a dueling DRL framework on the CAT08 dataset. The experimental results show that our method is more efficient and accurate than the other methods. Floating-point operations (FLOPs) were calculated to measure computational efficiency: the proposed method requires 2.5M FLOPs, about 10 times fewer than 3D box-based methods. In terms of accuracy, the proposed method achieves errors of 2.22 ± 1.12 mm and 1.94 ± 0.83 mm on the left and right coronary ostia, respectively. The proposed method can be applied to identifying other target objects by changing the target locations in the ground truth data. Further, it can be utilized as a pre-processing step for coronary artery tracking methods.
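The dueling DRL framework mentioned above combines a state-value stream and an advantage stream into Q-values. A minimal sketch of that aggregation step (the six-action setup and all names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def dueling_q_values(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).

    Subtracting the mean advantage keeps the V/A decomposition
    identifiable, which stabilizes training.
    """
    advantages = np.asarray(advantages, dtype=float)
    return value + advantages - advantages.mean()

# Six discrete moves for a 3D localization agent: +/-x, +/-y, +/-z
q = dueling_q_values(value=1.0, advantages=[0.2, -0.1, 0.5, 0.0, -0.3, -0.3])
best_action = int(np.argmax(q))  # index of the greedy move
```

The agent would take `best_action`, observe the new 2.5D state, and repeat until it settles on an ostium location.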
|
71
|
Jeon B. Deep Recursive Bayesian Tracking for Fully Automatic Centerline Extraction of Coronary Arteries in CT Images. SENSORS (BASEL, SWITZERLAND) 2021; 21:6087. [PMID: 34577293 PMCID: PMC8471768 DOI: 10.3390/s21186087] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Revised: 09/07/2021] [Accepted: 09/08/2021] [Indexed: 11/17/2022]
Abstract
Extraction of coronary arteries in coronary computed tomography (CT) angiography is a prerequisite for the quantification of coronary lesions. In this study, we propose a tracking method combining a deep convolutional neural network (DNN) and a particle filtering method to identify the trajectories from the coronary ostium to each distal end in 3D CT images. The particle filter, as a non-linear approximator, is an appropriate tracking framework for such thin and elongated structures; however, a robust 'vesselness' measurement is essential for extracting coronary centerlines. Importantly, we employed the DNN to robustly measure the vesselness using patch images, and we integrated softmax values into the likelihood function of our particle filtering framework. Tangent patches represent cross-sections of coronary arteries, which are circular in shape; 2D tangent patches are therefore assumed to capture enough features of coronary arteries, and their use significantly reduces computational complexity. Because the coronary vasculature has multiple bifurcations, we also devised a method to detect branching sites by clustering the particle locations. The proposed method is compared with three commercial workstations and two conventional methods from the academic literature.
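The propagate-weight-resample cycle of a particle filter described above can be sketched on a toy 1D problem. This is a generic stand-in, not the paper's method: the DNN softmax likelihood is replaced by a Gaussian likelihood around a noisy measurement, and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def resample(particles, weights):
    """Systematic resampling: redraw particles proportionally to weight."""
    n = len(particles)
    positions = (np.arange(n) + rng.random()) / n
    return particles[np.searchsorted(np.cumsum(weights), positions)]

# Toy 1D stand-in for centerline tracking: the "vessel" position
# drifts by 0.5 per step along the tracking direction.
particles = rng.normal(0.0, 1.0, size=500)
true_position = 0.0
for _ in range(20):
    true_position += 0.5
    particles = particles + 0.5 + rng.normal(0.0, 0.2, size=500)  # propagate
    measurement = true_position + rng.normal(0.0, 0.1)
    # In the paper, a DNN softmax on tangent patches supplies this
    # likelihood; here a Gaussian around the measurement stands in.
    weights = np.exp(-0.5 * ((particles - measurement) / 0.3) ** 2)
    weights /= weights.sum()
    particles = resample(particles, weights)

estimate = particles.mean()  # posterior mean tracks the moving target
```

Clustering the particle cloud at each step, as the abstract notes, is one way to spot the multi-modal posteriors that arise at bifurcations.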
Affiliation(s)
- Byunghwan Jeon
- School of Computer Science, Kyungil University, Gyeongsan 38428, Korea
|
72
|
Fuchs P, Kröger T, Garbe CS. Defect detection in CT scans of cast aluminum parts: A machine vision perspective. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.04.094] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
|
73
|
Mader C, Bernatz S, Michalik S, Koch V, Martin SS, Mahmoudi S, Basten L, Grünewald LD, Bucher A, Albrecht MH, Vogl TJ, Booz C. Quantification of COVID-19 Opacities on Chest CT - Evaluation of a Fully Automatic AI-approach to Noninvasively Differentiate Critical Versus Noncritical Patients. Acad Radiol 2021; 28:1048-1057. [PMID: 33741210 PMCID: PMC7936551 DOI: 10.1016/j.acra.2021.03.001] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2021] [Revised: 02/14/2021] [Accepted: 03/01/2021] [Indexed: 12/31/2022]
Abstract
Objectives To evaluate the potential of a fully automatic artificial intelligence (AI)-driven computed tomography (CT) software prototype to quantify the severity of COVID-19 infection on chest CT in relation to clinical and laboratory data. Methods We retrospectively analyzed 50 patients with laboratory-confirmed COVID-19 infection who had received chest CT between March and July 2020. Pulmonary opacifications were automatically evaluated by the AI-driven software and correlated with clinical and laboratory parameters using Spearman's rho and linear regression analysis. We divided the patients into subcohorts with or without the necessity of intensive care unit (ICU) treatment. Subcohort differences were evaluated using the Wilcoxon-Mann-Whitney test. Results We included 50 CT examinations (mean patient age, 57.24 years); 24 patients (48%) had an ICU stay. The extent of COVID-19-like opacities on chest CT correlated (all p < 0.001 if not otherwise stated) with the occurrence of an ICU stay (R = 0.74), length of ICU stay (R = 0.81), lethal outcome (R = 0.56) and length of hospital stay (R = 0.33, p < 0.05). The extent of opacities also correlated with laboratory parameters: neutrophil count (NEU) (R = 0.60), lactate dehydrogenase (LDH) (R = 0.60), troponin (TNTHS) (R = 0.55) and C-reactive protein (CRP) (R = 0.51). Differences (p < 0.001) between the ICU and non-ICU groups comprised longer length of hospital stay (24.04 vs. 10.92 days), higher opacity score (12.50 vs. 4.96) and more severe laboratory changes such as C-reactive protein (11.64 vs. 5.07 mg/dl, p < 0.01). Conclusions Fully automatic AI-driven quantification of opacities on chest CT correlates with laboratory and clinical data in patients with confirmed COVID-19 infection and may serve as a noninvasive predictive marker for the clinical course of COVID-19.
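The Spearman correlations reported above are Pearson correlations computed on ranks. A self-contained sketch (the data below are illustrative placeholders, not the published measurements):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks,
    with average ranks assigned to tied values."""
    def rank(a):
        a = np.asarray(a, dtype=float)
        order = np.argsort(a)
        ranks = np.empty(len(a), dtype=float)
        ranks[order] = np.arange(1, len(a) + 1)
        for v in np.unique(a):          # average ranks for ties
            mask = a == v
            ranks[mask] = ranks[mask].mean()
        return ranks
    rx, ry = rank(x), rank(y)
    return float(np.corrcoef(rx, ry)[0, 1])

# Illustrative (not published) data: AI-derived opacity extent (%)
# versus length of ICU stay (days) for ten hypothetical patients.
opacity = [2, 5, 8, 12, 15, 20, 25, 30, 40, 55]
icu_days = [0, 0, 1, 2, 4, 3, 7, 10, 14, 21]
print(round(spearman_rho(opacity, icu_days), 2))  # 0.98
```

Rank-based correlation is the natural choice here because opacity scores and length-of-stay are monotonically but not linearly related.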
Affiliation(s)
- Christoph Mader
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany.
- Simon Bernatz
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany; Dr. Senckenberg Institute for Pathology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Sabine Michalik
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Vitali Koch
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Simon S Martin
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Scherwin Mahmoudi
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Lajos Basten
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Leon D Grünewald
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Andreas Bucher
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Moritz H Albrecht
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Thomas J Vogl
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Christian Booz
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
|
74
|
Sermesant M, Delingette H, Cochet H, Jaïs P, Ayache N. Applications of artificial intelligence in cardiovascular imaging. Nat Rev Cardiol 2021; 18:600-609. [PMID: 33712806 DOI: 10.1038/s41569-021-00527-2] [Citation(s) in RCA: 55] [Impact Index Per Article: 18.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 02/08/2021] [Indexed: 01/31/2023]
Abstract
Research into artificial intelligence (AI) has made tremendous progress over the past decade. In particular, the AI-powered analysis of images and signals has reached human-level performance in many applications owing to the efficiency of modern machine learning methods, in particular deep learning using convolutional neural networks. Research into the application of AI to medical imaging is now very active, especially in the field of cardiovascular imaging because of the challenges associated with acquiring and analysing images of this dynamic organ. In this Review, we discuss the clinical questions in cardiovascular imaging that AI can be used to address and the principal methodological AI approaches that have been developed to solve the related image analysis problems. Some approaches are purely data-driven and rely mainly on statistical associations, whereas others integrate anatomical and physiological information through additional statistical, geometric and biophysical models of the human heart. In a structured manner, we provide representative examples of each of these approaches, with particular attention to the underlying computational imaging challenges. Finally, we discuss the remaining limitations of AI approaches in cardiovascular imaging (such as generalizability and explainability) and how they can be overcome.
Affiliation(s)
- Hubert Cochet
- IHU Liryc, CHU Bordeaux, Université Bordeaux, Inserm 1045, Pessac, France
- Pierre Jaïs
- IHU Liryc, CHU Bordeaux, Université Bordeaux, Inserm 1045, Pessac, France
|
75
|
Yang X, Dou H, Huang R, Xue W, Huang Y, Qian J, Zhang Y, Luo H, Guo H, Wang T, Xiong Y, Ni D. Agent With Warm Start and Adaptive Dynamic Termination for Plane Localization in 3D Ultrasound. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1950-1961. [PMID: 33784618 DOI: 10.1109/tmi.2021.3069663] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Accurate standard plane (SP) localization is the fundamental step for prenatal ultrasound (US) diagnosis. Typically, dozens of US SPs are collected to determine the clinical diagnosis. 2D US requires a separate scan for each SP, which is time-consuming and operator-dependent, whereas 3D US captures multiple SPs in one shot and is inherently less user-dependent and more efficient. Automatically locating SPs in 3D US is very challenging due to the huge search space and large fetal posture variations. Our previous study proposed a deep reinforcement learning (RL) framework with an alignment module and active termination to localize SPs in 3D US automatically. However, the termination of the agent search in RL is important and affects practical deployment. In this study, we enhance our previous RL framework with a newly designed adaptive dynamic termination that enables an early stop of the agent search, saving up to 67% of inference time and thus boosting both the accuracy and the efficiency of the RL framework. Besides, we validate the effectiveness and generalizability of our algorithm extensively on our in-house multi-organ datasets containing 433 fetal brain volumes, 519 fetal abdomen volumes, and 683 uterus volumes. Our approach achieves localization errors of 2.52 mm/10.26°, 2.48 mm/10.39°, 2.02 mm/10.48°, 2.00 mm/14.57°, 2.61 mm/9.71°, 3.09 mm/9.58° and 1.49 mm/7.54° for the transcerebellar, transventricular and transthalamic planes in the fetal brain, the abdominal plane in the fetal abdomen, and the mid-sagittal, transverse and coronal planes in the uterus, respectively. Experimental results show that our method is general and has the potential to improve the efficiency and standardization of US scanning.
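The idea of terminating an iterative search agent adaptively, rather than after a fixed budget, can be sketched generically. This is a stand-in criterion (stop once the estimate stabilizes over a window), not the paper's learned termination; every name and constant is an assumption.

```python
def search_with_dynamic_termination(step_fn, start, max_steps=100,
                                    window=8, eps=1e-3):
    """Run an iterative localization agent, stopping early once the
    last `window` estimates all lie within `eps` of each other."""
    history = [start]
    for _ in range(max_steps):
        history.append(step_fn(history[-1]))
        recent = history[-window:]
        if len(recent) == window and max(recent) - min(recent) < eps:
            break
    return history[-1], len(history) - 1  # final estimate, steps used

# Toy agent that converges geometrically toward plane offset 3.0
estimate, steps = search_with_dynamic_termination(
    lambda x: x + 0.5 * (3.0 - x), start=0.0)
print(steps < 100)  # True: early stop saves most of the step budget
```

The saved iterations translate directly into the kind of inference-time reduction the abstract reports.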
|
76
|
Zhou Q, Wang J, Guo J, Huang Z, Ding M, Yuchi M, Zhang X. Anterior chamber angle classification in anterior segment optical coherence tomography images using hybrid attention based pyramidal convolutional network. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102686] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
77
|
Edwards CA, Goyal A, Rusheen AE, Kouzani AZ, Lee KH. DeepNavNet: Automated Landmark Localization for Neuronavigation. Front Neurosci 2021; 15:670287. [PMID: 34220429 PMCID: PMC8245762 DOI: 10.3389/fnins.2021.670287] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2021] [Accepted: 05/25/2021] [Indexed: 11/13/2022] Open
Abstract
Functional neurosurgery requires neuroimaging technologies that enable precise navigation to targeted structures. Insufficient image resolution of deep brain structures necessitates alignment to a brain atlas to indirectly locate targets within preoperative magnetic resonance imaging (MRI) scans. Indirect targeting through atlas-image registration is innately imprecise, increases preoperative planning time, and requires manual identification of anterior and posterior commissure (AC and PC) reference landmarks which is subject to human error. As such, we created a deep learning-based pipeline that consistently and automatically locates, with submillimeter accuracy, the AC and PC anatomical landmarks within MRI volumes without the need for an atlas. Our novel deep learning pipeline (DeepNavNet) regresses from MRI scans to heatmap volumes centered on AC and PC anatomical landmarks to extract their three-dimensional coordinates with submillimeter accuracy. We collated and manually labeled the location of AC and PC points in 1128 publicly available MRI volumes used for training, validation, and inference experiments. Instantiations of our DeepNavNet architecture, as well as a baseline model for reference, were evaluated based on the average 3D localization errors for the AC and PC points across 311 MRI volumes. Our DeepNavNet model significantly outperformed a baseline and achieved a mean 3D localization error of 0.79 ± 0.33 mm and 0.78 ± 0.33 mm between the ground truth and the detected AC and PC points, respectively. In conclusion, the DeepNavNet model pipeline provides submillimeter accuracy for localizing AC and PC anatomical landmarks in MRI volumes, enabling improved surgical efficiency and accuracy.
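Regressing to heatmap volumes, as described above, requires a decoding step that turns a heatmap back into a 3D coordinate. A generic sketch using the intensity-weighted center of mass, which can reach sub-voxel precision (this is an illustration of heatmap decoding in general, not the DeepNavNet implementation):

```python
import numpy as np

def landmark_from_heatmap(heatmap, spacing_mm=(1.0, 1.0, 1.0)):
    """Recover a 3D landmark coordinate (in mm) from a heatmap volume
    via the intensity-weighted center of mass over voxel indices."""
    grids = np.meshgrid(*[np.arange(s) for s in heatmap.shape],
                        indexing="ij")
    total = heatmap.sum()
    voxel = np.array([(g * heatmap).sum() / total for g in grids])
    return voxel * np.asarray(spacing_mm)

# Synthetic Gaussian heatmap centered on voxel (10, 12, 8)
z, y, x = np.meshgrid(*[np.arange(24)] * 3, indexing="ij")
hm = np.exp(-((z - 10) ** 2 + (y - 12) ** 2 + (x - 8) ** 2) / 8.0)
print(np.round(landmark_from_heatmap(hm), 1))  # ≈ [10. 12.  8.]
```

Unlike a plain argmax, the weighted mean is not locked to the voxel grid, which matters when the target accuracy is below the voxel spacing.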
Affiliation(s)
- Christine A Edwards
- School of Engineering, Deakin University, Geelong, VIC, Australia; Department of Neurologic Surgery, Mayo Clinic, Rochester, MN, United States; Mayo Clinic Graduate School of Biomedical Sciences, Mayo Clinic, Rochester, MN, United States
- Abhinav Goyal
- Department of Neurologic Surgery, Mayo Clinic, Rochester, MN, United States; Mayo Clinic College of Medical Scientist Training Program, Mayo Clinic, Rochester, MN, United States
- Aaron E Rusheen
- Department of Neurologic Surgery, Mayo Clinic, Rochester, MN, United States; Mayo Clinic College of Medical Scientist Training Program, Mayo Clinic, Rochester, MN, United States
- Abbas Z Kouzani
- School of Engineering, Deakin University, Geelong, VIC, Australia
- Kendall H Lee
- Department of Neurologic Surgery, Mayo Clinic, Rochester, MN, United States; Mayo Clinic Graduate School of Biomedical Sciences, Mayo Clinic, Rochester, MN, United States; Mayo Clinic College of Medical Scientist Training Program, Mayo Clinic, Rochester, MN, United States; Department of Physiology and Biomedical Engineering, Mayo Clinic, Rochester, MN, United States
|
78
|
Lai JW, Ang CKE, Acharya UR, Cheong KH. Schizophrenia: A Survey of Artificial Intelligence Techniques Applied to Detection and Classification. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:6099. [PMID: 34198829 PMCID: PMC8201065 DOI: 10.3390/ijerph18116099] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/01/2021] [Revised: 05/26/2021] [Accepted: 05/28/2021] [Indexed: 02/07/2023]
Abstract
Artificial intelligence in healthcare employs machine learning algorithms to emulate human cognition in the analysis of complicated or large sets of data. Specifically, artificial intelligence draws on the ability of computer algorithms and software, within allowable thresholds, to reach deterministic approximate conclusions. In comparison to traditional technologies in healthcare, artificial intelligence enhances the process of data analysis without the need for human input, producing nearly equally reliable, well-defined output. Schizophrenia is a chronic mental health condition that affects millions worldwide, with impairment in thinking and behaviour that may be significantly disabling to daily living. Multiple artificial intelligence and machine learning algorithms have been utilized to analyze the different components of schizophrenia, such as prediction of disease and assessment of current prevention methods, in the hope of assisting with diagnosis and providing viable options for affected individuals. In this paper, we review the progress of the use of artificial intelligence in schizophrenia.
Affiliation(s)
- Joel Weijia Lai
- Science, Mathematics and Technology, Singapore University of Technology and Design, 8 Somapah Road, Singapore 487372, Singapore
- Candice Ke En Ang
- Science, Mathematics and Technology, Singapore University of Technology and Design, 8 Somapah Road, Singapore 487372, Singapore
- MOH Holdings Pte Ltd, 1 Maritime Square, Singapore 099253, Singapore
- U. Rajendra Acharya
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Clementi 599489, Singapore
- Department of Biomedical Engineering, School of Science and Technology, Singapore University of Social Sciences, Clementi 599491, Singapore
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung 41354, Taiwan
- Kang Hao Cheong
- Science, Mathematics and Technology, Singapore University of Technology and Design, 8 Somapah Road, Singapore 487372, Singapore
|
79
|
Zhao D, Lu Q, Zou S, Sun J, Hu F. Accuracy of individualized 3D modeling of ossicles using high-resolution computed tomography imaging data. Quant Imaging Med Surg 2021; 11:2406-2414. [PMID: 34079711 DOI: 10.21037/qims-20-894] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Background The present study aimed to investigate the visibility of small ossicle parts/landmarks on high-resolution computed tomography (HRCT) with 3D reconstruction, in order to determine what improvements in scanning resolution are needed before accurate 3D printing of patient-specific ossicles becomes possible. Methods A total of 24 patients with sudden deafness who sought consultation at the Department of Otorhinolaryngology Head and Neck Surgery at the Sixth Medical Center of People's Liberation Army General Hospital between October 2013 and June 2014 were enrolled in the study. All participants underwent a 256-slice spiral HRCT temporal bone axial scan, yielding a series of Digital Imaging and Communications in Medicine (DICOM) files. These files were then imported into Mimics 16.0 interactive medical image processing software for data conversion and the creation of 3D segmentations and visualizations of the ossicles. Finally, the 3D images were compared with multiplanar reformation (MPR) and 3D volume-rendering (VR) reconstructed images of the ossicles to verify their consistency, and with the normal ossicle structure to evaluate the accuracy of the restoration. Results The morphology of the ossicles from the converted Mimics 16.0 data achieved a display rate of ≥90% for 7 landmarks (the caput mallei, collum mallei, processus lateralis mallei, manubrium mallei, corpus incudis, crus longum incudis, and crus breve incudis), demonstrating excellent matching with the images of the ossicles obtained from MPR and 3D VR reconstruction; Kappa consistency testing found a κ-value higher than 0.75. For the lenticular process, caput stapedis, crus anterius stapedis, and crus posterius stapedis landmarks, the display rate was around 60%, showing good matching with the ossicle images obtained from MPR and 3D VR reconstruction, with a κ-value >0.4.
However, the display rate of the stapes footplate was only 25%, showing greater differences from the images obtained from MPR (76.4%) and 3D VR reconstruction (52.8%), with a κ-value <0.4. Conclusions The accuracy of the visualization of the malleus and incus after restoration via Mimics 16.0 software, based on temporal bone HRCT data, was high, and the degree of restoration was good. However, the accuracy and degree of restoration of the stapes footplate require further improvement.
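The κ-value thresholds cited above come from Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A minimal sketch for two binary raters (the visibility calls below are illustrative, not the study's data):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters (e.g. landmark displayed on
    the 3D model vs. on the MPR reference). 1.0 = perfect agreement,
    0.0 = chance-level agreement."""
    n = len(a)
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    p_both_yes = (sum(a) / n) * (sum(b) / n)
    p_both_no = (1 - sum(a) / n) * (1 - sum(b) / n)
    p_expected = p_both_yes + p_both_no
    return (p_observed - p_expected) / (1 - p_expected)

# Illustrative visibility calls for 12 cases (1 = landmark displayed)
model_3d = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0]
mpr_ref  = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1]
print(round(cohens_kappa(model_3d, mpr_ref), 2))  # 0.56
```

By the usual reading, κ > 0.75 indicates excellent agreement and κ > 0.4 good agreement, matching the thresholds the study applies.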
Affiliation(s)
- Danheng Zhao
- College of Otolaryngology Head and Neck Surgery, Chinese PLA General Hospital, Beijing, China; Department of Otolaryngology Head and Neck Surgery, the Sixth Medical Center of PLA General Hospital, Beijing, China; National Clinical Research Center for Otolaryngologic Diseases, Beijing, China
- Qiaohui Lu
- Department of Imaging, The Sixth Medical Center of People's Liberation Army General Hospital, Beijing, China
- Shizhen Zou
- College of Otolaryngology Head and Neck Surgery, Chinese PLA General Hospital, Beijing, China; Department of Otolaryngology Head and Neck Surgery, the Sixth Medical Center of PLA General Hospital, Beijing, China; National Clinical Research Center for Otolaryngologic Diseases, Beijing, China
- Jianjun Sun
- College of Otolaryngology Head and Neck Surgery, Chinese PLA General Hospital, Beijing, China; Department of Otolaryngology Head and Neck Surgery, the Sixth Medical Center of PLA General Hospital, Beijing, China; National Clinical Research Center for Otolaryngologic Diseases, Beijing, China
- Fazong Hu
- Center of 3D Printing Technology, Shanghai, China
|
80
|
Rueckel J, Sperl JI, Kaestle S, Hoppe BF, Fink N, Rudolph J, Schwarze V, Geyer T, Strobl FF, Ricke J, Ingrisch M, Sabel BO. Reduction of missed thoracic findings in emergency whole-body computed tomography using artificial intelligence assistance. Quant Imaging Med Surg 2021; 11:2486-2498. [PMID: 34079718 DOI: 10.21037/qims-20-1037] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/16/2023]
Abstract
Background Radiology reporting of emergency whole-body computed tomography (CT) scans is time-critical and therefore involves a significant risk of pathology under-detection. We hypothesized a relevant number of initially missed secondary thoracic findings that would have been detected by an artificial intelligence (AI) software platform bundling several pathology-specific AI algorithms. Methods This retrospective proof-of-concept study consecutively included 105 shock-room whole-body CT scans. Image data were analyzed by the platform-bundled AI algorithms, and the findings were reviewed by radiology experts and compared with the original radiologist's reports. We focused on secondary thoracic findings, such as cardiomegaly, coronary artery plaques, lung lesions, aortic aneurysms and vertebral fractures. Results We identified a relevant number of initially missed findings, quantified across the 105 analyzed CT scans as follows: up to 25 patients (23.8%) with cardiomegaly or borderline heart size, 17 patients (16.2%) with coronary plaques, 34 patients (32.4%) with aortic ectasia, 2 patients (1.9%) with lung lesions classified as "recommended to control", and 13 initially missed vertebral fractures (two with an acute traumatic origin). A high number of false positive or non-relevant AI-based findings remains problematic, especially regarding lung lesions and vertebral fractures. Conclusions We consider AI a promising approach to reduce the number of missed findings in clinical settings that require time-critical radiological reporting. Nevertheless, algorithm improvement is necessary, focusing on reducing "false positive" findings and on algorithm features that assess the relevance of a finding, e.g., fracture age or lung lesion malignancy.
Affiliation(s)
- Johannes Rueckel
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Sophia Kaestle
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Boj F Hoppe
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Nicola Fink
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany; Comprehensive Pneumology Center (CPC-M), Member of the German Center for Lung Research (DZL), Munich, Germany
- Jan Rudolph
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Vincent Schwarze
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Thomas Geyer
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Frederik F Strobl
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany; Die Radiologie am Isarklinikum, Munich, Germany
- Jens Ricke
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Michael Ingrisch
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Bastian O Sabel
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
|
81
|
Gündel S, Setio AAA, Ghesu FC, Grbic S, Georgescu B, Maier A, Comaniciu D. Robust classification from noisy labels: Integrating additional knowledge for chest radiography abnormality assessment. Med Image Anal 2021; 72:102087. [PMID: 34015595 DOI: 10.1016/j.media.2021.102087] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2020] [Revised: 02/24/2021] [Accepted: 04/16/2021] [Indexed: 12/29/2022]
Abstract
Chest radiography is the most common radiographic examination performed in daily clinical practice for the detection of various heart and lung abnormalities. The large amount of data to be read and reported, with more than 100 studies per day for a single radiologist, poses a challenge in consistently maintaining high interpretation accuracy. The introduction of large-scale public datasets has led to a series of novel systems for automated abnormality classification. However, the labels of these datasets were obtained from natural-language-processed medical reports, yielding a large degree of label noise that can impact performance. In this study, we propose novel training strategies that handle label noise from such suboptimal data. Prior label probabilities were measured on a subset of training data re-read by 4 board-certified radiologists and were used during training to increase the robustness of model training to the label noise. Furthermore, we exploit the high comorbidity of abnormalities observed in chest radiography and incorporate this information to further reduce the impact of label noise. Additionally, anatomical knowledge is incorporated by training the system to predict lung and heart segmentation, as well as spatial knowledge labels. To deal with multiple datasets and images derived from various scanners that apply different post-processing techniques, we introduce a novel image normalization strategy. Experiments were performed on an extensive collection of 297,541 chest radiographs from 86,876 patients, leading to a state-of-the-art performance level for 17 abnormalities from 2 datasets. With an average AUC score of 0.880 across all abnormalities, our proposed training strategies can be used to significantly improve performance scores.
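One simple way to fold a measured prior label probability into a binary cross-entropy objective is to soften the noisy 0/1 target toward that prior, so unreliable labels pull the model less strongly. This is a hedged sketch of the general idea, not the authors' exact formulation; all names and constants are assumptions:

```python
import math

def noise_aware_bce(pred, noisy_label, label_confidence):
    """Binary cross-entropy against a softened target: the noisy 0/1
    label is blended with its estimated probability of being correct."""
    target = (label_confidence * noisy_label
              + (1 - label_confidence) * (1 - noisy_label))
    eps = 1e-7
    pred = min(max(pred, eps), 1 - eps)  # clamp for numerical safety
    return -(target * math.log(pred) + (1 - target) * math.log(1 - pred))

# A label judged 90% reliable penalizes a confident wrong prediction
# more strongly than one judged only 60% reliable.
print(noise_aware_bce(0.1, 1, 0.9) > noise_aware_bce(0.1, 1, 0.6))  # True
```

In practice the per-class confidences would come from a radiologist re-read subset, as the abstract describes.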
Affiliation(s)
- Sebastian Gündel
- Digital Technology and Innovation, Siemens Healthineers, Erlangen 91052, Germany; Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen 91058, Germany
- Arnaud A A Setio
- Digital Technology and Innovation, Siemens Healthineers, Erlangen 91052, Germany
- Florin C Ghesu
- Digital Technology and Innovation, Siemens Healthineers, Princeton, NJ 08540, USA
- Sasa Grbic
- Digital Technology and Innovation, Siemens Healthineers, Princeton, NJ 08540, USA
- Bogdan Georgescu
- Digital Technology and Innovation, Siemens Healthineers, Princeton, NJ 08540, USA
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen 91058, Germany
- Dorin Comaniciu
- Digital Technology and Innovation, Siemens Healthineers, Princeton, NJ 08540, USA
|
82
|
Chen Z, Qiu T, Tian Y, Feng H, Zhang Y, Wang H. Automated brain structures segmentation from PET/CT images based on landmark-constrained dual-modality atlas registration. Phys Med Biol 2021; 66. [PMID: 33765673 DOI: 10.1088/1361-6560/abf201] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2020] [Accepted: 03/25/2021] [Indexed: 11/12/2022]
Abstract
Automated brain structure segmentation in positron emission tomography (PET) images has been widely investigated to aid brain disease diagnosis and follow-up. To relieve the burden of manually defining volumes of interest (VOIs), automated atlas-based VOI definition algorithms were developed, but these mostly adopted a global optimization strategy that may not be particularly accurate for small local structures (especially the deep brain structures). This paper presents a PET/CT-based brain VOI segmentation algorithm combining an anatomical atlas, local landmarks, and dual-modality information. The method incorporates local deep brain landmarks detected by a Deep Q-Network (DQN) to constrain the atlas registration process. Dual-modality PET/CT image information is also combined to improve the registration accuracy of the extracerebral contour. We compared our algorithm with representative brain atlas registration methods on 86 clinical PET/CT images. The proposed algorithm obtained accurate delineation of brain VOIs with an average Dice similarity score of 0.79, an average surface distance of 0.97 mm (sub-pixel level), and a volume recovery coefficient close to 1. The main advantage of our method is that it optimizes both global-scale brain matching and local-scale alignment of small structures around the key landmarks; it is fully automated and produces high-quality parcellation of the brain structures from brain PET/CT images.
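The Dice similarity score reported above is a standard overlap measure between two binary masks. A minimal sketch (the toy masks are illustrative, not from the study):

```python
import numpy as np

def dice_score(seg_a, seg_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    seg_a = np.asarray(seg_a, dtype=bool)
    seg_b = np.asarray(seg_b, dtype=bool)
    denom = seg_a.sum() + seg_b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(seg_a, seg_b).sum() / denom

# Two partially overlapping "VOI" masks on a toy 2D grid
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True  # 16 voxels
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True  # 16 voxels
print(round(dice_score(a, b), 2))  # 2*9 / 32 = 0.56
```

A score of 0.79 on real deep brain structures, as the abstract reports, is substantially harder to achieve than on large cortical regions, since small structures are penalized heavily for boundary errors.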
Affiliation(s)
- Zhaofeng Chen
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, People's Republic of China; School of Electronic and Information Engineering, Jiujiang University, Jiujiang 332005, People's Republic of China
- Tianshuang Qiu
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, People's Republic of China
- Yang Tian
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, People's Republic of China
- Hongbo Feng
- Department of Nuclear Medicine, First Affiliated Hospital of Dalian Medical University, Dalian 116011, People's Republic of China
- Yanjun Zhang
- Department of Nuclear Medicine, First Affiliated Hospital of Dalian Medical University, Dalian 116011, People's Republic of China
- Hongkai Wang
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, People's Republic of China
83
Lian S, Luo Z, Feng C, Li S, Li S. APRIL: Anatomical prior-guided reinforcement learning for accurate carotid lumen diameter and intima-media thickness measurement. Med Image Anal 2021; 71:102040. [PMID: 33789178 DOI: 10.1016/j.media.2021.102040] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 08/22/2020] [Revised: 01/30/2021] [Accepted: 03/09/2021] [Indexed: 01/17/2023]
Abstract
Carotid artery lumen diameter (CALD) and carotid artery intima-media thickness (CIMT) are essential factors for estimating the risk of many cardiovascular diseases. Their automatic measurement in ultrasound (US) images is an efficient assisting diagnostic procedure. Despite recent advances, existing methods still suffer from low measurement accuracy and poor prediction stability, mainly due to the following disadvantages: (1) they ignore anatomical priors and are prone to anatomically inaccurate estimates; (2) they require carefully designed post-processing, which may introduce further estimation errors; (3) they rely on massive pixel-wise annotations during training; and (4) they cannot estimate the uncertainty of their predictions. In this study, we propose the Anatomical Prior-guided ReInforcement Learning model (APRIL), which innovatively formulates the measurement of CALD & CIMT as an RL problem and dynamically incorporates the anatomical prior (AP) into the system through a novel reward. With the guidance of the AP, the designed keypoints in APRIL avoid anatomically impossible mislocations and accurately measure the CALD & CIMT from their corresponding locations. Moreover, this formulation significantly reduces human annotation effort by using only several keypoints, and helps eliminate the extra post-processing steps. Further, we introduce an uncertainty module for measuring the prediction variance, which can guide us to adaptively rectify the estimates for frames with considerable uncertainty. Experiments on a challenging carotid US dataset show that APRIL achieves an MAE (in pixel/mm) of 3.02±2.23 / 0.18±0.13 for CALD and 0.96±0.70 / 0.06±0.04 for CIMT, significantly surpassing popular approaches that use more annotations.
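The prior-guided reward idea can be sketched in a few lines: accuracy is rewarded, and anatomically implausible measurements are penalized by a prior term. The plausible-range formulation and the weighting below are illustrative assumptions, not APRIL's exact reward.

```python
# Sketch of an anatomical-prior-guided reward: accuracy term minus a
# penalty for measurements outside a plausible anatomical range.
# Range bounds and the weight `lam` are illustrative assumptions.

def prior_guided_reward(pred_mm, true_mm, plausible_lo, plausible_hi, lam=1.0):
    # Accuracy term: negative absolute measurement error (mm).
    accuracy = -abs(pred_mm - true_mm)
    # Anatomical-prior term: distance outside the plausible range.
    violation = max(0.0, plausible_lo - pred_mm) + max(0.0, pred_mm - plausible_hi)
    return accuracy - lam * violation
```

An RL agent maximizing this reward is pushed toward accurate estimates while anatomically impossible mislocations are doubly penalized.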
Affiliation(s)
- Sheng Lian
- Department of Artificial Intelligence, Xiamen University, Xiamen, Fujian, China; Digital Image Group (DIG), London, ON, Canada; School of Biomedical Engineering, Western University, London, ON, Canada
- Zhiming Luo
- Department of Artificial Intelligence, Xiamen University, Xiamen, Fujian, China
- Cheng Feng
- Department of Ultrasound, The Second Affiliated Hospital, Southern University of Science and Technology, Shenzhen Third People's Hospital, Shenzhen, Guangdong, China
- Shaozi Li
- Department of Artificial Intelligence, Xiamen University, Xiamen, Fujian, China
- Shuo Li
- Digital Image Group (DIG), London, ON, Canada; School of Biomedical Engineering, Western University, London, ON, Canada
84
Ribeiro JM, Astudillo P, de Backer O, Budde R, Nuis RJ, Goudzwaard J, Van Mieghem NM, Lumens J, Mortier P, Mattace-Raso F, Boersma E, Cummins P, Bruining N, de Jaegere PP. Artificial Intelligence and Transcatheter Interventions for Structural Heart Disease: A glance at the (near) future. Trends Cardiovasc Med 2021; 32:153-159. [PMID: 33581255 DOI: 10.1016/j.tcm.2021.02.002] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Received: 12/06/2020] [Revised: 02/03/2021] [Accepted: 02/04/2021] [Indexed: 01/16/2023]
Abstract
With innovations in therapeutic technologies and changes in population demographics, transcatheter interventions for structural heart disease have become the preferred treatment, and their use will keep growing. Yet thorough clinical selection and an efficient pathway from diagnosis to treatment and follow-up are mandatory. In this review we reflect on how artificial intelligence may help to improve patient selection, pre-procedural planning, procedure execution, and follow-up, so as to establish efficient, high-quality health care for an increasing number of patients.
Affiliation(s)
- Joana Maria Ribeiro
- Department of Cardiology, Thoraxcenter, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Cardiology, Centro Hospitalar de Entre o Douro e Vouga, Santa Maria da Feira, Portugal; Department of Cardiology, Centro Hospitalar e Universitário de Coimbra, Coimbra, Portugal
- Ole de Backer
- Department of Cardiology, Rigshospitalet University Hospital, Copenhagen, Denmark
- Ricardo Budde
- Department of Radiology, Erasmus Medical Center, Rotterdam, the Netherlands
- Rutger Jan Nuis
- Department of Cardiology, Thoraxcenter, Erasmus Medical Center, Rotterdam, the Netherlands
- Jeanette Goudzwaard
- Department of Internal Medicine, Erasmus Medical Center, Rotterdam, the Netherlands
- Nicolas M Van Mieghem
- Department of Cardiology, Thoraxcenter, Erasmus Medical Center, Rotterdam, the Netherlands
- Joost Lumens
- CARIM School for Cardiovascular Diseases, Maastricht University Medical Center, Maastricht, the Netherlands
- Eric Boersma
- Department of Cardiology, Thoraxcenter, Erasmus Medical Center, Rotterdam, the Netherlands
- Paul Cummins
- Department of Cardiology, Thoraxcenter, Erasmus Medical Center, Rotterdam, the Netherlands
- Nico Bruining
- Department of Cardiology, Thoraxcenter, Erasmus Medical Center, Rotterdam, the Netherlands
- Peter PT de Jaegere
- Department of Cardiology, Thoraxcenter, Erasmus Medical Center, Rotterdam, the Netherlands
85
Zhong X, Amrehn M, Ravikumar N, Chen S, Strobel N, Birkhold A, Kowarschik M, Fahrig R, Maier A. Deep action learning enables robust 3D segmentation of body organs in various CT and MRI images. Sci Rep 2021; 11:3311. [PMID: 33558570 PMCID: PMC7870874 DOI: 10.1038/s41598-021-82370-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 06/14/2020] [Accepted: 01/14/2021] [Indexed: 11/09/2022] Open
Abstract
In this study, we propose a novel point cloud based 3D registration and segmentation framework using reinforcement learning. An artificial agent, implemented as distinct actor and value networks, is trained to predict the optimal piece-wise linear transformation of a point cloud for the joint tasks of registration and segmentation. The actor network estimates a set of plausible actions and the value network aims to select the optimal action for the current observation. Point-wise features that comprise spatial positions (and surface normal vectors in the case of structured meshes), and their corresponding image features, are used to encode the observation and represent the underlying 3D volume. The actor and value networks are applied iteratively to estimate a sequence of transformations that enable accurate delineation of object boundaries. The proposed approach was extensively evaluated in both segmentation and registration tasks using a variety of challenging clinical datasets. Our method has fewer trainable parameters and lower computational complexity than the 3D U-Net, and it is independent of the volume resolution. We show that the proposed method is applicable to mono- and multi-modal segmentation tasks, achieving significant improvements over the state of the art for the latter. The flexibility of the proposed framework is further demonstrated in a multi-modal registration application. As we learn to predict actions rather than a target, the proposed method is more robust than the 3D U-Net when dealing with previously unseen datasets acquired using different protocols or modalities. As a result, the proposed method provides a promising multi-purpose segmentation and registration framework, particularly in the context of image-guided interventions.
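The iterate-and-score loop of such actor/value approaches can be reduced to a toy: candidate point-cloud moves ("actions") are scored by a stand-in value function, and the best move is applied until no action improves the fit. The 1D translations and the negative-distance score below are illustrative simplifications of the paper's piece-wise linear transformations and learned value network.

```python
# Toy actor/value loop for point-cloud registration (1D, illustrative).
# A learned value network would score actions; here the score is simply
# the negative mean distance between the moved cloud and the target.

def mean_dist(cloud, target):
    return sum(abs(a - b) for a, b in zip(cloud, target)) / len(cloud)

def register(cloud, target, steps=20):
    actions = [-1.0, -0.5, 0.5, 1.0]  # candidate translations
    for _ in range(steps):
        # "Actor" proposes actions; the stand-in value function scores them.
        best = max(actions, key=lambda s: -mean_dist([p + s for p in cloud], target))
        moved = [p + best for p in cloud]
        if mean_dist(moved, target) >= mean_dist(cloud, target):
            break  # no improving action left
        cloud = moved
    return cloud
```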
Affiliation(s)
- Xia Zhong
- Pattern Recognition Lab, Friedrich-Alexander University, Erlangen-Nürnberg, Germany
- Mario Amrehn
- Pattern Recognition Lab, Friedrich-Alexander University, Erlangen-Nürnberg, Germany
- Nishant Ravikumar
- Pattern Recognition Lab, Friedrich-Alexander University, Erlangen-Nürnberg, Germany
- Shuqing Chen
- Pattern Recognition Lab, Friedrich-Alexander University, Erlangen-Nürnberg, Germany
- Norbert Strobel
- Institute of Medical Engineering, University of Applied Sciences, Würzburg-Schweinfurt, Germany
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander University, Erlangen-Nürnberg, Germany
86
Zhang B, Liu H, Luo H, Li K. Automatic quality assessment for 2D fetal sonographic standard plane based on multitask learning. Medicine (Baltimore) 2021; 100:e24427. [PMID: 33530242 PMCID: PMC7850658 DOI: 10.1097/md.0000000000024427] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Received: 08/14/2020] [Revised: 11/18/2020] [Accepted: 12/31/2020] [Indexed: 01/05/2023] Open
Abstract
The quality control of fetal sonographic (FS) images is essential for correct biometric measurements and fetal anomaly diagnosis. However, quality control requires professional sonographers to perform and is often labor-intensive. To solve this problem, we propose an automatic image quality assessment scheme based on multitask learning to assist in FS image quality control. An essential criterion for FS image quality control is that all the essential anatomical structures in the section should appear full and remarkable with a clear boundary. Therefore, our scheme aims to identify those essential anatomical structures to judge whether an FS image is a standard image, which is achieved by 3 convolutional neural networks. The Feature Extraction Network extracts deep-level features of FS images. Based on the extracted features, the Class Prediction Network determines whether each structure meets the standard, and the Region Proposal Network identifies its position. The scheme has been applied to 3 types of fetal sections: the head, abdomen, and heart. The experimental results show that our method can make a quality assessment of an FS image in less than a second. Our method also achieves competitive performance in both segmentation and diagnosis compared with state-of-the-art methods.
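Once the networks emit named structure detections with confidences, the standard-plane criterion reduces to a set check: every essential structure must be present with sufficient confidence. The structure names and threshold below are hypothetical, chosen only to illustrate the rule.

```python
# Sketch of the standard-plane decision rule: an image qualifies only
# if every essential anatomical structure is detected confidently.
# Structure names and the 0.5 threshold are illustrative assumptions.

def is_standard_plane(detections, required, min_conf=0.5):
    """detections: list of (structure_name, confidence) pairs."""
    found = {name for name, conf in detections if conf >= min_conf}
    return required <= found  # all required structures must be found
```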
Affiliation(s)
- Bo Zhang
- Department of Ultrasound, West China Second Hospital, Sichuan University / Key Laboratory of Obstetrics & Gynecology, Pediatric Diseases, and Birth Defects of the Ministry of Education
- Han Liu
- Glasgow College, University of Electronic Science and Technology of China
- Hong Luo
- Department of Ultrasound, West China Second Hospital, Sichuan University / Key Laboratory of Obstetrics & Gynecology, Pediatric Diseases, and Birth Defects of the Ministry of Education
- Kejun Li
- Wangwang Technology Company, Chengdu, China
87
Wu S, Guo C, Wang X. Application of Two New Feature Fusion Networks to Improve Real-time Prostate Capsula Detection. Curr Med Imaging 2021; 17:1128-1136. [PMID: 33511951 DOI: 10.2174/1573405617666210129110832] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 09/03/2020] [Revised: 12/18/2020] [Accepted: 12/21/2020] [Indexed: 11/22/2022]
Abstract
BACKGROUND Excess prostate tissue is trimmed near the prostate capsula boundary during transurethral plasma kinetic enucleation of the prostate (PKEP) and transurethral bipolar plasmakinetic resection of the prostate (PKRP) surgeries. If too much tissue is removed, a prostate capsula perforation can occur. As such, real-time, accurate prostate capsula (PC) detection is critical for the prevention of these perforations. OBJECTIVE This study investigated the potential of image denoising, image dimension reduction, and feature fusion to improve real-time prostate capsula detection, with two objectives. First, this paper mainly studied feature selection and input dimension reduction. Second, image denoising was evaluated, as both are of paramount importance to transient stability assessment based on neural networks. METHOD Two new feature fusion techniques, maxpooling bilinear interpolation single-shot multibox detector (PBSSD) and bilinear interpolation single-shot multibox detector (BSSD), were proposed. Before original images were sent to the neural network, they were processed by principal component analysis (PCA) and an adaptive median filter (AMF) for dimension reduction and image denoising. RESULTS The results showed that application of PCA and AMF with PBSSD increased the mean average precision (mAP) for prostate capsula images by 8.55%, reaching 80.15%, compared with the single-shot multibox detector (SSD) alone. Application of PCA with BSSD increased the mAP for prostate capsula images by 4.6% compared with SSD alone. CONCLUSION Compared with other methods, ours were proven more accurate for real-time prostate capsula detection. The improved mAP results suggest that the proposed approaches are powerful tools for improving SSD networks.
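The denoise-before-detect step can be illustrated with a minimal median filter. The paper uses an adaptive median filter on 2D images; the fixed 3-tap 1D version below is a deliberate simplification that only shows why median filtering suppresses impulse noise while preserving edges.

```python
# Minimal 1D median-filter sketch of the denoising step applied before
# detection. The paper's adaptive median filter varies its window size;
# this fixed 3-sample window is a simplification for illustration.

def median3(signal):
    out = list(signal)  # endpoints are left unchanged
    for i in range(1, len(signal) - 1):
        out[i] = sorted(signal[i - 1:i + 2])[1]  # middle of 3 neighbors
    return out
```

A single impulse (salt-and-pepper spike) is removed without smearing the surrounding values, which is why median filtering is preferred over mean filtering ahead of edge-sensitive detectors.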
Affiliation(s)
- Shixiao Wu
- Communication and Information System, School of Electronic Information, Wuhan University, Wuhan, China
- Chengcheng Guo
- Communication and Information System, School of Electronic Information, Wuhan University, Wuhan, China
- Xinghuan Wang
- Urology, Zhongnan Hospital of Wuhan University, Wuhan, China
88
Xu C, Zhang D, Chong J, Chen B, Li S. Synthesis of gadolinium-enhanced liver tumors on nonenhanced liver MR images using pixel-level graph reinforcement learning. Med Image Anal 2021; 69:101976. [PMID: 33535110 DOI: 10.1016/j.media.2021.101976] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Received: 08/12/2020] [Revised: 01/11/2021] [Accepted: 01/18/2021] [Indexed: 01/24/2023]
Abstract
If successful, synthesis of gadolinium (Gd)-enhanced liver tumors on nonenhanced liver MR images will be critical for liver tumor diagnosis and treatment. This synthesis would offer a safe, efficient, and low-cost clinical alternative that eliminates the use of contrast agents in the current clinical workflow, significantly benefiting global healthcare systems. In this study, we propose a novel pixel-level graph reinforcement learning method (Pix-GRL). This method directly takes regular nonenhanced liver images as input and outputs AI-enhanced liver tumor images, making them comparable to traditional Gd-enhanced liver tumor images. In Pix-GRL, each pixel has a pixel-level agent; the agent explores the pixel's features and outputs a pixel-level action to iteratively change the pixel value, ultimately generating AI-enhanced liver tumor images. Most importantly, Pix-GRL creatively embeds a graph convolution to represent all the pixel-level agents. The graph convolution is deployed in the agents' feature exploration to improve effectiveness through the aggregation of long-range contextual features, and in their action output to enhance efficiency through shared parameter training between agents. Moreover, Pix-GRL uses a novel reward to measure each pixel-level action, significantly improving performance by considering the improvement each action brings to a pixel's own future state as well as to those of neighboring pixels. Pix-GRL significantly upgrades existing medical DRL methods from a single agent to multiple pixel-level agents, becoming the first DRL method for medical image synthesis. Comprehensive experiments on three types of liver tumor datasets (benign, cancerous, and healthy controls) with 325 patients (24,375 images) show that our novel Pix-GRL method outperforms existing medical image synthesis learning methods. It achieved an SSIM of 0.85 ± 0.06 and a Pearson correlation coefficient of 0.92 in terms of tumor size. These results prove the potential to develop a successful clinical alternative to Gd-enhanced liver MR imaging.
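The neighbor-aware reward idea can be sketched simply: a pixel agent's reward combines its own error improvement with that of its neighbors. The 1D neighborhood and the 0.5 weighting below are illustrative assumptions, not Pix-GRL's exact formulation.

```python
# Sketch of a neighbor-aware pixel-level reward: an agent is rewarded
# both for reducing its own pixel's error and for the error reduction
# of adjacent pixels. Neighborhood and weights are illustrative.

def pixel_reward(prev_err, new_err, i, w_self=1.0, w_nb=0.5):
    own = prev_err[i] - new_err[i]  # positive if this pixel improved
    neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(prev_err)]
    nb = sum(prev_err[j] - new_err[j] for j in neighbors)
    return w_self * own + w_nb * nb
```

Coupling each agent's reward to its neighbors discourages locally greedy actions that would degrade the surrounding synthesis.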
Affiliation(s)
- Chenchu Xu
- School of Computer Science and Technology, Anhui University, Hefei, China; Department of Medical Imaging, Western University, London, ON, Canada
- Dong Zhang
- Department of Medical Imaging, Western University, London, ON, Canada
- Jaron Chong
- Department of Medical Imaging, Western University, London, ON, Canada
- Bo Chen
- School of Health Science, Western University, London, ON, Canada
- Shuo Li
- Department of Medical Imaging, Western University, London, ON, Canada
89
Winkel DJ, Breit HC, Weikert TJ, Stieltjes B. Building Large-Scale Quantitative Imaging Databases with Multi-Scale Deep Reinforcement Learning: Initial Experience with Whole-Body Organ Volumetric Analyses. J Digit Imaging 2021; 34:124-133. [PMID: 33469724 PMCID: PMC7887142 DOI: 10.1007/s10278-020-00398-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 04/17/2020] [Revised: 09/21/2020] [Accepted: 11/10/2020] [Indexed: 11/27/2022] Open
Abstract
To explore the feasibility of a fully automated workflow for whole-body volumetric analyses based on deep reinforcement learning (DRL) and to investigate the influence of contrast phase (CP) and slice thickness (ST) on the calculated organ volume. This retrospective study included 431 multiphasic CT datasets (including three CP and two ST reconstructions for the abdominal organs), totaling 10,508 organ volumes (10,344 abdominal organ volumes: liver, spleen, and kidneys; 164 lung volumes). Whole-body organ volumes were determined using multi-scale DRL for 3D anatomical landmark detection and 3D organ segmentation. Total processing time for all volumes and mean calculation time per case were recorded. Repeated-measures analyses of variance (ANOVA) were conducted to test for robustness with respect to CP and ST. The algorithm calculated organ volumes for the liver, spleen, and right and left kidney (mean volumes in milliliters (interquartile range), portal venous CP, 5 mm ST: 1868.6 (1426.9, 2157.8), 350.19 (45.46, 395.26), 186.30 (147.05, 214.99), and 181.91 (143.22, 210.35), respectively), and for the right and left lung (2363.1 (1746.3, 2851.3) and 1950.9 (1335.2, 2414.2)). We found no statistically significant effect of contrast phase or slice thickness on the organ volumes. Mean computational time per case was 10 seconds. The evaluated approach, using state-of-the-art DRL, enables fast processing of substantial amounts of data irrespective of CP and ST, allowing organ-specific volumetric databases to be built up. The volumes thus derived may serve as references for quantitative imaging follow-up.
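The reported volumes are ultimately derived from segmentation masks, which reduces to voxel counting times voxel size. A minimal conversion sketch (not the paper's code) also shows why a consistent segmentation should be robust to slice thickness: halving the slice count doubles the per-voxel volume.

```python
# Volume in milliliters from a segmentation's voxel count and the
# (x, y, z) voxel spacing in mm; 1 ml = 1000 mm^3. Illustrative helper,
# not the study's implementation.

def organ_volume_ml(voxel_count, spacing_mm):
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return voxel_count * voxel_mm3 / 1000.0
```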
Affiliation(s)
- David J Winkel
- Department of Radiology, University Hospital of Basel, Basel, Basel-Stadt, Switzerland
- Hanns-Christian Breit
- Department of Radiology, University Hospital of Basel, Basel, Basel-Stadt, Switzerland
- Thomas J Weikert
- Department of Radiology, University Hospital of Basel, Basel, Basel-Stadt, Switzerland
- Bram Stieltjes
- Department of Radiology, University Hospital of Basel, Basel, Basel-Stadt, Switzerland
90
Jeon B, Jung S, Shim H, Chang HJ. Bayesian Estimation of Geometric Morphometric Landmarks for Simultaneous Localization of Multiple Anatomies in Cardiac CT Images. Entropy (Basel) 2021; 23:E64. [PMID: 33401695 PMCID: PMC7824462 DOI: 10.3390/e23010064] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/06/2020] [Revised: 12/18/2020] [Accepted: 12/27/2020] [Indexed: 11/16/2022]
Abstract
We propose a robust method to simultaneously localize multiple objects in cardiac computed tomography angiography (CTA) images. The relative prior distributions of the multiple objects in three-dimensional (3D) space can be obtained by integrating the geometric morphological relationship of each target object to some reference objects. In cardiac CTA images, the cross-sections of the ascending and descending aorta can play the role of the reference objects. We employ a maximum a posteriori (MAP) estimator that utilizes anatomic prior knowledge to address this problem of localizing multiple objects. We propose a new per-pixel feature based on relative distances, which can define objects that have unclear boundaries. Our experimental results, targeting the four pulmonary veins (PVs) and the left atrial appendage (LAA) in cardiac CTA images, demonstrate the robustness of the proposed method. The method could also be extended to localize other multiple objects in different applications.
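The MAP idea is to combine an image-derived likelihood with a spatial prior defined relative to reference objects. The isotropic Gaussian prior, the candidate format, and the sigma below are illustrative assumptions, not the paper's exact model.

```python
import math

# Minimal MAP sketch: score each candidate location by its image
# likelihood plus a Gaussian spatial prior centered on the position
# expected from reference objects (e.g. the aorta cross-sections).
# The isotropic prior and sigma are illustrative assumptions.

def map_score(candidate, expected, likelihood, sigma=5.0):
    d = math.dist(candidate, expected)
    log_prior = -(d * d) / (2 * sigma * sigma)  # isotropic Gaussian prior
    return math.log(likelihood) + log_prior

def localize(candidates, expected):
    """candidates: list of ((x, y), likelihood) pairs; returns best (x, y)."""
    return max(candidates, key=lambda c: map_score(c[0], expected, c[1]))[0]
```

A candidate with a slightly higher likelihood but far from the anatomically expected location loses to a nearby candidate, which is the point of the prior.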
Affiliation(s)
- Byunghwan Jeon
- School of Computer Science, Kyungil University, Gyeongsan 38428, Korea
- Sunghee Jung
- CONNECT-AI R&D Center, Yonsei University College of Medicine, Seoul 03722, Korea
- Hackjoon Shim
- CONNECT-AI R&D Center, Yonsei University College of Medicine, Seoul 03722, Korea
- Hyuk-Jae Chang
- CONNECT-AI R&D Center, Yonsei University College of Medicine, Seoul 03722, Korea
- Division of Cardiology, Department of Internal Medicine, Yonsei University College of Medicine, Seoul 03722, Korea
91
Pan X, Phan TL, Adel M, Fossati C, Gaidon T, Wojak J, Guedj E. Multi-View Separable Pyramid Network for AD Prediction at MCI Stage by 18F-FDG Brain PET Imaging. IEEE Trans Med Imaging 2021; 40:81-92. [PMID: 32894711 DOI: 10.1109/tmi.2020.3022591] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Indexed: 06/11/2023]
Abstract
Alzheimer's Disease (AD), one of the main causes of death in elderly people, is characterized by Mild Cognitive Impairment (MCI) at the prodromal stage. Nevertheless, only some MCI subjects progress to AD. The main objective of this paper is thus to identify, among MCI patients, those who will develop a dementia of the AD type. 18F-FluoroDeoxyGlucose Positron Emission Tomography (18F-FDG PET) serves as a neuroimaging modality for early diagnosis, as it can reflect neural activity by measuring glucose uptake at resting state. In this paper, we design a deep network on the 18F-FDG PET modality to address the problem of AD identification at the early MCI stage. To this end, a Multi-view Separable Pyramid Network (MiSePyNet) is proposed, in which representations are learned from the axial, coronal, and sagittal views of PET scans so as to offer complementary information, and then combined to make a decision jointly. Different from the widely and naturally used 3D convolution operations for 3D images, the proposed architecture is deployed with separable convolutions from slice-wise to spatial-wise successively, which can retain spatial information and reduce training parameters compared to 2D and 3D networks, respectively. Experiments on the ADNI dataset show that the proposed method yields better performance than both traditional and deep learning-based algorithms for predicting the progression of Mild Cognitive Impairment, with a classification accuracy of 83.05%.
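The parameter saving from separable convolutions is a counting argument, sketched below. The exact factorization in MiSePyNet may differ; this assumes one k-tap slice-wise pass followed by a k x k spatial pass, and ignores bias terms.

```python
# Parameter-count sketch: full 3D kernels vs. a slice-wise-then-
# spatial-wise separable factorization (biases ignored; the paper's
# exact factorization may differ).

def conv3d_params(c_in, c_out, k):
    return c_in * c_out * k ** 3

def separable_params(c_in, c_out, k):
    # k-tap 1D pass along slices, then a k x k 2D spatial pass
    return c_in * c_out * k + c_out * c_out * k ** 2
```

For 32-channel layers with k = 3 the separable form needs fewer than half the parameters of the full 3D kernel, which is the training-efficiency argument the abstract makes.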
92
Chen H, Sung JJY. Potentials of AI in medical image analysis in Gastroenterology and Hepatology. J Gastroenterol Hepatol 2021; 36:31-38. [PMID: 33140875 DOI: 10.1111/jgh.15327] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Received: 10/16/2020] [Revised: 10/30/2020] [Accepted: 10/30/2020] [Indexed: 12/15/2022]
Abstract
Artificial intelligence (AI) technology is advancing in a wave that may carry a huge impact in the field of medicine. As gastroenterology and hepatology is a specialty that relies heavily on diagnostic imaging, endoscopy, and histopathology, AI technology promises to improve the quality and consistency of care for patients. In this review, we elucidate the development of machine learning methods, especially the visual representation mechanisms of deep learning in recognition tasks. Various AI image analysis applications in endoscopy, radiology, and pathology within gastroenterology and hepatology are covered, revealing the enormous potential of AI to assist diagnosis, prognosis, and treatment. We also discuss the promises as well as the pitfalls of AI in medical image analysis and point out future research directions.
Affiliation(s)
- Hao Chen
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong
- Joseph J Y Sung
- Department of Medicine and Therapeutics, The Chinese University of Hong Kong, Shatin, Hong Kong
93
Awasthi N, Jain G, Kalva SK, Pramanik M, Yalavarthy PK. Deep Neural Network-Based Sinogram Super-Resolution and Bandwidth Enhancement for Limited-Data Photoacoustic Tomography. IEEE Trans Ultrason Ferroelectr Freq Control 2020; 67:2660-2673. [PMID: 32142429 DOI: 10.1109/tuffc.2020.2977210] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Indexed: 05/09/2023]
Abstract
Photoacoustic tomography (PAT) is a noninvasive imaging modality combining the benefits of optical contrast with ultrasonic resolution. Analytical reconstruction algorithms for photoacoustic (PA) signals require a large number of data points for accurate image reconstruction. However, in practical scenarios, data are collected using a limited number of transducers, and are often corrupted with noise, resulting in only qualitative images. Furthermore, the collected boundary data are band-limited due to the limited bandwidth (BW) of the transducer, again making PA imaging with limited data qualitative. In this work, a deep neural network-based model, with a scaled root-mean-squared-error loss function, is proposed for super-resolution, denoising, and BW enhancement of the PA signals collected at the boundary of the domain. The proposed network has been compared with traditional as well as other popular deep-learning methods in numerical and experimental cases and is shown to improve the collected boundary data, in turn providing a superior-quality reconstructed PA image. The improvements obtained in the Pearson correlation, structural similarity index metric, and root-mean-square error were as high as 35.62%, 33.81%, and 41.07%, respectively, for phantom cases, and the signal-to-noise ratio improvement in the reconstructed PA images was as high as 11.65 dB for in vivo cases, compared with reconstructed images obtained using the original limited-BW data. Code is available at https://sites.google.com/site/sercmig/home/dnnpat.
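A scaled RMSE loss normalizes the error by the target's magnitude so that signals of different amplitudes contribute comparably. The range-based scaling below is one plausible form, shown for illustration; the paper's exact scaling may differ.

```python
import math

# Illustrative scaled root-mean-squared-error loss: RMSE normalized by
# the target's dynamic range. The exact scaling used in the paper may
# differ; this shows the idea of amplitude-invariant error.

def scaled_rmse(pred, target):
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    scale = (max(target) - min(target)) or 1.0  # guard against flat targets
    return math.sqrt(mse) / scale
```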
94
Noothout JMH, De Vos BD, Wolterink JM, Postma EM, Smeets PAM, Takx RAP, Leiner T, Viergever MA, Isgum I. Deep Learning-Based Regression and Classification for Automatic Landmark Localization in Medical Images. IEEE Trans Med Imaging 2020; 39:4011-4022. [PMID: 32746142 DOI: 10.1109/tmi.2020.3009002] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Indexed: 05/14/2023]
Abstract
In this study, we propose a fast and accurate method to automatically localize anatomical landmarks in medical images. We employ a global-to-local localization approach using fully convolutional neural networks (FCNNs). First, a global FCNN localizes multiple landmarks through the analysis of image patches, performing regression and classification simultaneously. In regression, displacement vectors pointing from the center of image patches towards landmark locations are determined. In classification, presence of landmarks of interest in the patch is established. Global landmark locations are obtained by averaging the predicted displacement vectors, where the contribution of each displacement vector is weighted by the posterior classification probability of the patch that it is pointing from. Subsequently, for each landmark localized with global localization, local analysis is performed. Specialized FCNNs refine the global landmark locations by analyzing local sub-images in a similar manner, i.e. by performing regression and classification simultaneously and combining the results. Evaluation was performed through localization of 8 anatomical landmarks in CCTA scans, 2 landmarks in olfactory MR scans, and 19 landmarks in cephalometric X-rays. We demonstrate that the method performs similarly to a second observer and is able to localize landmarks in a diverse set of medical images, differing in image modality, image dimensionality, and anatomical coverage.
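The aggregation step described above, averaging patch-wise displacement votes weighted by each patch's classification posterior, can be sketched directly. The 2D point format is an illustrative simplification of the paper's setting.

```python
# Sketch of the aggregation step: each patch votes for the landmark at
# (patch center + predicted displacement), weighted by the patch's
# posterior probability of containing the landmark.

def weighted_landmark(patch_centers, displacements, posteriors):
    total = sum(posteriors)
    x = sum(p * (c[0] + d[0])
            for c, d, p in zip(patch_centers, displacements, posteriors)) / total
    y = sum(p * (c[1] + d[1])
            for c, d, p in zip(patch_centers, displacements, posteriors)) / total
    return (x, y)
```

Patches confident that they see the landmark dominate the average, so unreliable votes from distant patches are suppressed.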
95
Tian Y, Fu S. A descriptive framework for the field of deep learning applications in medical images. Knowl Based Syst 2020. [DOI: 10.1016/j.knosys.2020.106445] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Indexed: 12/15/2022]
96
Itchhaporia D. Artificial intelligence in cardiology. Trends Cardiovasc Med 2020; 32:34-41. [PMID: 33242635 DOI: 10.1016/j.tcm.2020.11.007] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Received: 01/04/2019] [Revised: 10/19/2020] [Accepted: 11/16/2020] [Indexed: 12/22/2022]
Abstract
This review examines the current state and application of artificial intelligence (AI) and machine learning (ML) in cardiovascular medicine. AI is already changing the clinical practice of medicine in other specialties. With progress continuing in this emerging technology, its impact on cardiovascular medicine is highlighted to provide insight for the practicing clinician and to identify potential patient benefits.
Affiliation(s)
- Dipti Itchhaporia
- Hoag Hospital Newport Beach and University of California, 520 Superior Avenue, Suite 325, Newport Beach, Irvine, CA 92663, United States.
|
97
|
Rueckel J, Reidler P, Fink N, Sperl J, Geyer T, Fabritius MP, Ricke J, Ingrisch M, Sabel BO. Artificial intelligence assistance improves reporting efficiency of thoracic aortic aneurysm CT follow-up. Eur J Radiol 2020; 134:109424. [PMID: 33259990 DOI: 10.1016/j.ejrad.2020.109424] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2020] [Revised: 10/29/2020] [Accepted: 11/14/2020] [Indexed: 11/19/2022]
Abstract
OBJECTIVE Follow-up of aortic aneurysms by computed tomography (CT) is crucial to balance the risks of treatment and rupture. Artificial intelligence (AI)-assisted radiology reporting promises time savings and reduced inter-reader variability. METHODS The influence of AI assistance on the efficiency and accuracy of aortic aneurysm reporting according to the AHA/ESC guidelines was quantified based on 324 AI measurements and 1944 radiological measurements: 18 aortic aneurysm patients, each with two CT scans (arterial contrast phase, electrocardiogram-gated) at least six months apart, were included. One board-certified radiologist and two residents (8/4/2 years of experience in vascular imaging) independently assessed aortic diameters at nine landmark positions. Aneurysm extensions were compared with the original CT reports. After a three-week washout period, the CTs were re-assessed based on graphically illustrated AI measurements. RESULTS Time-consuming guideline-compliant aortic measurements revealed additional involvement of the root/arch in 80 % of aneurysms that had initially been reported as limited to the ascending aorta. AI assistance reduced mean reporting time by 63 %, from 13:01 to 04:46 min, including manual corrections of AI measurements (performed for 33.6 % of all measurements, predominantly at the sinuses of Valsalva). AI assistance reduced total diameter inter-reader variability by 42.5 % (0.42/1.16 mm with/without AI assistance, mean over all patients and landmark positions; significant reduction for 6 out of 9 measuring positions). Conventional and AI-assisted quantification of aneurysm progression differed only slightly (mean of 0.75 mm over all patients/landmark positions), not significantly exceeding the radiologists' inter-reader variability. CONCLUSIONS Guideline-compliant aortic measurement is crucial to report detailed aneurysm extension, which might affect the strategy of interventional repair. AI assistance promises improved reporting efficiency and has high potential to reduce the inter-reader variability that can hamper diagnostic follow-up accuracy. KEY POINT The time-consuming, guideline-compliant aortic aneurysm assessment is crucial to report aneurysm extension in detail; AI-assisted measurement reduces reporting time, improves extension evaluation, and reduces inter-reader variability.
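The quoted 63 % time saving follows directly from the reported per-case reporting times (13:01 vs. 04:46 min). A quick arithmetic check, using a hypothetical helper of our own:

```python
def percent_reduction(before_s, after_s):
    """Relative reduction of a duration, in percent."""
    return 100.0 * (before_s - after_s) / before_s

before = 13 * 60 + 1   # 13:01 min -> 781 s
after  = 4 * 60 + 46   # 04:46 min -> 286 s
r = percent_reduction(before, after)  # ~63.4 %, matching the reported 63 %
```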
Affiliation(s)
- J Rueckel
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany.
- P Reidler
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- N Fink
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany; Comprehensive Pneumology Center (CPC-M), Member of the German Center for Lung Research (DZL), Munich, Germany
- J Sperl
- Siemens Healthineers AG, Erlangen, Germany
- T Geyer
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- M P Fabritius
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- J Ricke
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- M Ingrisch
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- B O Sabel
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
|
98
|
Abstract
Diagnostic processes typically rely on traditional and laborious methods that are prone to human error, resulting in frequent misdiagnosis of diseases. Computational approaches are increasingly being used for more precise diagnosis of clinical pathology, diagnosis of genetic and microbial diseases, and analysis of clinical chemistry data. These approaches are progressively used to improve the reliability of testing, resulting in reduced diagnostic errors. Artificial intelligence (AI)-based computational approaches mostly rely on training sets obtained from patient data stored in clinical databases. However, the use of AI is associated with several ethical issues, including patient privacy and data ownership. The capacity of AI-based mathematical models to interpret complex clinical data frequently leads to data bias and the reporting of erroneous results based on patient data. In order to improve the reliability of computational approaches in clinical diagnostics, strategies to reduce data bias and to analyze real-life patient data need to be further refined.
Affiliation(s)
- Mohammed A Alaidarous
- Department of Medical Laboratory Sciences, College of Applied Medical Sciences, Majmaah University, Majmaah, Kingdom of Saudi Arabia.
|
99
|
Abstract
This paper presents a review of deep learning (DL)-based medical image registration methods. We summarized the latest developments and applications of DL-based registration methods in the medical field. These methods were classified into seven categories according to their methods, functions and popularity. A detailed review of each category was presented, highlighting important contributions and identifying specific challenges. A short assessment was presented following the detailed review of each category to summarize its achievements and future potential. We provided a comprehensive comparison among DL-based methods for lung and brain registration using benchmark datasets. Lastly, we analyzed the statistics of all the cited works from various aspects, revealing the popularity and future trend of DL-based medical image registration.
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
|
100
|
Sequential conditional reinforcement learning for simultaneous vertebral body detection and segmentation with modeling the spine anatomy. Med Image Anal 2020; 67:101861. [PMID: 33075640 DOI: 10.1016/j.media.2020.101861] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2019] [Revised: 08/14/2020] [Accepted: 09/30/2020] [Indexed: 10/23/2022]
Abstract
Accurate vertebral body (VB) detection and segmentation are critical for spine disease identification and diagnosis. Existing automatic VB detection and segmentation methods may produce false-positive results on background tissue or inaccurate results for the desired VB, because they usually cannot take both the global spine pattern and the local VB appearance into consideration concurrently. In this paper, we propose a Sequential Conditional Reinforcement Learning network (SCRL) to tackle the simultaneous detection and segmentation of VBs from MR spine images. The SCRL, for the first time, applies deep reinforcement learning to VB detection and segmentation. It innovatively models the spatial correlation between VBs from top to bottom as a sequential dynamic-interaction process, thereby globally focusing detection and segmentation on each VB. Simultaneously, SCRL also comprehensively perceives the local appearance features of each desired VB, thereby achieving accurate detection and segmentation results. In particular, SCRL seamlessly combines three parts: 1) an Anatomy-Modeling Reinforcement Learning Network dynamically interacts with the image and focuses an attention region on the VB; 2) a Fully-Connected Residual Neural Network learns rich global context information about the VB, including both detailed low-level features and abstracted high-level features, to detect an accurate bounding box of the VB based on the attention region; 3) a Y-shaped Network learns comprehensive, detailed texture information about the VB, including multi-scale, coarse-to-fine features, to segment the boundary of the VB from the attention region. On 240 subjects, SCRL achieves accurate detection and segmentation results, with an average detection IoU of 92.3%, segmentation Dice of 92.6%, and classification mean accuracy of 96.4%. These results demonstrate that SCRL can serve as an efficient diagnostic aid to assist clinicians in diagnosing spinal diseases.
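The detection IoU and segmentation Dice figures quoted above are standard overlap metrics between a predicted and a reference binary mask. A minimal sketch of both (illustrative only, not the authors' code):

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-Union of two binary masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0  # both empty -> perfect match

def dice(pred, target):
    """Dice coefficient of two binary masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2.0 * inter / total if total else 1.0

a = np.array([[1, 1, 0], [1, 0, 0]])  # 3 foreground pixels
b = np.array([[1, 1, 0], [0, 0, 0]])  # 2 foreground pixels, 2 overlapping
# iou(a, b) = 2/3, dice(a, b) = 0.8
```

Note that Dice is always at least as large as IoU on the same pair of masks, which is consistent with the reported 92.6% Dice alongside 92.3% IoU.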
|