1
Alwood BT, Meyer DM, Ionita C, Snyder KV, Santos R, Perrotta L, Crooks R, Van Orden K, Torres D, Poynor B, Pham N, Kelly S, Meyer BC, Bolar DS. Multicenter comparison using two AI stroke CT perfusion software packages for determining thrombectomy eligibility. J Stroke Cerebrovasc Dis 2024; 33:107750. PMID: 38703875. DOI: 10.1016/j.jstrokecerebrovasdis.2024.107750.
Abstract
BACKGROUND Stroke AI platforms assess infarcted core and potentially salvageable tissue (penumbra) to identify patients suitable for mechanical thrombectomy. Few studies have compared outputs of these platforms, and none have been multicenter or considered NIHSS or scanner/protocol differences. Our objective was to compare volume estimates and thrombectomy eligibility from two widely used CT perfusion (CTP) packages, Viz.ai and RAPID.AI, in a large multicenter cohort. METHODS We analyzed CTP data of acute stroke patients with large vessel occlusion (LVO) from four institutions. Core and penumbra volumes were estimated by each software package, and DEFUSE-3 thrombectomy eligibility was assessed. Results were compared between packages and categorized by NIHSS score, scanner manufacturer/model, and institution. RESULTS Primary analysis of 362 cases found statistically significant differences between the two packages' volume estimates, with subgroup analysis showing these differences were driven by results from a single scanner model, the Canon Aquilion One. Viz.ai provided larger estimates, with mean differences of 8 cc and 18 cc for core and penumbra, respectively (p<0.001). NIHSS subgroup analysis also showed systematically larger Viz.ai volumes (p<0.001). Despite the volume differences, no significant difference in thrombectomy eligibility was found. Additional subgroup analysis showed significant differences in penumbra volume for the Philips Ingenuity scanner, and in thrombectomy eligibility for the Canon Aquilion One scanner at one center (7% increased eligibility with Viz.ai, p=0.03). CONCLUSIONS Despite systematic differences in core and penumbra volume estimates between Viz.ai and RAPID.AI, DEFUSE-3 eligibility was not statistically different in primary or NIHSS subgroup analysis.
A DEFUSE-3 eligibility difference, however, was seen on one scanner at one institution, suggesting scanner model and local CTP protocols can influence performance and cause discrepancies in thrombectomy eligibility. We thus recommend centers discuss optimal scanning protocols with software vendors and scanner manufacturers to maximize CTP accuracy.
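To make the eligibility rule concrete, the DEFUSE-3 perfusion-imaging criteria applied in this study can be sketched as a simple check (a minimal illustration; the function name, the example volumes, and the use of core plus penumbra as the hypoperfusion volume are our assumptions, not the study's implementation):

```python
def defuse3_eligible(core_ml: float, penumbra_ml: float) -> bool:
    """DEFUSE-3 perfusion-imaging criteria: core < 70 mL, mismatch
    (penumbra) volume >= 15 mL, and hypoperfusion/core ratio >= 1.8."""
    hypoperfusion_ml = core_ml + penumbra_ml
    mismatch_ratio = hypoperfusion_ml / core_ml if core_ml > 0 else float("inf")
    return core_ml < 70 and penumbra_ml >= 15 and mismatch_ratio >= 1.8

# A systematic ~8 cc core / ~18 cc penumbra offset between packages can
# flip a borderline case (volumes below are illustrative):
print(defuse3_eligible(core_ml=65, penumbra_ml=120))  # True
print(defuse3_eligible(core_ml=73, penumbra_ml=138))  # False: core >= 70 mL
```

This is why two packages can disagree on volumes yet still agree on eligibility for most patients: only cases near one of the three thresholds change category.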
Affiliation(s)
- Benjamin T Alwood: Department of Vascular Neurology, University of Florida, Jacksonville, FL, United States; University of California San Diego Stroke Center, University of California San Diego, San Diego, CA, United States
- Dawn M Meyer: University of California San Diego Stroke Center, University of California San Diego, San Diego, CA, United States
- Chip Ionita: Department of Biomedical Engineering and Neurosurgery, University at Buffalo, Buffalo, NY, United States
- Kenneth V Snyder: Department of Biomedical Engineering and Neurosurgery, University at Buffalo, Buffalo, NY, United States
- Roberta Santos: Department of Vascular Neurology, University of Florida, Jacksonville, FL, United States
- Lindsey Perrotta: Department of Vascular Neurology, University of Florida, Jacksonville, FL, United States
- Ryan Crooks: Department of Vascular Neurology, University of Florida, Jacksonville, FL, United States
- Kimberlee Van Orden: University of California San Diego Stroke Center, University of California San Diego, San Diego, CA, United States
- Dolores Torres: University of California San Diego Stroke Center, University of California San Diego, San Diego, CA, United States
- Briana Poynor: University of California San Diego Stroke Center, University of California San Diego, San Diego, CA, United States
- Nhan Pham: Department of Radiology, University of California San Diego, San Diego, CA, United States
- Sophie Kelly: Department of Radiology, University of California San Diego, San Diego, CA, United States
- Brett C Meyer: University of California San Diego Stroke Center, University of California San Diego, San Diego, CA, United States
- Divya S Bolar: Department of Radiology, University of California San Diego, San Diego, CA, United States; Center for Functional MRI, University of California San Diego, San Diego, CA, United States
2
Westwood M, Ramaekers B, Grimm S, Armstrong N, Wijnen B, Ahmadu C, de Kock S, Noake C, Joore M. Software with artificial intelligence-derived algorithms for analysing CT brain scans in people with a suspected acute stroke: a systematic review and cost-effectiveness analysis. Health Technol Assess 2024; 28:1-204. PMID: 38512017. PMCID: PMC11017149. DOI: 10.3310/rdpa1487.
Abstract
Background Artificial intelligence-derived software technologies have been developed that are intended to facilitate the review of computed tomography brain scans in patients with suspected stroke. Objectives To evaluate the clinical and cost-effectiveness of using artificial intelligence-derived software to support review of computed tomography brain scans in acute stroke in the National Health Service setting. Methods Twenty-five databases were searched to July 2021. The review process included measures to minimise error and bias. Results were summarised by research question, artificial intelligence-derived software technology and study type. The health economic analysis focused on the addition of artificial intelligence-derived software-assisted review of computed tomography angiography brain scans for guiding mechanical thrombectomy treatment decisions for people with an ischaemic stroke. The de novo model (developed in R Shiny, R Foundation for Statistical Computing, Vienna, Austria) consisted of a decision tree (short-term) and a state transition model (long-term) to calculate the mean expected costs and quality-adjusted life-years for people with ischaemic stroke and suspected large-vessel occlusion comparing artificial intelligence-derived software-assisted review to usual care. Results A total of 22 studies (30 publications) were included in the review; 18/22 studies concerned artificial intelligence-derived software for the interpretation of computed tomography angiography to detect large-vessel occlusion. No study evaluated an artificial intelligence-derived software technology used as specified in the inclusion criteria for this assessment. 
For artificial intelligence-derived software technology alone, sensitivity and specificity estimates for proximal anterior circulation large-vessel occlusion were 95.4% (95% confidence interval 92.7% to 97.1%) and 79.4% (95% confidence interval 75.8% to 82.6%) for Rapid (iSchemaView, Menlo Park, CA, USA) computed tomography angiography, 91.2% (95% confidence interval 77.0% to 97.0%) and 85.0% (95% confidence interval 64.0% to 94.8%) for Viz LVO (Viz.ai, Inc., San Francisco, CA, USA) large-vessel occlusion, 83.8% (95% confidence interval 77.3% to 88.7%) and 95.7% (95% confidence interval 91.0% to 98.0%) for Brainomix (Brainomix Ltd, Oxford, UK) e-computed tomography angiography and 98.1% (95% confidence interval 94.5% to 99.3%) and 98.2% (95% confidence interval 95.5% to 99.3%) for Avicenna CINA (Avicenna AI, La Ciotat, France) large-vessel occlusion, based on one study each. These studies were not considered appropriate to inform cost-effectiveness modelling but formed the basis by which the accuracy of artificial intelligence plus human reader could be elicited by expert opinion. Probabilistic analyses based on the expert elicitation to inform the sensitivity of the diagnostic pathway indicated that the addition of artificial intelligence to detect large-vessel occlusion is potentially more effective (quality-adjusted life-year gain of 0.003), more costly (increased costs of £8.61) and cost-effective for willingness-to-pay thresholds of £3380 per quality-adjusted life-year and higher. Limitations and conclusions The available evidence is not suitable to determine the clinical effectiveness of using artificial intelligence-derived software to support the review of computed tomography brain scans in acute stroke. The economic analyses did not provide evidence to prefer the artificial intelligence-derived software strategy over current clinical practice.
However, results indicated that if the addition of artificial intelligence-derived software-assisted review for guiding mechanical thrombectomy treatment decisions increased the sensitivity of the diagnostic pathway (i.e. reduced the proportion of undetected large-vessel occlusions), this may be considered cost-effective. Future work Large, preferably multicentre, studies are needed (for all artificial intelligence-derived software technologies) that evaluate these technologies as they would be implemented in clinical practice. Study registration This study is registered as PROSPERO CRD42021269609. Funding This award was funded by the National Institute for Health and Care Research (NIHR) Evidence Synthesis programme (NIHR award ref: NIHR133836) and is published in full in Health Technology Assessment; Vol. 28, No. 11. See the NIHR Funding and Awards website for further award information.
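The headline cost-effectiveness figures reduce to one line of arithmetic. A sketch using the rounded values quoted in the abstract (the report's £3380-per-QALY threshold is derived from unrounded model outputs, so the rounded ratio below differs slightly):

```python
# Incremental cost-effectiveness ratio (ICER) = incremental cost / incremental QALYs
delta_cost_gbp = 8.61   # increased cost per patient (rounded, from the abstract)
delta_qaly = 0.003      # QALY gain per patient (rounded, from the abstract)
icer = delta_cost_gbp / delta_qaly
print(f"ICER = {icer:.0f} GBP per QALY")  # ~2870 GBP/QALY with rounded inputs
```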
Affiliation(s)
- Bram Ramaekers: Department of Clinical Epidemiology and Medical Technology Assessment, Maastricht University Medical Centre (MUMC), Maastricht, Netherlands
- Ben Wijnen: Kleijnen Systematic Reviews (KSR) Ltd, York, UK
- Caro Noake: Kleijnen Systematic Reviews (KSR) Ltd, York, UK
- Manuela Joore: Department of Clinical Epidemiology and Medical Technology Assessment, Maastricht University Medical Centre (MUMC), Maastricht, Netherlands
3
Yearley AG, Goedmakers CMW, Panahi A, Doucette J, Rana A, Ranganathan K, Smith TR. FDA-approved machine learning algorithms in neuroradiology: A systematic review of the current evidence for approval. Artif Intell Med 2023; 143:102607. PMID: 37673576. DOI: 10.1016/j.artmed.2023.102607.
Abstract
Over the past decade, machine learning (ML) and artificial intelligence (AI) have become increasingly prevalent in the medical field. In the United States, the Food and Drug Administration (FDA) is responsible for regulating AI algorithms as "medical devices" to ensure patient safety. However, recent work has shown that the FDA approval process may be deficient. In this study, we evaluate the evidence supporting FDA-approved neuroalgorithms, the subset of machine learning algorithms with applications in the central nervous system (CNS), through a systematic review of the primary literature. Articles covering the 53 FDA-approved algorithms with applications in the CNS published in PubMed, EMBASE, Google Scholar and Scopus between database inception and January 25, 2022 were queried. Initial searches identified 1505 studies, of which 92 articles met the criteria for extraction and inclusion. Studies were identified for 26 of the 53 neuroalgorithms, of which 10 algorithms had only a single peer-reviewed publication. Performance metrics were available for 15 algorithms, external validation studies were available for 24 algorithms, and studies exploring the use of algorithms in clinical practice were available for 7 algorithms. Papers studying the clinical utility of these algorithms focused on three domains: workflow efficiency, cost savings, and clinical outcomes. Our analysis suggests that there is a meaningful gap between the FDA approval of machine learning algorithms and their clinical utilization. There appears to be room for process improvement by implementation of the following recommendations: the provision of compelling evidence that algorithms perform as intended, mandating minimum sample sizes, reporting of a predefined set of performance metrics for all algorithms and clinical application of algorithms prior to widespread use. This work will serve as a baseline for future research into the ideal regulatory framework for AI applications worldwide.
Affiliation(s)
- Alexander G Yearley: Harvard Medical School, 25 Shattuck St, Boston, MA 02115, USA; Computational Neuroscience Outcomes Center (CNOC), Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115, USA
- Caroline M W Goedmakers: Computational Neuroscience Outcomes Center (CNOC), Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115, USA; Department of Neurosurgery, Leiden University Medical Center, Albinusdreef 2, 2333 ZA Leiden, Netherlands
- Armon Panahi: The George Washington University School of Medicine and Health Sciences, 2300 I St NW, Washington, DC 20052, USA
- Joanne Doucette: Computational Neuroscience Outcomes Center (CNOC), Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115, USA; School of Pharmacy, MCPHS University, 179 Longwood Ave, Boston, MA 02115, USA
- Aakanksha Rana: Computational Neuroscience Outcomes Center (CNOC), Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115, USA; Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA
- Kavitha Ranganathan: Division of Plastic Surgery, Brigham and Women's Hospital, 75 Francis St, Boston, MA 02115, USA
- Timothy R Smith: Harvard Medical School, 25 Shattuck St, Boston, MA 02115, USA; Computational Neuroscience Outcomes Center (CNOC), Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115, USA
4
Chandrabhatla AS, Kuo EA, Sokolowski JD, Kellogg RT, Park M, Mastorakos P. Artificial Intelligence and Machine Learning in the Diagnosis and Management of Stroke: A Narrative Review of United States Food and Drug Administration-Approved Technologies. J Clin Med 2023; 12:3755. PMID: 37297949. DOI: 10.3390/jcm12113755.
Abstract
Stroke is an emergency in which delays in treatment can lead to significant loss of neurological function and be fatal. Technologies that increase the speed and accuracy of stroke diagnosis or assist in post-stroke rehabilitation can improve patient outcomes. No resource exists that comprehensively assesses artificial intelligence/machine learning (AI/ML)-enabled technologies indicated for the management of ischemic and hemorrhagic stroke. We queried a United States Food and Drug Administration (FDA) database, along with PubMed and private company websites, to identify the recent literature assessing the clinical performance of FDA-approved AI/ML-enabled technologies. The FDA has approved 22 AI/ML-enabled technologies that triage brain imaging for more immediate diagnosis or promote post-stroke neurological/functional recovery. Technologies that assist with diagnosis predominantly use convolutional neural networks to identify abnormal brain images (e.g., CT perfusion). These technologies perform comparably to neuroradiologists, improve clinical workflows (e.g., time from scan acquisition to reading), and improve patient outcomes (e.g., days spent in the neurological ICU). Two devices are indicated for post-stroke rehabilitation by leveraging neuromodulation techniques. Multiple FDA-approved technologies exist that can help clinicians better diagnose and manage stroke. This review summarizes the most up-to-date literature regarding the functionality, performance, and utility of these technologies so clinicians can make informed decisions when using them in practice.
Affiliation(s)
- Anirudha S Chandrabhatla: School of Medicine, University of Virginia Health Sciences Center, 1215 Lee Street, Charlottesville, VA 22903, USA; Department of Neurological Surgery, University of Virginia Health Sciences Center, 1215 Lee Street, Charlottesville, VA 22903, USA
- Elyse A Kuo: School of Medicine, University of Virginia Health Sciences Center, 1215 Lee Street, Charlottesville, VA 22903, USA; Department of Neurological Surgery, University of Virginia Health Sciences Center, 1215 Lee Street, Charlottesville, VA 22903, USA
- Jennifer D Sokolowski: Department of Neurological Surgery, University of Virginia Health Sciences Center, 1215 Lee Street, Charlottesville, VA 22903, USA
- Ryan T Kellogg: Department of Neurological Surgery, University of Virginia Health Sciences Center, 1215 Lee Street, Charlottesville, VA 22903, USA
- Min Park: Department of Neurological Surgery, University of Virginia Health Sciences Center, 1215 Lee Street, Charlottesville, VA 22903, USA
- Panagiotis Mastorakos: Department of Neurological Surgery, University of Virginia Health Sciences Center, 1215 Lee Street, Charlottesville, VA 22903, USA; Department of Neurological Surgery, Thomas Jefferson University Hospital, 111 S 11th Street, Philadelphia, PA 19107, USA
5
Liu J, Wang J, Wu J, Gu S, Yao Y, Li J, Li Y, Ren H, Luo T. Comparison of two computed tomography perfusion post-processing software to assess infarct volume in patients with acute ischemic stroke. Front Neurosci 2023; 17:1151823. PMID: 37179549. PMCID: PMC10166848. DOI: 10.3389/fnins.2023.1151823.
Abstract
Objectives We used two automated software packages commonly employed in clinical practice, Olea Sphere (Olea) and Shukun-PerfusionGo (PerfusionGo), to compare the diagnostic utility and volumetric agreement of computed tomography perfusion (CTP)-predicted final infarct volume (FIV) with true FIV in patients with anterior-circulation acute ischemic stroke (AIS). Methods In all, 122 patients with anterior-circulation AIS who met the inclusion and exclusion criteria were retrospectively enrolled and divided into an intervention group (n = 52) and a conservative group (n = 70), according to vessel recanalization and clinical outcome (NIHSS) after treatment. Patients in both groups underwent one-stop 4D-CT angiography (CTA)/CTP, and the raw CTP data were processed on a workstation with the Olea and PerfusionGo post-processing software to calculate ischemic core (IC) and hypoperfusion (IC plus penumbra) volumes; the hypoperfusion volume in the conservative group and the IC volume in the intervention group were used to define the predicted FIV. The ITK-SNAP software was used to manually outline and measure true FIV on follow-up non-enhanced CT or MRI-DWI images. Intraclass correlation coefficient (ICC), Bland-Altman, and kappa analyses were used to compare the IC and penumbra volumes calculated by Olea and PerfusionGo and to investigate the relationship between predicted FIV and true FIV. Results The differences in IC and penumbra volumes between Olea and PerfusionGo within the same group were statistically significant (p < 0.001). Olea produced larger IC and smaller penumbra volumes than PerfusionGo. Both packages partially overestimated infarct volume, but Olea overestimated it by a significantly larger percentage. ICC analysis showed that Olea performed better than PerfusionGo (intervention-Olea: ICC 0.633, 95%CI 0.439-0.771; intervention-PerfusionGo: ICC 0.526, 95%CI 0.299-0.696; conservative-Olea: ICC 0.623, 95%CI 0.457-0.747; conservative-PerfusionGo: ICC 0.507, 95%CI 0.312-0.662). Olea and PerfusionGo had the same capacity to accurately diagnose and classify patients with infarct volume <70 ml. Conclusion The two packages differed in their evaluation of IC and penumbra. Olea's predicted FIV correlated more closely with true FIV than PerfusionGo's prediction. Accurate assessment of infarction on CTP post-processing software remains challenging. Our results may have important practice implications for the clinical use of perfusion post-processing software.
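The volumetric-agreement statistic used alongside ICC here, Bland-Altman bias and 95% limits of agreement, is straightforward to compute. A minimal sketch over made-up paired volumes (not the study's data; function and variable names are ours):

```python
from statistics import mean, stdev

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement for paired
    volume estimates from two post-processing packages."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Toy ischemic-core volumes (mL) from two hypothetical packages
pkg_a = [12.0, 35.0, 58.0, 22.0, 80.0]
pkg_b = [9.0, 30.0, 49.0, 20.0, 70.0]
bias, (lo, hi) = bland_altman(pkg_a, pkg_b)
print(f"bias = {bias:.1f} mL, 95% LoA = ({lo:.1f}, {hi:.1f}) mL")
```

A systematic bias (one package consistently larger) shows up as a non-zero mean difference even when the limits of agreement are narrow.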
Affiliation(s)
- Jiayang Liu: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Jingjie Wang: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Jiajing Wu: Department of Radiology, Hospital of PLA Army, Chongqing, China
- Sirun Gu: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Yunzhuo Yao: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Jing Li: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Yongmei Li: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Huanhuan Ren: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China; Department of Radiology, Chongqing General Hospital, Chongqing, China
- Tianyou Luo: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
6
Predictors of ghost infarct core on baseline computed tomography perfusion in stroke patients with successful recanalization after mechanical thrombectomy. Eur Radiol 2023; 33:1792-1800. PMID: 36282310. DOI: 10.1007/s00330-022-09189-1.
Abstract
OBJECTIVES To assess the predictors of ghost infarct core (GIC) in stroke patients achieving successful recanalization after mechanical thrombectomy (MT), based on final infarct volume (FIV) calculated from follow-up diffusion-weighted imaging (DWI). METHODS A total of 115 consecutive stroke patients who had undergone baseline computed tomography perfusion (CTP) scan, achieved successful recanalization after MT, and finished follow-up DWI evaluation were retrospectively enrolled. Ischemic core volume was automatically generated from baseline CTP, and FIV was determined manually based on follow-up DWI. Stroke-related risk factors and demographic, clinical, imaging, and procedural data were collected and assessed. Univariate and multivariate analyses were applied to identify the predictors of GIC. RESULTS Of the 115 included patients (31 women and 84 men; median age, 66 years), 18 patients (15.7%) showed a GIC. The GIC group showed significantly shorter time interval from stroke onset to CTP scan and that from stroke onset to recanalization (both p < 0.001), but higher ischemic core volume (p < 0.001), hypoperfused area volume (p < 0.001), mismatch area volume (p = 0.006), and hypoperfusion ratio (p = 0.001) than the no-GIC group. In multivariate analysis, time interval from stroke onset to CTP scan (odds ratio [OR], 0.983; p = 0.005) and ischemic core volume (OR, 1.073; p < 0.001) were independently associated with the occurrence of GIC. CONCLUSIONS In stroke patients achieving successful recanalization after MT, time interval from stroke onset to CTP and ischemic core volume are associated with the occurrence of GIC. Patients cannot be excluded from MT solely based on baseline CTP-derived ischemic core volume, especially for patients with a shorter onset time.
KEY POINTS
• Ghost infarct core (GIC) was found in 15.7% of patients with acute ischemic stroke (AIS) in our study cohort.
• GIC was associated with stroke onset time, volumetric parameters derived from CTP, and collateral status indicated by HIR.
• Time interval from stroke onset to CTP scan and ischemic core volume were independent predictors of GIC.
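Operationally, a ghost infarct core is a baseline core estimate that turns out larger than the final infarct. A minimal flagging sketch (the 10 mL overestimation margin is our illustrative assumption; studies operationalize GIC with varying thresholds):

```python
def is_ghost_infarct_core(baseline_core_ml, final_infarct_ml, margin_ml=10.0):
    """Flag cases where the baseline CTP core overestimates the final
    infarct on follow-up DWI by more than margin_ml (assumed threshold)."""
    return baseline_core_ml - final_infarct_ml > margin_ml

print(is_ghost_infarct_core(45.0, 20.0))  # True: core overestimated by 25 mL
print(is_ghost_infarct_core(30.0, 28.0))  # False: within the margin
```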
7
Lu Q, Fu J, Lv K, Han Y, Pan Y, Xu Y, Zhang J, Geng D. Agreement of three CT perfusion software packages in patients with acute ischemic stroke: A comparison with RAPID. Eur J Radiol 2022; 156:110500. PMID: 36099834. DOI: 10.1016/j.ejrad.2022.110500.
Abstract
PURPOSE To compare ischemic core volume (ICV) and penumbra volume (PV) measured by MIStar, F-STROKE, and Syngo.via with those measured by RAPID in acute ischemic stroke (AIS), and to assess their concordance in selecting patients for endovascular thrombectomy (EVT). METHODS Computed tomography perfusion (CTP) data were processed with the four software packages. Bland-Altman analysis and the intraclass correlation coefficient (ICC) were used to evaluate agreement in quantifying ICV and PV. The kappa test was conducted to assess consistency in the selection of EVT candidates. The correlation between predicted ICV and segmented final infarct volume (FIV) on follow-up images was investigated. RESULTS A total of 91 patients were retrospectively included. F-STROKE had the best consistency with RAPID (ICV: ICC = 0.97; PV: ICC = 0.84) and Syngo.via the worst (ICV: ICC = 0.77; PV: ICC = 0.66). F-STROKE had the narrowest limits of agreement for both ICV (-27.02, 24.40 mL) and PV (-85.59, 101.80 mL). When selecting EVT candidates, MIStar (kappa = 0.71-0.88) and F-STROKE (kappa = 0.84-0.90) had good to excellent consistency with RAPID, while Syngo.via had poor consistency (kappa = 0.20-0.41). ICV predicted by MIStar correlated most strongly with FIV (r = 0.77). CONCLUSIONS F-STROKE is most consistent with RAPID in quantifying ICV and PV. F-STROKE and MIStar select EVT candidates similarly to RAPID. Syngo.via appeared to overestimate ICV and underestimate PV, leading to an overly restrictive selection of EVT candidates.
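The consistency statistic reported for EVT-candidate selection, Cohen's kappa for two binary raters, can be sketched as follows (the eligibility vectors are invented for illustration, not taken from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters,
    e.g. two CTP packages deciding EVT eligibility for the same patients."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_exp = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical per-patient EVT eligibility calls from two packages
rapid = [True, True, False, True, False, False, True, True]
other = [True, True, False, False, False, False, True, True]
print(round(cohens_kappa(rapid, other), 2))  # 0.75
```

Because kappa corrects for chance agreement, a package that simply calls most patients eligible can show high raw agreement with RAPID but a low kappa.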
Affiliation(s)
- Qingqing Lu: Department of Radiology, Huashan Hospital, State Key Laboratory of Medical Neurobiology, Fudan University, Shanghai 200040, China; Department of Radiology, Ningbo First Hospital, Ningbo 315000, China
- Junyan Fu: Department of Radiology, Huashan Hospital, State Key Laboratory of Medical Neurobiology, Fudan University, Shanghai 200040, China
- Kun Lv: Department of Radiology, Huashan Hospital, State Key Laboratory of Medical Neurobiology, Fudan University, Shanghai 200040, China
- Yan Han: Department of Radiology, Huashan Hospital, State Key Laboratory of Medical Neurobiology, Fudan University, Shanghai 200040, China
- Yuning Pan: Department of Radiology, Ningbo First Hospital, Ningbo 315000, China
- Yiren Xu: Department of Radiology, Ningbo First Hospital, Ningbo 315000, China
- Jun Zhang: Department of Radiology, Huashan Hospital, State Key Laboratory of Medical Neurobiology, Fudan University, Shanghai 200040, China; National Center for Neurological Disorders, Shanghai 200040, China
- Daoying Geng: Department of Radiology, Huashan Hospital, State Key Laboratory of Medical Neurobiology, Fudan University, Shanghai 200040, China; Center for Shanghai Intelligent Imaging for Critical Brain Diseases Engineering and Technology Research, Huashan Hospital, Fudan University, Shanghai 200040, China
8
He Y, Luo Z, Zhou Y, Xue R, Li J, Hu H, Yan S, Chen Z, Wang J, Lou M. U-net Models Based on Computed Tomography Perfusion Predict Tissue Outcome in Patients with Different Reperfusion Patterns. Transl Stroke Res 2022; 13:707-715. PMID: 35043358. DOI: 10.1007/s12975-022-00986-w.
Abstract
Evaluation of cerebral perfusion is important for treatment selection in patients with acute large vessel occlusion (LVO). To assess ischemic core and tissue at risk more accurately, we developed a deep learning model named U-net using computed tomography perfusion (CTP) images. A total of 110 acute ischemic stroke patients undergoing endovascular treatment with major reperfusion (≥ 80%) or minimal reperfusion (≤ 20%) were included. Using baseline CTP, we developed two U-net models: one in the major reperfusion group to identify the infarct core, and the other in the minimal reperfusion group to identify tissue at risk. The performance of fixed-thresholding methods was compared with that of the U-net models. In the major reperfusion group, the model estimated infarct core with a Dice score coefficient (DSC) of 0.61 and an area under the curve (AUC) of 0.92, while fixed-thresholding methods had a DSC of 0.52. In the minimal reperfusion group, the model estimated tissue at risk with a DSC of 0.67 and an AUC of 0.93, while fixed-thresholding methods had a DSC of 0.51. In both groups, excellent volumetric consistency (intraclass correlation coefficient of 0.951 in major reperfusion and 0.746 in minimal reperfusion) was achieved between the estimated and actual lesion volumes. Thus, in patients with anterior LVO, the CTP-based U-net models identified infarct core and tissue at risk on baseline CTP better than fixed-thresholding methods, providing individualized prediction of the final lesion in patients with different reperfusion patterns.
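The Dice score coefficient used to evaluate the U-net predictions is a simple overlap measure between binary masks. A minimal sketch over toy voxel-index sets (not the study's data; names are illustrative):

```python
def dice_score(pred, truth):
    """Dice score coefficient between two binary masks, given as sets of
    voxel indices: 2*|intersection| / (|pred| + |truth|)."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # two empty masks agree perfectly
    return 2 * len(pred & truth) / (len(pred) + len(truth))

predicted = {1, 2, 3, 4, 5, 6}   # voxels the model calls infarct
actual = {4, 5, 6, 7, 8}         # voxels infarcted on follow-up imaging
print(round(dice_score(predicted, actual), 3))  # 0.545
```

Note that Dice penalizes both over- and under-segmentation, which is why a model can show excellent volumetric ICC (similar total volumes) while its Dice score remains moderate (imperfect spatial overlap).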
Affiliation(s)
- Yaode He, Zhongyu Luo, Ying Zhou, Rui Xue, Jiaping Li, Haitao Hu, Shenqiang Yan, Zhicai Chen, Jianan Wang, Min Lou: Department of Neurology, School of Medicine, the Second Affiliated Hospital of Zhejiang University, 88# Jiefang Road, Hangzhou, 310009, China