1. Ouyang D, Theurer J, Stein NR, Hughes JW, Elias P, He B, Yuan N, Duffy G, Sandhu RK, Ebinger J, Botting P, Jujjavarapu M, Claggett B, Tooley JE, Poterucha T, Chen JH, Nurok M, Perez M, Perotte A, Zou JY, Cook NR, Chugh SS, Cheng S, Albert CM. Electrocardiographic deep learning for predicting post-procedural mortality: a model development and validation study. Lancet Digit Health 2024; 6:e70-e78. [PMID: 38065778] [DOI: 10.1016/s2589-7500(23)00220-0] [Received: 09/15/2022] [Revised: 10/01/2023] [Accepted: 10/18/2023] [Indexed: 12/22/2023]
Abstract
BACKGROUND: Preoperative risk assessments used in clinical practice are insufficient to identify risk of postoperative mortality. Deep-learning analysis of electrocardiography can identify hidden risk markers that help to prognosticate postoperative mortality. We aimed to develop a prognostic model that accurately predicts postoperative mortality in patients undergoing medical procedures who had received preoperative electrocardiographic diagnostic testing.

METHODS: In a derivation cohort of preoperative patients with available electrocardiograms (ECGs) from Cedars-Sinai Medical Center (Los Angeles, CA, USA) between Jan 1, 2015, and Dec 31, 2019, a deep-learning algorithm was developed to leverage waveform signals to discriminate postoperative mortality. Patients were randomly split (8:1:1) into subsets for training, internal validation, and final algorithm testing. Model performance was assessed using area under the receiver operating characteristic curve (AUC) values in the held-out test dataset and in two external hospital cohorts, and was compared with the established Revised Cardiac Risk Index (RCRI) score. The primary outcome was post-procedural mortality across three health-care systems.

FINDINGS: 45 969 patients had a complete ECG waveform image available for at least one 12-lead ECG performed within the 30 days before the procedure date (59 975 inpatient procedures and 112 794 ECGs): 36 839 patients in the training dataset, 4549 in the internal validation dataset, and 4581 in the internal test dataset. In the held-out internal test cohort, the algorithm discriminated mortality with an AUC of 0·83 (95% CI 0·79-0·87), surpassing the RCRI score (AUC 0·67 [0·61-0·72]). The algorithm similarly discriminated risk for mortality in two independent US health-care systems, with AUCs of 0·79 (0·75-0·83) and 0·75 (0·74-0·76), respectively. Patients classified as high risk by the deep-learning model had an unadjusted odds ratio (OR) of 8·83 (5·57-13·20) for postoperative mortality, compared with an unadjusted OR of 2·08 (0·77-3·50) for RCRI scores of more than 2. The deep-learning algorithm performed similarly for patients undergoing cardiac surgery (AUC 0·85 [0·77-0·92]), non-cardiac surgery (AUC 0·83 [0·79-0·88]), and catheterisation or endoscopy suite procedures (AUC 0·76 [0·72-0·81]).

INTERPRETATION: A deep-learning algorithm interpreting preoperative ECGs can improve discrimination of postoperative mortality. The algorithm worked equally well for risk stratification of cardiac surgeries, non-cardiac surgeries, and catheterisation laboratory procedures, and was validated in three independent health-care systems. It can provide additional information to clinicians deciding whether to perform medical procedures, and can stratify the risk of future complications.

FUNDING: National Heart, Lung, and Blood Institute.
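The 8:1:1 split described in the methods is at the patient level, so that a patient who contributed multiple ECGs never appears in more than one subset. A minimal sketch of such a leakage-free split (function and variable names are hypothetical, not taken from the paper):

```python
import numpy as np

def patient_level_split(patient_ids, fracs=(0.8, 0.1, 0.1), seed=0):
    """Shuffle unique patient IDs and split them 8:1:1 so that no
    patient (and hence none of their ECGs) spans two subsets."""
    rng = np.random.default_rng(seed)
    unique = np.array(sorted(set(patient_ids)))
    rng.shuffle(unique)
    n = len(unique)
    n_train = int(fracs[0] * n)
    n_val = int(fracs[1] * n)
    train = set(unique[:n_train])
    val = set(unique[n_train:n_train + n_val])
    test = set(unique[n_train + n_val:])
    return train, val, test

# Each ECG row is then routed by its patient ID, e.g.:
# subset = "train" if pid in train else ("val" if pid in val else "test")
```

Splitting by patient rather than by ECG is what makes the held-out test AUC an honest estimate when patients have repeated recordings.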
Affiliation(s)
- David Ouyang
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA; Division of Artificial Intelligence in Medicine, Department of Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- John Theurer
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Nathan R Stein
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- J Weston Hughes
- Department of Computer Science, Stanford University, Palo Alto, CA, USA
- Pierre Elias
- Milstein Division of Cardiology, Department of Medicine, Columbia University Irving Medical Center, New York, NY, USA; Department of Biomedical Informatics, Columbia University Irving Medical Center, New York, NY, USA
- Bryan He
- Department of Computer Science, Stanford University, Palo Alto, CA, USA
- Neal Yuan
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Grant Duffy
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Roopinder K Sandhu
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Joseph Ebinger
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Patrick Botting
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Melvin Jujjavarapu
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Brian Claggett
- Division of Cardiovascular Medicine, Department of Medicine, Brigham and Women's Hospital, Boston, MA, USA
- James E Tooley
- Division of Cardiology, Stanford University, Palo Alto, CA, USA
- Tim Poterucha
- Milstein Division of Cardiology, Department of Medicine, Columbia University Irving Medical Center, New York, NY, USA
- Jonathan H Chen
- Division of Bioinformatics Research, Stanford University, Palo Alto, CA, USA
- Michael Nurok
- Division of Anesthesia, Department of Surgery, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Marco Perez
- Division of Cardiology, Stanford University, Palo Alto, CA, USA
- Adler Perotte
- Milstein Division of Cardiology, Department of Medicine, Columbia University Irving Medical Center, New York, NY, USA
- James Y Zou
- Department of Computer Science, Stanford University, Palo Alto, CA, USA; Department of Medicine, and Department of Biomedical Data Science, Stanford University, Palo Alto, CA, USA
- Nancy R Cook
- Division of Preventive Medicine, Department of Medicine, Brigham and Women's Hospital, Boston, MA, USA
- Sumeet S Chugh
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA; Division of Artificial Intelligence in Medicine, Department of Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Susan Cheng
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Christine M Albert
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
2. Holmstrom L, Christensen M, Yuan N, Weston Hughes J, Theurer J, Jujjavarapu M, Fatehi P, Kwan A, Sandhu RK, Ebinger J, Cheng S, Zou J, Chugh SS, Ouyang D. Deep learning-based electrocardiographic screening for chronic kidney disease. Commun Med (Lond) 2023; 3:73. [PMID: 37237055] [DOI: 10.1038/s43856-023-00278-w] [Received: 08/22/2022] [Accepted: 03/10/2023] [Indexed: 05/28/2023] [Open Access]
Abstract
BACKGROUND: Undiagnosed chronic kidney disease (CKD) is a common and usually asymptomatic disorder that causes a high burden of morbidity and early mortality worldwide. We developed a deep learning model for CKD screening from routinely acquired ECGs.

METHODS: We collected data from a primary cohort of 111,370 patients with 247,655 ECGs recorded between 2005 and 2019. Using these data, we developed, trained, validated, and tested a deep learning model to predict whether an ECG was taken within one year of the patient receiving a CKD diagnosis. The model was additionally validated using an external cohort from another healthcare system of 312,145 patients with 896,620 ECGs recorded between 2005 and 2018.

RESULTS: Using 12-lead ECG waveforms, our deep learning algorithm achieves discrimination for CKD of any stage with an AUC of 0.767 (95% CI 0.760-0.773) in a held-out test set and an AUC of 0.709 (0.708-0.710) in the external cohort. Our 12-lead ECG-based model's performance is consistent across the severity of CKD, with an AUC of 0.753 (0.735-0.770) for mild CKD, an AUC of 0.759 (0.750-0.767) for moderate-severe CKD, and an AUC of 0.783 (0.773-0.793) for ESRD. In patients under 60 years old, our model achieves high performance in detecting any-stage CKD with both 12-lead (AUC 0.843 [0.836-0.852]) and 1-lead ECG waveforms (0.824 [0.815-0.832]).

CONCLUSIONS: Our deep learning algorithm is able to detect CKD using ECG waveforms, with stronger performance in younger patients and more severe CKD stages. This ECG algorithm has the potential to augment screening for CKD.
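The AUCs with 95% CIs reported above can be computed in spirit with a rank-based AUC and a percentile bootstrap; the abstract does not state the paper's exact CI procedure, so this is a generic sketch with hypothetical names:

```python
import numpy as np

def auc_score(y_true, y_score):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive outscores a negative (ties count half)."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()   # pairwise; fine for modest n
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def auc_with_bootstrap_ci(y_true, y_score, n_boot=1000, seed=0):
    """Point AUC plus a percentile-bootstrap 95% CI."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    point = auc_score(y_true, y_score)
    rng = np.random.default_rng(seed)
    boots = []
    n = len(y_true)
    while len(boots) < n_boot:
        idx = rng.integers(0, n, n)
        if y_true[idx].min() == y_true[idx].max():  # resample must contain both classes
            continue
        boots.append(auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return point, (lo, hi)
```

The subgroup AUCs quoted above (mild CKD, moderate-severe CKD, ESRD, under-60) would simply apply the same function to the corresponding patient subsets.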
Affiliation(s)
- Lauri Holmstrom
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Center for Cardiac Arrest Prevention, Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Research Unit of Internal Medicine, Medical Research Center Oulu, University of Oulu and Oulu University Hospital, Oulu, Finland
- Division of Artificial Intelligence in Medicine, Department of Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Matthew Christensen
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Division of Artificial Intelligence in Medicine, Department of Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Neal Yuan
- Department of Medicine, Division of Cardiology, San Francisco VA, UCSF, San Francisco, CA, USA
- J Weston Hughes
- Department of Computer Science, Stanford University, Palo Alto, CA, USA
- John Theurer
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Division of Artificial Intelligence in Medicine, Department of Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Melvin Jujjavarapu
- Enterprise Information Service, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Pedram Fatehi
- Division of Nephrology, Department of Medicine, Stanford University, Palo Alto, CA, USA
- Alan Kwan
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Roopinder K Sandhu
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Joseph Ebinger
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Susan Cheng
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- James Zou
- Department of Computer Science, Stanford University, Palo Alto, CA, USA
- Department of Biomedical Data Science, Stanford University, Palo Alto, CA, USA
- Sumeet S Chugh
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Center for Cardiac Arrest Prevention, Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Division of Artificial Intelligence in Medicine, Department of Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- David Ouyang
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Division of Artificial Intelligence in Medicine, Department of Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA
3. He B, Kwan AC, Cho JH, Yuan N, Pollick C, Shiota T, Ebinger J, Bello NA, Wei J, Josan K, Duffy G, Jujjavarapu M, Siegel R, Cheng S, Zou JY, Ouyang D. Blinded, randomized trial of sonographer versus AI cardiac function assessment. Nature 2023; 616:520-524. [PMID: 37020027] [PMCID: PMC10115627] [DOI: 10.1038/s41586-023-05947-3] [Received: 10/12/2022] [Accepted: 03/13/2023] [Indexed: 04/07/2023]
Abstract
Artificial intelligence (AI) has been developed for echocardiography [1-3], although it has not yet been tested with blinding and randomization. Here we designed a blinded, randomized non-inferiority clinical trial (ClinicalTrials.gov ID: NCT05140642; no outside funding) of AI versus sonographer initial assessment of left ventricular ejection fraction (LVEF) to evaluate the impact of AI in the interpretation workflow. The primary end point was the change in the LVEF between initial AI or sonographer assessment and final cardiologist assessment, evaluated by the proportion of studies with substantial change (more than 5% change). From 3,769 echocardiographic studies screened, 274 studies were excluded owing to poor image quality. The proportion of studies substantially changed was 16.8% in the AI group and 27.2% in the sonographer group (difference of -10.4%, 95% confidence interval: -13.2% to -7.7%, P < 0.001 for non-inferiority, P < 0.001 for superiority). The mean absolute difference between final cardiologist assessment and independent previous cardiologist assessment was 6.29% in the AI group and 7.23% in the sonographer group (difference of -0.96%, 95% confidence interval: -1.34% to -0.54%, P < 0.001 for superiority). The AI-guided workflow saved time for both sonographers and cardiologists, and cardiologists were not able to distinguish between the initial assessments by AI versus the sonographer (blinding index of 0.088). For patients undergoing echocardiographic quantification of cardiac function, initial assessment of LVEF by AI was non-inferior to assessment by sonographers.
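The primary comparison above is a difference in proportions with a 95% CI judged against a non-inferiority margin. A generic sketch of that calculation (a Wald normal-approximation interval; the trial's exact statistical procedure and margin are not given in the abstract, and the names below are hypothetical):

```python
import math

def prop_diff_ci(x1, n1, x2, n2, z=1.96):
    """Difference in event proportions (group 1 minus group 2)
    with a Wald normal-approximation 95% confidence interval."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

# Non-inferiority is declared when the upper CI bound of
# (AI minus sonographer) stays below the prespecified margin:
# diff, (lo, hi) = prop_diff_ci(x_ai, n_ai, x_sono, n_sono)
# noninferior = hi < margin
```

Because a lower proportion of substantially changed studies favors AI, the reported upper bound of -7.7% lying below zero is what supports the additional superiority claim.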
Affiliation(s)
- Bryan He
- Department of Computer Science, Stanford University, Palo Alto, CA, USA
- Alan C Kwan
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Jae Hyung Cho
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Neal Yuan
- Department of Medicine, Division of Cardiology, San Francisco VA, UCSF, San Francisco, CA, USA
- Charles Pollick
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Takahiro Shiota
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Joseph Ebinger
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Natalie A Bello
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Janet Wei
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Kiranbir Josan
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Grant Duffy
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Melvin Jujjavarapu
- Enterprise Information Services, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Robert Siegel
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Susan Cheng
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- James Y Zou
- Department of Computer Science, Stanford University, Palo Alto, CA, USA
- Department of Biomedical Data Science, Stanford University, Palo Alto, CA, USA
- David Ouyang
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Division of Artificial Intelligence in Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA