1.
Harøy J, Bache-Mathiesen LK, Andersen TE. Lower HAGOS subscale scores associated with a longer duration of groin problems in football players in the subsequent season. BMJ Open Sport Exerc Med 2024;10:e001812. PMID: 38685919; PMCID: PMC11057268; DOI: 10.1136/bmjsem-2023-001812.
Abstract
Introduction Groin injuries represent a considerable problem in football. Although the Adductor Strengthening Programme reduced groin injury risk, players can still experience groin symptoms throughout the season. This study aimed to determine whether the preseason Copenhagen Hip and Groin Outcome Score (HAGOS) and a history of previous injury can identify individuals at risk of a longer duration of groin problems in the subsequent season, using an 'any physical complaint' definition of injury. Methods Preseason HAGOS scores and weekly groin problems were registered with the Oslo Sports Trauma Research Center Overuse questionnaire during one full season in 632 male semiprofessional adult players. Results The prognostic model showed a decreased number of weeks with groin problems for each 1-point increase in the HAGOS 'groin-related quality of life' (QOL) subscale score (IRR=0.99, p=0.003); a 10-point higher 'QOL' score predicted approximately 10% fewer weeks of groin problems. Additionally, previous hip/groin injury was associated with a 74% increase in the number of weeks with symptoms (p<0.001). Conclusion The HAGOS questionnaire applied preseason can identify players at risk of experiencing more weeks with groin problems the following season, and the 'QOL' subscale appears to be the most informative subscale for estimating subsequent groin problem duration. While HAGOS appears promising for identifying players at risk, previous groin injury is the most robust indicator, associated with a substantial 74% increase in weeks with symptoms.
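A quick arithmetic check on the abstract's interpretation (a sketch only; the reported IRR is rounded to 0.99, so the "10% fewer weeks" figure is approximate): an incidence rate ratio per HAGOS point compounds multiplicatively over a 10-point difference.

```python
# IRR per 1-point increase in the 'QOL' subscale, as reported (rounded)
irr_per_point = 0.99

# multiplicative effect of a 10-point higher score on expected weeks
effect_10pt = irr_per_point ** 10   # about 0.904

# i.e. roughly a 10% reduction in the expected number of symptomatic weeks
reduction = 1.0 - effect_10pt
```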
Affiliation(s)
- Joar Harøy
- Oslo Sports Trauma Research Center, Department of Sports Medicine, Norwegian School of Sports Sciences, Oslo, Norway
- The Norwegian Football Association's Sports Medicine Center, Oslo, Norway
- Lena Kristin Bache-Mathiesen
- Oslo Sports Trauma Research Center, Department of Sports Medicine, Norwegian School of Sports Sciences, Oslo, Norway
- Thor Einar Andersen
- Oslo Sports Trauma Research Center, Department of Sports Medicine, Norwegian School of Sports Sciences, Oslo, Norway
- The Norwegian Football Association's Sports Medicine Center, Oslo, Norway
2.
Riley RD, Pate A, Dhiman P, Archer L, Martin GP, Collins GS. Clinical prediction models and the multiverse of madness. BMC Med 2023;21:502. PMID: 38110939; PMCID: PMC10729337; DOI: 10.1186/s12916-023-03212-y.
Abstract
BACKGROUND Each year, thousands of clinical prediction models are developed to make predictions (e.g. estimated risk) to inform individual diagnosis and prognosis in healthcare. However, most are not reliable for use in clinical practice. MAIN BODY We discuss how the creation of a prediction model (e.g. using regression or machine learning methods) is dependent on the sample and size of data used to develop it: were a different sample of the same size drawn from the same overarching population, the developed model could be very different, even when the same model development methods are used. In other words, for each model created, there exists a multiverse of other potential models for that sample size and, crucially, an individual's predicted value (e.g. estimated risk) may vary greatly across this multiverse. The more an individual's prediction varies across the multiverse, the greater the instability. We show how small development datasets lead to more different models in the multiverse, often with vastly unstable individual predictions, and explain how this can be exposed by using bootstrapping and presenting instability plots. We recommend healthcare researchers seek to use large model development datasets to reduce instability concerns. This is especially important to ensure reliability across subgroups and improve model fairness in practice. CONCLUSIONS Instability is concerning as an individual's predicted value is used to guide their counselling, resource prioritisation, and clinical decision making. If different samples lead to different models with very different predictions for the same individual, then this should cast doubt on using a particular model for that individual. Therefore, visualising, quantifying and reporting the instability in individual-level predictions is essential when proposing a new model.
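The bootstrap instability check described above can be sketched in pure Python: refit a model on bootstrap resamples of the development data and look at the spread of one individual's predicted risk. This is a toy illustration on simulated data with a univariable logistic model (the paper's examples use richer models and instability plots):

```python
import math
import random
import statistics

random.seed(1)

def logistic(z):
    # clamp to avoid overflow on near-separated resamples
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))

def fit_logistic(xs, ys, iters=15):
    """Newton-Raphson fit of logit(p) = a + b*x."""
    a = b = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in zip(xs, ys):
            p = logistic(a + b * x)
            w = p * (1.0 - p)
            g0 += y - p          # gradient w.r.t. intercept
            g1 += (y - p) * x    # gradient w.r.t. slope
            h00 += w             # Hessian entries (information matrix)
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01
        if abs(det) < 1e-12:     # degenerate resample: stop early
            break
        a += (h11 * g0 - h01 * g1) / det
        b += (h00 * g1 - h01 * g0) / det
    return a, b

def prediction_instability(n, boots=100, x_new=1.0):
    """SD of one individual's predicted risk across bootstrap refits."""
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    ys = [1 if random.random() < logistic(-1.0 + x) else 0 for x in xs]
    preds = []
    for _ in range(boots):
        idx = [random.randrange(n) for _ in range(n)]
        a, b = fit_logistic([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(logistic(a + b * x_new))
    return statistics.stdev(preds)

sd_small = prediction_instability(100)    # small development sample
sd_large = prediction_instability(1000)   # larger development sample
```

With the larger development sample, the same individual's predicted risk varies much less across the multiverse of refitted models, which is the paper's central point.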
Affiliation(s)
- Richard D Riley
- College of Medical and Dental Sciences, Institute of Applied Health Research, University of Birmingham, Birmingham, B15 2TT, UK
- National Institute for Health and Care Research (NIHR) Birmingham Biomedical Research Centre, Birmingham, UK
- Alexander Pate
- Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Paula Dhiman
- Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, OX3 7LD, UK
- Lucinda Archer
- College of Medical and Dental Sciences, Institute of Applied Health Research, University of Birmingham, Birmingham, B15 2TT, UK
- National Institute for Health and Care Research (NIHR) Birmingham Biomedical Research Centre, Birmingham, UK
- Glen P Martin
- Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Gary S Collins
- Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, OX3 7LD, UK
3.
Ledger A, Ceusters J, Valentin L, Testa A, Van Holsbeke C, Franchi D, Bourne T, Froyman W, Timmerman D, Van Calster B. Multiclass risk models for ovarian malignancy: an illustration of prediction uncertainty due to the choice of algorithm. BMC Med Res Methodol 2023;23:276. PMID: 38001421; PMCID: PMC10668424; DOI: 10.1186/s12874-023-02103-3.
Abstract
BACKGROUND Assessing malignancy risk is important to choose appropriate management of ovarian tumors. We compared six algorithms to estimate the probabilities that an ovarian tumor is benign, borderline malignant, stage I primary invasive, stage II-IV primary invasive, or secondary metastatic. METHODS This retrospective cohort study used 5909 patients recruited from 1999 to 2012 for model development, and 3199 patients recruited from 2012 to 2015 for model validation. Patients were recruited at oncology referral or general centers and underwent an ultrasound examination and surgery ≤120 days later. We developed models using standard multinomial logistic regression (MLR), ridge MLR, random forest (RF), XGBoost, neural networks (NN), and support vector machines (SVM). We used nine clinical and ultrasound predictors and developed models with and without CA125. RESULTS Most tumors were benign (3980 in the development and 1688 in the validation data); secondary metastatic tumors were least common (246 and 172). The c-statistic (AUROC) to discriminate benign from any type of malignant tumor ranged from 0.89 to 0.92 for models with CA125, and from 0.89 to 0.91 for models without. The multiclass c-statistic ranged from 0.41 (SVM) to 0.55 (XGBoost) for models with CA125, and from 0.42 (SVM) to 0.51 (standard MLR) for models without. Multiclass calibration was best for RF and XGBoost. Estimated probabilities for a benign tumor in the same patient often differed by more than 0.2 (20 percentage points) depending on the model. Net benefit for diagnosing malignancy was similar across algorithms at the commonly used 10% risk threshold, but was slightly higher for RF at higher thresholds. Comparing models, between 3% (XGBoost vs. NN, with CA125) and 30% (NN vs. SVM, without CA125) of patients fell on opposite sides of the 10% threshold. CONCLUSION Although several models had similarly good performance, individual probability estimates varied substantially.
Affiliation(s)
- Ashleigh Ledger
- Department of Development and Regeneration, KU Leuven, Herestraat 49 box 805, Leuven, 3000, Belgium
- Jolien Ceusters
- Department of Development and Regeneration, KU Leuven, Herestraat 49 box 805, Leuven, 3000, Belgium
- Department of Oncology, Leuven Cancer Institute, Laboratory of Tumor Immunology and Immunotherapy, KU Leuven, Leuven, Belgium
- Lil Valentin
- Department of Obstetrics and Gynecology, Skåne University Hospital, Malmö, Sweden
- Department of Clinical Sciences Malmö, Lund University, Malmö, Sweden
- Antonia Testa
- Department of Woman, Child and Public Health, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Dipartimento Universitario Scienze della Vita e Sanità Pubblica, Università Cattolica del Sacro Cuore, Rome, Italy
- Dorella Franchi
- Preventive Gynecology Unit, Division of Gynecology, European Institute of Oncology IRCCS, Milan, Italy
- Tom Bourne
- Department of Development and Regeneration, KU Leuven, Herestraat 49 box 805, Leuven, 3000, Belgium
- Department of Obstetrics and Gynecology, University Hospitals Leuven, Leuven, Belgium
- Queen Charlotte's and Chelsea Hospital, Imperial College, London, UK
- Wouter Froyman
- Department of Development and Regeneration, KU Leuven, Herestraat 49 box 805, Leuven, 3000, Belgium
- Department of Obstetrics and Gynecology, University Hospitals Leuven, Leuven, Belgium
- Dirk Timmerman
- Department of Development and Regeneration, KU Leuven, Herestraat 49 box 805, Leuven, 3000, Belgium
- Department of Obstetrics and Gynecology, University Hospitals Leuven, Leuven, Belgium
- Ben Van Calster
- Department of Development and Regeneration, KU Leuven, Herestraat 49 box 805, Leuven, 3000, Belgium
- Department of Biomedical Data Sciences, Leiden University Medical Centre (LUMC), Leiden, Netherlands
- Leuven Unit for Health Technology Assessment Research (LUHTAR), KU Leuven, Leuven, Belgium
4.
Pidborochynski T, Bozso SJ, Buchholz H, Freed DH, MacArthur R, Conway J. Predicting outcomes following short-term ventricular assist device implant with the MELD-XI score. Artif Organs 2023;47:1752-1761. PMID: 37476924; DOI: 10.1111/aor.14617.
Abstract
BACKGROUND Short-term continuous flow (STCF) ventricular assist devices (VADs) are utilized in adults with cardiogenic shock; however, mortality remains high. Previous studies have found that high pre-operative MELD-XI scores in durable VAD patients are associated with mortality. The use of the MELD-XI score to predict outcomes in STCF-VAD patients has not been explored. We sought to determine the relationship between MELD-XI and outcomes in adults with STCF-VADs. METHODS This was a retrospective review of adults implanted with STCF-VADs between 2009 and 2019. Receiver operating characteristic (ROC) analysis was performed to predict outcomes, and Kaplan-Meier analysis was done to assess survival. RESULTS Seventy-nine patients were included, with a median MELD-XI score of 21.2 (IQR 13.5, 27.0). Patients with an unsuccessful wean from support (p < 0.001) or major post-operative bleeding (p = 0.03) had significantly higher pre-implant MELD-XI scores. The optimal MELD-XI cut-point was 24.9 for mortality and 27.8 for major bleeding. Survival was worse among patients in the high-risk MELD-XI group, although the difference was not statistically significant (p = 0.09). Prior ECMO support, but not MELD-XI, was an independent predictor of unsuccessful wean (p = 0.03). CONCLUSIONS Pre-operative MELD-XI score was a moderate predictor of unsuccessful wean, with limited utility in predicting bleeding in patients on STCF-VAD support. This scoring system may be useful in the clinical setting for pre-implant risk stratification and patient counseling.
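Optimal cut-points like those reported above are typically derived from the ROC curve by maximizing Youden's J (sensitivity + specificity - 1); the abstract does not state the exact method used, so this pure-Python sketch shows that common approach on hypothetical scores and outcome labels:

```python
def youden_cutpoint(scores, labels):
    """Return (threshold, J) maximizing Youden's J.

    A case is classified positive when score >= threshold;
    labels are 1 for an event (e.g. death) and 0 otherwise.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        j = tp / pos + (1 - fp / neg) - 1   # sensitivity + specificity - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# hypothetical MELD-XI-like scores with outcome labels (1 = event)
t, j = youden_cutpoint([10, 15, 20, 25, 30, 35], [0, 0, 1, 0, 1, 1])
```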
Affiliation(s)
- Tara Pidborochynski
- Department of Pediatric Cardiology, University of Alberta, Edmonton, Alberta, Canada
- Sabin J Bozso
- Division of Cardiac Surgery, University of Alberta, Edmonton, Alberta, Canada
- Holger Buchholz
- Division of Cardiac Surgery, University of Alberta, Edmonton, Alberta, Canada
- Darren H Freed
- Division of Cardiac Surgery, University of Alberta, Edmonton, Alberta, Canada
- Division of Pediatric Cardiac Surgery, Stollery Children's Hospital, Edmonton, Alberta, Canada
- Roderick MacArthur
- Division of Cardiac Surgery, University of Alberta, Edmonton, Alberta, Canada
- Jennifer Conway
- Department of Pediatric Cardiology, University of Alberta, Edmonton, Alberta, Canada
- Division of Pediatric Cardiology, Stollery Children's Hospital, Edmonton, Alberta, Canada
5.
Sisk R, Sperrin M, Peek N, van Smeden M, Martin GP. Imputation and missing indicators for handling missing data in the development and deployment of clinical prediction models: A simulation study. Stat Methods Med Res 2023;32:1461-1477. PMID: 37105540; PMCID: PMC10515473; DOI: 10.1177/09622802231165001.
Abstract
Background: In clinical prediction modelling, missing data can occur at any stage of the model pipeline: development, validation or deployment. Multiple imputation is often recommended yet challenging to apply at deployment; for example, the outcome cannot be included in the imputation model, as recommended under multiple imputation. Regression imputation uses a fitted model to impute missing predictor values from observed data, and could offer a pragmatic alternative at deployment. Moreover, the use of missing indicators has been proposed to handle informative missingness, but it is currently unknown how well this method performs in the context of clinical prediction models. Methods: We simulated data under various missing data mechanisms to compare the predictive performance of clinical prediction models developed using both imputation methods. We consider deployment scenarios where missing data is permitted or prohibited, imputation models that use or omit the outcome, and clinical prediction models that include or omit missing indicators. We assume that the missingness mechanism remains constant across the model pipeline. We also apply the proposed strategies to critical care data. Results: With complete data available at deployment, our findings were in line with existing recommendations: the outcome should be used to impute development data when using multiple imputation and omitted under regression imputation. When missingness is allowed at deployment, omitting the outcome from the imputation model at development was preferred. Missing indicators improved model performance in many cases but can be harmful under outcome-dependent missingness. Conclusion: We provide evidence that commonly taught principles of handling missing data via multiple imputation may not apply to clinical prediction models, particularly when data can be missing at deployment. We observed comparable predictive performance under multiple imputation and regression imputation. The performance of the missing data handling method must be evaluated on a study-by-study basis, and the most appropriate strategy for handling missing data at development should consider whether missing data are allowed at deployment. Some guidance is provided.
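A toy sketch of the two strategies compared above, combined: regression imputation (fit the incomplete predictor on a fully observed one using complete cases, then fill in fitted values) plus a missing-indicator column. Variable names are illustrative, not from the paper:

```python
def regression_impute_with_indicator(x, z):
    """Fit x ~ z by least squares on complete cases, fill missing x
    (None) with fitted values, and return a missing-indicator column."""
    pairs = [(zi, xi) for zi, xi in zip(z, x) if xi is not None]
    n = len(pairs)
    mz = sum(zi for zi, _ in pairs) / n
    mx = sum(xi for _, xi in pairs) / n
    # ordinary least squares slope and intercept on complete cases
    b = (sum((zi - mz) * (xi - mx) for zi, xi in pairs)
         / sum((zi - mz) ** 2 for zi, _ in pairs))
    a = mx - b * mz
    x_imp = [xi if xi is not None else a + b * zi for zi, xi in zip(z, x)]
    indicator = [1 if xi is None else 0 for xi in x]
    return x_imp, indicator

# predictor x missing for one record; z observed for everyone
x_imp, ind = regression_impute_with_indicator([2.0, 4.0, None, 8.0],
                                              [1.0, 2.0, 3.0, 4.0])
```

Both `x_imp` and `ind` would then enter the prediction model; the paper's point is that whether the indicator helps depends on the missingness mechanism and on what is available at deployment.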
Affiliation(s)
- Rose Sisk
- Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, University of Manchester, Manchester, UK
- Gendius Ltd, Macclesfield, UK
- Matthew Sperrin
- Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, University of Manchester, Manchester, UK
- Alan Turing Institute, London, UK
- Niels Peek
- Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, University of Manchester, Manchester, UK
- Alan Turing Institute, London, UK
- NIHR Manchester Biomedical Research Centre, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, University of Manchester, Manchester, UK
- Maarten van Smeden
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Glen Philip Martin
- Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, University of Manchester, Manchester, UK
6.
Cheatle MD, Giordano NA, Themelis K, Tang NKY. Suicidal thoughts and behaviors in patients with chronic pain, with and without co-occurring opioid use disorder. Pain Med 2023;24:941-948. PMID: 37014415; PMCID: PMC10391589; DOI: 10.1093/pm/pnad043.
Abstract
BACKGROUND Individuals with chronic pain and a co-occurring substance use disorder are at higher risk of suicide, but the individual and joint impacts of chronic pain and substance use disorders on suicide risk are not well defined. The objective of this study was to examine the factors associated with suicidal thoughts and behaviors in a cohort of patients with chronic non-cancer pain (CNCP), with or without concomitant opioid use disorder (OUD). DESIGN Cross-sectional cohort design. SETTING Primary care clinics, pain clinics, and substance abuse treatment facilities in Pennsylvania, Washington, and Utah. SUBJECTS In total, 609 adults with CNCP treated with long-term opioid therapy (≥6 months) who either developed an OUD (cases, n = 175) or displayed no evidence of OUD (controls, n = 434). METHODS The predicted outcome was elevated suicidal behavior in patients with CNCP, as indicated by a Suicide Behavior Questionnaire-Revised (SBQ-R) score of 8 or above. The presence of CNCP and OUD were key predictors. Covariates included demographics, pain severity, psychiatric history, pain coping, social support, depression, pain catastrophizing and mental defeat. RESULTS Participants with CNCP and co-occurring OUD had 3.44 times the odds of reporting elevated suicide scores compared to participants with chronic pain only. Multivariable modeling revealed that mental defeat, pain catastrophizing, depression, and having chronic pain with co-occurring OUD significantly increased the odds of elevated suicide scores. CONCLUSIONS CNCP with co-morbid OUD is associated with a roughly 3-fold increase in the odds of elevated suicide risk scores.
Affiliation(s)
- Martin D Cheatle
- Department of Psychiatry and Anesthesiology and Critical Care, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19106, United States
- Nicholas A Giordano
- Nell Hodgson Woodruff School of Nursing, Emory University, Atlanta, GA 30322, United States
- Kristy Themelis
- Department of Psychology, University of Warwick, Coventry CV4 7AL, United Kingdom
- Nicole K Y Tang
- Department of Psychology, University of Warwick, Coventry CV4 7AL, United Kingdom
7.
Sadatsafavi M, Yoon Lee T, Gustafson P. Uncertainty and the Value of Information in Risk Prediction Modeling. Med Decis Making 2022;42:661-671. PMID: 35209762; PMCID: PMC9194963; DOI: 10.1177/0272989X221078789.
Abstract
Background Because of the finite size of the development sample, predicted probabilities from a risk prediction model are inevitably uncertain. We apply value-of-information methodology to evaluate the decision-theoretic implications of prediction uncertainty. Methods Adopting a Bayesian perspective, we extend the definition of the expected value of perfect information (EVPI) from decision analysis to net benefit calculations in risk prediction. In the context of model development, EVPI is the expected gain in net benefit by using the correct predictions as opposed to predictions from a proposed model. We suggest bootstrap methods for sampling from the posterior distribution of predictions for EVPI calculation using Monte Carlo simulations. We used subsets of data of various sizes from a clinical trial for predicting mortality after myocardial infarction to show how EVPI changes with sample size. Results With a sample size of 1000 and at the prespecified threshold of 2% on predicted risks, the gains in net benefit using the proposed and the correct models were 0.0006 and 0.0011, respectively, resulting in an EVPI of 0.0005 and a relative EVPI of 87%. EVPI was zero only at unrealistically high thresholds (>85%). As expected, EVPI declined with larger samples. We summarize an algorithm for incorporating EVPI calculations into the commonly used bootstrap method for optimism correction. Conclusion The development EVPI can be used to decide whether a model can advance to validation, whether it should be abandoned, or whether a larger development sample is needed. Value-of-information methods can be applied to explore decision-theoretic consequences of uncertainty in risk prediction and can complement inferential methods in predictive analytics. R code for implementing this method is provided.
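Net benefit, the quantity the EVPI is defined on here, has a simple closed form at a risk threshold t; a sketch on hypothetical predictions and outcomes (the EVPI itself would then be estimated as the expected gain in net benefit from using correct rather than model-based predictions, averaged over bootstrap draws from the posterior, which is beyond this snippet):

```python
def net_benefit(p_hat, y, t):
    """Net benefit of treating when predicted risk >= t:
    TP/n - (t / (1 - t)) * FP/n, the standard decision-curve formula."""
    n = len(y)
    tp = sum(1 for p, yi in zip(p_hat, y) if p >= t and yi == 1)
    fp = sum(1 for p, yi in zip(p_hat, y) if p >= t and yi == 0)
    return tp / n - (t / (1 - t)) * fp / n

# hypothetical predicted risks and observed outcomes at threshold 0.5
nb = net_benefit([0.9, 0.8, 0.1, 0.6], [1, 1, 0, 0], 0.5)
```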
Affiliation(s)
- Mohsen Sadatsafavi
- Respiratory Evaluation Sciences Program, Collaboration for Outcomes Research and Evaluation, Faculty of Pharmaceutical Sciences, The University of British Columbia, Vancouver, Canada
- Tae Yoon Lee
- Respiratory Evaluation Sciences Program, Collaboration for Outcomes Research and Evaluation, Faculty of Pharmaceutical Sciences, The University of British Columbia, Vancouver, Canada
- Paul Gustafson
- Department of Statistics, The University of British Columbia, Vancouver, Canada
8.
Martin GP, Riley RD, Collins GS, Sperrin M. Developing clinical prediction models when adhering to minimum sample size recommendations: The importance of quantifying bootstrap variability in tuning parameters and predictive performance. Stat Methods Med Res 2021;30:2545-2561. PMID: 34623193; PMCID: PMC8649413; DOI: 10.1177/09622802211046388.
Abstract
Recent minimum sample size formulae (Riley et al.) for developing clinical prediction models help ensure that development datasets are of sufficient size to minimise overfitting. While these criteria are known to avoid excessive overfitting on average, the extent of variability in overfitting at recommended sample sizes is unknown. We investigated this through a simulation study and an empirical example, developing logistic regression clinical prediction models using unpenalised maximum likelihood estimation and various post-estimation shrinkage or penalisation methods. While the mean calibration slope was close to the ideal value of one for all methods, penalisation further reduced the level of overfitting, on average, compared to unpenalised methods. This came at the cost of higher variability in predictive performance for penalisation methods in external data. We recommend that penalisation methods are used in data that meet, or surpass, minimum sample size requirements to further mitigate overfitting, and that the variability in predictive performance and any tuning parameters should always be examined as part of the model development process, since this provides additional information over average (optimism-adjusted) performance alone. Lower variability would give reassurance that the developed clinical prediction model will perform well in new individuals from the same population as was used for model development.
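One of the Riley et al. criteria referenced above targets a global shrinkage factor S (commonly 0.9). As a sketch of how the minimum n scales with the number of predictor parameters p and the anticipated Cox-Snell R², assuming the commonly quoted form of that criterion (consult the original Riley et al. papers for the full set of criteria before using this in practice):

```python
import math

def min_sample_size_shrinkage(p, r2_cs, s=0.9):
    """Minimum n so that the expected uniform shrinkage factor is >= s:
    n = p / ((s - 1) * ln(1 - r2_cs / s)), as commonly stated for the
    shrinkage-based criterion of Riley et al. (an assumption here)."""
    return p / ((s - 1) * math.log(1 - r2_cs / s))

# e.g. 10 predictor parameters and an anticipated Cox-Snell R^2 of 0.2
n_min = min_sample_size_shrinkage(p=10, r2_cs=0.2)
```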
Affiliation(s)
- Glen P Martin
- Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, UK
- Richard D Riley
- Centre for Prognosis Research, School of Medicine, Keele University, UK
- Gary S Collins
- Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, UK
- Matthew Sperrin
- Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, UK
9.
Martin GP, Sperrin M, Snell KIE, Buchan I, Riley RD. Authors' reply to Sabour and Ghajari "Clinical prediction models to predict the risk of multiple binary outcomes: Methodological issues". Stat Med 2021;40:1861-1862. PMID: 33687094; DOI: 10.1002/sim.8872.
Affiliation(s)
- Glen Philip Martin
- Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, University of Manchester, Manchester, UK
- Matthew Sperrin
- Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, University of Manchester, Manchester, UK
- Kym I E Snell
- Centre for Prognosis Research, School of Primary, Community and Social Care, Keele University, Staffordshire, UK
- Iain Buchan
- Institute of Population Health Sciences, Faculty of Health and Life Sciences, University of Liverpool, Liverpool, UK
- Richard D Riley
- Centre for Prognosis Research, School of Primary, Community and Social Care, Keele University, Staffordshire, UK
10.
Kline JA, Camargo CA, Courtney DM, Kabrhel C, Nordenholz KE, Aufderheide T, Baugh JJ, Beiser DG, Bennett CL, Bledsoe J, Castillo E, Chisolm-Straker M, Goldberg EM, House H, House S, Jang T, Lim SC, Madsen TE, McCarthy DM, Meltzer A, Moore S, Newgard C, Pagenhardt J, Pettit KL, Pulia MS, Puskarich MA, Southerland LT, Sparks S, Turner-Lawrence D, Vrablik M, Wang A, Weekes AJ, Westafer L, Wilburn J. Clinical prediction rule for SARS-CoV-2 infection from 116 U.S. emergency departments 2-22-2021. PLoS One 2021;16:e0248438. PMID: 33690722; PMCID: PMC7946184; DOI: 10.1371/journal.pone.0248438.
Abstract
Objectives Accurate and reliable criteria to rapidly estimate the probability of infection with the novel coronavirus-2 that causes severe acute respiratory syndrome (SARS-CoV-2) and the associated disease (COVID-19) remain an urgent unmet need, especially in emergency care. The objective was to derive and validate a clinical prediction score for SARS-CoV-2 infection that uses simple criteria widely available at the point of care. Methods Data came from the national REgistry of suspected COVID-19 in EmeRgency care (RECOVER network), comprising 116 hospitals from 25 states in the US. Clinical variables and 30-day outcomes were abstracted from the medical records of 19,850 emergency department (ED) patients tested for SARS-CoV-2. The criterion standard for diagnosis of SARS-CoV-2 required a positive molecular test from a swabbed sample or positive antibody testing within 30 days. The prediction score was derived from a 50% random sample (n = 9,925) using unadjusted analysis of 107 candidate variables as a screening step, followed by stepwise forward logistic regression on 72 variables. Results Multivariable regression yielded a 13-variable score, which was simplified to a 13-point score: +1 point each for age >50 years, measured temperature >37.5°C, oxygen saturation <95%, Black race, Hispanic or Latino ethnicity, household contact with known or suspected COVID-19, patient-reported history of dry cough, anosmia/dysgeusia, myalgias or fever; and -1 point each for White race, no direct contact with an infected person, or smoking. In the validation sample (n = 9,975), the logistic regression score produced an area under the receiver operating characteristic curve of 0.80 (95% CI: 0.79-0.81), and this level of accuracy was retained across patients enrolled from the early spring to summer of 2020. In the simplified score, a score of zero produced a sensitivity of 95.6% (94.8-96.3%), specificity of 20.0% (19.0-21.0%), and a negative likelihood ratio of 0.22 (0.19-0.26). Increasing points on the simplified score predicted a higher probability of infection (e.g., >75% probability with +5 or more points). Conclusion Criteria that are available at the point of care can accurately predict the probability of SARS-CoV-2 infection. These criteria could assist with decisions about isolation and testing at high-throughput checkpoints.
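The simplified additive score can be sketched directly from the criteria listed in the abstract. The field names below are illustrative (not from the paper), and this reproduces only the simplified +1/-1 point weights, not the underlying 13-variable regression:

```python
# +1-point and -1-point criteria as listed in the abstract;
# dictionary keys are hypothetical field names
PLUS_ONE = [
    "age_over_50", "temp_over_37_5c", "spo2_under_95",
    "black_race", "hispanic_or_latino", "household_covid_contact",
    "dry_cough", "anosmia_or_dysgeusia", "myalgias_or_fever",
]
MINUS_ONE = ["white_race", "no_direct_contact", "smoking"]

def covid_point_score(patient):
    """Sum the +1/-1 criteria; patient is a dict of booleans."""
    score = sum(1 for k in PLUS_ONE if patient.get(k, False))
    score -= sum(1 for k in MINUS_ONE if patient.get(k, False))
    return score

example = {"age_over_50": True, "dry_cough": True, "smoking": True}
s = covid_point_score(example)   # +1 +1 -1
```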
Affiliation(s)
- Jeffrey A. Kline
- Department of Emergency Medicine, Indiana University School of Medicine, Indianapolis, Indiana, United States of America
- Carlos A. Camargo
- Department of Emergency Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, United States of America
- D. Mark Courtney
- Department of Emergency Medicine, University of Texas Southwestern, Dallas, Texas, United States of America
- Christopher Kabrhel
- Department of Emergency Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, United States of America
- Kristen E. Nordenholz
- Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, Colorado, United States of America
- Thomas Aufderheide
- Department of Emergency Medicine, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Joshua J. Baugh
- Department of Emergency Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, United States of America
- David G. Beiser
- Section of Emergency Medicine, University of Chicago, Chicago, Illinois, United States of America
- Christopher L. Bennett
- Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, California, United States of America
- Joseph Bledsoe
- Department of Emergency Medicine, Healthcare Delivery Institute, Intermountain Healthcare, Salt Lake City, Utah, United States of America
- Edward Castillo
- Department of Emergency Medicine, University of California, San Diego, California, United States of America
- Makini Chisolm-Straker
- Department of Emergency Medicine, Mt. Sinai School of Medicine, New York, New York, United States of America
- Elizabeth M. Goldberg
- Department of Emergency Medicine, Warren Alpert Medical School of Brown University, Providence, Rhode Island, United States of America
- Hans House
- Department of Emergency Medicine, University of Iowa School of Medicine, Iowa City, Iowa, United States of America
- Stacey House
- Department of Emergency Medicine, Washington University School of Medicine, St. Louis, Missouri, United States of America
- Timothy Jang
- Department of Emergency Medicine, David Geffen School of Medicine at UCLA, Los Angeles, California, United States of America
- Stephen C. Lim
- University Medical Center New Orleans, Louisiana State University School of Medicine, New Orleans, Louisiana, United States of America
- Troy E. Madsen
- Division of Emergency Medicine, Department of Surgery, University of Utah School of Medicine, Salt Lake City, Utah, United States of America
- Danielle M. McCarthy
- Department of Emergency Medicine, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, United States of America
- Andrew Meltzer
- Department of Emergency Medicine, George Washington University School of Medicine, Washington, DC, United States of America
- Stephen Moore
- Department of Emergency Medicine, Penn State Milton S. Hershey Medical Center, Hershey, Pennsylvania, United States of America
- Craig Newgard
- Department of Emergency Medicine, Oregon Health and Science University, Portland, Oregon, United States of America
- Justine Pagenhardt
- Department of Emergency Medicine, West Virginia University School of Medicine, Morgantown, West Virginia, United States of America
- Katherine L. Pettit
- Department of Emergency Medicine, Indiana University School of Medicine, Indianapolis, Indiana, United States of America
- Michael S. Pulia
- Department of Emergency Medicine, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, United States of America
- Michael A. Puskarich
- Department of Emergency Medicine, Hennepin County Medical Center and the University of Minnesota, Minneapolis, Minnesota, United States of America
- Lauren T. Southerland
- Department of Emergency Medicine, Ohio State University Medical Center, Columbus, Ohio, United States of America
- Scott Sparks
- Department of Emergency Medicine, Riverside Regional Medical Center, Newport News, Virginia, United States of America
- Danielle Turner-Lawrence
- Department of Emergency Medicine, Beaumont Health, Royal Oak, Michigan, United States of America
- Marie Vrablik
- Department of Emergency Medicine, University of Washington School of Medicine, Seattle, Washington, United States of America
- Alfred Wang
- Department of Emergency Medicine, Indiana University School of Medicine, Indianapolis, Indiana, United States of America
- Anthony J. Weekes
- Department of Emergency Medicine, Carolinas Medical Center at Atrium Health, Charlotte, North Carolina, United States of America
- Lauren Westafer
- Department of Emergency Medicine, Baystate Health, Springfield, Massachusetts, United States of America
- John Wilburn
- Department of Emergency Medicine, Wayne State University School of Medicine, Detroit, Michigan, United States of America
|
11
|
Mohammadi T, Mohammadi B. Drawing clinical pictures of heart failure with high mortality risk. INFORMATICS IN MEDICINE UNLOCKED 2021. [DOI: 10.1016/j.imu.2021.100752] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
|