1
Amgad M, Hodge JM, Elsebaie MAT, Bodelon C, Puvanesarajah S, Gutman DA, Siziopikou KP, Goldstein JA, Gaudet MM, Teras LR, Cooper LAD. A population-level digital histologic biomarker for enhanced prognosis of invasive breast cancer. Nat Med 2024; 30:85-97. PMID: 38012314. DOI: 10.1038/s41591-023-02643-7.
Abstract
Breast cancer is a heterogeneous disease with variable survival outcomes. Pathologists grade the microscopic appearance of breast tissue using the Nottingham criteria, which are qualitative and do not account for noncancerous elements within the tumor microenvironment. Here we present the Histomic Prognostic Signature (HiPS), a comprehensive, interpretable scoring of the survival risk incurred by breast tumor microenvironment morphology. HiPS uses deep learning to accurately map cellular and tissue structures and to measure epithelial, stromal, immune, and spatial interaction features. It was developed using a population-level cohort from the Cancer Prevention Study-II and validated using data from three independent cohorts: the Prostate, Lung, Colorectal, and Ovarian Cancer trial; Cancer Prevention Study-3; and The Cancer Genome Atlas. HiPS consistently outperformed pathologists in predicting survival outcomes, independent of tumor-node-metastasis stage and pertinent variables. This advantage was largely driven by stromal and immune features. In conclusion, HiPS is a robustly validated biomarker that can support pathologists and improve patient prognosis.
Affiliation(s)
- Mohamed Amgad: Department of Pathology, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- James M Hodge: Department of Population Science, American Cancer Society, Atlanta, GA, USA
- Maha A T Elsebaie: Department of Medicine, John H. Stroger, Jr. Hospital of Cook County, Chicago, IL, USA
- Clara Bodelon: Department of Population Science, American Cancer Society, Atlanta, GA, USA
- David A Gutman: Department of Pathology, Emory University School of Medicine, Atlanta, GA, USA
- Kalliopi P Siziopikou: Department of Pathology, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Jeffery A Goldstein: Department of Pathology, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Mia M Gaudet: Division of Cancer Epidemiology and Genetics, National Cancer Institute, Bethesda, MD, USA
- Lauren R Teras: Department of Population Science, American Cancer Society, Atlanta, GA, USA
- Lee A D Cooper: Department of Pathology, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
2
Roychoudhury S, Anderson KM, Ye J, Mukhopadhyay P. Robust Design and Analysis of Clinical Trials With Nonproportional Hazards: A Straw Man Guidance From a Cross-Pharma Working Group. Stat Biopharm Res 2021. DOI: 10.1080/19466315.2021.1874507.
Affiliation(s)
- Jiabu Ye: AstraZeneca Pharmaceuticals, Gaithersburg, MD
3
Campbell H. The consequences of checking for zero-inflation and overdispersion in the analysis of count data. Methods Ecol Evol 2021. DOI: 10.1111/2041-210x.13559.
4
Campbell H, Lakens D. Can we disregard the whole model? Omnibus non-inferiority testing for R² in multi-variable linear regression and η̂² in ANOVA. Br J Math Stat Psychol 2021; 74:64-89. PMID: 32056209. DOI: 10.1111/bmsp.12201.
Abstract
Determining a lack of association between an outcome variable and a number of different explanatory variables is frequently necessary in order to disregard a proposed model (i.e., to confirm the lack of a meaningful association between an outcome and predictors). Despite this, the literature rarely offers information about, or technical recommendations concerning, the appropriate statistical methodology to be used to accomplish this task. This paper introduces non-inferiority tests for ANOVA and linear regression analyses, which correspond to the widely used standard F tests for η̂² and R², respectively. A simulation study is conducted to examine the Type I error rates and statistical power of the tests, and a comparison is made with an alternative Bayesian testing approach. The results indicate that the proposed non-inferiority tests are a potentially useful tool for 'testing the null'.
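The 'testing the null' idea in this abstract can be illustrated by simulation: generate data whose population R² sits exactly at the non-inferiority bound Δ, then ask how often the omnibus F statistic falls at or below the one actually observed. The sketch below is a Monte Carlo approximation of that logic in plain NumPy; the function name, the data-generating mechanism, and the simulation-based p-value are illustrative assumptions (the paper's test is based on the noncentral F distribution directly), not the authors' implementation.

```python
import numpy as np

def noninferiority_p_r2(f_obs, n, k, delta, n_sim=2000, rng=None):
    """Monte Carlo p-value for H0: population R^2 >= delta vs H1: R^2 < delta.

    Simulates the omnibus F statistic at the boundary R^2 = delta and
    returns the fraction of simulated statistics at or below f_obs
    (a small observed F is evidence the model explains less than delta).
    """
    rng = np.random.default_rng(rng)
    sims = np.empty(n_sim)
    for s in range(n_sim):
        X = rng.standard_normal((n, k))
        # put all signal on one predictor so the population R^2 equals delta
        beta = np.zeros(k)
        beta[0] = np.sqrt(delta / (1.0 - delta))  # unit-variance noise below
        y = X @ beta + rng.standard_normal(n)
        Xd = np.column_stack([np.ones(n), X])     # design with intercept
        coef, _, _, _ = np.linalg.lstsq(Xd, y, rcond=None)
        resid = y - Xd @ coef
        r2 = 1.0 - (resid @ resid) / np.sum((y - y.mean()) ** 2)
        sims[s] = (r2 / k) / ((1.0 - r2) / (n - k - 1))  # omnibus F statistic
    return float(np.mean(sims <= f_obs))
```

An observed F far below the boundary distribution yields a small p-value, supporting the conclusion that the population R² lies below Δ.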
Affiliation(s)
- Harlan Campbell: Department of Statistics, University of British Columbia, Vancouver, British Columbia, Canada
5
de Jong VM, Moons KG, Riley RD, Tudur Smith C, Marson AG, Eijkemans MJ, Debray TP. Individual participant data meta-analysis of intervention studies with time-to-event outcomes: A review of the methodology and an applied example. Res Synth Methods 2020; 11:148-168. PMID: 31759339. PMCID: PMC7079159. DOI: 10.1002/jrsm.1384.
Abstract
Many randomized trials evaluate an intervention effect on time-to-event outcomes. Individual participant data (IPD) from such trials can be obtained and combined in a so-called IPD meta-analysis (IPD-MA) to summarize the overall intervention effect. We performed a narrative literature review to provide an overview of methods for conducting an IPD-MA of randomized intervention studies with a time-to-event outcome. We focused on identifying good methodological practice for modeling frailty of trial participants across trials, modeling heterogeneity of intervention effects, choosing appropriate association measures, dealing with (trial differences in) censoring and follow-up times, and addressing time-varying intervention effects and effect modification (interactions). We discuss how to achieve this using parametric and semi-parametric methods, and describe how to implement these in a one-stage or two-stage IPD-MA framework. We recommend exploring heterogeneity of the effect(s) through interaction and non-linear effects. Random effects should be applied to account for residual heterogeneity of the intervention effect. We provide further recommendations, many of which are specific to IPD-MA of time-to-event data from randomized trials examining an intervention effect. We illustrate several key methods in a real IPD-MA, in which IPD of 1225 participants from five randomized clinical trials were combined to compare the effects of carbamazepine and valproate on the incidence of epileptic seizures.
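The two-stage framework mentioned above can be sketched simply: first fit a survival model (e.g., a Cox model) in each trial to obtain a log hazard ratio and its standard error, then pool the per-trial estimates with a random-effects model. Below is a minimal NumPy sketch of the pooling stage using the DerSimonian-Laird estimator of between-trial variance; this is one common choice, offered here as an illustration, not as the specific method the review recommends, and the inputs would come from per-trial model fits rather than being supplied by hand.

```python
import numpy as np

def dersimonian_laird(log_hr, se):
    """Two-stage random-effects pooling of per-trial log hazard ratios.

    Returns the pooled log HR, its standard error, and the
    DerSimonian-Laird estimate of between-trial variance tau^2.
    """
    log_hr = np.asarray(log_hr, dtype=float)
    se = np.asarray(se, dtype=float)
    w = 1.0 / se**2                                # inverse-variance weights
    theta_fe = np.sum(w * log_hr) / np.sum(w)      # fixed-effect pooled estimate
    q = np.sum(w * (log_hr - theta_fe) ** 2)       # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(log_hr) - 1)) / c)   # between-trial variance, floored at 0
    w_re = 1.0 / (se**2 + tau2)                    # random-effects weights
    theta = np.sum(w_re * log_hr) / np.sum(w_re)   # pooled log HR
    se_theta = np.sqrt(1.0 / np.sum(w_re))
    return theta, se_theta, tau2
```

When the per-trial estimates are homogeneous, Q falls below its degrees of freedom, tau² is truncated to zero, and the result coincides with the fixed-effect (inverse-variance) pooled estimate.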
Affiliation(s)
- Valentijn M.T. de Jong: Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Karel G.M. Moons: Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands; Cochrane Netherlands, Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Richard D. Riley: Centre for Prognosis Research, Research Institute for Primary Care and Health Sciences, Keele University, Staffordshire, UK
- Anthony G. Marson: Department of Molecular and Clinical Pharmacology, University of Liverpool, Liverpool, UK
- Marinus J.C. Eijkemans: Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Thomas P.A. Debray: Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands; Cochrane Netherlands, Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
6
Morris TP, White IR, Crowther MJ. Using simulation studies to evaluate statistical methods. Stat Med 2019; 38:2074-2102. PMID: 30652356. PMCID: PMC6492164. DOI: 10.1002/sim.8086.
Abstract
Simulation studies are computer experiments that involve creating data by pseudo-random sampling. A key strength of simulation studies is the ability to understand the behavior of statistical methods because some "truth" (usually one or more parameters of interest) is known from the process of generating the data. This allows us to consider properties of methods, such as bias. While widely used, simulation studies are often poorly designed, analyzed, and reported. This tutorial outlines the rationale for using simulation studies and offers guidance for design, execution, analysis, reporting, and presentation. In particular, this tutorial provides a structured approach for planning and reporting simulation studies, which involves defining aims, data-generating mechanisms, estimands, methods, and performance measures ("ADEMP"); coherent terminology for simulation studies; guidance on coding simulation studies; a critical discussion of key performance measures and their estimation; guidance on structuring tabular and graphical presentation of results; and new graphical presentations. With a view to describing recent practice, we review 100 articles from Volume 34 of Statistics in Medicine, each of which included at least one simulation study, and identify areas for improvement.
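The ADEMP structure described in this abstract can be made concrete with a toy study. Every specific below (the exponential data-generating mechanism, the two competing estimators, the sample size, and the repetition count) is an illustrative assumption, not an example taken from the paper:

```python
import numpy as np

# A minimal ADEMP-style simulation study:
#   Aim: compare two estimators of a population mean under skewed data.
#   Data-generating mechanism: exponential(scale=1), n=50 per repetition.
#   Estimand: the population mean, mu = 1.
#   Methods: sample mean and sample median.
#   Performance measures: bias, empirical SE, and the Monte Carlo SE of bias.

rng = np.random.default_rng(2024)
n_rep, n, mu = 2000, 50, 1.0
estimates = {"mean": np.empty(n_rep), "median": np.empty(n_rep)}
for r in range(n_rep):
    x = rng.exponential(scale=1.0, size=n)
    estimates["mean"][r] = x.mean()
    estimates["median"][r] = np.median(x)

results = {}
for method, est in estimates.items():
    bias = est.mean() - mu                 # average estimate minus the estimand
    emp_se = est.std(ddof=1)               # empirical SE across repetitions
    mcse_bias = emp_se / np.sqrt(n_rep)    # Monte Carlo SE of the bias estimate
    results[method] = (bias, emp_se, mcse_bias)
```

Reporting the Monte Carlo SE alongside each performance measure, as the tutorial advises, makes clear how much of an apparent difference between methods could be simulation noise; here the median's large negative bias (it targets the population median, ln 2, rather than the mean) dwarfs its Monte Carlo SE.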
Affiliation(s)
- Tim P. Morris: London Hub for Trials Methodology Research, MRC Clinical Trials Unit at UCL, London, United Kingdom
- Ian R. White: London Hub for Trials Methodology Research, MRC Clinical Trials Unit at UCL, London, United Kingdom
- Michael J. Crowther: Biostatistics Research Group, Department of Health Sciences, University of Leicester, Leicester, United Kingdom
7
Callegaro A, Spiessens B. Testing Treatment Effect in Randomized Clinical Trials With Possible Nonproportional Hazards. Stat Biopharm Res 2017. DOI: 10.1080/19466315.2016.1257436.
8
9
Kahan BC, Diaz-Ordaz K, Homer K, Carnes D, Underwood M, Taylor SJ, Bremner SA, Eldridge S. Coping with persistent pain, effectiveness research into self-management (COPERS): statistical analysis plan for a randomised controlled trial. Trials 2014; 15:59. PMID: 24528484. PMCID: PMC3930300. DOI: 10.1186/1745-6215-15-59.
Abstract
BACKGROUND: The Coping with Persistent Pain, Effectiveness Research into Self-management (COPERS) trial assessed whether a group-based self-management course is effective in reducing pain-related disability in participants with chronic musculoskeletal pain. This article describes the statistical analysis plan for the COPERS trial.
METHODS AND DESIGN: COPERS was a pragmatic, multicentre, unmasked, parallel-group, randomised controlled trial. This article describes (a) the overall analysis principles (including which participants will be included in each analysis, how results will be presented, which covariates will be adjusted for, and how we will account for clustering in the intervention group); (b) the primary and secondary outcomes, and how each outcome will be analysed; (c) sensitivity analyses; (d) subgroup analyses; and (e) adherence-adjusted analyses.
TRIAL REGISTRATION: ISRCTN24426731.
Affiliation(s)
- Brennan C Kahan: Pragmatic Clinical Trials Unit, Queen Mary University of London, 58 Turner St, London E1 2AB, UK
10
Morris TP, Kahan BC, White IR. Choosing sensitivity analyses for randomised trials: principles. BMC Med Res Methodol 2014; 14:11. PMID: 24456267. PMCID: PMC3904008. DOI: 10.1186/1471-2288-14-11.
Abstract
BACKGROUND: Sensitivity analyses are an important tool for understanding the extent to which the results of randomised trials depend upon the assumptions of the analysis. There is currently no guidance governing the choice of sensitivity analyses.
DISCUSSION: We provide a principled approach to choosing sensitivity analyses through consideration of the following questions: (1) Does the proposed sensitivity analysis address the same question as the primary analysis? (2) Is it possible for the proposed sensitivity analysis to return a different result to the primary analysis? (3) If the results do differ, is there any uncertainty as to which will be believed? Answering all of these questions in the affirmative will help researchers to identify relevant sensitivity analyses. Treating analyses as sensitivity analyses when one or more of the answers are negative can be misleading and can confuse the interpretation of studies. The value of these questions is illustrated with several examples.
SUMMARY: By removing unreasonable analyses that might otherwise have been performed, these questions will lead to relevant sensitivity analyses, which help to assess the robustness of trial results.
Affiliation(s)
- Tim P Morris: Hub for Trials Methodology Research, MRC Clinical Trials Unit at UCL, Aviation House, 125 Kingsway, London WC2B 6NH, UK
- Brennan C Kahan: Pragmatic Clinical Trials Unit, Queen Mary University of London, 58 Turner Street, London E1 2AB, UK
- Ian R White: MRC Biostatistics Unit, Cambridge Institute of Public Health, Forvie Site, Robinson Way, Cambridge Biomedical Campus, Cambridge CB2 0SR, UK