1. Sheikh MA. Confounding, Mediation, or Independent Effect? Childhood Psychological Abuse, Mental Health, Mood/Psychological State, COPD, and Migraine. J Interpers Violence 2021;36:NP8706-NP8723. PMID: 31046532. DOI: 10.1177/0886260519844773.
Abstract
In some settings, it may be difficult to differentiate between a confounder and a mediator. For instance, the observed association of self-reported childhood psychological abuse (CPA) with onset of chronic obstructive pulmonary disease (COPD) and migraine may be confounded by current mood/psychological state (e.g., the subjective evaluation of one's own affective state), as well as mediated by an individual's psychopathological symptoms. In this study, we propose the "independence hypothesis," which could prove meaningful to explore in data that lack prospective or objective indices of CPA. We used cross-sectional data from wave VI (2007-2008) of the Tromsø Study, Norway (N = 12,981). The associations of CPA with COPD and migraine were assessed with Poisson regression models. CPA was associated with a 46% increased risk of COPD (relative risk [RR] = 1.46, 95% confidence interval [CI]: [1.02, 1.90]) and a 28% increased risk of migraine in adulthood (RR = 1.28, 95% CI: [1.04, 1.53]), independent of age, sex, parental history of psychiatric problems/asthma/dementia, smoking, the respondent's mood/psychological state, and mental health. These findings suggest that the association of retrospectively reported CPA with COPD and migraine is not driven entirely by the respondent's mood/psychological state and mental health. Assessing the independent effect of self-reported CPA on COPD and migraine in retrospective studies may prove more meaningful than exploring the mediating role of mental health. Here, we provide the analytical rationale for assessing the independent effect in settings where it is difficult to differentiate between a confounder and a mediator. Moreover, we provide a theoretical rationale for assessing the independent effect of retrospectively reported childhood adversity on health and well-being.
2. Keogh RH, Shaw PA, Gustafson P, Carroll RJ, Deffner V, Dodd KW, Küchenhoff H, Tooze JA, Wallace MP, Kipnis V, Freedman LS. STRATOS guidance document on measurement error and misclassification of variables in observational epidemiology: Part 1-Basic theory and simple methods of adjustment. Stat Med 2020;39:2197-2231. PMID: 32246539. PMCID: PMC7450672. DOI: 10.1002/sim.8532.
Abstract
Measurement error and misclassification of variables frequently occur in epidemiology and involve variables important to public health. Their presence can strongly affect the results of statistical analyses involving such variables. However, investigators commonly fail to pay attention to biases resulting from such mismeasurement. We provide, in two parts, an overview of the types of error that occur, their impacts on analytic results, and statistical methods to mitigate the biases that they cause. In this first part, we review different types of measurement error and misclassification, emphasizing the classical, linear, and Berkson error models and the concepts of nondifferential and differential error. We describe the impacts of these types of error in covariates and in outcome variables on various analyses, including estimation and testing in regression models and estimating distributions. We outline types of ancillary studies required to provide information about such errors and discuss the implications of covariate measurement error for study design. Methods for ascertaining sample size requirements are outlined, both for ancillary studies designed to provide information about measurement error and for main studies where the exposure of interest is measured with error. We describe two of the simpler methods, regression calibration and simulation extrapolation (SIMEX), that adjust for bias in regression coefficients caused by measurement error in continuous covariates, and illustrate their use through examples drawn from the Observing Protein and Energy Nutrition (OPEN) dietary validation study. Finally, we review software available for implementing these methods. The second part of the article deals with more advanced topics.
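Regression calibration, one of the two adjustment methods this abstract names, can be illustrated with a small simulation under the classical error model W = X + U. For simplicity the error variance is treated as known; in practice it would be estimated from replicate measurements or a validation sub-study, and the details below are an illustrative sketch rather than the paper's worked examples:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
x = rng.normal(0.0, 1.0, n)                  # true covariate (unobserved)
w = x + rng.normal(0.0, 0.8, n)              # classical error: W = X + U
y = 2.0 + 1.5 * x + rng.normal(0.0, 1.0, n)  # outcome depends on the true X

# naive regression of y on W attenuates the slope by
# lambda = var(X) / (var(X) + var(U))
naive = np.polyfit(w, y, 1)[0]

# regression calibration: replace W by E[X | W]; the error variance
# 0.8**2 is assumed known here for illustration
lam = 1.0 / (1.0 + 0.8 ** 2)
x_hat = lam * w                              # E[X | W] for zero-mean X
calibrated = np.polyfit(x_hat, y, 1)[0]
print(f"naive slope {naive:.2f}, calibrated slope {calibrated:.2f}")
```

With these settings the naive slope is attenuated toward roughly 1.5 x lambda, while regressing on the calibrated covariate recovers a slope near the true 1.5.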
Affiliations
- Ruth H Keogh, Department of Medical Statistics, London School of Hygiene and Tropical Medicine, London, UK
- Pamela A Shaw, Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, USA
- Paul Gustafson, Department of Statistics, University of British Columbia, Vancouver, British Columbia, Canada
- Raymond J Carroll, Department of Statistics, Texas A&M University, College Station, Texas, USA; School of Mathematical and Physical Sciences, University of Technology Sydney, Broadway, New South Wales, Australia
- Veronika Deffner, Statistical Consulting Unit StaBLab, Department of Statistics, Ludwig-Maximilians-Universität, Munich, Germany
- Kevin W Dodd, Biometry Research Group, Division of Cancer Prevention, National Cancer Institute, Bethesda, Maryland, USA
- Helmut Küchenhoff, Statistical Consulting Unit StaBLab, Department of Statistics, Ludwig-Maximilians-Universität, Munich, Germany
- Janet A Tooze, Department of Biostatistics and Data Science, Wake Forest School of Medicine, Winston-Salem, North Carolina, USA
- Michael P Wallace, Department of Statistics and Actuarial Science, University of Waterloo, Waterloo, Ontario, Canada
- Victor Kipnis, Biometry Research Group, Division of Cancer Prevention, National Cancer Institute, Bethesda, Maryland, USA
- Laurence S Freedman, Biostatistics and Biomathematics Unit, Gertner Institute for Epidemiology and Health Policy Research, Tel Hashomer, Israel; Information Management Services Inc., Rockville, Maryland, USA
3. Keogh RH, Carroll RJ, Tooze JA, Kirkpatrick SI, Freedman LS. Statistical issues related to dietary intake as the response variable in intervention trials. Stat Med 2016;35:4493-4508. PMID: 27324170. PMCID: PMC5050089. DOI: 10.1002/sim.7011.
Abstract
The focus of this paper is dietary intervention trials. We explore the statistical issues involved when the response variable, intake of a food or nutrient, is based on self-report data that are subject to inherent measurement error. There has been little work on handling error in this context. A particular feature of self-reported dietary intake data is that the error may be differential by intervention group. Measurement error methods require information on the nature of the errors in the self-report data. We assume that there is a calibration sub-study in which unbiased biomarker data are available. We outline methods for handling measurement error in this setting and use theory and simulations to investigate how self-report and biomarker data may be combined to estimate the intervention effect. Methods are illustrated using data from the Trial of Nonpharmacologic Interventions in the Elderly, in which the intervention was a sodium-lowering diet and the response was sodium intake. Simulations are used to investigate the methods under differential error, under differing reliability of self-reports relative to biomarkers, and under different proportions of individuals in the calibration sub-study. When the reliability of self-report measurements is comparable with that of the biomarker, it is advantageous to use the self-report data in addition to the biomarker to estimate the intervention effect. If, however, the reliability of the self-report data is low compared with that of the biomarker, then there is little to be gained by using the self-report data. Our findings have important implications for the design of dietary intervention trials.
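The setting this abstract describes, self-report error that is differential by arm plus a calibration sub-study carrying an unbiased biomarker, can be sketched with a simulation. The numbers and the simple within-arm linear calibration below are illustrative assumptions, not the estimators developed in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 2000, 400                          # trial size; calibration sub-study size
group = rng.binomial(1, 0.5, n)           # 1 = sodium-lowering intervention arm
true_intake = 3.5 - 0.6 * group + rng.normal(0, 0.5, n)   # true effect: -0.6
# self-report with differential bias: the intervention arm under-reports more
report = true_intake - 0.2 - 0.3 * group + rng.normal(0, 0.7, n)
biomarker = true_intake + rng.normal(0, 0.3, n)  # unbiased, sub-study only
sub = rng.choice(n, m, replace=False)

# calibrate self-report against the biomarker within each arm of the
# sub-study, then predict biomarker-scale intake for the full trial
pred = np.empty(n)
for g in (0, 1):
    idx = sub[group[sub] == g]
    b1, b0 = np.polyfit(report[idx], biomarker[idx], 1)
    mask = group == g
    pred[mask] = b0 + b1 * report[mask]

naive_effect = report[group == 1].mean() - report[group == 0].mean()
adj_effect = pred[group == 1].mean() - pred[group == 0].mean()
print(f"naive {naive_effect:.2f}, calibrated {adj_effect:.2f}")
```

The naive self-report contrast absorbs the differential bias (here about -0.9 instead of the true -0.6); calibrating within arm removes it, at the cost of extra variance driven by the size of the sub-study.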
Affiliations
- Ruth H Keogh, Department of Medical Statistics, London School of Hygiene and Tropical Medicine, London, UK
- Raymond J Carroll, Department of Statistics, Texas A&M University, 3143 TAMU, College Station, TX 77843-3143, USA; School of Mathematical and Physical Sciences, University of Technology Sydney, Broadway, New South Wales 2007, Australia
- Janet A Tooze, Department of Biostatistical Sciences, Wake Forest School of Medicine, Winston-Salem, NC, USA
- Sharon I Kirkpatrick, School of Public Health and Health Systems, University of Waterloo, Waterloo, Ontario, Canada
- Laurence S Freedman, Information Management Services, Inc., Rockville, MD, USA; Biostatistics Unit, Gertner Institute for Epidemiology and Health Policy Research, Sheba Medical Center, Ramat Gan, Israel