1
Wilkinson J, Heal C, Antoniou GA, Flemyng E, Avenell A, Barbour V, Bordewijk EM, Brown NJL, Clarke M, Dumville J, Grohmann S, Gurrin LC, Hayden JA, Hunter KE, Lam E, Lasserson T, Li T, Lensen S, Liu J, Lundh A, Meyerowitz-Katz G, Mol BW, O'Connell NE, Parker L, Redman B, Seidler AL, Sheldrick K, Sydenham E, Dahly DL, van Wely M, Bero L, Kirkham JJ. A survey of experts to identify methods to detect problematic studies: stage 1 of the INveStigating ProblEmatic Clinical Trials in Systematic Reviews project. J Clin Epidemiol 2024; 175:111512. [PMID: 39222724] [DOI: 10.1016/j.jclinepi.2024.111512] [Received: 03/28/2024] [Revised: 06/28/2024] [Accepted: 08/27/2024] [Indexed: 09/04/2024]
Abstract
BACKGROUND AND OBJECTIVE Randomized controlled trials (RCTs) inform health-care decisions. Unfortunately, some published RCTs contain false data, and some appear to have been entirely fabricated. Systematic reviews are performed to identify and synthesize all RCTs which have been conducted on a given topic. This means that any of these 'problematic studies' are likely to be included, but there are no agreed methods for identifying them. The INveStigating ProblEmatic Clinical Trials in Systematic Reviews (INSPECT-SR) project is developing a tool to identify problematic RCTs in systematic reviews of health care-related interventions. The tool will guide the user through a series of 'checks' to determine a study's authenticity. The first objective in the development process is to assemble a comprehensive list of checks to consider for inclusion. METHODS We assembled an initial list of checks for assessing the authenticity of research studies, with no restriction to RCTs, and categorized these into five domains: Inspecting results in the paper; Inspecting the research team; Inspecting conduct, governance, and transparency; Inspecting text and publication details; Inspecting the individual participant data. We implemented this list as an online survey, and invited people with expertise and experience of assessing potentially problematic studies to participate through professional networks and online forums. Participants were invited to provide feedback on the checks on the list, and were asked to describe any additional checks they knew of, which were not featured in the list. RESULTS Extensive feedback on an initial list of 102 checks was provided by 71 participants based in 16 countries across five continents. Fourteen new checks were proposed across the five domains, and suggestions were made to reword checks on the initial list. An updated list of checks was constructed, comprising 116 checks. 
Many participants expressed a lack of familiarity with statistical checks, and emphasized the importance of feasibility of the tool. CONCLUSION A comprehensive list of trustworthiness checks has been produced. The checks will be evaluated to determine which should be included in the INSPECT-SR tool. PLAIN LANGUAGE SUMMARY Systematic reviews draw upon evidence from randomized controlled trials (RCTs) to find out whether treatments are safe and effective. The conclusions from systematic reviews are often very influential, and inform both health-care policy and individual treatment decisions. However, it is now clear that the results of many published RCTs are not genuine. In some cases, the entire study may have been fabricated. It is not usual for the veracity of RCTs to be questioned during the process of compiling a systematic review. As a consequence, these "problematic studies" go unnoticed, and are allowed to contribute to the conclusions of influential systematic reviews, thereby influencing patient care. This prompts the question of how these problematic studies could be identified. In this study, we created an extensive list of checks that could be performed to try to identify these studies. We started by assembling a list of checks identified in previous research, and conducting a survey of experts to ask whether they were aware of any additional methods, and to give feedback on the list. As a result, a list of 116 potential "trustworthiness checks" was created. In subsequent research, we will evaluate these checks to see which should be included in a tool, INveStigating ProblEmatic Clinical Trials in Systematic Reviews, which can be used to detect problematic studies.
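The "Inspecting results in the paper" domain described above covers simple numerical consistency checks. As an illustration of this kind of check (not necessarily one of the 116 on the INSPECT-SR list), Brown and Heathers' GRIM test asks whether a reported mean is even arithmetically possible given the sample size when responses are integer-valued; a minimal sketch:

```python
def grim_consistent(mean, n, decimals=2):
    """GRIM test: can a mean reported to `decimals` places arise from
    n integer-valued responses? The sum of n integers is an integer,
    so the true mean must equal k/n for some integer k."""
    tol = 0.5 / 10 ** decimals  # half a unit in the last reported place
    k = round(mean * n)
    # Check the integer sums whose mean could round to the reported value.
    return any(
        cand >= 0 and abs(cand / n - mean) < tol + 1e-12
        for cand in (k - 1, k, k + 1)
    )

# A mean of 5.18 from 28 integer responses is possible (145/28 ≈ 5.179)...
print(grim_consistent(5.18, 28))  # True
# ...but no sum of 28 integers yields a mean that rounds to 5.19.
print(grim_consistent(5.19, 28))  # False
```

A failed GRIM check does not prove fabrication, but it flags a result worth querying, which is the spirit of the tool the abstract describes.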
Affiliation(s)
- Jack Wilkinson
- Centre for Biostatistics, The University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
- Calvin Heal
- Centre for Biostatistics, The University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
- George A Antoniou
- Manchester Vascular Centre, Manchester University NHS Foundation Trust, Manchester, UK; Division of Cardiovascular Sciences, School of Medical Sciences, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK
- Ella Flemyng
- Evidence Production and Methods Directorate, Cochrane Central Executive, London, UK
- Alison Avenell
- Health Services Research Unit, University of Aberdeen, Aberdeen, UK
- Esmee M Bordewijk
- Department of Obstetrics and Gynaecology, Centre for Reproductive Medicine, Amsterdam University Medical Center, Amsterdam, Netherlands
- Mike Clarke
- Northern Ireland Methodology Hub, Queen's University Belfast, Belfast, UK
- Jo Dumville
- Division of Nursing, Midwifery & Social Work, School of Health Sciences, The University of Manchester, Manchester, UK; NIHR Manchester Biomedical Research Centre, Manchester University NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, UK
- Steph Grohmann
- Evidence Production and Methods Directorate, Cochrane Central Executive, London, UK
- Lyle C Gurrin
- School of Population and Global Health, The University of Melbourne, Melbourne, Australia
- Jill A Hayden
- Department of Community Health & Epidemiology, Dalhousie University, Halifax, Canada
- Kylie E Hunter
- NHMRC Clinical Trials Centre, University of Sydney, Sydney, Australia
- Emily Lam
- Independent Lay Member, Unaffiliated, Cheshire, UK
- Toby Lasserson
- Evidence Production and Methods Directorate, Cochrane Central Executive, London, UK
- Tianjing Li
- Department of Ophthalmology, University of Colorado Anschutz Medical Campus, Aurora, CO, USA
- Sarah Lensen
- Department of Obstetrics, Gynaecology and Newborn Health, Royal Women's Hospital, University of Melbourne, Melbourne, Australia
- Jianping Liu
- Centre for Evidence-Based Chinese Medicine, Beijing University of Chinese Medicine, Beijing, China
- Andreas Lundh
- Cochrane Denmark & Centre for Evidence-Based Medicine Odense, Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Department of Respiratory Medicine and Infectious Diseases, Copenhagen University Hospital - Bispebjerg and Frederiksberg, Copenhagen, Denmark
- Ben W Mol
- Department of Obstetrics and Gynaecology, Monash University, Melbourne, Australia
- Neil E O'Connell
- Department of Health Sciences, Centre for Wellbeing Across the Lifecourse, Brunel University London, London, UK
- Lisa Parker
- Charles Perkins Centre, Sydney Medical School, University of Sydney, Sydney, Australia
- Anna Lene Seidler
- NHMRC Clinical Trials Centre, University of Sydney, Sydney, Australia
- Kyle Sheldrick
- Faculty of Medicine, University of New South Wales, Sydney, Australia
- Darren L Dahly
- HRB Clinical Research Facility, University College Cork, Cork, Ireland
- Madelon van Wely
- Department of Obstetrics and Gynaecology, Centre for Reproductive Medicine, Amsterdam University Medical Center, Amsterdam, Netherlands
- Lisa Bero
- University of Colorado Anschutz Medical Campus, Aurora, CO, USA
- Jamie J Kirkham
- Centre for Biostatistics, The University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
2
Niangoran S, Journot V, Marcy O, Anglaret X, Alioum A. Performance of four centralized statistical monitoring methods for early detection of an atypical center in a multicenter study. Contemp Clin Trials Commun 2023; 34:101168. [PMID: 37425338] [PMCID: PMC10328794] [DOI: 10.1016/j.conctc.2023.101168] [Received: 02/14/2023] [Revised: 06/02/2023] [Accepted: 06/18/2023] [Indexed: 07/11/2023] Open
Abstract
Background Ensuring the quality of data is essential for the credibility of a multicenter clinical trial. Centralized Statistical Monitoring (CSM) of data allows the detection of a center in which the distribution of a specific variable is atypical compared to other centers. The ideal CSM method should allow early detection of problems and therefore involve the fewest possible participants. Methods We simulated clinical trials and compared the performance of four CSM methods (Student, Hatayama, Desmet, Distance) to detect whether the distribution of a quantitative variable was atypical in one center in relation to the others, with different numbers of participants and different mean deviation amplitudes. Results The Student and Hatayama methods had good sensitivity but poor specificity, which disqualifies them for practical use in CSM. The Desmet and Distance methods had very high specificity for detecting all the mean deviations tested (including small values) but low sensitivity with mean deviations less than 50%. Conclusion Although the Student and Hatayama methods are more sensitive, their low specificity would lead to too many alerts being triggered, which would result in additional unnecessary control work to ensure data quality. The Desmet and Distance methods have low sensitivity when the deviation from the mean is low, suggesting that the CSM should be used alongside other conventional monitoring procedures rather than replacing them. However, they have excellent specificity, which suggests they can be applied routinely, since using them takes up no time at central level and does not cause any unnecessary workload in investigating centers.
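The Student-type method evaluated above is essentially a center-versus-rest mean comparison. The authors' implementations are not reproduced here; the following generic leave-one-out sketch using a Welch two-sample t statistic (with an arbitrary illustrative threshold) conveys the idea:

```python
from statistics import mean, variance

def atypical_centers(data, t_threshold=4.0):
    """Flag centers whose mean for a quantitative variable deviates from
    the pooled mean of all other centers, using a Welch two-sample
    t statistic. `data` maps a center id to its list of observations."""
    flagged = []
    for center, x in data.items():
        rest = [v for c, vals in data.items() if c != center for v in vals]
        se = (variance(x) / len(x) + variance(rest) / len(rest)) ** 0.5
        if abs((mean(x) - mean(rest)) / se) >= t_threshold:
            flagged.append(center)
    return flagged

# Three comparable centers and one whose mean is shifted upwards:
data = {
    "A": [9, 10, 11] * 10,
    "B": [9.5, 10, 10.5] * 10,
    "C": [8, 10, 12] * 10,
    "D": [11, 11.5, 12] * 10,
}
print(atypical_centers(data))  # ['D']
```

Lowering `t_threshold` raises sensitivity at the cost of specificity, which is exactly the trade-off the abstract reports for the four methods compared.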
Affiliation(s)
- Serge Niangoran
- University of Bordeaux, National Institute for Health and Medical Research (INSERM) UMR 1219, Bordeaux Population Health Research Center, Bordeaux, France
- Research Institute for Sustainable Development (IRD) EMR 271, Bordeaux, France
- Programme PACCI, Abidjan, Côte d'Ivoire
- Valérie Journot
- University of Bordeaux, National Institute for Health and Medical Research (INSERM) UMR 1219, Bordeaux Population Health Research Center, Bordeaux, France
- Research Institute for Sustainable Development (IRD) EMR 271, Bordeaux, France
- Olivier Marcy
- University of Bordeaux, National Institute for Health and Medical Research (INSERM) UMR 1219, Bordeaux Population Health Research Center, Bordeaux, France
- Research Institute for Sustainable Development (IRD) EMR 271, Bordeaux, France
- Xavier Anglaret
- University of Bordeaux, National Institute for Health and Medical Research (INSERM) UMR 1219, Bordeaux Population Health Research Center, Bordeaux, France
- Research Institute for Sustainable Development (IRD) EMR 271, Bordeaux, France
- Amadou Alioum
- University of Bordeaux, National Institute for Health and Medical Research (INSERM) UMR 1219, Bordeaux Population Health Research Center, Bordeaux, France
3
Bordewijk EM, Li W, van Eekelen R, Wang R, Showell M, Mol BW, van Wely M. Methods to assess research misconduct in health-related research: A scoping review. J Clin Epidemiol 2021; 136:189-202. [PMID: 34033915] [DOI: 10.1016/j.jclinepi.2021.05.012] [Received: 03/02/2021] [Revised: 05/11/2021] [Accepted: 05/12/2021] [Indexed: 01/17/2023]
Abstract
OBJECTIVE To give an overview of the available methods to investigate research misconduct in health-related research. STUDY DESIGN AND SETTING In this scoping review, we conducted a literature search in MEDLINE, Embase, The Cochrane CENTRAL Register of Studies Online (CRSO), and The Virtual Health Library portal up to July 2020. We included papers that mentioned and/or described methods for screening or assessing research misconduct in health-related research. We categorized identified methods into the following four groups according to their scopes: overall concern, textual concern, image concern, and data concern. RESULTS We included 57 papers reporting on 27 methods: two on overall concern, four on textual concern, three on image concern, and 18 on data concern. Apart from the methods to locate textual plagiarism and image manipulation, all other methods, be it theoretical or empirical, are based on examples, are not standardized, and lack formal validation. CONCLUSION Existing methods cover a wide range of issues regarding research misconduct. Although measures to counteract textual plagiarism are well implemented, tools to investigate other forms of research misconduct are rudimentary and labour-intensive. To cope with the rising challenge of research misconduct, further development of automatic tools and routine validation of these methods is needed. TRIAL REGISTRATION NUMBER Center for Open Science (OSF) (https://osf.io/mq89w).
Affiliation(s)
- Esmee M Bordewijk
- Centre for Reproductive Medicine, Amsterdam UMC, Amsterdam, The Netherlands; Department of Obstetrics and Gynecology, Monash University, Clayton, Australia
- Wentao Li
- Department of Obstetrics and Gynecology, Monash University, Clayton, Australia
- Rik van Eekelen
- Centre for Reproductive Medicine, Amsterdam UMC, Amsterdam, The Netherlands
- Rui Wang
- Department of Obstetrics and Gynecology, Monash University, Clayton, Australia
- Marian Showell
- Department of Obstetrics and Gynaecology, University of Auckland, Auckland, New Zealand
- Ben W Mol
- Department of Obstetrics and Gynecology, Monash University, Clayton, Australia
- Madelon van Wely
- Centre for Reproductive Medicine, Amsterdam UMC, Amsterdam, The Netherlands
4
Cragg WJ, Hurley C, Yorke-Edwards V, Stenning SP. Dynamic methods for ongoing assessment of site-level risk in risk-based monitoring of clinical trials: A scoping review. Clin Trials 2021; 18:245-259. [PMID: 33611927] [PMCID: PMC8010889] [DOI: 10.1177/1740774520976561] [Indexed: 01/21/2023]
Abstract
Background/Aims It is increasingly recognised that reliance on frequent site visits for monitoring clinical trials is inefficient. Regulators and trialists have recently encouraged more risk-based monitoring. Risk assessment should take place before a trial begins to define the overarching monitoring strategy. It can also be done on an ongoing basis, to target sites for monitoring activity. Various methods have been proposed for such prioritisation, often using terms like ‘central statistical monitoring’, ‘triggered monitoring’ or, as in the International Conference on Harmonization Good Clinical Practice guidance, ‘targeted on-site monitoring’. We conducted a scoping review to identify such methods, to establish if any were supported by adequate evidence to allow wider implementation, and to guide future developments in this field of research. Methods We used seven publication databases, two sets of methodological conference abstracts and an Internet search engine to identify methods for using centrally held trial data to assess site conduct during a trial. We included only reports in English, and excluded reports published before 1996 or not directly relevant to our research question. We used reference and citation searches to find additional relevant reports. We extracted data using a predefined template. We contacted authors to request additional information about included reports. Results We included 30 reports in our final dataset, of which 21 were peer-reviewed publications. In all, 20 reports described central statistical monitoring methods (of which 7 focussed on detection of fraud or misconduct) and 9 described triggered monitoring methods; 21 reports included some assessment of their methods’ effectiveness, typically exploring the methods’ characteristics using real trial data without known integrity issues. 
Of the 21 with some effectiveness assessment, most contained limited information about whether or not concerns identified through central monitoring constituted meaningful problems. Several reports demonstrated good classification ability based on more than one classification statistic, but never without caveats of unclear reporting or other classification statistics being low or unavailable. Some reports commented on cost savings from reduced on-site monitoring, but none gave detailed costings for the development and maintenance of central monitoring methods themselves. Conclusion Our review identified various proposed methods, some of which could be combined within the same trial. The apparent emphasis on fraud detection may not be proportionate in all trial settings. Despite some promising evidence and some self-justifying benefits for data cleaning activity, many proposed methods have limitations that may currently prevent their routine use for targeting trial monitoring activity. The implementation costs, or uncertainty about these, may also be a barrier. We make recommendations for how the evidence-base supporting these methods could be improved.
Affiliation(s)
- William J Cragg
- MRC Clinical Trials Unit at UCL, London, UK; Clinical Trials Research Unit, Leeds Institute of Clinical Trials Research, University of Leeds, Leeds, UK
- Caroline Hurley
- Health Research Board-Trials Methodology Research Network (HRB-TMRN), National University of Ireland Galway, Galway, Ireland
5
Design and conduct of confirmatory chronic pain clinical trials. Pain Rep 2020; 6:e845. [PMID: 33511323] [PMCID: PMC7837951] [DOI: 10.1097/pr9.0000000000000854] [Received: 05/27/2020] [Revised: 08/07/2020] [Accepted: 08/19/2020] [Indexed: 12/30/2022] Open
Abstract
The purpose of this article is to provide readers with a basis for understanding the emerging science of clinical trials and to provide a set of practical, evidence-based suggestions for designing and executing confirmatory clinical trials in a manner that minimizes measurement error. The most important step in creating a mindset of quality clinical research is to abandon the antiquated concept that clinical trials are a method for capturing data from clinical practice and shift to a concept of the clinical trial as a measurement system, consisting of an interconnected set of processes, each of which must be in calibration for the trial to generate an accurate and reliable estimate of the efficacy (and safety) of a given treatment. The status quo of inaccurate, unreliable, and protracted clinical trials is unacceptable and unsustainable. This article gathers aspects of study design and conduct under a single broad umbrella of techniques available to improve the accuracy and reliability of confirmatory clinical trials across traditional domain boundaries.
6
Gewandter JS, Dworkin RH, Turk DC, Devine EG, Hewitt D, Jensen MP, Katz NP, Kirkwood AA, Malamut R, Markman JD, Vrijens B, Burke L, Campbell JN, Carr DB, Conaghan PG, Cowan P, Doyle MK, Edwards RR, Evans SR, Farrar JT, Freeman R, Gilron I, Juge D, Kerns RD, Kopecky EA, McDermott MP, Niebler G, Patel KV, Rauck R, Rice ASC, Rowbotham M, Sessler NE, Simon LS, Singla N, Skljarevski V, Tockarshewsky T, Vanhove GF, Wasan AD, Witter J. Improving Study Conduct and Data Quality in Clinical Trials of Chronic Pain Treatments: IMMPACT Recommendations. The Journal of Pain 2020; 21:931-942. [PMID: 31843583] [PMCID: PMC7292738] [DOI: 10.1016/j.jpain.2019.12.003] [Received: 06/12/2019] [Revised: 10/30/2019] [Accepted: 12/11/2019] [Indexed: 11/30/2022]
Abstract
The estimated probability of progressing from phase 3 analgesic clinical trials to regulatory approval is approximately 57%, suggesting that a considerable number of treatments with phase 2 trial results deemed sufficiently successful to progress to phase 3 do not yield positive phase 3 results. Deficiencies in the quality of clinical trial conduct could account for some of this failure. An Initiative on Methods, Measurement, and Pain Assessment in Clinical Trials meeting was convened to identify potential areas for improvement in trial conduct in order to improve assay sensitivity (ie, ability of trials to detect a true treatment effect). We present recommendations based on presentations and discussions at the meeting, literature reviews, and iterative revisions of this article. The recommendations relate to the following areas: 1) study design (ie, to promote feasibility), 2) site selection and staff training, 3) participant selection and training, 4) treatment adherence, 5) data collection, and 6) data and study monitoring. Implementation of these recommendations may improve the quality of clinical trial data and thus the validity and assay sensitivity of clinical trials. Future research regarding the effects of these strategies will help identify the most efficient use of resources for conducting high quality clinical trials. PERSPECTIVE: Every effort should be made to optimize the quality of clinical trial data. This manuscript discusses considerations to improve conduct of pain clinical trials based on research in multiple medical fields and the expert consensus of pain researchers and stakeholders from academia, regulatory agencies, and industry.
Affiliation(s)
- Nathaniel P Katz
- Analgesic Solutions, Natick, Massachusetts; Tufts University, Boston, Massachusetts
- Amy A Kirkwood
- CR UK and UCL Cancer Trials Centre, UCL Cancer Institute, London, UK
- John D Markman
- University of Rochester Medical Center, Rochester, New York
- Daniel B Carr
- Tufts University School of Medicine, Boston, Massachusetts
- Philip G Conaghan
- Leeds Institute of Rheumatic and Musculoskeletal Medicine, University of Leeds, & NIHR Leeds Biomedical Research Centre, Leeds, UK
- Penney Cowan
- American Chronic Pain Association, Rocklin, California
- Scott R Evans
- George Washington University, Washington, District of Columbia
- John T Farrar
- University of Pennsylvania, Philadelphia, Pennsylvania
- Roy Freeman
- Brigham & Women's Hospital, Boston, Massachusetts
- Ian Gilron
- Queen's University, Kingston, Ontario, Canada
- Dean Juge
- Horizon Pharma, Lake Forest, Illinois
- Richard Rauck
- Wake Forest University School of Medicine, Winston-Salem, North Carolina
- Neil Singla
- Lotus Clinical Research, Pasadena, California
- Ajay D Wasan
- University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania
- James Witter
- National Institutes of Health, Bethesda, Maryland
7
Oba K. Statistical challenges for central monitoring in clinical trials: a review. Int J Clin Oncol 2015; 21:28-37. [PMID: 26499195] [DOI: 10.1007/s10147-015-0914-4] [Received: 10/04/2015] [Accepted: 10/08/2015] [Indexed: 01/27/2023]
Abstract
Recently, the complexity and costs of clinical trials have increased dramatically, especially in the area of new drug development. Risk-based monitoring (RBM) has been attracting attention as an efficient and effective trial monitoring approach, which can be applied irrespective of the trial sponsor, i.e., academic institution or pharmaceutical company. In the RBM paradigm, it is expected that a statistical approach to central monitoring can help improve the effectiveness of on-site monitoring by prioritizing and guiding site visits according to central statistical data checks, as evidenced by examples of actual trial datasets. In this review, several statistical methods for central monitoring are presented. It is important to share knowledge about the role and performance capabilities of statistical methodology among clinical trial team members (i.e., sponsors, investigators, data managers, monitors, and biostatisticians) in order to adopt central statistical monitoring for assessing data quality in the actual clinical trial.
Affiliation(s)
- Koji Oba
- Interfaculty Initiative in Information Studies, Graduate School of Interdisciplinary Information Studies, The University of Tokyo, Tokyo, Japan
- Department of Biostatistics, School of Public Health, Graduate School of Medicine, The University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo, Japan
8
Pogue J, Sackett DL. Clinician-trialist rounds: 19. Faux pas or fraud? Identifying centers that have fabricated their data in your multi-center trial. Clin Trials 2014; 11:128-30. [DOI: 10.1177/1740774513503524] [Indexed: 11/16/2022]
Affiliation(s)
- Janice Pogue
- Trout Research & Education Centre at Irish Lake, Markdale, ON, Canada
- David L Sackett
- Trout Research & Education Centre at Irish Lake, Markdale, ON, Canada
9
A Case-Control Comparison of Retracted and Non-Retracted Clinical Trials: Can Retraction Be Predicted? Publications 2014. [DOI: 10.3390/publications2010027] [Indexed: 11/16/2022] Open
10
Kirkwood AA, Cox T, Hackshaw A. Application of methods for central statistical monitoring in clinical trials. Clin Trials 2013; 10:783-806. [DOI: 10.1177/1740774513494504] [Indexed: 11/15/2022]
Abstract
Background On-site source data verification is a common and expensive activity, with little evidence that it is worthwhile. Central statistical monitoring (CSM) is a cheaper alternative, where data checks are performed by the coordinating centre, avoiding the need to visit all sites. Several publications have suggested methods for CSM; however, few have described their use in real trials. Methods R-programs were created to check data at either the subject level (7 tests within 3 programs) or site level (9 tests within 8 programs) using previously described methods or new ones we developed. These aimed to find possible data errors, such as outliers, incorrect dates, or anomalous data patterns; signs of fraud or procedural errors, such as digit preference, values too close to or too far from the means, unusual correlation structures, and extreme variances; and under-reporting of adverse events. The methods were applied to three trials, one of which had closed and has been published, one in follow-up, and a third to which fabricated data were added. We examined how well the methods work, discussing their strengths and limitations. Results The R-programs produced simple tables or easy-to-read figures. Few data errors were found in the first two trials, and those added to the third were easily detected. The programs were able to identify patients with outliers based on single or multiple variables. They also detected (1) fabricated patients, generated to have values too close to the multivariate mean, or with too low variances in repeated measurements, and (2) sites which had unusual correlation structures or too few adverse events. Some methods were unreliable if applied to centres with few patients or if data were fabricated in a way which did not fit the assumptions used to create the programs. Outputs from the R-programs are interpreted using examples.
Limitations Detecting data errors is relatively straightforward; however, there are several limitations in the detection of fraud: some programs cannot be applied to small trials or to centres with few patients (<10) and data falsified in a manner which does not fit the program’s assumptions may not be detected. In addition, many tests require a visual assessment of the output (showing flagged participants or sites), before data queries are made or on-site visits performed. Conclusions CSM is a worthwhile alternative to on-site data checking and may be used to limit the number of site visits by targeting only sites which are picked up by the programs. We summarise the methods, show how they are implemented and that they can be easy to interpret. The methods can identify incorrect or unusual data for a trial subject, or centres where the data considered together are too different to other centres and therefore should be reviewed, possibly through an on-site visit.
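Several of the site-level tests named above, such as digit preference, reduce to comparing observed digit frequencies with a reference distribution. The paper's R programs are not reproduced here; a minimal Python sketch of a terminal-digit check, comparing final digits with a uniform distribution via a chi-square statistic, is:

```python
from collections import Counter

def terminal_digit_chi2(values):
    """Chi-square statistic comparing the distribution of terminal digits
    with uniform(0-9); a large value suggests digit preference, e.g.
    measurements heaped on 0 or 5. Expects integer-valued inputs."""
    digits = [abs(v) % 10 for v in values]
    expected = len(digits) / 10
    counts = Counter(digits)
    return sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(10))

# Uniform terminal digits give a statistic of 0; compare against the
# chi-square critical value with 9 df (16.92 at the 5% level).
print(terminal_digit_chi2(range(100)))         # 0.0
# Every value ending in 0, as with heaped blood-pressure readings:
print(terminal_digit_chi2(range(0, 500, 10)))  # 450.0
```

As the Limitations paragraph notes for the real programs, checks like this are unreliable at centres with few patients, since expected counts per digit become very small.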
Affiliation(s)
- Amy A Kirkwood
- Cancer Research UK & UCL Cancer Trials Centre, University College London, London, UK
- Trevor Cox
- Cancer Research UK Liverpool Cancer Trials Unit, University of Liverpool Cancer Trials Centre, Liverpool Cancer Research UK Centre, University of Liverpool, Liverpool, UK
- Allan Hackshaw
- Cancer Research UK & UCL Cancer Trials Centre, University College London, London, UK
11
Pogue JM, Devereaux PJ, Thorlund K, Yusuf S. Central statistical monitoring: detecting fraud in clinical trials. Clin Trials 2013; 10:225-35. [PMID: 23283577] [DOI: 10.1177/1740774512469312] [Indexed: 11/16/2022]
Abstract
BACKGROUND Central statistical monitoring in multicenter trials could allow trialists to identify centers with problematic data or conduct and intervene while the trial is still ongoing. Currently, there are few published models that can be used for this purpose. PURPOSE To develop and validate a series of risk scores to identify fabricated data within a multicenter trial, to be used in central statistical monitoring. METHODS We used a database from a multicenter trial in which data from 9 of 109 centers were documented to be fabricated. These data were used to build a series of risk scores to predict fraud at centers. All analyses were performed at the level of the center. Exploratory factor analysis was used to select from 52 possible predictors, chosen from a variety of previously published methods. The final models were selected from a total of 18 independent predictors, based on the factors identified. These models were converted to risk scores for each center. RESULTS Five different risk scores were identified, and each had the ability to discriminate well between centers with and without fabricated data (area under the curve values ranged from 0.90 to 0.95). True- and false-positive rates are presented for each risk score to arrive at a recommended cutoff of seven or above (high risk score). We validated these risk scores, using an independent multicenter trial database that contained no data fabrication and found the occurrence of false-positive high scores to be low and comparable to the model-building data set. LIMITATIONS These risk scores have been validated only for their false-positive rate and require validation within another trial that contains centers that have fabricated data. Validation in noncardiovascular trials is also required to gauge the usefulness of these risk scores in central statistical monitoring.
CONCLUSIONS With further validation, these risk scores could become part of a series of tools that provide evidence-based central statistical monitoring, which in turn can improve the efficiency of trials, and minimize the need for more expensive on-site monitoring.
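The discrimination reported above is summarized by the area under the ROC curve (AUC). The risk scores themselves are not reproduced here, but as a reminder of what a center-level AUC of 0.90-0.95 means, the statistic can be computed from hypothetical scores and fabrication labels with a small sketch:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen center with fabricated data
    (label 1) receives a higher risk score than a randomly chosen
    clean center (label 0), counting ties as one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation of fabricating centers gives an AUC of 1.0...
print(auc([9, 8, 7, 3, 2, 1], [1, 1, 1, 0, 0, 0]))  # 1.0
# ...while one fabricating center scoring below two clean ones lowers it.
print(auc([9, 8, 2, 7, 3, 1], [1, 1, 1, 0, 0, 0]))  # 7/9 ≈ 0.78
```

In practice a cutoff (the paper recommends seven or above) converts the continuous score into an alert, trading true- against false-positive rates along this curve.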
Affiliation(s)
- Janice M Pogue
- Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, ON, Canada
12
Orita M, Hagiwara Y, Moritomo A, Tsunoyama K, Watanabe T, Ohno K. Agreement of drug discovery data with Benford's law. Expert Opin Drug Discov 2012; 8:1-5. [DOI: 10.1517/17460441.2013.740007] [Indexed: 11/05/2022]
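Benford's law predicts that in many naturally occurring datasets the leading digit d appears with probability log10(1 + 1/d), so about 30% of values lead with 1. Agreement checks of the kind the title describes compare observed leading-digit frequencies against this curve; a minimal sketch for positive integers (an illustration, not the paper's own procedure):

```python
import math
from collections import Counter

# Benford's law: P(leading digit = d) = log10(1 + 1/d), for d = 1..9.
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def benford_mad(values):
    """Mean absolute deviation between the observed leading-digit
    frequencies of positive integers and Benford's law; values near
    zero indicate close agreement."""
    digits = Counter(int(str(v)[0]) for v in values)
    n = len(values)
    return sum(abs(digits[d] / n - BENFORD[d]) for d in range(1, 10)) / 9

# Powers of 2 famously follow Benford's law closely...
print(benford_mad([2 ** k for k in range(1, 201)]))  # small, < 0.02
# ...while uniformly distributed three-digit numbers do not.
print(benford_mad(list(range(100, 1000))))           # ≈ 0.06
```

Large deviations do not prove manipulation; many legitimate quantities (such as bounded or assigned values) are not expected to be Benford-distributed, which is why the paper's question of whether drug discovery data agree with the law matters before using it as a screen.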
Affiliation(s)
- Masaya Orita
- Chemistry Research Labs, Drug Discovery Research, Astellas Pharma, Inc., 21 Miyukigaoka, Tsukuba, Ibaraki, 305-8585, Japan
- Yosuke Hagiwara
- Chemistry Research Labs, Drug Discovery Research, Astellas Pharma, Inc., 21 Miyukigaoka, Tsukuba, Ibaraki, 305-8585, Japan
- Ayako Moritomo
- Chemistry Research Labs, Drug Discovery Research, Astellas Pharma, Inc., 21 Miyukigaoka, Tsukuba, Ibaraki, 305-8585, Japan
- Kazuhisa Tsunoyama
- Molecular Medicine Research Labs, Drug Discovery Research, Astellas Pharma, Inc., 21 Miyukigaoka, Tsukuba, Ibaraki, 305-8585, Japan
- Toshihiro Watanabe
- Chemistry Research Labs, Drug Discovery Research, Astellas Pharma, Inc., 21 Miyukigaoka, Tsukuba, Ibaraki, 305-8585, Japan
- Kazuki Ohno
- Chemistry Research Labs, Drug Discovery Research, Astellas Pharma, Inc., 21 Miyukigaoka, Tsukuba, Ibaraki, 305-8585, Japan
13
Wu X, Carlsson M. Detecting data fabrication in clinical trials from cluster analysis perspective. Pharm Stat 2010; 10:257-64. [DOI: 10.1002/pst.462] [Indexed: 11/06/2022]
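Wu and Carlsson's cluster-analysis method is not reproduced here, but the underlying idea can be sketched: summarize each center by a few statistical features and look for the center that fails to cluster with the rest. The feature choice (mean and SD of one variable) and the outlier rule below are illustrative assumptions, exploiting the tendency of fabricated data to show unnaturally low variability:

```python
from statistics import mean, stdev

def feature_outlier(centers):
    """Summarize each center by (mean, SD) of one variable and return the
    center whose feature pair lies farthest, in z-score units, from the
    remaining centers. Fabricated data often show unusually low spread,
    which a check like this can pick up."""
    feats = {c: (mean(v), stdev(v)) for c, v in centers.items()}

    def dist(center):
        others = [f for c, f in feats.items() if c != center]
        total = 0.0
        for i in range(2):  # feature 0: mean, feature 1: SD
            vals = [o[i] for o in others]
            spread = stdev(vals) or 1.0  # guard against zero spread
            total += ((feats[center][i] - mean(vals)) / spread) ** 2
        return total ** 0.5

    return max(feats, key=dist)

# Center D reports essentially identical values for every participant:
centers = {
    "A": [8, 10, 12] * 5,
    "B": [8.5, 10, 11.5] * 5,
    "C": [9, 10, 11] * 5,
    "D": [10.0] * 15,
}
print(feature_outlier(centers))  # D
```

A real implementation would use many more features (correlations, digit patterns, event rates) and a proper clustering algorithm rather than a single distance-to-the-rest rule.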