1. Takaoka A, Zytaruk N, Davis M, Matte A, Johnstone J, Lauzier F, Marshall J, Adhikari N, Clarke FJ, Rochwerg B, Lamontagne F, Hand L, Watpool I, Porteous RK, Masse MH, D'Aragon F, Niven D, Heels-Ansdell D, Duan E, Dionne J, English S, St-Arnaud C, Millen T, Cook DJ. Monitoring and auditing protocol adherence, data integrity and ethical conduct of a randomized clinical trial: A case study. J Crit Care 2022; 71:154094. PMID: 35724443. DOI: 10.1016/j.jcrc.2022.154094.
Abstract
PURPOSE To categorize, quantify and interpret findings documented in feedback letters of monitoring or auditing visits for an investigator-initiated, peer-review funded multicenter randomized trial testing probiotics for critically ill patients. MATERIALS & METHODS In 37 Canadian centers, monitoring and auditing visits were performed by 3 trained individuals; findings were reported in feedback letters. At trial termination, we performed duplicate content analysis on letters, categorizing observations first into unique findings, followed by 10 pre-determined trial quality management domains. We further classified each observation into a) missing operational records, b) errors in process, and potential threats to c) data integrity, d) patient privacy or e) safety. RESULTS Across 37 monitoring or auditing visits, 75 unique findings were categorized into 10 domains. Most frequently, observations were in domains of training documentation (180/566 [32%]) and the informed consent process (133/566 [23%]). Most observations were missing operational records (438/566 [77%]) rather than errors in process (128/566 [23%]). Of 75 findings, 13 (62/566 observations [11%]) posed a potential threat to data integrity, 1 (1/566 observation [0.18%]) to patient privacy, and 9 (49/566 observations [8.7%]) to patient safety. CONCLUSIONS Monitoring and auditing findings predominantly concerned missing documentation with minimal threats to data integrity, patient privacy or safety. TRIAL REGISTRATION PROSPECT (Probiotics: Prevention of Severe Pneumonia and Endotracheal Colonization Trial): NCT02462590.
Affiliation(s)
- Alyson Takaoka
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada.
- Nicole Zytaruk
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada.
- Megan Davis
- School of Medicine, Royal College of Surgeons in Ireland, Dublin, Ireland.
- Andrea Matte
- Department of Respiratory Therapy, Humber River Hospital, North York, Ontario, Canada
- Jennie Johnstone
- Departments of Laboratory Medicine and Pathobiology & Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada.
- François Lauzier
- Department of Critical Care, Université Laval, Laval, Quebec, Canada.
- John Marshall
- Interdepartmental Division of Critical Care, University of Toronto, Toronto, Ontario, Canada.
- Neill Adhikari
- Interdepartmental Division of Critical Care, University of Toronto, Toronto, Ontario, Canada.
- France J Clarke
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada.
- Bram Rochwerg
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada; Department of Medicine, McMaster University, Hamilton, Ontario, Canada.
- François Lamontagne
- Department of Critical Care, Université de Sherbrooke, Sherbrooke, Quebec, Canada.
- Lori Hand
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada.
- Irene Watpool
- Department of Critical Care, Ottawa Health Research Institute, Ottawa, Ontario, Canada.
- Rebecca K Porteous
- Department of Critical Care, Ottawa Health Research Institute, Ottawa, Ontario, Canada.
- Marie-Hélène Masse
- Department of Critical Care, Université de Sherbrooke, Sherbrooke, Quebec, Canada.
- Frédérick D'Aragon
- Department of Critical Care, Université de Sherbrooke, Sherbrooke, Quebec, Canada.
- Daniel Niven
- Department of Critical Care, University of Calgary, Calgary, Alberta, Canada.
- Diane Heels-Ansdell
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada.
- Erick Duan
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada; Department of Medicine, McMaster University, Hamilton, Ontario, Canada.
- Joanna Dionne
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada; Department of Medicine, McMaster University, Hamilton, Ontario, Canada.
- Shane English
- Department of Critical Care, Ottawa Health Research Institute, Ottawa, Ontario, Canada.
- Charles St-Arnaud
- Department of Critical Care, Université de Sherbrooke, Sherbrooke, Quebec, Canada.
- Tina Millen
- Department of Medicine, McMaster University, Hamilton, Ontario, Canada.
- Deborah J Cook
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada; Department of Medicine, McMaster University, Hamilton, Ontario, Canada.
2. Lindner L, Weiß A, Reich A, Kindler S, Behrens F, Braun J, Listing J, Schett G, Sieper J, Strangfeld A, Regierer AC. Implementing an automated monitoring process in a digital, longitudinal observational cohort study. Arthritis Res Ther 2021; 23:181. PMID: 34233730. PMCID: PMC8262053. DOI: 10.1186/s13075-021-02563-2.
Abstract
Background Clinical data collection requires correct and complete data sets in order to perform correct statistical analysis and draw valid conclusions. While in randomized clinical trials much effort concentrates on data monitoring, this is rarely the case in observational studies, due to high numbers of cases and often restricted resources. We have developed a valid and cost-effective monitoring tool, which can substantially contribute to increased data quality in observational research. Methods An automated digital monitoring system for cohort studies developed by the German Rheumatism Research Centre (DRFZ) was tested within the disease register RABBIT-SpA, a longitudinal observational study including patients with axial spondyloarthritis and psoriatic arthritis. Physicians and patients complete electronic case report forms (eCRF) twice a year for up to 10 years. Automatic plausibility checks were implemented to verify all data after entry into the eCRF. To identify conflicts that cannot be found by this approach, all possible conflicts were compiled into a catalog. This “conflict catalog” was used to create queries, which are displayed as part of the eCRF. The proportions of queried eCRFs and of responses were analyzed by descriptive methods. For the analysis of responses, the type of conflict was assigned to either a single conflict only (affecting individual items) or a conflict that required the entire eCRF to be queried. Results Data from 1883 patients were analyzed. A total of n = 3145 eCRFs submitted between baseline (T0) and T3 (12 months) had conflicts (40–64%). Fifty-six to 100% of the queries regarding eCRFs that were completely missing were answered. A mean of 1.4 to 2.4 single conflicts occurred per eCRF, of which 59–69% were answered. The most common missing values were CRP, ESR, Schober’s test, data on systemic glucocorticoid therapy, and presence of enthesitis. Conclusion Providing high data quality in large observational cohort studies is a major challenge, which requires careful monitoring. An automated monitoring process was successfully implemented and well accepted by the study centers. Two thirds of the queries were answered with new data. While conventional manual monitoring is resource-intensive and may itself create new sources of errors, automated processes are a convenient way to augment data quality. Supplementary Information The online version contains supplementary material available at 10.1186/s13075-021-02563-2.
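A minimal sketch of the kind of automated plausibility check described in this abstract, run after eCRF entry to generate queries. The field names, plausibility limits, and conflict messages below are illustrative assumptions, not the DRFZ conflict catalog.

```python
# Hypothetical plausibility checks over eCRF records; field names and
# limits are invented for illustration, not the RABBIT-SpA rules.
RULES = [
    ("crp_mg_l",       lambda v: v is not None and 0 <= v <= 300, "CRP missing or outside 0-300 mg/l"),
    ("esr_mm_h",       lambda v: v is not None and 0 <= v <= 150, "ESR missing or outside 0-150 mm/h"),
    ("schober_cm",     lambda v: v is not None and 0 <= v <= 10,  "Schober's test missing or outside 0-10 cm"),
    ("glucocorticoid", lambda v: v in ("yes", "no"),              "systemic glucocorticoid therapy not documented"),
]

def check_ecrf(record: dict) -> list[str]:
    """Return one query text per rule violated by this eCRF record."""
    return [message for field, is_plausible, message in RULES
            if not is_plausible(record.get(field))]

# Example: a record with a missing CRP value and an implausible ESR
queries = check_ecrf({"crp_mg_l": None, "esr_mm_h": 999,
                      "schober_cm": 4.5, "glucocorticoid": "no"})
for q in queries:
    print("QUERY:", q)
```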
Affiliation(s)
- Lisa Lindner
- Epidemiology Unit, German Rheumatism Research Centre (DRFZ), Charitéplatz 1, 10117, Berlin, Germany.
- Anja Weiß
- Epidemiology Unit, German Rheumatism Research Centre (DRFZ), Charitéplatz 1, 10117, Berlin, Germany
- Andreas Reich
- Epidemiology Unit, German Rheumatism Research Centre (DRFZ), Charitéplatz 1, 10117, Berlin, Germany
- Siegfried Kindler
- Epidemiology Unit, German Rheumatism Research Centre (DRFZ), Charitéplatz 1, 10117, Berlin, Germany
- Joachim Listing
- Epidemiology Unit, German Rheumatism Research Centre (DRFZ), Charitéplatz 1, 10117, Berlin, Germany
- Georg Schett
- Rheumatology and Immunology, Universitätsklinikum Erlangen, Erlangen, Germany
- Joachim Sieper
- Charité - Universitätsmedizin Berlin, CBF, Berlin, Germany
- Anja Strangfeld
- Epidemiology Unit, German Rheumatism Research Centre (DRFZ), Charitéplatz 1, 10117, Berlin, Germany
- Anne C Regierer
- Epidemiology Unit, German Rheumatism Research Centre (DRFZ), Charitéplatz 1, 10117, Berlin, Germany
3. Houston L, Martin A, Yu P, Probst Y. Time-consuming and expensive data quality monitoring procedures persist in clinical trials: A national survey. Contemp Clin Trials 2021; 103:106290. PMID: 33503495. DOI: 10.1016/j.cct.2021.106290.
Abstract
INTRODUCTION The Good Clinical Practice guideline identifies data monitoring as an essential research activity. However, limited evidence exists on how to perform monitoring, including the amount or frequency needed to ensure data quality. This study aims to explore the monitoring procedures implemented to ensure data quality in Australian clinical research studies. MATERIAL AND METHODS Clinical studies listed on the Australian and New Zealand Clinical Trials Registry were invited to participate in a national cross-sectional, mixed-mode, multi-contact (postal letter and e-mail) web-based survey. Information was gathered about the types of data quality monitoring procedures being implemented. RESULTS Of the 3689 clinical studies contacted, 589 (16.0%) responded, of which 441 (77.4%) completed the survey. Over half (55%) of the studies applied source data verification (SDV), compared to risk-based targeted and triggered monitoring (10-11%). Conducting 100% on-site monitoring was most common among those who implemented the traditional approach. Respondents who did not conduct 100% monitoring included 1-25% of data points in SDV, centralized, or on-site monitoring. The incidence of adverse events and of protocol deviations were the most likely factors to trigger a site visit for risk-based triggered (63% and 44%) and centralized monitoring (48% and 44%), respectively. CONCLUSION Instead of using more optimal risk-based approaches, small single-site clinical studies are conducting traditional monitoring procedures that are time-consuming and expensive. Formal guidelines for 'new' risk-based monitoring approaches need to be improved and provided to all researchers.
Affiliation(s)
- Lauren Houston
- School of Medicine, University of Wollongong, Australia; Illawarra Health and Medical Research Institute, Australia.
- Ping Yu
- Illawarra Health and Medical Research Institute, Australia; School of Computing and Information Technology, University of Wollongong, Australia
- Yasmine Probst
- School of Medicine, University of Wollongong, Australia; Illawarra Health and Medical Research Institute, Australia
4. Statistical Considerations for Trials in Adjuvant Treatment of Colorectal Cancer. Cancers (Basel) 2020; 12:3442. PMID: 33228149. PMCID: PMC7699469. DOI: 10.3390/cancers12113442.
Abstract
The design of the best possible clinical trials of adjuvant interventions in colorectal cancer will entail the use of both time-tested and novel methods that allow efficient, reliable and patient-relevant therapeutic development. The ultimate goal of this endeavor is to safely and expeditiously bring to clinical practice novel interventions that impact patient lives. In this paper, we discuss statistical aspects and provide suggestions to optimize trial design, data collection, study implementation, and the use of predictive biomarkers and endpoints in phase 3 trials of systemic adjuvant therapy. We also discuss the issues of collaboration and patient centricity, expecting that several novel agents with activity in the (neo)adjuvant therapy of colon and rectal cancers will become available in the near future.
5. Buyse M, Trotta L, Saad ED, Sakamoto J. Central statistical monitoring of investigator-led clinical trials in oncology. Int J Clin Oncol 2020; 25:1207-1214. PMID: 32577951. PMCID: PMC7308734. DOI: 10.1007/s10147-020-01726-6.
Abstract
Investigator-led clinical trials are pragmatic trials that aim to investigate the benefits and harms of treatments in routine clinical practice. These much-needed trials represent the majority of all trials currently conducted. They are however threatened by the rising costs of clinical research, which are in part due to extensive trial monitoring processes that focus on unimportant details. Risk-based quality management focuses, instead, on “things that really matter”. We discuss the role of central statistical monitoring as part of risk-based quality management. We describe the principles of central statistical monitoring, provide examples of its use, and argue that it could help drive down the cost of randomized clinical trials, especially investigator-led trials, whilst improving their quality.
Affiliation(s)
- Marc Buyse
- International Drug Development Institute (IDDI), San Francisco, CA, USA; Interuniversity Institute for Biostatistics and Statistical Bioinformatics (I-BioStat), Hasselt University, Hasselt, Belgium; CluePoints, Louvain-la-Neuve, Belgium.
- Everardo D Saad
- International Drug Development Institute (IDDI), 30 avenue provinciale, 1340, Ottignies-Louvain-la-Neuve, Belgium
- Junichi Sakamoto
- Tokai Central Hospital, Kakamigahara, Japan; Epidemiological and Clinical Research Information Network (ECRIN), Kyoto, Japan
6. Fougerou-Leurent C, Laviolle B, Tual C, Visseiche V, Veislinger A, Danjou H, Martin A, Turmel V, Renault A, Bellissant E. Impact of a targeted monitoring on data-quality and data-management workload of randomized controlled trials: A prospective comparative study. Br J Clin Pharmacol 2019; 85:2784-2792. PMID: 31471967. DOI: 10.1111/bcp.14108.
Abstract
AIMS Risk-based monitoring approaches in clinical trials are encouraged by regulatory guidance. However, the impact of targeted source data verification (SDV) on data-management (DM) workload and on final data quality needs to be addressed. METHODS MONITORING was a prospective study comparing full SDV (100% of data verified for all patients) and targeted SDV (only key data verified for all patients), each followed by the same DM program (detecting missing data and checking consistency), on final data quality, global workload and staffing costs. RESULTS In all, 137,008 data points, including 18,124 key data points, were collected for 126 patients from 6 clinical trials. Compared to the final database obtained using the full SDV monitoring process, the final database obtained using the targeted SDV monitoring process had a residual error rate of 1.47% (95% confidence interval, 1.41-1.53%) on overall data and 0.78% (95% confidence interval, 0.65-0.91%) on key data. There were nearly 4 times more queries per study with targeted SDV than with full SDV (mean ± standard deviation: 132 ± 101 vs 34 ± 26; P = .03). For a handling time of 15 minutes per query, the global workload of the targeted SDV monitoring strategy remained below that of the full SDV monitoring strategy. From 25 minutes per query it was above, increasing progressively to a 50% excess at 45 minutes per query. CONCLUSION Targeted SDV monitoring is accompanied by an increased DM workload; it keeps the proportion of remaining errors on key data small (<1%) but may substantially increase trial costs.
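The workload trade-off in this abstract reduces to simple arithmetic on query counts and handling time. A back-of-the-envelope illustration follows, using the study's reported mean query counts; the on-site monitoring time saved by targeted SDV is an invented figure used only to show where the break-even can sit.

```python
# Back-of-the-envelope workload comparison between full and targeted SDV.
# Query counts are the study's reported means; the monitoring time saved by
# verifying only key data is a made-up placeholder, not a figure from the paper.
QUERIES_FULL, QUERIES_TARGETED = 34, 132   # mean queries per study
MONITORING_HOURS_SAVED = 40                # hypothetical on-site saving per study

for minutes_per_query in (15, 25, 35, 45):
    extra_dm_hours = (QUERIES_TARGETED - QUERIES_FULL) * minutes_per_query / 60
    net_hours = extra_dm_hours - MONITORING_HOURS_SAVED
    print(f"{minutes_per_query} min/query: extra data-management work "
          f"{extra_dm_hours:.0f} h, net change vs full SDV {net_hours:+.0f} h")
```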
Affiliation(s)
- Claire Fougerou-Leurent
- CIC 1414 (Clinical Investigation Center), INSERM, Rennes, France; Clinical Pharmacology Department, CHU Rennes, Rennes, France
- Bruno Laviolle
- CIC 1414 (Clinical Investigation Center), INSERM, Rennes, France; Clinical Pharmacology Department, CHU Rennes, Rennes, France; Experimental and Clinical Pharmacology Laboratory, Univ Rennes, Rennes, France
- Christelle Tual
- CIC 1414 (Clinical Investigation Center), INSERM, Rennes, France; Clinical Pharmacology Department, CHU Rennes, Rennes, France
- Aurélie Veislinger
- CIC 1414 (Clinical Investigation Center), INSERM, Rennes, France; Clinical Pharmacology Department, CHU Rennes, Rennes, France
- Hélène Danjou
- CIC 1414 (Clinical Investigation Center), INSERM, Rennes, France; Clinical Pharmacology Department, CHU Rennes, Rennes, France
- Amélie Martin
- CIC 1414 (Clinical Investigation Center), INSERM, Rennes, France; Clinical Pharmacology Department, CHU Rennes, Rennes, France
- Valérie Turmel
- CIC 1414 (Clinical Investigation Center), INSERM, Rennes, France; Clinical Pharmacology Department, CHU Rennes, Rennes, France
- Alain Renault
- CIC 1414 (Clinical Investigation Center), INSERM, Rennes, France; Experimental and Clinical Pharmacology Laboratory, Univ Rennes, Rennes, France
- Eric Bellissant
- CIC 1414 (Clinical Investigation Center), INSERM, Rennes, France; Clinical Pharmacology Department, CHU Rennes, Rennes, France; Experimental and Clinical Pharmacology Laboratory, Univ Rennes, Rennes, France
7. Trotta L, Kabeya Y, Buyse M, Doffagne E, Venet D, Desmet L, Burzykowski T, Tsuburaya A, Yoshida K, Miyashita Y, Morita S, Sakamoto J, Praveen P, Oba K. Detection of atypical data in multicenter clinical trials using unsupervised statistical monitoring. Clin Trials 2019; 16:512-522. PMID: 31331195. DOI: 10.1177/1740774519862564.
Abstract
BACKGROUND/AIMS A risk-based approach to clinical research may include a central statistical assessment of data quality. We investigated the operating characteristics of unsupervised statistical monitoring aimed at detecting atypical data in multicenter experiments. The approach is premised on the assumption that, save for random fluctuations and natural variations, data coming from all centers should be comparable and statistically consistent. Unsupervised statistical monitoring consists of performing as many statistical tests as possible on all trial data, in order to detect centers whose data are inconsistent with data from other centers. METHODS We conducted simulations using data from a large multicenter trial conducted in Japan for patients with advanced gastric cancer. The actual trial data were contaminated in computer simulations for varying percentages of centers, percentages of patients modified within each center and numbers and types of modified variables. The unsupervised statistical monitoring software was run by a blinded team on the contaminated data sets, with the purpose of detecting the centers with contaminated data. The operating characteristics (sensitivity, specificity and Youden's J-index) were calculated for three detection methods: one using the p-values of individual statistical tests after adjustment for multiplicity, one using a summary of all p-values for a given center, called the Data Inconsistency Score, and one using both of these methods. RESULTS The operating characteristics of the three methods were satisfactory in situations of data contamination likely to occur in practice, specifically when a single or a few centers were contaminated. As expected, the sensitivity increased for increasing proportions of patients and increasing numbers of variables contaminated. The three methods showed a specificity better than 93% in all scenarios of contamination. The method based on the Data Inconsistency Score and individual p-values adjusted for multiplicity generally had slightly higher sensitivity at the expense of a slightly lower specificity. CONCLUSIONS The use of brute force (a computer-intensive approach that generates large numbers of statistical tests) is an effective way to check data quality in multicenter clinical trials. It can provide a cost-effective complement to other data-management and monitoring techniques.
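A toy sketch of the brute-force idea described in this abstract: compare each center's distribution of every variable against the pooled remaining centers, adjust the resulting p-values for multiplicity, and summarize them into a per-center score. The data are synthetic and the summary score is a simple stand-in, not the Data Inconsistency Score used by the authors.

```python
# Toy unsupervised statistical monitoring: flag centers whose data look
# atypical relative to the other centers. All data here are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
centers = {f"C{i:02d}": rng.normal(50, 10, size=(40, 5)) for i in range(10)}
centers["C09"][:, 0] += 10          # contaminate one variable at one center

records = []
for name, data in centers.items():
    others = np.vstack([d for n, d in centers.items() if n != name])
    # one test per variable: this center vs. the pooled other centers
    pvals = [stats.mannwhitneyu(data[:, j], others[:, j]).pvalue
             for j in range(data.shape[1])]
    records.append((name, pvals))

all_p = np.array([p for _, pv in records for p in pv])
threshold = 0.05 / all_p.size        # Bonferroni adjustment across all tests
for name, pvals in records:
    score = -np.mean(np.log10(pvals))            # crude per-center summary
    flagged = any(p < threshold for p in pvals)
    print(f"{name}: inconsistency score {score:.2f}, flagged={flagged}")
```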
Affiliation(s)
- Yuusuke Kabeya
- Department of Biostatistics, The University of Tokyo, Tokyo, Japan; EPS Corporation, Tokyo, Japan
- Marc Buyse
- International Drug Development Institute (IDDI), San Francisco, CA, USA; CluePoints, Wayne, PA, USA
- David Venet
- Institut de Recherches Interdisciplinaires et de Développements en Intelligence Artificielle (IRIDIA), University of Brussels, Brussels, Belgium
- Lieven Desmet
- Institute of Statistics, Biostatistics and Actuarial Sciences (ISBA), University of Louvain, Louvain-la-Neuve, Belgium
- Tomasz Burzykowski
- International Drug Development Institute (IDDI), Louvain-la-Neuve, Belgium; Interuniversity Institute for Biostatistics and Statistical Bioinformatics (I-BioStat), University of Hasselt, Hasselt, Belgium
- Akira Tsuburaya
- Department of Surgery, Jizankai Medical Foundation, Tsuboi Cancer Center Hospital, Koriyama, Japan
- Kazuhiro Yoshida
- Department of Surgical Oncology, Graduate School of Medicine, Gifu University, Gifu, Japan
- Yumi Miyashita
- Epidemiological and Clinical Research Information Network (ECRIN), Okazaki, Japan
- Satoshi Morita
- Department of Biomedical Statistics and Bioinformatics, Graduate School of Medicine, Kyoto University, Kyoto, Japan
- Junichi Sakamoto
- Epidemiological and Clinical Research Information Network (ECRIN), Okazaki, Japan; Tokai Central Hospital, Kakamigahara, Japan
- Koji Oba
- Department of Biostatistics, The University of Tokyo, Tokyo, Japan; Interfaculty Initiative in Information Studies, The University of Tokyo, Tokyo, Japan
8. Assessing data quality and the variability of source data verification auditing methods in clinical research settings. J Biomed Inform 2018; 83:25-32. PMID: 29783038. DOI: 10.1016/j.jbi.2018.05.010.
Abstract
INTRODUCTION Data audits within clinical settings are extensively used as a major strategy to identify errors, monitor study operations and ensure high-quality data. However, clinical trial guidelines are non-specific regarding the recommended frequency, timing and nature of data audits. The absence of a well-defined data quality definition and of a method to measure error undermines the reliability of data quality assessment. This review aimed to assess the variability of source data verification (SDV) auditing methods used to monitor data quality in clinical research settings. MATERIAL AND METHODS The scientific databases MEDLINE, Scopus and Science Direct were searched for English language publications, with no date limits applied. Studies were considered if they included data from a clinical trial or clinical research setting and measured and/or reported data quality using an SDV auditing method. RESULTS In total, 15 publications were included. The nature and extent of SDV audit methods in the articles varied widely, depending upon the complexity of the source document, type of study, variables measured (primary or secondary), data audit proportion (3-100%) and collection frequency (6-24 months). Methods for coding, classifying and calculating error were also inconsistent. Transcription errors and inexperienced personnel were the main sources of reported error. Repeated SDV audits using the same dataset demonstrated an approximately 40% improvement in data accuracy and completeness over time. No description was given of what determines poor data quality in clinical trials. CONCLUSIONS A wide range of SDV auditing methods are reported in the published literature, though no uniform SDV auditing method could be determined as "best practice" in clinical trials. Published audit methodology articles are warranted for the development of a standardised SDV auditing method to monitor data quality in clinical research settings.
9. Park S, Nam CM, Park S, Noh YH, Ahn CR, Yu WS, Kim BK, Kim SM, Kim JS, Rha SY. 'Screening audit' as a quality assurance tool in good clinical practice compliant research environments. BMC Med Ethics 2018; 19:30. PMID: 29699559. PMCID: PMC5922013. DOI: 10.1186/s12910-018-0269-2.
Abstract
BACKGROUND With the growing amount of clinical research, regulations and research ethics are becoming more stringent. This trend introduces a need for quality assurance measures for ensuring adherence to research ethics and human research protection beyond Institutional Review Board approval. Audits, one of the most effective tools for assessing quality assurance, are measures used to evaluate Good Clinical Practice (GCP) and protocol compliance in clinical research. However, they are laborious, time consuming, and require expertise. Therefore, we developed a simple auditing process (a screening audit) and evaluated its feasibility and effectiveness. METHODS The screening audit was developed using a routine audit checklist based on the Severance Hospital's Human Research Protection Program policies and procedures. The measure includes 20 questions, and results are summarized in five categories of audit findings. We analyzed 462 studies that were reviewed by the Severance Hospital Human Research Protection Center between 2013 and 2017. We retrospectively analyzed research characteristics, reply rate, audit findings, associated factors and post-screening audit compliance, etc. RESULTS: Investigator reply rates gradually increased, except for the first year (73% → 26% → 53% → 49% → 55%). The studies were graded as "critical," "major," "minor," and "not a finding" (11.9, 39.0, 42.9, and 6.3%, respectively), based on findings and number of deficiencies. The auditors' decisions showed fair agreement with weighted kappa values of 0.316, 0.339, and 0.373. Low-risk level studies, single center studies, and non-phase clinical research showed more prevalent frequencies of being "major" or "critical" (p = 0.002, < 0.0001, < 0.0001, respectively). Inappropriateness of documents, failure to obtain informed consent, inappropriateness of informed consent process, and failure to protect participants' personal information were associated with higher audit grade (p < 0.0001, p = 0.0001, p < 0.0001, p = 0.003). We were able to observe critical GCP violations in the routine internal audit results of post-screening audit compliance checks in "non-responding" and "critical" studies upon applying the screening audit. CONCLUSIONS Our screening audit is a simple and effective way to assess overall GCP compliance by institutions and to ensure medical ethics. The tool also provides useful selection criteria for conducting routine audits.
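The inter-auditor agreement reported in this abstract can in principle be reproduced with a weighted kappa on the ordinal audit grades. A small illustration follows using invented grade assignments and scikit-learn's weighted kappa; it mirrors the general approach, not the authors' exact computation.

```python
# Illustrative weighted-kappa agreement between two auditors' ordinal grades.
# The grade lists are invented; only the computation reflects the approach above.
from sklearn.metrics import cohen_kappa_score

GRADES = ["not a finding", "minor", "major", "critical"]   # ordinal scale
auditor_a = ["minor", "major", "critical", "minor", "not a finding", "major", "minor", "major"]
auditor_b = ["minor", "major", "major",    "minor", "minor",         "major", "major", "major"]

kappa = cohen_kappa_score(auditor_a, auditor_b, labels=GRADES, weights="linear")
print(f"linearly weighted kappa = {kappa:.3f}")
# Landis-Koch convention: 0.21-0.40 corresponds to 'fair' agreement
```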
Affiliation(s)
- Sinyoung Park
- Human research Protection Center, Severance Hospital, Yonsei University Health System, Seoul, Korea
- Chung Mo Nam
- Department of Preventive Medicine, Yonsei University College of Medicine, Seoul, Korea
- Sejung Park
- Department of Biostatistics, Yonsei University College of Medicine, Seoul, Korea
- Yang Hee Noh
- Human research Protection Center, Severance Hospital, Yonsei University Health System, Seoul, Korea
- Cho Rong Ahn
- Human research Protection Center, Severance Hospital, Yonsei University Health System, Seoul, Korea
- Wan Sun Yu
- Human research Protection Center, Severance Hospital, Yonsei University Health System, Seoul, Korea
- Bo Kyung Kim
- Human research Protection Center, Severance Hospital, Yonsei University Health System, Seoul, Korea
- Seung Min Kim
- Department of Neurology, Yonsei University College of Medicine, Seoul, Korea
- Jin Seok Kim
- Human research Protection Center, Severance Hospital, Yonsei University Health System, Seoul, Korea; Division of Hematology, Department of Internal Medicine, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
- Sun Young Rha
- Human research Protection Center, Severance Hospital, Yonsei University Health System, Seoul, Korea; Division of Medical Oncology, Department of Internal Medicine, Yonsei Cancer Center, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea.
10. Houston L, Probst Y, Yu P, Martin A. Exploring Data Quality Management within Clinical Trials. Appl Clin Inform 2018; 9:72-81. PMID: 29388180. DOI: 10.1055/s-0037-1621702.
Abstract
BACKGROUND Clinical trials are an important research method for improving medical knowledge and patient care. Multiple international and national guidelines stipulate the need for data quality and assurance. Many strategies and interventions are developed to reduce error in trials, including standard operating procedures, personnel training, data monitoring, and design of case report forms. However, guidelines are nonspecific in the nature and extent of necessary methods. OBJECTIVE This article gathers information about current data quality tools and procedures used within Australian clinical trial sites, with the aim of developing standard data quality monitoring procedures to ensure data integrity. METHODS Relevant information about data quality management methods and procedures, error levels, data monitoring, staff training, and development was collected. Staff members from 142 clinical trials listed on the National Health and Medical Research Council (NHMRC) clinical trials Web site were invited to complete a short self-reported semiquantitative anonymous online survey. RESULTS Twenty (14%) clinical trials completed the survey. Results from the survey indicate that procedures to ensure data quality vary among clinical trial sites. Centralized monitoring (65%) was the most common procedure to ensure high-quality data. Ten (50%) trials reported having a data management plan in place, and two sites utilized an error acceptance level to minimize discrepancy, set at <5% and 5 to 10%, respectively. The quantity of data variables checked (10-100%), the frequency of visits (once a month to annually), and the types of variables (100%, critical data, or critical and noncritical data audits) for data monitoring varied among respondents. The average time spent on staff training per person was 11.58 hours over a 12-month period, and the type of training was diverse. CONCLUSION Clinical trial sites are implementing ad hoc methods pragmatically to ensure data quality. Findings highlight the necessity for further research into "standard practice" focusing on developing and implementing publicly available data quality monitoring procedures.
11. Buyse M, Squifflet P, Coart E, Quinaux E, Punt CJ, Saad ED. The impact of data errors on the outcome of randomized clinical trials. Clin Trials 2017. PMID: 28641461. DOI: 10.1177/1740774517716158.
Abstract
Background/aims Considerable human and financial resources are typically spent to ensure that data collected for clinical trials are free from errors. We investigated the impact of random and systematic errors on the outcome of randomized clinical trials. Methods We used individual patient data relating to response endpoints of interest in two published randomized clinical trials, one in ophthalmology and one in oncology. These randomized clinical trials enrolled 1186 patients with age-related macular degeneration and 736 patients with metastatic colorectal cancer. The ophthalmology trial tested the benefit of pegaptanib for the treatment of age-related macular degeneration and identified a statistically significant treatment benefit, whereas the oncology trial assessed the benefit of adding cetuximab to a regimen of capecitabine, oxaliplatin, and bevacizumab for the treatment of metastatic colorectal cancer and failed to identify a statistically significant treatment difference. We simulated trial results by adding errors that were independent of the treatment group (random errors) and errors that favored one of the treatment groups (systematic errors). We added such errors to the data for the response endpoint of interest for increasing proportions of randomly selected patients. Results Random errors added to up to 50% of the cases produced only slightly inflated variance in the estimated treatment effect of both trials, with no qualitative change in the p-value. In contrast, systematic errors produced bias even for very small proportions of patients with added errors. Conclusion A substantial amount of random errors is required before appreciable effects on the outcome of randomized clinical trials are noted. In contrast, even a small amount of systematic errors can severely bias the estimated treatment effects. Therefore, resources devoted to randomized clinical trials should be spent primarily on minimizing sources of systematic errors which can bias the analyses, rather than on random errors which result only in a small loss in power.
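A compact simulation in the spirit of the study summarized above, using synthetic binary response data rather than the original trial datasets: random errors are introduced independently of treatment arm, systematic errors only in the direction that favors one arm, and the effect on the comparison between arms is shown. The response rates and error mechanism are invented for illustration.

```python
# Synthetic illustration of random vs. systematic data errors in a two-arm
# trial with a binary response endpoint (invented data, not the cited trials).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 600
control = rng.binomial(1, 0.30, n)     # true response rate 30%
treated = rng.binomial(1, 0.40, n)     # true response rate 40%

def p_value(a, b):
    table = [[a.sum(), len(a) - a.sum()], [b.sum(), len(b) - b.sum()]]
    return stats.chi2_contingency(table)[1]

def add_random_errors(x, frac):
    x = x.copy()
    idx = rng.choice(len(x), int(frac * len(x)), replace=False)
    x[idx] = rng.binomial(1, 0.5, len(idx))        # error unrelated to arm
    return x

def add_systematic_errors(x, frac):
    x = x.copy()
    idx = rng.choice(len(x), int(frac * len(x)), replace=False)
    x[idx] = 1                                      # errors always favor this arm
    return x

for frac in (0.0, 0.1, 0.3, 0.5):
    rand_p = p_value(add_random_errors(treated, frac), add_random_errors(control, frac))
    syst_p = p_value(add_systematic_errors(treated, frac), control)
    print(f"{frac:>4.0%} errors: p (random, both arms) = {rand_p:.4f}, "
          f"p (systematic, one arm) = {syst_p:.4f}")
```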
Affiliation(s)
- Marc Buyse
- International Drug Development Institute (IDDI), San Francisco, CA, USA; Interuniversity Institute for Biostatistics and Statistical Bioinformatics (I-BioStat), Hasselt University, Hasselt, Belgium
- Pierre Squifflet
- International Drug Development Institute (IDDI), Louvain-la-Neuve, Belgium
- Elisabeth Coart
- International Drug Development Institute (IDDI), Louvain-la-Neuve, Belgium
- Emmanuel Quinaux
- International Drug Development Institute (IDDI), Louvain-la-Neuve, Belgium
- Cornelis JA Punt
- Department of Medical Oncology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- Everardo D Saad
- International Drug Development Institute (IDDI), Louvain-la-Neuve, Belgium
12. DeMets DL, Fost N, Powers M. An Institutional Review Board dilemma: responsible for safety monitoring but not in control. Clin Trials 2006; 3:142-8. PMID: 16773956. DOI: 10.1191/1740774506cn137oa.
Abstract
Clinical trials have become a major research tool to evaluate new medical interventions. Most trials require some level of data monitoring for quality control and many trials require special monitoring for participant safety. For national multicenter trials, independent data monitoring committees have become the standard for monitoring for evidence of participant benefit or harm in trials with irreversible outcomes such as death or serious morbidity. The Institutional Review Board (IRB) is held responsible for monitoring local trials. Often local institutions do not have an infrastructure in place to meet this responsibility, and therefore local IRBs cannot fulfill this obligation. In addition, IRBs are currently inundated with individual safety reports from local and multi-institutional trials which may appear to provide some level of safety monitoring, but in fact gives a false sense of security. We propose the establishment of institutional data monitoring committees and appropriate informatics infrastructure to monitor local trials.
Affiliation(s)
- David L DeMets
- Biostatistics and Medical Informatics, University of Wisconsin, 600 Highland Avenue, Box 4675, K6/446 CSC, Madison, WI 53792-4675, USA.
13. Liénard JL, Quinaux E, Fabre-Guillevin E, Piedbois P, Jouhaud A, Decoster G, Buyse M. Impact of on-site initiation visits on patient recruitment and data quality in a randomized trial of adjuvant chemotherapy for breast cancer. Clin Trials 2006; 3:486-92. PMID: 17060222. DOI: 10.1177/1740774506070807.
Abstract
Purpose To provide empirical evidence on the impact of on-site initiation visits on the following outcomes: patient recruitment, quantity and quality of data submitted to the trial coordinating office, and patients' follow-up time. Patients and methods This methodological study was performed as part of a randomized trial comparing two combination chemotherapies for adjuvant treatment of breast cancer. Centers participating in the trial were randomized either to receive systematic on-site visits (Visited group) or not (Non-visited group). Results The study was terminated after two years, while the main randomized trial continued. Of the 135 centers that had expressed an interest in the trial, only 69 randomized at least one patient (35/68 in the Visited group, 34/67 in the Non-visited group). Almost two-thirds of the patients were entered by 17 centers (10 in the Visited group, seven in the Non-visited group) that accrued more than 10 patients each. None of the prespecified outcomes favored the group of centers that received on-site initiation visits (i.e., mean number of queries per patient: 6.1 ± 9.7 versus 5.4 ± 6.4, respectively, for the Visited and Non-visited groups). Spontaneous transmittal of case report forms, although required by protocol, was low in both randomized groups (mean number of pages per patient: 1.5 ± 2.0 versus 2.1 ± 2.3, respectively), with investigators submitting about one-third of the expected forms on time (29% and 39%, respectively). Limitations This study could not evaluate the impact of repeated on-site visits on clinical outcomes. Conclusion Systematic on-site initiation visits did not contribute significantly to this clinical trial.
Affiliation(s)
- J-L Liénard
- International Drug Development Institute (IDDI), Brussels, Belgium.
14. Macefield RC, Beswick AD, Blazeby JM, Lane JA. A systematic review of on-site monitoring methods for health-care randomised controlled trials. Clin Trials 2013; 10:104-24. PMID: 23345308. DOI: 10.1177/1740774512467405.
Abstract
BACKGROUND Monitoring the conduct of clinical trials is recommended by International Conference on Harmonisation Good Clinical Practice (ICH GCP) guidelines and is integral to trial quality assurance. On-site monitoring, that is, visiting trial sites, is one part of this process, but little is known about the procedures that are performed in practice. PURPOSE To examine and summarise published on-site monitoring methods for health-care clinical trials, including evaluations of their benefits and costs to trials. METHODS A systematic literature review identified all articles reporting the methods and practices of on-site monitoring of randomised controlled trials (RCTs). Articles were categorised into (1) reports from research groups and organisations, (2) reports from individual RCTs, (3) randomised trials of on-site monitoring interventions, (4) cost simulations, or (5) surveys of trial staff and monitors. Data were extracted on the characteristics of the trials and groups reporting on-site monitoring (e.g., geographical origin, sponsor, and trial focus). Information from articles in categories (1)-(3) was summarised on the frequency and scope of site monitoring visits, monitoring team size and composition, activities during site visits, and reporting structures. Evaluations of the benefits and disadvantages of on-site monitoring were examined for all included articles. RESULTS In total, 57 articles were identified, comprising 21 articles about the on-site monitoring practices of 16 research groups, 30 articles from 26 RCTs, 1 on-site monitoring intervention RCT, 2 cost simulations, and 3 surveys. Publications in categories (1)-(3) mostly originated from the United States (33/52, 63%) or Europe (15/52, 29%) and predominantly described non-commercial organisations or trials (45/52, 87%), with heart disease (9/26, 35%) or cancer (5/26, 19%) the commonest focus of individual RCTs. The frequency of visits ranged from every 6-8 weeks up to once every 3 years, with mostly all trial sites visited. The number of monitors visiting a site varied between 1 and 8. The most common on-site monitoring activity was verifying source data and consent forms, with a focus on data accuracy. Only six articles evaluated their on-site monitoring process, with improvements observed in recruitment rates and protocol adherence but with direct costs and staff time viewed as the major disadvantages. The on-site monitoring RCT ended prematurely, preventing full assessment. LIMITATIONS Trialists and organisations may utilise additional unpublished on-site monitoring systems. The varied terminology used to describe monitoring may have limited identification of some relevant articles. CONCLUSIONS This review demonstrated that on-site monitoring is utilised in trials worldwide, but systems vary considerably with little evidence to support practice. These on-site monitoring practices need to be evaluated empirically, including costs, to provide robust evidence for the contribution of site visits to trial performance and quality.
Affiliation(s)
- Rhiannon C Macefield
- School of Social and Community Medicine, University of Bristol, Bristol BS8 2PS, UK
15. Mitchel JT, Kim YJ, Choi J, Park G, Cappi S, Horn D, Kist M, D'Agostino RB. Evaluation of Data Entry Errors and Data Changes to an Electronic Data Capture Clinical Trial Database. Drug Inf J 2011; 45:421-430. PMID: 24058221. DOI: 10.1177/009286151104500404.
Abstract
Monitoring of clinical trials involves several disciplines, stakeholders, and skill sets. The aim of the present study was to identify database changes and data entry errors in an electronic data capture (EDC) clinical trial database, and to assess the impact of the changes. To accomplish this aim, Target e*CRF was used as the EDC tool for a multinational, dose-finding, multicenter, double-blind, randomized, parallel, placebo-controlled trial investigating the efficacy and safety of a new treatment in men with lower urinary tract symptoms associated with benign prostatic hyperplasia. The main errors observed were simple transcription errors from the paper source documents to the EDC database. This observation was to be expected, since every transaction has an inherent error rate. What and how to monitor must be assessed within the risk-based monitoring section of the comprehensive data monitoring plan. With the advent of direct data entry, and the elimination of the requirement to transcribe from a paper source record to an EDC system, error rates should go down dramatically. In addition, protocol violations and data outside the normal range can be identified at the time of data entry and not days, weeks, or months after the fact.
16. Bakobaki JM, Rauchenberger M, Joffe N, McCormack S, Stenning S, Meredith S. The potential for central monitoring techniques to replace on-site monitoring: findings from an international multi-centre clinical trial. Clin Trials 2012; 9:257-64. DOI: 10.1177/1740774511427325.
Abstract
Background Compliance with Good Clinical Practice (GCP) guidelines should ensure the safety of trial participants and the reliability of trial results. Over the last decade, increasing emphasis has been placed on the role of costly on-site monitoring and source data verification as processes to demonstrate that GCP is being followed, despite a lack of empirical evidence that these are effective. Purpose To assess whether findings from on-site monitoring of a recent international multi-centre clinical trial could have been identified using central data review and other centralised monitoring techniques. Methods Findings documented in a sample of site monitoring reports, and Programme Management Board Executive (PMBe) reports, from the Microbicides Development Programme (MDP) 301 trial – a randomised placebo-controlled trial of a microbicide gel to prevent vaginally acquired HIV infection conducted in four countries in East and Southern Africa – were extracted and individually assessed to determine whether they could have been detected in the trial database or through other central means. Results Four site visit reports contained 268 monitoring findings from a review of 104 participant files covering 324 study visits. Of the 268 findings, 76 (28.4%) were also identified in the study database. Central checks, had these been in place (such as central receipt and review of back-translated documents, enrolment and testing logs, informed consent, and more complex database queries), could have identified a further 179 (66.8%); 13 (4.9%) other findings (all minor) could have been identified through a review of the participant folder at site. The four PMBe reports reviewed included six major and three critical findings from a review of over 1000 participant files: only two of these (both major) were assessed as unlikely to be identified using central monitoring techniques. Limitations The study data used were not collected with this retrospective review in mind. It suggests that prospective work is needed to compare monitoring practices in real time. Conclusions While there may be some categories of findings that it is not possible to identify centrally, the very large majority of findings reviewed in this analysis could be identified using central monitoring strategies. These data suggest that with better central and targeted on-site monitoring, it should be possible to identify and address most protocol and procedural compliance issues without performing intensive and costly routine on-site data monitoring.
Affiliation(s)
- Nicola Joffe
- Medical Research Council Clinical Trials Unit, London, UK
- Sally Stenning
- Medical Research Council Clinical Trials Unit, London, UK
- Sarah Meredith
- Medical Research Council Clinical Trials Unit, London, UK
17. Lane JA, Wade J, Down L, Bonnington S, Holding PN, Lennon T, Jones AJ, Salter CE, Neal DE, Hamdy FC, Donovan JL. A Peer Review Intervention for Monitoring and Evaluating sites (PRIME) that improved randomized controlled trial conduct and performance. J Clin Epidemiol 2011; 64:628-36. PMID: 21239142. DOI: 10.1016/j.jclinepi.2010.10.003.
Abstract
OBJECTIVE Good clinical practice (GCP) guidelines emphasize trial site monitoring, although the implementation is unspecified and evidence for benefit is sparse. We aimed to develop a site monitoring process using peer reviewers to improve staff training, site performance, data collection, and GCP compliance. STUDY DESIGN AND SETTING The Peer Review Intervention for Monitoring and Evaluating sites (PRIME) team observed and gave feedback on trial recruitment and follow-up appointments, held staff meetings, and examined documentation during annual 2-day site visits. The intervention was evaluated in the ProtecT trial, a UK randomized controlled trial of localized prostate cancer treatments (ISRCTN20141297). The ProtecT coordinator and senior nurses conducted three monitoring rounds at eight sites (2004-2007). The process evaluation used PRIME report findings, trial databases, resource use, and a site nurse survey. RESULTS Adverse findings decreased across all sites from 44 in round 1 to 19 in round 3. Most findings related to protocol adherence or site organizational issues, including improvements in eligibility criteria application and data collection. Staff found site monitoring acceptable and made changes after reviews. CONCLUSION The PRIME process used observation by peer reviewers to improve protocol adherence and train site staff, which increased trial performance and consistency.
Affiliation(s)
- J Athene Lane
- School of Social and Community Medicine, University of Bristol, Canynge Hall, 39 Whatley Road, Bristol BS8 2PS, UK.
18. Williams GW. The other side of clinical trial monitoring; assuring data quality and procedural adherence. Clin Trials 2006; 3:530-7. PMID: 17170037. DOI: 10.1177/1740774506073104.
Abstract
BACKGROUND Data monitoring can mean different things. It can mean statistical methodologies for clinical trial monitoring, interim data analysis, monitoring for quality control or assurance or safety reporting to regulatory agencies. PURPOSE The various facets of data monitoring will be discussed and reviewed from primarily an industry perspective. METHODS By careful attention to the design and conduct of a clinical trial, the expense of monitoring can be markedly reduced. Careful attention should be given to the qualifications of investigators in the selection of clinical sites and central facilities. Site personnel must be adequately trained. The sponsor should utilize appropriately qualified individuals to supervise the overall conduct of the trial. The monitor should visit the investigator at the site of the investigation frequently enough to ensure acceptable quality. The monitor is responsible for inspecting the case report forms at regular intervals. Quality control should be applied to each stage of data handling to ensure that all data are reliable and have been processed correctly. The auditor will assess whether the site is being monitored in accordance with the monitoring plan. The determination of the extent and nature of monitoring should be based on considerations such as the objective, design and complexity of the trial. Statistical sampling may be an acceptable method for selecting the data to be verified. The monitor should ensure that adverse events are reported. Study data will be monitored on an ongoing basis to ensure patient safety. The sponsor may utilize a Data Monitoring Committee to protect the validity of a trial. CONCLUSIONS Discussions between industry, academia and regulatory groups regarding the optimal extent and methods for monitoring of clinical trials are encouraged.
19. You YN, Jacobs L, Martinez E, Ota DM. The audit process and how to ensure a successful audit. Cancer Treat Res 2007; 132:179-97. PMID: 17305022. DOI: 10.1007/978-0-387-33225-3_9.
Affiliation(s)
- Y Nancy You
- Department of Surgery, Mayo Clinic, Rochester, MN, USA
20. Eisenstein EL, Lemons PW, Tardiff BE, Schulman KA, Jolly MK, Califf RM. Reducing the costs of phase III cardiovascular clinical trials. Am Heart J 2005; 149:482-8. PMID: 15864237. DOI: 10.1016/j.ahj.2004.04.049.
Abstract
BACKGROUND The pharmaceutical industry spends approximately $26.4 billion annually on research and development ($4.1 billion on cardiovascular products). We compared pharmaceutical companies' planned resource use and costs in Phase III cardiovascular trials and identified cost-saving strategies. METHODS AND RESULTS We developed 2 case scenarios (a 17,000-patient, open-label acute coronary syndromes [ACS] trial and a 14,500-patient, double-blind congestive heart failure [CHF] trial) and surveyed 6 pharmaceutical experts about the expected resources (e.g., number of sites, case report form [CRF] pages, and monitoring visits) needed for the trials. Using a validated model, we estimated costs under each expert's assumptions. ACS trial costs averaged $83 million (median, $67 million; range, $57 million to $158 million), and CHF trial costs averaged $142 million (median, $135 million; range, $102 million to $207 million). Site-related expenses (site management and payments) were >65% of total costs for both trials. In sensitivity analyses, total costs were reduced >40% by simultaneously reducing CRF pages, monitoring visits, and site-payment amounts while maintaining the numbers of patients and sites. CONCLUSIONS With a set number of sites and patients, the most efficient way to reduce trial costs and still meet the trial's scientific objectives is to reduce management complexity. Modest changes in management parameters release significant monies to answer more research questions.
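The kind of sensitivity analysis described in this abstract can be mimicked with a very simple cost model. Every unit cost and the cost structure below are invented placeholders, not the validated model used in the study; the sketch only shows how trimming management parameters while holding sites and patients fixed reduces the total.

```python
# Crude trial-cost model to illustrate the reported sensitivity analysis.
# All unit costs are placeholder assumptions, not figures from the study.
def trial_cost(n_sites, n_patients, crf_pages, visits_per_site, site_payment,
               cost_per_page=60.0, cost_per_visit=2500.0, site_mgmt=40000.0):
    site_costs = n_sites * (site_mgmt + visits_per_site * cost_per_visit)
    patient_costs = n_patients * (site_payment + crf_pages * cost_per_page)
    return site_costs + patient_costs

base = trial_cost(n_sites=800, n_patients=17000, crf_pages=60,
                  visits_per_site=8, site_payment=3000)
lean = trial_cost(n_sites=800, n_patients=17000, crf_pages=30,
                  visits_per_site=4, site_payment=2000)   # same sites, same patients
print(f"baseline ${base/1e6:.0f}M, simplified ${lean/1e6:.0f}M, "
      f"saving {(base - lean) / base:.0%}")
```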
Affiliation(s)
- Eric L Eisenstein
- Division of Cardiology, Department of Medicine, Duke University Medical Center and Duke Clinical Research Institute, Durham, NC 27715, USA.
21. Ottevanger PB, Therasse P, van de Velde C, Bernier J, van Krieken H, Grol R, De Mulder P. Quality assurance in clinical trials. Crit Rev Oncol Hematol 2003; 47:213-35. PMID: 12962897. DOI: 10.1016/s1040-8428(03)00028-3.
Abstract
The literature was searched in electronic databases using the keywords quality, quality control and quality assurance in combination with clinical trials, surgery, pathology, radiotherapy, chemotherapy and data management. From this literature, a comprehensive review is given of what quality assurance means, the various methods used for quality assurance in different aspects of clinical trials, and the impact of this quality assurance on outcomes and everyday practice.
Affiliation(s)
- P B Ottevanger
- Department of Internal Medicine, Division of Medical Oncology, 550, University Hospital Nijmegen, Geert Grooteplein 8, PO 9101, 6500HB Nijmegen, The Netherlands.
22. Califf RM, Morse MA, Wittes J, Goodman SN, Nelson DK, DeMets DL, Iafrate RP, Sugarman J. Toward protecting the safety of participants in clinical trials. Control Clin Trials 2003; 24:256-71. PMID: 12757992. DOI: 10.1016/s0197-2456(03)00005-9.
Abstract
It is a widely held belief that the current system of oversight of clinical research, particularly the means of assessing risks and minimizing harms to participants in clinical trials, could be improved. In particular, the system is inefficient with overemphasis on the monitoring ability of some groups such as research ethics review boards and investigators, underemphasis on others such as data monitoring committees (DMCs) and sponsors, confusion about responsibilities for safety and imperfect communication between these different groups. Research ethics review boards are not able to perform safety monitoring by review of individual adverse events and are often burdened by duplicative reviews of large multicenter studies. There are no standards for DMCs to ensure they can reliably identify safety issues. Sponsors may be overreliant on data audits and slow to disseminate safety data in a coherent summary. Investigators, their staffs and clinical sites may not fully appreciate all the nuances of good clinical practice or may be inattentive to the daily conduct of studies. Regulators, particularly those in the United States, have failed to completely harmonize their policies with each other or with international regulatory agencies. We recommend well-designed monitoring plans for all studies that are appropriate to their scope and risk, more centralized review of large multisite studies and closer local scrutiny of single-institution studies. In addition, sponsors should pay greater attention to monitoring adverse events and keeping up-to-date databases or investigator's brochures emphasizing safety issues. A minimal standard of education or expertise in good clinical practice should be established for investigators, their staffs and research ethics review board members. DMC composition and functions should be standardized and regulations should be harmonized nationally and internationally. Finally, there should be a concerted effort to study the efficacy of various components of the system.
Affiliation(s)
- Robert M Califf
- Duke Clinical Research Institute, Duke University Medical Center, PO Box 17969, Durham, NC 27715, USA.
23. Knatterud GL, Rockhold FW, George SL, Barton FB, Davis CE, Fairweather WR, Honohan T, Mowery R, O'Neill R. Guidelines for quality assurance in multicenter trials: a position paper. Control Clin Trials 1998; 19:477-93. PMID: 9741868. DOI: 10.1016/s0197-2456(98)00033-6.
Abstract
In the wake of reports of falsified data in one of the trials of the National Surgical Adjuvant Project for Breast and Bowel Cancer supported by the National Cancer Institute, clinical trials came under close scrutiny by the public, the press, and Congress. Questions were asked about the quality and integrity of the collected data and the analyses and conclusions of trials. In 1995, the leaders of the Society for Clinical Trials (the Chair of the Policy Committee, Dr. David DeMets, and the President of the Society, Dr. Sylvan Green) asked two members of the Society (Dr. Genell Knatterud and Dr. Frank Rockhold) to act as co-chairs of a newly formed subcommittee to discuss the issues of data integrity and auditing. In consultation with Drs. DeMets and Green, the co-chairs selected other members (Ms. Franca Barton, Dr. C.E. Davis, Dr. Bill Fairweather, Dr. Stephen George, Mr. Tom Honohan, Dr. Richard Mowery, and Dr. Robert O'Neill) to serve on the subcommittee. The subcommittee considered "how clean clinical trial data should be, to what extent auditing procedures are required, and who should conduct audits and how often." During the initial discussions, the subcommittee concluded that data auditing was insufficient to achieve data integrity. Accordingly, the subcommittee prepared this set of guidelines for standards of quality assurance for multicenter clinical trials. We include recommendations for appropriate action if problems are detected.
Affiliation(s)
- G L Knatterud
- Maryland Medical Research Institute, Baltimore 21210, USA