1. White N, Parsons R, Borg D, Collins G, Barnett A. Planned but ever published? A retrospective analysis of clinical prediction model studies registered on clinicaltrials.gov since 2000. J Clin Epidemiol 2024; 173:111433. [PMID: 38897482] [DOI: 10.1016/j.jclinepi.2024.111433]
Abstract
OBJECTIVES: To describe the characteristics and publication outcomes of clinical prediction model studies registered on clinicaltrials.gov since 2000.
STUDY DESIGN AND SETTING: Observational studies registered on clinicaltrials.gov between January 1, 2000, and March 2, 2022, describing the development of a new clinical prediction model or the validation of an existing model for predicting individual-level prognostic or diagnostic risk were analyzed. Eligible clinicaltrials.gov records were classified by modeling study type (development, validation) and the model outcome being predicted (prognostic, diagnostic). Recorded characteristics included study status, sample size information, Medical Subject Headings, and plans to share individual participant data. Publication outcomes were analyzed by linking National Clinical Trial numbers for eligible records with PubMed abstracts.
RESULTS: Nine hundred twenty-eight records were analyzed from a possible 89,896 observational study records. Publication searches found 170 matching peer-reviewed publications for 137 clinicaltrials.gov records. The estimated proportion of records with 1 or more matching publications, after accounting for time since study start, was 2.8% at 2 years (95% CI: 1.7% to 3.9%), 12.3% at 5 years (9.8% to 14.9%), and 27% at 10 years (23% to 33%). Stratifying records by study start year indicated that publication proportions improved over time. Records tended to prioritize the development of new prediction models over the validation of existing models (76%; 704/928 vs. 24%; 182/928). At the time of download, 27% of records were marked as complete, 35% were still recruiting, and 14.7% had unknown status. Only 7.4% of records stated plans to share individual participant data.
CONCLUSION: Published clinical prediction model studies are only a fraction of overall research efforts, with many studies planned but not completed or published. Improving the uptake of study preregistration and follow-up will increase the visibility of planned research. Introducing additional registry features and guidance may improve the identification of clinical prediction model studies posted to clinical registries.
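As an illustration of the registry-to-PubMed linkage described in the study design, the following Python sketch looks up a ClinicalTrials.gov registration number in PubMed via the NCBI E-utilities. This is a hypothetical minimal example, not the authors' pipeline; it assumes the NCT number is indexed in PubMed's secondary source ID ([si]) field, and it omits API keys and rate limiting.

```python
# Minimal sketch (not the study's code): find PubMed records that carry a
# given ClinicalTrials.gov NCT number via the E-utilities esearch endpoint.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_matches(nct_id: str) -> list[str]:
    """Return PubMed IDs whose records are indexed with the given NCT number."""
    params = {"db": "pubmed", "term": f"{nct_id}[si]", "retmode": "json"}
    resp = requests.get(EUTILS, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

if __name__ == "__main__":
    # Hypothetical identifier, for illustration only.
    print(pubmed_matches("NCT00000000"))
```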
Affiliation(s)
- Nicole White: Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Queensland University of Technology, Kelvin Grove, Queensland, Australia
- Rex Parsons: Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Queensland University of Technology, Kelvin Grove, Queensland, Australia
- David Borg: Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Queensland University of Technology, Kelvin Grove, Queensland, Australia; School of Exercise and Nutrition Sciences, Queensland University of Technology, Kelvin Grove, Queensland, Australia
- Gary Collins: Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology & Musculoskeletal Sciences, Botnar Research Centre, University of Oxford, Oxford, United Kingdom
- Adrian Barnett: Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Queensland University of Technology, Kelvin Grove, Queensland, Australia
2. Ponzano M, Signori A, Bellavia A, Carbone A, Bovis F, Schiavetti I, Montobbio N, Sormani MP. Race and ethnicity in multiple sclerosis phase 3 clinical trials: A systematic review. Mult Scler 2024; 30:934-967. [PMID: 38849992] [DOI: 10.1177/13524585241254283]
Abstract
BACKGROUND: Distinctive differences in multiple sclerosis (MS) have been observed by race and ethnicity. We aim to (1) assess how often race and ethnicity were reported in clinical trials registered on ClinicalTrials.gov, (2) evaluate whether the population was diverse enough, and (3) compare with publications.
METHODS: We included phase 3 clinical trials registered with results on ClinicalTrials.gov between 2007 and 2023. When race and/or ethnicity were reported, we searched for the corresponding publications.
RESULTS: Of the 99 included studies, 56% reported race and/or ethnicity, of which only 26% were primarily completed before 2017. Studies reporting race or ethnicity contributed a total of 33,891 participants, mainly enrolled in Eastern Europe. Most were White (93%), and the median percentage of White participants was 93% (interquartile range (IQR) = 86%-98%), compared to 3% for Black (IQR = 1%-12%) and 0.2% for Asian (IQR = 0%-1%) participants. Four trials omitted race and ethnicity in publications, and even when the information was reported, some discrepancies in terminology were identified and categories with fewer participants were often collapsed.
CONCLUSION: More effort should be made to improve transparency, accuracy, and representativeness, both in publications and at the design phase, by addressing social determinants of health that historically limit the enrollment of underrepresented populations.
Affiliation(s)
- Marta Ponzano: Department of Health Sciences, University of Genoa, Genoa, Italy
- Alessio Signori: Department of Health Sciences, University of Genoa, Genoa, Italy
- Andrea Bellavia: Department of Environmental Health, Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Alessio Carbone: Department of Health Sciences, University of Genoa, Genoa, Italy
- Francesca Bovis: Department of Health Sciences, University of Genoa, Genoa, Italy
- Irene Schiavetti: Department of Health Sciences, University of Genoa, Genoa, Italy
- Noemi Montobbio: Department of Health Sciences, University of Genoa, Genoa, Italy
- Maria Pia Sormani: Department of Health Sciences, University of Genoa, Genoa, Italy; Ospedale Policlinico San Martino IRCCS, Genoa, Italy
3. Pan E, Roberts K. Linking Cancer Clinical Trials to their Result Publications. AMIA Jt Summits Transl Sci Proc 2024; 2024:642-651. [PMID: 38827077] [PMCID: PMC11141816]
Abstract
The results of clinical trials are a valuable source of evidence for researchers, policy makers, and healthcare professionals. However, online trial registries do not always contain links to the publications that report on their results, instead requiring a time-consuming manual search. Here, we explored the application of pre-trained transformer-based language models to automatically identify result-reporting publications of cancer clinical trials by computing dense vectors and performing semantic search. Models were fine-tuned on text data from trial registry fields and article metadata using a contrastive learning approach. The best performing model was PubMedBERT, which achieved a mean average precision of 0.592 and ranked 70.3% of a trial's publications in the top 5 results when tested on the holdout test trials. Our results suggest that semantic search using embeddings from transformer models may be an effective approach to the task of linking trials to their publications.
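The retrieval step described above can be sketched as follows: embed a trial's registry text and each candidate article's metadata with a transformer encoder, then rank candidates by cosine similarity. This is a minimal illustration only; the contrastive fine-tuning stage is omitted, and the checkpoint name is an assumed, publicly available PubMedBERT-based model rather than the one used in the paper.

```python
# Minimal sketch of embedding-based semantic search for trial-publication linking.
from sentence_transformers import SentenceTransformer, util

# Assumed, publicly available biomedical sentence-encoder checkpoint.
model = SentenceTransformer("pritamdeka/S-PubMedBert-MS-MARCO")

trial_text = ("Phase 3 trial of drug X versus placebo in advanced cancer; "
              "primary endpoint overall survival.")
candidates = [
    "Overall survival with drug X in advanced cancer: results of a phase 3 trial.",
    "A cohort study of dietary patterns and cardiovascular risk.",
]

trial_vec = model.encode(trial_text, convert_to_tensor=True)
cand_vecs = model.encode(candidates, convert_to_tensor=True)
scores = util.cos_sim(trial_vec, cand_vecs)[0]  # cosine similarity per candidate

for title, score in sorted(zip(candidates, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {title}")
```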
Affiliation(s)
- Evan Pan: Department of Computer Science & Engineering, Texas A&M University, College Station, TX, USA
- Kirk Roberts: School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, USA
4. Schmidt L, Mohamed S, Meader N, Bacardit J, Craig D. Automated data analysis of unstructured grey literature in health research: A mapping review. Res Synth Methods 2024; 15:178-197. [PMID: 38115736] [DOI: 10.1002/jrsm.1692]
Abstract
The amount of grey literature and 'softer' intelligence from social media or websites is vast. Given the long lead-times of producing high-quality peer-reviewed health information, there is growing demand for new ways to provide prompt input for secondary research. To our knowledge, this is the first review of automated data extraction methods or tools for health-related grey literature and soft data, with a focus on (semi)automating horizon scans, health technology assessments (HTA), evidence maps, or other literature reviews. We searched six databases to cover both health- and computer-science literature. After deduplication, 10% of the search results were screened by two reviewers; the remainder was single-screened up to an estimated 95% sensitivity, and screening was stopped early after an additional 1000 results were screened without yielding any new inclusions. All full texts were retrieved, screened, and extracted by a single reviewer, and 10% were checked in duplicate. We included 84 papers covering automation for health-related social media, internet fora, news, patents, government agencies and charities, or trial registers. From each paper, we extracted data about important functionalities for users of the tool or method; information about the level of support and reliability; and about practical challenges and research gaps. Poor availability of code, data, and usable tools leads to low transparency regarding performance and to duplication of work. Financial implications, scalability, integration into downstream workflows, and meaningful evaluations should be carefully planned before starting to develop a tool, given the vast amounts of data and the opportunities those tools offer to expedite research.
Affiliation(s)
- Lena Schmidt: National Institute for Health and Care Research Innovation Observatory, Population Health Sciences Institute, Newcastle University, Newcastle upon Tyne, UK
- Saleh Mohamed: National Institute for Health and Care Research Innovation Observatory, Population Health Sciences Institute, Newcastle University, Newcastle upon Tyne, UK
- Nick Meader: National Institute for Health and Care Research Innovation Observatory, Population Health Sciences Institute, Newcastle University, Newcastle upon Tyne, UK
- Jaume Bacardit: Interdisciplinary Computing and Complex BioSystems (ICOS) Research Group, School of Computing, Newcastle University, Newcastle upon Tyne, UK
- Dawn Craig: National Institute for Health and Care Research Innovation Observatory, Population Health Sciences Institute, Newcastle University, Newcastle upon Tyne, UK
5. Vorland CJ, Brown AW, Kilicoglu H, Ying X, Mayo-Wilson E. Publication of Results of Registered Trials With Published Study Protocols, 2011-2022. JAMA Netw Open 2024; 7:e2350688. [PMID: 38190185] [PMCID: PMC10774993] [DOI: 10.1001/jamanetworkopen.2023.50688]
Abstract
Importance: Publishing study protocols might reduce research waste caused by unclear methods or incomplete reporting; on the other hand, there might be few additional benefits of publishing protocols for registered trials that are never completed or published. No study has investigated the proportion of published protocols associated with published results.
Objective: To estimate the proportion of published trial protocols for which there are no associated published results.
Design, Setting, and Participants: This cross-sectional study used stratified random sampling to identify registered clinical trials with protocols published between January 2011 and August 2022 and indexed in PubMed Central. Ongoing studies and those within 1 year of the primary completion date on ClinicalTrials.gov were excluded. Published results were sought from August 2022 to March 2023 by searching ClinicalTrials.gov, emailing authors, and using an automated tool, as well as through incidental discovery.
Main Outcomes and Measures: The primary outcome was a weighted estimate of the proportion of registered trials with published protocols that also had published main results. The proportion of trials with unpublished results was estimated using a weighted mean.
Results: Of 1500 citations screened, 308 clinical trial protocols were included; 87 of these trials had not published their main results. Most included trials were investigator-initiated evaluations of nonregulated products. When published, results appeared a mean (SD) of 3.4 (2.0) years after protocol publication. With the use of a weighted mean, an estimated 4754 (95% CI, 4296-5226) eligible clinical trial protocols were published and indexed in PubMed Central between 2011 and 2022. In the weighted analysis, 1708 of those protocols (36%; 95% CI, 31%-41%) were not associated with publication of main results. In a sensitivity analysis excluding protocols published after 2019, an estimated 25% (95% CI, 20%-30%) of 3670 (95% CI, 3310-4032) protocol publications were not associated with publication of main results.
Conclusions and Relevance: This cross-sectional study of clinical trial protocols published on PubMed Central between 2011 and 2022 suggests that many protocols were not associated with subsequent publication of results. The overall benefits of publishing study protocols might outweigh the research waste caused by unnecessary protocol publications.
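The weighted estimate described under Main Outcomes and Measures can be illustrated with a small sketch: each sampled stratum's observed proportion of protocols without published results is weighted by that stratum's share of all eligible protocols. The strata and counts below are hypothetical, not the study's data.

```python
# Minimal sketch of a stratified (weighted) proportion estimate.
strata = [
    # (eligible protocols in stratum, number sampled, sampled without published results)
    (1200, 60, 25),
    (2000, 100, 35),
    (1554, 80, 20),
]

total = sum(n for n, _, _ in strata)
# Weight each stratum's sample proportion by the stratum's share of all protocols.
weighted_p = sum((n / total) * (x / m) for n, m, x in strata)
print(f"Estimated share of protocols without published results: {weighted_p:.1%} of {total}")
```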
Affiliation(s)
- Colby J. Vorland: Department of Epidemiology and Biostatistics, Indiana University School of Public Health–Bloomington
- Andrew W. Brown: Department of Biostatistics, University of Arkansas for Medical Sciences, Little Rock; Arkansas Children’s Research Institute, Little Rock
- Halil Kilicoglu: School of Information Sciences, University of Illinois at Urbana-Champaign
- Xiangji Ying: Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
- Evan Mayo-Wilson: Department of Epidemiology, University of North Carolina Gillings School of Global Public Health, Chapel Hill
6. Schmidt L, Sinyor M, Webb RT, Marshall C, Knipe D, Eyles EC, John A, Gunnell D, Higgins JPT. A narrative review of recent tools and innovations toward automating living systematic reviews and evidence syntheses. Z Evid Fortbild Qual Gesundhwes 2023; 181:65-75. [PMID: 37596160] [DOI: 10.1016/j.zefq.2023.06.007]
Abstract
Living reviews are an increasingly popular research paradigm. The purpose of a 'living' approach is to allow rapid collation, appraisal and synthesis of evolving evidence on an important research topic, enabling timely influence on patient care and public health policy. However, living reviews are time- and resource-intensive. The accumulation of new evidence and the possibility of developments within the review's research topic can introduce unique challenges into the living review workflow. To investigate the potential of software tools to support living systematic or rapid reviews, we present a narrative review informed by an examination of tools contained on the Systematic Review Toolbox website. We identified 11 tools with relevant functionalities and discuss the important features of these tools with respect to different steps of the living review workflow. Four tools (NestedKnowledge, SWIFT-ActiveScreener, DistillerSR, EPPI-Reviewer) covered multiple, successive steps of the review process, and the remaining tools addressed specific components of the workflow, including scoping and protocol formulation, reference retrieval, automated data extraction, write-up and dissemination of data. We identify several ways in which living reviews can be made more efficient and practical. Most of these focus on general workflow management or on automating the screening process through artificial intelligence and machine learning. More sophisticated uses of automation mostly target living rapid reviews, to increase the speed of production, or evidence maps, to broaden the scope of the map. We use a case study to highlight some of the barriers and challenges to incorporating tools into the living review workflow and processes. These include increased workload, the need for organisation, ensuring timely dissemination and challenges related to the development of bespoke automation tools to facilitate the review process. We describe how current end-user tools address these challenges, and which knowledge gaps remain that could be addressed by future tool development. Dedicated web presences for automatic dissemination of in-progress evidence updates, rather than solely relying on peer-reviewed journal publications, help to make the effort of a living evidence synthesis worthwhile. Despite offering basic living review functionalities, existing end-user tools could be further developed to be interoperable with other tools to support multiple workflow steps seamlessly, to address broader automatic evidence retrieval from a larger variety of sources, and to improve dissemination of evidence between review updates.
Affiliation(s)
- Lena Schmidt: National Institute for Health and Care Research Innovation Observatory, Population Health Sciences Institute, Newcastle University, Newcastle, UK; Sciome LLC, Research Triangle Park, North Carolina, USA
- Mark Sinyor: Department of Psychiatry, Sunnybrook Health Sciences Centre, Toronto, Canada; Department of Psychiatry, University of Toronto, Toronto, Canada
- Roger T Webb: Division of Psychology and Mental Health, The University of Manchester, Manchester, UK; National Institute for Health and Care Research Greater Manchester Patient Safety Translational Research Centre (NIHR GM PSTRC), Manchester, UK
- Duleeka Knipe: Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
- Emily C Eyles: Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK; The National Institute of Health and Care Research Applied Research Collaboration West (NIHR ARC West), University Hospitals Bristol NHS Foundation Trust, Bristol, UK
- Ann John: Population Data Science, Swansea University, Swansea, UK; Public Health Wales NHS Trust, Wales, UK
- David Gunnell: Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK; The National Institute of Health and Care Research Biomedical Research Centre, University Hospitals Bristol NHS Foundation Trust and the University of Bristol, Bristol, UK
- Julian P T Higgins: Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK; The National Institute of Health and Care Research Applied Research Collaboration West (NIHR ARC West), University Hospitals Bristol NHS Foundation Trust, Bristol, UK; The National Institute of Health and Care Research Biomedical Research Centre, University Hospitals Bristol NHS Foundation Trust and the University of Bristol, Bristol, UK
7. Wright EC, Kapuria D, Ben-Yakov G, Sharma D, Basu D, Cho MH, Abijo T, Wilkins KJ. Time to Publication for Randomized Clinical Trials Presented as Abstracts at Three Gastroenterology and Hepatology Conferences in 2017. Gastro Hep Adv 2023; 2:370-379. [PMID: 36938381] [PMCID: PMC10022591] [DOI: 10.1016/j.gastha.2022.12.003]
Abstract
Background & Aims: Results of randomized clinical trials are often first presented as conference abstracts, but these abstracts may be difficult to find, and trial results included in the abstract may not be followed by subsequent journal publications. In a review of abstracts submitted to eight major medical and surgical conferences in 2017, we identified 237 abstracts reporting primary results of randomized clinical trials accepted for presentation at three major gastroenterology and hepatology conferences. The aims of this new analysis were to determine the publication rate for these abstracts and the proportion of publications that included trial registration numbers in the publication abstract.
Methods: Clinical trial registries, PubMed, Europe PMC, and Google Scholar were searched through November 1, 2021, for publications reporting trial results for the selected abstracts. Publications were reviewed to determine if they included a trial registration number and if the registration number was in the abstract.
Results: Publications were found for 157 abstracts (66%) within four years of the conference. Publications were found more frequently for the 194 abstracts reporting results of registered trials (144, 74%) than for the 43 abstracts reporting unregistered trials (13, 30%), but only 67% of these 144 publications included the registration number in the publication abstract. Ten unpublished trials had summary results posted on ClinicalTrials.gov.
Conclusions: Clinical trial results could be more accessible if all trials were registered, authors included registration numbers in both conference and journal abstracts, and journal editors required the inclusion of registration numbers in publication abstracts for registered clinical trials.
Affiliation(s)
- Elizabeth C. Wright: Office of the Director, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland
- Devika Kapuria: Department of Gastroenterology, Washington University in St. Louis, St. Louis, Missouri
- Gil Ben-Yakov: The Center for Liver Diseases, Sheba Tel-Hashomer Medical Center, Ramat Gan, Israel
- Disha Sharma: Liver Diseases Branch, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland
- Dev Basu: Medstar Good Samaritan Hospital, Baltimore, Maryland
- Min Ho Cho: Department of Medicine, Baystate Medical Center, Springfield, Massachusetts
- Tomilowo Abijo: Office of the Director, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland
- Kenneth J. Wilkins: Office of the Director, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland