1. Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. PLoS Biol 2010; 8:e1000412. PMID: 20613859; PMCID: PMC2893951; DOI: 10.1371/journal.pbio.1000412. [Guideline]
2. Thomas BH, Ciliska D, Dobbins M, Micucci S. A process for systematically reviewing the literature: providing the research evidence for public health nursing interventions. Worldviews Evid Based Nurs 2008; 1:176-84. PMID: 17163895; DOI: 10.1111/j.1524-475x.2004.04006.x. [Validation Study]
Abstract
BACKGROUND Several groups have outlined methodologies for systematic literature reviews of the effectiveness of interventions. The Effective Public Health Practice Project (EPHPP) began in 1998. Its mandate is to provide research evidence to guide and support the Ontario Ministry of Health in outlining minimum requirements for public health services in the province. Also, the project is expected to disseminate the results provincially, nationally, and internationally. Most of the reviews are relevant to public health nursing practice. AIMS This article describes four issues related to the systematic literature reviews of the effectiveness of public health nursing interventions: (1) the process of systematically reviewing the literature, (2) the development of a quality assessment instrument, (3) the results of the EPHPP to date, and (4) some results of the dissemination strategies used. METHODS The eight steps of the systematic review process including question formulation, searching and retrieving the literature, establishing relevance criteria, assessing studies for relevance, assessing relevant studies for methodological quality, data extraction and synthesis, writing the report, and dissemination are outlined. Also, the development and assessment of content and construct validity and intrarater reliability of the quality assessment questionnaire used in the process are described. RESULTS More than 20 systematic reviews have been completed. Content validity was ascertained by the use of a number of experts to review the questionnaire during its development. Construct validity was demonstrated through comparisons with another highly rated instrument. Intrarater reliability was established using Cohen's Kappa. Dissemination strategies used appear to be effective in that professionals report being aware of the reviews and using them in program planning/policymaking decisions. 
CONCLUSIONS The EPHPP has demonstrated the ability to adapt the most current methods of systematic literature reviews of effectiveness to questions related to public health nursing. Other positive outcomes from the process include the development of a critical mass of public health researchers and practitioners who can actively participate in the process, and the work on dissemination has been successful in attracting external funds. A program of research in this area is being developed.
3. van Tulder M, Furlan A, Bombardier C, Bouter L. Updated method guidelines for systematic reviews in the Cochrane Collaboration Back Review Group. Spine (Phila Pa 1976) 2003; 28:1290-9. PMID: 12811274; DOI: 10.1097/01.brs.0000065484.95996.af. [Guideline]
Abstract
STUDY DESIGN Descriptive method guidelines. OBJECTIVES To help reviewers design, conduct, and report reviews of trials in the field of back and neck pain. SUMMARY OF BACKGROUND DATA In 1997, the Cochrane Collaboration Back Review Group published method guidelines for systematic reviews. Since its publication, new methodologic evidence emerged and more experience was acquired in conducting reviews. METHODS All reviews and protocols of the Back Review Group were assessed for compliance with the 1997 method guidelines. Also, the most recent version of the Cochrane Handbook (4.1) was checked for new recommendations. In addition, some important topics that were not addressed in the 1997 method guidelines were included (e.g., methods for qualitative analysis, reporting of conclusions, and discussion of clinical relevance of the results). In May 2002, preliminary results were presented and discussed in a workshop. In two rounds, a list of all possible recommendations and the final draft were circulated for comments among the editors of the Back Review Group. RESULTS The recommendations are divided into five categories: literature search, inclusion criteria, methodologic quality assessment, data extraction, and data analysis. Each recommendation is classified into minimum criteria and further guidance. Additional recommendations are included regarding assessment of clinical relevance, and reporting of results and conclusions. CONCLUSIONS Systematic reviews need to be conducted as carefully as the trials they report and, to achieve full impact, systematic reviews need to meet high methodologic standards.
4. Morse JM. Critical analysis of strategies for determining rigor in qualitative inquiry. Qual Health Res 2015; 25:1212-22. PMID: 26184336; DOI: 10.1177/1049732315588501.
Abstract
Criteria for determining the trustworthiness of qualitative research were introduced by Guba and Lincoln in the 1980s when they replaced terminology for achieving rigor, reliability, validity, and generalizability with dependability, credibility, and transferability. Strategies for achieving trustworthiness were also introduced. This landmark contribution to qualitative research remains in use today, with only minor modifications in format. Despite the significance of this contribution over the past four decades, the strategies recommended to achieve trustworthiness have not been critically examined. Recommendations for where, why, and how to use these strategies have not been developed, and how well they achieve their intended goal has not been examined. We do not know, for example, what impact these strategies have on the completed research. In this article, I critique these strategies. I recommend that qualitative researchers return to the terminology of social sciences, using rigor, reliability, validity, and generalizability. I then make recommendations for the appropriate use of the strategies recommended to achieve rigor: prolonged engagement, persistent observation, and thick, rich description; inter-rater reliability, negative case analysis; peer review or debriefing; clarifying researcher bias; member checking; external audits; and triangulation.
5. Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med 2019; 17:195. PMID: 31665002; PMCID: PMC6821018; DOI: 10.1186/s12916-019-1426-2. [Brief Report]
Abstract
BACKGROUND Artificial intelligence (AI) research in healthcare is accelerating rapidly, with potential applications being demonstrated across various domains of medicine. However, there are currently limited examples of such techniques being successfully deployed into clinical practice. This article explores the main challenges and limitations of AI in healthcare, and considers the steps required to translate these potentially transformative technologies from research to clinical practice. MAIN BODY Key challenges for the translation of AI systems in healthcare include those intrinsic to the science of machine learning, logistical difficulties in implementation, and consideration of the barriers to adoption as well as of the necessary sociocultural or pathway changes. Robust peer-reviewed clinical evaluation as part of randomised controlled trials should be viewed as the gold standard for evidence generation, but conducting these in practice may not always be appropriate or feasible. Performance metrics should aim to capture real clinical applicability and be understandable to intended users. Regulation that balances the pace of innovation with the potential for harm, alongside thoughtful post-market surveillance, is required to ensure that patients are not exposed to dangerous interventions nor deprived of access to beneficial innovations. Mechanisms to enable direct comparisons of AI systems must be developed, including the use of independent, local and representative test sets. Developers of AI algorithms must be vigilant to potential dangers, including dataset shift, accidental fitting of confounders, unintended discriminatory bias, the challenges of generalisation to new populations, and the unintended negative consequences of new algorithms on health outcomes. CONCLUSION The safe and timely translation of AI research into clinically validated and appropriately regulated systems that can benefit everyone is challenging. 
Robust clinical evaluation, using metrics that are intuitive to clinicians and ideally go beyond measures of technical accuracy to include quality of care and patient outcomes, is essential. Further work is required (1) to identify themes of algorithmic bias and unfairness while developing mitigations to address these, (2) to reduce brittleness and improve generalisability, and (3) to develop methods for improved interpretability of machine learning predictions. If these goals can be achieved, the benefits for patients are likely to be transformational.
6. Marshall M, Lockwood A, Bradley C, Adams C, Joy C, Fenton M. Unpublished rating scales: a major source of bias in randomised controlled trials of treatments for schizophrenia. Br J Psychiatry 2000; 176:249-52. PMID: 10755072; DOI: 10.1192/bjp.176.3.249.
Abstract
BACKGROUND A recent review suggested an association between using unpublished scales in clinical trials and finding significant results. AIMS To determine whether such an association existed in schizophrenia trials. METHOD Three hundred trials were randomly selected from the Cochrane Schizophrenia Group's Register. All comparisons between treatment groups and control groups using rating scales were identified. The publication status of each scale was determined and claims of a significant treatment effect were recorded. RESULTS Trials were more likely to report that a treatment was superior to control when an unpublished scale was used to make the comparison (relative risk 1.37 (95% CI 1.12-1.68)). This effect increased when a 'gold-standard' definition of treatment superiority was applied (RR 1.94 (95% CI 1.35-2.79)). In non-pharmacological trials, one-third of 'gold-standard' claims of treatment superiority would not have been made if published scales had been used. CONCLUSIONS Unpublished scales are a source of bias in schizophrenia trials.
7. Ryan TJ, Faxon DP, Gunnar RM, Kennedy JW, King SB, Loop FD, Peterson KL, Reeves TJ, Williams DO, Winters WL. Guidelines for percutaneous transluminal coronary angioplasty. A report of the American College of Cardiology/American Heart Association Task Force on Assessment of Diagnostic and Therapeutic Cardiovascular Procedures (Subcommittee on Percutaneous Transluminal Coronary Angioplasty). Circulation 1988; 78:486-502. PMID: 2969312; DOI: 10.1161/01.cir.78.2.486.
8. Macleod MR, Michie S, Roberts I, Dirnagl U, Chalmers I, Ioannidis JPA, Al-Shahi Salman R, Chan AW, Glasziou P. Biomedical research: increasing value, reducing waste. Lancet 2014; 383:101-4. PMID: 24411643; DOI: 10.1016/s0140-6736(13)62329-6.
9.
Abstract
The objective of this study was to assess the validity of an index of the scientific quality of research overviews, the Overview Quality Assessment Questionnaire (OQAQ). Thirty-six published review articles were assessed by 9 judges using the OQAQ. Authors' reports of what they had done were compared to OQAQ ratings. The sensibility of the OQAQ was assessed using a 13-item questionnaire. Seven a priori hypotheses were used to assess construct validity. The review articles were drawn from three sampling frames: articles highly rated by criteria external to the study, meta-analyses, and a broad spectrum of medical journals. Three categories of judges were used to assess the articles: research assistants, clinicians with research training, and experts in research methodology, with 3 judges in each category. The sensibility of the index was assessed by 15 randomly selected faculty members of the Department of Clinical Epidemiology and Biostatistics at McMaster. Authors' reports of their methods related closely to ratings from corresponding OQAQ items: for each criterion, the mean score was significantly higher for articles for which the authors' responses indicated that they had used more rigorous methods. For 10 of the 13 questions used to assess sensibility the mean rating was 5 or greater, indicating general satisfaction with the instrument. The primary shortcoming noted was the need for judgement in applying the index. Six of the 7 hypotheses used to test construct validity held true. The OQAQ is a valid measure of the quality of research overviews.
11.
Abstract
An experiment in which 150 proposals submitted to the National Science Foundation were evaluated independently by a new set of reviewers indicates that getting a research grant depends to a significant extent on chance. The degree of disagreement within the population of eligible reviewers is such that whether or not a proposal is funded depends in a large proportion of cases upon which reviewers happen to be selected for it. No evidence of systematic bias in the selection of NSF reviewers was found.
12. Husereau D, Drummond M, Augustovski F, de Bekker-Grob E, Briggs AH, Carswell C, Caulley L, Chaiyakunapruk N, Greenberg D, Loder E, Mauskopf J, Mullins CD, Petrou S, Pwu RF, Staniszewska S. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) 2022 explanation and elaboration: a report of the ISPOR CHEERS II Good Practices Task Force. Value Health 2022; 25:10-31. PMID: 35031088; DOI: 10.1016/j.jval.2021.10.008.
Abstract
Health economic evaluations are comparative analyses of alternative courses of action in terms of their costs and consequences. The Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement, published in 2013, was created to ensure health economic evaluations are identifiable, interpretable, and useful for decision making. It was intended as guidance to help authors report accurately which health interventions were being compared and in what context, how the evaluation was undertaken, what the findings were, and other details that may aid readers and reviewers in interpretation and use of the study. The new CHEERS 2022 statement replaces the previous CHEERS reporting guidance. It reflects the need for guidance that can be more easily applied to all types of health economic evaluation, new methods and developments in the field, and the increased role of stakeholder involvement including patients and the public. It is also broadly applicable to any form of intervention intended to improve the health of individuals or the population, whether simple or complex, and without regard to context (such as healthcare, public health, education, and social care). This Explanation and Elaboration Report presents the new CHEERS 2022 28-item checklist with recommendations and explanation and examples for each item. The CHEERS 2022 statement is primarily intended for researchers reporting economic evaluations for peer-reviewed journals and the peer reviewers and editors assessing them for publication. Nevertheless, we anticipate familiarity with reporting requirements will be useful for analysts when planning studies. It may also be useful for health technology assessment bodies seeking guidance on reporting, given that there is an increasing emphasis on transparency in decision making.
13. Flanagin A, Carey LA, Fontanarosa PB, Phillips SG, Pace BP, Lundberg GD, Rennie D. Prevalence of articles with honorary authors and ghost authors in peer-reviewed medical journals. JAMA 1998; 280:222-4. PMID: 9676661; DOI: 10.1001/jama.280.3.222.
Abstract
CONTEXT Authorship in biomedical publications establishes accountability, responsibility, and credit. Misappropriation of authorship undermines the integrity of the authorship system, but accurate data on its prevalence are limited. OBJECTIVES To determine the prevalence of articles with honorary authors (named authors who have not met authorship criteria) and ghost authors (individuals not named as authors but who contributed substantially to the work) in peer-reviewed medical journals and to identify journal characteristics and article types associated with such authorship misappropriation. DESIGN Mailed, self-administered, confidential survey. PARTICIPANTS A total of 809 corresponding authors (1179 surveyed, 69% response rate) of articles published in 1996 in 3 peer-reviewed, large-circulation general medical journals (Annals of Internal Medicine, JAMA, and The New England Journal of Medicine) and 3 peer-reviewed, smaller-circulation journals that publish supplements (American Journal of Cardiology, American Journal of Medicine, and American Journal of Obstetrics and Gynecology). MAIN OUTCOME MEASURES Prevalence of articles with honorary authors and ghost authors, as reported by corresponding authors. RESULTS Of the 809 articles, 492 were original research reports, 240 were reviews and articles not reporting original data, and 77 were editorials. A total of 156 articles (19%) had evidence of honorary authors (range, 11%-25% among journals); 93 articles (11%) had evidence of ghost authors (range, 7%-16% among journals); and 13 articles (2%) had evidence of both. The prevalence of articles with honorary authors was greater among review articles than research articles (odds ratio [OR], 1.8; 95% confidence interval [CI], 1.2-2.6) but did not differ significantly between large-circulation and smaller-circulation journals (OR, 1.4; 95% CI, 0.96-2.03).
Compared with similar-type articles in large-circulation journals, articles with ghost authors in smaller-circulation journals were more likely to be reviews (OR, 4.2; 95% CI, 1.5-13.5) and less likely to be research articles (OR, 0.49; 95% CI, 0.27-0.88). CONCLUSION A substantial proportion of articles in peer-reviewed medical journals demonstrate evidence of honorary authors or ghost authors.
14. Jadad AR, Cook DJ, Jones A, Klassen TP, Tugwell P, Moher M, Moher D. Methodology and reports of systematic reviews and meta-analyses: a comparison of Cochrane reviews with articles published in paper-based journals. JAMA 1998; 280:278-80. PMID: 9676681; DOI: 10.1001/jama.280.3.278. [Comparative Study]
Abstract
CONTEXT Review articles are important sources of information to help guide decisions by clinicians, patients, and other decision makers. Ideally, reviews should include strategies to minimize bias and to maximize precision and be reported so explicitly that any interested reader would be able to replicate them. OBJECTIVE To compare the methodological and reporting aspects of systematic reviews and meta-analyses published by the Cochrane Collaboration with those published in paper-based journals indexed in MEDLINE. DATA SOURCES The Cochrane Library, issue 2 of 1995, and a search of MEDLINE restricted to 1995. STUDY SELECTION All 36 completed reviews published in the Cochrane Database of Systematic Reviews and a randomly selected sample of 39 meta-analyses or systematic reviews published in journals indexed by MEDLINE in 1995. DATA EXTRACTION Number of authors, trials, and patients; trial sources; inclusion and exclusion criteria; language restrictions; primary outcome; trial quality assessment; heterogeneity testing; and effect estimates. Updating by 1997 was evaluated. RESULTS Reviews found in MEDLINE included more authors (median, 3 vs 2; P<.001), more trials (median, 13.5 vs 5; P<.001), and more patients (median, 1280 vs 528; P<.001) than Cochrane reviews. More Cochrane reviews, however, included a description of the inclusion and exclusion criteria (35/36 vs 18/39; P<.001) and assessed trial quality (36/36 vs 12/39; P<.001). No Cochrane reviews had language restrictions (0/36 vs 7/39; P<.01). There were no differences in sources of trials, heterogeneity testing, or description of effect estimates. By June 1997, 18 of 36 Cochrane reviews had been updated vs 1 of 39 reviews listed in MEDLINE. CONCLUSIONS Cochrane reviews appear to have greater methodological rigor and are more frequently updated than systematic reviews or meta-analyses published in paper-based journals.
15. Fernández-de-Las-Peñas C, Palacios-Ceña D, Gómez-Mayordomo V, Florencio LL, Cuadrado ML, Plaza-Manzano G, Navarro-Santana M. Prevalence of post-COVID-19 symptoms in hospitalized and non-hospitalized COVID-19 survivors: a systematic review and meta-analysis. Eur J Intern Med 2021; 92:55-70. PMID: 34167876; PMCID: PMC8206636; DOI: 10.1016/j.ejim.2021.06.009. [Meta-Analysis]
Abstract
BACKGROUND Single studies support the presence of several post-COVID-19 symptoms; however, no meta-analysis differentiating hospitalized and non-hospitalized patients has been published to date. This meta-analysis analyses the prevalence of post-COVID-19 symptoms in hospitalized and non-hospitalized patients recovered from COVID-19. METHODS MEDLINE, CINAHL, PubMed, EMBASE, and Web of Science databases, as well as medRxiv and bioRxiv preprint servers, were searched up to March 15, 2021. Peer-reviewed studies or preprints reporting data on post-COVID-19 symptoms collected by personal, telephonic or electronic interview were included. Methodological quality of the studies was assessed using the Newcastle-Ottawa Scale. We used random-effects models for meta-analytical pooled prevalence of each post-COVID-19 symptom, and I² statistics for heterogeneity. Data synthesis was categorized at 30, 60, and ≥90 days after onset/hospitalization. RESULTS From 15,577 studies identified, 29 peer-reviewed studies and 4 preprints met inclusion criteria. The sample included 15,244 hospitalized and 9011 non-hospitalized patients. The methodological quality of most studies was fair. The results showed that 63.2%, 71.9% and 45.9% of the sample exhibited at least one post-COVID-19 symptom at 30, 60, or ≥90 days after onset/hospitalization. Fatigue and dyspnea were the most prevalent symptoms, with a pooled prevalence ranging from 35% to 60% depending on the follow-up. Other post-COVID-19 symptoms included cough (20-25%), anosmia (10-20%), ageusia (15-20%) and joint pain (15-20%). Time trend analysis revealed a decreased prevalence 30 days after onset/hospitalization with an increase after 60 days. CONCLUSION This meta-analysis shows that post-COVID-19 symptoms are present in more than 60% of patients infected by SARS-CoV-2. Fatigue and dyspnea were the most prevalent post-COVID-19 symptoms, particularly 60 and ≥90 days after onset/hospitalization.
16. Haynes B, Haines A. Barriers and bridges to evidence based clinical practice. BMJ 1998; 317:273-6. PMID: 9677226; PMCID: PMC1113594; DOI: 10.1136/bmj.317.7153.273. [Review]
17. Araújo MB, Anderson RP, Márcia Barbosa A, Beale CM, Dormann CF, Early R, Garcia RA, Guisan A, Maiorano L, Naimi B, O'Hara RB, Zimmermann NE, Rahbek C. Standards for distribution models in biodiversity assessments. Sci Adv 2019; 5:eaat4858. PMID: 30746437; PMCID: PMC6357756; DOI: 10.1126/sciadv.aat4858. [Systematic Review]
Abstract
Demand for models in biodiversity assessments is rising, but which models are adequate for the task? We propose a set of best-practice standards and detailed guidelines enabling scoring of studies based on species distribution models for use in biodiversity assessments. We reviewed and scored 400 modeling studies over the past 20 years using the proposed standards and guidelines. We detected low model adequacy overall, but with a marked tendency of improvement over time in model building and, to a lesser degree, in biological data and model evaluation. We argue that implementation of agreed-upon standards for models in biodiversity assessments would promote transparency and repeatability, eventually leading to higher quality of the models and the inferences used in assessments. We encourage broad community participation toward the expansion and ongoing development of the proposed standards and guidelines.
18. Chang ES, Kannoth S, Levy S, Wang SY, Lee JE, Levy BR. Global reach of ageism on older persons' health: a systematic review. PLoS One 2020; 15:e0220857. PMID: 31940338; PMCID: PMC6961830; DOI: 10.1371/journal.pone.0220857. [Meta-Analysis]
Abstract
OBJECTIVE Although there is anecdotal evidence of ageism occurring at both the structural level (in which societal institutions reinforce systematic bias against older persons) and individual level (in which older persons take in the negative views of aging of their culture), previous systematic reviews have not examined how both levels simultaneously influence health. Thus, the impact of ageism may be underestimated. We hypothesized that a comprehensive systematic review would reveal that these ageism levels adversely impact the health of older persons across geography, health outcomes, and time. METHOD A literature search was performed using 14 databases with no restrictions on region, language, and publication type. The systematic search yielded 13,691 papers for screening, 638 for full review, and 422 studies for analyses. Sensitivity analyses that adjusted for sample size and study quality were conducted using standardized tools. The study protocol is registered (PROSPERO CRD42018090857). RESULTS Ageism led to significantly worse health outcomes in 95.5% of the studies and 74.0% of the 1,159 ageism-health associations examined. The studies reported ageism effects in all 45 countries, 11 health domains, and 25 years studied, with the prevalence of significant findings increasing over time (p < .0001). A greater prevalence of significant ageism-health findings was found in less-developed countries than more-developed countries (p = .0002). Older persons who were less educated were particularly likely to experience adverse health effects of ageism. Evidence of ageism was found across the age, sex, and race/ethnicity of the targeters (i.e., persons perpetrating ageism). CONCLUSION The current analysis which included over 7 million participants is the most comprehensive review of health consequences of ageism to date. 
Considering that the analysis revealed that the detrimental impact of ageism on older persons' health has been occurring simultaneously at the structural and individual level in five continents, our systematic review demonstrates the pernicious reach of ageism.
19.
Abstract
Preparing a review entails many judgments. The focus of the review must be decided. Studies that are relevant to the focus of the review must be identified, selected for inclusion, and critically appraised. Information must be collected and synthesised from the relevant studies, and conclusions must be drawn. Checklists can help prevent important errors in this process. Reviewers, editors, content experts, and users of reviews all have a role to play in improving the quality of published reviews and promoting the appropriate use of reviews by decision makers. It is essential that both providers and users appraise the validity of review articles.
20. Touitou Y, Portaluppi F, Smolensky MH, Rensing L. Ethical principles and standards for the conduct of human and animal biological rhythm research. Chronobiol Int 2004; 21:161-70. PMID: 15129830; DOI: 10.1081/cbi-120030045.
Abstract
Most research papers published in Chronobiology International report the findings of investigations conducted on laboratory animals and human beings. The Journal, its editors and the publication committee endorse the compliance of investigators to the principles of the Declaration of Helsinki of the World Medical Association relating to the conduct of ethical research on human beings and the Guide for the Care and Use of Laboratory Animals of the Institute for Laboratory Animal Research of the National Research Council relating to the conduct of ethical research on laboratory and other animals. Chronobiology International requires that submitted manuscripts reporting the findings of human and animal research conform to the respective policy and mandates of the Declaration of Helsinki and the Guide for the Care and Use of Laboratory Animals. The peer review of manuscripts will thus include judgment of whether or not the involved research methods conform to the standards of good research practice. This article outlines the basic expectations for the methods of human and animal biological rhythm research, both from the perspective of the fundamental criteria necessary for quality chronobiology investigation and from the perspective of humane and ethical research on human beings and animals.
|
Journal Article |
21 |
286 |
21
|
Sherry B, Jefferds ME, Grummer-Strawn LM. Accuracy of adolescent self-report of height and weight in assessing overweight status: a literature review. ARCHIVES OF PEDIATRICS & ADOLESCENT MEDICINE 2007; 161:1154-61. [PMID: 18056560 DOI: 10.1001/archpedi.161.12.1154] [Citation(s) in RCA: 283] [Impact Index Per Article: 15.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/28/2023]
Abstract
OBJECTIVE To examine the accuracy of self-reported height and weight data to classify adolescent overweight status. Self-reported height and weight are commonly used with minimal consideration of accuracy. DATA SOURCES Eleven studies (4 nationally representative, 7 convenience sample or locally based). STUDY SELECTION Peer-reviewed articles of studies conducted in the United States that compared self-reported and directly measured height, weight, and/or body mass index data to classify overweight among adolescents. MAIN EXPOSURES Self-reported and directly measured height and weight. MAIN OUTCOME MEASURES Overweight prevalence; missing data, bias, and accuracy. RESULTS Studies varied in examination of bias. Sensitivity of self-reported data for classification of overweight ranged from 55% to 76% (4 of 4 studies). Overweight prevalence was 0.4% to 17.7% lower when body mass index was based on self-reported data vs directly measured data (5 of 5 studies). Females underestimated weight more than males (ranges, -4.0 to -1.0 kg vs -2.6 to 1.5 kg, respectively) (9 of 9 studies); overweight individuals underestimated weight more than nonoverweight individuals (6 of 6 studies). Missing self-reported data ranged from 0% to 23% (9 of 9 studies). There was inadequate information on bias by age and race/ethnicity. CONCLUSIONS Self-reported data are valuable when they are the only available source, but they underestimate overweight prevalence, and there is bias by sex and weight status. Lower sensitivities of self-reported data indicate that one-fourth to one-half of those overweight would be missed. Other potential biases in self-reported data, such as across subgroups, need further clarification. The feasibility of collecting directly measured height and weight data on a state/community level should be explored because directly measured data are more accurate.
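The sensitivity figures reported above (55% to 76%) follow the standard definition: among adolescents who are overweight by direct measurement, the proportion also classified overweight from self-report. A minimal sketch, not taken from the review itself (the function and example data are illustrative):

```python
def sensitivity(pairs):
    """pairs: iterable of (measured_overweight, self_report_overweight) booleans.

    Returns true positives / (true positives + false negatives), i.e. the
    share of measured-overweight cases that self-report also flags.
    """
    true_pos = sum(1 for measured, reported in pairs if measured and reported)
    false_neg = sum(1 for measured, reported in pairs if measured and not reported)
    return true_pos / (true_pos + false_neg)

# Hypothetical sample: 10 measured-overweight adolescents, 6 of whom are also
# classified overweight from self-report, plus 10 non-overweight adolescents.
data = [(True, True)] * 6 + [(True, False)] * 4 + [(False, False)] * 10
print(sensitivity(data))  # 0.6
```

A sensitivity of 0.6 illustrates the review's central caution: at that level, 40% of genuinely overweight adolescents would be missed by self-report alone.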
|
Review |
18 |
283 |
22
|
Abstract
This article classifies the major approaches to the assessment of the process and outcomes of medical care. The apparent need to safeguard and enhance the quality of care has led to the institution of mechanisms that subject care to constant review so that deficiencies may be found and corrected. The article reviews the developments that led to the involvement of the federal government in this activity through its sponsorship of professional standards review organizations (PSRO's). The major features of the PSRO's are described and their possible effects discussed. It is too early to say how the PSRO's will fare, but should they fail to accomplish their objectives the pressure for more radical solutions will be difficult to resist.
|
|
47 |
279 |
23
|
Lee H, Cashin AG, Lamb SE, Hopewell S, Vansteelandt S, VanderWeele TJ, MacKinnon DP, Mansell G, Collins GS, Golub RM, McAuley JH, Localio AR, van Amelsvoort L, Guallar E, Rijnhart J, Goldsmith K, Fairchild AJ, Lewis CC, Kamper SJ, Williams CM, Henschke N. A Guideline for Reporting Mediation Analyses of Randomized Trials and Observational Studies: The AGReMA Statement. JAMA 2021; 326:1045-1056. [PMID: 34546296 PMCID: PMC8974292 DOI: 10.1001/jama.2021.14075] [Citation(s) in RCA: 251] [Impact Index Per Article: 62.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
Importance Mediation analyses of randomized trials and observational studies can generate evidence about the mechanisms by which interventions and exposures may influence health outcomes. Publications of mediation analyses are increasing, but the quality of their reporting is suboptimal. Objective To develop international, consensus-based guidance for the reporting of mediation analyses of randomized trials and observational studies (A Guideline for Reporting Mediation Analyses; AGReMA). Design, Setting, and Participants The AGReMA statement was developed using the Enhancing Quality and Transparency of Health Research (EQUATOR) methodological framework for developing reporting guidelines. The guideline development process included (1) an overview of systematic reviews to assess the need for a reporting guideline; (2) review of systematic reviews of relevant evidence on reporting mediation analyses; (3) conducting a Delphi survey with panel members that included methodologists, statisticians, clinical trialists, epidemiologists, psychologists, applied clinical researchers, clinicians, implementation scientists, evidence synthesis experts, representatives from the EQUATOR Network, and journal editors (n = 19; June-November 2019); (4) having a consensus meeting (n = 15; April 28-29, 2020); and (5) conducting a 4-week external review and pilot test that included methodologists and potential users of AGReMA (n = 21; November 2020). Results A previously reported overview of 54 systematic reviews of mediation studies demonstrated the need for a reporting guideline. Thirty-three potential reporting items were identified from 3 systematic reviews of mediation studies. Over 3 rounds, the Delphi panelists ranked the importance of these items, provided 60 qualitative comments for item refinement and prioritization, and suggested new items for consideration. All items were reviewed during a 2-day consensus meeting and participants agreed on a 25-item AGReMA statement for studies in which mediation analyses are the primary focus and a 9-item short-form AGReMA statement for studies in which mediation analyses are a secondary focus. These checklists were externally reviewed and pilot tested by 21 expert methodologists and potential users, which led to minor adjustments and consolidation of the checklists. Conclusions and Relevance The AGReMA statement provides recommendations for reporting primary and secondary mediation analyses of randomized trials and observational studies. Improved reporting of studies that use mediation analyses could facilitate peer review and help produce publications that are complete, accurate, transparent, and reproducible.
|
Consensus Development Conference |
4 |
251 |
24
|
Richmond S, Shaw WC, Roberts CT, Andrews M. The PAR Index (Peer Assessment Rating): methods to determine outcome of orthodontic treatment in terms of improvement and standards. Eur J Orthod 1992; 14:180-7. [PMID: 1628684 DOI: 10.1093/ejo/14.3.180] [Citation(s) in RCA: 245] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
Abstract
In orthodontics it is important to objectively assess whether a worthwhile improvement has been achieved in terms of overall alignment and occlusion for an individual patient or the greater proportion of a practitioner's caseload. An objective measure is described that has been validated against the subjective opinions of 74 dentists. Using the weighted PAR Index it was revealed that at least a 30 per cent reduction in PAR score is required for a case to be considered as 'improved' and a change of 22 PAR points to bring about 'great improvement'. For a practitioner to demonstrate high standards the proportion of an individual's case load falling in the 'worse or no different' category should be negligible and the mean reduction should be as high as possible (e.g. greater than 70 per cent). If the mean percentage reduction in PAR score is high and the proportion of cases that have been 'greatly improved' is also high, this indicates that the practitioner is treating a great proportion of cases with a clear need for treatment to a high standard.
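The thresholds in the abstract (a 30 per cent reduction in weighted PAR score for 'improved', a 22-point reduction for 'greatly improved') lend themselves to a small classifier. This is a sketch of those published cut-offs only; the function name and the exact boundary handling (treating the thresholds as inclusive) are assumptions, not part of the original index:

```python
def par_outcome(pre_score, post_score):
    """Classify an orthodontic outcome from pre- and post-treatment PAR scores.

    Thresholds follow Richmond et al.: a reduction of 22 PAR points or more
    indicates 'greatly improved'; otherwise a reduction of at least 30 per cent
    indicates 'improved'; anything less is 'worse or no different'.
    """
    reduction = pre_score - post_score
    pct_reduction = 100.0 * reduction / pre_score if pre_score else 0.0
    if reduction >= 22:
        return "greatly improved"
    if pct_reduction >= 30:
        return "improved"
    return "worse or no different"

print(par_outcome(40, 10))  # 30-point drop -> greatly improved
print(par_outcome(30, 20))  # 33% reduction -> improved
print(par_outcome(20, 18))  # 10% reduction -> worse or no different
```

A practitioner's caseload summary in the abstract's terms would then be the proportion of cases in each category plus the mean percentage reduction across cases.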
|
|
33 |
245 |
25
|
Park RE, Fink A, Brook RH, Chassin MR, Kahn KL, Merrick NJ, Kosecoff J, Solomon DH. Physician ratings of appropriate indications for six medical and surgical procedures. Am J Public Health 1986; 76:766-72. [PMID: 3521341 PMCID: PMC1646864 DOI: 10.2105/ajph.76.7.766] [Citation(s) in RCA: 244] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
Abstract
We convened three panels of physicians to rate the appropriateness of a large number of indications for performing a total of six medical and surgical procedures. The panels followed a modified Delphi process. Panelists separately assigned initial ratings, then met in Santa Monica, California, where they received reports showing their initial ratings and the distribution of the other panelists' ratings. They discussed the indications and revised the indications lists, then individually assigned final ratings. There was generally better agreement on the final ratings than on the initial ratings. Based on reasonable criteria for agreement and disagreement, and excluding one outlying procedure, the panelists agreed on ratings for 42 to 56 per cent of the indications and disagreed on 11 to 29 per cent.
|
research-article |
39 |
244 |