Roberge-Dao J, Maggio LA, Zaccagnini M, Rochette A, Shikako-Thomas K, Boruff J, Thomas A. Quality, methods, and recommendations of systematic reviews on measures of evidence-based practice: an umbrella review. JBI Evid Synth 2022;20:1004-1073. PMID: 35220381. DOI: 10.11124/jbies-21-00118.
Abstract
OBJECTIVES
The objective of this review was to estimate the quality of systematic reviews on evidence-based practice measures across health care professions and to identify differences between systematic reviews in the approaches used to assess the adequacy of evidence-based practice measures and in the measures they recommend.
INTRODUCTION
Systematic reviews on the psychometric properties of evidence-based practice measures guide researchers, clinical managers, and educators in selecting an appropriate measure for use. The lack of psychometric standards specific to evidence-based practice measures, in addition to recent findings suggesting the low methodological quality of psychometric systematic reviews, calls into question the quality and methods of systematic reviews examining evidence-based practice measures.
INCLUSION CRITERIA
We included systematic reviews that identified measures assessing evidence-based practice as a whole or its constituent parts (eg, knowledge, attitudes, skills, behaviors) and that described the psychometric evidence for any health care professional group, irrespective of assessment context (education or clinical practice).
METHODS
We searched five databases (MEDLINE, Embase, CINAHL, PsycINFO, and ERIC) on January 18, 2021. Two independent reviewers conducted screening, data extraction, and quality appraisal following the JBI approach. A narrative synthesis was performed.
RESULTS
Ten systematic reviews, published between 2006 and 2020, were included and focused on the following groups: all health care professionals (n = 3), nurses (n = 2), occupational therapists (n = 2), physical therapists (n = 1), medical students (n = 1), and family medicine residents (n = 1). The overall quality of the systematic reviews was low: none of the reviews assessed the quality of primary studies or adhered to methodological guidelines, and only one registered a protocol. Reporting of psychometric evidence and measurement characteristics differed. While all the systematic reviews discussed internal consistency, feasibility was only addressed by three. Many approaches were used to assess the adequacy of measures, and five systematic reviews referenced tools. Criteria for the adequacy of individual properties and measures varied, but mainly followed standards for patient-reported outcome measures or the Standards for Educational and Psychological Testing. Two hundred and four unique measures were identified across the 10 reviews. One review explicitly recommended measures for occupational therapists, and four reviews identified adequate measures for all health care professionals (n = 3) and medical students (n = 1). The 27 measures deemed adequate by these five systematic reviews are described.
CONCLUSIONS
Our results suggest a need to improve the overall methodological quality and reporting of systematic reviews on evidence-based practice measures to increase the trustworthiness of recommendations and allow comprehensive interpretation by end-users. Risk of bias was common to all included systematic reviews, as none assessed the quality of primary studies. The diversity of tools and approaches used to evaluate the adequacy of evidence-based practice measures reflects tensions in the conceptualization of validity, suggesting a need to reflect on the most appropriate application of validity theory to evidence-based practice measures.
SYSTEMATIC REVIEW REGISTRATION NUMBER
PROSPERO CRD42020160874.