1. Abstract
OBJECTIVE The research attempted to develop search filters for biomedical literature databases that improve retrieval of studies of clinical relevance for the nursing and rehabilitation professions. METHODS A diagnostic testing framework was used to compare machine-culled and practitioner-nominated search terms against a hand-tagged clinical literature database. RESULTS We were unable to (1) develop filters for nursing, likely because nurses' scope of practice overlaps with and extends beyond that of medical professionals, or (2) develop filters for rehabilitation, because of its broad scope and the profession's multifaceted understanding of "health and ability." CONCLUSIONS Search filter development faces substantive limitations for these health professions: nursing and rehabilitation.
2. Abstract
Researchers and practitioners have problems retrieving qualitative studies. Search strategies that can easily and effectively retrieve these studies from large databases such as PsycINFO are therefore important. To determine if search strategies can identify qualitative studies, 64 journals published in 2000 were hand searched using explicit methodological criteria to identify qualitative studies. The authors tested multiple search strategies using 4,985 potential search terms in PsycINFO (Ovid Technologies) and compared the results with the hand search data to calculate retrieval effectiveness. A total of 125 qualitative studies were identified. Single-term and multiple-term strategies had sensitivities (maximizing retrieval of qualitative studies) up to 94.4% and specificities (minimizing retrieval of nonqualitative studies and reports) up to 98.6% with ranges of precision and accuracy. Search strategies included terms that were variations of interview, qualitative, themes, and experience. Formal indexing terms performed poorly. Empirically derived search strategies combining textwords can effectively, but not perfectly, retrieve qualitative studies from PsycINFO.
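The retrieval metrics reported above (sensitivity, specificity, precision, accuracy) follow the standard diagnostic-test definitions used throughout these filter studies. A minimal sketch; the function name is my own and the counts are illustrative, chosen only to loosely echo the figures in this abstract, not taken from the study's data:

```python
def retrieval_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Diagnostic-test metrics for a search filter.

    tp: relevant articles retrieved     fp: non-relevant articles retrieved
    fn: relevant articles missed        tn: non-relevant articles excluded
    """
    return {
        "sensitivity": tp / (tp + fn),   # share of relevant articles found
        "specificity": tn / (tn + fp),   # share of non-relevant excluded
        "precision":   tp / (tp + fp),   # share of retrieved that are relevant
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
    }

# Illustrative: 118 of 125 hand-identified qualitative studies retrieved,
# with 70 false positives among 5,000 non-qualitative records.
m = retrieval_metrics(tp=118, fp=70, fn=7, tn=4930)
print(round(m["sensitivity"], 3), round(m["specificity"], 3))  # → 0.944 0.986
```

Hand-search results supply the reference standard (which articles are truly relevant); each candidate filter's retrieved set is then cross-tabulated against it to produce these four counts.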
3. Technology-mediated interventions for enhancing medication adherence. J Am Med Inform Assoc 2015; 22:e177-93. PMID: 25726568; PMCID: PMC7659214; DOI: 10.1093/jamia/ocu047
Abstract
BACKGROUND Despite effective therapies for many conditions, patients find it difficult to adhere to prescribed treatments. Technology-mediated interventions (TMIs) are increasingly being used with the hope of improving adherence. OBJECTIVE To assess the effects of TMI, intended to enhance patient adherence to prescribed medications, on both medication adherence and clinical outcomes. METHODS A secondary in-depth analysis was conducted of the subset of studies that utilized technology in at least one component of the intervention from an updated Cochrane review on all interventions for enhancing medication adherence. We included studies that clearly described an information and communication technology or medical device as the sole or major component of the adherence intervention. RESULTS Thirty-eight studies were eligible for in-depth review. Only seven had a low risk of bias for study design features, primary adherence, and clinical outcomes. Eighteen studies used a TMI for education and/or counseling, 11 studies used a TMI for self-monitoring and/or feedback, and nine studies used electronic reminders. Studies used a variety of TMIs, with telephone the most common technology in use. Studies targeted a wide distribution of diseases and used a variety of adherence and clinical outcome measures. A minority targeted children and adolescents. Fourteen studies reported significant effects in both adherence and clinical outcome measures. CONCLUSIONS This review provides evidence for the inconsistent effectiveness of TMI for medication adherence and clinical outcomes. These results must be interpreted with caution due to a lack of high-quality studies.
4. Increasing the quantity and quality of searching for current best evidence to answer clinical questions: protocol and intervention design of the MacPLUS FS Factorial Randomized Controlled Trials. Implement Sci 2014; 9:125. PMID: 25239537; PMCID: PMC4177052; DOI: 10.1186/s13012-014-0125-9
Abstract
BACKGROUND & AIMS Finding current best evidence for clinical decisions remains challenging. With 3,000 new studies published every day, no single evidence-based resource provides all answers or is sufficiently updated. McMaster Premium LiteratUre Service, Federated Search (MacPLUS FS) addresses this issue by searching multiple high-quality resources simultaneously and displaying results in a one-page pyramid with the most clinically useful at the top. Yet additional logistical and educational barriers need to be addressed to enhance point-of-care evidence retrieval. This trial seeks to test three innovative interventions, among clinicians registered to MacPLUS FS, to increase the quantity and quality of searching for current best evidence to answer clinical questions. METHODS & DESIGN In a user-centered approach, we designed three interventions embedded in MacPLUS FS: (A) a web-based Clinical Question Recorder; (B) an Evidence Retrieval Coach composed of eight short educational videos; (C) an Audit, Feedback and Gamification approach to evidence retrieval, based on the allocation of 'badges' and 'reputation scores.' We will conduct a randomized factorial controlled trial among all 904 eligible medical doctors currently registered to MacPLUS FS at the hospitals affiliated with McMaster University, Canada. Postgraduate trainees (n=429) and clinical faculty/staff (n=475) will be randomized to each of the three interventions in a factorial design (AxBxC). Utilization will be continuously recorded through clinicians' accounts, which track logins and usage down to the level of individual keystrokes. The primary outcome is the rate of searches per month per user during the six months of follow-up. Secondary outcomes, measured through the validated Impact Assessment Method questionnaire, include: utility of answers found (meeting clinicians' information needs), use (application in practice), and perceived usefulness on patient outcomes.
DISCUSSION Built on effective models for point-of-care teaching, these interventions approach evidence retrieval as a clinical skill. If effective, they may offer the opportunity to enhance this skill for a large audience at low cost, providing better access to relevant evidence across many top EBM resources in parallel. TRIAL REGISTRATION ClinicalTrials.gov NCT02038439.
5. Adherence measurement and patient recruitment methods are poor in intervention trials to improve patient adherence. J Clin Epidemiol 2014; 67:1076-82. PMID: 25087180; DOI: 10.1016/j.jclinepi.2014.06.008
Abstract
OBJECTIVES To develop a scale and survey the measurement of patient adherence and patient recruitment, and to explore how these methods affect the results of randomized controlled trials of interventions to improve patient adherence to medications. STUDY DESIGN Analytic survey of a purposively selected sample of patient adherence intervention trials from a systematic review, assessing the quality of adherence measurement and patient recruitment methods. RESULTS We identified 44 different measures of adherence, with qualities ranging from valid and objective to unreliable and subjective. The median overall quality of measures of adherence was 5 (interquartile range [IQR] 3; possible range 0-9, where 9 indicates high quality). The quality of the measures was associated with variation in the estimate of adherence (Spearman r = 0.66; 95% confidence interval: 0.39, 0.83). The median overall quality of patient recruitment methods was 2 (IQR 1; maximum score 6, where higher is better). There was no significant correlation between the power of the trial to detect an effect and the quality of the patient recruitment methods. CONCLUSION Measurement and recruitment methods in adherence trials varied considerably, and most were of low quality. Adherence research could be advanced by using higher quality measures of adherence and better selection and baseline assessment of study participants.
6. High-performance information search filters for CKD content in PubMed, Ovid MEDLINE, and EMBASE. Am J Kidney Dis 2014; 65:26-32. PMID: 25059221; DOI: 10.1053/j.ajkd.2014.06.010
Abstract
BACKGROUND Finding relevant articles in large bibliographic databases such as PubMed, Ovid MEDLINE, and EMBASE to inform care and future research is challenging. Articles relevant to chronic kidney disease (CKD) are particularly difficult to find because they are often published under different terminology and are found across a wide range of journal types. STUDY DESIGN We used computer automation within a diagnostic test assessment framework to develop and validate information search filters to identify CKD articles in large bibliographic databases. SETTING & PARTICIPANTS 22,992 full-text articles in PubMed, Ovid MEDLINE, or EMBASE. INDEX TEST 1,374,148 unique search filters. REFERENCE TEST We established the reference standard of article relevance to CKD by manual review of all full-text articles using prespecified criteria to determine whether each article contained CKD content or not. We then assessed filter performance by calculating sensitivity, specificity, and positive predictive value for the retrieval of CKD articles. Filters with high sensitivity and specificity for the identification of CKD articles in the development phase (two-thirds of the sample) were then retested in the validation phase (remaining one-third of the sample). RESULTS We developed and validated high-performance CKD search filters for each bibliographic database. Filters optimized for sensitivity reached at least 99% sensitivity, and filters optimized for specificity reached at least 97% specificity. The filters were complex; for example, one PubMed filter included more than 89 terms used in combination, including "chronic kidney disease," "renal insufficiency," and "renal fibrosis." In proof-of-concept searches, physicians found more articles relevant to the topic of CKD with the use of these filters. LIMITATIONS As knowledge of the pathogenesis of CKD grows and definitions change, these filters will need to be updated to incorporate new terminology used to index relevant articles. 
CONCLUSIONS PubMed, Ovid MEDLINE, and EMBASE can be filtered reliably for articles relevant to CKD. These high-performance information filters are now available online and can be used to better identify CKD content in large bibliographic databases.
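The filter-development loop this study describes (enumerate candidate term combinations, score each against a hand-reviewed reference standard, keep the best) can be sketched as follows. This is a toy illustration under stated assumptions: the term pool, example articles, and simple OR-combination matching are invented for the example and are vastly smaller than the 1,374,148 filters actually tested:

```python
from itertools import combinations

def matches(article_text: str, terms: tuple) -> bool:
    # An article is retrieved if ANY filter term appears (terms OR-ed together).
    text = article_text.lower()
    return any(t in text for t in terms)

def score(filter_terms: tuple, articles: list) -> tuple:
    # articles: (text, is_relevant) pairs from the manually reviewed reference standard.
    tp = sum(1 for text, rel in articles if rel and matches(text, filter_terms))
    fn = sum(1 for text, rel in articles if rel and not matches(text, filter_terms))
    tn = sum(1 for text, rel in articles if not rel and not matches(text, filter_terms))
    fp = sum(1 for text, rel in articles if not rel and matches(text, filter_terms))
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec

# Hypothetical miniature development set and term pool.
term_pool = ["chronic kidney disease", "renal insufficiency", "renal fibrosis"]
development_set = [
    ("outcomes in chronic kidney disease", True),
    ("progression of renal insufficiency", True),
    ("renal fibrosis pathways", True),
    ("asthma management in adults", False),
]

# Enumerate every non-empty term combination; rank by (sensitivity, specificity).
best = max(
    (combo for r in range(1, len(term_pool) + 1)
           for combo in combinations(term_pool, r)),
    key=lambda combo: score(combo, development_set),
)
print(best, score(best, development_set))
```

In the real studies, the winning filters are then re-scored on a held-out validation subset to confirm that the performance is not an artifact of the development sample.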
7. High-performance information search filters for acute kidney injury content in PubMed, Ovid Medline and Embase. Nephrol Dial Transplant 2014; 29:823-32. PMID: 24449104; DOI: 10.1093/ndt/gft531
Abstract
BACKGROUND We frequently fail to identify articles relevant to the subject of acute kidney injury (AKI) when searching large bibliographic databases such as PubMed, Ovid Medline or Embase. To address this issue, we used computer automation to create information search filters to better identify articles relevant to AKI in these databases. METHODS We first manually reviewed a sample of 22,992 full-text articles and used prespecified criteria to determine whether each article contained AKI content or not. In the development phase (two-thirds of the sample), we developed and tested the performance of >1.3 million unique filters. Filters with high sensitivity and high specificity for the identification of AKI articles were then retested in the validation phase (the remaining third of the sample). RESULTS We succeeded in developing and validating high-performance AKI search filters for each bibliographic database, with sensitivities and specificities in excess of 90%. Filters optimized for sensitivity reached at least 97.2% sensitivity, and filters optimized for specificity reached at least 99.5% specificity. The filters were complex; for example, one PubMed filter included >140 terms used in combination, including 'acute kidney injury', 'tubular necrosis', 'azotemia' and 'ischemic injury'. In proof-of-concept searches, physicians found more articles relevant to topics in AKI with the use of the filters. CONCLUSIONS PubMed, Ovid Medline and Embase can be filtered for articles relevant to AKI in a reliable manner. These high-performance information filters are now available online and can be used to better identify AKI content in large bibliographic databases.
8. Net improvement of correct answers to therapy questions after PubMed searches: pre/post comparison. J Med Internet Res 2013; 15:e243. PMID: 24217329; PMCID: PMC3841361; DOI: 10.2196/jmir.2572
Abstract
BACKGROUND Clinicians search PubMed for answers to clinical questions, although doing so is time consuming and not always successful. OBJECTIVE To determine if PubMed, used with its Clinical Queries feature to filter results based on study quality, improves search success (more correct answers to clinical questions related to therapy). METHODS We invited 528 primary care physicians to participate; 143 (27.1%) consented, and 111 (21.0% of the total, 77.6% of those who consented) completed the study. Participants answered 14 yes/no therapy questions and were given 4 of these (2 originally answered correctly and 2 originally answered incorrectly) to search using either the PubMed main screen or the PubMed Clinical Queries narrow therapy filter, via a purpose-built system with identical search screens. Participants also picked 3 of the first 20 retrieved citations that best addressed each question. They were then asked to re-answer the original 14 questions. RESULTS We found no statistically significant differences in the rates of correct or incorrect answers between the PubMed main screen and PubMed Clinical Queries. The rate of correct answers increased from 50.0% to 61.4% (95% CI 55.0%-67.8%) for PubMed main screen searches and from 50.0% to 59.1% (95% CI 52.6%-65.6%) for Clinical Queries searches. These net absolute increases of 11.4% and 9.1%, respectively, reflect previously correct answers changing to incorrect at a rate of 9.5% (95% CI 5.6%-13.4%) for PubMed main screen searches and 9.1% (95% CI 5.3%-12.9%) for Clinical Queries searches, offset by increases in the rate of being correct of 20.5% (95% CI 15.2%-25.8%) for PubMed main screen searches and 17.7% (95% CI 12.7%-22.7%) for Clinical Queries searches. CONCLUSIONS PubMed can assist clinicians in answering clinical questions, with an approximately 10% absolute improvement in the rate of correct answers. This small net gain combines newly correct answers partially offset by a loss of previously correct answers.
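The pre/post bookkeeping above can be made explicit: the net absolute improvement is the share of answers that became newly correct minus the share of previously correct answers that flipped to incorrect. A minimal sketch (the rates come from the abstract; the function name is my own):

```python
def net_improvement(newly_correct: float, newly_incorrect: float) -> float:
    """Net absolute change in the rate of correct answers after searching."""
    return newly_correct - newly_incorrect

# PubMed main screen: 20.5% of answers gained, 9.5% lost,
# consistent (within rounding of the reported rates) with the
# reported net absolute increase of 11.4%.
print(round(net_improvement(0.205, 0.095), 3))
```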
9. WhatisKT wiki: a case study of a platform for knowledge translation terms and definitions: a descriptive analysis. Implement Sci 2013; 8:13. PMID: 23347357; PMCID: PMC3564745; DOI: 10.1186/1748-5908-8-13
Abstract
Background More than a hundred terms, often with unclear definitions and varying emphases, are used by health research and practice communities across the world who are interested in getting the best possible evidence applied (e.g., knowledge translation, implementation science, diffusion of innovations, and technology transfer). This makes finding published evidence difficult and can result in reduced, misinterpreted, or challenging interactions among professionals. Open dialogue and interaction among various professionals is needed to achieve consolidation of vocabulary. We use case report methods to describe how we sought to build an online tool to present the range of terms and facilitate the dialogue process across groups and disciplines interested in harnessing research evidence for healthcare. Methods We used a wiki platform from Wikispaces to present the problem of terminology and make a case and opportunity for collaboration on usage. Wikis are web sites where communities of users can collaborate online to build content and discuss progress. We gathered terms related to getting research into practice, sought published definitions, and posted these on the wiki (WhatisKT http://whatiskt.wikispaces.com/). We built the wiki in mid-2008 and promoted it through various groups and publications. This report describes the content of the site, our promotion efforts, use of the site, and how the site was used for collaboration up to the end of 2011. Results The WhatisKT wiki site now includes more than 120 pages. Traffic to the site has increased substantially from an average of 200 monthly visits in 2008 to 1700 in 2011. Visitors from 143 countries viewed the wiki in 2011, compared with 12 countries in 2008. However, most use has been limited to short term accesses of about 40 seconds per visit, and discussion of consolidation and solidifying terminology is conspicuously absent. 
Conclusions Although considerable interest exists in the terms and definitions related to getting research into practice based on increasing numbers of accesses, use of the WhatisKT wiki site for anything beyond quick lookups was minimal. Additional efforts must be directed towards increasing the level of interaction among the members of the site to encourage collaboration on term use.
10. How current are leading evidence-based medical textbooks? An analytic survey of four online textbooks. J Med Internet Res 2012; 14:e175. PMID: 23220465; PMCID: PMC3799557; DOI: 10.2196/jmir.2105
Abstract
BACKGROUND The consistency of treatment recommendations in evidence-based medical textbooks with more recently published evidence has not been investigated to date. Inconsistencies could affect the quality of medical care. OBJECTIVE To determine the frequency with which topics in leading online evidence-based medical textbooks report treatment recommendations consistent with more recently published research evidence. METHODS Summarized treatment recommendations in 200 clinical topics (ie, disease states) covered in four evidence-based textbooks (UpToDate, Physicians' Information Education Resource [PIER], DynaMed, and Best Practice) were compared with articles identified in an evidence rating service (McMaster Premium Literature Service, PLUS) since the date of the most recent topic update in each textbook. Textbook treatment recommendations were compared with article results to determine whether the articles provided different, new conclusions. From these findings, the proportion of topics that potentially require updating in each textbook was calculated. RESULTS 478 clinical topics were assessed for inclusion to find 200 topics that were addressed by all four textbooks. The proportion of topics for which there were 1 or more recently published articles in PLUS with evidence that differed from the textbook's treatment recommendations was 23% (95% CI 17-29%) for DynaMed, 52% (95% CI 45-59%) for UpToDate, 55% (95% CI 48-61%) for PIER, and 60% (95% CI 53-66%) for Best Practice (χ²(3) = 65.3, P < .001). The time since the last update for each textbook averaged from 170 days (range 131-209) for DynaMed to 488 days (range 423-554) for PIER (P < .001 across all textbooks). CONCLUSIONS In online evidence-based textbooks, the proportion of topics with potentially outdated treatment recommendations varies substantially.
11. Dialysis search filters for PubMed, Ovid MEDLINE, and Embase databases. Clin J Am Soc Nephrol 2012; 7:1624-31. PMID: 22917701; PMCID: PMC3463205; DOI: 10.2215/cjn.02360312
Abstract
BACKGROUND AND OBJECTIVES Physicians frequently search bibliographic databases, such as MEDLINE via PubMed, for best evidence for patient care. The objective of this study was to develop and test search filters to help physicians efficiently retrieve literature related to dialysis (hemodialysis or peritoneal dialysis) from all other articles indexed in PubMed, Ovid MEDLINE, and Embase. DESIGN, SETTING, PARTICIPANTS, & MEASUREMENTS A diagnostic test assessment framework was used to develop and test robust dialysis filters. The reference standard was a manual review of the full texts of 22,992 articles from 39 journals to determine whether each article contained dialysis information. Next, 1,623,728 unique search filters were developed, and their ability to retrieve relevant articles was evaluated. RESULTS The high-performance dialysis filters consisted of up to 65 search terms in combination. These terms included the words "dialy" (truncated), "uremic," "catheters," and "renal transplant wait list." These filters reached peak sensitivities of 98.6% and specificities of 98.5%. The filters' performance remained robust in an independent validation subset of articles. CONCLUSIONS These empirically derived and validated high-performance search filters should enable physicians to effectively retrieve dialysis information from PubMed, Ovid MEDLINE, and Embase.
12. MEDLINE clinical queries are robust when searching in recent publishing years. J Am Med Inform Assoc 2012; 20:363-8. PMID: 23019242; DOI: 10.1136/amiajnl-2012-001075
Abstract
OBJECTIVE To determine if the PubMed and Ovid MEDLINE clinical queries (which were developed in the publishing year 2000, for the purpose categories therapy, diagnosis, prognosis, etiology, and clinical prediction guides) perform as well when searching in current publishing years. METHODS A gold standard database of recently published research literature was created using the McMaster health knowledge refinery (http://hiru.mcmaster.ca/hiru/HIRU_McMaster_HKR.aspx) and its continuously updated database, McMaster PLUS (http://hiru.mcmaster.ca/hiru/HIRU_McMaster_PLUS_projects.aspx). This database contains articles from over 120 clinical journals that are tagged for meeting or not meeting criteria for scientific merit and clinical relevance. The clinical queries sensitive ('broad') and specific ('narrow') search filters were tested in this gold standard database, and sensitivity and specificity were calculated and compared with those originally reported for the clinical queries. RESULTS In all cases, the sensitivity of the highly sensitive search filters and the specificity of the highly specific search filters did not differ substantively when comparing results derived in 2000 with those derived in a more current database. In addition, in all cases, the specificities for the highly sensitive search filters and the sensitivities for the highly specific search filters remained above 50% when testing them in the current database. DISCUSSION These results are reassuring for modern-day searchers. The clinical queries that were derived in the year 2000 perform equally well a decade later. CONCLUSION The PubMed and Ovid MEDLINE clinical queries have been revalidated and remain a useful public resource for searching the world's medical literature for research that is most relevant to clinical care.
13. Correction: Development and Validation of Filters for the Retrieval of Studies of Clinical Examination From Medline. J Med Internet Res 2012; 14:e108. PMID: 22864147; PMCID: PMC3411179; DOI: 10.2196/jmir.2232
14. Glomerular disease search filters for PubMed, Ovid Medline, and Embase: a development and validation study. BMC Med Inform Decis Mak 2012; 12:49. PMID: 22672435; PMCID: PMC3471011; DOI: 10.1186/1472-6947-12-49
Abstract
BACKGROUND Tools to enhance physician searches of Medline and other bibliographic databases have potential to improve the application of new knowledge in patient care. This is particularly true for articles about glomerular disease, which are published across multiple disciplines and are often difficult to track down. Our objective was to develop and test search filters for PubMed, Ovid Medline, and Embase that allow physicians to search within a subset of the database to retrieve articles relevant to glomerular disease. METHODS We used a diagnostic test assessment framework with development and validation phases. We read a total of 22,992 full text articles for relevance and assigned them to the development or validation set to define the reference standard. We then used combinations of search terms to develop 997,298 unique glomerular disease filters. Outcome measures for each filter included sensitivity, specificity, precision, and accuracy. We selected optimal sensitive and specific search filters for each database and applied them to the validation set to test performance. RESULTS High performance filters achieved at least 93.8% sensitivity and specificity in the development set. Filters optimized for sensitivity reached at least 96.7% sensitivity and filters optimized for specificity reached at least 98.4% specificity. Performance of these filters was consistent in the validation set and similar among all three databases. CONCLUSIONS PubMed, Ovid Medline, and Embase can be filtered for articles relevant to glomerular disease in a reliable manner. These filters can now be used to facilitate physician searching.
15. How well are journal and clinical article characteristics associated with the journal impact factor? A retrospective cohort study. J Med Libr Assoc 2012; 100:28-33. PMID: 22272156; DOI: 10.3163/1536-5050.100.1.006
Abstract
OBJECTIVE Journal impact factor (JIF) is often used as a measure of journal quality. A retrospective cohort study determined the ability of clinical article and journal characteristics, including appraisal measures collected at the time of publication, to predict subsequent JIFs. METHODS Clinical research articles that passed methods quality criteria were included. Each article was rated for relevance and newsworthiness by 3 to 24 physicians from a panel of more than 4,000 practicing clinicians. The 1,267 articles (from 103 journals) were divided 60:40 into derivation (760 articles) and validation sets (507 articles), representing 99 and 88 journals, respectively. A multiple regression model was produced determining the association of 10 journal and article measures with the 2007 JIF. RESULTS Four of the 10 measures were significant in the regression model: number of authors, number of databases indexing the journal, proportion of articles passing methods criteria, and mean clinical newsworthiness scores. With the number of disciplines rating the article, the 5 variables accounted for 61% of the variation in JIF (R² = 0.607; 95% CI, 0.444 to 0.706; P < .001). CONCLUSION For the clinical literature, measures of scientific quality and clinical newsworthiness available at the time of publication can predict JIFs with 60% accuracy.
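A brief sketch of the R² statistic the study reports (R² = 0.607), the share of variation in the outcome explained by the regression. For simplicity this computes R² for a one-predictor least-squares fit; the x/y data are fabricated for illustration (the study itself used a five-variable model):

```python
def r_squared(x: list, y: list) -> float:
    """Coefficient of determination for a simple linear regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Ordinary least-squares slope and intercept.
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    intercept = my - slope * mx
    # R² = 1 - (residual sum of squares) / (total sum of squares).
    ss_res = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1 - ss_res / ss_tot

# Hypothetical journals: x = number of indexing databases, y = impact factor.
x = [2, 4, 6, 8, 10]
y = [1.1, 2.0, 2.8, 4.1, 5.0]
print(round(r_squared(x, y), 3))
```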
16. Search filters can find some but not all knowledge translation articles in MEDLINE: an analytic survey. J Clin Epidemiol 2012; 65:651-9. PMID: 22424986; DOI: 10.1016/j.jclinepi.2011.10.014
Abstract
OBJECTIVE Advances from health research are not well applied, giving rise to overuse and underuse of resources and inferior care. Knowledge translation (KT), the actions and processes for getting research findings used in practice, can improve research application. The KT literature is difficult to find because of nonstandardized terminology, the rapid evolution of the field, and its spread across several domains. We created multiple search filters to retrieve KT articles from MEDLINE. STUDY DESIGN AND SETTING Analytic survey using articles from 12 journals, tagged as having KT content and also as describing a KT application or containing a KT theory. RESULTS Of 2,594 articles, 579 were KT articles, of which 201 were about KT applications and 152 were about KT theory. Search filter sensitivity (retrieval efficiency) peaked at 83%-94%, with specificity (avoiding retrieval of irrelevant material) of approximately 50%. Filter performance was enhanced with multiple terms, but these filters often had reduced specificity. Performance was higher for KT application and KT theory articles. These filters can select KT material, although many irrelevant articles will also be retrieved. CONCLUSION KT search filters were developed and tested, with good sensitivity but suboptimal specificity. Further research is needed to improve their performance.
17. Inclusion of methodological filters in searches for diagnostic test accuracy studies misses relevant studies. J Clin Epidemiol 2012; 65:116-7; author reply 117-8. PMID: 22118266; DOI: 10.1016/j.jclinepi.2011.02.011
|
18
|
Abstract
BACKGROUND Physicians face challenges when searching PubMed for research evidence: they may miss relevant articles while retrieving too many nonrelevant articles. We investigated whether the use of search filters in PubMed improves searching by physicians. METHODS We asked a random sample of Canadian nephrologists to answer unique clinical questions derived from 100 systematic reviews of renal therapy. Physicians provided the search terms that they would type into PubMed to locate articles to answer these questions. We entered the physician-provided search terms into PubMed and applied two types of search filters, alone or in combination: a methods-based filter designed to identify high-quality studies about treatment (Clinical Queries "therapy") and a topic-based filter designed to identify studies with renal content. We evaluated the comprehensiveness (proportion of relevant articles found) and efficiency (ratio of relevant to nonrelevant articles) of the filtered and nonfiltered searches. Primary studies included in the systematic reviews served as the reference standard for relevant articles. RESULTS On average, the physician-provided search terms retrieved 46% of the relevant articles, and 6% of the retrieved articles were relevant (a ratio of relevant to nonrelevant articles of 1:16). The use of both filters together produced a marked improvement in efficiency, resulting in a ratio of relevant to nonrelevant articles of 1:5 (16 percentage point improvement; 99% confidence interval 9% to 22%; p < 0.003), with no substantive change in comprehensiveness (44% of relevant articles found; p = 0.55). INTERPRETATION The use of PubMed search filters improves the efficiency of physician searches. Improved search performance may enhance the transfer of research into practice and improve patient care.
|
19
|
Abstract P200: Computerized Clinical Decision Support Systems for Therapeutic Drug Monitoring and Dosing: A Decision Maker--Researcher Partnership Systematic Review. Circ Cardiovasc Qual Outcomes 2011. [DOI: 10.1161/circoutcomes.4.suppl_2.ap200]
Abstract
Background
Optimization of the return on investments in information technology innovations requires that current best evidence be considered for an effect on care processes and health outcomes. Computerized clinical decision support systems (CCDSSs) might improve therapeutic drug monitoring and dosing (TDMD) by providing patient-tailored clinical recommendations. We summarized current evidence from randomized controlled trials (RCTs) for the effect of CCDSSs on TDMD.
Methods
A decision-maker-researcher partnership systematic review was performed to optimize the practical implementation of results. Studies from a previous review on the effect of CCDSSs (Garg AX, 2005) were included if they addressed TDMD and were RCTs. Additional RCTs were sought until January 2010 in MEDLINE, EMBASE, Evidence-Based Medicine Reviews and Inspec databases. RCTs assessing the effect of a CCDSS on process of care or patient outcomes were selected by pairs of independent reviewers.
Results
In total, 33 RCTs were identified that assessed the effect of a CCDSS on management of vitamin K antagonists (14), insulin (6), theophylline/aminophylline (4), aminoglycosides (3), digoxin (2), lidocaine (1), or as part of a multifaceted approach (3). All studies combined enrolled 24,627 patients; 13,219 were in the largest study, and only 6 other studies enrolled over 500 patients. Most studies were performed in one center (63%) and cluster randomization, of either clinics or physicians, was rarely used (18%). CCDSSs were usually stand-alone systems (76%) primarily used by physicians (85%).
Overall, 18 of 30 studies (60%) showed an improvement in the process of care and 4 of 19 (21%) an improvement in patient outcomes. All evaluable studies assessing insulin dosing for glycemic control showed an improvement. In meta-analysis, CCDSSs for vitamin K antagonist dosing improved the time that patients spent in the therapeutic range by 6.1% (95% confidence interval: 0.46-11.83; p=0.03).
Conclusions
CCDSSs have potential for improving process of care for TDMD, specifically insulin and vitamin K antagonist dosing, but effects on patient outcomes were uncertain. More potent CCDSSs are needed and should be evaluated using cluster randomization, primarily assessing patient outcomes.
|
20
|
Search filter precision can be improved by NOTing out irrelevant content. AMIA Annu Symp Proc 2011; 2011:1506-1513. [PMID: 22195215 PMCID: PMC3243169]
Abstract
BACKGROUND Most methodologic search filters developed for use in large electronic databases such as MEDLINE have low precision. One method that has been proposed but not tested for improving precision is NOTing out irrelevant content. OBJECTIVE To determine if search filter precision can be improved by NOTing out the text words and index terms assigned to those articles that are retrieved but are off-target. DESIGN Analytic survey. METHODS NOTing out unique terms in off-target articles and testing search filter performance in the Clinical Hedges Database. MAIN OUTCOME MEASURES Sensitivity, specificity, precision and number needed to read (NNR). RESULTS For all purpose categories (diagnosis, prognosis and etiology) except treatment and for all databases (MEDLINE, EMBASE, CINAHL and PsycINFO), constructing search filters that NOTed out irrelevant content resulted in substantive improvements in NNR (over four-fold for some purpose categories and databases). CONCLUSION Search filter precision can be improved by NOTing out irrelevant content.
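The NOT construction and the number-needed-to-read (NNR) outcome can be sketched as follows; the base filter and the NOTed-out terms here are hypothetical placeholders, not the terms derived from the Clinical Hedges data:

```python
# Build a filter that NOTs out content unique to off-target articles, and
# compute number needed to read (NNR). Filter strings are placeholders.
base_filter = "randomized controlled trial[pt] OR random*[tiab]"
noise_terms = ["editorial[pt]", "comment[pt]", "news[pt]"]

query = f"({base_filter}) NOT ({' OR '.join(noise_terms)})"

def nnr(precision):
    """NNR: how many retrieved articles must be read to find one relevant one."""
    return 1.0 / precision

# e.g. raising precision from 0.125 to 0.25 halves the NNR from 8 to 4 articles
```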
|
21
|
Development and validation of filters for the retrieval of studies of clinical examination from Medline. J Med Internet Res 2011; 13:e82. [PMID: 22011384 PMCID: PMC3222198 DOI: 10.2196/jmir.1826]
Abstract
Background Efficiently finding clinical examination studies—studies that quantify the value of symptoms and signs in the diagnosis of disease—is becoming increasingly difficult. Filters developed to retrieve studies of diagnosis from Medline lack specificity because they also retrieve large numbers of studies on the diagnostic value of imaging and laboratory tests. Objective The objective was to develop filters for retrieving clinical examination studies from Medline. Methods We developed filters in a training dataset and validated them in a testing database. We created the training database by hand searching 161 journals (n = 52,636 studies). We evaluated the recall and precision of 65 candidate single-term filters in identifying studies that reported the sensitivity and specificity of symptoms or signs in the training database. To identify best combinations of these search terms, we used recursive partitioning. The best-performing filters in the training database as well as 13 previously developed filters were evaluated in a testing database (n = 431,120 studies). We also examined the impact of examining reference lists of included articles on recall. Results In the training database, the single-term filters with the highest recall (95%) and the highest precision (8.4%) were diagnosis[subheading] and “medical history taking”[MeSH], respectively. The multiple-term filter developed using recursive partitioning (the RP filter) had a recall of 100% and a precision of 89% in the training database. In the testing database, the Haynes-2004-Sensitive filter (recall 98%, precision 0.13%) and the RP filter (recall 89%, precision 0.52%) showed the best performance. The recall of these two filters increased to 99% and 94% respectively with review of the reference lists of the included articles. Conclusions Recursive partitioning appears to be a useful method of developing search filters. 
The empirical search filters proposed here can assist in the retrieval of clinical examination studies from Medline; however, because of the low precision of the search strategies, retrieving relevant studies remains challenging. Improving precision may require systematic changes in the tagging of articles by the National Library of Medicine.
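The recursive-partitioning approach can be sketched as a tiny greedy splitter over binary term-presence features; the term names and records below are toy data, not the paper's 65 candidate terms or its 52,636-study training database:

```python
# Minimal recursive partitioning: at each node, split on the search term whose
# presence/absence best agrees with the relevance labels. Toy data only.
from collections import Counter

def grow(records, labels, terms, depth=2):
    if depth == 0 or len(set(labels)) == 1 or not terms:
        return Counter(labels).most_common(1)[0][0]  # leaf: majority label
    best = max(terms, key=lambda t: sum(int(rec[t] == lab)
                                        for rec, lab in zip(records, labels)))
    split = {0: ([], []), 1: ([], [])}
    for rec, lab in zip(records, labels):
        split[rec[best]][0].append(rec)
        split[rec[best]][1].append(lab)
    rest = [t for t in terms if t != best]
    return {best: {value: grow(r, l, rest, depth - 1) if l else 0
                   for value, (r, l) in split.items()}}

# Toy records: 1 = term present in the article record, 0 = absent.
recs = [{"sens_spec": 1, "mesh_history": 0}, {"sens_spec": 0, "mesh_history": 1},
        {"sens_spec": 1, "mesh_history": 1}, {"sens_spec": 0, "mesh_history": 0}]
labs = [1, 0, 1, 0]  # 1 = clinical examination study
tree = grow(recs, labs, ["sens_spec", "mesh_history"])
```

Here the splitter picks the term that perfectly separates the toy labels; on real data the tree's paths become the AND/NOT clauses of a multiple-term filter.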
|
22
|
Computerized clinical decision support systems for therapeutic drug monitoring and dosing: a decision-maker-researcher partnership systematic review. Implement Sci 2011; 6:90. [PMID: 21824384 PMCID: PMC3170236 DOI: 10.1186/1748-5908-6-90]
Abstract
Background Some drugs have a narrow therapeutic range and require monitoring and dose adjustments to optimize their efficacy and safety. Computerized clinical decision support systems (CCDSSs) may improve the net benefit of these drugs. The objective of this review was to determine if CCDSSs improve processes of care or patient outcomes for therapeutic drug monitoring and dosing. Methods We conducted a decision-maker-researcher partnership systematic review. Studies from our previous review were included, and new studies were sought until January 2010 in MEDLINE, EMBASE, Evidence-Based Medicine Reviews, and Inspec databases. Randomized controlled trials assessing the effect of a CCDSS on process of care or patient outcomes were selected by pairs of independent reviewers. A study was considered to have a positive effect (i.e., CCDSS showed improvement) if at least 50% of the relevant study outcomes were statistically significantly positive. Results Thirty-three randomized controlled trials were identified, assessing the effect of a CCDSS on management of vitamin K antagonists (14), insulin (6), theophylline/aminophylline (4), aminoglycosides (3), digoxin (2), lidocaine (1), or as part of a multifaceted approach (3). Cluster randomization was rarely used (18%) and CCDSSs were usually stand-alone systems (76%) primarily used by physicians (85%). Overall, 18 of 30 studies (60%) showed an improvement in the process of care and 4 of 19 (21%) an improvement in patient outcomes. All evaluable studies assessing insulin dosing for glycaemic control showed an improvement. In meta-analysis, CCDSSs for vitamin K antagonist dosing significantly improved time in therapeutic range. Conclusions CCDSSs have potential for improving process of care for therapeutic drug monitoring and dosing, specifically insulin and vitamin K antagonist dosing. 
However, studies were small and generally of modest quality, and effects on patient outcomes were uncertain, with no convincing benefit in the largest studies. At present, no firm recommendation for specific systems can be given. More potent CCDSSs need to be developed and should be evaluated by independent researchers using cluster randomization and primarily assess patient outcomes related to drug efficacy and safety.
|
23
|
Computerized clinical decision support systems for acute care management: a decision-maker-researcher partnership systematic review of effects on process of care and patient outcomes. Implement Sci 2011; 6:91. [PMID: 21824385 PMCID: PMC3169487 DOI: 10.1186/1748-5908-6-91]
Abstract
BACKGROUND Acute medical care often demands timely, accurate decisions in complex situations. Computerized clinical decision support systems (CCDSSs) have many features that could help. However, as for any medical intervention, claims that CCDSSs improve care processes and patient outcomes need to be rigorously assessed. The objective of this review was to systematically review the effects of CCDSSs on process of care and patient outcomes for acute medical care. METHODS We conducted a decision-maker-researcher partnership systematic review. MEDLINE, EMBASE, Evidence-Based Medicine Reviews databases (Cochrane Database of Systematic Reviews, DARE, ACP Journal Club, and others), and the Inspec bibliographic database were searched to January 2010, in all languages, for randomized controlled trials (RCTs) of CCDSSs in all clinical areas. We included RCTs that evaluated the effect on process of care or patient outcomes of a CCDSS used for acute medical care compared with care provided without a CCDSS. A study was considered to have a positive effect (i.e., CCDSS showed improvement) if at least 50% of the relevant study outcomes were statistically significantly positive. RESULTS Thirty-six studies met our inclusion criteria for acute medical care. The CCDSS improved process of care in 63% (22/35) of studies, including 64% (9/14) of medication dosing assistants, 82% (9/11) of management assistants using alerts/reminders, 38% (3/8) of management assistants using guidelines/algorithms, and 67% (2/3) of diagnostic assistants. Twenty studies evaluated patient outcomes, of which three (15%) reported improvements, all of which were medication dosing assistants. CONCLUSION The majority of CCDSSs demonstrated improvements in process of care, but patient outcomes were less likely to be evaluated and far less likely to show positive results.
|
24
|
Can computerized clinical decision support systems improve practitioners' diagnostic test ordering behavior? A decision-maker-researcher partnership systematic review. Implement Sci 2011; 6:88. [PMID: 21824382 PMCID: PMC3174115 DOI: 10.1186/1748-5908-6-88]
Abstract
Background Underuse and overuse of diagnostic tests have important implications for health outcomes and costs. Decision support technology purports to optimize the use of diagnostic tests in clinical practice. The objective of this review was to assess whether computerized clinical decision support systems (CCDSSs) are effective at improving the ordering of tests for diagnosis, monitoring of disease, or monitoring of treatment. The outcome of interest was the effect on practitioners' diagnostic test-ordering behavior. Methods We conducted a decision-maker-researcher partnership systematic review. We searched MEDLINE, EMBASE, Ovid's EBM Reviews database, Inspec, and reference lists for eligible articles published up to January 2010. We included randomized controlled trials comparing the use of CCDSSs to usual practice or non-CCDSS controls in clinical care settings. Trials were eligible if at least one component of the CCDSS gave suggestions for ordering or performing a diagnostic procedure. We considered studies 'positive' if they showed a statistically significant improvement in at least 50% of test ordering outcomes. Results Thirty-five studies were identified, with significantly higher methodological quality in those published after the year 2000 (p = 0.002). Thirty-three trials reported evaluable data on diagnostic test ordering, and 55% (18/33) of CCDSSs improved testing behavior overall, including 83% (5/6) for diagnosis, 63% (5/8) for treatment monitoring, 35% (6/17) for disease monitoring, and 100% (3/3) for other purposes. Four of the systems explicitly attempted to reduce test ordering rates, and all succeeded. Factors of particular interest to decision makers, such as costs, user satisfaction, and impact on workflow, were rarely investigated or reported. Conclusions Some CCDSSs can modify practitioner test-ordering behavior. 
To better inform development and implementation efforts, studies should describe in more detail potentially important factors such as system design, user interface, local context, implementation strategy, and evaluate impact on user satisfaction and workflow, costs, and unintended consequences.
|
25
|
Computerized clinical decision support systems for primary preventive care: a decision-maker-researcher partnership systematic review of effects on process of care and patient outcomes. Implement Sci 2011; 6:87. [PMID: 21824381 PMCID: PMC3173370 DOI: 10.1186/1748-5908-6-87]
Abstract
BACKGROUND Computerized clinical decision support systems (CCDSSs) are claimed to improve processes and outcomes of primary preventive care (PPC), but their effects, safety, and acceptance must be confirmed. We updated our previous systematic reviews of CCDSSs and integrated a knowledge translation approach in the process. The objective was to review randomized controlled trials (RCTs) assessing the effects of CCDSSs for PPC on process of care, patient outcomes, harms, and costs. METHODS We conducted a decision-maker-researcher partnership systematic review. We searched MEDLINE, EMBASE, Ovid's EBM Reviews Database, Inspec, and other databases, as well as reference lists through January 2010. We contacted authors to confirm data or provide additional information. We included RCTs that assessed the effect of a CCDSS for PPC on process of care and patient outcomes compared to care provided without a CCDSS. A study was considered to have a positive effect (i.e., CCDSS showed improvement) if at least 50% of the relevant study outcomes were statistically significantly positive. RESULTS We added 17 new RCTs to our 2005 review for a total of 41 studies. RCT quality improved over time. CCDSSs improved process of care in 25 of 40 (63%) RCTs. Cumulative scientifically strong evidence supports the effectiveness of CCDSSs for screening and management of dyslipidaemia in primary care. There is mixed evidence for effectiveness in screening for cancer and mental health conditions, multiple preventive care activities, vaccination, and other preventive care interventions. Fourteen (34%) trials assessed patient outcomes, and four (29%) reported improvements with the CCDSS. Most trials were not powered to evaluate patient-important outcomes. CCDSS costs and adverse events were reported in only six (15%) and two (5%) trials, respectively. Information on study duration was often missing, limiting our ability to assess sustainability of CCDSS effects. 
CONCLUSIONS Evidence supports the effectiveness of CCDSSs for screening and treatment of dyslipidaemia in primary care with less consistent evidence for CCDSSs used in screening for cancer and mental health-related conditions, vaccinations, and other preventive care. CCDSS effects on patient outcomes, safety, costs of care, and provider satisfaction remain poorly supported.
|
26
|
Computerized clinical decision support systems for chronic disease management: a decision-maker-researcher partnership systematic review. Implement Sci 2011; 6:92. [PMID: 21824386 PMCID: PMC3170626 DOI: 10.1186/1748-5908-6-92]
Abstract
Background The use of computerized clinical decision support systems (CCDSSs) may improve chronic disease management, which requires recurrent visits to multiple health professionals, ongoing disease and treatment monitoring, and patient behavior modification. The objective of this review was to determine if CCDSSs improve the processes of chronic care (such as diagnosis, treatment, and monitoring of disease) and associated patient outcomes (such as effects on biomarkers and clinical exacerbations). Methods We conducted a decision-maker-researcher partnership systematic review. We searched MEDLINE, EMBASE, Ovid's EBM Reviews database, Inspec, and reference lists for potentially eligible articles published up to January 2010. We included randomized controlled trials that compared the use of CCDSSs to usual practice or non-CCDSS controls. Trials were eligible if at least one component of the CCDSS was designed to support chronic disease management. We considered studies 'positive' if they showed a statistically significant improvement in at least 50% of relevant outcomes. Results Of 55 included trials, 87% (n = 48) measured system impact on the process of care, and 52% (n = 25) of those demonstrated statistically significant improvements. Sixty-five percent (36/55) of trials measured impact on patient outcomes, typically non-major (surrogate) ones, and 31% (n = 11) of those demonstrated benefits. Factors of interest to decision makers, such as cost, user satisfaction, system interface and feature sets, unique design and deployment characteristics, and effects on user workflow, were rarely investigated or reported. Conclusions A small majority (just over half) of CCDSSs improved care processes in chronic disease management, and some improved patient health. Policy makers, healthcare administrators, and practitioners should be aware that the evidence of CCDSS effectiveness is limited, especially with respect to the small number and size of studies measuring patient outcomes.
|
27
|
Computerized clinical decision support systems for drug prescribing and management: a decision-maker-researcher partnership systematic review. Implement Sci 2011; 6:89. [PMID: 21824383 PMCID: PMC3179735 DOI: 10.1186/1748-5908-6-89]
Abstract
Background Computerized clinical decision support systems (CCDSSs) for drug therapy management are designed to promote safe and effective medication use. Evidence documenting the effectiveness of CCDSSs for improving drug therapy is necessary for informed adoption decisions. The objective of this review was to systematically review randomized controlled trials assessing the effects of CCDSSs for drug therapy management on process of care and patient outcomes. We also sought to identify system and study characteristics that predicted benefit. Methods We conducted a decision-maker-researcher partnership systematic review. We updated our earlier reviews (1998, 2005) by searching MEDLINE, EMBASE, EBM Reviews, Inspec, and other databases, and consulting reference lists through January 2010. Authors of 82% of included studies confirmed or supplemented extracted data. We included only randomized controlled trials that evaluated the effect on process of care or patient outcomes of a CCDSS for drug therapy management compared to care provided without a CCDSS. A study was considered to have a positive effect (i.e., CCDSS showed improvement) if at least 50% of the relevant study outcomes were statistically significantly positive. Results Sixty-five studies met our inclusion criteria, including 41 new studies since our previous review. Methodological quality was generally high and unchanged with time. CCDSSs improved process of care performance in 37 of the 59 studies assessing this type of outcome (64%, 57% of all studies). Twenty-nine trials assessed patient outcomes, of which six trials (21%, 9% of all trials) reported improvements. Conclusions CCDSSs inconsistently improved process of care measures and seldom improved patient outcomes. Lack of clear patient benefit and lack of data on harms and costs preclude a recommendation to adopt CCDSSs for drug therapy management.
|
28
|
Sensitive Clinical Queries retrieved relevant systematic reviews as well as primary studies: an analytic survey. J Clin Epidemiol 2011; 64:1341-9. [PMID: 21775104 DOI: 10.1016/j.jclinepi.2011.04.007]
Abstract
OBJECTIVE To determine how well the previously validated broad and narrow Clinical Queries for treatment, diagnosis, prognosis, and etiology studies retrieve not only primary studies but also relevant systematic reviews. STUDY DESIGN AND SETTING Using the Clinical Hedges Database housed at McMaster University, we tested the retrieval performance of the Clinical Queries. RESULTS For most purpose categories (therapy, diagnosis, prognosis, and etiology) and most databases (MEDLINE, EMBASE, CINAHL, and PsycINFO), the sensitive (broad) Clinical Queries search terms had sensitivities higher than 90% for retrieving relevant systematic reviews as well as primary studies. When testing the specific (narrow) Clinical Queries, in 8 of 12 cases specificity was 94% or higher, but sensitivity dropped below 50%. For all purpose categories and all databases, performance was improved when combining the sensitive or specific Clinical Queries with our existing sensitive or specific systematic review search filter using the Boolean OR: sensitivities of the broad combinations ranged from 90.7% to 99.7%, and specificities of the narrow combinations ranged from 92.4% to 98.0% with sensitivities now higher than 50%. CONCLUSION The sensitive Clinical Queries for therapy, diagnosis, prognosis, and etiology perform well in retrieving not only primary studies but also systematic reviews. Search performance can be improved by combining the Clinical Queries with our sensitive or specific systematic review filter.
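The Boolean OR combination described here amounts to simple query assembly; the filter strings below are simplified placeholders, not the validated Clinical Queries or systematic review filter terms:

```python
# OR together a Clinical Queries filter and a systematic review filter, then
# AND the result with the clinical content terms. Placeholder filter strings.
clinical_query = "randomized controlled trial[pt] OR randomly[tiab]"
sr_filter = "systematic review[ti] OR meta-analysis[pt]"
content = "heart failure"

combined = f"({content}) AND (({clinical_query}) OR ({sr_filter}))"
```

Because the two filters are joined with OR, any article matched by either filter is retained, which is why the combined sensitivity exceeds that of either filter alone.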
|
29
|
Retrieval of diagnostic and treatment studies for clinical use through PubMed and PubMed's Clinical Queries filters. J Am Med Inform Assoc 2011; 18:652-9. [PMID: 21680559 DOI: 10.1136/amiajnl-2011-000233]
Abstract
OBJECTIVE Clinical Queries filters were developed to improve the retrieval of high-quality studies in searches on clinical matters. The study objective was to determine the yield of relevant citations and physician satisfaction while searching for diagnostic and treatment studies using the Clinical Queries page of PubMed compared with searching PubMed without these filters. MATERIALS AND METHODS Forty practicing physicians, presented with standardized treatment and diagnosis questions and one question of their choosing, entered search terms which were processed in a random, blinded fashion through PubMed alone and PubMed Clinical Queries. Participants rated search retrievals for applicability to the question at hand and satisfaction. RESULTS For treatment, the primary outcome of retrieval of relevant articles was not significantly different between the groups, but a higher proportion of articles from the Clinical Queries searches met methodologic criteria (p=0.049), and more articles were published in core internal medicine journals (p=0.056). For diagnosis, the filtered results returned more relevant articles (p=0.031) and fewer irrelevant articles (overall retrieval less, p=0.023); participants needed to screen fewer articles before arriving at the first relevant citation (p<0.05). Relevance was also influenced by the content terms participants used in searching. Participants varied greatly in their search performance. DISCUSSION Clinical Queries filtered searches returned more high-quality studies, though retrieval of relevant articles differed significantly between the groups only for diagnosis questions. CONCLUSION Retrieving clinically important research studies from Medline is a challenging task for physicians. Methodological search filters can improve search retrieval.
|
30
|
Abstract
OBJECTIVE This study identified the journals with the highest yield of clinical obesity research articles and surveyed the scatter of such studies across journals. The study exemplifies an approach to establish a journal collection that is likely to contain most new knowledge about a field. DESIGN AND METHODS All original studies that were cited in 40 systematic reviews about obesity topics ('included studies') were compiled and journal titles in which they were published were extracted. The journals were ranked by the number of included studies. The highest-yielding journals for clinical obesity and the scatter across journal titles were determined. A subset of these journals was created in MEDLINE (PubMed) to test search recall and precision for high-quality studies of obesity treatment (that is, articles that pass predetermined methodology criteria, including random allocation of participants to comparison groups, assessment of clinical outcomes, and at least 80% follow-up). RESULTS Articles in 252 journals were cited in the systematic reviews. The three highest-yielding journals specialized in obesity, but they published only 19.2% of the research, leaving 80.8% scattered across 249 non-obesity journals. The MEDLINE journal subset comprised 241 journals (11 journals were not indexed in MEDLINE) and included 82% of the clinical obesity research articles retrieved by a search for high-quality treatment studies ('recall' of 82%). Of the articles retrieved, 11% were about clinical obesity care ('precision' of 11%), compared with precision of 6% for obesity treatment studies in the full MEDLINE database. CONCLUSION Obesity journals captured only a small proportion of the literature on clinical obesity care. Those wishing to keep up in this field will need to develop more inclusive strategies than reading these specialty journals. A journal subset based on these findings may be useful when searching large electronic databases to increase search precision.
|
31
|
Abstract
BACKGROUND As the knowledge translation and comparative effectiveness research agendas gain momentum, we can expect more evidence on which to base quality improvement (QI) programmes. Unaided searches for such content in the literature, however, are likely to be daunting, with searches missing key articles while mainly retrieving articles that are irrelevant to the question being asked. The objective of this study was to develop and validate optimal Medline search filters for retrieving original and review articles about clinical QI. METHODS Analytical survey in the McMaster Clinical Hedges database and Health Knowledge Refinery (HKR) of 161 clinical journals to determine the operating characteristics of QI search filters developed by computerised combinations of terms selected to detect original QI studies and systematic reviews meeting basic methodological criteria for scientific merit. Results from a derivation random subset of articles were tested in a validation random subset. RESULTS The Clinical Hedges QI database contained 49,233 citations of which 471 (0.96%) were original or review QI studies; of those, 282 (60%) were methodologically sound. Combinations of search terms reached peak sensitivities of 100% at a specificity of 89.3% for detecting methodologically sound original and review QI studies, and sensitivities of 97.6% at a specificity of 53.0% for detecting all original and review QI studies independent of rigour. Operating characteristics of the search filters derived in the development database worked similarly in the validation database, without statistical differences. CONCLUSION New empirically derived Medline search filters have been validated to optimise retrieval of original and review QI articles.
|
32
|
Optimal search filters for renal information in EMBASE. Am J Kidney Dis 2010; 56:14-22. [PMID: 20231047 DOI: 10.1053/j.ajkd.2009.11.026]
Abstract
BACKGROUND EMBASE is a popular database used to retrieve biomedical information. Our objective was to develop and test search filters to help clinicians and researchers efficiently retrieve articles with renal information in EMBASE. STUDY DESIGN We used a diagnostic test assessment framework because filters operate similarly to screening tests. SETTINGS & PARTICIPANTS We divided a sample of 5,302 articles from 39 journals into development and validation sets of articles. INDEX TEST Information retrieval properties were assessed by treating each search filter as a "diagnostic test" or screening procedure for the detection of relevant articles. We tested the performance of 1,936,799 search filters made of unique renal terms and their combinations. REFERENCE STANDARD & OUTCOME: The reference standard was manual review of each article. We calculated the sensitivity and specificity of each filter to identify articles with renal information. RESULTS The best renal filters consisted of multiple search terms, such as "renal replacement therapy," "renal," "kidney disease," and "proteinuria," and the truncated terms "kidney," "dialy," "neph," "glomerul," and "hemodial." These filters achieved peak sensitivities of 98.7% (95% CI, 97.9-99.6) and specificities of 98.5% (95% CI, 98.0-99.0). The retrieval performance of these filters remained excellent in the validation set of independent articles. LIMITATIONS The retrieval performance of any search will vary depending on the quality of all search concepts used, not just renal terms. CONCLUSIONS We empirically developed and validated high-performance renal search filters for EMBASE. These filters can be programmed into the search engine or used on their own to improve the efficiency of searching.
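The diagnostic-test framing above treats each filter retrieval as a screening result against the manual reference standard; a minimal 2×2 sketch (the counts below are hypothetical, picked only to land near the reported peak operating characteristics):

```python
def filter_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Operating characteristics of a search filter treated as a screening test.

    tp: relevant articles retrieved      fn: relevant articles missed
    fp: irrelevant articles retrieved    tn: irrelevant articles excluded
    """
    return {
        "sensitivity": tp / (tp + fn),           # share of relevant articles retrieved
        "specificity": tn / (tn + fp),           # share of irrelevant articles excluded
        "precision":   tp / (tp + fp),           # share of retrievals that are relevant
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts:
m = filter_metrics(tp=148, fp=30, fn=2, tn=1970)
print(round(m["sensitivity"], 3), round(m["specificity"], 3))  # 0.987 0.985
```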
|
33
|
A cross-sectional study of the number and frequency of terms used to refer to knowledge translation in a body of health literature in 2006: a Tower of Babel? Implement Sci 2010; 5:16. [PMID: 21080976 PMCID: PMC2834600 DOI: 10.1186/1748-5908-5-16]
Abstract
BACKGROUND The study of implementing research findings into practice is rapidly growing and has acquired many competing names (e.g., dissemination, uptake, utilization, translation) and contributing disciplines. The use of multiple terms across disciplines poses barriers to communication and progress for applying research findings. We sought to establish an inventory of terms describing this field and how often authors use them in a collection of health literature published in 2006. METHODS We refer to this field as knowledge translation (KT). Terms describing aspects of KT and their definitions were collected from literature, the internet, reports, textbooks, and contact with experts. We compiled a database of KT and other articles by reading 12 healthcare journals representing multiple disciplines. All articles published in these journals in 2006 were categorized as being KT or not. The KT articles (all KT) were further categorized, if possible, for whether they described KT projects or implementations (KT application articles), or presented the theoretical basis, models, tools, methods, or techniques of KT (KT theory articles). Accuracy was checked using duplicate reading. Custom-designed software determined how often KT terms were used in the titles and abstracts of articles categorized as being KT. RESULTS A total of 2,603 articles were assessed, and 581 were identified as KT articles. Of these, 201 described KT applications, and 153 included KT theory. Of the 100 KT terms collected, 46 were used by the authors in the titles or abstracts of articles categorized as being KT. For all 581 KT articles, eight terms or term variations used by authors were highly discriminating for separating KT and non-KT articles (p < 0.001): implementation, adoption, quality improvement, dissemination, complex intervention (with multiple endings), implementation (within three words of) research, and complex intervention. More KT terms were associated with KT application articles (n = 13) and KT theory articles (n = 18). CONCLUSIONS We collected 100 terms describing KT research. Authors used 46 of them in titles and abstracts of KT articles. Of these, approximately half discriminated between KT and non-KT articles. Thus, the need for consolidation and consistent use of fewer terms related to KT research is evident.
|
35
|
Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: methods of a decision-maker-researcher partnership systematic review. Implement Sci 2010; 5:12. [PMID: 20181104 PMCID: PMC2829489 DOI: 10.1186/1748-5908-5-12]
Abstract
Background Computerized clinical decision support systems are information technology-based systems designed to improve clinical decision-making. As with any healthcare intervention with claims to improve process of care or patient outcomes, decision support systems should be rigorously evaluated before widespread dissemination into clinical practice. Engaging healthcare providers and managers in the review process may facilitate knowledge translation and uptake. The objective of this research was to form a partnership of healthcare providers, managers, and researchers to review randomized controlled trials assessing the effects of computerized decision support for six clinical application areas: primary preventive care, therapeutic drug monitoring and dosing, drug prescribing, chronic disease management, diagnostic test ordering and interpretation, and acute care management; and to identify study characteristics that predict benefit. Methods The review was undertaken by the Health Information Research Unit, McMaster University, in partnership with Hamilton Health Sciences, the Hamilton, Niagara, Haldimand, and Brant Local Health Integration Network, and pertinent healthcare service teams. Following agreement on information needs and interests with decision-makers, our earlier systematic review was updated by searching Medline, EMBASE, EBM Review databases, and Inspec, and reviewing reference lists through 6 January 2010. Data extraction items were expanded according to input from decision-makers. Authors of primary studies were contacted to confirm data and to provide additional information. Eligible trials were organized according to clinical area of application. We included randomized controlled trials that evaluated the effect on practitioner performance or patient outcomes of patient care provided with a computerized clinical decision support system compared with patient care without such a system. 
Results Data will be summarized using descriptive summary measures, including proportions for categorical variables and means for continuous variables. Univariable and multivariable logistic regression models will be used to investigate associations between outcomes of interest and study specific covariates. When reporting results from individual studies, we will cite the measures of association and p-values reported in the studies. If appropriate for groups of studies with similar features, we will conduct meta-analyses. Conclusion A decision-maker-researcher partnership provides a model for systematic reviews that may foster knowledge translation and uptake.
|
36
|
Finding knowledge translation articles in CINAHL. Stud Health Technol Inform 2010; 160:1179-1183. [PMID: 20841870]
Abstract
BACKGROUND The process of moving research into practice has a number of names, including knowledge translation (KT). Researchers and decision makers need ready access to the KT literature for the field to grow and for the existing evidence to be evaluated. METHODS We developed and validated search filters for finding KT articles in the database Cumulative Index to Nursing and Allied Health Literature (CINAHL). A gold standard database was constructed by hand searching 12 journals and classifying articles as KT Content, KT Application, or KT Theory. MAIN OUTCOME MEASURES Sensitivity, specificity, precision, and accuracy of the search filters. RESULTS Optimized search filters had fairly low sensitivity and specificity for KT Content (58.4% and 64.9%, respectively), while sensitivity and specificity increased for retrieving KT Application (67.5% and 70.2%) and KT Theory articles (70.4% and 77.8%). CONCLUSION Search filter performance was suboptimal, reflecting the broad base of disciplines and vocabularies used by KT researchers. Such diversity makes retrieval of KT studies in CINAHL difficult.
|
37
|
Consistency and accuracy of indexing systematic review articles and meta-analyses in medline. Health Info Libr J 2009; 26:203-10. [PMID: 19712212 DOI: 10.1111/j.1471-1842.2008.00823.x]
Abstract
BACKGROUND Systematic review articles support the advance of science and translation of research evidence into healthcare practice. Inaccurate retrieval from medline could limit access to reviews. OBJECTIVE To determine the quality of indexing systematic reviews and meta-analyses in medline. METHODS The Clinical Hedges Database, containing the results of a hand search of 161 journals, was used to test medline indexing terms for their ability to retrieve systematic reviews that met predefined methodologic criteria (labelled as 'pass' review articles) and reviews that reported a meta-analysis. RESULTS The Clinical Hedges Database contained 49 028 articles; 753 were 'pass' review articles (552 with a meta-analysis). In total 758 review articles (independent of whether they passed) reported a meta-analysis. The search strategy that retrieved the highest number of 'pass' systematic reviews achieved a sensitivity of 97.1%. The publication type 'meta analysis' had a false positive rate of 5.6% (95% CI 3.9 to 7.6), and false negative rate of 0.31% (95% CI 0.26 to 0.36) for retrieving systematic reviews that reported a meta-analysis. CONCLUSIONS Inaccuracies in indexing systematic reviews and meta-analyses in medline can be partly overcome by a 5-term search strategy. Introducing a publication type for systematic reviews of the literature could improve retrieval performance.
|
38
|
Abstract
OBJECTIVE To develop and test a Medline filter that allows clinicians to search for articles within a clinical discipline, rather than searching the entire Medline database. DESIGN Diagnostic test assessment framework with development and validation phases. SETTING Sample of 4657 articles published in 2006 from 40 journals. REVIEWS Each article was manually reviewed, and 19.8% contained information relevant to the discipline of nephrology. The performance of 1 155 087 unique renal filters was compared with the manual review. MAIN OUTCOME MEASURES Sensitivity, specificity, precision, and accuracy of each filter. RESULTS The best renal filters combined two to 14 terms or phrases and included the terms "kidney" with multiple endings (that is, truncation), "renal replacement therapy", "renal dialysis", "kidney function tests", "renal", "nephr" truncated, "glomerul" truncated, and "proteinuria". These filters achieved peak sensitivities of 97.8% and specificities of 98.5%. Performance of filters remained excellent in the validation phase. CONCLUSIONS Medline can be filtered for the discipline of nephrology in a reliable manner. Storing these high performance renal filters in PubMed could help clinicians with their everyday searching. Filters can also be developed for other clinical disciplines by using similar methods.
|
39
|
Diagnostic test systematic reviews: bibliographic search filters ("Clinical Queries") for diagnostic accuracy studies perform well. J Clin Epidemiol 2009; 62:974-81. [PMID: 19230607 DOI: 10.1016/j.jclinepi.2008.11.006]
Abstract
BACKGROUND Systematic reviews of health care topics are valuable summaries of all pertinent studies on focused questions. However, finding all relevant primary studies for systematic reviews remains challenging. OBJECTIVES To determine the performance of the Clinical Queries sensitive search filter for diagnostic accuracy studies for retrieving studies for systematic reviews. METHODS We compared the yield of the sensitive Clinical Queries diagnosis search filter for MEDLINE and EMBASE to retrieve studies in diagnostic accuracy systematic reviews reported in ACP Journal Club in 2006. RESULTS Twelve of 22 diagnostic accuracy reviews (452 included studies) met the inclusion criteria. After excluding 11 studies not in MEDLINE or EMBASE, 95% of articles (417 of 441) were captured by the sensitive Clinical Queries diagnosis search filter (MEDLINE and EMBASE combined). Of 24 studies not retrieved by the filter, 22 were not diagnostic accuracy studies. Reanalysis of the Clinical Queries filter without these 22 nondiagnosis articles increased its performance to 99% (417 of 419). We found no substantive impact of the two articles missed by the Clinical Queries filter on the conclusions of the systematic reviews in which they were cited. CONCLUSION The sensitive Clinical Queries diagnostic search filter captured 99% of articles and 100% of substantive articles indexed in MEDLINE and EMBASE in diagnostic accuracy systematic reviews.
|
40
|
Toward automatic recognition of high quality clinical evidence. AMIA Annu Symp Proc 2008; 2008:368. [PMID: 18998881 PMCID: PMC2656036]
Abstract
Automatic methods for recognizing topically relevant documents supported by high quality research can assist clinicians in practicing evidence-based medicine. We approach the challenge of identifying articles with high quality clinical evidence as a binary classification problem. Combining predictions from supervised machine learning methods and using deep semantic features, we achieve 73.5% precision and 67% recall.
|
41
|
Towards automatic recognition of scientifically rigorous clinical research evidence. J Am Med Inform Assoc 2008; 16:25-31. [PMID: 18952929 DOI: 10.1197/jamia.m2996]
Abstract
The growing numbers of topically relevant biomedical publications readily available due to advances in document retrieval methods pose a challenge to clinicians practicing evidence-based medicine. It is increasingly time consuming to acquire and critically appraise the available evidence. This problem could be addressed in part if methods were available to automatically recognize rigorous studies immediately applicable in a specific clinical situation. We approach the problem of recognizing studies containing useable clinical advice from retrieved topically relevant articles as a binary classification problem. The gold standard used in the development of PubMed clinical query filters forms the basis of our approach. We identify scientifically rigorous studies using supervised machine learning techniques (Naïve Bayes, support vector machine (SVM), and boosting) trained on high-level semantic features. We combine these methods using an ensemble learning method (stacking). The performance of learning methods is evaluated using precision, recall and F1 score, in addition to area under the receiver operating characteristic (ROC) curve (AUC). Using a training set of 10,000 manually annotated MEDLINE citations, and a test set of an additional 2,000 citations, we achieve 73.7% precision and 61.5% recall in identifying rigorous, clinically relevant studies, with stacking over five feature-classifier combinations and 82.5% precision and 84.3% recall in recognizing rigorous studies with treatment focus using stacking over word + metadata feature vector. Our results demonstrate that a high quality gold standard and advanced classification methods can help clinicians acquire best evidence from the medical literature.
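The F1 score used to evaluate these classifiers is the harmonic mean of precision and recall; applied to the figures reported in the abstract, a short sketch:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Figures reported in the abstract:
print(round(f1_score(0.737, 0.615), 3))  # ~0.670 (rigorous, clinically relevant)
print(round(f1_score(0.825, 0.843), 3))  # ~0.834 (rigorous, treatment focus)
```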
|
42
|
Sample size determination for bibliographic retrieval studies. BMC Med Inform Decis Mak 2008; 8:43. [PMID: 18823538 PMCID: PMC2569926 DOI: 10.1186/1472-6947-8-43]
Abstract
BACKGROUND Research for developing search strategies to retrieve high-quality clinical journal articles from MEDLINE is expensive and time-consuming. The objective of this study was to determine the minimal number of high-quality articles in a journal subset that would need to be hand-searched to update or create new MEDLINE search strategies for treatment, diagnosis, and prognosis studies. METHODS The desired width of the 95% confidence intervals (W) for the lowest sensitivity among existing search strategies was used to calculate the number of high-quality articles needed to reliably update search strategies. New search strategies were derived in journal subsets formed by 2 approaches: random sampling of journals and top journals (having the most high-quality articles). The new strategies were tested in both the original large journal database and in a low-yielding journal (having few high-quality articles) subset. RESULTS For treatment studies, if W was 10% or less for the lowest sensitivity among our existing search strategies, a subset of 15 randomly selected journals or 2 top journals were adequate for updating search strategies, based on each approach having at least 99 high-quality articles. The new strategies derived in 15 randomly selected journals or 2 top journals performed well in the original large journal database. Nevertheless, the new search strategies developed using the random sampling approach performed better than those developed using the top journal approach in a low-yielding journal subset. For studies of diagnosis and prognosis, no journal subset had enough high-quality articles to achieve the expected W (10%). CONCLUSION The approach of randomly sampling a small subset of journals that includes sufficient high-quality articles is an efficient way to update or create search strategies for high-quality articles on therapy in MEDLINE. The concentrations of diagnosis and prognosis articles are too low for this approach.
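The sample-size logic above can be sketched with a Wald-type interval: the 95% CI for a sensitivity near p estimated from n articles has width roughly 2 × 1.96 × sqrt(p(1−p)/n), so solving for n at a target width W gives the required number of high-quality articles. This is a simplification for illustration; the study's exact method may differ.

```python
import math

def articles_needed(p: float, width: float, z: float = 1.96) -> int:
    """High-quality articles needed so a sensitivity near p has a 95%
    Wald confidence interval no wider than `width` (simplified sketch)."""
    return math.ceil((2 * z) ** 2 * p * (1 - p) / width ** 2)

# e.g. a sensitivity near 95% with a target CI width of 10%:
print(articles_needed(0.95, 0.10))  # 73
```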
|
43
|
Quality of reporting of diagnostic accuracy studies: no change since STARD statement publication--before-and-after study. Radiology 2008; 248:817-23. [PMID: 18710977 DOI: 10.1148/radiol.2483072067]
Abstract
PURPOSE To determine the quality of reporting of diagnostic accuracy studies before and after the Standards for Reporting of Diagnostic Accuracy (STARD) statement publication and to determine whether there is a difference in the quality of reporting by comparing STARD (endorsing) and non-STARD (nonendorsing) journals. MATERIALS AND METHODS Diagnostic accuracy studies were identified by hand searching six STARD and six non-STARD journals for 2001, 2002, 2004, and 2005. Diagnostic accuracy studies (n = 240) were assessed by using a checklist of 13 of 25 STARD items. The change in the mean total score on the modified STARD checklist was evaluated with analysis of covariance. The change in proportion of times that each individual STARD item was reported before and after STARD statement publication was evaluated (χ² tests for linear trend). RESULTS With mean total score as dependent factor, analysis of covariance showed that the interaction between the two independent factors (STARD or non-STARD journal and year of publication) was not significant (F = 0.664, df = 3, partial η² = 0.009, P = .58). Additionally, the frequency with which individual items on the STARD checklist were reported before and after STARD statement publication has remained relatively constant, with little difference between STARD and non-STARD journals. CONCLUSION After publication of the STARD statement in 2003, the quality of reporting of diagnostic accuracy studies remained similar to pre-STARD statement publication levels, and there was no meaningful difference (i.e., one additional item on the checklist of 13 of 25 STARD items being reported) in the quality of reporting between those journals that published the STARD statement and those that did not.
|
44
|
Systematic reviews and original articles differ in relevance, novelty, and use in an evidence-based service for physicians: PLUS project. J Clin Epidemiol 2008; 61:449-54. [PMID: 18394537 DOI: 10.1016/j.jclinepi.2007.10.016]
Abstract
OBJECTIVES To describe physicians' ratings and use of high-quality, clinically pertinent original articles and systematic reviews from over 110 clinical journals and the Cochrane Database of Systematic Reviews (CDSR). STUDY DESIGN AND SETTING Prospective observational study. Data were collected via an online clinical rating system of relevance and newsworthiness for quality-filtered clinical articles and via an online delivery service for practicing physicians, during the course of the McMaster Premium LiteratUre Service Trial. Clinical ratings of articles in the MORE system by more than 1,900 physicians were compared, and usage of these articles over 13 months by physicians who were not raters was examined. RESULTS Systematic reviews were rated significantly higher than original articles for relevance (P<0.001), but significantly lower for newsworthiness (P<0.001). Reviews published in the CDSR had significantly lower ratings for both relevance (P<0.001) and newsworthiness (P<0.001) than reviews published in other journals. Participants accessed reviews more often than original articles (P<0.001), and accessed reviews from journals more often than from the CDSR (P<0.001). CONCLUSION Physician ratings and the use of high-quality original articles and systematic reviews differed, generally favoring systematic reviews over original articles. Reviews published in journals were rated higher and accessed more often than Cochrane reviews.
|
45
|
Cumulative Index to Nursing and Allied Health Literature search strategies for identifying methodologically sound causation and prognosis studies. Appl Nurs Res 2008; 21:98-103. [PMID: 18457749 DOI: 10.1016/j.apnr.2006.04.004]
Abstract
We developed search strategies for detecting sound articles on causation and prognosis in Cumulative Index to Nursing and Allied Health Literature (CINAHL) in the year 2000. An analytic survey was conducted, comparing hand searches of 75 journals with retrievals from CINAHL for 5,020 search terms and 11,784 combinations for causation and 9,946 combinations for prognosis. For detecting sound causation studies, a three-term strategy maximized sensitivity at 97.0% with a specificity of 52.3%. For detecting sound prognosis studies, a three-term strategy maximized sensitivity at 92.2% with a specificity of 50.0%. These search filters will enhance the searching efforts of clinicians and researchers.
|
46
|
Prediction of citation counts for clinical articles at two years using data available within three weeks of publication: retrospective cohort study. BMJ 2008; 336:655-7. [PMID: 18292132 PMCID: PMC2270947 DOI: 10.1136/bmj.39482.526713.be]
Abstract
OBJECTIVE To determine if citation counts at two years could be predicted for clinical articles that pass basic criteria for critical appraisal using data within three weeks of publication from external sources and an online article rating service. DESIGN Retrospective cohort study. SETTING Online rating service, Canada. PARTICIPANTS 1274 articles from 105 journals published from January to June 2005, randomly divided into a 60:40 split to provide derivation and validation datasets. MAIN OUTCOME MEASURES 20 article and journal features, including ratings of clinical relevance and newsworthiness, routinely collected by the McMaster online rating of evidence system, compared with citation counts at two years. RESULTS The derivation analysis showed that the regression equation accounted for 60% of the variation (R2=0.60, 95% confidence interval 0.538 to 0.629). This model applied to the validation dataset gave a similar prediction (R2=0.56, 0.476 to 0.596, shrinkage 0.04; shrinkage measures how well the derived equation matches data from the validation dataset). Cited articles in the top half and top third were predicted with 83% and 61% sensitivity and 72% and 82% specificity. Higher citations were predicted by indexing in numerous databases; number of authors; abstraction in synoptic journals; clinical relevance scores; number of cited references; and original, multicentred, and therapy articles from journals with a greater proportion of articles abstracted. CONCLUSION Citation counts can be reliably predicted at two years using data within three weeks of publication.
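Shrinkage, as defined parenthetically above, is simply the drop in explained variance when the derived model is applied to the validation data; from the figures in the abstract:

```python
def shrinkage(r2_derivation: float, r2_validation: float) -> float:
    """Drop in R^2 when a derived regression model is applied to new data."""
    return r2_derivation - r2_validation

# Figures reported above: R^2 = 0.60 (derivation), 0.56 (validation)
print(round(shrinkage(0.60, 0.56), 2))  # 0.04
```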
|
47
|
Indexing of diagnosis accuracy studies in MEDLINE and EMBASE. AMIA Annu Symp Proc 2007; 2007:801-805. [PMID: 18693947 PMCID: PMC2813664]
Abstract
BACKGROUND STAndards for Reporting of Diagnostic Accuracy (STARD) were published in 2003 and endorsed by some journals but not others. OBJECTIVE To determine whether the quality of indexing of diagnostic accuracy studies in MEDLINE and EMBASE has improved since the STARD statement was published. DESIGN Evaluate the change in the mean number of "accurate index terms" assigned to diagnostic accuracy studies, comparing STARD (endorsing) and non-STARD (non-endorsing) journals, for 2 years before and after STARD publication. RESULTS In MEDLINE, no differences in indexing quality were found for STARD and non-STARD journals before or after the STARD statement was published in 2003. In EMBASE, indexing in STARD journals improved compared with non-STARD journals (p = 0.02). However, articles in STARD journals had half the number of accurate indexing terms as articles in non-STARD journals, both before and after STARD statement publication (p < 0.001).
|
48
|
Abstract
Nurses, allied health professionals, clinicians, and researchers increasingly use online access to evidence in the course of patient care or when conducting reviews on a particular topic. Qualitative research has an important role in evidence-based health care. Online searching for qualitative studies can be difficult, however, resulting in the need to develop search filters. The objective of this study was to develop optimal search strategies to retrieve qualitative studies in CINAHL for the 2000 publishing year. The authors conducted an analytic survey comparing hand searches of journals with retrievals from CINAHL for candidate search terms and combinations. Combinations of search terms reached peak sensitivities of 98.9% and peak specificities of 99.5%. Combining search terms optimized both sensitivity and specificity at 94.2%. Empirically derived search strategies combining indexing terms and textwords can achieve high sensitivity and high specificity for retrieving qualitative studies from CINAHL.
|
49
|
Response to Corrao et al.: Improving efficacy of PubMed clinical queries for retrieving scientifically strong studies on treatment. J Am Med Inform Assoc 2007; 14:247-8. [PMID: 17213490 PMCID: PMC2213472 DOI: 10.1197/jamia.m2297]
|
50
|
A method for defining a journal subset for a clinical discipline using the bibliographies of systematic reviews. Stud Health Technol Inform 2007; 129:721-4. [PMID: 17911811]
Abstract
BACKGROUND Searching for best evidence for clinical decisions in large biomedical databases is problematic because advances in health care practice that are ready for application are but a very dilute constituent in a much larger pool of biomedical literature. Sensitive search strategies have been developed to help alleviate this problem but search precision is still generally low. If "virtual journal subsets" that are likely to include all relevant articles can be defined for clinical discipline areas or disease content areas this will likely improve search precision. OBJECTIVE To determine whether studies cited in systematic literature reviews can define a journal subset for a given clinical discipline. DESIGN Survey of the primary studies included in systematic reviews that are relevant to the clinical discipline of nephrology. METHODS Four data sources were searched to identify systematic reviews relevant to clinical nephrology: the Cochrane Database of Systematic Reviews, McMaster PLUS (Premium LiteratUre Service), MEDLINE, and the Renal Health Library. Three research assistants recorded data pertinent to each of the included primary studies. RESULTS 195 systematic reviews relevant to nephrology were defined and the 2,779 unique original articles they cited were concentrated in 466 journals, with 90% of the articles in 217 titles. This journal subset can be stored online and used when searching the large biomedical databases such as MEDLINE. CONCLUSION The bibliographies of systematic reviews can be used to define a journal subset for a clinical discipline area.
|