1
Vrebalov Cindro P, Bukic J, Pranić S, Leskur D, Rušić D, Šešelja Perišin A, Božić J, Vuković J, Modun D. Did an introduction of CONSORT for abstracts guidelines improve reporting quality of randomised controlled trials' abstracts on Helicobacter pylori infection? Observational study. BMJ Open 2022; 12:e054978. PMID: 35354625; PMCID: PMC8969005; DOI: 10.1136/bmjopen-2021-054978.
Abstract
OBJECTIVE To determine abstracts' adherence to the Consolidated Standards of Reporting Trials for Abstracts (CONSORT-A) statement and to explore factors associated with reporting quality. DESIGN An observational study. SETTING Abstracts of randomised controlled trials published between 2010 and 2019, identified by searching the MEDLINE database. PARTICIPANTS A total of 451 abstracts of clinical trials on Helicobacter pylori infections were included. PRIMARY AND SECONDARY OUTCOME MEASURES Abstracts' reporting quality was determined by assessing their adherence to the 17-item CONSORT-A checklist, with the overall score calculated as the number of items adequately reported for each abstract. Additional factors that might influence the reporting quality of the abstracts were analysed, with univariate and multivariate linear regression used to determine how those factors influenced the overall reporting quality. RESULTS Included abstracts had an overall median quality score of 8/17 (IQR 7-9). Large proportions of abstracts adequately reported interventions, participants, objectives, numbers randomised and conclusions (97.1, 99.3, 89.1, 94.7 and 98.4% of abstracts, respectively). Trial design, randomisation, blinding and funding were severely under-reported, with only 8.0, 2.7, 11.0 and 2.0% of abstracts reporting each item. Overall quality scores for H. pylori abstracts were higher in association with CONSORT-A endorsement (B=5.698; 95% CI 1.781 to 9.615), pharmacological interventions (B=4.063; 95% CI 0.224 to 7.902), multicentre settings (B=5.057; 95% CI 2.370 to 7.743), higher numbers of participants (B=3.607; 95% CI 1.272 to 5.942), hospital settings (B=4.827; 95% CI 1.753 to 7.901) and longer abstracts (B=3.878; 95% CI 0.787 to 6.969 for abstracts with 251-300 words, and B=7.404; 95% CI 3.930 to 10.878 for abstracts with more than 300 words). CONCLUSIONS The overall reporting quality of abstracts was inadequate. Endorsement of the CONSORT-A guidelines by more journals might improve the standards of reporting.
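The scoring scheme this abstract describes can be sketched in a few lines; the code below is a hypothetical illustration (not the authors' code), assuming each of the 17 CONSORT-A items is judged adequately reported or not and the overall score is simply the count of adequately reported items.

```python
# Hypothetical sketch of the CONSORT-A scoring described in the abstract:
# each of the 17 checklist items is marked adequately reported (True) or
# not (False); an abstract's overall quality score is the number of
# adequately reported items.
from statistics import median

CONSORT_A_ITEMS = 17  # items on the CONSORT for Abstracts checklist

def quality_score(items_reported: list[bool]) -> int:
    if len(items_reported) != CONSORT_A_ITEMS:
        raise ValueError(f"expected {CONSORT_A_ITEMS} item judgements")
    return sum(items_reported)

# e.g. three assessed abstracts with 8, 7 and 9 adequately reported items
scores = [quality_score([True] * n + [False] * (CONSORT_A_ITEMS - n))
          for n in (8, 7, 9)]
print(median(scores))  # 8, i.e. the median score across the sample
```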
Affiliation(s)
- Pavle Vrebalov Cindro, Department of Gastroenterology and Hepatology, University Hospital of Split, Split, Croatia
- Josipa Bukic, Department of Pharmacy, University of Split School of Medicine, Split, Croatia
- Shelly Pranić, Department of Research in Biomedicine and Health, University of Split School of Medicine, Split, Croatia
- Dario Leskur, Department of Pharmacy, University of Split School of Medicine, Split, Croatia
- Doris Rušić, Department of Pharmacy, University of Split School of Medicine, Split, Croatia
- Ana Šešelja Perišin, Department of Pharmacy, University of Split School of Medicine, Split, Croatia
- Joško Božić, Department of Pathophysiology, University of Split School of Medicine, Split, Croatia
- Jonatan Vuković, Department of Gastroenterology and Hepatology, University Hospital of Split, Split, Croatia
- Darko Modun, Department of Pharmacy, University of Split School of Medicine, Split, Croatia
2
Sounderajah V, Normahani P, Aggarwal R, Jayakumar S, Markar SR, Ashrafian H, Darzi A. Reporting Standards and Quality Assessment Tools in Artificial Intelligence–Centered Healthcare Research. Artif Intell Med 2022. DOI: 10.1007/978-3-030-64573-1_34.
3
Sounderajah V, Normahani P, Aggarwal R, Jayakumar S, Markar SR, Ashrafian H, Darzi A. Reporting Standards and Quality Assessment Tools in Artificial Intelligence Centered Healthcare Research. Artif Intell Med 2021. DOI: 10.1007/978-3-030-58080-3_34-1.
4
Crowley RJ, Tan YJ, Ioannidis JPA. Empirical assessment of bias in machine learning diagnostic test accuracy studies. J Am Med Inform Assoc 2020; 27:1092-1101. PMID: 32548642; PMCID: PMC7647361; DOI: 10.1093/jamia/ocaa075.
Abstract
OBJECTIVE Machine learning (ML) diagnostic tools have significant potential to improve health care. However, methodological pitfalls may affect diagnostic test accuracy studies used to appraise such tools. We aimed to evaluate the prevalence and reporting of design characteristics within the literature. Further, we sought to empirically assess whether design features may be associated with different estimates of diagnostic accuracy. MATERIALS AND METHODS We systematically retrieved 2 × 2 tables (n = 281) describing the performance of ML diagnostic tools, derived from 114 publications in 38 meta-analyses, from PubMed. Data extracted included test performance, sample sizes, and design features. A mixed-effects metaregression was run to quantify the association between design features and diagnostic accuracy. RESULTS Participant ethnicity and blinding in test interpretation were unreported in 90% and 60% of studies, respectively. Reporting was occasionally lacking for rudimentary characteristics such as study design (28% unreported). Internal validation without appropriate safeguards was used in 44% of studies. Several design features were associated with larger estimates of accuracy, including an unreported study design (relative diagnostic odds ratio [RDOR], 2.11; 95% confidence interval [CI], 1.43-3.10), a case-control design (RDOR, 1.27; 95% CI, 0.97-1.66), and recruitment of participants for the index test (RDOR, 1.67; 95% CI, 1.08-2.59). DISCUSSION Significant underreporting of experimental details was present. Study design features may affect estimates of diagnostic performance in the ML diagnostic test accuracy literature. CONCLUSIONS The present study identifies pitfalls that threaten the validity, generalizability, and clinical value of ML diagnostic tools and provides recommendations for improvement.
Affiliation(s)
- Ryan J Crowley, Meta-Research Innovation Center at Stanford, Stanford University, Stanford, California, USA; Department of Bioengineering, Stanford School of Engineering, Stanford University, Stanford, California, USA
- Yuan Jin Tan, Meta-Research Innovation Center at Stanford, Stanford University, Stanford, California, USA; Department of Epidemiology and Population Health, Stanford University School of Medicine, Stanford, California, USA
- John P A Ioannidis, Meta-Research Innovation Center at Stanford, Stanford University, Stanford, California, USA; Department of Epidemiology and Population Health, Stanford University School of Medicine, Stanford, California, USA; Stanford Prevention Research Center, Department of Medicine, Stanford Medicine, Stanford University, Stanford, California, USA; Department of Biomedical Data Science, Stanford Medicine, Stanford University, Stanford, California, USA; Department of Statistics, School of Humanities and Science, Stanford University, Stanford, California, USA
5
Sivendran S, Newport K, Horst M, Albert A, Galsky MD. Reporting quality of abstracts in phase III clinical trials of systemic therapy in metastatic solid malignancies. Trials 2015; 16:341. PMID: 26253548; PMCID: PMC4545856; DOI: 10.1186/s13063-015-0885-9.
Abstract
Background Manuscript abstracts represent a critical source of information for oncology practitioners. Practitioners may use the information contained in abstracts as a basis for treatment decisions, particularly when full-text articles are not accessible. In 2007, the Consolidated Standards of Reporting Trials (CONSORT) extension statement for abstracts provided a minimum list of elements that should be included in abstracts. In this study we evaluate the degree of adherence to these recommendations and the accessibility of full-text articles in oncology publications. Methods A systematic review of abstracts of randomized, controlled, phase III trials in metastatic solid malignancies published between January 2009 and December 2011 in PubMed, Medline, and Embase was completed. Abstracts were assigned a completeness score of 0-18 based on the number of CONSORT-recommended elements reported. Accessibility through open access was recorded. Results 174 abstracts with data for 95,956 patients were reviewed. The median completeness score was 9 (range, 3-17). Open access to full-text articles was available for 80% of abstracts. The remaining 20% (35 of 174) had a median cost of US$38 (range, US$22-49.95). The least frequently reported elements were: trial design description (20%), participant allocation method (13%), blinding (24%), trial enrollment status (22%), registration and name of trial (26%), and funding source (18%). The most frequently reported elements were eligibility criteria (98%), study interventions (100%), and primary endpoint (87%). Conclusion There is poor adherence to the CONSORT recommendations for abstract reporting in publications of randomized cancer clinical trials, which could negatively affect clinical decision-making. Full-text articles are frequently available through open access.
Affiliation(s)
- Shanthi Sivendran, Ann B. Barshinger Cancer Institute, Lancaster General Health, Lancaster, PA, 17604, USA
- Michael Horst, Research Institute, Lancaster General Health, Lancaster, PA, USA
- Adam Albert, Department of Internal Medicine, Veterans Administration Medical Center, Lebanon, PA, USA
- Matthew D Galsky, Icahn School of Medicine, Tisch Cancer Institute, Mount Sinai, NY, USA
6
Korevaar DA, Cohen JF, Hooft L, Bossuyt PMM. Literature survey of high-impact journals revealed reporting weaknesses in abstracts of diagnostic accuracy studies. J Clin Epidemiol 2015; 68:708-15. PMID: 25703213; DOI: 10.1016/j.jclinepi.2015.01.014.
Abstract
OBJECTIVES Informative journal abstracts are crucial for the identification and initial appraisal of studies. We aimed to evaluate the informativeness of abstracts of diagnostic accuracy studies. STUDY DESIGN AND SETTING PubMed was searched for reports of studies that had evaluated the diagnostic accuracy of a test against a clinical reference standard, published in 12 high-impact journals in 2012. Two reviewers independently evaluated the information contained in included abstracts using 21 items deemed important based on published guidance for adequate reporting and study quality assessment. RESULTS We included 103 abstracts. Crucial information on study population, setting, patient sampling, and blinding, as well as confidence intervals around accuracy estimates, was reported in <50% of the abstracts. The mean number of reported items per abstract was 10.1 of 21 (standard deviation 2.2). The mean number of reported items was significantly lower for multiple-gate (case-control type) studies, for reports in specialty journals, and for studies with smaller sample sizes and lower abstract word counts. No significant differences were found between studies evaluating different types of tests. CONCLUSION Many abstracts of diagnostic accuracy study reports in high-impact journals are insufficiently informative. Developing guidelines for such abstracts could improve the transparency and completeness of reporting.
Affiliation(s)
- Daniël A Korevaar, Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Academic Medical Centre, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, The Netherlands
- Jérémie F Cohen, Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Academic Medical Centre, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, The Netherlands; Department of Pediatrics, Necker-Enfants Malades Hospital, Assistance Publique-Hôpitaux de Paris, Paris Descartes University, 149, rue de Sevres, 75015 Paris, France; Inserm, Obstetrical, Perinatal and Pediatric Epidemiology Research Team, Center for Epidemiology and Biostatistics (U1153), Paris Descartes University, 53, avenue de l'Observatoire, 75014 Paris, France
- Lotty Hooft, Dutch Cochrane Centre, Julius Center for Health Sciences and Primary Care, University Medical Centre Utrecht, University Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
- Patrick M M Bossuyt, Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Academic Medical Centre, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, The Netherlands
7
Altwairgi AK, Booth CM, Hopman WM, Baetz TD. Discordance between conclusions stated in the abstract and conclusions in the article: analysis of published randomized controlled trials of systemic therapy in lung cancer. J Clin Oncol 2012; 30:3552-7. PMID: 22649130; DOI: 10.1200/jco.2012.41.8319.
Abstract
PURPOSE Clinicians may read only the abstract of an article to keep abreast of newly published randomized controlled trials (RCTs). However, discordances have been noticed between summary conclusions in the abstracts and the main body of some articles. This article evaluated such discordances in detail. METHODS RCTs of systemic therapy for lung cancer published between 2004 and 2009 were considered. Conclusions in the body of the articles and those in the abstracts were graded by using a 7-point Likert scale: 1 for strong endorsement of the control arm, 4 for a neutral statement, and 7 for strong endorsement of the experimental arm. Conclusions were classified as discordant if the difference in scores was ≥2. χ² tests and logistic regression were used to identify factors associated with discordance. RESULTS Among 114 eligible RCTs identified (90 for non-small-cell and 24 for small-cell lung cancer), 11 (10%) articles presented discordant conclusions in the abstract and in the body of the articles. Discordance was most common when the experimental arm was strongly supported in the abstract but not in the body of the article (nine of 11; 82%); the converse was much less common (two of 11; 18%; P < .001). Intraclass correlations for the two reviewers were ≥0.9. The discordances were found to be independent of trial-related factors. CONCLUSION Conclusive statements in the abstract can differ from those in the full text. Clinicians should use caution when they consider making changes to their practice on the basis of reading only the abstract of a published RCT.
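The discordance rule this abstract describes is a simple computable criterion; the following is a hypothetical sketch of that rule (not the authors' code), assuming two Likert scores per trial as defined above.

```python
# Hypothetical sketch of the discordance rule from the abstract: each
# conclusion is scored on a 7-point Likert scale (1 = strong endorsement
# of the control arm, 4 = neutral, 7 = strong endorsement of the
# experimental arm); a trial is "discordant" if the abstract score and
# the full-text score differ by 2 or more points.

def is_discordant(abstract_score: int, fulltext_score: int) -> bool:
    for s in (abstract_score, fulltext_score):
        if not 1 <= s <= 7:
            raise ValueError("Likert scores must be between 1 and 7")
    return abs(abstract_score - fulltext_score) >= 2

# e.g. an abstract that strongly supports the experimental arm (6) while
# the full text is neutral (4) counts as discordant
print(is_discordant(6, 4))  # True
print(is_discordant(5, 4))  # False
```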
Affiliation(s)
- Abdullah K Altwairgi, Cancer Center of Southeastern Ontario, Queen's University, 25 King St West, Kingston, Ontario, Canada
9
Hopewell S, Clarke M, Moher D, Wager E, Middleton P, Altman DG, Schulz KF. CONSORT for reporting randomized controlled trials in journal and conference abstracts: explanation and elaboration. PLoS Med 2008; 5:e20. PMID: 18215107; PMCID: PMC2211558; DOI: 10.1371/journal.pmed.0050020.
Abstract
BACKGROUND Clear, transparent, and sufficiently detailed abstracts of conferences and journal articles related to randomized controlled trials (RCTs) are important, because readers often base their assessment of a trial solely on information in the abstract. Here, we extend the CONSORT (Consolidated Standards of Reporting Trials) Statement to develop a minimum list of essential items, which authors should consider when reporting the results of an RCT in any journal or conference abstract. METHODS AND FINDINGS We generated a list of items from existing quality assessment tools and empirical evidence. A three-round, modified-Delphi process was used to select items. In all, 109 participants were invited to participate in an electronic survey; the response rate was 61%. Survey results were presented at a meeting of the CONSORT Group in Montebello, Canada, in January 2007, involving 26 participants, including clinical trialists, statisticians, epidemiologists, and biomedical editors. Checklist items were discussed for eligibility for the final checklist. The checklist was then revised to ensure that it reflected discussions held during and subsequent to the meeting. CONSORT for Abstracts recommends that abstracts relating to RCTs have a structured format. Items should include details of trial objectives; trial design (e.g., method of allocation, blinding/masking); trial participants (i.e., description, numbers randomized, and number analyzed); interventions intended for each randomized group and their impact on primary efficacy outcomes and harms; trial conclusions; trial registration name and number; and source of funding. We recommend the checklist be used in conjunction with this explanatory document, which includes examples of good reporting, rationale, and evidence, when available, for the inclusion of each item. CONCLUSIONS CONSORT for Abstracts aims to improve the reporting of abstracts of RCTs published in journal articles and conference proceedings. It will help authors of abstracts of these trials provide the detail and clarity needed by readers wishing to assess a trial's validity and the applicability of its results.
10
Hopewell S, Eisinga A, Clarke M. Better reporting of randomized trials in biomedical journal and conference abstracts. J Inf Sci 2007. DOI: 10.1177/0165551507080415.
Abstract
Well-reported research published in conference and journal abstracts is important, as individuals reading these reports often base their initial assessment of a study on the information reported in the abstract. However, there is growing concern about the reliability and quality of information published in these reports. This article provides an overview of the research evidence underpinning the need for better reporting of abstracts in conference proceedings and journal articles, with a particular focus on the area of health care. Where available, we highlight evidence that refers specifically to abstracts reporting randomized trials. We identify current initiatives aimed at improving the reporting of these reports and recommend that an extension of the CONSORT Statement (Consolidated Standards of Reporting Trials), CONSORT for Abstracts, be developed. This checklist would include a list of essential items to be reported in any conference or journal abstract reporting the results of a randomized trial.
11
Rutjes AWS, Reitsma JB, Di Nisio M, Smidt N, van Rijn JC, Bossuyt PMM. Evidence of bias and variation in diagnostic accuracy studies. CMAJ 2006; 174:469-76. PMID: 16477057; PMCID: PMC1373751; DOI: 10.1503/cmaj.050090.
Abstract
BACKGROUND Studies with methodologic shortcomings can overestimate the accuracy of a medical test. We sought to determine and compare the direction and magnitude of the effects of a number of potential sources of bias and variation on estimates of diagnostic accuracy. METHODS We identified meta-analyses of the diagnostic accuracy of tests through an electronic search of the databases MEDLINE, EMBASE, DARE and MEDION (1999-2002). We included meta-analyses with at least 10 primary studies without preselection based on design features. Pairs of reviewers independently extracted study characteristics and original data from the primary studies. We used a multivariable meta-epidemiologic regression model to investigate the direction and strength of the association between 15 study features and estimates of diagnostic accuracy. RESULTS We selected 31 meta-analyses with 487 primary studies of test evaluations. Only 1 study had no design deficiencies. The quality of reporting was poor in most of the studies. We found significantly higher estimates of diagnostic accuracy in studies with nonconsecutive inclusion of patients (relative diagnostic odds ratio [RDOR] 1.5, 95% confidence interval [CI] 1.0-2.1) and retrospective data collection (RDOR 1.6, 95% CI 1.1-2.2). The estimates were highest in studies that had severe cases and healthy controls (RDOR 4.9, 95% CI 0.6-37.3). Studies that selected patients based on whether they had been referred for the index test, rather than on clinical symptoms, produced significantly lower estimates of diagnostic accuracy (RDOR 0.5, 95% CI 0.3-0.9). The variance between meta-analyses of the effect of design features was large to moderate for type of design (cohort v. case-control), the use of composite reference standards and the use of differential verification; the variance was close to zero for the other design features. INTERPRETATION Shortcomings in study design can affect estimates of diagnostic accuracy, but the magnitude of the effect may vary from one situation to another. Design features and clinical characteristics of patient groups should be carefully considered by researchers when designing new studies and by readers when appraising the results of such studies. Unfortunately, incomplete reporting hampers the evaluation of potential sources of bias in diagnostic accuracy studies.
Affiliation(s)
- Anne W S Rutjes, Department of Clinical Epidemiology and Biostatistics, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
12
Gates RL, Caniano DA, Hayes JR, Arca MJ. Does VATS provide optimal treatment of empyema in children? A systematic review. J Pediatr Surg 2004; 39:381-6. PMID: 15017556; DOI: 10.1016/j.jpedsurg.2003.11.045.
Abstract
PURPOSE The surgical literature is replete with studies describing methods of treatment for pediatric empyema. The purpose of this report was to perform an evidence-based review of the literature to determine the most effective and appropriate treatment for empyema in infants and children. METHODS The MEDLINE database was searched for English- and Spanish-language articles published from 1987 through 2002 on the treatment of thoracic empyema in children. Additional unpublished data were obtained by contacting individual study authors. There were no multiinstitutional prospective studies; all were retrospective, institutional series. A true meta-analysis could not be performed because of inherent institutional bias and variability in outcome measures among studies. A Kruskal-Wallis nonparametric test was used to compare methods detailed in the individual studies. RESULTS Forty-four retrospective studies with a total of 1,369 patients were available for analysis. Four treatment strategies were compared: chest tube drainage alone (16 studies, 611 patients), chest tube drainage with fibrinolytic instillation (10 studies, 83 patients), thoracotomy (13 studies, 226 patients), and video-assisted thoracoscopic decortication (VATS; 22 studies, 449 patients). Outcome measures common to the majority of studies included length of stay, fever duration, antibiotic therapy duration, and duration of chest tube drainage. Patients undergoing early VATS or thoracotomy had a shorter length of stay (P = .003). There was a trend toward shorter duration of postoperative fever compared with chest tube alone or with fibrinolytic therapy, but this did not reach statistical significance (P = .055). There was no statistical difference in chest tube duration between methods. There was no trend correlating antibiotic use with treatment method, length of hospital stay, duration of fever, or length of chest tube requirement. CONCLUSIONS Early VATS or thoracotomy leads to shorter hospitalization. The duration of chest tube placement and antibiotic use is variable and does not correlate with treatment method. A carefully designed, multiinstitutional, randomized study would lead to the development of evidence-based standards that may optimize the treatment of thoracic empyema in children.
Affiliation(s)
- Robert L Gates, Division of Pediatric Surgery, Department of Surgery, The Ohio State University, College of Medicine and Public Health and Children's Hospital, Columbus, OH, USA
13
Dijkers MPJM. Searching the literature for information on traumatic spinal cord injury: the usefulness of abstracts. Spinal Cord 2003; 41:76-84. PMID: 12595869; DOI: 10.1038/sj.sc.3101414.
Abstract
STUDY DESIGN Systematic review of abstracts of published papers presumed to contain information on chronic pain in persons with spinal cord injury (SCI). OBJECTIVES To determine to what degree papers on SCI are abstracted in such a way that they can be retrieved and evaluated for their applicability to a reader's questions. SETTING US academic department of rehabilitation medicine. METHODS 868 abstracts published in Medline were independently examined by two out of 13 screeners, who answered four questions on the subjects and nature of the paper with 'Yes', 'No' or 'insufficient information'. The frequency of 'insufficient information' ratings and screener agreement were evaluated in relation to screener and abstract/paper characteristics. RESULTS Screeners could not determine whether the paper dealt with persons with traumatic SCI for 37% of abstracts; whether chronic pain was a topic could not be determined in 18%. Physicians were less willing than other disciplines to assign 'insufficient information'. Screener agreement was better than chance, but not at the level suggested for quality measurement. Screener discipline and task experience did not make a difference, nor did abstract length, structure, or decade of publication of the paper. CONCLUSION Authors need to improve the quality of abstracts to make retrieval and screening of relevant papers more effective and efficient. SPONSORSHIP National Institute on Disability and Rehabilitation Research.
Affiliation(s)
- M P J M Dijkers, Department of Rehabilitation Medicine, Mount Sinai School of Medicine, New York, NY 10029-6574, USA
14
Cullen RJ. In search of evidence: family practitioners' use of the Internet for clinical information. J Med Libr Assoc 2002; 90:370-9. PMID: 12398243; PMCID: PMC128953.
Abstract
PURPOSE The aim of the study was to determine the extent of use of the Internet for clinical information among family practitioners in New Zealand, their skills in accessing and evaluating this information, and the ways they dealt with patient use of information from the Internet. METHOD A random sample of members of the Royal New Zealand College of General Practitioners was surveyed to determine their use of the Internet as an information source and their access to MEDLINE. They were asked how they evaluated and applied the retrieved information and what they knew about their patients' use of the Internet. Structured interviews with twelve participants focused in more depth on issues such as the physicians' skills in using MEDLINE and in evaluating retrieved material, their searches for evidence-based information, their understanding of critical appraisal, their patients' use of the Internet, and the ways they handle this use. RESULTS More than 80% (294/363) of members in the sample completed and returned the questionnaire. Of these, 48.6% reported that they used the Internet to look for clinical information. Gender and age were more significant in determining use than practice type or location. Information was primarily sought on rare diseases, updates on common diseases, diagnosis, and information for patients. MEDLINE was the most frequently accessed source. Search skills were basic, and abstracts were commonly used if the full text of an item was not readily available. Most reported that up to 10% of patients bring information from the Internet to consultations. Both Internet users and non-Internet users encouraged patients to search the Web; Internet users were more likely to recommend specific sites. CONCLUSIONS Practitioners urgently need training in searching for and evaluating information on the Internet and in identifying and applying evidence-based information. Portals providing access to high-quality, evidence-based clinical and patient information are needed, along with access to the full text of relevant items.
Affiliation(s)
- Rowena J Cullen, School of Information Management, Victoria University of Wellington, New Zealand
15
Tsourounis C. How to Evaluate a Randomized Controlled Trial: What Every Pharmacist Should Know. Hosp Pharm 2000. DOI: 10.1177/001857870003501004.
Abstract
The primary literature is replete with drug-related information, and the pharmacist is often faced with interpreting and applying this information to clinical practice. Although there are many forms of primary literature, the randomized controlled trial (RCT) is the most commonly used for assessing drug efficacy. This article is designed to review the components of an RCT, including the title, abstract, introduction, methods, results, conclusions, and references. Bias may be introduced into any of these components, either intentionally or unintentionally. Knowing how to evaluate these components will provide pharmacists with a framework for interpreting RCT quality. In addition, examples of bias will be provided for each RCT component. Goals (1) to familiarize pharmacists with the components of an RCT; (2) to review sources of RCT bias; and (3) to help pharmacists apply RCT findings to clinical practice. Objectives This article should enable pharmacists to (1) list the components of an RCT; (2) describe five ways bias may be introduced in an RCT; and (3) discuss the limitations and constraints of using an RCT to make clinical decisions.
Affiliation(s)
- Candy Tsourounis, Department of Clinical Pharmacy, School of Pharmacy, University of California, San Francisco, CA 94143-0622, USA