1. Slim K, Nini E, Forestier D, Kwiatkowski F, Panis Y, Chipponi J. Methodological index for non-randomized studies (minors): development and validation of a new instrument. ANZ J Surg 2003; 73:712-6. PMID: 12956787. DOI: 10.1046/j.1445-2197.2003.02748.x.
Abstract
BACKGROUND Because of specific methodological difficulties in conducting randomized trials, surgical research remains dependent predominantly on observational or non-randomized studies. Few validated instruments are available to determine the methodological quality of such studies either from the reader's perspective or for the purpose of meta-analysis. The aim of the present study was to develop and validate such an instrument. METHODS After an initial conceptualization phase of a methodological index for non-randomized studies (MINORS), a list of 12 potential items was sent to 100 experts from different surgical specialties for evaluation and was also assessed by 10 clinical methodologists. Subsequent testing involved the assessment of inter-reviewer agreement, test-retest reliability at 2 months, internal consistency reliability and external validity. RESULTS The final version of MINORS contained 12 items, the first eight being specifically for non-comparative studies. Reliability was established on the basis of good inter-reviewer agreement, high test-retest reliability by the kappa-coefficient and good internal consistency by a high Cronbach's alpha-coefficient. External validity was established in terms of the ability of MINORS to identify excellent trials. CONCLUSIONS MINORS is a valid instrument designed to assess the methodological quality of non-randomized surgical studies, whether comparative or non-comparative. The next step will be to determine its external validity when used in a large number of studies and to compare it with other existing instruments.
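MINORS reliability rests on two standard statistics named in this abstract: the kappa coefficient (test-retest agreement) and Cronbach's alpha (internal consistency). A minimal pure-Python sketch of how the two are computed (the data in the usage example are invented, not from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of exact agreement.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters were independent, from marginal counts.
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n**2
    return (observed - expected) / (1 - expected)

def cronbach_alpha(items):
    """Cronbach's alpha; `items` is one list of scores per instrument item,
    all covering the same respondents in the same order."""
    k = len(items)
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))
```

For example, two raters who agree on every item get `cohens_kappa(...) == 1.0`, and perfectly correlated items give an alpha of 1.0.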
2.
Abstract
Content experts frequently are used in the judgment-quantification stage of content validation of instruments. However, errors in instrumentation may arise when important steps in selecting and using these experts are not carefully planned. The systematic process of choosing, orienting, and using content experts in the judgment-qualification stage of instrument development is addressed, with particular attention to the often neglected, important step of familiarizing these experts with the conceptual underpinnings and measurement model of the instrument. An example using experts to validate content for a measure of caregiver burden is used to illustrate this stage of instrument review.
3. Ahmed SM, Palermo AGS. Community engagement in research: frameworks for education and peer review. Am J Public Health 2010; 100:1380-7. PMID: 20558798. PMCID: PMC2901283. DOI: 10.2105/ajph.2009.178137.
Abstract
Community engagement in research may enhance a community's ability to address its own health needs and health disparities issues while ensuring that researchers understand community priorities. However, there are researchers with limited understanding of and experience with effective methods of engaging communities. Furthermore, limited guidance is available for peer-review panels on evaluating proposals for research that engages communities. The National Institutes of Health Director's Council of Public Representatives developed a community engagement framework that includes values, strategies to operationalize each value, and potential outcomes of their use, as well as a peer-review framework for evaluating research that engages communities. Use of these frameworks for educating researchers to create and sustain authentic community-academic partnerships will increase accountability and equality between the partners.
5. Katrak P, Bialocerkowski AE, Massy-Westropp N, Kumar VSS, Grimmer KA. A systematic review of the content of critical appraisal tools. BMC Med Res Methodol 2004; 4:22. PMID: 15369598. PMCID: PMC521688. DOI: 10.1186/1471-2288-4-22.
Abstract
Background Consumers of research (researchers, administrators, educators and clinicians) frequently use standard critical appraisal tools to evaluate the quality of published research reports. However, there is no consensus regarding the most appropriate critical appraisal tool for allied health research. We summarized the content, intent, construction and psychometric properties of published, currently available critical appraisal tools to identify common elements and their relevance to allied health research. Methods A systematic review was undertaken of 121 published critical appraisal tools sourced from 108 papers located on electronic databases and the Internet. The tools were classified according to the study design for which they were intended. Their items were then classified into one of 12 criteria based on their intent. Commonly occurring items were identified. The empirical basis for construction of the tool, the method by which overall quality of the study was established, the psychometric properties of the critical appraisal tools and whether guidelines were provided for their use were also recorded. Results Eighty-seven percent of critical appraisal tools were specific to a research design, with most tools having been developed for experimental studies. There was considerable variability in items contained in the critical appraisal tools. Twelve percent of available tools were developed using specified empirical research. Forty-nine percent of the critical appraisal tools summarized the quality appraisal into a numeric summary score. Few critical appraisal tools had documented evidence of validity of their items, or reliability of use. Guidelines regarding administration of the tools were provided in 43% of cases. Conclusions There was considerable variability in intent, components, construction and psychometric properties of published critical appraisal tools for research reports. 
There is no "gold standard" critical appraisal tool for any study design, nor is there any widely accepted generic tool that can be applied equally well across study types. No tool was specific to allied health research requirements. Thus interpretation of critical appraisal of research reports currently needs to be considered in light of the properties and intent of the critical appraisal tool chosen for the task.
8.
Abstract
CONTEXT Editorial peer review is widely used to select submissions to journals for publication and is presumed to improve their usefulness. Sufficient research on peer review has been published to consider a synthesis of its effects. METHODS To examine the evidence of the effects of editorial peer-review processes in biomedical journals, we conducted electronic and full-text searches of private and public databases to June 2000 and corresponded with the World Association of Medical Editors, European Association of Science Editors, Council of Science Editors, and researchers in the field to locate comparative studies assessing the effects of any stage of the peer-review process that made some attempt to control for confounding. Nineteen of 135 identified studies fulfilled our criteria. Because of the diversity of study questions, methods, and outcomes, we did not pool results. RESULTS Nine studies considered the effects of concealing reviewer/author identity. Four studies suggested that concealing reviewer or author identity affected review quality (mostly positively); however, methodological limitations make their findings ambiguous, and other studies' results were either negative or inconclusive. One study suggested that a statistical checklist can improve report quality, but another failed to find an effect of publishing another checklist. One study found no evidence that training referees improves performance and another showed increased interrater reliability; both used open designs, making interpretation difficult. Two studies of how journals communicate with reviewers did not demonstrate any effect on review quality. One study failed to show reviewer bias, but the findings may not be generalizable. One nonrandomized study compared the quality of articles published in peer-reviewed vs other journals. 
Two studies showed that editorial processes make articles more readable and improve the quality of reporting, but the findings may have limited generalizability to other journals. CONCLUSIONS Editorial peer review, although widely used, is largely untested and its effects are uncertain.
10. Marshall IJ, Kuiper J, Wallace BC. RobotReviewer: evaluation of a system for automatically assessing bias in clinical trials. J Am Med Inform Assoc 2016; 23:193-201. PMID: 26104742. PMCID: PMC4713900. DOI: 10.1093/jamia/ocv044.
Abstract
OBJECTIVE To develop and evaluate RobotReviewer, a machine learning (ML) system that automatically assesses bias in clinical trials. From a (PDF-formatted) trial report, the system should determine risks of bias for the domains defined by the Cochrane Risk of Bias (RoB) tool, and extract supporting text for these judgments. METHODS We algorithmically annotated 12,808 trial PDFs using data from the Cochrane Database of Systematic Reviews (CDSR). Trials were labeled as being at low or high/unclear risk of bias for each domain, and sentences were labeled as being informative or not. This dataset was used to train a multi-task ML model. We estimated the accuracy of ML judgments versus humans by comparing trials with two or more independent RoB assessments in the CDSR. Twenty blinded experienced reviewers rated the relevance of supporting text, comparing ML output with equivalent (human-extracted) text from the CDSR. RESULTS By retrieving the top 3 candidate sentences per document (top3 recall), the best ML text was rated more relevant than text from the CDSR, but not significantly (60.4% ML text rated 'highly relevant' v 56.5% of text from reviews; difference +3.9%, [-3.2% to +10.9%]). Model RoB judgments were less accurate than those from published reviews, though the difference was <10% (overall accuracy 71.0% with ML v 78.3% with CDSR). CONCLUSION Risk of bias assessment may be automated with reasonable accuracy. Automatically identified text supporting bias assessment is of equal quality to the manually identified text in the CDSR. This technology could substantially reduce reviewer workload and expedite evidence syntheses.
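The "top 3 candidate sentences per document" evaluation reduces to ranking a report's sentences by a model score and keeping the k best. A minimal sketch of that retrieval step (the `cue_score` keyword scorer below is a hypothetical stand-in; RobotReviewer itself scores sentences with a trained multi-task ML model):

```python
def top_k_sentences(sentences, score_fn, k=3):
    """Return the k sentences the scorer ranks highest (supporting text)."""
    ranked = sorted(sentences, key=score_fn, reverse=True)
    return ranked[:k]

# Hypothetical stand-in scorer: counts risk-of-bias cue words.
BIAS_CUES = ("randomised", "randomized", "allocation", "blinded", "concealed")

def cue_score(sentence):
    text = sentence.lower()
    return sum(cue in text for cue in BIAS_CUES)
```

With this stand-in, a sentence such as "Patients were randomized and allocation was concealed." outranks narrative sentences containing no cue words.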
11. Hoffmann TC, Oxman AD, Ioannidis JP, Moher D, Lasserson TJ, Tovey DI, Stein K, Sutcliffe K, Ravaud P, Altman DG, Perera R, Glasziou P. Enhancing the usability of systematic reviews by improving the consideration and description of interventions. BMJ 2017; 358:j2998. PMID: 28729459. DOI: 10.1136/bmj.j2998.
13.
Abstract
The peer-review process is a turf battle with the ultimate prize of the knowledge, science or doctrine being published. On the one side, we have the writers and originators of ideas, on the other, we have the editors and critics. But it was not always so.
14.
Abstract
Anthony G Tuckett outlines the strategies and operational techniques he used to attain rigour in a qualitative research study through relying on Guba and Lincoln's trustworthiness criterion. Research strategies such as use of personal journals, audio recording and transcript auditing, and operational techniques including triangulation strategies and peer review, are examined.
15. Montgomery AA, Graham A, Evans PH, Fahey T. Inter-rater agreement in the scoring of abstracts submitted to a primary care research conference. BMC Health Serv Res 2002; 2:8. PMID: 11914164. PMCID: PMC101393. DOI: 10.1186/1472-6963-2-8.
Abstract
BACKGROUND Checklists for peer review aim to guide referees when assessing the quality of papers, but little evidence exists on the extent to which referees agree when evaluating the same paper. The aim of this study was to investigate agreement on dimensions of a checklist between two referees when evaluating abstracts submitted for a primary care conference. METHODS Anonymised abstracts were scored using a structured assessment comprising seven categories. Between one (poor) and four (excellent) marks were awarded for each category, giving a maximum possible score of 28 marks. Every abstract was assessed independently by two referees and agreement measured using intraclass correlation coefficients. Mean total scores of abstracts accepted and rejected for the meeting were compared using an unpaired t test. RESULTS Of 52 abstracts, agreement between reviewers was greater for three components relating to study design (adjusted intraclass correlation coefficients 0.40 to 0.45) compared to four components relating to more subjective elements such as the importance of the study and likelihood of provoking discussion (0.01 to 0.25). Mean score for accepted abstracts was significantly greater than those that were rejected (17.4 versus 14.6, 95% CI for difference 1.3 to 4.1, p = 0.0003). CONCLUSIONS The findings suggest that inclusion of subjective components in a review checklist may result in greater disagreement between reviewers. However in terms of overall quality scores, abstracts accepted for the meeting were rated significantly higher than those that were rejected.
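The intraclass correlation coefficients reported here can be computed from a one-way ANOVA decomposition of the two referees' scores. A minimal sketch (one-way random-effects ICC(1,1); the paper's adjusted coefficients may use a different ICC model, so treat this as illustrative):

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1). `ratings` is a list of per-subject
    score lists, each containing the same number of raters."""
    n = len(ratings)          # subjects (abstracts)
    k = len(ratings[0])       # raters per subject
    grand = sum(sum(r) for r in ratings) / (n * k)
    subj_means = [sum(r) / k for r in ratings]
    # Between-subject and within-subject mean squares.
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2
              for r, m in zip(ratings, subj_means) for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect agreement between the two referees yields an ICC of 1.0; disagreement pushes the coefficient down toward (and possibly below) zero.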
16. Ford R, Schwartz L, Dancey J, Dodd LE, Eisenhauer EA, Gwyther S, Rubinstein L, Sargent D, Shankar L, Therasse P, Verweij J. Lessons learned from independent central review. Eur J Cancer 2009; 45:268-74. PMID: 19101138. DOI: 10.1016/j.ejca.2008.10.031.
Abstract
Independent central review (ICR) is advocated by regulatory authorities as a means of independent verification of clinical trial end-points dependent on medical imaging, when the data from the trials may be submitted for licensing applications [Food and Drug Administration. United States food and drug administration guidance for industry: clinical trial endpoints for the approval of cancer drugs and biologics. Rockville, MD: US Department of Health and Human Services; 2007; Committee for Medicinal Products for Human Use. European Medicines Agency Committee for Medicinal Products for Human Use (CHMP) guideline on the evaluation of anticancer medicinal products in man. London, UK: European Medicines Agency; 2006; United States Food and Drug Administration Center for Drug Evaluation and Research. Approval package for application number NDA 21-492 (oxaliplatin). Rockville, MD: US Department of Health and Human Services; 2002; United States Food and Drug Administration Center for Drug Evaluation and Research. Approval package for application number NDA 21-923 (sorafenib tosylate). Rockville, MD: US Department of Health and Human Services; 2005; United States Food and Drug Administration Center for Drug Evaluation and Research. Approval package for application number NDA 22-065 (ixabepilone). Rockville, MD: US Department of Health and Human Services; 2007; United States Food and Drug Administration Center for Drug Evaluation and Research. Approval package for application number NDA 22-059 (lapatinib ditosylate). Rockville, MD: US Department of Health and Human Services; 2007; United States Food and Drug Administration Center for Biologics Evaluation and Research. Approval package for BLA numbers 97-0260 and BLA Number 97-0244 (rituximab). Rockville, MD: US Department of Health and Human Services; 1997; United States Food and Drug Administration. FDA clinical review of BLA 98-0369 (Herceptin((R)) trastuzumab (rhuMAb HER2)). 
FDA Center for Biologics Evaluation and Research; 1998; United States Food and Drug Administration. FDA Briefing Document Oncology Drugs Advisory Committee meeting NDA 21801 (satraplatin). Rockville, MD: US Department of Health and Human Services; 2007; Thomas ES, Gomez HL, Li RK, et al. Ixabepilone plus capecitabine for metastatic breast cancer progressing after anthracycline and taxane treatment. JCO 2007(November):5210-7]. In addition, clinical trial sponsors have used ICR in Phase I-II studies to assist in critical pathway decisions including in-licensing of compounds [Cannistra SA, Matulonis UA, Penson RT, et al. Phase II study of bevacizumab in patients with platinum-resistant ovarian cancer or peritoneal serous cancer. JCO 2007(November):5180-6; Perez EA, Lerzo G, Pivot X, et al. Efficacy and safety of ixabepilone (BMS-247550) in a phase II study of patients with advanced breast cancer resistant to an anthracycline, a taxane, and capecitabine. JCO 2007(August):3407-14; Vermorken JB, Trigo J, Hitt R, et al. Open-label, uncontrolled, multicenter phase II study to evaluate the efficacy and toxicity of cetuximab as a single agent in patients with recurrent and/or metastatic squamous cell carcinoma of the head and neck who failed to respond to platinum-based therapy. JCO 2007(June):2171-7; Ghassan KA, Schwartz L, Ricci S, et al. Phase II study of sorafenib in patients with advanced hepatocellular carcinoma. JCO 2006(September):4293-300; Boué F, Gabarre J, GaBarre J, et al. Phase II trial of CHOP plus rituximab in patients with HIV-associated non-Hodgkin's lymphoma. JCO 2006(September):4123-8; Chen HX, Mooney M, Boron M, et al. Phase II multicenter trial of bevacizumab plus fluorouracil and leucovorin in patients with advanced refractory colorectal cancer: an NCI Treatment Referral Center Trial TRC-0301. JCO 2006(July):3354-60; Ratain MJ, Eisen T, Stadler WM, et al. 
Phase II placebo-controlled randomized discontinuation trial of sorafenib in patients with metastatic renal cell carcinoma. JCO 2006(June):2502-12; Jaffer AA, Lee FC, Singh DA, et al. Multicenter phase II trial of S-1 plus cisplatin in patients with untreated advanced gastric or gastroesophageal junction adenocarcinoma. JCO 2006(February):663-7; Bouché O, Raoul JL, Bonnetain F, et al. Randomized multicenter phase II trial of a biweekly regimen of fluorouracil and leucovorin (LV5FU2), LV5FU2 plus cisplatin, or LV5FU2 plus irinotecan in patients with previously untreated metastatic gastric cancer: a Fédération Francophone de Cancérologie Digestive Group Study-FFCD 9803. JCO 2004(November):4319-28]. This article will focus on the definition and purpose of ICR and the issues and lessons learned in the ICR setting primarily in Phase II and III oncology studies. This will include a discussion on discordance between local and central interpretations, consequences of ICR, reader discordance during the ICR, operational considerations and the need for specific imaging requirements as part of the study protocol.
18.
Abstract
Anonymity for peer reviewers remains the overwhelming norm within biomedical journals. While acknowledging that open review is not without challenges, this article presents 4 key arguments in its favor: (1) ethical superiority, (2) lack of important adverse effects, (3) feasibility in practice, and (4) potential to balance greater accountability for reviewers with credit for the work they do. Barriers to more widespread use of open review include conservatism within the research community and the fact that openness makes editors publicly responsible for their choice of reviewers and their interpretation of reviewers' comments. Forces for change include the growing use of preprint servers combined with open commentary. I look forward to a time when open commentary and review replace the current, flawed system of closed prepublication peer review and its false reassurances about the reliability of what is published.
19. van Rooyen S, Black N, Godlee F. Development of the review quality instrument (RQI) for assessing peer reviews of manuscripts. J Clin Epidemiol 1999; 52:625-9. PMID: 10391655. DOI: 10.1016/s0895-4356(99)00047-5.
Abstract
Research on the value of peer review is limited by the lack of a validated instrument to measure the quality of reviews. The aim of this study was to develop a simple, reliable, and valid scale that could be used in studies of peer review. A Review Quality Instrument (RQI) that assesses the extent to which a reviewer has commented on five aspects of a manuscript (importance of the research question, originality of the paper, strengths and weaknesses of the method, presentation, interpretation of results) and on two aspects of the review (constructiveness and substantiation of comments) was devised and tested. Its internal consistency was high (Cronbach's alpha 0.84). The mean total score (based on the seven items each scored on a 5-point Likert scale from 1 to 5) had good test-retest (Kw = 1.00) and inter-rater (Kw = 0.83) reliability. There was no evidence of floor or ceiling effects, construct validity was evident, and the respondent burden was acceptable (2-10 minutes). Although improvements to the RQI should be pursued, the instrument can be recommended for use in the study of peer review.
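The Kw values reported here are weighted kappa coefficients, which credit near-misses on ordinal scales such as the RQI's 5-point items. A minimal sketch using quadratic weights (the abstract does not state the weighting scheme, so quadratic weighting is an assumption on my part):

```python
def weighted_kappa(a, b, categories=5):
    """Quadratically weighted kappa for paired ordinal ratings coded
    1..categories. 1.0 means perfect agreement."""
    n = len(a)
    # Quadratic disagreement weight, 0 for exact agreement.
    w = lambda i, j: (i - j) ** 2 / (categories - 1) ** 2
    observed = sum(w(x, y) for x, y in zip(a, b)) / n
    # Expected disagreement under independence, from empirical marginals.
    expected = sum(w(x, y) for x in a for y in b) / n**2
    return 1 - observed / expected
```

Identical rating vectors give exactly 1.0; off-by-one disagreements are penalized far less than off-by-four ones, which is the point of weighting.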
20. Sarabipour S, Debat HJ, Emmott E, Burgess SJ, Schwessinger B, Hensel Z. On the value of preprints: An early career researcher perspective. PLoS Biol 2019; 17:e3000151. PMID: 30789895. PMCID: PMC6400415. DOI: 10.1371/journal.pbio.3000151.
Abstract
Peer-reviewed journal publication is the main means for academic researchers in the life sciences to create a permanent public record of their work. These publications are also the de facto currency for career progress, with a strong link between journal brand recognition and perceived value. The current peer-review process can lead to long delays between submission and publication, with cycles of rejection, revision, and resubmission causing redundant peer review. This situation creates unique challenges for early career researchers (ECRs), who rely heavily on timely publication of their work to gain recognition for their efforts. Today, ECRs face a changing academic landscape, including the increased interdisciplinarity of life sciences research, expansion of the researcher population, and consequent shifts in employer and funding demands. The publication of preprints, publicly available scientific manuscripts posted on dedicated preprint servers prior to journal-managed peer review, can play a key role in addressing these ECR challenges. Preprinting benefits include rapid dissemination of academic work, open access, establishing priority or concurrence, receiving feedback, and facilitating collaborations. Although there is a growing appreciation for and adoption of preprints, a minority of all articles in life sciences and medicine are preprinted. The current low rate of preprint submissions in life sciences and ECR concerns regarding preprinting need to be addressed. We provide a perspective from an interdisciplinary group of ECRs on the value of preprints and advocate their wide adoption to advance knowledge and facilitate career development.
21. Miwa M, Thomas J, O'Mara-Eves A, Ananiadou S. Reducing systematic review workload through certainty-based screening. J Biomed Inform 2014; 51:242-53. PMID: 24954015. PMCID: PMC4199186. DOI: 10.1016/j.jbi.2014.06.005.
Abstract
In systematic reviews, the growing number of published studies imposes a significant screening workload on reviewers. Active learning is a promising approach to reduce the workload by automating some of the screening decisions, but it has been evaluated for a limited number of disciplines. The suitability of applying active learning to complex topics in disciplines such as social science has not been studied, and the selection of useful criteria and enhancements to address the data imbalance problem in systematic reviews remains an open problem. We applied active learning with two criteria (certainty and uncertainty) and several enhancements in both clinical medicine and social science (specifically, public health) areas, and compared the results in both. The results show that the certainty criterion is useful for finding relevant documents, and weighting positive instances is promising to overcome the data imbalance problem in both data sets. Latent dirichlet allocation (LDA) is also shown to be promising when little manually-assigned information is available. Active learning is effective in complex topics, although its efficiency is limited due to the difficulties in text classification. The most promising criterion and weighting method are the same regardless of the review topic, and unsupervised techniques like LDA have a possibility to boost the performance of active learning without manual annotation.
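The certainty criterion described here prioritizes, at each round, the documents the current model is most confident are relevant, so reviewers screen likely includes first. A minimal sketch of that loop (`score_fn` and `label_fn` are hypothetical placeholders for the trained classifier and the human screening decision; retraining between rounds is elided):

```python
def certainty_screening(docs, label_fn, score_fn, batch=2, budget=6):
    """Certainty-criterion active-learning sketch: repeatedly present the
    reviewer with the documents the model is most certain are relevant."""
    labelled, pool = {}, list(docs)
    while pool and len(labelled) < budget:
        # Most-certain-relevant documents first.
        pool.sort(key=score_fn, reverse=True)
        for doc in pool[:batch]:
            labelled[doc] = label_fn(doc)   # manual include/exclude decision
        pool = pool[batch:]
        # A real system would retrain score_fn on `labelled` here,
        # e.g. with weighted positive instances to counter class imbalance.
    return labelled
```

With a fixed screening budget, the highest-scoring documents are guaranteed to be seen by a human before the budget runs out, which is the workload-reduction argument the paper evaluates.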
23.
Abstract
The audit trail helps to establish the credibility of qualitative studies and serves to convince the scientific community of their rigor. The author describes the activities of the audit trail and explores the creation of its structures and processes. Excerpts of audit trails from qualitative studies are presented for faculty and students.
24. Timmer A, Sutherland LR, Hilsden RJ. Development and evaluation of a quality score for abstracts. BMC Med Res Methodol 2003; 3:2. PMID: 12581457. PMCID: PMC149448. DOI: 10.1186/1471-2288-3-2.
Abstract
BACKGROUND The evaluation of abstracts for scientific meetings has been shown to suffer from poor inter observer reliability. A measure was developed to assess the formal quality of abstract submissions in a standardized way. METHODS Item selection was based on scoring systems for full reports, taking into account published guidelines for structured abstracts. Interrater agreement was examined using a random sample of submissions to the American Gastroenterological Association, stratified for research type (n = 100, 1992-1995). For construct validity, the association of formal quality with acceptance for presentation was examined. A questionnaire to expert reviewers evaluated sensibility items, such as ease of use and comprehensiveness. RESULTS The index comprised 19 items. The summary quality scores showed good interrater agreement (intra class coefficient 0.60 - 0.81). Good abstract quality was associated with abstract acceptance for presentation at the meeting. The instrument was found to be acceptable by expert reviewers. CONCLUSION A quality index was developed for the evaluation of scientific meeting abstracts which was shown to be reliable, valid and useful.
25. Sinclair S, Kondejewski J, Jaggi P, Dennett L, Roze des Ordons AL, Hack TF. What Is the State of Compassion Education? A Systematic Review of Compassion Training in Health Care. Acad Med 2021; 96:1057-1070. PMID: 33830949. PMCID: PMC8231671. DOI: 10.1097/acm.0000000000004114.
Abstract
PURPOSE To investigate the current state and quality of compassion education interventions offered to health care providers during training or practice, determine how the components of each education intervention map onto the domains of an empirically based clinical model of compassion, and identify the most common approaches to compassion education. METHOD The MEDLINE, Embase, CINAHL Plus with Full Text, Sociological Abstracts, Web of Science, ERIC, and Education Research Complete databases were searched from inception to March 2020 in this systematic review. Studies that evaluated a compassion education intervention for health care providers or those in training to enhance compassion toward patients and/or families were included. A narrative synthesis of the included studies was performed. The components of each intervention were mapped onto the domains of compassion described in the Patient Compassion Model. RESULTS One hundred eight peer-reviewed publications describing 103 interventions were included. Modalities ranged from establishing curricula and interventions in clinical settings to programs that used humanities-based reflective practices, clinical simulation, role modeling, and contemplative practices, with many education interventions adopting a multimodal approach. Most interventions mapped to the virtuous response domain of the Patient Compassion Model; very few mapped to the other domains of this model. CONCLUSIONS Most interventions were limited as they focused on a single domain of compassion; did not adequately define compassion; were assessed exclusively by self-report; were devoid of a comparator/control group; and did not evaluate retention, sustainability, and translation to clinical practice over time. 
The authors suggest that compassion education interventions be grounded in an empirically based definition of compassion; use a competency-based approach; employ multimodal teaching methods that address the requisite attitudes, skills, behaviors, and knowledge within the multiple domains of compassion; evaluate learning over time; and incorporate patient, preceptor, and peer evaluations.