1
Vieira RADC, Paulinelli RR, Rodrigues FFO, Moreira MAR, Caponero R, Pessoa EC, Rahal RMS, Facina G, de Freitas R. Criteria for selection and classification of studies in medical events. Rev Assoc Med Bras (1992) 2023; 69:e20220888. [PMID: 37075364; PMCID: PMC10176649; DOI: 10.1590/1806-9282.20220888]
Abstract
OBJECTIVE: The aim of this study was to evaluate the impact of study methodology and evaluation type on the selection of studies for presentation at scientific events.
METHODS: A prospective, observational, cross-sectional design was applied to a cohort of studies submitted for presentation at the 2021 Brazilian Breast Cancer Symposium. Three sets of criteria (CR) were compared: CR1 used six criteria (method, ethics, design, originality, promotion, and social contribution); CR2 assigned each study a single overall grade from 0 to 10; and CR3 used five criteria (presentation, method, originality, scientific knowledge, and social contribution). Item correlation was assessed with Cronbach's alpha and factorial analysis. Differences between the tests were evaluated with the Kruskal-Wallis test and post-hoc Dunn tests, and differences in study classification with the Friedman test and Nemenyi's all-pairs comparisons.
RESULTS: A total of 122 studies were evaluated. Item correlation was good for CR1 (α=0.730) and CR3 (α=0.937). In the factorial analysis, methodology, study design, and social contribution formed the main factor for CR1 (p=0.741), and methodology and scientific contribution formed the main factor for CR3 (p=0.994). The Kruskal-Wallis test showed differences in results across all criteria (p<0.001) [CR1-CR2 (p<0.001), CR1-CR3 (p<0.001), and CR2-CR3 (p=0.004)], and the Friedman test showed differences in the ranking of the studies (p<0.001).
CONCLUSION: Methodologies that use multiple criteria show good correlation and should be taken into account when ranking the best studies.
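Cronbach's alpha, used in the abstract above to gauge inter-item correlation, can be computed directly from per-criterion score columns. A minimal sketch in Python; the scores below are hypothetical, not data from the study:

```python
def cronbach_alpha(items):
    # items: one inner list per criterion, each holding one score per rated study
    k = len(items)

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(var(item) for item in items)
    # total score per study, summed across criteria
    totals = [sum(col) for col in zip(*items)]
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# 3 hypothetical criteria scored for 4 abstracts
scores = [[4, 3, 5, 2], [4, 4, 5, 3], [3, 3, 4, 2]]
print(round(cronbach_alpha(scores), 3))  # → 0.96
```

Values near 1 indicate that the criteria rank studies consistently, which is the property the study reports for CR1 and CR3.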
Affiliation(s)
- René Aloisio da Costa Vieira
- Universidade Estadual Paulista “Júlio de Mesquita Filho”, Faculdade de Medicina de Botucatu, Programa de Pós-Graduação em Tocoginecologia – Botucatu (SP), Brazil
- Eduardo Carvalho Pessoa
- Universidade Estadual Paulista “Júlio de Mesquita Filho”, Faculdade de Medicina de Botucatu, Programa de Pós-Graduação em Tocoginecologia – Botucatu (SP), Brazil
- Gil Facina
- Universidade Federal de São Paulo – São Paulo (SP), Brazil
2
Kaefer M, Beckers G, Gobet R, El-Ghoneimi A, Fossum M. How the ESPU grades clinical abstracts. J Pediatr Urol 2018; 14:451-452. [PMID: 30181100; DOI: 10.1016/j.jpurol.2018.07.009]
Abstract
The ability to review abstracts consistently, objectively, and without bias is a skill that most academics hope to master. However, robust standardized rating systems are scarce, and most scientific boards leave the task of rating abstracts poorly defined and at the whim of the reviewer. In an effort to bring consistency to this process, in 2013 the ESPU board adopted an abstract rating system previously used in plastic surgery and orthopedics (van der Steen et al., 2004; Poolman et al., 2007). The aim of this manuscript is to outline this practice.
Affiliation(s)
- Martin Kaefer
- Department of Pediatric Urology, Riley Children's Hospital, Indiana University, Indianapolis, IN, USA
- Goedele Beckers
- Department of Urology, VU University Medical Center, Amsterdam, The Netherlands
- Rita Gobet
- Department of Pediatric Urology, Kinderspital, Zürich, Switzerland
- Magdalena Fossum
- Department of Pediatric Urology, Astrid Lindgren Children's Hospital, Karolinska University Hospital, Stockholm, Sweden; Dept. of Women's and Children's Health, Karolinska Institutet, Stockholm, Sweden.
3
Beckers GMA, Fossum M, Kaefer M. How to review an abstract for a scientific meeting. J Pediatr Urol 2018; 14:71-72. [PMID: 29223858; DOI: 10.1016/j.jpurol.2017.11.007]
Affiliation(s)
- G M A Beckers
- Department of Urology, Pediatric Urology Section, VU University Medical Center, Amsterdam, The Netherlands.
- M Fossum
- Department of Pediatric Surgery, Section of Urology, Astrid Lindgren Children's Hospital, Karolinska University Hospital, Stockholm, Sweden
- M Kaefer
- Indiana University, 702 Barnhill Drive, Suite 4230, Indianapolis, IN, USA
4
Khorasani H, Lassen MH, Kuzon W, Bonde C. Scientific impact of presentations from the EURAPS and the AAPS meetings: a 10-year review. J Plast Reconstr Aesthet Surg 2017; 70:31-36. [DOI: 10.1016/j.bjps.2016.09.022]
5
Kuczmarski TM, Raja AS, Pallin DJ. How do Medical Societies Select Science for Conference Presentation? How Should They? West J Emerg Med 2015; 16:543-50. [PMID: 26265966; PMCID: PMC4530912; DOI: 10.5811/westjem.2015.5.25518]
Abstract
Introduction: Nothing has been published describing the practices of medical societies in choosing abstracts for presentation at their annual meetings. We surveyed medical societies to determine their practices and also present a theoretical analysis of the topic.
Methods: We contacted a convenience sample of large U.S. medical conferences and determined their approach to choosing abstracts. We obtained information from websites, telephone, and email. Our theoretical analysis compares values-based and empirical approaches to scoring-system development.
Results: We contacted 32 societies and obtained data on 28 (response rate 88%). We excluded one upon learning that research was not presented at its annual meeting, leaving 27 for analysis. Only 2 (7%) made their abstract scoring process available to submitters. Reviews were blinded in most societies (21; 78%), and all but one asked reviewers to recuse themselves for conflict of interest (96%). All required ≥3 reviewers. Of the 24 providing information on how scores were generated, 21 (88%) reported using a single gestalt score, and three used a combined score created from pooled domain-specific sub-scores. We present a framework for societies to use in choosing abstracts and demonstrate its application in the development of a new scoring system.
Conclusions: Most medical societies use subjective, gestalt methods to select research for presentation at their annual meetings and do not disclose to submitters how abstracts are chosen. We present a new scoring system that is transparent to submitters and reviewers alike, with an accompanying statement of values and ground rules. We discuss the challenges of selecting abstracts for a large scientific meeting and share the values and practical considerations that undergird the new system.
Affiliation(s)
- Thomas M Kuczmarski
- Brigham and Women's Hospital, Department of Emergency Medicine, Boston, Massachusetts
- Ali S Raja
- Massachusetts General Hospital, Department of Emergency Medicine, Boston, Massachusetts
- Daniel J Pallin
- Harvard Medical School, Brigham and Women's Hospital, Department of Emergency Medicine, Boston, Massachusetts
6
Vita S, Coplin H, Feiereisel KB, Garten S, Mechaber AJ, Estrada C. Decreasing the ceiling effect in assessing meeting quality at an academic professional meeting. Teach Learn Med 2013; 25:47-54. [PMID: 23330894; DOI: 10.1080/10401334.2012.741543]
Abstract
BACKGROUND: The psychometric properties of evaluations at academic meetings have not been well studied.
PURPOSE: To explore the ceiling effect in the evaluation of the quality of a professional meeting and whether a change in the scale labels would decrease the ceiling effect.
METHODS: Cross-sectional study at two national meetings (2009-2010); attendees completed the evaluation on paper forms or online (5-point Likert scale).
RESULTS: Of 1,064 evaluations, the mean session rating was higher among respondents to the paper version in 2009 (4.2; 95% confidence interval [CI], 4.1 to 4.3) than among online respondents in 2009 (3.0; 95% CI, 2.9 to 3.1) or online respondents in 2010 (3.0; 95% CI, 2.9 to 3.1) (p < 0.001).
CONCLUSION: A ceiling effect was present in the evaluation of an academic meeting. A change in the evaluation scale labels decreased the ceiling effect and increased evaluation variability.
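Mean ratings with 95% confidence intervals, as reported in the abstract above, can be reproduced from raw scores with the usual normal approximation. A minimal sketch; the ratings below are made up, not the study's data:

```python
import math

def mean_ci95(ratings):
    """Mean of a list of ratings with a normal-approximation 95% CI."""
    n = len(ratings)
    m = sum(ratings) / n
    # sample standard deviation
    sd = math.sqrt(sum((x - m) ** 2 for x in ratings) / (n - 1))
    half = 1.96 * sd / math.sqrt(n)  # z-based half-width; adequate for large n
    return m, m - half, m + half

# hypothetical 5-point Likert ratings
m, low, high = mean_ci95([4, 5, 4, 3, 5, 4, 4, 5, 3, 4])
print(f"{m:.2f} (95% CI, {low:.2f} to {high:.2f})")
```

With a small sample like this, a t-based interval would be more appropriate; the z-multiplier 1.96 matches the large-sample intervals typical of meeting evaluations.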
Affiliation(s)
- Swaroop Vita
- Birmingham Southern College, Birmingham, AL 35294, USA
7
Newsom J, Estrada CA, Panisko D, Willett L. Selecting the best clinical vignettes for academic meetings: should the scoring tool criteria be modified? J Gen Intern Med 2012; 27:202-6. [PMID: 21927965; PMCID: PMC3270243; DOI: 10.1007/s11606-011-1879-2]
Abstract
BACKGROUND: The performance of scoring tools used to select clinical vignettes for presentation at academic meetings has never been assessed.
OBJECTIVE: To measure the psychometric properties of two scoring tools used to select clinical vignettes and to determine which elements are most helpful.
DESIGN: Prospective observational study.
PARTICIPANTS: Participants submitting clinical vignette abstracts to the Society of General Internal Medicine annual meetings (2006-2007).
MAIN MEASURES: The 2006 scoring tool had three criteria (clarity, significance, and relevance) with brief general descriptors. The 2007 modified tool had five criteria (clarity, significance, relevance, teaching value, and overall assessment) with more detailed descriptors.
KEY RESULTS: A total of 938 clinical vignette abstracts were submitted (484 in 2006; 454 in 2007); 59.5% (n=288) were accepted for presentation. Cronbach's alpha was 0.81 for the 2006 three-item tool and 0.95 for the 2007 modified five-item tool. Simplifying the five-item 2007 tool to three items (relevance, teaching value, overall assessment) yielded a Cronbach's alpha of 0.95. Agreement between the number of clinical vignettes accepted for presentation (2007) using the average score of the five items and the number that would have been accepted using the simplified three items was almost perfect, with kappa 0.89 (95% confidence interval, 0.85 to 0.93).
CONCLUSIONS: Both scoring tools performed well, but a simplified tool with three items (relevance, teaching value, and overall assessment) and detailed descriptors was optimal; the simplified tool could improve reviewer efficiency and the quality of clinical vignettes presented at national meetings.
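The near-perfect agreement reported above (kappa 0.89) uses Cohen's kappa, which corrects raw agreement between two decision sets for agreement expected by chance. A minimal, self-contained sketch; the accept/reject lists are illustrative only:

```python
def cohen_kappa(a, b):
    # a, b: two equal-length lists of decisions (any hashable labels)
    assert len(a) == len(b)
    n = len(a)
    labels = set(a) | set(b)
    # observed agreement: fraction of identical decisions
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    # chance agreement: product of each rater's marginal label frequencies
    p_exp = sum((a.count(lab) / n) * (b.count(lab) / n) for lab in labels)
    return (p_obs - p_exp) / (1 - p_exp)

full = ["accept", "accept", "reject", "reject", "accept", "reject"]
simplified = ["accept", "accept", "reject", "accept", "accept", "reject"]
print(round(cohen_kappa(full, simplified), 2))  # → 0.67
```

Kappa is 1 for perfect agreement, 0 for chance-level agreement; values above roughly 0.8 are conventionally read as "almost perfect."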
Affiliation(s)
- Jeremiah Newsom
- The University of Alabama at Birmingham, Birmingham, AL, USA
- Birmingham VA Medical Center, Veterans Affairs National Quality Scholars Program, Birmingham, AL, USA
- Carlos A. Estrada
- The University of Alabama at Birmingham, Birmingham, AL, USA
- Birmingham VA Medical Center, Veterans Affairs National Quality Scholars Program, Birmingham, AL, USA
- Lisa Willett
- The University of Alabama at Birmingham, Birmingham, AL, USA
- BDB 341, 1530 3rd Avenue South, Birmingham, AL 35294-0012, USA
8
Quality of Reporting in Poster versus Oral Presentations at the American Society of Plastic Surgeons 2008 Conference in Chicago. Plast Reconstr Surg 2010; 125:219e-221e. [DOI: 10.1097/prs.0b013e3181d51753]
9
Bydder S, Marion K, Taylor M, Semmens J. Assessment of abstracts submitted to the annual scientific meeting of the Royal Australian and New Zealand College of Radiologists. Australas Radiol 2006; 50:355-9. [PMID: 16884423; DOI: 10.1111/j.1440-1673.2006.01599.x]
Abstract
The process for selecting abstracts submitted for presentation at annual scientific meetings should ensure both the quality of these meetings and fairness to prospective presenters. The aim of the present study was to review the assessment of radiation oncology abstracts submitted for oral presentation at the 2004 Royal Australian and New Zealand College of Radiologists annual scientific meeting. Selection criteria were developed that focused primarily on the subjective aspects of abstract quality. All research abstracts were reviewed blindly by five individual reviewers (four radiation oncologists and a statistician), each scoring every abstract in five categories. The scores of three reviewers were used to select the top 30 general and top eight trainee entries. For comparison, cluster analysis using the scores of all five reviewers was used to group papers into two ranks. Total scores for each paper correlated strongly between all reviewers, as did the study design subscale. Abstracts belonging to the first-rank cluster were generally selected, and most trainee entries would have been accepted into the general programme. The selection process described appears feasible and fair and may improve the quality of meetings.
Affiliation(s)
- S Bydder
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, WA, Australia.
10
Intra-rater repeatability of a structured method of selecting abstracts for the annual EURAPS scientific meeting. Eur J Plast Surg 2006. [DOI: 10.1007/s00238-006-0061-2]
11
Cohen IT, Patel K. Peer review interrater concordance of scientific abstracts: a study of anesthesiology subspecialty and component societies. Anesth Analg 2006; 102:1501-3. [PMID: 16632833; DOI: 10.1213/01.ane.0000200314.73035.4d]
Abstract
Abstracts presented at anesthesiology subspecialty and component society meetings are chosen by peer review. We assessed this process by examining selection criteria and determining interrater concordance. For the societies studied, the level of reviewer agreement ranged from poor to moderate, i.e., only slightly better than chance alone. We hypothesize that clearer evaluation criteria, scoring systems with interval scales, and assessment based on quality can strengthen the peer review process.
Affiliation(s)
- Ira Todd Cohen
- Department of Anesthesiology and Pediatrics, Children's National Medical Center, George Washington University, Washington, DC 20010, USA.
12
van der Steen LPE, Hage JJ, Loonen MPJ, Kon M. Full Publication of Papers Presented at the 1995 through 1999 European Association of Plastic Surgeons Annual Scientific Meetings: A Systemic Bibliometric Analysis. Plast Reconstr Surg 2004; 114:113-20. [PMID: 15220578; DOI: 10.1097/01.prs.0000127804.00139.58]
Abstract
From the multitude of oral presentations at major medical meetings, the most informative and highest-quality studies make it to full publication in peer-reviewed journals. The rate of publication may be regarded as an indicator of the scientific level of the meeting. Study of the publication rates of consecutive annual meetings allows for the evaluation of the consistency of the scientific level of these meetings and for comparison with publication rates of other meetings in the same field of interest. To grade how useful any publication is to other authors, one can furthermore measure how frequently they cite it in their own publications. Finally, the time lag between oral presentation and full publication is of importance to both its authors and the audience at the meeting. The main objectives of this study were to determine the publication rate of papers of various fields of interest as presented at five consecutive annual meetings of the European Association of Plastic Surgeons (EURAPS) and the time lag between these presentations and their publication. The authors compared their overall findings to those reported for other surgical specialties. Moreover, they identified and classified the journals in which the full publications appeared as an indicator of the scientific value of the meeting. They conclude that a greater than average number of papers presented at the 1995 through 1999 annual EURAPS meetings went on to full publication in peer-reviewed journals. Among these journals, Plastic and Reconstructive Surgery was the best source for information presented at the meetings. Although approximately 90 percent of the publications appeared before 3 years had passed after a meeting, additional publications may be expected to appear even more than 6 years after the meeting. Given the high publication rate and the high average normalized impact factor of the journals in which the presentations appeared, the five studied EURAPS meetings overall had high scientific value.
Affiliation(s)
- Lydia P E van der Steen
- Department of Plastic and Reconstructive Surgery, Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands