51
Cheng HY, Davis M. Geriatrics Curricula for Internal and Family Medicine Residents: Assessing Study Quality and Learning Outcomes. J Grad Med Educ 2017;9:33-45. PMID: 28261392; PMCID: PMC5319626; DOI: 10.4300/jgme-d-16-00037.1.
Abstract
BACKGROUND Prior reviews of geriatrics curricula for internal medicine (IM) and family medicine (FM) residents have not evaluated study quality or assessed learning objectives or specific IM or FM competencies. OBJECTIVE This review of geriatrics curricula for IM and FM residents seeks to answer 3 questions: (1) What types of learning outcomes were measured? (2) How were learning outcomes measured? and (3) What was the quality of the studies? METHODS We evaluated geriatrics curricula that reported learning objectives or competencies, teaching methods, and learning outcomes, and those that used a comparative design. We searched PubMed and 4 other databases from 2003-2015, and assessed learning outcomes, outcome measures, and study quality using the Medical Education Research Study Quality Instrument (MERSQI) and Best Evidence Medical Education (BEME) methods. RESULTS Fourteen studies met inclusion criteria. Most curricula were intended for IM residents in the inpatient setting; only 1 was solely dedicated to FM residents. Median curriculum duration was 1 month, and the minimum number of geriatrics competencies covered was 4. Learning outcomes ranged from Kirkpatrick levels 1 to 3. Studies that reported effect size showed a considerable impact on attitudes and knowledge, mainly via pretests and posttests. The mean MERSQI score was 10.5 (range, 8.5-13) on a scale of 5 (lowest quality) to 18 (highest quality). CONCLUSIONS Few recently published geriatrics curricula for IM and FM residents included learning outcome assessments. Overall, changes in attitudes and knowledge were sizeable, but reporting was limited to low to moderate Kirkpatrick levels. Study quality was moderate.
Affiliation(s)
- Huai Yong Cheng
- Corresponding author: Huai Yong Cheng, MD, MPH, University of Virginia, Division of General Medicine, Geriatrics, and Palliative Care Medicine and Hospital Medicine, PO Box 800901, Charlottesville, VA 22981; telephone: 434.924.4849; fax: 434.243.9282
52
Horsley T, Galipeau J, Petkovic J, Zeiter J, Hamstra SJ, Cook DA. Reporting quality and risk of bias in randomised trials in health professions education. Med Educ 2017;51:61-71. PMID: 27981660; DOI: 10.1111/medu.13130.
Abstract
CONTEXT Complete reporting of research is essential to enable consumers to accurately appraise, interpret and apply findings. Quality appraisal checklists are giving way to tools that judge the risk for bias. OBJECTIVES We sought to determine the prevalence of these complementary aspects of research reports (completeness of reporting and perceived risk for bias) of randomised studies in health professions education. METHODS We searched bibliographic databases for randomised studies of health professions education. We appraised two cohorts representing different time periods (2008-2010 and 2014, respectively) and worked in duplicate to apply the CONSORT guidelines and Cochrane Risk of Bias tool. We explored differences between time periods using independent-samples t-tests or the chi-squared test, as appropriate. RESULTS We systematically identified 180 randomised studies (2008-2010, n = 150; 2014, n = 30). Frequencies of reporting of CONSORT elements within full-text reports were highly variable and most elements were reported in fewer than 50% of studies. We found a statistically significant difference in the CONSORT reporting index (maximum score: 500) between the 2008-2010 (mean ± standard deviation [SD]: 242.7 ± 55.6) and 2014 (mean ± SD: 311.6 ± 53.2) cohorts (p < 0.001). High or unclear risk for bias was most common for allocation concealment (157, 87%) and blinding of participants (147, 82%), personnel (152, 84%) and outcome assessors (112, 62%). Most risk for bias elements were judged to be unclear (range: 51-84%). Risk for bias elements significantly improved over time for blinding of participants (p = 0.007), incomplete data (p < 0.001) and the presence of other sources of bias (p < 0.001). CONCLUSIONS Reports of randomised studies in health professions education frequently omit elements recommended by the CONSORT statement. Most reports were assessed as having a high or unclear risk for bias. 
Greater attention to how studies are reported at study outset and during manuscript preparation could improve levels of complete, transparent reporting.
Affiliation(s)
- Tanya Horsley
- Research Unit, Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada
- Jeanie Zeiter
- Research Unit, Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada
- Stanley J Hamstra
- Accreditation Council for Graduate Medical Education, Chicago, IL
- Faculty of Education, University of Ottawa, Ottawa, Canada
- David A Cook
- Mayo Clinic College of Medicine, Rochester, Minnesota, USA
53
Abstract
STATEMENT Serious games are computer-based games designed for training purposes, and they are poised to expand their role in medical education. This systematic review, conducted in accordance with PRISMA guidelines, aimed to synthesize current serious gaming trends in health care training, especially those pertaining to developmental methodologies and game evaluation. PubMed, EMBASE, and Cochrane databases were queried for relevant documents published through December 2014. Of the 3737 publications identified, 48, covering 42 serious games, were included. From 2007 to 2014, the field grew from 2 games in 2 genres to 42 games across 8 genres. Overall, study design was heterogeneous, and methodological quality was modest, with a mean MERSQI score of 10.5 of 18. Seventy-nine percent of serious games were evaluated for training outcomes. As the number of serious games for health care training continues to grow, schemas that organize how educators approach their development and evaluation are essential to their success.
54
Jordan J, Coates WC, Clarke S, Runde DP, Fowlkes E, Kurth J, Yarris LM. Exploring Scholarship and the Emergency Medicine Educator: A Workforce Study. West J Emerg Med 2016;18:163-168. PMID: 28116031; PMCID: PMC5226754; DOI: 10.5811/westjem.2016.10.32636.
Abstract
INTRODUCTION Recent literature calls for initiatives to improve the quality of education studies and support faculty in approaching educational problems in a scholarly manner. Understanding the emergency medicine (EM) educator workforce is a crucial precursor to developing policies to support educators and promote education scholarship in EM. This study aims to illuminate the current workforce model for the academic EM educator. METHODS Program leadership at EM training programs completed an online survey consisting of multiple choice, completion, and free-response type items. We calculated and reported descriptive statistics. RESULTS A total of 112 programs participated. The mean number of core faculty per program was 16.02 ± 7.83 [14.53-17.5]. The mean number of faculty full-time equivalents (FTEs) per program dedicated to education was 6.92 ± 4.92 [5.87-7.98], including (mean FTE): vice chair for education (0.25); director of medical education (0.13); education fellowship director (0.2); residency program director (0.83); associate residency director (0.94); assistant residency director (1.1); medical student clerkship director (0.8); assistant/associate clerkship director (0.28); simulation fellowship director (0.11); simulation director (0.42); and director of faculty development (0.13). The mean number of FTEs per program for education administrative support was 2.34 ± 1.1 [2.13-2.61]. Determination of clinical hours varied. Only 38.75% of programs had personnel with education research expertise. CONCLUSION Education faculty represent about 43% of the core faculty workforce. Many programs do not have the full spectrum of education leadership roles, and educational faculty divide their time among multiple important academic roles. Clinical requirements vary. Many departments lack personnel with expertise in education research. This information may inform interventions to promote education scholarship.
Affiliation(s)
- Jaime Jordan
- Harbor-UCLA Medical Center, Department of Emergency Medicine, Torrance, California; David Geffen School of Medicine at University of California Los Angeles, Los Angeles, California; Los Angeles Biomedical Research Institute at Harbor-UCLA, Torrance, California
- Wendy C Coates
- Harbor-UCLA Medical Center, Department of Emergency Medicine, Torrance, California; David Geffen School of Medicine at University of California Los Angeles, Los Angeles, California; Los Angeles Biomedical Research Institute at Harbor-UCLA, Torrance, California
- Samuel Clarke
- University of California Davis Medical Center, Department of Emergency Medicine, Sacramento, California
- Daniel P Runde
- University of Iowa Hospitals and Clinics, Department of Emergency Medicine, Iowa City, Iowa
- Emilie Fowlkes
- University of Iowa Hospitals and Clinics, Department of Emergency Medicine, Iowa City, Iowa
- Jacqueline Kurth
- University of California Los Angeles, Department of Emergency Medicine, Los Angeles, California
- Lalena M Yarris
- Oregon Health & Science University, Department of Emergency Medicine, Portland, Oregon
55
Rogers D. Which educational interventions improve healthcare professionals' resilience? Med Teach 2016;38:1236-1241. PMID: 27573430; DOI: 10.1080/0142159x.2016.1210111.
Abstract
INTRODUCTION This literature review summarizes the current evidence on educational interventions to develop healthcare worker resilience. METHODS Electronic databases were systematically searched using the search terms: education OR training OR medical students AND resilience. The initial search was refined using criteria including population (healthcare students and professionals), interventions (educational), and outcome (resilience changes). RESULTS Resilience has been defined and measured in various ways. The following educational interventions to develop resilience were identified: resilience workshops, small group problem solving, reflection, cognitive behavioral training, mindfulness and relaxation training, and mentoring. CONCLUSIONS The strongest evidence was for resilience workshops, cognitive behavioral training, or a combination of interventions. The literature is sometimes conflicting, suggesting that developing resilience is a complex process and that our understanding of it is not yet fully developed.
Affiliation(s)
- David Rogers
- Department of Family Medicine, University of Pretoria, Pretoria, South Africa
- Department of Clinical Education, Plymouth University Peninsula Schools of Medicine and Dentistry, Plymouth, UK
56
Yarris LM, Jordan J, Coates WC. Education Scholarship Fellowships: An Emerging Model for Creating Educational Leaders. J Grad Med Educ 2016;8:668-673. PMID: 28018530; PMCID: PMC5180520; DOI: 10.4300/jgme-d-15-00616.1.
Affiliation(s)
- Lalena M. Yarris
- Corresponding author: Lalena M. Yarris, MD, MCR, Oregon Health & Science University, 3181 SW Sam Jackson Park Road, Mail Code CDW-EM, Portland, OR 97239; telephone: 503.494.2962
57
Cheng A, Kessler D, Mackinnon R, Chang TP, Nadkarni VM, Hunt EA, Duval-Arnould J, Lin Y, Cook DA, Pusic M, Hui J, Moher D, Egger M, Auerbach M. Reporting Guidelines for Health Care Simulation Research. Clin Simul Nurs 2016. DOI: 10.1016/j.ecns.2016.04.008.
58
Cheng A, Kessler D, Mackinnon R, Chang TP, Nadkarni VM, Hunt EA, Duval-Arnould J, Lin Y, Cook DA, Pusic M, Hui J, Moher D, Egger M, Auerbach M. Reporting guidelines for health care simulation research: extensions to the CONSORT and STROBE statements. Adv Simul (Lond) 2016;1:25. PMID: 29449994; PMCID: PMC5806464; DOI: 10.1186/s41077-016-0025-y.
Abstract
BACKGROUND Simulation-based research (SBR) is rapidly expanding but the quality of reporting needs improvement. For a reader to critically assess a study, the elements of the study need to be clearly reported. Our objective was to develop reporting guidelines for SBR by creating extensions to the Consolidated Standards of Reporting Trials (CONSORT) and Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statements. METHODS An iterative multistep consensus-building process was used on the basis of the recommended steps for developing reporting guidelines. The consensus process involved the following: (1) developing a steering committee, (2) defining the scope of the reporting guidelines, (3) identifying a consensus panel, (4) generating a list of items for discussion via online premeeting survey, (5) conducting a consensus meeting, and (6) drafting reporting guidelines with an explanation and elaboration document. RESULTS The following 11 extensions were recommended for CONSORT: item 1 (title/abstract), item 2 (background), item 5 (interventions), item 6 (outcomes), item 11 (blinding), item 12 (statistical methods), item 15 (baseline data), item 17 (outcomes/estimation), item 20 (limitations), item 21 (generalizability), and item 25 (funding). The following 10 extensions were recommended for STROBE: item 1 (title/abstract), item 2 (background/rationale), item 7 (variables), item 8 (data sources/measurement), item 12 (statistical methods), item 14 (descriptive data), item 16 (main results), item 19 (limitations), item 21 (generalizability), and item 22 (funding). An elaboration document was created to provide examples and explanation for each extension. CONCLUSIONS We have developed extensions for the CONSORT and STROBE Statements that can help improve the quality of reporting for SBR (Sim Healthcare 00:00-00, 2016).
Affiliation(s)
- Adam Cheng
- Section of Emergency Medicine, Department of Pediatrics, Alberta Children’s Hospital, University of Calgary KidSim-ASPIRE Research Program, 2888 Shaganappi Trail NW, Calgary, Alberta T3B 6A8 Canada
- David Kessler
- Columbia University College of Physicians and Surgeons, New York, NY USA
- Ralph Mackinnon
- Royal Manchester Children’s Hospital, Central Manchester University Hospitals NHS Foundation Trust, Manchester, UK
- Department of Learning, Informatics, Management and Ethics, Karolinska Institute, Stockholm, Sweden
- Todd P. Chang
- Children’s Hospital Los Angeles, University of Southern California, Los Angeles, CA USA
- Vinay M. Nadkarni
- The Children’s Hospital of Philadelphia, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA USA
- Yiqun Lin
- Alberta Children’s Hospital, Cumming School of Medicine, University of Calgary, Calgary, Alberta Canada
- David A. Cook
- Multidisciplinary Simulation Center, Mayo Clinic Online Learning, and Division of General Internal Medicine, Mayo Clinic College of Medicine, Rochester, USA
- Martin Pusic
- Institute for Innovations in Medical Education, Division of Education Quality and Analytics, NYU School of Medicine, New York, NY USA
- Joshua Hui
- Department of Emergency Medicine, David Geffen School of Medicine at UCLA, Los Angeles, CA USA
- David Moher
- Ottawa Methods Centre, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario Canada
- Matthias Egger
- Institute of Social and Preventive Medicine, University of Bern, Bern, Switzerland
- Marc Auerbach
- Department of Pediatrics, Section of Emergency Medicine, Yale University School of Medicine, New Haven, CT USA
- for the International Network for Simulation-based Pediatric Innovation, Research, and Education (INSPIRE) Reporting Guidelines Investigators
- Section of Emergency Medicine, Department of Pediatrics, Alberta Children’s Hospital, University of Calgary KidSim-ASPIRE Research Program, 2888 Shaganappi Trail NW, Calgary, Alberta T3B 6A8 Canada
- Columbia University College of Physicians and Surgeons, New York, NY USA
- Royal Manchester Children’s Hospital, Central Manchester University Hospitals NHS Foundation Trust, Manchester, UK
- Department of Learning, Informatics, Management and Ethics, Karolinska Institute, Stockholm, Sweden
- Children’s Hospital Los Angeles, University of Southern California, Los Angeles, CA USA
- The Children’s Hospital of Philadelphia, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA USA
- Johns Hopkins University School of Medicine, Baltimore, MD USA
- Alberta Children’s Hospital, Cumming School of Medicine, University of Calgary, Calgary, Alberta Canada
- Multidisciplinary Simulation Center, Mayo Clinic Online Learning, and Division of General Internal Medicine, Mayo Clinic College of Medicine, Rochester, USA
- Institute for Innovations in Medical Education, Division of Education Quality and Analytics, NYU School of Medicine, New York, NY USA
- Department of Emergency Medicine, David Geffen School of Medicine at UCLA, Los Angeles, CA USA
- Ottawa Methods Centre, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario Canada
- Institute of Social and Preventive Medicine, University of Bern, Bern, Switzerland
- Department of Pediatrics, Section of Emergency Medicine, Yale University School of Medicine, New Haven, CT USA
59
Cheng A, Kessler D, Mackinnon R, Chang TP, Nadkarni VM, Hunt EA, Duval-Arnould J, Lin Y, Cook DA, Pusic M, Hui J, Moher D, Egger M, Auerbach M. Reporting guidelines for health care simulation research: Extensions to the CONSORT and STROBE statements. BMJ Simul Technol Enhanc Learn 2016;2:51-60. DOI: 10.1136/bmjstel-2016-000124.
60
Facey AD, Tallentire V, Selzer RM, Rotstein L. Understanding and reducing work-related psychological distress in interns: a systematic review. Intern Med J 2015;45:995-1004. PMID: 25871700; DOI: 10.1111/imj.12785.
Abstract
The aim of this study was to collate and evaluate studies investigating either the factors influencing work-related psychological distress in postgraduate year one (PGY1) doctors or the strategies designed to reduce it. This systematic review was conducted in May 2014. The data sources were key databases (MEDLINE, PsycINFO and Embase) and manual searches of reference lists for relevant studies published in the last 15 years. Studies were eligible if they reported empirical research designed either to elucidate the factors influencing work-related psychological distress in PGY1 doctors or to examine the effects of an intervention designed to reduce it. Key information was extracted into an electronic data extraction form, which incorporated elements of Murphy's model of work stress factors. A total of 21 studies were included in the review; 16 studies examined the factors influencing work-related psychological distress, four investigated strategies to reduce it, and a single study addressed both. Analysis of the findings of each individual study through the conceptual framework provided by Murphy's model revealed a discrepancy between the factors influencing work-related psychological distress and the focus of strategies designed to reduce it. Factors such as career progression and a PGY1 doctor's role within the organisation were not addressed in the interventions identified. Significant sources of psychological distress in PGY1 doctors remain overlooked by current interventions. Strategies designed to prevent or reduce psychological distress should be broad-based and grounded in both the literature exploring salient factors and existing theories of work-related stress.
Affiliation(s)
- A D Facey
- Alfred Hospital, Melbourne, Victoria, Australia
- R M Selzer
- Alfred Hospital, Melbourne, Victoria, Australia; Central Clinical School, Monash University, Alfred Centre, Melbourne, Victoria, Australia
- L Rotstein
- Alfred Hospital, Melbourne, Victoria, Australia; Central Clinical School, Monash University, Alfred Centre, Melbourne, Victoria, Australia
61
Jordan J, Jones D, Williams D, Druck J. Publishing Venues for Education Scholarship: A Needs Assessment. Acad Emerg Med 2016;23:731-735. PMID: 27155165; DOI: 10.1111/acem.13003.
Abstract
OBJECTIVES Education research is a developing field. It is unknown if there are adequate venues for scholarship distribution. The objectives of this study were to identify the types of education scholarship produced, where this type of scholarship is published, barriers to achieving publication for education scholarship, and perceptions of the adequacy of publication venues. METHODS Study participants were emergency medicine (EM) education and academic leaders who completed an online survey consisting of multiple-choice, completion, and 10-point Likert scale items. RESULTS A total of 45 of 59 (76.3%) subjects completed the survey. A total of 33 of 45 (73.3%) respondents had published education scholarship. Most (29/44, 65.9%) felt that there were inadequate venues for publishing education scholarship. Of those who publish education scholarship, most (30/33; 90.9%) publish either less than one or one to two peer-reviewed products per year, but collaborate with others more frequently (less than one per year, 7/33, 21.2%; one or two per year, 17/33, 51.5%; three or four per year, 7/33, 21.2%; five or more per year, 2/33, 6.1%). The most frequently published types of scholarship were curricular innovations and original research, with mean ratings of 5.61 and 5.21, respectively, on a 10-point Likert scale. Peer-reviewed print journals were the most frequently utilized venue, with a mean rating of 6.21. Other venues (mean rating) included peer-reviewed online journal (4.0), MedEd Portal (3.58), free open-access education (3.47), newsletter (3.0), and curricular toolbox (2.55). The most common rejection reason was "not suitable for this journal/venue," with a mean rating of 5.33. Other reasons included research methodology (4.07), small sample size (4.17), single-site study (4.28), and misunderstanding of project purpose (4.10). Respondents believed that additional education supplements in journals would be most helpful in increasing successful publication, with a mean rating of 8.31.
Other helpful items included a central online repository of venues that publish education scholarship, online training in education research design/methodology, and an online networking site of education researchers to promote collaboration, with mean ratings of 6.88, 6.75, and 6.28, respectively. CONCLUSION The majority of our sampling of EM education and academic leaders publish education scholarship. There is a perceived lack of venues for this work. Multiple barriers as well as potential strategies for success have been identified. This information may inform interventions to support the dissemination of education scholarship.
Affiliation(s)
- Jaime Jordan
- Harbor-UCLA Medical Center, Department of Emergency Medicine, Torrance, California
- David Geffen School of Medicine at UCLA, Los Angeles, California
- Los Angeles Biomedical Research Institute at Harbor-UCLA Medical Center, Torrance, California
- David Jones
- Oregon Health & Science University, Portland, Oregon
62
Liu Q, Peng W, Zhang F, Hu R, Li Y, Yan W. The Effectiveness of Blended Learning in Health Professions: Systematic Review and Meta-Analysis. J Med Internet Res 2016;18:e2. PMID: 26729058; PMCID: PMC4717286; DOI: 10.2196/jmir.4807.
Abstract
BACKGROUND Blended learning, defined as the combination of traditional face-to-face learning and asynchronous or synchronous e-learning, has grown rapidly and is now widely used in education. Concerns about the effectiveness of blended learning have led to an increasing number of studies on this topic. However, there has yet to be a quantitative synthesis evaluating the effectiveness of blended learning on knowledge acquisition in health professions. OBJECTIVE We aimed to assess the effectiveness of blended learning for health professional learners compared with no intervention and with nonblended learning. We also aimed to explore factors that could explain differences in learning effects across study designs, participants, country socioeconomic status, intervention durations, randomization, and quality score for each of these questions. METHODS We conducted a search of citations in Medline, CINAHL, Science Direct, Ovid Embase, Web of Science, CENTRAL, and ERIC through September 2014. Studies in any language that compared blended learning with no intervention or nonblended learning among health professional learners and assessed knowledge acquisition were included. Two reviewers independently evaluated study quality and abstracted information including characteristics of learners and intervention (study design, exercises, interactivity, peer discussion, and outcome assessment). RESULTS We identified 56 eligible articles. Heterogeneity across studies was large (I² ≥ 93.3) in all analyses. For studies comparing knowledge gained from blended learning versus no intervention, the pooled effect size was 1.40 (95% CI 1.04-1.77; P<.001; n=20 interventions) with no significant publication bias, and exclusion of any single study did not change the overall result.
For studies comparing blended learning with nonblended learning (pure e-learning or pure traditional face-to-face learning), the pooled effect size was 0.81 (95% CI 0.57-1.05; P<.001; n=56 interventions), and exclusion of any single study did not change the overall result. Although significant publication bias was found, the trim and fill method showed that the effect size changed to 0.26 (95% CI -0.01 to 0.54) after adjustment. In the subgroup analyses, pre-posttest study design, presence of exercises, and objective outcome assessment yielded larger effect sizes. CONCLUSIONS Blended learning appears to have a consistent positive effect in comparison with no intervention, and to be more effective than or at least as effective as nonblended instruction for knowledge acquisition in health professions. Due to the large heterogeneity, the conclusion should be treated with caution.
Affiliation(s)
- Qian Liu
- Department of Epidemiology and Biostatistics, School of Public Health, Tongji Medical College of Huazhong University of Science & Technology, Wuhan, China
63
Ahmed R, Farooq A, Storie D, Hartling L, Oswald A. Building capacity for education research among clinical educators in the health professions: A BEME (Best Evidence Medical Education) Systematic Review of the outcomes of interventions: BEME Guide No. 34. Med Teach 2016;38:123-136. PMID: 26610023; DOI: 10.3109/0142159x.2015.1112893.
Abstract
BACKGROUND/PURPOSE There is a growing desire for health professions educators to generate high-quality education research; yet, few of them receive the training to do so. In response, health professions faculties have increasingly been devoting resources to provide members with the skills necessary for education research. The form and impact of these efforts have not been reviewed, though such a synthesis could be useful for practice. The objectives of this systematic review were to (1) identify interventions aimed at building capacity for education research among health professions clinical educators and (2) review the outcomes of these interventions. METHODOLOGY We developed a systematic review protocol based on our pilot scoping search. This protocol underwent peer review and was prospectively registered with the Best Evidence Medical Education Collaboration. Based on this protocol, we conducted a comprehensive search of health professions databases and related grey literature. Systematic methods were applied: two independent reviewers completed title screening and full text review for inclusion, data extraction, and methodological quality assessment. Studies were included if they reported outcomes for interventions designed to increase the capacity of health professions clinical educators to conduct education research. We conducted a qualitative synthesis of the evidence, which included detailed reporting of intervention characteristics and outcomes. RESULTS Our search returned 14,149 results, 241 of which were retained after title and abstract screening, and 30 of which met inclusion criteria after full text review. Seven groups of interventions were identified, the most frequent being teaching scholars programs (n = 10), health professions education fellowships (n = 3), or master's programs (n = 4). The most commonly measured outcome was change related to enhanced scholarly outputs (grants, papers, abstracts, and presentations) post-intervention.
Unfortunately, most of the included studies lacked detailed description of the intervention and were of low to moderate quality with post-test only design. DISCUSSION/CONCLUSIONS This review demonstrates that various interventions can have a positive impact on the ability of health professions clinical educators to conduct education research. We note several key elements of the interventions including: (1) protected time, (2) mentorship and/or collaboration, (3) departmental and institutional commitment and leadership, and (4) financial support. Through our analysis we describe the complexities around evaluating clinical educators' health professions research activities and the interventions used to promote education research. While improved study quality would allow more detailed understanding and evaluation of these key features, we are able to provide recommendations for potential strategies for improving participation in and quality of health professions education research based on this analysis.
|
64
|
Fey MK, Gloe D, Mariani B. Assessing the Quality of Simulation-Based Research Articles: A Rating Rubric. Clin Simul Nurs 2015. [DOI: 10.1016/j.ecns.2015.10.005]
|
65
|
Sawatsky AP, Beckman TJ, Edakkanambeth Varayil J, Mandrekar JN, Reed DA, Wang AT. Association Between Study Quality and Publication Rates of Medical Education Abstracts Presented at the Society of General Internal Medicine Annual Meeting. J Gen Intern Med 2015; 30:1172-7. [PMID: 25814265 PMCID: PMC4510227 DOI: 10.1007/s11606-015-3269-7]
Abstract
BACKGROUND Studies reveal that 44.5% of abstracts presented at national meetings are subsequently published in indexed journals, with lower rates for abstracts of medical education scholarship. OBJECTIVE We sought to determine whether the quality of medical education abstracts is associated with subsequent publication in indexed journals, and to compare the quality of medical education abstracts presented as scientific abstracts versus innovations in medical education (IME). DESIGN Retrospective cohort study. PARTICIPANTS Medical education abstracts presented at the Society of General Internal Medicine (SGIM) 2009 annual meeting. MAIN MEASURES Publication rates were measured using database searches for full-text publications through December 2013. Quality was assessed using the validated Medical Education Research Study Quality Instrument (MERSQI). KEY RESULTS Overall, 64 (44%) medical education abstracts presented at the 2009 SGIM annual meeting were subsequently published in indexed medical journals. The MERSQI demonstrated good inter-rater reliability (intraclass correlation range, 0.77-1.00) for grading the quality of medical education abstracts. MERSQI scores were higher for published versus unpublished abstracts (9.59 vs. 8.81, p = 0.03). Abstracts with a MERSQI score of 10 or greater were more likely to be published (OR 3.18, 95% CI 1.47-6.89, p = 0.003). MERSQI scores were higher for scientific versus IME abstracts (9.88 vs. 8.31, p < 0.001). Publication rates were higher for scientific abstracts (42 [66%] vs. 37 [46%], p = 0.02) and oral presentations (15 [23%] vs. 6 [8%], p = 0.01). CONCLUSIONS The publication rate of medical education abstracts presented at the 2009 SGIM annual meeting was similar to reported publication rates for biomedical research abstracts, but higher than publication rates previously reported for medical education abstracts. MERSQI scores were associated with higher abstract publication rates, suggesting that attention to measures of quality, such as sampling, instrument validity, and data analysis, may improve the likelihood that medical education abstracts will be published.
Affiliation(s)
- Adam P Sawatsky
- Division of General Internal Medicine, Mayo Clinic, Rochester, MN, USA
|
66
|
Cook DA, Reed DA. Appraising the quality of medical education research methods: the Medical Education Research Study Quality Instrument and the Newcastle-Ottawa Scale-Education. Acad Med 2015; 90:1067-76. [PMID: 26107881 DOI: 10.1097/acm.0000000000000786]
Abstract
PURPOSE The Medical Education Research Study Quality Instrument (MERSQI) and the Newcastle-Ottawa Scale-Education (NOS-E) were developed to appraise methodological quality in medical education research. The study objective was to evaluate the interrater reliability, normative scores, and between-instrument correlation for these two instruments. METHOD In 2014, the authors searched PubMed and Google for articles using the MERSQI or NOS-E. They obtained or extracted data for interrater reliability-using the intraclass correlation coefficient (ICC)-and normative scores. They calculated between-scale correlation using Spearman rho. RESULTS Each instrument contains items concerning sampling, controlling for confounders, and integrity of outcomes. Interrater reliability for overall scores ranged from 0.68 to 0.95. Interrater reliability was "substantial" or better (ICC > 0.60) for nearly all domain-specific items on both instruments. Most instances of low interrater reliability were associated with restriction of range, and raw agreement was usually good. Across 26 studies evaluating published research, the median overall MERSQI score was 11.3 (range 8.9-15.1, of possible 18). Across six studies, the median overall NOS-E score was 3.22 (range 2.08-3.82, of possible 6). Overall MERSQI and NOS-E scores correlated reasonably well (rho 0.49-0.72). CONCLUSIONS The MERSQI and NOS-E are useful, reliable, complementary tools for appraising methodological quality of medical education research. Interpretation and use of their scores should focus on item-specific codes rather than overall scores. Normative scores should be used for relative rather than absolute judgments because different research questions require different study designs.
Affiliation(s)
- David A Cook
- D.A. Cook is professor of medicine and medical education, Department of Medicine; associate director, Mayo Center for Online Learning; and research chair, Mayo Clinic Multidisciplinary Simulation Center, Mayo Clinic College of Medicine, Rochester, Minnesota. D.A. Reed is associate professor of medicine and medical education, Department of Medicine, and senior associate dean of academic affairs, Mayo Medical School, Mayo Clinic College of Medicine, Rochester, Minnesota
|
67
|
Cook DA, Hatala R. Got power? A systematic review of sample size adequacy in health professions education research. Adv Health Sci Educ Theory Pract 2015; 20:73-83. [PMID: 24819405 DOI: 10.1007/s10459-014-9509-5]
Abstract
Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011, and included all studies evaluating simulation-based education for health professionals in comparison with no intervention or another simulation intervention. Reviewers working in duplicate abstracted information to calculate standardized mean differences (SMDs). We included 897 original research studies. Among the 627 no-intervention-comparison studies, the median sample size was 25. Only two studies (0.3%) had ≥80% power to detect a small difference (SMD > 0.2 standard deviations) and 136 (22%) had power to detect a large difference (SMD > 0.8). 110 no-intervention-comparison studies failed to find a statistically significant difference, but none excluded a small difference and only 47 (43%) excluded a large difference. Among the 297 studies comparing alternate simulation approaches, the median sample size was 30. Only one study (0.3%) had ≥80% power to detect a small difference and 79 (27%) had power to detect a large difference. Of the 128 studies that did not detect a statistically significant effect, 4 (3%) excluded a small difference and 91 (71%) excluded a large difference. In conclusion, most education research studies are powered only to detect effects of large magnitude. For most studies that do not reach statistical significance, the possibility of large and important differences still exists.
Affiliation(s)
- David A Cook
- Division of General Internal Medicine, Mayo Clinic College of Medicine, Mayo 17, 200 First Street SW, Rochester, MN, 55905, USA
|
68
|
Amutio A, Martínez-Taboada C, Delgado LC, Hermosilla D, Mozaz MJ. Acceptability and Effectiveness of a Long-Term Educational Intervention to Reduce Physicians' Stress-Related Conditions. J Contin Educ Health Prof 2015; 35:255-260. [PMID: 26953856 DOI: 10.1097/ceh.0000000000000002]
Abstract
INTRODUCTION This study aimed to test the acceptability and effectiveness of a two-phase mindfulness-based stress reduction program (8-week initial treatment plus a 10-month maintenance phase) in alleviating work stress-related symptoms (i.e., burnout, heart rate [HR], and blood pressure [BP]) in a sample of 42 physicians. METHODS A randomized controlled trial and a simple pre-post design were used, respectively, for each of the two phases of the study. Outcome measures included the Five Facets of Mindfulness Questionnaire and the Maslach Burnout Questionnaire. HR and BP measures were also obtained in the experimental group by means of a digital monitor. RESULTS After the initial 8 weeks of treatment, significant improvements for the experimental group in mindfulness levels and reductions in emotional exhaustion, HR, and BP were obtained. Effect sizes (Cohen d) significantly increased over the 10-month maintenance period, especially for mindfulness and systolic BP. Acceptance was notably high (low attrition rate and high compliance with program activities). DISCUSSION Outcomes are significant in terms of practical consequences for reducing and controlling risks of developing burnout and cardiovascular disease in this population and enhancing well-being in life.
Affiliation(s)
- Alberto Amutio
- Dr. Amutio: Department of Social Psychology and Methodology of the Behavioral Sciences, Faculty of Psychology, University of the Basque Country (UPV/EHU), Spain. Dr. Martínez-Taboada: Department of Social Psychology and Methodology of the Behavioral Sciences, Faculty of Psychology, University of the Basque Country (UPV/EHU), Spain. Dr. Delgado: Department of Psychology and Sociology, University of Zaragoza-Unizar, Spain. Dr. Hermosilla: Department of Social Psychology and Methodology of the Behavioral Sciences, Faculty of Psychology, University of the Basque Country (UPV/EHU), Spain. Dr. Mozaz: Department of Basic Psychology, Faculty of Psychology, University of the Basque Country (UPV/EHU), Spain
|
69
|
Sullivan GM, Simpson D, Cook DA, DeIorio NM, Andolsek K, Opas L, Philibert I, Yarris LM. Redefining Quality in Medical Education Research: A Consumer's View. J Grad Med Educ 2014; 6:424-9. [PMID: 26294940 PMCID: PMC4542451 DOI: 10.4300/jgme-d-14-00339.1]
Abstract
BACKGROUND Despite an explosion of medical education research and publications, it is not known how medical educator consumers decide what to read or apply in their practice. OBJECTIVE To determine how consumers of medical education research define quality and value. METHODS Journal of Graduate Medical Education editors performed a literature search to identify articles on medical education research quality published between 2000 and 2013, surveyed medical educators for their criteria for judging quality, and led a consensus-building workshop at a 2013 Association of American Medical Colleges meeting to further explore how users defined quality in education research. The workshop used standard consensus-building techniques to reach concept saturation. Attendees then voted for the 3 concepts they valued most in medical education research. RESULTS The 110 survey responses generated a list of 37 overlapping features in 10 categories considered important aspects of quality. The literature search yielded 27 articles, including quality indexes, systematic and narrative reviews, and commentaries. Thirty-two participants, 12 facilitators, and 1 expert observer attended the workshop. Participants endorsed the following features of education research as being most valuable: (1) provocative or novel work that challenged established thinking; (2) adherence to sound research principles; (3) relevance to practice, role, or needs; (4) feasible, practical application in real-world settings; and (5) connection to a conceptual framework. CONCLUSIONS Medical educators placed high value on rigorous methods and conceptual frameworks, consistent with published quality indexes. They also valued innovative or provocative work, feasibility, and applicability to their setting. End-user opinions of quality may illuminate how educators translate knowledge into practice.
|
70
|
Yarris LM, Juve AM, Artino AR, Sullivan GM, Rougas S, Joyce B, Eva K. Expertise, Time, Money, Mentoring, and Reward: Systemic Barriers That Limit Education Researcher Productivity-Proceedings From the AAMC GEA Workshop. J Grad Med Educ 2014; 6:430-6. [PMID: 26279767 PMCID: PMC4535205 DOI: 10.4300/jgme-d-14-00340.1]
Abstract
BACKGROUND To further evolve in an evidence-based fashion, medical education needs to develop and evaluate new practices for teaching, learning, and assessment. However, educators face barriers in designing, conducting, and publishing education research. OBJECTIVE To explore the barriers medical educators face in formulating, conducting, and publishing high-quality medical education research, and to identify strategies for overcoming them. METHODS A consensus workshop was held November 5, 2013, at the Association of American Medical Colleges annual meeting. A working group of education research experts and educators completed a preconference literature review focusing on barriers to education research. During the workshop, consensus-based and small group techniques were used to refine the broad themes into content categories. Attendees then ranked the most important barriers and strategies for overcoming them with the highest potential impact. RESULTS Barriers participants faced in conducting quality education research included lack of (1) expertise, (2) time, (3) funding, (4) mentorship, and (5) reward. The strategy considered most effective in overcoming these barriers involved building communities of education researchers for collaboration and networking, and advocating for education researchers' interests. Other suggestions included trying to secure increased funding opportunities, developing mentoring programs, and encouraging mechanisms to ensure protected time. CONCLUSIONS Barriers to education research productivity clearly exist. Many appear to result from feelings of isolation that may be overcome with systemic efforts to develop and enable communities of practice across institutions. Finally, the theme of "reward" is novel and complex and may have implications for education research productivity.
|
71
|
Goebell PJ, Kamat AM, Sylvester RJ, Black P, Droller M, Godoy G, Hudson MA, Junker K, Kassouf W, Knowles MA, Schulz WA, Seiler R, Schmitz-Dräger BJ. Assessing the quality of studies on the diagnostic accuracy of tumor markers. Urol Oncol 2014; 32:1051-60. [PMID: 25159014 DOI: 10.1016/j.urolonc.2013.10.003]
Abstract
OBJECTIVES With rapidly increasing numbers of publications, assessments of study quality, reporting quality, and classification of studies according to their level of evidence or developmental stage have become key issues in weighing the relevance of new information reported. Diagnostic marker studies are often criticized for yielding highly discrepant and even controversial results. Much of this discrepancy has been attributed to differences in study quality. So far, numerous tools for measuring study quality have been developed, but few of them have been used for systematic reviews and meta-analyses. This is because most tools are complicated and time consuming, suffer from poor reproducibility, and do not permit quantitative scoring. METHODS The International Bladder Cancer Network (IBCN) has taken up this problem and has systematically identified the more commonly used tools developed since 2000. RESULTS In this review, those tools addressing study quality (Quality Assessment of Studies of Diagnostic Accuracy and Newcastle-Ottawa Scale), reporting quality (Standards for Reporting of Diagnostic Accuracy), and developmental stage (IBCN phases) of studies on diagnostic markers in bladder cancer are introduced and critically analyzed. Based upon this, the IBCN has launched an initiative to assess and validate existing tools with emphasis on diagnostic bladder cancer studies. CONCLUSIONS The development of simple and reproducible tools for quality assessment of diagnostic marker studies permitting quantitative scoring is suggested.
Affiliation(s)
- Peter J Goebell
- Urologische Klinik, Friedrich-Alexander-Universität, Erlangen, Germany
- Ashish M Kamat
- Department of Urology, Division of Surgery, The University of Texas MD Anderson Cancer Center, Houston, TX
- Peter Black
- Department of Urology, Division of Surgery, University of British Columbia, Vancouver, Canada
- Guilherme Godoy
- Scott Department of Urology, Baylor College of Medicine, Houston, TX
- M'Liss A Hudson
- Ochsner Clinic Foundation, Tom and Gayle Benson Cancer Center, New Orleans, LA
- Kerstin Junker
- Urologische Klinik und Poliklinik, Universität des Saarlandes, Saarland, Germany
- Wassim Kassouf
- Department of Surgery (Urology), McGill University, Montreal, Quebec, Canada
- Margaret A Knowles
- Section of Experimental Oncology, Leeds Institute of Cancer and Pathology, St James's University Hospital, Leeds, UK
- Wolfgang A Schulz
- Urologische Klinik und Poliklinik, Heinrich-Heine-Universität, Düsseldorf, Germany
- Roland Seiler
- Department of Urology, University of Berne, Berne, Switzerland
|
72
|
Cook DA. How much evidence does it take? A cumulative meta-analysis of outcomes of simulation-based education. Med Educ 2014; 48:750-60. [PMID: 25039731 DOI: 10.1111/medu.12473]
Abstract
CONTEXT Studies that investigate research questions that have already been resolved represent a waste of resources. However, the failure to collect sufficient evidence to resolve a given question results in ambiguity. OBJECTIVES The present study was conducted to reanalyse the results of a meta-analysis of simulation-based education (SBE) to determine: (i) whether researchers continue to replicate research studies after the answer to a research question has become known, and (ii) whether researchers perform enough replications to definitively answer important questions. METHODS A systematic search of multiple databases to May 2011 was conducted to identify original research evaluating SBE for health professionals in comparison with no intervention or any active intervention, using skill outcomes. Data were extracted by reviewers working in duplicate. Data synthesis involved a cumulative meta-analysis to illuminate patterns of evidence by sequentially adding studies according to a variable of interest (e.g. publication year) and re-calculating the pooled effect size with each addition. Cumulative meta-analysis by publication year was applied to 592 comparative studies using several thresholds of 'sufficiency', including: statistical significance; stable effect size classification and magnitude (Hedges' g ± 0.1), and precise estimates (confidence intervals of less than ± 0.2). RESULTS Among studies that compared the outcomes of SBE with those of no intervention, evidence supporting a favourable effect of SBE on skills existed as early as 1973 (one publication) and further evidence confirmed a quantitatively large effect of SBE by 1997 (28 studies). Since then, a further 404 studies were published. 
Among studies comparing SBE with non-simulation instruction, the effect initially favoured non-simulation training, but the addition of a third study in 1997 brought the pooled effect to slightly favour simulation, and by 2004 (14 studies) this effect was statistically significant (p < 0.05) and the magnitude had stabilised (small effect). A further 37 studies were published after 2004. By contrast, evidence from studies evaluating repetition continued to show borderline statistical significance and wide confidence intervals in 2011. CONCLUSIONS Some replication is necessary to obtain stable estimates of effect and to explore different contexts, but the number of studies of SBE often exceeds the minimum number of replications required.
Affiliation(s)
- David A Cook
- Division of General Internal Medicine, Mayo Clinic College of Medicine, Rochester, Minnesota, USA; Center for Online Learning, Mayo Clinic College of Medicine, Rochester, Minnesota, USA; Mayo Multidisciplinary Simulation Center, Mayo Clinic College of Medicine, Rochester, Minnesota, USA
|
73
|
Tolsgaard MG, Ku C, Woods NN, Kulasegaram KM, Brydges R, Ringsted C. Quality of randomised controlled trials in medical education reported between 2012 and 2013: a systematic review protocol. BMJ Open 2014; 4:e005155. [PMID: 25079932 PMCID: PMC4120313 DOI: 10.1136/bmjopen-2014-005155]
Abstract
INTRODUCTION Research in medical education has increased in volume over the past decades, but concerns have been raised regarding the quality of trials conducted within this field. Randomised controlled trials (RCTs) involving educational interventions that are reported in biomedical journals have been criticised for insufficient conceptual and theoretical frameworks. RCTs published in journals dedicated to medical education, on the other hand, have been questioned regarding their methodological rigour. The aim of this study is therefore to assess the quality of RCTs of educational interventions reported in 2012 and 2013 in journals dedicated to medical education compared to biomedical journals with respect to objective quality criteria. METHODS AND ANALYSIS RCTs published between 1 January 2012 and 31 December 2013 in English are included. The search strategy is developed with the help of experienced librarians to search online databases for key terms. All of the identified RCTs are screened based on their titles and abstracts individually by the authors and then compared in pairs to assess agreement. Data are extracted from the included RCTs by independently scoring each RCT using a data collection form. The data collection form consists of four steps. Step 1 includes confirmation of RCT eligibility; step 2 consists of the CONSORT checklist; step 3 consists of the Medical Education Research Study Quality Instrument framework; step 4 consists of a Medical Education Extension (MEdEx) to the CONSORT checklist. The MEdEx includes the following elements: description of scientific background, explanation of rationale, quality of research questions and hypotheses, clarity in the description of the intervention and control, and interpretation of results. ETHICS AND DISSEMINATION This review is the first to systematically examine the quality of RCTs conducted in medical education. 
We plan to disseminate the results through publications and presentation at relevant conferences. Ethical approval is not sought for this review.
Affiliation(s)
- Martin G Tolsgaard
- Centre for Clinical Education and the Juliane Marie Centre, Copenhagen University Hospital Rigshospitalet, Copenhagen, Denmark
- Cheryl Ku
- The Wilson Centre, University of Toronto and University Health Network, Toronto, Ontario, Canada
- Nicole N Woods
- Department of Surgery, The Wilson Centre, University of Toronto and University Health Network, Toronto, Ontario, Canada
- Kulamakan Mahan Kulasegaram
- Department of Family, Community Medicine and The Wilson Centre, University of Toronto and University Health Network, Toronto, Ontario, Canada
- Ryan Brydges
- Department of Medicine, The Wilson Centre, University of Toronto and University Health Network, Toronto, Ontario, Canada
- Charlotte Ringsted
- Department of Anesthesia, The Wilson Centre, University of Toronto and University Health Network, Toronto, Ontario, Canada
|
74
|
Abstract
OBJECTIVES Evaluating the patient impact of health professions education is a societal priority with many challenges. Researchers would benefit from a summary of topics studied and potential methodological problems. We sought to summarize key information on patient outcomes identified in a comprehensive systematic review of simulation-based instruction. DATA SOURCES Systematic search of MEDLINE, EMBASE, CINAHL, PsycINFO, Scopus, key journals, and bibliographies of previous reviews through May 2011. STUDY ELIGIBILITY Original research in any language measuring the direct effects on patients of simulation-based instruction for health professionals, in comparison with no intervention or other instruction. APPRAISAL AND SYNTHESIS Two reviewers independently abstracted information on learners, topics, study quality including unit of analysis, and validity evidence. We pooled outcomes using random effects. RESULTS From 10,903 articles screened, we identified 50 studies reporting patient outcomes for at least 3,221 trainees and 16,742 patients. Clinical topics included airway management (14 studies), gastrointestinal endoscopy (12), and central venous catheter insertion (8). There were 31 studies involving postgraduate physicians and seven studies each involving practicing physicians, nurses, and emergency medicine technicians. Fourteen studies (28%) used an appropriate unit of analysis. Measurement validity was supported in seven studies reporting content evidence, three reporting internal structure, and three reporting relations with other variables. The pooled Hedges' g effect size for 33 comparisons with no intervention was 0.47 (95% confidence interval [CI], 0.31-0.63); and for nine comparisons with non-simulation instruction, it was 0.36 (95% CI, -0.06 to 0.78). LIMITATIONS Focused field in education; high inconsistency (I² > 50% in most analyses). CONCLUSIONS Simulation-based education was associated with small to moderate patient benefits in comparison with no intervention and non-simulation instruction, although the latter did not reach statistical significance. Unit-of-analysis errors were common, and validity evidence was infrequently reported.
|
75
|
Cook DA, Brydges R, Zendejas B, Hamstra SJ, Hatala R. Technology-enhanced simulation to assess health professionals: a systematic review of validity evidence, research methods, and reporting quality. Acad Med 2013; 88:872-83. [PMID: 23619073 DOI: 10.1097/acm.0b013e31828ffdcf]
Abstract
PURPOSE To summarize the tool characteristics, sources of validity evidence, methodological quality, and reporting quality for studies of technology-enhanced simulation-based assessments for health professions learners. METHOD The authors conducted a systematic review, searching MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous reviews through May 2011. They selected original research in any language evaluating simulation-based assessment of practicing and student physicians, nurses, and other health professionals. Reviewers working in duplicate evaluated validity evidence using Messick's five-source framework; methodological quality using the Medical Education Research Study Quality Instrument and the revised Quality Assessment of Diagnostic Accuracy Studies; and reporting quality using the Standards for Reporting Diagnostic Accuracy and Guidelines for Reporting Reliability and Agreement Studies. RESULTS Of 417 studies, 350 (84%) involved physicians at some stage in training. Most focused on procedural skills, including minimally invasive surgery (N=142), open surgery (81), and endoscopy (67). Common elements of validity evidence included relations with trainee experience (N=306), content (142), relations with other measures (128), and interrater reliability (124). Of the 217 studies reporting more than one element of evidence, most were judged as having high or unclear risk of bias due to selective sampling (N=192) or test procedures (132). Only 64% proposed a plan for interpreting the evidence to be presented (validity argument). CONCLUSIONS Validity evidence for simulation-based assessments is sparse and is concentrated within specific specialties, tools, and sources of validity evidence. The methodological and reporting quality of assessment studies leaves much room for improvement.
Affiliation(s)
- David A Cook
- Office of Education Research, Mayo Clinic College of Medicine, Rochester, Minnesota 55905, USA.
|
77
|
Samaan Z, Mbuagbaw L, Kosa D, Borg Debono V, Dillenburg R, Zhang S, Fruci V, Dennis B, Bawor M, Thabane L. A systematic scoping review of adherence to reporting guidelines in health care literature. J Multidiscip Healthc 2013; 6:169-88. [PMID: 23671390 PMCID: PMC3649856 DOI: 10.2147/jmdh.s43952]
Abstract
Background Reporting guidelines have been available for the past 17 years since the inception of the Consolidated Standards of Reporting Trials statement in 1996. These guidelines were developed to improve the quality of reporting of studies in medical literature. Despite the widespread availability of these guidelines, the quality of reporting of medical literature has remained suboptimal. In this study, we assess current adherence to reporting guidelines; determine key factors associated with better adherence to these guidelines; and provide recommendations to enhance adherence to reporting guidelines for future studies. Methods We undertook a systematic scoping review of systematic reviews of adherence to reporting guidelines across different clinical areas and study designs. We searched four electronic databases (Cumulative Index to Nursing and Allied Health Literature, Web of Science, Embase, and Medline) from January 1996 to September 2012. Studies were included if they addressed adherence to one of the following guidelines: Consolidated Standards of Reporting Trials (CONSORT), Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), Quality of Reporting of Meta-analysis (QUOROM), Transparent Reporting of Evaluations with Nonrandomized Designs (TREND), Meta-analysis Of Observational Studies in Epidemiology (MOOSE), and Strengthening the Reporting of Observational Studies in Epidemiology (STROBE). A protocol for this study was devised. A literature search, data extraction, and quality assessment were performed independently by two authors in duplicate. This study's reporting follows the PRISMA guidelines. Results Our search retrieved 5159 titles, of which 50 were eligible. Overall, 86.0% of studies reported suboptimal levels of adherence to reporting guidelines. 
Factors associated with better adherence included journal impact factor and endorsement of guidelines, publication date, funding source, multisite studies, pharmacological interventions and larger studies. Conclusion Reporting guidelines in the clinical literature are important to improve the standards of reporting of clinical studies; however, adherence to these guidelines remains suboptimal. Action is therefore needed to enhance the adherence to these standards. Strategies to enhance adherence include journal editorial policies endorsing these guidelines.
Affiliation(s)
- Zainab Samaan
- Department of Psychiatry and Behavioral Neurosciences, McMaster University, Hamilton, ON, Canada; Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, ON, Canada; Population Genomics Program, McMaster University, Hamilton, ON, Canada
|
78
|
State of the evidence on simulation-based training for laparoscopic surgery: a systematic review. Ann Surg 2013; 257:586-93. [PMID: 23407298 DOI: 10.1097/sla.0b013e318288c40b]
Abstract
OBJECTIVE To summarize the outcomes and best practices of simulation training for laparoscopic surgery. BACKGROUND Simulation-based training for laparoscopic surgery has become a mainstay of surgical training, and much new evidence has accrued since previous reviews were published. METHODS We systematically searched the literature through May 2011 for studies evaluating simulation, in comparison with no intervention or an alternate training activity, for training health professionals in laparoscopic surgery. Outcomes were classified as satisfaction; skills (in a test setting) of time (to perform the task), process (eg, performance rating), and product (eg, knot strength); and behaviors when caring for patients. We used a random-effects model to pool effect sizes. RESULTS From 10,903 articles screened, we identified 219 eligible studies enrolling 7138 trainees, including 91 (42%) randomized trials. For comparisons with no intervention (n = 151 studies), the pooled effect size (ES) favored simulation for outcomes of knowledge (1.18; N = 9 studies), skills time (1.13; N = 89), skills process (1.23; N = 114), skills product (1.09; N = 7), behavior time (1.15; N = 7), behavior process (1.22; N = 15), and patient effects (1.28; N = 1), all P < 0.05. When compared with nonsimulation instruction (n = 3 studies), results significantly favored simulation for outcomes of skills time (ES, 0.75) and skills process (ES, 0.54). Comparisons between different simulation interventions (n = 79 studies) clarified best practices. For example, in comparison with virtual reality, box trainers have similar effects for process skills outcomes and seem to be superior for outcomes of satisfaction and skills time. CONCLUSIONS Simulation-based laparoscopic surgery training of health professionals has large benefits when compared with no intervention and is moderately more effective than nonsimulation instruction.
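The random-effects pooling of effect sizes described in this abstract can be sketched as follows. This is an illustrative DerSimonian-Laird implementation; the effect sizes and variances below are invented inputs, not data or code from the review itself.

```python
import math

def random_effects_pool(effects, variances):
    """Pool standardized effect sizes with a DerSimonian-Laird
    random-effects model. Inputs are per-study effect sizes and
    their within-study variances (illustrative values only)."""
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                         # truncated at zero
    # Random-effects weights incorporate tau^2
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

pooled, ci = random_effects_pool([1.0, 1.3, 1.1], [0.04, 0.09, 0.05])
```

When the studies are homogeneous (Q below its degrees of freedom), tau² truncates to zero and the result coincides with the fixed-effect estimate, as in this toy input.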
|
79
|
Ilgen JS, Sherbino J, Cook DA. Technology-enhanced simulation in emergency medicine: a systematic review and meta-analysis. Acad Emerg Med 2013; 20:117-27. [PMID: 23406070 DOI: 10.1111/acem.12076]
Abstract
OBJECTIVES Technology-enhanced simulation is used frequently in emergency medicine (EM) training programs, but evidence for its effectiveness remains unclear. The objective of this systematic review was to evaluate the effectiveness of technology-enhanced simulation for training in EM and to identify instructional design features associated with improved outcomes. METHODS The authors systematically searched MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. Original research articles in any language were selected if they compared simulation to no intervention or another educational activity for the purpose of training EM health professionals (including student and practicing physicians, midlevel providers, nurses, and prehospital providers). Reviewers evaluated study quality and abstracted information on learners, instructional design (curricular integration, feedback, repetitive practice, mastery learning), and outcomes. RESULTS From a collection of 10,903 articles, 85 eligible studies enrolling 6,099 EM learners were identified. Of these, 56 studies compared simulation to no intervention, 12 compared simulation with another form of instruction, and 19 compared two forms of simulation. Effect sizes were pooled using a random-effects model. Heterogeneity among these studies was large (I² ≥ 50%). Among studies comparing simulation to no intervention, pooled effect sizes were large (range = 1.13 to 1.48) for knowledge, time, and skills and small to moderate for behaviors with patients (0.62) and patient effects (0.43; all p < 0.02 except patient effects, p = 0.12). Among comparisons between simulation and other forms of instruction, pooled effect sizes were small (≤ 0.33) for knowledge, time, and process skills (all p > 0.1). Qualitative comparisons of different simulation curricula are limited, although feedback, mastery learning, and higher fidelity were associated with improved learning outcomes. CONCLUSIONS Technology-enhanced simulation for EM learners is associated with moderate or large favorable effects in comparison with no intervention and generally small and nonsignificant benefits in comparison with other instruction. Future research should investigate the features that lead to effective simulation-based instructional design.
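The I² ≥ 50% heterogeneity threshold cited in this abstract derives from Cochran's Q statistic. A minimal sketch, using hypothetical values of Q and study counts rather than anything from this meta-analysis:

```python
def i_squared(q, k):
    """I^2 heterogeneity statistic from Cochran's Q and the number of
    studies k: the percentage of total variability across studies that
    is due to heterogeneity rather than chance. Returned as a percent,
    truncated at 0 when Q falls below its degrees of freedom."""
    df = k - 1
    return max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0

# Hypothetical example: Q = 30 across 11 studies
example = i_squared(30, 11)   # (30 - 10) / 30 as a percentage
```

Values of roughly 50% or more are conventionally read as substantial heterogeneity, which is why the review flags I² ≥ 50% as large.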
Affiliation(s)
- Jonathan S. Ilgen
- Division of Emergency Medicine, Department of Medicine, University of Washington School of Medicine, Seattle, WA
- Jonathan Sherbino
- Division of Emergency Medicine, Department of Medicine, McMaster University, Hamilton, Ontario, Canada
|
80
|
Cook DA, West CP. Perspective: Reconsidering the focus on "outcomes research" in medical education: a cautionary note. Acad Med 2013; 88:162-7. [PMID: 23269304 DOI: 10.1097/acm.0b013e31827c3d78]
Abstract
Researchers in medical education have been placing increased emphasis on "outcomes research," or the observable impact of educational interventions on patient care. However, although patient outcomes are obviously important, they should not be the sole focus of attention in medical education research. The purpose of this perspective is both to highlight the limitations of outcomes research in medical education and to offer suggestions for striking a proper balance between learner-centered and patient-centered assessments. The authors cite five challenges to research using patient outcomes in medical education, namely (1) dilution (the progressively attenuated impact of education as it is filtered through other health care providers and systems), (2) inadequate sample size, (3) failure to establish a causal link, (4) potentially biased outcome selection, and (5) teaching to the test. Additionally, nonpatient outcomes continue to hold value, particularly in theory-building research and in the evaluation of program implementation. For educators selecting outcomes and instruments in medical education research, the authors suggest clarifying the study objective and conceptual framework before selecting outcomes and considering the development and use of behavioral and other intermediary outcomes. Deliberately weighing the available options will facilitate informed choices during the design of research that, in turn, informs the art and science of medical education.
Affiliation(s)
- David A Cook
- Office of Education Research, College of Medicine, Mayo Clinic, Rochester, Minnesota, USA.
|
81
|
Steckelberg A, Mühlhauser I, Albrecht M. [Do we want to know what we are doing? The evidence base of educational interventions]. Z Evid Fortbild Qual Gesundhwes 2013; 107:13-18. [PMID: 23415338 DOI: 10.1016/j.zefq.2012.12.004]
Abstract
BACKGROUND Educational practice is characterised by fashion, myths, and traditions; studies examining the efficacy of educational interventions are rare. OBJECTIVE We studied which educational interventions in the field of education and training were evaluated in randomised controlled trials (RCTs) during the past three years. METHODS Systematic searches were conducted in PubMed, PSYNDEX, PsycINFO, and the Education Resources Information Center (ERIC). The database searches were limited to the RCT study design and to trials published in German or English between January 2009 and February 2012. Studies with the following target groups were included: children, pupils, students, and employed persons; settings: pre-school institutions, schools, universities and universities of applied sciences, and settings of vocational or occupational education and training; interventions: educational interventions in these settings, as well as prevention programmes in schools and pre-school institutions. We excluded studies on patient education. Data collection was carried out using a data extraction sheet. Only predefined categories were used for the interventions; further categories were developed in a second step. The following data were recorded: target group, setting, type of intervention, country, institutions conducting the studies, and funding. Sample size calculations were documented as an indicator of study quality. Frequencies were calculated for the categories. RESULTS 259 RCTs carried out in 36 countries were included. About half of the educational studies (n=154) were initiated in the medical field. The majority of the 95 studies that addressed pre-schoolers and pupils evaluated prevention programmes (n=75). Only 16 of the 259 studies were conducted in Germany. Sample size calculations were reported in 85 studies. CONCLUSION As yet, only very few RCTs of educational interventions have been conducted; such studies are particularly lacking in Germany. The quality of the studies seems questionable. There is an urgent need for RCTs on educational interventions.
|
82
|
Akbari M, Shah S, Velayos FS, Mahadevan U, Cheifetz AS. Systematic review and meta-analysis on the effects of thiopurines on birth outcomes from female and male patients with inflammatory bowel disease. Inflamm Bowel Dis 2013; 19:15-22. [PMID: 22434610 DOI: 10.1002/ibd.22948]
Abstract
BACKGROUND Inflammatory bowel disease (IBD) affects people during their prime reproductive years. The thiopurines (6-mercaptopurine and azathioprine), commonly used for induction and maintenance of remission, are U.S. Food and Drug Administration (FDA) pregnancy category D, raising concern for fetal risk. We performed a systematic review and meta-analysis to evaluate the effects of thiopurine exposure during pregnancy or at the time of conception on three measures of fetal risk in women and men with IBD. METHODS A systematic search of PubMed and Web of Science using a combination of MeSH and text terms was performed to identify studies reporting birth outcomes from women and men with IBD exposed to thiopurines within 3 months of conception and/or during pregnancy. A meta-analysis was performed using a random-effects model to pool estimates and report odds ratios (ORs) for three outcomes in women (low birth weight [LBW], preterm birth, and congenital abnormalities) and one in men (congenital abnormalities). RESULTS In women with IBD exposed to thiopurines, the pooled ORs for LBW, preterm birth, and congenital abnormalities were 1.01 (95% confidence interval [CI] 0.96, 1.06), 1.67 (95% CI 1.26, 2.20), and 1.45 (95% CI 0.99, 2.13), respectively. In men, the pooled OR for congenital abnormalities was 1.87 (95% CI 0.67, 5.25). CONCLUSIONS Thiopurine exposure in women with IBD was not associated with LBW or congenital abnormalities but was associated with preterm birth. Exposure in men at the time of conception was not associated with congenital abnormalities.
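Pooled odds ratios with 95% CIs, like those reported in this abstract, are conventionally computed on the log-odds scale. The sketch below uses simple inverse-variance (fixed-effect) weighting for brevity, whereas the paper used a random-effects model; every input value is an invented example, not the IBD data above.

```python
import math

def pool_odds_ratios(ors, ci_los, ci_his):
    """Inverse-variance pooling of odds ratios on the log scale,
    recovering each study's standard error from its reported 95% CI.
    Returns the pooled OR and its 95% CI (illustrative fixed-effect
    version of the random-effects approach the paper describes)."""
    log_ors = [math.log(o) for o in ors]
    # SE of log(OR) from the width of the 95% CI on the log scale
    ses = [(math.log(hi) - math.log(lo)) / (2 * 1.96)
           for lo, hi in zip(ci_los, ci_his)]
    w = [1.0 / se ** 2 for se in ses]                    # inverse-variance weights
    pooled_log = sum(wi * l for wi, l in zip(w, log_ors)) / sum(w)
    se_pooled = math.sqrt(1.0 / sum(w))
    return (math.exp(pooled_log),
            (math.exp(pooled_log - 1.96 * se_pooled),
             math.exp(pooled_log + 1.96 * se_pooled)))

# Two hypothetical studies: OR 1.5 (1.1-2.1) and OR 1.8 (1.2-2.7)
or_, ci = pool_odds_ratios([1.5, 1.8], [1.1, 1.2], [2.1, 2.7])
```

Because pooling happens on the log scale, the pooled OR lands between the study ORs on a multiplicative rather than additive scale, and the exponentiated CI is asymmetric around it.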
Affiliation(s)
- Mona Akbari
- Department of Internal Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts 02215, USA
|
83
|
Yarris LM, Gruppen LD, Hamstra SJ, Anders Ericsson K, Cook DA. Overcoming barriers to addressing education problems with research design: a panel discussion. Acad Emerg Med 2012; 19:1344-9. [PMID: 23252365 DOI: 10.1111/acem.12025]
Abstract
A plenary panel session at the 2012 Academic Emergency Medicine consensus conference, "Education Research in Emergency Medicine: Opportunities, Challenges, and Strategies for Success," discussed barriers educators face in imagining, designing, and implementing studies to address educational challenges. This proceedings article presents a general approach to getting started in education research and discusses four examples of studies from the medical education literature, each illustrating a distinct way to approach a specific research question. The study designs used are applicable to a variety of education research problems in emergency medicine (EM). Potential applications of the studies, their effects, and lessons learned are also discussed.
Affiliation(s)
- Lalena M. Yarris
- Department of Emergency Medicine, Oregon Health & Science University, Portland, OR
- Larry D. Gruppen
- Department of Medical Education, University of Michigan Medical School, Ann Arbor, MI
- Stanley J. Hamstra
- Academy for Innovation in Medical Education, University of Ottawa Skills and Simulation Centre, Ottawa, Ontario, Canada
- David A. Cook
- Division of General Internal Medicine, Office of Education Research, Mayo Clinic College of Medicine, Rochester, MN
|
85
|
Fabry G, Fischer MR. The GMA Journal for Medical Education--recent achievements and future goals. GMS Z Med Ausbild 2012; 29:Doc60. [PMID: 22916086 PMCID: PMC3420122 DOI: 10.3205/zma000830]
|
86
|
Cost: the missing outcome in simulation-based medical education research: a systematic review. Surgery 2012; 153:160-76. [PMID: 22884087 DOI: 10.1016/j.surg.2012.06.025]
Abstract
BACKGROUND The costs involved with technology-enhanced simulation remain unknown. Appraising the value of simulation-based medical education (SBME) requires complete accounting and reporting of cost. We sought to summarize the quantity and quality of studies that contain an economic analysis of SBME for the training of health professions learners. METHODS We performed a systematic search of MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. Articles reporting original research in any language evaluating the cost of simulation, in comparison with nonsimulation instruction or another simulation intervention, for training practicing and student physicians, nurses, and other health professionals were selected. Reviewers working in duplicate evaluated study quality and abstracted information on learners, instructional design, cost elements, and outcomes. RESULTS From a pool of 10,903 articles we identified 967 comparative studies. Of these, 59 studies (6.1%) reported any cost elements and 15 (1.6%) provided information on cost compared with another instructional approach. We identified 11 reported cost components, most often the cost of the simulator (n = 42 studies; 71%) and training materials (n = 21; 36%). Ten potential cost components were never reported. The median number of cost components reported per study was 2 (range, 1-9). Only 12 studies (20%) reported cost in the Results section; most reported it in the Discussion (n = 34; 58%). CONCLUSION Cost reporting in SBME research is infrequent and incomplete. We propose a comprehensive model for accounting and reporting costs in SBME.
|
87
|
Cook DA. If you teach them, they will learn: why medical education needs comparative effectiveness research. Adv Health Sci Educ Theory Pract 2012; 17:305-10. [PMID: 22696095 DOI: 10.1007/s10459-012-9381-0]
|
88
|
Complementary and alternative medicine education for medical profession: systematic review. Evid Based Complement Alternat Med 2012; 2012:656812. [PMID: 22619692 PMCID: PMC3350858 DOI: 10.1155/2012/656812]
Abstract
Purpose. To help integrate traditional, complementary, and alternative medicine (TCAM) into health systems, efforts are being made to educate biomedical doctors (BMDs) and medical students about TCAM. We systematically evaluated the effect of TCAM education on BMDs' and medical students' attitudes, knowledge, and behavior regarding TCAM utilization and its integration with biomedical medicine.
Methods. Evaluative studies were identified from four databases. Methodological quality was assessed using the Medical Education Research Study Quality Instrument (MERSQI). Study outcomes were classified using Kirkpatrick's hierarchy.
Results. 3122 studies were identified, and 12 studies of mediocre quality met the inclusion criteria. Qualitative synthesis showed diverse approaches, including didactic and experiential learning, and variation in length, teacher background, and intensity of exposure. More positive attitudes and improved knowledge after the intervention were noted, especially when teachers were biomedically trained. However, few studies assessed behavior change objectively, and longer-term objective outcomes, such as impact on patient care, were not assessed.
Conclusions. The lack of objective and reliable instruments precludes firm conclusions on the effect of TCAM education on study participants. However, positive changes, although mostly subjectively reported, were noted in most studies. Future evaluations should use validated or objective outcome assessments and examine the value of using dual-trained instructors.
|
89
|
Ratnapradipa D, Dundulis WP, Ritzel DO, Haseeb A. The Role of Health Education in Addressing Uncertainty About Health and Cell Phone Use—A Commentary. Am J Health Educ 2012. [DOI: 10.1080/19325037.2012.10599212]
Affiliation(s)
- Dhitinut Ratnapradipa
- Department of Health Education and Recreation, Southern Illinois University Carbondale, Pulliam Hall 307, Carbondale, IL 62901
- Dale O. Ritzel
- Department of Health Education and Recreation, Southern Illinois University, Carbondale, IL 62901
- Abdul Haseeb
- Department of Family and Community Medicine, Southern Illinois University School of Medicine, Carbondale, IL 62901
|
90
|
Cook DA. Randomized controlled trials and meta-analysis in medical education: what role do they play? Med Teach 2012; 34:468-73. [PMID: 22489980 DOI: 10.3109/0142159x.2012.671978]
Abstract
Education researchers seek to understand what works, for whom, in what circumstances. Unfortunately, educational environments are complex, and research itself is highly context dependent. Faced with these challenges, some have argued that qualitative methods should supplant quantitative methods such as randomized controlled trials (RCTs) and meta-analysis. I disagree. Good qualitative and mixed-methods research is complementary to, rather than exclusive of, quantitative methods. The complexity and challenges we face should not beguile us into ignoring methods that provide strong evidence. What, then, is the proper role for RCTs and meta-analysis in medical education? First, the choice of study design depends on the research question. RCTs and meta-analysis are appropriate for many, but not all, study goals; they have compelling strengths but also numerous limitations. Second, strong methods will not compensate for a pointless question: RCTs do not advance the science when they make confounded comparisons or comparisons with no intervention. Third, clinical medicine now faces many of the same challenges we encounter in education, and we can learn much from other fields about how to handle complexity in RCTs. Finally, no single study will definitively answer any research question. We need carefully planned, theory-building, programmatic research, reflecting a variety of paradigms and approaches, as we accumulate evidence to change the art and science of education.
Affiliation(s)
- David A Cook
- Office of Education Research, Division of General Internal Medicine, Mayo Clinic College of Medicine, Mayo 17, 200 First Street SW, Rochester, MN 55905, USA.
|
91
|
Midwifery Education: A Research Agenda. Int J Childbirth 2012. [DOI: 10.1891/0886-6708.2.3.151]
|