1
Campbell SR, Castillo R, Lalani N, Ingledew PA. COVID-19 Effects on Medical Education: A Viral Transfer of Knowledge to Radiation Oncology. Int J Radiat Oncol Biol Phys 2022; 113:705-713. PMID: 35772437. PMCID: PMC9236201. DOI: 10.1016/j.ijrobp.2022.03.001.
Affiliation(s)
- Richard Castillo
- Department of Radiation Oncology, Emory University, Atlanta, Georgia
- Nafisha Lalani
- Department of Radiation Oncology, University of Ottawa, Ottawa, Ontario, Canada
- Paris-Ann Ingledew
- Department of Radiation Oncology, British Columbia Cancer Agency, Vancouver, British Columbia, Canada
2
Abstract
There is growing interest among neonatologists in training in echocardiography. Recommendations for training have been published by medical societies and working groups, but concerns exist about their feasibility in the face of limited resources. Simulators are increasingly used for training in medicine, including echocardiography, and have the potential to help overcome the shortage of training opportunities. We describe the two currently available echocardiography simulators designed for neonatology. Both systems are based on real three-dimensional echocardiographic data and use an electromagnetic tracking system. Although limited data exist proving their effectiveness, evidence from other disciplines supports this assumption.
Affiliation(s)
- Michael Weidenbach
- Department of Pediatric Cardiology, Heart Center Leipzig, University of Leipzig, Struempellstr. 39, Leipzig 04289, Germany.
- Christian Paech
- Department of Pediatric Cardiology, Heart Center Leipzig, University of Leipzig, Struempellstr. 39, Leipzig 04289, Germany
3
Yu M, Wilson E, Janssens S. Simulation-based educational package to improve delivery of the deeply impacted fetal head at caesarean section. Aust N Z J Obstet Gynaecol 2019; 59:308-311. PMID: 30773612. DOI: 10.1111/ajo.12946.
Abstract
Deeply impacted fetal head at caesarean section at full dilation is a rare obstetric emergency, and trainee exposure can be limited. We aimed to pilot and evaluate a hospital-based training program incorporating mastery learning principles for trainees performing caesarean section at full dilation. We demonstrated improvements in knowledge, skills and self-confidence, and believe this educational package shows promise as a component of obstetric training and warrants further exploration.
Affiliation(s)
- Michael Yu
- Mater Mother's Hospital, Brisbane, Queensland, Australia
- Erin Wilson
- Mater Research, Brisbane, Queensland, Australia; School of Medicine, University of Queensland, Brisbane, Queensland, Australia
- Sarah Janssens
- Mater Mother's Hospital, Brisbane, Queensland, Australia; School of Medicine, University of Queensland, Brisbane, Queensland, Australia; Mater Education, Brisbane, Queensland, Australia
4
Albarqouni L, Hoffmann T, Glasziou P. Evidence-based practice educational intervention studies: a systematic review of what is taught and how it is measured. BMC Med Educ 2018; 18:177. PMID: 30068343. PMCID: PMC6090869. DOI: 10.1186/s12909-018-1284-1.
Abstract
BACKGROUND Despite the established interest in evidence-based practice (EBP) as a core competence for clinicians, evidence for how best to teach and evaluate EBP remains weak. We sought to systematically assess coverage of the five EBP steps, review the outcome domains measured, and assess the properties of the instruments used in studies evaluating EBP educational interventions. METHODS We conducted a systematic review of controlled studies (i.e. studies with a separate control group) which had investigated the effect of EBP educational interventions. We used citation analysis technique and tracked the forward and backward citations of the index articles (i.e. the systematic reviews and primary studies included in an overview of the effect of EBP teaching) using Web of Science until May 2017. We extracted information on intervention content (grouped into the five EBP steps), and the outcome domains assessed. We also searched the literature for published reliability and validity data of the EBP instruments used. RESULTS Of 1831 records identified, 302 full-text articles were screened, and 85 included. Of these, 46 (54%) studies were randomised trials, 51 (60%) included postgraduate level participants, and 63 (75%) taught medical professionals. EBP Step 3 (critical appraisal) was the most frequently taught step (63 studies; 74%). Only 10 (12%) of the studies taught content which addressed all five EBP steps. Of the 85 studies, 52 (61%) evaluated EBP skills, 39 (46%) knowledge, 35 (41%) attitudes, 19 (22%) behaviours, 15 (18%) self-efficacy, and 7 (8%) measured reactions to EBP teaching delivery. Of the 24 instruments used in the included studies, 6 were high-quality (achieved ≥3 types of established validity evidence) and these were used in 14 (29%) of the 52 studies that measured EBP skills; 14 (41%) of the 39 studies that measured EBP knowledge; and 8 (26%) of the 35 studies that measured EBP attitude. 
CONCLUSIONS Most EBP educational interventions that have been evaluated in controlled studies teach only some of the EBP steps (predominantly critical appraisal of evidence) and do not use high-quality instruments to measure outcomes. Educational packages and instruments which address all five EBP steps are needed to improve EBP teaching.
Affiliation(s)
- Loai Albarqouni
- Centre for Research in Evidence Based Practice (CREBP), Faculty of Health Science and Medicine, Bond University, Gold Coast, Australia
- Tammy Hoffmann
- Centre for Research in Evidence Based Practice (CREBP), Faculty of Health Science and Medicine, Bond University, Gold Coast, Australia
- Paul Glasziou
- Centre for Research in Evidence Based Practice (CREBP), Faculty of Health Science and Medicine, Bond University, Gold Coast, Australia
5
Post JA, Wittich CM, Thomas KG, Dupras DM, Halvorsen AJ, Mandrekar JN, Oxentenko AS, Beckman TJ. Rating the Quality of Entrustable Professional Activities: Content Validation and Associations with the Clinical Context. J Gen Intern Med 2016; 31:518-23. PMID: 26902239. PMCID: PMC4835372. DOI: 10.1007/s11606-016-3611-8.
Abstract
BACKGROUND Entrustable professional activities (EPAs) have been developed to assess resident physicians with respect to Accreditation Council for Graduate Medical Education (ACGME) competencies and milestones. Although the feasibility of using EPAs has been reported, we are unaware of previous validation studies on EPAs or of potential associations between EPA quality scores and characteristics of educational programs. OBJECTIVES Our aim was to validate an instrument for assessing the quality of EPAs used to assess internal medicine residents, and to examine associations between EPA quality scores and features of rotations. DESIGN This was a prospective content validation study to design an instrument to measure the quality of EPAs written for assessing internal medicine residents. PARTICIPANTS Residency leadership at Mayo Clinic, Rochester, participated in this study, including the program director, associate program directors, and individual rotation directors. INTERVENTIONS The authors reviewed the salient literature, developed items to reflect domains of EPAs useful for assessment, and refined the instrument through further testing. Each participating rotation director created EPAs that they felt would be meaningful for assessing learner performance in their area. The resulting 229 EPAs were then rated with the QUEPA instrument to assess the quality of each EPA. MAIN MEASURES Performance characteristics of the QUEPA are reported. Quality ratings of EPAs were compared across primary ACGME competency, inpatient versus outpatient setting, and specialty type. KEY RESULTS QUEPA tool scores demonstrated excellent reliability (ICC range 0.72 to 0.94). Inpatient-focused EPAs were rated higher than outpatient-focused EPAs (3.88 vs 3.66; p = 0.03). Medical knowledge EPAs scored significantly lower than EPAs assessing other competencies (3.34 vs 4.00; p < 0.0001).
CONCLUSIONS The QUEPA tool is supported by good validity evidence and may help in rating the quality of EPAs developed by individual programs. Programs should take care when writing EPAs for the outpatient setting or to assess medical knowledge, as these tended to be rated lower.
Affiliation(s)
- Jason A Post
- College of Medicine, Department of Internal Medicine, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA.
- Christopher M Wittich
- College of Medicine, Department of Internal Medicine, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Kris G Thomas
- College of Medicine, Department of Internal Medicine, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Denise M Dupras
- College of Medicine, Department of Internal Medicine, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Andrew J Halvorsen
- College of Medicine, Department of Internal Medicine, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Jay N Mandrekar
- College of Medicine, Department of Health Sciences Research, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Amy S Oxentenko
- College of Medicine, Department of Internal Medicine, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
- Thomas J Beckman
- College of Medicine, Department of Internal Medicine, Mayo Clinic, 200 1st St SW, Rochester, MN, 55905, USA
6
Ratelle JT, Wittich CM, Yu RC, Newman JS, Jenkins SM, Beckman TJ. Associations between teaching effectiveness scores and characteristics of presentations in hospital medicine continuing education. J Hosp Med 2015; 10:569-73. PMID: 26014666. DOI: 10.1002/jhm.2391.
Abstract
BACKGROUND There is little research regarding characteristics of effective continuing medical education (CME) presentations in hospital medicine (HM). Therefore, we sought to identify associations between validated CME teaching effectiveness scores and characteristics of CME presentations in the field of HM. DESIGN/SETTING This was a cross-sectional study of participants and didactic presentations from a national HM CME course in 2014. MEASUREMENTS Participants provided CME teaching effectiveness (CMETE) ratings using an instrument with known validity evidence. Overall CMETE scores (5-point scale: 1 = strongly disagree; 5 = strongly agree) were averaged for each presentation, and associations between scores and presentation characteristics were determined using the Kruskal-Wallis test. The threshold for statistical significance was set at P < 0.05. RESULTS A total of 277 out of 368 participants (75.3%) completed evaluations for the 32 presentations. CMETE scores (mean [standard deviation]) were significantly associated with the use of audience response (4.64 [0.16]) versus no audience response (4.49 [0.16]; P = 0.01), longer presentations (≥30 minutes: 4.67 [0.13] vs <30 minutes: 4.51 [0.18]; P = 0.02), and larger number of slides (≥50: 4.66 [0.17] vs <50: 4.55 [0.17]; P = 0.04). There were no significant associations between CMETE scores and use of clinical cases, defined goals, or summary slides. CONCLUSIONS To our knowledge, this is the first study regarding associations between validated teaching effectiveness scores and characteristics of effective CME presentations in HM. Our findings, which support previous research in other fields, indicate that CME presentations may be improved by increasing interactivity through the use of audience response systems and allowing longer presentations.
Affiliation(s)
- John T Ratelle
- Division of Hospital Internal Medicine, Department of Medicine, Mayo Clinic, Rochester, Minnesota
- Christopher M Wittich
- Division of General Internal Medicine, Department of Medicine, Mayo Clinic, Rochester, Minnesota
- Roger C Yu
- Division of Hospital Internal Medicine, Department of Medicine, Mayo Clinic, Rochester, Minnesota
- James S Newman
- Division of Hospital Internal Medicine, Department of Medicine, Mayo Clinic, Rochester, Minnesota
- Sarah M Jenkins
- Division of Biomedical Statistics and Informatics, Department of Health Sciences Research, Mayo Clinic, Rochester, Minnesota
- Thomas J Beckman
- Division of General Internal Medicine, Department of Medicine, Mayo Clinic, Rochester, Minnesota
7
McLaughlin K, Coderre S. Finding the middle path in tracking former patients in the electronic health record for the purpose of learning. Acad Med 2015; 90:1007-1009. PMID: 25565264. DOI: 10.1097/acm.0000000000000634.
Abstract
As medical trainees gain clinical experience, they increasingly form diagnoses based on their association with predisposing conditions and clinical features rather than pathophysiological explanations. Knowledge of these associations is housed as scripts in long-term memory, and data from the expertise literature imply that expert performance is largely explained by experts possessing more accurate scripts. In rotation-based clerkships, students typically spend a short period of time involved in the care of patients and are frequently deprived of the opportunity to observe the evolution and resolution of illness and the correct association between predisposing conditions, clinical features, and final diagnosis that is required for accurate script formation. Thanks to the introduction of an electronic health record (EHR), students now have the opportunity to track former patients until the final diagnosis and response to treatment is known. Although former patients are unlikely to benefit from being tracked by medical students, this type of learning experience may help students form more accurate scripts and, thus, improve their diagnostic performance on subsequent patients. But, because the purpose of EHRs is to improve clinical care of patients, is it ethically acceptable to allow students no longer involved in the care of patients to use these data solely for the purposes of learning? In this Commentary, the authors highlight the potential for ethical conflict whenever clinical care and teaching mingle, and discuss how these competing interests can still be balanced in the face of advancing technology by applying universal ethical principles and following the advice of Hippocrates.
Affiliation(s)
- Kevin McLaughlin
- K. McLaughlin is assistant dean of undergraduate medical education, Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada. S. Coderre is associate dean of undergraduate medical education, Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada
8
Brydges R, Hatala R, Zendejas B, Erwin PJ, Cook DA. Linking simulation-based educational assessments and patient-related outcomes: a systematic review and meta-analysis. Acad Med 2015; 90:246-56. PMID: 25374041. DOI: 10.1097/acm.0000000000000549.
Abstract
PURPOSE To examine the evidence supporting the use of simulation-based assessments as surrogates for patient-related outcomes assessed in the workplace. METHOD The authors systematically searched MEDLINE, EMBASE, Scopus, and key journals through February 26, 2013. They included original studies that assessed health professionals and trainees using simulation and then linked those scores with patient-related outcomes assessed in the workplace. Two reviewers independently extracted information on participants, tasks, validity evidence, study quality, patient-related and simulation-based outcomes, and magnitude of correlation. All correlations were pooled using random-effects meta-analysis. RESULTS Of 11,628 potentially relevant articles, the 33 included studies enrolled 1,203 participants, including postgraduate physicians (n = 24 studies), practicing physicians (n = 8), medical students (n = 6), dentists (n = 2), and nurses (n = 1). The pooled correlation for provider behaviors was 0.51 (95% confidence interval [CI], 0.38 to 0.62; n = 27 studies); for time behaviors, 0.44 (95% CI, 0.15 to 0.66; n = 7); and for patient outcomes, 0.24 (95% CI, -0.02 to 0.47; n = 5). Most reported validity evidence was favorable, though studies often included only correlational evidence. Validity evidence of internal structure (n = 13 studies), content (n = 12), response process (n = 2), and consequences (n = 1) were reported less often. Three tools showed large pooled correlations and favorable (albeit incomplete) validity evidence. CONCLUSIONS Simulation-based assessments often correlate positively with patient-related outcomes. Although these surrogates are imperfect, tools with established validity evidence may replace workplace-based assessments for evaluating select procedural skills.
Affiliation(s)
- Ryan Brydges
- Dr. Brydges is assistant professor, Department of Medicine, University of Toronto, and scientist, Wilson Centre, University Health Network, Toronto, Ontario, Canada. Dr. Hatala is associate professor, Department of Medicine, University of British Columbia, Vancouver, British Columbia, Canada. Dr. Zendejas is a resident, Department of Surgery, Mayo Clinic College of Medicine, Rochester, Minnesota. Ms. Erwin is assistant professor of medical education, Mayo Clinic Libraries, Mayo Clinic College of Medicine, Rochester, Minnesota. Dr. Cook is professor of medicine and medical education and consultant, Division of General Internal Medicine, Mayo Clinic College of Medicine, Rochester, Minnesota
9
Gibbs T, Brigden D, Hellenberg D. Opening up the debate on medical education in South Africa. S Afr Fam Pract (2004) 2014. DOI: 10.1080/20786204.2004.10873144.
10
Mattick K, Barnes R, Dieppe P. Medical education: a particularly complex intervention to research. Adv Health Sci Educ Theory Pract 2013; 18:769-778. PMID: 23086398. DOI: 10.1007/s10459-012-9415-7.
Abstract
Previous debate has explored whether medical education research should become more like health services research in terms of frameworks, collaborations and methodologies. Notable recent changes in health services research include an increasing emphasis on complex interventions, defined as interventions that involve more than one component. The purpose of this study was to explore the extent of thinking about medical education as a complex intervention and to analyse medical education research to determine whether its collaborations and methodologies are becoming more like health services research. Research articles published in three journals over 2 years were analysed to determine the purpose of the research in relation to a framework for evaluating complex interventions, the degree of collaboration, and the methodology. Most studies aimed to develop theory or assess effectiveness and many categories of the complex interventions framework were not represented in the medical education research literature. Studies usually involved only one research site and were predominantly quantitative but not experimental or quasi-experimental. Whilst medical education research has not moved significantly in the direction of health services research over recent years, the complex interventions lens provided insights into why this might be so (namely the significant challenges associated with researching medical education). We recommend that medical education researchers work within a complex interventions framework and look to research fields with similar challenges (e.g. the study of chronic illness in a changing context) for ideas about theories, frameworks, methodologies and collaborations that can illuminate the field of medical education research.
Affiliation(s)
- Karen Mattick
- Medical School, St Luke's Campus, University of Exeter, Exeter, UK
11
Abstract
OBJECTIVES Evaluating the patient impact of health professions education is a societal priority with many challenges. Researchers would benefit from a summary of topics studied and potential methodological problems. We sought to summarize key information on patient outcomes identified in a comprehensive systematic review of simulation-based instruction. DATA SOURCES Systematic search of MEDLINE, EMBASE, CINAHL, PsycINFO, Scopus, key journals, and bibliographies of previous reviews through May 2011. STUDY ELIGIBILITY Original research in any language measuring the direct effects on patients of simulation-based instruction for health professionals, in comparison with no intervention or other instruction. APPRAISAL AND SYNTHESIS Two reviewers independently abstracted information on learners, topics, study quality including unit of analysis, and validity evidence. We pooled outcomes using random-effects meta-analysis. RESULTS From 10,903 articles screened, we identified 50 studies reporting patient outcomes for at least 3,221 trainees and 16,742 patients. Clinical topics included airway management (14 studies), gastrointestinal endoscopy (12), and central venous catheter insertion (8). There were 31 studies involving postgraduate physicians and seven studies each involving practicing physicians, nurses, and emergency medicine technicians. Fourteen studies (28%) used an appropriate unit of analysis. Measurement validity was supported in seven studies reporting content evidence, three reporting internal structure, and three reporting relations with other variables. The pooled Hedges' g effect size for 33 comparisons with no intervention was 0.47 (95% confidence interval [CI], 0.31-0.63); for nine comparisons with non-simulation instruction, it was 0.36 (95% CI, -0.06 to 0.78). LIMITATIONS Focused field in education; high inconsistency (I² > 50% in most analyses).
CONCLUSIONS Simulation-based education was associated with small to moderate patient benefits in comparison with no intervention and non-simulation instruction, although the latter comparison did not reach statistical significance. Unit-of-analysis errors were common, and validity evidence was infrequently reported.
12
Cook DA, West CP. Perspective: Reconsidering the focus on "outcomes research" in medical education: a cautionary note. Acad Med 2013; 88:162-7. PMID: 23269304. DOI: 10.1097/acm.0b013e31827c3d78.
Abstract
Researchers in medical education have been placing increased emphasis on "outcomes research," or the observable impact of educational interventions on patient care. However, although patient outcomes are obviously important, they should not be the sole focus of attention in medical education research. The purpose of this perspective is both to highlight the limitations of outcomes research in medical education and to offer suggestions to facilitate a proper balance between learner-centered and patient-centered assessments. The authors cite five challenges to research using patient outcomes in medical education, namely (1) dilution (the progressively attenuated impact of education as filtered through other health care providers and systems), (2) inadequate sample size, (3) failure to establish a causal link, (4) potentially biased outcome selection, and (5) teaching to the test. Additionally, nonpatient outcomes continue to hold value, particularly in theory-building research and in the evaluation of program implementation. To educators selecting outcomes and instruments in medical education research, the authors offer suggestions including to clarify the study objective and conceptual framework before selecting outcomes, and to consider the development and use of behavioral and other intermediary outcomes. Deliberately weighing the available options will facilitate informed choices during the design of research that, in turn, informs the art and science of medical education.
Affiliation(s)
- David A Cook
- Office of Education Research, College of Medicine, Mayo Clinic, Rochester, Minnesota, USA.
13
McLaughlin K, Zanussi L. Taking the middle path in evaluating technology in medical education. Adv Health Sci Educ Theory Pract 2012; 17:607-609. PMID: 22948858. DOI: 10.1007/s10459-012-9378-8.
Affiliation(s)
- Kevin McLaughlin
- Office of Undergraduate Medical Education, University of Calgary, Health Sciences Centre, 3330 Hospital Drive NW, Calgary, AB T2N 4N1, Canada.
14
Wittich CM, Lopez-Jimenez F, Decker LK, Szostek JH, Mandrekar JN, Morgenthaler TI, Beckman TJ. Measuring faculty reflection on adverse patient events: development and initial validation of a case-based learning system. J Gen Intern Med 2011; 26:293-8. PMID: 20978863. PMCID: PMC3043183. DOI: 10.1007/s11606-010-1548-x.
Abstract
BACKGROUND Critical reflection by faculty physicians on adverse patient events is important for changing physicians' behaviors. However, there is little research regarding physician reflection on quality improvement (QI). OBJECTIVE To develop and validate a computerized case-based learning system (CBLS) to measure faculty physicians' reflections on adverse patient events. DESIGN Prospective validation study. PARTICIPANTS Staff physicians in the Department of Medicine at Mayo Clinic, Rochester. MAIN MEASURES The CBLS was developed by Mayo Clinic information technology, medical education, and QI specialists. The reflection questionnaire, adapted from a previously validated instrument, contained eight items structured on five-point scales. Three cases, representing actual adverse events, were developed based on the most common error types: systems, medication, and diagnostic. In 2009, all Mayo Clinic hospital medicine, non-interventional cardiology, and pulmonary faculty were invited to participate. Faculty reviewed each case, determined the next management step, rated case generalizability and relevance, and completed the reflection questionnaire. Factor analysis and internal consistency reliability were calculated, and associations between reflection scores and characteristics of faculty and patient cases were determined. KEY RESULTS Forty-four faculty completed 107 case reflections. The CBLS was rated as average to excellent in 95 of 104 (91.3%) completed satisfaction surveys. Factor analysis revealed two levels of reflection: minimal and high. Internal consistency reliability was very good (overall Cronbach's α = 0.77). Item mean scores ranged from 2.89 to 3.73 on a five-point scale. The overall reflection score was 3.41 (standard deviation 0.64). Reflection scores were positively associated with case generalizability (p = 0.001) and case relevance (p = 0.02).
CONCLUSIONS The CBLS is a valid method for stratifying faculty physicians' levels of reflection on adverse patient events. Reflection scores are associated with case generalizability and relevance, indicating that reflection improves with pertinent patient encounters. We anticipate that this instrument will be useful in future research on QI among low- versus high-reflecting physicians.
Affiliation(s)
- Christopher M Wittich
- Department of Internal Medicine, Division of General Internal Medicine, Mayo Clinic College of Medicine, 200 First Street SW, Rochester, MN 55905, USA.
15
Maudsley G. Mixing it but not mixed-up: mixed methods research in medical education (a critical narrative review). Med Teach 2011; 33:e92-104. PMID: 21275539. DOI: 10.3109/0142159x.2011.542523.
Abstract
BACKGROUND Some important research questions in medical education and health services research need 'mixed methods research' (particularly synthesizing quantitative and qualitative findings). The approach is not new, but should be more explicitly reported. AIM The broad search question here, of a disjointed literature, was thus: What is mixed methods research - how should it relate to medical education research?, focused on explicit acknowledgement of 'mixing'. METHODS Literature searching focused on Web of Knowledge supplemented by other databases across disciplines. FINDINGS Five main messages emerged: - Thinking quantitative and qualitative, not quantitative versus qualitative - Appreciating that mixed methods research blends different knowledge claims, enquiry strategies, and methods - Using a 'horses for courses' [whatever works] approach to the question, and clarifying the mix - Appreciating how medical education research competes with the 'evidence-based' movement, health services research, and the 'RCT' - Being more explicit about the role of mixed methods in medical education research, and the required expertise CONCLUSION Mixed methods research is valuable, yet the literature relevant to medical education is fragmented and poorly indexed. The required time, effort, expertise, and techniques deserve better recognition. More write-ups should explicitly discuss the 'mixing' (particularly of findings), rather than report separate components.
Affiliation(s)
- Gillian Maudsley
- Division of Public Health, Whelan Building, Quadrangle, The University of Liverpool, Liverpool L69 3GB, UK.
|
16
|
Cook DA, Andriole DA, Durning SJ, Roberts NK, Triola MM. Longitudinal research databases in medical education: facilitating the study of educational outcomes over time and across institutions. Acad Med 2010; 85:1340-1346. [PMID: 20671463] [DOI: 10.1097/acm.0b013e3181e5c050]
Abstract
Many education research questions cannot be answered using participants from one institution or short periods of follow-up. In response to societal demands for accountability and evidence of effectiveness, new models of research must be developed to study the outcomes of educational activities. Following the 2007 Millennium Conference on Medical Education Research, organizers assigned a task force to explore the use of longitudinal databases in education research. This article summarizes the task force's findings. Similar to the Framingham studies in clinical medicine, longitudinal databases assemble prospectively collected information to retrospectively answer questions of interest. Many studies using such databases have been published. The task force identified three general approaches to database-type research. First, institutions can obtain identified information from existing sources, link it with school-specific information and other identified information, deidentify it, and merge it with similar information from other collaborating schools. Second, researchers can obtain from existing sources deidentified information on large samples and explore associations within this dataset. Third, investigators can design and implement databases to prospectively collect trainee information over time and across multiple institutions for the purpose of education research. Although costly, such comprehensive, purpose-built databases would ensure the availability of information needed to answer a variety of medical education research questions. Millennium Conference participants believed that stakeholders should explore the funding and development of such prospective databases. In the meantime, education researchers should use existing sources of individualized learner data to better understand how to develop competent, compassionate clinicians.
Affiliation(s)
- David A Cook
- Division of General Internal Medicine, Mayo Clinic, Rochester, MN, USA.
|
17
|
Traynor R, Eva KW. The evolving field of medical education research. Biochem Mol Biol Educ 2010; 38:211-215. [PMID: 21567830] [DOI: 10.1002/bmb.20422]
Affiliation(s)
- Robyn Traynor
- Department of Psychology, McMaster University, Hamilton, Canada
|
18
|
Cook DA, Levinson AJ, Garside S, Dupras DM, Erwin PJ, Montori VM. Instructional design variations in internet-based learning for health professions education: a systematic review and meta-analysis. Acad Med 2010; 85:909-922. [PMID: 20520049] [DOI: 10.1097/acm.0b013e3181d6c319]
Abstract
PURPOSE A recent systematic review (2008) described the effectiveness of Internet-based learning (IBL) in health professions education. A comprehensive synthesis of research investigating how to improve IBL is needed. This systematic review sought to provide such a synthesis. METHOD The authors searched MEDLINE, CINAHL, EMBASE, Web of Science, Scopus, ERIC, TimeLit, and the University of Toronto Research and Development Resource Base for articles published from 1990 through November 2008. They included all studies quantifying the effect of IBL compared with another Internet-based or computer-assisted instructional intervention on practicing and student physicians, nurses, pharmacists, dentists, and other health professionals. Reviewers working independently and in duplicate abstracted information, coded study quality, and grouped studies according to inductively identified themes. RESULTS From 2,705 articles, the authors identified 51 eligible studies, including 30 randomized trials. The pooled effect size (ES) for learning outcomes in 15 studies investigating high versus low interactivity was 0.27 (95% confidence interval, 0.08-0.46; P = .006). Also associated with higher learning were practice exercises (ES 0.40 [0.08-0.71; P = .01]; 10 studies), feedback (ES 0.68 [0.01-1.35; P = .047]; 2 studies), and repetition of study material (ES 0.19 [0.09-0.30; P < .001]; 2 studies). The ES was 0.26 (-0.62 to 1.13; P = .57) for three studies examining online discussion. Inconsistency was large (I² ≥ 89%) in most analyses. Meta-analyses for other themes generally yielded imprecise results. CONCLUSIONS Interactivity, practice exercises, repetition, and feedback seem to be associated with improved learning outcomes, although inconsistency across studies tempers conclusions. Evidence for other instructional variations remains inconclusive.
Affiliation(s)
- David A Cook
- Office of Education Research, College of Medicine, Mayo Clinic, Rochester, Minnesota 55905, USA.
|
19
|
Bierer SB, Chen HC. How to measure success: the impact of scholarly concentrations on students--a literature review. Acad Med 2010; 85:438-452. [PMID: 20182116] [DOI: 10.1097/acm.0b013e3181cccbd4]
Abstract
PURPOSE Scholarly concentrations (SCs) are elective or required curricular experiences that give students opportunities to study subjects in-depth beyond the conventional medical curriculum and require them to complete an independent scholarly project. This literature review explores the question, "What impact do SC programs have on medical students?" METHOD In 2008, the authors retrieved published articles using Medline, ERIC, and PsycINFO electronic databases and scanned reference lists to locate additional citations. They extracted data from selected articles using a structured form and used Kirkpatrick's evaluation model to organize learner outcomes into four categories: reactions, learning, behavior, and results. RESULTS Of 1,640 citations, 82 full-text papers were considered, and 39 studies met inclusion criteria. Most articles described SC programs that offered students research opportunities. Fourteen articles provided evidence that SC experiences influenced students' choice of clinical specialty or fostered their interest in research. Eight articles reported that SCs improved students' understanding of research principles and methods. Nineteen articles reported publications and presentations to document students' ability to apply acquired knowledge and skills. Twelve studies confirmed the entry of SC graduates into academic medicine with continued engagement in research or success in obtaining grant funding. Students' criticisms focused on requiring research during clinical training and the effort needed to complete scholarly projects. CONCLUSIONS The diversity of articles and variable results prevent definitive conclusions about the value of SCs. Findings suggest several implications for future SC program evaluations and educational research. The authors advocate increased rigor in evaluation designs to demonstrate SCs' true impact.
Affiliation(s)
- S Beth Bierer
- Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland Clinic, Cleveland, Ohio 44195, USA.
|
20
|
Abstract
At one time or another, nearly all educators will need to evaluate an educational program to determine its merit or worth. These tips will help readers collect information to inform a meaningful evaluation, whether for local use or broad dissemination (i.e., research). The two most important questions in any evaluation are, 'Whose opinion matters?' and 'What would really be meaningful to them?' Other key steps include getting input from others, focusing on desired outcomes before selecting instruments, considering the validity or trustworthiness of the data, and pilot testing the evaluation process.
Affiliation(s)
- David A Cook
- Mayo Clinic College of Medicine, Rochester, MN 55905, USA.
|
21
|
22
|
Cook DA, Bowen JL, Gerrity MS, Kalet AL, Kogan JR, Spickard A, Wayne DB. Proposed standards for medical education submissions to the Journal of General Internal Medicine. J Gen Intern Med 2008; 23:908-913. [PMID: 18612716] [PMCID: PMC2517930] [DOI: 10.1007/s11606-008-0676-z]
Abstract
To help authors design rigorous studies and prepare clear and informative manuscripts, improve the transparency of editorial decisions, and raise the bar on educational scholarship, the Deputy Editors of the Journal of General Internal Medicine articulate standards for medical education submissions to the Journal. General standards include: (1) quality questions, (2) quality methods to match the questions, (3) insightful interpretation of findings, (4) transparent, unbiased reporting, and (5) attention to human subjects' protection and ethical research conduct. Additional standards for specific study types are described. We hope these proposed standards will generate discussion that will foster their continued evolution.
Affiliation(s)
- David A Cook
- Office of Education Research and Division of General Internal Medicine, Mayo Clinic College of Medicine, Rochester, MN, USA.
|
23
|
Cook DA, Bordage G, Schmidt HG. Description, justification and clarification: a framework for classifying the purposes of research in medical education. Med Educ 2008; 42:128-133. [PMID: 18194162] [DOI: 10.1111/j.1365-2923.2007.02974.x]
Abstract
CONTEXT Authors have questioned the degree to which medical education research informs practice and advances the science of medical education. OBJECTIVE This study aims to propose a framework for classifying the purposes of education research and to quantify the frequencies of purposes among medical education experiments. METHODS We looked at articles published in 2003 and 2004 in Academic Medicine, Advances in Health Sciences Education, American Journal of Surgery, Journal of General Internal Medicine, Medical Education and Teaching and Learning in Medicine (1459 articles). From the 185 articles describing education experiments, a random sample of 110 was selected. The purpose of each study was classified as description ('What was done?'), justification ('Did it work?') or clarification ('Why or how did it work?'). Educational topics were identified inductively and each study was classified accordingly. RESULTS Of the 105 articles suitable for review, 75 (72%) were justification studies, 17 (16%) were description studies, and 13 (12%) were clarification studies. Experimental studies of assessment methods (5/6, 83%) and interventions aimed at knowledge and attitudes (5/28, 18%) were more likely to be clarification studies than were studies addressing other educational topics (< 8%). CONCLUSIONS Clarification studies are uncommon in experimental studies in medical education. Studies with this purpose (i.e. studies asking: 'How and why does it work?') are needed to deepen our understanding and advance the art and science of medical education. We hope that this framework stimulates education scholars to reflect on the purpose of their inquiry and the research questions they ask, and to strive to ask more clarification questions.
Affiliation(s)
- David A Cook
- Division of General Internal Medicine, Mayo Clinic College of Medicine, Rochester, MN 55905, USA.
|
24
|
Beckman TJ, Cook DA. Developing scholarly projects in education: a primer for medical teachers. Med Teach 2007; 29:210-218. [PMID: 17701635] [DOI: 10.1080/01421590701291469]
Abstract
Boyer and Glassick's broad definition of and standards for assessing scholarship apply to all aspects of education. Research on the quality of published medical education studies also reveals fundamentally important elements to address. In this article a three-step approach to developing medical education projects is proposed: refine the scholarly question, identify appropriate designs and methods, and select outcomes. Refining the scholarly question requires careful attention to literature review, conceptual framework, and statements of problem and study intent. The authors emphasize statement of study intent, which is a study's focal point, and conceptual framework, which situates a project within a theoretical context and provides a means for interpreting the results. They then review study designs and methods commonly used in education projects. They conclude with outcomes, which should be distinguished from assessment methods and instruments, and are separated into Kirkpatrick's hierarchy of reaction, learning, behavior and results.
|
25
|
26
|
Watt GCM. Where health services research has led, medical education research may follow. Med Educ 2005; 39:555-556. [PMID: 15910429] [DOI: 10.1111/j.1365-2929.2005.02188.x]
Affiliation(s)
- Graham C M Watt
- General Practice and Primary Care, Division of Community-based Sciences, University of Glasgow, UK.
|
27
|