51
Brydges R, Hatala R, Mylopoulos M. Examining Residents' Strategic Mindfulness During Self-Regulated Learning of a Simulated Procedural Skill. J Grad Med Educ 2016; 8:364-71. [PMID: 27413439] [PMCID: PMC4936854] [DOI: 10.4300/jgme-d-15-00491.1]
Abstract
BACKGROUND Simulation-based training is currently embedded in most health professions education curricula. Without evidence for how trainees think about their simulation-based learning, some training techniques may not support trainees' learning strategies. OBJECTIVE This study explored how residents think about and self-regulate learning during a lumbar puncture (LP) training session using a simulator. METHODS In 2010, 20 of 45 postgraduate year 1 internal medicine residents attended a mandatory procedural skills training boot camp. Independently, residents practiced the entire LP skill on a part-task trainer using a clinical LP tray and proper sterile technique. We interviewed participants regarding how they thought about and monitored their learning processes, and then we conducted a thematic analysis of the interview data. RESULTS The analysis suggested that participants considered what they could and could not learn from the simulator; they developed their self-confidence by familiarizing themselves with the LP equipment and repeating the LP algorithmic steps. Participants articulated an idiosyncratic model of learning they used to interpret the challenges and successes they experienced. Participants reported focusing on obtaining cerebrospinal fluid and memorizing the "routine" version of the LP procedure. They did not report much thinking about their learning strategies (eg, self-questioning). CONCLUSIONS During simulation-based training, residents described assigning greater weight to achieving procedural outcomes and tended to think that the simulated task provided them with routine, generalizable skills. Over this typical 1-hour session, trainees did not appear to consider their strategic mindfulness (ie, awareness and use of learning strategies).
52
Böhm A, Musil P, Urban L, Slezak P, Hatala R, Bacharova L, Cvicela M. PS188 Association of Apelin and Atrial Fibrillation in Patients Undergoing Catheter Ablation. Glob Heart 2016. [DOI: 10.1016/j.gheart.2016.03.156]
53
Hatala R. Teaching gynaecological examinations. Med Educ 2016; 50:592. [PMID: 27072481] [DOI: 10.1111/medu.12993]
54
Pugh D, Hatala R. Being a good supervisor: it's all about the relationship. Med Educ 2016; 50:395-397. [PMID: 26995478] [DOI: 10.1111/medu.12952]
55
Böhm A, Tothova L, Urban L, Slezak P, Bacharova L, Musil P, Hatala R. The relation between oxidative stress biomarkers and atrial fibrillation after pulmonary veins isolation. J Electrocardiol 2016; 49:423-8. [PMID: 27034122] [DOI: 10.1016/j.jelectrocard.2016.03.007]
Abstract
INTRODUCTION Current evidence suggests a link between oxidative stress and atrial fibrillation. The aim of our research was to study the relation between the percentage of time spent in atrial fibrillation (AF burden) and the concentrations of oxidative stress biomarkers before and after pulmonary vein isolation (PVI). METHODOLOGY We included 19 patients (mean age 55 ± 10 years; 4 females and 15 males) with implanted loop recorders undergoing PVI. Plasma concentrations of advanced glycation end-products (AGEs), fructosamine, advanced oxidation protein products and thiobarbituric acid-reactive substances (TBARS) were measured, and AF burden was recorded, immediately before and 3 months after the PVI. AF burden was also recorded 9 months after the PVI. RESULTS Postprocedural AGEs concentration correlated significantly and negatively with AF burden after 3 months (ρ = -0.63; p < 0.01) and 9 months (ρ = -0.5; p = 0.04); TBARS concentration likewise correlated significantly and negatively with AF burden after 9 months (ρ = -0.61; p = 0.01). CONCLUSION Our study identified AGEs and TBARS as potential predictors of AF burden after PVI. We hypothesize that the more oxidative stress is provoked after the PVI, the more fibrotic tissue is produced, yielding better electrical isolation of the pulmonary veins and consequently a lower AF burden.
56
Pugh D, Bhanji F, Cole G, Dupre J, Hatala R, Humphrey-Murto S, Touchie C, Wood TJ. Do OSCE progress test scores predict performance in a national high-stakes examination? Med Educ 2016; 50:351-358. [PMID: 26896020] [DOI: 10.1111/medu.12942]
Abstract
CONTEXT Progress tests, in which learners are repeatedly assessed on equivalent content at different times in their training and provided with feedback, would seem to lend themselves well to a competency-based framework, which requires more frequent formative assessments. The objective structured clinical examination (OSCE) progress test is a relatively new form of assessment that is used to assess the progression of clinical skills. The purpose of this study was to establish further evidence for the use of an OSCE progress test by demonstrating an association between scores from this assessment method and those from a national high-stakes examination. METHODS Eight years of data from an Internal Medicine Residency OSCE (IM-OSCE) progress test were compared with scores on the Royal College of Physicians and Surgeons of Canada Comprehensive Objective Examination in Internal Medicine (RCPSC IM examination), which comprises both a written and a performance-based component (n = 180). Correlations between scores in the two examinations were calculated. Logistic regression analyses were performed comparing IM-OSCE progress test scores with an 'elevated risk of failure' on either component of the RCPSC IM examination. RESULTS Correlations between scores from the IM-OSCE (for PGY-1 to PGY-4 residents) and those from the RCPSC IM examination ranged from 0.316 (p = 0.001) to 0.554 (p < 0.001) for the performance-based component and from 0.305 (p = 0.002) to 0.516 (p < 0.001) for the written component. Logistic regression models demonstrated that PGY-2 and PGY-4 scores from the IM-OSCE were predictive of an 'elevated risk of failure' on both components of the RCPSC IM examination. CONCLUSIONS This study provides further evidence for the use of OSCE progress testing by demonstrating a correlation between scores from an OSCE progress test and a national high-stakes examination. Furthermore, there is evidence that OSCE progress test scores are predictive of future performance on a national high-stakes examination.
57
Arishenkoff S, Eddy C, Roberts JM, Chen L, Chang S, Nair P, Hatala R, Eva KW, Meneilly GS. Accuracy of Spleen Measurement by Medical Residents Using Hand-Carried Ultrasound. J Ultrasound Med 2015; 34:2203-2207. [PMID: 26507695] [DOI: 10.7863/ultra.15.02022]
Abstract
OBJECTIVES Easily palpable splenomegaly can be identified on physical examination, but it is difficult to detect lesser degrees of splenomegaly. Rapid bedside assessment can be conducted with hand-carried ultrasound. We performed this study to determine whether medical residents could reliably assess spleen size using hand-carried ultrasound after a brief educational intervention. METHODS Postgraduate year 1 internal medicine residents were shown a brief (45-minute) presentation on ultrasound basics, the use of hand-carried ultrasound, and principles of splenic ultrasound imaging. They practiced on each other, using hand-carried ultrasound to assess spleen size, for 1 hour in the presence of an instructor. Patients with varying degrees of splenomegaly and hospital staff were recruited at Vancouver General Hospital. A sonographer measured spleen size in each participant using conventional ultrasound. Subsequently, the trained residents scanned the participants using hand-carried ultrasound, blinded to the sonographer's measurements and the participants' diagnoses. The instructor was not present during scanning. RESULTS Twelve first-year residents (8 male and 4 female; mean age ± SEM, 28 ± 1 years; all with limited prior ultrasound training) and 19 patients and staff members (10 male and 9 female; mean age, 60 ± 4 years; body mass index, 24 ± 2 kg/m²) were recruited. The greatest longitudinal measurements were 14.0 ± 0.7 cm with conventional ultrasound administered by the sonographer and 13.2 ± 0.9 cm with hand-carried ultrasound administered by the residents (P > .05, not significant). The correlation between conventional and hand-carried ultrasound was r = 0.81 (P < .001). CONCLUSIONS Internal medicine residents can reliably assess spleen size at the point of care using hand-carried ultrasound with minimal training. Our findings, if replicated in other centers and in different clinical scenarios, may change the way that clinicians examine the spleen.
58
Hatala R, Cook DA, Brydges R, Hawkins R. Constructing a validity argument for the Objective Structured Assessment of Technical Skills (OSATS): a systematic review of validity evidence. Adv Health Sci Educ Theory Pract 2015; 20:1149-75. [PMID: 25702196] [DOI: 10.1007/s10459-015-9593-1]
Abstract
In order to construct and evaluate the validity argument for the Objective Structured Assessment of Technical Skills (OSATS), based on Kane's framework, we conducted a systematic review. We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, Scopus, and selected reference lists through February 2013. Working in duplicate, we selected original research articles in any language evaluating the OSATS as an assessment tool for any health professional. We iteratively and collaboratively extracted validity evidence from included articles to construct and evaluate the validity argument for varied uses of the OSATS. Twenty-nine articles met the inclusion criteria, all focussed on surgical technical skills assessment. We identified three intended uses for the OSATS, namely formative feedback, high-stakes assessment and program evaluation. Following Kane's framework, four inferences in the validity argument were examined (scoring, generalization, extrapolation, decision). For formative feedback and high-stakes assessment, there was reasonable evidence for scoring and extrapolation. However, for high-stakes assessment there was a dearth of evidence for generalization aside from inter-rater reliability data and an absence of evidence linking multi-station OSATS scores to performance in real clinical settings. For program evaluation, the OSATS validity argument was supported by reasonable generalization and extrapolation evidence. There was a complete lack of evidence regarding implications and decisions based on OSATS scores. In general, validity evidence supported the use of the OSATS for formative feedback. Research to provide support for decisions based on OSATS scores is required if the OSATS is to be used for higher-stakes decisions and program evaluation.
59
Hatala R, Cook DA, Brydges R, Hawkins R. Erratum to: Constructing a validity argument for the Objective Structured Assessment of Technical Skills (OSATS): a systematic review of validity evidence. Adv Health Sci Educ Theory Pract 2015; 20:1177-1178. [PMID: 26374730] [DOI: 10.1007/s10459-015-9636-7]
60
Pusic MV, Boutis K, Hatala R, Cook DA. Learning curves in health professions education. Acad Med 2015; 90:1034-42. [PMID: 25806621] [DOI: 10.1097/acm.0000000000000681]
Abstract
Learning curves, which graphically show the relationship between learning effort and achievement, are common in published education research but are not often used in day-to-day educational activities. The purpose of this article is to describe the generation and analysis of learning curves and their applicability to health professions education. The authors argue that the time is right for a closer look at using learning curves, given their desirable properties, to inform both self-directed instruction by individuals and education management by instructors. A typical learning curve is made up of a measure of learning (y-axis), a measure of effort (x-axis), and a mathematical linking function. At the individual level, learning curves make manifest a single person's progress towards competence, including his or her rate of learning, the inflection point where learning becomes more effortful, and the remaining distance to mastery attainment. At the group level, overlaid learning curves show the full variation of a group of learners' paths through a given learning domain. Specifically, they make overt the difference between time-based and competency-based approaches to instruction. Additionally, instructors can use learning curve information to more accurately target educational resources to those who most require them. The learning curve approach requires a fine-grained collection of data that will not be possible in all educational settings; however, the increased use of an assessment paradigm that explicitly includes effort and its link to individual achievement could result in increased learner engagement and more effective instructional design.
61
Cook DA, Brydges R, Ginsburg S, Hatala R. A contemporary approach to validity arguments: a practical guide to Kane's framework. Med Educ 2015; 49:560-75. [PMID: 25989405] [DOI: 10.1111/medu.12678]
Abstract
CONTEXT Assessment is central to medical education and the validation of assessments is vital to their use. Earlier validity frameworks suffer from a multiplicity of types of validity or a failure to prioritise among sources of validity evidence. Kane's framework addresses both concerns by emphasising key inferences as the assessment progresses from a single observation to a final decision. Evidence evaluating these inferences is planned and presented as a validity argument. OBJECTIVES We aim to offer a practical introduction to the key concepts of Kane's framework that educators will find accessible and applicable to a wide range of assessment tools and activities. RESULTS All assessments are ultimately intended to facilitate a defensible decision about the person being assessed. Validation is the process of collecting and interpreting evidence to support that decision. Rigorous validation involves articulating the claims and assumptions associated with the proposed decision (the interpretation/use argument), empirically testing these assumptions, and organising evidence into a coherent validity argument. Kane identifies four inferences in the validity argument: Scoring (translating an observation into one or more scores); Generalisation (using the score[s] as a reflection of performance in a test setting); Extrapolation (using the score[s] as a reflection of real-world performance); and Implications (applying the score[s] to inform a decision or action). Evidence should be collected to support each of these inferences and should focus on the most questionable assumptions in the chain of inference. Key assumptions (and the evidence needed) vary depending on the assessment's intended use or associated decision. Kane's framework applies to quantitative and qualitative assessments, and to individual tests and programmes of assessment.
CONCLUSIONS Validation focuses on evaluating the key claims, assumptions and inferences that link assessment scores with their intended interpretations and uses. The Implications and associated decisions are the most important inferences in the validity argument.
62
Wollmann CG, Gradaus R, Böcker D, Fetsch T, Hintringer F, Hoh G, Hatala R, Podczeck-Schweighofer A, Kreutzer U, Kamaryt P, Hauser T, Kersten JF, Wegscheider K, Breithardt G. Variations of heart rate variability parameters prior to the onset of ventricular tachyarrhythmia and sinus tachycardia in ICD patients. Results from the heart rate variability analysis with automated ICDs (HAWAI) registry. Physiol Meas 2015; 36:1047-61. [DOI: 10.1088/0967-3334/36/5/1047]
63
Brydges R, Manzone J, Shanks D, Hatala R, Hamstra SJ, Zendejas B, Cook DA. Self-regulated learning in simulation-based training: a systematic review and meta-analysis. Med Educ 2015; 49:368-78. [PMID: 25800297] [DOI: 10.1111/medu.12649]
Abstract
CONTEXT Self-regulated learning (SRL) requires an active learner who has developed a set of processes for managing the achievement of learning goals. Simulation-based training is one context in which trainees can safely practise learning how to learn. OBJECTIVES The purpose of the present study was to evaluate, in the simulation-based training context, the effectiveness of interventions designed to support trainees in SRL activities. We used the social-cognitive model of SRL to guide a systematic review and meta-analysis exploring the links between instructor supervision, supports or scaffolds for SRL, and educational outcomes. METHODS We searched databases including MEDLINE and Scopus, and previous reviews, for material published until December 2011. Studies comparing simulation-based SRL interventions with another intervention for teaching health professionals were included. Reviewers worked independently and in duplicate to extract information on learners, study quality and educational outcomes. We used random-effects meta-analysis to compare the effects of supervision (instructor present or absent) and SRL educational supports (e.g. goal-setting study guides present or absent). RESULTS From 11,064 articles, we included 32 studies enrolling 2482 trainees. Only eight of the 32 studies included educational supports for SRL. Compared with instructor-supervised interventions, unsupervised interventions were associated with poorer immediate post-test outcomes (pooled effect size: -0.34, p = 0.09; n = 19 studies) and negligible effects on delayed (i.e. > 1 week) retention tests (pooled effect size: 0.11, p = 0.63; n = 8 studies). Interventions including SRL supports were associated with small benefits compared with interventions without supports on both immediate post-tests (pooled effect size: 0.23, p = 0.22; n = 5 studies) and delayed retention tests (pooled effect size: 0.44, p = 0.067; n = 3 studies). 
CONCLUSIONS Few studies in the simulation literature have designed SRL training to explicitly support trainees' capacity to self-regulate their learning. We recommend that educators and researchers shift from thinking about SRL as learning alone to thinking of SRL as comprising a shared responsibility between the trainee and the instructional designer (i.e. learning using designed supports that help prepare individuals for future learning).
64
Cook DA, Hatala R. Got power? A systematic review of sample size adequacy in health professions education research. Adv Health Sci Educ Theory Pract 2015; 20:73-83. [PMID: 24819405] [DOI: 10.1007/s10459-014-9509-5]
Abstract
Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011, and included all studies evaluating simulation-based education for health professionals in comparison with no intervention or another simulation intervention. Reviewers working in duplicate abstracted information to calculate standardized mean differences (SMDs). We included 897 original research studies. Among the 627 no-intervention-comparison studies, the median sample size was 25. Only two studies (0.3%) had ≥80% power to detect a small difference (SMD > 0.2 standard deviations) and 136 (22%) had power to detect a large difference (SMD > 0.8). Of the 110 no-intervention-comparison studies that failed to find a statistically significant difference, none excluded a small difference and only 47 (43%) excluded a large difference. Among the 297 studies comparing alternate simulation approaches, the median sample size was 30. Only one study (0.3%) had ≥80% power to detect a small difference and 79 (27%) had power to detect a large difference. Of the 128 studies that did not detect a statistically significant effect, 4 (3%) excluded a small difference and 91 (71%) excluded a large difference. In conclusion, most education research studies are powered only to detect effects of large magnitude. For most studies that do not reach statistical significance, the possibility of large and important differences still exists.
65
Ilgen JS, Ma IWY, Hatala R, Cook DA. A systematic review of validity evidence for checklists versus global rating scales in simulation-based assessment. Med Educ 2015; 49:161-73. [PMID: 25626747] [DOI: 10.1111/medu.12621]
Abstract
CONTEXT The relative advantages and disadvantages of checklists and global rating scales (GRSs) have long been debated. To compare the merits of these scale types, we conducted a systematic review of the validity evidence for checklists and GRSs in the context of simulation-based assessment of health professionals. METHODS We conducted a systematic review of multiple databases including MEDLINE, EMBASE and Scopus to February 2013. We selected studies that used both a GRS and checklist in the simulation-based assessment of health professionals. Reviewers working in duplicate evaluated five domains of validity evidence, including correlation between scales and reliability. We collected information about raters, instrument characteristics, assessment context, and task. We pooled reliability and correlation coefficients using random-effects meta-analysis. RESULTS We found 45 studies that used a checklist and GRS in simulation-based assessment. All studies included physicians or physicians in training; one study also included nurse anaesthetists. Topics of assessment included open and laparoscopic surgery (n = 22), endoscopy (n = 8), resuscitation (n = 7) and anaesthesiology (n = 4). The pooled GRS-checklist correlation was 0.76 (95% confidence interval [CI] 0.69-0.81, n = 16 studies). Inter-rater reliability was similar between scales (GRS 0.78, 95% CI 0.71-0.83, n = 23; checklist 0.81, 95% CI 0.75-0.85, n = 21), whereas GRS inter-item reliabilities (0.92, 95% CI 0.84-0.95, n = 6) and inter-station reliabilities (0.80, 95% CI 0.73-0.85, n = 10) were higher than those for checklists (0.66, 95% CI 0-0.84, n = 4 and 0.69, 95% CI 0.56-0.77, n = 10, respectively). Content evidence for GRSs usually referenced previously reported instruments (n = 33), whereas content evidence for checklists usually described expert consensus (n = 26). Checklists and GRSs usually had similar evidence for relations to other variables. 
CONCLUSIONS Checklist inter-rater reliability and trainee discrimination were more favourable than suggested in earlier work, but each task requires a separate checklist. Compared with the checklist, the GRS has higher average inter-item and inter-station reliability, can be used across multiple tasks, and may better capture nuanced elements of expertise.
66
Brydges R, Hatala R, Zendejas B, Erwin PJ, Cook DA. Linking simulation-based educational assessments and patient-related outcomes: a systematic review and meta-analysis. Acad Med 2015; 90:246-56. [PMID: 25374041] [DOI: 10.1097/acm.0000000000000549]
Abstract
PURPOSE To examine the evidence supporting the use of simulation-based assessments as surrogates for patient-related outcomes assessed in the workplace. METHOD The authors systematically searched MEDLINE, EMBASE, Scopus, and key journals through February 26, 2013. They included original studies that assessed health professionals and trainees using simulation and then linked those scores with patient-related outcomes assessed in the workplace. Two reviewers independently extracted information on participants, tasks, validity evidence, study quality, patient-related and simulation-based outcomes, and magnitude of correlation. All correlations were pooled using random-effects meta-analysis. RESULTS Of 11,628 potentially relevant articles, the 33 included studies enrolled 1,203 participants, including postgraduate physicians (n = 24 studies), practicing physicians (n = 8), medical students (n = 6), dentists (n = 2), and nurses (n = 1). The pooled correlation for provider behaviors was 0.51 (95% confidence interval [CI], 0.38 to 0.62; n = 27 studies); for time behaviors, 0.44 (95% CI, 0.15 to 0.66; n = 7); and for patient outcomes, 0.24 (95% CI, -0.02 to 0.47; n = 5). Most reported validity evidence was favorable, though studies often included only correlational evidence. Validity evidence of internal structure (n = 13 studies), content (n = 12), response process (n = 2), and consequences (n = 1) were reported less often. Three tools showed large pooled correlations and favorable (albeit incomplete) validity evidence. CONCLUSIONS Simulation-based assessments often correlate positively with patient-related outcomes. Although these surrogates are imperfect, tools with established validity evidence may replace workplace-based assessments for evaluating select procedural skills.
67
Illikova V, Hlivak P, Hatala R. Cardiac channelopathies in pediatric patients - 7-years single center experience. J Electrocardiol 2015; 48:150-6. [PMID: 25554238] [DOI: 10.1016/j.jelectrocard.2014.11.010]
Abstract
INTRODUCTION Channelopathies are associated with mutations of genes encoding proteins that create or interact with the specialized ion channels in myocardial cell membranes, thus forming an arrhythmogenic substrate predisposing the patient to sudden cardiac death. The study focuses on the clinical and ECG presentation and management of children with channelopathies in Slovakia. SUBJECT AND METHODS Twenty-two children with suspected channelopathy were admitted to the Children's Cardiac Center Bratislava between 2007 and 2014. Genetic testing was performed in 19 patients. RESULTS Fourteen patients were symptomatic. Long QT syndrome was genetically proven in eight patients and catecholaminergic polymorphic ventricular tachycardia in five. Twenty children were treated with beta-blockers, five in combination with mexiletine or flecainide. Nine patients received an implantable cardioverter-defibrillator and one underwent left cardiac sympathetic denervation. CONCLUSION Both clinical presentation and genetic testing must be considered in the diagnostic and therapeutic process of channelopathies. Early diagnosis allows for adequate treatment and lifestyle modification.
68
Bou Ezzeddine H, Vachulova A, Svetlosak M, Urban L, Hlivak P, Margitfalvi P, Bernat V, Gladisova K, Sasov M, Hatala R. Occurrence of symptoms after catheter ablation of atrial fibrillation. Bratisl Lek Listy 2015; 116:461-4. [DOI: 10.4149/bll_2015_086]
69
Hatala R, Lunati M, Calvi V, Favale S, Goncalvesová E, Haim M, Jovanovic V, Kaczmarek K, Kautzner J, Merkely B, Pokushalov E, Revishvili A, Theodorakis G, Vatasescu R, Zalevsky V, Zupan I, Vicini I, Corbucci G. Clinical implementation of cardiac resynchronization therapy - regional disparities across selected ESC member countries. Ann Noninvasive Electrocardiol 2014; 20:43-52. [PMID: 25546696] [PMCID: PMC4654273] [DOI: 10.1111/anec.12243]
Abstract
Background The present analysis aimed to estimate the penetration of cardiac resynchronization therapy (CRT) on the basis of the prevalence and incidence of eligible patients in selected European countries and in Israel. Methods and Results The following countries were considered: Italy, Slovakia, Greece, Israel, Slovenia, Serbia, the Czech Republic, Poland, Romania, Hungary, Ukraine, and the Russian Federation. CRT penetration was defined as the number of patients treated with CRT (CRT patients) divided by the prevalence of patients eligible for CRT. The number of CRT patients was estimated as the sum of CRT implantations in the last 5 years, with the European Heart Rhythm Association (EHRA) White Book used as the source. The prevalence of CRT indications was derived from the literature by applying three epidemiologic models, a synthesis of which indicates that 10% of heart failure (HF) patients are candidates for CRT. HF prevalence was considered to range from 1% to 2% of the general population, resulting in an estimated prevalence of CRT indication of between 1000 and 2000 patients per million inhabitants. Similarly, the annual incidence of CRT indication, representing the potential target population once CRT has fully penetrated, was estimated at between 100 and 200 individuals per million. The results showed the best CRT penetration in Italy (47–93%), while in some countries it was less than 5% (Romania, Russian Federation, and Ukraine). Conclusion CRT penetration differs markedly among the countries analyzed. The main barriers are the lack of reimbursement for the procedure and insufficient awareness of guidelines among referring physicians.
|
70
|
Hamstra SJ, Brydges R, Hatala R, Cook DA. In reply to Rubio et al. Acad Med 2014; 89:1317. [PMID: 25247542 DOI: 10.1097/acm.0000000000000461]
|
71
|
Lee M, Roberts JM, Chen L, Chang S, Hatala R, Eva KW, Meneilly GS. Estimation of spleen size with hand-carried ultrasound. J Ultrasound Med 2014; 33:1225-1230. [PMID: 24958409 DOI: 10.7863/ultra.33.7.1225]
Abstract
OBJECTIVES Physical examination can identify palpable splenomegaly easily, but evaluating lesser degrees of splenomegaly is problematic. Hand-carried ultrasound allows rapid bedside assessment of patients. We conducted this study to determine whether hand-carried ultrasound can reliably assess spleen size. METHODS Patients with varying degrees of splenomegaly were studied. Two sonographers blindly measured spleen size in each patient using either a hand-carried or conventional ultrasound device in random order. Sonographers completed a data sheet indicating the adequacy of the image, clinical measurements of enlargement, and confidence in their observations. RESULTS Sixteen patients (10 male and 6 female; mean age ± SEM, 60 ± 4 years) were recruited. Image quality was adequate or better in all scans with conventional ultrasound and in 15 of 16 scans with hand-carried ultrasound. The greatest longitudinal measurement recorded was statistically equivalent across ultrasound techniques, with mean values of 16.4 cm (95% confidence interval, 14.8-18.0 cm) for conventional ultrasound and 15.8 cm (95% confidence interval, 14.1-17.4 cm) for hand-carried ultrasound. The correlation between measurement techniques was r = 0.89 (P < .0001). Sonographers were somewhat or very confident in the outcomes of all scans with conventional ultrasound and in 15 of 16 cases with hand-carried ultrasound. In general, it took longer for sonographers to obtain images with hand-carried ultrasound. CONCLUSIONS We have shown that hand-carried ultrasound can be used at the point of care by trained individuals to diagnose splenomegaly. However, hand-carried ultrasound images were less likely to be judged excellent, were accompanied by less diagnostic certainty, and took longer to obtain.
|
72
|
Murad MH, Montori VM, Ioannidis JPA, Jaeschke R, Devereaux PJ, Prasad K, Neumann I, Carrasco-Labra A, Agoritsas T, Hatala R, Meade MO, Wyer P, Cook DJ, Guyatt G. How to read a systematic review and meta-analysis and apply the results to patient care: users' guides to the medical literature. JAMA 2014; 312:171-9. [PMID: 25005654 DOI: 10.1001/jama.2014.5559]
Abstract
Clinical decisions should be based on the totality of the best evidence and not the results of individual studies. When clinicians apply the results of a systematic review or meta-analysis to patient care, they should start by evaluating the credibility of the methods of the systematic review, ie, the extent to which these methods have likely protected against misleading results. Credibility depends on whether the review addressed a sensible clinical question; included an exhaustive literature search; demonstrated reproducibility of the selection and assessment of studies; and presented results in a useful manner. For reviews that are sufficiently credible, clinicians must decide on the degree of confidence in the estimates that the evidence warrants (quality of evidence). Confidence depends on the risk of bias in the body of evidence; the precision and consistency of the results; whether the results directly apply to the patient of interest; and the likelihood of reporting bias. Shared decision making requires understanding of the estimates of magnitude of beneficial and harmful effects, and confidence in those estimates.
|
73
|
Cook DA, Zendejas B, Hamstra SJ, Hatala R, Brydges R. What counts as validity evidence? Examples and prevalence in a systematic review of simulation-based assessment. Adv Health Sci Educ Theory Pract 2014; 19:233-50. [PMID: 23636643 DOI: 10.1007/s10459-013-9458-4]
Abstract
Ongoing transformations in health professions education underscore the need for valid and reliable assessment. The current standard for assessment validation requires evidence from five sources: content, response process, internal structure, relations with other variables, and consequences. However, researchers remain uncertain regarding the types of data that contribute to each evidence source. We sought to enumerate the validity evidence sources and supporting data elements for assessments using technology-enhanced simulation. We conducted a systematic literature search including MEDLINE, ERIC, and Scopus through May 2011. We included original research that evaluated the validity of simulation-based assessment scores using two or more evidence sources. Working in duplicate, we abstracted information on the prevalence of each evidence source and the underlying data elements. Among 217 eligible studies, only six (3%) referenced the five-source framework, and 51 (24%) made no reference to any validity framework. The most common evidence sources and data elements were: relations with other variables (94% of studies; reported most often as variation in simulator scores across training levels), internal structure (76%; supported by reliability data or item analysis), and content (63%; reported as expert panels or modification of existing instruments). Evidence of response process and consequences were each present in <10% of studies. We conclude that relations with training level appear to be overrepresented in this field, while evidence of consequences and response process is infrequently reported. Validation science will be improved as educators use established frameworks to collect and interpret evidence from the full spectrum of possible sources and elements.
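The prevalence figures above come from tallying, per study, which of the five evidence sources were reported. A minimal sketch with hypothetical study codings (the real review coded 217 studies):

```python
from collections import Counter

# Hypothetical per-study coding against the five-source validity
# framework; the study sets below are invented for illustration.
studies = [
    {"relations", "internal structure"},
    {"relations", "content"},
    {"relations", "internal structure", "content"},
    {"internal structure"},
]

# Count how many studies report each evidence source, then
# express the counts as a fraction of all coded studies.
source_counts = Counter(src for study in studies for src in study)
prevalence = {src: n / len(studies) for src, n in source_counts.items()}
```

Using sets per study ensures a source counts at most once per study, matching how prevalence percentages are reported in the abstract.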
|
74
|
Hatala R, Cook DA, Zendejas B, Hamstra SJ, Brydges R. Feedback for simulation-based procedural skills training: a meta-analysis and critical narrative synthesis. Adv Health Sci Educ Theory Pract 2014; 19:251-72. [PMID: 23712700 DOI: 10.1007/s10459-013-9462-8]
Abstract
Although feedback has been identified as a key instructional feature in simulation-based medical education (SBME), we remain uncertain as to the magnitude of its effectiveness and the mechanisms by which it may be effective. We employed a meta-analysis and critical narrative synthesis to examine the effectiveness of feedback for SBME procedural skills training and to examine how it works in this context. Our results demonstrate that feedback is moderately effective during procedural skills training in SBME, with a pooled effect size favoring feedback for skill outcomes of 0.74 (95% CI 0.38-1.09; p < .001). Terminal feedback appears more effective than concurrent feedback for novice learners' skill retention. Multiple sources of feedback, including instructor feedback, lead to short-term performance gains, although data on long-term effects are lacking. The mechanism by which feedback may be operating is consistent with the guidance hypothesis, with more research needed to examine other mechanisms such as cognitive load theory and social development theory.
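A pooled effect size like the 0.74 above is typically an inverse-variance weighted average of per-study effects. A simplified fixed-effect sketch with hypothetical study effects and standard errors (the review itself may have pooled with a random-effects model):

```python
import math

# Simplified fixed-effect inverse-variance pooling; the per-study
# standardized mean differences and standard errors are hypothetical.
def pool_fixed_effect(effects, ses):
    """Pool effect sizes; return (pooled, (ci_low, ci_high))."""
    weights = [1 / se**2 for se in ses]          # inverse-variance weights
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci

pooled, (lo, hi) = pool_fixed_effect([0.9, 0.6, 0.8], [0.2, 0.3, 0.25])
```

Precise studies (small standard errors) dominate the weighted average; a random-effects model would additionally widen the interval to absorb between-study heterogeneity.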
|
75
|
Chen R, Dore K, Grierson LEM, Hatala R, Norman G. Cognitive Load Theory: Implications for Nursing Education and Research. Can J Nurs Res 2014; 46:28-41. [PMID: 29509499 DOI: 10.1177/084456211404600204]
Abstract
This article provides an overview of cognitive load theory (CLT) and explores applications of CLT to health profession and nursing education research, particularly for multimedia and simulation-based applications. The article first reviews the 3 components of cognitive load: intrinsic, extraneous, and germane. It then discusses strategies for manipulating cognitive load variables to enhance instruction. Examples of how CLT variables can be modulated during instruction are provided. Lastly, the article discusses current applications of CLT to health profession and nursing education research and presents future research directions, focusing on the areas of multimedia and simulation-based learning.
|