1. Durning SJ, Jung E, Kim DH, Lee YM. Teaching clinical reasoning: principles from the literature to help improve instruction from the classroom to the bedside. Korean J Med Educ 2024; 36:145-155. PMID: 38835308. DOI: 10.3946/kjme.2024.292.
Abstract
Clinical reasoning has been characterized as an essential aspect of being a physician. Despite this, clinical reasoning has a variety of definitions, and medical error, which is often attributed to failures in clinical reasoning, has been reported to be a leading cause of death in the United States and abroad. Further, instructors struggle with teaching this essential ability, which often does not play a significant role in the curriculum. In this article, we begin by defining clinical reasoning and then discuss four principles from the literature, as well as a variety of techniques for teaching these principles, to help ground an instructor's understanding of clinical reasoning. We also tackle contemporary challenges in teaching clinical reasoning, such as the integration of artificial intelligence and strategies to help with transitions in instruction (e.g., from the classroom to the clinic or from medical school to residency/registrar training), and suggest next steps for research and innovation in clinical reasoning.
Affiliation(s)
- Steven J Durning: Center for Health Professions Education, Uniformed Services University of the Health Sciences, MD, USA
- Eulho Jung: Center for Health Professions Education, Uniformed Services University of the Health Sciences, MD, USA; Henry M Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD, USA
- Do-Hwan Kim: Department of Medical Education, Hanyang University College of Medicine, Seoul, Korea
- Young-Mee Lee: Department of Medical Education, Korea University College of Medicine, Seoul, Korea
2. Jeffery M, Kar AR, Pradhan A, Brannigan S, Terregino C, Rashid H, Salisbury R, Johnson C, Jagpal S. Evaluating Clinical Reasoning in Undergraduate Medical Education: The Value of a Virtual Oral Assessment. Am Surg 2024:31348241250049. PMID: 38676698. DOI: 10.1177/00031348241250049.
Abstract
BACKGROUND Oral assessments are essential components of board certification in numerous fields, as they provide insight into problem-solving capacity and clinical reasoning. The development of clinical reasoning often begins in undergraduate medical education and remains a challenge to assess. OBJECTIVE We developed a pilot oral assessment to evaluate medical student oral presentations and systematically assess clinical reasoning. This was incorporated into a previously existing cumulative assessment at the conclusion of the third year of medical school, with the intent to demonstrate the feasibility and future reliability of this exam format. METHODS This pilot oral assessment was developed using content taught during third-year clerkships. A modified Assessment of Reasoning Tool (ART) was used as the evaluation metric. It was conducted virtually to include faculty members from multiple disciplines and to accommodate schedules and space limitations. RESULTS A total of 152 third-year medical students completed the exam, with 15 faculty examiners. Of the students, 89% scored as complete in hypothesis-directed history, 93% in problem representation, 86% in prioritized differential diagnoses, and 67% in effectively directing management. Most examiners felt an oral assessment is effective for determining a medical student's clinical reasoning ability. CONCLUSIONS Virtual oral assessments of clinical reasoning can be incorporated into undergraduate medical education to identify students struggling with components of clinical reasoning, while also allowing maximum flexibility for the clinician educator workforce as examiners. Longitudinal use of these exams would be valuable to track the development of clinical reasoning across the medical school curriculum.
Affiliation(s)
- Michelle Jeffery: Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, USA
- A Reema Kar: Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, USA
- Archana Pradhan: Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, USA
- Carol Terregino: Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, USA
- Hanin Rashid: Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, USA
- Rick Salisbury: Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, USA
- Conrad Johnson: Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, USA
- Sugeet Jagpal: Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, USA
3. Eng K, Johnston K, Cerda I, Kadakia K, Mosier-Mills A, Vanka A. A Patient-Centered Documentation Skills Curriculum for Preclerkship Medical Students in an Open Notes Era. MedEdPORTAL 2024; 20:11392. PMID: 38533390. PMCID: PMC10963659. DOI: 10.15766/mep_2374-8265.11392.
Abstract
Introduction New legislation allows patients (with permitted exceptions) to read their clinical notes, leading to both benefits and ethical dilemmas. Medical students need a robust curriculum to learn documentation skills within this challenging context. We aimed to teach note-writing skills through a patient-centered lens with special consideration for the impact on patients and providers. We developed this session for first-year medical students within their foundational clinical skills course to place bias-free language at the forefront of how they learn to construct a medical note. Methods One hundred seventy-three first-year medical and dental students participated in this curriculum. They completed an asynchronous presession module first, followed by a 2-hour synchronous workshop including a didactic, student-led discussion and sample patient note exercise. Students were subsequently responsible throughout the year for constructing patient-centered notes, graded by faculty with a newly developed rubric and checklist of best practices. Results On postworkshop surveys, learners reported increased preparedness in their ability to document in a patient-centered manner (presession M = 2.2, midyear M = 3.9, p < .001), as rated on a 5-point Likert scale (1 = not prepared at all, 5 = very prepared), and also found this topic valuable to learn early in their training. Discussion This curriculum utilizes a multipart approach to prepare learners to employ clinical notes to communicate with patients and providers, with special attention to how patients and their care partners receive a note. Future directions include expanding the curriculum to higher levels of learning and validating the developed materials.
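The pre/post comparison above (presession M = 2.2 vs. midyear M = 3.9 on a 5-point scale, p < .001) is a standard paired analysis. A minimal sketch of how such a comparison could be run, assuming each student contributes one presession and one midyear rating; the abstract does not state which test was used, so the paired t-test here is an assumption, with a Wilcoxon signed-rank test shown as the common ordinal-friendly alternative, and the data are invented placeholders:

```python
# Hypothetical paired pre/post comparison of self-reported preparedness
# ratings (1-5 Likert). All values are illustrative, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.integers(1, 4, size=173).astype(float)              # presession
post = np.clip(pre + rng.integers(0, 3, size=173), 1, 5)      # midyear

t, p = stats.ttest_rel(post, pre)      # paired t-test (assumed choice)
w, p_w = stats.wilcoxon(post, pre)     # ordinal-friendly alternative
print(f"pre M={pre.mean():.1f}, post M={post.mean():.1f}, "
      f"paired t p={p:.3g}, Wilcoxon p={p_w:.3g}")
```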
Affiliation(s)
- Kathleen Eng: Fourth-Year Medical Student, Harvard Medical School
- Ivo Cerda: Third-Year Medical Student, Harvard Medical School
- Anita Vanka: Assistant Professor, Department of Medicine, Harvard Medical School
4. Boyle SM, Martindale J, Parsons AS, Sozio SM, Hilburg R, Bahrainwala J, Chan L, Stern LD, Warburton KM. Development and Validation of a Formative Assessment Tool for Nephrology Fellows' Clinical Reasoning. Clin J Am Soc Nephrol 2024; 19:26-34. PMID: 37851423. PMCID: PMC10843222. DOI: 10.2215/cjn.0000000000000315.
Abstract
BACKGROUND Diagnostic errors are commonly driven by failures in clinical reasoning. Deficits in clinical reasoning are common among graduate medical learners, including nephrology fellows. We created and validated an instrument to assess clinical reasoning in a national cohort of nephrology fellows and established performance thresholds for remedial coaching. METHODS Experts in nephrology education and clinical reasoning remediation designed an instrument to measure clinical reasoning through a written patient encounter note from a web-based, simulated AKI consult. The instrument measured clinical reasoning in three domains: problem representation, differential diagnosis with justification, and diagnostic plan with justification. Inter-rater reliability was established in a pilot cohort (n = 7 raters) of first-year nephrology fellows using a two-way random effects agreement intraclass correlation coefficient model. The instrument was then administered to a larger cohort of first-year fellows to establish performance standards for coaching using the Hofstee method (n = 6 raters). RESULTS In the pilot cohort, there were 15 fellows from four training programs, and in the study cohort, there were 61 fellows from 20 training programs. The intraclass correlation coefficients for problem representation, differential diagnosis, and diagnostic plan were 0.90, 0.70, and 0.50, respectively. Passing thresholds (% total points) in problem representation, differential diagnosis, and diagnostic plan were 59%, 57%, and 62%, respectively. Fifty-nine percent (n = 36) met the threshold for remedial coaching in at least one domain. CONCLUSIONS We provide validity evidence for a simulated AKI consult for formative assessment of clinical reasoning in nephrology fellows. Most fellows met criteria for coaching in at least one of three reasoning domains, demonstrating a need for learner assessment and instruction in clinical reasoning.
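A minimal sketch of the two-way random-effects, absolute-agreement ICC the authors describe, computed from a fellows-by-raters score matrix using the standard Shrout & Fleiss mean-squares formulation; the from-scratch implementation and the score matrix are illustrative (packages such as pingouin compute the same statistic):

```python
# Two-way random-effects ICC for absolute agreement, single rater:
# Shrout & Fleiss ICC(2,1), from an n-subjects x k-raters matrix.
# The score matrix below is invented, not the study's data.
import numpy as np

def icc2_1(X: np.ndarray) -> float:
    n, k = X.shape
    grand = X.mean()
    ms_r = k * np.sum((X.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects
    ms_c = n * np.sum((X.mean(axis=0) - grand) ** 2) / (k - 1)   # raters
    resid = (X - X.mean(axis=1, keepdims=True)
               - X.mean(axis=0, keepdims=True) + grand)
    ms_e = np.sum(resid ** 2) / ((n - 1) * (k - 1))              # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

scores = np.array([[7, 8, 7], [4, 5, 5], [9, 9, 8], [3, 4, 2], [6, 6, 7]], float)
print(f"ICC(2,1) = {icc2_1(scores):.2f}")
```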
Affiliation(s)
- Suzanne M. Boyle: Section of Nephrology, Hypertension, and Kidney Transplantation, Lewis Katz School of Medicine at Temple University, Philadelphia, Pennsylvania
- James Martindale: Office of Medical Education, University of Virginia School of Medicine, Charlottesville, Virginia
- Andrew S. Parsons: Division of General, Geriatric, Palliative, and Hospital Medicine, University of Virginia School of Medicine, Charlottesville, Virginia
- Stephen M. Sozio: Division of Nephrology, Department of Medicine, Johns Hopkins University School of Medicine, and Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
- Rachel Hilburg: Renal, Electrolyte, and Hypertension Division, Perelman School of Medicine, The University of Pennsylvania, Philadelphia, Pennsylvania
- Jehan Bahrainwala: Division of Nephrology, Stanford University School of Medicine, Palo Alto, California
- Lili Chan: Barbara T. Murphy Division of Nephrology, Mt. Sinai School of Medicine, New York, New York
- Lauren D. Stern: Renal Section, Boston University Chobanian and Avedisian School of Medicine, Boston, Massachusetts
- Karen M. Warburton: Division of Nephrology, University of Virginia School of Medicine, Charlottesville, Virginia
5. Choi JJ, Gribben J, Lin M, Abramson EL, Aizer J. Using an experiential learning model to teach clinical reasoning theory and cognitive bias: an evaluation of a first-year medical student curriculum. Med Educ Online 2023; 28:2153782. PMID: 36454201. PMCID: PMC9718553. DOI: 10.1080/10872981.2022.2153782.
Abstract
BACKGROUND Most medical students entering clerkships have limited understanding of clinical reasoning concepts. The value of teaching theories of clinical reasoning and cognitive biases to first-year medical students is unknown. This study aimed to evaluate the value of explicitly teaching clinical reasoning theory and cognitive bias to first-year medical students. METHODS Using Kolb's experiential learning model, we introduced dual process theory, script theory, and cognitive biases in teaching clinical reasoning to first-year medical students at an academic medical center in New York City between January and June 2020. Due to the COVID-19 pandemic, instruction was transitioned to a distance learning format in March 2020. The curriculum included a series of written clinical reasoning examinations (CREs) with facilitated small group discussions. Written self-assessments prompted each student to reflect on the experience, draw conclusions about their clinical reasoning, and plan for future encounters involving clinical reasoning. We evaluated the value of the curriculum using mixed methods to analyze faculty assessments, student self-assessment questionnaires, and an end-of-curriculum anonymous questionnaire eliciting student feedback. RESULTS Among 318 total examinations of 106 students, 254 (80%) had a complete problem representation, while 199 (63%) of problem representations were considered concise. The most common cognitive biases described by students in their clinical reasoning were anchoring bias, availability bias, and premature closure. Four major themes emerged as valuable outcomes of the CREs, as identified by students: (1) synthesis of medical knowledge; (2) enhanced ability to generate differential diagnoses; (3) development of self-efficacy related to clinical reasoning; (4) raised awareness of personal cognitive biases. CONCLUSIONS We found that explicitly teaching clinical reasoning theory and cognitive biases using an experiential learning model provides first-year medical students with valuable opportunities for developing knowledge, skills, and self-efficacy related to clinical reasoning.
Affiliation(s)
- Justin J. Choi: Division of General Internal Medicine, Department of Medicine, Weill Cornell Medicine, New York, NY, USA
- Jeanie Gribben: Department of Medicine, Weill Cornell Medicine, New York, NY, USA
- Myriam Lin: Division of Rheumatology, Hospital for Special Surgery, New York, NY, USA
- Erika L. Abramson: Department of Pediatrics, Weill Cornell Medicine, New York, NY, USA; Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
- Juliet Aizer: Department of Medicine, Weill Cornell Medicine, New York, NY, USA; Division of Rheumatology, Hospital for Special Surgery, New York, NY, USA
6. Bond WF, Zhou J, Bhat S, Park YS, Ebert-Allen RA, Ruger RL, Yudkowsky R. Automated Patient Note Grading: Examining Scoring Reliability and Feasibility. Acad Med 2023; 98:S90-S97. PMID: 37983401. DOI: 10.1097/acm.0000000000005357.
Abstract
PURPOSE Scoring postencounter patient notes (PNs) yields significant insights into student performance, but the resource intensity of scoring limits its use. Recent advances in natural language processing (NLP) and machine learning allow application of automated short answer grading (ASAG) for this task. This retrospective study evaluated psychometric characteristics and reliability of an ASAG system for PNs and factors contributing to implementation, including feasibility and case-specific phrase annotation required to tune the system for a new case. METHOD PNs from standardized patient (SP) cases within a graduation competency exam were used to train the ASAG system, applying a feed-forward neural network algorithm for scoring. Using faculty phrase-level annotation, 10 PNs per case were required to tune the ASAG system. After tuning, ASAG item-level ratings for 20 notes were compared across ASAG-faculty (4 cases, 80 pairings) and ASAG-nonfaculty (2 cases, 40 pairings) pairs. Psychometric characteristics were examined using item analysis and Cronbach's alpha. Inter-rater reliability (IRR) was examined using kappa. RESULTS ASAG scores demonstrated sufficient variability in differentiating learner PN performance and high IRR between machine and human ratings. Across all items, the ASAG-faculty scoring mean kappa was .83 (SE ± .02). The ASAG-nonfaculty pairings kappa was .83 (SE ± .02). The ASAG scoring demonstrated high item discrimination. Internal consistency reliability values at the case level ranged from a Cronbach's alpha of .65 to .77. Faculty time cost to train and supervise nonfaculty raters for 4 cases was approximately $1,856. Faculty cost to tune the ASAG system was approximately $928. CONCLUSIONS NLP-based automated scoring of PNs demonstrated a high degree of reliability and psychometric confidence for use as learner feedback. The small number of phrase-level annotations required to tune the system to a new case enhances feasibility. ASAG-enabled PN scoring has broad implications for improving feedback in case-based learning contexts in medical education.
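The abstract describes the ASAG pipeline only at a high level (phrase-level annotation, a feed-forward neural network, kappa against human raters). A minimal sketch of that general approach, assuming each note item is graded credit/no-credit from annotated example phrases; the TF-IDF features, model configuration, and training phrases here are illustrative assumptions, not the authors' system:

```python
# Illustrative ASAG sketch: TF-IDF features feeding a small feed-forward
# network (MLP), with machine-human agreement measured by Cohen's kappa.
# Training phrases and labels are invented examples, not study data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.pipeline import make_pipeline

train_phrases = ["denies chest pain", "no chest pain reported",
                 "chest pain present", "substernal chest pressure",
                 "no dyspnea", "short of breath on exertion"]
train_labels = [0, 0, 1, 1, 0, 1]   # 1 = phrase earns credit for this item

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(train_phrases, train_labels)

new_phrases = ["patient reports chest pain", "denies any chest pain"]
machine = model.predict(new_phrases)
human = [1, 0]   # a human rater's scores for the same phrases
print("kappa:", cohen_kappa_score(human, machine))
```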
Affiliation(s)
- William F Bond: Professor, Department of Emergency Medicine, University of Illinois College of Medicine, Peoria, Illinois, and affiliated with Jump Simulation, an OSF HealthCare and University of Illinois College of Medicine at Peoria collaboration; ORCID: http://orcid.org/0000-0001-6714-7152
- Jianing Zhou: PhD student, Department of Computer Science, University of Illinois Urbana-Champaign, Champaign, Illinois
- Suma Bhat: Assistant Professor, Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, Champaign, Illinois; ORCID: http://orcid.org/0000-0003-0324-5890
- Yoon Soo Park: Professor, Department of Medical Education, University of Illinois College of Medicine, Chicago, Illinois
- Rebecca A Ebert-Allen: Research Project Manager, Jump Simulation, an OSF HealthCare and University of Illinois College of Medicine at Peoria collaboration, Peoria, Illinois; ORCID: http://orcid.org/0000-0001-6607-0229
- Rebecca L Ruger: formerly Research Assistant, Jump Simulation; now graduate student, Department of Psychology, Penn State University, University Park, Pennsylvania; ORCID: http://orcid.org/0009-0005-8739-3226
- Rachel Yudkowsky: Professor, Department of Medical Education, University of Illinois College of Medicine, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-2145-7582
7. Donovan CM. Augmented Reality Integration in Manikin-Based Simulations: Bringing Basic Science to the Critical Care Bedside with Limited Augmented Reality Resources. Med Sci Educ 2023; 33:829-833. PMID: 37546210. PMCID: PMC10403467. DOI: 10.1007/s40670-023-01821-z.
Abstract
Immersive simulation and augmented reality (AR) are powerful educational tools in high-risk medical professions. Basic science AR content, such as anatomic holograms, is gaining popularity. Many educators want to adopt AR and integrate basic science review into high-risk clinical decision-making but cannot afford it. In this project, we designed three AR-integrated manikin-based simulations (ARI-MBS) by combining critical care scenarios with commercially available AR programs. Using a single headset and limited equipment, we technically integrated AR into MBS in a way that both students and faculty found rewarding. We present our design so that others may replicate it. Supplementary Information The online version contains supplementary material available at 10.1007/s40670-023-01821-z.
Affiliation(s)
- Colleen M. Donovan: Department of Emergency Medicine, Rutgers-RWJMS, New Brunswick, NJ, USA; Department of Pharmacy Practice & Administration, Rutgers-EMSOP, Piscataway, USA
8. Schaye V, Guzman B, Burk-Rafel J, Marin M, Reinstein I, Kudlowitz D, Miller L, Chun J, Aphinyanaphongs Y. Development and Validation of a Machine Learning Model for Automated Assessment of Resident Clinical Reasoning Documentation. J Gen Intern Med 2022; 37:2230-2238. PMID: 35710676. PMCID: PMC9296753. DOI: 10.1007/s11606-022-07526-0.
Abstract
BACKGROUND Residents receive infrequent feedback on their clinical reasoning (CR) documentation. While machine learning (ML) and natural language processing (NLP) have been used to assess CR documentation in standardized cases, no studies have described similar use in the clinical environment. OBJECTIVE Using Kane's framework, the authors developed and validated an ML model for automated assessment of CR documentation quality in residents' admission notes. DESIGN, PARTICIPANTS, MAIN MEASURES Internal medicine residents' and subspecialty fellows' admission notes at one medical center from July 2014 to March 2020 were extracted from the electronic health record. Using a validated CR documentation rubric, the authors rated 414 notes for the ML development dataset. Notes were truncated to isolate the relevant portion; NLP software (cTAKES) extracted disease/disorder named entities, and human review generated CR terms. The final model had three input variables and classified notes as demonstrating low- or high-quality CR documentation. The ML model was applied to a retrospective dataset (9591 notes) for human validation and data analysis. Reliability between human and ML ratings was assessed on 205 of these notes with Cohen's kappa. CR documentation quality by post-graduate year (PGY) was evaluated by the Mantel-Haenszel test of trend. KEY RESULTS The top-performing logistic regression model had an area under the receiver operating characteristic curve of 0.88, a positive predictive value of 0.68, and an accuracy of 0.79. Cohen's kappa was 0.67. Of the 9591 notes, 31.1% demonstrated high-quality CR documentation; quality increased from 27.0% (PGY1) to 31.0% (PGY2) to 39.0% (PGY3) (p < .001 for trend). Validity evidence was collected in each domain of Kane's framework (scoring, generalization, extrapolation, and implications). CONCLUSIONS The authors developed and validated a high-performing ML model that classifies CR documentation quality in resident admission notes in the clinical environment-a novel application of ML and NLP with many potential use cases.
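The abstract specifies the final model family (logistic regression over three input variables) and the reported metrics (AUROC, positive predictive value, accuracy). A minimal sketch of that evaluation loop with scikit-learn; the three synthetic note-level features (e.g., counts of extracted entities and CR terms) and labels are assumptions standing in for the study's data:

```python
# Illustrative logistic-regression classifier for low- vs. high-quality
# clinical reasoning documentation, evaluated with the reported metrics.
# Features and labels are synthetic stand-ins, not study data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, precision_score, accuracy_score

rng = np.random.default_rng(0)
X = rng.poisson(lam=[3, 2, 1], size=(414, 3)).astype(float)  # 3 note features
y = (X @ [0.5, 0.8, 1.2] + rng.normal(0, 1, 414) > 3.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

prob = clf.predict_proba(X_te)[:, 1]
pred = clf.predict(X_te)
print(f"AUROC={roc_auc_score(y_te, prob):.2f}  "
      f"PPV={precision_score(y_te, pred):.2f}  "
      f"accuracy={accuracy_score(y_te, pred):.2f}")
```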
Affiliation(s)
- Verity Schaye: NYU Grossman School of Medicine, New York, NY, USA; NYC Health + Hospitals/Bellevue, New York, NY, USA
- Marina Marin: NYU Grossman School of Medicine, New York, NY, USA
- Louis Miller: Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
- Jonathan Chun: Stanford University School of Medicine, Stanford, CA, USA
9. Smith KJ, Childs-Kean LM, Smith MD. Developing Clinical Reasoning: An Introduction for Pharmacy Preceptors. J Am Coll Clin Pharm 2022. DOI: 10.1002/jac5.1624.
Affiliation(s)
- Kathryn J. Smith: University of Oklahoma Health Sciences Center College of Pharmacy, 1110 N. Stonewall Ave, CPB 229, Oklahoma City, Oklahoma
10. Schaye V, Miller L, Kudlowitz D, Chun J, Burk-Rafel J, Cocks P, Guzman B, Aphinyanaphongs Y, Marin M. Development of a Clinical Reasoning Documentation Assessment Tool for Resident and Fellow Admission Notes: a Shared Mental Model for Feedback. J Gen Intern Med 2022; 37:507-512. PMID: 33945113. PMCID: PMC8858363. DOI: 10.1007/s11606-021-06805-6.
Abstract
BACKGROUND Residents and fellows receive little feedback on their clinical reasoning documentation. Barriers include lack of a shared mental model and variability in the reliability and validity of existing assessment tools. Of the existing tools, the IDEA assessment tool includes a robust assessment of clinical reasoning documentation focusing on four elements (interpretive summary, differential diagnosis, and explanation of reasoning for lead and alternative diagnoses) but lacks descriptive anchors, threatening its reliability. OBJECTIVE Our goal was to develop a valid and reliable assessment tool for clinical reasoning documentation building off the IDEA assessment tool. DESIGN, PARTICIPANTS, AND MAIN MEASURES The Revised-IDEA assessment tool was developed by four clinician educators through iterative review of admission notes written by medicine residents and fellows and subsequently piloted with additional faculty to ensure response process validity. A random sample of 252 notes from July 2014 to June 2017, written by 30 trainees across several chief complaints, was rated. Three raters rated 20% of the notes to demonstrate internal structure validity. A quality cut-off score was determined using Hofstee standard setting. KEY RESULTS The Revised-IDEA assessment tool includes the same four domains as the IDEA assessment tool with more detailed descriptive prompts, new Likert scale anchors, and a score range of 0-10. Intraclass correlation was high for the notes rated by three raters, 0.84 (95% CI 0.74-0.90). Scores ≥6 were determined to demonstrate high-quality clinical reasoning documentation. Only 53% of notes (134/252) were high-quality. CONCLUSIONS The Revised-IDEA assessment tool is reliable and easy to use for feedback on clinical reasoning documentation in resident and fellow admission notes, with descriptive anchors that facilitate a shared mental model for feedback.
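The quality cut-off here was set with the Hofstee compromise method. A minimal sketch of that computation, assuming judges supply minimum/maximum acceptable cut scores and fail rates, with the cut taken where the Hofstee diagonal crosses the observed cumulative failure curve; the score distribution and judge bounds below are invented for illustration:

```python
# Illustrative Hofstee compromise standard setting on a 0-10 note score.
# Judge bounds and the score distribution are invented placeholders.
import numpy as np

rng = np.random.default_rng(0)
scores = np.clip(rng.normal(6, 2, 252), 0, 10)   # rated notes, 0-10 scale

c_min, c_max = 4.0, 7.0     # judges' lowest/highest acceptable cut scores
f_min, f_max = 10.0, 60.0   # judges' lowest/highest acceptable fail rates (%)

cuts = np.linspace(c_min, c_max, 301)
observed_fail = np.array([(scores < c).mean() * 100 for c in cuts])
diagonal = f_max + (f_min - f_max) * (cuts - c_min) / (c_max - c_min)

cut = cuts[np.argmin(np.abs(observed_fail - diagonal))]  # crossing point
print(f"Hofstee cut score = {cut:.1f}")
```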
Affiliation(s)
- Verity Schaye: NYU Grossman School of Medicine, New York, NY, USA; NYC Health + Hospitals/Bellevue, New York, NY, USA
- Louis Miller: Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
- Jonathan Chun: Stanford University School of Medicine, Stanford, CA, USA
- Marina Marin: NYU Grossman School of Medicine, New York, NY, USA
11. Augustin RC, Simonson MG, Rothenberger SD, Lalama C, Bonifacino E, DiNardo DJ, Tilstra SA. The use of podcasts as a tool to teach clinical reasoning: a pseudorandomized and controlled study. Diagnosis (Berl) 2022; 9:323-331. PMID: 35086184. DOI: 10.1515/dx-2021-0136.
Abstract
OBJECTIVES Podcasts have emerged as an efficient method for widespread delivery of educational clinical reasoning (CR) content. However, the impact of such podcasts on CR skills has not been established. We set out to determine whether exposure to expert reasoning in a podcast format leads to enhanced CR skills. METHODS This is a pseudo-randomized study assigning third-year medical students (MS3s) either to a control group (n=22) with pre-established online CR modules, or to an intervention group (n=26) with both the online modules and novel CR podcasts. The podcasts were developed from four "clinical unknown" cases presented to expert clinician educators. After students completed these assignments in weeks 1-2, weekly history and physical (H&P) notes were collected and graded according to the validated IDEA rubric between weeks 3-7. A longitudinal regression model was used to compare the H&P IDEA scores over time. Usage and perception of the podcasts were also assessed via survey data. RESULTS Ninety control and 128 intervention H&Ps were scored. There was no statistical difference in the change of average IDEA scores between intervention (0.92, p=0.35) and control groups (-0.33, p=0.83). Intervention participants positively received the podcasts and noted increased discussion of CR principles from both their ward (3.1 vs. 2.4, p=0.08) and teaching (3.2 vs. 2.5, p=0.05) attendings. CONCLUSIONS This is the first objective, pseudo-randomized assessment of CR podcasts in undergraduate medical education. While we did not demonstrate significant improvement in IDEA scores, our data show that podcasts are a well-received tool that can prime learners to recognize CR principles.
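The primary analysis is described only as a longitudinal regression of weekly IDEA scores. A minimal sketch of one common way to fit such a model, a linear mixed model with a random intercept per student and a week-by-group interaction via statsmodels; the exact specification used in the study is not given in the abstract, so this formula and the simulated data are assumptions:

```python
# Illustrative longitudinal mixed model: IDEA score ~ week * group with a
# random intercept per student. All data are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for student in range(48):
    group = "podcast" if student < 26 else "control"
    base = rng.normal(5, 1)                       # student-specific intercept
    for week in range(3, 8):                      # H&P notes, weeks 3-7
        slope = 0.2 if group == "podcast" else 0.1
        rows.append((student, group, week,
                     base + slope * week + rng.normal(0, 0.8)))
df = pd.DataFrame(rows, columns=["student", "group", "week", "idea"])

model = smf.mixedlm("idea ~ week * group", df, groups=df["student"]).fit()
print(model.summary())
```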
Affiliation(s)
- Ryan C Augustin: Division of General Internal Medicine, Department of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- Michael G Simonson: Division of General Internal Medicine, Department of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- Scott D Rothenberger: Division of General Internal Medicine, Department of Medicine, University of Pittsburgh, Pittsburgh, PA, USA; Center for Research on Health Care (CRHC) Data Center, Division of General Internal Medicine, Department of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- Christina Lalama: Division of General Internal Medicine, Department of Medicine, University of Pittsburgh, Pittsburgh, PA, USA; Center for Research on Health Care (CRHC) Data Center, Division of General Internal Medicine, Department of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- Eliana Bonifacino: Division of General Internal Medicine, Department of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- Deborah J DiNardo: Division of General Internal Medicine, Department of Medicine, University of Pittsburgh, Pittsburgh, PA, USA; VA Pittsburgh Healthcare System, Pittsburgh, PA, USA
- Sarah A Tilstra: Division of General Internal Medicine, Department of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
12. Brentnall J, Thackray D, Judd B. Evaluating the Clinical Reasoning of Student Health Professionals in Placement and Simulation Settings: A Systematic Review. Int J Environ Res Public Health 2022; 19:936. PMID: 35055758. PMCID: PMC8775520. DOI: 10.3390/ijerph19020936.
Abstract
(1) Background: Clinical reasoning is essential to the effective practice of autonomous health professionals and is, therefore, an essential capability to develop as students. This review aimed to systematically identify the tools available to health professional educators to evaluate students' attainment of clinical reasoning capabilities in clinical placement and simulation settings. (2) Methods: A systematic review of seven databases was undertaken. Peer-reviewed, English-language publications reporting studies that developed or tested relevant tools were included. Searches included multiple terms related to clinical reasoning and health disciplines. Data regarding each tool's conceptual basis and evaluated constructs were systematically extracted and analysed. (3) Results: Most of the 61 included papers evaluated students in medical and nursing disciplines, and over half reported on the Script Concordance Test or Lasater Clinical Judgement Rubric. A number of conceptual frameworks were referenced, though many papers did not reference any framework. (4) Conclusions: Overall, key outcomes highlighted an emphasis on diagnostic reasoning, as opposed to management reasoning. Tools were predominantly aligned with individual health disciplines and with limited cross-referencing within the field. Future research into clinical reasoning evaluation tools should build on and refer to existing approaches and consider contributions across professional disciplinary divides.
Affiliation(s)
- Jennie Brentnall (corresponding author): Work Integrated Learning, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW 2006, Australia
- Debbie Thackray: Physiotherapy, School of Health Sciences, University of Southampton, Southampton SO17 1BJ, UK
- Belinda Judd: Work Integrated Learning, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW 2006, Australia
13. Hung H, Kueh LL, Tseng CC, Huang HW, Wang SY, Hu YN, Lin PY, Wang JL, Chen PF, Liu CC, Roan JN. Assessing the quality of electronic medical records as a platform for resident education. BMC Med Educ 2021; 21:577. PMID: 34774027. PMCID: PMC8590775. DOI: 10.1186/s12909-021-03011-0.
Abstract
BACKGROUND Previous studies have assessed note quality and the use of electronic medical records (EMRs) as a part of medical training. However, a generalized and user-friendly note quality assessment tool is required for quick clinical assessment. We held a medical record writing competition and developed a checklist for assessing the note quality of participants' medical records. Using the checklist, this study aims to explore note quality between residents of different specialties and offer pedagogical implications. METHODS The authors created an inpatient checklist that examined fundamental EMR requirements through six note types and twenty items. A total of 149 records created by residents from 32 departments/stations were randomly selected. Seven senior physicians rated the EMRs using the checklist. Medical records were grouped as general medicine, surgery, paediatrics, obstetrics and gynaecology, and other departments. The overall and group performances were analysed using analysis of variance (ANOVA). RESULTS Overall performance was rated as fair to good. Regarding the six note types, discharge notes (0.81) gained the highest scores, followed by admission notes (0.79), problem list (0.73), overall performance (0.73), progress notes (0.71), and weekly summaries (0.66). Among the five groups, other departments (80.20) had the highest total score, followed by obstetrics and gynaecology (78.02), paediatrics (77.47), general medicine (75.58), and surgery (73.92). CONCLUSIONS This study suggested that duplication in medical notes and the documentation abilities of residents affect the quality of medical records in different departments. Further research is required to apply the insights obtained in this study to improve the quality of notes and, thereby, the effectiveness of resident training.
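Group differences here were analyzed with one-way ANOVA. A minimal sketch of that comparison across the five department groups with SciPy; the per-record checklist totals below are invented placeholders seeded from the reported group means:

```python
# Illustrative one-way ANOVA comparing checklist totals across department
# groups, mirroring the analysis described above. Data are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {
    "general medicine": rng.normal(75.6, 6, 40),
    "surgery":          rng.normal(73.9, 6, 40),
    "paediatrics":      rng.normal(77.5, 6, 25),
    "ob-gyn":           rng.normal(78.0, 6, 20),
    "other":            rng.normal(80.2, 6, 24),
}
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_val:.3g}")
```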
Affiliation(s)
- Hsuan Hung: Tainan Municipal North District Kaiyuan Elementary School, Tainan, Taiwan
- Ling-Ling Kueh: Institute of Education, National Cheng Kung University, Tainan, Taiwan
- Chin-Chung Tseng: Division of Nephrology, Department of Internal Medicine, National Cheng Kung University Hospital Dou-Liou Branch, College of Medicine, National Cheng Kung University, Yunlin, Taiwan
- Han-Wei Huang: Department of Neurology, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
- Shu-Yen Wang: Quality Center, National Cheng Kung University Hospital, College of Health Sciences, Chang Jung Christian University, Tainan, Taiwan
- Yu-Ning Hu: Division of Cardiovascular Surgery, Department of Surgery, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
- Pao-Yen Lin: Division of Cardiovascular Surgery, Department of Surgery, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
- Jiun-Ling Wang: Department of Internal Medicine, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
- Po-Fan Chen: Department of Obstetrics and Gynecology, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
- Ching-Chuan Liu: Department of Pediatrics, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
- Jun-Neng Roan: Division of Cardiovascular Surgery, Department of Surgery; Medical Device Innovation Center; and Institute of Clinical Medicine, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
14. Thammasitboon S, Sur M, Rencic JJ, Dhaliwal G, Kumar S, Sundaram S, Krishnamurthy P. Psychometric validation of the reconstructed version of the assessment of reasoning tool. Med Teach 2021; 43:168-173. PMID: 33073665. DOI: 10.1080/0142159x.2020.1830960.
Abstract
BACKGROUND Assessing learners' competence in diagnostic reasoning is challenging and unstandardized in medical education. We developed a theory-informed, behaviorally anchored rubric, the Assessment of Reasoning Tool (ART), with content and response process validity. This study gathered evidence to support the internal structure and the interpretation of measurements derived from this tool. METHODS We derived a reconstructed version of ART (ART-R) as a 15-item, 5-point Likert scale using the ART domains and descriptors. A psychometric evaluation was performed. We created 18 video variations of learner oral presentations, portraying different performance levels of the ART-R. RESULTS 152 faculty viewed two videos and rated the learner globally and then using the ART-R. The confirmatory factor analysis showed a favorable comparative fit index = 0.99, root mean square error of approximation = 0.097, and standardized root mean square residual = 0.026. The five domains, hypothesis-directed information gathering, problem representation, prioritized differential diagnosis, diagnostic evaluation, and awareness of cognitive tendencies/emotional factors, had high internal consistency. The total score for each domain had a positive association with the global assessment of diagnostic reasoning. CONCLUSIONS Our findings provide validity evidence for the ART-R as an assessment tool with five theoretical domains, internal consistency, and association with global assessment.
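Internal consistency within each ART-R domain was summarized with Cronbach's alpha. A minimal from-scratch sketch of that statistic for a block of Likert items, using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); the ratings matrix is illustrative, not study data:

```python
# Cronbach's alpha for a respondents x items block of 5-point Likert
# ratings. Ratings are invented; the formula itself is standard.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # items: shape (n_respondents, k_items)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

ratings = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2],
                    [4, 4, 5], [3, 2, 3]], float)   # one 3-item domain
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```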
Affiliation(s)
- Satid Thammasitboon: Department of Pediatrics, Section of Critical Care Medicine, Baylor College of Medicine, Houston, TX, USA; Department of Pediatrics, Center for Research, Innovation and Scholarship in Medical Education, Texas Children's Hospital, Houston, TX, USA
- Moushumi Sur: Department of Pediatrics, Section of Critical Care Medicine, Baylor College of Medicine, Houston, TX, USA
- Joseph J Rencic: Department of Medicine, Boston University School of Medicine, Boston, MA, USA
- Gurpreet Dhaliwal: Department of Medicine, University of California San Francisco, San Francisco, CA, USA; Medical Service Department, San Francisco VA Medical Center, San Francisco, CA, USA
- Shelley Kumar: Department of Pediatrics, Center for Research, Innovation and Scholarship in Medical Education, Texas Children's Hospital, Houston, TX, USA
- Suresh Sundaram: Department of Administration, Alfred Lerner College of Business & Economics, University of Delaware, Newark, DE, USA
- Parthasarathy Krishnamurthy: Department of Pediatrics, Section of Critical Care Medicine, Baylor College of Medicine, Houston, TX, USA; Department of Marketing and Entrepreneurship, C.T. Bauer College of Business, University of Houston, Houston, TX, USA; Department of Anesthesiology and Pain Medicine, University of Texas Medical Branch, Houston, TX, USA
15. (En)trust me: Validating an assessment rubric for documenting clinical encounters during a surgery clerkship clinical skills exam. Am J Surg 2020; 219:258-262. DOI: 10.1016/j.amjsurg.2018.12.055.
16. Thampy H, Willert E, Ramani S. Assessing clinical reasoning: targeting the higher levels of the pyramid. J Gen Intern Med 2019.
Abstract
Clinical reasoning is a core component of clinical competency that is used in all patient encounters, from simple to complex presentations. It involves synthesis of myriad clinical and investigative data to generate and prioritize an appropriate differential diagnosis and inform safe and targeted management plans. The literature is rich with proposed methods to teach this critical skill to trainees of all levels. Yet ensuring that reasoning ability is appropriately assessed across the spectrum, from knowledge acquisition to workplace-based clinical performance, can be challenging. In this perspective, we first introduce the concepts of illness scripts and dual-process theory, which describe the roles of non-analytic system 1 and analytic system 2 reasoning in clinical decision making. Thereafter, we draw upon existing evidence and expert opinion to review a range of methods that allow for effective assessment of clinical reasoning, contextualized within Miller's pyramid of learner assessment. Key assessment strategies that allow teachers to evaluate their learners' clinical reasoning ability are described, from the level of knowledge acquisition through to real-world demonstration in the clinical workplace.
Affiliation(s)
- Harish Thampy: Division of Medical Education, School of Medical Sciences, Faculty of Biology, Medicine & Health, University of Manchester, Manchester, UK
- Emma Willert: Division of Medical Education, School of Medical Sciences, Faculty of Biology, Medicine & Health, University of Manchester, Manchester, UK
- Subha Ramani: Harvard Medical School, Brigham and Women's Hospital, General Internal Medicine, Department of Medicine, Boston, MA, USA
17. Bonifacino E, Follansbee WP, Farkas AH, Jeong K, McNeil MA, DiNardo DJ. Implementation of a clinical reasoning curriculum for clerkship-level medical students: a pseudo-randomized and controlled study. Diagnosis (Berl) 2019; 6:165-172. PMID: 30920952. DOI: 10.1515/dx-2018-0063.
Abstract
Background The National Academies of Sciences report Improving Diagnosis in Health Care highlighted the need for better training in medical decision-making, but most medical schools lack formal education in clinical reasoning. Methods We conducted a pseudo-randomized and controlled study to evaluate the impact of a clinical reasoning curriculum in an internal medicine clerkship. Students in the intervention group completed six interactive online modules focused on reasoning concepts and a skills-based workshop. We assessed the impact of the curriculum on clinical reasoning knowledge and skills and perception of education by evaluating: (1) performance on a clinical reasoning concept quiz, (2) demonstration of reasoning in hospital admission notes, and (3) awareness of attending physician utilization of clinical reasoning concepts. Results Students in the intervention group demonstrated superior performance on the clinical reasoning knowledge quiz (67% vs. 54%, p < 0.001). Students in the intervention group demonstrated superior written reasoning skills in the data synthesis (2.3 vs. 2.0, p = 0.02) and diagnostic reasoning (2.2 vs. 1.9, p = 0.02) portions of their admission notes, and reported more discussion of clinical reasoning by their attending physicians. Conclusions Exposure to a clinical reasoning curriculum was associated with superior reasoning knowledge and superior written demonstration of clinical reasoning skills by third-year medical students on an internal medicine clerkship.
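The knowledge outcome is a between-group comparison of quiz scores (67% vs. 54%). A minimal sketch of such a two-group comparison with an independent-samples t-test; the abstract does not state which test the authors used, so the test choice and the simulated scores are assumptions:

```python
# Illustrative two-group comparison of clinical reasoning quiz scores
# (% correct) between intervention and control clerkship students.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
intervention = np.clip(rng.normal(67, 10, 60), 0, 100)
control = np.clip(rng.normal(54, 10, 60), 0, 100)

t, p = stats.ttest_ind(intervention, control)
print(f"intervention M={intervention.mean():.0f}%, "
      f"control M={control.mean():.0f}%, t={t:.2f}, p={p:.2g}")
```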
Affiliation(s)
- Eliana Bonifacino: Department of Medicine, University of Pittsburgh School of Medicine, 200 Lothrop Street 9 South, Pittsburgh, PA 15213, USA
- William P Follansbee: Professor of Medicine, Department of Medicine, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Amy H Farkas: Assistant Professor of Medicine, Department of Medicine, Medical College of Wisconsin, Milwaukee, WI, USA
- Kwonho Jeong: Center for Research on Healthcare Data Center, Division of General Internal Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- Melissa A McNeil: Professor of Medicine, Department of Medicine, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA; VA Pittsburgh Healthcare System, Department of Medicine, Pittsburgh, PA, USA
- Deborah J DiNardo: VA Pittsburgh Healthcare System, Department of Medicine, Pittsburgh, PA, USA; Clinical Instructor in Medicine, Department of Medicine, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
18. Daniel M, Rencic J, Durning SJ, Holmboe E, Santen SA, Lang V, Ratcliffe T, Gordon D, Heist B, Lubarsky S, Estrada CA, Ballard T, Artino AR, Sergio Da Silva A, Cleary T, Stojan J, Gruppen LD. Clinical Reasoning Assessment Methods: A Scoping Review and Practical Guidance. Acad Med 2019; 94:902-912. PMID: 30720527. DOI: 10.1097/acm.0000000000002618.
Abstract
PURPOSE An evidence-based approach to assessment is critical for ensuring the development of clinical reasoning (CR) competence. The wide array of CR assessment methods creates challenges for selecting assessments fit for the purpose; thus, a synthesis of the current evidence is needed to guide practice. A scoping review was performed to explore the existing menu of CR assessments. METHOD Multiple databases were searched from their inception to 2016 following PRISMA guidelines. Articles of all study design types were included if they studied a CR assessment method. The articles were sorted by assessment methods and reviewed by pairs of authors. Extracted data were used to construct descriptive appendixes, summarizing each method, including common stimuli, response formats, scoring, typical uses, validity considerations, feasibility issues, advantages, and disadvantages. RESULTS A total of 377 articles were included in the final synthesis. The articles broadly fell into three categories: non-workplace-based assessments (e.g., multiple-choice questions, extended matching questions, key feature examinations, script concordance tests); assessments in simulated clinical environments (objective structured clinical examinations and technology-enhanced simulation); and workplace-based assessments (e.g., direct observations, global assessments, oral case presentations, written notes). Validity considerations, feasibility issues, advantages, and disadvantages differed by method. CONCLUSIONS There are numerous assessment methods that align with different components of the complex construct of CR. Ensuring competency requires the development of programs of assessment that address all components of CR. Such programs are ideally constructed of complementary assessment methods to account for each method's validity and feasibility issues, advantages, and disadvantages.
Affiliation(s)
- Michelle Daniel
- M. Daniel is assistant dean for curriculum and associate professor of emergency medicine and learning health sciences, University of Michigan Medical School, Ann Arbor, Michigan; ORCID: http://orcid.org/0000-0001-8961-7119. J. Rencic is associate program director of the internal medicine residency program and associate professor of medicine, Tufts University School of Medicine, Boston, Massachusetts; ORCID: http://orcid.org/0000-0002-2598-3299. S.J. Durning is director of graduate programs in health professions education and professor of medicine and pathology, Uniformed Services University of the Health Sciences, Bethesda, Maryland. E. Holmboe is senior vice president of milestone development and evaluation, Accreditation Council for Graduate Medical Education, and adjunct professor of medicine, Northwestern Feinberg School of Medicine, Chicago, Illinois; ORCID: http://orcid.org/0000-0003-0108-6021. S.A. Santen is senior associate dean and professor of emergency medicine, Virginia Commonwealth University, Richmond, Virginia; ORCID: http://orcid.org/0000-0002-8327-8002. V. Lang is associate professor of medicine, University of Rochester School of Medicine and Dentistry, Rochester, New York; ORCID: http://orcid.org/0000-0002-2157-7613. T. Ratcliffe is associate professor of medicine, University of Texas Long School of Medicine at San Antonio, San Antonio, Texas. D. Gordon is medical undergraduate education director, associate residency program director of emergency medicine, and associate professor of surgery, Duke University School of Medicine, Durham, North Carolina. B. Heist is clerkship codirector and assistant professor of medicine, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania. S. Lubarsky is assistant professor of neurology, McGill University, and faculty of medicine and core member, McGill Center for Medical Education, Montreal, Quebec, Canada; ORCID: http://orcid.org/0000-0001-5692-1771. C.A. Estrada is staff physician, Birmingham Veterans Affairs Medical Center, and director, Division of General Internal Medicine, and professor of medicine, University of Alabama, Birmingham, Alabama; ORCID: https://orcid.org/0000-0001-6262-7421. T. Ballard is plastic surgeon, Ann Arbor Plastic Surgery, Ann Arbor, Michigan. A.R. Artino Jr is deputy director for graduate programs in health professions education and professor of medicine, preventive medicine, and biometrics pathology, Uniformed Services University of the Health Sciences, Bethesda, Maryland; ORCID: http://orcid.org/0000-0003-2661-7853. A. Sergio Da Silva is senior lecturer in medical education and director of the masters in medical education program, Swansea University Medical School, Swansea, United Kingdom; ORCID: http://orcid.org/0000-0001-7262-0215. T. Cleary is chair, Applied Psychology Department, CUNY Graduate School and University Center, New York, New York, and associate professor of applied and professional psychology, Rutgers University, New Brunswick, New Jersey. J. Stojan is associate professor of internal medicine and pediatrics, University of Michigan Medical School, Ann Arbor, Michigan. L.D. Gruppen is director of the master of health professions education program and professor of learning health sciences, University of Michigan Medical School, Ann Arbor, Michigan; ORCID: http://orcid.org/0000-0002-2107-0126
19. The Assessment of Reasoning Tool (ART): structuring the conversation between teachers and learners. Diagnosis (Berl) 2018; 5:197-203. DOI: 10.1515/dx-2018-0052.
Abstract
Background
Excellence in clinical reasoning is one of the most important outcomes of medical education programs, but assessing learners’ reasoning to inform corrective feedback is challenging and unstandardized.
Methods
The Society to Improve Diagnosis in Medicine formed a multi-specialty team of medical educators to develop the Assessment of Reasoning Tool (ART). This paper describes the tool development process. The tool was designed to help clinical teachers assess learners' oral presentations for competence in clinical reasoning and to facilitate formative feedback. Reasoning frameworks (e.g. script theory), contemporary practice goals (e.g. high-value care [HVC]) and proposed error reduction strategies (e.g. metacognition) were used to guide the development of the tool.
Results
The ART is a behaviorally anchored, three-point scale assessing five domains of reasoning: (1) hypothesis-directed data gathering, (2) articulation of a problem representation, (3) formulation of a prioritized differential diagnosis, (4) diagnostic testing aligned with HVC principles and (5) metacognition. Instructional videos were created for faculty development for each domain, guided by principles of multimedia learning.
Conclusions
The ART is a theory-informed assessment tool that allows teachers to assess clinical reasoning and structure feedback conversations.
20. Bajwa NM, Yudkowsky R, Belli D, Vu NV, Park YS. Validity Evidence for a Residency Admissions Standardized Assessment Letter for Pediatrics. Teach Learn Med 2018; 30:173-183. PMID: 29190140. DOI: 10.1080/10401334.2017.1367297.
Abstract
Construct: This study aims to provide validity evidence for the standardized Assessment Letter for Pediatrics as a measure of competencies expected of a 1st-year pediatrics resident as part of a pediatric residency admissions process. BACKGROUND The Narrative Letter of Recommendation is a frequently used tool in the residency admissions process even though it has poor interrater reliability, lacks pertinent content, and does not correlate with residency performance. A newer tool, the Standardized Letter, has shown validity evidence for content and interrater reliability in other specialties. We sought to develop and provide validity evidence for the standardized Assessment Letter for Pediatrics. APPROACH All 2012 and 2013 applicants invited to interview at the University of Geneva Pediatrics Residency Program provided 2 standardized Assessment Letters. Content for the letter was based on CanMEDS roles and ratings of 6 desired competencies and an overall assessment. Validity evidence was gathered for internal structure (Cronbach's alpha and generalizability), response process (interrater reliability with intraclass correlation), relations to other variables (Pearson's correlation coefficient), and consequences (logistic regression to predict admission). RESULTS One hundred fourteen faculty completed 142 standardized Assessment Letters for 71 applicants. Average overall assessment was 3.0 of 4 (SD = 0.59). Cronbach's alpha was 0.93. The G-coefficient was 0.59. The decision study projected that four Assessment Letters are needed to attain a G-coefficient of 0.73. Applicant variance (28.5%) indicated high applicant differentiation. The Assessment Letter intraclass coefficient was 0.51, 95% confidence interval (CI) [0.43, 0.59]. Assessment Letter scores were correlated with the structured interview (r = .28), 95% CI [0.05, 0.51]; global rating (r = .36), 95% CI [0.13, 0.58]; and admissions decision (r = .25), 95% CI [0.02, 0.46]. Assessment Letter scores did not predict the admissions decision (odds ratio = 1.67, p = .37) after controlling for the unique contribution of the structured interview and global rating scores. CONCLUSION Validity evidence supports use of the Assessment Letter for Pediatrics; future studies should refine items to improve predictive validity and explore how to best integrate the Assessment Letter into the residency admissions process.
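The reliability analysis uses generalizability theory: a G-coefficient from an applicants-by-letters design, plus a decision study projecting how many letters are needed to reach a target coefficient. A minimal sketch of that computation for a crossed persons x raters (p x r) design, estimating variance components from mean squares; the ratings matrix is invented and the study's actual design may differ:

```python
# Illustrative one-facet G-study (applicants x assessment letters, crossed)
# with a D-study projecting the G-coefficient for more letters per
# applicant. Ratings are simulated; formulas follow standard G-theory.
import numpy as np

rng = np.random.default_rng(0)
true_ability = rng.normal(3.0, 0.5, size=(71, 1))       # applicant effects
X = true_ability + rng.normal(0, 0.45, size=(71, 2))    # 2 letters each
n_p, n_r = X.shape

grand = X.mean()
ms_p = n_r * np.sum((X.mean(axis=1) - grand) ** 2) / (n_p - 1)
resid = (X - X.mean(axis=1, keepdims=True)
           - X.mean(axis=0, keepdims=True) + grand)
ms_res = np.sum(resid ** 2) / ((n_p - 1) * (n_r - 1))

var_p = max((ms_p - ms_res) / n_r, 0.0)   # applicant (universe) variance
var_res = ms_res                          # letter-by-applicant residual

for m in (1, 2, 4, 6):                    # D-study: letters per applicant
    g = var_p / (var_p + var_res / m)
    print(f"{m} letter(s): G = {g:.2f}")
```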
Affiliation(s)
- Nadia M Bajwa: Department of Child and Adolescent Medicine, Children's Hospital, Geneva University Hospitals, Geneva, Switzerland
- Rachel Yudkowsky: Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Dominique Belli: Department of Child and Adolescent Medicine, Children's Hospital, Geneva University Hospitals, Geneva, Switzerland
- Nu Viet Vu: Unit of Development and Research in Medical Education, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Yoon Soo Park: Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
21
Hsieh MC, Lee MS, Chen TY, Tsai TC, Pai YF, Sheu MM. Analyzing the effectiveness of teaching and factors in clinical decision-making. Tzu Chi Med J 2018; 29:223-227. [PMID: 29296052] [PMCID: PMC5740696] [DOI: 10.4103/tcmj.tcmj_34_17]
Abstract
Objective To prepare junior physicians for practice, clinical education should focus on the teaching of clinical decision-making. This study explores the teaching of clinical decision-making and analyzes the benefits of an "analogy guide for clinical decision-making" as a learning intervention for junior doctors. Materials and Methods This quasi-experimental study was conducted in a medical center in eastern Taiwan. Participants and Program Description: Thirty junior doctors and three clinical teachers were involved in the study. The experimental group (n = 15) received 1 h of instruction from the analogy guide for teaching clinical decision-making every day for 3 months. Program Evaluation: A clinical decision-making self-evaluation form was used to assess participants' learning before and after the teaching program, and semi-structured qualitative interviews were also conducted. Results Using the analogy guide for teaching clinical decision-making helped enhance junior doctors' self-confidence. Important factors influencing clinical decision-making included workload, decision-making, and past experience. Conclusion Clinical teaching using the analogy guide for clinical decision-making may be a helpful training tool and can contribute to a more comprehensive understanding of decision-making.
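The abstract does not state which statistical test was applied to the pre/post self-evaluation scores; a paired comparison is one common choice for this design. The sketch below uses a paired t-test on fabricated placeholder scores (n = 15, matching the experimental group size) purely to illustrate the shape of the analysis, not the study's actual data or result.

```python
# Illustrative pre/post comparison for a quasi-experimental design.
# The study's abstract does not name a test; a paired t-test is a common
# choice. All scores below are fabricated placeholders, not study data.
import numpy as np
from scipy import stats

pre = np.array([3.1, 2.8, 3.4, 2.9, 3.0, 3.2, 2.7, 3.3, 2.6, 3.0,
                2.9, 3.1, 3.5, 2.8, 3.2])   # n = 15 junior doctors
post = np.array([3.6, 3.2, 3.9, 3.4, 3.3, 3.8, 3.1, 3.7, 3.2, 3.5,
                 3.4, 3.6, 3.9, 3.3, 3.7])  # same learners after 3 months

t, p = stats.ttest_rel(pre, post)  # paired t-test on the same learners
print(f"t = {t:.2f}, p = {p:.4f}, mean gain = {np.mean(post - pre):.2f}")
```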
Affiliation(s)
- Ming-Chen Hsieh: Department of Medical Education, Buddhist Tzu Chi General Hospital, Hualien, Taiwan; Department of Medicine, College of Medicine, Tzu Chi University, Hualien, Taiwan; Department of Education and Human Potentials Development, National Dong Hwa University, Hualien, Taiwan
- Ming-Shinn Lee: Department of Education and Human Potentials Development, National Dong Hwa University, Hualien, Taiwan
- Tsung-Ying Chen: Department of Medicine, College of Medicine, Tzu Chi University, Hualien, Taiwan; Department of Education and Human Potentials Development, National Dong Hwa University, Hualien, Taiwan
- Tsuen-Chiuan Tsai: Department of Pediatrics, Kaohsiung Medical University Hospital and Kaohsiung Medical University, Kaohsiung, Taiwan
- Yi-Fong Pai: Department of Education and Human Potentials Development, National Dong Hwa University, Hualien, Taiwan
- Min-Muh Sheu: Department of Ophthalmology, Mennonite Christian Hospital, Hualien, Taiwan
22
Fischer MA, Kennedy KM, Durning S, Schijven MP, Ker J, O’Connor P, Doherty E, Kropmans TJB. Situational awareness within objective structured clinical examination stations in undergraduate medical training - a literature search. BMC MEDICAL EDUCATION 2017; 17:262. [PMID: 29268744] [PMCID: PMC5740962] [DOI: 10.1186/s12909-017-1105-y]
Abstract
BACKGROUND Medical students may not be able to identify the essential elements of situational awareness (SA) necessary for clinical reasoning. Recent studies suggest that students have little insight into cognitive processing and SA in clinical scenarios. Objective Structured Clinical Examinations (OSCEs) could be used to assess certain elements of SA. The purpose of this paper is to review the literature with a view to identifying whether levels of SA based on Endsley's model can be assessed using OSCEs during undergraduate medical training. METHODS A systematic search pertaining to SA and OSCEs was performed to identify studies published between January 1975 (when the first paper describing an OSCE appeared) and February 2017 in peer-reviewed international journals published in English. PUBMED, EMBASE, PsycINFO Ovid, and SCOPUS were searched for papers that described the assessment of SA using OSCEs among undergraduate medical students. Key search terms included "objective structured clinical examination", "objective structured clinical assessment" or "OSCE", combined with "non-technical skills", "sense-making", "clinical reasoning", "perception", "comprehension", "projection", "situation awareness", "situational awareness" and "situation assessment". Boolean operators (AND, OR) were used as conjunctions to narrow the search to papers relevant to the research question. The areas of interest were the elements of SA that can be assessed by these examinations. RESULTS The initial search of the literature retrieved 1127 publications. After removal of duplicates and of papers relating to nursing, paramedical disciplines, pharmacy, and veterinary education by title, abstract, or full text, 11 articles were eligible for inclusion as related to the assessment of elements of SA in undergraduate medical students. DISCUSSION The review suggests that whole-task OSCEs enable the evaluation of SA associated with clinical reasoning skills. If they address the levels of SA, such OSCEs can provide supportive feedback and strengthen educational measures associated with higher diagnostic accuracy and reasoning ability. CONCLUSION Based on these findings, early exposure of medical students to SA is recommended, using OSCEs to evaluate and facilitate SA in dynamic environments.
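The search strategy above can be expressed directly as a Boolean query. The sketch below composes the abstract's term lists into such a query and submits it to the NCBI E-utilities esearch endpoint; the date handling and single-database (PubMed) scope are simplifying assumptions for illustration, since the study also searched EMBASE, PsycINFO Ovid, and SCOPUS.

```python
# Sketch of the Boolean search strategy described in the abstract, sent to
# the NCBI E-utilities esearch endpoint. PubMed-only scope and publication-
# date restriction are assumptions; the study searched multiple databases.
import requests

osce_terms = ['"objective structured clinical examination"',
              '"objective structured clinical assessment"', '"OSCE"']
sa_terms = ['"non-technical skills"', '"sense-making"', '"clinical reasoning"',
            '"perception"', '"comprehension"', '"projection"',
            '"situation awareness"', '"situational awareness"',
            '"situation assessment"']

# (A OR B OR ...) AND (C OR D OR ...), mirroring the stated Boolean strategy
query = f"({' OR '.join(osce_terms)}) AND ({' OR '.join(sa_terms)})"

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": query, "retmode": "json",
            "mindate": "1975/01", "maxdate": "2017/02", "datetype": "pdat"},
    timeout=30,
)
print(resp.json()["esearchresult"]["count"])  # number of matching records
```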
Affiliation(s)
- Markus A. Fischer: School of Medicine, National University of Ireland Galway, University Road, Galway, H91 TK33, Ireland
- Kieran M. Kennedy: School of Medicine, National University of Ireland Galway, University Road, Galway, H91 TK33, Ireland
- Steven Durning: Department of Internal Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814, USA
- Marlies P. Schijven: Department of Surgery, Academic Medical Center Amsterdam, Meibergdreef 9, 1105 AZ Amsterdam-Zuidoost, The Netherlands
- Jean Ker: Clinical Skills Centre, Level 6, Ninewells Hospital & Medical School, University of Dundee, Dundee, UK
- Paul O’Connor: Discipline of General Practice, National University of Ireland Galway, Distillery Road, Galway, H91 TK33, Ireland
- Eva Doherty: Royal College of Surgeons in Ireland, 123 St Stephen’s Green, Dublin 2, Ireland
- Thomas J. B. Kropmans: School of Medicine, National University of Ireland Galway, University Road, Galway, H91 TK33, Ireland
23
King MA, Phillipi CA, Buchanan PM, Lewin LO. Self-Directed Rater Training for Pediatric History and Physical Exam Evaluation (P-HAPEE) Rubric, a Validated Written H&P Assessment Tool. MEDEDPORTAL: THE JOURNAL OF TEACHING AND LEARNING RESOURCES 2017; 13:10603. [PMID: 30800805] [PMCID: PMC6374741] [DOI: 10.15766/mep_2374-8265.10603]
Abstract
INTRODUCTION We developed, revised, and implemented self-directed rater training materials in the course of a validity study for the written Pediatric History and Physical Exam Evaluation (P-HAPEE) rubric. METHODS The core training materials consist of a single-page instruction sheet, a sample written history and physical (H&P), and a detailed answer key. We iteratively revised the materials based on reviewer comments and pilot testing. As part of the validity study, eighteen attending physicians and five senior residents underwent self-directed training, scored 10 H&Ps, and completed a rubric utility survey. We have since implemented the P-HAPEE rubric and self-directed rater training in a pediatric clerkship and, based on input from reviewers, study raters, faculty members, and medical student users, have developed and implemented additional optional supplemental training materials. RESULTS Pilot testing indicated that training takes approximately 1 hour. While reviewers endorsed the training format, several suggested having optional supplemental materials available. Nineteen of 23 volunteer study raters completed the rubric utility survey; all described the rubric as good or very good and indicated strong to very strong interest in continued use. DISCUSSION The P-HAPEE rubric offers a novel, practical, reliable, and valid method for supervising physicians to assess pediatric written H&Ps and can be implemented using brief, self-directed rater training.
Affiliation(s)
- Marta A. King: Associate Professor, Department of Pediatrics, Saint Louis University School of Medicine
- Carrie A. Phillipi: Professor, Department of Pediatrics, Oregon Health & Science University School of Medicine
- Paula M. Buchanan: Associate Professor, Center for Outcomes Research, Saint Louis University
- Linda O. Lewin: Associate Professor, Department of Pediatrics, University of Maryland School of Medicine
24
Sando KR, Skoy E, Bradley C, Frenzel J, Kirwin J, Urteaga E. Assessment of SOAP note evaluation tools in colleges and schools of pharmacy. CURRENTS IN PHARMACY TEACHING & LEARNING 2017; 9:576-584. [PMID: 29233430] [DOI: 10.1016/j.cptl.2017.03.010]
Abstract
INTRODUCTION To describe current methods used to assess SOAP notes in colleges and schools of pharmacy. METHODS Members of the American Association of Colleges of Pharmacy Laboratory Instructors Special Interest Group were invited to share assessment tools for SOAP notes. The content of submissions was evaluated to characterize the overall qualities of the tools and how they assessed subjective, objective, assessment, and plan information. RESULTS Thirty-nine assessment tools from 25 schools were evaluated. Twenty-nine (74%) of the tools were rubrics and ten (26%) were checklists. All rubrics included analytic scoring elements; two (7%) mixed holistic and analytic scoring. The most common rating scale among the rubrics, used by 35%, had four items. Substantial variability existed in how tools evaluated the subjective and objective sections. All tools included problem identification in the assessment section; other assessment items included goals (82%) and rationale (69%). Seventy-seven percent of tools assessed drug therapy, but only 33% assessed non-drug therapy. Other plan items included education (59%) and follow-up (90%). DISCUSSION AND CONCLUSIONS There is a great deal of variation in the specific elements used to evaluate SOAP notes in colleges and schools of pharmacy. Improved consistency in assessment methods may better prepare students to produce standardized documentation when entering practice.
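The percentages above come from a simple content-characterization tally across the submitted tools. A sketch of such a tally follows; the tool records and feature names are hypothetical stand-ins for the study's actual coding scheme.

```python
# Sketch of the content-characterization tally behind the reported
# percentages. Records and feature names are hypothetical stand-ins
# for the study's coding scheme, shown for two of the 39 tools.
from collections import Counter

tools = [
    {"type": "rubric", "features": {"goals", "rationale", "drug_therapy", "follow_up"}},
    {"type": "checklist", "features": {"goals", "drug_therapy", "education"}},
    # ... one record per submitted assessment tool (39 in the study)
]

type_counts = Counter(t["type"] for t in tools)
print(type_counts)  # e.g. rubric vs. checklist breakdown

n = len(tools)
for feature in ("goals", "rationale", "drug_therapy", "non_drug_therapy",
                "education", "follow_up"):
    pct = 100 * sum(feature in t["features"] for t in tools) / n
    print(f"{feature}: {pct:.0f}% of tools")
```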
Affiliation(s)
- Karen R Sando: Department of Pharmacotherapy & Translational Research, University of Florida College of Pharmacy, Gainesville, FL, United States
- Elizabeth Skoy: Department of Pharmacy Practice, North Dakota State University School of Pharmacy, Fargo, ND, United States
- Courtney Bradley: Fred Wilson School of Pharmacy, High Point University, High Point, NC, United States
- Jeanne Frenzel: Department of Pharmacy Practice, North Dakota State University School of Pharmacy, Fargo, ND, United States
- Jennifer Kirwin: Department of Pharmacy and Health Systems Sciences, School of Pharmacy, Northeastern University, Boston, MA, United States
- Elizabeth Urteaga: Department of Pharmacy Practice, Feik School of Pharmacy, University of the Incarnate Word, San Antonio, TX, United States
25
King MA, Phillipi CA, Buchanan PM, Lewin LO. Developing Validity Evidence for the Written Pediatric History and Physical Exam Evaluation Rubric. Acad Pediatr 2017; 17:68-73. [PMID: 27521461] [DOI: 10.1016/j.acap.2016.08.001]
Abstract
OBJECTIVE The written history and physical examination (H&P) is an underutilized source of medical trainee assessment. The authors describe the development of, and validity evidence for, the Pediatric History and Physical Exam Evaluation (P-HAPEE) rubric: a novel tool for evaluating written H&Ps. METHODS Using an iterative process, the authors drafted, revised, and implemented the 10-item rubric at 3 academic institutions in 2014. Eighteen attending physicians and 5 senior residents each scored 10 third-year medical student H&Ps. Inter-rater reliability (IRR) was determined using intraclass correlation coefficients; Cronbach's alpha was used to report internal consistency, and Spearman rank-order correlations were used to determine relationships between rubric items. Raters provided a global assessment, recorded the time to review and score each H&P, and completed a rubric utility survey. RESULTS The overall intraclass correlation was 0.85, indicating adequate IRR; global assessment IRR was 0.89. IRR for low- and high-quality H&Ps was significantly greater than for medium-quality ones but did not differ on the basis of rater category (attending physician vs. senior resident), note format (electronic health record vs. nonelectronic), or student diagnostic accuracy. Cronbach's alpha was 0.93. The highest correlation between an individual item and the total score was for the assessment item (0.84); the highest inter-item correlation was between the assessment and differential diagnosis items (0.78). The mean time to review and score an H&P was 16.3 minutes; residents took significantly longer than attending physicians. All raters described rubric utility as "good" or "very good" and endorsed continued use. CONCLUSIONS The P-HAPEE rubric offers a novel, practical, reliable, and valid method for supervising physicians to assess pediatric written H&Ps.
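For readers unfamiliar with the internal-consistency statistic reported above (Cronbach's alpha = 0.93), the sketch below computes alpha from a notes-by-items score matrix. The data are simulated stand-ins built with a shared quality factor so that items correlate, as they would on a real rubric; they are not the study's ratings.

```python
# Cronbach's alpha from a notes-by-items score matrix. The matrix here is
# simulated (a shared quality factor plus noise), not the study's data.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha; rows = scored H&Ps, columns = the 10 rubric items."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)          # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Hypothetical 3-point item scores for 30 H&Ps on a 10-item rubric.
quality = rng.normal(size=(30, 1))                  # shared quality factor
scores = np.clip(np.rint(2 + quality + 0.5 * rng.normal(size=(30, 10))), 1, 3)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```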
Affiliation(s)
- Marta A King: Division of General Academic Pediatrics, Saint Louis University School of Medicine, St Louis, MO
- Carrie A Phillipi: Department of Pediatrics, Oregon Health & Science University, Portland, OR
- Paula M Buchanan: Center for Outcomes Research, Saint Louis University, St Louis, MO
- Linda O Lewin: Department of Pediatrics, University of Maryland School of Medicine, Baltimore, MD