1
Taha MH, Mohammed HEEG, Abdalla ME, Yusoff MSB, Mohd Napiah MK, Wadi MM. The pattern of reporting and presenting validity evidence of extended matching questions (EMQs) in health professions education: a systematic review. Medical Education Online 2024;29:2412392. PMID: 39445670; PMCID: PMC11504699; DOI: 10.1080/10872981.2024.2412392.
Abstract
Extended matching questions (EMQs), or R-type questions, are a selected-response format. Validity evidence for this format is crucial, but misunderstandings about validity have been reported, and it is unclear what kinds of evidence should be presented, and how, to support the format's educational impact. This review explores the pattern and quality of reporting of the sources of validity evidence for EMQs in health professions education, encompassing content, response process, internal structure, relationship to other variables, and consequences. A systematic search of electronic databases, including MEDLINE via PubMed, Scopus, Web of Science, CINAHL, and ERIC, was conducted to identify studies that utilize EMQs. The framework of the unitary concept of validity was applied to extract data. Of the 218 titles initially identified, 19 were included in the final synthesis. The most frequently reported evidence was the reliability coefficient, followed by relationships to other variables. Moreover, the definition of validity adopted was mostly the old tripartite concept. This study found that the reporting and presenting of validity evidence appeared deficient: the available evidence can hardly support a strong validity argument for the educational impact of EMQs. This review calls for more work on developing a tool to measure the reporting and presenting of validity evidence.
Affiliation(s)
- Mohamed H. Taha
- College of Medicine and Medical Education Centre, University of Sharjah, Sharjah, United Arab Emirates
- Muhamad Saiful Bahri Yusoff
- Medical Education Department, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian, Malaysia
- Majed M. Wadi
- Medical Education Department, College of Medicine, Qassim University, Qassim, Saudi Arabia
2
Si J. Fostering clinical reasoning ability in preclinical students through an illness script worksheet approach in flipped learning: a quasi-experimental study. BMC Medical Education 2024;24:658. PMID: 38872172; DOI: 10.1186/s12909-024-05614-9.
Abstract
BACKGROUND Consensus is growing that clinical reasoning should be explicitly addressed throughout medical training; however, studies on specific teaching methods, particularly for preclinical students, are lacking. This study investigated the effects of an illness script worksheet approach in flipped learning on the development of clinical reasoning abilities in preclinical students. It also explored whether the impact of the intervention differed by baseline clinical reasoning ability, dividing students into high and low groups based on their pre-intervention Diagnostic Thinking Inventory (DTI) scores. METHODS This study used a one-group pre-post-test design and convenience sampling. Forty-two second-year medical students were invited to participate. The course "clinical reasoning method" was redesigned as an illness script worksheet approach in flipped learning and ran for eight weeks. The students met once or twice per week with a different professor each time and engaged with 15 clinical cases in small groups; each session, one professor facilitated seven groups in a single classroom. The effectiveness of the intervention was measured with the DTI before and after the intervention, and a learning experience survey was administered alongside the post-DTI assessment. RESULTS Thirty-six students participated in the survey and their data were analyzed. The mean DTI score rose from 170.4 before the intervention to 185.2 after, an 8.68% increase (p < .001). Significant pre-post differences were also found in both the high and low groups; however, the low group improved considerably more than the high group and also showed a significant increase in one of the DTI subscales. The overall average score on the learning experience survey was 3.11 out of 4.
CONCLUSION The findings indicated that the intervention was an effective instructional method for the development of clinical reasoning in preclinical students and was more beneficial for students with a low level of clinical reasoning ability. This study demonstrated that the intervention can be a feasible and scalable method to effectively and efficiently train clinical reasoning in preclinical students in a classroom.
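The reported gain can be checked with a quick percent-change calculation (a minimal sketch using the rounded means above; the small deviation from the reported 8.68% presumably reflects rounding of the published means):

```python
# Percent change in mean DTI score, from the means reported in the abstract.
pre_mean, post_mean = 170.4, 185.2

pct_increase = (post_mean - pre_mean) / pre_mean * 100
print(f"{pct_increase:.2f}%")
```
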
Affiliation(s)
- Jihyun Si
- Department of Medical Education, Dong-A University College of Medicine, 32 Daesingongwon-ro, Seo-gu, Busan, 49201, Korea.
3
Si J. Validating the Korean shorter Diagnostic Thinking Inventory in medical education: a pilot study. Korean Journal of Medical Education 2024;36:17-26. PMID: 38462239; PMCID: PMC10925811; DOI: 10.3946/kjme.2024.281.
Abstract
PURPOSE Developing clinical reasoning across the medical curriculum requires valid, reliable, and feasible assessment tools; however, few validated tools are available for convenient and efficient quantification of clinical reasoning. This study therefore aimed to create a shorter version of the Diagnostic Thinking Inventory (DTI) and validate it in the Korean medical education context (DTI-SK). METHODS The DTI-SK was constructed through content validation and a translation and back-translation process. It comprises two subcategories and 14 items. Its validity and reliability were explored using exploratory and confirmatory factor analyses, mean comparisons across four medical student groups (med 1 to med 4), and internal consistency (Cronbach's α). Two hundred medical students were invited to participate by email, and the survey was administered over 2 weeks. RESULTS Data from 136 students were analyzed. Exploratory factor analysis revealed two factors with eigenvalues greater than 1.0 that together explained 54.65% of the variance. Confirmatory factor analysis demonstrated an acceptable level of model fit and convergent validity, and discriminant validity was confirmed using the heterotrait-monotrait ratio criterion. Group comparisons showed that med 4 students scored significantly higher than med 1 and med 2 students. The inventory exhibited strong internal consistency across all items (Cronbach's α = 0.906). CONCLUSION The findings indicate that the DTI-SK is a reliable and valid tool for measuring medical students' clinical reasoning in the context of Korean medical education.
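Internal consistency of the kind reported above is computed directly from item scores. A minimal pure-Python sketch of Cronbach's alpha (the illustrative data are hypothetical, not the study's):

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a scale.

    item_scores: list of k columns, one list of n respondent
    scores per item (n >= 2 respondents).
    """
    k = len(item_scores)
    # Sum of per-item sample variances.
    sum_item_var = sum(variance(col) for col in item_scores)
    # Variance of each respondent's total score across items.
    totals = [sum(resp) for resp in zip(*item_scores)]
    return (k / (k - 1)) * (1 - sum_item_var / variance(totals))

# Two perfectly correlated items -> alpha = 1.0
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))
```
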
Affiliation(s)
- Jihyun Si
- Department of Medical Education, Dong-A University College of Medicine, Busan, Korea
4
Hermasari BK, Nugroho D, Maftuhah A, Pamungkasari EP, Budiastuti VI, Laras AA. Promoting medical student's clinical reasoning during COVID-19 pandemic. Korean Journal of Medical Education 2023;35:187-198. PMID: 37291847; DOI: 10.3946/kjme.2023.259.
Abstract
PURPOSE The development of students' clinical reasoning skills should be considered in the design of instruction and evaluation in medical education. In response to the coronavirus disease 2019 (COVID-19) pandemic, several changes to the medical curriculum were implemented to promote clinical reasoning. This study aims to explore medical students' perceptions of, and experiences with, the clinical reasoning curriculum during the COVID-19 pandemic and to determine how their skills developed. METHODS The study used a mixed-methods design with a concurrent approach. A cross-sectional study compared and examined the relationship between the outcomes of a structured oral examination (SOE) and the Diagnostic Thinking Inventory (DTI). Qualitative data were then collected through a focus group discussion using a semi-structured interview guide with open-ended questions, and the verbatim transcripts were subjected to thematic analysis. RESULTS SOE and DTI scores increased from second-year to fourth-year students, and the diagnostic thinking domains and the SOE were significantly correlated (r = 0.302, 0.313, and 0.241; p < 0.05). Three primary themes emerged from the qualitative analysis: perceptions regarding clinical reasoning, clinical reasoning activities, and the learning component. CONCLUSION Although students studied throughout the COVID-19 pandemic, their clinical reasoning skills still improved, and clinical reasoning and diagnostic thinking increased with year of study. Online case-based learning and assessment supported the development of clinical reasoning skills, as did positive student attitudes, faculty, peers, case type, and prior knowledge.
Affiliation(s)
- Dian Nugroho
- Faculty of Medicine, Sebelas Maret University, Surakarta, Indonesia
- Atik Maftuhah
- Faculty of Medicine, Sebelas Maret University, Surakarta, Indonesia
5
Edgar AK, Ainge L, Backhouse S, Armitage JA. A cohort study for the development and validation of a reflective inventory to quantify diagnostic reasoning skills in optometry practice. BMC Medical Education 2022;22:536. PMID: 35820888; PMCID: PMC9277884; DOI: 10.1186/s12909-022-03493-6.
Abstract
BACKGROUND Diagnostic reasoning is an essential skill for optometry practice and a vital part of the curriculum for optometry trainees, but there is limited understanding of how diagnostic reasoning is performed in optometry or how this skill is best developed. A validated and reliable self-reflective inventory for diagnostic reasoning in optometry would enable trainees and registered practitioners to benchmark their diagnostic reasoning skills and identify areas of strength and areas for improvement. METHODS A 41-item self-reflective inventory, the Diagnostic Thinking Inventory, used extensively in the medical field, was adapted for use in optometry as the Diagnostic Thinking Inventory for Optometry (DTI-O). The inventory measures two subdomains of diagnostic reasoning: flexibility in thinking and structured memory. Context-based changes were made to the original inventory and assessed for face and content validity by a panel of experts. The inventory was administered to two groups, experienced (qualified) optometrists and second-year optometry students, to establish the validity and reliability of the self-reflective tool in optometry. RESULTS Exploratory factor analysis revealed that 13 domain-specific items measured a single construct, diagnostic reasoning. One misfitting item was removed following Rasch analysis. Two unidimensional subdomains were confirmed in the remaining 12 items: Flexibility in Thinking (χ2 = 12.98, P = 0.37) and Structured Memory (χ2 = 8.74, P = 0.72). The Diagnostic Thinking Inventory for Optometry Short (DTI-OS) tool was formed from these items, with the total and subdomain scores exhibiting strong internal reliability (total score Cronbach's α = 0.92). External reliability was established by test-retest methodology (ICC 0.92, 95% CI 0.83-0.96, P < .001) and stacked Rasch analysis (one-way ANOVA, F = 0.07, P = 0.80). Qualified optometrists scored significantly higher (P < .001) than students, demonstrating construct validity.
CONCLUSION This study showed that the DTI-O and DTI-OS are valid and reliable self-reflective inventories for quantifying diagnostic reasoning ability in optometry. With no other validated tool to measure the metacognitive skill underpinning diagnostic reasoning, a self-reflective inventory could support the development of diagnostic reasoning in practitioners and guide curriculum design in optometry education.
Affiliation(s)
- Amanda K Edgar
- School of Medicine (Optometry), Deakin University, 75 Pigdons Road, Waurn Ponds, 3216, Australia
- Lucinda Ainge
- School of Medicine (Optometry), Deakin University, 75 Pigdons Road, Waurn Ponds, 3216, Australia
- Simon Backhouse
- School of Medicine (Optometry), Deakin University, 75 Pigdons Road, Waurn Ponds, 3216, Australia
- James A Armitage
- School of Medicine (Optometry), Deakin University, 75 Pigdons Road, Waurn Ponds, 3216, Australia
6
Hamzeh H, Madi M. Using the diagnostic thinking inventory in musculoskeletal physiotherapy: a validity and reliability study. Physiotherapy Research International 2021;26:e1895. PMID: 33464675; DOI: 10.1002/pri.1895.
Abstract
BACKGROUND The development of clinical reasoning is an essential aspect of musculoskeletal physiotherapy practice and is linked to better outcomes. Measurement of clinical reasoning has emphasized diagnostic reasoning, using different types of examinations. The Diagnostic Thinking Inventory (DTI) is a self-assessment tool developed to measure two aspects of diagnostic reasoning, flexibility in thinking (FT) and structure in memory (SM); it is valid and reliable and has been used extensively in the medical field. OBJECTIVE To investigate the validity and reliability of the DTI in musculoskeletal physiotherapy practice. METHODS Two groups of musculoskeletal physiotherapists completed the DTI. Expert musculoskeletal physiotherapists assessed face and content validity. Data from the second group were used to assess test-retest reliability, internal consistency was calculated using Cronbach's alpha, and construct validity was assessed by comparing the two groups. Data were analyzed using IBM SPSS Statistics version 25.0. RESULTS The experts agreed that the DTI measures diagnostic reasoning. For test-retest reliability, the average intraclass correlation coefficients were 0.91, 0.92, and 0.90 (p < 0.001) for the DTI, FT, and SM scores, respectively. Cronbach's alpha was 0.909, 0.919, and 0.897 for the DTI, FT, and SM, respectively. An independent-samples t-test demonstrated that the expert group achieved a significantly higher score (p < 0.001). CONCLUSION The DTI is valid and reliable for measuring diagnostic reasoning in the context of musculoskeletal physiotherapy practice and can be used to assess the impact of continuing education on musculoskeletal physiotherapists' diagnostic reasoning.
Affiliation(s)
- Hayat Hamzeh
- Department of Physiotherapy, School of Rehabilitation Sciences, The University of Jordan, Amman, Jordan
- Mohammad Madi
- Department of Physiotherapy and Occupational Therapy, School of Applied Medical Sciences, The Hashemite University, Zarqa, Jordan
7
Schaye V, Eliasz KL, Janjigian M, Stern DT. Theory-guided teaching: Implementation of a clinical reasoning curriculum in residents. Medical Teacher 2019;41:1192-1199. PMID: 31287343; DOI: 10.1080/0142159x.2019.1626977.
Abstract
Introduction: Educators have theorized that interventions grounded in dual process theory (DPT) and script theory (ST) may improve the diagnostic reasoning process of physicians but little empirical evidence exists. Methods: In this quasi-experimental study, we assessed the impact of a clinical reasoning (CR) curriculum grounded in DPT and ST on medicine residents participating in one of three groups during a 6-month period: no, partial, or full intervention. Residents completed the diagnostic thinking inventory (DTI) at baseline and 6 months. At 6 months, participants also completed a post-survey assessing application of concepts to cases. Results: There was a significant difference between groups in application of concepts (no intervention 1.6 (0.65) compared to partial 2.3 (0.81) and full 2.2 (0.91), p = 0.05), as well as describing cases in problem representation format (no intervention 1.2 (0.38) and partial 1.5 (0.55) compared to full 2.1 (0.93), p = 0.004). There was no significant difference in change in DTI scores (no intervention 7.0 (16.3), partial 8.8 (9.8), full 7.8 (12.0)). Conclusions: Residents who participated in a CR curriculum grounded in DPT and ST were effective in applying principles of CR in cases from their practice. To our knowledge, this is the first workplace-based CR educational intervention study showing differences in the reasoning process residents apply to patients.
Affiliation(s)
- Verity Schaye
- New York University School of Medicine, New York, NY, USA
- Department of Medicine, NYC Health and Hospitals Bellevue, New York, NY, USA
- Kinga L Eliasz
- New York University School of Medicine, New York, NY, USA
- NYU Langone Health, New York, NY, USA
- Michael Janjigian
- New York University School of Medicine, New York, NY, USA
- Department of Medicine, NYC Health and Hospitals Bellevue, New York, NY, USA
- David T Stern
- New York University School of Medicine, New York, NY, USA
- VA New York Harbor Healthcare System, New York, NY, USA
8
Daniel M, Rencic J, Durning SJ, Holmboe E, Santen SA, Lang V, Ratcliffe T, Gordon D, Heist B, Lubarsky S, Estrada CA, Ballard T, Artino AR, Sergio Da Silva A, Cleary T, Stojan J, Gruppen LD. Clinical Reasoning Assessment Methods: A Scoping Review and Practical Guidance. Academic Medicine 2019;94:902-912. PMID: 30720527; DOI: 10.1097/acm.0000000000002618.
Abstract
PURPOSE An evidence-based approach to assessment is critical for ensuring the development of clinical reasoning (CR) competence. The wide array of CR assessment methods creates challenges for selecting assessments fit for the purpose; thus, a synthesis of the current evidence is needed to guide practice. A scoping review was performed to explore the existing menu of CR assessments. METHOD Multiple databases were searched from their inception to 2016 following PRISMA guidelines. Articles of all study design types were included if they studied a CR assessment method. The articles were sorted by assessment methods and reviewed by pairs of authors. Extracted data were used to construct descriptive appendixes, summarizing each method, including common stimuli, response formats, scoring, typical uses, validity considerations, feasibility issues, advantages, and disadvantages. RESULTS A total of 377 articles were included in the final synthesis. The articles broadly fell into three categories: non-workplace-based assessments (e.g., multiple-choice questions, extended matching questions, key feature examinations, script concordance tests); assessments in simulated clinical environments (objective structured clinical examinations and technology-enhanced simulation); and workplace-based assessments (e.g., direct observations, global assessments, oral case presentations, written notes). Validity considerations, feasibility issues, advantages, and disadvantages differed by method. CONCLUSIONS There are numerous assessment methods that align with different components of the complex construct of CR. Ensuring competency requires the development of programs of assessment that address all components of CR. Such programs are ideally constructed of complementary assessment methods to account for each method's validity and feasibility issues, advantages, and disadvantages.
Affiliation(s)
- Michelle Daniel
- M. Daniel is assistant dean for curriculum and associate professor of emergency medicine and learning health sciences, University of Michigan Medical School, Ann Arbor, Michigan; ORCID: http://orcid.org/0000-0001-8961-7119. J. Rencic is associate program director of the internal medicine residency program and associate professor of medicine, Tufts University School of Medicine, Boston, Massachusetts; ORCID: http://orcid.org/0000-0002-2598-3299. S.J. Durning is director of graduate programs in health professions education and professor of medicine and pathology, Uniformed Services University of the Health Sciences, Bethesda, Maryland. E. Holmboe is senior vice president of milestone development and evaluation, Accreditation Council for Graduate Medical Education, and adjunct professor of medicine, Northwestern Feinberg School of Medicine, Chicago, Illinois; ORCID: http://orcid.org/0000-0003-0108-6021. S.A. Santen is senior associate dean and professor of emergency medicine, Virginia Commonwealth University, Richmond, Virginia; ORCID: http://orcid.org/0000-0002-8327-8002. V. Lang is associate professor of medicine, University of Rochester School of Medicine and Dentistry, Rochester, New York; ORCID: http://orcid.org/0000-0002-2157-7613. T. Ratcliffe is associate professor of medicine, University of Texas Long School of Medicine at San Antonio, San Antonio, Texas. D. Gordon is medical undergraduate education director, associate residency program director of emergency medicine, and associate professor of surgery, Duke University School of Medicine, Durham, North Carolina. B. Heist is clerkship codirector and assistant professor of medicine, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania. S. Lubarsky is assistant professor of neurology, McGill University, and faculty of medicine and core member, McGill Center for Medical Education, Montreal, Quebec, Canada; ORCID: http://orcid.org/0000-0001-5692-1771. C.A. 
Estrada is staff physician, Birmingham Veterans Affairs Medical Center, and director, Division of General Internal Medicine, and professor of medicine, University of Alabama, Birmingham, Alabama; ORCID: https://orcid.org/0000-0001-6262-7421. T. Ballard is plastic surgeon, Ann Arbor Plastic Surgery, Ann Arbor, Michigan. A.R. Artino Jr is deputy director for graduate programs in health professions education and professor of medicine, preventive medicine, and biometrics pathology, Uniformed Services University of the Health Sciences, Bethesda, Maryland; ORCID: http://orcid.org/0000-0003-2661-7853. A. Sergio Da Silva is senior lecturer in medical education and director of the masters in medical education program, Swansea University Medical School, Swansea, United Kingdom; ORCID: http://orcid.org/0000-0001-7262-0215. T. Cleary is chair, Applied Psychology Department, CUNY Graduate School and University Center, New York, New York, and associate professor of applied and professional psychology, Rutgers University, New Brunswick, New Jersey. J. Stojan is associate professor of internal medicine and pediatrics, University of Michigan Medical School, Ann Arbor, Michigan. L.D. Gruppen is director of the master of health professions education program and professor of learning health sciences, University of Michigan Medical School, Ann Arbor, Michigan; ORCID: http://orcid.org/0000-0002-2107-0126
9
Cultural adaption and validation of the German version of the diagnostic thinking inventory (DTI-G). International Journal of Health Professions 2019. DOI: 10.2478/ijhp-2019-0002.
Abstract
Diagnostic ability is essential for expert professional practice. Several instruments have been developed to assess diagnostic skills independent of specific knowledge. One such instrument is the diagnostic thinking inventory (DTI), which is used in different settings to evaluate diagnostic performance and has shown acceptable reliability and validity.
The aim of the present study was to translate and validate a German version (DTI-G).
Cultural adaptation and translation were performed according to international guidelines. Internal consistency and item discrimination indexes were calculated. The factorial structure of the DTI-G, test-retest reliability and known-groups validity were tested.
A total of 388 physiotherapists completed the questionnaire. The internal consistency was good for the overall score of the DTI-G (Cronbach's α = 0.84). Exploratory factor analysis yielded a five-factor solution with 21 items that explained 55% of the total variance across items. A confirmatory principal component analysis resulted in the same five-factor structure, showing an acceptable to good overall fit of the model (CFI = 0.93; RMSEA = 0.05; SRMR = 0.06). Test-retest reliability was found to be good (intraclass correlation coefficient ICC(2,1) = 0.87, p < 0.001, n = 118). The difference between participants with more than 9 years of clinical experience and those with less than 9 years of clinical experience (median split) was significant (t(385) = 6.00, p < 0.001), supporting known-groups validity.
The results support construct validity and indicate good test-retest reliability of the DTI-G. The DTI-G can be used to measure and develop diagnostic ability of physiotherapists in clinical practice and education. Further research is necessary to validate the questionnaire for other health professions.
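The test-retest figure above comes from a two-way random-effects ANOVA decomposition. A minimal pure-Python sketch of the standard Shrout-Fleiss ICC(2,1) formula (absolute agreement, single measure; illustrative data only, not the study's):

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    ratings: n subjects x k raters/occasions, as a list of rows.
    """
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(col) / n for col in zip(*ratings)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-subjects mean square
    msc = ss_cols / (k - 1)                 # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Perfect test-retest agreement -> ICC = 1.0
print(icc_2_1([[1, 1], [2, 2], [3, 3]]))
```
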
10
Findyartini A, Hawthorne L, McColl G, Chiavaroli N. How clinical reasoning is taught and learned: Cultural perspectives from the University of Melbourne and Universitas Indonesia. BMC Medical Education 2016;16:185. PMID: 27443145; PMCID: PMC4957336; DOI: 10.1186/s12909-016-0709-y.
Abstract
BACKGROUND The majority of schools in the Asia-Pacific region have adopted medical curricula based on western pedagogy. However to date there has been minimal exploration of the influence of the culture of learning on the teaching and learning process. This paper explores this issue in relation to clinical reasoning. METHOD A comparative case study was conducted in 2 medical schools in Australia (University of Melbourne) and Asia (Universitas Indonesia). It involved assessment of medical students' attitudes to clinical reasoning through administration of the Diagnostic Thinking Inventory (DTI), followed by qualitative interviews which explored related cultural issues. A total of 11 student focus group discussions (45 students) and 24 individual medical teacher interviews were conducted, followed by thematic analysis. RESULTS Students from Universitas Indonesia were found to score lower on the Flexibility in Thinking subscale of the DTI. Qualitative data analysis based on Hofstede's theoretical constructs concerning the culture of learning also highlighted clear differences in relation to attitudes to authority and uncertainty avoidance, with potential impacts on attitudes to teaching and learning of clinical reasoning in undergraduate medical education. CONCLUSIONS Different attitudes to teaching and learning clinical reasoning reflecting western and Asian cultures of learning were identified in this study. The potential impact of cultural differences should be understood when planning how clinical reasoning can be best taught and learned in the changing global contexts of medical education, especially when the western medical education approach is implemented in Asian contexts.
Affiliation(s)
- Ardi Findyartini
- Department of Medical Education, Faculty of Medicine, Universitas Indonesia, Jakarta, Indonesia
- Lesleyanne Hawthorne
- Melbourne School of Population and Global Health, University of Melbourne, Melbourne, Australia
- Geoff McColl
- Melbourne Medical School, Faculty of Medicine Dentistry and Health Sciences, University of Melbourne, Melbourne, Australia
- Neville Chiavaroli
- Melbourne Medical School, Faculty of Medicine Dentistry and Health Sciences, University of Melbourne, Melbourne, Australia
11
Schmidt HG, Mamede S. How to improve the teaching of clinical reasoning: a narrative review and a proposal. Medical Education 2015;49:961-973. PMID: 26383068; DOI: 10.1111/medu.12775.
Abstract
CONTEXT The development of clinical reasoning (CR) in students has traditionally been left to clinical rotations, which, however, often offer limited practice and suboptimal supervision. Medical schools have begun to address these limitations by organising pre-clinical CR courses. The purpose of this paper is to review the variety of approaches employed in the teaching of CR and to present a proposal to improve these practices. METHODS We conducted a narrative review of the literature on teaching CR. To that end, we searched PubMed and Web of Science for papers published until June 2014. Additional publications were identified in the references cited in the initial papers. We used theoretical considerations to characterise approaches and noted empirical findings, when available. RESULTS Of the 48 reviewed papers, only 24 reported empirical findings. The approaches to teaching CR were shown to vary on two dimensions. The first pertains to the way the case information is presented: the case is either unfolded to students gradually, the 'serial-cue' approach, or is presented in a 'whole-case' format. The second dimension concerns the purpose of the exercise: is its aim to help students acquire or apply knowledge, or is its purpose to teach students a way of thinking? The most prevalent approach is the serial-cue approach, perhaps because it tries to directly simulate the diagnostic activities of doctors. Evidence supporting its effectiveness is, however, lacking. There is some empirical evidence that whole-case, knowledge-oriented approaches contribute to the improvement of students' CR. However, thinking process-oriented approaches were shown to be largely ineffective. CONCLUSIONS Based on research on how expertise develops in medicine, we argue that students in different phases of their training may benefit from different approaches to the teaching of CR.
Affiliation(s)
- Henk G Schmidt
- Department of Psychology, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Sílvia Mamede
- Institute of Medical Education Research Rotterdam, Erasmus Medical Centre, Rotterdam, The Netherlands
12
Gehlhar K, Klimke-Jung K, Stosch C, Fischer MR. Do different medical curricula influence self-assessed clinical thinking of students? GMS ZEITSCHRIFT FUR MEDIZINISCHE AUSBILDUNG 2014; 31:Doc23. [PMID: 24872858 PMCID: PMC4027808 DOI: 10.3205/zma000915] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/02/2013] [Revised: 03/13/2014] [Accepted: 04/02/2014] [Indexed: 11/30/2022]
Abstract
OBJECTIVES As a fundamental element of medical practice, clinical reasoning should be cultivated in courses of study in human medicine. To date, however, no conclusive evidence has been offered as to what forms of teaching and learning are most effective in achieving this goal. The Diagnostic Thinking Inventory (DTI) was developed as a means of measuring knowledge-unrelated components of clinical reasoning. The present pilot study examines the adequacy of this instrument in measuring differences in the clinical reasoning of students in varying stages of education in three curricula of medical studies. METHODS The Diagnostic Thinking Inventory (DTI) comprises 41 items in two subscales ("Flexibility in Thinking" and "Structure of Knowledge in Memory"). Each item contains a statement or finding concerning clinical reasoning in the form of a stem under which a 6-point scale presents opposing conclusions. The subjects are asked to assess their clinical thinking within this range. The German-language version of the DTI was completed by 247 student volunteers from three schools and varying clinical semesters. In a quasi-experimental design, 219 subjects from traditional and model courses of study in the German state of North Rhine-Westphalia took part. Specifically, these were 5th, 6th and 8th semester students from the model course of study at Witten/Herdecke University (W/HU), from the model (7th and 9th semester) and traditional (7th semester) courses of study at the Ruhr University Bochum (RUB) and from the model course of study (9th semester) at the University of Cologne (UoC). The data retrieved were quantitatively assessed. RESULTS The reliability of the questionnaire in its entirety was good (Cronbach's alpha between 0.71 and 0.83); the reliability of the subscales ranged between 0.49 and 0.75.
The different groups were compared using the Mann-Whitney test, revealing significant differences among semester cohorts within a school as well as between students from similar academic years in different schools. Among the participants from the model course of study at the W/HU, scores increased from the 5th to the 6th semester and from the 5th to the 9th semester. Among individual cohorts at RUB, no differences could be established between model and traditional courses of study or between 7th and 9th semester students in model courses of study. Comparing all participating highest-semester students, the 8th semester participants from the W/HU achieved the highest scores, significantly higher than those of 9th semester RUB students or 9th semester UoC students. Scores from the RUB 9th semester participants were significantly higher than those of the 9th semester UoC participants. DISCUSSION The German-language version of the DTI measures self-assessed differences in diagnostic reasoning among students from various semesters and different model and traditional courses of study with satisfactory reliability. The results can be used for discussion in the context of diverse curricula. The DTI is therefore appropriate for further research that can then be correlated with the different teaching method characteristics and outcomes of various curricula.
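The reliability figures quoted in this abstract are Cronbach's alpha values. As an illustrative sketch only (the DTI item data are not available here, so the input format and function name below are hypothetical), alpha can be computed from per-item score columns as:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a questionnaire.

    items: list of equal-length lists, one list of respondent scores per
    item. Illustrative sketch; not the authors' actual analysis code.
    """
    k = len(items)                      # number of items
    n = len(items[0])                   # number of respondents

    def var(xs):                        # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # total score per respondent across all items
    totals = [sum(col[i] for col in items) for i in range(n)]
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))
```

Values around 0.7-0.8, as reported for the full questionnaire, are conventionally taken as acceptable-to-good internal consistency; the lower subscale values (0.49-0.75) partly reflect the smaller item counts, since alpha tends to fall as scales get shorter.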
Affiliation(s)
- Kirsten Gehlhar
- Carl von Ossietzky University of Oldenburg, School of Medicine and Health Sciences, Oldenburg, Germany
- Martin R Fischer
- Clinic of the Ludwig Maximilian University of Munich, Institute for Medical Education, Munich, Germany
13
Bauer D, Holzer M, Kopp V, Fischer MR. Pick-N multiple choice-exams: a comparison of scoring algorithms. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2011; 16:211-221. [PMID: 21038082 DOI: 10.1007/s10459-010-9256-1] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/16/2010] [Accepted: 10/20/2010] [Indexed: 05/30/2023]
Abstract
To compare different scoring algorithms for Pick-N multiple correct answer multiple-choice (MC) exams regarding test reliability, student performance, total item discrimination and item difficulty. Data from six end-of-term internal medicine exams taken by 3rd-year medical students at Munich University between 2005 and 2008 were analysed (1,255 students, 180 Pick-N items in total). SCORING ALGORITHMS Each question scored a maximum of one point. We compared: (a) Dichotomous scoring (DS): one point if all true and no wrong answers were chosen. (b) Partial credit algorithm 1 (PS50): one point for 100% true answers; 0.5 points for 50% or more true answers; zero points for less than 50% true answers; no point deduction for wrong choices. (c) Partial credit algorithm 2 (PS1/m): a fraction of one point, depending on the total number of true answers, given for each correct answer identified; no point deduction for wrong choices. Application of partial crediting resulted in psychometric results superior to dichotomous scoring (DS). The algorithms examined yielded similar psychometric data, with PS50 only slightly exceeding PS1/m in coefficients of reliability. The Pick-N MC format and its scoring using the PS50 and PS1/m algorithms are suited for undergraduate medical examinations. Partial knowledge should be awarded in Pick-N MC exams.
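The three scoring rules compared in this abstract can be sketched as follows. This is a minimal illustration: the function name and set-based interface are assumptions of mine, and the algorithm labels are simplified spellings of the abstract's DS, PS(50) and PS(1/m); only the scoring rules themselves come from the paper.

```python
def score_pick_n(chosen, correct, algorithm):
    """Score one Pick-N item worth at most one point.

    chosen / correct are sets of option labels. Per the abstract, no
    points are deducted for wrong choices under PS50 or PS1/m.
    Hypothetical sketch, not the authors' implementation.
    """
    frac_true = len(chosen & correct) / len(correct)
    if algorithm == "DS":      # dichotomous: all true and no wrong answers
        return 1.0 if chosen == correct else 0.0
    if algorithm == "PS50":    # full / half / zero credit at 100% / 50%
        return 1.0 if frac_true == 1.0 else (0.5 if frac_true >= 0.5 else 0.0)
    if algorithm == "PS1/m":   # 1/m of a point per true answer identified
        return frac_true
    raise ValueError(f"unknown algorithm: {algorithm}")
```

For example, a student who marks one of two true options scores 0 under DS, 0.5 under PS50 and 0.5 under PS1/m, which is precisely the partial-knowledge credit the paper argues for.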
Affiliation(s)
- Daniel Bauer
- Faculty of Health, Institute for Teaching and Educational Research in Health Sciences, Witten/Herdecke University, Germany.
14
Diagnostic grand rounds: a new teaching concept to train diagnostic reasoning. Eur J Radiol 2009; 78:349-52. [PMID: 19497695 DOI: 10.1016/j.ejrad.2009.05.015] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2009] [Accepted: 05/04/2009] [Indexed: 11/21/2022]
Abstract
INTRODUCTION Diagnostic reasoning is a core skill in teaching and learning in undergraduate curricula. Diagnostic grand rounds (DGRs), a subform of grand rounds, are intended to train students' skills in the selection of appropriate tests and in the interpretation of test results. The aim of this study was to test DGRs for their ability to improve diagnostic reasoning by using a pre-post-test design. METHODS During one winter term, all 398 fifth-year students (36.1% male, 63.9% female) solved 23 clinical cases presented in 8 DGRs. In an online questionnaire, the 41-item Diagnostic Thinking Inventory (DTI) was used to assess flexibility in thinking and structure of knowledge in memory. Results were correlated with those from a summative multiple-choice knowledge test and with the learning objectives documented in a logbook. RESULTS The students' DTI scores in the post-test were significantly higher than those reported in the pre-test. DTI scores at either testing time did not correlate with medical knowledge as assessed by a multiple-choice knowledge test. Abilities acquired during clinical clerkships, as documented in a logbook, could account for only a small proportion of the increase in the flexibility subscale score. This effect remained significant after accounting for potential confounders. CONCLUSION Establishing DGRs proved to be an effective way of improving both students' diagnostic reasoning and their ability to select the appropriate test method in routine clinical practice.
15