1. Fábry S, Rózsa S, Hargittay C, Kristóf P, Szélvári Á, Vörös K, Torzsa P, Németh E, Dornan T, Eőry A. Evaluating real-patient learning in medical education - Hungarian validation of the Manchester Clinical Placement Index. Front Med (Lausanne) 2023;10:1265804. PMID: 38162882; PMCID: PMC10756501; DOI: 10.3389/fmed.2023.1265804.
Abstract
Introduction: The Manchester Clinical Placement Index (MCPI) is an instrument for measuring medical undergraduates' real-patient learning in communities of practice in both hospital and GP placements. Its suitability for evaluating the quality of placement learning environments has been validated in an English-language context, but evidence for its applicability in other languages is lacking. Our aim was to explore thoroughly the factor structure and key psychometric properties of the Hungarian-language version.
Methods: The MCPI is an 8-item, mixed-methods instrument that evaluates the quality of clinical placements as represented by the leadership, reception, supportiveness, facilities and organization of the placement (learning environment) as well as instruction, observation and feedback (training), rated on 7-point Likert scales with options for free-text comments on the strengths and weaknesses of the placement on any of the items. We collected data online from medical students in their preclinical (1st, 2nd) and clinical (4th, 5th) years in a cross-sectional design in the academic years 2019-2020 and 2021-2022, at the end of their clinical placements. The sample comprised data from 748 medical students. Exploratory and confirmatory factor analyses were performed, and higher-order factors were tested.
Results: Although a bifactor model gave the best model fit (RMSEA = 0.024, CFI = 0.999, TLI = 0.998), the high explained common variance (ECV = 0.82) and reliability coefficient (ωH = 0.87) for the general factor suggested that the Hungarian version of the MCPI can be considered unidimensional. Individual use of either subscale was not supported statistically because of their low reliabilities.
Discussion: The Hungarian-language version of the MCPI proved to be a valid unidimensional instrument for measuring the quality of undergraduate medical placements. The previously reported subscales were not robust enough, in the Hungarian context, to distinguish statistically the quality of learning environments from the training provided within those environments. This does not, however, preclude formative use of the subscales for quality-improvement purposes.
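For readers who want to see how the unidimensionality indices reported above are computed, the sketch below derives explained common variance (ECV) and omega hierarchical (ωH) from standardized bifactor loadings. The loading values are hypothetical placeholders, not the study's estimates; the formulas follow the standard bifactor-index definitions.

```python
import numpy as np

# Hypothetical standardized loadings for an 8-item bifactor model
# (placeholder values, NOT the estimates reported in the study).
general = np.array([0.75, 0.70, 0.72, 0.68, 0.74, 0.71, 0.69, 0.73])  # general factor
group_env = np.array([0.30, 0.28, 0.25, 0.27, 0.26, 0.0, 0.0, 0.0])   # learning environment
group_trn = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.32, 0.29, 0.31])     # training

# Explained common variance: share of common variance due to the general factor.
common = general**2 + group_env**2 + group_trn**2
ecv = (general**2).sum() / common.sum()

# Omega hierarchical: proportion of total score variance attributable to the
# general factor, with item uniqueness taken as 1 minus communality.
uniqueness = 1.0 - common
total_var = general.sum()**2 + group_env.sum()**2 + group_trn.sum()**2 + uniqueness.sum()
omega_h = general.sum()**2 / total_var

print(f"ECV = {ecv:.2f}, omega_H = {omega_h:.2f}")
```

High values of both indices, as in the study, are what justify scoring the instrument as a single scale despite the two reported subscales.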
Affiliation(s)
- Szabolcs Fábry: Heart and Vascular Center, Semmelweis University, Budapest, Hungary; Department of Anaesthesiology and Intensive Therapy, Semmelweis University, Budapest, Hungary
- Sándor Rózsa: Department of Personality and Health Psychology, Károli Gáspár University of the Reformed Church, Budapest, Hungary
- Csenge Hargittay: Department of Family Medicine, Semmelweis University, Budapest, Hungary
- Petra Kristóf: Faculty of Medicine, Semmelweis University, Budapest, Hungary
- Ágnes Szélvári: Department of Family Medicine, Semmelweis University, Budapest, Hungary
- Krisztián Vörös: Department of Family Medicine, Semmelweis University, Budapest, Hungary
- Péter Torzsa: Department of Family Medicine, Semmelweis University, Budapest, Hungary
- Endre Németh: Heart and Vascular Center, Semmelweis University, Budapest, Hungary; Department of Anaesthesiology and Intensive Therapy, Semmelweis University, Budapest, Hungary
- Timothy Dornan: Centre for Medical Education, Queen's University Belfast, Belfast, United Kingdom
- Ajándék Eőry: Department of Family Medicine, Semmelweis University, Budapest, Hungary
2. González La Rotta M, Mazzanti V, Serna Rivas L, Triana Schoonewolff CA. Cognitive load in academic clinical simulation activities: a cross-sectional study. Colombian Journal of Anesthesiology 2022. DOI: 10.5554/22562087.e1044.
Abstract
Introduction: Cognitive load determines the ability of working memory to store and retain information in long-term memory, and thus conditions learning.
Objective: To compare cognitive load across different simulation activities, including anesthesia and surgery simulation workshops, among medical students.
Methods: Cross-sectional analytical observational study. Two cognitive load measurement scales (Paas and NASA-TLX) were given to the students after each simulation workshop. Comparisons were made based on the scores derived from the scales.
Results: A relevant difference was found in mental effort, as assessed with the Paas scale, in relation to student rotation order in the airway management workshop: effort was greater in the group that rotated first in surgery (6.19 vs. 5.53; p = 0.029). The workshop with the highest associated frustration was the airway management workshop, which scored higher on all items of the NASA-TLX scale, reflecting a higher cognitive load compared with the other workshops.
Conclusion: It was not possible to determine whether higher scores in some of the activities were associated with the inherent difficulty of airway management or the specific workshop design. Consequently, further studies are required to distinguish between those components in order to improve the way learning activities are designed.
3. Aoun Bahous S, Salameh P, Salloum A, Salameh W, Park YS, Tekian A. Voluntary vs. compulsory student evaluation of clerkships: effect on validity and potential bias. BMC Med Educ 2018;18:9. PMID: 29304800; PMCID: PMC5756350; DOI: 10.1186/s12909-017-1116-8.
Abstract
Background: Students' evaluations of their learning experiences can provide a useful source of information about clerkship effectiveness in undergraduate medical education. However, low response rates in clerkship evaluation surveys remain an important limitation. This study examined the impact on validity evidence of increasing response rates using a compulsory approach.
Methods: Data included 192 responses obtained voluntarily from 49 third-year students in 2014-2015 and 171 responses obtained compulsorily from 49 students in the first six months of the following academic year at one medical school in Lebanon. Evidence supporting internal-structure and response-process validity was compared between the two administration modalities. The authors also tested for potential bias introduced by the compulsory approach by examining students' responses to a sham item added to the last survey administration.
Results: Response rates increased from 56% in the voluntary group to 100% in the compulsory group (P < 0.001). Students in both groups provided comparable clerkship ratings, except for one clerkship that received a higher rating in the voluntary group (P = 0.02). Respondents in the voluntary group had higher academic performance than the compulsory group, but this difference diminished when whole-class grades were compared. Reliability of ratings was adequately high and comparable between the two consecutive years. Testing for non-response bias in the voluntary group showed that females responded more frequently in two clerkships. Testing for authority-induced bias revealed that students might complete the evaluation randomly, without attention to content.
Conclusions: While increasing response rates is often a policy requirement aimed at improving the credibility of ratings, using authority to enforce responses may not increase reliability and can raise concerns over the meaningfulness of the evaluation. Administrators are urged to consider not only response rates but also the representativeness and quality of responses when administering evaluation surveys.
Affiliation(s)
- Sola Aoun Bahous: Lebanese American University School of Medicine, Byblos, Lebanon; Lebanese American University Medical Center – Rizk Hospital, May Zahhar Street, Ashrafieh, P.O. Box 11-3288, Beirut, Lebanon
- Pascale Salameh: Lebanese American University School of Pharmacy, Byblos, Lebanon
- Wael Salameh: Lebanese American University School of Medicine, Byblos, Lebanon
- Yoon Soo Park: Department of Medical Education, College of Medicine, University of Illinois at Chicago, Chicago, IL, USA
- Ara Tekian: Department of Medical Education, College of Medicine, University of Illinois at Chicago, Chicago, IL, USA
4. Bzowyckyj AS, Dow A, Knab MS. Evaluating the impact of educational interventions on patients and communities: a conceptual framework. Acad Med 2017;92:1531-1535. PMID: 28471778; DOI: 10.1097/acm.0000000000001718.
Abstract
Health professions education programs can have direct effects on patients and communities as well as on learners. However, few studies have examined the patient and community outcomes of educational interventions. To better integrate education and health care delivery, educators and researchers would benefit from a unifying framework to guide the planning of educational interventions and the evaluation of their impact on patients.
The authors of this Perspective mirrored approaches from Miller's pyramid of educational assessment and Moore and colleagues' framework for evaluating continuing professional development to propose a conceptual framework for evaluating the impact of educational interventions on patients and communities. This proposed framework, which complements these existing frameworks for evaluating the impact of educational interventions on learners, includes four levels: (1) interaction; (2) acceptability; (3) individual outcomes (i.e., knowledge, skills, activation, behaviors, and individual health indicators); and (4) population outcomes (i.e., community health indicators, capacity, and disparities). The authors describe measures and outcomes at each level and provide an example of the application of their new conceptual framework.
The authors encourage educators and researchers to use this conceptual framework to evaluate the impact of educational interventions on patients and to more clearly identify and define which educational interventions strengthen communities and enhance overall health outcomes.
Affiliation(s)
- A.S. Bzowyckyj: clinical assistant professor, Division of Pharmacy Practice and Administration, University of Missouri-Kansas City School of Pharmacy, Kansas City, Missouri; ORCID: http://orcid.org/0000-0002-9007-5852
- A. Dow: assistant vice president of health sciences for interprofessional education and collaborative care and professor, Internal Medicine, Virginia Commonwealth University School of Medicine, Richmond, Virginia; ORCID: http://orcid.org/0000-0002-9004-7528
- M.S. Knab: associate professor and director of IMPACT Practice, Center for Interprofessional Studies and Innovation, MGH Institute of Health Professions, Boston, Massachusetts
5. Bartlett M, Potts J, McKinley B. Do quality indicators for general practice teaching practices predict good outcomes for students? Educ Prim Care 2016;27:271-9. PMID: 27117344; DOI: 10.1080/14739879.2016.1175913.
Abstract
Keele medical students spend 113 days in general practices over our five-year programme. We collect practice data thought to indicate good-quality teaching. We explored the relationships between these data and two outcomes for students: Objective Structured Clinical Examination (OSCE) scores and feedback regarding the placements. Though both are surrogate markers of good teaching, they are widely used. We collated practice and outcome data for one academic year and carried out two separate statistical analyses: (1) to determine how much of the variation in OSCE scores was due to the effect of the practice and how much to the individual student, and (2) to identify practice characteristics related to student feedback scores. For OSCE performance (268 students in 90 practices), six quality indicators independently influenced the OSCE score, though without linear relationships and not to statistical significance. For student satisfaction (144 students in 69 practices), student feedback scores were not influenced by practice characteristics. The relationships between the quality indicators we collect for practices and the outcomes for students are not clear. It may be that neither the quality indicators nor the outcome measures are reliable enough to inform decisions about practices' suitability for teaching.
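The first analysis described above partitions OSCE score variance between practices and individual students. A minimal sketch of that kind of decomposition using a random-intercept model is shown below; the data are simulated placeholders, and the variable names (osce, practice) are assumptions for illustration rather than the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated placeholder data: 90 practices with a few students each (not the study's data).
n_practices, students_per_practice = 90, 3
practice = np.repeat(np.arange(n_practices), students_per_practice)
practice_effect = rng.normal(0, 2.0, n_practices)[practice]   # between-practice variation
student_noise = rng.normal(0, 6.0, practice.size)             # within-practice variation
df = pd.DataFrame({"practice": practice,
                   "osce": 60 + practice_effect + student_noise})

# Random-intercept model: OSCE score with practice as the grouping factor.
model = smf.mixedlm("osce ~ 1", df, groups=df["practice"]).fit()

var_practice = model.cov_re.iloc[0, 0]   # between-practice variance component
var_student = model.scale                # residual (student-level) variance
icc = var_practice / (var_practice + var_student)
print(f"Share of OSCE variance attributable to the practice: {icc:.2%}")
```

A small intraclass correlation, as the abstract implies, means most score variation sits with individual students rather than with the teaching practice.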
Affiliation(s)
- Bob McKinley: Keele University School of Medicine, Keele, UK
6. Eppich W, Rethans JJ, Teunissen PW, Dornan T. Learning to Work Together Through Talk: Continuing Professional Development in Medicine. Professional and Practice-Based Learning, 2016. DOI: 10.1007/978-3-319-29019-5_3.
7. Schiekirka S, Raupach T. A systematic review of factors influencing student ratings in undergraduate medical education course evaluations. BMC Med Educ 2015;15:30. PMID: 25853890; PMCID: PMC4391198; DOI: 10.1186/s12909-015-0311-8.
Abstract
Background: Student ratings are a popular source of course evaluations in undergraduate medical education. Data on the reliability and validity of such ratings have mostly been derived from studies unrelated to medical education. Since medical education differs considerably from other higher education settings, an analysis of factors influencing overall student ratings with a specific focus on medical education was needed.
Methods: For the purpose of this systematic review, online databases (PubMed, PsycInfo and Web of Science) were searched up to August 1st, 2013. Original research articles on the use of student ratings in course evaluations in undergraduate medical education were eligible for inclusion. Included studies considered the format of evaluation tools and assessed associations between independent variables and the dependent variable (i.e., overall course ratings). Inclusion and exclusion criteria were checked by two independent reviewers, and results were synthesised in a narrative review.
Results: Twenty-five studies met the inclusion criteria. Qualitative research (2 studies) indicated that overall course ratings are mainly influenced by student satisfaction with teaching and exam difficulty rather than by objective determinants of high-quality teaching. Quantitative research (23 studies) yielded various influencing factors related to four categories: student characteristics, exposure to teaching, satisfaction with examinations and the evaluation process itself. Female gender, greater initial interest in course content, higher exam scores and higher satisfaction with exams were associated with more positive overall course ratings.
Conclusions: Due to the heterogeneity and methodological limitations of the included studies, results must be interpreted with caution. Medical educators need to be aware of the various influences on student ratings when developing data collection instruments and interpreting evaluation results. More research into the reliability and validity of overall course ratings as typically used in the evaluation of undergraduate medical education is warranted.
Affiliation(s)
- Sarah Schiekirka: Department of Cardiology and Pneumology, University Hospital Göttingen, Göttingen, Germany; Study Deanery of Göttingen Medical School, Göttingen, Germany
- Tobias Raupach: Department of Cardiology and Pneumology, University Hospital Göttingen, Göttingen, Germany; Department of Clinical, Educational and Health Psychology, University College London, London, UK
8. Dornan T, Tan N, Boshuizen H, Gick R, Isba R, Mann K, Scherpbier A, Spencer J, Timmins E. How and what do medical students learn in clerkships? Experience based learning (ExBL). Adv Health Sci Educ Theory Pract 2014;19:721-49. PMID: 24638146; DOI: 10.1007/s10459-014-9501-0.
Abstract
Clerkship education has been called a 'black box' because so little is known about what, how, and under which conditions students learn. Our aim was to develop a blueprint for education in ambulatory and inpatient settings, and in single encounters, traditional rotations, or longitudinal experiences. We identified 548 causal links between conditions, processes, and outcomes of clerkship education in 168 empirical papers published over 7 years and synthesised a theory of how students learn. They do so when they are given affective, pedagogic, and organisational support. Affective support comes from doctors' and many other health workers' interactions with students. Pedagogic support comes from informal interactions and modelling as well as doctors' teaching, supervision, and precepting. Organisational support comes from every tier of a curriculum. Core learning processes of observing, rehearsing, and contributing to authentic clinical activities take place within triadic relationships between students, patients, and practitioners. The phrase 'supported participation in practice' best describes the educational process. Much of the learning that results is too tacit, complex, contextualised, and individual to be defined as a set of competencies. We conclude that clerkship education takes place within relationships between students, patients, and doctors, supported by informal, individual, contextualised, and affective elements of the learned curriculum, alongside formal, standardised elements of the taught and assessed curriculum. This research provides a blueprint for designing and evaluating clerkship curricula as well as helping patients, students, and practitioners collaborate in educating tomorrow's doctors.
Affiliation(s)
- Tim Dornan: Department of Educational Development and Research, Maastricht University, PO Box 616, 6200 MD, Maastricht, The Netherlands
9. Dornan T, Muijtjens A, Graham J, Scherpbier A, Boshuizen H. Manchester Clinical Placement Index (MCPI): conditions for medical students' learning in hospital and community placements. Adv Health Sci Educ Theory Pract 2012;17:703-16. PMID: 22234383; PMCID: PMC3490061; DOI: 10.1007/s10459-011-9344-x.
Abstract
The drive to quality-manage medical education has created a need for valid measurement instruments. Validity evidence includes the theoretical and contextual origin of items, choice of response processes, internal structure, and interrelationship of a measure's variables. This research set out to explore the validity and potential utility of an 11-item measurement instrument whose theoretical and empirical origins were in an Experience Based Learning model of how medical students learn in communities of practice (COPs), and whose contextual origins were in a community-oriented, horizontally integrated, undergraduate medical programme. The objectives were to examine the psychometric properties of the scale in both hospital and community COPs and to provide validity evidence to support using it to measure the quality of placements.
The instrument was administered twice to students learning in both hospital and community placements, and the responses were analysed using exploratory factor analysis and a generalizability analysis. 754 of a possible 902 questionnaires were returned (84% response rate), representing 168 placements. Eight items loaded onto two factors, which accounted for 78% of variance in the hospital data and 82% of variance in the community data. One factor was the placement learning environment, whose five constituent items were how learners were received at the start of the placement, people's supportiveness, and the quality of organisation, leadership, and facilities. The other factor represented the quality of training: instruction in skills, observation of students performing skills, and provision of feedback to students. Alpha coefficients ranged between 0.89 and 0.93, and there were no redundant or ambiguous items. Generalisability analysis showed that between 7 and 11 raters would be needed to achieve acceptable reliability.
There is validity evidence to support using the simple 8-item, mixed-methods Manchester Clinical Placement Index to measure key conditions for undergraduate medical students' experience based learning: the quality of the learning environment and the training provided within it. Its conceptual orientation is towards Communities of Practice, which is a dominant contemporary theory in undergraduate medical education.
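The generalisability analysis mentioned above answers a decision-study question: how many student raters are needed before a placement's mean rating is reliable enough to act on. A minimal sketch of that calculation is below; the variance components are hypothetical placeholders chosen only to illustrate the formula, not the values estimated in the paper.

```python
# Decision (D-) study: generalisability coefficient for a placement rated by n students.
# Variance components are hypothetical placeholders, not the study's estimates.
var_placement = 0.30   # true variance between placements (object of measurement)
var_residual = 1.20    # rater-by-placement interaction plus error

def g_coefficient(n_raters: int) -> float:
    """Expected reliability of a placement mean based on n_raters ratings."""
    return var_placement / (var_placement + var_residual / n_raters)

# How many raters are needed to reach a conventional threshold of 0.70?
for n in range(1, 21):
    if g_coefficient(n) >= 0.70:
        print(f"{n} raters give a generalisability coefficient of {g_coefficient(n):.2f}")
        break
```

With these placeholder components the threshold is reached at around ten raters, in the same range as the 7 to 11 raters reported in the abstract.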
Affiliation(s)
- Tim Dornan: Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
10.
Abstract
Objective: Online curricula are used increasingly for educating physicians, and evaluating educational outcomes can help improve their effectiveness. It is unknown how specific educational outcomes are associated with each other among learners using online curricula. We set out to study how two educational outcomes, learner satisfaction and knowledge, together with the learner's year of training and training hospital, were associated with one another among learners accessing a widely used online curriculum.
Methods: Using data from the 2006-2007 academic year, learner satisfaction was compared with pretest knowledge, posttest knowledge, changes in knowledge, module topic, year of training, and training hospital among 3229 residents at 73 internal medicine residency training programs. A multivariable model was used to calculate the odds ratio of learner satisfaction relative to changes in knowledge.
Results: Module topic, year of training, and hospital type were associated with learner satisfaction. Second-year residents were more satisfied with training modules (mean rating 4.01) than first- and third-year residents (mean ratings 3.97 and 3.95, respectively; P < 0.05). Learner satisfaction was greater among community hospital residents than university hospital residents (mean rating 4.0 vs 3.92; P < 0.05). Learner satisfaction was greater in residents with high pretest and high posttest knowledge (P < 0.05). In multivariable analyses, greater gains in knowledge were associated with greater learner satisfaction (P < 0.05).
Conclusions: Greater learner satisfaction is associated with greater baseline knowledge, greater knowledge after completing a curriculum, and greater improvement in knowledge while enrolled in a curriculum.
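The multivariable step described above reduces to estimating an odds ratio linking knowledge gain to the odds of a satisfied rating. A minimal sketch under assumed inputs is below: the dichotomised satisfaction variable, the knowledge-gain predictor, and the simulated data are illustrative placeholders, not the study's model or dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulated placeholder data: knowledge gain (posttest minus pretest, in points)
# and a dichotomised "satisfied" outcome (e.g., rating >= 4). Not the study's data.
n = 500
knowledge_gain = rng.normal(10, 5, n)
logit = -1.0 + 0.08 * knowledge_gain
satisfied = rng.random(n) < 1 / (1 + np.exp(-logit))

# Logistic regression of satisfaction on knowledge gain.
X = sm.add_constant(knowledge_gain)
result = sm.Logit(satisfied.astype(int), X).fit(disp=False)

# The exponentiated slope is the odds ratio per one-point gain in knowledge.
odds_ratio = np.exp(result.params[1])
print(f"Odds ratio of satisfaction per point of knowledge gain: {odds_ratio:.2f}")
```

An odds ratio above 1, as in the abstract's finding, indicates that larger knowledge gains go with higher odds of a satisfied rating.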
11. Bell K, Boshuizen HPA, Scherpbier A, Dornan T. When only the real thing will do: junior medical students' learning from real patients. Med Educ 2009;43:1036-43. PMID: 19874495; DOI: 10.1111/j.1365-2923.2009.03508.x.
Abstract
Objectives: This study aimed to explore how medical students experience contacts with real patients and what they learn from them.
Methods: We carried out a post hoc, single-group study in one teaching sector of a 5-year, problem-based, horizontally integrated, outcome-based and community-oriented undergraduate programme, in which students lacked clinical exposure in the pre-clerkship phase. Subjects comprised five cohorts of students on their first clerkships. Data consisted of purposively selected, voluntary, self-report statements regarding real patient learning (RPL). Constant comparative analysis was performed by two independent researchers.
Results: Respondents valued patients as an instructional resource that made learning more real. They reported learning through visual pattern recognition as well as through dialogue and physical examination. They more often used social than professional language to describe RPL. They reported affective outcomes including enhanced confidence, motivation, satisfaction and a sense of professional identity. They also reported cognitive outcomes including perspective, context, a temporal dimension, and an appreciation of complexity. Real patient learning helped respondents link theory learned earlier with reality as represented by verbal, visual and auditory experiences. It made learning easier, more meaningful and more focused. It helped respondents acquire complex skills and knowledge. Above all, RPL helped learners to remember subject matter. Most negative responses concerned the difficulty of acquiring appropriate experience, but RPL made a minority of respondents feel uncomfortable and incompetent.
Conclusions: Real patient learning led to a rich variety of learning outcomes, of which at least some medical students showed high metacognitive awareness. Sensitivity from clinical mentors towards the positive and negative outcomes of RPL reported here could support reflective clinical learning.
Affiliation(s)
- Kathryn Bell: University of Manchester Medical School, Manchester M6 8HD, UK
12. Dornan T. Self-assessment in CPD: lessons from the UK undergraduate and postgraduate education domains. J Contin Educ Health Prof 2008;28:32-7. PMID: 18366126; DOI: 10.1002/chp.153.
Abstract
UK continuing education is moving from credit-earning, taught continuing medical education (CME) to a continuing professional development (CPD) system that explicitly links education to change in practice, managed and monitored through mandatory peer appraisal. Alongside multisource feedback and consideration of issues of poor performance, satisfactory personal development planning will be required for relicensure and recertification. That system gives self-assessment, in the guise of reflection, a central place in personal development. This article uses instances of directed self-assessment drawn from undergraduate and early postgraduate medical education to consider how a positive system of self-assessment and professional self-regulation could be operationalized. It explores why medical students made avid use of an e-technology that presents the intended outcomes of their problem-based curriculum in a way that helps them seek out appropriate clinical opportunities and identify what they learned from them. It contrasts the experience of early postgraduate learners who, presented with a similar e-technology, found it hard to see links between their official curriculum and their day-by-day learning experiences, at least partly because the intended outcomes it offered were remote from what they were actually learning. Any extrapolation to CPD must be very tentative, but I advocate continued exploration of how best to use e-technology to support and structure (ie, direct) self-assessment. Direction could originate from consensus statements and other well-defined external standards when learners lack mastery of a domain. When learners must respond to institutional demands, direction could be provided by corporate goals. In areas of mastery, I propose learners themselves should define personal standards. In areas of difficulty, external assessment would take the place of self-assessment.
Affiliation(s)
- Tim Dornan: University of Manchester, Manchester, England
13. Jolly B. Clinical education: teachers and Stritter's student. Med Educ 2006;40:604-6. PMID: 16836531; DOI: 10.1111/j.1365-2929.2006.02521.x.